
Mastering the Art of Keeping Indians Under Surveillance

by Bhairav Acharya last modified Aug 23, 2015 12:26 PM
In its first year in office, the National Democratic Alliance government has been notably silent on the large-scale surveillance projects it has inherited. This ended last week amidst reports the government is hastening to complete the Central Monitoring System (CMS) within the year.

The article was published in The Wire on May 30, 2015.


In a statement to the Rajya Sabha in 2009, Gurudas Kamat, the erstwhile United Progressive Alliance’s junior communications minister, said the CMS was a project to enable direct state access to all communications on mobile phones, landlines, and the Internet in India. He meant the government was building ‘backdoors’, or capitalising on existing ones, to enable state authorities to intercept any communication at will, besides collecting large amounts of metadata, without having to rely on private communications carriers.

This is not new. Legally sanctioned backdoors have existed in Europe and the USA since the early 1990s to enable direct state interception of private communications. But the laws of those countries also subject state surveillance to a strong regime of state accountability, individual freedoms, and privacy. This regime may not be completely robust, as Edward Snowden’s revelations have shown, but at least it exists on paper. The CMS is not illegal by itself, but it is coloured by the compromised foundation of Indian surveillance law upon which it is built.

Surveillance and social control

The CMS is a technological project. But technology does not exist in isolation; it is contextualised by law, society, politics, and history. Surveillance and the CMS must be seen in the same contexts.

The great sociologist Max Weber claimed the modern state could not exist without monopolising violence. It seems clear the state also entertains the equal desire to monopolise communications technologies. The state has historically shaped the way in which information is transmitted, received, and intercepted. From the telegraph and radio to telephones and the Internet, the state has constantly endeavoured to control communications technologies.

Law is the vehicle of this control. When the first telegraph line was laid down in India, its implications for social control were instantly realised; so the law swiftly responded by creating a state monopoly over the telegraph. The telegraph played a significant role in thwarting the Revolt of 1857, even as Indians attempted to destroy the line; so the state consolidated its control over the technology to obviate future contests.

This controlling impulse was exercised over radio and telephones, which are also government monopolies, and is expressed through the state’s surveillance prerogative. On the other hand, because of its open and decentralised architecture, the Internet presents the single greatest threat to the state’s communications monopoly and dilutes its ability to control society.

Interception in India

The power to intercept communications arose with the regulation of telegraphy. The first two laws governing telegraphs, in 1854 and 1860, granted the government powers to take possession of telegraphs "on the occurrence of any public emergency". In 1876, the third telegraph law expanded this threshold to include "the interest of public safety". These are vague phrases, and their interpretation was deliberately left to the government's discretion.

This unclear formulation was replicated in the Indian Telegraph Act of 1885, the fourth law on the subject, which remains in force today. The 1885 law included a specific power to wiretap. Incredibly, this colonial surveillance provision survived untouched for 87 years even as countries across the world balanced their surveillance powers with democratic safeguards.

The Indian Constitution requires all deprivations of free speech to conform to any of nine grounds listed in Article 19(2). Public emergencies and public safety are not listed. So Indira Gandhi amended the wiretapping provision in 1972 to insert five grounds copied from Article 19(2). However, the original unclear language on public emergencies and public safety remained.

Indira Gandhi’s amendment was ironic because one year earlier she had overseen the enactment of the Defence and Internal Security of India Act, 1971 (DISA), which gave the government fresh powers to wiretap. These powers were not subject to even the minimal protections of the Telegraph Act. When the Emergency was imposed in 1975, Gandhi’s government bypassed her earlier amendment and, through the DISA Rules, instituted the most intensive period of surveillance in Indian history.

Although DISA was repealed, the tradition of creating parallel surveillance powers for fictitious emergencies continues to flourish. Wiretapping powers are also found in the Maharashtra Control of Organised Crime Act, 1999, which has been copied by Karnataka, Andhra Pradesh, Arunachal Pradesh, and Gujarat.

Procedural weaknesses

Meanwhile, the Telegraph Act with its 1972 amendment continued to weather criticism through the 1980s. The wiretapping power was largely exercised free of procedural safeguards such as the requirements to exhaust other less intrusive means of investigation, minimise information collection, limit the sharing of information, ensure accountability, and others.

This changed in 1996 when the Supreme Court, on a challenge brought by PUCL, ordered the government to create a minimally fair procedure. The government fell in line in 1999, and a new rule, 419A, was put into the Indian Telegraph Rules, 1951.

Unlike the United States, where a wiretap can only be ordered by a judge when she decides the state has legally made its case for the requested interception, an Indian wiretap is sanctioned by a bureaucrat or police officer. Unlike the United Kingdom, which also grants wiretapping powers to bureaucrats but subjects them to two additional safeguards including an independent auditor and a judicial tribunal, an Indian wiretap is only reviewed by a committee of the original bureaucrat’s colleagues. Unlike most of the world which restricts this power to grave crime or serious security needs, an Indian wiretap can even be obtained by the income tax department.

Rule 419A certainly creates procedure, but it lacks crucial safeguards, and this deficiency undermines its credibility. Worse, the contours of rule 419A were copied in 2009 to create flawed procedures to intercept the content of Internet communications and collect metadata. Unlike rule 419A, these new rules, issued under sections 69(2) and 69B(3) of the Information Technology Act, 2000, have not been constitutionally scrutinised.

Three steps to tap

Despite its monopoly, the state does not own the infrastructure of telephones. It is dependent on telecommunications carriers to physically perform the wiretap. Indian wiretaps take place in three steps: a bureaucrat authorises the wiretap; a law enforcement officer serves the authorisation on a carrier; and, the carrier performs the tap and returns the information to the law enforcement officer.

There are many moving parts in this process, and so there are leaks. Some leaks are cynically motivated, such as the release of Amar Singh's lewd conversations in 2011. But others serve a public purpose: Niira Radia's conversations were allegedly leaked by a whistleblower to reveal serious governmental culpability. Ironically, leaks have created accountability where the law has failed.

The CMS will prevent leaks by installing servers on the transmission infrastructure of carriers to divert communications to regional monitoring centres. Regional centres, in turn, will relay communications to a centralised monitoring centre where they will be analysed, mined, and stored. Carriers will no longer perform wiretaps; and, since this obviates their costs of compliance, they are willing participants.

In its annual report of 2012, the Centre for the Development of Telematics (C-DOT), a state-owned R&D centre tasked with designing and creating the CMS, claimed the system would intercept 3G video, ILD, SMS, and ISDN PRI communications made through landlines or mobile phones – both GSM and CDMA.

There are unclear reports of an expansion to intercept Internet data, such as emails and browsing details, as well as instant messaging services; but these remain unconfirmed. There is also a potential overlap with another secretive Internet surveillance programme being developed by the Defence R&D Organisation called NETRA, no details of which are public.

Culmination of surveillance

In its present state, Indian surveillance law is unable to bear the weight of the CMS project, and must be vastly strengthened to protect privacy and accountability before the state is given direct access to communications.

But there is a larger way to understand the CMS in the context of Indian surveillance. Christopher Bayly, the noted colonial historian, writes that when the British set about establishing a surveillance apparatus in colonised India, they came up against an established system of indigenous intelligence gathering. Colonial rule was at its most vulnerable at this point of intersection between foreign surveillance and indigenous knowledge, and the meeting of the two was riven by suspicion. So the colonial state simply co-opted the interface by creating institutions to acquire local knowledge.

The CMS is also an attempt to co-opt the interface between government and the purveyors of communications; because if the state cannot control communications, it cannot control society. Seen in this light, the CMS represents the natural culmination of the progression of Indian surveillance. No challenge against it that does not question the construction of the modern Indian state will be successful.

The Four Parts of Privacy in India

by Bhairav Acharya last modified Aug 23, 2015 01:04 PM
Privacy enjoys an abundance of meanings. It is claimed in diverse situations every day by everyone against other people, society and the state.

Traditionally traced to classical liberalism’s public/private divide, there are now several theoretical conceptions of privacy that collaborate and sometimes contend. Indian privacy law is evolving in response to four types of privacy claims: against the press, against state surveillance, for decisional autonomy, and in relation to personal information. The Indian Supreme Court has selectively borrowed competing foreign privacy norms, primarily American, to create an unconvincing pastiche of privacy law in India. These developments are undermined by a lack of theoretical clarity and the continuing tension between individual freedoms and communitarian values.

This was published in Economic & Political Weekly, 50(22), 30 May 2015. Download the full article here.


Multi-stakeholder Advisory Group Analysis

by Jyoti Panday last modified Apr 12, 2016 10:02 AM
This analysis has been done to examine the trend in the selection and rotation of members of the Multistakeholder Advisory Group (MAG) of the Internet Governance Forum (IGF). The MAG has been functional for nine years, from 2006 to 2015. The analysis is based on data procured, collated, and organised by Pranesh Prakash and Jyoti Panday. Shambhavi Singh, a law student at NLU Delhi who was interning with CIS at the time, also assisted with the organisation and analysis of the data.

The researcher has collected the data from the lists of members available in the public domain from 2010 to 2015. The lists prior to 2010 were procured by the Centre for Internet and Society from the UN Secretariat of the Internet Governance Forum (IGF).

This research is based solely on the membership lists; the nature of each member's stakeholding has been analysed in the light of the MAG terms of reference. No data has been made available regarding the nomination process or the criteria on which a particular member was re-appointed to the MAG (the IGF Secretariat does not share this data).

According to the analysis, over this period the MAG has had around 182 members from various stakeholder groups.

We have divided the membership into five stakeholder groups: Government, Civil Society, Industry, Technical Community, and Academia. Any overlap between two or more of these groups has also been taken into account - for example, a member of the Internet Society (ISOC) belonging to both the Civil Society and the Technical Community.

According to the MAG Terms of Reference[1], it is the prerogative of the UN Secretary-General to select MAG members. The general policy is that MAG members are appointed for a one-year term, which is renewed automatically for up to two further consecutive years depending on their engagement in MAG activities.

There is also a policy of rotating off one-third of the MAG's members every year, to maintain diversity and bring new viewpoints into consideration. In exceptional circumstances, a member may continue beyond three years if there is a lack of candidates fitting the desired area of expertise.

However, the exception seems to have become the norm: a whopping 49 or so members have continued beyond three years, with tenures ranging from four years up to as long as eight. No doubt some of them are exceptional talents who are difficult to replace. However, the lack of transparency in the nomination system makes it difficult to determine the basis on which these people continued beyond the usual term.

S. No. | Stakeholder | Number of years | Total members continuing beyond 3 years
1 | Civil Society | 8, 6, 6, 4, 4 | 5
2 | Government/Industry | 4, 5 | 2
3 | Technical Community/Civil Society | 8, 8, 8, 6, 6, 4, 4, 4, 4, 4 | 10
4 | Industry/Civil Society | 8, 6 | 2
5 | Industry | 8, 7, 7, 6, 6, 4 | 6
6 | Industry/Tech Community/Civil Society | 8 | 1
7 | Government | 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4 | 19
8 | Academia | 6, 6, 5 | 3
9 | Industry/Tech Community | 6 | 1

Of the members who continued beyond three years, around 39% are from Government and related agencies. The next largest contingent is Technical Community/Civil Society with around 20% representation, followed by Industry at 12%, Civil Society at 10%, Academia at 6%, Government/Industry at 4%, Industry/Civil Society at 4%, and Industry/Technical Community and Industry/Technical Community/Civil Society at 2% each.
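As a rough cross-check, these shares can be recomputed from the final column of the table above (a minimal sketch; the counts are transcribed from the table):

```python
# Members continuing beyond three years, per stakeholder group
# (counts taken from the table above).
beyond_three_years = {
    "Civil Society": 5,
    "Government/Industry": 2,
    "Technical Community/Civil Society": 10,
    "Industry/Civil Society": 2,
    "Industry": 6,
    "Industry/Tech Community/Civil Society": 1,
    "Government": 19,
    "Academia": 3,
    "Industry/Tech Community": 1,
}

total = sum(beyond_three_years.values())  # 49 members in all

# Percentage share of each group, rounded to the nearest whole number.
shares = {g: round(100 * n / total) for g, n in beyond_three_years.items()}

for group, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {pct}%")
```

Government comes out at 39% and Technical Community/Civil Society at 20%, matching the approximate figures quoted above.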


Table with overlapping interests merged

S. No. | Stakeholder | Total members continuing beyond 3 years
1 | Civil Society | 7 + 9 + 1 + 1 = 18
2 | Government | 19
3 | Tech Community | 9 + 1 + 1 + 1 = 12
4 | Industry | 6 + 2 + 1 + 1 + 2 = 12
5 | Academia | 3

When the overlap is counted separately - that is, if a Technical Community/Civil Society member is placed in both the Technical Community and Civil Society groups individually - the stakeholder representation is as follows (approximate values):

Government- 29%

Civil Society- 28%

Industry- 20%

Technical Community-17%

Academia-5%

This clearly shows that stakeholders from academia generally did not stay on the MAG beyond three years. Even when all members who have ever been on the MAG are taken into consideration, only around 8% of the representation has come from the academic community. This needs to be taken into account when new MAG members are selected in 2016.

The researcher has also looked at MAG representation based on gender and on UN Regional Groups. The results of the analysis were as follows:

The ratio of male to female members in the MAG is approximately 16:9, or roughly 64% and 36% respectively.


Turning to the UN Regional Groups, the analysis yielded the following results:

The Western European and Others Group (WEOG) has the highest representation in the MAG, with a large number of members from Switzerland, the USA, and the UK. It is followed by the Asia-Pacific Group with 20% representation. The third largest is the African Group with 19%, followed by the Latin American and Caribbean Group (GRULAC) and the Eastern European Group with 13% and 12% respectively.


The representation of developed, developing and Least Developed Countries is as follows-

Developed countries have approximately 42% representation, developing countries 53%, and LDCs a mere 5%. There should be some effort to strive for better LDC representation, as LDCs lag furthest behind in global ICT penetration.[2]



[1] Intgovforum.org, 'MAG Terms Of Reference' (2015) <http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference> accessed 13 July 2015.

[2] ICT Facts And Figures (1st edn, International Telecommunication Union 2015) <http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2015.pdf> accessed 11 July 2015.

Supreme Court Order is a Good Start, but is Seeding Necessary?

by Elonnai Hickok and Rohan George — last modified Sep 07, 2015 01:21 PM
This blog post seeks to unpack the ‘seeding’ process in the UIDAI scheme, understand the implications of the Supreme Court order on this process, and identify questions regarding the UID scheme that still need to be clarified by the court in the context of the seeding process.

Introduction

On August 11, 2015, in the writ petition Justice K.S. Puttaswamy (Retd.) & Another vs. Union of India & Others1, the Supreme Court of India issued an interim order regarding the constitutionality of the UIDAI scheme. In response to the order, Dr. Usha Ramanathan published an article titled 'Decoding the Aadhaar judgment: No more seeding, not till the privacy issue is settled by the court' which, among other points, highlights concerns around the seeding of Aadhaar numbers into service delivery databases. She writes that "'seeding' is a matter of grave concern in the UID project. This is about the introduction of the number into every data base. Once the number is seeded in various databases, it makes convergence of personal information remarkably simple. So, if the number is in the gas agency, the bank, the ticket, the ration card, the voter ID, the medical records and so on, the state, as also others who learn to use what is called the 'ID platform', can 'see' the citizen at will."2

Building off of this statement, this article seeks to unpack the 'seeding' process in the UIDAI scheme, understand the implications of the Supreme Court order on this process, and identify questions regarding the UID scheme that still need to be clarified by the Court in the context of the seeding process.

What is Seeding?

In the UID scheme, data points within databases of service providers and banks are organized via individual Aadhaar numbers through a process known as 'seeding'. The UIDAI has released two documents on the seeding process - "Approach Document for Aadhaar Seeding in Service Delivery Databases version 1.0" (Version 1.0)3 and "Standard Protocol Covering the Approach & Process for Seeding Aadhaar Number in Service Delivery Databases June 2015 Version 1.1" (Version 1.1)4

According to Version 1.0 "Aadhaar seeding is a process by which UIDs of residents are included in the service delivery database of service providers for enabling Aadhaar based authentication during service delivery."5 Version 1.0 further states that the "Seeding process typically involves data extraction, consolidation, normalization, and matching".6 According to Version 1.1, Aadhaar seeding is "a process by which the Aadhaar numbers of residents are included in the service delivery database of service providers for enabling de-duplication of database and Aadhaar based authentication during service delivery".7 There is an extra clause in Version 1.1's definition of seeding which includes "de-duplication" in addition to authentication.

Though not directly stated, it is envisioned that the Aadhaar number will be seeded into the databases of service providers and banks to enable cash transfers of funds. This was alluded to in the Version 1.1 document with the UIDAI stating "Irrespective of the Scheme and the geography, as the Aadhaar Number of a given Beneficiary finally has to be linked with the Bank Account, Banks play a strategic and key role in Seeding."8

How does the seeding process work?

The seeding process itself can be done through manual/organic processes or algorithmic/in-organic processes. In the inorganic process the Aadhaar database is matched with the database of the service provider - namely the database of beneficiaries, KYR+ data from enrolment agencies, and the EID-UID database from the UIDAI. Once compared and a match is found - for example between KYR fields in the service delivery database and KYR+ fields in the Aadhaar database - the Aadhaar number is seeded into the service delivery database.9
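As an illustration only, the inorganic route can be pictured as a record-matching exercise between two databases. The field names, sample values, and matching rule below are hypothetical assumptions for the sketch, not the UIDAI's actual schema or algorithm:

```python
# Toy illustration of algorithmic ("inorganic") seeding: match records in a
# service-delivery database against an Aadhaar-side database on demographic
# fields, and copy the Aadhaar number into each matched record.
# All field names and the matching rule are hypothetical.

def normalise(s: str) -> str:
    """Crude normalisation so trivially different spellings can match."""
    return " ".join(s.lower().split())

def seed(service_records, aadhaar_records):
    # Index Aadhaar-side records by normalised (name, address).
    index = {
        (normalise(r["name"]), normalise(r["address"])): r["aadhaar_no"]
        for r in aadhaar_records
    }
    seeded = unmatched = 0
    for rec in service_records:
        key = (normalise(rec["name"]), normalise(rec["address"]))
        if key in index:
            rec["aadhaar_no"] = index[key]   # the actual "seeding" step
            seeded += 1
        else:
            unmatched += 1                   # left for manual/organic seeding
    return seeded, unmatched

service_db = [{"name": "A. Kumar", "address": "12 MG Road"},
              {"name": "B. Singh", "address": "4 Park St"}]
aadhaar_db = [{"name": "a. kumar", "address": "12  mg road", "aadhaar_no": "XXXX-0001"}]

print(seed(service_db, aadhaar_db))  # → (1, 1)
```

The unmatched remainder is what drives the UIDAI's preference for manual (organic) collection, as discussed below: real records diverge in spelling, digitization, and completeness far more than any normalisation rule can absorb.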

Organic seeding can be carried out via a number of methods, but the method recommended by the UIDAI is door-to-door collection of Aadhaar numbers from residents, which are subsequently uploaded into the service delivery database either manually or through the use of a tablet or smartphone. Perhaps demonstrating that technology cannot be used as a 'patch' for a broken or premature system, organic (manual) seeding is suggested as the preferred process by the UIDAI due to challenges such as the lack of digitization of beneficiary records, lack of standardization in name and address records, and incomplete data.10

According to the 1.0 Approach Paper, to facilitate the seeding process, the UIDAI has developed in-house software known as Ginger. Service providers that adopt the Aadhaar number must move their existing databases onto the Ginger platform, which then organizes the present and incoming data in the database by individual Aadhaar numbers. This 'organization' can be done automatically or manually. Once organized, data can be queried by Aadhaar number by persons on the 'control' end of the Ginger platform.11

In practice this means that during an authentication, in which the UIDAI responds to a service provider with a 'yes' or 'no' response, the UIDAI would have access to at least two sets of data: (1) transaction data (the date, time, device number, and Aadhaar number of the individual authenticating); and (2) data associated with an individual Aadhaar number within a database that has been seeded with Aadhaar numbers (historical and incoming). According to the Approach Document version 1.0, "The objective here is that the seeding process/utility should be able to access the service delivery data and all related information in at least the read-only mode."12 The Version 1.1 document states "Software application users with authorized access should be able to access data online in a seamless fashion while providing service benefit to residents."13
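The two data sets can be sketched as simple records (purely illustrative; the field names and values are assumptions, not the UIDAI's actual schema):

```python
from datetime import datetime

# (1) Transaction metadata generated by a single yes/no authentication request.
transaction = {
    "timestamp": datetime(2015, 9, 1, 10, 30),
    "device_number": "DEV-1234",           # hypothetical identifiers
    "aadhaar_number": "XXXX-XXXX-0001",
    "response": "yes",                     # the UIDAI returns only yes/no
}

# (2) A record in a seeded service-delivery database, keyed by the same
#     Aadhaar number and therefore joinable with the transaction log.
seeded_record = {
    "aadhaar_number": "XXXX-XXXX-0001",
    "scheme": "PDS",
    "entitlement": "35 kg foodgrain per month",
}

# Because both records share the Aadhaar number as a key, linking them is a
# trivial join; this ease of convergence is the crux of the concerns below.
if transaction["aadhaar_number"] == seeded_record["aadhaar_number"]:
    linked_profile = {**transaction, **seeded_record}
    print(linked_profile["scheme"], linked_profile["response"])
```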

What are the concerns with seeding?

With the increased availability of data analysis and processing technologies, organisations can link disparate data points stored across databases so that the data can be related and analysed to derive holistic, intrinsic, and/or latent assessments. This can allow for deeper and more useful insights than standalone data would yield. In the context of government-held data, such 'relating' can be useful - enabling the government to form a holistic and more accurate picture and to develop data-informed policies through research14. Yet allowing disparate data points to be merged and linked raises questions about privacy and civil liberties - as well as more basic questions about purpose, access, consent, and choice. To name a few risks: linked data can be used to create profiles of individuals, it can facilitate surveillance, it can enable new and unintended uses of data, and it can be used for discriminatory purposes.

The fact that the seeding process is meant to facilitate the extraction, consolidation, normalization, and matching of data so that it can be queried by Aadhaar number, and that existing databases can be transposed onto the Ginger platform, gives rise to Dr. Ramanathan's concerns. She argues that anyone with access to the 'control' end of the Ginger platform can access all data associated with an Aadhaar number, that convergence can now easily be initiated between databases on the Ginger platform, and that profiling of individuals can take place through the linking of data points via the Ginger platform.

How does the Supreme Court Order impact the seeding process and what still needs to be clarified?

In the interim order the Supreme Court lays out four welcome clarifications and limitations on the UID scheme:

  1. The Union of India shall give wide publicity in the electronic and print media including radio and television networks that it is not mandatory for a citizen to obtain an Aadhaar card;
  2. The production of an Aadhaar card will not be a condition for obtaining any benefits otherwise due to a citizen;
  3. The Unique Identification Number or the Aadhaar card will not be used by the respondents for any purpose other than the PDS Scheme and in particular for the purpose of distribution of foodgrains, etc. and cooking fuel, such as kerosene. The Aadhaar card may also be used for the purpose of the LPG Distribution Scheme;
  4. The information about an individual obtained by the Unique Identification Authority of India while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a Court for the purpose of criminal investigation."15

In some ways, the court order addresses some of the concerns regarding the seeding of Aadhaar numbers by limiting the scope of the seeding process to the PDS scheme, but there are still a number of aspects of the scheme as they pertain to the seeding process that need to be addressed by the court.

These include:

The Process of Seeding

Prior to the Supreme Court interim order, the above concerns were quite broad in scope as Aadhaar could be adopted by any private or public entity - and the number was being seeded in databases of banks, the railways, tax authorities, etc. The interim order, to an extent, lessens these concerns by holding that  "The Unique Identification Number or the Aadhaar card will not be used by the respondents for any purpose other than the PDS Scheme…".

However, the Court could perhaps have been more specific regarding what is included under the PDS scheme, because the scheme itself is broad. That said, the restrictions put in place by the court create a form of purpose limitation and a boundary of proportionality on the UID scheme. By limiting the purpose of the Aadhaar number to use in the PDS system, the Aadhaar number can only be seeded into the databases of entities involved in the PDS Scheme, rather than any entity that has adopted the number. Despite this, the seeding process is an issue in itself for the following reasons:

Access: Embedding service delivery databases and bank databases with the Aadhaar number allows the UIDAI or authorized users to access information in these databases. According to version 1.1 of the seeding document, the UIDAI is carrying out the seeding process through 'seeding agencies'. These agencies can include private companies, public limited companies, government companies, PSUs, semi-government organizations, and NGOs that have been registered and operating in India for at least three years.16 Though these agencies are under contract with the UIDAI, it is unclear what information they would be able to access. This ambiguity leaves the data collected by the UIDAI open to potential abuse and unauthorized access. Thus, the Court's order fails to provide clarity on the access that the seeding process enables for the UIDAI and for private parties.

Consent: Upon enrolling for an Aadhaar number, individuals have the option of consenting to the UIDAI sharing information in three instances:
  • "I have no objection to the UIDAI sharing information provided by me to the UIDAI with agencies engaged in delivery of welfare services."
  • "I want the UIDAI to facilitate opening of a new Bank/Post Office Account linked to my Aadhaar Number. I have no objection to sharing my information for this purpose."
  • "I have no objection to linking my present bank account provided here to my Aadhaar number."17
Aside from the vague and sweeping language of the actions users consent to, which raises questions about how informed an individual is about the information he consents to share, at no point is an individual given the option of consenting to the UIDAI accessing data - historic or incoming - stored in the database of a service provider in the PDS system seeded with the Aadhaar number. Furthermore, as noted earlier, the fact that the UIDAI concedes that a beneficiary has to be linked with a bank account raises questions of consent, as linking one's bank account with one's Aadhaar number is an optional part of the enrollment process. Thus, even with the restrictions from the court order, if individuals want to use their Aadhaar number to access benefits, they must also seed their number into their bank accounts. On this point, an order from the Finance Ministry clarified that the seeding of Aadhaar numbers into databases is a voluntary decision, but that if a beneficiary provides their number voluntarily, it can be seeded into a database.18

Withdrawing Consent: The Court also did not directly address whether individuals can withdraw consent after enrolling in the UID scheme - and, if they do, whether their Aadhaar numbers should be 'unseeded' from PDS-related databases. Similarly, the Court did not clarify whether services that have seeded the Aadhaar number, but are not PDS-related, now need to unseed the number. Though news items indicate that in some cases (not all) organizations and government departments not involved in the PDS system are stopping the seeding process19, there is no indication of departments undertaking an 'unseeding' process. Nor is there any indication of the UIDAI allowing individuals already enrolled to 'un-enroll' from the scheme. In being silent on issues around consent, the court order inadvertently overlooks the risk of function creep inherent in the seeding process, which "allows numerous opportunities for expansion of functions far beyond those stated to be its purpose"20.

Verification and liability: According to Version 1.0 and Version 1.1 of the Seeding documents, "no seeding is better than incorrect seeding". This is because incorrect seeding can lead to inaccuracies in the authentication process and result in individuals entitled to benefits being denied such benefits. To avoid errors in the seeding process the UIDAI has suggested several steps including using the "Aadhaar Verification Service" which verifies an Aadhaar number submitted for seeding against the Aadhaar number and demographic data such as gender and location in the CIDR. Though recognizing the importance of accuracy in the seeding process, the UIDAI takes no responsibility for the same. According to Version 1.1 of the seeding document, "the responsibility of correct seeding shall always stay with the department, who is the owner of the database."21 This replicates a disturbing trend in the implementation of the UID scheme - where the UIDAI 'initiates' different processes through private sector companies but does not take responsibility for such processes. 22

The Scope of the UIDAI's mandate and the necessity of seeding

Aside from the problems within the seeding process itself, there is a question of the scope of the UIDAI's mandate and the role that seeding plays in fulfilling this. This is important in understanding the necessity of the seeding process.

On its official website, the UIDAI has stated that its mandate is "to issue every resident a unique identification number linked to the resident's demographic and biometric information, which they can use to identify themselves anywhere in India, and to access a host of benefits and services."23 Though the Supreme Court order clarifies the use of the Aadhaar number, it does not address the actual legality of the UIDAI's mandate (there is no enabling statute in place), and it does not clarify or confirm the scope of that mandate.

In Version 1.0 of the seeding document the UIDAI states that the "Aadhaar numbers of enrolled residents are being 'seeded' ie. included in the databases of service providers that have adopted the Aadhaar platform in order to enable authentication via the Aadhaar number during a transaction or service delivery."24 This statement is only partially correct. For merely providing and authenticating an Aadhaar number, seeding is not necessary: the number submitted for verification need only be compared with the records in the CIDR to complete authentication. Yet, in an example justifying the need for seeding, the Version 1.0 seeding document states, "A consolidated view of the entire data would facilitate the social welfare department of the state to improve the service delivery in their programs, while also being able to ensure that the same person is not availing double benefits from two different districts."25 For this purpose, seeding is again unnecessary, as it would be simple to correlate PDS usage with an Aadhaar number within the PDS database itself. Even if limited to the PDS system, seeding in the databases of service providers is only necessary to create, and give access to, comprehensive information about an individual in order to determine eligibility for a service. Further, seeding in the databases of banks is only necessary if the Aadhaar number moves from being an identity factor to a transactional factor, something the UIDAI seems to envision: the Version 1.1 seeding document states that Aadhaar is sufficient to transfer payments to an individual and thus plays a key role in cash transfers of benefits.26
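The claim that double-benefit detection does not require cross-database seeding can be shown with a toy example. The records and numbers below are invented; the point is only that a duplicate check needs nothing beyond the PDS database itself:

```python
# Hypothetical PDS records: (aadhaar_no, district). Duplicate claims can
# be found by grouping within the PDS database alone; no other service
# provider's database needs to be seeded for this check.
pds_transactions = [
    ("1111", "Mysore"),
    ("2222", "Hubli"),
    ("1111", "Bangalore"),  # same number claiming in a second district
]

def double_claimants(transactions):
    """Return Aadhaar numbers drawing benefits in more than one district."""
    districts = {}
    for aadhaar_no, district in transactions:
        districts.setdefault(aadhaar_no, set()).add(district)
    return {a for a, ds in districts.items() if len(ds) > 1}

assert double_claimants(pds_transactions) == {"1111"}
```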

Conclusion

Although adherence to the Supreme Court's interim order has been ad hoc27, the order does provide a number of welcome limitations and clarifications to the UID scheme. Yet, despite limited clarification from the Supreme Court and further clarification from the Finance Ministry's order, the process of seeding and its necessity remain unclear. Is the UIDAI taking fully informed consent for the seeding process and what it will enable? Should the UIDAI be liable for the accuracy of the seeding process? Is seeding of service provider and bank databases necessary for the UIDAI to fulfill its mandate? Is the UIDAI's mandate to provide an identifier and an identity-authentication mechanism, or is it to authenticate an individual's eligibility to receive services? Is this mandate backed by law and by adequate safeguards? Can the court order be interpreted to mean that, to deliver services in the PDS system, the UIDAI will need access to bank accounts or other transactions and information stored in a service provider's database to verify the claims of the user?

Many news items reflect concerns about convergence arising out of the UID scheme.28 To be clear, the process of seeding is not the same as convergence: seeding enables convergence, which in turn can enable profiling, surveillance, and the like. That said, the seeding process needs to be examined more closely by the public and the court to ensure that society can reap the benefits of seeding while avoiding the problems it may pose.


[1]. Justice K.S Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at:  http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[2]. Usha Ramanathan. Decoding the Aadhaar judgment: No more seeding, not till the privacy issue is settled by the court. The Indian Express. August 12th 2015. Available at: http://indianexpress.com/article/blogs/decoding-the-aadhar-judgment-no-more-seeding-not-till-the-privacy-issue-is-settled-by-the-court/

[3]. UIDAI. Approach Document for Aadhaar Seeding in Service Delivery Databases. Version 1.0. Available at: https://authportal.uidai.gov.in/static/aadhaar_seeding_v_10_280312.pdf

[4]. UIDAI. Standard Protocol Covering the Approach & Process for Seeding Aadhaar Numbers in Service Delivery Databases. Available at: https://uidai.gov.in/images/aadhaar_seeding_june_2015_v1.1.pdf

[5]. Version 1.0 pg. 2

[6]. Version 1.0 pg. 19

[7]. Version 1.1 pg. 3

[8]. Version 1.1 pg. 7

[9]. Version 1.1 pg. 5 -7

[10]. Version 1.1 pg. 7-13

[11]. Version 1.0 pg 19-22

[12]. Version 1.0 pg. 4

[13]. Version 1.1 pg. 5, figure 3.

[14]. David Card, Raj Chetty, Martin Feldstein, and Emmanuel Saez. Expanding Access to Administrative Data for Research in the United States. Available at: http://obs.rc.fas.harvard.edu/chetty/NSFdataaccess.pdf

[15]. Justice K.S Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at:  http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[16]. Version 1.1 pg. 18

[17]. Aadhaar Enrollment Form from Karnataka State. Available at: http://www.karnataka.gov.in/aadhaar/Downloads/Application%20form%20-%20English.pdf

[18]. Business Line. Aadhaar only for foodgrains, LPG, kerosene, distribution. August 27th 2015. Available at: http://www.thehindubusinessline.com/economy/aadhaar-only-for-foodgrains-lpg-kerosene-distribution/article7587382.ece

[19]. Bharti Jain. Election Commission not to link poll rolls to Aadhaar. The Times of India. August 15th 2015. Available at: http://timesofindia.indiatimes.com/india/Election-Commission-not-to-link-poll-rolls-to-Aadhaar/articleshow/48488648.cms

[20]. Graham Greenleaf. 'Access all areas': Function creep guaranteed in Australia's ID Card Bill (No. 1). Computer Law & Security Review. Volume 23, Issue 4. 2007. Available at: http://www.sciencedirect.com/science/article/pii/S0267364907000544

[21]. Version 1.1 pg. 3

[22]. For example, the UIDAI depends on private companies to act as enrollment agencies and to collect, verify, and enroll individuals in the UID scheme. Though the UIDAI enters into MOUs with these organizations, it cannot be held responsible for the security or accuracy of the data they collect, store, etc. See the draft MOU for registrars: https://uidai.gov.in/images/training/MoU_with_the_State_Governments_version.pdf

[23]. Justice K.S Puttaswamy & Another vs. Union of India & Others. Writ Petition (Civil) No. 494 of 2012. Available at:  http://judis.nic.in/supremecourt/imgs1.aspx?filename=42841

[24]. Version 1.0 pg.3

[25]. Version 1.0  pg.4

[26]. Version 1.1 pg. 3

[27]. For example, there are reports of Aadhaar being introduced for different services such as education. See: Tanu Kulkarni. Aadhaar may soon replace roll numbers. The Hindu. August 21st 2015. Available at: http://www.thehindu.com/news/cities/bangalore/aadhaar-may-soon-replace-roll-numbers/article7563708.ece

[28]. For example, see: Salil Tripathi. A dangerous convergence. Live Mint. July 31st 2015. Available at: http://www.livemint.com/Opinion/xrqO4wBzpPbeA4nPruPNXP/A-dangerous-convergence.html

Are we Throwing our Data Protection Regimes under the Bus?

by Rohan George — last modified Sep 10, 2015 02:02 PM
In this blog post Rohan examines why the principle of consent is providing us increasingly less of an aegis in protecting our data.

Consent is complicated. What we think of as reasonably obtained consent varies substantially with the circumstance. For example, in treating rape cases, the UK justice system has moved to recognise complications like alcohol and its effect on explicit consent[1]. Yet in contracts, consent may be implied simply when one person accepts another's work on a contract without objections[2]. These situations highlight the differences between the various forms of informed consent and the implications for its validity.

Consent has emerged as a key principle in regulating the use of personal data, and different countries have adopted different regimes, ranging from comprehensive regimes like that of the EU to more sectoral approaches like that of the USA. However, in a modern epoch characterised by commonplace big data analytics, many commentators have challenged the efficacy and relevance of consent in data protection. I argue that we may even risk throwing our data protection regimes under the proverbial bus should we continue to focus on consent as a key pillar of data protection.

Consent as a tool in Data Protection Regimes

Even a cursory review of current data protection laws around the world shows the extent of the law's reliance on consent. In the EU, for example, Article 7 of the Data Protection Directive, passed in 1995, provides that data processing is only legitimate when "the data subject has unambiguously given his consent"[3]. Article 8, which guards against the processing of sensitive data, provides that such prohibitions may be lifted when "the data subject has given his explicit consent to the processing of those data"[4]. Even as the EU attempts to strengthen data protection within the bloc through proposed reforms[5], the focus on the consent of the data subject remains strong: there are proposals to put in place a requirement of "unambiguous consent by the data subject"[6], which would be mandatory before any data processing can occur[7].

Despite the USA's very different overall approach to data protection and privacy, consent is an equally integral part of its data protection frameworks. In his book Protectors of Privacy[8], Abraham Newman describes two main types of privacy legislation: comprehensive and limited. He argues that places like the EU have adopted comprehensive regimes, which primarily seek to protect individuals because of the "informational and power asymmetry" between individuals and organisations[9]. On the other hand, he classifies the American approach as limited, focusing on sectoral protections and principles of fair information practice instead of overarching legislation[10]. These sectoral laws include the Fair Credit Reporting Act[11] (which governs consumer credit reporting), the Privacy Act[12] (which governs data collected by the Federal government) and the Electronic Communications Privacy Act[13] (which deals with email communications), among others. However, the Federal Trade Commission describes itself as having only "limited authority over the collection and dissemination of personal data collected online"[14].

This is because the general data processing that is commonplace in today's era of big data is regulated only by the privacy protections that come from the Federal Trade Commission's (FTC) Fair Information Practice Principles (FIPPs). Unsurprisingly, consent is equally important under the FTC's FIPPs. The FTC describes the principle of consent as "the second widely-accepted core principle of fair information practice"[15], in addition to the principle of notice. Other guidelines on fair data processing published by organisations like the Organisation for Economic Cooperation and Development[16] (OECD) or the Canadian Standards Association[17] (CSA) also include consent as a key mechanism in data protection.

The origins of consent in privacy and data protection

Given the clearly extensive reliance on consent in data protection, it seems prudent to examine the origins of consent in privacy and data protection. Just why does consent have so much weight in data protection?

One reason is that data protection, along with inextricably linked concerns about privacy, could be said to be rooted in protecting private property. It was argued that the “early parameters of what was to become the right to privacy were set in cases dealing with unconventional property claims”[18], such as unconsented publication of personal letters[19] or photographs[20]. It was the publication of Brandeis and Warren’s well-known article “The Right to Privacy”[21], that developed “the current philosophical dichotomy between privacy and property rights”[22], as they asserted that privacy protections ought to be recognised as a right in and of themselves and needed separate protection[23]. Indeed, it was Warren and Brandeis who famously borrowed Justice Cooley's expression that privacy is the “right to be let alone”[24].

On the other side of the debate are scholars like Epstein and Posner, who see privacy protections as part of protecting personal property under tort law[25]. However, the central point is that most scholars seem to acknowledge the relationship between privacy and private property. Even Brandeis and Warren themselves argued that one general aim of privacy is “to protect the privacy of private life, and to whatever degree and in whatever connection a man's life has ceased to be private”[26].

It is also important to locate the idea of consent within the domain of privacy and private property protections. Ostensibly, consent seems to have the effect of lessening the privacy protections afforded in a particular situation to a person, because by acquiescing to the situation, one could be seen as waiving their privacy concerns. Brandeis and Warren concur with this position as they acknowledge how “the right to privacy ceases upon the publication of the facts by the individual, or with his consent”[27]. They assert that this is “but another application of the rule which has become familiar in the law of literary and artistic property”[28].

Perhaps the most eloquent articulation of the importance of consent in privacy comes from Sir Edward Coke's idea that "every man's house is his castle"[29]. Though the 'Castle Doctrine' has been used as a justification for protecting one's property with the use of force[30], I think implicit in it is the idea that consent is necessary to preserve privacy. If not, why would anyone be justified in preventing trespass, other than to prevent unconsented entry onto or use of their property? The doctrine of "Volenti non fit injuria"[31], or 'to one who consents no injury is done', is thus the very embodiment of the role of consent in protecting private property. And as conceptions of private property develop to recognise that the data one gives out is part of one's private property, as in US v. Jones, which led scholars to assert that "people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties"[32], consent continues to act as an important aspect of privacy protection.

Yet, linking privacy with private property is not universally accepted as the conception of privacy. For instance, Alan Westin, in his book Privacy and Freedom[33], describes privacy as “the right to control information about oneself”[34]. Another scholar, Ruth Gavison, contends instead that “our interest in privacy is related to our concern over our accessibility to others: the extent to which we are known to others, the extent to which others have physical access to us, and the extent to which we are the subject of others' attention”[35].

While these alternative notions of privacy's foundational principles may differ from those linking privacy with private property, consent can be located within these formulations too. Regarding Westin's argument, I think that implicit in the right to control one's information are ideas about individual autonomy, which is exercised through giving or withholding one's consent. Similarly, Gavison herself states that privacy functions to advance "liberty, autonomy and selfhood"[36]. Consent plays a key role in upholding the liberty, autonomy and selfhood that privacy affords us. It is therefore far from unfounded to claim that consent is an integral part of protecting privacy.

Consent, Big Data and Data protection

Given the solid underpinnings of the principle of consent in privacy protection, it was hardly a coincidence that consent became an integral part of data protection. However, with the rise of big data practices, one quickly finds that consent ceases to work effectively as a tool for protecting privacy. In a big data context, Solove argues that privacy regulation rooted in consent is ineffective, because garnering consent amidst ubiquitous data collection for all the online services one uses as part of daily life is unmanageable[37]. Additionally, the secondary uses of one's data are difficult to assess at the point of collection, and subsequently meaningful consent for secondary use is difficult to obtain[38]. This section examines these two primary consequences of prioritising consent amidst big data practices.

Consent places unrealistic and unfair expectations on the Individual

As noted by Tene and Polonetsky, the first concern is that current privacy frameworks which emphasize informed consent "impose significant, sometimes unrealistic, obligations on both organizations and individuals"[39]. The premise behind this argument stems from the way that consent is often garnered by organisations, especially regarding use of their services. An examination of various terms of use policies from banks, online video streaming websites, social networking sites, online fashion or more general online shopping websites reveals a deluge of information that the user has to comprehend. Moreover, there are simply too many "entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity"[40].

As Cate and Mayer-Schönberger note in the Microsoft Global Privacy Summit Summary Report, “almost everywhere that individuals venture, especially online, they are presented with long and complex privacy notices routinely written by lawyers for lawyers, and then requested to either “consent” or abandon the use of the desired service”[41]. In some cases, organisations try to simplify these policies for the users of their service, but such initiatives make up the minority of terms of use policies. Tene and Polonetsky assert that “it is common knowledge among practitioners in the field that privacy policies serve more as liability disclaimers for businesses than as assurances of privacy for consumers”[42].

However, it is equally important to consider the principle of consent from the perspective of companies. At a time when many businesses have to comply with numerous regulations and processes in the name of 'compliance'[43], the obligations for obtaining consent can burden some businesses. Firms have to gather consent while also enhancing user or customer experiences, which is a tricky balance to find. Requiring consent at every stage may make the user experience much worse: imagine having to give consent for your profile to be uploaded every time you set a high score in a video game. At the same time, "organizations are expected to explain their data processing activities on increasingly small screens and obtain consent from often-uninterested individuals"[44]. Given these factors, it is somewhat understandable that companies garner consent for all possible (secondary) uses at the outset, as obtaining fresh consent for each later use is not feasible.

Nonetheless, this results in situations where "data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility"[45]. The totality of the situation shows the odds stacked against the individual. It could even be argued that this is one manifestation of the informational and power asymmetry that exists between individuals and organisations[46], because users may unwittingly agree to unfair, unclear or even unknown terms, conditions and data practices. Not only are individuals greatly misinformed about the data collected about them, but the vast majority of people do not even read these Terms and Conditions or End User License Agreements[47]. Solove also argues that "people often lack enough expertise to adequately assess the consequences of agreeing to certain present uses or disclosures of their data"[48].

While the organisational practice of providing extensive and complicated terms of use policies is not illegal, it should not go unnoticed that, by one estimate, it would take 76 working days to review the privacy policies one agrees to online[49], or, by another, that the opportunity cost incurred by US society in reading privacy policies is $781 billion[50]. I do think it is unfair for the law to put users into such situations, where they are "forced to make overly complex decisions based on limited information"[51]. There have been laudable attempts by some government organisations, like Canada's Office of the Privacy Commissioner and the USA's Federal Trade Commission, to provide guidance to firms on making their privacy policies more accessible[52]. However, such guidance is hard to enforce. When users have neither the expertise nor the rigour to review privacy policies effectively, the consent they provide is naturally far from informed.

Secondary use, Aggregation and Superficial Consent

What amplifies this informational asymmetry is the potential for the aggregation of individuals' data and subsequent secondary use of the data collected. "Even if people made rational decisions about sharing individual pieces of data in isolation, they greatly struggle to factor in how their data might be aggregated in the future"[53].

This has to do with the prevalence of big data analytics that characterizes our modern epoch, and it has major implications for the nature and meaningfulness of the consent users provide. By definition, "big data analysis seeks surprising correlations"[54], and some of its most insightful results are counterintuitive and nearly impossible to conceive of at the point of primary data collection. One noteworthy example comes from the USA: Walmart's predictive analytics. By studying the purchasing patterns of its loyalty card holders[55], the company ascertained that, prior to a hurricane, the most popular items people tend to buy are actually Pop-Tarts (a pre-baked toaster pastry) and beer[56]. These correlations are highly counterintuitive and far from what people expect to be necessities before a hurricane. The insights led to Walmart stores being stocked with the most relevant products at the time of need. This is one example of how data might be repurposed and aggregated for a novel purpose; nonetheless, the question of the nature of the consent Walmart obtained for collecting and analysing the shopping habits of its loyalty card holders stands.

One reason secondary uses make consent less meaningful has been articulated by De Zwart et al., who observe that "the idea of consent becomes unworkable in an environment where it is not known, even by the people collecting and selling data, what will happen to the data"[57]. Taken together with Solove's aggregation effect, two points become apparent:

  1. Data we consent to be collected about us may be aggregated with other data we revealed in the past. While each piece may be innocuous separately, future aggregation risks creating new information that one may find overly intrusive and would not have consented to. Current data protection regimes make such consent impossible to give meaningfully, because there is no way for the user to know how their past and present data may be aggregated in the future.
  2. Data we consent to be collected for one specific purpose may be used in a myriad of other ways. The user has virtually no way to know how their data might be repurposed, because often neither do the collectors of that data[58].
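The aggregation risk can be made concrete with a toy example: two individually innocuous datasets, once joined on a shared key, support an inference the user consented to in neither collection. All data and names here are invented for illustration:

```python
# Two datasets that are individually innocuous: neither reveals much on
# its own, but joining them supports an inference (here, a possible
# pregnancy) that the user consented to in neither collection.
purchases = {"user42": ["unscented lotion", "prenatal vitamins"]}
check_ins = {"user42": ["maternity clinic"]}

def aggregate(*datasets):
    """Join records sharing a key across any number of datasets."""
    profile = {}
    for dataset in datasets:
        for user, items in dataset.items():
            profile.setdefault(user, []).extend(items)
    return profile

profile = aggregate(purchases, check_ins)
assert profile["user42"] == ["unscented lotion", "prenatal vitamins",
                             "maternity clinic"]
```

At the time each dataset was collected, neither collector could have described this joined profile in a consent notice, which is precisely why point-of-collection consent fails here.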

Therefore, regulators' reliance on principles of purpose limitation and on the mechanism of consent for robust data protection seems suboptimal at the very least, as big data practices of aggregation, repurposing and secondary use become commonplace.

Other problems with the mechanism of consent in the context of Big Data

On one end of the spectrum are situations where organisations garner consent for future secondary uses at the time of data collection. As discussed earlier, this is currently common practice, and the likelihood of users providing informed consent is low.

However, it is equally important to consider situations at the other end of the spectrum, where obtaining user consent for secondary use becomes too expensive and cumbersome[59]. As a result, potentially socially valuable secondary use of data for research and innovation, or simply for "the practice of informed and reflective citizenship"[60], may not take place. While potential social research may be hindered by the consent requirement, the more pressing reality is that one cannot give meaningful consent to unknown secondary uses of data. Essentially, not knowing what you are consenting to scarcely provides any semblance of strong privacy protection, and so the consent individuals provide is superficial at best.

Many scholars also point to the binary nature of consent as it stands today[61]. Solove describes consent in data protection as nuanced[62], while Cate and Mayer-Schönberger go further, asserting that "binary choice is not what the privacy architects envisioned four decades ago when they imagined empowered individuals making informed decisions about the processing of their personal data". This dichotomous nature of consent further reduces its usefulness in data protection regimes.

Whether data collection is opted into or opted out of also has a bearing on the nature of the consent obtained. Many argue that regulations with opt-out options are not effective, as "opt-out consent might be the product of mere inertia or lack of awareness of the option to opt out"[63]. This is in line with initiatives around the world to make gathering consent more explicit through opt-in rather than opt-out mechanisms. The impetus to embrace opt-in regimes was articulated by former FTC chairman Jon Leibowitz as early as 2007[64], and opt-in consent is being actively considered by the EU in the reform of its data protection laws[65].

However, as Solove rightly points out, opt-in consent is problematic as well[66]. There are a few reasons for this. First, many data collectors have the "sophistication and motivation to find ways to generate high opt-in rates"[67] by "conditioning products, services, or access on opting in"[68]. In essence, they leave individuals no choice but to opt into data collection, because using their particular product or service is dependent, or 'conditional', on explicit consent. A pertinent example of this is the end-user license agreement for Apple's iTunes Store[69]. Solove rightly notes that "if people want to download apps from the store, they have no choice but to agree. This requirement is akin to an opt-in system — affirmative consent is being sought. But hardly any bargaining or choosing occurs in this process"[70]. Second, as stated earlier, obtaining consent risks impeding potential innovation or research because it is too cumbersome or expensive to obtain[71].

Third, as Tene and Polonetsky argue, "collective action problems threaten to generate a suboptimal equilibrium where individuals fail to opt into societally beneficial data processing in the hope of free-riding on others' good will"[72]. A useful illustration comes from another context where obtaining consent is the difference between life and death: organ donation. The gulf in consenting donors between countries with an opt-in regime for organ donation and countries with an opt-out regime is staggering. Even culturally similar countries, such as Austria and Germany, exhibit vast differences in donation rates: Austria at 99% compared to just 12% in Germany[73]. This suggests that, in terms of obtaining consent (especially for socially valuable actions), opt-in methods may be limiting, because people may have an aversion to anything being presumed about their choices, even if the costs of opting out are low[74].

What the above section demonstrates is how consent may be limited as a tool for data protection regimes, especially in a big data context. That said, consent is not in itself a useless or outdated concept; the points raised above show the problems that relying on it extensively poses in a big data context. Consent should still remain a part of data protection regimes. However, there are both better ways for data-collecting organisations to obtain consent and other stages besides the time of data collection on which to focus regulatory attention.

What organisations can do to obtain more meaningful consent

Organisations that collect data could alter the way they obtain user consent. Most people can attest to having checked a box sitting inconspicuously next to the words 'I agree', thereby agreeing to the Terms and Conditions or End-User License Agreement for a particular service or product. This is in line with the need for both parties to assent to the terms of a contract as part of forming a valid contract[75]. Some of the more common types of online agreements that users enter into are Clickwrap and Browsewrap agreements. A Clickwrap agreement is "formed entirely in an online environment such as the Internet, which sets forth the rights and obligations between parties"[76]. Such agreements "require a user to click "I agree" or "I accept" before the software can be downloaded or installed"[77]. Browsewrap agreements, on the other hand, "try to characterize your simple use of their website as your 'agreement' to a set of terms and conditions buried somewhere on the site"[78].

Because Browsewrap agreements do not "require a user to engage in any affirmative conduct"[79], the consent these agreements obtain is highly superficial. In fact, many argue that such agreements are slightly unscrupulous, because users are seldom aware that they exist[80], hidden as they often are in small print[81] or below the download button[82]. Courts have begun to consider unfair those terms and practices which "hold website users accountable for terms and conditions of which a reasonable Internet user would not be aware just by using the site"[83]. For example, in In re Zappos.com, Inc., Customer Data Security Breach Litigation, the court said of the company's Terms of Use (a browsewrap agreement):

"The Terms of Use is inconspicuous, buried in the middle to bottom of every Zappos.com webpage among many other links, and the website never directs a user to the Terms of Use. No reasonable user would have reason to click on the Terms of Use."[84]

Clearly, courts recognise the potential for consent or assent to be obtained in a manner that is neither transparent nor active. Organisations that collect data should be aware of this and consider other options for obtaining consent.

A few commentators have suggested that organisations switch to using Clickwrap or clickthrough agreements to obtain consent. Undergirding this argument is the fact that courts have, on numerous occasions, upheld the validity of Clickwrap agreements, in cases including Groff v. America Online, Inc.[85] and Hotmail Corporation v. Van Money Pie, Inc.[86]. These cases built upon the precedent-setting ProCD v. Zeidenberg, in which the court ruled that "Shrinkwrap licenses are enforceable unless their terms are objectionable on grounds applicable to contracts in general"[87]. Shrinkwrap licenses are end-user license agreements printed on a software product's shrinkwrap, which a user will necessarily notice and have the opportunity to read before opening and using the product; the rules that govern them have since been applied to clickthrough agreements. As Bayley rightly noted, the validity of clickthrough agreements depends on "reasonable notice and opportunity to review—whether the placement of the terms and click-button afforded the user a reasonable opportunity to find and read the terms without much effort"[88].

From the perspective of companies and other organisations that seek users’ consent to collect and process their data, clickwrap agreements may be one useful means of obtaining more meaningful and informed consent. Indeed, Bayley contends that clear clickwrap agreements are “the “best practice” mechanism for creating a contractual relationship between an online service and a user”[89]. He suggests the following mechanism for acquiring clear and informed consent via contractual agreement[90]:

  1. Conspicuously present the TOS to the user prior to any payment (or other commitment by the user) or installation of software (or other changes to a user’s machine or browser, like cookies, plug-ins, etc.)
  2. Allow the user to easily read and navigate all of the terms (i.e. be in a normal, readable typeface with no scroll box)
  3. Provide an opportunity to print, and/or save a copy of, the terms
  4. Offer the user the option to decline as prominently and by the same method as the option to agree
  5. Ensure the TOS is easy to locate online after the user agrees.
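To make the mechanism concrete, Bayley’s five criteria can be thought of as preconditions an organisation must be able to evidence before treating a click as assent. The following is a minimal, purely illustrative sketch of such a consent record; all names (`ConsentRecord`, `record_consent`, the example URL) are my own assumptions, not drawn from Bayley or any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: the evidence an organisation might retain to show a
# clickwrap agreement satisfied Bayley's five criteria. Field names are
# illustrative assumptions only.
@dataclass
class ConsentRecord:
    user_id: str
    tos_version: str      # which version of the terms was shown (criterion 1)
    tos_url: str          # where the terms remain available afterwards (criterion 5)
    shown_in_full: bool   # terms readable in full, no scroll box (criterion 2)
    copy_offered: bool    # user could print or save the terms (criterion 3)
    decline_offered: bool # decline as prominent as agree (criterion 4)
    accepted: bool        # the user's affirmative choice
    timestamp: str

def record_consent(user_id, tos_version, tos_url, shown_in_full,
                   copy_offered, decline_offered, accepted):
    """Refuse to log consent unless the presentation criteria were met."""
    if not (shown_in_full and copy_offered and decline_offered):
        raise ValueError("terms were not presented in a way that supports informed consent")
    return ConsentRecord(user_id, tos_version, tos_url, shown_in_full,
                         copy_offered, decline_offered, accepted,
                         datetime.now(timezone.utc).isoformat())
```

The point of the gate in `record_consent` is that, on Bayley’s account, a click only binds if the surrounding presentation was adequate; a record of acceptance captured without that evidence would be of doubtful contractual value.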

These principles make sense for organisations because they require relatively minor procedural changes, rather than a wholesale transformation of how organisations validate their data processing.

Herzfeld adds two further suggestions to this list. First, organisations should not allow any use of their product or service until there has been an “express and active manifestation of assent”[91]. Second, they should institute processes by which users periodically reiterate their consent to the terms of use[92]. He goes further, proposing a baseline that organisations should follow: “companies should always provide at least inquiry notice of all terms, and require counterparties to manifest assent, through action or inaction, in a manner that reasonable people would clearly understand to be assent”[93].

While obtaining informed and meaningful consent is neither foolproof nor a process with widely accepted steps, it is clear that current efforts by organisations may be insufficient. As Cate and Mayer-Schönberger note, “data processors can perhaps too easily point to the formality of notice and consent and thereby abrogate much of their responsibility”[94]. One thing organisations can do, both to ensure more meaningful and informed consent and to forestall legal action over unscrupulous or unfair terms, is to change the way they obtain consent from opt-out to opt-in.

Conclusion – how regulation should change

In conclusion, the current emphasis on, and extensive use of, consent in data protection seems limited in its ability to protect against the illegitimate processing of data in a big data context. More people are using online services extensively, and organisations are realising the value of collecting and analysing user data to drive analytics that improve their products. Data protection has never been more crucial.

However, emphasising consent is not merely less relevant, because the consent organisations obtain is seldom informed; it may even jeopardise the aims of data protection. Commentators are quick to point out how nimble firms are at acquiring consent in new ways that comply with the law while preserving their position of asymmetric power. Kuner, Cate, Millard and Svantesson, all eminent scholars in the field, asked the prescient question: “Is there a proper role for individual consent?”[95] They believe consent still has a role, but that finding this role in the big data context is challenging[96]. There is, however, surprising consensus on the approach that should be taken as data protection regimes shift away from consent.

In fact, the alternative is staring us in the face: data protection regimes must look elsewhere, to other points along the data analysis process, for aspects to regulate so as to ensure the legitimate and fair processing of data. One compelling idea, which had broad-based support at the aforementioned Microsoft Privacy Summit, was that “new approaches must shift responsibility away from data subjects toward data users and toward a focus on accountability for responsible data stewardship”[97], i.e. creating regulations to govern data processing rather than data collection. De Zwart et al. suggest that regulation must instead “focus on the processes involved in establishing algorithms and the use of the resulting conclusions”[98].

This might involve requiring data collectors to publish the queries they run on the data, a solution that balances preserving the ‘trade secret’ of a firm that has creatively designed an algorithm against ensuring fairness and legitimacy in data processing. One manifestation of this approach is procedural data due process, which “would regulate the fairness of Big Data’s analytical processes with regard to how they use personal data (or metadata derived from or associated with personal data) in any adjudicative process, including processes whereby Big Data is being used to determine attributes or categories for an individual”[99]. While the usefulness of data due process is debated, it is only one of a consortium of ideas about alternatives to consent in data protection. The main point is that “greater transparency should be required if there are fewer opportunities for consent or if personal data can be lawfully collected without consent”[100].

It is also worth considering what exactly constitutes a single use of a group’s or individual’s data, and what types of uses or processes require a “greater form of authorization”[101]. Certain data processes could require special affirmative consent that is not required for less intimate matters. Canada’s Office of the Privacy Commissioner released a privacy toolkit for organisations which provides some exceptions to the consent principle, one of which applies where data collection “is clearly in the individual’s interests and consent is not available in a timely way”[102]. Some therefore suggest that “if notice and consent are reserved for more appropriate uses, individuals might pay more attention when this mechanism is used”[103].

Another option for regulators is the development and implementation of a sticky privacy policies regime. This refers to “machine-readable policies [that] can stick to data to define allowed usage and obligations as it travels across multiple parties, enabling users to improve control over their personal information”[104]. Sticky privacy policies seem to alleviate the risk of repurposed, unanticipated uses of data, because users who consent to giving out their data will also be consenting to how it is used thereafter. The counter-argument is that sticky policies place even greater obligations on users to decide how they would like their data used, not just at one point but for the long term. Expecting organisations to state their purposes for all future uses of individuals’ data, or individuals to give informed consent to such uses, seems far-fetched from both perspectives.
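The mechanics of a sticky policy can be illustrated very simply: the policy is a machine-readable object attached to the data itself, and every downstream use is checked against it before the data is released. The sketch below is a toy illustration under my own assumptions; the function names, field names, and the purpose vocabulary are invented for exposition and do not come from Pearson and Casassa Mont’s design.

```python
# Hypothetical sketch of a "sticky" privacy policy: a machine-readable
# policy travels with the data, and every would-be use is checked against
# it. All names and the purpose vocabulary are illustrative assumptions.

def make_sticky_record(payload, allowed_purposes, obligations):
    """Bundle personal data with the policy governing its downstream use."""
    return {
        "payload": payload,
        "policy": {
            "allowed_purposes": set(allowed_purposes),
            "obligations": list(obligations),  # e.g. "delete_after_90_days"
        },
    }

def use_data(record, purpose):
    """A downstream party must consult the attached policy before any use."""
    if purpose not in record["policy"]["allowed_purposes"]:
        raise PermissionError(f"purpose '{purpose}' not covered by the sticky policy")
    return record["payload"]

# The user consented only to these purposes at collection time.
record = make_sticky_record({"email": "a@example.com"},
                            ["order_fulfilment", "fraud_detection"],
                            ["delete_after_90_days"])
use_data(record, "fraud_detection")          # permitted
# use_data(record, "behavioural_marketing")  # would raise PermissionError
```

The design point this captures is that the consent decision is made once, at collection, and then mechanically enforced at each later transfer, which is precisely why critics say it front-loads onto users the burden of anticipating all future uses.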

Still another solution draws from the noted scholar Helen Nissenbaum’s work on privacy. She argues that “the benchmark of privacy is contextual integrity”[105]: “Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it”[106]. On this line of thinking, legislators should focus their attention on what constitutes appropriateness in particular contexts, although this could prove challenging as contexts merge and understandings of appropriateness shift with circumstances.

While there is little consensus among the numerous proposals for focusing regulatory attention on data processing and the uses of collected data, there is broader support for a shift away from consent, as exemplified by the Microsoft Privacy Summit:

“There was broad general agreement that privacy frameworks that rely heavily on individual notice and consent are neither sustainable in the face of dramatic increases in the volume and velocity of information flows nor desirable because of the burden they place on individuals to understand the issues, make choices, and then engage in oversight and enforcement.”[107]

Cate and Mayer-Schönberger offer what I think is the most fitting conclusion to this article, and a summary of the debate I have presented: “in short, ensuring individual control over personal data is not only an increasingly unattainable objective of data protection, but in many settings it is an undesirable one as well.”[108] We might very well be throwing entire data protection regimes under the bus.


[1] Gordon Rayner and Bill Gardner, “Men Must Prove a Woman Said ‘Yes’ under Tough New Rape Rules - Telegraph,” The Telegraph, January 28, 2015, sec. Law and Order, http://www.telegraph.co.uk/news/uknews/law-and-order/11375667/Men-must-prove-a-woman-said-Yes-under-tough-new-rape-rules.html.

[2] Legal Information Institute, “Implied Consent,” accessed August 25, 2015, https://www.law.cornell.edu/wex/implied_consent.

[3] European Parliament, Council of the European Union, Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data, 1995, http://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX:31995L0046.

[4] See supra note 3.

[5] European Commission, “Stronger Data Protection Rules for Europe,” European Commission Press Release Database, June 15, 2015, http://europa.eu/rapid/press-release_MEMO-15-5170_en.htm.

[6] Council of the European Union, “Data Protection: Council Agrees on a General Approach,” June 15, 2015, http://www.consilium.europa.eu/en/press/press-releases/2015/06/15-jha-data-protection/.

[7] See supra note 6.

[8] Abraham L. Newman, Protectors of Privacy: Regulating Personal Data in the Global Economy (Ithaca, NY: Cornell University Press, 2008).

[9] See supra note 8, at 24.

[10] Ibid.

[11] 15 U.S.C. §1681.

[12] 5 U.S.C. § 552a.

[13] 18 U.S.C. § 2510-22.

[14] Federal Trade Commission, “Privacy Online: A Report to Congress,” June 1998, https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-congress/priv-23a.pdf: 40.

[15] See supra note 14, at 8.

[16] Organisation for Economic Cooperation and Development, “2013 OECD Privacy Guidelines,” 2013, http://www.oecd.org/internet/ieconomy/privacy-guidelines.htm.

[17] Canadian Standards Association, “Canadian Standards Association Model Code,” March 1996, https://www.cippguide.org/2010/06/29/csa-model-code/.

[18] Mary Chlopecki, “The Property Rights Origins of Privacy Rights | Foundation for Economic Education,” August 1, 1992, http://fee.org/freeman/the-property-rights-origins-of-privacy-rights.

[19] See Pope v. Curl (1741), available here.

[20] See Prince Albert v. Strange (1849), available here.

[21] Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review 4, no. 5 (December 15, 1890): 193–220, doi:10.2307/1321160.

[22] See supra note 18.

[23] Ibid.

[24] See supra note 21.

[25] See for example, Richard Epstein, “Privacy, Property Rights, and Misrepresentations,” Georgia Law Review, January 1, 1978, 455. And Richard Posner, “The Right of Privacy,” Sibley Lecture Series, April 1, 1978, http://digitalcommons.law.uga.edu/lectures_pre_arch_lectures_sibley/22.

[26] See supra note 21, at 215.

[27] See supra note 21, at 218.

[28] Ibid.

[29] Adrienne W. Fawcett, “Q: Who Said: ‘A Man’s Home Is His Castle’?,” Chicago Tribune, September 14, 1997, http://articles.chicagotribune.com/1997-09-14/news/9709140446_1_castle-home-sir-edward-coke.

[30] Brendan Purves, “Castle Doctrine from State to State,” South Source, July 15, 2011, http://source.southuniversity.edu/castle-doctrine-from-state-to-state-46514.aspx.

[31] “Volenti Non Fit Injuria,” E-Lawresources, accessed August 25, 2015, http://e-lawresources.co.uk/Volenti-non-fit-injuria.php.

[32] Bryce Clayton Newell, “Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 16, 2013), http://papers.ssrn.com/abstract=2341182.

[33] Alan Westin, Privacy and Freedom (Ig Publishing, 2015).

[34] Helen Nissenbaum, “Privacy as Contextual Integrity,” Washington Law Review 79 (2004): 119.

[35] Ruth Gavison, “Privacy and the Limits of Law,” The Yale Law Journal 89, no. 3 (January 1, 1980): 421–71, doi:10.2307/795891: 423.

[36] Ibid.

[37] Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 4, 2012), http://papers.ssrn.com/abstract=2171018: 1888.

[38] Ibid, at 1889.

[39] Omer Tene and Jules Polonetsky, “Big Data for All: Privacy and User Control in the Age of Analytics,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, September 20, 2012), http://papers.ssrn.com/abstract=2149364: 261.

[40] See supra note 37, at 1881.

[41] Fred H. Cate and Viktor Mayer-Schönberger, “Notice and Consent in a World of Big Data - Microsoft Global Privacy Summit Summary Report and Outcomes,” Microsoft Global Privacy Summit, November 9, 2012, http://www.microsoft.com/en-us/download/details.aspx?id=35596: 3.

[42] See supra note 39.

[43] See for example, US Securities and Exchange Commission, “Corporation Finance Small Business Compliance Guides,” accessed August 26, 2015, https://www.sec.gov/info/smallbus/secg.shtml and Australian Securities & Investments Commission, “Compliance for Small Business,” accessed August 26, 2015, http://asic.gov.au/for-business/your-business/small-business/compliance-for-small-business/.

[44] See supra note 39.

[45] See supra note 41.

[46] See supra note 8, at 24.

[47] See for example, James Daley, “Don’t Waste Time Reading Terms and Conditions,” The Telegraph, September 3, 2014, and Robert Glancy, “Will You Read This Article about Terms and Conditions? You Really Should Do,” The Guardian, April 24, 2014, sec. Comment is free, http://www.theguardian.com/commentisfree/2014/apr/24/terms-and-conditions-online-small-print-information.

[48] See supra note 37, at 1886.

[49] Alex Hudson, “Is Small Print in Online Contracts Enforceable?,” BBC News, accessed August 26, 2015, http://www.bbc.com/news/technology-22772321.

[50] Aleecia M. McDonald and Lorrie Faith Cranor, “The Cost of Reading Privacy Policies,” I/S: A Journal of Law and Policy for the Information Society 4 (2008–2009): 541.

[51] See supra note 41, at 4.

[52] For Canada, see Office of the Privacy Commissioner of Canada, “Fact Sheet: Ten Tips for a Better Online Privacy Policy and Improved Privacy Practice Transparency,” October 23, 2013, https://www.priv.gc.ca/resource/fs-fi/02_05_d_56_tips2_e.asp. And Office of the Privacy Commissioner of Canada, “Privacy Toolkit - A Guide for Businesses and Organisations to Canada’s Personal Information Protection and Electronic Documents Act,” accessed August 26, 2015, https://www.priv.gc.ca/information/pub/guide_org_e.pdf.

For USA, see Federal Trade Commission, “Internet of Things: Privacy & Security in a Connected World,” Staff Report (Federal Trade Commission, January 2015), https://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf.

[53] See supra note 37, at 1889.

[54] See supra note 39, at 261.

[55] Jakki Geiger, “The Surprising Link Between Hurricanes and Strawberry Pop-Tarts: Brought to You by Clean, Consistent and Connected Data,” The Informatica Blog - Perspectives for the Data Ready Enterprise, October 3, 2014, http://blogs.informatica.com/2014/03/10/the-surprising-link-between-strawberry-pop-tarts-and-hurricanes-brought-to-you-by-clean-consistent-and-connected-data/#fbid=PElJO4Z_kOu.

[56] Constance L. Hays, “What Wal-Mart Knows About Customers’ Habits,” The New York Times, November 14, 2004, http://www.nytimes.com/2004/11/14/business/yourmoney/what-walmart-knows-about-customers-habits.html.

[57] M. J. de Zwart, S. Humphreys, and B. Van Dissel, “Surveillance, Big Data and Democracy: Lessons for Australia from the US and UK,” UNSW Law Journal 37, no. 2 (2014), https://digital.library.adelaide.edu.au/dspace/handle/2440/90048: 722.

[58] Ibid.

[59] See supra note 41, at 3.

[60] Julie E. Cohen, “What Privacy Is For,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, November 5, 2012), http://papers.ssrn.com/abstract=2175406.

[61] See supra note 37, at 1901.

[62] Ibid.

[63] See supra note 37, at 1899.

[64] Jon Leibowitz, “So Private, So Public: Individuals, The Internet & The paradox of behavioural marketing” November 1, 2007, https://www.ftc.gov/sites/default/files/documents/public_statements/so-private-so-public-individuals-internet-paradox-behavioral-marketing/071031ehavior_0.pdf: 6.

[65] See supra note 5.

[66] See supra note 37, at 1898.

[67] Ibid.

[68] Ibid.

[69] Ibid.

[70] Ibid.

[71] See supra note 41, at 3.

[72] See supra note 39, at 261.

[73] Richard H. Thaler, “Making It Easier to Register as an Organ Donor,” The New York Times, September 26, 2009, http://www.nytimes.com/2009/09/27/business/economy/27view.html.

[74] Ibid.

[75] The Oxford Introductions to U.S. Law: Contracts, 1 edition (New York: Oxford University Press, 2010): 67.

[76] Francis M. Buono and Jonathan A. Friedman, “Maximizing the Enforceability of Click-Wrap Agreements,” Journal of Technology Law & Policy 4, no. 3 (1999), http://jtlp.org/vol4/issue3/friedman.html.

[77] North Carolina State University, “Clickwraps,” Software @ NC State Information Technology, accessed August 26, 2015, http://software.ncsu.edu/clickwraps.

[78] Ed Bayley, “The Clicks That Bind: Ways Users ‘Agree’ to Online Terms of Service,” Electronic Frontier Foundation, November 16, 2009, https://www.eff.org/wp/clicks-bind-ways-users-agree-online-terms-service.

[79] Ibid, at 2.

[80] Ibid.

[81] See Nguyen v. Barnes & Noble Inc., (9th Cir. 2014), available here.

[82] See Specht v. Netscape Communications Corp.,(2d Cir. 2002), available here.

[83] See supra note 78, at 2.

[84] See In Re: Zappos.com, Inc., Customer Data Security Breach Litigation, No. 3:2012cv00325: pg 8 line 23-26, available here.

[85] See Groff v. America Online, Inc., 1998, available here.

[86] Hotmail Corp. v. Van$ Money Pie, Inc., 1998, available here.

[87] ProCD Inc. v. Zeidenberg, (7th. Cir. 1996), available here.

[88] See supra note 78, at 1.

[89] See supra note 78, at 2.

[90] Ibid.

[91] Oliver Herzfeld, “Are Website Terms Of Use Enforceable?,” Forbes, January 22, 2013, http://www.forbes.com/sites/oliverherzfeld/2013/01/22/are-website-terms-of-use-enforceable/.

[92] Ibid.

[93] Ibid.

[94] See supra note 41, at 3.

[95] Christopher Kuner et al., “The Challenge of ‘big Data’ for Data Protection,” International Data Privacy Law 2, no. 2 (May 1, 2012): 47–49, doi:10.1093/idpl/ips003: 49.

[96] Ibid.

[97] See supra note 41, at 5.

[98] See supra note 57, at 723.

[99] Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, October 1, 2013), http://papers.ssrn.com/abstract=2325784: 109.

[100] See supra note 41, at 13.

[101] See supra note 41, at 5.

[102] See supra note 52, Privacy Toolkit, at 14.

[103] See supra note 41, at 6.

[104] Siani Pearson and Marco Casassa Mont, “Sticky Policies: An Approach for Managing Privacy across Multiple Parties,” Computer, 2011.

[105] See supra note 34, at 138.

[106] See supra note 34, at 118.

[107] See supra note 41, at 5.

[108] See supra note 41, at 4.

CIS Comments and Recommendations to the Human DNA Profiling Bill, June 2015

by Elonnai Hickok, Vipul Kharbanda and Vanya Rakesh — last modified Sep 02, 2015 05:09 PM
The Centre for Internet & Society (CIS) submitted clause-by-clause comments on the Human DNA Profiling Bill that was circulated by the Department of Biotechnology on June 9, 2015.

The Centre for Internet and Society is a non-profit research organisation that works on policy issues relating to privacy, freedom of expression, accessibility for persons with diverse abilities, access to knowledge, intellectual property rights and openness. It engages in academic research to explore and affect the shape and form of the Internet and its relationship with society, with particular emphasis on South-South dialogues and exchange. The Centre was also a member of the Expert Committee constituted in 2013 by the Department of Biotechnology to discuss the draft Human DNA Profiling Bill.

Missing aspects from the Bill

The Human DNA Profiling Bill, 2015 has overlooked, or has not adequately addressed, the following crucial matters:

  • Objects Clause

An ‘objects clause’ in the main body of a statute, detailing the intention of the legislature and containing principles to inform the statute’s application, is an enforceable mechanism for giving direction to the statute and can be a formidable primary aid in statutory interpretation. [See, for example, section 83 of the Patents Act, 1970, which directly informed the Order of the Controller of Patents, Mumbai, in the matter of NATCO Pharma and Bayer Corporation in Compulsory Licence Application No. 1 of 2011.] The Bill should therefore incorporate an objects clause that makes clear that:

“DNA profiles merely estimate the identity of persons, they do not conclusively establish unique identity, therefore forensic DNA profiling should only have probative value and not be considered as conclusive proof.

The Act recognises that all individuals have a right to privacy that must be continuously weighed against efforts to collect and retain DNA and in order to protect this right to privacy the principles of notice, confidentiality, collection limitation, personal autonomy, purpose limitation and data minimization must be adhered to at all times.”

  • Collection and Consent

The Bill does not contain provisions specifying when DNA samples may be collected from individuals without consent (nor does it establish or refer to an authorisation procedure for such collection), when DNA samples may be collected only with informed consent, and how and in what instances individuals may withdraw their consent. Whether DNA samples can be collected without the consent of the individual is a vexed issue that raises complex questions relating to individual privacy as well as the right against self-incrimination. The question of whether an accused can be made to give samples of blood, semen, etc., which had been in issue in a wide gamut of Indian decisions, has finally been settled by section 53 of the Code of Criminal Procedure, which allows the collection of medical evidence from an accused, thus laying to rest any claims based on the right against self-incrimination. However, issues remain concerning the right to privacy and its violation through the non-consensual collection of DNA samples. This needs to be addressed in the Act itself; leaving it unaddressed would only lead to a lack of clarity and protracted court cases. An illustration of this problem is the Bill’s allowance for the collection of intimate body samples. Stringent safeguards are needed here, since without them the collection of intimate body samples would be an outright infringement of privacy. Further, maintaining a database of convicts and suspects is one thing; collecting and storing intimate samples of individuals is a gross violation of citizens’ right to privacy and, without adequate mechanisms regarding consent and security, stands at a huge risk of being misused.

  • Privacy Safeguards

Presently, the Bill is being introduced without comprehensive privacy safeguards in place on issues such as consent, collection and retention, as is evident from the comments below. Though the DNA Board is given the responsibility of recommending best practices pertaining to privacy (clause 13(l)), this is not adequate given that India does not have a comprehensive privacy legislation. Though section 43A of the Information Technology Act and its associated Rules would apply to the collection, use, and sharing of DNA data by DNA laboratories (which would fall under the definition of ‘body corporate’ under the IT Act), the National and State Data Banks and the DNA Board would clearly not be bodies corporate under the IT Act and would not fall within the ambit of the provision or Rules. Safeguards are needed to protect against the invasion of informational and physical privacy at the level of these State-controlled bodies. That the Bill is to be introduced in Parliament prior to the enactment of a privacy legislation in India is significant: according to the Record Notes of the 4th Meeting of the Expert Committee, “the Expert Committee also discussed and emphasized that the Privacy Bill is being piloted by the Government. That Bill will over-ride all the other provisions on privacy issues in the DNA Bill.”

  • Lack of restriction on type of analysis to be performed

The Bill currently places no restriction on the types of analysis that can be performed on a DNA sample or profile. This could allow DNA samples to be analyzed for purposes beyond the basic identification of an individual – such as for health, genetic, or racial purposes. As a form of purpose limitation, the Bill should narrowly define the types of analysis that can be performed on a DNA sample.

  • Purpose Limitation

The Bill does not explicitly restrict the use of a DNA sample or DNA profile to the purpose for which it was originally collected and created. This could allow samples and profiles to be re-used for unintended purposes.

  • Annual Public Reporting

The Bill does not require the DNA Board to publicly disclose, on an annual basis, information regarding the functioning and financial aspects of matters contained within the Bill. Such disclosure is crucial to enabling the public to make informed decisions. Categories that could be included in such reports include: the number of DNA profiles added to each index within the databank; the total number of DNA profiles contained in the database; the number of DNA profiles deleted from the database; the number of matches between crime scene DNA profiles and other DNA profiles; the number of cases in which DNA profiles were used, and the percentage in which they assisted in the final conclusion of the case; and the number and categories of DNA profiles shared with international entities.

  • Elimination Index

An elimination index containing the profiles of the medical professionals, police, laboratory personnel, etc. working on a case is necessary so that accidental contamination of collected samples can be identified and excluded.

Clause by Clause Recommendations

As stated, the Human DNA Profiling Bill, 2015 is to regulate the use of DNA analysis of human body substances and profiles, and to establish the DNA Profiling Board to lay down standards for laboratories, the collection of human body substances, and the custody trail from collection to reporting, as well as to establish a National DNA Data Bank.

Comment:

  1. As stated, the purpose of the Human DNA Profiling Bill is broadly to regulate the use of DNA analysis and to establish a DNA Data Bank. Despite this, the majority of provisions in the Bill pertain to the collection, use, access, etc. of DNA samples and profiles for civil and criminal purposes. The result is an 'unbalanced Bill', with the majority of provisions focusing on issues related to forensic use. At the same time, the Bill is not a comprehensive forensic bill, resulting in legislative gaps.
  2. Additionally, the Bill contains provisions beyond the stated purpose. These include:
  • Facilitating the creation of a Data Bank for statistical purposes (Clause 33(e))
  • Establishing state and regional level databanks in addition to a national level databank (Clause 24)
  • Developing procedures for, and providing for, the international sharing of DNA profiles with foreign Governments, organizations, institutions, or agencies (Clause 29)

Recommendation:

  • The Bill should ideally be limited to regulating the use of DNA samples and profiles for criminal purposes. If the scope remains broad, all purposes should be equally and comprehensively regulated.
  • The stated purpose of the Bill should address all aspects of the Bill. Provisions beyond the scope of the Bill should be removed.

Chapter 1: Preliminary

  • Clause 2: This clause defines the terms used in the Bill.

Comment: A number of terms are incompletely or vaguely defined, some defined terms are never used, and some terms used in the Bill have not been included in the list of definitions.

Recommendation:

  • The definition of “DNA Data Bank Manager” in clause 2(1)(g) must be renamed “National DNA Data Bank Manager”.
  • The definition of “DNA laboratory” in clause 2(1)(h) should refer to the specific clauses that empower the Central Government and State Governments to license and recognise DNA laboratories. This is a drafting error.
  • The definition of “DNA profile” in clause 2(1)(i) is too vague. Merely the results of an analysis of a DNA sample may not be sufficient to create an actual DNA profile; the results may yield DNA information that, because of incompleteness or lack of information, is inconclusive, and such incomplete bits of information should not be recognised as DNA profiles. The definition should be amended to clearly specify the contents of a complete and valid DNA profile, containing at least numerical representations of 17 or more loci of short tandem repeats sufficient to estimate the biometric individuality of a person. The definition of “DNA profile” also does not restrict the analysis to forensic DNA profiles: this means additional information, such as health-related information, could be analyzed and stored against the wishes of the individual, even though such information plays no role in solving crimes.
  • The term “known sample” that is defined in clause 2(1)(m) is not used anywhere outside the definitions clause and should be removed.
  • The definition of “offender” in clause 2(1)(q) is vague because it does not specify the offenses for which an “offender” needs to be convicted. It is also linked to an unclear definition of the term “under trial”, which does not specify the nature of pending criminal proceedings and, therefore, could be used to describe simple offenses such as, for example, failure to pay an electricity bill, which also attracts criminal penalties.
  • The term “proficiency testing” that is defined in clause 2(1)(t) is not used anywhere in the text of the DNA Bill and should be removed.
  • The definitions of “quality assurance”, “quality manual” and “quality system” serve no enforceable purpose since they are used only in relation to the DNA Profiling Board’s rule making powers under Chapter IX, clause 58. Their inclusion in the definitions clause is redundant. Accordingly, these definitions should be removed.
  • The term “suspect” defined in clause 2(1)(za) is vague and imprecise. The standard by which suspicion is to be measured, and by whom suspicion may be entertained – whether police or others, has not been specified. The term “suspect” is not defined in either the Code of Criminal Procedure, 1973 ("CrPC") or the Indian Penal Code, 1860 ("IPC").
  • The term “volunteer” defined in clause 2(1)(zf) addresses consent only from the parent or guardian of a child or an incapable person. The definition should be amended to require informed consent from any volunteer.
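To make the completeness concern under clause 2(1)(i) concrete, the sketch below shows one possible way a forensic DNA profile could be represented as numeric short tandem repeat (STR) allele counts at named loci, and checked against the 17-locus threshold proposed above. The locus names, data structure, and validation rule are illustrative assumptions, not anything specified in the Bill.

```python
MIN_LOCI = 17  # completeness threshold proposed in the comment on clause 2(1)(i)

def is_complete_profile(profile):
    """Treat a profile as complete only if it has a numeric allele pair
    at MIN_LOCI or more distinct loci; inconclusive loci (None) are ignored."""
    usable = {
        locus: alleles
        for locus, alleles in profile.items()
        if alleles is not None
        and len(alleles) == 2
        and all(isinstance(a, int) for a in alleles)
    }
    return len(usable) >= MIN_LOCI

# A partial result from a degraded sample: only three conclusive loci,
# so under the proposed definition it should not count as a DNA profile.
partial = {"D3S1358": (15, 17), "vWA": (16, 16), "FGA": (21, 24),
           "TH01": None, "D21S11": None}

print(is_complete_profile(partial))  # False
```

A definition along these lines would also exclude, by construction, the storage of health-related attributes, since the profile holds only repeat counts at identity-relevant loci.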

Chapter II: DNA Profiling Board

  • Clause 4: This clause addresses the composition of the DNA Profiling Board.

Comment: The Board constituted under clause 4 is extremely large. The number of members remains 15, as it was in the 2012 Bill.

Recommendation: Drawing from the experiences of other administrative and regulatory bodies in India, the size of the Board should be reduced to no more than five members. The Board must contain at least:

  • One ex-Judge or senior lawyer
  • Representatives of civil society, both institutional and non-institutional
  • Privacy advocates

Note: The Expert Committee agreed to reduce the size of the Board from 16 members (as in the 2012 Bill) to 11 members. This recommendation has not been incorporated.

  • Clause 5(1): The clause specifies the term of the Chairperson of the DNA Profiling Board to be five years and also states that the person shall not be eligible for re-appointment or extension of the term so specified.

Comment: Although the Chairperson of the Board is first mentioned in clause 5(1), the Bill does not provide for the Chairperson's due and proper appointment.

Recommendation: Clause 4 should be amended to mention the appointment of the Chairperson and other Members.

  • Clause 7: The clause requires members to address conflicts of interest in the business of the Board on a case-by-case basis, by excusing themselves from deliberations and voting where necessary.

Comment: This clause addresses the issue of conflict of interest only in narrow cases and does not provide a penalty if a member fails to adhere to the prescribed procedure.

Recommendation: The Bill should require members to make full and public disclosures of their real and potential conflicts of interest and the Chairperson must have the power to prevent such members from voting on interested matters. Failure to follow such anti-collusion and anti-corruption safeguards should attract criminal penalties.

  • Clause 12(5): The clause states that the Board shall have the power to co-opt such number of persons as it may deem necessary to attend the meetings of the Board and take part in its proceedings, but such persons will not have the right to vote.

Comment: While serving on the Expert Committee, CIS provided language regarding how the Board could consult with the public. This language has not been fully incorporated.

Recommendation: As per the recommendation of CIS, the following language should be adopted in the Bill: The Board, in carrying out its functions and activities, shall be required to consult with all persons and groups of persons whose rights and related interests may be affected or impacted by any DNA collection, storage, or profiling activity. The Board shall, while considering any matter under its purview, co-opt or include any person, group of persons, or organisation, in its meetings and activities if it is satisfied that that person, group of persons, or organisation, has a substantial interest in the matter and that it is necessary in the public interest to allow such participation. The Board shall, while consulting or co-opting persons, ensure that meetings, workshops, and events are conducted at different places in India to ensure equal regional participation and activities.

  • Clause 13: The clause lays down the functions to be performed by the DNA Profiling Board, which include its role in the regulation of DNA Data Banks and DNA Laboratories and the techniques to be adopted for the collection of DNA samples.

Comment: While serving on the Expert Committee, CIS recommended that the functions of the DNA Profiling Board should be limited to licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate and a few other limited functions necessary for a regulatory authority.

Furthermore, this clause delegates a number of functions to the Board that places the Board in the role of a manager and regulator for issues pertaining to DNA Profiling including functions of the DNA Databases, DNA Laboratories, ethical concerns, privacy concerns etc.

Recommendation: As per CIS’s recommendations the functions of the Board should be limited to licensing, developing standards and norms, safeguarding privacy and other rights, ensuring public transparency, promoting information and debate and a few other limited functions necessary for a regulatory authority.

Towards this, the Board should be comprised of separate Committees to address these different functions. At the minimum, there should be a Committee addressing regulatory issues pertaining to the functioning of Data Banks and Laboratories and an Ethics Committee to provide independent scrutiny of ethical issues.  Additionally:

  • Clause 13(j) allows the Board to disseminate best practices concerning the collection and analysis of DNA samples to ensure quality and consistency. The process for collection of DNA samples and analysis should be established in the Bill itself or by regulations. Best practices are not enforceable and do not formalize a procedure.
  • Clause 13(q) allows the Board to establish procedures for cooperation in criminal investigations between various investigation agencies within the country and with international agencies. This procedure should, at a minimum, be subject to oversight by the Ministry of External Affairs.

Chapter III: Approval of DNA Laboratories

  • Clause 15: This clause states that every DNA Laboratory has to make an application before the Board for the purpose of undertaking DNA profiling and also for renewal.

Comment: Though the Bill requires DNA Laboratories to make an application for undertaking DNA profiling, it does not clarify that a laboratory must receive approval before the collection and analysis of DNA samples and profiles.

Recommendation: The Bill should clarify that all DNA Laboratories must receive approval for functioning prior to the collection or analysis of any DNA samples and profiles.

Chapter IV: Standards, Quality Control and Quality Assurance Obligations of DNA Laboratory and Infrastructure and Training

  • Clause 19: This clause defines the obligations of a DNA laboratory. Sub-section (d) maintains that one such obligation is the sharing of the 'DNA data' prepared and maintained by the laboratory with the State DNA Data Bank and the National DNA Data Bank.

Comment: ‘DNA Data’ is a new term that has not been defined under clause 2 of the Bill. It is thus unclear what data would be shared between the State DNA Data Banks and the National DNA Data Bank: DNA samples? DNA profiles? Associated records? It is also unclear in what manner and on what basis the information would be shared.

Recommendation: The term ‘DNA Data’ should be defined to clarify what information will be shared between State and National DNA Data Banks. The flow of and access to data between the State DNA Data Bank and National DNA Data Bank should also be established in the Bill.

  • Clause 22: The clause lays down the measures to be adopted by a DNA Laboratory and 22(h) includes a provision requiring the conducting of annual audits according to prescribed standards.

Comment:

  • The definition of “audit” given in the Explanation to clause 22 is relevant for measuring training programmes and laboratory conditions. However, the term “audit” is subsequently used in an entirely different manner in Chapter VII, which relates to financial information and transparency.
  • The standards for the destruction of DNA samples have not been included within the list of measures that DNA laboratories must take.

Recommendation:

  • The definition of ‘audit’ must be amended or removed, as the term is used in different contexts. “Audit” has a well-established meaning for financial information that does not require a definition.
  • Standards for the destruction of DNA samples should be developed and included as a measure DNA laboratories must take.
  • Clause 23: This clause lays down the sources for collection of samples for the purpose of DNA profiling. 23(1)(a) includes collection from bodily substances and 23(1)(c) includes clothing and other objects. Explanation (b) provides a definition of 'intimate body sample'.

Comment:

  • Permitting the collection of DNA samples from bodily substances and from clothing and other objects allows for the broad collection of DNA samples without contextualising such collection. In contrast, clause 23(1)(b) (scene of occurrence or scene of crime) limits the collection of samples to a specific context.
  • This clause also raises the issue of consent and invasion of privacy of an individual. If “intimate body samples” are to be taken of individuals, then this would be an invasion of the person’s right to bodily privacy if such collection is done without the person’s consent (except in the specific instance when it is done in pursuance of section 53 of the Criminal Procedure Code).

Recommendation:

  • Sources for the collection of DNA samples should be contextualized to prevent broad, unaccounted for, or unregulated collection. Clauses 23(1)(a) and (c) should be deleted and replaced with the contexts in which the collection of DNA would be permitted.
  • The Bill should specify the circumstances in which non-intimate samples can be collected and the process for the same.
  • The Bill should specify that intimate body samples can only be taken with informed consent except as per section 53 of the Criminal Procedure Code.
  • The Bill should require that any individual that has a sample taken (intimate and non-intimate) is provided with notice of their rights and the future uses of their DNA sample and profile.

Chapter V: DNA Data Bank

  • Clause 24: This clause addresses the establishment of DNA Data Banks at the State and National level. 24(5) establishes that the National DNA Data Bank will receive data from State DNA Data Banks and store the approved DNA profiles as per regulations.

Comment:

  • As noted previously, ‘DNA Data’ is a new term that has not been defined in the Bill. It is thus unclear what data would be shared between the State DNA Data Banks and the National DNA Data Bank: DNA samples? DNA profiles? Associated records?
  • The process for sharing Data between the State and National Data Banks is not defined.

Recommendation:

  • The term ‘DNA Data’ should be defined to clarify what information will be shared between State and National DNA Data Banks.
  • The process for the National DNA Data Bank receiving DNA data from State DNA Data Banks and DNA laboratories needs to be defined in the Bill or by regulation. This includes specifying how frequently information will be shared etc.
  • Clause 25: This clause establishes standards for the maintenance of indices by DNA Data Banks. 25(1) states that every DNA Data Bank needs to maintain the prescribed indices for various categories of data, including a crime scene index, a suspects index, an offenders index, a missing persons index, an unknown deceased persons index, a volunteers index, and other indices as may be specified by regulation. 25(2) states that, in addition to the indices, the DNA Data Bank should contain information regarding each of the DNA profiles: either the identity of the person from whose bodily substance the profile was derived, in the case of a suspect or an offender, or the case reference number of the investigation associated with such bodily substances in other cases. 25(3) states that the indices maintained shall include information regarding the data which is based on the DNA profiling and the relevant records.

Comment:

  • 25(1): The creation of multiple indices cannot be justified and must be limited, since the collection of biological source material is an invasion of privacy that must be conducted only under strict conditions, when the potential harm to individuals is outweighed by the public good. This balance may only be struck when dealing with the collection and profiling of samples from certain categories of offenders. The implications of collecting and profiling DNA samples from corpses, suspects, missing persons and others are vast. In particular, a 'volunteer' index could be used for racial, community, or religious profiling.
  • 25(2): This clause requires the names of individuals to be connected to their profiles, and hence accessible to persons having access to the databank.
  • 25(3): The clause states that only information related to DNA profiling will be stored in an index, yet it is unclear what such information might be. This could allow inconsistencies in the data stored in an index and could allow unnecessary information to be stored in an index.

Recommendation:

  • 25(1) Ideally, DNA databanks should be created for dedicated purposes. This would mean that a databank for forensic purposes should contain only an offenders index and a crime scene index, while a databank for missing persons would contain only a missing persons index, and so on. If numerous indices are going to be contained in one databank, the Bill needs to recognise the sensitivity of each index, as well as the differences between indices, and lay down appropriate and strict conditions for the collection of data for each index, the addition of data into the index, and the use of, access to, and retention of data within the index.
  • 25(2) DNA profiles, once developed, should be maintained with complete anonymity and retained separately from the names of their owners. This amendment becomes even more important when we consider that an “offender” may be convicted by a lower court, have his or her profile included in the data bank, and later be acquitted. Until such a person is acquitted, however, his or her profile, with the identifying information, would remain in the data bank, which is an invasion of privacy.
  • 25(3) What information will be stored in indices should be clearly defined in the Bill and should be tailored appropriately to each category of index.
  • Clause 28: This clause addresses the comparison and communication of DNA profiles. 28(1) states that a DNA profile entered in the offenders or crime scene index shall be compared by the DNA Data Bank Manager against profiles contained in the DNA Data Bank, and that the DNA Data Bank Manager will communicate such information to any court, tribunal, law enforcement agency, or approved DNA laboratory which he may consider appropriate for the purpose of investigation. 28(2) allows any information relating to a person's DNA profile contained in the suspects' index or offenders' index to be communicated to authorised persons.

Comment:

  • 28(1)(a)-(c) allow the DNA Data Bank Manager to communicate: (a) that the DNA profile is not contained in the Data Bank, and what information is not contained; (b) that the DNA profile is contained in the Data Bank, and what information is contained; and (c) that, in the opinion of the Manager, the DNA profile is similar to one stored in the Data Bank. These options of communication are problematic because they (1) allow all associated information to be communicated, even if such information is not necessary, and (2) allow the DNA Data Bank Manager to communicate that a profile is 'similar' without defining what 'similar' would constitute.
  • 28(1) only addresses the comparison of DNA profiles entered into the offenders index or the crime scene index against all other profiles entered into the DNA Data Bank.
  • 28(1) gives the DNA Data Bank manager broad discretion in determining if information should be communicated and requires no accountability for such a decision.
  • 28(2) only addresses information in the suspect's and offender's index and does not address information in any other index.

Recommendation:

  • Rather than allowing broad searches across the entire database, the Bill should be clear about which profiles can be compared against which indices. Such distinctions must take into consideration whether a profile was taken with consent and what was consented to.
  • Ideally, the response from the DNA Data Bank Manager should be limited to a 'yes' or 'no', and further information should be revealed only on receipt of a court order.
  • The Bill should define what constitutes 'similar'.
  • A process for determining if information should be communicated should be established in the Bill and followed by the DNA Data Bank Manager. The Manager should also be held accountable through oversight mechanisms for such decisions. This is particularly important, as a DNA laboratory would be a private body.
  • Information stored in any index should be disclosed to only authorized parties.
  • Clause 29: This clause provides for comparison and sharing of DNA profiles with foreign Government, organisations, institutions or agencies. 29(1) allows the DNA Bank Manager to run a comparison of the received profile against all indices in the databank and communicate specified responses through the Central Bureau of Investigation.

Comment: This clause allows for international disclosures of the DNA profiles of Indians through a procedure that is to be established by the Board (see clause 13(q)).

Recommendation: The disclosure of the DNA profiles of Indians to international entities should take place via the Mutual Legal Assistance Treaty (MLAT) process, as this is the typical process followed when sharing information with international entities for law enforcement purposes.

  • Clause 30: This clause provides for the permanent retention of information pertaining to a convict in the offenders’ index and the expunging of such information in case of a court order establishing acquittal of a person, or the conviction being set aside.

Comment: This clause addresses only the retention and expunging of the records of a convict stored in the offenders index upon the receipt of a court order establishing acquittal or the conviction being set aside. This implies that records in all other indices, including volunteers, can be retained permanently. This clause also does not address situations where an individual's DNA profile is added to the databank but the case never goes to court.

Recommendation: The Bill should establish retention and deletion standards for each index that it creates. Furthermore, the Bill should require the immediate destruction of DNA samples once a DNA profile for identification purposes has been created, with an exception for samples stored in the crime scene index.

Chapter VI: Confidentiality of and Access to DNA Profiles, Samples, and Records

  • Clause 33: This provision lays down the cases and the persons to which information pertaining to DNA profiles, samples and records stored in the DNA Data Bank shall be made available. Specifically, 33(e) permits disclosure for the creation and maintenance of a population statistics Data Bank.

Comment:

  • This clause addresses disclosure of information in the DNA Data Bank, but does not directly address the use of DNA samples or DNA profiles. This allows for the possibility of re-use of samples and profiles.
  • There is no limitation on the information that can be disclosed. The clause allows for any information stored in the Data Bank to be disclosed for a number of circumstances/to a variety of people.
  • There is no authorization process for the disclosure of such information. Of the circumstances listed, an authorization process is mentioned only for the disclosure of information in investigations relating to civil disputes or other civil matters, with the concurrence of the court. This implies that there is no procedure for authorizing the disclosure of information for identification purposes in criminal cases, in judicial proceedings, for facilitating the prosecution and adjudication of criminal cases, for the purpose of taking defence by an accused in a criminal case, or for the creation and maintenance of a population statistics Data Bank.

Recommendation:

  • The Bill should establish an authorization process for the disclosure of information stored in a data bank. This process must limit the disclosure of information to what is necessary and proportionate for achieving the requested purpose.
  • Clause 33(e) should be deleted, as the non-consensual disclosure of DNA profiles for the study of population genetics is illegal. The use of the database for statistical purposes should be limited to purposes pertaining to understanding the effectiveness of the databank.
  • Clause 33(f) should be deleted as it is not necessary for DNA profiles to be stored in a database to be useful for civil purposes. Instead samples for civil purposes are only needed as per the relevant case and specified persons.
  • Clause 33(g) should be deleted, as it allows the scope of cases in which DNA can be disclosed to be expanded as prescribed.
  • Clause 34: This clause allows for access to information for operation, maintenance, and training.
  • Comment: This clause would allow individuals in training access to data stored on the database for training purposes. This places the security of the databank and the data stored in the databank at risk.
  • Recommendation: Training of individuals should be conducted via simulation only.
  • Clause 35: This clause allows for access to information in the DNA Data Bank for the purpose of a one-time keyboard search. A one-time keyboard search allows information from a DNA sample to be compared with information in the index without the information from the DNA sample being included in the index. The clause allows an authorized individual to carry out such a search on information obtained from a DNA sample lawfully collected for the purpose of a criminal investigation, except if the DNA sample was submitted for elimination purposes.
  • Comment: The purpose of this clause is unclear, as is its scope. The clause allows the sample to be compared against 'the index' without specifying which index. The clause also refers to 'information obtained from a DNA sample' rather than a profile. Thus, the clause appears to allow any information derived from a DNA sample collected for a criminal investigation to be compared against all data within the databank, without such information being recorded. Such a comparison is vast in scope and open to abuse.
  • Recommendation: To ensure that this provision is not used for conducting searches outside the scope of the original purpose, only DNA profiles, rather than 'information derived from a sample', should be allowed to be compared; only the indices relevant to the sample should be compared; and the search should be authorized and justified.
  • Clause 36: This clause addresses the restriction of access to information in the crime scene index if the individual is a victim of a specified offence or if the person has been eliminated as a suspect in an investigation.

Comment:

  • This clause only addresses restriction of access to the crime scene index and does not address restriction of access to other indices.
  • This clause restricts access to the index only for a certain category of individual and for a specific status of a person. Oddly, the clause does not include authorization or rank as a means for determining or restricting access.

Recommendation:

  • This clause should be amended to lay down standards for restriction of access for all indices.
  • Access to all information in the databank should be restricted by default and permission should be based on authorization rather than category or status of individual.
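Returning to clause 35, the mechanics of a 'one-time keyboard search', that is, comparison against an index without retention of the query, can be sketched as follows. The index structure, profile format, and exact-match rule are illustrative assumptions rather than anything specified in the Bill.

```python
def keyboard_search(query_profile, index):
    """Compare `query_profile` against every entry in `index` and return
    the case reference numbers of exact matches. The query itself is
    never added to the index, so nothing about it is retained."""
    return [ref for ref, stored in index.items() if stored == query_profile]

# Hypothetical crime scene index: case reference number -> stored profile.
crime_scene_index = {
    "CASE-001": {"D3S1358": (15, 17), "vWA": (16, 16)},
    "CASE-002": {"D3S1358": (14, 18), "vWA": (17, 19)},
}

query = {"D3S1358": (15, 17), "vWA": (16, 16)}
print(keyboard_search(query, crime_scene_index))  # ['CASE-001']
print(len(crime_scene_index))  # 2: the query was not retained
```

Note that this sketch returns only matching case reference numbers, not the associated records, in line with the recommendation under clause 28 that the Data Bank Manager's response be kept minimal.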
  • Clause 38: This clause sets out a post-conviction right related to criminal procedure and evidence.

Comment: This clause would fundamentally alter the nature of India’s criminal justice system, which currently does not contain specific provisions for post-conviction testing rights.

Recommendation: This clause should be deleted and the issue of post-conviction rights related to criminal procedure and evidence referred to the appropriate legislation. Clause 38 is implicated by Article 20(2) of the Constitution of India and by section 300 of the CrPC. The principle of autrefois acquit that informs section 300 of the CrPC specifically deals with exceptions to the rule against double jeopardy that permit re-trials. [See, for instance, Sangeeta Mahendrabhai Patel (2012) 7 SCC 721.] The person must also be duly accorded a right to know the authorised persons to whom information relating to his or her DNA profile contained in the offenders' index shall be communicated. Alternatively, this right could be limited to accused persons whose trial is still at the stage of production of evidence in the trial court. This suggestion is made because, unless the right as it currently stands is limited in some manner, every convict with the means to engage a lawyer would ask for DNA analysis of the evidence in his or her case, flooding the system with useless requests and risking a breakdown of the entire machinery.

Chapter VII: Finance, Accounts, and Audit

Clause 39: This clause allows the Central Government to make grants and loans to the DNA Board after due appropriation by Parliament.

Comment: This clause allows the Central Government to grant and loan money to the DNA Board, but does not require any proof or justification for the sum of money being given.

Recommendation: This clause should require a formal cost-benefit analysis and financial assessment prior to the making of any grants or loans.

Chapter VIII: Offences and Penalties

Chapter IX: Miscellaneous

Clause 53: This clause protects the Central Government and the Members of the Board from suit, prosecution, or other legal proceedings for actions that they have taken in good faith.

Comment: Though it is important to take into consideration whether an action has been taken in good faith, absolving the Government and the Board from accountability for their actions leaves the individual with little recourse. This is particularly true as the Central Government and the Board are given broad powers under the Bill.

Recommendation: If the Central Government and the Board are to be protected for actions taken in good faith, their powers should be limited. Specifically, they should not have the ability to widen the scope of the Bill.

Clause 57: This clause states that the Central Government will have the powers to make Rules for a number of defined issues.

Comment: 57(d) allows for the regulations to be created regarding the use of population statistics Data Bank created and maintained for the purposes of identification research and protocol development or quality control.

Recommendation: 57(d) should be deleted as any use for the creation of a population statistics Data Bank created and maintained for the purposes of identification research and protocol  development or quality control is beyond the scope of the Bill.

  • Clause 58: This clause empowers the Board to make regulations regarding a number of aspects related to the Bill.
  • Comment: There are a number of functions for which the Board can make regulations that should instead be defined within the Bill itself, to ensure that the scope of the Bill does not expand without Parliamentary oversight and approval.
  • Recommendation: 58(2)(g) should be deleted, as it allows the Board to create regulations for other relevant uses of DNA techniques and technologies; 58(2)(u) should be deleted, as it allows the Board to add new categories of indices to databanks; and 58(2)(aa) should be deleted, as it allows the Board to decide which other indices a DNA profile may be compared with in the case of sharing of DNA profiles with foreign Governments, organizations, or institutions.

Clause 61: This clause states that no civil court will have jurisdiction to entertain any suit or proceeding in respect of any matter which the Board is empowered to determine and no injunction shall be granted.

Comment: This clause in practice will limit the recourse that individuals can take and will exclude the Board from the oversight of civil or criminal courts.

Recommendation: The power to collect, store, and analyse human DNA samples has wide-reaching consequences for the people whose samples are being utilised for this purpose, especially if their samples are being labelled in specific indices such as the “offenders index”. The individual should therefore have a right to approach a court of law to safeguard his or her rights, and this provision barring the jurisdiction of the courts should be deleted.

Schedule

  • Schedule A: The schedule refers to section 33(f) which allows for disclosure of information in relation to DNA profiles, DNA samples, and records in a DNA Data Bank to be communicated in cases of investigations relating to civil disputes or other civil matters or offenses or cases listed in the schedule with the concurrence of the court.

Comment: As 33(f) requires the concurrence of the court for the disclosure of information, it is unclear what purpose the schedule serves. If the Schedule is meant to serve as a guide to the court on appropriate instances for the disclosure of information stored in the DNA databank, it is too general in listing entire Acts while at the same time too narrow in naming only specific Acts. Ideally, courts should use principles and the greater public interest to decide whether or not disclosure of information in the DNA databank is appropriate. At a minimum, these principles should include necessity (of the disclosure) and proportionality (of the type and amount of information disclosed).

Recommendation: As we have recommended the deletion of clause 33(f), since it is not necessary to store DNA profiles in a databank for civil purposes, the schedule should also be deleted.

  • Note: The schedule differs drastically from previous drafts, from the discussions held in the Expert Committee, and from the recommendations agreed upon. As per the minutes of the Expert Committee meeting held on November 10, 2014: “The Committee recommended incorporation of the comments received from the members of the Expert Committee appropriately in the draft Bill... Point no. 1 suggested by Mr. Sunil Abraham in the Schedule of the draft Bill to define the cases in which DNA samples can be collected without consent by incorporating point no. 1 (i.e. 'Any offence under the Indian Penal Code, 1860 if it is listed as a cognizable offence in Part I of the First Schedule of the Code of Criminal Procedure, 1973')”.

Download CIS submission here. See the cover letter here.


Data Flow in the Unique Identification Scheme of India

by Vidushi Marda last modified Sep 03, 2015 05:02 PM
This note analyses the data flow within the UID scheme and aims to highlight vulnerabilities at each stage. The data flow within the UID scheme can best be understood by first delineating the organizations involved in enrolling residents for Aadhaar. The UIDAI partners with various Registrars, usually departments of the central or state government, along with some private sector agencies such as the LIC, through Memoranda of Understanding for assisting with the enrollment process of the UID project.

Many thanks to Elonnai Hickok for her invaluable guidance, input, and feedback.


These Registrars then appoint Enrollment Agencies that set up enrollment centers and enroll residents by collecting the necessary data and sharing it with the UIDAI for de-duplication and the issuance of an Aadhaar number. The data flow process of the UID is described below:[1]

Data Capture

  • Filling out an enrollment form – To enroll for an Aadhaar number, individuals are required to provide proof of address and proof of identity. These documents are verified by an official at the enrollment center.

Vulnerability: Though an official is responsible for verifying these documents, it is unclear how this verification is completed. It is possible for fraudulent proof of address and proof of identity to be verified and approved by this official.

  • The 'introducer' system: For individuals who do not have a proof of identity, proof of address, etc., the UIDAI has established an 'introducer' system. The introducer verifies that the individual is who they claim to be and that they live where they claim to live.

Vulnerability: This introducer is akin to the introducer concept in banking, except that here the introducer must be approved by the Registrar and need not know the person being enrolled. This raises questions about the authenticity and validity of the data collected and verified by an 'introducer'. The Home Ministry indicated in 2012 that this must be reviewed.[2]

  • Categories of data for enrolment: The UIDAI has a standard enrolment form and list of documents required for enrolment. This includes: name, address, birth date, gender, proof of address and proof of identity. Some MoUs (Memoranda of Understanding) permit Registrars to collect information in addition to what the UIDAI requires – potentially any information the Registrar deems necessary, for any purpose.

Vulnerability: The fact that a Registrar may collect any information it deems necessary, for any purpose, raises concerns regarding (1) informed consent – individuals are placed in a position of having to provide this information because it is coupled with the Aadhaar enrolment process; (2) unauthorized collection – though the MoU between the UIDAI and the Registrar authorizes the Registrar to collect additional information, if the information is personal in nature and the Registrar is a body corporate, it must be collected as per the Information Technology Rules, 2011 framed under section 43A, and it is unclear whether Registrars that are body corporates collect data in accordance with these rules; and (3) misuse – as Registrars are permitted to collect any data they deem necessary for any purpose, the data is open to misuse.[3]

  • Verification of Resident’s Documents: True copies of original documents, after verification, are sent to the Registrar for “permanent storage.”[4]

Vulnerability: It is unclear what extent and form this storage takes. There is no clarity on who is responsible for the data once collected, and the permissible uses of such data are also unclear. The contracts between the UIDAI and the Registrars say that guidelines must be followed, while the guidelines state that “The documents are required to be preserved by Registrar till the UIDAI finalizes its document storage agency” and that “Registrars must ensure that the documents are stored in a safe and secure manner and protected from unauthorized access.”[5] What constitutes “unauthorized access” and “secure storage”, when data is transferred to the UIDAI, and when and why the UIDAI will access it all remain unanswered. Moreover, there is no provision for deleting documents once the MoU lapses. The guidelines in question were also developed post facto.

  • Data collection for enrolment: After verification of proof of address and proof of identity, data collection is completed by operators at the enrolment agency. This includes the digitization of enrolment forms and the collection of biometrics. Enrolment information is manually collected, entered into computers running software provided by the UIDAI, and then transferred to the UIDAI. Biometrics are collected through devices provided by third parties such as Accenture and L-1 Identity Solutions.

Vulnerability: After data is collected by enrolment operators, leakage can occur at the point of collection or during transfer to the Registrar and the UIDAI. Data operators are answerable not to the UIDAI but to a private agency, a fact which has caused concern even within the government.[6] There have also been instances of sub-contracting, which further complicates accountability. Misuse[7] and loss of data are very real possibilities, and irregularities have been reported.[8] Because the collection devices are provided by third parties (in many cases foreign ones), data collected by these devices is also available to those companies, even though the companies are not regulated by Indian law.

  • Importing pre-enrolment data into the Aadhaar enrolment client, syncing NPR/census data into the software: The National Population Register (NPR) enrols usual residents and is governed by the Citizenship Rules, which prescribe a penalty for non-disclosure of information.

Vulnerability: Biometrics do not form part of the rules that govern NPR data collection, the Citizenship Rules, 2003. Collecting biometrics without amending the citizenship laws is therefore a worrying situation. The NPR hands over the information it collects to the UIDAI, and biometrics collected as part of the UIDAI process are included in the NPR, raising concerns about the legality and security of such data.

  • Resident’s consent: The operator records “whether the resident has agreed to share the captured information with organizations engaged in delivery of welfare services.”

Vulnerability: This allows the UIDAI to use data in an almost unfettered fashion. The enrolment form reads, “I have no objection to the UIDAI sharing information provided by me to the UIDAI with agencies engaged in delivery of welfare services.” This is too vague to amount to informed consent: it does not specify what information will be shared or with whom. Why is it necessary for the UIDAI to share this information at all, when the organization is only supposed to be a passive intermediary? Sharing goes beyond the mandate of the UIDAI, which is only to issue and authenticate the number.

  • Biometric exceptions: The operator checks whether the resident’s eyes/hands are amputated or missing and, after the Supervisor verifies the same, the record is marked as an exception and only the individual’s photograph is recorded.

Vulnerability: There has been widespread misuse of this clause, with data being fabricated to fall into this category, making it unreliable as a whole. In March 2013, 3.84 lakh numbers were cancelled because they were based on fraudulent use of the exception clause.[9]

  • Operator checks if resident wants an Aadhaar-enabled bank account: The UID project was touted as a scheme that would ensure access to benefits and subsidies provided through cash transfers, as well as enabling financial inclusion. Subsequently, an Aadhaar-linked bank account was made essential to avail of these benefits. The operator at this point checks whether the resident would like to open such an account.

Vulnerability: The data provided at the time of linking the UID with a bank account cannot be corrected or retracted. Although the scheme is framed around financial inclusion, this rigidity makes it a threat of exclusion.

  • Capturing biometrics: The UIDAI scheme involves assigning each individual a unique identification number after collecting their demographic and biometric information. One Time Passwords are used to manually override a situation in which biometric identification fails.[10] The UIDAI data collection process was revamped in 2012 to include best-finger detection and a multiple-try method.[11]

Vulnerabilities: The collection process is not always accurate; in fact, 70% of the residents who enrolled in Salt Lake will have to re-enrol due to discrepancies at the time of enrolment.[12] Further, a large number of people in India are unable to give usable biometric information because of manual labour, cataracts, and similar conditions.
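The note above mentions One Time Passwords as a manual fallback when biometric matching fails. Public documents do not describe the UIDAI's actual OTP mechanism, but the standard construction for such codes is HOTP (RFC 4226); the sketch below is a generic illustration of that standard, not the UIDAI implementation.

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based One-Time Password (RFC 4226)."""
    # Pack the counter as an 8-byte big-endian block and MAC it.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the low nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

Because each code is derived from a moving counter, a leaked code is useless for the next authentication attempt, which is what makes OTPs a plausible override channel when a fingerprint or iris scan cannot be read.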

After such data is entered, the Operator shows the data to the Resident, Introducer or Head of the Family (as the case may be) for validation.

  • Operator Sign off – Each set of data needs to be verified by an Operator whose fingerprint is already stored in the system.

Vulnerability: Vesting the authority to sign off in an operator allows for signing off on inaccurate or fraudulent data. For example, the issuance of Aadhaar numbers to biometric exceptions highlights issues surrounding the misuse and unreliability of this function.[13]

After this, the enrolment operator gets the supervisor’s sign-off for any exceptions that might exist, and the acknowledgement and consent for enrolment are stored. Any correction to specified data can be made within 96 hours.

Document Storage, Back up and Sync

After gathering and verifying all the information about the resident, the Enrolment Agency Operator will store photocopies of the documents of the resident. These agencies also back up data “from time to time” (recommended to be twice a day) and maintain it for a minimum of 60 days. They also sync with the server every 7–10 days.

Vulnerability: The security implications of third-party operators storing information are greatly exacerbated by the fact that these operators use technology and devices from companies that have close ties to intelligence agencies in other countries: L-1 Identity Solutions with America’s CIA, Accenture with French intelligence, and so on.[14]

Transfer of Demographic and Biometric Data Collected to CIDR

“First mile logistics” involves transferring data using the Secure File Transfer Protocol (SFTP) as provided by the UIDAI, or through a “suitable carrier” such as India Post.

Vulnerability: There is no direct engagement between the UIDAI and the enrolling agencies; the Registrars engage private enrolment agencies, not the UIDAI. Further, the scope of persons authorized to collect information, the information that can be collected, and how such information is stored are all vague. A 2009 notification claimed that the UIDAI owns the database,[15] but there is no indication of how it may be used or how the UIDAI would respond to instances of identity fraud.
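Whether enrolment packets travel by SFTP or by a physical carrier like India Post, a basic safeguard against tampering in transit is an integrity checksum verified at the receiving end. The sketch below is a hypothetical illustration of that idea (the packet contents and field names are invented), not a description of the UIDAI's actual first-mile process.

```python
import hashlib

def packet_digest(data: bytes) -> str:
    """SHA-256 fingerprint of an enrolment data packet."""
    return hashlib.sha256(data).hexdigest()

# Sender side: fingerprint the packet before handing it to SFTP or a courier.
packet = b'{"enrolment_id": "0000-EXAMPLE", "payload": "..."}'
sent_digest = packet_digest(packet)

# Receiver side: recompute and compare to detect corruption or tampering.
received = packet
assert packet_digest(received) == sent_digest   # intact
tampered = received + b"x"
assert packet_digest(tampered) != sent_digest   # altered in transit
```

A checksum alone detects only accidental or crude alteration; protecting against a motivated intermediary would additionally require the digest to be signed or sent over an authenticated channel.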

Data De-duplication and Aadhaar Generation at CIDR

On receiving biometric information, de-duplication is performed to ensure that each individual is issued only one UID number.

Vulnerability:

  • This de-duplication is carried out by private companies, some of which are not of Indian origin and thus are not bound by Indian law. Also, the volume of Aadhaar numbers rejected for quality or technical reasons is a cause for worry; the count reached 9 crore in May 2015.[16]
  • The MoUs promise Registrars access to information contained in the Aadhaar letter, although individuals are assured that the letter is sent only to them.[17]
  • General compliance and de-duplication has been an issue, with over 34,000 people being issued more than one Aadhaar number,[18] and innumerable examples of faulty Aadhaar cards being issued.[19]
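De-duplication, in the abstract, means refusing to issue a second number to biometrics already in the database. Real systems do this with probabilistic template matching across millions of records; the sketch below reduces it to exact hash matching purely to illustrate the workflow, and is in no way the algorithm used at the CIDR. All names and templates are invented.

```python
import hashlib

def deduplicate(records):
    """Accept only records whose (simplified) biometric fingerprint has
    not been seen before; reject the rest as duplicates.

    Real biometric de-duplication uses fuzzy template matching, not
    exact hashes -- this is a deliberately simplified sketch.
    """
    seen = {}
    accepted, rejected = [], []
    for name, template in records:
        key = hashlib.sha256(template).hexdigest()
        if key in seen:
            rejected.append((name, seen[key]))  # duplicate of earlier record
        else:
            seen[key] = name
            accepted.append(name)
    return accepted, rejected

accepted, rejected = deduplicate([
    ("resident-A", b"template-1"),
    ("resident-B", b"template-2"),
    ("resident-A2", b"template-1"),  # same biometrics submitted twice
])
print(accepted, rejected)
```

Even in this toy form, the failure modes discussed above are visible: a fabricated "biometric exception" never enters the index at all, and a poor-quality template that hashes (or matches) differently slips past the duplicate check.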

[1] Enrolment Process Essentials : UIDAI , (December 13,2012), http://nictcsc.com/images/Aadhaar%20Project%20Training%20Module/English%20Training%20Module/module2_aadhaar_enrolment_process17122012.pdf

[2] UIDAI to review biometric data collection process of 60 crore resident Indians: P Chidambaram, Economic Times, (Jan 31, 2012), http://articles.economictimes.indiatimes.com/2012-01-31/news/31010619_1_biometrics-uidai-national-population-register.

[3]See: an MoU signed between the UIDAI and the Government of Madhya Pradesh. Also see: Usha Ramanathan, “States as handmaidens of UIDAI”, The Statesman (August 8, 2013).

[4]http://nictcsc.com/images/Aadhaar%20Project%20Training%20Module/English%20Training%20Module/module2_aadhaar_enrolment_process17122012.pdf

[5] Document Storage Guidelines for Registrars – Version 1.2, https://uidai.gov.in/images/mou/D11%20Document%20Storage%20Guidelines%20for%20Registrars%20final%2005082010.pdf

[6] Arindham Mukherjee, Lola Nayar, Aadhaar, A Few Basic Issues, Outlook India, (December 5, 2011), http://dataprivacylab.org/TIP/2011sept/India4.pdf.

[7] Aadhaar: UIDAI probing several cases of misuse of personal data, The Hindu, (April 29, 2012), http://www.thehindubusinessline.com/economy/aadhar-uidai-probing-several-cases-of-misuse-of-personal-data/article3367092.ece.

[8] Harsimran Julka, UIDAI wins court battle against HCL technologies, The Economic Times, (October 4, 2011), http://articles.economictimes.indiatimes.com/2011-10-04/news/30242553_1_uidai-bank-guarantee-hp-and-ibm.

[9] Chetan Chauhan, UIDAI cancels 3.84 lakh fake Aadhaar numbers, The Hindustan Times, (December 26, 2012), http://www.hindustantimes.com/newdelhi/uidai-cancels-3-84-lakh-fake-aadhaar-numbers/article1-980634.aspx.

[10] Usha Ramanathan, “Inclusion project that excludes the poor”, The Statesman (July 4, 2013).

[11] UIDAI to Refresh Data Collection Process, Zee News, (February 7, 2012) http://zeenews.india.com/news/delhi/uidai-to-refresh-data-collection-process_757251.html.

[12] Snehal Sengupta, Queue up again to apply for Aadhaar, The Telegraph, (February 27, 2015), http://www.telegraphindia.com/1150227/jsp/saltlake/story_5642.jsp#.VayjDZOqqko

[13] Chauhan, supra note 9.

[14] Usha Ramanathan, Three Supreme Court Orders Later, What’s the Deal with Aadhaar? Yahoo News, (April 13, 2015), https://in.news.yahoo.com/three-supreme-court-orders-later--what-s-the-deal-with-aadhaar-094316180.html.

[15] Usha Ramanathan, “Threat of Exclusion and of Surveillance, The Statesman (July 2, 2013).

[16] Over 9 Crore Aadhaar enrolments rejected by UIDAI, Zee News (May 8, 2015).

[17] Usha Ramanathan, “States as handmaidens of UIDAI”, The Statesman (August 8, 2013).

[18] Surabhi Agarwal, Duplicate Aadhar numbers within estimate, Live Mint (March 5, 2013).

[19] Usha Ramanathan, “Outsourcing enrolment, gathering dogs and trees”, The Statesman (August 7, 2013).

The seedy underbelly of revenge porn

by Prasad Krishna last modified Sep 27, 2015 02:25 PM
Intimate photos posted by angry exes are becoming part of an expanding online body of dirty work.

The article by Sandhya Soman was published in the Times of India on August 23, 2015.


Three lakh 'Likes' aren't easy to come by. But Geeta isn't gloating. She's livid, and waiting for the day a video-sharing site will take down the popular clip of her having sex with her vengeful ex-husband. "Every other day somebody calls or messages to say they've seen me," says Geeta.

She is not alone. Two weeks ago, law student Shrutanjaya Bhardwaj Whatsapped women he knew asking if any of them had come across cases of online sexual harassment. In a few hours, his phone was filled with tales of harassment by ex-boyfriends and strangers. Instances ranged from strangers publishing morphed photographs on Facebook, to ex-husbands and boyfriends circulating intimate photos and videos on porn sites. Of the 40 responses, around 25 were cases of abuse by former partners. "I have heard friends talking about the problem, but never realized it was this bad," says Bhardwaj.

These days, revenge is best served online – it travels faster and has potential for greater damage. But despite the widespread nature of the crime, many targets hesitate to complain for fear of being shamed and blamed. "A 15-year-old girl is going to worry about how her parents will react if she talks about it," says Chinmayi Arun, research director, Centre for Communication Governance at National Law University, Delhi. There is also fear of harassment by the police, says Rohini Lakshane, researcher, Centre for Internet and Society. Worst of all is the waiting. "Even if a police complaint is filed, it takes ages to find out who shot it, who uploaded it and where it is circulated. Such content is mirrored across many sites," she says.

Geeta is familiar with the routine. Her harassment started with photographs sent to family, friends and colleagues. After an acrimonious divorce, several videos were released in 2013. "There were some 25-30 videos on various sites.

After an FIR was filed, the police wrote to websites and some of the links were removed," says Geeta, who has been flagging content on a popular site, which has not yet responded to her privacy violation report. "My face is seen clearly on it. People even come up to me in restaurants saying they've seen it. How do I get on with my life?" asks a distraught Geeta. She also recently filed an affidavit supporting the controversial porn ban PIL in a last-ditch effort to erase the abuse that began after her divorce.

The cyber cell officer in charge of her case says he had got websites to shut down several URLs but was thwarted by the repeal of section 66A of the IT Act, which dealt with offensive messages sent electronically. When asked why section 67 (cyber pornography) of the same act and various sections of the criminal law couldn't be used, the officer says that only 66A is applicable to the evidence he has. "I asked for more links and she sent them to me. We'll see if other sections can be applied," he says. Lawyers and activists argue that existing laws – sections 354A (sexual harassment), 354C (voyeurism), 354D (stalking) and 509 (outraging modesty) of the IPC among them – are good enough.

Though there are no official statistics for what is popularly referred to as 'revenge' porn, there is a flood of such images online. Lakshane, who studied consent in amateur pornography for the NGO-run EroTICs India project in 2014, found everything from clandestinely shot clips to exhibitionist ones where faces are blurred or cropped.

Social activist Sunita Krishnan has raised the red flag over several video clips, including two that show gang rape, which were circulated on Whatsapp. Some of the content she came across showed familiarity between the man and woman, indicating an existing relationship. In one clip, the man says: "How dare you go with that fellow. What you did it to him, do it to me."

Most home-grown clips end up on desi sites with servers abroad, making it difficult to take down content. Some do have a policy of asking for consent of people in the frame. But Lakshane, who wanted to test this policy, says when she approached one website that has servers abroad saying that she had a sexually explicit video, the reply was a one-liner asking her to send it. "They didn't ask for any consent emails," she says. In lieu of payment, they offered her a free account on another file-sharing site, which seemed to partner with the site. With no financial links to those submitting videos, sites like these make money out of subscriptions from consumers, or ads.

A few months ago, the CBI arrested a man from Bengaluru for uploading porn clips, using high-end editing software and cameras. Kaushik Kuonar allegedly headed a syndicate and is suspected to be behind the rape clips reported by Krishnan. "I am skeptical of the idea of amateur porn being randomly available across the Internet. There seem to be people like the man in Bengaluru who are apparently sourcing, distributing and making money out of it," says Chinmayi Arun. "He had 474 clips, including some of rape," adds Krishnan.

Social media companies, meanwhile, say they're working with authorities to prevent such violations. A Facebook spokesperson says the company removes content that violates its community standards. It also works with the women and child development ministry to help women stay safe online. Google, Microsoft, Twitter and Reddit have promised to remove links to revenge porn on request, while countries like Japan and Israel have made it illegal.

In India, the National Commission for Women started a consultation on online harassment but is yet to submit a report. In the absence of clarity, activists like Krishnan endorse the banning of porn sites. Not all agree with sweeping solutions. Lakshane says sometimes a court order helps to get tech companies to act faster on requests, as in the case of a 2012 sex tape scandal in which Google removed search results linking to 360 web pages. Also, the term 'revenge' porn, she says, is a misnomer as the videos are meant to shame women. "These are not movies where actors get paid. Somebody else is making money off this gross violation of privacy."

Human DNA Profiling Bill 2012 v/s 2015 Bill

by Vanya Rakesh last modified Sep 06, 2015 02:10 PM
This entry compares the provisions of the Human DNA Profiling Bill introduced in 2012 with those of the 2015 Bill

A comparison of changes that have been introduced in the Human DNA Profiling Bill, June 2015.

  • Definitions:

1. 2012 Bill: The definition of "analytical procedure" was included under clause 2 (1) (a) and was defined as an orderly step by step procedure designed to ensure operational uniformity.

2015 Bill: This definition has been included under the Explanation under clause 22 which provides for measures to be taken by DNA Laboratory.

2. 2012 Bill: The definition of "audit" was included under clause 2 (1) (b) and was defined as an inspection used to evaluate, confirm or verify activity related to quality.

2015 Bill: This definition has been included under the Explanation under clause 22 which provides for measures to be taken by DNA Laboratory.

3. 2012 Bill: There was no definition of "bodily substance".

2015 Bill: Clause 2(1) (b) defines bodily substance to be any biological material of or from a body of the person (whether living or dead) and includes intimate/non-intimate body samples as well.

4. 2012 Bill: The definition of "calibration" was included under clause 2 (1) (d) in the previous Bill.

2015 Bill: The definition has been removed from the definition clause and has been included as an explanation under clause 22.

5. 2012 Bill: Previously "DNA Data Bank" was defined under clause 2(1)(h) as a consolidated DNA profile storage and maintenance facility, whether in computerized or other form, containing the indices as mentioned in the Bill.

2015 Bill: However, in this version, the definition has been shortened under clause 2(1) (f) to mean a DNA Data Bank as established under clause 24.

6. 2012 Bill: Previously, a "DNA Data Bank Manager" was defined under clause 2(1) (i) as the person responsible for supervision, execution and maintenance of the DNA Data Bank.

2015 Bill: In the new Bill, it is defined under clause 2(1) (g) as a person appointed under clause 26.

7. 2012 Bill: Under clause 2(1) (j), "DNA laboratory" was defined as any laboratory established to perform DNA procedures.

2015 Bill: Under clause 2(1) (h), "DNA laboratory" is now defined as any laboratory established to perform DNA profiling.

8. 2012 Bill: "DNA procedure" was defined under clause 2(1) (k) as a procedure to develop a DNA profile for use in the applicable instances specified in the Schedule.

2015 Bill: This definition has been removed from the Bill.

9. 2012 Bill: There was no definition of "DNA profiling".

2015 Bill: DNA profiling has been defined under clause 2(1) (j) as a procedure to develop a DNA profile for human identification.

10. 2012 Bill: "DNA testing" was defined under clause 2(1) (n) as the identification and evaluation of biological evidence using DNA technologies for use in the applicable instances.

2015 Bill: This definition has been removed.

11. 2012 Bill: "Forensic material" was defined under clause 2(1) (o) as biological material of or from the body of a person, living or dead, representing an intimate or non-intimate body sample.

2015 Bill: This definition has been subsumed into the definition of "bodily substance" under clause 2(1) (b).

12. 2012 Bill: "Intimate body sample" was defined under clause 2(1) (q).

2015 Bill: This has been removed from the definitions clause and included as an explanation under clause 23, which addresses the sources and manner of collection of samples for DNA profiling.

13. 2012 Bill: "Intimate forensic procedure" was defined under clause 2(1) (r).

2015 Bill: This has likewise been removed from the definitions clause and included as an explanation under clause 23.

14. 2012 Bill: "Non-intimate body sample" was defined under clause 2(1) (v).

2015 Bill: This has likewise been removed from the definitions clause and included as an explanation under clause 23.

15. 2012 Bill: "Non-intimate forensic procedure" was defined under clause 2(1) (w).

2015 Bill: This has likewise been removed from the definitions clause and included as an explanation under clause 23.

16. 2012 Bill: "Undertrial" was defined under clause 2(1) (zk) as a person against whom a criminal proceeding is pending in a court of law.

2015 Bill: Under clause 2(1) (zc), the definition now covers a person against whom charges have been framed for a specified offence in a court of law.

  • DNA Profiling Board:

1. 2012 Bill: Under clause 4 (a), the Bill stated that a renowned molecular biologist must be appointed as the Chairperson.

2015 Bill: Under clause 4 addressing Composition of the Board, the Bill states that the Board shall consist of a Chairperson who shall be appointed by the Central Government and must have at least fifteen years' experience in the field of biological sciences.

2. 2012 Bill: Under clause 4 (i), the Chairman of National Bioethics Committee of Department of Biotechnology, Government of India was to be included as a member under the DNA Profiling Board.

2015 Bill: This member has been removed from the composition.

3. 2012 Bill: Under clause 4 (m), the experience required of the one person drawn from the field of genetics was not specified.

2015 Bill: In this Bill, under clause 4 (m), it is stated that such a person must have a minimum of twelve years' experience in the field.

4. 2012 Bill: The experience required of the two persons from the field of biological sciences was not specified under clause 4 (l).

2015 Bill: Under clause 4 (l), it is stated that such persons must have a minimum of twelve years' experience in the field.

5. The following members have been included in the 2015 Bill-

i. Chairman of National Human Rights Commission or his nominees, as an ex-officio member under clause 4 (a).

ii. Secretary to Government of India, Ministry of Law and Justice or his nominees (not below rank of Joint Secretary), as an ex-officio member under clause 4 (b).

6. 2012 Bill: Under clause 5, the term of the members was not uniform and varied for all members.

2015 Bill: The term of the persons from the field of biological sciences and the person from the field of genetics has been stated to be five years from the date of their entering upon office, with eligibility for re-appointment for not more than two consecutive terms.

Also, the age of a Chairperson or a member cannot exceed seventy years.

The term of members under sub-clauses (c), (f), (h) and (i) of clause 4 is three years; for the others, the term continues for as long as they hold the office by virtue of which they are members.

  • Chief Executive Officer:

2012 Bill: Earlier it was stated in the Bill under clause 10 (3) that such a person should be a scientist with understanding of genetics and molecular biology.

2015 Bill: The Bill states under clause 11 (3) that the CEO shall be a person possessing qualifications and experience in science, or as specified under regulations. The requirement of specific expertise in genetics and molecular biology has been removed.

A new clause, 12(5), addresses the power of the Board to co-opt persons to attend its meetings and take part in proceedings; however, such persons shall have no voting rights. They shall be entitled to specified allowances for attending the meetings.

  • Officers and Other Employees of Board:

2012 Bill: The Bill stated under clause 11 (3) that the Board may appoint consultants required to assist in the discharge of its functions on such terms and conditions as may be specified by the regulations.

2015 Bill: The 2015 Bill states under clause 12 (3) that the Board may appoint experts to assist in discharging its functions and may hold consultations with people whose rights may be affected by DNA profiling.

  • Functions of the Board:

2012 Bill: 26 functions were stated in the 2012 Bill.

2015 Bill: The number of the functions has been reduced to 22 with a few changes based on recommendations of Expert Committee.

  • Power of Board to withdraw approval:

2015 Bill: The circumstances in which the Board may withdraw its approval are unchanged from the 2012 Bill (previously under clause 16). There is one addition to the list, under clause 17 (1) (d): the Board may also withdraw its approval where the DNA laboratory fails to comply with any direction issued by the DNA Profiling Board or any regulatory authority under any other Act.

  • Obligations of DNA Laboratory:

2015 Bill: There is an addition to the list of obligations to be undertaken by a DNA laboratory under clause 19 (d). The laboratory has an additional obligation to share the DNA data prepared and maintained by it with the State DNA Data Bank and the National DNA Data Bank.

  • Qualification and experience of Head, technical and managerial staff and employees of DNA Laboratory:

2012 Bill: The previous Bill clearly mandated under clause 19 (2) the qualifications of the Head of every DNA laboratory to be a person possessing educational qualifications of Doctorate in Life Sciences from a recognised University with knowledge and understanding of the foundation of molecular genetics as applied to DNA work and such other qualifications as may be specified by regulations made by the Board.

2015 Bill: The provision has been generalized: clause 20 (1) merely requires a person to possess the specified educational qualifications and experience.

  • Measures to be taken by DNA Laboratory:

2012 Bill: In the previous Bill, there were separate clauses with regard to security, minimization of contamination, evidence control system, validation process, analytical procedure, equipment calibration and maintenance, audits of laboratory to be followed by a DNA Laboratory.

2015 Bill: In the 2015 Bill, these measures to be adopted by DNA Laboratory have been included under one clause itself-clause 22.

  • Infrastructure and training:

2012 Bill: The specific provisions regarding infrastructure, fee, recruitment, training and installing of security system in the DNA Laboratory were present in the Bill under clauses 28-31.

2015 Bill: These provisions have been removed from the 2015 Bill.

  • Sources and manner of collection of samples for DNA profiling:

2012 Bill: Part II of the Schedule in the Bill provided for sources and manner of collection of samples for DNA Profiling.

The sources include tissue and skeletal remains, and already-preserved body fluids and other samples.

Also, it provided for a list of the manner in which the profiling can be done:

(1) Medical Examination (2) Autopsy examination (3) Exhumation

Also, provision for collection of intimate and non-intimate body samples was provided as an Explanation.

2015 Bill: Under Clause 23, the sources include bodily substances and other sources as specified in Regulations. The other sources remain unchanged.

Also, provision for collection of intimate and non-intimate body samples is addressed in clause 23(2).

The explanation to the provision states what would be implied by the terms medical practitioner, intimate body sample, intimate forensic procedure, non-intimate body sample and non-intimate forensic procedure.

  • DNA Data Bank:

- Establishment:

2012 Bill: The Bill did not specify any location for establishment of the National DNA Data Bank.

2015 Bill: The Bill states under clause 24 (1) that the Central Government shall establish a National DNA Data Bank at Hyderabad.

-Maintenance of indices of DNA Data Bank:

2012 Bill: Apart from the DNA profiles, every DNA Data Bank was to contain the identity of the person from whose body the substances were taken, in the case of a profile in the offenders' index, under clause 32 (6) (a).

2015 Bill: Clause 25 (2) (a) states that the DNA Data Bank shall contain the identity for the suspects' or offenders' index.

  • DNA Data Bank Manager:

2012 Bill: The Bill stated under clause 33 (1) that a DNA Data Bank Manager shall be appointed for conducting all operations of the National DNA Data Bank. The functions were not specified in detail.

2015 Bill: The Bill states specifically under clause 26 (1) that a DNA Data Bank Manager shall be appointed for the purposes of execution, maintenance and supervision of the National DNA Data Bank.

- Qualification:

2012 Bill: In the previous Bill, it was stated under clause 33 (3) that the DNA Data Bank Manager must be a scientist with understanding of computer applications and statistics.

2015 Bill: The Bill states under clause 26 (2) that the DNA Data Bank Manager must possess an educational qualification in science and such experience as prescribed by the regulations.

  • Officers and other employees of the National DNA Data Bank:

2012 Bill: The Bill stated under clause 34 (3) that the Board may appoint consultants required to assist in the discharge of the functions of the DNA Data Banks.

2015 Bill: The Bill provides under clause 27 (3) that the Board may appoint experts required to assist in the discharge of the functions of the DNA Data Banks.

  • Comparison and Communication of DNA profiles:

2015 Bill: The new Bill specifically addresses comparison and communication of DNA profiles, such as those in the offenders' or crime scene index, under clause 28 (1). Also, there is an additional provision under clause 29 (3) which states that the National DNA Data Bank Manager may communicate a DNA profile, through the Central Bureau of Investigation, on the request of a court, tribunal, law enforcement agency or DNA laboratory, to the Government of a foreign State, an international organization or an institution of Government.

  • Use of DNA profiles and DNA samples and records:

2012 Bill: The Bill provided under clause 39 that all DNA profiles, samples and records would be used solely for the purpose of facilitating identification of the perpetrator of an offence listed under the Schedule. The proviso to this provision allowed such samples to also be used to identify victims of accidents or disasters, missing persons, or for the purposes of civil disputes.

2015 Bill: Under clause 32, the Bill restricts the use of all DNA profiles, samples and records solely to the purpose of facilitating identification of a person under the Act.

  • DNA Profiling Board Fund:

2012 Bill: The Bill stated under clause 47 (2) that the financial power for the application of monies of the Fund shall be delegated to the Board in such manner as may be prescribed and as may be specified by the regulations made by the Board.

Also, the Bill stated that the Fund shall be applied for meeting remuneration requirements to be paid to the consultants under clause 47 (3) (c).

2015 Bill: This provision has not been included in the Bill. Also, the Bill does not include the provision of paying the remuneration to the experts from the Fund.

  • Delegation of Powers:

2012 Bill: The Bill provided under clause 61 that the Board may delegate its powers and functions to the Chairperson or any other Member or officer of the Board, subject to such conditions as may be necessary.

2015 Bill: This provision has not been included in the 2015 Bill.

  • Powers of Board to make rules:

2012 Bill: The Bill provided for an exhaustive list consisting of 33 powers listed under clause 65.

2015 Bill: The Bill provides for a list of 27 powers of the Board under clause 57.

  • Schedule:

2012 Bill: The list of offences to which human DNA profiling would apply included any law as may be specified by the regulations made by the Board.

2015 Bill: This provision has been removed from the 2015 Bill.

Responsible Data Forum: Discussion on the Risks and Mitigations of releasing Data

by Vanya Rakesh last modified Sep 06, 2015 02:29 PM

The Responsible Data Forum initiated a discussion on 26th August 2015 to discuss the risks and mitigations of releasing data.

The discussion centred on the question of adopting adequate measures to mitigate risks to people and communities when data is being prepared for release or sharing.

The discussion covered the following concerns:

  • What is risk? Risks in releasing development data and PII
  • What kinds of risks are there?
  • Risk to whom?
  • Risks in dealing with PII, discussed by way of several examples
  • What is missing from the world?

The starting point is responsibility: whoever creates a dataset is responsible for ensuring that no harm comes to the people connected to it, and a balance must be struck between good use of the data on one hand and protecting data subjects, sources and managers on the other.

To answer what risk is, it was defined as the “probability of something happening multiplied by the resulting cost or benefit if it does” (Oxford English Dictionary). Risk, in other words, is a function of cost/benefit, probability, and a subject. For probability, all possible risks must be considered, weighing how much harm would result and how likely it is to happen.
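The probability-times-cost framing above can be made concrete with a small sketch. The scenarios, probabilities and costs below are entirely invented for illustration; any real assessment would need context-specific estimates:

```python
# Minimal sketch of the "probability x cost" risk framing.
# All scenario names and numbers are hypothetical.

def expected_risk(probability: float, cost: float) -> float:
    """Expected risk = probability of the event x cost if it occurs."""
    return probability * cost

# Invented release scenarios for one dataset: (probability, cost-if-it-happens)
scenarios = {
    "re-identification of a data subject": (0.10, 1000),
    "legal action against the releaser": (0.02, 4000),
    "reputational harm to the organisation": (0.30, 100),
}

for name, (p, cost) in scenarios.items():
    print(f"{name}: expected risk = {expected_risk(p, cost):.1f}")

# Ranking scenarios by expected risk suggests which mitigation to prioritise:
# a common-but-cheap harm can matter less than a rarer, costlier one.
worst = max(scenarios, key=lambda k: expected_risk(*scenarios[k]))
```

The point of the exercise is not the arithmetic but the discipline: each feared outcome is forced into an explicit estimate of likelihood and harm before release.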

An example in this context came from Syria, where bakeries were targeted because the bombers knew where the bakeries were, making them easy targets. This illustrates how, when designing a secure data release mechanism, local context is a crucial consideration.

Another example of bad practice was the leak of information in the Ashley Madison case, which was reportedly linked to several suicides.

  • Kinds of risk:

  1. Physical harm:

The next point of discussion was regarding the kinds of physical risks to data subjects when data related to them is released or shared. Some of them were:

  i.   security issues
  ii.  hate speech
  iii. voter issues
  iv.  police action

Hence PII cuts both ways: some choose to run the risk of their PII being identified, while others run the risk of being identified as the releaser of the information.

  2. Legal harms: to illustrate the legal harms posed in releasing or sharing data, an example was discussed of an image-marking exercise of a military camp, in which people joined in, marked military equipment, and discovered people who were from that country.
  3. Reputational harm, primarily as an organization.
  4. Privacy breach, which can lead to all sorts of harms.
  • Risk to whom?

Data subjects – this includes:

  i.   Data collectors
  ii.  Data processing team
  iii. Person releasing the data
  iv.  Person using the data

Also, the likelihood of risk ranges from low through medium to high. At worst, we as a community are at risk.

  • PII:

- Any data which can be used to identify a specific individual. Such information includes not only names, addresses or phone numbers, but also data sets that do not in themselves identify an individual.

For example, in some places sharing a social security number is required for an HIV+ status check-up; hence, one needs to be aware of the environment of data sets that feed into it. In another situation, where there is a small population and people of a street, village or town are identified by religion, even such a data set can put them at risk.

Hence, awareness with respect to the demographics is important to ascertain how many people reside in that place, be aware of the environment and accordingly decide what data set must be made.
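The point that seemingly innocuous fields can single people out in a small population is, in essence, what a k-anonymity check measures. A minimal sketch, using invented records and hypothetical field names:

```python
# Hypothetical illustration: fields that are not identifying on their own
# (street, religion, age band) can uniquely identify someone in a small
# population. The records below are invented.
from collections import Counter

records = [
    {"street": "A", "religion": "X", "age_band": "20-30"},
    {"street": "A", "religion": "X", "age_band": "20-30"},
    {"street": "A", "religion": "Y", "age_band": "40-50"},  # unique combination
    {"street": "B", "religion": "X", "age_band": "20-30"},  # unique combination
]

quasi_identifiers = ("street", "religion", "age_band")

# Count how many records share each combination of quasi-identifier values.
groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

# k-anonymity: the smallest group size. k == 1 means at least one person
# is uniquely identifiable from these fields alone.
k = min(groups.values())
unique_combos = [combo for combo, n in groups.items() if n == 1]
```

Running such a check before release shows which field combinations need to be coarsened (e.g. dropping the street, widening age bands) until every group is large enough.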

- Another way to mitigate risks at the time of release/sharing of data is partial release to select groups only, such as academic researchers or the data subjects themselves.

- Different examples were discussed to show how irresponsible release of data has affected data subjects, and the need to work to mitigate the harms caused in such cases.

Example: in the New York City taxi case, data about every taxi ride was released, including pickup and drop-off locations, times and fares. This becomes especially problematic when, say, someone can be shown to be visiting strip clubs: re-identification takes place, and people need protection against such insinuation.

This shows how data sets can lead to re-identification, even when identification was never intended. Hence, the actors involved must understand their responsibilities when engaging in data collection or release and mitigate the associated risks accordingly.
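In the taxi release, as public analyses at the time reported, the driver and medallion identifiers had been obscured with an unsalted hash; because the space of valid identifiers was small, every candidate could simply be hashed and matched. A toy sketch of that attack, with invented two-character codes standing in for the real identifiers:

```python
# Sketch of re-identification by brute-forcing an unsalted hash over a small
# keyspace, as reported in analyses of the NYC taxi data release.
# The two-character "licence codes" here are invented stand-ins.
import hashlib
import string
from itertools import product

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Pretend the published dataset "anonymised" codes by MD5-hashing them.
released_hashes = {md5_hex("7B"), md5_hex("XK")}

# An attacker enumerates the entire keyspace (36^2 = 1296 candidates)
# and rehashes each one, building a reverse lookup table.
alphabet = string.ascii_uppercase + string.digits
recovered = {md5_hex(a + b): a + b for a, b in product(alphabet, repeat=2)}

# Every released hash now maps straight back to its original identifier.
identified = sorted(recovered[h] for h in released_hashes)
```

The lesson generalises: hashing is not anonymisation when the input space is enumerable; releases need salting, tokenisation, or removal of the field altogether.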

- A concern was raised over the collection and processing of information about genetic diseases in a small population, since in practice it is not possible to guarantee that the information about the data subjects will not be released, exposed or re-identified. Experts may make their best efforts, but realistically no one can guarantee people that they will not be identified. The question of informing people of such risks is therefore crucial. One suggested way of mitigating risk is involving the people concerned and letting them know; awareness of the potential impact of a data breach or identification is very important.

- Another factor for consideration is the context in which the information was collected, which can change over time. For example, many human rights funders want information on their websites changed or removed as contexts, circumstances and situations change. Here too, the collection and release of data, and the associated risks, become important because of changing contexts.

  • What is missing from the world?

Though the recognition of risks is an ongoing process, what is missing from the world are uniform guidelines, rules or laws. There are no policies for informed consent, nor any means to mitigate risks collectively in a uniform manner. The principles of necessity, proportionality and informed consent must be adopted.

Connected Choices

by Melissa Hathaway — last modified Sep 09, 2015 01:26 AM

Modern societies are in the middle of a strategic, multi-dimensional competition for money, power and control over all aspects of the Internet and the Internet economy. Ms. Hathaway will discuss the increasing pace of discord and the competing interests that are unfolding in the current debate concerning the control and governance of the Internet and its infrastructure. Some countries are more prepared for and committed to winning tactical battles than others on the road to asserting themselves as an Internet power. Some are acutely aware of what is at stake; the question is whether they will be the master or the victim of these multi-layered power struggles as subtle and not-so-subtle connected choices are being made. Understanding this debate requires an appreciation of the entangled economic, technical, regulatory, political, and social interests implicated by the Internet. Those states that are prepared for and understand the many facets the Internet presents will likely end up on top.

Anonymity in Cyberspace

by Sunil Abraham last modified Sep 09, 2015 01:31 AM

While security threats require one to be identifiable in cyberspace, the need for privacy and freedom of speech without being targeted calls for means of anonymous browsing and the ability to express oneself without being identified. Where do we draw the line, and how do we balance it? The group will dwell on the need for anonymity in various sectors such as government, commerce and employment. Apart from security and privacy, the presentation will also cover social and technological perspectives.

DIDP Request #11: NETmundial Principles

by Aditya Garg — last modified Sep 14, 2015 03:08 PM
The Centre for Internet & Society (CIS) followed up on the implementation of the NETmundial Principles that ICANN has been endorsing by sending them a second request under their Documentary Information Disclosure Policy. This request and their response have been described in this blog post.

22 July 2015

To:

Mr. Fadi Chehade, CEO and President

Mr. Steve Crocker, Chairman of the Board

Mr. Cherine Chalaby, Chair, Finance Committee of the Board

Mr. Xavier Calvez, Chief Financial Officer

Sub: Details of documents within ICANN regarding implementation of NETmundial Principles and documents modified within ICANN as a result of the same

It is our understanding that ICANN is one of the founding members of the NETmundial Initiative. Hence, it has been credited in the public forum for championing the Initiative.[1]

Mr. Fadi Chehade, CEO and President of ICANN, has maintained that it is time for the global community to act and implement the Principles set forth in the initiative.[2]

ICANN itself, in response to one of our earlier requests, has acknowledged that "NETmundial Principles are high-level statements that permeate through the work of any entity –particularly a multistakeholder entity like ICANN."[3]

We, therefore, request all existing documents within ICANN which represent its efforts to implement the NETmundial Principles in its working. Additionally, we would also like to request all the documents which were modified as a result of ICANN’s support of the NETmundial Initiative, highlighting the modifications so made.

We look forward to the receipt of this information within the stipulated period of 30 days. Please feel free to contact us in the event of any doubts regarding our queries.

Thank you very much.

Warm regards,
Aditya Garg,
1st Year, National Law University, Delhi for Centre for Internet & Society

ICANN Response

ICANN in their response pointed to an earlier DIDP request that we had sent in, and they replied along the same lines. They brought to our attention that ICANN was not responsible for the implementation of the NETmundial Principles, despite it being one of the founding members of the Initiative. They reiterated their earlier statement of ICANN not being the “…home for implementation of the NETmundial Principles or the evolution of multistakeholder participation in Internet governance.” They have failed to provide us with documentary proof of the implementation of these principles, and have only pointed to statements which indicate a potential prospective adoption of the said initiative [4]; the responses have been near identical to those for the earlier DIDP request, which you can find here.

Further, ICANN claims that the information we seek falls within the scope of the exceptions to disclosure they lay down, as it is not within their operational activities, an explanation that fails to satisfy us. As always, they have used the wide scope of their non-disclosure policy to avoid providing us with the requisite information.

The request can be found here, and ICANN’s response has been linked here.


[1]. See McCarthy, I’m Begging You To Join, The Register (12 December 2014), http://www.theregister.co.uk/2014/12/12/im_begging_you_to_join_netmundial_initiative_gets_desperate/

[2]. See NETmundial Initiative Goes Live, Global Internet Community Invited to Participate (Press Release), https://www.netmundial.org/press-release-1

[3]. See Response to Documentary Information Disclosure Policy Request No. 20141228-1-NETmundial, https://www.icann.org/en/system/files/files/cis-netmundial-response-27jan15-en.pdf

[4]. Such as Objective 4.3 of their Strategic Five Year Plan. “Demonstrate leadership by implementing best practices in multistakeholder mechanisms within the distributed Internet governance ecosystem while encouraging all stakeholders to implement the principles endorsed at NETmundial” at https://www.icann.org/en/system/files/files/strategic-plan-2016-2020-10oct14-en.pdf

DIDP Request #12: Revenues

by Aditya Garg — last modified Sep 14, 2015 03:32 PM
The Centre for Internet & Society (CIS) sought information from ICANN on their revenue streams by sending them a second request under their Documentary Information Disclosure Policy. This request and their response have been described in this blog post.

CIS Request

22 July 2015

To:

Mr. Cherine Chalaby, Chair, Finance Committee of the Board

Mr. Xavier Calvez, Chief Financial Officer

Mr. Samiran Gupta, ICANN India

All other members of Staff involved in accounting and financial tasks

Sub: Raw data with respect to granular income/revenue statements of ICANN from 1999-2011

We would like to thank ICANN for their prompt response to our earlier requests. We appreciate that the granular Revenue Details for FY14 have been posted online.[1] We also appreciate that a similar document has been posted for FY13.[2]

We also hope that one for FY12 will be posted soon, as noted by you in your Response to our Request No. 20141222-1.[3]

As also noted by you in the same response, similar reports cannot be prepared for FY99 to FY11 since “[i]t would be extremely time consuming and overly burdensome to cull through the raw data in order to compile the reports for the prior years”.[4]

Additionally, it was also mentioned that the “relevant information is available in other public available documents”.[5]

Hence, we would like to request the raw data for the years FY99 to FY11, for our research on accountability and transparency mechanisms in Internet governance, specifically of ICANN. Additionally, we would also like to request the links to such public documents where the information is available.

We look forward to the receipt of this information within the stipulated period of 30 days. Please feel free to contact us in the event of any doubts regarding our queries.
Thank you very much.
Warm regards,
Aditya Garg,  
I Year, National Law University, Delhi
For Centre for Internet & Society
W: http://cis-india.org

ICANN Response

ICANN referred to our earlier DIDP request (see here) where we had sought for a detailed report of their granular income and revenue statements from 1999-2014. They refused to disclose the data on grounds that it would be ‘time consuming’ and ‘overly burdensome’, which is a ground for refusal as per their exceptions to disclosure.

Our request may be found here, and their response is linked to here.


[1]. See FY14 Revenue Detail By Source, https://www.icann.org/en/system/files/files/fy2014-revenue-source-01may15-en.pdf.

[2]. See FY13 Revenue Detail By Source, https://www.icann.org/en/system/files/files/fy2013-revenue-source-01may15-en.pdf

[3]. See Response to Documentary Information Disclosure Policy Request No. 20141222-1, https://www.icann.org/en/system/files/files/cis-response-21jan15-en.pdf.

[4]. Id.

[5]. See Response to Documentary Information Disclosure Policy Request No. 20141222-1, https://www.icann.org/en/system/files/files/cis-response-21jan15-en.pdf.

India’s digital check

by Sunil Abraham last modified Sep 15, 2015 02:55 PM
All nine pillars of Digital India directly correlate with policy research conducted at the Centre for Internet and Society, where I have worked for the last seven years. This allows our research outputs to speak directly to the priorities of the government when it comes to digital transformation.

The article was originally published by DNA on July 8, 2015.


Broadband Highways and Universal Access to Mobile Connectivity: The first two pillars have been combined in this paragraph because they both require spectrum policy and governance fixes. Shyam Ponappa, a distinguished fellow at our Centre, calls for the leveraging of shared spectrum and shared backhaul infrastructure. Plurality in spectrum management, e.g., unlicensed spectrum, should be promoted for accelerating backhaul or last-mile connectivity, and also for community or local government broadband efforts. Other ideas considered by Ponappa include getting state-owned telcos to exit the last mile completely and focus only on running an open access backhaul through Bharat Broadband Limited. Network neutrality regulations are also required to mitigate harms to free speech, diversity and competition as ISPs and TSPs innovate with business models such as zero-rating.

Public Internet Access Programme: Continuing investments into Common Service Centres (CSCs) for almost a decade may be questionable, and therefore a citizens’ audit should be undertaken to determine how the programme may be redesigned. The reinvention of post offices is very welcome; however, public libraries are also in urgent need of reinvention. CSCs, post offices and public libraries should all leverage long-range WiFi for Internet and intranet access, empowering BYOD [Bring Your Own Device] users. Applications will take time to develop, and therefore the immediate emphasis should be on locally caching Indic language content. State Public Library Acts need to be amended to allow for the borrowing of digital content. Flat-fee licensing regimes must be explored to increase access to knowledge and culture. Commons-based peer production efforts like Wikipedia and Wikisource need to be encouraged.

e-Governance: Reforming Government through Technology: DeitY, under the leadership of free software advocate Secretary RS Sharma, has accelerated adoption and implementation of policies supporting non-proprietary approaches to intellectual property in e-governance. Policies exist and are being implemented for free and open source software, open standards and electronic accessibility for the disabled. The proprietary software lobby headed by Microsoft and industry associations like NASSCOM have tried to undermine these policies but have failed so far.

The government should continue to resist such pressures. Universal adoption of electronic signatures within government, so that there is a proper audit trail for all communications and transactions, should be made an immediate priority. Adherence to globally accepted data protection principles such as minimisation via “form simplification and field reduction” for Digital India should be applauded. On the other hand, the mandatory requirement of Aadhaar for DigiLocker and eSign amounts to contempt of the Supreme Court order in this regard.

e-Kranti — Electronic Delivery of Services: The 41 mission mode projects listed are within the top-down planning paradigm with a high risk of failure; the funds reserved for these projects should instead be converted into incentives for those public, private and public-private partnerships that accelerate the adoption of e-governance. The dependency on the National Informatics Centre (NIC) for implementation of e-governance needs to be reduced, and SMEs need to be able to participate in the development of e-governance applications. The funds allocated to DeitY for this area have also produced a draft bill for Electronic Services Delivery. This bill was supposed to give RTI-like teeth to e-governance services by requiring each government department and ministry to publish service level agreements [SLAs] for each of their services, and by prescribing punitive action for responsible institutions and individuals when the SLAs are not complied with.

Information for All: The open data community and the Right to Information movement in India are not happy with the rate of implementation of National Data Sharing and Accessibility Policy (NDSAP). Many of the datasets on the Open Data Portal are of low value to citizens and cannot be leveraged commercially by enterprise. Publication of high-value datasets needs to be expedited by amending the proactive disclosure section of the Right to Information Act 2005.

Electronics Manufacturing: Mobile patent wars have begun in India with seven big-ticket cases filed at the Delhi High Court. Our Centre has written an open letter to the previous minister for HRD and the current PM requesting them to establish a device-level patent pool with a compulsory license of 5%, thereby replicating India’s success at becoming the pharmacy of the developing world and the lead provider of generic medicines through the enabling patent policy established in the 1970s. In a forthcoming paper with Prof Jorge Contreras, my colleague Rohini Lakshané will map around fifty thousand patents associated with mobile technologies. We estimate that around a billion USD would be collected in royalties for the rights-holders, while eliminating legal uncertainties for manufacturers of mobile technologies.

IT for Jobs: Centralised, top-down, government-run human resource development programmes are not useful. Instead, the government needs to focus on curriculum reform and restructuring of the education system. Mandatory introduction of free and open source software will give Indian students the opportunity to learn by reading world-class software. They will then grow up to become computer scientists rather than computer operators. All projects at academic institutions should be contributions to existing free software projects — these projects could be global or national, e.g., a local government’s e-governance application. The budget allocated for this pillar should instead be used to incentivise research by giving micro-grants and prizes to those students who make key software contributions, publish in peer-reviewed academic journals or participate in competitions. This would be a more systemic approach to dealing with the skills and knowledge deficit amongst Indian software professionals.

Early Harvest Programmes: Many of the ideas here are very important. For example, secure email for government officials — if this was developed and deployed in a decentralised manner it would prevent future surveillance of the Indian government by the NSA. But a few of the other low-hanging fruit identified here don’t really contribute to governance. For example, biometric attendance for bureaucrats is just glorified bean-counting — it does not really contribute to more accountability, transparency or better governance.


The author works for the Centre for Internet and Society which receives funds from Wikimedia Foundation that has zero-rating alliances with telecom operators in many countries across the world

Sustainable Smart Cities India Conference 2015, Bangalore

by Vanya Rakesh last modified Sep 21, 2015 02:24 AM
Nispana Innovative Platforms organized a Sustainable Smart Cities India Conference 2015, in Bangalore on 3rd and 4th September, 2015. The event saw participation from people across various sectors including Government Representatives from Ministries, Municipalities, Regulatory Authorities, as well as Project Management Companies, Engineers, Architects, Consultants, Handpicked Technology Solution Providers and Researchers. National and International experts and stakeholders were also present to discuss the opportunities and challenges in creating smart and responsible cities as well as citizens, and creating a roadmap for converting the smart cities vision into a reality that is best suited for India.

The objective of the conference was to discuss the meaning of a smart city, the promises made, the challenges and possible solutions for implementation of ideas by transforming Indian Cities towards a Sustainable and Smart Future.

Smart Cities Mission

Considering the pace of rapid urbanization in India, it has been estimated that the urban population would rise by more than 400 million people by the year 2050[1] and would contribute nearly 75% to India’s GDP by the year 2030. It has been realized that to foster such growth, well planned cities are of utmost importance. For this, the Indian government has come up with a Smart Cities initiative to drive economic growth and improve the quality of life of people by enabling local area development and harnessing technology, especially technology that leads to Smart outcomes.

Initially, the Mission aims to cover 100 cities across the country (shortlisted on the basis of a Smart Cities Proposal prepared by each city), and its duration will be five years (FY2015-16 to FY2019-20). The Mission may be continued thereafter in the light of an evaluation to be done by the Ministry of Urban Development (MoUD), incorporating the learnings into the Mission. This initiative aims to focus on area-based development in the form of redevelopment, or developing new areas (Greenfield), to accommodate the growing urban population and ensure comprehensive planning to improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged.[2]

What is being done?

The Smart City Mission will be operated as a Centrally Sponsored Scheme (CSS), and the Central Government proposes to give financial support to the Mission to the extent of Rs. 48,000 crore over five years, i.e. on average Rs. 100 crore per city per year. The Government has come up with two missions for the purpose of achieving urban transformation: the Atal Mission for Rejuvenation and Urban Transformation (AMRUT) and the Smart Cities Mission. The vision is to preserve India’s traditional architecture, culture and ethnicity while implementing modern technology to make cities livable, use resources in a sustainable manner and create an inclusive environment. Additionally, Foreign Direct Investment regulations have been relaxed to invite foreign capital into the Smart City Mission.

What is a Smart City?

Over the two-day conference, various speakers shared a common sentiment that the Government’s mission does not clearly define what encompasses the idea of a Smart City. There is no universally accepted definition of a Smart City, and its conceptualization varies from city to city and country to country.

A global consensus on the idea of a smart city is a city which is livable, sustainable and inclusive. Hence, it would mean a city which has mobility, healthcare, smart infrastructure, smart people, traffic maintenance, efficient waste resource management, etc.

Also, there is a global debate at the United Nations regarding developmental goals. One of these goals, gender equality, is very important for the smart city initiative: a smart city must be one where women live free from violence, are enabled to participate, and are economically empowered.

Promises

The promises of the Smart City mission include:

  • a sustainable future and a reduced carbon footprint
  • adequate water supply
  • assured electricity supply
  • proper sanitation, including solid waste management
  • efficient urban mobility and public transport
  • affordable housing, especially for the poor
  • robust IT connectivity and digitalization
  • good governance, especially e-Governance and citizen participation
  • a sustainable environment
  • safety and security of citizens, particularly women, children and the elderly
  • health and education

The vision is to preserve the country’s traditional architecture, culture and ethnicity while implementing modern technology. It was discussed how the Smart City Mission is currently attracting global investment, and how it will create new job opportunities, improve communications and infrastructure, decrease pollution and ultimately improve the quality of living.

Challenges

The main challenges for implementation of these objectives are with respect to housing, dealing with existing cities, and adopting the idea of retro-fitting.

Other challenges include eradicating urban poverty, controlling environmental degradation, formulating a fool-proof plan, putting a proper waste management mechanism in place, widening roads without doing so at the cost of pedestrians and cyclists, and building cities which are inclusive and cater to the needs of women, children and disabled people.

Among the top challenges will be devising a fool-proof plan to develop smart cities, meaningful public-private partnership, increasing renewable energy, water supply, effective waste management, traffic management, meeting power demand, urban mobility, ICT connectivity and e-governance, all while preparing for the new threats that can emerge with the implementation of these technologies.

What needs to be done?

The experts made the following suggestions for successfully implementing the government's vision of smart cities in India.

  • Focus on the 4 Ps: Public-Private-People Partnership, since people are very much part of the city.
  • Integration of organizations, government bodies and citizens; the government could, for instance, use sentiment analysis to gauge citizen opinion.
  • Active participation by state governments, since land is a state subject under the Constitution. There must be a detailed framework to monitor progress, and responsibilities must be clearly demarcated.
  • Detailed plans, policies and guidelines
  • Strengthen big data initiatives
  • Resource maximization
  • Make citizens smart by informing them and creating awareness
  • Need for competent people to run the projects
  • Visionary leadership
  • Create flexible and shared spaces for community development.

National/International case studies

Several national and international case studies were discussed to identify practical challenges, so that the selected Indian cities can learn from others' mistakes or incorporate successful schemes into their planning from the outset.

  • Amsterdam Smart City: Amsterdam is described as a global village that was transformed into a smart city by involving its people; citizens' views were sought to make the plan a success. The role of big data and open data was highly emphasized. It was also suggested that responsibilities be aligned across central, state and district governments to avoid overlapping functions. The city adopted smart grid integration to build intelligent infrastructure and subsidized initiatives to make the city livable.
  • GIFT City, Gujarat: An ICT-based sustainable city and a Greenfield development, strategically situated. One of its major features is a utility tunnel for providing repair services, the top of which can be used as a walking/jogging track. The city has smart fire safety measures, wide roads to control traffic, and smart regulations.
  • Tel Aviv Smart City, Israel: Tel Aviv has been called the Mediterranean's cool city, with young and free-spirited people. The city comprises a creative class with the 3 Ts: talent, technology and tolerance. It welcomes startups and focuses on G2G, G2C and C2C initiatives, adopting technologically equipped programmes for effective governance and community building.

Participation

The event saw participation from people across various sectors including Government Representatives of Ministries, Municipalities, Regulatory Authorities, as well as Project Management Companies, Engineers, Architects, Consultants, Handpicked Technology Solution Providers and Researchers.

  • Foundation for Futuristic Cities: A think tank based in Hyderabad working to establish vibrant smart cities for a vibrant India. It is currently developing a "Smart City Protocol" for Indian cities in collaboration with technology, government and corporate partners, building a framework for smart cities; big data and predictive analytics for safe cities; city sentiment analysis; situation-awareness tools; and mobile apps for better city life, by way of hackathons and devthons.
  • Centre for SMART cities, Bangalore: This is a research organization which aims to address the challenge of collaborating and sharing knowledge, resources and best practices that exist both in the private sector and governments/municipal bodies in a usable form and format.
  • BDP – India (Studio Leader – Urbanism): The Organization is based out of Delhi and is involved in providing services relating to master planning, urbanism, design and landscape design. The team includes interior designers, engineers, urbanists, sustainability experts, lighting designers, etc. The vision is to help build and create high quality, effective and inspiring built spaces.
  • UN Women: A United Nations organization working on gender equality, women's empowerment and the elimination of discrimination. It strives to strengthen the rights of women by working with women, men, feminists, women's networks, governments, local authorities and civil society to create national strategies that advance gender equality in line with national and international priorities. The UN negotiated the 2030 Agenda for Sustainable Development in August 2015 (to be formally adopted by world leaders in September 2015); it features 17 sustainable development goals, one of them being the achievement of gender equality and the empowerment of all women and girls.
  • Elematic India Pvt. Ltd.: The Company is a leading supplier of precast concrete technology worldwide providing smart solutions for concrete buildings to help enable build smart cities with safe infrastructure.

Conclusion

The event discussed in great detail what a smart city would look like in a country like India, where every city has different demographics, needs and resources.

The participants shared the understanding that a city is gauged not by its length and width but by the broadness of its vision and the height of its dreams. The initiative of creating smart cities would echo across the country as a whole and would not be limited to urban centers. Hence, the plan must be inclusive in implementation, and right from its inception the people and their needs must be given due consideration to make it a success.

The question of the road ahead, of how exactly this would happen, was on many minds. The first step, as suggested by the experts, is to involve citizens: inform them, take their suggestions, and plan the project for every city accordingly. While focusing on cities made better by human ingenuity and technology, and on building mechanisms for housing, commerce, transportation and utilities, it must not be forgotten that technology is timely but culture is timeless. Cities must not be faceless; community spaces must be built with walkable areas and smart utilization of limited resources. It must also be ensured that cities cater not only to the elite and skilled population but also to the less privileged. Adequate urban mapping must be done to ensure placement of community facilities such as restrooms, trash bins and information kiosks.

A story shared from personal experience by an expert architect in green infrastructure was instrumental in setting the tone of the conference and is bound to stay with many of the participants. The architect's son, a small child from Baroda, left his father speechless when he asked why there were no butterflies in the big city of Mumbai, since he used to play with butterflies every morning in his hometown in Gujarat. The incident was genuinely thought-provoking, and it left every architect, government representative and engineer asking, before they set out to build smart cities with technologically equipped infrastructure and utilities: can we, as a country, come together and ensure we build a smart city with butterflies? Can we pay equal attention to sustainability, the environment and the requirements of the community in the smart city envisioned by the Government, to make the city livable and inclusive?

Questions that I, as a participant, am left with are:

  • Building a Greenfield project is comparatively easier than upgrading existing cities into smart ones, which requires planning and optimum utilization of resources. The role of local bodies needs to be strengthened, which primarily requires a skilled workforce from planning through execution. Therefore, what must be done to make current cities "smarter", and how can ordinary citizens be encouraged and funded to redefine and prioritize local needs?
  • The conference touched upon the need for a well-planned policy framework to govern smart cities; what was missing, however, was a discussion of the kinds of policies each city would require to ensure governance and monitor operations. Chalking out well-thought-out urban policies is the first step towards implementing the project and requires deliberation.
  • The Government's plans seem to cater to the needs of only a handful of sections of society; they must focus on the safety of women, chalk out initiatives to build basic utilities like public toilets, and plan infrastructure with disabled individuals in mind.

This is of paramount importance, since the Government must consider who the potential inhabitants of these future smart cities would be and what their particular needs are. Before cities are made better by the use of technology, there is a need for more toilets as a basic utility. Thus, instead of treating technological advancement as the sole foundation for making people's lives easier, the cities must provide accessible utilities in order to become livable. Hence, what measures would the Government and the other bodies involved take to ensure that these urban enclaves do not overlook the underprivileged?

Another issue that went unnoticed during the two-day event pertains to the fundamental rights of individuals within the city: for example, the right to privacy, the right to access services and utilities, and the right to security. These basic rights must be given due recognition by smart city developers to uphold the spirit of internationally accepted human rights principles. It is therefore important to ask how these future cities are going to address the rights of their people.

Apart from plans for waste management, another factor that must not be overlooked is sustainability: maximizing the available resources in the best possible ways, and adopting techniques to stop the fast-paced degradation of the environment.

The conference could have suggested more solutions, such as rainwater harvesting and better sewage management in existing cities.

Also, many experts emphasized the importance of big data in building smart cities. However, the question of regulating the data being generated and released was not discussed. Big data analytics involves massive streams of data, which require regulation and control over their use and generation to ensure such information is not misused in any way. In such a scenario, how would these cities regulate and govern big data techniques so as to make infrastructure and utilities technologically efficient on the one hand, while using large data sets in a monitored fashion on the other?

An answer to these crucial questions would have brought a lot of clarity to the minds of the officials, planners and potential residents of India's smart cities.


[1] 2014 revision of the World Urbanization Prospects, United Nations, Department of Economic and Social Affairs, July 2014, Available at : http://www.un.org/en/development/desa/publications/2014-revision-world-urbanization-prospects.html

[2] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

Peering behind the veil of ICANN’s DIDP (I)

by Padmini Baruah — last modified Oct 15, 2015 02:42 AM
One of the key elements of the process of enhancing democracy and furthering transparency in any institution which holds power is open access to information for all the stakeholders. This is critical to ensure that there is accountability for the actions of those in charge of a body which utilises public funds and carries out functions in the public interest.

As the body which “...coordinates the Internet Assigned Numbers Authority (IANA) functions, which are key technical services critical to the continued operations of the Internet's underlying address book, the Domain Name System (DNS)”[1], ICANN's centrality in regulating the Internet (a public good if there ever was one) makes it vital that its decision-making processes, financial flows, and operations are open to public scrutiny. ICANN itself echoes this belief, professing “...a proven commitment to accountability and transparency in all of its practices”[2], which is captured in its Bylaws and Affirmation of Commitments. In furtherance of this, ICANN has created its own Documentary Information Disclosure Policy, in which it promises to “...ensure that information contained in documents concerning ICANN's operational activities, and within ICANN's possession, custody, or control, is made available to the public unless there is a compelling reason for confidentiality.”[3]

ICANN has a vast array of documents already in the public domain, listed here. These include annual reports, budgets, registry reports, speeches, operating plans, correspondence, etc. However, its Documentary Information Disclosure Policy (DIDP) falls short of international standards for information disclosure. This piece examines the DIDP's defined conditions for non-disclosure, which seem to undercut the very transparency the policy aims to uphold. The obvious comparison is with the right to information laws that governments the world over have enacted in furtherance of democracy. While ICANN cannot be equated to a democratically elected government, it nonetheless exercises sufficient regulatory power over the functioning of the Internet to owe a similar degree of information to all stakeholders in the internet community. Accordingly, I compare ICANN's conditions for non-disclosure with the analogous exclusions in India's Right to Information Act, 2005.

ICANN's Defined Conditions for Non-Disclosure versus Exclusions in Indian Law:

ICANN, in its DIDP, identifies a lengthy list of conditions as sufficient grounds for non-disclosure of information. One of the most important indicators of a strong transparency law is said to be minimal exclusions.[4] However, as the table below shows, ICANN's exclusions are extensive, and they have been a barrier to the free flow of information. An analysis of ICANN's responses to DIDP requests (available here) shows that the conditions for non-disclosure were invoked in over 50 of the 85 requests responded to (as of 11.09.2015); i.e., well over half of the requests ICANN receives are subjected to the non-disclosure conditions.

In contrast, an analysis of India’s Right to Information Act, considered to be among the better drafted transparency laws of the world, reveals a much narrower list of exclusions that come in the way of a citizen obtaining any kind of information sought. The table below compares the two lists:

1. ICANN[5]: Information provided by or to a government or international organization which was to be kept confidential or would materially affect ICANN's equation with the concerned body.
India: Information, the disclosure of which would prejudicially affect the sovereignty and integrity of India; the security, strategic, scientific or economic interests of the State; or relations with a foreign State, or lead to incitement of an offence;[6] information received in confidence from a foreign government.[7]
Analysis: The threshold for both bodies is fairly similar for this exclusion.

2. ICANN: Internal (staff/Board) information that, if disclosed, would or would be likely to compromise the integrity of ICANN's deliberative and decision-making process.
India: Cabinet papers, including records of deliberations of the Council of Ministers, Secretaries and other officers, provided that such decisions, the reasons thereof, and the material on the basis of which the decisions were taken shall be made public after the decision has been taken and the matter is complete or over (unless subject to these exemptions).[8]
Analysis: The Indian law is far more transparent, as it ultimately allows the records of internal deliberation to be made public after the decision is taken.

3. ICANN: Information related to the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process.
India: No similar provision in Indian law.
Analysis: This is a restriction that ICANN introduces in addition to the one above, which is itself quite broad.

4. ICANN: Records relating to an individual's personal information.
India: Information which relates to personal information, the disclosure of which has no relationship to any public activity or interest, or which would cause unwarranted invasion of the privacy of the individual (with the proviso that information which cannot be denied to Parliament or a State Legislature shall not be denied under this exemption).[9]
Analysis: Again, the Indian law contains a proviso for information with a “relationship to any public activity or interest”.

5. ICANN: Proceedings of internal appeal mechanisms and investigations.
India: Information which has been expressly forbidden to be published by any court of law or tribunal, or the disclosure of which may constitute contempt of court.[10]
Analysis: While ICANN prohibits the disclosure of all proceedings, in India the exemption extends only to information that the court prohibits from being made public.

6. ICANN: Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party, or was provided to ICANN pursuant to a nondisclosure agreement or nondisclosure provision within an agreement.
India: Information, including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that the larger public interest warrants the disclosure of such information.[11]
Analysis: This is fairly similar in both lists.

7. ICANN: Confidential business information and/or internal policies and procedures.
India: No separate provision in Indian law; this is encapsulated in the provision above.
Analysis: This is fairly similar in both lists.

8. ICANN: Information that, if disclosed, would or would be likely to endanger the life, health, or safety of any individual or materially prejudice the administration of justice.
India: Information, the disclosure of which would endanger the life or physical safety of any person or identify the source of information or assistance given in confidence for law enforcement or security purposes.[12]
Analysis: This is fairly similar in both lists.

9. ICANN: Information subject to any kind of privilege, which might prejudice any investigation.
India: Information, the disclosure of which would cause a breach of privilege of Parliament or a State Legislature;[13] information which would impede the process of investigation or the apprehension or prosecution of offenders.[14]
Analysis: This is fairly similar in both lists.

10. ICANN: Drafts of all correspondence, reports, documents, agreements, contracts, emails, or any other forms of communication.
India: No similar provision in Indian law.
Analysis: This exclusion is absent from Indian law, and it is extremely broadly worded, coming in the way of full transparency.

11. ICANN: Information that relates in any way to the security and stability of the Internet.
India: No similar provision in Indian law.
Analysis: This is perhaps necessary given ICANN's role as the IANA Functions Operator. However, given the large public interest in this matter, there should be some proviso to make information in this regard available to the public as well.

12. ICANN: Trade secrets and commercial and financial information not publicly disclosed by ICANN.
India: Information, including commercial confidence, trade secrets or intellectual property, the disclosure of which would harm the competitive position of a third party, unless the competent authority is satisfied that the larger public interest warrants the disclosure of such information.[15]
Analysis: This is fairly similar in both cases.

13. ICANN: Information requests which are not reasonable; which are excessive or overly burdensome; complying with which is not feasible; or which are made with an abusive or vexatious purpose or by a vexatious or querulous individual.
India: No similar provision in Indian law.
Analysis: Of all the DIDP exclusions, this is the most loosely worded. The terms in this clause are not clearly defined, and its extreme subjectivity means it can effectively be used to deflect any request made to ICANN. What amounts to ‘reasonable’? Whom is the process going to ‘burden’? What lens does ICANN use to define a ‘vexatious’ purpose? Where do we look for answers?

14. ICANN: No similar provision in ICANN's DIDP.
India: Information available to a person in his fiduciary relationship, unless the competent authority is satisfied that the larger public interest warrants the disclosure of such information.[16]

15. ICANN: No similar provision in ICANN's DIDP.
India: Information, access to which would involve an infringement of copyright subsisting in a person other than the State.[17]

Thus, the net cast by the DIDP exclusions is vaster even than that of a democratic state's transparency law. Clearly, these exclusions have effectively allowed ICANN to dodge most of the requests coming its way. One can only hope that ICANN realises that they stand in the way of the transparency it is so committed to, and does away with this unreasonably wide set of exclusions on the road to the IANA Transition.


[1] https://www.icann.org/resources/pages/welcome-2012-02-25-en

[2] https://www.icann.org/resources/accountability

[3] https://www.icann.org/resources/pages/didp-2012-02-25-en

[4] Shekhar Singh, India: Grassroots Initiatives, in The Right to Know 19, 44 (Ann Florini ed., 2007).

[5] In a proviso, ICANN’s DIDP states that all these exemptions can be overridden if the larger public interest is higher. However, this has not yet been reflected in their responses to any DIDP requests.

[6] Section 8(1)(a), Right to Information Act, 2005.

[7] Section 8(1)(f), Right to Information Act, 2005.

[8] Section 8(1)(i), Right to Information Act, 2005.

[9] Section 8(1)(j), Right to Information Act, 2005.

[10] Section 8(1)(b), Right to Information Act, 2005.

[11] Section 8(1)(d), Right to Information Act, 2005.

[12] Section 8(1)(g), Right to Information Act, 2005.

[13] Section 8(1)(c), Right to Information Act, 2005.

[14] Section 8(1)(h), Right to Information Act, 2005.

[15] Section 8(1)(d), Right to Information Act, 2005.

[16] Section 8(1)(e), Right to Information Act, 2005.

[17] Section 9, Right to Information Act, 2005.

Hits and Misses With the Draft Encryption Policy

by Sunil Abraham last modified Sep 26, 2015 04:46 PM
Most encryption standards are open standards. They are developed by open participation in a publicly scrutable process by industry, academia and governments in standard setting organisations (SSOs) using the principles of “rough consensus” – sometimes established by the number of participants humming in unison – and “running code” – a working implementation of the standard. The open model of standards development is based on the Free and Open Source Software (FOSS) philosophy that “many eyes make all bugs shallow”.

The article was published in the Wire on September 26, 2015.


This model has largely been a success, but as Edward Snowden's revelations have shown, the US, with its large army of mathematicians, has managed to compromise some of the standards developed under public and peer scrutiny. Once a standard is developed, its success or failure depends on voluntary adoption by various sections of the market – the private sector, government (since in most markets the scale of public procurement can shape the market) and end-users. This process of voluntary adoption usually results in the best standards rising to the top. Mandates on high-quality encryption standards and minimum key-sizes are an excellent idea within the government context, to ensure that state, military, intelligence and law enforcement agencies are protected from foreign surveillance and traitors from within. In other words, these mandates are based on a national security imperative.

However, similar mandates for corporations and ordinary citizens are based on a diametrically opposite imperative: surveillance. These mandates therefore usually require the use of standards that governments can compromise, typically via brute force (wherein supercomputers generate and attempt every possible key), and smaller key-lengths, since the smaller the key-length, the quicker it is for the supercomputers to break in. Such mandates, unlike the ones for state, military, intelligence and law enforcement agencies, interfere with the market-based voluntary adoption of standards, and are therefore examples of inappropriate regulation that will undermine the security and stability of information societies.
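The arithmetic behind brute force is simple: every extra key bit doubles the attacker's worst-case work. The sketch below is a toy illustration only, using an invented single-byte XOR scheme (the names `xor_encrypt` and `brute_force` are assumptions for this example, not any real cipher), but it shows why mandated short keys favour the attacker:

```python
def xor_encrypt(plaintext: bytes, key: int) -> bytes:
    # Toy "cipher": XOR every byte with one small key (insecure by design)
    return bytes(b ^ key for b in plaintext)

def brute_force(ciphertext: bytes, crib: bytes, key_bits: int):
    """Exhaustively try all 2**key_bits candidate keys.

    Returns (recovered_key, attempts). The worst case doubles with every
    extra key bit, which is the attacker's whole cost model."""
    for attempts, key in enumerate(range(2 ** key_bits), start=1):
        if xor_encrypt(ciphertext, key) == crib:
            return key, attempts
    return None, 2 ** key_bits

message = b"attack at dawn"
for bits in (4, 6, 8):
    secret_key = 2 ** bits - 3        # arbitrary key near the top of the keyspace
    ciphertext = xor_encrypt(message, secret_key)
    found, attempts = brute_force(ciphertext, message, bits)
    print(bits, found == secret_key, attempts)   # attempts ~ 2**bits
```

A real 128-bit key makes the same loop astronomically infeasible, which is exactly why surveillance-driven mandates push toward shorter keys.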

Plain-text storage requirement

First, the draft policy mandates that Business to Business (B2B) and Consumer to Consumer (C2C) users store equivalent plain text (decrypted versions) of their encrypted communications and storage data for 90 days from the date of transaction. This requirement is impossible to comply with for three reasons. Foremost, encryption for web sessions is based on dynamically generated keys, and users are not even aware that their interactions with web servers (including webmail such as Gmail and Yahoo Mail) are encrypted. Next, from a usability perspective, this would require additional manual steps that no one has time for in their daily use of technology. Finally, the plain-text store would become a honey pot for attackers. In effect, this requirement is as good as saying “don’t use encryption”.

Second, the policy mandates that B2C and “service providers located within and outside India, using encryption” shall provide readable plain-text along with the corresponding encrypted information using the same software/hardware used to produce the encrypted information when demanded in line with the provisions of the laws of the country. From the perspective of lawful interception and targeted surveillance, it is indeed important that corporations cooperate with Indian intelligence and law enforcement agencies in a manner that is compliant with international and domestic human rights law. However, there are three circumstances where this is unworkable: 1) when the service providers are FOSS communities like the TOR project which don’t retain any user data and as far as we know don’t cooperate with any government; 2) when the service provider provides consumers with solutions based on end-to-end encryption and therefore do not hold the private keys that are required for decryption; and 3) when the Indian market is too small for a foreign provider to take requests from the Indian government seriously.
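The second circumstance, where the provider simply does not hold the keys, can be illustrated with a toy key agreement. In end-to-end encrypted systems, private values never leave the endpoints; the relay only ever sees public values. The sketch below uses textbook finite-field Diffie-Hellman with deliberately tiny, insecure parameters (the prime, generator, and variable names are assumptions for illustration, not any provider's actual protocol, and real systems also authenticate the exchange):

```python
import secrets

P = 0xFFFFFFFB   # 2**32 - 5, a small prime (far too small for real use)
G = 5            # generator for the toy group

# Each endpoint picks a private value that never leaves the device...
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1

# ...and the service provider only ever relays these public values.
alice_pub = pow(G, alice_priv, P)
bob_pub = pow(G, bob_priv, P)

# Both sides derive the same shared secret independently; the relay,
# seeing only alice_pub and bob_pub, cannot, so it has nothing to hand over.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
assert alice_secret == bob_secret
```

This is why a decryption mandate aimed at such a provider is unworkable: compliance would require redesigning the protocol so the provider holds key material it currently never sees.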

Where it is technically possible for the service provider to cooperate with Indian law enforcement and intelligence, greater compliance can be ensured by Indian participation in multilateral and multi-stakeholder internet governance policy development to ensure greater harmonisation of substantive and procedural law across jurisdictions. Options here for India include reform of the Mutual Legal Assistance Treaty (MLAT) process and standardisation of user data request formats via the Internet Jurisdiction Project.

Regulatory design

Governments don’t have unlimited regulatory capability or capacity. They have to be conservative when designing regulation so that a high degree of compliance can be ensured. The draft policy mandates that citizens only use encryption algorithms and key sizes that “will be prescribed by the government through notification from time to time”. This would be near impossible to enforce given the burgeoning multiplicity of encryption technologies available and the number of citizens that will get online in the coming years. Similarly, the mandates that “service providers located within and outside India…must enter into an agreement with the government”, that “vendors of encryption products shall register their products with the designated agency of the government” and that “vendors shall submit working copies of the encryption software / hardware to the government along with professional quality documentation, test suites and execution platform environments” would be impossible to enforce for two reasons: cloud-based providers will not submit their software, since they would want to protect their intellectual property from competitors; and smaller and non-profit service providers may not comply, since they can’t be threatened with bans or block orders.

This approach to regulation is inspired by license raj thinking, and its enforcement requires capability and capacity that we don’t have. It would be more appropriate to take a “harms”-based approach, wherein the government targets only those corporations that don’t comply with legitimate law enforcement and intelligence requests for user data and interception of communication.

Also, while the “Technical Advisory Committee” is the appropriate mechanism to ensure that policies remain technologically neutral, it does not appear that the annexure of the draft policy, i.e. “Draft Notification on modes and methods of Encryption prescribed under Section 84A of Information Technology Act 2000”, has been properly debated by technical experts. According to my colleague Pranesh Prakash, “of the three symmetric cryptographic primitives that are listed – AES, 3DES, and RC4 – one, RC4, has been shown to be a broken cipher.”
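That RC4 is broken is empirically checkable. One of its well-documented flaws, the Mantin–Shamir bias, makes the second keystream byte equal to zero about twice as often as an ideal cipher would. The sketch below implements textbook RC4 (for illustration only, never for production use) and measures the bias over random keys:

```python
import random

def rc4_keystream(key: bytes, n: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Mantin–Shamir: Pr[second keystream byte == 0] is about 2/256, not 1/256.
random.seed(1)                     # fixed seed so the experiment is repeatable
trials = 30000
zeros = sum(
    rc4_keystream(bytes(random.randrange(256) for _ in range(16)), 2)[1] == 0
    for _ in range(trials)
)
print(zeros / trials)   # roughly 1/128, about double an ideal cipher's 1/256
```

This single bias already leaks plaintext in broadcast settings, and later attacks on RC4 (as used in TLS and WEP) are stronger still, which is why its inclusion in the draft notification is indefensible.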

The draft policy also doesn’t take into account the security requirements of the IT, ITES, BPO and KPO industries that handle foreign intellectual property and personal information that is protected under European or American data protection law. If clients of these Indian companies feel that the Indian government would be able to access their confidential information, they will take their business to competing countries such as the Philippines.

And the good news is…

On the other hand, the second objective of the policy, which encourages “wider usage of digital Signature by all entities including Government for trusted communication, transactions and authentication”, is laudable, but it should ideally have been a mandate for all government officials, as this would ensure non-repudiation: government officials would not be able to deny authorship of their communications or of the approvals they grant for the various applications and files they process.
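How a digital signature yields non-repudiation can be sketched with textbook RSA. The parameters below are the classic toy values (p = 61, q = 53); the scheme is deliberately tiny and unpadded, so it is an illustration of the principle only, and real deployments use vetted schemes such as RSA-PSS or ECDSA:

```python
import hashlib

p, q = 61, 53
n = p * q                              # public modulus, 3233
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def digest(message: bytes) -> int:
    # Hash the message down to an integer below the (tiny) modulus
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the holder of the private exponent d can produce this value
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key (n, e) can check the signature, so the
    # signer cannot later deny authorship: this is non-repudiation
    return pow(signature, e, n) == digest(message)

approval = b"approved: tender file no. 42"
signature = sign(approval)
assert verify(approval, signature)               # authentic approval verifies
assert not verify(approval, (signature + 1) % n)  # an altered signature fails
```

Because verification needs only the public key while signing needs the private one, a signed approval ties an official to a file in a way a shared password or login never can.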

Second, the setting up of “testing and evaluation infrastructure for encryption products” is also long overdue. The initiation of “research and development programs … for the development of indigenous algorithms and manufacture of indigenous products” is slightly utopian, because it will be a long time before indigenous standards are as good as the global state of the art, but it is nonetheless an important start.

The more important step for the government is to ensure high quality Indian participation in global SSOs and contributions to global standards. This has to be done through competition and market-based mechanisms wherein at least a billion dollars from the last spectrum auction should be immediately spent on funding existing government organisations, research organisations, independent research scholars and private sector organisations. These decisions should be made by peer-based committees and based on publicly verifiable measures of scientific rigour such as number of publications in peer-reviewed academic journals and acceptance of “running code” by SSOs.

Additionally, the government needs to start making mathematics a viable career in India, either by employing mathematicians directly or by funding academic and independent research organisations that employ mathematicians. The basis of all encryption standards is mathematics, and we urgently need the tribe of Indian mathematicians in this country to grow dramatically.

Cyber 360 Agenda

by Prasad Krishna last modified Oct 02, 2015 03:41 PM

Agenda & Speakers - Cyber 360 conference-1.pdf (PDF, 886 kB)

Open Governance and Privacy in a Post-Snowden World : Webinar

by Vanya Rakesh last modified Oct 04, 2015 11:09 AM
On 10th September 2015, the OGP Support Unit, the Open Government Guide, and the World Bank held a webinar on “Open Governance and Privacy in a Post-Snowden World”, presented by Carly Nyst, independent consultant and former Legal Director of Privacy International, and Javier Ruiz, Policy Director of the Open Rights Group. This is a summary of the key issues discussed by the speakers and the participants.

See Open Governance and Privacy in a Post-Snowden World


Summary

The webinar discussed how government surveillance has become a key issue in the 21st century, thanks to Edward Snowden. The main concern raised was what a democracy should look like in the present day: should the state’s use of technology enable state surveillance or an open government? Typically, a balance must be struck between the privacy of the individual and the security of the state, particularly as the former is primarily about the social rights and collective interest of citizens.

At the international level, the right to privacy has been recognized as a basic human right and an enabler of other individual freedoms. This right encapsulates protection of personal data where citizens have the authority to choose whether to share or reveal their personal data or not. Due to technological advancement that has enabled collection, storage and sharing of personal data, the right to privacy and data protection frameworks have become of utmost importance and relevance with regard to open government efforts. Therefore, it is important for Governments to be transparent in handling sensitive data that they collect and use.

Many countries have also introduced laws to balance the right to privacy and right to information.  The role of the private sector and NGOs involved in enabling an open and transparent government must also be duly addressed at a national level.

Key Questions:

  • Why should the government release information?

There are multiple reasons for doing so, including:

  • Research and public policy (healthcare, social issues, economics, national statistics, census, etc.)

  • Transparency and accountability (politicians, registers, public expenses, subsidies, fraud, court records, education)

  • Public participation and public services (budgets, anti-corruption, engagement, and e-governance).

However, all these have certain risks and privacy implications:

  1. Risk of identification of individuals: Any individual whose information is released runs the risk of identification, followed by issues like identity theft, discrimination, stigmatisation or repression. Normally, the solution would be anonymisation of the data; however, this is not an absolute solution. Privacy laws can generally cope with such risks, but with pseudonymous data it becomes difficult to prevent identification.
  2. Profiling of social categories which can lead to discrimination: In such a situation, policies and other legislations regulating the use of data and providing remedy for violations can help.
  3. Exploitation and unfair/unethical use of information: When assessing the potential exploitation of information, it is useful to consider who will benefit from its release. For example, with respect to the release of health data in the UK, the main concern is that people and companies will benefit commercially from the information released, even though the result may be improved drugs and treatments.
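The first risk above can be made concrete with a small sketch of why pseudonymisation is not an absolute safeguard: when the space of possible identifiers is small, hashed identifiers can be reversed by brute force. The record, PIN codes and diagnosis below are invented for illustration:

```python
# Sketch: "anonymising" a quasi-identifier by hashing it, then reversing
# the hash by brute force over the small identifier space.
import hashlib

def pseudonymise(identifier: str) -> str:
    return hashlib.sha256(identifier.encode()).hexdigest()

# A "de-identified" record released in a hypothetical open-data set.
released = {"pincode": pseudonymise("560001"), "diagnosis": "diabetes"}

# An attacker enumerates candidate PIN codes; India's full PIN space
# (~19,000 codes) is just as easy to exhaust as this small range.
rainbow = {pseudonymise(str(pin)): str(pin) for pin in range(560000, 560100)}

recovered = rainbow.get(released["pincode"])
print(recovered)  # → 560001: the "anonymous" record is re-identified
```

This is why the webinar's point stands: hashing or pseudonymising low-entropy identifiers only delays identification, and legal controls on use remain necessary.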
  • What are the Solutions?

The webinar also discussed potential solutions to the questions and challenges posed. For example, when commitments of the Open Government Data Partnership are considered, privacy legislation must also be proposed. Further, key stakeholders must commit to proactive measures to reduce informational asymmetries between the state and citizens. To reduce the risks, measures must be taken to publish what information the state holds and what the government knows about its citizens. For example, in the UK, the civil society network is pressing for the national plan to commit the government to publicising how it shares data and to maintaining a centralised view of how information is handled and used.

The Open Government Guide provides for Illustrative Commitments like enactment of data protection legislation, establishing programmes for awareness and assessment of their impact, giving citizens control of their personal information and the right to redress when that information is misused, etc.

Surveillance

The issue of surveillance and the role of privacy in an open government context was also discussed.  The need for creating a balance between the legitimate interest of national security and the privacy of individuals was emphasized. With the rise of digital technologies, many governmental measures pertaining to surveillance intervene in individual privacy. There are many forms of surveillance and this has serious privacy implications, especially in developing countries. For example:

  1. Communications surveillance
  2. Visual surveillance
  3. Travel surveillance

This raises the question: when is surveillance legitimate, and when should it be permitted?

The International Principles on the Application of Human Rights to Communications Surveillance act as soft law, setting out what a good surveillance system looks like and helping ensure that governments comply with international human rights law.

In essence, surveillance does not in itself violate privacy; however, there must be a clear and foreseeable legal framework laying out the circumstances in which the government has the power to collect data, so that individuals can foresee when they might be under surveillance.

Also, a competent judicial authority must be established to oversee surveillance and keep a check on executive power by placing restrictions on privacy invasions. The actions of the government must be proportionate, and the harm caused by surveillance must not outweigh its benefits.

Role of openness in a “mass surveillance” state

Surveillance measures undertaken by governments are increasingly secretive. The European Court of Human Rights has held that secret surveillance may undermine democracy under the cloak of protecting it. Hence, open government and openness work towards protecting privacy, not undermining it.

To balance government surveillance with privacy, there is a need to publish the laws regulating such powers; publish transparency reports about surveillance, interception and access to communications data; reform legislation on surveillance by state agencies to ensure it complies with human rights; and establish safeguards so that new technologies used for surveillance and interception respect the right to privacy.

Conclusion

The conclusion one can draw is that privacy concerns have gained importance in today’s data-driven world. The main question that needs to be answered is whether governments should adopt surveillance measures or an open government.

Considering the equal importance of national security and individual privacy, a balance must be crafted between the two. This could be done by enacting clear and foreseeable laws outlining the scope of government surveillance on the one hand, and by informing citizens about such measures on the other. Establishing a competent judicial authority to keep a check on government action is also suggested as a way to work out the delicate balance between surveillance and privacy.

The Legal Validity of Internet Bans: Part I

by Geetha Hariharan and Padmini Baruah — last modified Oct 08, 2015 11:18 AM
In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. The State governments have relied on Section 144, Code of Criminal Procedure 1973 to institute such bans. Despite a legal challenge, the Gujarat High Court found no infirmity in this exercise of power in a recent order. We argue that it is Section 69A of the Information Technology Act 2000, and the Website Blocking Rules, which set out the legal provision and procedure empowering the State to block access to the Internet (if at all it is necessary), and not Section 144, CrPC.

In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. In Gujarat, the State government banned access to mobile Internet (data services), citing breach of peace during the Hardik Patel agitation. In Godhra in Gujarat, mobile Internet was banned as a precautionary measure during Ganesh visarjan. In Kashmir, mobile Internet was banned for three days or more because the government feared that people would share pictures of the slaughter of animals during Eid on social media, which could spark unrest across the state.

Can State or Central governments impose a ban on Internet access? If the State or its officials anticipate disorder or a disturbance of ‘public tranquility’, can Internet access through mobiles be banned? According to a recent order of the Gujarat High Court: Yes; Section 144 of the Code of Criminal Procedure, 1973 (“CrPC”) empowers the State government machinery to impose a temporary ban.

But the Gujarat High Court’s order neglects the scope of Section 69A, IT Act, and wrongly finds that the State government can exercise blocking powers under Section 144, CrPC. In this post and the next, we argue that it is Section 69A of the Information Technology Act, 2000 (“IT Act”) which is the legal provision empowering the State to block access to the Internet (including data services), and not Section 144, CrPC. Section 69A covers blocks to Internet access, and since it is a special law dealing with the Internet, it prevails over the general Code of Criminal Procedure.

Moreover, the blocking powers must stay within the constitutional boundaries prescribed in, inter alia, Article 19 of the Constitution. Blocking powers are, therefore, subject to the widely accepted tests of legality (foreseeability and non-arbitrariness), legitimacy of the grounds for restricting fundamental rights, and proportionality, which calls for narrowly tailored restrictions causing minimum disruption and/or damage.

In Section I of this post, we set out a brief record of the events that preceded the blocking of access to data services (mobile Internet) in several parts of Gujarat. Then in Section II, we summarise the order of the Gujarat High Court, dismissing the petition challenging the State government’s Internet-blocking notification under Section 144, CrPC. In the next post, we examine the scope of Section 69A, IT Act to determine whether it empowers the State and Central government agencies to carry out blocks on Internet access through mobile phones (i.e., data services such as 2G, 3G and 4G) under certain circumstances. We submit that Section 69A does, and that Section 144, CrPC cannot be invoked for this purpose.

I. The Patidar Agitation in Gujarat:

This question arose in the wake of agitation in Gujarat in the Patel community. The Patels or Patidars are politically and economically influential in Gujarat, with several members of the community holding top political, bureaucratic and industrial positions. In the last couple of months, the Patidars have been agitating, demanding to be granted status as Other Backward Classes (OBC). OBC status would make the community eligible for reservations and quotas in educational institutions and for government jobs.

Towards this demand, the Patidars organised multiple rallies across Gujarat in August 2015. The largest rally, called the Kranti Rally, was held in Ahmedabad, Gujarat’s capital city, on August 25, 2015. Hardik Patel, a leader of the agitation, reportedly went on hunger strike seeking that the Patidars’ demands be met by the government, and was arrested as he did not have permission to stay on the rally grounds after the rally. While media reports vary, it is certain that violence and agitation broke out after the rally. Many were injured, some lost their lives, property was destroyed, businesses suffered; the army was deployed and curfew imposed for a few days across the State.

In addition to other security measures, the State government also imposed a ban on mobile Internet services across different parts of Gujarat. Reportedly, Hardik Patel had called for a state-wide bandh over Whatsapp. The police cited “concerns of rumour-mongering and crowd mobilisation through Whatsapp” as a reason for the ban, which was instituted under Section 144, Code of Criminal Procedure, 1973 (“CrPC”). In most of Gujarat, the ban lasted six days, from August 25 to 31, 2015, while it continued for longer in Ahmedabad and Surat.

II. The Public Interest Litigation:

A public interest petition was filed before the Gujarat High Court, challenging the mobile Internet ban. Though the petition was dismissed at the preliminary stage by Acting Chief Justice Jayant Patel and Justice Anjaria by an oral order delivered on September 15, 2015, the legal issues surrounding the ban are important and the order calls for some reflection.

In the PIL, the petitioner prayed that the Gujarat High Court declare that the notification under Section 144, CrPC, which blocked access to mobile Internet, is “void ab initio, ultra vires and unconstitutional” (para 1 of the order). The ban, argued the petitioner, violated Articles 14, 19 and 21 of the Constitution by being arbitrary and excessive, violating citizens’ right to free speech and causing businesses to suffer extensive economic damage. In any event, the power to block websites was specifically granted by Section 69A, IT Act, and so the government’s use of Section 144, CrPC to institute the mobile Internet block was legally impermissible. Not only this, but the government’s ban was excessive in that mobile Internet services were completely blocked; had the government’s concerns been about social media websites like Whatsapp or Facebook, the government could have suspended only those websites using Section 69A, IT Act. And so, the petitioner prayed that the Gujarat High Court issue a writ “permanently restraining the State government from imposing a complete or partial ban on access to mobile Internet/broadband services” in Gujarat.

The State Government saw things differently, of course. At the outset, the government argued that there was “sufficient valid ground for exercise of power” under Section 144, CrPC, to institute a mobile Internet block (para 4 of the order). Had the blocking notification not been issued, “peace could not have been restored with the other efforts made by the State for the maintenance of law and order”. The government stressed that Section 144, CrPC notifications were generally issued as a “last resort”, and in any case, the Internet had not been shut down in Gujarat; broadband and WiFi services continued to be active throughout. Since the government was the competent authority to evaluate law-and-order situations and appropriate actions, the Court ought to dismiss the petition, the State prayed.

The Court agreed with the State government, and dismissed the petition without issuing notice (para 9 of the order). The Court examined two issues in its order (very briefly):

  1. The scope and distinction between Section 144, CrPC and Section 69A, IT Act, and whether the invocation of Section 144, CrPC to block mobile Internet services constituted an arbitrary exercise of power;
  2. The proportionality of the blocking notification (though the Court doesn’t use the term ‘proportionality’).

We will examine the Court’s reading of Section 69A, IT Act and Section 144, CrPC, to see whether their fields of operation are in fact different.


Acknowledgements: We would like to thank Pranesh Prakash, Japreet Grewal, Sahana Manjesh and Sindhu Manjesh for their invaluable inputs in clarifying arguments and niggling details for these two posts.


Geetha Hariharan is a Programme Officer with Centre for Internet & Society. Padmini Baruah is in her final year of law at the National Law School of India University, Bangalore (NLSIU) and is an intern at CIS.

The Legal Validity of Internet Bans: Part II

by Geetha Hariharan and Padmini Baruah — last modified Oct 08, 2015 11:17 AM
In recent months, there has been a spree of bans on access to Internet services in Indian states, for different reasons. The State governments have relied on Section 144, Code of Criminal Procedure 1973 to institute such bans. Despite a legal challenge, the Gujarat High Court found no infirmity in this exercise of power in a recent order. We argue that it is Section 69A of the Information Technology Act 2000, and the Website Blocking Rules, which set out the legal provision and procedure empowering the State to block access to the Internet (if at all it is necessary), and not Section 144, CrPC.

As we saw earlier, the Gujarat High Court held that Section 144, CrPC empowers the State apparatus to order blocking of access to data services. According to the Court, Section 69A, IT Act can be used to block certain websites, while under Section 144, CrPC, the District Magistrate can direct telecom companies like Vodafone and Airtel, who extend the facility of Internet access. In effect, the High Court agreed with the State government’s argument that the scope of Section 69A, IT Act covers only blocking of certain websites, while Section 144, CrPC grants a wider power.

This is what the Court said (para 9 of the order):

If the comparison of both the sections in the field of operations is made, barring certain minor overlapping more particularly for public order [sic], one can say that the area of operation of Section 69A is not the same as that of Section 144 of the Code. Section 69A may in a given case also be exercised for blocking certain websites, whereas under Section 144 of the Code, directions may be issued to certain persons who may be the source for extending the facility of internet access. Under the circumstances, we do not find that the contention raised on behalf of the petitioner that the resort to only Section 69A was available and exercise of power under Section 144 of the Code was unavailable, can be accepted.” (emphases ours)

We submit that the High Court’s reasoning failed to examine the scope of Section 69A, IT Act thoroughly. Section 69A does, in fact, empower the government to order blocking of access to data services, and it is a special law. Importantly, it sets forth a procedure that State governments, union territories and the Central Governments must follow to order blocks on websites or data services.

I. Special Law Prevails Over General Law

The IT Act, 2000 is a special law dealing with matters relating to the Internet, including offences and security measures. The CrPC is a general law of criminal procedure.

When a special law and a general law cover the same subject, the special law supersedes the general law. This is a settled legal principle, and several decisions of the Supreme Court attest to it. To take an example, in Maya Mathew v. State of Kerala, (2010) 3 SCR 16 (18 February 2010), the contest was between the Special Rules for the Kerala State Homoeopathy Services and the general Rules governing state and subordinate services. The Supreme Court held that when a special law and a general law both govern a matter, the Court should try to interpret them harmoniously as far as possible. But if the legislature intends that one law should prevail over the other, and this intention is made clear expressly or impliedly, then the Court should give effect to that intention.

On the basis of this principle, let’s take a look at the IT Act, 2000. Section 81, IT Act expressly states that the provisions of the IT Act shall have overriding effect, notwithstanding anything inconsistent with any other law in force. Moreover, in the Statement of Objects and Reasons of the IT (Amendment) Bill, 2006, the legislature clearly notes that amendments inserting offences and security measures into the IT Act are necessary given the proliferation of the Internet and e-transactions, and the rising number of offences. These indicate expressly the legislature’s intention for the IT Act to prevail over general laws like the CrPC in matters relating to the Internet.

Now, we will examine whether the IT Act empowers the Central and State governments to carry out complete blocks on access to the Internet or data services, in the event of emergencies. If the IT Act does cover such a situation, then the CrPC should not be used to block data services. Instead, the IT Act and its Rules should be invoked.

II. Section 69A, IT Act Allows Blocks on Internet Access

Section 69A(1), IT Act says:

“Where the Central Government or any of its officer specially authorised by it in this behalf is satisfied that it is necessary or expedient so to do, in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above, it may subject to the provisions of sub-section (2) for reasons to be recorded in writing, by order, direct any agency of the Government or intermediary to block for access by the public or cause to be blocked for access by the public any information generated, transmitted, received, stored or hosted in any computer resource.” (emphasis ours)

Essentially, Section 69A says that the government can block (or cause to be blocked) for access by the public, any information generated, transmitted, etc. in any computer resource, if the government is satisfied that such a measure is in the interests of public order.

Does this section allow the government to institute bans on Internet access in Gujarat? To determine this, we will examine each of the emphasised terms above.

Access: Section 2(1)(a), IT Act defines access as “...gaining entry into, instructing or communicating with… resources of a computer, computer system or computer network”.

Computer resource: Section 2(1)(k), IT Act defines computer resource as “computer, computer system, computer network...”

Information: Section 2(1)(v), IT Act defines information as “includes… data, message, text, images, sound, voice...”

So ‘blocking for access’ under Section 69A includes preventing entry into, or communication with, the resources of a computer, computer system or computer network, and it includes blocking the communication of data, messages, text, images, sound, etc. Two questions now arise:

(1) Do 2G and 3G services, broadband and Wifi fall within the definition of ‘computer network’?

Computer network: Section 2(1)(j), IT Act defines computer network as “inter-connection of one or more computers or computer systems or communication device…” by “...use of satellite, microwave, terrestrial line, wire, wireless or other communication media”.

(2) Do mobile phones that can connect to the Internet (for simplicity, smartphones) fall within the definition of ‘computer resource’?

Communication device: Section 2(1)(ha), IT Act defines communication device as “cell phones, personal digital assistance or combination of both or any other device used to communicate, send or transmit any text, video, audio or image”.

So a cell phone is a communication device. A computer network is an inter-connection of communication devices by wire or wireless connections, and a computer network is also a computer resource. Blocking of access under Section 69A, IT Act therefore includes gaining entry into or communicating with the resources of a computer network, which is an interconnection of communication devices, including smartphones. Add to this the fact that any information (data, message, text, images, sound, voice) can be blocked, and the conclusion seems clear.

The power to block access to Internet services (including data services) can be found within Section 69A, IT Act itself, the special law enacted to cover matters relating to the Internet. Not only this, the IT Act envisages emergency situations when blocking powers may need to be invoked.

III. Section 69A Permits Blocking in Emergency Situations

Section 69A, IT Act doesn’t act in isolation. The Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (“Blocking Rules”) operate together with Section 69A(1).

Rule 9 of the Blocking Rules deals with blocking of information in cases of emergency. It says that in cases of emergency, when “no delay is acceptable”, the Designated Officer (DO) shall examine the request for blocking. If it falls within the scope of Section 69A(1) (i.e., within the grounds of public order, etc.), the DO can submit the request to the Secretary, Department of Electronics and Information Technology (DeitY). If the Secretary is satisfied of the need to block during the emergency, he may issue a reasoned order for blocking, in writing, as an interim measure. Intermediaries need not be heard in such a situation.

After a blocking order is issued during an urgent situation, the DO must bring the blocking request to the Committee for Examination of Request constituted under Rule 7, Blocking Rules. There is also a review process, by a Review Committee that meets every two months to evaluate whether blocking directions are in compliance with Section 69A(1) [Rule 14].

We submit, therefore, that the Gujarat High Court erred in holding that Section 144, CrPC is the correct legal provision to enable Internet bans. Not only does Section 69A, IT Act cover blocking of access to Internet services, but it also envisages blocking in emergency situations. As a special law for matters surrounding the Internet, Section 69A should prevail over the general law provision of Section 144, CrPC.


Acknowledgements: We would like to thank Pranesh Prakash, Japreet Grewal, Sahana Manjesh and Sindhu Manjesh for their invaluable inputs in clarifying arguments and niggling details for these two posts.


Geetha Hariharan is a Programme Officer with Centre for Internet & Society. Padmini Baruah is in her final year of law at the National Law School of India University, Bangalore (NLSIU) and is an intern at CIS.

GSMA Conference Invite

by Prasad Krishna last modified Oct 14, 2015 01:49 AM

Conference Invite.pdf (PDF, 68 kB)

Participants of I&J Meeting in Berlin

by Prasad Krishna last modified Oct 14, 2015 02:49 AM

PARTICIPANTS - I&J MEETING BERLIN 8.-9.10.2015.pdf (PDF, 131 kB)

Agenda of I&J Meeting in Berlin

by Prasad Krishna last modified Oct 14, 2015 02:52 AM

AGENDA - I&J MEETING BERLIN 8.-9.10.2015-2.pdf (PDF, 96 kB)

Contestations of Data, ECJ Safe Harbor Ruling and Lessons for India

by Jyoti Panday last modified Oct 14, 2015 02:40 PM
The European Court of Justice has invalidated a European Commission decision which had previously concluded that the 'Safe Harbour Privacy Principles' provide adequate protection of European citizens’ privacy rights in the transfer of personal data between the European Union and the United States. The inadequacies of the framework are not news to the European Commission, and action by the ECJ has been a long time coming. The ruling raises important questions about how claims of citizenship are being negotiated in the context of the internet, and how contestations of personal data are increasingly being employed in the discourse.

The European Court of Justice (ECJ) has invalidated a European Commission (EC) decision [1] which had previously concluded that the 'Safe Harbor Privacy Principles' [2] provide adequate protection of European citizens’ privacy rights [3] in the transfer of personal data between the European Union and the United States. The challenge stems from the claim that public law enforcement authorities in America obtain personal data from organisations in safe harbour for incompatible and disproportionate purposes, in violation of the Safe Harbour Privacy Principles. The court's judgment follows the advice of the Advocate General of the Court of Justice of the European Union (CJEU), who recently opined [4] that US practices allow for large-scale collection and transfer of personal data belonging to EU citizens without their benefiting from, or having access to, judicial protection under US privacy laws. The inadequacies of the framework are not news for the Commission, and action by the ECJ has been a long time coming. The ruling raises important questions about how contestations of personal data are increasingly being employed in asserting claims of citizenship in the context of the internet.

As the highest court in Europe, the ECJ issues decisions binding on all member states. With this ruling, the ECJ has effectively restrained US firms from the indiscriminate collection and sharing of European citizens’ data on American soil. The implications of the decision are significant, because it shifts the onus of evaluating protections of personal data for EU citizens from the 4,400 companies [5] subscribing to the system onto EU privacy watchdogs. Most significantly, in addressing the rights of a citizen against an established global brand, the judgment goes beyond political and legal opinion to challenge the power imbalance that exists with reference to US-based firms.

Today, the free movement of data across borders is a critical factor in facilitating trade, financial services, governance, manufacturing, health and development. However, to consider the ruling as merely a clarification of transatlantic mechanisms for data flows misstates the real issue. At the heart of the judgment is an assessment of whether US firms apply the tests of ‘necessity and proportionality’ in the collection and surveillance of data for national security purposes. Applying the necessity and proportionality tests to national security exceptions under safe harbour has been a sticking point that has stalled the renegotiation of the agreement underway between the Commission and the American data protection authorities. [6]

For EU citizens, the stakes in the case are even higher: while their right to privacy is enshrined under EU law, they have no administrative or judicial means of redress if their data is used for reasons they did not intend. In the EU, citizens agreeing to use the services of US-based firms are presented with a false choice between accessing benefits and giving up their fundamental right to privacy. In other words, by seeking that governments and private companies provide better data protection for EU citizens, and by restricting the collection of personal data on a generalised basis without objective criteria, the ruling is effectively an assertion of ‘data sovereignty’. The term ‘data sovereignty’, while lacking a firm definition, refers to a spectrum of approaches adopted by different states to control data generated in or passing through national internet infrastructure. [7] Underlying the ruling is the growing policy divide between US and EU privacy and data protection standards, which may lead to what is referred to as the balkanization [8] of the internet in the future.

US-EU Data Protection Regime

The safe harbor pact between the EU and US was negotiated in the late 1990s as an attempt to bridge the two sides' different approaches to online privacy. Privacy is treated in the EU as a fundamental human right, while in the US it is defined in terms of consumer protection, which allows trade-offs and exceptions when national security seems to be under threat. In order to address the lower standards of data protection prevalent in the US, the pact facilitates data transfers from the EU to the US by establishing safeguards equivalent to the requirements of the EU data protection directive. The safe harbor provisions include firms undertaking not to pass personal information to third parties if EU data protection standards are not met, and giving users the right to opt out of data collection.9

The agreement was due to be renewed by May 2015,10 and while negotiations have been ongoing for two years, EU discontent with safe harbour came to the fore following Edward Snowden's revelations of the collection and monitoring facilitated by large private companies for the PRISM program, and after the announcement of the Transatlantic Trade and Investment Partnership (TTIP).11 EU member states have mostly stayed silent, as they run their own surveillance programs, often in cooperation with the NSA. EU institutions cannot intervene in matters of national security; however, they do have authority on data protection matters. European Union officials and Members of Parliament have expressed shock and outrage at the surveillance programs unveiled by Snowden's 2013 revelations. Most recently, following the CJEU Advocate General’s opinion, 50 Members of the European Parliament (MEPs) sent a strongly worded letter to the US Congress hitting back at claims of ‘digital protectionism’ emanating from the US.12 In no uncertain terms, the letter clarified that the EU has different ideas on privacy, platforms, net neutrality, encryption, Bitcoin, zero-days and copyright, and will seek to improve and change any proposal from the EC in the interest of its citizens and of all people.

Towards Harmonization

In November 2013, in an attempt to minimize the loss of trust following the Snowden revelations, the European Commission (EC) published recommendations in its report 'Rebuilding Trust in EU-US Data Flows'.13 The recommendations revealed two critical initiatives at the EU level: first, the revision of the EU-US safe harbor agreement,14 and second, the adoption of the 'EU-US Umbrella Agreement'15, a framework for data transfer for the purpose of investigating, detecting, or prosecuting a crime, including terrorism. The Umbrella Agreement was recently initialed by EU and US negotiators; it addresses only the exchange of personal data between law enforcement agencies.16 The Agreement has gained momentum in the wake of recent cases around issues of the territorial duties of providers, enforcement jurisdictions and data localisation.17 However, the adoption of the Umbrella Agreement depends on the US Congress adopting the Judicial Redress Act (JRA) into law.18

Judicial Redress Act

The JRA is a key reform that the EC is pushing for in an attempt to close the gap between the privacy rights and remedies available to US citizens and those extended to EU citizens, including by allowing EU citizens to sue in American courts. The JRA seeks to extend certain protections under the Privacy Act to records shared by the EU and other designated countries with US law enforcement agencies for the purpose of investigating, detecting, or prosecuting criminal offenses. JRA protections would extend to records shared under the Umbrella Agreement, and while the Act does include civil remedies for violations of data protection, as noted by the Center for Democracy and Technology, the present framework does not provide citizens of EU countries with redress on par with that which US persons enjoy under the Privacy Act.19

For example, the measures outlined under the JRA would apply only to countries that have concluded appropriate privacy protection agreements for data sharing for investigations and that ‘efficiently share’ such information with the US. Countries that do not have agreements with the US cannot seek these protections, leaving the personal data of their citizens open to collection and misuse by US agencies. Further, the arrangement leaves the determination of 'efficient sharing' in the hands of US authorities, and countries could lose protection if they do not comply with information sharing requests promptly. Finally, JRA protections do not apply to non-US persons, nor to records shared for purposes other than law enforcement, such as intelligence gathering. The JRA is also weakened by allowing heads of agencies to exercise their discretion to seek exemption from the Act and opt out of compliance.

Taken together, the JRA, the Umbrella Agreement and the renegotiation of the Safe Harbor Agreement need considerable improvement. It is worth noting that the EU’s acceptance of the redundancy of existing agreements, and its assertion of the independence of national data protection authorities in investigating and enforcing national laws, as demonstrated in the Schrems and Weltimmo20 cases, point to accelerated developments in the broader EU privacy landscape.

Consequences

The ECJ Safe Harbor ruling will have far-reaching consequences for the online industry. Often, costly government rulings solidify the market dominance of big companies: as high regulatory costs restrict the entry of small and medium businesses into the market, competition is gradually wiped out. Further, complying with high standards of data protection means that US firms handling European data will need to consider alternative legal means of transferring personal data. These could include evolving 'model contracts' binding them to EU data protection standards. As Schrems points out, “Big companies don’t only rely on safe harbour: they also rely on binding corporate rules and standard contractual clauses.”21

The ruling is good news for European consumers, who can now approach a national regulator to investigate suspicions of data mishandling. EU data protection regulators may be inundated with requests from companies seeking authorization of new contracts and with consumer complaints. Some are concerned that the ruling puts a dent in the globalized flow of data,22 effectively requiring data localization in Europe.23 Others have pointed out that it is unclear how the decision sits with other trade treaties, such as the TPP, that ban data localisation.24 While the implications of the decision will take some time to play out, what is certain is that US companies will have to restructure the management, storage and use of data. The ruling has created an impetus for India to push for reforms to protect its citizens from harms by US firms and to improve trade relations with the EU.

The Opportunity for India

Multiple data flows take place over the internet simultaneously, and this ubiquity of data transfers exposes individuals to privacy risks. Data processing has also acquired greater economic importance, as businesses collect and correlate data using analytic tools to create new demand, establish relationships and generate revenue for their services. The primary concern of the Schrems case may be the protection of the rights of EU citizens, but by seeking to extend these rights and ensure compliance in other jurisdictions, the case touches upon many underlying contestations around data and sovereignty.

Last year, Mr Ram Narain, India's Head of Delegation to the Working Group Plenary at the ITU, stressed: “respecting the principle of sovereignty of information through network functionality and global norms will go a long way in increasing the trust and confidence in use of ICT.”25 In the absence of the recognition of privacy as a right, and without measures or avenues that empower citizens to seek redressal against the misuse of data, the demand for data sovereignty rings hollow. The kind of framework that empowered an ordinary citizen in the EU to approach the highest court seeking redressal for the presumed overreach of a foreign government, and for harms abetted by private corporations, simply does not exist in India. Securing citizens’ data in other jurisdictions and from other governments begins with establishing protection regimes within the country.

The Indian government has also stepped up efforts to restrict the transfer of data from India, including by pushing private companies to open data centers in India.26 Negotiating data localisation does not restrict private corporations from using data in broad ways, including tailoring ads and promoting products. Data transfers also affect any organisation with international operations, for example global multinationals that need to coordinate employee data and information. Companies like Facebook, Google and Microsoft transfer and store data belonging to Indian citizens, and it is worth remembering that the National Security Agency (NSA) would have access to this data through the servers of such private companies. With no existing measures to restrict such indiscriminate access, the ruling points to the need for India to evolve strong protection mechanisms. Finally, the lack of such measures also has an economic impact: a recent Nasscom-Data Security Council of India (DSCI) survey27 pegs revenue losses incurred by the Indian IT-BPO industry at $2-2.5 billion for a sample of 15 companies. DSCI has further estimated that the outsourcing business could grow by $50 billion per annum once India is granted “data secure” status by the EU.28 The EU’s refusal to grant such status is understandable given the high standard of privacy incorporated in the European Union Data Protection Directive, a standard to which India does not yet match up. The lack of this status prevents the flow of data vital to the Digital India vision, and also affects the services industry by restricting the flow of sensitive information, such as patient records, to India.

Data and information structures are controlled and owned by private corporations, and networks transcend national borders; the foremost emphasis therefore needs to be on improving national frameworks. While enforcement mechanisms such as the Mutual Legal Assistance Treaty (MLAT) process and other methods of international cooperation may seem respectful of international borders and principles of sovereignty,29 for users who live in undemocratic or oppressive regimes such agreements carry considerable risk. Data is also increasingly stored across multiple jurisdictions, so merely applying a data-location lens to protection measures may be too narrow. Further, it should be noted that when companies begin taking data storage decisions based on legal considerations, the speed and reliability of services will suffer.30 Any future regime must reflect the challenges of data transfers taking place in legal and economic spaces that are not identical and may be in opposition. Fundamentally, the protection of privacy will always act as a barrier to the free flow of information; even so, as the Schrems ruling points out, not having adequate privacy protections can also restrict the flow of data, as has been the case for India.

The time is right for India to appoint a data controller and put in place national frameworks based on a nuanced understanding of the issues of applying jurisdiction to govern users and their data. Establishing better protection measures will not only build trust and enhance the ability of users to control data about themselves; it is also essential for sustaining the economic and social value generated from data collection. Suggestions for such frameworks have been considered previously by the Group of Experts on Privacy constituted by the Planning Commission.31 By incorporating transparency into mechanisms for data and access requests, and premising requests on established necessity and proportionality, the Indian government can lead the way in data protection standards. This will give the Indian government more teeth to challenge both the dangers of theft of data stored on servers located outside India and the indiscriminate access arising from business terms and conditions that grant such rights to third parties.

1 Commission Decision of 26 July 2000 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the safe harbour privacy principles and related frequently asked questions issued by the US Department of Commerce (notified under document number C(2000) 2441) (Text with EEA relevance.) Official Journal L 215 , 25/08/2000 P. 0007 -0047 2000/520/EC: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000D0520:EN:HTML

2 Safe Harbour Privacy Principles Issued by the U.S. Department of Commerce on July 21, 2000 http://www.export.gov/safeharbor/eu/eg_main_018475.asp

4 Advocate General’s Opinion in Case C-362/14 Maximillian Schrems v Data Protection Commissioner Court of Justice of the European Union, Press Release, No 106/15 Luxembourg, 23 September 2015 http://curia.europa.eu/jcms/upload/docs/application/pdf/2015-09/cp150106en.pdf

5 Jennifer Baker, ‘EU desperately pushes just-as-dodgy safe harbour alternatives’, The Register, October 7, 2015 http://www.theregister.co.uk/2015/10/07/eu_pushes_safe_harbour_alternatives/ 

6 Draft Report, General Data Protection Regulation, Committee on Civil Liberties, Justice and Home Affairs, European Parliament, 2009-2014 http://www.europarl.europa.eu/meetdocs/2009_2014/documents/libe/pr/922/922387/922387en.pdf

7 Dana Polatin-Reuben, Joss Wright, ‘An Internet with BRICS Characteristics: Data Sovereignty and the Balkanisation of the Internet’, University of Oxford, July 7, 2014 https://www.usenix.org/system/files/conference/foci14/foci14-polatin-reuben.pdf

8 Sasha Meinrath, The Future of the Internet: Balkanization and Borders, Time, October 2013 http://ideas.time.com/2013/10/11/the-future-of-the-internet-balkanization-and-borders/

9 Safe Harbour Privacy Principles, Issued by the U.S. Department of Commerce, July 2001 http://www.export.gov/safeharbor/eu/eg_main_018475.asp

10 Facebook case may force European firms to change data storage practices, The Guardian, September 23, 2015 http://www.theguardian.com/us-news/2015/sep/23/us-intelligence-services-surveillance-privacy

11 Privacy Tracker, US-EU Safe Harbor Under Pressure, August 2, 2013 https://iapp.org/news/a/us-eu-safe-harbor-under-pressure

12 Kieren McCarthy, Privacy, net neutrality, security, encryption ... Europe tells Obama, US Congress to back off, The Register, 23 September, 2015 http://www.theregister.co.uk/2015/09/23/european_politicians_to_congress_back_off/

13 Communication from the Commission to the European Parliament and the Council, Rebuilding Trust in EU-US Data Flows, European Commission, November 2013 http://ec.europa.eu/justice/data-protection/files/com_2013_846_en.pdf

14 Safe Harbor on trial in the European Union, Access Blog, September 2014 https://www.accessnow.org/blog/2014/11/13/safe-harbor-on-trial-in-the-european-union

15 European Commission - Fact Sheet Questions and Answers on the EU-US data protection "Umbrella agreement", September 8, 2015 http://europa.eu/rapid/press-release_MEMO-15-5612_en.htm 

16 McGuire Woods, ‘EU and U.S. reach “Umbrella Agreement” on data transfers’, Lexology, September 14, 2015 http://www.lexology.com/library/detail.aspx?g=422bca41-2d54-4648-ae57-00d678515e1f

17 Andrew Woods, Lowering the Temperature on the Microsoft-Ireland Case, Lawfare September, 2015 https://www.lawfareblog.com/lowering-temperature-microsoft-ireland-case

18 Jens-Henrik Jeppesen, Greg Nojeim, ‘The EU-US Umbrella Agreement and the Judicial Redress Act: Small Steps Forward for EU Citizens’ Privacy Rights’, October 5, 2015 https://cdt.org/blog/the-eu-us-umbrella-agreement-and-the-judicial-redress-act-small-steps-forward-for-eu-citizens-privacy-rights/

19 Ibid 18.

20 Landmark ECJ data protection ruling could impact Facebook and Google, The Guardian, 2 October, 2015 http://www.theguardian.com/technology/2015/oct/02/landmark-ecj-data-protection-ruling-facebook-google-weltimmo

21 Julia Powles, Tech companies like Facebook not above the law, says Max Schrems, The Guardian, October 9, 2015 http://www.theguardian.com/technology/2015/oct/09/facebook-data-privacy-max-schrems-european-court-of-justice

22 Adam Thierer, Unintended Consequences of the EU Safe Harbor Ruling, The Technology Liberation Front, October 6, 2015 http://techliberation.com/2015/10/06/unintended-consequenses-of-the-eu-safe-harbor-ruling/#more-75831

23 Anupam Chander, Tweeted ECJ #schrems ruling may effectively require data localization within Europe, https://twitter.com/AnupamChander/status/651369730754801665

24 Lokman Tsui, Tweeted, “If the TPP bans data localization, but the ECJ ruling effectively mandates it, what does that mean for the internet?” https://twitter.com/lokmantsui/status/651393867376275456

26 Sounak Mitra, Xiaomi bets big on India despite problems, Business Standard, December 2014 http://www.business-standard.com/article/companies/xiaomi-bets-big-on-india-despite-problems-114122201023_1.html

27 Neha Alawadi, Ruling on data flow between EU & US may impact India’s IT sector, Economic Times, October 7, 2015 http://economictimes.indiatimes.com/articleshow/49250738.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

28 Pranav Menon, Data Protection Laws in India and Data Security- Impact on India and Data Security-Impact on India - EU Free Trade Agreement, CIS Access to Knowledge, 2011 http://cis-india.org/a2k/blogs/data-security-laws-india.pdf

29 Surendra Kumar Sinha, India wants Mutual Legal Assistance treaty with Bangladesh, Economic Times, October 7, 2015 http://economictimes.indiatimes.com/articleshow/49262294.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

30 Pablo Chavez, Director, Public Policy and Government Affairs, Testifying before the U.S. Senate on transparency legislation, November 3, 2013 http://googlepublicpolicy.blogspot.in/2013/11/testifying-before-us-senate-on.htm 

31 Report of the Group of Experts on Privacy (Chaired by Justice A P Shah, Former Chief Justice, Delhi High Court), Planning Commission, October 2012 http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

 

 

 

Peering behind the veil of ICANN's DIDP (II)

by Padmini Baruah — last modified Oct 15, 2015 03:14 AM
In a previous blog post, I introduced the concept of ICANN’s Documentary Information Disclosure Policy (“DIDP”) and its extremely broad grounds for non-disclosure. In this short post, I analyse every DIDP request that ICANN has ever responded to, to point out the flaws in the policy that urgently need to be remedied.

Read the previous blog post here. Every DIDP request that ICANN has ever responded to can be accessed here.


The table here is a comprehensive breakdown of all the DIDP requests that ICANN has responded to. It is to be read with this document, which contains a numbered list of the non-disclosure exceptions outlined in ICANN’s policy. What I sought to scrutinize was the number of times ICANN has provided satisfactory information, the number of times it has denied information, and the grounds for the same. What we found was alarming:

  1. Of a total of 91 requests (as of 13/10/2015), ICANN has fully and positively responded to only 11.
  2. It has responded partially to 47 of 91 requests, with some amount of information (usually that which is available as public records).
  3. It has not responded at all to 33 of 91 requests.
  4. The Non-Disclosure Clause (1)[1] has been invoked 17 times.
  5. The Non-Disclosure Clause (2)[2] has been invoked 39 times.
  6. The Non-Disclosure Clause (3)[3] has been invoked 31 times.
  7. The Non-Disclosure Clause (4)[4] has been invoked 5 times.
  8. The Non-Disclosure Clause (5)[5] has been invoked 34 times.
  9. The Non-Disclosure Clause (6)[6] has been invoked 35 times.
  10. The Non-Disclosure Clause (7)[7] has been invoked once.
  11. The Non-Disclosure Clause (8)[8] has been invoked 22 times.
  12. The Non-Disclosure Clause (9)[9] has been invoked 30 times.
  13. The Non-Disclosure Clause (10)[10] has been invoked 10 times.
  14. The Non-Disclosure Clause (11)[11] has been invoked 12 times.
  15. The Non-Disclosure Clause (12)[12] has been invoked 18 times.

This data is disturbing because it reveals that ICANN has, in practice, been able to deflect most requests for information. It has regularly invoked its clauses on internal processes and discussions with stakeholders, as well as the clauses protecting the financial interests of third parties (over 50% of the total non-disclosure clauses ever invoked - see chart below), to avoid providing information on pertinent matters such as its compliance audits and reports of abuse to registrars. We believe that even if ICANN is legally a private entity, and not at the same level as a state, it nonetheless plays the role of regulating an enormous public good, namely the Internet. There is therefore a great onus on ICANN to be far more open about the information it provides.

Finally, it is extremely disturbing that ICANN has extended full disclosure to only 12% of the requests it receives. An astonishing 88% of requests have been denied, in part or in full. It is therefore clear that ICANN is failing to uphold the transparency it claims to stand for, and this needs to be remedied at the earliest.
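These headline percentages follow directly from the response counts listed earlier; a minimal Python check, using only the figures reported in this post, confirms the arithmetic:

```python
# Response counts as reported above (as of 13/10/2015).
total_requests = 91
full_disclosure = 11
partial_response = 47
no_response = 33

# Sanity check: the three categories cover all requests.
assert full_disclosure + partial_response + no_response == total_requests

# Share of requests fully disclosed vs. denied in part or in full.
full_pct = round(100 * full_disclosure / total_requests)
denied_pct = round(100 * (partial_response + no_response) / total_requests)

print(f"Fully disclosed: {full_pct}%")          # 12%
print(f"Denied partly or wholly: {denied_pct}%")  # 88%
```

The 12% and 88% figures in the text are thus rounded shares of 11/91 and 80/91 respectively.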

Pie Chart 1


 

Pie Chart 2


[1]Information provided by or to a government or international organization, or any form of recitation of such information, in the expectation that the information will be kept confidential and/or would or likely would materially prejudice ICANN's relationship with that party

[2]Internal information that, if disclosed, would or would be likely to compromise the integrity of ICANN's deliberative and decision-making process by inhibiting the candid exchange of ideas and communications, including internal documents, memoranda, and other similar communications to or from ICANN Directors, ICANN Directors' Advisors, ICANN staff, ICANN consultants, ICANN contractors, and ICANN agents

[3]Information exchanged, prepared for, or derived from the deliberative and decision-making process between ICANN, its constituents, and/or other entities with which ICANN cooperates that, if disclosed, would or would be likely to compromise the integrity of the deliberative and decision-making process between and among ICANN, its constituents, and/or other entities with which ICANN cooperates by inhibiting the candid exchange of ideas and communications

[4]Personnel, medical, contractual, remuneration, and similar records relating to an individual's personal information, when the disclosure of such information would or likely would constitute an invasion of personal privacy, as well as proceedings of internal appeal mechanisms and investigations

[5]Information provided to ICANN by a party that, if disclosed, would or would be likely to materially prejudice the commercial interests, financial interests, and/or competitive position of such party or was provided to ICANN pursuant to a nondisclosure agreement or nondisclosure provision within an agreement

[6]Confidential business information and/or internal policies and procedures

[7]Information that, if disclosed, would or would be likely to endanger the life, health, or safety of any individual or materially prejudice the administration of justice

[8]Information subject to the attorney– client, attorney work product privilege, or any other applicable privilege, or disclosure of which might prejudice any internal, governmental, or legal investigation

[9]Drafts of all correspondence, reports, documents, agreements, contracts, emails, or any other forms of communication

[10]Information that relates in any way to the security and stability of the Internet, including the operation of the L Root or any changes, modifications, or additions to the root zone

[11]Trade secrets and commercial and financial information not publicly disclosed by ICANN

[12]Information requests: (i) which are not reasonable; (ii) which are excessive or overly burdensome; (iii) complying with which is not feasible; or (iv) are made with an abusive or vexatious purpose or by a vexatious or querulous individual

Comments on the Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

by Geetha Hariharan last modified Oct 16, 2015 02:44 AM
On 9 October 2015, the Zero Draft of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released. Comments were sought on the Zero Draft from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

These comments were prepared by Geetha Hariharan with inputs from Sumandro Chattapadhyay, Pranesh Prakash, Sunil Abraham, Japreet Grewal and Nehaa Chaudhari. Download the comments here.


  1. The Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“Zero Draft”) is divided into three sections: (A) ICT for Development; (B) Internet Governance; (C) Implementation and Follow-up. CIS’ comments follow the same structure.
  2. The Zero Draft is a commendable document, covering crucial areas of growth and challenges surrounding the WSIS. The Zero Draft makes detailed references to development-related challenges, noting the persistent digital divide, the importance of universal access, innovation and investment, and of enabling legal and regulatory environments conducive to the same. It also takes note of financial mechanisms, without which principles would remain toothless. Issues surrounding Internet governance, particularly net neutrality, privacy and the continuation of the IGF are included in the Zero Draft.
  3. However, we believe that references to these issues are inadequate to make progress on existing challenges. Issues surrounding ICT for Development and Internet Governance have scarcely changed in the past ten years. Though we may laud the progress so far achieved, challenges around universal access and connectivity, the digital divide, insufficient funding, diverse and conflicting legal systems surrounding the Internet, the gender divide, and online harassment persist. Moreover, the working of the IGF and the process of Enhanced Cooperation, both laid down with great anticipation in the Tunis Agenda, have been found wanting.
  4. These need to be addressed more clearly and strongly in the Zero Draft. In light of these shortcomings, we suggest the following changes to the Zero Draft, in the hope that they are accepted.
    A. ICT for Development
  5. Paragraphs 16-21 elaborate upon the digital divide, both the progress made and the challenges that remain. While the Zero Draft recognizes the disparities in access to the Internet among countries, between men and women, and among the languages of Internet content, it fails to attend to two issues.
  6. First, accessibility for persons with disabilities continues to be an immense challenge. Since the mandate of the WSIS involves universal access and the bridging of the digital divide, it is necessary that the Zero Draft take note of this continuing challenge.
  7. We suggest the insertion of Para 20A after Para 20:
    “20A. We draw attention also to the digital divide adversely affecting the accessibility of persons with disabilities. We call on all stakeholders to take immediate measures to ensure accessibility for persons with disabilities by 2020, and to enhance their capacity and access to ICTs.”
  8. Second, while the digital divide among the consumers of ICTs has decreased since 2003-2005, the digital production divide goes unmentioned. The developing world continues to have fewer producers of technology compared to their sheer concentration in the developed world – so much so that countries like India are currently pushing for foreign investment through missions like ‘Digital India’. Of course, the Zero Draft refers to the importance of private sector investment (Para 31). But it fails to point out that currently, such investment originates from corporations in the developed world. For this digital production divide to disappear, restrictions on innovation – restrictive patent or copyright regimes, for instance – should be removed, among other measures. Equitable development is the key.
  9. Ongoing negotiations of plurilateral agreements such as the Trans-Pacific Partnership (TPP) go unmentioned in the Zero Draft. This is shocking. The TPP has been criticized for its excessive leeway and support for IP rightsholders, while incorporating non-binding commitments on the rights of users (see Clause QQ.G.17 on copyright exceptions and limitations, QQ.H.4 on damages and QQ.C.12 on ccTLD WHOIS, https://wikileaks.org/tpp-ip3/WikiLeaks-TPP-IP-Chapter/WikiLeaks-TPP-IP-Chapter-051015.pdf). Plaudits for progress made on the digital divide would be lip service if such agreements were not denounced.
  10. Therefore, we propose the addition of Para 20B after Para 20:
    “20B. We draw attention also to the digital production divide among countries, recognizing that domestic innovation and production are instrumental in achieving universal connectivity. Taking note of recent negotiations surrounding restrictive and unbalanced plurilateral trade agreements, we call on stakeholders to adopt policies to ensure globally equitable development, removing restrictions on innovation and conducive to fostering domestic and local production.”
  11. Paragraph 22 of the Zero Draft acknowledges that “school curriculum requirements for ICT, open access to data and free flow of information, fostering of competition, access to finance”, etc. have “in many countries, facilitated significant gains in connectivity and sustainable development”.
  12. This is, of course, true. However, as Para 23 also recognises, access to knowledge, data and innovation have come with large costs, particularly for developing countries like India. These costs are heightened by a lack of promotion and adoption of open standards, open access, open educational resources, open data (including open government data), and other free and open source practices. These can help alleviate costs, reduce duplication of efforts, and provide an impetus to innovation and connectivity globally.
  13. Not only this, but the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of “Data Revolution for Sustainable Development”. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.
  14. As such, we suggest that following clause be inserted as Para 23A to the Zero Draft:

“23A. We recognize the importance of access to open, affordable, and reliable technologies and services, open access to knowledge, and open data, including open government data, and encourage all stakeholders to explore concrete options to facilitate the same.”

15. Paragraph 30 of the Zero Draft laments “the lack of progress on the Digital Solidarity Fund”, and calls “for a review of options for its future”.

16. The Digital Solidarity Fund was established with the objective of “transforming the digital divide into digital opportunities for the developing world” through voluntary contributions [Para 28, Tunis Agenda]. It was an innovative financial mechanism to help bridge the digital divide between developed and developing countries. This divide continues to exist, as the Zero Draft itself recognizes in Paragraphs 16-21.

17. Given the persistent digital divide, a “call for review of options” as to the future of the Digital Solidarity Fund is inadequate to enable developing countries to achieve parity with developed countries. A stronger and more definite commitment is required.

18. As such, we suggest the following language in place of the current Para 30:

“30. We express concern at the lack of progress on the Digital Solidarity Fund, welcomed in Tunis as an innovative financial mechanism of a voluntary nature, and we call for voluntary commitments from States to revive and sustain the Digital Solidarity Fund.”

19. Paragraph 31 of the Zero Draft recognizes the importance of “legal and regulatory frameworks conducive to investment and innovation”. This is eminently laudable. However, a broader vision would better pave the way for affordable and widespread access to the devices and technology necessary for universal connectivity.

20. We suggest the following additions to Para 31:

“31. We recognise the critical importance of private sector investment in ICT access, content and services, and of legal and regulatory frameworks conducive to local investment and expansive, permissionless innovation.”

B. Internet Governance

21. Paragraph 32 of the Zero Draft recognizes the “general agreement that the governance of the Internet should be open, inclusive, and transparent”. Para 37 takes into account “the report of the CSTD Working Group on improvements to the IGF”. Para 37 also affirms the intention of the General Assembly to extend the life of the IGF by (at least) another 5 years, and acknowledges the “unique role of the IGF”.

22. The IGF is, of course, unique and crucial to global Internet governance. Over the last 10 years, diverse stakeholders have made major strides in beginning and sustaining conversations on issues critical to Internet governance: human rights, inclusiveness and diversity, universal access to connectivity, and emerging issues such as net neutrality and the right to be forgotten, among several others. Through its many arms – the Dynamic Coalitions, the Best Practice Forums, Birds-of-a-Feather meetings and workshops – the IGF has made it possible for stakeholders to connect.

23. However, the constitution and functioning of the IGF have not been without lament and controversy. Foremost among the laments has been the IGF’s evident lack of outcome-orientation, a point that remains debated. Second, the composition and functioning of the MAG, particularly its transparency, have come under the microscope several times. One of the suggestions of the CSTD Working Group on Improvements to the IGF concerned the structure and working methods of the Multistakeholder Advisory Group (MAG). The Working Group recommended that the “process of selection of MAG members should be inclusive, predictable, transparent and fully documented” (Section II.2, Clause 21(a), Page 5 of the Report).

24. Transparency in the structure and working methods of the MAG is critical to the credibility and impact of the IGF. The functioning of the IGF depends, in large part, on the MAG. The UN Secretary General established the MAG, and it advises the Secretary General on the programme and schedule of the IGF meetings each year (see <http://www.intgovforum.org/cms/mag/44-about-the-mag>). Under its Terms of Reference, the MAG decides the main themes and sub-themes for each IGF, sets or modifies the rules of engagement, organizes the main plenary sessions, coordinates workshop panels and speakers, and, crucially, evaluates the many submissions it receives to choose from amongst them the workshops for each IGF meeting. The content of each IGF, then, is in the hands of the MAG.

25. But the MAG is not inclusive or transparent. The MAG itself has lamented its opaque ‘black box approach’ to nomination and selection. Also, CIS’ research has shown that the process of nomination and selection of the MAG continues to be opaque. When CIS sought information on the nominators of the MAG, the IGF Secretariat responded that this information would not be made public (see <http://cis-india.org/internet-governance/blog/mag-analysis>).

26. Further, our analysis of MAG membership shows that since 2006, 26 persons have served for 6 years or more on the MAG. This is astounding, since under the MAG Terms of Reference, MAG members are nominated for a term of 1 year. This 1-year term is “automatically renewable for 2 more consecutive years”, but such renewal is contingent on an evaluation of the engagement of MAG members in their activities (see <http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference>). MAG members ought not, in accordance with their Terms of Reference, to serve for more than 3 consecutive years. But of the 182 MAG members, around 62 have served beyond the 3 consecutive years their Terms of Reference allow (see <http://cis-india.org/internet-governance/blog/mag-analysis>).
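The overstay check underlying these figures is simple to reproduce. The following is an illustrative sketch only – not CIS’s actual analysis script – using hypothetical membership records; it tallies each member’s longest run of consecutive service years and flags anyone exceeding the 3 consecutive years the Terms of Reference allow:

```python
def members_exceeding_term_limit(service_records, max_consecutive_years=3):
    """Return members whose longest run of consecutive service years
    exceeds the cap (the MAG ToR allow a 1-year term, renewable for
    2 more consecutive years, i.e. 3 consecutive years in total)."""
    years_by_member = {}
    for member, year in service_records:
        years_by_member.setdefault(member, set()).add(year)

    flagged = []
    for member in sorted(years_by_member):
        longest = run = 0
        prev = None
        for y in sorted(years_by_member[member]):
            # Extend the run if this year directly follows the previous one.
            run = run + 1 if prev is not None and y == prev + 1 else 1
            longest = max(longest, run)
            prev = y
        if longest > max_consecutive_years:
            flagged.append(member)
    return flagged

# Hypothetical records: "A" serves 2006-2010 (5 consecutive years), "B" serves 3.
records = [("A", y) for y in range(2006, 2011)] + [("B", y) for y in (2006, 2007, 2008)]
print(members_exceeding_term_limit(records))  # ['A']
```

Run against the full public membership rolls, a tally of this kind is what yields the roughly 62 members who served beyond their permitted terms.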

27. Not only this, but our research showed that 36% of all MAG members since 2006 have hailed from the Western European and Others Group (see <http://cis-india.org/internet-governance/blog/mag-analysis>). This indicates a lack of inclusiveness, though the MAG is certainly more inclusive in its composition and functioning than other I-Star organisations such as ICANN.

28. Tackling these infirmities within the MAG would go a long way in ensuring that the IGF lives up to its purpose. Therefore, we suggest the following additions to Para 37:

“37. We acknowledge the unique role of the Internet Governance Forum (IGF) as a multistakeholder platform for discussion of Internet governance issues, and take note of the report and recommendations of the CSTD Working Group on improvements to the IGF, which was approved by the General Assembly in its resolution, and ongoing work to implement the findings of that report. We reaffirm the principles of openness, inclusiveness and transparency in the constitution, organisation and functioning of the IGF, and in particular, in the nomination and selection of the Multistakeholder Advisory Group (MAG). We extend the IGF mandate for another five years with its current mandate as set out in paragraph 72 of the Tunis Agenda for the Information Society. We recognize that, at the end of this period, progress must be made on Forum outcomes and participation of relevant stakeholders from developing countries.”

29. Paragraphs 32-37 of the Zero Draft make mention of “open, inclusive, and transparent” governance of the Internet. Yet the Draft fails to take note of the lack of inclusiveness and diversity in Internet governance organisations – extending across the representation, participation and operations of these organisations. In many cases, the mention of inclusiveness and diversity becomes tokenism, or a formal (but not operational) principle. In substantive terms, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, the ‘diversity’ mailing list of the IETF has very low traffic. Within ICANN, 307 out of 672 registries listed in ICANN’s registry directory are based in the United States, while 624 of the 1010 ICANN-accredited registrars are US-based. Not only this, but 80% of the respondents to the ICG’s call for proposals were male. A truly global – and open, inclusive and transparent – governance of the Internet must not be so skewed.

30. We propose, therefore, the addition of a Para 37A after Para 37:

“37A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, and call upon these organisations to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

31. Paragraph 36 of the Zero Draft notes that “a number of member states have called for an international legal framework for Internet governance.” But it makes no reference to ICANN or to the importance of the ongoing IANA transition to global Internet governance. ICANN and its monopoly over several critical Internet resources was one of the key drivers of the WSIS in 2003-2005. Unfortunately, this focus seems to have shifted entirely. ‘Open’, ‘inclusive’, ‘transparent’ and ‘global’ are misnomers as principles of Internet governance so long as ICANN – and in effect, the United States – continues to hold a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be under the disproportionate control of any one jurisdiction.

32. Therefore, we suggest the following Para 37B after Para 37:

“37B. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

33. Paragraph 43 of the Zero Draft encourages “all stakeholders to ensure respect for privacy and the protection of personal information and data”. But the Zero Draft inadvertently leaves out the report of the Office of the UN High Commissioner for Human Rights on digital privacy, ‘The right to privacy in the digital age’ (A/HRC/27/37). This report, adopted by the Human Rights Council in June 2014, affirms the importance of the right to privacy in our increasingly digital age, and offers crucial insight into recent erosions of privacy. It is both fitting and necessary that the General Assembly take note of and affirm the said report in the context of digital privacy.

34. We offer the following suggestion as an addition to Para 43:

“43. We emphasise that no person shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home, or correspondence, consistent with countries’ applicable obligations under international human rights law. In this regard, we acknowledge the report of the Office of the UN High Commissioner for Human Rights, ‘The right to privacy in the digital age’ (A/HRC/27/37, 30 June 2014), and take note of its findings. We encourage all stakeholders to ensure respect for privacy and the protection of personal information and data.”

35. Paragraphs 40-44 of the Zero Draft state that communication is a fundamental human need, reaffirming Article 19 of the International Covenant on Civil and Political Rights, with its attendant narrow limitations. The Zero Draft also underscores the need to respect the independence of the press. In particular, it reaffirms the principle that the same rights that people enjoy offline must also be protected online.

36. Further, in Para 31, the Zero Draft recognizes the “critical importance of private sector investment in ICT access, content, and services”. This is true, of course, but corporations also play a crucial role in facilitating the freedom of speech and expression (and all other related rights) on the Internet. As the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online. They are, in sum, intermediaries without whom the Internet cannot function.

37. Given this, it is essential that the outcome document of the WSIS+10 Overall Review recognize and affirm the role of the private sector, and crucially, its responsibilities to respect and protect human rights online.

38. We suggest, therefore, the insertion of the following Para 42A after Para 42:

“42A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

C. Implementation and Follow-up

39. Para 57 of the Zero Draft calls for a review of the WSIS Outcomes, and leaves a blank space inviting suggestions for the year of the review. How often, then, should the review of implementation of the WSIS+10 Outcomes take place?

40. It is true, of course, that reviews of the implementation of WSIS Outcomes are necessary to take stock of progress and challenges. However, we caution against annual, biennial or other closely-spaced reviews, owing to concerns surrounding budgetary allocations.

41. Reviews of the implementation of outcomes (typically followed by an Outcome Document) come at considerable cost, which is budgeted and met through contributions (sometimes voluntary) from states. Were reviews too closely spaced, budgets that ought ideally to be utilized to bridge digital divides and ensure universal connectivity, particularly for developing states, would be misspent on the reviews themselves. Moreover, closely-spaced reviews would provide only superficial quantitative assessments of progress, and would not throw light on longer-term or qualitative impacts.

Comments on the Zero Draft of the UN General Assembly

by Prasad Krishna last modified Oct 16, 2015 02:41 AM

Final_CIS_Comments_UNGA_WSIS_Zero_Draft.pdf — PDF document, 478 kB (490106 bytes)

CyFy Agenda

by Prasad Krishna last modified Oct 16, 2015 03:01 AM

CyFyAgendaFinal.pdf — PDF document, 190 kB (195156 bytes)

The 'Global Multistakeholder Community' is Neither Global Nor Multistakeholder

by Pranesh Prakash last modified Nov 03, 2016 10:42 AM
CIS research shows how Western, male, and industry-driven the IANA transition process actually is.

 

In March 2014, the US government announced that it would end its contract with ICANN to run the Internet Assigned Numbers Authority (IANA), and hand over control to the “global multistakeholder community”. It insisted that the plan for the transition had to come through a multistakeholder process and involve stakeholders “across the global Internet community”.

Analysis of the process since then shows that the “global multistakeholder community” that converges at ICANN has not actually represented the disparate interests and concerns of different stakeholders. CIS research has found that the discussions around the IANA transition have been driven not by a “global multistakeholder community”, but mostly by men from industry in North America and Western Europe.

CIS analysed the five main mailing lists where the IANA transition plan was formulated: ICANN’s ICG Stewardship and CCWG Accountability lists; the IETF’s IANAPLAN list; and the NRO’s IANAXFER and CRISP lists. What we found was quite disheartening.

  • A total of 239 individuals participated cumulatively, across all five lists.
  • Only 98 substantively contributed to the final shape of the ICG proposal, taking a count of 20 mails (admittedly, an arbitrary cut-off) as a substantive contribution; 12 of these 98 were ICANN staff, some of whom were largely performing an administrative function.

We decided to look at the diversity among these substantive contributors by gender, stakeholder grouping, and region. We relied on public records, including GNSO SOI statements, and extensive searches on the Web. Given this method, there may be inadvertent errors, but the findings are so stark that even a few errors would not affect them much.

  • 2 in 5 (39 of 98, or 40%) were from a single country: the United States of America.
  • 4 in 5 (77 of 98) were from countries which are part of the WEOG UN grouping (which includes Western Europe, the US, Canada, Israel, Australia, and New Zealand), a grouping comprising only developed countries.
  • None were from the Eastern European Group (which includes Russia), and only 5 of 98 were from all of GRULAC (the Latin American and Caribbean Group).
  • 4 in 5 (77 of 98) were male and 21 were female.
  • 4 in 5 (76 of 98) were from industry or the technical community, and only 4 (about 1 in 25) were identifiable as primarily speaking on behalf of governments.
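The method behind these figures – aggregate mail counts per participant across the five lists, apply the 20-mail cut-off, then compute each group’s share – can be sketched in a few lines. This is an illustrative reconstruction, not the actual CIS analysis code; the sample counts are hypothetical, and treating “at least 20 mails” as the threshold is our reading of the cut-off described above:

```python
def substantive_contributors(mail_counts, threshold=20):
    """Participants whose aggregate mail count meets the (admittedly
    arbitrary) cut-off treated here as a 'substantive' contribution."""
    return {person for person, n in mail_counts.items() if n >= threshold}

def group_share(group, contributors):
    """Fraction of substantive contributors belonging to a given group."""
    return len(group & contributors) / len(contributors)

# Hypothetical aggregate counts across the five mailing lists.
counts = {"alice": 45, "bob": 19, "carol": 20, "dave": 3, "erin": 88}
subs = substantive_contributors(counts)                # {'alice', 'carol', 'erin'}
print(round(group_share({"alice", "erin"}, subs), 2))  # 0.67
```

The “2 in 5” and “4 in 5” headline figures above are exactly this kind of group share, computed over the 98 substantive contributors with groups drawn from the gender, stakeholder, and regional classifications.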

This also shows that the process has utterly failed in achieving the recommendation of Paragraph 6 of the …

  • 3 in 5 registrars are from the United States of America (624 out of 1010, as of March 2014, according to ICANN's accredited registrars list), with only 0.7% (7 out of 1010) being from the 54 countries in Africa.

  • 45% of all the registries are from the United States of America! (307 out of 672 registries listed in ICANN’s registry directory in August 2015.)
  • 66% (34 of 51) of the Business Constituency at ICANN are from a single country: the United States of America. (N.B.: This page doesn’t seem to be up-to-date.)
  • This shows that businesses from the United States of America continue to dominate ICANN to a very significant degree. This is also reflected in the nature of the dialogue within ICANN, including the fact that the proposal that came out of the ICANN ‘global multistakeholder community’ on the IANA transition includes a clause requiring the ‘IANA Functions Operator’ to be a US-based entity. For more on that issue, see this post on the jurisdiction issue at ICANN (or rather, on the lack of a jurisdiction issue at ICANN).

    Policy Brief: Oversight Mechanisms for Surveillance

    by Elonnai Hickok last modified Nov 24, 2015 06:09 AM

    Download the PDF


    Introduction

    Across jurisdictions, the need for effective and relevant oversight mechanisms (coupled with legislative safeguards) for state surveillance has been highlighted by civil society, academia, citizens and other key stakeholders.[1] A key part of the oversight of state surveillance is the accountability of intelligence agencies. This has been recognized at the international level. Indeed, the Organization for Economic Co-operation and Development, the United Nations, the Organization for Security and Co-operation in Europe, the Parliamentary Assembly of the Council of Europe, and the Inter-Parliamentary Union have all recognized that intelligence agencies need to be subject to democratic accountability.[2] Since 2013, the need for oversight has received particular attention in light of the information disclosed through the 'Snowden revelations'.[3] Some countries, such as the US, Canada, and the UK, have regulatory mechanisms for the oversight of state surveillance and the intelligence community, while many other countries – India included – have piecemeal oversight mechanisms in place. The existence of regulatory mechanisms for state surveillance does not necessarily equate to effective oversight – and piecemeal mechanisms, depending on how they are implemented, could be more effective than comprehensive ones. This policy brief seeks to explore the purpose of oversight mechanisms for state surveillance, different forms of mechanisms, and what makes a mechanism effective and comprehensive. The brief also reviews different oversight mechanisms from the US, UK, and Canada and provides recommendations for ways in which India can strengthen its present oversight mechanisms for state surveillance and the intelligence community.

    What is the purpose and what are the different components of an oversight mechanism for State Surveillance?

    The International Principles on the Application of Human Rights to Communications Surveillance, developed through a global consultation with civil society groups, industry, and international experts, recommend that public oversight mechanisms for state surveillance be established to ensure the transparency and accountability of Communications Surveillance. To achieve this, mechanisms should have the authority to:

    • Access all potentially relevant information about State actions, including, where appropriate, access to secret or classified information;
    • Assess whether the State is making legitimate use of its lawful capabilities;
    • Evaluate whether the State has been comprehensively and accurately publishing information about the use and scope of Communications Surveillance techniques and powers in accordance with its Transparency obligations;
    • Publish periodic reports and other information relevant to Communications Surveillance; and
    • Make public determinations as to the lawfulness of those actions, including the extent to which they comply with these Principles.[4]

    What can inform oversight mechanisms for state surveillance?

    The development of effective oversight mechanisms for state surveillance can be informed by a number of factors including:

    • Rapidly changing technology – how can mechanisms adapt, account for, and evaluate perpetually changing intelligence capabilities?
    • Expanding surveillance powers – how can mechanisms evaluate and rationalize the use of expanding agency powers?
    • Tensions around secrecy, national interest, and individual rights – how can mechanisms respect, recognize, and uphold multiple competing interests and needs, including an agency's need for secrecy, the government's need to protect national security, and citizens' need to have their constitutional and fundamental rights upheld?
    • The structure, purpose, and goals of specific intelligence agencies and circumstances – how can mechanisms be sensitive and attuned to the structure, purpose, and functions of differing intelligence agencies and circumstances?

    These factors lead to further questions around:

    • The purpose of an oversight mechanism: Is an oversight mechanism meant to ensure effectiveness of an agency? Perform general reviews of agency performance? Supervise the actions of an agency? Hold an agency accountable for misconduct?
    • The structure of an oversight mechanism: Is it internal? External? A combination of both? How many oversight mechanisms should agencies be held accountable to?
    • The functions of an oversight mechanism: Is an oversight mechanism meant to inspect? Evaluate? Investigate? Report?
    • The powers of an oversight mechanism: What extent of access does an oversight mechanism need, and what should it have, to the internal workings of security agencies and law enforcement in order to carry out due diligence? What extent of legal backing should an oversight mechanism have to hold agencies legally accountable?

    What oversight mechanisms for State Surveillance exist in India?

    In India, the oversight 'ecosystem' for state surveillance comprises:

    1. Review committee: Under the Indian Telegraph Act 1885 and the Rules issued thereunder (Rule 419A), a Central Review Committee consisting of the Cabinet Secretary, the Secretary of Legal Affairs to the Government of India, and the Secretary of the Department of Telecommunications to the Government of India is responsible for meeting on a bi-monthly basis and reviewing the legality of interception directions. The review committee has the power to revoke the directions and order the destruction of intercepted material.[5] This review committee is also responsible for evaluating interception, monitoring, and decryption orders issued under section 69 of the Information Technology Act 2000,[6] and orders for the monitoring and collection of traffic data under section 69B of the Information Technology Act 2000.[7]
    2. Authorizing Authorities: The Secretary in the Ministry of Home Affairs of the Central Government is responsible for authorizing requests for the interception, monitoring, and decryption of communications issued by central agencies.[8] The Secretary in charge of the Home Department is responsible for authorizing requests for the interception, monitoring, and decryption of communications from state level agencies and law enforcement.[9] The Secretary to the Government of India in the Department of Information Technology under the Ministry of Communications and Information Technology is responsible for authorizing requests for the monitoring and collection of traffic data.[10] Any officer not below the rank of Joint Secretary to the Government of India, who has been authorised by the Union Home Secretary or the State Home Secretary in this behalf, may authorize the interception of communications in case of an emergency.[11] A Commissioner of Police, District Superintendent of Police or Magistrate may issue requests for stored data to any postal or telegraph authority.[12]
    3. Administrative authorities: India does not have an oversight mechanism for intelligence agencies, but agencies do report to different authorities. For example: the Intelligence Bureau reports to the Home Minister; the Research and Analysis Wing is under the Cabinet Secretariat and reports to the Prime Minister; the Joint Intelligence Committee (JIC), the National Technical Research Organisation (NTRO) and the Aviation Research Centre (ARC) report to the National Security Adviser; and the National Security Council Secretariat, under the NSA, serves the National Security Council.[13]

    It is important to note that though India has a Right to Information Act, most security agencies are exempt from the purview of the Act,[14] as is disclosure of any information that falls under the purview of the Official Secrets Act 1923.[15] The Official Secrets Act does not define an 'official secret', and instead protects information pertaining to national security, the defence of the country, information affecting friendly relations with foreign states, etc.[16] Information in India is designated as classified in accordance with the Manual of Departmental Security Instructions circulated by the Ministry of Home Affairs. According to the Public Records Rules 1997, “'classified records' means the files relating to the public records classified as top-secret, confidential and restricted in accordance with the procedure laid down in the Manual of Departmental Security Instruction circulated by the Ministry of Home Affairs from time to time”.[17] Bi-annually, officers evaluate and de-classify classified information and share the same with the National Archives.[18] In response to questions raised in the Lok Sabha on 5 May 2015 – regarding whether the Official Secrets Act, 1923 will be reviewed, the number of classified files stored with the Government under the Act, and whether the Government has any plans to declassify some of the files – the Ministry of Home Affairs clarified that a committee consisting of the Secretaries of the Ministry of Home Affairs, the Department of Personnel and Training, and the Department of Legal Affairs has been established to examine the provisions of the Official Secrets Act, 1923, particularly in light of the Right to Information Act, 2005. The Ministry of Home Affairs also clarified that the classification and declassification of files is done by each Government Department as per the Manual of Departmental Security Instructions, 1994, and thus there is no 'central database of the total number of classified files'.[19]

    How can India's oversight mechanism for state surveillance be clarified?

    Though these mechanisms establish a basic framework for the oversight of state surveillance in India, there are aspects of this framework that could be clarified, and ways in which it could be strengthened.

    Aspects of the present review committee that could be clarified:

    1. Powers of the review committee: Beyond having the authority to declare that orders for interception, monitoring, decryption, and collection of traffic data are not within the scope of the law, and to order the destruction of any collected information – what powers does the review committee have? Does the committee have the power to compel agencies to produce additional or supporting evidence? Does the committee have the power to compel information from the authorizing authority?
    2. Obligations of the review committee: The review committee is required to 'record its findings' as to whether the interception orders issued are in accordance with the law. Is there a standard set of questions or information that must be addressed by the committee when reviewing an order? Does the committee only review the content of the order, or does it also review the implementation of the order? Beyond recording its findings, are there any additional reporting obligations that the review committee must fulfill?
    3. Accountability of the review committee: Does the review committee answer to a higher authority? Does it have to submit its findings to other branches of the government – such as Parliament? Is there a mechanism to ensure that the review committee does indeed meet every two months and review all orders issued under the relevant sections of the Indian Telegraph Act 1885 and the Information Technology Act 2000?

    Proposed oversight mechanisms in India

    Oversight mechanisms can help avoid breaches of national security by ensuring efficiency and effectiveness in the functioning of security agencies. The need for oversight of state surveillance is not new in India. In 1999, the Union Government constituted a Committee with the mandate of reviewing the events leading up to the Pakistani aggression in Kargil and recommending measures towards ensuring national security. Though the Kargil Committee addressed surveillance from the perspective of gathering information on external forces, there are parallels in the lessons learned for state surveillance. Among other findings, the Committee's Report identified a number of limitations in the system for the collection, reporting, collation, and assessment of intelligence. The Committee also found that there was a lack of oversight of the intelligence community in India – resulting in no mechanisms for tasking the agencies, monitoring their performance and overall functioning, and evaluating the quality of their work.

    The Committee also noted that such a mechanism is a standard feature in jurisdictions across the world. The Committee emphasized this need from an economic perspective: without oversight, the Government and the nation have no way of evaluating whether or not they are receiving value for their money. The Committee recommended a review of the intelligence system with the objective of remedying such deficiencies.[20]

    In 2000, a Group of Ministers was established to review the security and intelligence apparatus of the country. In their report issued to the Prime Minister, the Group of Ministers recommended the establishment of an Intelligence Coordination Group for the purpose of providing oversight of intelligence agencies at the Central level. Specifically, the Intelligence Coordination Group would be responsible for:

    • Allocating resources to the intelligence agencies
    • Considering annual reviews on the quality of inputs
    • Approving the annual tasking for intelligence collection
    • Overseeing the functions of intelligence agencies
    • Examining national estimates and forecasts[21]

    Past critiques of the Indian surveillance regime have included the fact that intelligence agencies do not come under the purview of any oversight mechanism, including Parliament, the Right to Information Act 2005, or the Comptroller and Auditor General of India.

    In 2011, Manish Tewari, who at the time was a Member of Parliament from Ludhiana, drafted a Private Member's Bill, "The Intelligence Services (Powers and Regulation) Bill", which proposed stand-alone statutory regulation of intelligence agencies. In doing so it sought to establish an oversight mechanism for intelligence agencies operating within and outside of India. The Bill was never taken up by Parliament.[22] Broadly, the Bill sought to establish: a National Intelligence and Security Oversight Committee, which would oversee the functioning of intelligence agencies and submit an annual report to the Prime Minister; a National Intelligence Tribunal for the purpose of investigating complaints against intelligence agencies; an Intelligence Ombudsman for overseeing and ensuring the efficient functioning of agencies; and a legislative framework regulating intelligence agencies.[23]

    Proposed policy in India has also explored the possibility of coupling surveillance regulation and oversight with privacy regulation and oversight. In 2011, the Right to Privacy Bill was drafted by the Department of Personnel and Training. The Bill proposed to establish a "Central Communication Interception Review Committee" for the purpose of reviewing orders for interception issued under the Telegraph Act. The Bill also sought to establish an authorization process for surveillance undertaken by following a person, through CCTVs, or by other electronic means.[24] In contrast, the 2012 Report of the Group of Experts on Privacy, which provided recommendations for a privacy framework for India, recommended that the Privacy Commissioner should exercise broad oversight functions with respect to interception/access, audio and video recordings, the use of personal identifiers, and the use of bodily or genetic material.[25]

    A 2012 report by the Institute for Defence Studies and Analyses (IDSA) titled "A Case for Intelligence Reforms in India" highlights at least four 'gaps' in intelligence that have resulted in breaches of national security: zero intelligence, inadequate intelligence, inaccurate intelligence, and excessive intelligence, particularly in light of additional technical inputs and open source inputs.[26] In some cases, an oversight mechanism could help in remedying these gaps. The IDSA Report recommends the following steps towards an oversight mechanism for Indian intelligence:

    • Establishing an Intelligence Coordination Group (ICG) that will exercise oversight functions for the intelligence community at the Central level. This could include overseeing the functioning of the agencies, the quality of their work, and their finances.
    • Enacting legislation defining the mandates, functions, and duties of intelligence agencies.
    • Holding intelligence agencies accountable to the Comptroller & Auditor General to ensure financial accountability.
    • Establishing a Minister for National Security & Intelligence for exercising administrative authority over intelligence agencies.
    • Establishing a Parliamentary Accountability Committee for oversight of intelligence agencies through parliament.
    • Defining the extent to which intelligence agencies can be held accountable to reply to requests pertaining to violations of privacy and other human rights issued under the Right to Information Act.

    Highlighting the importance of accountable surveillance frameworks, in 2015 Santosh Jha, a director general in India's Ministry of External Affairs, stated at the UN General Assembly that the global community needs "to create frameworks so that Internet surveillance practices motivated by security concerns are conducted within a truly transparent and accountable framework."[27]

    In what ways can India's mechanisms for state surveillance be strengthened?

    Building upon the recommendations from the Kargil Committee, the Report from the Group of Ministers, the Report of the Group of Experts on Privacy, the Draft Privacy Bill 2011, and the IDSA report, ways in which the framework for oversight of state surveillance in India could be strengthened include:

    • Oversight to enhance public understanding, debate, accountability, and democratic governance: State surveillance is unique in that it is enabled with the objective of protecting a nation's security. Yet, to do so, it requires citizens to trust the actions of intelligence agencies and to accept possible intrusions into their personal lives, including activities that might infringe on their constitutional rights (such as freedom of expression), for a larger outcome of security. Because of this, oversight mechanisms for state surveillance must balance the demands of national security with some form of accountability to the public.
    • Independence of oversight mechanisms: Given the Indian context, it is particularly important that an oversight mechanism for surveillance powers and the intelligence community is independent from, and capable of addressing, political interference. Indeed, the majority of cases regarding illegal interceptions that have reached the public sphere pertain to the surveillance of political figures and political turf wars.[28] Furthermore, though the current Review Committee established under the Indian Telegraph Act does not have a member from the Ministry of Home Affairs (the Ministry responsible for authorizing interception requests), it is unclear how independent this committee is from the authorizing Ministry. To ensure unbiased oversight, it is important that oversight mechanisms are independent.
    • Legislative regulation of intelligence agencies: Currently, intelligence agencies are provided surveillance powers through the Information Technology Act and the Telegraph Act, but beyond the National Investigation Agency Act, which establishes the National Investigation Agency, there is no legal mechanism creating, regulating, and overseeing the intelligence agencies that use these powers. In the 'surveillance ecosystem' this creates a policy vacuum, where an agency is enabled through law with a surveillance power and provided a procedure to follow, but is not held legally accountable for the effective, ethical, and legal use of that power. To ensure legal accountability for the use of surveillance techniques, it is important that intelligence agencies are created through legislation that includes oversight provisions.
    • Comprehensive oversight of all intrusive measures: Currently, the Review Committee established under the Telegraph Act is responsible for the evaluation of orders for the interception, monitoring, and decryption of communications and the collection of traffic data. The Review Committee is not responsible for reviewing the implementation or effectiveness of such orders, and is not responsible for reviewing orders for access to stored information or other forms of electronic surveillance. This situation is a result of: (1) present oversight mechanisms not having comprehensive mandates; (2) different laws in India enabling different levels of access without providing a harmonized oversight mechanism; and (3) Indian law not formally addressing and regulating emerging surveillance technologies and techniques. To ensure effectiveness, it is important for oversight mechanisms to be comprehensive in mandate and scope.
    • Establishment of a tribunal or redress mechanism: India currently does not have a specified means for individuals to seek redress for unlawful surveillance or for surveillance that they feel has violated their rights. Thus, individuals must take any complaint to the courts. The downsides of such a system include the fact that the judiciary might not be equipped to make determinations regarding the violation, that the court system in India is overwhelmed and due process is therefore slow, and that, given the sensitive nature of the topic, courts might not be able to immediately access relevant documentation. To ensure redress, it is important that a tribunal or a redress mechanism with appropriate powers is established to address complaints or violations pertaining to surveillance.
    • Annual reporting by security agencies, law enforcement, and service providers: Information regarding orders for surveillance and the implementation of the same is not disclosed by the government or by service providers in India.[29] Indeed, service providers by law are required to maintain the confidentiality of orders for the interception, monitoring, or decryption of communications and monitoring or collection of traffic data. At the minimum, an oversight mechanism should receive annual reports from security agencies, law enforcement, and service providers with respect to the surveillance undertaken. Edited versions of these Reports could be shared with Parliament and the public.
    • Consistent and mandatory reviews of relevant legislation: Though committees have been established to review various legislation and policy pertaining to state surveillance, the time frame for these reviews is not clearly defined by law. These reviews should take place on a consistent and publicly stated time frame. Furthermore, legislation enabling surveillance in India does not require review and assessment for relevance, adequacy, necessity, and proportionality after a certain period of time. Mandating that legislation regulating surveillance is subject to review on a consistent basis is important in ensuring that its provisions are relevant, proportionate, adequate, and necessary.
    • Transparency of classification and declassification process and centralization of de-classified records: Currently, the Ministry of Home Affairs establishes the process that government departments must follow for classifying and de-classifying information. This process is not publicly available and de-classified information is stored only with the respective department. For transparency purposes, it is important that the process for classification of records be made public and the practice of classification of information take place in exceptional cases. Furthermore, de-classified records should be stored centrally and made easily accessible to the public.
    • Executive and administrative orders regarding establishing of agencies and surveillance projects should be in the public domain: Intelligence agencies and surveillance projects in India are typically enabled through executive orders. For example, NATGRID was established via an executive order, but this order is not publicly available. As a form of transparency and accountability to the public, it is important that if executive orders establish an agency or a surveillance project, these are made available to the public to the extent possible.
    • Oversight of surveillance should incorporate privacy and cyber/national security: Increasingly, issues of surveillance, privacy, and cyber security are interlinked. Any move to establish an oversight mechanism for surveillance and the intelligence community must incorporate and take into consideration privacy and cyber security. This could mean that an oversight mechanism for surveillance in India works closely with CERT-IN and a potential privacy commissioner, or that the oversight mechanism contains internal expertise in these areas to ensure that they are adequately considered.
    • Oversight by design: Just as the concept of privacy by design promotes the ideal that principles of privacy are built into devices, processes, services, organizations, and regulation from the outset, oversight mechanisms for state surveillance should also be built in from the outset of surveillance projects and enabling legislation. In the past, this has not been the practice in India. The National Intelligence Grid, an intelligence system that sought to link twenty-one databases together, making such information easily and readily accessible to security agencies, was created without its oversight ever being defined.[30] Similarly, the Centralized Monitoring System was conceptualized to automate and internalize the process of intercepting communications by allowing security agencies to intercept communications directly and bypass the service provider.[31] Despite amendments to the Telecom Licenses providing for the technical components of this project, oversight of the project, and of security agencies directly accessing information, has yet to be defined.[32]

    Examples of oversight mechanisms for State Surveillance: the US, the UK, and Canada

    United States

    In the United States the oversight 'ecosystem' for state surveillance is made up of:

    The Foreign Intelligence Surveillance Court

    The U.S. Foreign Intelligence Surveillance Court (FISC) is the predominant oversight mechanism for state surveillance and oversees and authorizes the actions of the Federal Bureau of Investigation and the National Security Agency.[33] The Court was established by the enactment of the Foreign Intelligence Surveillance Act (FISA) 1978 and is governed by Rules of Procedure, the current Rules having been formulated in 2010.[34] The Court is empowered to ensure compliance with the orders that it issues, and the government is obligated to inform the Court if orders are breached.[35] FISA allows individuals who receive an order from the Court to challenge it,[36] and public filings are available on the Court's website.[37] Additionally, organizations including the American Civil Liberties Union[38] and the Electronic Frontier Foundation have filed motions with the Court for the release of records.[39] Similarly, Google has approached the Court for the ability to publish aggregate information regarding the FISA orders that the company receives.[40]

    Government Accountability Office

    The U.S. Government Accountability Office (GAO) is an independent office that works for Congress and conducts audits, investigates, provides recommendations, and issues legal decisions and opinions regarding federal government spending of taxpayer money by the government and associated agencies, including the Defense Department, the FBI, and Homeland Security.[41] The head of the GAO is the Comptroller General of the United States, who is appointed by the President. The GAO will initiate an investigation if requested by congressional committees or subcommittees or if required under public law or committee reports. The GAO has reviewed topics relating to Homeland Security, Information Security, Justice and Law Enforcement, National Defense, and Telecommunications.[42] For example, in June 2015 the GAO completed an investigation and report on the "Foreign Terrorist Organization Process and U.S. Agency Enforcement Actions"[43] and an investigation on "Cyber Security: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies".[44]

    Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence

    The U.S. Senate Select Committee on Intelligence is a standing committee of the U.S. Senate with the mandate to review intelligence activities and programs and ensure that these are in line with the Constitution and other relevant laws. The Committee is also responsible for submitting appropriate legislative proposals to the Senate, and for reporting to the Senate on intelligence activities and programs.[45] The House Permanent Select Committee on Intelligence holds similar jurisdiction. The House Permanent Select Committee is bound to secrecy and cannot disclose classified information except when authorized to do so. Such an obligation does not exist for the Senate Select Committee on Intelligence, which can disclose classified information publicly on its own.[46]

    Privacy and Civil Liberties Oversight Board (PCLOB)

    The Privacy and Civil Liberties Oversight Board was established by the Implementing Recommendations of the 9/11 Commission Act of 2007 and is located within the executive branch.[47] The objective of the PCLOB is to ensure that the Federal Government's actions to combat terrorism are balanced against privacy and civil liberties. Towards this, the Board has the mandate to review and analyse anti-terrorism measures taken by the executive and ensure that such actions are balanced against privacy and civil liberties, and to ensure that privacy and civil liberties are adequately considered in the development and implementation of anti-terrorism laws, regulations, and policies.[48] The Board is responsible for developing principles to guide why, whether, when, and how the United States conducts surveillance for authorized purposes. Additionally, officers of eight federal agencies must submit reports to the PCLOB regarding the reviews that they have undertaken, the number and content of complaints received, and a summary of how each complaint was handled. In order to fulfill its mandate, the Board is authorized to access all relevant records, reports, audits, reviews, documents, papers, recommendations, and classified information. The Board may also interview and take statements from necessary personnel. The Board may request the Attorney General to issue subpoenas on the Board's behalf to individuals outside of the executive branch.[49]

    To the extent possible, the Reports of the Board are made public. Recommendations made in the Board's 2015 Report include:

    • End the NSA's bulk telephone records program
    • Add additional privacy safeguards to the bulk telephone records program
    • Enable the FISC to hear independent views on novel and significant matters
    • Expand opportunities for appellate review of FISC decisions
    • Take advantage of existing opportunities for outside legal and technical input in FISC matters
    • Publicly release new and past FISC and FISCR decisions that involve novel legal, technical, or compliance questions
    • Publicly report on the operation of the FISC Special Advocate Program
    • Permit companies to disclose information about their receipt of FISA production orders and disclose more detailed statistics on surveillance
    • Inform the PCLOB of FISA activities and provide relevant congressional reports and FISC decisions
    • Begin to develop principles for transparency
    • Disclose the scope of surveillance authorities affecting US citizens[50]

    The Wiretap Report

    The Wiretap Report is an annual compilation of information provided by federal and state officials regarding applications for orders to intercept wire, oral, or electronic communications, the offenses under investigation, the types and locations of interception devices, and the costs and duration of authorized intercepts.[51] When submitting information for the report, a judge will include the name and jurisdiction of the prosecuting official who applied for the order, the criminal offense under investigation, the type of intercept device used, the physical location of the device, and the duration of the intercept. Prosecutors provide information related to the cost of the intercept, the number of days the intercept device was in operation, the number of persons whose communications were intercepted, the number of intercepts, and the number of incriminating intercepts recorded. The results of the interception orders, such as arrests, trials, convictions, and the number of motions to suppress evidence, are also noted in the prosecutor reports. The Report is submitted to Congress and is legally required under Title III of the Omnibus Crime Control and Safe Streets Act of 1968. The Report is issued by the Administrative Office of the United States Courts.[52]

    United Kingdom

    The Intelligence and Security Committee (ISC) of Parliament

    The Intelligence and Security Committee was established by the Intelligence Services Act 1994. Members are appointed by the Prime Minister, and the Committee reports directly to the Prime Minister. Additionally, the Committee submits annual reports to Parliament. Towards this, the Committee can take evidence from cabinet ministers, senior officials, and the public.[53] The most recent report of the Committee is the 2015 "Report on Privacy and Security".[54] Members of the Committee are subject to the Official Secrets Act 1989 and have access to classified material when carrying out investigations.[55]

    Joint Intelligence Committee (JIC)

    The Joint Intelligence Committee is located in the Cabinet Office and is broadly responsible for overseeing the national intelligence organizations and providing advice to the Cabinet on issues related to security, defence, and foreign affairs. The JIC is itself overseen by the Intelligence and Security Committee.[56]

    The Interception of Communications Commissioner

    The Interception of Communications Commissioner is appointed by the Prime Minister under the Regulation of Investigatory Powers Act 2000 for the purpose of reviewing surveillance conducted by intelligence agencies, police forces, and other public authorities. Specifically, the Commissioner inspects the interception of communications, the acquisition and disclosure of communications data, the interception of communications in prisons, and unintentional electronic interception.[57] The Commissioner submits an annual report to the Prime Minister. The Reports of the Commissioner are publicly available.[58]

    The Intelligence Services Commissioner

    The Intelligence Services Commissioner is an independent office appointed by the Prime Minister and legally empowered through the Regulation of Investigatory Powers Act (RIPA) 2000. The Commissioner provides independent oversight of the use of surveillance by the UK intelligence services.[59] Specifically, the Commissioner is responsible for reviewing authorized interception orders and the actions and performance of the intelligence services.[60] The Commissioner is also responsible for providing assistance to the Investigatory Powers Tribunal, submitting annual reports to the Prime Minister on the discharge of these functions, and advising the Home Office on the need to extend the Terrorism Prevention and Investigation Measures regime.[61] Towards these ends, the Commissioner conducts in-depth audits of interception orders to ensure that the surveillance is within the scope of the law, that the surveillance was necessary for a legally established reason, that the surveillance was proportionate, that the information accessed was justified by the privacy invaded, and that the surveillance was authorized by the appropriate official. The Commissioner also conducts 'site visits' to ensure that orders are being implemented as per the law.[62] As a note, the Intelligence Services Commissioner does not review any matter that falls within the remit of the Interception of Communications Commissioner. The Commissioner has access to any information that he feels is necessary to carry out his investigations. The Reports of the Intelligence Services Commissioner are publicly available.[63]

    Investigatory Powers Tribunal

    The Investigatory Powers Tribunal is a court which investigates complaints of unlawful surveillance by public authorities or intelligence/law enforcement agencies.[64] The Tribunal was established under the Regulation of Investigatory Powers Act 2000 and has a range of oversight functions to ensure that public authorities and agencies act in compliance with the Human Rights Act 1998.[65] The Tribunal is specifically an avenue of redress for anyone who believes that they have been a victim of unlawful surveillance under RIPA or of wider human rights infringements under the Human Rights Act 1998. The Tribunal can reach seven possible outcomes for any application: 'found in favour of complainant', 'no determination in favour of complainant', 'frivolous or vexatious', 'out of time', 'out of jurisdiction', 'withdrawn', or 'no valid complaint'.[66] The Tribunal has the authority to receive and consider evidence in any form, even if inadmissible in an ordinary court.[67] Where possible, cases are available on the Tribunal's website. Decisions by the Tribunal cannot be appealed, but can be challenged in the European Court of Human Rights.[68]

    Canada

    In Canada the oversight 'ecosystem' for state surveillance includes:

    Security Intelligence Review Committee

    The Security Intelligence Review Committee is an independent body that is accountable to the Parliament of Canada and reports on the Canadian Security Intelligence Service.[69] Members of the Security Intelligence Review Committee are appointed by the Prime Minister of Canada. The committee conducts reviews on a pro-active basis and investigates complaints. Committee members have access to classified information to conduct reviews. The Committee submits an annual report to Parliament and an edited version is publicly available. The 2014 Report was titled “Lifting the Shroud of Secrecy”[70] and includes reviews of the CSIS's activities, reports on complaints and subsequent investigations, and provides recommendations.

    Office of the Communications Security Establishment Commissioner

    The Communications Security Establishment Commissioner conducts independent reviews of Communications Security Establishment (CSE) activities to evaluate whether they are within the scope of Canadian law.[71] The Commissioner submits a report to Parliament on an annual basis and has a number of powers, including the power to subpoena documents and personnel.[72] If the Commissioner believes that the CSE has not complied with the law, the Commissioner must report this to the Attorney General of Canada and to the Minister of National Defence. The Commissioner may also receive information from persons bound to secrecy if they deem it to be in the public interest to disclose such information.[73] The Commissioner is also responsible for verifying that the CSE does not surveil Canadians and for promoting measures to protect the privacy of Canadians.[74] When conducting a review, the Commissioner has the ability to examine records, receive briefings, interview relevant personnel, assess the veracity of information, listen to intercepted voice recordings, observe CSE operators and analysts to verify their work, and examine CSE electronic tools, systems, and databases to ensure compliance with the law.[75]

    Office of the Privacy Commissioner

    The Office of the Privacy Commissioner of Canada (OPC) oversees the implementation of and compliance with the Privacy Act and the Personal Information Protection and Electronic Documents Act.[76]

    The OPC is an independent body that has the authority to investigate complaints regarding the handling of personal information by government and private companies, but can only comment on the activities of security and intelligence agencies. For example, in 2014 the OPC issued the report "Checks and Controls: Reinforcing Privacy Protection and Oversight for the Canadian Intelligence Community in an Era of Cyber Surveillance".[77] The OPC can also provide testimony to Parliament and other government bodies.[78] For example, the OPC has appeared before the Senate Standing Committee on National Security and Defence on Bill C-51.[79] The OPC cannot conduct joint audits or investigations with other bodies.[80]

    Annual Interception Reports

    Under the Criminal Code of Canada, regional governments must issue annual interception reports. The reports must include the number of individuals affected by interceptions, the average duration of the interceptions, the types of crimes investigated, the number of cases brought to court, and the number of individuals notified that interception had taken place.[81]

    Conclusion

    The presence of multiple and robust oversight mechanisms for state surveillance does not necessarily correlate with effective oversight. The oversight mechanisms in the UK, Canada, and the U.S. have all been criticised. For example, the Canadian regime has been characterized as having become weaker since it removed one of its key oversight mechanisms: the Inspector General of the Canadian Security Intelligence Service, which was responsible for certifying that the Service was in compliance with the law.[82]

    Other weaknesses highlighted in the Canadian regime include the fact that different oversight bodies do not have the authority to share information with each other, and that transparency reports do not cover many new forms of surveillance.[83] Oversight mechanisms in the U.S., on the other hand, have been criticized as being opaque[84] or as lacking the political support needed to be effective.[85] The UK oversight mechanism has been criticized for not requiring judicial authorization of surveillance requests, for having opaque laws, and for not providing a strong right of redress for affected individuals.[86] These critiques demonstrate that a number of factors must come together for an oversight mechanism to be effective. Public transparency and accountability to decision-making bodies such as Parliament or Congress can ensure the effectiveness of oversight mechanisms, provide the public with the means to debate issues related to state surveillance in an informed manner, and give different bodies within the government the ability to hold the state accountable for its actions.


      [1]. For example, “Public Oversight” is one of the thirteen Necessary and Proportionate principles on state communications surveillance developed by civil society and academia globally, that should be incorporated by states into communication surveillance regimes. The principles can be accessed here: https://en.necessaryandproportionate.org/

      [2]. Hans Born and Ian Leigh, “Making Intelligence Accountable. Legal Standards and Best Practice for Oversight of Intelligence Agencies.” Pg. 13. 2005. Available at: http://www.prsindia.org/theprsblog/wp-content/uploads/2010/07/making-intelligence.pdf. Last accessed: August 6, 2015.

      [3]. For example, this point was made in the context of the UK. For more information see: Nick Clegg, "Edward Snowden's revelations made it clear: security oversight must be fit for the internet age". The Guardian. March 3, 2014. Available at: http://www.theguardian.com/commentisfree/2014/mar/03/nick-clegg-snowden-security-oversight-internet-age. Accessed: July 27, 2015.

      [4]. International Principles on the Application of Human Rights to Communications Surveillance. Available at: https://en.necessaryandproportionate.org/

      [5]. Sub Rules (16) and (17) of Rule 419A, Indian Telegraph Rules, 1951. Available at: http://www.dot.gov.in/sites/default/files/march2007.pdf. Note: This review committee is responsible for overseeing interception orders issued under the Indian Telegraph Act and the Information Technology Act.

      [6]. Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information Rules 2009. Definition q. Available at: http://dispur.nic.in/itact/it-procedure-interception-monitoring-decryption-rules-2009.pdf

      [7]. Information Technology (Procedure and safeguard for Monitoring and Collecting Traffic Data or Information Rules, 2009). Definition (n). Available at: http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

      [8]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009

      [9]. This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act. Section 2, Indian Telegraph Act 1885 and Section 4, Information Technology Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009

      [10]. Definition (d) and section 3 of the Information Technology (Procedure and safeguard for Monitoring and Collecting Traffic Data or Information Rules, 2009). Available at: http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

      [11]. Rule 1, of the 419A Rules, Indian Telegraph Act 1885. Available at:http://www.dot.gov.in/sites/default/files/march2007.pdf This authority is responsible for authorizing interception requests issued under the Indian Telegraph Act and the Information Technology Act.

      [12]. Section 92, CrPc. Available at: http://www.icf.indianrailways.gov.in/uploads/files/CrPC.pdf

      [13]. Press Information Bureau GOI. Reconstitution of Cabinet Committees. June 19th 2014. Available at: http://pib.nic.in/newsite/PrintRelease.aspx?relid=105747. Accessed August 6, 2015.

      [14]. Press Information Bureau, Government of India. Home minister proposes radical restructuring of security architecture. Available at: http://www.pib.nic.in/newsite/erelease.aspx?relid=56395. Accessed August 6, 2015.

      [15]. Section 24 read with Schedule II of the Right to Information Act 2005. Available at: http://rti.gov.in/rti-act.pdf

      [16]. Section 8 of the Right to Information Act 2005. Available at: http://rti.gov.in/rti-act.pdf

      [17]. Abhimanyu Ghosh. “Open Government and the Right to Information”. Legal Services India. Available at: http://www.legalservicesindia.com/articles/og.htm. Accessed: August 8, 2015

      [18]. Public Record Rules 1997. Section 2. Definition c. Available at: http://nationalarchives.nic.in/writereaddata/html_en_files/html/public_records97.html. Accessed: August 8, 2015

      [19]. Times of India. Classified information is reviewed after 25-30 years. April 13th 2015. Available at: http://timesofindia.indiatimes.com/india/Classified-information-is-reviewed-after-25-30-years/articleshow/46901878.cms. Accessed: August 8, 2015.

      [20]. Government of India. Ministry of Home Affairs. Lok Sabha Starred Question No 557. Available at: http://mha1.nic.in/par2013/par2015-pdfs/ls-050515/557.pdf.

      [21]. The Kargil Committee report Executive Summanry. Available at: http://fas.org/news/india/2000/25indi1.htm. Accessed: August 6, 2015.

      [22]. PIB Releases. Group of Ministers Report on Reforming the National Security System”. Available at: http://pib.nic.in/archieve/lreleng/lyr2001/rmay2001/23052001/r2305200110.html. Last accessed: August 6, 2015

      [23]. The Observer Research Foundation. “Manish Tewari introduces Bill on Intelligence Agencies Reform. August 5th 2011. Available at: http://www.observerindia.com/cms/sites/orfonline/modules/report/ReportDetail.html?cmaid=25156&mmacmaid=20327. Last accessed: August 6, 2015.

      [24]. The Intelligence Services (Powers and Regulation) Bill, 2011. Available at: http://www.observerindia.com/cms/export/orfonline/documents/Int_Bill.pdf. Accessed: August 6, 2015.

      [25]. The Privacy Bill 2011. Available at: https://bourgeoisinspirations.files.wordpress.com/2010/03/draft_right-to-privacy.pdf

      [26]. The Report of Group of Experts on Privacy. Available at: http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf

      [27]. Institute for Defence Studies and Analyses. “A Case for Intelligence Reforms in India”. Available at: http://www.idsa.in/book/AcaseforIntelligenceReformsinIndia.html. Accessed: August 6, 2015.

      [28]. India Calls for Transparency in internet Surveillance. NDTV. July 3rd 2015. Available at: http://gadgets.ndtv.com/internet/news/india-calls-for-transparency-in-internet-surveillance-710945. Accessed: July 6, 2015.

      [29]. Lovisha Aggarwal. “Analysis of News Items and Cases on Surveillance and Digital Evidence in India”. Available at: http://cis-india.org/internet-governance/blog/analysis-of-news-items-and-cases-on-surveillance-and-digital-evidence-in-india.pdf

      [30]. Rule 25 (4) of the Information Technology (Procedures and Safeguards for the Interception, Monitoring, and Decryption of Information Rules) 2011. Available at: http://dispur.nic.in/itact/it-procedure-interception-monitoring-decryption-rules-2009.pdf

      [31]. Ministry of Home Affairs, GOI. National Intelligence Grid. Available at: http://www.davp.nic.in/WriteReadData/ADS/eng_19138_1_1314b.pdf. Last accessed: August 6, 2015

      [32]. Press Information Bureau, Government of India. Centralised System to Monitor Communications Rajya Sabha. Available at: http://pib.nic.in/newsite/erelease.aspx?relid=54679. Last accessed: August 6, 2015.

      [33]. Department of Telecommunications. Amendemnt to the UAS License agreement regarding Central Monitoring System. June 2013. Available at: http://cis-india.org/internet-governance/blog/uas-license-agreement-amendment

      [34]. United States Foreign Intelligence Surveillance Court. July 29th 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf. Last accessed: August 8, 2015

      [35]. United States Foreign Intelligence Surveillance Court. Rules of Procedure 2010. Available at: http://www.fisc.uscourts.gov/sites/default/files/FISC%20Rules%20of%20Procedure.pdf

      [36]. United States Foreign Intelligence Court. Honorable Patrick J. Leahy. 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf

      [37]. United States Foreign Intelligence Surveillance Court. July 29th 2013. Available at: http://www.fisc.uscourts.gov/sites/default/files/Leahy.pdf. Last accessed: August 8, 2015

      [38]. Public Filings – U.S Foreign Intelligence Surveillance Court. Available at: http://www.fisc.uscourts.gov/public-filings

      [39]. ACLU. FISC Public Access Motion – ACLU Motion for Release of Court Records Interpreting Section 215 of the Patriot Act. Available at: https://www.aclu.org/legal-document/fisc-public-access-motion-aclu-motion-release-court-records-interpreting-section-215

      [40]. United States Foreign Intelligence Surveillance Court Washington DC. In Re motion for consent to disclosure of court records or, in the alternative a determination of the effect of the Court's rules on statutory access rights. Available at: https://www.eff.org/files/filenode/misc-13-01-opinion-order.pdf

      [41]. Google Official Blog. Shedding some light on Foreign Intelligence Surveillance Act (FISA) requests. February 3rd 2014. Available at: http://googleblog.blogspot.in/2014/02/shedding-some-light-on-foreign.html

      [42]. U.S Government Accountability Office. Available at: http://www.gao.gov/key_issues/overview#t=1. Last accessed: August 8, 2015.

      [43]. Report to Congressional Requesters. Combating Terrorism: Foreign Terrorist Organization Designation Proces and U.S Agency Enforcement Actions. Available at: http://www.gao.gov/assets/680/671028.pdf. Accessed: August 8, 2015

      [44]. United States Government Accountability Office. Cybersecurity: Recent Data Breaches Illustrate Need for Strong Controls across Federal Agencies. Available: http://www.gao.gov/assets/680/670935.pdf. Last accessed: August 6, 2015.

      [45]. Committee Legislation. Available at: http://ballotpedia.org/United_States_Senate_Committee_on_Intelligence_(Select)#Committee_legislation

      [46]. Congressional Research Service. Congressional Oversight of Intelligence: Current Structure and Alternatives. May 14th 2012. Available at: https://fas.org/sgp/crs/intel/RL32525.pdf. Last Accessed: August 8, 2015

      [47]. The Privacy and Civil Liberties Oversight Board: About the Board. Available at: https://www.pclob.gov/aboutus.html

      [48]. The Privacy and Civil Liberties Oversight Board: About the Board. Available at: https://www.pclob.gov/aboutus.html

      [49]. Congressional Research Service. Congressional Oversight of Intelligence: Current Structure and Alternatives. May 14th 2012. Available at: https://fas.org/sgp/crs/intel/RL32525.pdf. Last Accessed: August 8th 2015

      [50]. United States Courts. Wiretap Reports. Available at: http://www.uscourts.gov/statistics-reports/analysisreports/wiretap-reports

      [51]. United States Courts. Wiretap Reports. Available at: http://www.uscourts.gov/statisticsreports/
      analysis-reports/wiretap-reports/faqs-wiretap-reports#faq-What-information-does-the-AO-receive-from-prosecutors?. Last Accessed: August 8th 2015

      [52]. Intelligence and Security Committee of Parliament. Transcripts and Public Evidence. Available at: http://isc.independent.gov.uk/public-evidence. Last accessed: August 8th 2015.

      [53]. Intelligence and Security Committee of Parliament. Special Reports. Available at http://isc.independent.gov.uk/committee-reports/special-reports. Last accessed: August 8th 2015.

      [54]. Hugh Segal. The U.K. has legislative oversight of surveillance. Why not Canada. The Globe and Mail. June 12th 2013. Available at: http://www.theglobeandmail.com/globe-debate/uk-haslegislative-oversight-of-surveillance-why-not-canada/article12489071/. Last accessed: August 8th 2015

      [55]. The Joint Intelligence Committee home page. For more information see: https://www.gov.uk/government/organisations/national-security/groups/joint-intelligence-committee

      [56]. Interception of Communications Commissioner's Office. RIPA. Available at: http://www.iocco-uk.info/sections.asp?sectionID=2&type=top. Last accessed: August 8th 2015

      [57]. Interception of Communications Commissioner's Office. Reports. Available at: http://www.iocco-uk.info/sections.asp?sectionID=1&type=top. Last accessed: August 8th 2015

      [58]. The Intelligence Services Commissioner's Office Homepage. For more information see: http://intelligencecommissioner.com/

      [59]. The Intelligence Services Commissioner's Office – The Commissioner's Statutory Functions. Available at: http://intelligencecommissioner.com/content.asp?id=4

      [60]. The Intelligence Services Commissioner's Office – The Commissioner's Statutory Functions. Available at: http://intelligencecommissioner.com/content.asp?id=4

      [61]. The Intelligence Services Commissioner's Office. What we do. Available at: http://intelligencecommissioner.com/content.asp?id=5. Last Accessed: August 8th 2015.

      [62]. The Intelligence Services Commissioner's Office. Intelligence Services Commissioner's Annual Reports. Available at: http://intelligencecommissioner.com/content.asp?id=19. Last
      accessed: August 8th 2015

      [63]. The Investigatory Powers Tribunal Homepage. Available at: http://www.ipt-uk.com/

      [64]. The Investigatory Powers Tribunal – Functions – Key role. Available at: http://www.ipt-uk.com/section.aspx?pageid=1

      [65]. Investigatory Powers Tribunal. Functions – Decisions available to the Tribunal. Available at: http://www.ipt-uk.com/section.aspx?pageid=4. Last accessed: August 8th 2015

      [66]. Investigator Powers Tribunal. Operation - Available at: http://www.ipt-uk.com/section.aspx?pageid=7

      [67]. Investigatory Powers Tribunal. Operation- Differences to the ordinary court system. Available at: http://www.ipt-uk.com/section.aspx?pageid=7. Last accessed: August 8th 2015

      [68]. Security Intelligence Review Committee – Homepage. Available at: http://www.sirc-csars.gc.ca/index-eng.html

      [69]. SIRC Annual Report 2013-2014: Lifting the Shroud of Secrecy. Available at: http://www.sirccsars. gc.ca/anrran/2013-2014/index-eng.html. Last accessed: August 6th 2015.

      [70]. The Office of the Communications Security Establishment – Homepage. Available at: http://www.ocsecbccst.gc.ca/index_e.php

      [71]. The Office of the Communications Security Establishment – Homepage. Available at: http://www.ocsecbccst.gc.ca/index_e.php

      [72]. The Office of the Communications Security Establishment – Mandate. Available at: http://www.ocsecbccst.gc.ca/mandate/index_e.php

      [73]. The Office of the Communications Security Establishment – Functions. Available at: http://www.ocsecbccst.gc.ca/functions/review_e.php

      [74]. The Office of the Communications Security Establishment – Functions. Available at: http://www.ocsecbccst.gc.ca/functions/review_e.php

      [75]. Office of the Privacy Commissioner of Canada. Homepage. Available at: https://www.priv.gc.ca/index_e.ASP

      [76]. Office of the Privacy Commissioner of Canada. Reports and Publications. Special Report to Parliament “Checks and Controls: Reinforcing Privacy Protection and Oversight for the Canadian Intelligence Community in an Era of Cyber-Surveillance. January 28th 2014. Available at: https://www.priv.gc.ca/information/srrs/201314/sr_cic_e.asp

      [77]. Office of the Privacy Commissioner of Canada. Available at: https://www.priv.gc.ca/index_e.asp. Last accessed: August 6th 2015.

      [78]. Office of the Privacy Commissioner of Canada. Appearance before the Senate Standing Commitee National Security and Defence on Bill C-51, the Anti-Terrorism Act, 2015. Available at: https://www.priv.gc.ca/parl/2015/parl_20150423_e.asp. Last accessed: August 6th 2015.

      [79]. Office of the Privacy Commissioner of Canada. Special Report to Parliament. January 8th 2014. Available at: https://www.priv.gc.ca/information/sr-rs/201314/sr_cic_e.asp. Last accessed: August 6th 2015.

      [80]. Telecom Transparency Project. The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians. Available at: http://www.telecomtransparency.org/wp-content/uploads/2015/05/Governance-of-Telecommunications-Surveillance-Final.pdf. Last accessed: August 6th 2015.

      [81]. Patrick Baud. The Elimination of the Inspector General of the Canadian Security Intelligence Serive. May 2013. Ryerson University. Available at; http://www.academia.edu/4731993/The_Elimination_of_the_Inspector_General_of_the_Canadian_Security_Intelligence_Service

      [82]. Telecom Transparency Project. The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians. Available at: http://www.telecomtransparency.org/wp-content/uploads/2015/05/Governance-of-Telecommunications-Surveillance-Final.pdf. Last accessed: August 6th 2015.

      [83]. Glenn Greenwald. Fisa court oversight: a look inside a secret and empty process. The Guardian. June 19th 2013. Available at: http://www.theguardian.com/commentisfree/2013/jun/19/fisa-court-oversight-process-secrecy, Nadia Kayyali. Privacy and Civil Liberties Oversight Board to NSA: Why is Bulk Collection of Telelphone Records Still Happening? February 2105. Available at :https://www.eff.org/deeplinks/2015/02/privacy-and-civil-liberties-oversight-board-nsa-whybulk-collection-telephone. Last accessed: August 8th 2015.

      [84]. Scott Shance. The Troubled Life of the Privacy and Civil Liberties Oversight Board. August 9th 2012. The Caucus. Available at: http://thecaucus.blogs.nytimes.com/2012/08/09/thetroubled-life-of-the-privacy-and-civil-liberties-oversight-board/?_r=0. Last accessed: August 8th 2015

      [85]. The Open Rights Group. Don't Spy on Us. Reforming Surveillance in the UK. September 2014. Available at: https://www.openrightsgroup.org/assets/files/pdfs/reports/DSOU_Reforming_surveillance_old.pdf

      [86].

    Do we need a Unified Post Transition IANA?

    by Pranesh Prakash, Padmini Baruah and Jyoti Panday — last modified Oct 27, 2015 12:46 AM
    As we stand at the threshold of the IANA Transition, we at CIS find that there has been little discussion on the question of how the transition will manifest. The question we wanted to raise was whether there is any merit in dividing the three IANA functions – names, numbers and protocols – given that there is no real technical stability to be gained from a unified Post Transition IANA. The analysis of this idea has been detailed below.

    The Internet Architecture Board, in a 2011 submission to the NTIA, claimed that splitting the IANA functions would not be desirable.[1] The IAB notes, “There exists synergy and interdependencies between the functions, and having them performed by a single operator facilitates coordination among registries, even those that are not obviously related,” and adds that the IETF makes certain policy decisions relating to names and numbers as well, so it is useful to have a single body. But the IAB does not say why having a single email address for all these correspondences, rather than three, makes any difference: surely, what matters is cooperation and coordination. Just as the IETF, ICANN and the NRO being different entities doesn’t harm the Internet, splitting the IANA function relating to each entity won’t harm the Internet either. Instead, it will help stability by making each community responsible for the running of its own registries, rather than leaving a single point of failure: ICANN and/or the “PTI”.

    A number of commentators have supported this viewpoint in the past: Bill Manning of University of Southern California’s ISI (who has been involved in DNS operations since DNS started), Paul M. Kane (former Chairman of CENTR's Board of Directors), Jean-Jacques Subrenat (who is currently an ICG member), Association française pour le nommage Internet en coopération (AFNIC), the Internet Governance Project, InternetNZ, and the Coalition Against Domain Name Abuse (CADNA).

    The Internet Governance Project stated: “IGP supports the comments of Internet NZ and Bill Manning regarding the feasibility and desirability of separating the distinct IANA functions. Structural separation is not only technically feasible, it has good governance and accountability implications. By decentralizing the functions we undermine the possibility of capture by governmental or private interests and make it more likely that policy implementations are based on consensus and cooperation.”[2]

    Similarly, CADNA, in its 2011 submission to the NTIA, notes that in the current climate of technical innovation and the exponential expansion of the Internet community, specialisation of the IANA functions would result in their being better executed. The argument is also that delegating the technical and administrative functions among other capable entities (such as the IETF and IAB for protocol parameters, or a neutral international organisation with an understanding of address-space allocation, as opposed to the RIRs) would ensure accountability in Internet operation. Given that the IANA functions are mainly registry-maintenance functions, they can to a large extent be automated. However, a single system of automation would not fit all three.
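    To illustrate how lightweight a registry-maintenance function can be, here is a minimal, hypothetical sketch of an append-only protocol-parameter registry with simple validation and a published export. This is not any actual IANA tooling; all names and values are invented for illustration.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class RegistryEntry:
    name: str       # e.g. a protocol parameter name
    value: int      # the assigned number
    reference: str  # the document that requested the assignment


class Registry:
    """A toy append-only registry: assignments are validated on entry and never mutated."""

    def __init__(self):
        self._entries = {}

    def register(self, entry: RegistryEntry):
        # Reject duplicate names and duplicate assigned values.
        if entry.name in self._entries:
            raise ValueError(f"name already registered: {entry.name}")
        if any(e.value == entry.value for e in self._entries.values()):
            raise ValueError(f"value already assigned: {entry.value}")
        self._entries[entry.name] = entry

    def export(self) -> str:
        # Publish the registry in a machine-readable form, much as IANA
        # publishes its registries as plain text or XML.
        return json.dumps([asdict(e) for e in self._entries.values()], indent=2)


reg = Registry()
reg.register(RegistryEntry("example-param", 42, "RFC 0000"))
print(reg.export())
```

    The point of the sketch is that the core of such a function is validation plus publication; the policy decisions about what may be registered live elsewhere, which is exactly why one automation system per community is plausible.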

    Instead of a single institution serving three masters, it is better for the functions to be separated. Most importantly, if one of the current customers wishes to shift its contract to another IANA functions operator, even if it isn’t limited by contract, it is limited by the institutional design, since iana.org serves as a central repository. This limitation didn’t exist, for instance, when the IETF decided to enter into a new contract for the RFC Editor role. This transition presents the best opportunity to cleave the functions logically and make each community responsible for the functioning of its own registries, with the IETF, which is mostly funded by ISOC, taking on the responsibility of handling the residual registries, along with a discussion about the .ARPA and .INT TLDs.

    From the above discussion, three main points emerge:

    • Splitting of the IANA functions allows for technical specialisation leading to greater efficiency of the IANA functions.
    • Splitting of the IANA functions allows for more direct accountability, and no concentration of power.
    • Splitting of the IANA functions allows for ease of shifting of the {names,number,protocol parameters} IANA functions operator without affecting the legal structure of any of the other IANA function operators.

    [1]. IAB response to the IANA FNOI, July 28, 2011. See: https://www.iab.org/wp-content/IAB-uploads/2011/07/IANA-IAB-FNOI-2011.pdf

    [2]. Internet Governance Project, Comments of the Internet Governance Project on the NTIA's "Request for Comments on the Internet Assigned Numbers Authority (IANA) Functions" (Docket # 110207099-1099-01), February 25, 2011. See: http://www.ntia.doc.gov/federal-register-notices/2011/request-comments-internet-assigned-numbers-authority-iana-functions

    Connected Trouble

    by Sunil Abraham last modified Oct 28, 2015 04:47 PM
    The internet of things phenomenon is based on a paradigm shift: from thinking of the internet merely as a means to connect individuals, corporations and other institutions, to an internet where all devices in (insulin pumps and pacemakers), on (wearable technology) and around (domestic appliances and vehicles) human beings are connected.

    The guest column was published in the Week, issue dated November 1, 2015.


    Proponents of IoT are clear that the network effects, efficiency gains, and scientific and technological progress unlocked would be unprecedented, much like the internet itself.

    Privacy and security are two sides of the same coin: you cannot have one without the other. Globally accepted privacy principles, articulated in privacy and data protection laws across the world, are in conflict with the big data ideology. As a consequence, the age of the internet of things is going to be less stable, secure and resilient. Three privacy principles are violated by most IoT products and services.

    Data minimisation

    According to this privacy principle, the less personal information about the data subject that is collected and stored by the data controller, the better the data subject's right to privacy is protected. But big data by definition demands more volume, more variety and more velocity, and IoT products usually collect a lot of data, thereby multiplying risk.
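    In engineering terms, data minimisation amounts to whitelisting: only the fields a feature strictly needs survive to storage. A minimal sketch, using hypothetical field names for an activity tracker whose stated purpose is step counting:

```python
# Only the fields required for the stated purpose (step counting) are allowed.
ALLOWED_FIELDS = {"step_count", "timestamp"}


def minimise(raw_event: dict) -> dict:
    """Drop every field not strictly required for the stated purpose."""
    return {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}


raw = {
    "step_count": 4200,
    "timestamp": "2015-10-28T09:00:00Z",
    "gps_lat": 12.97,   # not needed to count steps
    "gps_lon": 77.59,   # not needed to count steps
    "heart_rate": 82,   # not needed to count steps
}
print(minimise(raw))  # only step_count and timestamp survive
```

    The big data ideology inverts this filter: keep everything, since some field might be useful later. That is precisely what the principle forbids.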

    Purpose limitation

    This privacy principle is a consequence of the data minimisation principle: if only the bare minimum of personal information is collected, it can only be put to a limited number of uses, and going beyond those uses would harm the data subject. IoT innovators and entrepreneurs are trying to rapidly increase features, efficiency gains and convenience. They therefore do not know what purposes their technology will be put to tomorrow and, again by definition, resist the principle of purpose limitation.

    Privacy by design

    Data protection regulation requires that products and services be secure and protect privacy by design, not as a superficial afterthought. IoT products are increasingly being built by startups that are disrupting markets and taking down large technology incumbents. The trouble, however, is that most of these startups do not have sufficient internal security expertise, and in their tearing hurry to take products to market, they may ship IoT products that have not been comprehensively tested or audited from a privacy perspective.

    There are other cyber security principles and internet design principles that are disregarded by the IoT phenomenon, further compromising security and privacy of users.

    Centralisation

    Most of the network effects that IoT products contribute to require centralisation of the data collected from users and their devices. For instance, if users of a wearable physical activity tracker want to use gamification to keep each other motivated during exercise, the vendor of that device has to collect and store information about all its users. Since some users wear these devices at all times, the resulting databases become highly granular stores of data that can also be used to inflict privacy harms.

    Decentralisation was a key design principle when the internet was first built. The argument was that you can never take down a decentralised network by bombing any of its nodes. Unfortunately, because of the rise of internet monopolies like Google, the age of cloud computing, and the success of social media giants, the internet is increasingly becoming centralised and is therefore much more fragile than it used to be. IoT is going to make this worse.

    Complexity

    The more complex a technology is, the more fragile and vulnerable it tends to be, since more complex technology needs more quality control, more testing and more fixes. IoT raises complexity exponentially because the devices being connected are complex themselves and were not originally engineered to be connected to the internet. The networks they constitute are nothing like the internet as it has existed till now, consisting of clients, web servers, chat servers, file servers and database servers, usually quite removed from the physical world. Compromised IoT devices, on the other hand, could be used to inflict direct harm on life and property.

    Death of the air gap

    The things that will be connected to the internet were previously separated from it by an air gap. This kept them secure, but also less useful and usable. In other words, the very act of connecting devices that were previously unconnected will expose them to a range of attacks. Security- and privacy-related laws, standards, audits and enforcement measures are the best way to address these pitfalls. Governments, privacy commissioners and data protection authorities across the world need to act so that the privacy of people and the security of our information society are protected.

    Breaking Down ICANN Accountability: What It Is and What the Internet Community Wants

    by Ramya Chandrasekhar last modified Nov 05, 2015 03:29 PM
    At the recent ICANN conference held in Dublin (ICANN54), one issue that was rehashed and extensively deliberated was ICANN's accountability and means to enhance the same. In light of the impending IANA stewardship transition from the NTIA to the internet's multi-stakeholder community, accountability of ICANN to the internet community becomes that much more important. In this blog post, some aspects of the various proposals to enhance ICANN's accountability have been deconstructed and explained.

    The Internet Corporation for Assigned Names and Numbers (ICANN) is a private not-for-profit organization registered in California. Among other functions, it is tasked with carrying out the IANA functions[1], pursuant to a contract between itself and the US Government (through the National Telecommunications and Information Administration, or NTIA). This means that, as of now, the US government exercises legal oversight over ICANN with regard to the discharge of these IANA functions.[2]

    However, in 2014, the NTIA decided to completely hand over stewardship of the IANA functions to the internet’s ‘global multistakeholder community’. But the US government put down certain conditions before this transition could be effected, one of which was to ensure that proper accountability exists within ICANN.[3]

    The reason for this was that the internet community feared that, if these accountability concerns weren’t addressed, ICANN would become a FIFA-esque organization post the IANA transition, with no one to keep it in check.[4]

    And thus, to answer these concerns, the Cross Community Working Group (CCWG-Accountability) has come up with reports that propose certain changes to the structure and functioning of ICANN.

    In light of the discussions that took place at ICANN54 in Dublin, this blog post summarizes some of these proposals: those pertaining to the Independent Review Process or IRP (explained below), as well as the various accountability models that are the subject of extensive debate both on and off the internet.

    Building Blocks Identified by the CCWG-Accountability

    The CCWG-Accountability put down four “building blocks”, as it calls them, on which all its work is based. One of these is the Independent Review Process (or IRP), a mechanism by which internal complaints, whether by individuals or by SOs/ACs[5], are addressed. However, the current version of the IRP is criticized for being an inefficient mechanism of dispute resolution.[6]

    The CCWG-Accountability has therefore proposed a variety of amendments to it.

    Another building block the CCWG-Accountability identified is the need for an “empowered internet community”, which means more engagement between the ICANN Board and the internet community, as well as increased oversight by the community over the Board. As of now, the USG acts as the oversight entity. Post the IANA transition, however, the community feels it should step in and have an increased say in decisions taken by the ICANN Board.

    As part of empowering the community, the CCWG-Accountability identified five core areas in which the community needs to possess powers or rights. These are: review and rejection of the ICANN budget, strategic plans and operating plans; review, rejection and/or approval of standard bylaws as well as fundamental bylaws; review and rejection of Board decisions pertaining to the IANA functions; appointment and removal of individual directors on the Board; and recall of the entire Board itself. It is with regard to the kind of powers and rights to be vested with the community that a variety of accountability models have been proposed, both by the CCWG-Accountability and by the ICANN Board. Of all these models, discussion is now primarily centered on three: the Sole Member Model (SMM), the Sole Designator Model (SDM) and the Multistakeholder Empowerment Model (MEM).

    What is the IRP?

    The Independent Review Process or IRP is the dispute resolution mechanism by which complaints and/or objections by individuals to Board resolutions are addressed. Article 4 of the ICANN bylaws lays down the specifics of the IRP. As of now, a standing panel of six to nine arbitrators is constituted, from which a panel is selected to hear each complaint. However, the primary criticism of the current version of the IRP is the restricted scope of issues that the panel passes decisions on.[7]

    The bylaws explicitly state that the panel needs to focus on a set of procedural questions while hearing a complaint, such as whether the Board acted in good faith or exercised due diligence in passing the disputed resolution.

    Changes Proposed by the Internet Community to Enhance the IRP

    To tackle this and other concerns with the existing version of the IRP, the CCWG-Accountability proposed a slew of changes in the second draft proposal it released in August this year. It proposed that the IRP arbitral panel hear complaints and decide matters on both procedural (as it does now) and substantive grounds. In addition, it proposed broadening who has standing to initiate an IRP, to include individuals, groups and other entities. Further, it proposed a more precedent-based method of dispute resolution, wherein a panel refers to and uses decisions passed by past panels in arriving at its decision.

    At the 19th October “Enhancing ICANN-Accountability Engagement Session” that took place in Dublin as part of ICANN54, the mechanism to initiate an IRP was explained by Thomas Rickert, CCWG Co-Chair.[8]

    Briefly, the modified process is as follows:

    • An objection may be raised by any individual, even a non-member.
    • This individual needs to find an SO or an AC that shares the objection.
    • A “pre-call”, or remote meeting between all the SOs and ACs, is scheduled to see if the objection receives the prescribed threshold of approval from the community.
    • If this threshold is met, dialogue is undertaken with the Board, to see if the Board sustains the objection.
    • If this dialogue also fails, an IRP can be initiated.
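For readers who prefer a schematic view, the escalation path above can be sketched as a simple step-wise check. This is purely an illustrative sketch: the function name and the threshold value are hypothetical placeholders, and the actual thresholds would be fixed by the proposed bylaws, not by any code.

```python
# Illustrative sketch of the proposed IRP escalation path.
# All names and the threshold value are hypothetical placeholders.

def can_initiate_irp(has_soac_sponsor: bool,
                     supporting_soacs: int,
                     threshold: int,
                     board_sustains_objection: bool) -> bool:
    """Walk the escalation steps and report whether an IRP may begin."""
    # Steps 1-2: any individual may object, but must find an SO or AC
    # that shares the objection.
    if not has_soac_sponsor:
        return False
    # Step 3: the community "pre-call" must reach the prescribed
    # threshold of SO/AC approval.
    if supporting_soacs < threshold:
        return False
    # Step 4: if dialogue with the Board resolves the matter (the Board
    # sustains the objection), no IRP is needed.
    if board_sustains_objection:
        return False
    # Step 5: all earlier stages exhausted - an IRP can be initiated.
    return True

# A sponsored objection backed by 4 SO/ACs against a hypothetical
# threshold of 3, which the Board declines to sustain:
print(can_initiate_irp(True, 4, 3, False))  # True
```

The point the sketch makes is that the IRP sits at the end of a chain of cheaper remedies: sponsorship, the community pre-call, and Board dialogue must each fail before arbitration begins.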

    The question of which “enforcement model” empowers the community arises only after an IRP has been initiated, in the event that the community receives an unfavourable decision through the IRP or that the ICANN Board refuses to implement the IRP decision. Thus, all the “enforcement models” retain the IRP as the primary method of internal dispute resolution.

    The direction that the CCWG-Accountability has taken with regard to enhancement of the IRP is heartening, and these proposals have received broad support from the community. What remains to be seen is whether the Board will fully implement them, along with all the other proposals made by the CCWG.

    Enforcement – An Overview of the Different Models

    In addition to trying to enhance the existing dispute resolution mechanism, the CCWG-Accountability also came up with a variety of “enforcement models” by which the internet community would be vested with certain powers. In response to the models proposed by the CCWG-Accountability, the ICANN Board came up with a counter-proposal, called the MEM.

    Below is a comparison of the powers vested with the community under the SMM, the SDM and the MEM.

    1. Reject/review the budget, strategic plans and operating plans; review/reject Board decisions with regard to the IANA functions

    • SMM: The Sole Member has the reserved power to reject the budget up to two times. The Member also has standing to enforce bylaw restrictions on the budget, etc.

    • SDM: The Sole Designator can only trigger Board consultations if opposition to the budget, etc. exists; further, the bylaws specify how many times such a consultation can be triggered. The Designator only possesses standing to enforce this consultation.

    • MEM: The community can reject the budget up to two times. The Board is required by the bylaws to reconsider the budget after such a rejection, by consulting with the community. If still no change is made, the community can initiate the process to recall the Board.

    2. Reject/review amendments to Standard bylaws and Fundamental bylaws

    • SMM: The Sole Member has the right to veto these changes, and also has standing to enforce this right under the relevant Californian law.

    • SDM: The Sole Designator can also veto these changes. However, there is ambiguity regarding the Designator's standing to enforce this right.

    • MEM: No veto power is granted to any SO or AC. Each SO and AC evaluates whether it wants to voice the objection; if a certain threshold of agreement is reached, then as per the bylaws the Board cannot go ahead with the amendment.

    3. Appointment and removal of individual ICANN directors

    • SMM: The Sole Member can appoint and remove individual directors based on direction from the applicable Nominating Committee.

    • SDM: The Sole Designator can likewise appoint and remove individual directors based on direction from the applicable Nominating Committee.

    • MEM: The SOs/ACs cannot appoint individual directors, but they can initiate the process for their removal. However, directors can only be removed for breach of, or on the basis of, certain clauses in a “pre-service letter” that they sign.

    4. Recall of the ICANN Board

    • SMM: The Sole Member has the power to recall the Board, and has standing to enforce this right in Californian courts.

    • SDM: The Sole Designator also has the power to recall the Board. However, there is ambiguity regarding standing to enforce this right.

    • MEM: The community is not vested with the power to recall the Board. Only if a simultaneous trigger of pre-service letters occurs can, in some scenarios, something similar to a recall of the Board occur.

    A Critique of these Models

    SMM:

    The Sole Member Model (or SMM) was discussed and adopted in the second draft proposal, released in August 2015. This model is in fact the simplest and most feasible of the membership-based models, and has received substantial support from the internet community. The SMM proposes only one amendment to the ICANN bylaws - a move from having no members to having one member - while ICANN itself retains its character as a non-profit mutual-benefit corporation under Californian law.

    This “sole member” will be the community as a whole, represented by the various SOs and ACs. The SOs and ACs require no separate legal personhood to be part of this “sole member”, but can participate directly. This participation is to be effected by a voting system, explained in the second draft, which allocates the maximum number of votes each SO and AC can cast. This ensures that each SO/AC does not have to cast a unanimous vote; instead, each differing opinion within an SO/AC is given equal weight.
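The weighted-voting idea can be illustrated with a toy calculation. The allocation of 5 votes and the 3-to-2 split below are hypothetical numbers, not figures from the second draft; the sketch only shows how a fixed allocation can be divided so that minority opinions within an SO/AC still count.

```python
# Toy illustration of splitting an SO/AC's fixed vote allocation
# proportionally among the positions its members hold, so the group
# need not vote unanimously. All numbers are hypothetical.
from fractions import Fraction

def cast_votes(allocation, positions):
    """positions maps a stance (e.g. 'yes'/'no') to the number of
    members holding it; returns each stance's share of the allocation."""
    total = sum(positions.values())
    return {stance: Fraction(allocation * count, total)
            for stance, count in positions.items()}

# A hypothetical SO holding 5 community votes, split 3-to-2 internally:
votes = cast_votes(5, {"yes": 3, "no": 2})
print(votes)  # 'yes' gets 3 of the 5 votes, 'no' gets 2
```

Exact fractions are used so that no votes are lost to rounding when a group's internal split does not divide its allocation evenly.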

    SDM:

    A slightly modified and watered-down version of the SMM, proposed by the CCWG-Accountability as an alternative, is the “Sole Designator Model” or the SDM. This model requires an amendment to the ICANN bylaws by which certain SOs/ACs are assigned “designator” status. By virtue of this status, they may then exercise certain rights - the right to recall the Board in certain scenarios and the right to veto budgets and strategic plans.

    However, there is some uncertainty in Californian law regarding who can be a designator - only an individual, or an entity as well. So whether unincorporated associations, such as the SOs and ACs, can be “designators” under the law is a question that does not yet have a clear answer.

    Most discussion of the SDM has centred on the designator being vested with the power to “spill”, or remove, all the members of the ICANN Board. The designator is vested with this power as a last-resort mechanism for the community's voice to be heard. However, an interesting point raised in one of the accountability sessions at ICANN54 was the almost negligible probability of this course of action ever being taken, i.e. the Board being “spilled”. So while in theory this model seems to vest the community with massive power, in reality, because the right to “spill” the Board may never be invoked, the SDM is actually a weak enforceability model.

    Other Variants of the Designator Model:

    The CCWG-Accountability discussed variants of the designator model in both its first and second drafts. A generic SO/AC Designator model was discussed in the first draft. The Enhanced SO/AC Designator model, discussed in the second draft, functions along similar lines, except that only those SOs and ACs that want to become designators apply to do so, as opposed to the mandatory sole designator under the SDM.

    After the second draft released by the CCWG-Accountability and the counter-proposal released by the ICANN Board (see below for the ICANN Board’s proposal), discussion was mostly directed towards the SMM and the MEM. However, discussion of the designator model was recently revived by members of the ALAC at ICANN54 in Dublin, who unanimously issued a statement supporting the SDM.[9] Following this, many more in the community have expressed support for adopting the designator model.[10]

    MEM:

    The Multi-stakeholder Enforcement Model or MEM was the ICANN Board’s counter to the models put forth by the CCWG-Accountability, specifically the SMM. However, there is no clarity with regard to the specifics of this model. In fact, the vagueness surrounding the model is one of the biggest criticisms of it.

    The CCWG-Accountability accounts for the possible consequences of implementing every model through a mechanism known as “stress-tests”. The Board’s proposal, on the other hand, rejects the SMM due to its “unintended consequences”, but does not provide any clarity on what these consequences are or what the problems with the SMM itself are.[11]

    In addition, many are opposed to the Board’s proposal in general because, unlike the SMM, it was not created by the community and is therefore not reflective of the community’s views.[12]

    Instead, the Board’s solution is to propose a counter-model that does not in fact fix the existing problems of accountability.

    What is known of the MEM, gathered primarily from an FAQ published on the ICANN community forum, is this: the community, through the various SOs and ACs, can challenge only those actions of the Board that are contrary to the Fundamental Bylaws, through binding arbitration. The arbitration panel will be decided by the Board and the arbitration itself will be financed by ICANN. Further, this process will not replace the existing Independent Review Process or IRP, but will run in parallel.

    Even this small snippet of the MEM is riddled with problems. Concerns have been raised about the neutrality of the arbitral panel and about challenges to the award itself.[13]

    Further, the MEM seems to be in direct opposition to ICANN’s ‘gold standard’ multi-stakeholder model. Essentially, there is no increased accountability of ICANN under the MEM, which has elicited severe opposition from the community.

    What is interesting about all these models is that they are all premised on ICANN continuing to remain within the jurisdiction of the United States. Even more surprising is that hardly anyone questions this premise. At ICANN54, however, the issue received a small amount of traction, enough for an ad-hoc committee to be set up to address these jurisdictional concerns, though not much more. The only option now is to wait and see what this ad-hoc committee, as well as the CCWG-Accountability through its third draft proposal due later this year, comes up with.


    [1]. The IANA functions or the technical functions are the name, number and protocol functions with regard to the administration of the Domain Name System or the DNS.

    [2]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [3]. http://www.theregister.co.uk/2015/10/19/congress_tells_icann_quit_escaping_accountability/?page=1

    [4]. http://www.theguardian.com/technology/2015/sep/21/icann-internet-us-government

    [5]. SOs are Supporting Organizations and ACs are Advisory Committees. They form part of ICANN’s operational structure.

    [6]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [7]. Leon Sanchez (ALAC member from the Latin American and Caribbean Region) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see page 5) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [8]. Thomas Rickert (GNSO-appointed CCWG co-chair) speaking at the Enhancing ICANN Accountability Engagement Session, ICANN54, Dublin (see pages 15-16) https://meetings.icann.org/en/dublin54/schedule/mon-enhancing-accountability/transcript-enhancing-accountability-19oct15-en

    [9]. http://www.brandregistrygroup.org/alac-throws-spanner-in-icann-accountability-discussions

    [10]. http://www.theregister.co.uk/2015/10/22/internet_community_icann_accountability/

    [11]. http://www.theregister.co.uk/2015/09/07/icann_accountability_latest/

    [12]. http://www.circleid.com/posts/20150923_empire_strikes_back_icann_accountability_at_the_inflection_point/

    [13]. http://www.internetgovernance.org/2015/09/06/icann-accountability-a-three-hour-call-trashes-a-year-of-work/

    Bios and Photos of Speakers for Big Data in the Global South International Workshop

    by Prasad Krishna last modified Nov 06, 2015 02:01 AM

    Bios&Photos_BigDataWorkshop.pdf — PDF document, 1825 kB (1869456 bytes)

    Comments on the Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10)

    by Geetha Hariharan last modified Nov 18, 2015 06:33 AM
    Following the comment-period on the Zero Draft, the Draft Outcome Document of the UN General Assembly's Overall Review of implementation of WSIS Outcomes was released on 4 November 2015. Comments were sought on the Draft Outcome Document from diverse stakeholders. The Centre for Internet & Society's response to the call for comments is below.

     

    The WSIS+10 Overall Review of the Implementation of WSIS Outcomes, scheduled for December 2015, comes as a review of the WSIS process initiated in 2003-05. At the December summit of the UN General Assembly, the WSIS vision and mandate of the IGF are to be discussed. The Draft Outcome Document, released on 4 November 2015, is towards an outcome document for the summit. Comments were sought on the Draft Outcome Document. Our comments are below.

    1. The Draft Outcome Document of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (“the current Draft”) stands considerably altered from the Zero Draft. With references to development-related challenges, the Zero Draft covered areas of growth and challenges of the WSIS. It noted the persisting digital divide, the importance of innovation and investment, and of conducive legal and regulatory environments, and the inadequacy of financial mechanisms. Issues crucial to Internet governance such as net neutrality, privacy and the mandate of the IGF found mention in the Zero Draft.
    2. The current Draft retains these, and adds to them. Some previously-omitted issues such as surveillance, the centrality of human rights and the intricate relationship of ICTs to the Sustainable Development Goals, now stand incorporated in the current Draft. This is most commendable. However, the current Draft still lacks teeth with regard to some of these issues, and fails to address several others.
    3. In our comments to the Zero Draft, CIS had called for these issues to be addressed. We reiterate our call in the following paragraphs.

    (1) ICT for Development

    1. In the current Draft, paragraphs 14-36 deal with ICTs for development. While the draft contains rubrics like ‘Bridging the digital divide’, ‘Enabling environment’, and ‘Financial mechanisms’, the following issues are unaddressed:
    • Equitable development for all;
    • Accessibility to ICTs for persons with disabilities;
    • Access to knowledge and open data.

    Equitable development

    1. In the Geneva Declaration of Principles (2003), two goals are set forth as the Declaration’s “ambitious goal”: (a) the bridging of the digital divide; and (b) equitable development for all (¶ 17). The current Draft speaks in detail about the bridging of the digital divide, but the goal of equitable development is conspicuously absent. At WSIS+10, when the WSIS vision evolves to the creation of inclusive ‘knowledge societies’, equitable development should be both a key principle and a goal to stand by.
    2. Indeed, inequitable development underscores the persistence of the digital divide. The current Draft itself refers to several instances of inequitable development: for example, the uneven production capabilities and deployment of ICT infrastructure and technology in developing countries, landlocked countries, small island developing states, countries under occupation or suffering natural disasters, and other vulnerable states; the lack of adequate financial mechanisms in vulnerable parts of the world; and the variably affordable (or in many cases, unaffordable) spread of ICT devices, technology and connectivity.
    3. What underscores these challenges is the inequitable and uneven spread of ICTs across states and communities, including in their production, capacity-building, technology transfers, gender-concentrated adoption of technology, and inclusiveness.
    4. As such, it is essential that the WSIS+10 Draft Outcome Document reaffirm our commitment to equitable development for all peoples, communities and states.
    5. We suggest the following inclusion to paragraph 5 of the current Draft:
    “5. We reaffirm our common desire and commitment to the WSIS vision to build an equitable, people-centred, inclusive, and development-oriented Information Society…”

    Accessibility for persons with disabilities

    10. Paragraph 13 of the Geneva Declaration of Principles (2003) pledges to “pay particular attention to the special needs of marginalized and vulnerable groups of society” in the forging of an Information Society. Particularly, ¶ 13 recognises the special needs of older persons and persons with disabilities.

    11. Moreover, ¶ 31 of the Geneva Declaration of Principles calls for the special needs of persons with disabilities, and also of disadvantaged and vulnerable groups, to be taken into account while promoting the use of ICTs for capacity-building. Accessibility for persons with disabilities is thus core to bridging the digital divide – as important as bridging the gender divide in access to ICTs.

    12. Not only this, but the WSIS+10 Statement on the Implementation of WSIS Outcomes (June 2014) also reaffirms the commitment to “provide equitable access to information and knowledge for all… including… people with disabilities”, recognizing that it is “crucial to increase the participation of vulnerable people in the building process of Information Society…” (¶8).

    13. In our previous submission, CIS had suggested language drawing attention to this. Now, the current Draft only acknowledges that “particular attention should be paid to the specific ICT challenges facing… persons with disabilities…” (paragraph 11). It acknowledges also that now, accessibility for persons with disabilities constitutes one of the core elements of quality (paragraph 22). However, there is a glaring omission of a call to action, or a reaffirmation of our commitment to bridging the divide experienced by persons with disabilities.

    14. We suggest, therefore, the addition of the following language as paragraph 24A to the current Draft. Sections of this suggestion are drawn from ¶8 of the WSIS+10 Statement on the Implementation of WSIS Outcomes.

    "24A. Recalling the UN Convention on the rights of people with disabilities, the Geneva principles paragraph 11, 13, 14 and 15, Tunis Commitment paras 20, 22 and 24, and reaffirming the commitment to providing equitable access to information and knowledge for all, building ICT capacity for all and confidence in the use of ICTs by all, including youth, older persons, women, indigenous and nomadic peoples, people with disabilities, the unemployed, the poor, migrants, refugees and internally displaced people and remote and rural communities, it is crucial to increase the participation of vulnerable people in the building process of information Society and to make their voice heard by stakeholders and policy-makers at different levels. It can allow the most fragile groups of citizens worldwide to become an integrated part of their economies and also raise awareness of the target actors on the existing ICTs solution (such as tolls as e- participation, e-government, e-learning applications, etc.) designed to make their everyday life better. We recognise need for continued extension of access for people with disabilities and vulnerable people to ICTs, especially in developing countries and among marginalized communities, and reaffirm our commitment to promoting and ensuring accessibility for persons with disabilities. In particular, we call upon all stakeholders to honour and meet the targets set out in Target 2.5.B of the Connect 2020 Agenda that enabling environments ensuring accessible telecommunication/ICT for persons with disabilities should be established in all countries by 2020.”

    Access to knowledge and open data

    15. The Geneva Declaration of Principles dedicates a section to access to information and knowledge (B.3). It notes, in ¶26, that a “rich public domain” is essential to the growth of Information Society. It urges that public institutions be strengthened to ensure free and equitable access to information (¶26), and also that assistive technologies and universal design can remove barriers to access to information and knowledge (¶25). Particularly, the Geneva Declaration advocates the use of free and open source software, in addition to proprietary software, to meet these ends (¶27).

    16. It was also recognized in the WSIS+10 Statement on the Implementation of WSIS Outcomes (‘Challenges-during implementation of Action Lines and new challenges that have emerged’) that there is a need to promote access to all information and knowledge, and to encourage open access to publications and information (C, ¶¶9 and 12).

    17. In our previous submission, CIS had highlighted the importance of open access to knowledge thus: “…the implications of open access to data and knowledge (including open government data), and responsible collection and dissemination of data are much larger in light of the importance of ICTs in today’s world. As Para 7 of the Zero Draft indicates, ICTs are now becoming an indicator of development itself, as well as being a key facilitator for achieving other developmental goals. As Para 56 of the Zero Draft recognizes, in order to measure the impact of ICTs on the ground – undoubtedly within the mandate of WSIS – it is necessary that there be an enabling environment to collect and analyse reliable data. Efforts towards the same have already been undertaken by the United Nations in the form of ‘Data Revolution for Sustainable Development’. In this light, the Zero Draft rightly calls for enhancement of regional, national and local capacity to collect and conduct analyses of development and ICT statistics (Para 56). Achieving the central goals of the WSIS process requires that such data is collected and disseminated under open standards and open licenses, leading to creation of global open data on the ICT indicators concerned.”

    18. This crucial element is missing from the current Draft of the WSIS+10 Outcome Document. Of course, the current Draft notes the importance of access to information and free flow of data. But it stops short of endorsing and advocating the importance of access to knowledge and free and open source software, which are essential to fostering competition and innovation, diversity of consumer/ user choice and ensuring universal access.

    19. We suggest the following addition – of paragraph 23A to the current Draft:

    "23A. We recognize the need to promote access for all to information and knowledge, open data, and open, affordable, and reliable technologies and services, while respecting individual privacy, and to encourage open access to publications and information, including scientific information and in the research sector, and particularly in developing and least developed countries.”

    (2) Human Rights in Information Society

    20. The current Draft recognizes that human rights have been central to the WSIS vision, and reaffirms that rights offline must be protected online as well. However, the current Draft omits to recognise the role played by corporations and intermediaries in facilitating access to and use of the Internet.

    21. In our previous submission, CIS had noted that “the Internet is led largely by the private sector in the development and distribution of devices, protocols and content-platforms, corporations play a major role in facilitating – and sometimes, in restricting – human rights online”.

    22. We reiterate our suggestion for the inclusion of paragraph 43A to the current Draft:

    "43A. We recognize the critical role played by corporations and the private sector in facilitating human rights online. We affirm, in this regard, the responsibilities of the private sector set out in the Report of the Special Representative of the Secretary General on the issue of human rights and transnational corporations and other business enterprises, A/HRC/17/31 (21 March 2011), and encourage policies and commitments towards respect and remedies for human rights.”

    (3) Internet Governance

    The support for multilateral governance of the Internet

    23. While the section on Internet governance is not considerably altered from the zero draft, there is one large substantive change in the current Draft. The current Draft states that the governance of the Internet should be “multilateral, transparent and democratic, with full involvement of all stakeholders” (¶50). Previously, the zero draft recognized “the general agreement that the governance of the Internet should be open, inclusive, and transparent”.

    24. A return to purely ‘multilateral’ Internet governance would be regressive. Governments are, without doubt, crucial in Internet governance. As scholarship and experience have both shown, governments have played a substantial role in shaping the Internet as it is today: whether this concerns the availability of content, spread of infrastructure, licensing and regulation, etc. However, these were and continue to remain contentious spaces.

    25. As such, it is essential to recognize that a plurality of governance models serve the Internet, in which the private sector, civil society, the technical community and academia play important roles. We recommend returning to the language of the zero draft in ¶32: “open, inclusive and transparent governance of the Internet”.

    Governance of Critical Internet Resources

    26. It is curious that the section on Internet governance in both the zero and the current Draft makes no reference to ICANN, and in particular, to the ongoing transition of IANA stewardship and the discussions surrounding the accountability of ICANN and the IANA operator. The stewardship of critical Internet resources, such as the root, is crucial to the evolution and functioning of the Internet. Today, ICANN and a few other institutions have a monopoly over the management and policy-formulation of several critical Internet resources.

    27. While the WSIS in 2003-05 considered this a troubling issue, this focus seems to have shifted entirely. An open, inclusive, transparent and global Internet is a misnomer so long as ICANN, and in effect the United States, continues to hold a monopoly over critical Internet resources. The allocation and administration of these resources should be decentralized and distributed, and should not be within the disproportionate control of any one jurisdiction.

    28. Therefore, we reiterate our suggestion to add paragraph 53A after Para 53:

    "53A. We affirm that the allocation, administration and policy involving critical Internet resources must be inclusive and decentralized, and call upon all stakeholders and in particular, states and organizations responsible for essential tasks associated with the Internet, to take immediate measures to create an environment that facilitates this development.”

    Inclusiveness and Diversity in Internet Governance

    29. The current Draft, in ¶52, recognizes that there is a need to “promote greater participation and engagement in Internet governance of all stakeholders…”, and calls for “stable, transparent and voluntary funding mechanisms to this end.” This is most commendable.

    30. The issue of inclusiveness and diversity in Internet governance is crucial: today, Internet governance organisations and platforms suffer from a lack of inclusiveness and diversity, extending across representation, participation and operations of these organisations. As CIS submitted previously, the mention of inclusiveness and diversity becomes tokenism or formal (but not operational) principle in many cases.

    31. As we submitted before, the developing world is pitifully represented in standards organisations and in ICANN, and policy discussions in organisations like ISOC occur largely in cities like Geneva and New York. For example, 307 of the 672 registries listed in ICANN’s registry directory are based in the United States, while 624 of the 1010 ICANN-accredited registrars are US-based.

    32. Not only this, but 80% of the responses received by ICANN during the ICG’s call for proposals came from men. A truly global and open, inclusive and transparent governance of the Internet must not be so skewed. Representation must include not only those from developing countries, but must also extend across gender and communities.

    33. We propose, therefore, the addition of a paragraph 51A after Para 51:

    "51A. We draw attention to the challenges surrounding diversity and inclusiveness in organisations involved in Internet governance, including in their representation, participation and operations. We note with concern that the representation of developing countries, of women, persons with disabilities and other vulnerable groups, is far from equitable and adequate. We call upon organisations involved in Internet governance to take immediate measures to ensure diversity and inclusiveness in a substantive manner.”

     


    Prepared by Geetha Hariharan, with inputs from Sunil Abraham and Japreet Grewal. All comments submitted towards the Draft Outcome Document may be found at this link.

    Summary Report Internet Governance Forum 2015

    by Jyoti Panday last modified Nov 30, 2015 10:47 AM
    Centre for Internet and Society (CIS), India participated in the Internet Governance Forum (IGF) held at Poeta Ronaldo Cunha Lima Conference Center, Joao Pessoa in Brazil from 10 November 2015 to 13 November 2015. The theme of IGF 2015 was ‘Evolution of Internet Governance: Empowering Sustainable Development’. Sunil Abraham, Pranesh Prakash & Jyoti Panday from CIS actively engaged and made substantive contributions to several key issues affecting internet governance at the IGF 2015. The issue-wise detail of their engagement is set out below.

    INTERNET GOVERNANCE

    I. The Multi-stakeholder Advisory Group to the IGF organised a discussion on Sustainable Development Goals (SDGs) and Internet Economy at the Main Meeting Hall from 9:00 am to 12:30 pm on 11 November, 2015. The discussions at this session focused on the importance of Internet Economy enabling policies and eco-systems for the fulfilment of different SDGs. Several concerns were addressed, relating to internet entrepreneurship, effective ICT capacity building, protection of intellectual property within and across borders, and the availability of local applications and content. The panel also discussed the need to identify SDGs where internet-based technologies could make the most effective contribution. Sunil Abraham contributed to the panel discussions by addressing the issue of development and promotion of local content and applications. The list of speakers included:

    1. Lenni Montiel, Assistant-Secretary-General for Development, United Nations

    2. Helani Galpaya, CEO LIRNEasia

    3. Sergio Quiroga da Cunha, Head of Latin America, Ericsson

    4. Raúl L. Katz, Adjunct Professor, Division of Finance and Economics, Columbia Institute of Tele-information

    5. Jimson Olufuye, Chairman, Africa ICT Alliance (AfICTA)

    6. Lydia Brito, Director of the Office in Montevideo, UNESCO

    7. H.E. Rudiantara, Minister of Communication & Information Technology, Indonesia

    8. Daniel Sepulveda, Deputy Assistant Secretary, U.S. Coordinator for International and Communications Policy at the U.S. Department of State  

    9. Deputy Minister, Department of Telecommunications and Postal Services, Republic of South Africa

    10. Sunil Abraham, Executive Director, Centre for Internet and Society, India

    11. H.E. Junaid Ahmed Palak, Information and Communication Technology Minister of Bangladesh

    12. Jari Arkko, Chairman, IETF

    13. Silvia Rabello, President, Rio Film Trade Association

    14. Gary Fowlie, Head of Member State Relations & Intergovernmental Organizations, ITU

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/igf2015-main-sessions

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2327-2015-11-11-internet-economy-and-sustainable-development-main-meeting-room

    Video link Internet economy and Sustainable Development here https://www.youtube.com/watch?v=D6obkLehVE8

     II. Public Knowledge organised a workshop on The Benefits and Challenges of the Free Flow of Data at Workshop Room 5 from 11:00 am to 12:00 pm on 12 November, 2015. The discussions in the workshop focused on the benefits and challenges of the free flow of data and also the concerns relating to data flow restrictions including ways to address them. Sunil Abraham contributed to the panel discussions by addressing the issue of jurisdiction of data on the internet. The panel for the workshop included the following.

    1. Vint Cerf, Google

    2. Lawrence Strickling, U.S. Department of Commerce, NTIA

    3. Richard Leaning, European Cyber Crime Centre (EC3), Europol

    4. Marietje Schaake, European Parliament

    5. Nasser Kettani, Microsoft

    6. Sunil Abraham, CIS India

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2467-2015-11-12-ws65-the-benefits-and-challenges-of-the-free-flow-of-data-workshop-room-5

    Video link https://www.youtube.com/watch?v=KtjnHkOn7EQ

    III. Article 19 and Privacy International organised a workshop on Encryption and Anonymity: Rights and Risks at Workshop Room 1 from 11:00 am to 12:30 pm on 12 November, 2015. The workshop fostered a discussion about the latest challenges to the protection of anonymity and encryption, and ways in which law enforcement demands could be met while ensuring that individuals still enjoyed strong encryption and unfettered access to anonymity tools. Pranesh Prakash contributed to the panel discussions by addressing concerns about existing South Asian regulatory frameworks on encryption and anonymity and emphasizing the need for pervasive encryption. The panel for this workshop included the following.

    1. David Kaye, UN Special Rapporteur on Freedom of Expression

    2. Juan Diego Castañeda, Fundación Karisma, Colombia

    3. Edison Lanza, Organisation of American States Special Rapporteur

    4. Pranesh Prakash, CIS India

    5. Ted Hardie, Google

    6. Elvana Thaci, Council of Europe

    7. Professor Chris Marsden, Oxford Internet Institute

    8. Alexandrine Pirlot de Corbion, Privacy International

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2407-2015-11-12-ws-155-encryption-and-anonymity-rights-and-risks-workshop-room-1

    Video link available here https://www.youtube.com/watch?v=hUrBP4PsfJo

     IV. Chalmers & Associates organised a session on A Dialogue on Zero Rating and Network Neutrality at the Main Meeting Hall from 2:00 pm to 4:00 pm on 12 November, 2015. The Dialogue provided access to expert insight on zero-rating and a full spectrum of diverse views on this issue. The Dialogue also explored alternative approaches to zero rating such as use of community networks. Pranesh Prakash provided a detailed explanation of harms and benefits related to different approaches to zero-rating. The panellists for this session were the following.

    1. Jochai Ben-Avie, Senior Global Policy Manager, Mozilla, USA

    2. Igor Vilas Boas de Freitas, Commissioner, ANATEL, Brazil

    3. Dušan Caf, Chairman, Electronic Communications Council, Republic of Slovenia

    4. Silvia Elaluf-Calderwood, Research Fellow, London School of Economics, UK/Peru

    5. Belinda Exelby, Director, Institutional Relations, GSMA, UK

    6. Helani Galpaya, CEO, LIRNEasia, Sri Lanka

    7. Anja Kovacs, Director, Internet Democracy Project, India

    8. Kevin Martin, VP, Mobile and Global Access Policy, Facebook, USA

    9. Pranesh Prakash, Policy Director, CIS India

    10. Steve Song, Founder, Village Telco, South Africa/Canada

    11. Dhanaraj Thakur, Research Manager, Alliance for Affordable Internet, USA/West Indies

    12. Christopher Yoo, Professor of Law, Communication, and Computer & Information Science, University of Pennsylvania, USA

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/igf2015-main-sessions

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2457-2015-11-12-a-dialogue-on-zero-rating-and-network-neutrality-main-meeting-hall-2

     V. The Internet & Jurisdiction Project organised a workshop on Transnational Due Process: A Case Study in MS Cooperation at Workshop Room 4 from 11:00 am to 12:00 pm on 13 November, 2015. The workshop discussion focused on the challenges in developing an enforcement framework for the internet that guarantees transnational due process and legal interoperability. The discussion also focused on innovative approaches to multi-stakeholder cooperation such as issue-based networks, inter-sessional work methods and transnational policy standards. The panellists for this discussion were the following.

    1. Anne Carblanc, Head of Division, Directorate for Science, Technology and Industry, OECD

    2. Eileen Donahoe, Director Global Affairs, Human Rights Watch

    3. Byron Holland, President and CEO, CIRA (Canadian ccTLD)

    4. Christopher Painter, Coordinator for Cyber Issues, US Department of State

    5. Sunil Abraham, Executive Director, CIS India

    6. Alice Munyua, Lead dotAfrica Initiative and GAC representative, African Union Commission

    7. Will Hudson, Senior Advisor for International Policy, Google

    8. Dunja Mijatovic, Representative on Freedom of the Media, OSCE

    9. Thomas Fitschen, Director for the United Nations, for International Cooperation against Terrorism and for Cyber Foreign Policy, German Federal Foreign Office

    10. Hartmut Glaser, Executive Secretary, Brazilian Internet Steering Committee

    11. Matt Perault, Head of Policy Development, Facebook

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2475-2015-11-13-ws-132-transnational-due-process-a-case-study-in-ms-cooperation-workshop-room-4

    Video link Transnational Due Process: A Case Study in MS Cooperation available here https://www.youtube.com/watch?v=M9jVovhQhd0

    VI. The Internet Governance Project organised a meeting of the Dynamic Coalition on Accountability of Internet Governance Venues at Workshop Room 2 from 14:00 – 15:30 on 12 November, 2015. The coalition brought together panelists to highlight the challenges in developing an accountability framework for internet governance venues, including setting standards and developing a set of concrete criteria. Jyoti Panday provided the perspective of civil society on why accountability is necessary in internet governance processes and organizations. The panelists for this workshop included the following.

    1. Robin Gross, IP Justice

    2. Jeanette Hofmann, Director Alexander von Humboldt Institute for Internet and Society

    3. Farzaneh Badiei, Internet Governance Project

    4. Erika Mann, Managing Director Public Policy, Facebook, and Board of Directors, ICANN

    5. Paul Wilson, APNIC

    6. Izumi Okutani, Japan Network Information Center (JPNIC)

    7. Keith Drazek, Verisign

    8. Jyoti Panday, CIS

    9. Jorge Cancio, GAC representative

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c23/dynamic-coalition-on-accountability-of-internet-governance-venues?iframe=no&w=&sidebar=yes&bg=no

    Video link https://www.youtube.com/watch?v=UIxyGhnch7w

    VII. Digital Infrastructure Netherlands Foundation organized an open forum at Workshop Room 3 from 11:00 – 12:00 on 10 November, 2015. The open forum discussed the increase in government engagement with “the internet” to protect citizens against crime and abuse and to protect economic interests and critical infrastructures. It brought together panelists to present ideas about an agenda for the international protection of ‘the public core of the internet’ and to collect and discuss ideas for the formulation of norms and principles and for the identification of practical steps towards that goal. Pranesh Prakash participated in the open forum. Other speakers included:

    1. Bastiaan Goslings, AMS-IX, NL

    2. Pranesh Prakash, CIS, India

    3. Marilia Maciel, FGV, Brazil

    4. Dennis Broeders, NL Scientific Council for Government Policy

    Detailed description of the open forum is available here http://schd.ws/hosted_files/igf2015/3d/DINL_IGF_Open%20Forum_The_public_core_of_the_internet.pdf

    Video link available here https://www.youtube.com/watch?v=joPQaMQasDQ

    VIII. UNESCO, the Council of Europe, Oxford University, the Office of the High Commissioner on Human Rights, Google and the Internet Society organised a workshop on hate speech and youth radicalisation at Room 9 on Thursday, November 12. UNESCO shared the initial outcomes from its commissioned research on online hate speech, including practical recommendations on combating online hate speech through understanding the challenges, mobilizing civil society, lobbying private sectors and intermediaries, and educating individuals in media and information literacy. The workshop also discussed how to help empower youth to address online radicalization and extremism, and realize their aspirations to contribute to a more peaceful and sustainable world. Sunil Abraham provided his inputs. The other speakers included:

    1. Chaired by Ms Lidia Brito, Director for UNESCO Office in Montevideo

    2. Frank La Rue, Former Special Rapporteur on Freedom of Expression

    3. Lillian Nalwoga, President ISOC Uganda and rep CIPESA, Technical community

    4. Bridget O’Loughlin, CoE, IGO

    5. Gabrielle Guillemin, Article 19

    6. Iyad Kallas, Radio Souriali

    7. Sunil Abraham, Executive Director, Centre for Internet and Society, Bangalore, India

    8. Eve Salomon, global Chairman of the Regulatory Board of RICS

    9. Javier Lesaca Esquiroz, University of Navarra

    10. Representative GNI

    11. Remote Moderator: Xianhong Hu, UNESCO

    12. Rapporteur: Guilherme Canela De Souza Godoi, UNESCO

    Detailed description of the workshop is available here http://igf2015.sched.org/event/4c1X/ws-128-mitigate-online-hate-speech-and-youth-radicalisation?iframe=no&w=&sidebar=yes&bg=no

    Video link to the panel is available here https://www.youtube.com/watch?v=eIO1z4EjRG0

     INTERMEDIARY LIABILITY

    IX. The Electronic Frontier Foundation, the Centre for Internet and Society India, Open Net Korea and Article 19 collaborated to organize a workshop on the Manila Principles on Intermediary Liability at Workshop Room 9 from 11:00 am to 12:00 pm on 13 November 2015. The workshop elaborated on the Manila Principles, a high-level framework of best practices and safeguards for content restriction practices and for addressing intermediary liability for third-party content. The workshop saw participants engaged in overlapping projects on content restriction coming together to give feedback and highlight recent developments across liability regimes. Jyoti Panday laid out the key details of the Manila Principles framework in this session. The panelists for this workshop included the following.

    1. Kelly Kim, Open Net Korea

    2. Jyoti Panday, CIS India

    3. Gabrielle Guillemin, Article 19

    4. Rebecca MacKinnon, on behalf of UNESCO

    5. Giancarlo Frosio, Center for Internet and Society, Stanford Law School

    6. Nicolo Zingales, Tilburg University

    7. Will Hudson, Google

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2423-2015-11-13-ws-242-the-manila-principles-on-intermediary-liability-workshop-room-9

    Video link available here https://www.youtube.com/watch?v=kFLmzxXodjs

     ACCESSIBILITY

    X. The Dynamic Coalition on Accessibility and Disability and the Global Initiative for Inclusive ICTs organised a workshop on Empowering the Next Billion by Improving Accessibility at Workshop Room 6 from 9:00 am to 10:30 am on 13 November, 2015. The discussion focused on the need for, and ways of, removing accessibility barriers which prevent over one billion potential users from benefiting from the Internet, including for essential services. Sunil Abraham spoke about the lack of compliance of existing ICT infrastructure with well-established accessibility standards, specifically in relation to accessibility barriers in the disaster management process. He discussed the barriers faced by persons with physical or psychosocial disabilities. The panelists for this discussion were the following.

    1. Francesca Cesa Bianchi, G3ICT

    2. Cid Torquato, Government of Brazil

    3. Carlos Lauria, Microsoft Brazil

    4. Sunil Abraham, CIS India

    5. Derrick L. Cogburn, Institute on Disability and Public Policy (IDPP) for the ASEAN(Association of Southeast Asian Nations) Region

    6. Fernando H. F. Botelho, F123 Consulting

    7. Gunela Astbrink, GSA InfoComm

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2438-2015-11-13-ws-253-empowering-the-next-billion-by-improving-accessibility-workshop-room-3

    Video Link Empowering the next billion by improving accessibility https://www.youtube.com/watch?v=7RZlWvJAXxs

     OPENNESS

    XI. A workshop on FOSS & a Free, Open Internet: Synergies for Development was organized at Workshop Room 7 from 2:00 pm to 3:30 pm on 13 November, 2015. The discussion focused on the increasing risk to the openness of the internet and to the ability of present and future generations to use technology to improve their lives. The panel shared different perspectives on the future co-development of FOSS and a free, open Internet; the threats that are emerging; and ways for communities to surmount them. Sunil Abraham emphasised the importance of free software, open standards, open access and access to knowledge, noted the absence of this mandate in the draft outcome document for the upcoming WSIS+10 review, and called for its inclusion. Pranesh Prakash further contributed to the discussion by emphasizing the need for free and open source software with end‑to‑end encryption and traffic-level encryption based on open standards, which are decentralized and work through federated networks. The panellists for this discussion were the following.

    1. Satish Babu, Technical Community, Chair, ISOC-TRV, Kerala, India

    2. Judy Okite, Civil Society, FOSS Foundation for Africa

    3. Mishi Choudhary, Private Sector, Software Freedom Law Centre, New York

    4. Fernando Botelho, Private Sector, heads F123 Systems, Brazil

    5. Sunil Abraham, CIS India

    6. Pranesh Prakash, CIS India

    7. Nnenna Nwakanma, World Wide Web Foundation

    8. Yves MIEZAN EZO, Open Source strategy consultant

    9. Corinto Meffe, Advisor to the President and Directors, SERPRO, Brazil

    10. Frank Coelho de Alcantara, Professor, Universidade Positivo, Brazil

    11. Caroline Burle, Institutional and International Relations, W3C Brazil Office and Center of Studies on Web Technologies

    Detailed description of the workshop is available here http://www.intgovforum.org/cms/workshops/list-of-published-workshop-proposals

    Transcript of the workshop is available here http://www.intgovforum.org/cms/187-igf-2015/transcripts-igf-2015/2468-2015-11-13-ws10-foss-and-a-free-open-internet-synergies-for-development-workshop-room-7

    Video link available here https://www.youtube.com/watch?v=lwUq0LTLnDs



    WhatsApps with fireworks, apps with diyas: Why Diwali needs to go beyond digital

    by Nishant Shah last modified Nov 23, 2015 01:27 PM
    The idea of a 'digital' Diwali reduces our social relationships to a ledger of give and take. Over the last fortnight, I have been bombarded with advertisements selling the idea of a “Digital Diwali”. We have become so used to the idea that everything digital is modern, better and more efficient.

    The article was published in the Indian Express on November 22, 2015.


    I have WhatsApp messages with exploding fireworks, singing greeting cards that chant mystic sounding messages, an app that turns my smartphone into a flickering diya, another app that remotely controls the imitation LED candles on my windows, an invitation to Skype in for a puja at a friend’s house 3,000 km away, and the surfeit of last minute shopping deals, each one offering a dhamaka of discounts.

    However, to me, the digitality of Diwali is beyond the surface level of seductive screens and one-click shopping, or messages of love and apps of light. Think of Diwali as sharing the fundamental logic that governs the digital — the logic of counting. As we explode with joy this festive season, we count our blessings, our loved ones, the gifts and presents that we exchange. If we are on the new Fitbit trend, we count the calories we consume and burn as we make our way through parties where it is important to see and be seen, compare and contrast, connect with all the people who could be thought of as friends, followers, connectors, or connections.

    While there is no denying that there is a sociality that the festival brings in, there is also a cruel algebra of counting that comes along with it. It is no surprise that as we celebrate the victory of good over evil and right over wrong, we also simultaneously bow our heads to the goddess of wealth in this season.

    Look beyond the glossy surface of Diwali festivities, and you realise that it is exactly like the digital. Digital is about counting. It is right there in the name — digits refer to numbers. Or digits refer to fingers — these counting appendages which we can manipulate and flex in order to achieve desired results. At the core of digital systems is the logic of counting, and counting, as anybody will tell us, is not a benign process. What gets counted gets accounted for, thus producing a ledger of give and take which often becomes the measure of our social relationships.

    I remember, as a child, my mother meticulously making a note of every gift or envelope filled with money that ever came our way from the relatives, so that there would be precise and exact reciprocation. I am certain that there is now an app which can keep track of these exchanges. I am not suggesting that these occasions of gifting are merely mercenary, but they are embodiments of finely calibrated values and worth of relationships defined by proximity, intimacy, hierarchy and distance. The digital produces and works on a similar algorithm, which is often as inscrutable and opaque as the unspoken codes of the Diwali ledger.

    There is something else that happens with counting. The only things that can have value are things that can be counted. I don’t know which ledger counts the coming together of my very distributed family for an evening of chatting, talking, sharing lives and laughter. I don’t know how anybody would reciprocate that one late night when a cousin came to our home and spent hours with my younger brother making a rangoli to surprise the rest of us. I have no idea how they will ever reciprocate gifts that one of the younger kids made at school for all the members of the family.

    Diwali is about the things, but like the digital system, these are things that cannot be counted. And within the digital system, things that cannot be counted are things that get discounted. They become unimportant. They become noise, or rubbish. Our social networks are counting systems that might notice the low frequency of my connections with my extended family but they cannot quantify the joy I hear in the voice of my grandmother when I call her from a different time-zone to catch up with her. Digital systems can only deal with things with value and not their worth.

    I do want to remind myself that there is more to this occasion than merely counting. And for once, I want to go beyond the digital, where my memories of the past and the expectations of the future are not shaped by the digital systems of counting and quantifying. Instead, I want Diwali to be analogue. I shall still be mediating my collectivity with the promises of connectivity, but I want to think of this moment as beyond the logics and logistics of counting that codify our social transactions and take such a central location in our personal functioning. This Diwali, I am rooting for a post-digital Diwali, that accounts for all those things that cannot be counted, but are sometimes the only things that really count.

    CIS Submission on CCWG-Accountability 2nd Draft Proposal on Work Stream 1 Recommendations

    by Pranesh Prakash last modified Nov 23, 2015 02:58 PM
    The Centre for Internet & Society (CIS) submitted the below to ICANN's CCWG-Accountability.

    The CCWG Accountability proposal is longer than many countries' constitutions.  Given that, we will keep our comments brief, addressing a very limited set of the issues in very broad terms.

    Human Rights

    ICANN is unique in many ways.  It is a global regulator that has powers of taxation to fund its own operation.  ICANN is not a mere corporation.  For such a regulator, ensuring fair process (what is often referred to as "natural justice") as well as substantive human rights (such as the freedom of expression, right against discrimination, right to privacy, and cultural diversity) is important.  Given this, and the narrow framing of "free expression and the free flow of information" in Option 1, we believe Option 2 is preferable.

    Diversity

    We are glad that diversity is being recognized as an important principle.  As we noted during the open floor session at ICANN49: [We are] extremely concerned about the accountability of ICANN to the global community.  Due to various decisions made by the US government relating to ICANN's birth, ICANN has had a troubled history with legitimacy.  While it has managed to gain and retain the confidence of the technical community, it still lacks political legitimacy due to its history.  The NTIA's decision has presented us an opportunity to correct this.

    However, ICANN can't hope to do so without going beyond the current ICANN community, which while nominally being 'multistakeholder' and open to all, grossly under-represents those parts of the world that aren't North America and Western Europe.

    Of the 1010 ICANN-accredited registrars, 624 are from the United States, and 7 from the 54 countries of Africa.  In a session yesterday, a large number of the policies that favour entrenched incumbents from richer countries were discussed.  But without adequate representation from poorer countries, and adequate representation from the rest of the world's Internet population, there is no hope of changing these policies.

    This is true not just of the business sector, but of all the 'stakeholders' that are part of global Internet policymaking, whether they follow the ICANN multistakeholder model or another.  A look at the board members of the Internet Architecture Board, for instance, would reveal how skewed the technical community can be, whether in terms of geographic or gender diversity.

    Without greater diversity within the global Internet policymaking communities, there is no hope of equity, respect for human rights — civil, political, cultural, social and economic — and democratic functioning, no matter how 'open' the processes seem to be, and no hope of ICANN accountability either.

    Meanwhile, there are those who are concerned that diversity should not prevail over skill and experience.  Those who have the greatest skill and experience will be those who are insiders in the ICANN system.  To believe that being an insider in the ICANN system ought to be privileged over diversity is wrong.  A call for diversity isn't just political correctness.  It is essential for legitimacy of ICANN as a globally-representative body, and not just one where the developed world (primarily US-based persons) makes policies for the whole globe, which is what it has so far been.  Of course, this cannot be corrected overnight, but it is crucial that this be a central focus of the accountability initiative.

    Jurisdiction, Membership Models and Voting Rights

    The Sole-Member Community Mechanism (SMCM) that has been proposed seems in large part the best manner of dealing with accountability issues available under Californian law relating to public benefit corporations, and is the lynchpin of the whole accountability mechanism under Work Stream 1.

    However, the jurisdictional analysis laid down in 11.3 will only be completed post-transition, as part of Work Stream 2.  Thus the SMCM may not necessarily be the best model under a different legal jurisdiction.  It would be useful to discuss the dependency between these more clearly.  In this vein, it is essential that Article XVIII Section 1 not be designated a fundamental bylaw.  Further, it would be useful to add that for some limited aspects of the transition (such as IANA functioning), ICANN should seek to enter into a host country agreement to provide legal immunity, thus providing a qualification to para 125 ("ICANN accountability requires compliance with applicable legislation, in jurisdictions where it operates."), since the IANA functions operator ought not to be forced by a country to refuse requests made by, for example, North Korea.

    It should also be noted that accountability needs independence, which may be of two kinds: independence of financial source, and independence of appointment.  From what one could gather from the CCWG proposal, the Independent Review Panel will be funded by the budget the ICANN Board prepares, while the appointment process is still unclear.

    One of the most important accountability mechanisms with regard to the IANA functions is that of changing the IANA Functions Operator.  As per the CWG Stewardship's current proposal, the "Post-Transition IANA" won't be an entity that is independent of ICANN.  If the PTI's governance is permanently made part of ICANN's fundamental bylaws (as an affiliate controlled by ICANN), how is it proposed that the IFO be moved from PTI to some other entity if the IANA Functions Review Team so decides? Additionally, for such an important function, the composition of the IFRT should not be left unspecified.

    While it is welcome that a separation is proposed between the IANA budget and budget for rest of ICANN's functioning, the current discussion around budgets seems to be based on the assumption that all IANA functions will be funded by ICANN, whereas if the IANA functions are separated, each community might fund it separately.  That provides two levels of insulation to IANA functions operator(s): separate sources of operational revenue, as well as separate budgets within ICANN.

    It should be noted that there have been some responses that express concern about the shifting of existing power structures within ICANN through some of the proposed alternative voting allocations in the SMCM.  However, rather than present arguments as to why these shifts would be beneficial or harmful for ICANN's overall accountability, these responses seem to assume that any shift from the current power structures is harmful.  This is an unfounded assumption and cannot be a valid reason, nor can speculation about how the United States Congress will behave be a valid reason for rejecting an otherwise valid proposal.  If there are harms, they ought to be clearly articulated: shifts from the status quo and fear of the US Congress aren't valid harms.  Thus, while it is important to consider how different voting rights models might change the status quo while arriving at any judgments, that cannot be the sole criterion for judging their merits.  Further, as the French government notes:

    [T]he French Government still considers that linking Stress Test 18 to a risk of capture of ICANN by governments and NTIA’s requirement that no “government-led or intergovernmental organization solution would be acceptable”, makes no sense. . . . Logically, the risk of capture of ICANN by governments in the future is as low as it is now and in any case, it cannot lead to a “government-led or intergovernmental organization solution”.

    While dealing with the question of relative voting proportions, the community must remember that not all parts of the world are as developed with regard to the domain name industry and civil society as North America, Western Europe, and other developed regions, and thus may not find adequate representation via the SOs.  In many parts of the world, civil society organizations — especially those focussed on Internet governance and domain name policies — are non-existent.  Thus a system that privileges the SOs to the exclusion of other components of a multistakeholder governance model would not be representative or diverse.  A multistakeholder model cannot disproportionately represent business interests over all other interests.

    In this regard, the comments of former ICANN Chairperson, Rod Beckstrom, at ICANN43 ought to be recalled:

    ICANN must be able to act for the public good while placing commercial and financial interests in the appropriate context . . . How can it do this if all top leadership is from the very domain name industry it is supposed to coordinate independently?

    As Kieren McCarthy points out about ICANN:

    The Board does have too many conflicted members
    The NomCom is full of conflicts
    There are not enough independent voices within the organization

    Reforms in these ought to be as crucial to accountability as the membership model.

    The current mechanisms for ensuring transparency, such as the DIDP process, are wholly inadequate.  We have summarized our experience with the DIDP process, and how often we were denied information on baseless grounds in this table.

    Predictive Policing: What is it, How it works, and its Legal Implications

    by Rohan George — last modified Nov 24, 2015 04:31 PM
    This article reviews literature surrounding big data and predictive policing and provides an analysis of the legal implications of using predictive policing techniques in the Indian context.

    Introduction

    For the longest time, humans have been obsessed with prediction. Perhaps the most well-known oracle in history, Pythia, the infallible Oracle of Delphi, was said to predict future events in hysterical outbursts on the seventh day of the month, inspired by the god Apollo himself. This fascination with informing ourselves about future events has hardly subsided. What has changed, however, is the methods we employ to do so. The development of big data technologies, for one, has seen radical applications in many parts of life as we know it, including enhancing our ability to make accurate predictions about the future.

    One notable application of big data to prediction caters to another need as old as human civilisation: the need to protect our communities and cities. The word 'police' itself originates from the Greek word 'polis', meaning city. These two concepts, prediction and policing, come together in the practice of predictive policing, which is the application of computer modelling to historical crime data and metadata to predict future criminal activity[1]. In the subsequent sections, I will introduce predictive policing and explain some of its main methods. Because of the disruptive nature of these technologies, it will also be prudent to expand on the implications predictive technologies have for justice, privacy protections, and protections against discrimination, among others.

    In introducing the concept of predictive policing, my first step is to give a short explanation of current predictive analytics techniques, because these are the techniques that are applied in a law enforcement context as predictive policing.

    What is predictive analytics?

    Facilitated by the availability of big data, predictive analytics uses algorithms to recognise data patterns and predict future outcomes[2]. It encompasses data mining, predictive modelling, machine learning, and forecasting[3], and relies heavily on machine learning and artificial intelligence approaches[4]. The aim of such analysis is to identify relationships among variables that may not be immediately apparent using hypothesis-driven methods.[5] In the mainstream media, one of the most famous stories about the use of predictive analysis comes from the USA and concerns the department store Target and its data analytics practices[6]. Target mined the purchasing patterns of people who signed onto its baby registry. From this it was able to predict approximately when customers were due and target advertisements accordingly. In the noted story, Target was so successful that it predicted a teenage customer's pregnancy before her father knew she was pregnant.[7]

    Examples of predictive analytics

    • Predicting the success of a movie based on its online ratings[8]
    • Many universities, sometimes in partnership with other firms, use predictive analytics to provide course recommendations to students, track student performance, personalize curriculum to individual students and foster networking between students.[9]
    • Predictive Analysis of Corporate Bond Indices Returns[10]

    Relationship between predictive analytics and predictive policing

    The same techniques used in many of the predictive applications mentioned above find application in some predictive policing methods. However, two important points need to be raised:

    First, predictive analytics is actually a subset of predictive policing. While the steps in creating a predictive model (defining a target variable, exposing the model to training data, selecting appropriate features, and finally running predictive analysis[11]) may be the same in a policing context, there are other methods that may be used to predict crime which do not rely on data mining. These techniques may instead combine other methods, such as some of those detailed below, with data about historical crime to generate predictions.
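    A miniature walk-through of those four model-building steps might look like the following sketch. Everything here is invented for illustration: the figures, the features, and the nearest-centroid decision rule do not come from any real predictive policing system.

```python
# Toy illustration of the four model-building steps: define a target
# variable, assemble training data, select features, run prediction.
# All data and the decision rule are hypothetical.

# Steps 1-2: hypothetical training data. Target variable: did a
# burglary occur in the area that week (1) or not (0)?
training_data = [
    # (prior_incidents, vacant_buildings, target)
    (9, 4, 1),
    (7, 5, 1),
    (8, 3, 1),
    (1, 0, 0),
    (2, 1, 0),
    (0, 1, 0),
]

# Step 3: feature selection -- here we simply keep both numeric columns.
def features(row):
    return row[:2]

# "Training": compute the mean feature vector (centroid) of each class.
def fit(rows):
    centroids = {}
    for label in (0, 1):
        members = [features(r) for r in rows if r[2] == label]
        centroids[label] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids

# Step 4: predict by assigning the class of the nearest centroid.
def predict(centroids, feats):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], feats))

model = fit(training_data)
print(predict(model, (8, 4)))  # resembles the high-risk areas -> 1
print(predict(model, (1, 1)))  # resembles the low-risk areas -> 0
```

    Real systems use far richer models, but the pipeline shape (target, training data, features, prediction) is the same.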

    Second, in her article "Policing by Numbers: Big Data and the Fourth Amendment"[12], Joh categorises three main applications of big data in policing: predictive policing, domain awareness systems, and genetic data banks. Genetic data banks refer to large databases of DNA collected as part of the justice system. Issues arise when the DNA collected is repurposed to conduct familial searches instead of being used to corroborate identity; familial searches may have disproportionate impacts on minority races. Domain awareness systems use various computer software and other digital surveillance tools, such as Geographical Information Systems[13] or more illicit ones such as Black Rooms[14], to "help police create a software-enhanced picture of the present, using thousands of data points from multiple sources within a city"[15]. Joh was right to separate predictive policing from domain awareness systems, especially when it comes to analysing the implications of the various applications of big data in policing.

    In such an analysis, the issues surrounding predictive technologies often get conflated with larger issues about the application of big data to law enforcement. That opens the debate up to questions about overly intrusive evidence gathering and mass surveillance systems, which, though used alongside predictive technology, are not themselves predictive in nature. In this article, I concentrate on the specific implications that arise from predictive methods.

    One important point regarding the impact of predictive policing is how the insights these methods offer are used. There is much support for the idea that predictive policing does not replace traditional policing methods but augments them. The RAND report specifically cites as one myth about predictive policing that "the computer will do everything for you"[16]. In reality, police officers need to act on the recommendations provided by the technology.

    What is Predictive policing?

    Predictive policing is the "application of analytical techniques-particularly quantitative techniques-to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions".[17] It is important to note that the use of data and statistics to inform policing is not new. Indeed, even twenty years ago, before the deluge of big data we have today, law enforcement agencies such as the New York Police Department (NYPD) were already using crime data in a major way. To keep track of crime trends, the NYPD used the software CompStat[18] to map "crime statistics along with other indicators of problems, such as the locations of crime victims and gun arrests"[19]. Senior officers used the information provided by CompStat to monitor crime trends on a daily basis, and such monitoring became an instrumental way to track the performance of police agencies[20]. CompStat has since seen application in many other jurisdictions[21].

    But what is new is the amount of data available for collection, as well as the ease with which organisations can analyse and draw insightful results from that data. Specifically, new technologies allow for far more rigorous interrogation of data and wide-ranging applications, including adding greater accuracy to the prediction of future incidence of crime.

    Predictive Policing methods

    Some methods of predictive policing apply known standard statistical methods, while others modify these standard techniques. Predictive techniques that forecast future criminal activity can be framed around six analytic categories. The categories overlap, in the sense that multiple techniques are combined to create actual predictive policing software, and similar theories of criminology undergird many of them, but categorising them in this way helps clarify the concept of predictive policing. The basis for the categorisation below comes from a RAND Corporation report entitled 'Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations'[22], a comprehensive and detailed contribution to scholarship in this nascent area.

    Hot spot analysis: Methods involving hot spot analysis attempt to "predict areas of increased crime risk based on historical crime data"[23]. The premise behind such methods lies in the adage that "crime tends to be lumpy"[24]. Hot spot analysis seeks to map out previous incidences of crime in order to inform predictions of future crime.
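    A minimal, purely illustrative sketch of the idea: bucket historical incident coordinates into grid cells and flag the busiest cells as hot spots. The coordinates, the cell size and the threshold below are all hypothetical.

```python
# Grid-based hot spot sketch: count historical incidents per grid cell
# and flag cells with repeated incidents. All values are hypothetical.
from collections import Counter

incidents = [(0.2, 0.3), (0.25, 0.35), (0.21, 0.33),   # one cluster
             (5.1, 5.2), (5.15, 5.25),                 # another cluster
             (9.0, 1.0)]                               # isolated incident

CELL = 1.0  # grid cell size

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

counts = Counter(cell_of(x, y) for x, y in incidents)

# Flag cells with more than one historical incident as hot spots.
hot_spots = [cell for cell, n in counts.items() if n > 1]
print(sorted(hot_spots))  # [(0, 0), (5, 5)]
```

    Production systems use kernel density estimation and finer grids, but the "crime is lumpy" premise is the same: yesterday's clusters mark today's patrol priorities.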

    Regression methods: A regression aims to find relationships between independent variables (factors that may influence criminal activity) and the dependent variables one aims to predict. Hence, this method tracks more variables than crime history alone.
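    In miniature, and with entirely hypothetical figures, a one-variable version of this idea can be sketched with closed-form least squares; the "unlit streets" predictor is invented purely for illustration.

```python
# One-variable regression sketch: fit monthly incident counts against a
# single hypothetical independent variable (number of unlit streets per
# area) and predict for a new area. All figures are invented.
xs = [1, 2, 3, 4, 5]      # unlit streets per area (hypothetical)
ys = [3, 5, 7, 9, 11]     # monthly incidents per area (hypothetical)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    return intercept + slope * x

print(predict(6))  # extrapolated incidents for an area with 6 unlit streets
```

    A real model would use many independent variables and out-of-sample validation, but the principle of relating environmental factors to crime counts is the same.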

    Data mining techniques: Data mining attempts to recognise patterns in data and use them to make predictions about the future. One important way the data mining methods used in policing vary is in the type of algorithm used to mine the data, which depends on the nature of the data the predictive model is trained on and will interrogate in the future. Two broad categories of algorithms commonly used are clustering algorithms and classification algorithms:

    · Clustering algorithms "form a class of data mining approaches that seek to group data into clusters with similar attributes" [25]. One example of clustering algorithms is spatial clustering algorithms, which use geospatial crime incident data to predict future hot spots for crime[26].

    · Classification algorithms "seek to establish rules assigning a class or label to events"[27]. These algorithms use training data sets "to learn the patterns that determine the class of an observation"[28]. The patterns identified by the algorithm are then applied to future data, and, where applicable, the algorithm will recognise similar patterns in that data. This can be used, for example, to make predictions about future criminal activity.
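    As a purely illustrative sketch of the spatial clustering idea mentioned above, a few iterations of a bare-bones k-means loop can group hypothetical incident coordinates into clusters; the coordinates and starting centroids are invented.

```python
# Bare-bones k-means over hypothetical incident coordinates, in the
# spirit of the spatial clustering algorithms described above.
def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        groups = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda j: (p[0] - centroids[j][0]) ** 2
                                + (p[1] - centroids[j][1]) ** 2)
            groups[i].append(p)
        # Update step: move each centroid to the mean of its group.
        centroids = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

incidents = [(0.0, 0.0), (0.4, 0.2), (0.2, 0.4),   # one pocket of crime
             (8.0, 8.0), (8.4, 8.2), (8.2, 8.4)]   # another pocket
centers = kmeans(incidents, centroids=[(0.0, 0.0), (1.0, 1.0)])
print(centers)  # roughly [(0.2, 0.2), (8.2, 8.2)]
```

    The resulting cluster centres are candidate hot spots; a classification algorithm would instead learn labels (e.g. "burglary-prone" or not) from labelled training data, as in the pipeline sketch earlier.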

    Near-repeat methods: Near-repeat methods work on the assumption that future crimes will take place close in time and location to current crimes. Hence, it could be postulated that areas of high crime will experience more crime in the near future[29]. This involves the use of a 'self-exciting' algorithm, very similar to the algorithms used to model earthquake aftershocks[30]. The premise undergirding such methods is very similar to that of hot spot analysis.
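    The 'self-exciting' idea can be sketched in miniature: each past crime briefly raises the predicted risk in its own area, with the boost decaying over time on top of a constant background rate. The parameters, area labels and event times below are entirely hypothetical.

```python
# Minimal self-exciting ("near-repeat") risk sketch: each past crime in
# an area adds an exponentially decaying boost to that area's risk.
# All parameters and events are hypothetical.
import math

MU, ALPHA, BETA = 0.1, 1.0, 0.5   # background rate, boost size, decay rate

def risk(area, t, events):
    """Risk of crime in `area` at time `t`, given past (area, time) events."""
    return MU + sum(ALPHA * math.exp(-BETA * (t - et))
                    for ea, et in events if ea == area and et < t)

past = [("A", 1.0), ("A", 9.0), ("B", 2.0)]   # (area, day) of past crimes

# Area A had a crime yesterday, so its risk today (day 10) is elevated
# relative to area B, whose only crime was eight days ago.
print(risk("A", 10.0, past) > risk("B", 10.0, past))  # True
```

    This is the same mathematical family as earthquake aftershock models: a recent burglary makes another burglary nearby more likely for a short window, after which risk relaxes back to the background rate.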

    Spatiotemporal analysis: This method uses "environmental and temporal features of the crime location"[31] as the basis for predicting future crime. By combining the spatiotemporal features of the crime area with crime incident data, police can predict the location and time of future crimes. Factors considered include the timing of crimes, weather, distance from highways, time from payday and many more.

    Risk terrain analysis: This method analyses other factors useful in predicting crime, such as "the social, physical, and behavioural factors that make certain areas more likely to be affected by crime"[32].

    The various methods listed above are used, often together, to predict where and when a crime may take place, or even its potential victims. The unifying thread relating these methods is their dependence on historical crime data.

    Examples of predictive policing

    Most uses of predictive policing that have been studied and reviewed in scholarly work come from the USA, though I also detail one case study from Derbyshire, UK. Below is a collation of practical applications of the methods raised above.

    Hot spot analysis in Sacramento: In February 2011, the Sacramento Police Department began using hot spot analysis, along with research on the optimal patrol time needed to act as a sufficient deterrent, to inform how it patrols high-risk areas. This policy was aimed at preventing serious crimes by patrolling the predicted hot spots. In places where there was such patrolling, serious crimes fell by a quarter, with no significant increase in such crimes in surrounding areas[33].

    Data mining and hot spot mapping in Derbyshire, UK: The Safer Derbyshire Partnership, a group of law enforcement agencies and municipal authorities, sought to identify juvenile crime hotspots[34]. It used MapInfo software to combine "multiple discrete data sets to create detailed maps and visualisations of criminal activity, including temporal and spatial hotspots"[35]. This information told law enforcement how to deploy its resources optimally.

    Regression models in Pittsburgh: Researchers used reports from the Pittsburgh Bureau of Police about violent crimes and "leading indicator"[36] crimes, crimes that were relatively minor but could signal potential future violent offences. The researchers analysed areas with violent crimes, used as the dependent variable, to test whether violent crimes in certain areas could be predicted from the leading indicator data. Of the 93 significant violent crime areas studied, 19 were successfully predicted by the leading indicator data.[37]

    Risk terrain modelling in Morris County, New Jersey: Police in Morris County used risk terrain analysis to tackle violent crimes and burglaries. They considered five inputs in their model: "past burglaries, the address of individuals recently arrested for property crimes, proximity to major highways, the geographic concentration of young men and the location of apartment complexes and hotels."[38] Morris County law enforcement officials linked the significant reductions in violent and property crime to their use of risk terrain modelling[39].

    Near-repeat and hot spot analysis by the Santa Cruz Police Department: The department uses PredPol software, which applies Mohler's algorithm[40] to a database of five years' worth of crime data to assess the likelihood of future crime occurring in geographic areas within the city. Before going on shift, officers receive information identifying the 15 areas with the highest probability of crime[41]. The initiative has been cited as very successful at reducing burglaries, and has also been used in Los Angeles and Richmond, Virginia[42].

    Data mining and spatiotemporal analysis to predict future criminal activity in Chicago: Officers in the Chicago Police Department made visits to people their software predicted were likely to be involved in violent crimes[43], guided by an algorithm-generated "Heat List"[44]. The inputs used in the predictions include certain types of arrest records, gun ownership, social networks[45] (police analysis of social networking is also a rising trend in predictive policing[46]) and, generally, the type of people one is acquainted with[47], among others, but the full list of factors is not public. Based on the list, police officers visit people's homes (or sometimes mail letters) to offer social services or deliver warnings about the consequences of offending. Drawing in part on the information provided by the algorithm, officers may give people on the Heat List information about vocational training programs, or warnings about how federal law provides harsher punishments for reoffending[48].

    Predictive policing in India

    In this section, I map out some of the developments in the field of predictive policing within India. On the whole, predictive policing is still very new in India, with Jharkhand being the only state that appears to already have concrete plans in place to introduce predictive policing.

    Jharkhand Police

    The Jharkhand police began developing their IT infrastructure, such as a Geographic Information System (GIS) and a server room, when they received funding of Rs 18.5 crore from the Ministry of Home Affairs[49]. The Open Group on E-governance (OGE), founded as a collaboration between the Jharkhand Police and the National Informatics Centre[50], is now a multi-disciplinary group that takes on various IT-related projects[51]. With regard to predictive policing, some members of the OGE began developing, in 2013, data mining software to scan digitised online records. The emerging crime trends "can be a building block in the predictive policing project that the state police want to try."[52]

    The Jharkhand Police was also reported in 2012 to be in the final stages of forming a partnership with IIM-Ranchi[53]. The Jharkhand police reportedly aimed to tap into IIM's advanced business analytics skills[54], skills that can be very useful in a predictive policing context. Mr Pradhan suggested that "predictive policing was based on intelligence-based patrol and rapid response"[55] and that it could go a long way towards dealing with the threat of Naxalism in Jharkhand[56].

    However, in Jharkhand the emphasis appears to be on developing a massive domain awareness system, collecting data and creating new ways to present that data to officers on the ground, rather than on architecting and using predictive policing software. For example, the Jharkhand police now have in place "a Naxal Information System, Crime Criminal Information System (to be integrated with the CCTNS) and a GIS that supplies customised maps that are vital to operations against Maoist groups"[57]. The Jharkhand police's "Crime Analytics Dashboard"[58] shows the incidence of crime by type and location, presents it in an accessible portal providing up-to-date information, and undoubtedly raises the situational awareness of officers. Arguably, the domain awareness systems taking shape in Jharkhand will pave the way for predictive policing methods to be applied in the future. These systems and hot spot maps seem to be the start of a new age of policing in Jharkhand.

    Predictive Policing Research

    One promising idea for predictive policing in India comes from research conducted by Lavanya Gupta and others entitled "Predicting Crime Rates for Predictive Policing"[59], a submission for the Gandhian Young Technological Innovation Award. The research uses regression modelling to predict future crime rates. Drawing from First Information Reports (FIRs) of violent crimes (murder, rape, kidnapping, etc.) from the Chandigarh Police, the team attempted "to extrapolate annual crime rate trends developed through time series models. This approach also involves correlating past crime trends with factors that will influence the future scope of crime, in particular demographic and macro-economic variables"[60]. The researchers used early crime data as the training data for their model, which, after some testing, eventually achieved an accuracy of around 88.2%.[61] On the face of it, ideas like this could be the starting point for the introduction of predictive policing in India.
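    In miniature, the trend-extrapolation part of such an approach might look like the sketch below: fit a straight line to annual incident counts, project a held-out year, and score the projection. The figures are invented and bear no relation to the Chandigarh FIR data used in the actual study.

```python
# Toy trend extrapolation: least-squares line through annual counts,
# scored against one hypothetical held-out year. All figures invented.
years = [2010, 2011, 2012, 2013]
counts = [100, 112, 118, 131]     # hypothetical annual violent-crime counts
actual_2014 = 145                 # hypothetical held-out test value

# Ordinary least squares on (year, count).
n = len(years)
mx = sum(years) / n
my = sum(counts) / n
slope = sum((x - mx) * (y - my) for x, y in zip(years, counts)) / \
        sum((x - mx) ** 2 for x in years)
intercept = my - slope * mx

predicted_2014 = intercept + slope * 2014
accuracy = 100 * (1 - abs(predicted_2014 - actual_2014) / actual_2014)
print(round(predicted_2014, 1), round(accuracy, 1))
```

    The published study layered demographic and macro-economic variables on top of the time-series trend; the sketch shows only the bare extrapolate-and-score loop that such an accuracy figure implies.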

    The rest of India's law enforcement bodies do not appear to be lagging far behind. At the 44th All India Police Science Congress, held in Gandhinagar, Gujarat in March this year, one of the themes for discussion was the "Role of Preventive Forensics and latest developments in Voice Identification, Tele-forensics and Cyber Forensics"[62]. Mr A K Singh (Additional Director General of Police, Administration), the chairman of the event, also said in an interview that a round table of DGs (Directors General of Police) would be held at the conference to discuss predictive policing[63]. Perhaps predictive policing in India is not that far from reality.

    CCTNS and the building blocks of Predictive policing

    The Ministry of Home Affairs conceived of the Crime and Criminal Tracking Network and Systems (CCTNS) as part of its national e-Governance plans. According to the website of the National Crime Records Bureau (NCRB), the CCTNS aims to develop "a nationwide networked infrastructure for evolution of IT-enabled state-of-the-art tracking system around 'investigation of crime and detection of criminals' in real time"[64].

    Plans for predictive policing seem to be in the works, but the first steps needed across India's police forces involve digitising police data collection and connecting law enforcement agencies. The NCRB's website describes the current possibility of exchanging information between neighbouring police stations, districts or states as "next to impossible"[65]. The aim of the CCTNS is precisely to address this gap and to integrate and connect the segregated law enforcement arms of the state in India, which would be a foundational step in any initiative to apply predictive methods.

    What are the implications of using predictive policing? Lessons from the USA

    Despite the moves by law enforcement agencies to adopt predictive policing, the implications of predictive policing methods are far from clear. This section examines these implications for the administration of justice and the use of predictive evidence in law, as well as for individual privacy. It frames the existing debates surrounding these issues and aims to apply these principles to an Indian context.

    Justice, Privacy & the Fourth Amendment

    Two key concerns about how predictive policing methods may be used by law enforcement relate to how insights from these methods are acted upon and how courts interpret them. In the USA, this issue finds its place under the scope of Fourth Amendment jurisprudence. The Fourth Amendment provides that all citizens are "secure from unreasonable searches and seizures of property by the government"[66]. In this sense, the Fourth Amendment forms the basis for search and surveillance law in the USA.

    A central aspect of Fourth Amendment jurisprudence derives from Katz v. United States. In Katz, the FBI attached a microphone to the outside of a public phone booth to record the conversations of Charles Katz, who was making phone calls related to illegal gambling. The court ruled that such actions constituted a search within the meaning of the Fourth Amendment. The ruling affirmed constitutional protection of all areas where someone has a "reasonable expectation of privacy"[67].

    Later cases have provided useful tests for situations where government surveillance tactics may or may not be lawful, depending on whether they violate one's reasonable expectation of privacy. For example, in United States v. Knotts, the court held that "police use of an electronic beeper to follow a suspect surreptitiously did not constitute a Fourth Amendment search"[68]. In fact, some argue that the Supreme Court's reasoning in such cases suggests that "any 'scientific enhancement' of the senses used by the police to watch activity falls outside of the Fourth Amendment's protections if the activity takes place in public"[69]. This reasoning is based on the third-party doctrine, which holds that "if you voluntarily provide information to a third party, the Fourth Amendment does not preclude the government from accessing it without a warrant"[70]. The clearest exposition of this reasoning was in Smith v. Maryland, where the court noted that "this Court consistently has held that a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties"[71].

    However, the third-party doctrine has seen some challenge in recent times. In United States v. Jones, the court ruled that the government's warrantless GPS tracking of Jones's vehicle, 24 hours a day for 28 days, violated his Fourth Amendment rights[72]. Though the majority held that the warrantless GPS tracking constituted a search, it was in a concurring opinion by Justice Sonia Sotomayor that such intrusive warrantless surveillance was said to infringe one's reasonable expectation of privacy. As Newell reflected on Sotomayor's opinion,

    "Justice Sotomayor stated that the time had come for Fourth Amendment jurisprudence to discard the premise that legitimate expectations of privacy could only be found in situations of near or complete secrecy. Sotomayor argued that people should be able to maintain reasonable expectations of privacy in some information voluntarily disclosed to third parties"[73].

    She said that the court's current reasoning on what constitutes reasonable expectations of privacy in information disclosed to third parties, such as email or phone records or even purchase histories, is "ill-suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks"[74].

    Predictive policing vs. Mass surveillance and Domain Awareness Systems

    However, there is an important distinction to be drawn between these cases and evidence from predictive policing, which has to do with the nature of the evidence collection. Arguably, what we see from Jones and other cases is that the use of mass surveillance and domain awareness systems (drawing on Joh's categorisation, mentioned above, of domain awareness systems as distinct from predictive policing) could potentially encroach on one's reasonable expectation of privacy. Predictive policing, however, and the possible harms to justice associated with it, its predictive harms, are quite distinct from what courts have heard thus far.

    The reason the risks of predictive harms are distinct from the privacy harms of information gathering relates to the nature of predictive policing technologies and how they are used. It is highly unlikely that the evidence submitted by the state to convict an offender will be mainly predictive in nature. For example, would it be possible to convict an accused person solely on the premise that he was predicted to be highly likely to commit a crime, and that he subsequently did? The legal standard of proving guilt beyond a reasonable doubt[75] can hardly be met solely on predictive evidence, for a multitude of reasons. Predictive policing methods can, at most, be said to inform police about the risk of someone committing a crime, or of crime happening at a certain location, as demonstrated above.

    Predictive policing and Criminal Procedure

    It may therefore pay to analyse how predictive policing may be used across the various stages of the criminal justice process. In analysing the stages of criminal procedure, from opening an investigation and gathering evidence to arrest, trial, conviction and sentencing, we see that as the individual becomes subject to more serious incursions or sanctions by the state, a higher standard of certainty about wrongdoing and a higher burden of proof are required to legitimise that particular action.

    Hence, at the more advanced stages of the criminal justice process, such as seeking arrest warrants or at trial, it is very unlikely that predictive policing on its own can have a tangible impact, because predictive evidence is probability-based: it aims to calculate the risk of future crime occurring based on statistical analysis of past crime data[76]. While extremely useful, probabilities on their own will not come remotely close to meeting the legal standard of proving guilt beyond reasonable doubt. It is at the earlier stages of the criminal justice process, such as applying for search warrants and searching suspicious people while on patrol, that predictive policing might see more widespread application.

    In fact, in the law enforcement context, prediction as a concept is not new to justice. Both courts and law enforcement officials already make predictions about the future likelihood of crimes. In the case of issuing warrants, the Fourth Amendment requires law enforcement officials to show that the potential search is based "upon probable cause"[77] in order for a judge to grant a warrant. In Brinegar v. United States, probable cause was defined as existing "where the facts and circumstances within the officers' knowledge, and of which they have reasonably trustworthy information, are sufficient in themselves to warrant a belief by a man of reasonable caution that a crime is being committed"[78]. Again, this legal standard seems too high for predictive evidence to meet.

    However, the police also have an important role in preventing crime by looking out for potential crimes while on patrol or conducting surveillance. When the police stop a civilian on the road to search him, reasonable suspicion must be established. This standard was defined most clearly in Terry v. Ohio, which required police to "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[79]. Therefore, "reasonable suspicion that 'criminal activity may be afoot' is at base a prediction that the facts and circumstances warrant the reasonable prediction that a crime is occurring or will occur"[80]. Despite the assertion that "there are as of yet no reported cases on predictive policing in the Fourth Amendment context"[81], examining the impact of predictive policing on the doctrine of reasonable suspicion could be very instructive in understanding the implications for justice and privacy[82].

    Predictive Policing and Reasonable Suspicion

    Ferguson's insightful contribution to this area of scholarship identifies existing areas where prediction already takes place in policing and analogises them to a predictive policing context[83]. These three areas are responding to tips, profiling, and high crime areas (hot spots).

    Tips

    Tips are pieces of information shared with the police by members of the public. Tips, whether anonymous or from known police informants, often predict the future actions of certain people and require the police to act on that information. The precedent for understanding the role of tips in probable cause comes from Illinois v. Gates[84], where it was held that an informant's "veracity", "reliability" and "basis of knowledge" remain "highly relevant in determining the value"[85] of the tip. Anonymous tips need to be sufficiently detailed, timely and individualised[86] to justify reasonable suspicion[87]. And when an informant is known to be reliable, that prior reliability may justify reasonable suspicion even where the basis of knowledge is weak[88].

    Ferguson argues that although predictive policing cannot provide individualised tips, reliable tips about certain areas can be considered a parallel to predictive policing[89]. And since the courts have shown a preference for reliability even in the face of a weak basis of knowledge, it is possible to see the reasonable suspicion standard change in its application[90]. It also implies that Fourth Amendment protections may be different in places where crime is predicted to occur[91].

    Profiling

    Despite the negative connotations and controversial overtones of the word, profiling is already a method commonly used by law enforcement. For example, after a crime has been committed and general features of the suspect have been identified by witnesses, police often stop civilians who fit the description. Another example of profiling is common in combating drug trafficking[92]: agents at airports keep track of travellers and watch for suspicious behaviour, and based on their experience of the common traits distinguishing drug traffickers from regular travellers (a profile), they may search travellers who fit the profile[93]. In United States v. Sokolow[94], the court "recognized that a drug courier profile is not an irrelevant or inappropriate consideration that, taken in the totality of circumstances, can be considered in a reasonable suspicion determination"[95]. Similar lines of thinking could be employed in observing people exchanging small amounts of money in an area known for high levels of drug activity, conceiving predictive actions as a form of profile[96].

    It is valid to consider predictive policing a form of profiling[97], but Ferguson argues that the predictive policing context means this 'new form' of profiling could change Fourth Amendment analysis. The premise behind this argument is that a prediction made by some algorithm about a high risk of crime in a certain area could be taken in conjunction with observations of ordinarily innocuous events; read in the totality of circumstances, these two threads may justify individualised reasonable suspicion[98]. For example, a man looking into cars in a parking lot may not by himself justify reasonable suspicion, but taken together with a prediction of a high risk of car theft at that locality, it may well do so. It is this impact of predictive policing, influencing the analysis of reasonable suspicion in the totality of circumstances, that may present new questions for courts examining Fourth Amendment protections.

    Profiling, Predictive Policing and Discrimination

The above sections have already noted that law enforcement agencies utilize profiling methods in their operations. And as the sections on how predictive analytics works and on methods of predictive policing make clear, predictive policing incorporates the development of profiles for predicting future criminal activity. Concerns that predictive models may generate discriminatory predictions are therefore serious and need addressing. Potential discrimination may be either overt, though that is far less likely, or unintended. A valuable case study shedding light on such discriminatory data mining practices can be found in US labour law. It was shown how predictive models could be discriminatory at various stages, from conceptualising the model and training it with training data to eventually selecting inappropriate features to search for[99]. It is also possible for data scientists to (intentionally or not) use proxies for identifiers like race, income level, health condition and religion. Barocas and Selbst argue that "the current distribution of relevant attributes-attributes that can and should be taken into consideration in apportioning opportunities fairly-are demonstrably correlated with sensitive attributes"[100]. The result may be unintended discrimination, in which the subjective and implicit biases embedded in predictive models are reflected in their predictions, or discrimination that is simply never accounted for in the first place. While I have not found any case law where courts have examined such situations in a criminal context, at the very least, law enforcement agencies need to be aware of these possibilities and guard against any form of discriminatory profiling.
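The proxy problem Barocas and Selbst describe can be illustrated with a small sketch. Everything here is hypothetical - the synthetic records, the pincodes, the arrest rates - but it shows how a model trained without any sensitive attribute can still produce predictions that differ sharply by group when a neutral-looking feature (here, a pincode) is correlated with group membership:

```python
import random

random.seed(0)

# Hypothetical synthetic data: each record has a neutral-looking feature
# (pincode) and a sensitive attribute (group) the model never sees.
# Residential segregation makes pincode a proxy for group.
records = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Group A lives mostly in pincode 110001, group B mostly in 110002.
    pincode = "110001" if (group == "A") == (random.random() < 0.9) else "110002"
    # Historical 'arrest' labels reflect heavier past policing of 110001.
    arrested = random.random() < (0.30 if pincode == "110001" else 0.05)
    records.append((pincode, group, arrested))

# A naive predictive model: risk score = historical arrest rate per pincode.
# Note it is trained only on (pincode, arrested); group is excluded.
rates = {}
for pincode, _, arrested in records:
    hits, total = rates.get(pincode, (0, 0))
    rates[pincode] = (hits + arrested, total + 1)
risk = {p: hits / total for p, (hits, total) in rates.items()}

def predicted_high_risk(pincode, threshold=0.15):
    return risk[pincode] > threshold

# Despite never using 'group', predictions differ sharply by group,
# because pincode proxies for it.
flagged = {"A": 0, "B": 0}
counts = {"A": 0, "B": 0}
for pincode, group, _ in records:
    counts[group] += 1
    flagged[group] += predicted_high_risk(pincode)
rate_a = flagged["A"] / counts["A"]
rate_b = flagged["B"] / counts["B"]
```

The model only ever sees pincodes and historical arrest labels, yet members of one group end up flagged far more often than the other; the bias enters through the correlated feature and the skewed historical data, not through any explicit rule.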

However, Ferguson argues that "the precision of the technology may in fact provide more protection for citizens in broadly defined high crime areas"[101]. This is because the label of a 'high-crime area' may no longer apply to large areas but instead to very specific sites of criminal activity. Where previously entire neighbourhoods might have been scrutinised as high-crime areas, police may now be more precise in locating and policing areas of high crime, such as an individual street corner or a particular block of flats instead of an entire locality.

    Hot Spots

Courts have also considered the existence of notoriously 'high-crime areas' as part of the reasonable suspicion analysis[102]. This was seen in Illinois v. Wardlow[103], where the "high crime nature of an area can be considered in evaluating the officer's objective suspicion"[104]. Many cases have since applied this reasoning without scrutinising the predictive value of such a label. In fact, Ferguson asserts that such labelling has questionable evidential value[105]. He uses the facts of the Wardlow case itself to challenge the 'high crime area' factor, citing the reasoning of one of the judges in the case:

    "While the area in question-Chicago's District 11-was a low-income area known for violent crimes, how that information factored into a predictive judgment about a man holding a bag in the afternoon is not immediately clear."[106]

Especially because "the most basic models of predictive policing rely on past crimes"[107], it is likely that predictive policing methods like hot spot or spatiotemporal analysis and risk terrain modelling may help to gather or build data models about high-crime areas. Furthermore, the mathematical rigour of predictive modelling could help clarify the term 'high crime area'. As Ferguson argues, "courts may no longer need to rely on the generalized high crime area terminology when more particularized and more relevant information is available"[108].
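At its simplest, the hot spot analysis mentioned above aggregates past incidents into grid cells and flags the cells with unusual counts. The following sketch (the coordinates, the 100m cell size and the threshold are all made up for illustration) shows how the 'high-crime area' label attaches to a single grid square rather than a whole neighbourhood:

```python
from collections import Counter

# Hypothetical past incident locations (metres east/north of a datum).
incidents = [
    (120, 80), (130, 85), (125, 90), (118, 88), (132, 79),    # one tight cluster
    (610, 420), (900, 150), (340, 700), (55, 930), (480, 510) # scattered elsewhere
]

CELL = 100  # 100m x 100m grid cells

def cell_of(x, y):
    """Map a location to its grid cell."""
    return (x // CELL, y // CELL)

counts = Counter(cell_of(x, y) for x, y in incidents)

# Flag only cells whose incident count stands out: the 'high-crime area'
# label attaches to one grid square, not to the whole neighbourhood.
hot_spots = [cell for cell, n in counts.items() if n >= 3]
```

Real systems such as risk terrain modelling layer in many more data sources, but the basic point stands: the output is a particularised location rather than a generalised label for an entire locality.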

    Summary

Ferguson synthesises four themes which encapsulate reasonable suspicion analysis:

    1. Predictive information is not enough on its own. Instead, it is "considered relevant to the totality of circumstances, but must be corroborated by direct police observation"[109].
    2. The prediction must also "be particularized to a person, a profile, or a place, in a way that directly connects the suspected crime to the suspected person, profile, or place"[110].
    3. It must also be detailed enough to distinguish a person or place from others not the focus of the prediction [111].
    4. Finally, predicted information becomes less valuable over time. Hence it must be acted on quickly or be lost [112].
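Ferguson's fourth point, that predictive information loses value over time, could be operationalised as a simple decay function. A minimal sketch, assuming a hypothetical 24-hour half-life (the actual rate would depend on the crime type and the model):

```python
# Hypothetical half-life: the weight of a prediction halves every 24 hours.
HALF_LIFE_HOURS = 24.0

def prediction_weight(age_hours):
    """Relative weight of a prediction that is age_hours old."""
    return 0.5 ** (age_hours / HALF_LIFE_HOURS)

fresh = prediction_weight(0)          # full weight
day_old = prediction_weight(24)       # half weight
week_old = prediction_weight(7 * 24)  # negligible weight
```

On this (assumed) model a week-old prediction retains less than 1% of its original weight, matching the intuition that stale predictions should carry little weight in a reasonable suspicion analysis.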

    Conclusions from America

The main conclusion to draw from the parallels between existing predictions in Fourth Amendment law and predictive policing is that "predictive policing will impact the reasonable suspicion calculus by becoming a factor within the totality of circumstances test"[113]. Naturally, this reaffirms the imperative for predictive techniques to collect reliable data[114] and analyse it transparently[115]. Moreover, since predictive methods become part of the reasonable suspicion calculus, courts need to be able to examine the predictive process in order to evaluate the reliability of the data and the methods used. This has implications for how hearings may be conducted, for the training legal adjudicators may require, and more. Another important concern is that the model of predictive information corroborated by direct police observation[116] may mean that in areas predicted to have a low risk of crime, the reasonable suspicion doctrine works against law enforcement; those areas may also receive less patrolling effort as a result of the predictions.

    Implications for India

While there have been no cases directly involving predictive policing methods, it would be prudent to examine the parts of Indian law which would inform the calculus on the lawfulness of their use. A useful starting point is the observation that prediction is not in itself a novel concept in justice; it is already used by courts and law enforcement in numerous circumstances.

    Criminal Procedure in Non-Warrant Contexts

The most logical way to begin analysing the legal implications of predictive policing in India is to identify parallels between American and Indian criminal procedure, specifically instances where 'reasonable suspicion' or some analogous requirement exists to justify police searches.

In non-warrant scenarios, the conditions under which officers may conduct a warrantless search are found in Section 165 of the Code of Criminal Procedure (Cr PC). For clarity, I have set out section 165(1) in full:

"Whenever an officer in charge of a police station or a police officer making an investigation has reasonable grounds for believing that anything necessary for the purposes of an investigation into any offence which he is authorised to investigate may be found in any place within the limits of the police station of which he is in charge, or to which he is attached, and that such thing cannot in his opinion be otherwise obtained without undue delay, such officer may, after recording in writing the grounds of his belief and specifying in such writing, so far as possible, the thing for which search is to be made, search, or cause search to be made, for such thing in any place within the limits of such station." [117]

However, India differs from the USA in that its Cr PC also allows the police to arrest individuals without a warrant. As observed in Gulab Chand Upadhyaya vs State of U.P., "Section 41 Cr PC gives the power to the police to arrest without warrant in cognizable offences, in cases enumerated in that Section. One such case is of receipt of a 'reasonable complaint' or 'credible information' or 'reasonable suspicion'"[118]. As above, I have set out section 41(1)(a) in full:

    "41. When police may arrest without warrant.

    (1) Any police officer may without an order from a Magistrate and without a warrant, arrest any person-

    (a) who has been concerned in any cognizable offence, or against whom a reasonable complaint has been made, or credible information has been received, or a reasonable suspicion exists, of his having been so concerned"[119]

    In analysing the above sections of Indian criminal procedure from a predictive policing angle, one may find both similarities and differences between the proposed American approach and possible Indian approaches to interpreting or incorporating predictive policing evidence.

    Similarity of 'reasonable suspicion' requirement

For one, the requirement of "reasonable grounds" or "reasonable suspicion" seems analogous to the American doctrine of reasonable suspicion. This suggests that the concepts used in forming reasonable suspicion, such as the requirement that police "be able to point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion"[120], may also be useful in the Indian context.

One case which sheds light on an Indian interpretation of reasonable suspicion or grounds is State of Punjab v. Balbir Singh[121]. In that case, the court observed a requirement of "reason to believe that such an offence under Chapter IV has been committed and, therefore, an arrest or search was necessary as contemplated under these provisions"[122] in the context of Sections 41 and 42 of The Narcotic Drugs and Psychotropic Substances Act, 1985[123]. In examining the requirement of having "reason to believe", the court drew on Partap Singh (Dr) v. Director of Enforcement, Foreign Exchange Regulation Act[124], where the judge observed that "the expression 'reason to believe' is not synonymous with subjective satisfaction of the officer. The belief must be held in good faith; it cannot be merely a pretence…"[125]

In light of this, the judge in Balbir Singh remarked that "whether there was such reason to believe and whether the officer empowered acted in a bona fide manner, depends upon the facts and circumstances of the case and will have a bearing in appreciation of the evidence"[126]. The standard considered by the courts in Balbir Singh and Partap Singh differs from the 'reasonable suspicion' or 'reasonable grounds' standard of Sections 41 and 165 of the Cr PC, but the discussion helps inform our analysis of the idea of reasonableness in law enforcement actions. Of importance was the courts' requirement of something more than mere "pretence", as well as a belief held in good faith. This suggests that American jurisprudence on reasonable suspicion may be at least somewhat similar to how Indian courts would view reasonable suspicion or grounds in the context of predictive policing, and we may therefore conjecture that predictive evidence could form part of the reasonable suspicion calculus in India as well.

    Difference in judicial treatment of illegally obtained evidence - Indian lack of exclusionary rules

However, the apparent similarity of how police in America and India may act in non-warrant situations - guided by the idea of reasonable suspicion - is only a veneer of linguistic parallels. Despite the existence of conditions governing searches without a warrant, I believe that Indian courts currently provide far less protection against the unlawful use of predictive technologies. The main premise behind this argument is that Indian courts refuse to exclude evidence obtained in breach of the conditions in the Cr PC. In place of evidentiary safeguards stands a line of cases in which courts routinely admit unlawfully or illegally obtained evidence. Without protection against unlawfully gathered evidence being considered relevant by courts, any regulations on search, or conditions to be met before a search is lawful, become ineffective. Evidence may simply enter the courtroom through a backdoor.

In the USA, this is by and large not the case. Although there are exceptions, exclusionary rules are set out to prevent the admission of evidence obtained in violation of the Constitution[127]. "The exclusionary rule applies to evidence gained from an unreasonable search or seizure in violation of the Fourth Amendment"[128]. Mapp v. Ohio[129] set the precedent for excluding unconstitutionally gathered evidence, with the Court ruling that "all evidence obtained by searches and seizures in violation of the Federal Constitution is inadmissible in a criminal trial in a state court"[130].

Any such evidence which then leads law enforcement to collect new information may also be excluded, under the "fruit of the poisonous tree" doctrine[131] established in Silverthorne Lumber Co. v. United States[132]. The doctrine is a metaphor suggesting that if the source of certain evidence is tainted, so is any 'fruit' derived from that unconstitutional evidence. One such application was in Beck v. Ohio[133], where the Court overturned a petitioner's conviction because the evidence used to convict him was obtained via an unlawful arrest.

In India's context, however, there is very little protection against the admission and use of unlawfully gathered evidence. In fact, there is a line of cases - both cases that specifically deal with the rules of the Indian Cr PC and cases from other contexts - which lays out the extent of consideration given to unlawfully gathered evidence and develops this reasoning of allowing illegally obtained evidence.

One case to pay attention to is State of Maharashtra v. Natwarlal Damodardas Soni, in which the Anti-Corruption Bureau searched the house of the accused after receiving a tip. The police "had powers under the Code of Criminal Procedure to search and seize this gold if they had reason to believe that a cognizable offence had been committed in respect thereof"[134]. Justice Sarkaria, delivering the judgement, observed that even if, for argument's sake, the search was illegal, "then also, it will not affect the validity of the seizure and further investigation"[135]. The judge drew reasoning from Radhakishan v. State of U.P.[136], a case involving a postman from whose house certain undelivered postal items were recovered. As the judge in Radhakishan noted:

    "So far as the alleged illegality of the search is concerned, it is sufficient to say that even assuming that the search was illegal the seizure of the articles is not vitiated. It may be that where the provisions of Sections 103 and 165 of the Code of Criminal Procedure, are contravened the search could be resisted by the person whose premises are sought to be searched. It may also be that because of the illegality of the search the court may be inclined to examine carefully the evidence regarding the seizure. But beyond these two consequences no further consequence ensues." [137]

    Shyam Lal Sharma v. State of M.P.[138] was also drawn upon, where it was held that "even if the search is illegal being in contravention with the requirements of Section 165 of the Criminal Procedure Code, 1898, that provision ceases to have any application to the subsequent steps in the investigation"[139].

Even in Gulab Chand Upadhyaya, mentioned above, the presiding judge contended that even "if arrest is made, it does not require any, much less strong, reasons to be recorded or reported by the police. Thus so long as the information or suspicion of cognizable offence is "reasonable" or "credible", the police officer is not accountable for the discretion of arresting or no arresting"[140].

A more complete articulation of the receptiveness of Indian courts to admitting illegally gathered evidence can be seen in the aforementioned Balbir Singh. The judgement aimed to:

    "dispose of one of the contentions that failure to comply with the provisions of Cr PC in respect of search and seizure even up to that stage would also vitiate the trial. This aspect has been considered in a number of cases and it has been held that the violation of the provisions particularly that of Sections 100, 102, 103 or 165 Cr PC strictly per se does not vitiate the prosecution case. If there is such violation, what the courts have to see is whether any prejudice was caused to the accused and in appreciating the evidence and other relevant factors, the courts should bear in mind that there was such a violation and from that point of view evaluate the evidence on record."[141]

    The judges then consulted a series of authorities on the failure to comply with provisions of the Cr PC:

    1. State of Punjab v. Wassan Singh[142]: "irregularity in a search cannot vitiate the seizure of the articles"[143].
2. Sunder Singh v. State of U.P[144]: "irregularity cannot vitiate the trial unless the accused has been prejudiced by the defect and it is also held that if reliable local witnesses are not available the search would not be vitiated."[145]
    3. Matajog Dobey v.H.C. Bhari[146]: "when the salutory provisions have not been complied with, it may, however, affect the weight of the evidence in support of the search or may furnish a reason for disbelieving the evidence produced by the prosecution unless the prosecution properly explains such circumstance which made it impossible for it to comply with these provisions."[147]
    4. R v. Sang[148]: "reiterated the same principle that if evidence was admissible it matters not how it was obtained."[149] Lord Diplock, one of the Lords adjudicating the case, observed that "however much the judge may dislike the way in which a particular piece of evidence was obtained before proceedings were commenced, if it is admissible evidence probative of the accused's guilt "it is no part of his judicial function to exclude it for this reason". [150] As the judge in Balbir Singh quoted from Lord Diplock, a judge "has no discretion to refuse to admit relevant admissible evidence on the ground that it was obtained by improper or unfair means. The court is not concerned with how it was obtained."[151]

The body of case law presented above paints a clear picture of the courts' willingness to admit and consider illegally obtained evidence. This lack of safeguards against the admission of unlawful evidence is important from the standpoint of preventing the excessive or unlawful use of predictive policing methods. The affronts to justice and privacy, as well as the risks of profiling, become magnified when law enforcement uses predictive methods not merely to augment its policing techniques but to replace some of them. The efficacy and expediency offered by predictive policing need to be balanced against the competing interest of ensuring the rule of law and due process. In the Indian context, courts seldom appear to consider this competing interest.

Naturally, weighing in on which approach is better depends on a multitude of criteria, such as context, practicality and societal norms. It also draws on existing debates in administrative law about the role of courts, which may emphasise protecting individuals and preventing excessive state power (red light theory) or emphasise efficiency in the governing process, with courts assisting the state to achieve policy objectives (green light theory)[152].

A practical response may be that India should aim to embrace both elements and balance them appropriately, although what counts as an appropriate balance may vary. Some claim that this balance already exists in India. Evidence for such a claim may come from R.M. Malkani v. State of Maharashtra[153], where the court considered whether an illegally tape-recorded conversation could be admissible. In its reasoning, the court drew from Kuruma, Son of Kanju v. R.[154], noting that

"if evidence was admissible it matters not how it was obtained. There is of course always a word of caution. It is that the Judge has a discretion to disallow evidence in a criminal case if the strict rules of admissibility would operate unfairly against the accused. That caution is the golden rule in criminal jurisprudence"[155].

While this discretion exists in India at least in principle, the cases presented above show that in practice judges rarely exercise it to bar the admission of illegally obtained evidence, or of evidence obtained in a manner that infringed the provisions governing search or arrest in the Cr PC. Indeed, the concern is that the safeguards needed to keep law enforcement practices, including predictive policing techniques, in check would be better served by a greater focus on reconsidering the admissibility of unlawfully gathered evidence. If not, evidence which should otherwise be inadmissible may find its way into consideration through existing legal backdoors.

    Risk of discriminatory predictive analysis

Regarding the risk of discriminatory profiling, Article 15 of India's Constitution[156] states that "the State shall not discriminate against any citizen on grounds only of religion, race, caste, sex, place of birth or any of them"[157]. The existence of constitutional protection against such forms of discrimination suggests that India will be able to guard against overtly discriminatory predictive policing. However, as mentioned before, predictive analytics often discriminates institutionally, "whereby unconscious implicit biases and inertia within society's institutions account for a large part of the disparate effects observed, rather than intentional choices"[158]. As in most jurisdictions, preventing these forms of discrimination is much harder. In a jurisdiction whose courts are already receptive to admitting illegally obtained evidence, the risk of police using discriminatory data mining or prejudiced algorithms becomes magnified. Because the discrimination may be unintentional, it may be even harder for evidence from discriminatory predictive methods to be scrutinised or, when applicable, dismissed by the courts.

    Conclusion for India

One thing which is eminently clear from the analysis of possible interpretations of predictive evidence is that Indian courts have had no experience with predictive policing cases, because the technology itself is still at a nascent stage. There is in fact a long way to go before predictive policing is used in India on a scale similar to that of the USA.

But even in places where predictive policing is used much more prominently, there is no precedent to show how courts may view it. Ferguson's method of locating analogous situations which courts have already considered is one notable approach, but even this does not provide a complete answer. His main conclusion, that predictive policing will affect the reasonable suspicion calculus or, in India's case, contribute in some way to 'reasonable grounds', is perhaps the most defensible one.

What provides more cause for concern in India's context, however, are the limited protections against the use of unlawfully gathered evidence. The lack of exclusionary rules like those present in the US amplifies the various risks of predictive policing, because individuals have little means of redress where predictive policing is used unjustly against them.

Yet the promise of predictive policing remains undeniably attractive for India. The successes predictive policing methods seem to have had in the US and UK, coupled with the more efficient allocation of law enforcement resources that follows from adopting them, evidence this point. The government recognises this and seems to be laying the foundations and basic digital infrastructure required to utilize predictive policing optimally. One ought also to ask whether it is even within the courts' purview to decide what kinds of policing methods are permissible by evaluating the nature of evidence. There is a case to be made for the legislative arm of the state to provide direction on how predictive policing is to be used in India. Perhaps the law must also evolve with changes in technology, especially if courts are to scrutinise predictive policing methods themselves.


    [1] Joh, Elizabeth E. "Policing by Numbers: Big Data and the Fourth Amendment." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 1, 2014. http://papers.ssrn.com/abstract=2403028.

    [2] Tene, Omer, and Jules Polonetsky. "Big Data for All: Privacy and User Control in the Age of Analytics." Northwestern Journal of Technology and Intellectual Property 11, no. 5 (April 17, 2013): 239.

    [3] Datta, Rajbir Singh. "Predictive Analytics: The Use and Constitutionality of Technology in Combating Homegrown Terrorist Threats." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 1, 2013. http://papers.ssrn.com/abstract=2320160.

    [4] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [5] Ibid.

    [6] Duhigg, Charles. "How Companies Learn Your Secrets." The New York Times, February 16, 2012. http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html.

    [7] Ibid.

    [8] Lijaya, A, M Pranav, P B Sarath Babu, and V R Nithin. "Predicting Movie Success Based on IMDB Data." International Journal of Data Mining Techniques and Applications 3 (June 2014): 365-68.

    [9] Johnson, Jeffrey Alan. "Ethics of Data Mining and Predictive Analytics in Higher Education." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, May 8, 2013. http://papers.ssrn.com/abstract=2156058.

    [10] Sangvinatsos, Antonios A. "Explanatory and Predictive Analysis of Corporate Bond Indices Returns." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, June 1, 2005. http://papers.ssrn.com/abstract=891641.

    [11] Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, February 13, 2015. http://papers.ssrn.com/abstract=2477899.

    [12] Joh, supra note 1.

    [13] US Environmental Protection Agency. "How We Use Data in the Mid-Atlantic Region." US EPA. Accessed November 6, 2015. http://archive.epa.gov/reg3esd1/data/web/html/.

    [14] See here for details of blackroom.

    [15] Joh, supra note 1, at pg 48.

    [16] Perry, Walter L., Brian McInnis, Carter C. Price, Susan Smith and John S. Hollywood. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Santa Monica, CA: RAND Corporation, 2013. http://www.rand.org/pubs/research_reports/RR233. Also available in print form.

    [17] Ibid, at pg 2.

    [18] Chan, Sewell. "Why Did Crime Fall in New York City?" City Room. Accessed November 6, 2015. http://cityroom.blogs.nytimes.com/2007/08/13/why-did-crime-fall-in-new-york-city/.

    [19] Bureau of Justice Assistance. "COMPSTAT: ITS ORIGINS, EVOLUTION, AND FUTURE IN LAW ENFORCEMENT AGENCIES," 2013. http://www.policeforum.org/assets/docs/Free_Online_Documents/Compstat/compstat%20-%20its%20origins%20evolution%20and%20future%20in%20law%20enforcement%20agencies%202013.pdf.

    [20] 1996 internal NYPD article "Managing for Results: Building a Police Organization that Dramatically Reduces Crime, Disorder, and Fear."

    [21] Bratton, William. "Crime by the Numbers." The New York Times, February 17, 2010. http://www.nytimes.com/2010/02/17/opinion/17bratton.html.

    [22] RAND CORP, supra note 16.

    [23] RAND CORP, supra note 16, at pg 19.

    [24] Joh, supra note 1, at pg 44.

    [25] RAND CORP, supra note 16, pg 38.

    [26] Ibid.

    [27] RAND CORP, supra note 16, at pg 39.

    [28] Ibid.

    [29] RAND CORP, supra note 16, at pg 41.

    [30] Data-Smart City Solutions. "Dr. George Mohler: Mathematician and Crime Fighter." Data-Smart City Solutions, May 8, 2013. http://datasmart.ash.harvard.edu/news/article/dr.-george-mohler-mathematician-and-crime-fighter-166.

    [31] RAND CORP, supra note 16, at pg 44.

    [32] Joh, supra note 1, at pg 45.

    [33] Ouellette, Danielle. "Dispatch - A Hot Spots Experiment: Sacramento Police Department," June 2012. http://cops.usdoj.gov/html/dispatch/06-2012/hot-spots-and-sacramento-pd.asp.

    [34] Pitney Bowes Business Insight. "The Safer Derbyshire Partnership." Derbyshire, 2013. http://www.mapinfo.com/wp-content/uploads/2013/05/safer-derbyshire-casestudy.pdf.

    [35] Ibid.

    [36] Daniel B Neill, Wilpen L. Gorr. "Detecting and Preventing Emerging Epidemics of Crime," 2007.

    [37] RAND CORP, supra note 16, at pg 33.

    [38] Joh, supra note 1, at pg 46.

    [39] Paul, Jeffery S, and Thomas M. Joiner. "Integration of Centralized Intelligence with Geographic Information Systems: A Countywide Initiative." Geography and Public Safety 3, no. 1 (October 2011): 5-7.

    [40] Mohler, supra note 30.

    [41] Ibid.

[42] Bennett Moses, Lyria, and Janet Chan. "Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools." SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 2014. http://papers.ssrn.com/abstract=2513564.

    [43] Gorner, Jeremy. "Chicago Police Use Heat List as Strategy to Prevent Violence." Chicago Tribune. August 21, 2013. http://articles.chicagotribune.com/2013-08-21/news/ct-met-heat-list-20130821_1_chicago-police-commander-andrew-papachristos-heat-list.

    [44] Stroud, Matt. "The Minority Report: Chicago's New Police Computer Predicts Crimes, but Is It Racist?" The Verge. Accessed November 13, 2015. http://www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist.


    Response by the Centre for Internet and Society to the Draft Proposal to Transition the Stewardship of the Internet Assigned Numbers Authority (IANA) Functions from the U.S. Commerce Department’s National Telecommunications and Information Administration

    by Pranesh Prakash last modified Nov 29, 2015 06:35 AM
    This proposal was made to the Global Multistakeholder Community on August 9, 2015. The proposal was drafted by Pranesh Prakash and Jyoti Panday. Research assistance was provided by Padmini Baruah and Vidushi Marda, with inputs from Sunil Abraham.

    For more than a year now, the customers and operational communities performing key internet functions related to domain names, numbers and protocols have been negotiating the transfer of IANA stewardship. India has dual interests in the ICANN IANA Transition negotiations: safeguarding independence, security and stability of the DNS for development, and promoting an effective transition agreement that internationalizes the IANA Functions Operator (IFO). Last month the IANA Stewardship Transition Coordination Group (ICG) set in motion a public review of its combined assessment of the proposals submitted by the names, numbers and protocols communities. In parallel to the transition of the NTIA oversight, the community has also been developing mechanisms to strengthen the accountability of ICANN and has devised two workstreams that consider both long term and short term issues. This is our response to the consolidated ICG proposal, which considers the proposals for the transition of the NTIA oversight over the IFO.

    Click to download the submission.

    The Humpty-Dumpty Censorship of Television in India

    by Bhairav Acharya last modified Nov 29, 2015 08:37 AM
    The Modi government’s attack on Sathiyam TV is another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    The article originally published in the Wire on September 8, 2015 was also mirrored on the website Free Speech/Privacy/Technology.


    It is tempting to think of the Ministry of Information and Broadcasting’s (MIB) attack on Sathiyam TV solely as another authoritarian exhibition of Prime Minister Narendra Modi’s government’s intolerance of criticism and dissent. It certainly is. But it is also another manifestation of the Indian state’s paranoia of the medium of film and television, and consequently, the irrational controlling impulse of the law.

    Sathiyam TV’s transgressions

    Sathiyam’s transgressions began more than a year ago, on May 9, 2014, when it broadcast a preacher saying of an unnamed person: “Oh Lord! Remove this satanic person from the world!” The preacher also allegedly claimed this “dreadful person” was threatening Christianity. This, the MIB reticently claims, “appeared to be targeting a political leader”, referring presumably to Prime Minister Modi, to “potentially give rise to a communally sensitive situation and incite the public to violent tendencies.”

    The MIB was also offended by a “senior journalist” who, on the same day, participated in a non-religious news discussion to allegedly claim Modi “engineered crowds at his rallies” and used “his oratorical skills to make people believe his false statements”. According to the MIB, this was defamatory and “appeared to malign and slander the Prime Minister which was repugnant to (his) esteemed office”.

    For these two incidents, Sathiyam was served a show-cause notice on 16 December 2014, which it responded to the next day, denying the MIB’s claims. Sathiyam was heard in person by a committee of bureaucrats on 6 February 2015. On 12 May 2015, the MIB handed Sathiyam an official “Warning” which appears to be unsupported by law. Sathiyam moved the Delhi High Court to challenge this.

    As Sathiyam sought judicial protection, the MIB issued the channel a second warning on August 26, 2015, citing three more objectionable news broadcasts: a child being subjected to cruelty by a traditional healer in Assam; a gun murder inside a government hospital in Madhya Pradesh; and a self-immolating man rushing the dais at a BJP rally in Telangana. All three news items were carried by other news channels and websites.

    Governing communications

    Most news providers use multiple media to transmit their content and suffer from complex and confusing regulation. Cable television is one such medium, so is the Internet; both media swiftly evolve to follow technological change. As the law struggles to keep up, governmental anxiety at the inability to perfectly control this vast field of speech and expression frequently expresses itself through acts of overreach and censorship.

    In the newly-liberalised media landscape of the early 1990s, cable television sprang up in a legal vacuum. Doordarshan, the sole broadcaster, flourished in the Centre’s constitutionally-sanctioned monopoly of broadcasting which was only broken by the Supreme Court in 1995. The same year, Parliament enacted the Cable Television Networks (Regulation) Act, 1995 (“Cable TV Act”) to create a licence regime to control cable television channels. The Cable TV Act is supplemented by the Cable Television Network Rules, 1994 (“Cable Rules”).

    The state’s disquiet with communications technology is a recurring motif in modern Indian history. When the first telegraph line was laid in India, the colonial state was quick to recognize its potential for transmitting subversive speech and responded with strict controls. The fourth iteration of the telegraph law represents the colonial government’s perfection of the architecture of control. This law is the Indian Telegraph Act, 1885, which continues to dominate communications governance in India today including, following a directive in 2004, broadcasting.

    Vague and arbitrary law

    The Cable TV Act requires cable news channels such as Sathiyam to obey a list of restrictions on content that is contained in the Cable Rules (“Programme Code“). Failure to conform to the Programme Code can result in seizure of equipment and imprisonment; but, more importantly, creates the momentum necessary to invoke the broad powers of censorship to ban a programme, channel, or even the cable operator. But the Programme Code is littered with vague phrases and undefined terms that can mean anything the government wants them to mean.

    By its first warning of May 12, 2015, the MIB claimed Sathiyam violated four rules in the Programme Code. These include rule 6(1)(c) which bans visuals or words “which promote communal attitudes”; rule 6(1)(d) which bans “deliberate, false and suggestive innuendos and half-truths”; rule 6(1)(e) which bans anything “which promotes anti-national attitudes”; and, rule 6(1)(i) which bans anything that “criticises, maligns or slanders any…person or…groups, segments of social, public and moral life of the country” (sic).

    The rest of the Programme Code is no less imprecise. It proscribes content that “offends against good taste” and “reflects a slandering, ironical and snobbish attitude” against communities. On the face of it, several provisions of the Programme Code travel beyond the permissible restrictions on free speech listed in Article 19(2) of the Constitution, calling their validity into question. The fiasco of implementing the vague provisions of the erstwhile section 66A of the Information Technology Act, 2000 is a recent reminder of the dangers presented by poorly-drafted censorship law – which is why it was struck down by the Supreme Court for infringing the right to free speech. The Programme Code is an older creation; it has simply evaded scrutiny for two decades.

    The arbitrariness of the Programme Code is amplified manifold by the authorities responsible for interpreting and implementing it. An Inter-Ministerial Committee (IMC) of bureaucrats, supposedly a recommendatory body, interprets the Programme Code before the MIB takes action against channels. This is an executive power of censorship that must survive legal and constitutional scrutiny, but has never been subjected to it. Curiously, the courts have shied away from a proper analysis of the Programme Code and the IMC.

    Judicial challenges

    In the Star India case (2011), a single judge of the Delhi High Court was asked to examine the legitimacy of the IMC as well as four separate clauses of the Programme Code, including rule 6(1)(i), which has been invoked against Sathiyam. But the judge neatly sidestepped the issues. This feat of judicial adroitness was made possible by the crass indecency of the content in question, which could be reasonably restricted. Since the show clearly attracted at least one ground of legitimate censorship, the judge saw no cause to examine the other provisions of the Programme Code or even the composition of the IMC.

    This judicial restraint has proved detrimental. In May 2013, another single judge of the Delhi High Court, who was asked by Comedy Central to adjudge the validity of the IMC’s decision-making process, relied on Star India (2011) to uphold the MIB’s action against the channel. The channel’s appeal to the Supreme Court is currently pending. If the Supreme Court decides to examine the validity of the IMC, the Delhi High Court may put aside Sathiyam’s petition to wait for legal clarity.

    As it happens, in the Shreya Singhal case (2015) that struck down section 66A of the IT Act, the Supreme Court has an excellent precedent to follow to demand clarity and precision from the Programme Code, perhaps even strike it down, as well as due process from the MIB. On the accusation of defaming the Prime Minister, probably the only clearly stated objection by the MIB, the Supreme Court’s past law is clear: public servants cannot, for non-personal acts, claim defamation.

    Censorship by blunt force

    Beyond the IMC’s advisories and warnings, the Cable TV Act contains two broad powers of censorship. The first empowerment in section 19 enables a government official to ban any programme or channel if it fails to comply with the Programme Code or, “if it is likely to promote, on grounds of religion, race, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, linguistic or regional groups or castes or communities or which is likely to disturb the public tranquility.”

    The second empowerment is much wider. Section 20 of the Cable TV Act permits the Central Government to ban an entire cable television operator, as opposed to a single channel or programmes within channels, if it “thinks it necessary or expedient so to do in public interest”. No reasons need be given and no grounds need be considered. Such a blunt use of force creates an overwhelming power of censorship. It is not a coincidence that section 20 resembles some provisions of nineteenth-century telegraph laws, which were designed to enable the colonial state to control the flow of information to its native subjects.

    A manual for television bans

    Film and television have always attracted political attention and state censorship. In 1970, Justice Hidayatullah of the Supreme Court explained why: “It has been almost universally recognised that the treatment of motion pictures must be different from that of other forms of art and expression. This arises from the instant appeal of the motion picture… The motion picture is able to stir up emotions more deeply than any other product of art.”

    Within this historical narrative of censorship, television regulation is relatively new. Past governments have also been quick to threaten censorship for attacking an incumbent Prime Minister. There seems to be a pan-governmental consensus that senior political leaders ought to be beyond reproach, irrespective of their words and deeds.

    But on what grounds could the state justify these bans? Lord Atkin’s celebrated war-time dissent in Liversidge (1941) offers an unlikely answer:

    “When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less.’”

    The Short-lived Adventure of India’s Encryption Policy

    by Bhairav Acharya last modified Nov 29, 2015 09:03 AM
    Written for the Berkeley Information Privacy Law Association (BIPLA).

    During his recent visit to Silicon Valley, Indian Prime Minister Narendra Modi said his government was “giving the highest importance to data privacy and security, intellectual property rights and cyber security”. But a proposed national encryption policy circulated in September 2015 would have achieved the opposite effect.

    The policy was comically short-lived. After its poorly-drafted provisions invited ridicule, it was swiftly withdrawn. But the government has promised to return with a fresh attempt to regulate encryption soon. The incident highlights the worrying assault on communications privacy and free speech in India, a concern compounded by the enormous scale of the telecommunications and Internet market.

    Even with only around 26 percent of its population online, India already has the world’s second-largest Internet user base, having recently overtaken the United States. The number of Internet users in India is set to grow exponentially, spurred by ambitious governmental schemes to build a ‘Digital India’ and a country-wide fiber-optic backbone. There will be a corresponding increase in the use of the Internet for communicating and conducting commerce.

    Encryption on the Internet

    Encryption protects the security of Internet users from invasions of privacy, theft of data, and other attacks. By applying an algorithmic cipher (key), ordinary data (plaintext) is encoded into an unintelligible form (ciphertext), which is decrypted using the key. The ciphertext can be intercepted but will remain unintelligible without the key. The key is secret.
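    The plaintext/key/ciphertext relationship described above can be sketched in a few lines of Python. This is a toy XOR stream cipher for illustration only: it shows the mechanics but offers no real security, and production systems use vetted ciphers such as AES.

```python
# Toy illustration only: a XOR "cipher" demonstrates how a key turns
# plaintext into unintelligible ciphertext, and back again.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the repeating key;
    # applying the same operation twice recovers the original.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"
plaintext = b"a private message"
ciphertext = xor_cipher(plaintext, key)   # unintelligible without the key
recovered = xor_cipher(ciphertext, key)   # the same key decrypts

assert ciphertext != plaintext
assert recovered == plaintext
```

    An eavesdropper who intercepts only the ciphertext learns nothing useful; possession of the key is what separates the intended recipient from everyone else.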

    There are several methods of encryption. SSL/TLS, a family of encryption protocols, is commonly used by major websites. But while some companies encrypt sensitive data, such as passwords and financial information, during its transit through the Internet, most data at rest on servers remains unencrypted. For instance, email providers regularly store plaintext messages on their servers. As a result, governments simply demand and receive backdoor access to information directly from the companies that provide these services. However, governments have long insisted on blanket backdoor access to all communications data, both encrypted and unencrypted, and whether at rest or in transit.

    On the other hand, proper end-to-end encryption – full encryption from the sender to recipient, where the service provider simply passes on the ciphertext without storing it, and deletes the metadata – will defeat backdoors and protect privacy, but may not be profitable. End-to-end encryption alarms the surveillance establishment, which is why British Prime Minister David Cameron wants to ban it, and many in the US government want Silicon Valley companies to stop using it.

    Communications privacy

    Instead of relying on a company to secure communications, the surest way to achieve end-to-end encryption is for the sender to encrypt the message before it leaves her computer. Since only the sender and intended recipient have the key, even if the data is intercepted in transit or obtained through a backdoor, only the ciphertext will be visible.

    For almost all of human history, encryption relied on a single shared key; that is, both the sender and recipient used a pre-determined key. But, like all secrets, the more who know it, the less secure the key becomes. From the 1970s onwards, revolutionary advances in cryptography enabled the generation of a pair of dissimilar keys, one public and one private, which are uniquely and mathematically linked. This is asymmetric or public key cryptography, where the private key remains an exclusive secret. It offers the strongest protection for communications privacy because it returns autonomy to the individual and is immune to backdoors.
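    The idea of a mathematically linked key pair can be demonstrated with textbook RSA. The primes below are deliberately tiny so the arithmetic stays visible; real deployments use keys of 2048 bits or more, and this sketch omits the padding that practical RSA requires.

```python
# Textbook RSA with tiny primes (illustration only; requires Python 3.8+
# for the modular-inverse form of pow).
p, q = 61, 53
n = p * q                       # modulus, shared by both keys
phi = (p - 1) * (q - 1)         # Euler's totient of n
e = 17                          # public exponent, coprime with phi
d = pow(e, -1, phi)             # private exponent: modular inverse of e

message = 65                    # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone may encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt

assert recovered == message
```

    The public key (e, n) can be published freely, while the private key d never leaves its owner – which is why public key cryptography is immune to backdoors placed at the carrier.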

    For those using public key encryption, Edward Snowden’s revelation that the NSA had cracked several encryption protocols, including SSL/TLS, was worrying. Brute-force decryption (the use of supercomputers to mathematically attack keys) calls the integrity of public key encryption into question. But, since the difficulty of code-breaking grows exponentially with key size, generating longer keys will, notionally, thwart the NSA for now.
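    Some back-of-the-envelope arithmetic shows why key size matters so much: every extra bit doubles the keyspace an attacker must search. The guess rate below is a hypothetical figure chosen purely for illustration.

```python
# Illustrative arithmetic: brute-force difficulty grows exponentially
# with key size. The 10**12 guesses/second rate is a hypothetical figure.
guesses_per_second = 10**12

def years_to_search(bits: int) -> float:
    # Worst case: try every key in a 2**bits keyspace.
    return 2**bits / guesses_per_second / (60 * 60 * 24 * 365)

# A 40-bit keyspace falls in about a second at this rate...
assert years_to_search(40) < 1
# ...while a 128-bit keyspace would take more than 10**18 years.
assert years_to_search(128) > 10**18
```

    This is why adding a handful of bits to a key, at negligible cost to the user, pushes brute-force attacks from trivial to infeasible.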

    The crypto-wars in India

    Where does India’s withdrawn encryption policy lie in this landscape of encryption and surveillance? It is difficult to say. Because it was so badly drafted, understanding the policy was a challenge. It could have been a ham-handed response to commercial end-to-end encryption, which many major providers such as Apple and WhatsApp are adopting following consumer demand. But curiously, this did not appear to be the case, because the government later exempted WhatsApp and other “mass use encryption products”.

    The Indian establishment has a history of battling commercial encryption. From 2008, it fought Blackberry for backdoor access to its encrypted communications, coming close to banning the service, which dissipated only once the company lost its market share. There have been similar attempts to force Voice over Internet Protocol providers to fall in line, including Skype and Google. And there is a new thrust underway to regulate over-the-top content providers, including US companies.

    The policy could represent a new phase in India’s crypto-wars. The government, emboldened by the sheer scale of the country’s market, might press an unyielding demand for communications backdoors. The policy made no bones about this desire: it sought to bind communications companies by mandatory contracts, regulate key sizes and algorithms, compel the surrender of encryption products including “working copies” of software (the key generation mechanism), and more.

    The motives of regulation

    The policy’s deeply intrusive provisions manifest a long-standing effort of the Indian state to dominate communications technology unimpeded by privacy concerns. From wiretaps to Internet metadata, intrusive surveillance is not judicially warranted, does not require the demonstration of probable cause, suffers no external oversight, and is secret. These shortcomings are enabling the creation of a sophisticated surveillance state that sits ill with India’s constitutional values.

    Those values are being steadily besieged. India’s Supreme Court is entertaining a surge of clamorous litigation to check an increasingly intrusive state. Only a few months ago, the Attorney-General – the government’s foremost lawyer – argued in court that Indians did not have a right to privacy, relying on 1950s case law which permitted invasive surveillance. Encryption which can inexpensively lock the state out of private communications alarms the Indian government, which is why it has skirmished with commercially-available encryption in the past.

    On the other hand, the conflict over encryption is fueled by irregular laws. Telecoms licensing regulations restrict Internet Service Providers to 40-bit symmetric keys, a primitively low standard; higher encryption requires permission and presumably surrender of the shared key to the government. Securities trading on the Internet requires 128-bit SSL/TLS encryption while the country’s central bank is pushing for end-to-end encryption for mobile banking. Seen in this light, the policy could simply be an attempt to rationalize an uneven field.

    Encryption and freedom

    Perhaps the government was trying to restrict the use of public key encryption and Internet anonymization services, such as Tor or I2P, by individuals. India’s telecoms minister stated: “The purport of this encryption policy relates only to those who encrypt.” This was not particularly illuminating. If the government wants to pre-empt terrorism – a legitimate duty – this approach is flawed, since regardless of the law’s command no terrorist will plausibly disclose her key to the government. Besides, since there are very few users of Internet anonymizers in India, and they are in any case targeted for special monitoring, it would be more productive for the surveillance establishment to maintain the status quo.

    This leaves harmless encrypters – businesses, journalists, whistleblowers, and innocent privacy enthusiasts. For this group, impediments to encryption interfere with their ability to communicate freely. There is a close link between encryption and the freedom of speech and expression, a fact acknowledged by Special Rapporteur David Kaye of the UN Human Rights Council, of which India is a member. Kaye notes: “Encryption and anonymity are especially useful for the development and sharing of opinions, which often occur through online correspondence such as e-mail, text messaging, and other online interactions.”

    This is because encryption affords privacy which promotes free speech, a relationship reiterated by the previous UN Special Rapporteur, Frank La Rue. On the other hand, surveillance has a “chilling effect” on speech. In 1962, Justice Subba Rao’s famous dissent in the Indian Supreme Court presciently connected privacy and free speech:

    The act of surveillance is certainly a restriction on the [freedom of speech]. It cannot be suggested that the said freedom…will sustain only the mechanics of speech and expression. An illustration will make our point clear. A visitor, whether a wife, son or friend, is allowed to be received by a prisoner in the presence of a guard. The prisoner can speak with the visitor; but, can it be suggested that he is fully enjoying the said freedom? It is impossible for him to express his real and intimate thoughts to the visitor as fully as he would like. To extend the analogy to the present case is to treat the man under surveillance as a prisoner within the confines of our country and the authorities enforcing surveillance as guards. So understood, it must be held that the petitioner’s freedom under [the right to free speech under the Indian] Constitution is also infringed.

    Kharak Singh v. State of Uttar Pradesh (1964) 1 SCR 332, pr. 30.

    Perhaps the policy expressed the government’s discomfort at individual encrypters escaping surveillance, like free agents evading the state’s control. How should the law respond to this problem? Daniel Solove says the security of the state need not compromise individual privacy. On the other hand, as Ronald Dworkin influentially maintained, the freedoms of the individual precede the interests of the state.

    Security and trade interests

    However, even when assessed from the perspective of India’s security imperatives, the policy would have had harmful consequences. It required users of encryption, including businesses and consumers, to store plaintext versions of their communications for ninety days to surrender to the government upon demand. This outrageously ill-conceived provision would have created real ‘honeypots’ (originally, honeypots are decoy servers to lure hackers) of unencrypted data, ripe for theft. Note that India does not have a data breach law.

    The policy’s demand for encryption companies to register their products and give working copies of their software and encryption mechanisms to the Indian government would have flown in the face of trade secrecy and intellectual property protection. The policy’s hurried withdrawal was a public relations exercise on the eve of Prime Minister Modi’s visit to Silicon Valley. It was successful. Modi encountered no criticism of his government’s visceral opposition to privacy, even though the policy would have severely disrupted the business practices of US communications providers operating in India.

    Encryption invites a convergence of state interests between India and the US as well: both countries want to control it. Last month’s joint statement from the US-India Strategic and Commercial Dialogue pledges “further cooperation on internet and cyber issues”. This innocuous statement masks a robust information-gathering and -sharing regime. There is no guarantee against the sharing of any encryption mechanisms or intercepted communications by India.

    The government has promised to return with a reworked proposal. It would be in India’s interest for this to be preceded by a broad-based national discussion on encryption and its links to free speech, privacy, security, and commerce.


    Click to read the post published on Free Speech / Privacy / Technology website.

    How India Regulates Encryption

    by Pranesh Prakash & Japreet Grewal — last modified Jul 23, 2016 01:24 PM
    Contributors: Geetha Hariharan

    Governments across the globe have been arguing for the need to regulate the use of encryption for law enforcement and national security purposes. Various means of regulation, such as backdoors, weak encryption standards and key escrows, have been widely employed, leaving the information of online users vulnerable not only to uncontrolled access by governments but also to cyber-criminals. The Indian regulatory space has not been untouched by this practice; India too has laws and policies to control encryption. The regulatory requirements relating to the use of encryption are fragmented across statutes such as the Indian Telegraph Act, 1885 (Telegraph Act) and the Information Technology Act, 2000 (IT Act), and several sector-specific regulations. The regulatory framework is designed either to limit encryption or to gain access to the means of decryption or to decrypted information.

    Limiting encryption

    The IT Act does not prescribe the level or type of encryption to be used by online users. Under Section 84A, it grants the Government the authority to prescribe modes and methods of encryption. The Government has not issued any rules in exercise of these powers so far, but it did release a draft encryption policy on September 21, 2015. Under the draft policy, only those encryption algorithms and key sizes that were notified by the Government could be used. The draft policy was withdrawn following widespread criticism of several of its requirements, notably the retention of unencrypted user information for 90 days and the mandatory registration of all encryption products offered in the country.

    The Internet Service Providers License Agreement (ISP License), entered into between the Department of Telecommunications (DoT) and an Internet Service Provider (ISP) to provide internet services (i.e. internet access and internet telephony services), permits the use of encryption only up to a 40-bit key length in symmetric algorithms, or its equivalent in other algorithms.[1] The restriction applies not only to ISPs but also to individuals, groups, and organisations that use encryption. If an individual, group, or organisation decides to deploy encryption stronger than 40 bits, prior permission from the DoT must be obtained and the decryption key must be deposited with the DoT. There are, however, no parameters laid down for the use of the decryption key by the Government. Several issues arise in relation to the enforcement of these license conditions.

    1. While this requirement applies to all individuals, groups, and organisations using encryption, it is difficult to enforce: the ISP License binds only the DoT and the ISP and cannot be enforced against third parties.
    2. Further, a 40-bit symmetric key length is considered an extremely weak standard[2] and is inadequate for the protection of data stored or communicated online. Various sector-specific regulations already in place in India prescribe encryption of more than 40 bits.
      • The Reserve Bank of India has issued guidelines for Internet banking[3] that prescribe 128-bit encryption as the minimum level and acknowledge that constant advances in computer hardware and cryptanalysis may induce the use of larger key lengths.
      • The Securities and Exchange Board of India prescribes[4] 64-bit/128-bit encryption for standard network security, and the use of secured socket layer security, preferably with 128-bit encryption, for securities trading over a mobile phone or a wireless application platform.
      • Under Rule 19(2) of the Information Technology (Certifying Authorities) Rules, 2000 (CA Rules), the Government has prescribed security guidelines for the management and implementation of information technology security by certifying authorities. Under these guidelines, the Government has suggested the use of suitable security software, or even encryption software, to protect sensitive information and the devices used to transmit or store it, such as routers, switches, network devices and computers (also called information assets). The guidelines acknowledge the need to use internationally proven encryption techniques to encrypt stored passwords, such as the PKCS#1 RSA Encryption Standard (512, 1024, 2048 bit), the PKCS#5 Password-Based Encryption Standard, or the PKCS#7 Cryptographic Message Syntax Standard, as mentioned under Rule 6 of the CA Rules. These standards are far stronger than a 40-bit encryption key.
      • The ISP License also contains a clause providing that the use of any hardware or software that renders the network security vulnerable will be considered a violation of the license conditions.[5] Network security may be compromised by using a weak security measure such as the 40-bit encryption (or its equivalent) prescribed by the DoT, yet the liability will be imputed to the ISP. As a result, an ISP that is merely complying with the license conditions by employing no more than 40-bit encryption may be held liable under what appear to be contradictory license conditions.
      • It is noteworthy that the restriction on key size under the ISP License has not been imported into the Unified Service License Agreement (UL Agreement) formulated by the DoT. The UL Agreement does not prescribe a specific level of encryption for the provision of services. Clause 37.5 of the UL Agreement, however, makes it clear that the use of encryption will be governed by the provisions of the IT Act. As noted earlier, the Government has not specified any limit on the level or type of encryption under the IT Act, though it had released a draft encryption policy that was withdrawn following widespread criticism.
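    To see why a 40-bit ceiling is considered so weak next to the 128-bit sectoral mandates, it helps to work through the brute-force arithmetic. The sketch below is an illustration, not a measurement of any real attacker: the rate of one billion key trials per second is an assumed figure chosen only to make the comparison concrete.

```python
# Brute-force keyspace arithmetic for symmetric ciphers.
# An n-bit key has 2**n possible values; an attacker testing
# `keys_per_second` candidates needs at most 2**n / keys_per_second
# seconds to exhaust the keyspace (half that on average).
# Note: RSA key sizes (512/1024/2048 bit) are not directly comparable,
# since attacks on RSA factor the modulus rather than enumerate keys.

def worst_case_seconds(key_bits: int, keys_per_second: float) -> float:
    """Upper bound on exhaustive-search time for a key_bits-bit key."""
    return 2 ** key_bits / keys_per_second

RATE = 1e9  # assumption: a modest attacker trying 10**9 keys per second

SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (40, 64, 128):
    secs = worst_case_seconds(bits, RATE)
    print(f"{bits:>3}-bit key: {secs:.3e} s (~{secs / SECONDS_PER_YEAR:.3e} years)")
```

    At this assumed rate, an exhaustive search of a 40-bit keyspace completes in under twenty minutes, while a 128-bit keyspace would take on the order of 10^22 years. The gap illustrates why the 128-bit banking and securities mandates sit uneasily beside the 40-bit telecom ceiling.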

     

    The Telecom Licenses (the ISP License, the UL Agreement, and the Unified Access Service License) prohibit the use of bulk encryption by service providers, yet the providers remain responsible for maintaining the privacy of communications and preventing unauthorised interception.

    Gaining access to means of decryption or decrypted information

    Besides restricting the level of encryption, the ISP License and the UL Agreement make it mandatory for service providers, including ISPs, to provide to the DoT all details of the technology employed for operations, and to furnish all documentary details, such as the relevant literature, drawings, installation materials, tools, and testing instruments relating to the system intended to be used for operations, as and when required by the DoT.[6] While these license conditions do not expressly lay down that access to the means of decryption must be given to the Government, the language is sufficiently broad to include such access as well. Further, ISPs are required to take the prior approval of the DoT for the installation of any equipment, or the execution of any project, in areas that are sensitive from a security point of view. ISPs are in fact subject to, and required to facilitate, continuous monitoring by the DoT. These obligations ensure that the Government has complete access to and control over the infrastructure for providing internet services, which includes any installation or equipment used for encryption and decryption.

    The Government has also been granted the power to gain access to the means of decryption, or simply to decrypted information, under Section 69 of the IT Act and the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 (Decryption Rules).

    1. A decryption order usually entails a direction to a decryption key holder to disclose a decryption key, allow access to or facilitate conversion of encrypted information and must contain reasons for such direction. In fact, Rule 8 of the Decryption Rules makes it mandatory for the authority to consider other alternatives to acquire the necessary information before issuing a decryption order.
    2. The Secretary in the Ministry of Home Affairs, or the Secretary in charge of the Home Department in a state or union territory, is authorised to issue a decryption order in the interest of the sovereignty or integrity of India, the defence of India, the security of the state, friendly relations with foreign states, or public order, or for preventing incitement to the commission of any cognizable offence relating to the above, or for the investigation of any offence. It is useful to note that this provision was amended in 2009 to expand the grounds on which a direction for decryption can be passed: post-2009, the Government can issue a decryption order for the investigation of any offence. In the absence of any specific process laid down for the collection of digital evidence, do we follow the procedure under the criminal law, or is it necessary to draw a distinction between the investigation process in the digital and the physical environments and to examine whether adequate safeguards exist to check the abuse of the investigatory powers of the police?
    3. Orders for decryption must be examined by a review committee constituted under Rule 419A of the Indian Telegraph Rules, 1951 to ensure compliance with the provisions of the IT Act. The review committee is required to convene at least once every two months for this purpose. However, a response by the Department of Electronics and Information Technology to an RTI dated April 21, 2015, filed by our organisation, informed us that since its constitution the review committee has met only once, in January 2013.

    Conclusion

    While studying a regulatory framework for encryption, it is necessary to identify the lens through which encryption is viewed, i.e. whether encryption is considered a means of information security or a threat to national security. As noted earlier, the encryption mandates for banking systems and certifying authorities in India contradict those under the telecom licenses and the Decryption Rules. It would help to analyse whether the Government's prevailing scepticism is well founded against the need for strong encryption. It would be useful to survey statistics on cyber incidents where strong encryption was employed, as well as instances that reflect whether strong encryption has made it difficult for law enforcement agencies to prevent or resolve crimes. It would also help to record cyber incidents that have resulted from vulnerabilities, such as backdoors or key escrows, deliberately introduced by law. These statistics would clear the air about the role of encryption in securing cyberspace and facilitate appropriate regulation.



    [1] Clause 2.2 (vii) of the ISP License

    [2] Schneier, Bruce (1996). Applied Cryptography (Second ed.). John Wiley & Sons

    [3] Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds- Implementation of recommendations, 2011

    [4] Report on Internet Based Trading by the SEBI Committee on Internet based Trading and Services, 2000; It is useful to note that subsequently SEBI had acknowledged that the level of encryption would be governed by DoT policy in a SEBI circular no CIR/MRD/DP/25/2010 dated August 27, 2010 on Securities Trading using Wireless Technology

    [5] Clause 34.25 of the ISP License

    [6] Clauses 22 and 23 of Part IV of the ISP License

    Concept Note: Network Neutrality in South Asia

    by Prasad Krishna last modified Dec 01, 2015 02:34 AM

    Network Neutrality South Asia Concept Note _ORF CIS.pdf — PDF document, 238 kB (244150 bytes)

    The Case of Whatsapp Group Admins

    by Japreet Grewal — last modified Dec 08, 2015 10:25 AM
    Contributors: Geetha Hariharan

    Censorship laws in India have now roped in the administrators of chat groups on instant messaging platforms such as Whatsapp (group admins), holding them responsible for allegedly objectionable content posted by other users of these chat groups. Several incidents[1] were reported this year in which group admins were arrested in different parts of the country for allowing content that was allegedly objectionable under law. A few reports mentioned that these arrests were made under Section 153A[2] read with Section 34[3] of the Indian Penal Code (IPC) and under Section 67[4] of the Information Technology Act (IT Act).

    The targeting of a group admin for content posted by other members of a chat group has raised concerns about how this liability is imputed. Should a group admin be considered an intermediary under Section 2(w) of the IT Act? If so, would a group admin be protected from such liability?

    Group admin as an intermediary

    Whatsapp is an instant messaging platform that can be used for mass communication by opting to create a chat group. A chat group is a feature on Whatsapp that allows the joint participation of Whatsapp users; a single chat group can have up to 100 users. Every chat group has one or more group admins, who control participation in the group by adding or removing people.[5] We must therefore ask whether, by choosing to create a chat group on Whatsapp, a group admin can become liable for content posted by other members of the chat group.

    Section 34 of the IPC provides that when a number of persons engage in a criminal act with a common intention, each person is liable as if he alone did the act. Common intention implies a pre-arranged plan and acting in concert pursuant to that plan. It is interesting to note that group admins have been arrested under Section 153A on the ground that the group admin and the member posting actionable content on a chat group share a common intention to post such content. But would this hold true when, for instance, a group admin creates a chat group for posting lawful content (say, for matchmaking purposes) and a member of the chat group posts content that is actionable under law (say, a video abusing Dalit women)? Common intention can be established by direct evidence, or inferred from conduct, surrounding circumstances, or any incriminating facts.[6]

    We need to understand whether common intention can be established in case of a user merely acting as a group admin. For this purpose it is necessary to see how a group admin contributes to a chat group and whether he acts as an intermediary.

    We know that the parameters for determining an intermediary differ across jurisdictions, and most global organisations have categorised intermediaries based on their role or technical functions.[7] Section 2(w) of the Information Technology Act, 2000 (IT Act) defines an intermediary as any person who, on behalf of another person, receives, stores or transmits messages or provides any service with respect to that message, and includes telecom service providers, network providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online marketplaces and cyber cafés. Does a group admin receive, store or transmit messages on behalf of group participants, provide any service with respect to their messages, or fall into any category mentioned in the definition? Whatsapp does not allow a group admin to receive or store messages on behalf of another participant in a chat group; every group member independently controls his posts. However, a group admin helps in transmitting another participant's messages to the group by allowing that participant to be part of the group, thus effectively providing a service in respect of those messages. A group admin, therefore, should be considered an intermediary, although his contribution to the chat group is limited to allowing participation; this is discussed in further detail in the section below.

    According to a 2010 report[8] by the Organisation for Economic Co-operation and Development (OECD), an internet intermediary brings together or facilitates transactions between third parties on the Internet: it gives access to, hosts, transmits and indexes content, products and services originated by third parties, or provides Internet-based services to third parties. A Whatsapp chat group allows people who are not on a user's contact list to interact with that user if they are on the group admin's contact list. In facilitating this interaction, according to the OECD definition, a group admin may be considered an intermediary.

    Liability as an intermediary

    Section 79(1) of the IT Act protects an intermediary from liability under any law in force (for instance, liability under Section 153A pursuant to the rule laid down in Section 34 of the IPC) if the intermediary fulfils certain conditions laid down therein. An intermediary is required to carry out the due diligence obligations laid down in Rule 3 of the Information Technology (Intermediaries Guidelines) Rules, 2011 (Rules). These obligations include monitoring content that infringes intellectual property, threatens national security or public order, or is obscene or defamatory or violates any law in force (Rule 3(2)).[9] An intermediary is liable for publishing or hosting such user-generated content; however, as mentioned earlier, this liability is conditional. Section 79 of the IT Act states that an intermediary is liable only if it initiates the transmission, selects the receiver of the transmission, or selects or modifies the information contained in the transmission that falls under any category mentioned in Rule 3(2) of the Rules. While a group admin has the ability to facilitate the sharing of information and to select the receivers of such information, he has no direct editorial control over the information shared: group admins can only remove members, not remove or modify the content posted by members of the chat group. An intermediary is liable if it fails to comply with the due diligence obligations laid down under Rules 3(2) and 3(3); however, since a group admin lacks the authority to initiate transmission himself or to control content, he cannot comply with these obligations. A group admin would therefore be protected from any liability arising out of third-party or user-generated content on his group pursuant to Section 79 of the IT Act.

    It is, however, relevant to consider whether the ability of a group admin to remove participants amounts to an indirect form of editorial control.

    Other pertinent observations

    Several reports[10] have discussed how holding a group admin liable makes the process convenient, since it is difficult to locate all the users of a particular group. This reasoning may not be correct: the Whatsapp policy[11] makes it mandatory for a prospective user to provide his mobile number in order to use the platform, and no additional information is collected from group admins that might justify why they are targeted. Investigation agencies can access the mobile numbers of Whatsapp users and obtain further information from telecom companies.

    It is also interesting to note that the group admins were arrested after a user, or someone known to a user, filed a complaint with the police about content being objectionable or hurtful. Earlier this year, in Shreya Singhal v. Union of India,[12] the apex court ruled that an intermediary needs a court order or a government notification before taking down information. With actions taken against group admins on mere complaints filed by anyone, it is clear that law enforcement officials have been overriding the mandate of the court.

    Conclusion

     

    According to a study conducted by the global research consultancy TNS Global, around 38% of internet users in India use instant messaging applications such as Snapchat and Whatsapp on a daily basis, Whatsapp being the most widely used. These figures indicate the scale of impact that arrests of group admins may have on our daily communication.

    It is noteworthy that categorising a group admin as an intermediary would effectively make the Rules applicable to all Whatsapp users who create groups; this would be difficult to enforce and would perhaps blur the distinction between users and intermediaries.

    The critical question, however, is whether a chat group is part of the bundle of services that Whatsapp offers its users, rather than an independent platform that would make a group admin a separate entity. Also, would it be correct to compare a Whatsapp group chat with a conference call on Skype, or with sharing a Google document with edit rights, to understand the domain into which censorship laws are penetrating today?

     

    Valuable contribution by Pranesh Prakash and Geetha Hariharan


    [1] http://www.nagpurtoday.in/whatsapp-admin-held-for-hurting-religious-sentiment/06250951 ; http://www.catchnews.com/raipur-news/whatsapp-group-admin-arrested-for-spreading-obscene-video-of-mahatma-gandhi-1440835156.html ; http://www.financialexpress.com/article/india-news/whatsapp-group-admin-along-with-3-members-arrested-for-objectionable-content/147887/

    [2] Section 153A. “Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony.— (1) Whoever— (a) by words, either spoken or written, or by signs or by visible representations or otherwise, promotes or attempts to promote, on grounds of religion, race, place of birth, residence, language, caste or community or any other ground whatsoever, disharmony or feelings of enmity, hatred or ill-will between different religious, racial, language or regional groups or castes or communities…” (2) Whoever commits an offence specified in sub-section (1) in any place of worship or in any assembly engaged in the performance of religious worship or religious ceremonies, shall be punished with imprisonment which may extend to five years and shall also be liable to fine.

    [3] Section 34. Acts done by several persons in furtherance of common intention – When a criminal act is done by several persons in furtherance of common intention of all, each of such persons is liable for that act in the same manner as if it were done by him alone.

    [4] Section 67 Publishing of information which is obscene in electronic form. -Whoever publishes or transmits or causes to be published in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with fine which may extend to one lakh rupees and in the event of a second or subsequent conviction with imprisonment of either description for a term which may extend to ten years and also with fine which may extend to two lakh rupees."

    [5] https://www.whatsapp.com/faq/en/general/21073373

    [6] Pandurang v. State of Hyderabad AIR 1955 SC 216

    [7] https://www.eff.org/files/2015/07/08/manila_principles_background_paper.pdf ; http://unesdoc.unesco.org/images/0023/002311/231162e.pdf

    [8] http://www.oecd.org/internet/ieconomy/44949023.pdf

    [9] Rule 3(2) (b) of the Rules

    [10] http://www.thehindu.com/news/national/other-states/if-you-are-a-whatsapp-group-admin-better-be-careful/article7531350.ece ; http://www.newindianexpress.com/states/tamil_nadu/Social-Media-Administrator-You-Could-Land-in-Trouble/2015/10/10/article3071815.ece ; http://www.medianama.com/2015/10/223-whatsapp-group-admin-arrest/ ; http://www.thenewsminute.com/article/whatsapp-group-admin-you-are-intermediary-and-here%E2%80%99s-what-you-need-know-35031

    [11] https://www.whatsapp.com/legal/

    [12] http://supremecourtofindia.nic.in/FileServer/2015-03-24_1427183283.pdf

    DNA Research

    by Vanya Rakesh last modified Jul 21, 2016 11:02 AM
    In 2006, the Department of Biotechnology drafted the Human DNA Profiling Bill. In 2012 a revised Bill was released and a group of experts was constituted to finalize it. In 2014, another version was released, the approval of which is pending before Parliament. This legislation will allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Here is a collection of our research on the privacy and security concerns related to the Bill.

     

    The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues related to the DNA Bill:

    1. Validity and legality of collection, usage and storage of DNA samples and information derived from the same.
    2. Monitoring projects and policies around Human DNA Profiling.
    3. Raising public awareness around issues concerning biometrics.


    The Bill seeks to establish DNA databases at the state and regional levels and a national-level database. The databases would store DNA profiles of suspects, offenders, missing persons, and deceased persons. The databases could be used by courts, law enforcement agencies (national and international), and other authorized persons for criminal and civil purposes. The Bill will also regulate the DNA laboratories collecting DNA samples. The lack of adequate consent, the broad powers of the Board, and the deletion of innocent persons' profiles are just a few of the concerns voiced about the Bill.

    DNA Profiling Bill - Infographic
    Download the infographic. Credit: Scott Mason and CIS team.

     

    1. DNA Bill

    The Human DNA Profiling Bill is legislation that will allow the Government of India to create a National DNA Data Bank and a DNA Profiling Board for the purposes of forensic research and analysis. Human rights groups, individuals and NGOs have raised many concerns about the infringement of privacy and the power that such information would give the government. The Bill proposes to profile people through their fingerprints and retinal scans, allowing the government to create unique profiles for individuals. The concerns raised include the loss of privacy entailed by such profiling and the manner in which it is conducted. Unless strictly controlled, monitored and protected, such a database of citizens' fingerprints and retinal scans could lead to huge blowbacks in the form of security risks and privacy invasions. The following articles elaborate upon these matters.

       

      2. Comparative Analysis with other Legislatures

      Human DNA profiling is not proposed only in India; this system of identification has been proposed and implemented in many nations. Each of these systems differs from the others depending on the nation's and society's needs. The risks and criticisms that DNA profiling has faced may be the same everywhere, but the solutions to those issues vary. The following articles look into the systems in place in different countries and compare them with the proposed system in India, to give us a better understanding of the risks and implications of implementing such a system.

       

      Privacy Policy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:40 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since the year 2010 with the following objectives.
      1. Raising public awareness and dialogue around privacy.
      2. Undertaking in-depth research of domestic and international policy pertaining to privacy.
      3. Driving comprehensive privacy legislation in India through research.

      India does not have comprehensive legislation covering issues of privacy or establishing the right to privacy. In 2010 an "Approach Paper on Privacy" was published; in 2011 the Department of Personnel and Training released a draft Right to Privacy Bill; in 2012 the Planning Commission constituted a group of experts which published the Report of the Group of Experts on Privacy; in 2013 CIS drafted the citizens' Privacy Protection Bill; and in 2014 the Right to Privacy Bill was leaked. Currently the Government is in the process of drafting and finalizing the Bill.

      Draft Right to Privacy

      Privacy Research -

      1. Approach Paper on Privacy, 2010 -

      The following article contains the reply drafted by CIS in response to the Paper on Privacy in 2010. The Paper on Privacy was a document drafted by a group of officers created to develop a framework for a privacy legislation that would balance the need for privacy protection, security, sectoral interests, and respond to the domain legislation on the subject.

      2. Report on Privacy, 2012 -

      The Report on Privacy, 2012 was drafted and published by a group of experts under the Planning Commission pertaining to the current legislation with respect to privacy. The following articles contain the responses and criticisms to the report and the current legislation.

      3. Privacy Protection Bill, 2013 -

      The Privacy Protection Bill, 2013 was draft legislation that aimed to formulate the rules and law governing privacy protection. The following articles refer to this legislation, including a citizens' draft of it.

      4. Right to Privacy Act, 2014 (Leaked Bill) -

      The Right to Privacy Act, 2014 is a proposed bill that was leaked; it is linked below.

      • Leaked Privacy Bill: 2014 vs. 2011 http://bit.ly/QV0Y0w

      Sectoral Privacy Research

      by Vanya Rakesh last modified Jan 03, 2016 09:46 AM
      The Centre for Internet and Society, India has been researching privacy in India since the year 2010, with special focus on the following issues.
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

      The right to privacy has evolved in India over many decades, and the question of whether it is a fundamental right has been debated many times in courts of law. With the advent of information technology and the digitisation of services, the issue of privacy holds ever more relevance in sectors like banking, healthcare, telecommunications, and ICT. The right to privacy is also addressed in light of sexual minorities, whistle-blowers, government services, and more.

      Sectors -

      1. Consumer Privacy and other sectors -

      Consumer privacy laws and regulations seek to protect individuals from loss of privacy due to failures or limitations of corporate customer privacy measures. The following articles deal with the current consumer privacy laws in place in India and around the world. Privacy concerns are also considered alongside other areas, such as copyright law and data protection.

      § Consumer Privacy - How to Enforce an Effective Protective Regime? http://bit.ly/1a99P2z

      § Privacy and Information Technology Act: Do we have the Safeguards for Electronic Privacy? http://bit.ly/10VJp1P

      § Limits to Privacy http://bit.ly/19mPG6I

      § Copyright Enforcement and Privacy in India http://bit.ly/18fi9fM

      § Privacy in India: Country Report http://bit.ly/14pnNwl

      § Transparency and Privacy http://bit.ly/1a9dMnC

      § The Report of the Group of Experts on Privacy (Contributed by CIS) http://bit.ly/VqzKtr

      § The (In) Visible Subject: Power, Privacy and Social Networking http://bit.ly/15koqol

      § Privacy and the Indian Copyright Act, 1857 as Amended in 2010 http://bit.ly/1euwX0r

      § Should Ratan Tata be afforded the Right to Privacy? http://bit.ly/LRlXin

      § Comments on Information Technology (Guidelines for Cyber Café) Rules, 2011 http://bit.ly/15kojJn

      § Broadcasting Standards Authority Censures TV9 over Privacy Violations! http://bit.ly/16L4izl

      § Is Data Protection Enough? http://bit.ly/1bvaWx2

      § Privacy, speech at stake in cyberspace http://cis-india.org/news/privacy-speech-at-stake-in-cyberspace-1

      § Q&A to the Report of the Group of Experts on Privacy http://bit.ly/TPhzQQ

      § Privacy worries cloud Facebook's WhatsApp Deal http://cis-india.org/internet-governance/blog/economic-times-march-14-2014-sunil-abraham-privacy-worries-cloud-facebook-whatsapp-deal

      § GNI Assessment Finds ICT Companies Protect User Privacy and Freedom of Expression http://bit.ly/1mjbpmL

      § A Stolen Perspective http://bit.ly/1bWHyzv

      § I don't want my fingerprints taken http://bit.ly/aYdMia

      § Keeping it Private http://bit.ly/15wjTVc

      § Personal Data, Public Profile http://bit.ly/15vlFk4

      § Why your Facebook Stalker is Not the Real Problem http://bit.ly/1bI2MSc

      § The Private Eye http://bit.ly/173ypSI

      § How Facebook is Blatantly Abusing our Trust http://bit.ly/OBXGXk

      § Open Secrets http://bit.ly/1b5uvK0

      § Big Brother is Watching You http://bit.ly/1cGpg0K

      2. Banking/Finance -

      Privacy in the banking and finance industry is crucial, as one person's records and funds must not be accessible to another without due authorisation. The following articles deal with the current system governing privacy in the financial and banking industry.

      § Privacy and Banking: Do Indian Banking Standards Provide Enough Privacy Protection? http://bit.ly/18fhsTM

      § Finance and Privacy http://bit.ly/15aUPh6

      § Making the Powerful Accountable http://bit.ly/1nvzSpC

      3. Telecommunications -

      The telecommunications industry is the backbone of modern ICTs and is governed by its own rules and regulations. These rules, and both criticism and acclaim of them, are the focus of the following articles.

      § Privacy and Telecommunications: Do We Have the Safeguards? http://bit.ly/10VJp1P

      § Privacy and Media Law http://bit.ly/18fgDfF

      § IP Addresses and Expeditious Disclosure of Identity in India http://bit.ly/16dBy4N

      § Telecommunications and Internet Privacy http://bit.ly/16dEcaF

      § Encryption Standards and Practices http://bit.ly/KT9BTy

      § Security: Privacy, Transparency and Technology http://cis-india.org/internet-governance/blog/security-privacy-transparency-and-technology

      4. Sexual Minorities -

      While the internet is a global forum of self-expression and acceptance for most of us, the same does not hold true for sexual minorities. For those who do not conform to the identities prescribed by society, the internet is often a place of secrecy, and privacy matters more to them than to most. When they reveal themselves, or are revealed by others, they typically face widespread hostility, and therefore value their privacy. The following article looks into their situation.

      § Privacy and Sexual Minorities http://bit.ly/19mQUyZ

      5. Health -

      Confidentiality between a doctor and a patient is seen as incredibly important, as is the privacy of a person in any situation where they reveal more than they otherwise would, such as CT scans and other diagnostic procedures. The following articles look into the present state of privacy in settings like hospitals and diagnostic centres.

      § Health and Privacy http://bit.ly/16L1AJX

      § Privacy Concerns in Whole Body Imaging: A Few Questions http://bit.ly/1jmvH1z

      6. e-Governance -

      A main focus of governments in relation to ICTs is their use for governance. A multiplicity of laws and legislation has been passed by various countries, including India, in an effort to govern the universal space that is the internet. Surveillance is a major part of that governance and control. The articles listed below deal with the ethical issues and drawbacks in the current legal scenario involving ICTs.

      § E-Governance and Privacy http://bit.ly/18fiReX

      § Privacy and Governmental Databases http://bit.ly/18fmSy8

      § Killing Internet Softly with its Rules http://bit.ly/1b5I7Z2

      § Cyber Crime & Privacy http://bit.ly/17VTluv

      § Understanding the Right to Information http://bit.ly/1hojKr7

      § Privacy Perspectives on the 2012-2013 Goa Beach Shack Policy http://bit.ly/ThAovQ

      § Identifying Aspects of Privacy in Islamic Law http://cis-india.org/internet-governance/blog/identifying-aspects-of-privacy-in-islamic-law

      § What Does Facebook's Transparency Report Tell Us About the Indian Government's Record on Free Expression & Privacy? http://cis-india.org/internet-governance/blog/what-does-facebook-transparency-report-tell-us-about-indian-government-record-on-free-expression-and-privacy

      § Search and Seizure and the Right to Privacy in the Digital Age: A Comparison of US and India http://cis-india.org/internet-governance/blog/search-and-seizure-and-right-to-privacy-in-digital-age

      § Internet Privacy in India http://cis-india.org/telecom/knowledge-repository-on-internet-access/internet-privacy-in-india

      § Internet-driven Developments - Structural Changes and Tipping Points http://bit.ly/10s8HVH

      § Data Retention in India http://bit.ly/XR791u

      § 2012: Privacy Highlights in India http://bit.ly/1kWe3n7

      § Big Dog is Watching You! The Sci-fi Future of Animal and Insect Drones http://bit.ly/1kWee1W

      7. Whistle-blowers -

      Whistle-blowers are in a difficult situation when they reveal the misdeeds of corporations and governments, due to the blowback that is possible if their identities become public. As the case of Edward Snowden and many others shows, a whistle-blower's identity must be kept strictly private to avoid the consequences of revealing the information they did. This is the main focus of the article below.

      § The Privacy Rights of Whistle-blowers http://bit.ly/18GWmM3

      8. Cloud and Open Source -

      Cloud computing and open source software have grown rapidly over the past few decades. Cloud computing is the use of offsite hardware, provided and owned by someone else, on a pay-per-usage basis; its advantages are low running costs, easy access, and decreased initial costs. Open source software, on the other hand, is made available to the public at no charge despite the proprietary elements and innovation it may contain. Such software is based on open standards and has the obvious advantages of being compatible with many different set-ups and free. The following article highlights these computing solutions.

      § Privacy, Free/Open Source, and the Cloud http://bit.ly/1cTmGoI

      9. e-Commerce -

      One of the fastest growing applications of the internet is e-Commerce, which includes many facets of commerce such as online trading, the stock exchange, etc. In these cases, just as in the financial and banking industries, privacy is very important to protect one's investments and capital. The following article's main focus is the world of e-Commerce and its current privacy scenario.

      § Consumer Privacy in e-Commerce http://bit.ly/1dCtgTs

      Security Research

      by Vanya Rakesh last modified Jan 03, 2016 09:55 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since 2010, with the following objectives:
      1. Research on the issue of privacy in different sectors in India.
      2. Monitoring projects, practices, and policies around those sectors.
      3. Raising public awareness around the issue of privacy, in light of varied projects, industries, sectors and instances.

      State surveillance in India has been carried out by Government agencies for many years. Recent projects include NATGRID, the CMS, and NETRA, which aim to overhaul the security and intelligence infrastructure in the country. The purpose of such initiatives has been to maintain national security and ensure interconnectivity and interoperability between departments and agencies. However, the structure, regulatory frameworks (or lack thereof), and technologies used in these programmes have attracted criticism.

      Surveillance/Security Research -

      1. Central Monitoring System -

      The Central Monitoring System (CMS) is a clandestine mass electronic surveillance and data mining programme installed by the Centre for Development of Telematics (C-DOT), a part of the Indian government. It gives law enforcement agencies centralized access to India's telecommunications network and the ability to listen in on and record mobile, landline, satellite, and Voice over Internet Protocol (VoIP) calls, along with private e-mails, SMS, and MMS. It also gives them the ability to geo-locate individuals via their cell phones in real time.

      • The Central Monitoring System: Some Questions to be Raised in Parliament http://bit.ly/1fln2vu

      2. Surveillance Industry: Global and Domestic -

      The surveillance industry is a multi-billion-dollar economic sector that tracks individuals along with their actions, such as e-mails and texts. Justified by terrorism and governments' attempts to fight it, a network has been created that leaves no one with their privacy: everything an individual does in the digital world is subject to surveillance. This includes passive snooping, where an individual's phone calls, text messages, and e-mails are monitored, and a more active kind, where cameras, sensors, and other devices are used to track the movements and actions of an individual. Such surveillance allows governments to bypass individual privacy in a manner widely considered unethical. The information collected is also vulnerable to cyber-attacks, which pose serious risks to privacy and to the individuals themselves. The following set of articles looks into the ethics, risks, vulnerabilities, and trade-offs of having a mass surveillance industry in place.

      • Surveillance Technologies http://bit.ly/14pxg74
      • New Standard Operating Procedures for Lawful Interception and Monitoring http://bit.ly/1mRRIo4

      3. Judgements By the Indian Courts -

      The surveillance industry in India has been brought before the courts in various cases. The following articles look into the causes of action in these cases, along with their impact on India and its citizens.

      4. International Privacy Laws -

      Due to the universality of the internet, many questions of accountability arise and jurisdiction becomes a problem. Therefore, certain treaties, agreements, and other international legal instruments were created to answer these questions. The articles listed below look into the international legal framework which governs the internet.

      5. Indian Surveillance Framework -

      The Indian government's mass surveillance systems are configured somewhat differently from the networks of countries such as the USA and the UK, owing to vast differences both in existing infrastructure and in the scale required. In many ways, the surveillance regime in India is considered worse than that of other countries, largely because of the present state of the legal framework governing it. The articles below explore the system and its functioning, including the various methods through which we are spied upon, as well as its ethics and vulnerabilities.

      • A Comparison of Indian Legislation to Draft International Principles on Surveillance of Communications http://bit.ly/U6T3xy
      • Surveillance and the Indian Constitution - Part 2: Gobind and the Compelling State Interest Test http://bit.ly/1dH3meL
      • Surveillance and the Indian Constitution - Part 3: The Public/Private Distinction and the Supreme Court's Wrong Turn http://bit.ly/1kBosnw
      • Mastering the Art of Keeping Indians Under Surveillance http://cis-india.org/internet-governance/blog/the-wire-may-30-2015-bhairav-acharya-mastering-the-art-of-keeping-indians-under-surveillance

      UID Research

      by Vanya Rakesh last modified Jan 03, 2016 09:59 AM
      The Centre for Internet and Society, India has been researching privacy policy in India since 2010, with the following objectives:
      1. Researching the vision and implementation of the UID Scheme - both from a technical and regulatory perspective.
      2. Understanding the validity and legality of collection, usage and storage of Biometric information for this scheme.
      3. Raising public awareness around issues concerning privacy, data security and the objectives of the UID Scheme.

      The UID scheme seeks to provide all residents of India an identity number, based on their biometrics, that can be used to authenticate individuals for the purpose of Government benefits and services. A 2015 Supreme Court ruling clarified that the UID can only be used in the PDS and LPG schemes.

      Concerns with the scheme include the broad consent taken at the time of enrolment, the lack of clarity as to what happens with transactional metadata, the centralized storage of biometric information in the CIDR, the seeding of the Aadhaar number into service providers’ databases, and the possibility of function creep. The absence of legislation addressing these privacy and security concerns is a further worry.

      UID Research -

      1. Ramifications of the Aadhaar and UID Schemes -

      The UID and Aadhaar systems have faced sustained criticism and been plagued by issues ranging from privacy concerns to security risks. The following articles deal with the many problems and drawbacks of these systems.

      § UID and NPR: Towards Common Ground http://cis-india.org/internet-governance/blog/uid-npr-towards-common-ground

      § Public Statement to Final Draft of UID Bill http://bit.ly/1aGf1NN

      § UID Project in India - Some Possible Ramifications http://cis-india.org/internet-governance/blog/uid-in-india

      § Aadhaar Number vs the Social Security Number http://cis-india.org/internet-governance/blog/aadhaar-vs-social-security-number

      § Feedback to the NIA Bill http://cis-india.org/internet-governance/blog/cis-feedback-to-nia-bill

      § Unique ID System: Pros and Cons http://bit.ly/1jmxbZS

      § Submitted seven open letters to the Parliamentary Finance Committee on the UID covering the following aspects: SCOSTA Standards (http://bit.ly/1hq5Rqd), Centralized Database (http://bit.ly/1hsHJDg), Biometrics (http://bit.ly/196drke), UID Budget (http://bit.ly/1e4c2Op), Operational Design (http://bit.ly/JXR61S), UID and Transactions (http://bit.ly/1gY6B8r), and Deduplication (http://bit.ly/1c9TkSg)

      § Comments on Finance Committee Statements to Open Letters on Unique Identity: The Parliamentary Finance Committee responded to the open letters sent by CIS through an email on 12 October 2011. CIS has commented on the points raised by the Committee: http://bit.ly/1kz4H0F

      § Unique Identification Scheme (UID) & National Population Register (NPR), and Governance http://cis-india.org/internet-governance/blog/uid-and-npr-a-background-note

      § Financial Inclusion and the UID http://cis-india.org/internet-governance/privacy_uidfinancialinclusion

      § The Aadhaar Case http://cis-india.org/internet-governance/blog/the-aadhaar-case

      § Do we need the Aadhaar scheme http://bit.ly/1850wAz

      § 4 Popular Myths about UID http://bit.ly/1bWFoQg

      § Does the UID Reflect India? http://cis-india.org/internet-governance/blog/privacy/uid-reflects-india

      § Would it be a unique identity crisis? http://cis-india.org/news/unique-identity-crisis

      § UID: Nothing to Hide, Nothing to Fear? http://cis-india.org/internet-governance/blog/privacy/uid-nothing-to-hide-fear

      2. Right to Privacy and UID -

      The UID system has attracted many privacy concerns from NGOs, private individuals, and others. The sharing of one's information, especially fingerprints and retinal scans, with a system that is controlled by the government and has not been vetted as secure irks most people. These issues are dealt with in the following articles.

      § India Fears of Privacy Loss Pursue Ambitious ID Project http://cis-india.org/news/india-fears-of-privacy-loss

      § Analysing the Right to Privacy and Dignity with Respect to the UID http://bit.ly/1bWFoQg

      § Supreme Court order is a good start, but is seeding necessary? http://cis-india.org/internet-governance/blog/supreme-court-order-is-a-good-start-but-is-seeding-necessary

      § Right to Privacy in Peril http://cis-india.org/internet-governance/blog/right-to-privacy-in-peril

      3. Data Flow in the UID -

      The articles below deal with the manner in which data is moved around and handled in the UID system in India.

      § UIDAI Practices and the Information Technology Act, Section 43A and Subsequent Rules http://cis-india.org/internet-governance/blog/uid-practices-and-it-act-sec-43-a-and-subsequent-rules

      § Data flow in the Unique Identification Scheme of India http://cis-india.org/internet-governance/blog/data-flow-in-unique-identification-scheme-of-india

      CIS's Position on Net Neutrality

      by Sunil Abraham last modified Dec 09, 2015 01:06 PM
      Contributors: pranesh
      As researchers committed to the principle of pluralism we rarely produce institutional positions. This is also because we tend to update our positions based on research outputs. But the lack of clarity around our position on network neutrality has led some stakeholders to believe that we are advocating for forbearance. Nothing can be farther from the truth. Please see below for the current articulation of our common institutional position.

       

      1. Net Neutrality violations can potentially have multiple categories of harms: competition harms, free speech harms, privacy harms, innovation and ‘generativity’ harms, harms to consumer choice and user freedoms, and diversity harms, owing to unjust discrimination and gatekeeping by Internet service providers.

      2. Net Neutrality violations (including those forms of zero-rating that violate net neutrality) can also have different kinds of benefits: enabling the right to freedom of expression and the freedom of association, especially when access to communication and publishing technologies is increased; increased competition [by enabling product differentiation, they can potentially allow small ISPs to compete against market incumbents]; increased access [usually to a subset of the Internet] for those without any access because they cannot afford it; increased access [usually to a subset of the Internet] for those who don't see any value in the Internet; and reduced payments by those who already have access to the Internet, especially if their usage is dominated by certain services and destinations.

      3. Given the magnitude and variety of potential harms, complete forbearance from all regulation is not an option for regulators, nor is self-regulation sufficient to address all the harms emerging from Net Neutrality violations, since incumbent telecom companies cannot be trusted to effectively self-regulate. Therefore, CIS calls for the immediate formulation of Net Neutrality regulation by the telecom regulator [TRAI] and its notification by the government [Department of Telecommunications of the Ministry of Communications and Information Technology]. CIS also calls for the eventual enactment of statutory law on Net Neutrality. All such policy must be developed in a transparent fashion after proper consultation with all relevant stakeholders, and after giving citizens an opportunity to comment on draft regulations.

      4. Even though some of these harms may be large, CIS believes that a government cannot apply the precautionary principle in the case of Net Neutrality violations: banning technical and business-model innovations is not an appropriate policy option. The regulation must tread a careful line to solve the optimization problem: refraining from over-regulating ISPs and harming innovation at the carrier level (and forgoing the benefits of Net Neutrality violations mentioned above), while preventing ISPs from harming innovation and user choice. ISPs must be regulated to limit harms from unjust discrimination towards consumers as well as towards the services they carry on their networks.

      5. Based on regulatory theory, we believe the ideal is a regulatory framework that is technologically neutral, that factors in differences in technological context as well as market realities and existing regulation, and that is able to respond to new evidence.

        This means that we need a framework that has some bright-line rules, but which allows for flexibility in determining the scope of exceptions and in the application of the rules. Candidate principles to be embodied in the regulation include transparency, non-exclusivity, and limits on unjust discrimination.

      6. The harms emerging from walled gardens can be mitigated in a number of ways. On zero-rating, the form of regulation must depend on the specific model and the potential harms that result from it. Zero-rating can be: paid for by the end consumer, subsidized by ISPs, subsidized by content providers, subsidized by government, or a combination of these; deal-based, criteria-based, or government-imposed; ISP-imposed, or offered by the ISP and chosen by consumers; transparent and understood by consumers, or non-transparent; based on content type or agnostic to content type; service-specific, service-class/protocol-specific, or service-agnostic; and available on one ISP or on all ISPs. Zero-rating by a small ISP with 2% penetration will not have the same harms as zero-rating by the largest incumbent ISP. For service-agnostic / content-type-agnostic zero-rating, which Mozilla terms ‘equal rating’, CIS advocates no regulation.

      7. CIS believes that Net Neutrality regulation for mobile and fixed-line access must differ, recognizing the fundamental differences in the technologies.

      8. On specialized services, CIS believes that there should be logical separation, and that all details of such specialized services and their impact on the Internet must be made transparent to consumers (both individual and institutional), the general public, and the regulator. Further, such services should be available to the user only upon request and with their active choice, subject to the requirements that the service cannot reasonably be provided with the ‘best efforts’ delivery available over the Internet (and hence requires discriminatory treatment), and that the discriminatory treatment does not unduly harm the provision of the rest of the Internet to other customers.

      9. On incentives for telecom operators, CIS believes that the government should consider different models, such as waiving contributions to the Universal Service Obligation Fund for prepaid consumers, freeing up additional spectrum for telecom use without royalty under a shared-spectrum paradigm, and freeing up more spectrum for use without a licence.

      10. On reasonable network management CIS still does not have a common institutional position.

      Smart Cities in India: An Overview

      by Vanya Rakesh last modified Jan 11, 2016 01:30 AM
      The Government of India is in the process of developing 100 smart cities, which it sees as key to the country's economic and social growth. This blog post gives an overview of the Smart Cities project currently underway in India. The Smart Cities Mission is at a nascent stage and an evolving area for research; the Centre for Internet and Society will continue work in this area.

      Overview of the 100 Smart Cities Mission

      The Government of India announced its flagship programme, the 100 Smart Cities Mission, in 2014 and launched it in June 2015 to achieve urban transformation, drive economic growth, and improve people's quality of life by enabling local area development and harnessing technology. Initially, the Mission aims to cover 100 cities across the country (shortlisted on the basis of a Smart Cities Proposal prepared by each city), and its duration will be five years (FY 2015-16 to FY 2019-20). The Mission may be continued thereafter in light of an evaluation by the Ministry of Urban Development (MoUD) and the incorporation of its learnings. The Mission focuses on area-based development, in the form of redevelopment of existing spaces or the development of new (greenfield) areas, to accommodate the growing urban population and ensure comprehensive planning that improves quality of life, creates employment, and enhances incomes for all, especially the poor and the disadvantaged. [1]

      On 27th August 2015 the Centre unveiled the 98 smart cities across India selected for this project. Across the selected cities, a population of 13 crore (35% of the urban population) will be included in the development plans. [2] The Mission's vision is to preserve India's traditional architecture, culture, and ethnicity while implementing modern technology to make cities livable, use resources sustainably, and create an inclusive environment. [3]

      The promises of the Smart City mission include reduction of carbon footprint, adequate water and electricity supply, proper sanitation, including solid waste management, efficient urban mobility and public transport, affordable housing, robust IT connectivity and digitalization, good governance, citizen participation, security of citizens, health and education.

      Questions unanswered

      • Why and how was the Smart Cities project conceptualized in India? What was the need for such a project?
      • What was the role of the public/citizens at the ideation and conceptualization stage of the project?
      • Which actors from the Government, private industry, and civil society are involved in this mission? Though the Smart Cities Mission has been initiated by the Government of India under the Ministry of Urban Development, there is no clarity about the involvement of the Ministry's associated offices and departments.

      How are the Smart Cities being selected?

      The 100 cities were to be selected on the basis of the Smart Cities Challenge[4], involving two stages. Stage I involved intra-state city selection on objective criteria to identify cities to compete in Stage II. In August 2015, the Ministry of Urban Development, Government of India announced the shortlisted smart cities [5], evaluated on parameters such as service levels, financial and institutional capacity, and past track record. The shortlisted cities are now competing in the second stage of the challenge, an all-India competition. For this crucial stage, each potential smart city is required to prepare a Smart City Proposal (SCP) stating the model chosen (retrofitting, redevelopment, greenfield development, or a mix), along with a pan-city dimension with smart solutions. The proposal must also include suggestions collected through consultations with city residents and other stakeholders, along with a plan for financing the smart city, including a revenue model to attract private participation. The country saw wide participation from citizens voicing their aspirations and concerns regarding the smart cities. 15th December 2015 was declared the deadline for submission of the SCP, which must be in consonance with evaluation criteria set by the MoUD on the basis of professional advice. [6] On this basis, 20 cities will be selected for the first year. According to the latest reports, the Centre plans to fund only 10 cities in the first phase if the proposals sent by the states do not match the expected quality standards or fail to include complete area-development plans by the deadline of 15th December 2015. [7]

      Questions unanswered

      • Who will undertake the task of evaluating and selecting the cities for this project?
      • What are the criteria for a city to qualify among the first 20 (or 10, depending on the Central Government) for the first phase of implementation?

      How are the smart cities going to be Funded?

      The Smart Cities Mission will be operated as a Centrally Sponsored Scheme (CSS), and the Central Government proposes to give financial support to the Mission to the extent of Rs. 48,000 crore over five years, i.e. on average Rs. 100 crore per city per year. [8] Additional resources will have to be mobilized by the States/ULBs from external and internal sources. According to the scheme, once the list of shortlisted smart cities is finalized, Rs. 2 crore would be disbursed to each city for proposal preparation.[9]
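      As a quick arithmetic check on the figures above (a sketch using only the numbers stated in this paragraph: Rs. 48,000 crore, 100 cities, five years):

```latex
\frac{\text{Rs.}\ 48{,}000\ \text{crore}}{100\ \text{cities} \times 5\ \text{years}}
  = \text{Rs.}\ 96\ \text{crore per city per year}
  \approx \text{Rs.}\ 100\ \text{crore}
```

      So the stated "Rs. 100 crore per city per year" is a rounded average.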

      According to estimates of the Central Government, around Rs. 4 lakh crore of funds will be infused, mainly through private investments and loans from multilateral institutions among other sources, accounting for 80% of the total spending on the Mission. [10] For this purpose, the Government will approach the World Bank and the Asian Development Bank (ADB) for loans of £500 million and £1 billion respectively for 2015-20. If the ADB approves the loan, it will be the bank's highest funding to India's urban sector so far.[11] Foreign Direct Investment regulations have also been relaxed to invite foreign capital into the Smart Cities Mission. [12]

      Questions unanswered

      • The Government's notes on financing of the project mention PPPs for private funding and the leveraging of resources from internal and external sources. There is a lack of clarity on which external resources the Government has approached or will approach, and on the varied PPP agreements it has entered into or plans to enter into for private investment in the smart cities.

      How is the scheme being implemented?

      Under this scheme, each city is required to establish a Special Purpose Vehicle (SPV) with flexibility regarding planning, implementation, management, and operations. The body will be headed by a full-time CEO, with nominees of the Central Government, State Government, and ULB on its Board. The SPV will be a limited company incorporated under the Companies Act, 2013 at the city level, in which the State/UT and the Urban Local Body (ULB) will be the promoters, with equity shareholding in the ratio 50:50. The private sector or financial institutions could be considered for an equity stake in the SPV, provided the 50:50 shareholding pattern of the State/UT and the ULB is maintained and the State/UT and the ULB together have majority shareholding and control of the SPV. Funds provided by the Government of India under the Smart Cities Mission to the SPV will be in the form of a tied grant kept in a separate Grant Fund.[13]

      For the purpose of implementation and monitoring of the projects, the MoUD has also established an Apex Committee and National Mission Directorate for National Level Monitoring[14], a State Level High Powered Steering Committee (HPSC) for State Level Monitoring[15] and a Smart City Advisory Forum at the City Level [16].

      Several consulting firms[17] have also been assigned to the 100 cities to help them prepare action plans,[18] among them CRISIL, KPMG, and McKinsey. [19]

      Questions unanswered

      • What policies and regulations have been put in place to account for the smart cities, apart from policies looking at issues of security, privacy, etc.?
      • What international/national standards will be adopted during the development of the smart cities? Though the Bureau of Indian Standards is in the process of formulating standardized guidelines for smart cities in India[20], there is a lack of clarity on the adoption of these national standards, as well as on the role of international standards such as those formulated by ISO.

      What is the role of Foreign Governments and bodies in the Smart cities mission?

      Ever since the government's ambitious project was announced and cities were shortlisted, many countries across the globe have shown keen interest in helping specific shortlisted cities become smart cities and are willing to invest financially. Countries like Sweden, Malaysia, the UAE, the USA, etc. have agreed to partner with India for the mission.[21] For example, the UK has partnered with the Government to develop three Indian cities: Pune, Amravati and Indore.[22] Israel's start-up city Tel Aviv has also entered into an agreement to help with urban transformation in the Indian cities of Pune, Nagpur and Nashik, to foster innovation and share its technical know-how.[23] France has expressed interest in Nagpur and Puducherry, while the United States is interested in Ajmer, Vizag and Allahabad. Spain's Barcelona Regional Agency has expressed interest in exchanging technology with Delhi. Apart from foreign governments, many organizations and multilateral agencies are also keen to partner with the Indian government and have offered financial assistance by way of loans. These include the UK government-owned Department for International Development, the German government's KfW development bank, the Japan International Cooperation Agency, the US Trade and Development Agency, the United Nations Industrial Development Organization and the United Nations Human Settlements Programme. [24]

      Questions unanswered

      • Do these governments or organizations have influence on any other component of the smart cities?
      • How much are the foreign governments and multilateral bodies spending on the respective cities?
      • What kind of technical know-how is being shared with the Indian government and cities?

      What is the way ahead?

      On the basis of the SCP, the MoUD will evaluate the proposals, assess their credibility, and select 20 smart cities out of the short-listed ones for execution of the plan in the first phase. Each selected city will set up an SPV and receive funding from the Government.

      Questions unanswered

      • Will the deadline of submission of the Smart Cities Proposal be pushed back?
      • After the SCP is submitted on the basis of consultation with citizens and the public, will they be further involved in the implementation of the project, and what will their role be?
      • How will the MoUD and other associated organizations and actors address the implementation realities of the project, such as land displacement and the rehabilitation of slum dwellers?
      • How are ICT based systems going to be utilized to make the cities and the infrastructure "smart"?
      • How is the MoUD going to respond to the concerns and criticism emerging from various sections of the society, as being reflected in the news items?
      • How will the smart cities impact and integrate the existing laws, regulations and policies? Does the Government intend to use the existing legislations in entirety, or update and amend the laws for implementation of the Smart Cities Mission?


      [1] Smart Cities, Mission Statement and Guidelines, Ministry of Urban Development, Government of India, June 2015, Available at : http://smartcities.gov.in/writereaddata/SmartCityGuidelines.pdf

      [2] http://articles.economictimes.indiatimes.com/2015-08-27/news/65929187_1_jammu-and-kashmir-12-cities-urban-development-venkaiah-naidu

      [3] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [4] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [5] Full list : http://www.scribd.com/doc/276467963/Smart-Cities-Full-List

      [6] http://smartcities.gov.in/writereaddata/Process%20of%20Selection.pdf

      [7] http://www.ibtimes.co.in/modi-govt-select-only-10-cities-under-smart-city-project-this-year-report-658888

      [8] http://smartcities.gov.in/writereaddata/Financing%20of%20Smart%20Cities.pdf

      [9] Smart Cities presentation by MoUD : http://smartcities.gov.in/writereaddata/Presentation%20on%20Smart%20Cities%20Mission.pdf

      [10] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/

      [11] http://indianexpress.com/article/india/india-others/funding-for-smart-cities-key-to-coffer-lies-outside-india/#sthash.5lnW9Jsq.dpuf

      [12] http://india.gov.in/spotlight/smart-cities-mission-step-towards-smart-india

      [13] http://smartcities.gov.in/writereaddata/SPVs.pdf

      [14] http://smartcities.gov.in/writereaddata/National%20Level%20Monitoring.pdf

      [15] http://smartcities.gov.in/writereaddata/State%20Level%20Monitoring.pdf

      [16] http://smartcities.gov.in/writereaddata/City%20Level%20Monitoring.pdf

      [17] http://smartcities.gov.in/writereaddata/List_of_Consulting_Firms.pdf

      [18] http://pib.nic.in/newsite/PrintRelease.aspx?relid=128457

      [20] http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html

      [21] http://accommodationtimes.com/foreign-countries-have-keen-interest-in-development-of-smart-cities/

      [22] http://articles.economictimes.indiatimes.com/2015-11-20/news/68440402_1_uk-trade-three-smart-cities-british-deputy-high-commissioner

      [23] http://www.jpost.com/Business-and-Innovation/Tech/Tel-Aviv-to-help-India-build-smart-cities-435161?utm_campaign=shareaholic&utm_medium=twitter&utm_source=socialnetwork

      [24] http://indianexpress.com/article/india/india-others/smart-cities-projectfrom-france-to-us-a-rush-to-offer-assistance-funds/#sthash.nCMxEKkc.dpuf

      ISO/IEC/ JTC 1/SC 27 Working Groups Meeting, Jaipur

      by Vanya Rakesh last modified Dec 21, 2015 02:38 AM
      I attended this event held from October 26 to 30, 2015 in Jaipur.

      The Bureau of Indian Standards (BIS), in collaboration with the Data Security Council of India (DSCI), hosted the global standards meeting – the ISO/IEC JTC 1/SC 27 Working Groups Meeting – at Hotel Marriott in Jaipur, Rajasthan from the 26th to the 30th of October, 2015, followed by a half-day conference on Friday, 30th October on the importance of standards in the domain. The event witnessed experts from across the globe deliberating on forging international standards on privacy, security and risk management in IoT, cloud computing and many other contemporary technologies, along with updating existing standards. Under SC 27, five working groups held parallel meetings on their respective projects and study periods. The five Working Groups are as follows:

      1. WG 1: Information Security Management Systems;
      2. WG 2: Cryptography and Security Mechanisms;
      3. WG 3: Security Evaluation, Testing and Specification;
      4. WG 4: Security Controls and Services; and
      5. WG 5: Identity Management and Privacy Technologies

      This key set of Working Groups (WG) met in India for the first time. Professionals discussed and debated the work under each working group, with the aim of developing international standards to address issues regarding security, identity management and privacy.

      CIS had the opportunity to attend meetings under Working Group 5. This group further had parallel meetings on several topics namely:

      • Privacy-enhancing data de-identification techniques (ISO/IEC NWIP 20889): Data de-identification techniques are important when it comes to PII, enabling the benefits of data processing to be exploited while maintaining compliance with regulatory requirements and the relevant ISO/IEC 29100 privacy principles. The selection, design, use and assessment of these techniques need to be performed appropriately in order to effectively address the risks of re-identification in a given context. There is thus a need to classify known de-identification techniques using standardized terminology, and to describe their characteristics, including the underlying technologies, the applicability of each technique to reducing the risk of re-identification, and the usability of the de-identified data. This is the main goal of this International Standard. Meetings were conducted to resolve comments sent by organisations across the world, review draft documents and agree on next steps.
      • A study period on a Privacy Engineering framework: This session deliberated upon contributions and terms of reference, and discussed the scope of the emerging field of privacy engineering. The session also reviewed important terms to be included in the standard and identified possible improvements to existing privacy impact assessment and management standards. It was identified that the goal of this standard is to integrate privacy into systems as part of the systems engineering process. Another concern raised was that the framework must be consistent with the privacy framework under ISO 29100 and the HL7 privacy and security standards.
      • A study period on user-friendly online privacy notices and consent: The basic purpose of this New Work Item Proposal is to assess, within WG 5, the viability of producing a guideline for PII controllers on providing easy-to-understand notices and consent procedures to PII principals. At the meeting, a brief overview of the contributions received was given, along with an assessment of liaison with ISO/IEC JTC 1/SC 35 and other entities. This International Standard gives guidelines for the content and structure of online privacy notices, as well as of documents asking for consent to collect and process personally identifiable information (PII) from PII principals online, and is applicable to all situations where a PII controller or any other entity processing PII informs PII principals in any online context.
      • Some of the other sessions under Working Group 5 were on Privacy Impact Assessment (ISO/IEC 29134), standardization in the area of biometrics and biometric information protection, the Code of Practice for the protection of personally identifiable information, etc.
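      The de-identification techniques discussed in the first session above can be illustrated with a rough sketch. The following Python fragment applies three common operations that a standard such as ISO/IEC 20889 sets out to classify - suppression, generalisation and pseudonymisation - to a single record. All field names and the salt are invented for illustration; this is not drawn from the standard's text.

```python
import hashlib

# Hypothetical record layout; illustrates three common de-identification
# techniques discussed in the WG 5 session.
def deidentify(record: dict, salt: str = "per-dataset-secret") -> dict:
    out = dict(record)
    # Suppression: remove direct identifiers entirely.
    out.pop("name", None)
    out.pop("phone", None)
    # Generalisation: coarsen quasi-identifiers to cut re-identification risk.
    decade = (record["age"] // 10) * 10
    out["age"] = f"{decade}-{decade + 9}"
    out["pincode"] = record["pincode"][:3] + "XXX"
    # Pseudonymisation: replace a stable identifier with a salted hash, so
    # records can still be linked within the dataset but not to the person.
    out["id"] = hashlib.sha256((salt + record["id"]).encode()).hexdigest()[:16]
    return out
```

      Note that, as the Netflix example later in this collection shows, such techniques reduce but do not eliminate re-identification risk, which is precisely why the standard also aims to describe each technique's limits.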

      ISO/IEC JTC 1/SC 27 is a joint technical committee of the international standards bodies ISO and IEC on information technology security techniques, which conducts regular meetings across the world. JTC 1 has over 2,600 published standards developed under the broad umbrella of the committee and its 20 subcommittees. Draft International Standards adopted by the joint technical committees are circulated to the national bodies for voting; publication as an International Standard requires approval by at least 75% of the national bodies casting a vote. In India, the Bureau of Indian Standards (BIS) is the National Standards Body. Standards are formulated keeping in view national priorities, industrial development, technical needs, export promotion, health, safety, etc., and are harmonized with ISO/IEC standards (wherever they exist) to the extent possible, in order to facilitate the adoption of ISO/IEC standards by all segments of industry and business. BIS has been actively participating in the technical committee work of ISO/IEC: it is currently a Participating member in 417 ISO and 74 IEC Technical Committees/Subcommittees, and an Observer member in 248 ISO and 79 IEC Technical Committees/Subcommittees. BIS also holds secretarial responsibilities for 2 Technical Committees and 6 Subcommittees of ISO.

      The last meeting was held in May 2015 in Malaysia, followed by this meeting in October 2015 in Jaipur. 51 countries, India among them, play an active role as 'Participating Members', while a few countries take part as observing members. As part of these sessions, the participating countries also have the right to vote in all official ballots related to standards. The representatives of each country work on the preparation and development of the International Standards and provide feedback to their national organizations.

      There was an additional study group meeting on IoT to discuss comments on the previous drafts, suggest changes, review responses and identify standards gaps in SC 27.

      On October 30, 2015, BIS and DSCI hosted a half-day international conference on Cyber Security and Privacy Standards, comprising keynotes and panel discussions and bringing together national and international experts to share experience and exchange views on cyber security techniques, the protection of data and privacy in international standards, and their growing importance in society. The conference looked at themes such as the role of standards in smart cities and responding to the challenges of investigating cyber crimes through standards. It was emphasised that, in an increasingly digital world, there is universal agreement on the need for cyber security: because the infrastructure is globally connected, cyber threats are likewise distributed and are not restricted by geographical boundaries. Hence the need for technical and policy solutions, along with standards, was highlighted for the future protection of a digital world which is now deeply embedded in life, business and government. Standards will help in setting up crucial infrastructure for data security and in building associated systems along these lines.

      The importance of standards was also highlighted in the context of smart cities. Experts suggested that the harmonization of regulations with standards must be examined, primarily by creating standards which regulators can refer to. Broadly, the challenges faced by smart cities are data security, privacy and the digital resilience of the infrastructure, and it was suggested that these areas be addressed first when developing standards for smart cities. The ISO/IEC also has a Working Group and a Strategic Group focussing on smart cities. The risks of digitisation, networks, identity management, etc. must be considered in creating the standards.

      The next meeting has been scheduled for April 2016 in Tampa (USA).

      This meeting was a good opportunity to interact with experts from various parts of the world and understand the working of the ISO meetings, which are held two or three times every year. The Centre for Internet and Society will continue this work and become involved in the standard-setting process at future Working Group meetings.

      RTI PDF

      by Prasad Krishna last modified Dec 22, 2015 02:54 AM

      PDF document icon RTI.pdf — PDF document, 412 kB (422252 bytes)

      RTI response regarding the UIDAI

      by Vanya Rakesh last modified Dec 22, 2015 02:57 AM
      This is a response to the RTI filed regarding UIDAI

      The Supreme Court of India, by virtue of an order dated 11th August 2015, directed the Government to widely publicize in electronic and print media, including radio and television networks, that obtaining an Aadhaar card is not mandatory for citizens to avail themselves of the welfare schemes of the Government (until the matter is resolved). CIS filed an RTI to get information about the steps taken by the Government in this regard, the initiatives undertaken, and details of the expenditure incurred to publicize and inform the public that Aadhaar is not mandatory to avail of the Government's welfare schemes.

      Response: It has been informed that an advisory was issued by UIDAI headquarters to all regional offices to comply with the order, along with several advertisement campaigns. The total cost incurred so far by UIDAI for this is Rs. 317.30 lakh.


      Download the Response

      Benefits and Harms of "Big Data"

      by Scott Mason — last modified Dec 30, 2015 02:48 AM
      Today the quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life.

      Introduction

      In 2011 it was estimated that the quantity of data produced globally would surpass 1.8 zettabytes[1]. By 2013 that figure had grown to 4 zettabytes[2], and with the nascent development of the so-called 'Internet of Things' gathering pace, these trends are likely to continue. This expansion in the volume, velocity, and variety of data available[3], together with the development of innovative forms of statistical analytics, is generally referred to as "Big Data", though there is no single agreed-upon definition of the term. Although still in its initial stages, Big Data promises to provide new insights and solutions across a wide range of sectors, many of which would have been unimaginable even 10 years ago.

      Despite enormous optimism about the scope and variety of Big Data's potential applications, however, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits[4]. Most notably, these have included concerns about the inevitable threats to privacy associated with the generation, collection and use of large quantities of data[5]. However, concerns have also been raised regarding, to name just a few, the lack of transparency around the design of the algorithms used to process the data, over-reliance on Big Data analytics as opposed to traditional forms of analysis, and the creation of new digital divides.

      The existing literature on Big Data is vast; however, many of the benefits and harms identified by researchers relate to sector-specific applications of Big Data analytics, such as predictive policing or targeted marketing. Whilst these examples can be useful in demonstrating the diversity of Big Data's possible applications, it can nevertheless be difficult to gain an overall perspective of the broader impacts of Big Data as a whole. As such, this article will seek to disaggregate the potential benefits and harms of Big Data, organising them into several broad categories reflective of the existing scholarly literature.

      What are the potential benefits of Big Data?

      From politicians to business leaders, recent years have seen Big Data confidently proclaimed as a potential solution to a diverse range of problems, from world hunger and disease to government budget deficits and corruption. But if we look beyond the hyperbole and headlines, what do we really know about the advantages of Big Data? Given the current buzz surrounding it, the existing literature on Big Data is perhaps unsurprisingly vast, providing innumerable examples of its potential applications from agriculture to policing. Rather than try (and fail) to list the many possible applications of Big Data analytics across all sectors and industries, for the purposes of this article we have instead attempted to distil the various advantages of Big Data discussed within the literature into the following five broad categories: Decision-Making, Efficiency & Productivity, Research & Development, Personalisation, and Transparency, each of which will be discussed separately below.

      Decision-Making

      Whilst data analytics have always been used to improve the quality and efficiency of decision-making processes, the advent of Big Data means that the areas of our lives in which data-driven decision-making plays a role are expanding dramatically, as businesses and governments become better able to exploit new data flows. Furthermore, the real-time and predictive nature of the decision-making made possible by Big Data is increasingly allowing these decisions to be automated. As a result, Big Data is providing governments and business with unprecedented opportunities to create new insights and solutions, becoming more responsive to new opportunities and better able to act quickly - and in some cases preemptively - to deal with emerging threats.

      This ability of Big Data to speed up and improve decision-making processes can be applied across all sectors, from transport to healthcare, and is often cited within the literature as one of the key advantages of Big Data. Joh, for example, highlights the increased use of data-driven predictive analysis by police forces to help them forecast the times and geographical locations at which crimes are most likely to occur. This allows forces to redistribute their officers and resources according to anticipated need, and in certain cities this has been highly effective in reducing crime rates[6]. Raghupathi, meanwhile, cites the case of healthcare, where predictive modelling driven by Big Data is being used to proactively identify patients who could benefit from preventative care or lifestyle changes[7].

      One area in particular where the decision-making capabilities of Big Data are having a significant impact is the field of risk management[8]. For instance, Big Data can allow companies to map their entire data landscape to help detect sensitive information, such as 16-digit numbers - potentially credit card data - which is not being stored according to regulatory requirements, and intervene accordingly. Similarly, detailed analysis of data held about suppliers and customers can help companies to identify those in financial trouble, allowing them to act quickly to minimize their exposure to any potential default[9].
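      The scan described above can be sketched in a few lines of Python. The function and record names below are hypothetical, and a real data-mapping product would scan structured stores and file systems rather than in-memory strings, but the core idea - a pattern match filtered by the Luhn checksum that card issuers use - is the same.

```python
import re

# Match 16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum; filters out random 16-digit runs that are not card-like."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def flag_records(records):
    """Return indices of free-text records containing plausible card numbers."""
    flagged = []
    for i, text in enumerate(records):
        for match in CARD_PATTERN.finditer(text):
            if luhn_valid(match.group()):
                flagged.append(i)
                break
    return flagged
```

      The checksum step matters: without it, any invoice number or tracking code sixteen digits long would be flagged, burying genuine findings in false positives.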

      Efficiency and Productivity

      In an era when many governments and businesses are facing enormous pressures on their budgets, the desire to reduce waste and inefficiency has never been greater. By providing the information and analysis needed for organisations to better manage and coordinate their operations, Big Data can help to alleviate such problems, leading to the better utilization of scarce resources and a more productive workforce [10].

      Within the literature, such efficiency savings are most commonly discussed in relation to reductions in energy consumption[11]. For example, a report published by Cisco notes how the city of Oslo has managed to reduce the energy consumption of its street lighting by 62 percent through the use of smart solutions driven by Big Data[12]. Increasingly, however, statistical models generated by Big Data analytics are also being utilized to identify potential efficiencies in sourcing, scheduling and routing in a wide range of sectors from agriculture to transport. For example, Newell observes how many local governments are generating large databases of scanned license plates through the use of automated license plate recognition (ALPR) systems, which government agencies can then use to help improve local traffic management and ease congestion[13].

      Commonly, these efficiency savings are only made possible by the often counter-intuitive insights generated by Big Data models. For example, whilst a human analyst planning a truck route would always tend to avoid 'drive-bys' - bypassing one stop to reach a third before doubling back - Big Data insights can sometimes show such routes to be more efficient. In such cases, savings of this kind would in all likelihood have gone unrecognised by a human analyst not trained to look for such patterns[14].

      Research, Development, and Innovation

      Perhaps one of the most intriguing benefits of Big Data is its potential use in the research and development of new products and services. As is highlighted throughout the literature, Big Data can help businesses to gain an understanding of how others perceive their products or identify customer demand and adapt their marketing or indeed the design of their products accordingly[15]. Analysis of social media data, for instance, can provide valuable insights into customers' sentiments towards existing products as well as discover demands for new products and services, allowing businesses to respond more quickly to changes in customer behaviour[16].

      In addition to market research, Big Data can also be used during the design and development stage of new products, for example by helping to test thousands of different variations of computer-aided designs in an expedient and cost-effective manner. In doing so, businesses and designers are better able to assess how minor changes to a product's design may affect its cost and performance, thereby improving the cost-effectiveness of the production process and increasing profitability.

      Personalisation

      For many consumers, perhaps the most familiar application of Big Data is its ability to help tailor products and services to meet their individual preferences. This phenomenon is most immediately noticeable in many online services such as Netflix, where data about users' activities and preferences is collated and analysed to provide a personalised service, for example by suggesting films or television shows the user may enjoy based upon their previous viewing history[17]. By enabling companies to generate in-depth profiles of their customers, Big Data allows businesses to move past the 'one size fits all' approach to product and service design and instead quickly and cost-effectively adapt their services to better meet customer demand.
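      To make the idea concrete, here is a minimal Python sketch of history-based personalisation. This is emphatically not Netflix's actual algorithm (which is far more sophisticated and proprietary); it simply scores candidate titles by how much each other user's viewing history overlaps with the target user's, using Jaccard similarity over sets of titles. All data is invented.

```python
def recommend(target_history, other_histories, top_n=3):
    """Recommend unseen titles, weighted by viewing-history overlap."""
    scores = {}
    target = set(target_history)
    for history in other_histories:
        other = set(history)
        union = target | other
        if not union:
            continue
        # Jaccard similarity: shared titles / all titles between the two users.
        overlap = len(target & other) / len(union)
        for title in other - target:  # only titles the target has not seen
            scores[title] = scores.get(title, 0.0) + overlap
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [title for title, _ in ranked[:top_n]]
```

      Even this toy version shows why such systems need large quantities of behavioural data to work well, which is exactly the tension with privacy explored later in this article.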

      In addition to service personalisation, similar profiling techniques are increasingly being utilized in sectors such as healthcare. Here, data about a patient's medical history, lifestyle, and even their gene expression patterns is collated, generating a detailed medical profile which can then be used to tailor treatments to their specific needs[18]. Targeted care of this sort can not only help to reduce costs, for example by helping to avoid over-prescription, but may also help to improve the effectiveness of treatments and so, ultimately, their outcomes.

      Transparency

      If 'knowledge is power', then - so say Big Data enthusiasts - advances in data analytics and the quantity of data available can give consumers and citizens the knowledge to hold governments and businesses to account, as well as to make more informed choices about the products and services they use. Nevertheless, data (even lots of it) does not necessarily equal knowledge. In order for citizens and consumers to be able to fully utilize the vast quantities of data available to them, they must first have some way to make sense of it. For some, Big Data analytics provides just such a solution, allowing users to easily search, compare and analyze available data, thereby helping to challenge existing information asymmetries and make business and government more transparent[19].

      In the private sector, Big Data enthusiasts have claimed that Big Data holds the potential to ensure complete transparency of supply chains, enabling concerned consumers to trace the source of their products, for example to ensure that they have been sourced ethically[20]. Furthermore, Big Data is now making accessible information which was previously unavailable to average consumers, challenging companies whose business models rely on the maintenance of information asymmetries. The real-estate industry, for example, relies heavily upon its ability to acquire and control proprietary information, such as transaction data, as a competitive asset. In recent years, however, many online services have allowed consumers to effectively bypass agents by providing alternative sources of real-estate data and enabling prospective buyers and sellers to communicate directly with each other[21]. By providing consumers with access to large quantities of actionable data, Big Data can thus help to eliminate established information asymmetries, allowing them to make better and more informed decisions about the products they buy and the services they enlist.

      This potential to harness the power of Big Data to improve transparency and accountability can also be seen in the public sector, with many scholars suggesting that greater access to government data could help to stem corruption and make politics more accountable. This view was recently endorsed by the UN, which highlighted the potential uses of Big Data to improve policymaking and accountability in a report published by the Independent Expert Advisory Group on the "Data Revolution for Sustainable Development". In the report, the experts emphasize the potential of what they term the 'data revolution' to help achieve sustainable development goals, for example by helping civil society groups and individuals to 'develop data literacy and help communities and individuals to generate and use data, to ensure accountability and make better decisions for themselves'[22].

      What are the potential harms of Big Data?

      Whilst it is often easy to be seduced by the utopian visions of Big Data evangelists, in order to ensure that Big Data can deliver the types of far-reaching benefits its proponents promise, it is vital that we are also sensitive to its potential harms. Within the existing literature, discussions of the potential harms of Big Data are, perhaps understandably, dominated by concerns about privacy. Yet as Big Data has begun to play an increasingly central role in our daily lives, a broad range of new threats has begun to emerge, including issues related to security and scientific epistemology, as well as problems of marginalisation, discrimination and transparency, each of which will be discussed separately below.

      Privacy

      By far the biggest concern raised by researchers in relation to Big Data is its risk to privacy. Given that by its very nature Big Data requires extensive and unprecedented access to large quantities of data, it is hardly surprising that many of the benefits outlined above in one way or another exist in tension with considerations of privacy. Although many scholars have called for a broader debate on the effects of Big Data on ethical best practice[23], a comprehensive exploration of the complex debates surrounding the ethical implications of Big Data goes far beyond the scope of this article. Instead, we will simply attempt to highlight some of the major areas of concern expressed in the literature, including Big Data's effects on established principles of privacy and its implications for the suitability of existing regulatory frameworks governing privacy and data protection.

      1. Re-identification

      Traditionally, many Big Data enthusiasts have used de-identification - the process of anonymising data by removing personally identifiable information (PII) - as a way of justifying the mass collection and use of personal data. By claiming that such measures are sufficient to ensure the privacy of users, data brokers, companies and governments have sought to deflect concerns about the privacy implications of Big Data, and to suggest that it can be compliant with existing regulatory and legal frameworks on data protection.

      However, many scholars remain concerned about the limits of anonymisation. As Tene and Polonetsky observe, 'Once data-such as a clickstream or a cookie number-are linked to an identified individual, they become difficult to disentangle'[24]. They cite the example of University of Texas researchers Narayanan and Shmatikov, who were able to successfully re-identify anonymised Netflix user data by cross-referencing it with data stored in a publicly accessible online database. As Narayanan and Shmatikov themselves explained, 'once any piece of data has been linked to a person's real identity, any association between this data and a virtual identity breaks anonymity of the latter'[25]. The quantity and variety of datasets which Big Data analytics has made associable with individuals is therefore expanding the scope of the types of data that can be considered PII, as well as undermining claims that de-identification alone is sufficient to ensure privacy for users.
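      The class of linkage attack behind the Netflix result can be sketched in miniature: join an 'anonymised' dataset to a public, identified one on shared quasi-identifiers. The Python below is a toy illustration of the general technique, not of Narayanan and Shmatikov's actual method, and all records are invented.

```python
# An "anonymised" dataset still carries quasi-identifiers (pincode,
# birth year, gender) even though direct identifiers were removed.
anonymised = [
    {"pincode": "110001", "birth_year": 1980, "gender": "F", "diagnosis": "X"},
    {"pincode": "560034", "birth_year": 1975, "gender": "M", "diagnosis": "Y"},
]
# A public, identified register (e.g. an electoral roll) shares those fields.
public_register = [
    {"name": "Asha", "pincode": "110001", "birth_year": 1980, "gender": "F"},
    {"name": "Ravi", "pincode": "560034", "birth_year": 1975, "gender": "M"},
]

def reidentify(anon_rows, public_rows, keys=("pincode", "birth_year", "gender")):
    """Link rows whose quasi-identifier tuples match a unique public record."""
    index = {}
    for row in public_rows:
        index.setdefault(tuple(row[k] for k in keys), []).append(row)
    matches = []
    for row in anon_rows:
        candidates = index.get(tuple(row[k] for k in keys), [])
        if len(candidates) == 1:  # a unique match is a likely re-identification
            matches.append((candidates[0]["name"], row["diagnosis"]))
    return matches
```

      The attack succeeds here because each quasi-identifier combination is unique; generalising those fields (as in k-anonymity approaches) makes the join ambiguous, at the cost of data utility.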

      2. Privacy Frameworks Obsolete?

      In recent decades, privacy and data protection frameworks based upon a number of so-called 'privacy principles' have formed the basis of most attempts to encourage greater consideration of privacy issues online [26]. For many, however, the emergence of Big Data has raised questions about the extent to which these 'principles of privacy' are workable in an era of ubiquitous data collection.

      Collection Limitation and Data Minimization: Big Data by its very nature requires the collection and processing of very large and very diverse data sets. Unlike other forms of scientific research and analysis, which utilize various sampling techniques to identify and target the types of data most useful to the research question, Big Data instead seeks to gather as much data as possible in order to achieve full resolution of the phenomenon being studied, a task made much easier in recent years by the proliferation of internet-enabled devices and the growth of the Internet of Things. This goal of attaining comprehensive coverage, however, exists in tension with the key privacy principles of collection limitation and data minimization, which seek to limit both the quantity and variety of data collected about an individual to the absolute minimum [27].

      Purpose Limitation: Since the utility of a given dataset is often not easily identifiable at the time of collection, datasets are increasingly being processed several times for a variety of different purposes. Such practices have significant implications for the principle of purpose limitation, which aims to ensure that organizations are open about their reasons for collecting data, and that they use and process the data for no purposes other than those initially specified [28].

      Notice and Consent: The principles of notice and consent have formed the cornerstones of attempts to protect privacy for decades. Nevertheless, in an era of ubiquitous data collection, the notion that an individual must provide their explicit consent for the collection and processing of their data seems increasingly antiquated, a relic of an age when it was possible to keep track of one's personal data relationships and transactions. Today, as data streams become more complex, some have begun to question the suitability of consent as a mechanism to protect privacy. In particular, commentators have noted that, given the complexity of data flows in the digital ecosystem, most individuals are not well placed to make truly informed decisions about the management of their data [29]. In one study, researchers demonstrated that by creating a perception of control they could make users more likely to share their personal information, regardless of whether the users had actually gained any control [30]. As such, for many, the garnering of consent is increasingly becoming a symbolic box-ticking exercise which achieves little more than irritating and inconveniencing customers, whilst burdening companies and hindering growth and innovation [31].

      Access and Correction: The principle of 'access and correction' refers to the right of individuals to obtain the personal information being held about them, as well as the right to erase, rectify, complete or otherwise amend that data. Aside from the well documented problems with privacy self-management, for many the real-time nature of data generation and analysis in an era of Big Data poses a number of structural challenges to this principle. As Yu comments, 'a good amount of data is not pre-processed in a similar fashion as traditional data warehouses. This creates a number of potential compliance problems such as difficulty erasing, retrieving or correcting data. A typical big data system is not built for interactivity, but for batch processing. This also makes the application of changes on a (presumably) static data set difficult' [32].

      Opt-In/Opt-Out: The notion that the provision of data should be a matter of personal choice on the part of the individual, and that the individual can, if they choose, decide to 'opt out' of data collection, for example by ceasing use of a particular service, is an important component of privacy and data protection frameworks. The proliferation of internet-enabled devices, their integration into the built environment, and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. For many critics of Big Data, the ubiquity of data collection points, as well as the compulsory provision of data as a prerequisite for access to many key online services, is making opting out of data collection not only impractical but in some cases impossible [33].

      3. "Chilling Effects"

      For many scholars, the normalization of large scale data collection is steadily producing a widespread perception of ubiquitous surveillance amongst users. Drawing upon Foucault's analysis of Jeremy Bentham's panopticon and the disciplinary effects of surveillance, they argue that this perception of permanent visibility can cause users to sub-consciously 'discipline' and self-regulate their own behavior, fearful of being targeted or identified as 'abnormal' [34]. As a result, the pervasive nature of Big Data risks generating a 'chilling effect' on user behavior and free speech.

      Although the notion of "chilling effects" is quite prevalent throughout the academic literature on surveillance and security, the difficulty of quantifying the perception and effects of surveillance on online behavior and practices means that there have only been a limited number of empirical studies of this phenomenon, and none directly related to the chilling effects of Big Data. One study, conducted by researchers at MIT, however, sought to assess the impact of Edward Snowden's revelations about NSA surveillance programs on Google search trends. Nearly 6,000 participants were asked to individually rate certain keywords for their perceived degree of privacy sensitivity along multiple dimensions. Using Google's own publicly available search data, the researchers then analyzed search patterns for these terms before and after the Snowden revelations. In doing so they were able to demonstrate a reduction of around 2.2% in searches for those terms deemed most sensitive in nature. According to the researchers themselves, the results 'suggest that there is a chilling effect on search behaviour from government surveillance on the Internet' [35]. Although this study focused on the effects of government surveillance, for many privacy advocates the growing pervasiveness of Big Data risks generating similar results [36].

      4. Dignitary Harms of Predictive Decision-Making

      In addition to its potentially chilling effects on free speech, the automated nature of Big Data analytics also possesses the potential to inflict so-called 'dignitary harms' on individuals, by revealing insights about themselves that they would have preferred to keep private [37].

      In an infamous example, following a shopping trip to the retail chain Target, a young girl began to receive mail at her father's house advertising products for babies, including diapers, clothing, and cribs. In response, her father complained to the management of the company, incensed by what he perceived to be the company's attempts to "encourage" pregnancy in teens. A few days later, however, the father was forced to contact the store again to apologize, after his daughter confessed to him that she was indeed pregnant. It was later revealed that Target regularly analyzed the sale of key products such as supplements or unscented lotions in order to generate "pregnancy prediction" scores, which could be used to assess the likelihood that a customer was pregnant and to target them with relevant offers [38]. Such cases, though anecdotal, illustrate how Big Data, if not deployed sensitively, can lead to potentially embarrassing information about users being made public.

      Security

      In relation to cybersecurity, Big Data can be viewed to a certain extent as a double-edged sword. On the one hand, the unique capabilities of Big Data analytics can provide organizations with new and innovative methods of enhancing their cybersecurity systems. On the other, however, the sheer quantity and diversity of data emanating from a variety of sources creates its own security risks.

      5. "Honey-Pot"

      The larger the quantities of confidential information stored by companies in their databases, the more attractive those databases may appear to potential hackers.

      6. Data Redundancy and Dispersion

      Inherent to Big Data systems is the duplication of data across many locations in order to optimize query processing. Data is dispersed across a wide range of repositories, on different servers, in different parts of the world. As a result, it may be difficult for organizations to accurately locate and secure all items of personal information.

      Epistemological and Methodological Implications

      In 2008 Chris Anderson infamously proclaimed the 'end of theory'. Writing for Wired Magazine, Anderson predicted that the coming age of Big Data would create a 'deluge of data' so large that the scientific methods of hypothesis, sampling and testing would be rendered 'obsolete' [39]. 'There is now a better way' Anderson insisted, 'Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot'[40].

      In spite of these bold claims, however, many theorists remain skeptical of Big Data's methodological benefits and have expressed concern about its potential implications for conventional scientific epistemologies. For them, the increased prominence of Big Data analytics in science does not signal a paradigmatic transition to a more enlightened data-driven age, but a hollowing out of the scientific method and an abandonment of causal knowledge in favor of shallow correlative analysis [41].

      7. Obfuscation

      Although Big Data analytics can be utilized to study almost any phenomenon for which enough data exists, many theorists have warned that simply because Big Data analytics can be used does not necessarily mean that they should be [42]. Bigger is not always better; indeed, the sheer quantity of data made available to users may in fact obscure certain insights. Whereas traditional scientific methods use sampling techniques to identify the most important and relevant data, Big Data by contrast encourages the collection and use of as much data as possible, in an attempt to attain full resolution of the phenomenon being studied. However, not all data is equally useful, and simply inputting as much data as possible into an algorithm is unlikely to produce accurate results and may instead obscure key insights.

      Indeed, whilst the promise of automation is central to a large part of Big Data's appeal, researchers observe that most Big Data analysis still requires an element of human judgement to filter out the 'good' data from the 'bad', and to decide what aspects of the data are relevant to the research objectives. As Boyd and Crawford observe, 'in the case of social media data, there is a "data cleaning" process: making decisions about what attributes and variables will be counted, and which will be ignored. This process is inherently subjective' [43].

      Google's Flu Trends project provides an illustrative example of how Big Data's tendency to maximise data inputs can produce misleading results. Designed to accurately track flu outbreaks based upon data collected from Google searches, the project was initially proclaimed a great success. Gradually, however, it became apparent that the results being produced did not reflect the reality on the ground. It was later discovered that the algorithms used by the project to interpret search terms were insufficiently accurate to filter out anomalies, such as searches related to the 2009 H1N1 flu pandemic. As such, despite the great promise of Big Data, scholars insist it remains critical to be mindful of its limitations, to remain selective about the types of data included in the analysis, and to exercise caution and intuition when interpreting its results [44].

      8. "Apophenia"

      In complete contrast to the problem of obfuscation, Boyd and Crawford observe how Big Data may also lead to the practice of 'apophenia', a phenomenon whereby analysts interpret patterns where none exist, 'simply because enormous quantities of data can offer connections that radiate in all directions' [45]. David Leinweber, for example, demonstrated that data mining techniques could show strong but ultimately spurious correlations between changes in the S&P 500 stock index and butter production in Bangladesh [46]. Such spurious correlations between disparate and unconnected phenomena are a common feature of Big Data analytics and risk leading to unfounded conclusions being drawn from the data.

      Although Leinweber's primary focus was data-mining technologies, his observations are equally applicable to Big Data. Indeed, the tendency amongst Big Data analysts to marginalise the types of domain-specific expertise capable of differentiating between relevant and irrelevant correlations, in favour of algorithmic automation, can in many ways be seen to exacerbate the problems Leinweber identified.
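      The mechanism behind such chance correlations is easy to reproduce: search enough unrelated series and some will track any target closely. A minimal sketch using purely synthetic data (no real financial series are involved):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
target = [random.gauss(0, 1) for _ in range(20)]  # stand-in for an "index" series

# Screen 5,000 unrelated random series for the one that best "predicts" the target.
best = max(
    ([random.gauss(0, 1) for _ in range(20)] for _ in range(5000)),
    key=lambda series: abs(pearson(series, target)),
)
print(f"best |r| = {abs(pearson(best, target)):.2f}")  # large, by pure chance
```

      Every series here is independent noise, yet the best of 5,000 candidates correlates strongly with the target; with short series and many candidate variables, an apparently impressive correlation carries no evidential weight on its own.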

      9. From Causation to Correlation

      Closely related to the problem of apophenia is the concern that Big Data's emphasis on correlative analysis risks leading to an abandonment of the pursuit of causal knowledge in favour of shallow descriptive accounts of scientific phenomena [47].

      For many Big Data enthusiasts, 'correlation is enough': analysis produces inherently meaningful results, interpretable by anyone, without the need for pre-existing theory or hypothesis. Whilst proponents of Big Data claim that such an approach allows them to produce objective knowledge, cleansed of any philosophical or ideological commitment, for others, by neglecting the knowledge of domain experts, Big Data risks generating a shallow type of analysis, since it fails to adequately embed observations within a pre-existing body of knowledge.

      This commitment to an empiricist epistemology and methodological monism is particularly problematic in the context of studies of human behaviour, where actions cannot be calculated and anticipated using quantifiable data alone. In such instances, a certain degree of qualitative analysis of social, historical and cultural variables may be required in order to make the data meaningful by embedding it within a broader body of knowledge. The abstract and intangible nature of these variables requires a great deal of expert knowledge and interpretive skill to comprehend. It is therefore vital that the knowledge of domain specific experts is properly utilized to help 'evaluate the inputs, guide the process, and evaluate the end products within the context of value and validity'[48].

      As such, although Big Data can provide unrivalled accounts of "what" people do, it fundamentally fails to deliver robust explanations of "why" people do it. This problem is especially critical in the case of public policy-making since without any indication of the motivations of individuals, policy-makers can have no basis upon which to intervene to incentivise more positive outcomes.

      Digital Divides and Marginalisation

      Today data is a highly valuable commodity. The market for data in and of itself has been steadily growing in recent years, with the business models of many online services now formulated around the strategy of harvesting data from users [49]. As with the commodification of anything, however, inequalities can easily emerge between the haves and have-nots. Whilst the quantity of data currently generated on a daily basis is many times greater than at any other point in human history, the vast majority of this data is owned and tightly controlled by a very small number of technology companies and data brokers. Although in some instances limited access to data may be granted to university researchers or to those willing and able to pay a fee, in many cases data remains jealously guarded by data brokers, who view it as an important competitive asset. As a result, these data brokers and companies risk becoming the gatekeepers of the Big Data revolution, adjudicating not only over who can benefit from Big Data, but also in what context and under what terms. For many, such inconsistencies and inequalities in access to data raise serious doubts about just how widely distributed the benefits of Big Data will be. Others go even further, claiming that far from helping to alleviate inequalities, the advent of Big Data risks exacerbating already significant digital divides as well as creating new ones [50].

      10. Anti-Competitive Practices

      As a result of the reluctance of large companies to share their data, there increasingly exists a divide in access between small start-up companies and their larger, more established competitors. New entrants to the marketplace may thus be at a competitive disadvantage, since they are unable to harness the analytical power of the vast quantities of data available to large companies by virtue of their privileged market position. Since the performance of many online services is today often intimately connected with the collection and use of user data, some researchers have suggested that this inequity in access to data could lead to a reduction in competition in the online marketplace, and ultimately to less innovation and choice for consumers [51].

      As a result, researchers including Nathan Newman of New York University have called for a reassessment and reorientation of anti-trust investigations, and of regulatory approaches more generally, 'to focus on how control of personal data by corporations can entrench monopoly power and harm consumer welfare in an economy shaped increasingly by the power of "big data"' [52]. Similarly, a report produced by the European Data Protection Supervisor concluded that 'The scope for abuse of market dominance and harm to the consumer through refusal of access to personal information and opaque or misleading privacy policies may justify a new concept of consumer harm for competition enforcement in digital economy' [53].

      11. Research

      From a research perspective, barriers to access caused by proprietary control of datasets are problematic, since certain types of research could become restricted to those privileged enough to be granted access to data. Meanwhile, those denied access are left not only incapable of conducting similar research projects, but also unable to test, verify or reproduce the findings of those who do. The existence of such gatekeepers may also lead to reluctance on the part of researchers to undertake research critical of the companies upon whom they rely for access, leading to a chilling effect on the types of research conducted [54].

      12. Inequality

      Whilst bold claims are regularly made about the potential of Big Data to deliver economic development and generate new innovations, some critics remain concerned about how equally the benefits of Big Data will be distributed, and about the effects this could have on already established digital divides [55].

      Firstly, whilst the power of Big Data is already being utilized effectively by most economically developed nations, the same cannot necessarily be said for many developing countries. A combination of lower levels of connectivity, poor information infrastructure, underinvestment in information technologies, and a lack of skills and trained personnel makes it far more difficult for the developing world to fully reap the rewards of Big Data. As a consequence, the Big Data revolution risks deepening global economic inequality, as developing countries find themselves unable to compete with data-rich nations whose governments can more easily exploit the vast quantities of information generated by their technically literate and connected citizens.

      Likewise, to the extent that Big Data analytics is playing a greater role in public policy-making, the capacity of individuals to generate large quantities of data could potentially affect the extent to which they can provide inputs into the policy-making process. In a country such as India, for example, where there exist high levels of inequality in access to information and communication technologies and the internet, there remain large discrepancies in the quantities of data produced by individuals. As a result, there is a risk that those who lack access to the means of producing data will be disenfranchised, as policy-making processes become configured to accommodate the needs and interests of a privileged minority [56].

      Discrimination

      13. Injudicious or Discriminatory Outcomes

      Big Data presents the opportunity for governments, businesses and individuals to make better, more informed decisions at a much faster pace. Whilst this can evidently provide innumerable opportunities to increase efficiency and mitigate risk, by removing human intervention and oversight from the decision-making process, Big Data analysts run the risk of becoming blind to unfair or injudicious results generated by skewed or discriminatory programming of their algorithms.

      There currently exists a large number of automated decision-making algorithms in operation across a broad range of sectors, including most notably perhaps those used to assess an individual's suitability for insurance or credit. In either case, faults in the programming or discriminatory assessment criteria can have potentially damaging implications for the individual, who may as a result be unable to obtain credit or insurance. This concern with the potentially discriminatory aspects of Big Data is prevalent throughout the literature, and real-life examples have been identified by researchers in a large number of major sectors in which Big Data is currently being used [57].

      Yu, for instance, cites the case of the insurance company Progressive, which required its customers to install 'Snapshot' - a small monitoring device - in their cars in order to receive its best rates. The device tracked and reported customers' driving habits, and discounts were offered to those drivers who drove infrequently, braked smoothly, and avoided driving at night - behaviors that correlate with a lower risk of future accidents. Although this form of price differentiation provided incentives for customers to drive more carefully, it also had the unintended consequence of unfairly penalizing late-night shift workers. As Yu observes, 'for late night shift-workers, who are disproportionately poorer and from minority groups, this differential pricing provides no benefit at all. It categorizes them as similar to late-night party-goers, forcing them to carry more of the cost of the intoxicated and other irresponsible driving that happens disproportionately at night' [58].
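      The mechanics of such proxy-based pricing can be sketched in a few lines. The following is hypothetical only - the feature names, thresholds and surcharges are invented and bear no relation to Progressive's actual model - but it shows how a rule keyed on night driving prices a careful night-shift worker and a reckless party-goer identically:

```python
# Hypothetical sketch of behaviour-based pricing in the 'Snapshot' style.
# All thresholds and surcharges are invented for illustration.

def premium_multiplier(trips):
    """Price from correlates of risk: night driving, hard braking, trip frequency."""
    night_share = sum(t["night"] for t in trips) / len(trips)
    hard_brakes = sum(t["hard_brakes"] for t in trips)
    multiplier = 1.0
    if night_share > 0.5:   # surcharge for mostly-night driving
        multiplier += 0.20
    if hard_brakes > 5:     # surcharge for frequent hard braking
        multiplier += 0.10
    if len(trips) < 10:     # discount for infrequent drivers
        multiplier -= 0.10
    return round(multiplier, 2)

# A careful night-shift nurse and a risky late-night driver both log night
# trips, so the night-driving proxy prices them the same.
nurse = [{"night": True, "hard_brakes": 0} for _ in range(20)]
partygoer = [{"night": True, "hard_brakes": 0} for _ in range(20)]
print(premium_multiplier(nurse), premium_multiplier(partygoer))
```

      The proxy never observes actual risk, only behaviors correlated with it, so any group whose circumstances force the correlated behavior (night-shift work) absorbs the surcharge regardless of how safely its members drive.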

      In another example, Big Data is increasingly being used to evaluate applicants for entry-level service jobs. One method of evaluating applicants is by the length of their commute, the rationale being that employees with shorter commutes are statistically more likely to remain in the job longer. However, since most service jobs are typically located in town centers, and since poorer neighborhoods tend to be those on the outskirts of town, such criteria can have the effect of unfairly disadvantaging those living in economically deprived areas. Such metrics of evaluation can therefore also unintentionally reinforce existing social inequalities, by making it more difficult for economically disadvantaged communities to work their way out of poverty [59].

      14. Lack of Algorithmic Transparency

      If data is indeed the 'oil of the 21st century' [60], then algorithms are very much the engines driving innovation and economic development. For many companies, the quality of their algorithms is often a crucial factor in providing them with a market advantage over their competitors. Given their importance, the details of these algorithms are often closely guarded by companies, typically classified as trade secrets and protected by intellectual property rights. Whilst companies may claim that such secrecy is necessary to encourage market competition and innovation, many scholars are becoming increasingly concerned about the lack of transparency surrounding the design of these most crucial tools.

      In particular there is a growing sentiment common amongst many researchers that there currently exists a chronic lack of accountability and transparency in terms of how Big Data algorithms are programmed and what criteria are used to determine outcomes [61]. As Frank Pasquale observed,

      'hidden algorithms can make (or ruin) reputations, decide the destiny of entrepreneurs, or even devastate an entire economy. Shrouded in secrecy and complexity, decisions at major Silicon Valley and Wall Street firms were long assumed to be neutral and technical. But leaks, whistleblowers, and legal disputes have shed new light on automated judgment. Self-serving and reckless behavior is surprisingly common, and easy to hide in code protected by legal and real secrecy' [62].

      As such, without increased transparency in algorithmic design, instances of Big Data discrimination may go unnoticed, as analysts are unable to access the information necessary to identify them.

      Conclusion

      Today Big Data presents us with as many challenges as it does benefits. Whilst Big Data analytics can offer incredible opportunities to reduce inefficiency, improve decision-making, and increase transparency, concerns remain about the effects of these new technologies on issues such as privacy, equality and discrimination. Although the tensions between the competing demands of Big Data advocates and their critics may appear irreconcilable, only by highlighting these points of contestation can we hope to begin asking the important and difficult questions necessary to resolve them, including: how can we reconcile Big Data's need for massive inputs of personal information with core principles of privacy such as data minimization and collection limitation? What processes and procedures need to be put in place during the design and implementation of Big Data models and algorithms to provide sufficient transparency and accountability to avoid instances of discrimination? What measures can be used to help close digital divides and ensure that the benefits of Big Data are shared equitably? Questions such as these are only just beginning to be addressed; each, however, will require careful consideration and reasoned debate if Big Data is to deliver on its promises and truly fulfil its 'revolutionary' potential.


      [1] Gantz, J., & Reinsel, D. Extracting Value from Chaos, IDC, (2011), available at: http://www.emc.com/collateral/analyst-reports/idc-extracting-value-from-chaos-ar.pdf

      [2] Meeker, M. & Yu, L. Internet Trends, Kleiner Perkins Caulfield Byers, (2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013 .

      [4] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Tene, O., & Polonetsky, J. Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [5] Ibid.

      [6] Joh. E, 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 85: 35, (2014) https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

      [7] Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      [8] Anderson, R., & Roberts, D. 'Big Data: Strategic Risks and Opportunities', Crowe Horwath Global Risk Consulting Limited, (2012) https://www.crowehorwath.net/uploadedfiles/crowe-horwath-global/tabbed_content/big%20data%20strategic%20risks%20and%20opportunities%20white%20paper_risk13905.pdf

      [9] Ibid.

      [10] Kshetri, N. 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns'. Big Data & Society (2014) http://bds.sagepub.com/content/1/2/2053951714564227.abstract

      [11] Tene, O., & Polonetsky, J. Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. & Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [12] Cisco, 'IoE-Driven Smart Street Lighting Project Allows Oslo to Reduce Costs, Save Energy, Provide Better Service', Cisco, (2014) Available at: http://www.cisco.com/c/dam/m/en_us/ioe/public_sector/pdfs/jurisdictions/Oslo_Jurisdiction_Profile_051214REV.pdf

      [13] Newell, B, C. Local Law Enforcement Jumps on the Big Data Bandwagon: Automated License Plate Recognition Systems, Information Privacy, and Access to Government Information. University of Washington - the Information School, (2013) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2341182

      [14] Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/

      [15] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

      [16] Davenport, T., Barth, P., & Bean, R. How Big Data is Different, MIT Sloan Management Review, Fall (2012), Available at: http://sloanreview.mit.edu/article/how-big-data-is-different/

      [17] Tucker, Darren S., & Wellford, Hill B., Big Mistakes Regarding Big Data, Antitrust Source, American Bar Association, (2014). Available at SSRN: http://ssrn.com/abstract=2549044

      [18] Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      [19] Brown, B., Chui, M., Manyika, J. 'Are you Ready for the Era of Big Data?', McKinsey Quarterly, (2011), Available at, http://www.t-systems.com/solutions/download-mckinsey-quarterly-/1148544_1/blobBinary/Study-McKinsey-Big-data.pdf ; Benady, D., 'Radical transparency will be unlocked by technology and big data', Guardian (2014) Available at: http://www.theguardian.com/sustainable-business/radical-transparency-unlocked-technology-big-data

      [20] Ibid.

      [21] Ibid.

      [22] United Nations, A World That Counts: Mobilising the Data Revolution for Sustainable Development, Report prepared at the request of the United Nations Secretary-General, by the Independent Expert Advisory Group on a Data Revolution for Sustainable Development, (2014), pg. 18; see also Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

      [23] Greenleaf, G. Abandon All Hope? Foreword for Issue 37(2) of the UNSW Law Journal on 'Communications Surveillance, Big Data, and the Law', (2014) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2490425; Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [24] Tene, O., &Polonetsky, J. Big Data for All: Privacy and User Control in the Age of Analytics, 11 Nw. J. Tech. &Intell. Prop. 239 (2013) http://scholarlycommons.law.northwestern.edu/njtip/vol11/iss5/1

      [25] Narayanan and Shmatikov quoted in Ibid.,

      [26] OECD, Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, The Organization for Economic Co-Operation and Development, (1999); The European Parliament and the Council of the European Union, EU Data Protection Directive, "Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data," (1995)

      [27] Barocas, S., &Selbst, A, D., Big Data's Disparate Impact,California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899

      [28] Article 29 Working Group., Opinion 03/2013 on purpose limitation, Article 29 Data Protection Working Party, (2013) available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

      [29] Solove, D, J. Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [30] Brandimarte, L., Acquisti, A., & Loewenstein, G., Misplaced Confidences:

      Privacy and the Control Paradox, Ninth Annual Workshop on the Economics of Information Security (WEIS) June 7-8 2010, Harvard University, Cambridge, MA, (2010), available at: https://fpf.org/wp-content/uploads/2010/07/Misplaced-Confidences-acquisti-FPF.pdf

      [31] Solove, D, J., Privacy Self-Management and the Consent Dilemma, 126 Harv. L. Rev. 1880 (2013), Available at: http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [32] Yu, W, E., Data., Privacy and Big Data-Compliance Issues and Considerations, ISACA Journal, Vol. 3 2014 (2014), available at: http://www.isaca.org/Journal/archives/2014/Volume-3/Pages/Data-Privacy-and-Big-Data-Compliance-Issues-and-Considerations.aspx

      [33] Ramirez, E., Brill, J., Ohlhausen, M., Wright, J., & McSweeny, T., Data Brokers: A Call for Transparency and Accountability, Federal Trade Commission (2014) https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf

      [34] Michel Foucault, Discipline and Punish: The Birth of the Prison. Translated by Alan Sheridan, London: Allen Lane, Penguin, (1977)

      [35] Marthews, A., & Tucker, C., Government Surveillance and Internet Search Behavior (2015), available at SSRN: http://ssrn.com/abstract=2412564

      [36] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society, Vol. 15, Issue 5, (2012)

      [37] Hirsch, D., That's Unfair! Or is it? Big Data, Discrimination and the FTC's Unfairness Authority, Kentucky Law Journal, Vol. 103, available at: http://www.kentuckylawjournal.org/wp-content/uploads/2015/02/103KyLJ345.pdf

      [38] Hill, K., How Target Figured Out A Teen Girl Was Pregnant Before Her Father Didhttp://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

      [39] Anderson, C (2008) "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", WIRED, June 23 2008, www.wired.com/2008/06/pb-theory/

      [40] Ibid.,

      [41] Kitchen, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [42] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

      [43] Ibid

      [44] Lazer, D., Kennedy, R., King, G., &Vespignani, A. " The Parable of Google Flu: Traps in Big Data Analysis ." Science 343 (2014): 1203-1205. Copy at http://j.mp/1ii4ETo

      [45] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [46] Leinweber, D. (2007) 'Stupid data miner tricks: overfitting the S&P 500', The Journal of Investing, vol. 16, no. 1, pp. 15-22. http://m.shookrun.com/documents/stupidmining.pdf

      [47] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679

      [48] McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

      [49] De Zwart, M. J., Humphreys, S., & Van Dissel, B. Surveillance, big data and democracy: lessons for Australia from the US and UK. Http://www.unswlawjournal.unsw.edu.au/issue/volume-37-No-2. (2014) Retrieved from https://digital.library.adelaide.edu.au/dspace/handle/2440/90048

      [50] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878; Newman, N., Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

      [51] Newman, N., The Cost of Lost Privacy: Search, Antitrust and the Economics of the Control of User Data (2013). Available at SSRN: http://ssrn.com/abstract=2265026, Newman, N. ,Search, Antitrust and the Economics of the Control of User Data, 31 YALE J. ON REG. 401 (2014)

      [52] Ibid.,

      [53] European Data Protection Supervisor, Privacy and competitiveness in the age of big data:

      The interplay between data protection, competition law and consumer protection in the Digital Economy, (2014), available at: https://secure.edps.europa.eu/EDPSWEB/webdav/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf

      [54] Boyd, D., and Crawford, K. 'Critical Questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon', Information, Communication & Society,Vol 15, Issue 5, (2012) http://www.tandfonline.com/doi/abs/10.1080/1369118X.2012.678878

      [55] Schradie, J., Big Data Not Big Enough? How the Digital Divide Leaves People Out, MediaShift, 31 July 2013, (2013), available at: http://mediashift.org/2013/07/big-data-not-big-enough-how-digital-divide-leaves-people-out/

      [56] Crawford, K., The Hidden Biases in Big Data, Harvard Business Review, 1 April 2013 (2013), available at: https://hbr.org/2013/04/the-hidden-biases-in-big-data

      [57] Robinson, D., Yu, H., Civil Rights, Big Data, and Our Algorithmic Future, (2014) http://bigdata.fairness.io/introduction/

      [58] Ibid.

      [59] Ibid

      [60] Rotellla, P., Is Data The New Oil? Forbes, 2 April 2012, (2012), available at: http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/

      [61] Barocas, S., &Selbst, A, D., Big Data's Disparate Impact,California Law Review, Vol. 104, (2015). Available at SSRN: http://ssrn.com/abstract=2477899; Kshetri. N, 'The Emerging role of Big Data in Key development issues: Opportunities, challenges, and concerns'. Big Data & Society(2014) http://bds.sagepub.com/content/1/2/2053951714564227.abstract

      [62] Pasquale, F., The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press , (2015)

      Eight Key Privacy Events in India in the Year 2015

      by Amber Sinha — last modified Jan 03, 2016 05:43 AM
      As the year draws to a close, we are enumerating some of the key privacy related events in India that transpired in 2015. Much like the last few years, this year, too, was an eventful one in the context of privacy.

While we did not witness, as one had hoped, any progress in the passage of a privacy law, the year saw significant developments with respect to the ongoing Aadhaar case. The statement by the Attorney General, India's foremost law officer, that there is a lack of clarity over whether the right to privacy is a fundamental right, and the fact that the matter remains unresolved, was a huge setback to the jurisprudence on privacy. [1] However, the court has recognised purpose limitation as applicable to the Aadhaar scheme, limiting the sharing of any information collected during the enrollment of residents in the UID project. A draft Encryption Policy was released and almost immediately withdrawn in the face of severe public backlash, and an updated Human DNA Profiling Bill was made available for comments. Prime Minister Narendra Modi's much-publicised project "Digital India" was in the news throughout the year, and it attracted its fair share of criticism in light of the lack of privacy safeguards it offered. Internationally, a lawsuit brought by Maximilian Schrems, an Austrian privacy activist, dealt a body blow to the fifteen-year-old Safe Harbour framework in place for data transfers between the EU and the US. Below, we look at what were, in our view, the eight most important privacy events in India in 2015.

      1. August 11, 2015 order on Aadhaar not being compulsory

In 2012, a writ petition was filed by Justice K S Puttaswamy (Retd) challenging the government's policy of enrolling all residents of India in the UID project and linking the Aadhaar card with various government services. A number of other petitions filed against the Aadhaar scheme have been tagged with this petition, and the court has been hearing them together. On August 11, 2015, the Supreme Court reiterated the position taken in its earlier orders of September 23, 2013 and March 24, 2014 that the Aadhaar card shall not be made compulsory for any government services. [2] Building on its earlier position, the court passed the following orders:

a) The government must give wide publicity in the media that it was not mandatory for a resident to obtain an Aadhaar card;

b) The production of an Aadhaar card would not be a condition for obtaining any benefits otherwise due to a citizen;

c) The Aadhaar card would not be used for any purpose other than the PDS scheme, for the distribution of foodgrains and cooking fuel such as kerosene, and for the LPG distribution scheme;

d) The information about an individual obtained by the UIDAI while issuing an Aadhaar card shall not be used for any other purpose, save as above, except as may be directed by a court for the purpose of criminal investigation.[3]

Despite this being the fifth order of the Supreme Court[4] stating that the Aadhaar card cannot be a mandatory requirement for access to government services or subsidies, repeated violations continue. One widely reported violation is the continued requirement of an Aadhaar number to set up a Digital Locker account, which led to activist Sudhir Yadav filing a petition in the Supreme Court.[5]

      2. No Right to Privacy - Attorney General to SC

The Attorney General, Mukul Rohatgi, argued before the Supreme Court in the Aadhaar case that the Constitution of India did not provide for a fundamental right to privacy.[6] He referred to the body of case law in the Supreme Court dealing with this issue, and to the 1954 case MP Sharma v. Satish Chandra,[7] stating that there was "clear divergence of opinion" on the right to privacy and terming it "a classic case of unclear position of law." He also referred to the discussion of this matter in the Constituent Assembly Debates and pointed to the fact that the framers of the Constitution did not intend for this to be a fundamental right. He said the matter needed to be referred to a nine-judge Constitution bench.[8] This raises serious questions over the jurisprudence on the right to privacy developed by the Supreme Court over the last five decades. The matter is currently pending resolution by a larger bench, which is yet to be constituted by the Chief Justice of India.

      3. Shreya Singhal judgment and Section 69A, IT Act

In the much-celebrated judgment Shreya Singhal v. Union of India, in March 2015, the Supreme Court struck down Section 66A of the Information Technology Act, 2000 as unconstitutional and laid down guidelines for online takedowns under the Internet intermediary rules. Significantly, however, the court also upheld Section 69A and the blocking rules framed under that provision, holding it to be a narrowly drawn provision with adequate safeguards. The rules prescribe a procedure for blocking which involves the receipt of a blocking request, examination of the request by a committee, and a review committee which performs oversight functions. Commentators have, however, pointed to the opacity of the process under these rules: while the rules mandate that a hearing be given to the originator of the content, this safeguard is widely disregarded. The judgment did not discuss Section 69 of the Information Technology Act, 2000, which deals with the decryption of electronic communication; however, the Department of Electronics and Information Technology brought up this issue subsequently, through a draft Encryption Policy, discussed below.

      4. Circulation and recall of Draft Encryption Policy

In September 2015, the Department of Electronics and Information Technology (DeitY) released for public comment a draft National Encryption Policy. The draft received an immediate and severe backlash from commentators, and was withdrawn by September 22, 2015. [9] The government blamed a junior official for the poor drafting of the document and noted that it had been released without review by the Telecom Minister, Ravi Shankar Prasad, and other senior officials.[10] The main areas of contention were: a requirement that individuals store plain-text versions of all encrypted communication for a period of 90 days, to be made available to law enforcement agencies on demand; the government's right to prescribe key strengths, algorithms and ciphers; and the restriction of encryption to government-notified products and vendors registered with the government.[11] The purport of these provisions was to limit the ways in which citizens could encrypt electronic communication, and to guarantee access to law enforcement agencies. The requirement to keep all encrypted information in plain-text format for 90 days garnered particular criticism, as it would create a 'honeypot' of unencrypted data that could attract theft and attacks.[12] The withdrawal of the draft policy is not the final chapter in this story, as the Telecom Minister has promised that the Department will come back with a revised policy. [13] This attempt to restrict the use of encryption technologies is not only in line with the host of surveillance initiatives that have mushroomed in India in the last few years,[14] but also finds resonance with a global trend of governments and law enforcement organisations arguing against encryption. [15]

      5. Privacy concerns raised about Digital India

The Digital India initiative includes over thirty Mission Mode Projects in various stages of implementation. [16] All of these projects entail the collection of vast quantities of personally identifiable information about citizens, yet most of them do not have clearly laid down privacy policies.[17] There is also a lack of properly articulated access-control mechanisms, and doubts remain over important issues such as data ownership, since most projects involve public-private partnerships in which private organisations collect, process and retain large amounts of data. [18] Ahead of Prime Minister Modi's visit to the US, over 100 prominent US-based academics released a statement raising concerns about the "lack of safeguards about privacy of information, and thus its potential for abuse" in the Digital India project. [19] It has been pointed out that the initiatives could enable a "cradle-to-grave digital identity that is unique, lifelong, and authenticable, and it plans to widely use the already mired in controversy Aadhaar program as the identification system." [20]

      6. Issues with Human DNA Profiling Bill, 2015

The Human DNA Profiling Bill, 2015 envisions the creation of national and regional DNA databases comprising DNA profiles of the categories of persons specified in the Bill.[21] The categories include offenders, suspects, missing persons, unknown deceased persons, volunteers, and such other categories as may be specified by the DNA Profiling Board, which has oversight over these banks. The Bill grants the Board wide discretionary powers to introduce new DNA indices and to make DNA profiles available for new purposes it may deem fit. [22] These powers, and the lack of proper safeguards surrounding issues like consent, collection and retention, pose serious privacy risks if the Bill becomes law. Significantly, there is no element of purpose limitation in the proposed law, which would allow DNA samples to be re-used for unspecified purposes.[23]

      7. Impact of the Schrems ruling on India

In Schrems v. Data Protection Commissioner, the Court of Justice of the European Union (CJEU) annulled Commission Decision 2000/520, under which US data protection rules were deemed sufficient to satisfy EU privacy rules for transfers of personal data from the EU to the US, otherwise known as the 'Safe Harbour' framework. The court ruled that the broad derogations on grounds of national security, public interest and law enforcement in place in the US go beyond the test of proportionality and necessity under the data protection rules.[24] This judgment could also have implications for the data processing industry in India. For a few years now, a framework similar to Safe Harbour has been under discussion for the transfer of data between India and the EU; the lack of a privacy legislation has been among the significant hurdles in arriving at such a framework.[25] In the absence of a Safe Harbour framework, companies in India rely on alternate mechanisms such as Binding Corporate Rules (BCRs) or model contractual clauses. These contracts impose an obligation on data exporters and importers to ensure that an 'adequate level of data protection' is provided. The Schrems judgment makes it clear that an 'adequate level of data protection' entails a regime that is 'essentially equivalent' to the one envisioned under Directive 95/46.[26] This means that any new framework of protection between the EU and other countries like the US or India will necessarily have to meet this test of essential equivalence. The PRISM programme in the US, and the host of surveillance programmes initiated by the government in India in the last few years, could pose problems in satisfying this test, as they do not conform to the proportionality and necessity principles.

      8. The definition of "unfair trade practices" in the Consumer Protection Bill, 2015

The Consumer Protection Bill, 2015, tabled in Parliament towards the end of the monsoon session,[27] introduces an expansive definition of the term "unfair trade practices." The definition in the Bill includes the disclosure "to any other person any personal information given in confidence by the consumer."[28] The clause excludes from the scope of unfair trade practices disclosures made under the provisions of any law in force or in the public interest. This provision could have a significant impact on personal data protection law in India. Currently, the only rules governing data protection are the Reasonable Security Practices and Procedures and Sensitive Personal Data or Information Rules, 2011,[29] prescribed under Section 43A of the Information Technology Act, 2000. Under these rules, sensitive personal data or information is protected in that its disclosure requires prior permission from the data subject. [30] For other kinds of personal information not categorised as sensitive personal data or information, the only recourse for data subjects is to claim breach of the terms of the privacy policy, which constitutes a lawful contract. [31] The Consumer Protection Bill, 2015, if enacted into law, could significantly expand the scope of protection available to data subjects. First, unlike the Section 43A rules, the provisions of the Bill would apply to physical as well as electronic collection of personal information. Second, disclosure to a third party of personal information other than sensitive personal data or information could attract a similar 'prior permission' requirement under the Bill, if it can be shown that the information was shared by the consumer in confidence.

The events above are largely built around a few trends that we have been witnessing in the context of privacy, in India in particular and across the world in general. First, the lack of privacy safeguards in initiatives like the Aadhaar project and Digital India is symptomatic of policies that are not comprehensive in their scope and consequently fail to address key concerns. Dr Usha Ramanathan has called these "powerpoint based policies", implemented on the basis of proposals that are superficial in their scope and do not give due regard to their impact on a host of issues. [32] Second, the privacy concerns posed by the draft Encryption Policy and the Human DNA Profiling Bill point to a motive of surveillance, in line with other projects introduced with the intent to protect and preserve national security. [33] Third, the incidents that championed the cause of privacy, like the Schrems judgment, have largely been initiated by activists and civil society actors, and have typically entailed the involvement of the judiciary, often the sole recourse in the campaign for the protection of civil rights. It must be noted that jurisprudence on the right to privacy in India has not moved beyond the guidelines set forth by the Supreme Court in PUCL v. Union of India.[34] However, new mass surveillance programmes and the massive collection of personal data by both public and private parties through various schemes mandate a re-look at the standards laid down two decades ago. The privacy issue pending resolution by a larger bench in the Aadhaar case affords an opportunity to revisit those principles in light of how surveillance has changed in the last two decades, and to strengthen privacy and data protection.


      [1] Right to Privacy not a fundamental right, cannot be invoked to scrap Aadhar: Centre tells Supreme Court, available at http://articles.economictimes.indiatimes.com/2015-07-23/news/64773078_1_fundamental-right-attorney-general-mukul-rohatgi-privacy

      [4] Five SC Orders Later, Aadhaar Requirement Continues to Haunt Many, available at http://thewire.in/2015/09/19/five-sc-orders-later-aadhaar-requirement-continues-to-haunt-many-11065/

      [5] Digital Locker scheme challenged in Supreme Court, available at http://www.moneylife.in/article/digital-locker-scheme-challenged-in-supreme-court/42607.html

      [6] Privacy not a fundamental right, argues Mukul Rohatgi for Govt as Govt affidavit says otherwise, available at http://www.legallyindia.com/Constitutional-law/privacy-not-a-fundamental-right-argues-mukul-rohatgi-for-govt-as-govt-affidavit-says-otherwise

      [7] 1954 SCR 1077.

      [8] Supra Note 1.

      [10] Encryption policy poorly worded by officer: Telecom Minister Ravi Shankar Prasad, available at http://economictimes.indiatimes.com/articleshow/49068406.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst

      [11] Updated: India's draft encryption policy puts user privacy in danger, available at http://www.medianama.com/2015/09/223-india-draft-encryption-policy/

      [12] Bhairav Acharya, The short-lived adventure of India's encryption policy, available at http://notacoda.net/2015/10/10/the-short-lived-adventure-of-indias-encryption-policy/

      [13] Supra Note 9.

      [14] Maria Xynou, Big democracy, big surveillance: India's surveillance state, available at https://www.opendemocracy.net/opensecurity/maria-xynou/big-democracy-big-surveillance-indias-surveillance-state

      [15] China passes controversial anti-terrorism law to access encrypted user accounts, available at http://www.theverge.com/2015/12/27/10670346/china-passes-law-to-access-encrypted-communications ; Police renew call against encryption technology that can help hide terrorists, available at http://www.washingtontimes.com/news/2015/nov/16/paris-terror-attacks-renew-encryption-technology-s/?page=all .

      [18] Indira Jaising, Digital India Schemes Must Be Preceded by a Data Protection and Privacy Law, available at http://thewire.in/2015/07/04/digital-india-schemes-must-be-preceded-by-a-data-protection-and-privacy-law-5471/

      [19] US academics raise privacy concerns over 'Digital India' campaign, available at http://yourstory.com/2015/08/us-digital-india-campaign/

      [20] Lisa Hayes, Digital India's Impact on Privacy: Aadhaar numbers, biometrics, and more, available at https://cdt.org/blog/digital-indias-impact-on-privacy-aadhaar-numbers-biometrics-and-more/

      [22] Comments on India's Human DNA Profiling Bill (June 2015 version), available at http://www.genewatch.org/uploads/f03c6d66a9b354535738483c1c3d49e4/IndiaDNABill_FGPI_15.pdf

      [23] Elonnai Hickok, Vanya Rakesh and Vipul Kharbanda, CIS Comments and Recommendations to the Human DNA Profiling Bill, June 2015, available at http://cis-india.org/internet-governance/blog/cis-comments-and-recommendations-to-human-dna-profiling-bill-2015

      [25] Jyoti Pandey, Contestations of Data, ECJ Safe Harbor Ruling and Lessons for India, available at http://cis-india.org/internet-governance/blog/contestations-of-data-ecj-safe-harbor-ruling-and-lessons-for-india

      [26] Simon Cox, Case Watch: Making Sense of the Schrems Ruling on Data Transfer, available at https://www.opensocietyfoundations.org/voices/case-watch-making-sense-schrems-ruling-data-transfer

      [28] Section 2(41) (I) of the Consumer Protection Bill, 2015.

      [30] Rule 6 of Reasonable security practices and procedures and sensitive personal data or information Rules, 2011

      [31] Rule 4 of Reasonable security practices and procedures and sensitive personal data or information Rules, 2011

      [33] Supra Note 11.

      [34] Chaitanya Ramachandran, PUCL v. Union of India Revisited: Why India's Surveillance Law Must Be Redesigned for the Digital Age, available at http://nujslawreview.org/wp-content/uploads/2015/10/Chaitanya-Ramachandran.pdf

      Free Basics: Negating net parity

      by Sunil Abraham last modified Jan 03, 2016 05:58 AM
Researchers funded by Facebook were apparently told by 92 per cent of the Indians they surveyed (from large cities, with Internet connections and college degrees) that the Internet "is a human right and that Free Basics can help bring Internet to all of India." What a strange way to frame the question, given that the Internet is not a human right in most jurisdictions.

      The article was published in the Deccan Herald on January 3, 2016.


Free Basics is a gratis service offered by Facebook in partnership with telcos in 37 countries. It is a mobile app that features fewer than 100 of the billion-odd websites currently available on the WWW, which is in turn only a subset of the Internet. Free Basics violates Net Neutrality because it introduces an unnecessary gatekeeper who gets to decide "who is in" and "who is out". Services like Free Basics could permanently alienate the poor from the full choice of the Internet because they create price-discrimination hurdles that discourage those who want to leave the walled garden.

      Inika Charles and Arhant Madhyala, two interns at Centre for Internet and Society (CIS), surveyed 1/100th of the Facebook sample, that is, 30 persons with the very same question at a café near our office in Bengaluru. Seventy per cent agreed with Facebook that the Internet was a human right but only 26 per cent thought Free Basics would achieve universal connectivity. My real point here is that numbers don’t matter. At least not in the typical way they do. Facebook dismissed Amba Kak’s independent, unfunded, qualitative research in Delhi, in their second public rebuttal, saying the sample size was only 20.

That was truly ironic. The whole point of her research was the importance of small numbers. Kak says, "For some, it was the idea of an 'emergency' which made all-access plans valuable." A respondent stated: "But maybe once or twice a month, I need some information which only Google can give me... like the other day my sister needed to know results to her entrance exams." If you consider that too mundane, take a moment to picture yourself stranded in the recent Chennai flood. The statistical rarity of a Black Swan does not reduce its importance. A more neutral network is usually a more resilient network. When we do have our next national disaster, do we want to be one of the few countries on the planet that, thanks to flawed regulation, have ended up with a splinternet?
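Neither survey published error margins, but the sampling arithmetic behind dismissals of small samples is simple to sketch. Here is a minimal Python illustration of the normal-approximation margin of error for a sample proportion; the 3,000-person figure is only inferred from the "1/100th" comparison above, and none of these numbers come from Facebook's or Kak's actual methodology:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    observed among n respondents (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# CIS cafe survey: 70% of 30 respondents agreed with Facebook's framing.
print(round(margin_of_error(0.70, 30), 2))    # roughly +/- 0.16

# A hypothetical 3,000-person sample (100x larger) shrinks the interval
# by a factor of 10, the square root of the sample-size ratio.
print(round(margin_of_error(0.70, 3000), 3))  # roughly +/- 0.016
```

A 30-person sample thus carries an interval of about 16 percentage points either way, which is exactly why pollsters discount it; Kak's argument is that the value of a rare use case does not shrink with that interval.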

      Telecom Regulatory Authority of India (Trai) chairman R S Sharma rightly expressed some scepticism around numbers when he said “the consultation paper is not an opinion poll.” He elaborated: “The issue here is some sites are being offered to one person free of cost while another is paying for it. Is this a good thing and can operators have such powers?” Had he instead asked “Is this the best option?” my answer would be “no”. Given the way he has formulated the question, our answer is a lawyerly “it depends”. The CIS believes that differential pricing should be prohibited. However, it can be allowed under certain exceptional standards when it is done in a manner that can be justified by the regulator against four axes of sometimes orthogonal policy objectives. They are increased access, enhanced competition, increased user choice and contribution to openness. For example, a permanent ban on Free Basics makes sense in the Netherlands but regulation may be sufficient for India.

      Gatekeeping powers

To the second and more important part of the Trai chairman's question, on the gatekeeping powers of operators, our answer is a simple "no". But then, do we have any evidence that gatekeeping powers have been abused to the detriment of consumer and public interest? No. What do we do when we cannot, like Russell's chicken, use induction to predict our future? Prof Simon Wren-Lewis says, "If Bertrand Russell's chicken had been an economist ...(it would have)... asked a crucial additional question: Why is the farmer doing this? What is in it for him?" There were five serious problems with Free Basics that Facebook has at least partially fixed, thanks mostly to criticism from consumers in India and Brazil: one, exclusivity with the access provider; two, exclusivity with a set of web services; three, lack of transparency regarding the retention of personal information; four, misrepresentation through the name of the service, Internet.org; and five, lack of support for encrypted traffic. But how do we know these problems will stay fixed? Emerging markets guru Jan Chipchase tweeted: "Do you trust Facebook? Today? Tomorrow? When its share price is under pressure and it wants to wring more $$$ from the platform?"

Zero. Facebook pays telecom operators zero. The operators pay Facebook zero. The consumers pay zero. Why, then, do we need to regulate philanthropy? Because these freebies are not purely the fruit of private capital. They are only possible thanks to an artificial state-supported oligopoly dependent on public resources like spectrum and wires (over and under public property). Therefore, these oligopolies must serve the public interest and also ensure that users are treated in a non-discriminatory fashion.

      Also, the provision of a free service should not allow powerful corporations to escape regulation: in jurisdictions like Brazil it is clear that Facebook has to comply with consumer protection law even if users are not paying for the service. Given that big data is the new oil, Facebook could pay the access provider in advertisements, in manipulation of public discourse, or by tweaking software defaults, such as autoplay for videos, which could increase the bills of paying consumers quite dramatically.

      India needs a Net Neutrality regime that allows for business models and technological innovation as long as they don’t discriminate between users and competitors. The Trai should begin regulation based on principles, as it has rightly done with the pre-emptive temporary ban. But there is a need to bring “numbers we can trust” to the regulatory debate. We as citizens need to establish a peer-to-peer Internet monitoring infrastructure across mobile and fixed lines in India that we can use to crowdsource data.

      (The writer is Executive Director, Centre for Internet and Society, Bengaluru. He says CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world)

      Ground Zero Summit

      by Amber Sinha — last modified Jan 03, 2016 06:06 AM
      The Ground Zero Summit, which claims to be the largest collaborative platform in Asia for cyber-security, was held in New Delhi from 5 to 8 November. The conference was organised by the Indian Infosec Consortium (IIC), a not-for-profit organisation backed by the Government of India. Cyber security experts, hackers, senior officials from the government and defence establishments, senior professionals from the industry, and policymakers attended the event.

      Keynote Address

      The Union Home Minister, Mr. Rajnath Singh, inaugurated the conference. Mr Singh described the cyber-barriers that shape the issues governments face in ensuring cyber-security. Calling cyberspace the fifth dimension of security, in addition to land, air, water and space, Mr Singh emphasised the need to curb cyber-crime in India, which grew by 70% in 2014 over 2013. He highlighted the fact that changes in location, jurisdiction and language made cybercrime particularly difficult to address. Continuing in the same vein, Mr. Singh also mentioned cyber-terrorism as one of the big dangers in the times to come. With a number of government initiatives like Digital India, Smart Cities and Make in India leveraging technology, the Home Minister said that the success of these projects would depend on having robust cyber-security systems in place.

      The Home Minister outlined some initiatives that the Government of India is planning in order to address concerns around cyber security, such as plans to finalise a new national cyber policy. Significantly, he referred to a committee headed by Dr. Gulshan Rai, the National Cyber Security Coordinator, mandated to suggest a roadmap for effectively tackling cybercrime in India. This committee has recommended the setting up of an Indian Cyber Crime Coordination Centre (I-4C). This centre is meant to engage in capacity building with key stakeholders to enable them to address cyber crimes, and to work with law enforcement agencies. Earlier reports about the recommendation suggest that the I-4C will likely be placed under the National Crime Records Bureau and align with the state police departments through the Crime and Criminal Tracking Network and Systems (CCTNS). I-4C is supposed to comprise high-quality technical and R&D experts who would be engaged in developing cyber investigation tools.

      Other keynote speakers included Alok Joshi, Chairman, NTRO; Dr Gulshan Rai, National Cyber Security Coordinator; Dr. Arvind Gupta, Head of IT Cell, BJP; and Air Marshal S B Deo, Chief of the Western Air Command.

      Technical Speakers

      There were a number of technical speakers who presented on an array of subjects. The first session was by Jiten Jain, a cyber security analyst, who spoke on cyber espionage conducted by actors in Pakistan to target defence personnel in India. Jiten Jain talked about how the Indian Infosec Consortium had discovered these attacks in 2014. Most of these websites and mobile apps posed as defence news outlets and carried malware and viruses. An investigation conducted by IIC revealed the domains to be registered in Pakistan. In another session, Shesh Sarangdhar, the CEO of Seclabs, an application security company, spoke about the Darknet and ways to break anonymity on it. Sarangdhar mentioned that anonymity on the Darknet depends on every element of the communication maintaining a specific state. He discussed techniques like using audio files, cross-domain attacks on Tor, and Sybil attacks as methods of deanonymization. Dr. Triveni Singh, Assistant Superintendent of Police, Special Task Force, UP Police, made a presentation on trends in cyber crime. Dr. Singh emphasised the amount of uncertainty with regard to the purpose of a computer intrusion. He discussed real-life case studies, such as data theft, credit card fraud and share trading fraud, from the perspective of law enforcement agencies.

      Anirudh Anand, CTO of Infosec Labs, discussed how web applications are heavily reliant on filters or escaping methods. His talk focused on XSS (cross-site scripting) and bypassing regular expression filters. He also announced the release of XSS Labs, an XSS test bed for security professionals and developers that includes filter evasion techniques like b-services, weak cryptographic design and cross-site request forgery. Jan Siedl, an authority on SCADA, presented on Tor tricks which may be used by bots, shells and other tools to better use the Tor network and I2P. His presentation dealt with using obfuscated bridges, Hidden Services-based HTTP, multiple C&C addresses and the use of OTP. Aneesha, an intern with the Kerala Police, spoke about elliptic curve cryptography and its features, such as low processing overhead. Since messages must be encoded as points on the curve, efficient encoding and decoding techniques need to be developed. Aneesha spoke about an algorithm called Generator-Inverse for encoding and decoding a message using a Single Sign-on mechanism. Other subjects presented included vulnerabilities that remain despite using TLS/SSL, deception technology and the cyber kill-chain, credit card frauds, post-quantum cryptosystems and popular Android malware.
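      The weakness of regular-expression filters that Anand described can be illustrated with a minimal, hypothetical sketch (not from his talk): a sanitiser that strips a forbidden tag in a single pass can be defeated by a payload that reassembles the tag from the fragments left behind.

```python
import re

def naive_filter(html):
    """A naive sanitiser: strip <script> and </script> tags in one regex pass."""
    return re.sub(r"</?script>", "", html, flags=re.IGNORECASE)

# Each stripped tag leaves fragments that splice together into a fresh tag.
payload = "<scr<script>ipt>alert(1)</scr</script>ipt>"
filtered = naive_filter(payload)
# filtered is now "<script>alert(1)</script>" -- the filter rebuilt the attack
```

      Robust defences therefore rely on context-aware output encoding and parsing rather than blacklist regexes; the snippet above is illustrative only.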

      Panels

      There were also two panels organised at the conference. Samir Saran, Vice President of the Observer Research Foundation, moderated the first panel, on cyber arms control. The panel included Lt. General A K Sahni from the South Western Air Command; Lt. General A S Lamba, retired Vice Chief of the Indian Army; Alok Vijayant, Director of Cyber Security Operations at NTRO; and Captain Raghuraman from Reliance Industries. The panel debated the virtues of cyber arms control treaties. It acknowledged that there was a need to frame rules and create a governance mechanism for wars in cyberspace. However, this would be effective only if governments were the primary actors with the capability for building cyber-warfare know-how and tools. The reality was that most kinds of cyber weapons involved non-state actors from the hacker community. In light of this, cyber arms control treaties would lose most of their effectiveness.

      The second panel was on the ‘Make in India’ initiative. Dinesh Bareja, the CEO of Open Security Alliance and Pyramid Cyber Security, was the moderator for this panel, which also included Nandakumar Saravade, CEO of the Data Security Council of India; Sachin Burman, Director of NCIIPC; Dr. B J Srinath, Director General of ICERT; and Amit Sharma, Joint Director of DRDO. The focus of this session was on ‘Make in India’ opportunities in the domain of cyber security. The panelists discussed the role the government and industry could play in creating an ecosystem that supports entrepreneurs in skill development. Among the approaches discussed were: involving actors in knowledge-sharing and mentoring chapters, which could be backed by organisations like NASSCOM, and bringing together industry and government experts in events like the Ground Zero Summit to provide knowledge and training on cyber-security issues.

      Exhibitions

      The conference was accompanied by an exhibition showcasing indigenous cybersecurity products. The exhibitors included Smokescreen Technologies, Sempersol Consultancy, Ninja Hackon, Octogence Technologies, Secfence, Amity, Cisco Academy, Robotics Embedded Education Services Pvt. Ltd., Defence Research and Development Organisation (DRDO), Skin Angel, Aksit, Alqimi, Seclabs and Systems, Forensic Guru, Esecforte Technologies, Gade Autonomous Systems, National Critical Information Infrastructure Protection Centre (NCIIPC), Indian Infosec Consortium (IIC), INNEFU, Event Social, National Internet Exchange of India (NIXI) and Robotic Zone.

      The conference also witnessed events such as Drone Wars, in which selected participants had to navigate a drone, a Hacker Fashion Show, and the official launch of Ground Zero’s music album.

      Understanding the Freedom of Expression Online and Offline

      by Prasad Krishna last modified Jan 03, 2016 10:24 AM

      Attachment: PROVISIONAL PROGRAMME AGENDA_.pdf (PDF document, 542 kB)

      ICFI Workshop

      by Prasad Krishna last modified Jan 03, 2016 10:33 AM

      Attachment: ICFI Workshop note 10thDec2015.pdf (PDF document, 664 kB)

      Facebook Free Basics: Gatekeeping Powers Extend to Manipulating Public Discourse

      by Vidushi Marda last modified Jan 09, 2016 01:43 PM
      15 million people have come online through Free Basics, Facebook's zero rated walled garden, in the past year. "If we accept that everyone deserves access to the internet, then we must surely support free basic internet services. Who could possibly be against this?" asks Facebook founder Mark Zuckerberg, in a recent op-ed defending Free Basics.

      The article was published in Catchnews on January 6, 2016.


      This rhetorical question, however, has elicited a plethora of answers. The network neutrality debate has accelerated over the past few weeks, with the Telecom Regulatory Authority of India (TRAI) releasing a consultation paper on differential pricing.

      While notifications to "Save Free Basics in India" prompt you on Facebook, an enormous backlash against this zero rated service has erupted in India.

      Free Basics

      The policy objectives that must guide regulating net neutrality are consumer choice, competition, access and openness. Facebook claims that Free Basics is a transition to the full internet and digital equality. However, by acting as a gatekeeper, Facebook gives itself the distinct advantage of deciding what services people can access for free by virtue of them being "basic", thereby violating net neutrality.

      Amidst this debate, it's important to think of the impact Facebook can have on manipulating public discourse. In the past, Facebook has used its powerful News Feed algorithm to significantly shape our consumption of information online.

      In July 2014, Facebook researchers revealed that for a week in January 2012, the company had altered the news feeds of 689,003 randomly selected Facebook users to control how many positive and negative posts they saw. This was done without their consent, as part of a study to test how social media could be used to spread emotions online.

      Their research showed that emotions were in fact easily manipulated. Users tended to write posts that were aligned with the mood of their timeline.

      Another worrying indication of Facebook's ability to alter discourse came during the ALS Ice Bucket Challenge in July and August 2014. Users' News Feeds were flooded with videos of individuals pouring a bucket of ice over their heads to raise awareness for a charitable cause, but the campaign did not spread entirely on its merits.

      The challenge was Facebook's method of boosting its native video feature, which had launched around the same time. Meanwhile, the News Feed was mostly devoid of any news surrounding the riots in Ferguson, Missouri, which happened to be a trending topic on Twitter.

      Each day, the news feed algorithm has to choose roughly 300 posts out of a possible 1500 for each user, which involves much more than just a random selection. The posts you view when you log into Facebook are carefully curated keeping thousands of factors in mind. Each like and comment is a signal to the algorithm about your preferences and interests.

      The amount of time you spend on each post is logged and then used to determine which posts you are most likely to stop and read. Facebook even takes into account text that is typed but not posted and makes algorithmic decisions based on it.

      It also differentiates between likes - if you like a post before reading it, the news feed automatically assumes that your interest is much fainter as compared to liking a post after spending 10 minutes reading it.
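      Described in code, this kind of weighted, signal-driven selection might look like the toy sketch below. All signal names, weights and the scoring rule are invented for illustration; Facebook's actual ranking system is proprietary and uses thousands of factors.

```python
def score_post(signals, weights):
    """Combine a post's engagement signals into a single relevance score."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

def rank_feed(posts, weights, top_n=300):
    """Keep the top_n highest-scoring posts out of the candidate pool."""
    ranked = sorted(posts, key=lambda p: score_post(p["signals"], weights),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical weights: a comment counts more than a like, and time
# spent reading counts a little per second.
weights = {"liked_by_friend": 2.0, "seconds_viewed": 0.1, "commented": 3.0}

posts = [
    {"id": 1, "signals": {"liked_by_friend": 1, "seconds_viewed": 5,   "commented": 0}},
    {"id": 2, "signals": {"liked_by_friend": 0, "seconds_viewed": 600, "commented": 1}},
    {"id": 3, "signals": {"liked_by_friend": 1, "seconds_viewed": 1,   "commented": 0}},
]

ranked = rank_feed(posts, weights, top_n=2)  # post 2 outranks post 1; post 3 is dropped
```

      The point of the sketch is that whoever chooses the weights chooses what users see, which is exactly the gatekeeping concern raised in the article.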

      Facebook believes that this is in the best interest of the user, and that these factors help users see what they will most likely want to engage with. However, this keeps us at the mercy of a gatekeeper who impacts the diversity of information we consume, more often than not without explicit consent. Transparency is key.


      (Vidushi Marda is a programme officer at the Centre for Internet and Society)

      Human Rights in the Age of Digital Technology: A Conference to Discuss the Evolution of Privacy and Surveillance

      by Amber Sinha — last modified Jan 11, 2016 02:12 AM
      The Centre for Internet and Society organised a conference in roundtable format called ‘Human Rights in the Age of Digital Technology: A Conference to Discuss the Evolution of Privacy and Surveillance’. The conference was held at the India Habitat Centre on October 30, 2015. It was designed to be a forum for discussion, knowledge exchange and agenda building, to draw a shared road map for the coming months.

      In India, the Right to Privacy has been interpreted to mean an individual's right to be left alone. In an age of massive use of Information and Communications Technology, it has become imperative to have this right protected. The Supreme Court has held in a number of its decisions that the right to privacy is implicit in the fundamental right to life and personal liberty under Article 21 of the Indian Constitution, though Part III does not explicitly mention this right. The Supreme Court has identified the right to privacy most often in the context of state surveillance, and introduced the standards of compelling state interest, targeted surveillance and oversight mechanisms, which have been incorporated in the form of rules under the Indian Telegraph Act, 1885. Of late, privacy concerns have gained importance in India due to the initiation of national programmes like the UID Scheme, DNA Profiling and the National Encryption Policy, which have attracted criticism for their impact on the right to privacy. To add to the growing concerns, the Attorney General, Mukul Rohatgi, has argued in the ongoing Aadhaar case that the judicial position on whether the right to privacy is a fundamental right is unclear, questioning the entire body of jurisprudence on the right to privacy built up over the last few decades.

      Participation

      The roundtable saw participation from various civil society organisations such as the Centre for Communication Governance and The Internet Democracy Project, as well as individual researchers like Dr. Usha Ramanathan and Colonel Mathew.

      Introductions

      Vipul Kharbanda, Consultant, CIS, made the introductions and laid down the agenda for the day. Vipul presented a brief overview of the kind of work CIS is engaged in around privacy and surveillance, in areas including, among others, the Human DNA Profiling Bill, 2014, the Aadhaar Project, the Privacy Bill, and surveillance laws in India. It was also highlighted that CIS was engaged in work in the field of Big Data, in light of the growing voices wanting to use Big Data in the Smart Cities projects, and one of the questions was whether the nine Privacy Principles would still be valid in a Big Data and IoT paradigm.

      The Aadhaar Case

      Dr. Usha Ramanathan began by calling the Aadhaar project an identification project as opposed to an identity project. She brought up various aspects of the project, ranging from the myth of voluntariness and the strong and often misleading marketing that has driven the project, to the lack of mandate to collect biometric data and the problems with the technology itself. She highlighted the inconsistencies, irrationalities and lack of process that have characterised the Aadhaar project since its inception. A common theme she identified in how the project has been run was the ad hoc nature of many important decisions taken on a national scale, including the migration from existing systems to the Aadhaar framework. She particularly highlighted the fact that, for civil society actors trying to make sense of the project, an acute problem was the lack of credible information available. In that respect, she termed it a ‘powerpoint-driven’ project, with a focus on information collection but little information available about the project itself. Another issue Dr. Ramanathan brought up was the lack of concern most people have exhibited in sharing their biometric information without being aware of what it would be used for, which was in some ways symptomatic of the way we have begun to interact with technology, willingly giving information about ourselves with little thought.

      Dr. Ramanathan's presentation detailed the response to the project from various quarters in the form of petitions in different high courts in India, how the cases were received by the courts, and the contradictory responses from the government at various stages. Alongside, she also sought to place the Aadhaar case in the context of various debates and issues, like its conflict with the National Population Register, exclusion, ownership of the data collected, national security implications, and the impact on privacy and surveillance. Aside from the above issues, Dr. Ramanathan also posited that the flat idea of identity envisaged by projects like Aadhaar is problematic in that it adversely impacts how people can live, act and define themselves. In summation, she termed the behaviour of the government irresponsible for the manner in which it has changed its stand on issues to suit the expediency of the moment, and was particularly severe on the Attorney General for raising questions about the existence of a fundamental right to privacy and casually putting in peril jurisprudence on civil liberties that has evolved over decades.

      Colonel Mathew concurred with Dr. Ramanathan that the Aadhaar Project was not about identity but about identification. Prasanna developed this further, saying that while identity was a right unto the individual, identification was something done to you by others. Colonel Mathew further presented a brief history of the Aadhaar case, and how the significant developments over the last few years have played out in the courts. One of the important questions Colonel Mathew addressed was the claim of uniqueness made by the UID project. He pointed to research conducted by Hans Varghese Mathews, which analysed the data on biometric collection and processing released by the UID and demonstrated a clear probability of duplication in 1 out of every 97 enrolments. He also questioned the oft-repeated claim that UID would give identification to those without it and allow them to access welfare schemes. In this context, he pointed to the failures of the introducer system and the fact that only 0.03% of those registered have been enrolled through it. Colonel Mathew also questioned the change in stance of the ruling party, the BJP, which had earlier declared that the UID project should be scrapped as a threat to national security. According to him, the prime movers of the scheme were corporate interests outside the country interested in the data to be collected. This, he claimed, created very serious risks to national security. Prasanna added that while, on the face of it, some of the claims of threats to national security may sound alarmist, if one were to critically study the manner in which the data had been collected for this project, the concerns appeared justified.
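      A back-of-the-envelope calculation shows what a 1-in-97 duplication rate implies at national scale. The enrolment figure below is an assumed round number for illustration, not an official statistic.

```python
# Duplication rate from the research cited above: 1 in every 97 enrolments.
duplication_rate = 1 / 97

# Assumed enrolment base of ~1.2 billion, for illustration only.
enrolments = 1_200_000_000

# Expected duplicates if the rate holds across the whole enrolment base.
expected_duplicates = duplication_rate * enrolments
# On these assumptions, that is on the order of 12 million duplicate enrolments.
```

      Even under much more conservative assumptions, the arithmetic suggests duplicates in the millions, which is why the uniqueness claim was contested.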

      The Draft Encryption Policy

      Amber Sinha, Policy Officer at CIS, made a presentation on the brief appearance of the Draft Encryption Policy, which was released in October this year and withdrawn by the government within a day. Amber provided an overview of the policy, emphasising the clauses limiting the kinds of encryption algorithms and key sizes individuals and organisations could use, and the ill-advised procedures that needed to be followed. After the presentation, the topic was opened for discussion. The initial part of the discussion focused on specific clauses that threatened privacy and could enable greater surveillance of the electronic communications of individuals and organisations, most notably the exhaustive list of permitted encryption algorithms and the requirement to keep all encrypted communication in plain text format for a period of 90 days. We also attempted to locate the draft policy in the context of privacy debates in India as well as the global response to encryption. Amber emphasised that while mandating minimum standards of encryption for communication between government agencies may be an honourable motive, as it is concerned with matters of national security, extending this to private parties and imposing upper thresholds on the kinds of encryption they can use stems from the motive of surveillance. Nayantara, of The Internet Democracy Project, pointed out that there has been a global push-back against encryption by governments in countries like the US, Russia, China, Pakistan, Israel, the UK, Tunisia and Morocco. In India too, the IT Act places limits on encryption. Her point stands further buttressed by the calls against encryption in the aftermath of the terrorist attacks in Paris last month.

      The conference was also intended to include a session on the Human DNA Profiling Bill, led by Dr. Menaka Guruswamy. However, due to scheduling issues and paucity of time, we were not able to hold the session.

      Questions Raised

      On Aadhaar, some of the questions raised included the applicability of the rules under Section 43A of the IT Act to the private parties involved in the process. The issue of whether Aadhaar can be a tool against corruption was raised by Vipul. However, Colonel Mathew demonstrated through his research that issues like corruption in the TPDS system and MNREGA, which Aadhaar is supposed to solve, are not effectively addressed by it, and that there are simpler solutions to these problems.

      Ranjit raised questions about the different contexts of privacy, referring to the work of Helen Nissenbaum. He spoke about the history of freely providing biometric information in India, initially for property documents, and how it has gradually been used for surveillance. He argued that, due to this tradition, many people in India do not view sharing biometric information as infringing on their privacy. Dipesh Jain, a student at Jindal Global Law School, pointed to challenges like how individual privacy is perceived in India, its various contexts, and people resorting to the oft-quoted dictum of ‘why do you want privacy if you have nothing to hide’. In this context, it is pertinent to mention the response of Edward Snowden to this question: “Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say.” Aakash Solanki, researcher

      Vipul and Amber also touched upon the new challenges upon us in a world of Big Data, where traditional ways of ensuring data protection, such as the data minimisation principle and methods like anonymisation, may not work. With advances in computer science and mathematics threatening to re-identify anonymised datasets, and growing reliance on secondary uses of data coupled with the inadequacy of the idea of informed consent, a significant paradigm shift may be required in how we view privacy laws.

      A number of action items going forward were also discussed, with different individuals volunteering to lead research on issues like the UBCC set up by the UIDAI; GSTN, the first national data utility; and the recourses available to an individual whose data is held by parties outside India's jurisdiction.

      A Critique of Consent in Information Privacy

      by Amber Sinha and Scott Mason — last modified Jan 18, 2016 02:20 AM
      The idea of informed consent in privacy law is supposed to ensure the autonomy of an individual in any exercise which involves sharing of the individual's personal information. Consent is usually taken through a document, a privacy notice, signed or otherwise agreed to by the participant.

      Notice and Consent as cornerstone of privacy law
      The privacy notice, which is the primary subject of this article, conveys all pertinent information, including risks and benefits to the participant, and in the possession of such knowledge, they can make an informed choice about whether to participate or not.

      Most modern laws and data privacy principles seek to focus on individual control. In this context, the definition by the late Alan Westin, former Professor of Public Law & Government Emeritus, Columbia University, which characterises privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others," [1] is most apt. The idea of privacy as control is what finds articulation in data protection policies across jurisdictions, beginning with the Fair Information Practice Principles (FIPPs) from the United States. [2] Paul Schwartz, the Jefferson E. Peyser Professor at UC Berkeley School of Law and a Director of the Berkeley Center for Law and Technology, called the FIPPs the building blocks of modern information privacy law. [3] These principles trace their history to a report called 'Records, Computers and the Rights of Citizens'[4] prepared by an Advisory Committee appointed by the US Department of Health, Education and Welfare in 1973 in response to the increasing automation of data systems containing information about individuals. The Committee's mandate was to "explore the impact of computers on record keeping about individuals and, in addition, to inquire into, and make recommendations regarding, the use of the Social Security number."[5] The most important legacy of this report was the articulation of five principles which would not only play a significant role in privacy laws in the US but also inform data protection law in most privacy regimes internationally,[6] like the OECD Privacy Guidelines, the EU Data Protection Principles, the FTC Privacy Principles, the APEC Framework, and the nine National Privacy Principles articulated by the Justice A P Shah Committee Report, which are reflected in the Privacy Bill, 2014 in India. Fred Cate, the C. Ben Dutton Professor of Law at the Indiana University Maurer School of Law, effectively summarises the import of all of these privacy regimes as follows:

      "All of these data protection instruments reflect the same approach: tell individuals what data you wish to collect or use, give them a choice, grant them access, secure those data with appropriate technologies and procedures, and be subject to third-party enforcement if you fail to comply with these requirements or individuals' expressed preferences"[7]

      This makes the individual empowered and allows them to weigh their own interests in exercising their consent. The allure of this paradigm is that in one elegant stroke, it seeks to "ensure that consent is informed and free and thereby also to implement an acceptable tradeoff between privacy and competing concerns."[8] This system was originally intended to be only one of multiple ways in which data processing would be governed, along with other substantive principles such as data quality; however, it soon became the dominant and often the only mechanism.[9] In recent years, the emergence of Big Data and the nascent development of the Internet of Things have led many commentators to begin questioning the workability of consent as a principle of privacy. [10] In this article we will look closely at some of the issues with the concept of informed consent, and how these issues have become more acute in recent years. Following an analysis of these issues, we will conclude by arguing that consent, as the cornerstone of privacy law, may today be counter-productive, and that a rethinking of the principle-based approach to privacy may be necessary.

      Problems with Consent

      To a certain extent, there have always been cognitive problems with informed consent, such as long and difficult-to-understand privacy notices,[11] although in the recent past these problems have become much more aggravated. Fred Cate points out that the FIPPs at their inception were broad principles which included both substantive and procedural aspects. However, as they were translated into national laws, the emphasis remained on the procedural aspects of notice and consent. From the idea of individual or societal welfare as the goal of privacy, the focus shifted to individual control.[12] With data collection occurring with every use of online services, and complex data sets being created, it is humanly impossible to exercise rational decision-making about the choice to allow someone to use our personal data. The thrust of Big Data technologies is that the value of data resides not in its primary purposes but in its numerous secondary purposes, where data is re-used many times over. [13] In that sense, the very idea of Big Data conflicts with the data minimisation principle.[14] The idea is to retain as much data as possible for secondary uses. Since these secondary uses are, by their nature, unanticipated, this runs counter to the very idea of the purpose limitation principle. [15] The notice and consent requirement has simply led to a proliferation of long and complex privacy notices which are seldom read and even more rarely understood. We will articulate some issues with privacy notices which have always existed, and have only become more exacerbated in the context of Big Data and the Internet of Things.

      1. Failure to read/access privacy notices

      The notice and consent principle relies on the ability of the individual to make an informed choice after reading the privacy notice. The purpose of a privacy notice is to act as a public announcement of the internal practices on collection, processing, retention and sharing of information and make the user aware of the same.[16] However, in order to do so the individual must first be able to access the privacy notices in an intelligible format and read them. Privacy notices come in various forms, ranging from documents posted as privacy policies on a website, to click through notices in a mobile app, to signs posted in public spaces informing about the presence of CCTV cameras. [17]

      In order for the principle of notice and consent to work, privacy notices need to be made available in a language understood by the user. As per estimates, about 840 million people (11% of the world population) can speak or understand English; yet most privacy notices online are not available in the local languages of different regions.[18] Further, with the ubiquity of smartphones and the advent of the Internet of Things, constrained interfaces on mobile screens and wearables make privacy notices extremely difficult to read. It must be remembered that privacy notices often run into several pages, and smaller screens effectively ensure that most users do not read through them. Further, connected wearable devices often have "little or no interfaces that readily permit choices."[19] As more and more devices are connected, this problem will only get more pronounced. Imagine a world in which your refrigerator acts as the intermediary disclosing information to your doctor or supermarket: at what point does the data subject step in and exercise consent?[20]

      Another aspect that needs to be understood is that, unlike earlier times when data collectors were few and far between and the user could theoretically make a rational choice taking into account the purpose of data collection, in the world of Big Data consent often needs to be provided while the user is trying to access a service. In that context, click-through privacy notices, such as those required to access online applications, are treated simply as an impediment that must be crossed in order to get access to services. The fact that consent needs to be given in real time almost always results in disregarding what the privacy notice says.[21]

      Finally, some scholars have argued that while individual control over data may be appealing in theory, it merely gives an illusion of enhanced privacy, not the reality of meaningful choice.[22] Research demonstrates that the presence of the term 'privacy policy' leads people to the false assumption that if a company has a privacy policy in place, it automatically means there are substantive and responsible limits on how data is handled.[23] Joseph Turow, the Robert Lewis Shayon Professor of Communication at the Annenberg School for Communication, and his team, for example, have demonstrated how "[w]hen consumers see the term 'privacy policy,' they believe that their personal information will be protected in specific ways; in particular, they assume that a website that advertises a privacy policy will not share their personal information."[24] In reality, however, privacy policies are more likely to serve as liability disclaimers for companies than as any kind of guarantee of privacy for consumers. Most people tend to ignore privacy policies.[25] Cass Sunstein states that our cognitive capacity to make choices and take decisions is limited: when faced with an overwhelming number of choices, most of us do not read privacy notices and resort to default options.[26] The requirement to make choices, sometimes several times a day, imposes a significant burden on consumers as well as on the businesses seeking such consent.[27]

      2. Failure to understand privacy notices

      FTC Chairperson Edith Ramirez has stated: "In my mind, the question is not whether consumers should be given a say over unexpected uses of their data; rather, the question is how to provide simplified notice and choice."[28] Privacy notices often come in the form of long legal documents, much to the detriment of readers' ability to understand them. These policies are "long, complicated, full of jargon and change frequently."[29] Kent Walker lists five problems that privacy notices typically suffer from: a) overkill - long and repetitive text in small print; b) irrelevance - describing situations of little concern to most consumers; c) opacity - broad terms that reflect the fact that it is impossible to track and control all the information collected and stored; d) non-comparability - the simplification required to achieve comparability would compromise accuracy; and e) inflexibility - failure to keep pace with new business models.[30] Erik Sherman reviewed twenty-three corporate privacy notices and mapped them against three indices that give the approximate level of education necessary to understand a text on a first read. His results show that most of the policies can only be understood on a first read by readers at a grade level of 15 or above.[31] Timothy Muris, a former FTC Chairman, summed up the problem with long privacy notices when he said, "Acres of trees died to produce a blizzard of barely comprehensible privacy notices."[32]
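      The indices Sherman used are standard readability formulas. As a rough illustration (not tied to his exact methodology), one of the most widely used, the Flesch-Kincaid grade level, can be approximated in a few lines of Python; the syllable counter below is a crude vowel-group heuristic, so scores are approximate:

```python
import re

def syllable_count(word):
    """Crude heuristic: count groups of consecutive vowels (min. 1 per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(syllable_count(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# An invented sentence in the register of a typical privacy policy scores
# far above the reading level of an average consumer.
sample = ("We may share your personal information with affiliated entities "
          "and unaffiliated third parties for analytical purposes.")
print(round(fk_grade(sample), 1))
```

      By contrast, plain prose such as "We will not sell your data." scores near the primary-school range, which is precisely the gap these readability indices are designed to expose.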

      Margaret Jane Radin, the Henry King Ransom Professor of Law Emerita at the University of Michigan, provides a good definition of free consent: it "involves a knowing understanding of what one is doing in a context in which it is actually possible for one to do otherwise, and an affirmative action in doing something, rather than a merely passive acquiescence in accepting something."[33] There have been various proposals advocating a more succinct and simpler standard for privacy notices,[34] multi-layered notices,[35] or representing the information in the form of a table.[36] However, studies show only an insignificant improvement in consumer understanding when privacy policies are represented in graphic formats like tables and labels.[37] It has also been pointed out that it is impossible to convey complex data policies in simple and clear language.[38]

      3. Failure to anticipate/comprehend the consequences of consent

      Today's infinitely complex and labyrinthine data ecosystem is beyond the comprehension of most ordinary users. Despite a growing willingness to share information online, most people have no understanding of what happens to their data once they have uploaded it: where it goes, who holds it, under what conditions, for what purpose, or how it might be used, aggregated, hacked, or leaked in the future. For the most part, these operations are "invisible, managed at distant centers, from behind the scenes, by unmanned powers."[39]

      The perceived opportunities and benefits of Big Data have led to an acceptance of the indiscriminate collection of as much data as possible, as well as the retention of that data for unspecified future analysis. For many advocates, such practices are absolutely essential if Big Data is to deliver on its promises. Experts have argued that key privacy principles, particularly those of collection limitation, data minimization and purpose limitation, should not be applied to Big Data processing.[40] As mentioned above, in the case of Big Data the value of the data collected often comes not from its primary purpose but from its secondary uses. Deriving value from datasets involves amalgamating diverse datasets and executing speculative and exploratory kinds of analysis in order to discover hidden insights and correlations that might previously have gone unnoticed.[41] As such, organizations today routinely reprocess data collected from individuals for purposes not directly related to the services they provide to the customer. These secondary uses of data are becoming increasingly valuable sources of revenue for companies as the value of data in and of itself continues to rise.[42]

      Purpose Limitation

      The principle of purpose limitation has served as a key component of data protection for decades. The purposes for which users' data will be processed should be stated at the time of collection and consent, and should be "specified, explicit and legitimate". In practice, however, the reasons given typically include phrases such as 'for marketing purposes' or 'to improve the user experience' that are vague and open to interpretation.[43]

      Some commentators, whilst conceding that purpose limitation in the era of Big Data may not be possible, have instead attempted to emphasise the notion of 'compatible use' requirements. In the view of the Article 29 Working Party on the protection of individuals with regard to the processing of personal data, for example, use of data for a purpose other than that originally stated at the point of collection should be subject to a case-by-case review of whether or not further processing for a different purpose is justifiable - i.e., compatible with the original purpose. Such a review may take into account, for example, the context in which the data was originally collected, the nature or sensitivity of the data involved, and the existence of relevant safeguards to ensure fair processing of the data and prevent undue harm to the data subject.[44]

      On the other hand, Big Data advocates have argued that an assessment of legitimate interest, rather than compatibility with the initial purpose, is far better suited to Big Data processing.[45] They argue that the notion of purpose limitation has become outdated. Previously, data was collected largely as a by-product of the service for which it was provided: if, for example, we opted to use a service, the information we supplied was for the most part necessary to enable the provision of that service. Today, however, the utility of data is no longer restricted to the primary purpose for which it is collected; it can be used to provide all kinds of secondary services and resources, reduce waste, increase efficiency and improve decision-making.[46] These kinds of positive externalities, Big Data advocates insist, are only made possible by the reprocessing of data.

      Unfortunately for the notion of consent, the nature of these secondary purposes is rarely evident at the time of collection. Instead, the true value of the data can often only be revealed when it is amalgamated with other diverse datasets and subjected to various forms of analysis that help reveal hidden and non-obvious correlations and insights.[47] The uncertain and speculative value of data therefore means that it is impossible to provide "specified, explicit and legitimate" details about how a given data set will be used or how it might be aggregated in future. Without this crucial information, data subjects have no basis upon which to make an informed decision about whether or not to provide consent. Robert Sloan and Richard Warner argue that it is impossible for a privacy notice to contain enough information to enable free consent: current data collection practices are highly complex, and they involve collecting information at one stage for one purpose and then retaining, analyzing, and distributing it for a variety of other purposes in unpredictable ways.[48] Helen Nissenbaum points to the ever-changing nature of data flows and the cognitive challenges they pose: "Even if, for a given moment, a snapshot of the information flows could be grasped, the realm is in constant flux, with new firms entering the picture, new analytics, and new back end contracts forged: in other words, we are dealing with a recursive capacity that is indefinitely extensible."[49]

      Scale and Aggregation

      Today the quantity of data being generated is expanding at an exponential rate. From smartphones and televisions, trains and airplanes, to sensor-equipped buildings and even the infrastructures of our cities, data now streams constantly from almost every sector and function of daily life, 'creating countless new digital puddles, lakes, tributaries and oceans of information'.[50] In 2011 it was estimated that the quantity of data produced globally would surpass 1.8 zettabytes; by 2013 that had grown to 4 zettabytes, and with the nascent development of the Internet of Things gathering pace, these trends are set to continue.[51] Big Data by its very nature requires the collection and processing of very large and very diverse data sets. Unlike other forms of scientific research and analysis, which use sampling techniques to identify and target the types of data most useful to the research questions, Big Data seeks to gather as much data as possible in order to achieve full resolution of the phenomenon being studied - a task made much easier in recent years by the proliferation of internet-enabled devices and the growth of the Internet of Things. This goal of attaining comprehensive coverage exists in tension, however, with the key privacy principles of collection limitation and data minimization, which seek to limit both the quantity and variety of data collected about an individual to the absolute minimum.[52]

      The dilution of the purpose limitation principle means that even those who understand privacy notices and are capable of making rational choices about them cannot conceptualize how their data will be aggregated and possibly used or re-used. Seemingly innocuous bits of data revealed at different stages can be combined to reveal sensitive information about the individual. Daniel Solove, the John Marshall Harlan Research Professor of Law at the George Washington University Law School, calls this the aggregation effect in his book, "The Digital Person". He argues that the ingenuity of data mining techniques, and the insights and predictions that can be drawn from them, render ineffectual any cost-benefit analysis that an individual could make.[53]

      4. Failure to opt-out

      The traditional choice against the collection of personal data that users have had, at least in theory, is the option to 'opt out' of certain services. This draws on the free-market theory that individuals exercise their free will when they use services and always have the option of opting out - an argument against regulation that relies on the collective wisdom of the market to weed out harms. The notion that the provision of data should be a matter of personal choice, and that the individual can, if they so choose, 'opt out' of data collection, for example by ceasing to use a particular service, is an important component of privacy and data protection frameworks. The proliferation of internet-enabled devices, their integration into the built environment, and the real-time nature of data collection and analysis, however, are beginning to undermine this concept. For many critics of Big Data, the ubiquity of data collection points, as well as the compulsory provision of data as a prerequisite for the access and use of many key online services, is making opting out of data collection not only impractical but in some cases impossible.[54]

      Sceptics may object that individuals are still free to stop using services that require data. But as online connectivity becomes increasingly important to participation in modern life, the choice to withdraw completely is becoming less of a genuine choice.[55] Information flows not only from the individuals it is about but also from what other people say about them. Financial transactions made online or via debit/credit cards can be analysed to derive further information about the individual. If opting out makes you look anti-social, criminal, or unethical, the claim that we are exercising free will seems murky, and leads one to wonder whether we are dealing with coercive technologies.

      Another issue with the consent and opt-out paradigm is the binary nature of the choice, which makes a mockery of the notion that consent can function as an effective tool of personal data management. What it effectively means is that one can either agree to a long privacy notice or abandon the desired service. "This binary choice is not what the privacy architects envisioned four decades ago when they imagined empowered individuals making informed decisions about the processing of their personal data. In practice, it certainly is not the optimal mechanism to ensure that either information privacy or the free flow of information is being protected."[56]

      Conclusion: 'Notice and Consent' is counter-productive

      There continues to be an unwillingness amongst many privacy advocates to concede that the concept of consent is fundamentally broken; as Simon Davies, a privacy advocate based in London, comments, 'to do so could be seen as giving ground to the data vultures', and risks further weakening an already dangerously fragile privacy framework.[57] Nevertheless, as we begin to transition into an era of ubiquitous data collection, the evidence is becoming stronger that consent is not simply ineffective, but may in some instances be counter-productive to the goals of privacy and data protection.

      As already noted, the notion that privacy agreements produce anything like truly informed consent has long since been discredited; given this, one may ask for whose benefit such agreements are created. One may justifiably argue that, far from being for the benefit and protection of users, privacy agreements may in fact fundamentally benefit data brokers, who, having gained the consent of users, can act with near impunity in their use of the data collected. Thus, an overly narrow focus on the necessity of consent at the point of collection risks diverting our attention from the arguably more important issue of how our data is stored, analysed and distributed by data brokers following its collection.[58]

      Furthermore, given the often complicated and cumbersome processes involved in gathering consent from users, some have raised concerns that the mechanisms put in place to garner consent could themselves morph into surveillance mechanisms. Davies, for example, cites the case of the EU Cookie Directive, which required websites to gain consent for the collection of cookies. He observes how 'a proper audit and compliance element in the system could require the processing of even more data than the original unregulated web traffic. Even if it was possible for consumers to use some kind of gateway intermediary to manage the consent requests, the resulting data collection would be overwhelming'. Thus, in many instances there exists a fundamental tension between the requirement placed on companies to gather consent and the equally important principle of data minimization.[59]

      Given the above issues with notice and informed consent in the context of information privacy, and the fact that the mechanism can be counterproductive to the larger goals of privacy law, it is important to revisit the principles- and rights-based approach to data protection and consider a paradigm shift towards a risk-based approach - one that takes into account the actual threats of sharing data, rather than relying on what has proved to be an ineffectual system of individual control. We will deal with some of these issues in a follow-up to this article.


      [1] Alan Westin, Privacy and Freedom, Atheneum, New York, 2015.

      [2] FTC Fair Information Practice Principles (FIPP) available at https://www.it.cornell.edu/policies/infoprivacy/principles.cfm.

      [3] Paul M. Schwartz, "Privacy and Democracy in Cyberspace," 52 Vanderbilt Law Review 1607, 1614 (1999).

      [4] US Secretary's Advisory Committee on Automated Personal Data Systems, Records, Computers and the Rights of Citizens, available at http://www.justice.gov/opcl/docs/rec-com-rights.pdf

      [6] Marc Rotenberg, "Fair Information Practices and the Architecture of Privacy: What Larry Doesn't Get," available at https://journals.law.stanford.edu/sites/default/files/stanford-technology-law-review/online/rotenberg-fair-info-practices.pdf

      [7] Fred Cate, The Failure of Information Practice Principles, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1156972

      [8] Robert Sloan and Richard Warner, Beyond Notice and Choice: Privacy, Norms and Consent, 2014, available at https://www.suffolk.edu/documents/jhtl_publications/SloanWarner.pdf

      [9] Fred Cate and Viktor Mayer-Schoenberger, Notice and Consent in a World of Big Data, available at http://idpl.oxfordjournals.org/content/3/2/67.abstract

      [10] Daniel Solove, Privacy self-management and consent dilemma, 2013 available at http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [11] Ben Campbell, Informed consent in developing countries: Myth or Reality, available at https://www.dartmouth.edu/~ethics/docs/Campbell_informedconsent.pdf ;

      [12] Supra Note 7.

      [13] Viktor Mayer-Schoenberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think, John Murray, London, 2013, at 153.

      [14] The Data Minimization principle requires organizations to limit the collection of personal data to the minimum extent necessary to obtain their legitimate purpose and to delete data no longer required.

      [15] Omer Tene and Jules Polonetsky, "Big Data for All: Privacy and User Control in the Age of Analytics," SSRN Scholarly Paper, available at http://papers.ssrn.com/abstract=2149364

      [16] Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

      [17] Daniel Solove, The Digital Person: Technology and Privacy in the Information Age, NYU Press, 2006.

      [19] Opening Remarks of FTC Chairperson Edith Ramirez Privacy and the IoT: Navigating Policy Issues International Consumer Electronics Show Las Vegas, Nevada January 6, 2015 available at https://www.ftc.gov/system/files/documents/public_statements/617191/150106cesspeech.pdf

      [21] Supra Note 10.

      [22] Supra Note 7.

      [23] Chris Jay Hoofnagle & Jennifer King, Research Report: What Californians Understand

      About Privacy Online, available at http://ssrn.com/abstract=1262130

      [24] Joseph Turow, Michael Hennessy, Nora Draper, The Tradeoff Fallacy, available at https://www.asc.upenn.edu/sites/default/files/TradeoffFallacy_1.pdf

      [25] Saul Hansell, "Compressed Data: The Big Yahoo Privacy Storm That Wasn't," New York Times, May 13, 2002 available at http://www.nytimes.com/2002/05/13/business/compressed-data-the-big-yahoo-privacy-storm-that-wasn-t.html?_r=0

      [26] Cass Sunstein, Choosing not to choose: Understanding the Value of Choice, Oxford University Press, 2015.

      [28] Opening Remarks of FTC Chairperson Edith Ramirez Privacy and the IoT: Navigating Policy Issues International Consumer Electronics Show Las Vegas, Nevada January 6, 2015 available at https://www.ftc.gov/system/files/documents/public_statements/617191/150106cesspeech.pdf

      [29] L. F. Cranor. Necessary but not sufficient: Standardized mechanisms for privacy notice and choice. Journal on Telecommunications and High Technology Law, 10:273, 2012, available at http://jthtl.org/content/articles/V10I2/JTHTLv10i2_Cranor.PDF

      [30] Kent Walker, The Costs of Privacy, 2001 available at https://www.questia.com/library/journal/1G1-84436409/the-costs-of-privacy

      [31] Erik Sherman, "Privacy Policies are great - for Phds", CBS News, available at http://www.cbsnews.com/news/privacy-policies-are-great-for-phds/

      [32] Timothy J. Muris, Protecting Consumers' Privacy: 2002 and Beyond, available at http://www.ftc.gov/speeches/muris/privisp1002.htm

      [33] Margaret Jane Radin, Humans, Computers, and Binding Commitment, 1999 available at http://www.repository.law.indiana.edu/ilj/vol75/iss4/1/

      [34] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004 available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

      [35] The Center for Information Policy Leadership, Hunton & Williams LLP, "Ten Steps To Develop A Multi-Layered Privacy Notice" available at https://www.informationpolicycentre.com/files/Uploads/Documents/Centre/Ten_Steps_whitepaper.pdf

      [36] Allen Levy and Manoj Hastak, Consumer Comprehension of Financial Privacy Notices, Interagency Notice Project, available at https://www.sec.gov/comments/s7-09-07/s70907-21-levy.pdf

      [37] Patrick Gage Kelly et al., Standardizing Privacy Notices: An Online Study of the Nutrition Label Approach available at https://www.ftc.gov/sites/default/files/documents/public_comments/privacy-roundtables-comment-project-no.p095416-544506-00037/544506-00037.pdf

      [39] Jonathan Obar, Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self management, Big Data and Society, 2015, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239188

      [40] Viktor Mayer-Schoenberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think, John Murray, London, 2013.

      [41] Supra Note 15.

      [42] Supra Note 40.

      [43] Article 29 Working Party, (2013) Opinion 03/2013 on Purpose Limitation, Article 29, available at: http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf

      [44] Ibid.

      [45] It remains unclear however whose interest would be accounted, existing EU legislation would allow commercial/data broker/third party interests to trump those of the user, effectively allowing re-processing of personal data irrespective of whether that processing would be in the interest of the user.

      [46] Supra Note 40.

      [47] Supra Note 10.

      [48] Robert Sloan and Richard Warner, Beyond Notice and Choice: Privacy, Norms and Consent, 2014, available at https://www.suffolk.edu/documents/jhtl_publications/SloanWarner.pdf

      [49] Helen Nissenbaum, A Contextual Approach to Privacy Online, available at http://www.amacad.org/publications/daedalus/11_fall_nissenbaum.pdf

      [50] D Bollier, The Promise and Peril of Big Data. The Aspen Institute, 2010, available at: http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf

      [51] Meeker, M. & Yu, L. Internet Trends, Kleiner Perkins Caulfield Byers, (2013), http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013 .

      [52] Supra Note 40.

      [53] Supra Note 17.

      [54] Janet Vertesi, My Experiment Opting Out of Big Data Made Me Look Like a Criminal, 2014, available at http://time.com/83200/privacy-internet-big-data-opt-out/

      [55] Ibid.

      [57] Simon Davies, Why the idea of consent for data processing is becoming meaningless and dangerous, available at http://www.privacysurgeon.org/blog/incision/why-the-idea-of-consent-for-data-processing-is-becoming-meaningless-and-dangerous/

      [58] Supra Note 10.

      [59] Simon Davies, Why the idea of consent for data processing is becoming meaningless and dangerous, available at http://www.privacysurgeon.org/blog/incision/why-the-idea-of-consent-for-data-processing-is-becoming-meaningless-and-dangerous/

      UID Ad

      by Prasad Krishna last modified Jan 13, 2016 02:28 AM

      Times of India 29_08_2015.pdf — PDF document, 1155 kB

      Reply to RTI Application under RTI Act of 2005 from Vanya Rakesh

      by Vanya Rakesh last modified Jan 13, 2016 02:40 AM
      Unique Identification Authority of India replied to the RTI application filed by Vanya Rakesh.

      Madam,

      1. Please refer to your RTI application dated 3.12.2015 received in the Division on 10.12.2015 on the subject mentioned above requesting to provide the information in electronic form via the email address [email protected], copies of the artwork in print media released by UIDAI to create awareness about use of Aadhaar not being mandatory.
      2. I am directed to furnish herewith in electronic form, copy of the artwork in print media released / published in the epapers edition of the Times of India and Dainik Jagran in their respective editions of dated 29.8.2015 in a soft copy, about obtaining of Aadhaar not being mandatory for a citizen, as desired.
      3. In case, you want to go for an appeal in connection with the information provided, you may appeal to the Appellate Authority indicated below within thirty days from the date of receipt of this letter.
        Shri Harish Lal Verma,
        Deputy Director (Media),
        Unique Identification Authority of India
        3rd Floor, Tower – II, Jeevan Bharati Building,
        New Delhi – 110001.


      Yours faithfully,

      (T Gou Khangin)
      Section Officer & CPIO Media Division

      Copy for information to: Deputy Director (Establishment) & Nodal CPIO


      Below are scanned copies:

      RTI Reply
      Coverage in Dainik Jagran

      Download the coverage in the Times of India here. Read the earlier blog entry here.

      Background Note Big Data

      by Prasad Krishna last modified Jan 17, 2016 01:55 AM

      Background Note-BigDataandandGovernanceinIndia.pdf — PDF document, 131 kB

      Network Neutrality across South Asia

      by Prasad Krishna last modified Jan 17, 2016 02:37 AM

      Network Neutrality Agenda Information_1.14..2016.pdf — PDF document, 411 kB

      NASSCOM-DSCI Annual Information Security Summit 2015 - Notes

      by Sumandro Chattapadhyay last modified Jan 19, 2016 07:58 AM
      NASSCOM-DSCI organised the 10th Annual Information Security Summit (AISS) 2015 in Delhi during December 16-17. Sumandro Chattapadhyay participated in this engaging Summit. He shares a collection of his notes and various tweets from the event.
      NASSCOM-DSCI Annual Information Security Summit 2015 - Notes

      Annual Information Security Summit (AISS) 2015

       

      Details about the Summit

      Event page: https://www.dsci.in/events/about/2261.

      Agenda: https://www.dsci.in/sites/default/files/Agenda-AISS-2015.pdf.

       

      Notes from the Summit

      Mr. G. K. Pillai, Chairman of the Data Security Council of India (DSCI), set the tone of the Summit in the very first hour by noting that 1) state and private industries in India are working in silos when it comes to preventing cybercrimes, 2) there is a lot of skill among young technologists and entrepreneurs, and the state and the private sectors are often unaware of this, and 3) there is a serious lack of (cyber-)capacity among law enforcement agencies.

      In his Inaugural Address, Dr. Arvind Gupta (Deputy National Security Advisor and Secretary, NSCS), provided a detailed overview of the emerging challenges and framework of cybersecurity in India. He focused on the following points:

      • Security is a key problem in the present era of ICTs as it is not in-built. In the upcoming IoT era, security must be built into ICT systems.
      • Of the next billion people added to the internet population, 50% will be from India. Hence cybersecurity is a big concern for India.
      • ICTs will play a catalytic role in achieving SDGs. Growth of internet is part of the sustainable development agenda.
      • We need a broad range of critical security services - big data analytics, identity management, etc.
      • The e-governance initiatives launched by the Indian government are critically dependent on a safe and secure internet.
      • Darkweb is a key facilitator of cybercrime. Globally there is a growing concern regarding the security of cyberspace.
      • On the other hand, there exists a deep divide in access to ICTs, and also in the availability of content in local languages.
      • The Indian government has initiated bilateral cybersecurity dialogues with various countries.
      • The Indian government is contemplating setting up centres of excellence in cryptography. It has already partnered with NASSCOM to develop cybersecurity guidelines for smart cities.
      • While India is a large global market for security technology, it also needs to be self-reliant. The Indian private sector should make use of government policies, and of the bilateral trust India enjoys with various developing countries in Africa and South America, to develop security technology solutions, create meaningful jobs in India, and export services and software to other developing countries.
      • Strong research and development, and manufacturing base are absolutely necessary for India to be self-reliant in cybersecurity. DSCI should work with private sector, academia, and government to coordinate and realise this agenda.
      • In the line of the Climate Change Fund, we should create a cybersecurity fund, since it is a global problem.
      • Silos are our bane in general. Bringing government agencies together is crucial. Trust issues (between government, private sector, and users) remain, and can only be resolved over time.
      • The demand for cybersecurity solutions in India is so large, that there is space for everyone.
      • The national cybersecurity centre is being set up.
      • Thinktanks can play a crucial role in helping the government to develop strategies for global cybersecurity negotiations. Indian negotiators are often capacity constrained.

      Rajendra Pawar, Chair of the NASSCOM Cyber Security Task Force, NASSCOM Cybersecurity Initiative, provided glimpses of the emerging business opportunity around cybersecurity in India:

      • In the next 10 years, the IT economy in India will be worth USD 350 bn, and 10% of that will be the cybersecurity pie. This means a million jobs in the cybersecurity space alone.
      • Academic institutes are key to the creation of new ideas, and hence of entrepreneurs. Government and the private sector should work closely with academic institutes.
      • Globally, cybersecurity innovation and industry happen in clusters. Cities and states must come forward to create such clusters.
      • Two-thirds of the cybersecurity market is the provision of services. This is where India has a great advantage, and it should build on that to become a global brand in cybersecurity services.
      • Everyday digital security literacy and cultures need to be created.
      • Publication of cybersecurity best practices among private companies is a necessity.
      • Dedicated cybersecurity spending should be made part of the e-governance budget of central and state governments.
      • DSCI should function as a clearing house of cybersecurity case studies. At present, thought leadership in cybersecurity comes from the criminals. By serving as a use-case clearing house, DSCI will inform interested researchers about the potential challenges for which solutions need to be created.
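The market-size projection in the first bullet above can be sanity-checked with simple arithmetic (an illustrative sketch; the USD 350 bn figure, the 10% share, and the one-million-jobs estimate are the speaker's, while the revenue-per-job ratio derived here is only an implication of those numbers, not a figure from the talk):

```python
# Back-of-envelope check of the projections quoted above.
# All inputs come from the speaker's figures; nothing here is official data.

it_economy_usd_bn = 350        # projected Indian IT economy in 10 years (USD bn)
cyber_share = 0.10             # cybersecurity's claimed share of that economy
cyber_market_usd_bn = it_economy_usd_bn * cyber_share

print(f"Projected cybersecurity market: USD {cyber_market_usd_bn:.0f} bn")

# One million jobs on that market implies a revenue-per-job figure,
# which looks plausible for a services-heavy sector.
jobs = 1_000_000
revenue_per_job_usd = cyber_market_usd_bn * 1e9 / jobs
print(f"Implied revenue per job: USD {revenue_per_job_usd:,.0f}")
```

Running this yields a USD 35 bn market and roughly USD 35,000 of revenue per job, consistent with the "million jobs" claim being read as a services-driven estimate.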

      Manish Tiwary of Microsoft informed the audience that India ranks among the top three countries globally in terms of malware proliferation, which makes India a big focus for Microsoft in its global war against malware. Microsoft India looks forward to working closely with CERT-In and other government agencies.

      The session on Catching Fraudsters had two insightful presentations from Dr. Triveni Singh, Additional SP of Special Task Force of UP Police, and Mr. Manoj Kaushik, IAS, Additional Director of FIU.

      Dr. Singh noted that a key challenge faced by police today is that nobody comes to them with a case of online fraud. Most fraud businesses are run by young groups operating BPOs that steal details from individuals. There exists a huge black market for financial and personal data, often collected from financial institutions and job search sites; almost any personal data can be bought in such markets. Further, SIM cards under fake names are very easy to buy. Fraudsters operate effectively under entirely fake identities, using operational infrastructure outsourced from legitimate vendors under false names. Without a central database of all bank customers, it is very difficult for the police to track people across the financial sector. It becomes even more difficult for Indian police to get access to the personal data of potential fraudsters when it is stored on a foreign server, which is often the case with common web services and apps. Many Indian ISPs do not keep IP history data systematically, or do not have the technical expertise to share it in a structured and time-sensitive way.

      Mr. Kaushik explained that no financial fraud is committed uniquely via the internet. Many frauds begin on the internet but eventually involve physical fraudulent money transactions. Credit/debit card frauds all involve card data theft via various internet-based and physical methods. However, cybercrime continues to be mistakenly seen as fraud undertaken completely online. Further, mobile-based frauds are yet another category. Almost all the apps we use are compromised, or store transaction history in an insecure way, which exposes such data to hackers. FIU is targeting the bank accounts to which fraudulent money flows, and closing them down. Catching the people behind these accounts is much more difficult, as account loaning has become a common practice: valid accounts are loaned out for a small sum to fraudsters, who return the account after withdrawing the fraudulent money. Better information sharing between the private sector and government will make catching fraudsters easier.

      The session on Smart Cities focused on the actual cities coming up in India, and the security challenges they highlight. There was a presentation on Mahindra World City, being built near Jaipur. Presenters talked about the need to stabilise, standardise, and secure the unique identities of machines and sensors in a smart city context, so as to enable secure machine-to-machine communication. Since 'smartness' comes from connecting various applications and data silos together, the governance of proprietary technology and the enforcement of inter-operable data standards are crucial in the smart city.

      As Special Purpose Vehicles are being planned to realise the smart cities, the presenters warned that finding the right CEOs for these entities will be critical to their success. Legacy processes and infrastructures (and labour unions) are a big challenge when realising smart cities. Hence, the first step towards smart cities must be taken through connected enforcement of law, order, and social norms.

      Privacy-by-design and security-by-design are necessary criteria for smart city technologies. Along with that, regular and automatic software/middleware updates for distributed systems and devices should be ensured, as should the physical security of the actual devices and cables.

      In terms of standards, security service compliance standards and protocol standards need to be established for the internet-of-things sector in India. At the same time, there is significant interest among international vendors in serving the Indian market. All global data and cloud storage players, including Microsoft Azure, are moving into India, and are working on substantial and complete data localisation efforts.

      Mr. R. Chandrasekhar, President of NASSCOM, foregrounded the recommendations made by the Cybersecurity Special Task Force of NASSCOM, in his Special Address on the second day. He noted:

      • There is a great opportunity to brand India as a global security R&D and services hub. Other countries are also quite interested in India becoming such a hub.
      • The government should set up a cybersecurity startup and innovation fund, in coordination with and working in parallel with the centres of excellence in internet-of-things (being led by DeitY) and the data science/analytics initiative (being led by DST).
      • There is an immediate need to create a capable workforce for the cybersecurity industry.
      • Cybersecurity affects everyone, but there is almost no public disclosure. This leads to low public awareness and poor valuation of the costs of cybersecurity failures. The government should instruct the Ministry of Corporate Affairs to get corporates to disclose security breaches (publicly or directly to the Ministry).
      • With digital India and everyone going online, cyberspace will increasingly be prone to attacks of various kinds, and increasing scale of potential loss. Cybersecurity, hence, must be part of the core national development agenda.
      • The cybersecurity market in India is big enough and under-served enough for everyone to come and contribute to it.

      The Keynote Address by Mr. Rajiv Singh, MD – South Asia of Entrust Datacard, and Mr. Saurabh Airi, Technical Sales Consultant of Entrust Datacard, focused on the trustworthiness and security of online identities for financial transactions. They argued that all kinds of transactions require a common form factor, which can be a card or a mobile phone. The key challenge is to make the form factor unique, verified, and secure. While no programme is completely secure, it is necessary to build security into the form factor - security of both the physical and digital kind, from the substrates of the card to the encryption algorithms. Entrust and Datacard merged in the recent past to align their identity management and transaction security workflows, from physical cards to software systems for transactions. This joint expertise allowed them to successfully develop the National Population Register cards of India. Now, with the mobile phone emerging as a key financial transaction form factor, the challenge across the cybersecurity industry is to offer the same level of physical, digital, and network security for the mobile phone as is provided for ATM cards and cash machines.

      The following Keynote Address by Dr. Jared Ragland, Director - Policy of BSA, focused on the cybersecurity investment landscape in India and the neighbouring region. BSA, he explained, is a global trade body of software companies; all major global software companies are members. Recently, BSA produced a study of the cybersecurity industry across 10 markets in the Asia Pacific region, titled the Asia Pacific Cybersecurity Dashboard. The study provides an overview of cybersecurity policy developments in these countries, and of sector-specific opportunities in the region. Dr. Ragland mentioned the following as the key building blocks of cybersecurity policy: a legal foundation, the establishment of operational entities, building trust and partnerships (PPP), addressing sector-specific requirements, and education and awareness. As for India, he argued that while the government has taken steady steps in the cybersecurity policy space, a lot remains to be done. Operationalisation of the policy is especially lacking. PPPs are happening, but there is a general lack of persistent formal engagement with the private sector, especially with global software companies. There is almost no sector-specific strategy. Further, the requirement for India-specific testing of technologies, according to domestic rather than global standards, is creating entry barriers for global companies and export barriers for Indian companies. Having said that, Dr. Ragland pointed out that India's cybersecurity experience is quite representative of the Asia Pacific region. He noted the following as major stumbling blocks from an international industry perspective: unnecessary and unreasonable testing requirements, the setting of domestic standards, and data localisation rules.

      One of the final sessions of the Summit was the Public Policy Dialogue between Prof. M.V. Rajeev Gowda, Member of Parliament, Rajya Sabha, and Mr. Arvind Gupta, Head of IT Cell, BJP.

      Prof. Gowda focused on the following concerns:

      • We often freely give up our information, and our rights over it, to the owners of websites and applications on the web. We need to ask questions regarding the ownership, storage, and usage of such data.
      • While Section 66A of the Information Technology Act started as an anti-spam rule, it has actually been used to harass people, instead of protecting them from online harassment.
      • The bill on DNA profiling has raised crucial privacy concerns related to this most personal data. The complexity around the issue is created by the possibility of data leakage and usage for various commercial interests.
      • We need to ask if western notions of privacy will work in the Indian context.
      • We need to move towards a cashless economy, which will not only formalise the existing informal economy but also speed up transactions nationally. We need to keep in mind that this will put a substantial demand burden on the communication infrastructure, as all transactions will pass through it.

      Mr. Gupta shared his keen insights about the key public policy issues in digital India:

      • The journey to establish the digital as a key political agenda and strategy within the BJP took him more than 6 years. He has been an entrepreneur and will always remain one; he approached his political journey as an entrepreneur.
      • While we are producing numerous digitally literate citizens, the companies offering services on the internet often unknowingly acquire data about these citizens, store it, and sometimes even expose it. India perhaps produces the greatest volume of digital exhaust globally.
      • BJP inherited the Aadhaar national identity management platform from UPA, and has decided to integrate it deeply into its digital India architecture.
      • Financial and administrative transactions, especially ones undertaken by and with governments, are all becoming digital and mostly Aadhaar-linked. We are not sure where all such data goes, and who has access to it.
      • Right now there is an ongoing debate about using biometric systems for identification. The debate on privacy is much needed, and a privacy policy is essential to strengthen Aadhaar. We must remember that the benefits of Aadhaar clearly outweigh the risks. The greatest privacy threats today come from many other places, including simple mobile torch apps.
      • India is rethinking its cybersecurity capacities in a serious manner. After the Paris attack it has become obvious that the state should be allowed to look into electronic communication under reasonable guidelines. The challenge is finding the fine balance between consumers' interests on one hand, and national interest and security concerns on the other. Unfortunately, the concerns of a few are often amplified in popular media.
      • The MyGov platform should be used much more effectively for public policy debates. Social media networks, like Twitter, are not the correct platforms for such debates.


      Transparency in Surveillance

      by Vipul Kharbanda last modified Jan 23, 2016 03:11 PM
      Transparency is essential for any democracy to function effectively. It may not be the only requirement for the effective functioning of a democracy, but it is one of the most important principles that must be adhered to in a democratic state.

      Introduction

      A democracy requires the state machinery to be accountable to the citizens it is supposed to serve, and for citizens to be able to hold their state machinery accountable, they need accurate and adequate information regarding the activities of those who seek to govern them. However, in modern democracies it is often seen that those in government try to circumvent legal requirements of transparency and only pay lip service to this principle, while keeping their own functioning as opaque as possible.

      This tendency to withhold information is very evident in the departments of the government concerned with surveillance, and there is merit in the argument that the government's clandestine surveillance activities cannot all be transparent, since they would otherwise cease to be "clandestine" and hence be rendered ineffective. However, this argument is often misused as a shield by government agencies to block the disclosure of all types of information about their activities, some of which may be essential to determine whether the current surveillance regime is working in an effective, ethical, and legal manner. It is this exploitation of the argument, often couched in the language of, or coupled with, concerns of national security, that this paper seeks to address while voicing the need for greater transparency in surveillance activities and structures.

      In the first section the paper examines the need for transparency, and specifically deals with the requirement for transparency in surveillance. In the next part, the paper discusses the regulations governing telecom surveillance in India. The final part of the paper discusses possible steps that may be taken by the government in order to increase transparency in telecom surveillance while keeping in mind that the disclosure of such information should not make future surveillance ineffective.

      Need for Transparency

      In today's age, where technology is all-pervasive, the term "surveillance" has developed slightly sinister overtones, especially in the backdrop of the Edward Snowden revelations. Indeed, there have been several independent scandals involving mass surveillance of people in general, as well as illegal surveillance of specific individuals. The fear that the term surveillance now invokes, especially amongst those social and political activists who seek to challenge the status quo, is in part due to the secrecy surrounding the entire surveillance regime. Leaving aside what surveillance is carried out, upon whom, and when, state actors are seldom willing to talk openly about how surveillance is carried out, how decisions regarding whom and how to target are reached, how agency budgets are allocated and spent, how effective surveillance actions were, and so on. While there may be justified security-based arguments for not disclosing the full extent of the state's surveillance activities, this cloak of secrecy may be used illegally and in an unauthorized manner to achieve ends more harmful to citizens' rights than beneficial to the maintenance of security and order in society.

      Surveillance and interception/collection of communications data can take place under different legal processes in different countries, ranging from court-ordered requests of specified data from telecommunications companies to broad executive requests sent under regimes or regulatory frameworks requiring the disclosure of information by telecom companies on a pro-active basis. However, it is an open secret that data collection often takes place without due process or under non-legal circumstances.

      It is widely believed that transparency is a critical step towards the creation of mechanisms for increased accountability in how law enforcement and government agencies access communications data. It is the first step in starting an informed public debate regarding how the state undertakes surveillance, monitoring, and interception of communications and data. Since 2010, a large number of ICT companies have begun to publish transparency reports on the extent to which governments request their user data, as well as on requirements to remove content. However, governments themselves have not been very forthcoming in providing the detailed information on surveillance programs that is necessary for an informed debate on this issue.[1] Some countries currently report limited information on their surveillance activities - e.g. the U.S. Department of Justice publishes an annual Wiretap Report (U.S. Courts, 2013a), and the United Kingdom publishes the Interception of Communications Commissioner's Annual Report (May, 2013) - and while these do not present a complete picture, even such limited measures are unheard of in a country such as India.

      It is obvious that governments can provide a greater level of transparency regarding the limits placed on the freedom of expression and privacy than individual companies' transparency reports can. A company's transparency report can only illuminate the extent to which that one company receives requests and how it responds to them. By contrast, government transparency reports can provide a much broader perspective on laws that can potentially restrict the freedom of expression or impact privacy, by illustrating the full extent to which requests are made across the ICT industry.[2]

      In India, the courts and the laws have traditionally recognized the need for transparency, deriving it from the fundamental right to freedom of speech and expression guaranteed in our Constitution. This need, coupled with a sustained campaign by various organizations, finally fructified in the passage of the Right to Information Act, 2005 (RTI Act), which, amongst other things, places an obligation on the state to put its documents and records online so that they may be freely available to the public. In light of this law guaranteeing the right to information, the citizens of India have a fundamental right to know what the Government is doing in their name. The free flow of information and ideas informs political growth, and the freedom of speech and expression is the lifeblood of a healthy democracy; it acts as a safety valve, since people are more ready to accept decisions that go against them if they feel they can, in principle, influence them. The Supreme Court of India is of the view that imparting information about the working of the government, on the one hand, and about its decisions affecting domestic and international trade and other activities, on the other, is necessary, and has imposed an obligation upon the authorities to disclose information.[3]

      The Supreme Court, in Namit Sharma v. Union of India,[4] while discussing the importance of transparency and the right to information has held:

      "The Right to Information was harnessed as a tool for promoting development; strengthening the democratic governance and effective delivery of socio-economic services. Acquisition of information and knowledge and its application have intense and pervasive impact on the process of taking informed decision, resulting in overall productivity gains .

      ……..

      Government procedures and regulations shrouded in the veil of secrecy do not allow the litigants to know how their cases are being handled. They shy away from questioning the officers handling their cases because of the latters snobbish attitude. Right to information should be guaranteed and needs to be given real substance. In this regard, the Government must assume a major responsibility and mobilize skills to ensure flow of information to citizens. The traditional insistence on secrecy should be discarded."

      Although these statements were made in the context of the RTI Act the principle which they try to illustrate can be understood as equally applicable to the field of state sponsored surveillance. Though Indian intelligence agencies are exempt from the RTI Act, it can be used to provide limited insight into the scope of governmental surveillance. This was demonstrated by the Software Freedom Law Centre, who discovered via RTI requests that approximately 7,500 - 9,000 interception orders are sent on a monthly basis.[5]

      While it is true that transparency alone will not be able to eliminate the barriers to freedom of expression or the harm to privacy resulting from overly broad surveillance, transparency provides a window into the scope of current practices; additional measures, such as oversight and mechanisms for redress in cases of unlawful surveillance, are also needed. Transparency offers a necessary first step, a foundation on which to examine current practices and contribute to a debate on human security and freedom.[6]

      It is no secret that the current framework of surveillance in India is rife with malpractices of mass surveillance and instances of illegal surveillance. There have been a number of instances of illegal and/or unauthorised surveillance in the past; the most scandalous, and thus best known, is the incident in which a woman IAS officer was placed under surveillance at the behest of Mr. Amit Shah, currently the president of the ruling party in India, purportedly on the instructions of the current prime minister, Mr. Narendra Modi.[7] There are also a number of instances of private individuals indulging in illegal interception and surveillance; in 2005, it was reported that Anurag Singh, a private detective, along with some associates, intercepted the telephonic conversations of former Samajwadi Party leader Amar Singh. They allegedly contacted political leaders and media houses to sell the tapped telephonic conversation records. The interception was allegedly carried out by stealing genuine government letters and forging and fabricating them to obtain permission to tap Amar Singh's telephone.[8] The same individual was also implicated in tapping the telephone of the current finance minister, Mr. Arun Jaitley.[9]

      It is therefore obvious that the status quo of the surveillance mechanism in India needs to change, but this change has to be brought about in a manner that makes state surveillance more accountable without compromising its effectiveness or ignoring legitimate security concerns. Such changes cannot be brought about without an informed debate involving all stakeholders and actors associated with surveillance; the basic minimum requirement for an "informed" debate, however, is accurate and sufficient information about its subject matter. This information is severely lacking in the public domain when it comes to state surveillance activities, with most data points coming from news items or leaked information. Unless the state becomes more transparent and provides information about its surveillance activities and processes, an informed debate to challenge and strengthen the status quo for the betterment of all parties cannot begin.

      Current State of Affairs

      Surveillance laws in India are extremely varied and have been in existence since colonial times, remnants of which are still utilized by the various State Police forces. However, in this age of technology the most important tools for surveillance exist in the digital space, and it is for this reason that this paper focuses on surveillance through the interception of telecommunications traffic, whether of voice calls or data. The interception of telecommunications takes place under two different statutes: the Telegraph Act, 1885 (which deals with interception of calls) and the Information Technology Act, 2000 (which deals with interception of data).

      Currently, telecom surveillance is carried out as per the procedure prescribed in the Rules under the relevant sections of the two statutes mentioned above, viz. Rule 419A of the Telegraph Rules, 1951 for surveillance under the Telegraph Act, 1885, and the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 for surveillance under the Information Technology Act, 2000. These Rules put in place various checks and balances and try to ensure that there is a paper trail for every interception request.[10] The assumption is that the generation of a paper trail would reduce the number of unauthorized interception orders, ensuring that the powers of interception are not misused. However, even though these checks and balances exist on paper, there is not enough information in the public domain regarding the entire mechanism of interception for anyone to judge whether the system is working.

      As mentioned earlier, currently the only sources of information on interception that are available in the public domain are through news reports and a handful of RTI requests which have been filed by various activists.[11] The only other institutionalized source of information on surveillance in India is the various transparency reports brought out by companies such as Google, Yahoo, Facebook, etc.

      Indeed, Google was the first major corporation to publish a transparency report, in 2010, and has been updating its report ever since. The latest data available for Google covers the period between January 2015 and June 2015, in which Google and YouTube together received 3,087 requests for data from the Indian Government, asking for information on 4,829 user accounts. Google supplied information for only 44% of these requests.[12] Although Google claims that it "review[s] each request to make sure that it complies with both the spirit and the letter of the law, and we may refuse to produce information or try to narrow the request in some cases", it is not clear why Google rejected 56% of the requests. It may also be noted that the number of requests for information that Google received from India was the fifth highest among all the countries covered in the Transparency Report, after the USA, Germany, France, and the U.K.

      Facebook's transparency report for the period between January 2015 and June 2015 reveals that Facebook received 5,115 requests from the Indian Government for 6,268 user accounts, in response to which Facebook produced data in 45.32% of cases.[13] Facebook's transparency report claims that it responds to requests relating to criminal cases and that "Each and every request we receive is checked for legal sufficiency and we reject or require greater specificity on requests that are overly broad or vague." However, even in Facebook's transparency report it is unclear why the remaining 54.68% of the requests were rejected.

      The Yahoo transparency report also gives data for the period between January 1 and June 30, 2015, and reveals that Yahoo received 831 requests for data from the Indian Government, relating to 1,184 user accounts. The Yahoo report is a little more detailed, revealing that 360 of the 831 requests were rejected, though no details are given as to why. The report also specifies that in 63 cases no data was found by Yahoo, in 249 cases only non-content data[14] was disclosed, while in 159 cases content[15] was disclosed. The Yahoo report also claims that "We carefully scrutinize each request to make sure that it complies with the law, and we push back on those requests that don't satisfy our rigorous standards."
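The figures quoted from the three reports above can be tabulated and cross-checked with a short script (purely illustrative; the numbers are copied from the reports as cited in the text, and the data structure itself is hypothetical):

```python
# Cross-checking the Jan-Jun 2015 transparency-report figures quoted above.
# Numbers are taken from the text; this is an illustrative tabulation only.

reports = {
    "Google":   {"requests": 3087, "accounts": 4829, "complied_pct": 44.0},
    "Facebook": {"requests": 5115, "accounts": 6268, "complied_pct": 45.32},
    "Yahoo":    {"requests": 831,  "accounts": 1184, "complied_pct": None},
}

# Yahoo reports raw outcome counts rather than a single compliance rate;
# the four categories should add up to the total number of requests.
yahoo_outcomes = {"rejected": 360, "no_data": 63, "non_content": 249, "content": 159}
assert sum(yahoo_outcomes.values()) == reports["Yahoo"]["requests"]  # 831: consistent

for company, r in reports.items():
    if r["complied_pct"] is not None:
        not_complied = round(100 - r["complied_pct"], 2)
        print(f"{company}: {r['requests']} requests covering {r['accounts']} "
              f"accounts; complied {r['complied_pct']}%, did not comply {not_complied}%")
```

A cross-check of this kind catches simple slips: for instance, the share of Facebook requests not complied with is 100 − 45.32 = 54.68%.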

      While the Vodafone Transparency Report gives information regarding government requests for data in other jurisdictions,[16] it does not give any information on government requests in India. This is because Vodafone interprets the provisions contained in Rule 25(4) of the IT (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009 (Interception Rules), Rule 11 of the IT (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, and Rule 419A(19) of the Indian Telegraph Rules, 1951 - which require service providers to maintain confidentiality/secrecy in matters relating to interception - as legally prohibiting it from revealing such information.

      Apart from the four major companies discussed above, a large number of private corporations have published transparency reports in order to build a sense of trustworthiness among their customers. In fact, the Ranking Digital Rights Project has been ranking some of the biggest companies in the world on their commitment to accountability, and has brought out the Ranking Digital Rights 2015 Corporate Accountability Index, which analysed a representative group of 16 companies "that collectively hold the power to shape the digital lives of billions of people across the globe".

      Suggestions on Transparency

      It is clear from the discussion above, as well as from a general overview of news reports on the subject, that telecom surveillance in India is shrouded in secrecy, and it appears that a large amount of illegal and unauthorized surveillance is taking place behind this veil. If the status quo continues, it is unlikely that any meaningful reforms will take place to bring greater accountability to telecom surveillance. For any change towards greater accountability, we must have enough information about what exactly is happening, and for that we need greater transparency: transparency is the first step towards accountability.

      Transparency Reports

      In simple terms, transparency in anything is best achieved by providing as much information about it as possible, so that there are no secrets left. It would be naïve to insist that all information about interception activities be made public in the name of transparency, but that does not mean there should be no information at all. One internationally accepted method of bringing transparency to interception mechanisms, increasingly adopted by both the private sector and governments, is to publish transparency reports giving various details of interception while keeping security concerns in mind. The two types of transparency reports that we require in India, and what each would entail, are briefly discussed below:

      By the Government

      The problem with India's current interception regime is that the entire mechanism appears more or less adequate on paper, with enough checks and balances to prevent misuse of the allotted powers. However, because the entire process is veiled in secrecy, nobody knows how well or how badly the system is working and whether it is achieving its intended purposes. It is clear that the current system of interception and surveillance has some flaws, as can be gathered from the frequent news articles about incidents of illegal surveillance. Yet, without any other official or more reliable sources of information, these anecdotal pieces of evidence are all we have to shape the debate regarding surveillance in India. A debate informed by such sketchy and unreliable news reports will automatically be biased against the current mechanism, since newspapers are mainly interested in reporting the scandalous and the extraordinary. For example, some argue that the government undertakes mass surveillance, while others argue that India only carries out targeted surveillance, but there is not enough publicly available information for a third party to support or refute either claim. It is therefore necessary and highly recommended that the government start releasing a transparency report such as the ones brought out by the United States and the UK, as mentioned above.

      There is no need for a separate department or authority just to prepare the transparency report, and the task could probably be performed in-house by any department; but considering the sector involved, it would perhaps be best if the Department of Telecommunications is given the responsibility. These reports should contain a certain minimum amount of data to be an effective tool in informing the public discourse and debate on surveillance and interception. The report needs to strike a balance between providing enough information for an informed analysis of the effectiveness of the surveillance regime, without providing so much that the surveillance activities themselves are rendered ineffective. Below is a list of suggestions as to what kind of data/information such reports should contain:

      • Reports should contain data on the number of interception orders passed. This statistic would be extremely useful in determining how extensive the state's interception apparatus is and how frequently it is used. This information would be easily available, since all interception orders have to be sent to the Review Committee set up under Rule 419A of the Telegraph Rules, 1954.
      • The report should contain information on the procedural aspects of surveillance, including the delegation of powers to different authorities and individuals, information on new surveillance schemes, etc. This information would also be available with the Ministry of Home Affairs, since it is a Secretary or Joint Secretary level officer in that Ministry who is supposed to authorize every interception order.
      • The report should contain an aggregated list of the reasons given by the authorities for ordering interception. This would reveal whether the authorities actually ensure legal justification before ordering interception, or merely pay lip service to the rules to ensure a proper paper trail. Since every interception order has to be in writing, the main reasons can easily be gleaned from a perusal of the orders.
      • It should also reveal the percentage of cases where interception actually found evidence of culpability or succeeded in preventing criminal activity. This one statistic would by itself give a very good review of the effectiveness of the interception regime. Granted, this information may not be easily obtainable, but it can be gathered with proper coordination with the police and other law enforcement agencies.
      • The report should also reveal the percentage of orders struck down by the Review Committee for not following the process envisaged under the various Rules. This would give a sense of how often the Rules are flouted while issuing interception orders. This information can easily be obtained from the papers and minutes of the meetings of the Review Committee.
      • The report should also state the number of times the Review Committee met in the period being reported upon. The Review Committee is an important check on the misuse of powers by the authorities, and it is therefore important that it carries out its activities diligently.
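      As an illustration only, the data points suggested above could be collected into a simple annual record. All field names and types in this sketch are hypothetical and do not correspond to any existing government format:

```python
# A minimal sketch of an annual transparency-report record built from the
# suggested data points. Every name here is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    period: str                         # e.g. "2015"
    interception_orders_issued: int     # total orders passed in the period
    authorising_authorities: list[str]  # delegation of powers
    reasons_cited: dict[str, int]       # aggregated reasons -> counts
    orders_leading_to_evidence: int     # interceptions that proved fruitful
    orders_struck_down: int             # rejected by the Review Committee
    review_committee_meetings: int      # meetings held in the period

    def rejection_rate(self) -> float:
        """Share of orders struck down by the Review Committee."""
        if self.interception_orders_issued == 0:
            return 0.0
        return self.orders_struck_down / self.interception_orders_issued
```

A derived figure like the rejection rate shows why aggregate publication suffices: it informs the debate without exposing any individual interception.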

      It may be noted here that some provisions of the Telegraph Rules, 1954, especially sub-Rules 17 and 18 of Rule 419A, as well as Rules 22, 23(1) and 25 of the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, may need to be amended so as to make them compliant with the reporting mechanism proposed above.

      By the Private Sector

      We have already discussed the transparency reports published by certain private companies. Suffice it to say that reports from private companies should give as much of the information discussed under government reports as is possible and applicable; they will not have some of the information sought in the government reports, such as whether the interception was successful or the reasons for it. It is important to have ISPs provide such transparency reports, as this creates two different data points on interception, and the very existence of these private reports may act as a check on the veracity of the government transparency reports.

      As in the case of government reports, for the transparency reports of the private sector to be effective, certain provisions of the Telegraph Rules, 1954 and the Information Technology (Procedure and Safeguards for Monitoring and Collecting Traffic Data or Information) Rules, 2009, viz. sub-Rules 14, 15 and 19 of Rule 419A of the former and Rules 20, 21, 23(1) and 25 of the latter, may need to be amended.

      Overhaul of the Review Committee

      The Review Committee, which acts as a check on the misuse of powers by the competent authorities, is a very important cog in the entire process. However, it is staffed entirely by the executive and has no members of any other background. While it is probably impractical to have civilian members on a Review Committee that has access to potentially sensitive information, it is essential that the Committee have wider representation from other sectors, especially the judiciary. One or two judicial members would provide a greater check on the workings of the Committee by bringing in representation from the judicial arm of the State, so that the Review Committee does not remain a body manned purely by the executive branch. This could go some way to ensure that the Committee does not just "rubber stamp" the interception orders issued by the various competent authorities.

      Conclusion

      It is not in dispute that there is a need for greater transparency in the government's surveillance activities in order to address the problems associated with illegal and unauthorised interceptions. This paper does not claim that greater transparency in and of itself will solve the problems that may beset the government's current interception and surveillance regime; however, it is not possible to address any problem unless we know its real extent. For an informed debate and discussion, the people participating must actually be "informed", i.e. they should have accurate and adequate information regarding the issues being discussed. The current debate on interception is rife with individuals using illustrative and anecdotal evidence which, in the absence of any other evidence, they assume to be the norm.

      A more transparent and forthcoming state machinery, which regularly keeps its citizens abreast of the state of its surveillance regime, would be likely to receive better suggestions, and perhaps less criticism, if it turns out that the checks and balances imposed in the regulations are actually curbing unauthorized interceptions; and if not, then it is the right of citizens to know this and to ask for reforms.


      [1] James Losey, "Surveillance of Communications: A Legitimization Crisis and the Need for Transparency", International Journal of Communication 9(2015), Feature 3450-3459, 2015.

      [2] Id.

      [4] http://www.judis.nic.in/supremecourt/imgs1.aspx?filename=39566 . Although the judgment was overturned on review, the observation quoted above would still hold, as it has not been specifically overturned.

      [6] James Losey, "Surveillance of Communications: A Legitimization Crisis and the Need for Transparency", International Journal of Communication 9 (2015), Feature 3450-3459, 2015.

      [10] For a detailed discussion of the Rules of interception please see Policy Paper on Surveillance in India, by Vipul Kharbanda, http://cis-india.org/internet-governance/blog/policy-paper-on-surveillance-in-india .

      [14] Non-content data (NCD) such as basic subscriber information including the information captured at the time of registration such as an alternate e-mail address, name, location, and IP address, login details, billing information, and other transactional information (e.g., "to," "from," and "date" fields from email headers).

      [15] Data that users create, communicate, and store on or through Yahoo. This could include words in a communication (e.g., Mail or Messenger), photos on Flickr, files uploaded, Yahoo Address Book entries, Yahoo Calendar event details, thoughts recorded in Yahoo Notepad or comments or posts on Yahoo Answers or any other Yahoo property.

      Big Data in the Global South - An Analysis

      by Tanvi Mani last modified Jan 24, 2016 02:54 AM

      I. Introduction

      "The period that we have embarked upon is unprecedented in history in terms of our ability to learn about human behavior." [1]

      The world we live in today is facing a slow but deliberate metamorphosis of decisive information; from the erstwhile monopoly of world leaders and the captains of industry obtained through regulated means, it has transformed into a relatively undervalued currency of knowledge collected from individual digital expressions over a vast network of interconnected electrical impulses.[2] This seemingly random deluge of binary numbers, when interpreted, represents an intricately woven tapestry of the choices that define everyday life, made over virtual platforms. The machines we once employed for menial tasks have become sensorial observers of our desires, wants and needs, so much so that they might now predict the course of our future choices and decisions.[3] The patterns of human behaviour that are reflected within this data inform policy makers, in both public and private contexts. The collective data obtained from our digital shadows thus forms a rapidly expanding storehouse of memory, from which interested parties can draw to resolve problems and enable a more efficient functioning of foundational institutions, such as markets, regulators and the government.[4]

      Big Data is the term used to describe a large volume of collected data, in both structured and unstructured forms. Because of its exponential growth over a relatively short period of time, this data requires niche technology, outside of traditional software databases, to process. Big Data is usually identified using a "three V" characterization: larger volume, greater variety and distinguishably high velocity.[5] This is exemplified in the diverse sources from which the data is obtained: mobile phone records, climate sensors, social media content, GPS satellite identifications and patterns of employment, to name a few. Big data analytics refers to the tools and methodologies that aim to transform large quantities of raw data into "interpretable data", in order to study and discern it so that causal relationships between events can be conclusively established.[6] Such analysis could allow the positive effects of this data to be encouraged and its negative outcomes to be mitigated.

      This paper seeks to map out the practices of different governments, civil society, and the private sector with respect to the collection, interpretation and analysis of big data in the global south, illustrated across a background of significant events surrounding the use of big data in relevant contexts. This will be combined with an articulation of potential opportunities to use big data analytics within both the public and private spheres and an identification of the contextual challenges that may obstruct the efficient use of this data. The objective of this study is to deliberate upon how significant obstructions to the achievement of developmental goals within the global south can be overcome through an accurate recognition, interpretation and analysis of big data collected from diverse sources.

      II. Uses of Big Data in Global Development

      Big Data for development is the process through which raw, unstructured and imperfect data is analyzed, interpreted and transformed into information that can be acted upon by governments and policy makers in various capacities. The amount of digital data available in the world grew from 150 exabytes in 2005 to 1,200 exabytes in 2010.[7] It is predicted that this figure will increase by 40% annually in the coming years[8], roughly 40 times the growth rate of the world's population.[9] The implication is that the share of the world's available data that is less than a minute old is increasing at an exponential rate. Moreover, an increasing percentage of this data is produced and created in real time.
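      As a rough worked example of the cited prediction, and nothing more, compounding the 2010 figure of 1,200 exabytes at 40% a year quintuples the stock of data within five years:

```python
# Compound-growth projection implied by the figures above:
# 1,200 exabytes in 2010, growing at roughly 40% per year (the cited prediction).
def projected_exabytes(base_eb: float, annual_growth: float, years: int) -> float:
    """Project a data volume forward under constant annual compound growth."""
    return base_eb * (1 + annual_growth) ** years

# Five years at 40% per year: 1200 * 1.4**5
print(round(projected_exabytes(1200, 0.40, 5)))  # 6454 exabytes by 2015
```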

      The data revolution that is incumbent upon us is characterized by a rapidly accumulating and continuously evolving stock of data prevalent in both industrialized as well as developing countries. This data is extracted from technological services that act as sensors and reflect the behaviour of individuals in relation to their socio-economic circumstances.

      For many global south countries, this data is generated through mobile phone technology. This trend is evident in Sub Saharan Africa, where mobile phone technology has been used as an effective substitute for often weak and unstructured State mechanisms such as faulty infrastructure, underdeveloped systems of banking and inferior telecommunication networks.[10]

      For example, a recent study presented at the Data for Development session of the NetMob Conference at MIT used mobile phone data to analyze the impact of opening a new toll highway in Dakar, Senegal on human mobility, particularly how people commute to work in the metropolitan area.[11] A huge investment, the improved infrastructure is expected to result in a significant increase in the movement of people in and out of Dakar, along with the transport of essential goods. This would initiate rural development in the areas outside Dakar and boost the value of land within the region.[12] The impact of the newly constructed highway can, however, only be analyzed effectively and accurately through the collection of this mobile phone data from actual commuters, on a real-time basis.

      Mobile phone technology is no longer used just for personal communication; it has become an effective tool to secure employment opportunities, transfer money, determine stock options and assess the prices of various commodities.[13] This generates vast amounts of data about individuals and their interactions with the government and private sector companies. Internet traffic is predicted to grow by 25 to 30% in the next few years in North America, Western Europe and Japan, but in Latin America, the Middle East and Africa this figure is expected to approach 50%.[14] The bulk of this internet traffic can be traced back to mobile devices.

      The potential applicability of Big Data for development, at the most general level, lies in its ability to provide an overview of the well-being of a given population at a particular period of time.[15] This overcomes the relatively long time lag prevalent in most traditional forms of data collection. The analysis of this data has helped, to a large extent, uncover "digital smoke signals", that is, inherent changes in the usage patterns of technological services by individuals within communities.[16] These may act as an indicator of changes in the underlying well-being of the community as a whole. This information, derived from a community's usage of technology, provides significantly relevant feedback to policy makers on the success or failure of particular schemes and can pinpoint changes that need to be made to the status quo.[17] The hope is that this feedback, delivered in real time, would in turn lead to a more flexible and accessible system of international development, thus securing more measurable and sustained outcomes.[18]

      The analysis of big data involves the use of advanced computational technology that can aid in the determination of trends, patterns and correlations within unstructured data so as to transform it into actionable information. It is hoped that this in addition to the human perspective and experience afforded to the process could enable decision makers to rely upon information that is both reliable and up to date to formulate durable and self-sustaining development policies.

      The availability of raw data has to be adequately complemented with intent and a capacity to use it effectively. To this effect, there is an emerging volume of literature that seeks to characterize the primary sources of this Big Data as sharing certain easily distinguishable features. Firstly, it is digitally generated and can be stored in a binary format, thus making it susceptible to requisite manipulation by computers attempting to engage in its interpretation. It is passively produced as a by-product of digital interaction and can be automatically extracted for the purpose of continuous analysis. It is also geographically traceable within a predetermined time period. It is however important to note that "real time" does not necessarily refer to information occurring instantly but is reflective of the relatively short time in which the information is produced and made available thus making it relevant within the requisite timeframe. This allows efficient responsive action to be taken in a short span of time thus creating a feedback loop. [19]

      In most cases the granularity of the data is preferably expanded over a larger spatial context, such as a village or a community, as opposed to an individual, simply because this affords adequate recognition of privacy concerns and of the lack of definitive consent of the individuals from whom the data is extracted. To ease the process of determining this data, the UN Global Pulse has developed a taxonomy of sorts to assess the types of data sources that are relevant to using this information for development purposes.[20] These include the following sources:

      Data Exhaust or the digital footprint left behind by individuals' use of technology for service oriented tasks such as web purchases, mobile phone transactions and real time information collected by UN agencies to monitor their projects such as levels of food grains in storage units, attendance in schools etc.

      Online Information which includes user generated content on the internet such as news, blog entries and social media interactions which may be used to identify trends in human desires, perceptions and needs.

      Physical sensors such as satellite or infrared imagery of infrastructural development, traffic patterns, light emissions and topographical changes, thus enabling the remote sensing of changes in human activity over a period of time.

      Citizen reporting or crowd-sourced data, which includes information produced on hotlines, mobile based surveys, customer generated maps etc. Although a passive source of data collection, this is a key instrument in assessing the efficacy of action oriented plans taken by decision makers.

      The capacity to analyze this big data hinges upon technologically advanced processes, such as powerful algorithms that can synthesize the abundance of raw data and break down the information, enabling the identification of patterns and correlations. This process would rely on advanced visualization techniques such as "sense-making tools".[21]

      The identification of patterns within this data is carried out through a process of instituting a common framework for the analysis of this data. This requires the creation of a specific lexicon that would help tag and sort the collected data. This lexicon would specify what type of information is collected and who it is interpreted and collected by, the observer or the reporter. It would also aid in the determination of how the data is acquired and the qualitative and quantitative nature of the data. Finally, the spatial context of the data and the time frame within which it was collected constituting the aspects of where and when would be taken into consideration. The data would then be analyzed through a process of Filtering, Summarizing and Categorizing the data by transforming it into an appropriate collection of relevant indicators of a particular population demographic. [22]

      The intensive mining of predominantly socioeconomic data is known as "reality mining"[23] and can shed light on the processes and interactions that are reflected within the data. This is carried out via a tested three-fold process. Firstly, "Continuous Analysis over the streaming of the data", which involves monitoring and analyzing high-frequency data streams to extract often uncertain raw data; for example, the systematic gathering of the prices of products sold online over a period of time. Secondly, "The Online digestion of semi structured data and unstructured data", which includes news articles, reviews of services and products, and opinion polls on social media that aid in the determination of public perception, trends and contemporary events generating interest across the globe. Thirdly, a "Real-time Correlation of streaming data with slowly accessible historical data repositories," which refers to the "mechanisms used for correlating and integrating data in real-time with historical records."[24] The purpose of this stage is to derive a contextualized perception of personalized information that seeks to add value to the data by providing a historical context. Big Data for development purposes would make use of a combination of these, depending on the context and need.
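      The Filtering, Summarizing and Categorizing steps described earlier can be sketched in a toy pass over commodity-price records of the kind a streaming analysis might gather. All records, values and the price threshold here are invented for illustration:

```python
# Toy filter -> summarize -> categorize pass over hypothetical price records.
records = [
    {"product": "rice",  "price": 42.0, "region": "north"},
    {"product": "rice",  "price": 44.0, "region": "south"},
    {"product": "wheat", "price": None, "region": "north"},  # incomplete record
    {"product": "wheat", "price": 31.0, "region": "south"},
]

# Filter: drop incomplete observations.
clean = [r for r in records if r["price"] is not None]

# Summarize: mean price per product.
grouped = {}
for r in clean:
    grouped.setdefault(r["product"], []).append(r["price"])
summary = {p: sum(v) / len(v) for p, v in grouped.items()}

# Categorize: flag products whose mean price crosses a (hypothetical) threshold.
flags = {p: ("high" if mean > 40 else "normal") for p, mean in summary.items()}

print(summary)  # {'rice': 43.0, 'wheat': 31.0}
print(flags)    # {'rice': 'high', 'wheat': 'normal'}
```

In a real deployment the same shape of pipeline would run continuously over the incoming stream, turning raw observations into the indicators a policy maker can act on.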

      (i) Policy Formulation

      The world today has become increasingly volatile in terms of how the decisions of certain countries are beginning to have an impact on vulnerable communities within entirely different nations. Our global economy has become infinitely more susceptible to fluctuating conditions, primarily because of its interconnectivity hinged upon transnational interdependence. The primary instigators of most of these changes, including the nature of harvests, prices of essential commodities, employment structures and capital flows, have been financial and environmental disruptions.[25] According to the OECD, "Disruptive shocks to the global economy are likely to become more frequent and cause greater economic and social hardship. The economic spillover effects of events like the financial crisis or a potential pandemic will grow due to the increasing interconnectivity of the global economy and the speed with which people, goods and data travel."[26]

      The local impacts of these fluctuations may not be easily visible or even traceable, but could very well be severe and long lasting. A vibrant literature on the vulnerability of communities has highlighted the impacts of these shocks, which often cause children to drop out of school, families to sell their productive assets, and communities to place greater reliance on state rations.[27] These vulnerabilities cannot be definitively discerned through traditional systems of monitoring and information collection. Evidence of the effects of these shocks often takes too long to reach decision makers, who are unable to formulate effective policies without ascertaining the nature and extent of the hardships suffered by those affected in a given context. The existing early warning systems do help raise flags and draw attention to the problem, but their reach is limited and their veracity compromised by the time it takes to extract and collate this information through traditional means. These traditional systems of information collection are difficult to implement within rural impoverished areas, and the data collected is not always reliable due to the significant time gap between its collection and subsequent interpretation. Data collected from surveys does provide an insight into the state of affairs of communities across demographics, but it requires time to be collected, processed, verified and eventually published. Further, the expenses incurred in this process often prove difficult to offset.

      The digital revolution therefore provides a significant opportunity to gain a richer and deeper insight into the very nature and evolution of the human experience itself thus affording a more legitimate platform upon which policy deliberations can be articulated. This data driven decision making, once the monopoly of private institutions such as The World Economic Forum and The McKinsey Institute [28] has now emerged at the forefront of the public policy discourse. Civil society has also expressed an eagerness to be more actively involved in the collection of real-time data after having perceived its benefits. This is evidenced by the emergence of 'crowd sourcing'[29] and other 'participatory sensing' [30] efforts that are founded upon the commonalities shared by like minded communities of individuals. This is being done on easily accessible platforms such as mobile phone interfaces, hand-held radio devices and geospatial technologies. [31]

      The predictive nature of patterns identifiable from big data is extremely relevant for the purpose of developing socio-economic policies that seek to bridge problem-solution gaps and create a conducive environment for growth and development. Mobile phone technology has been able to quantify human behavior on an unprecedented scale.[32] This includes being able to detect changes in standard commuting patterns of individuals based on their employment status[33] and estimating a country's GDP in real-time by measuring the nature and extent of light emissions through remote sensing. [34]

      A recent research study has concluded that "due to the relative frequency of certain queries being highly correlated with the percentage of physician visits in which individuals present influenza symptoms, it has been possible to accurately estimate the levels of influenza activity in each region of the United States, with a reporting lag of just a day." Online data has thus been used as part of syndromic surveillance efforts, also known as infodemiology.[35] The US Centers for Disease Control has concluded that mining vast quantities of data from online health-related queries can help detect disease outbreaks "before they have been confirmed through a diagnosis or a laboratory confirmation."[36] Google Trends works in a similar way.
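      The statistical idea behind such query-based surveillance is a simple correlation between query frequency and physician visits. A minimal sketch, using invented weekly counts, computes the Pearson correlation coefficient that such systems rely on:

```python
# Pearson correlation between flu-related search volume and physician visits.
# All weekly counts below are invented purely for illustration.
from math import sqrt

searches = [120, 150, 310, 480, 460, 300, 180]  # flu-related queries per week
visits   = [40,  55,  110, 170, 160, 105, 60]   # flu-symptom physician visits

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A coefficient close to 1 is what makes search queries a usable
# real-time proxy for the much slower clinical reporting channel.
print(round(pearson(searches, visits), 3))
```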

      Another public health monitoring system known as the Healthmap project compiles seemingly fragmented data from news articles, social media, eye-witness reports and expert discussions based on validated studies to "achieve a unified and comprehensive view of the current global state of infectious diseases" that may be visualized on a map. [37]

      Big Data used for development purpose can reduce the reliance on human inputs thus narrowing the room for error and ensuring the accuracy of information collected upon which policy makers can base their decisions.

      (ii) Advocacy and Social Change

      Due to the ability of Big Data to provide an unprecedented depth of detail on particular issues, it has often been used as a vehicle of advocacy to highlight various issues in great detail. This makes it possible to give citizens a far more participative experience, capturing their attention and hence better communicating these problems. Numerous websites have been able to use this method of crowd sourcing to broadcast socially relevant issues.[38] Moreover, the massive increase in access to the internet has dramatically improved the scope for activism through the use of volunteered data, as advocates can now collect data from volunteers more effectively and present these issues in various forums. Websites like Ushahidi[39] and the Black Monday Movement[40] are prime examples. These platforms have championed various causes, consistently exposing significant social crises that would otherwise go unnoticed.

      The Ushahidi application used crowd sourcing mechanisms in the aftermath of the Haiti earthquake to set up a centralized messaging system that allowed mobile phone users to provide information on injured and trapped people.[41] An analysis of the data showed that the concentration of text messages was correlated with the areas with a higher concentration of damaged buildings.[42] Patrick Meier of Ushahidi noted: "These results were evidence of the system's ability to predict, with surprising accuracy and statistical significance, the location and extent of structural damage post the earthquake."[43]

      Data advocacy also hopes to tackle the opposite problem of too much exposure, with advocates providing information to various parties to help ensure that there is no unwarranted digital surveillance and that sensitive advocacy tools and information are not used inappropriately. An interesting illustration is the Tactical Technology Collective,[44] which hopes to improve the use of technology by activists and various other political actors. Through mediums such as films and events, the organization trains human rights activists in data protection and privacy awareness and skills. Additionally, Tactical Technology assists activists in presenting information in an appealing and relevant manner, and in capacity building for the purposes of data advocacy.

      Observed data, such as mobile phone records generated through network operators and through the use of social media, is beginning to play a central role in academic research. This is due to the ability of such data to provide microcosms of information at finer granularity and over larger public spaces. In the wake of natural disasters this can be extremely useful, as reflected by the work of Flowminder after the 2010 Haiti earthquake.[45] A similar string of interpretive analysis can be carried out in instances of conflict and crises over varying spans of time. Flowminder used the geospatial locations of 1.9 million subscriber identity modules in Haiti, beginning 42 days before the earthquake and continuing until 158 days after it. This information allowed researchers to empirically determine the migration patterns of the population after the earthquake and enabled a subsequent UNFPA household survey.[46] In a similar capacity, the UN Global Pulse is seeking to assist the process of consultation and deliberation on the specific targets of the millennium development goals through a framework of visual analytics that represents, online, the big data procured on each of the topics proposed for the post-2015 agenda.[47]

      A recent announcement of collaboration between RTI International, a non-profit research organization, and the IBM research lab looks promising in its initiative to utilize big data analytics in schools within Mombasa County, Kenya.[48] The partnership seeks to develop testing systems that would capture data to assist governments, non-profit organizations and private enterprises in making more informed decisions regarding the development of education and human resources within the region. As observed by Dr. Kamal Bhattacharya, Vice President of IBM Research, "A significant lack of data on Africa in the past has led to misunderstandings regarding the history, economic performance and potential of the government." The project seeks to improve transparency and accountability within the schooling system in more than 100 institutions across the county. Teachers would be equipped with tablet devices to collate data about students, classrooms and resources, allowing an analysis of the correlation between the three aspects and thus enabling better policy formulation and a more focused approach to bettering the school system.[49] This is a part of the United States Agency for International Development's Education Data for Decision Making (EdData II) project. According to Dr Kommy Weldemariam, Research Scientist, IBM Research, "… there has been a significant struggle in making informed decisions as to how to invest in and improve the quality and content of education within Sub-Saharan Africa. The Project would create a school census hub which would enable the collection of accurate data regarding performance, attendance and resources at schools. This would provide valuable insight into the building of childhood development programs that would significantly impact the development of an efficient human capital pool in the near future."[50]

      A similar initiative has been undertaken by Apple and IBM in the development of the "Student Achievement App", which seeks to use this data for "content analysis of student learning". The application acts as a teaching tool that analyses the data provided to develop actionable intelligence on a per-student basis.[51] This would give educators a deeper understanding of the outcome of teaching methodologies and subsequently enable better learning. The impact of this would be a significant restructuring of how education is delivered. At an IBM-sponsored workshop on education held in India last year, Katharine Frase, IBM CTO of Public Sector, predicted that "classrooms will look significantly different within a decade than they have looked over the last 200 years."[52]

      (iii) Access and the exchange of information

      Big data used for development serves as an important information intermediary that allows for the creation of a unified space within which unstructured heterogeneous data can be efficiently organized to create a collaborative system of information. New interactive platforms enable the process of information exchange through internal vetting and curation that ensures access to reliable and accurate information. This encourages active citizen participation in the articulation of demands from the government, thus enabling the actualization of the role of the electorate in determining specific policy decisions.

      The Grameen Foundation's AppLab in Kampala aids in the development of tools that can use the information from the microfinancing transactions of clients to identify financial plans and instruments that would be more suitable to their needs.[53] Thus, by working within a community, this technology connects its clients in a web of information sharing that they both contribute to and access after the source of the information has been anonymized. This allows the individual members of the community to benefit from a common pool of knowledge. The AppLab was able to identify the emergence of a new crop pest from an increase in online searches for an unusual string of search terms within a particular region. Using this as an early warning signal, the Grameen Bank sent extension officers to the location to check the crops, and the pest contamination was dealt with effectively before it could spread any further.[54]

      (iv) Accountability and Transparency

      Big data enables participatory contributions from the electorate in existing functions such as budgeting and communication, enabling connections between citizens, power brokers and elites. The extraction of information and increasing transparency around data networks is also integral to building a self-sustaining system of data collection and analysis. However, the information collected must be analyzed in a responsible manner. Checking the veracity of the information collected and facilitating individual accountability would encourage more enthusiastic responses from the general populace, creating a conducive environment for eliciting the requisite information. The effectiveness of the policies formulated by relying on this information rests on its accuracy.

      An example of this is Chequeado, a non-profit Argentinean media outlet that specializes in fact-checking. It works on a model of crowdsourcing information, on the basis of which it has fact-checked everything from live presidential speeches to congressional debates that have been made open to the public.[55] In 2014 it established a user-friendly public database, DatoCHQ, which allowed its followers to participate in live fact-checks by sending in data, including references, facts, articles and questions, through Twitter.[56] This allowed citizens to corroborate the promises made by their leaders and instilled a sense of trust in the government.

      III. Big Data and Smart Cities in the Global South

      Smart cities have become a buzzword in South Asia, especially after the Indian government led by Prime Minister Narendra Modi made a commitment to build 100 smart cities in India.[57] A smart city is essentially designed as a hub where information and communication technologies (ICT) are used to create feedback loops with minimal time lag. Traditionally, surveys carried out through a state-sponsored census were the only source of systematic data collection. However, these surveys are long-drawn-out processes that often drain state resources. Additionally, the information obtained is not always accurate, and policy makers are often hesitant to base their decisions on it. The collection of data can nevertheless be extremely useful in improving the functionality of the city, in terms of both the 'hard' or physical aspects of the infrastructural environment and the 'soft' services it provides to citizens. One model of enabling this data collection is a centrally structured framework of sensors that can determine movements and behaviors in real time, from which the data obtained can be subsequently analyzed. For example, sensors placed under parking spaces at intersections can relay such information in short spans of time. South Korea has implemented a similar structure within its smart city, Songdo.[58]
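      The sensor feedback loop described above can be sketched in miniature: each under-pavement sensor reports an arrival or departure, and an aggregator maintains a live count of free spaces. The class and event names below are illustrative, not drawn from any actual smart-city system.

```python
# Minimal sketch of a sensor feedback loop for parking occupancy.
# All names are illustrative.

class ParkingLot:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.occupied = 0

    def on_sensor_event(self, arrived: bool):
        # Each sensor event is either an arrival (True) or a departure (False).
        self.occupied += 1 if arrived else -1

    def free_spaces(self) -> int:
        return self.capacity - self.occupied

lot = ParkingLot(capacity=20)
for event in (True, True, False, True):  # two cars net in, then one more
    lot.on_sensor_event(event)
print(lot.free_spaces())  # 18
```

      In a deployed system the aggregated count, rather than the raw events, would be relayed to drivers and planners, which is what keeps the feedback loop short.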

      Another approach to the smart city model is using crowdsourced information through apps developed either by volunteers or by private conglomerates. These allow specific problems to be resolved by organizing raw data into sets of information attuned to the needs of the public in a cohesive manner. However, this system would require a highly structured format of data sets, without which significantly transformational results would be difficult to achieve.[59]

      There does, however, exist a middle ground, which allows the beneficiaries of this network, the citizens, to take on the role of primary sensors of information. This method is cost-effective and allows for an experimentation process within which an appropriate measure of the success or failure of the model is discernible in a timely manner. It is especially relevant in fast-growing cities that suffer congestion and breakdown of infrastructure due to unprecedented population growth; this population is now afforded the opportunity to become a part of the solution.

      The principal challenge associated with extracting this Big Data is its restricted access. Most organizations that are able to collect big data efficiently are private conglomerates and business enterprises, who use this data to gain a competitive edge in the market by efficiently identifying the needs and wants of their clientele. These organizations are reluctant to release information and statistics because they fear losing that competitive edge, and with it the opportunity to benefit monetarily from the data collected. Data leaks would also significantly damage a company's reputation, and even with individual anonymity, protecting the data of individual customers imposes substantial transaction costs. In addition, there is a definite human capital gap resulting from the significant lack of scientists and analysts able to interpret the raw data transmitted across various channels.

      (i) Big Data in Urban Planning

      Urban planning requires data reflective of the land use patterns of communities, combined with their travel descriptions and housing preferences. The mobility of individuals depends on their economic conditions and can be determined through an analysis of their purchases, either via online transactions or from the data accumulated by prominent stores. The primary source of this data is, however, mobile phones, which seem to have transcended economic barriers. Secondary sources include cards used on public transport, such as the Oyster card in London and the similar Octopus card in Hong Kong. However, in most developing countries such cards are not available for public transport systems, and mobile network data therefore forms the backbone of data analytics. An excessive reliance on the data collected through smartphones could nonetheless be detrimental, especially in developing countries, simply because usage would most likely be concentrated amongst more economically stable demographics and the findings from this data could potentially marginalize the poor.[60]

      Mobile network big data (MNBD) is generated by all phones and includes CDRs (call detail records), which are produced by calls or texts sent or received, internet usage and the topping up of prepaid value, as well as VLR (Visitor Location Register) data, which is generated whenever the phone in question has power and essentially communicates to the base transceiver stations (BTSs) that the phone is in the coverage area. The CDR includes records of calls made, the duration of the call and information about the device, and is therefore stored for a longer period of time. The VLR data is larger in volume and can be written over. Both VLR and CDR data can provide invaluable information for urban planning strategies.[61] LIRNEasia, a regional policy and regulation think-tank, has carried out an extensive study demonstrating the value of MNBD in Sri Lanka.[62] This has been used to understand, and sometimes even monitor, land use patterns, travel patterns during peak and off seasons, and the congregation of communities across regions. The study was only undertaken after the data had been suitably pseudonymised.[63] It revealed that MNBD is incredibly valuable in generating information usable by policy formulators and decision makers, for two primary reasons. Firstly, it comes close to comprehensive coverage of the demographic within developing countries, in effect using mobile phones as sensors to generate useful data. Secondly, people using mobile phones across vast geographic areas reveal important information regarding their patterns of travel and movement.[64]
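      The pseudonymisation step mentioned above can be sketched as follows: subscriber identifiers are replaced with salted one-way hashes, so records can still be linked per subscriber without exposing real numbers. The field names and records are illustrative, not an actual operator schema or the method LIRNEasia used.

```python
# Sketch of pseudonymising CDRs before analysis. Field names are illustrative.

import hashlib

SALT = b"research-project-salt"  # kept secret by the data holder

def pseudonymise(msisdn: str) -> str:
    """Replace a phone number with a stable, non-reversible token."""
    return hashlib.sha256(SALT + msisdn.encode()).hexdigest()[:16]

cdrs = [
    {"msisdn": "+94771234567", "cell_id": "COL-017", "ts": "2013-05-01T08:12"},
    {"msisdn": "+94771234567", "cell_id": "COL-042", "ts": "2013-05-01T18:40"},
]
safe_cdrs = [{**r, "msisdn": pseudonymise(r["msisdn"])} for r in cdrs]

# The same subscriber still maps to the same pseudonym, so mobility
# analysis over the records remains possible.
assert safe_cdrs[0]["msisdn"] == safe_cdrs[1]["msisdn"]
```

      Keeping the salt secret matters: without it, an attacker could hash known phone numbers and match them against the pseudonyms.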

      MNBD allows for the tracking and mapping of changes in population densities on a daily basis, identifying 'home' and 'work' locations and informing policy makers of population congestion so that they may formulate policies to ease it. According to Rohan Samarajiva, founding chair of LIRNEasia, "This allows for real-time insights on the geo-spatial distribution of population, which may be used by urban planners to create more efficient traffic management systems."[65] This can also be used for developmental economic policies. For example, the northern region of Colombo, inhabited by low-income families, shows a lower population density on weekdays, reflecting the large numbers travelling to southern Colombo for employment.[66] Similarly, patterns of land use can be ascertained by analyzing the loading patterns of base stations. Building on the success of the mobile data analysis project in Sri Lanka, LIRNEasia plans to collaborate with partners in India and Bangladesh to assimilate real-time information about the behavioral tendencies of citizens, on which policy makers may base informed decisions. When this data is combined with user-friendly virtual platforms such as smartphone apps or web portals, it can also help citizens make informed choices about their day-to-day activities and potentially beneficial long-term decisions.[67]
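      A common heuristic for the 'home' and 'work' inference described above takes the cell tower a subscriber uses most often at night as 'home' and the most frequent daytime tower as 'work'. The sketch below uses invented records for one pseudonymised subscriber; it is illustrative, not the specific method used in the studies cited.

```python
# Minimal home/work heuristic from (hour_of_day, cell_id) records.
# All records are invented for illustration.

from collections import Counter

records = [
    (1, "N-12"), (2, "N-12"), (23, "N-12"), (22, "N-12"),   # night-time activity
    (10, "S-07"), (11, "S-07"), (14, "S-07"), (16, "S-07"), # daytime activity
    (9, "N-12"),
]

def modal_cell(recs, hours):
    """Most frequent cell among records falling in the given hours."""
    counts = Counter(cell for hour, cell in recs if hour in hours)
    return counts.most_common(1)[0][0]

home = modal_cell(records, hours=set(range(21, 24)) | set(range(0, 6)))
work = modal_cell(records, hours=set(range(9, 18)))
print(home, work)  # N-12 S-07
```

      Aggregating such per-subscriber labels over millions of SIMs is what yields the weekday density shifts, such as the northern-to-southern Colombo commuting pattern, that the study observed.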

      Challenges of using Mobile Network Data

      Mobile networks invest significant sums of money in obtaining information regarding the usage patterns of their services, which they may then use to develop location-based advertising. In this context, there is a greater reluctance to share data for public purposes: allowing one operator's big data to be accessed by another could significantly affect the first operator's competitive advantage. A plausible solution to this conundrum is the accumulation of data from multiple sources without separating or organizing it according to the source it originates from, leaving less chance of one company's sensitive information being used by another. However, operators still have concerns about how the data would be handled before this "mashing up" occurs and whether it might be leaked by the research organization itself. LIRNEasia used comprehensive non-disclosure agreements to ensure that the researchers who worked with the data were aware of the substantial financial penalties that could be imposed on them for data breaches, and access to the data was also restricted.[68]

      Another line of argumentation advocates the open sharing of data. A recent article in the Economist articulated this in the context of the Ebola outbreak in West Africa: "Releasing the data, though, is not just a matter for firms since people's privacy is involved. It requires governmental action as well. Regulators in each affected country would have to order operators to make their records accessible to selected researchers, who through legal agreements would only be allowed to use the data in a specific manner." For example, Orange, a major mobile phone network operator, has made millions of CDRs from Senegal and the Ivory Coast available to researchers under its Data for Development initiative; however, the political will amongst regulators and network operators to do this seems to be lacking.[69]

      It would therefore be beneficial for companies to collaborate with the customers who create the data and the researchers who want to use it to extract important insights. This, however, would require the creation of, and subsequent adherence to, self-regulatory codes of conduct.[70] In addition, cooperation between network operators will assist in facilitating the transfer of their customers' data to research organizations. Sri Lanka is an outstanding example of this model of cooperation, which has enabled various operators across spectrums to participate in the mobile-money enterprise.[71]

      (ii) Big Data and Government Delivery of Services and Functions

      The analysis of data procured in real time has proven integral to the formulation of policies, plans and executive decisions. Especially in an Asian context, Big Data can be instrumental in urban development, planning and the allocation of resources in a manner that allows the government to keep up with the rapidly growing demands of an empowered population whose numbers are rising exponentially. Researchers have been able to use data from mobile networks to engage in effective planning and management of infrastructure, services and resources. If, for example, a particular road or highway has been blocked for some time, an alternative route can be established before traffic begins to build up into congestion, simply through an analysis of information collected from traffic lights, mobile networks and GPS systems.[72]

      There is also an emerging trend of using big data for state-controlled functions such as the military. The South Korean Defense Minister, Han Min-koo, in a recent briefing to President Park Geun-hye, reflected on the importance of innovative technologies such as Big Data solutions.[73]

      The Chinese government has expressed concerns regarding data breaches and information leakages, which would be extremely dangerous given governments' increasing reliance on big data. A security report by Qihoo 360, China's largest software security provider, established that 2,424 of 17,875 web security loopholes were on government websites. Considering the blurring line between government websites and external networks, it has become all the more essential for authorities to boost their cyber security protections.[74]

      The Japanese government has considered investing resources in training more data scientists who can analyze the raw data obtained from various sources and apply the requisite techniques to develop an accurate analysis. The Internal Affairs and Communications Ministry planned to launch a free online course on big data targeted at corporate workers as well as government officials.[75]

      Data analytics is also emerging as an efficient technique for monitoring public transport management systems in Singapore. A recent collaboration between IBM, StarHub, the Land Transport Authority and SMRT initiated a research study to observe the movement of commuters across regions.[76] This has been instrumental in revamping the data collection systems already in place and has allowed additional systems of monitoring to be procured.[77] The idea is essentially to institute a "black box" of information for every operational unit, allowing the relaying of real-time information from sources as varied as power switches, tunnel sensors and the wheels, through assessing patterns of noise and vibration.[78]

      In addition, there are numerous projects in place that seek to utilize Big Data to improve city life. According to Carlo Ratti, Director of the MIT Senseable City Lab, "We are now able to analyze the pulse of a city from moment to moment. Over the past decade, digital technologies have begun to blanket our cities, forming the backbone of a large, intelligent infrastructure."[79] Gerhard Schmitt, professor of Information Architecture and Founding Director of the Singapore-ETH Centre, has observed that "the local weather has a major impact on the behavior of a population." The centre is accordingly developing a range of visual platforms to inform citizens on factors such as air quality, enabling individuals to make everyday choices such as what route to take when planning a walk, or to predict a traffic jam.[80] Schmitt's team has also been able to identify a pattern connecting the demand for taxis with the city's climate: the amalgamation of taxi location data with rainfall data has helped locals hail taxis during a storm. This form of data can be used in multiple ways, allowing, for instance, the visualization of temperature hotspots based on a "heat island" effect where buildings, cars and cooling units cause a rise in temperature.[81]

      Microsoft has recently entered into a partnership with the Federal University of Minas Gerais, one of the largest universities in Brazil, to undertake a research project that could potentially predict traffic jams up to an hour in advance.[82] The project attempts to analyze information from transport departments, road traffic cameras and drivers' social network profiles to identify patterns that could help predict traffic jams approximately 15 to 60 minutes before they actually happen.[83]

      In anticipation of the increasing demand for professionals with requisite training in data sciences, the Malaysian Government has planned to increase the number of local data scientists from the present 80 to 1500 by 2020, through the support of the universities within the country.

      IV. Big Data and the Private Sector in the Global South

      Essential considerations in the operation of Big Data in the private sector in the Asia Pacific region emerge from a comprehensive survey carried out by the Economist Intelligence Unit.[84] Over 500 executives across the Asia Pacific region were surveyed, from across industries and representing a diverse range of functions; 69% of these companies had an annual turnover of over US$500m. The respondents were senior managers responsible for key decisions regarding investment strategies and the utilization of big data for the same.

      The results of the survey conclusively determined that firms in the Asia Pacific region have had limited success in implementing Big Data practices. A third of the respondents claimed an advanced knowledge of the utilization of big data, while more than half claimed to have made limited progress in this regard. Only 9% of the firms surveyed cited internal barriers to implementing big data practices, including significant difficulty in enabling the sharing of information across boundaries. Approximately 40% of the respondents claimed they were unaware of big data strategies even where these were in fact in place, simply because they had been poorly communicated. Almost half of the firms, however, believed that big data plays an important role in the success of the firm and can contribute to increasing revenue by 25% or more.

      Respondents cited numerous obstacles to the adoption of big data, including the lack of suitable software to interpret the data and of the in-house skills to analyze it appropriately. A further major hindrance was the unwillingness of various departments to share their data for fear of a breach or leak. This, combined with a lack of communication between departments and exceedingly complicated reports that cannot be analyzed given limited resources and the lack of sufficiently qualified human capital, has resulted in an indefinite postponement of any policy for the adoption of big data practices.

      Over 59% of the firms surveyed agreed that collaboration is integral to innovation and that information silos are a huge hindrance within a knowledge-based economy. There is also a direct correlation between the size of a company and its progress in adopting big data, with larger firms adopting comprehensive strategies more frequently than smaller ones. A major reason for this is that large firms, with substantially greater resources, are able to actualize the benefits of big data analytics more efficiently than firms with smaller revenues. Businesses with advanced policies outlining their big data strategies are also more likely to communicate those strategies to their employees, ensuring greater clarity in the process.

      The use of big data was recently voted the "best management practice" of the past year in a cumulative ranking published by Chief Executive China Magazine, a trade journal published by Global Sources, on 13th January 2015 in Beijing. The major benefit cited was the real-time information sourced from customers, which allows for direct feedback from clients when making decisions regarding changes in products or services.[85]

      A significant contributor to the inadequate usage of data analytics is the belief that a PhD is a prerequisite for entering the field of data science. This misconception was pointed out by Richard Jones, vice president of Cloudera in the Australia, New Zealand and ASEAN region. Cloudera provides businesses with the professional services they may need to effectively utilize Big Data, combining the necessary manpower, technology and consultancy services.[86] Deepak Ramanathan, chief technology officer of SAS Asia Pacific, believes this skill gap can be addressed by forming data science teams within both governments and private enterprises. These teams could comprise members with statistical, coding and business skills, working collaboratively to address the problem at hand.[87] SAS is an enterprise software giant that creates tools tailored to business users to help them interpret big data. Eddie Toh, planning and marketing manager of Intel's data center platform, believes that businesses do not necessarily need data scientists to benefit from big data analytics and can instead outsource the technical aspects of interpreting this data as and when required.[88]

      The analytics team at Dell has forged a partnership with Brazilian public universities to facilitate the development of a local talent pool in the field of data analytics. The Instituto of Data Science (IDS) will provide training methodologies for in-person and web-based classes.[89] The project is being undertaken by StatSoft, a subsidiary of Dell that was acquired by the technology giant last year.[90]

      V. Conclusion

      Numerous challenges have emerged in the analysis and interpretation of Big Data. While it presents an extremely engaging opportunity, with the potential to transform the lives of millions of individuals, inform the private sector and influence government, the actualization of this potential requires the creation of a sustainable foundational framework; one that is able to mitigate the various challenges that present themselves in this context.

      A colossal increase in the rate of digitization has resulted in an unprecedented increase in the amount of Big Data available, especially through the rapid diffusion of cellular technology. The importance of mobile phones as a source of data, especially on low-income demographics, cannot be overstated. Such data can be used to understand the needs and behaviors of large populations, providing in-depth insight into the relevant context within which valuable assessments of the competence, suitability and feasibility of various policy mechanisms and legal instruments can be made. However, this explosion of data has a lasting impact on how individuals and organizations interact with each other, which might not always be reflected in the interpretation of raw data without a contextual understanding of the demographic. It is therefore vital to employ appropriate expertise in assessing and interpreting this data. The significant lack of human capital able to analyze this information accurately poses a definite challenge to its effective utilization in the Global South.

      The legal and technological implications of using Big Data are best conceptualized within the deliberations on protecting the privacy of the contributors to this data. The primary producers of this information, across platforms, are often unaware that they are in fact consenting to the subsequent use of the data for purposes other than those intended. For example, people routinely accept the terms and conditions of popular applications without understanding where or how the data they inadvertently provide will be used.[91] This is especially true of media generated on social networks, which are increasingly being made available on more accessible platforms such as mobile phones and tablets. Privacy has been and always will be an integral pillar of democracy. It is therefore essential that policy makers and legislators respond effectively to possible compromises of privacy in the collection and interpretation of this data by instituting adequate safeguards.

      Another challenge that has emerged concerns access to and sharing of this data. Private corporations have been reluctant to share data due to concerns about potential competitors being able to access and utilize it, and legal considerations also prevent the sharing of data collected from their customers or users of their services. The various technical challenges in storing and interpreting this data adequately are further significant impediments. It is therefore important that adequate legal agreements be formulated to facilitate reliable access to streams of data, as well as access to data storage facilities to accommodate retrospective analysis and interpretation.

In order for the use of Big Data to gain traction, it is important that these challenges are addressed efficiently, with durable and self-sustaining mechanisms for resolving significant obstructions. The debates and deliberations shaping the articulation of privacy concerns and access to such data must be supported with adequate tools and mechanisms to ensure a system of "privacy-preserving analysis." The UN Global Pulse has put forth the concept of data philanthropy to resolve these issues, wherein "corporations [would] take the initiative to anonymize (strip out all personal information) their data sets and provide this data to social innovators to mine the data for insights, patterns and trends in realtime or near realtime."[92]
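As a minimal sketch of the anonymization step described in the data philanthropy quote above, the snippet below strips directly identifying fields from a record set before it is shared. All field names and values are hypothetical illustrations, and removing direct identifiers is only a first step: real privacy-preserving release also has to handle quasi-identifiers, which this sketch does not attempt.

```python
# Hypothetical sketch: strip personally identifying fields before sharing a
# data set, in the spirit of "data philanthropy". Field names are invented.

PERSONAL_FIELDS = {"name", "phone_number", "address"}

def anonymize(records, personal_fields=PERSONAL_FIELDS):
    """Return copies of the records with directly identifying fields removed."""
    return [
        {key: value for key, value in record.items() if key not in personal_fields}
        for record in records
    ]

raw = [
    {"name": "A. Perera", "phone_number": "071-555-0199",
     "district": "Colombo", "calls_per_day": 12},
]
print(anonymize(raw))  # [{'district': 'Colombo', 'calls_per_day': 12}]
```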

The concept of data philanthropy highlights particular challenges and avenues that future deliberations may consider, potentially resulting in specific refinements to the process.

One of the primary uses of Big Data, especially in developing countries, is to address important developmental issues such as the availability of clean water, food security, human health and the conservation of natural resources. Effective disaster management has also emerged as a key function of Big Data. It is therefore all the more important for organizations to assess the information supply chains pertaining to specific data sources in order to identify and prioritize issues of data management.[93] Data emerging from different contexts and sources may appear in varied compositions and will differ significantly across economic demographics. Big Data generated in certain contexts will be incomplete owing to the unavailability of data within certain regions, and the resulting studies affecting policy decisions should take this discrepancy into account. This data unavailability has resulted in a digital divide that is especially prevalent in the Global South.[94]

Appropriate analysis of the Big Data generated would provide valuable insight into key areas and inform policy makers with respect to important decisions. However, it is necessary to ensure that the quality of this data meets a specific standard and that appropriate methodological processes have been undertaken to interpret and analyze it. The government is a key actor that can shape the ecosystem surrounding the generation, analysis and interpretation of Big Data. It is therefore essential that governments across the Global South recognize the need to collaborate with civic organizations as well as technical experts in order to create appropriate legal frameworks for the effective utilization of this data.


[1] Onnela, Jukka-Pekka. "Social Networks and Collective Human Behavior." UN Global Pulse. 10 Nov. 2011. <http://www.unglobalpulse.org/node/14539>

      [2] http://www.business2community.com/big-data/evaluating-big-data-predictive-analytics-01277835

      [3] Ibid

      [4] http://unglobalpulse.org/sites/default/files/BigDataforDevelopment-UNGlobalPulseJune2012.pdf

      [5] Ibid, p.13, pp.5

      [6] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

[7] Helbing, Dirk, and Stefano Balietti. "From Social Data Mining to Forecasting Socio-Economic Crises." arXiv (2011): 1-66. 26 Jul. 2011. <http://arxiv.org/pdf/1012.0178v5.pdf>

[8] Manyika, James, Michael Chui, Brad Brown, Jacques Bughin, Richard Dobbs, Charles Roxburgh and Angela H. Byers. "Big data: The next frontier for innovation, competition, and productivity." McKinsey Global Institute (2011): 1-137. May 2011.

      [9] "World Population Prospects, the 2010 Revision." United Nations Development Programme. <http://esa.un.org/unpd/wpp/unpp/panel_population.htm>

      [10] Mobile phone penetration, measured by Google, from the number of mobile phones per 100 habitants, was 96% in Botswana, 63% in Ghana, 66% in Mauritania, 49% in Kenya, 47% in Nigeria, 44% in Angola, 40% in Tanzania (Source: Google Fusion Tables)

      [11] http://www.brookings.edu/blogs/africa-in-focus/posts/2015/04/23-big-data-mobile-phone-highway-sy

      [12] Ibid

      [13] <http://www.google.com/fusiontables/Home/>

      [14] "Global Internet Usage by 2015 [Infographic]." Alltop. <http://holykaw.alltop.com/global-internetusage-by-2015-infographic?tu3=1>

      [15] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011 <http://www.unglobalpulse.org/blog/digital-smoke-signals>

      [16] Ibid

      [17] Ibid

      [18] Ibid

      [19] Goetz, Thomas. "Harnessing the Power of Feedback Loops." Wired.com. Conde Nast Digital, 19 June 2011. <http://www.wired.com/magazine/2011/06/ff_feedbackloop/all/1>.

      [20] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

      [21] Bollier, David. The Promise and Peril of Big Data. The Aspen Institute, 2010. <http://www.aspeninstitute.org/publications/promise-peril-big-data>

      [22] Ibid

[23] Eagle, Nathan and Alex (Sandy) Pentland. "Reality Mining: Sensing Complex Social Systems", Personal and Ubiquitous Computing, 10.4 (2006): 255-268.

      [24] Kirkpatrick, Robert. "Digital Smoke Signals." UN Global Pulse. 21 Apr. 2011. <http://www.unglobalpulse.org/blog/digital-smoke-signals>

      [25] OECD, Future Global Shocks, Improving Risk Governance, 2011

[26] "Economy: Global Shocks to Become More Frequent, Says OECD." Organisation for Economic Cooperation and Development. 27 June 2011.

[27] Friedman, Jed, and Norbert Schady. How Many More Infants Are Likely to Die in Africa as a Result of the Global Financial Crisis? Rep. The World Bank. <http://siteresources.worldbank.org/INTAFRICA/Resources/AfricaIMR_FriedmanSchady_060209.pdf>

[28] Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute, June 2011. <http://www.mckinsey.com/mgi/publications/big_data/pdfs/MGI_big_data_full_report.pdf>

[29] The word "crowdsourcing" refers to the use of non-official actors ("the crowd") as (free) sources of information, knowledge and services, in reference and opposition to the commercial practice of outsourcing.

[30] Burke, J., D. Estrin, M. Hansen, A. Parker, N. Ramanthan, S. Reddy and M.B. Srivastava. Participatory Sensing. Rep. Escholarship, University of California, 2006. <http://escholarship.org/uc/item/19h777qd>

      [31] "Crisis Mappers Net-The international Network of Crisis Mappers." <http://crisismappers.net>, http://haiti.ushahidi.com and Goldman et al., 2009

[32] Alex Pentland cited in "When There's No Such Thing As Too Much Information". The New York Times. 23 Apr. 2011. <http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw>

[33] Nathan Eagle also cited in "When There's No Such Thing As Too Much Information". The New York Times. 23 Apr. 2011. <http://www.nytimes.com/2011/04/24/business/24unboxed.html?_r=1&src=tptw>

      [34] Helbing and Balietti. "From Social Data Mining to Forecasting Socio-Economic Crisis."

[35] Eysenbach G. Infodemiology: tracking flu-related searches on the Web for syndromic surveillance. AMIA (2006). <http://yi.com/home/EysenbachGunther/publications/2006/eysenbach2006cinfodemiologyamiaproc.pdf>

[36] "Syndromic Surveillance (SS)." Centers for Disease Control and Prevention. 06 Mar. 2012. <http://www.cdc.gov/ehrmeaningfuluse/Syndromic.html>

      [37] Health Map <http://healthmap.org/en/>

      [39] www.ushahidi.com

[41] Ushahidi is a nonprofit tech company that was developed to map reports of violence in Kenya following the 2007 post-election fallout. Ushahidi specializes in developing "free and open source software for information collection, visualization and interactive mapping." <http://ushahidi.com>

[42] Conducted by the European Commission's Joint Research Center against data on damaged buildings collected by the World Bank and the UN from satellite images through spatial statistical techniques.

      [43] www.ushahidi.com

      [44] See https://tacticaltech.org/

[45] See www.flowminder.org

      [46] Ibid

      [48] http://allafrica.com/stories/201507151726.html

      [49] Ibid

      [50] Ibid

      [51] http://www.computerworld.com/article/2948226/big-data/opinion-apple-and-ibm-have-big-data-plans-for-education.html

      [52] Ibid

      [53] http://www.grameenfoundation.org/where-we-work/sub-saharan-africa/uganda

      [54] Ibid

      [55] http://chequeado.com/

      [56] http://datochq.chequeado.com/

[57] Times of India (2015): "Chandigarh May Become India's First Smart City," 12 January, http://timesofindia.indiatimes.com/india/Chandigarh-may-become-Indias-first-smart-city/articleshow/45857738.cms

      [58] http://www.cisco.com/web/strategy/docs/scc/ioe_citizen_svcs_white_paper_idc_2013.pdf

      [59] Townsend, Anthony M (2013): Smart Cities: Big Data, Civic Hackers and the Quest for a New Utopia, New York: WW Norton.

[60] See "Street Bump: Help Improve Your Streets" on Boston's mobile app to collect data on road conditions, http://www.cityofboston.gov/DoIT/apps/streetbump.asp

      [61] Mayer-Schonberger, V and K Cukier (2013): Big Data: A Revolution That Will Transform How We Live, Work, and Think, London: John Murray.

      [62] http://www.epw.in/review-urban-affairs/big-data-improve-urban-planning.html

      [63] Ibid

[64] Newman, M E J and M Girvan (2004): "Finding and Evaluating Community Structure in Networks," Physical Review E, American Physical Society, Vol 69, No 2.

      [65] http://www.sundaytimes.lk/150412/sunday-times-2/big-data-can-make-south-asian-cities-smarter-144237.html

      [66] Ibid

      [67] Ibid

      [68] http://www.epw.in/review-urban-affairs/big-data-improve-urban-planning.html

[69] GSMA (2014): "GSMA Guidelines on Use of Mobile Data for Responding to Ebola," October, http://www.gsma.com/mobilefordevelopment/wpcontent/uploads/2014/11/GSMA-Guidelineson-protecting-privacy-in-the-use-of-mobilephone-data-for-responding-to-the-Ebola-outbreak-_October-2014.pdf

[70] An example of the early-stage development of a self-regulatory code may be found at http://lirneasia.net/2014/08/what-does-big-data-sayabout-sri-lanka/

[71] See "Sri Lanka's Mobile Money Collaboration Recognized at MWC 2015," http://lirneasia.net/2015/03/sri-lankas-mobile-money-colloboration-recognized-at-mwc-2015/

      [72] http://www.thedailystar.net/big-data-for-urban-planning-57593

      [74] http://www.news.cn/, 25/11/2014

      [76] http://www.todayonline.com/singapore/can-big-data-help-tackle-mrt-woes

      [77] Ibid

      [78] Ibid

      [79] http://edition.cnn.com/2015/06/24/tech/big-data-urban-life-singapore/

      [80] Ibid

      [81] Ibid

      [82] http://venturebeat.com/2015/04/03/how-microsofts-using-big-data-to-predict-traffic-jams-up-to-an-hour-in-advance/

      [83] Ibid

      [84] https://www.hds.com/assets/pdf/the-hype-and-the-hope-summary.pdf

      [85] http://www.news.cn , 14/01/2015

      [86] http://www.techgoondu.com/2015/06/29/plugging-the-big-data-skills-gap/

      [87] Ibid

      [88] Ibid

      [89] http://www.zdnet.com/article/dell-to-create-big-data-skills-in-brazil/

      [90] Ibid

[91] Efrati, Amir. "'Like' Button Follows Web Users." The Wall Street Journal. 18 May 2011. <http://online.wsj.com/article/SB10001424052748704281504576329441432995616.html>

[92] Kirkpatrick, Robert. "Data Philanthropy: Public and Private Sector Data Sharing for Global Resilience." UN Global Pulse. 16 Sept. 2011. <http://www.unglobalpulse.org/blog/data-philanthropy-public-privatesector-data-sharing-global-resilience>

[93] Laney D (2001) 3D data management: Controlling data volume, velocity and variety. Available at: http://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-DataManagement-Controlling-Data-Volume-Velocity-andVariety.pdf

      [94] Boyd D and Crawford K (2012) Critical questions for Big Data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication, & Society 15(5): 662-679.

      The Creation of a Network for the Global South - A Literature Review

      by Tanvi Mani last modified Feb 04, 2016 01:13 PM

      I. Introduction

The organization of societies and states is increasingly predicated on the development of information technology, which has begun to enable the construction of specialized networks. These networks aid in the mobilization of resources on a global platform.[1] There is a need for governance structures that embody this globalized thinking and adopt superior information technology to bridge gaps in the operation of, and participation in, not only political functions but also economic processes and operations.[2] Currently, public institutions fall short of an optimum level of functioning simply because they lack the information, know-how and resources to respond effectively to this newly globalized and economically liberalized world order. Civil society is beginning to seek a greater participatory voice in both policy making and ideating, which requires public institutions to find a method of allowing this participation while retaining the crux of their functions and processes. The network society thus requires, as argued by Castells, a new methodology of social structuring, one amalgamating the analysis of social structure and social action within the same overarching framework.[3] The network propounds itself as a 'dynamic, self-evolving structure, which, powered by information technology and communicating with the same digital language, can grow, and include all social expressions, compatible with each network's goals. Networks increase their value exponentially through their contribution to human resources, markets, raw materials and other such components of production and distribution.'[4]

As noted by Kevin Kelly, 'The Atom is the past. The symbol of science for the next century is the dynamical Net. … Whereas the Atom represents clean simplicity, the Net channels the messy power of complexity. The only organization capable of nonprejudiced growth or unguided learning is a network. All other topologies limit what can happen. A network swarm is all edges and therefore open ended any way you come at it. Indeed the network is the least structured organization that can be said to have any structure at all. … In fact a plurality of truly divergent components can only remain coherent in a network. No other arrangement - chain, pyramid, tree, circle, hub - can contain true diversity working as a whole.'[5]

A network is therefore integral to the facilitation, coordination and advocacy of different agendas within a singular framework that seeks to formulate suitable responses to a wide range of problems across regions. An ideal model of a network would be one that reflects the interconnectivity between relationships, strengthened by effective communication and based on a strong foundation of trust.

The most powerful element of a network, however, is the idea of a common purpose. The pursuit is towards similar ends, and the interconnected web of support the network offers is therefore in realization of a singular goal.

      II. Evolution of the Network

There are certain norms that must be incorporated for a network to work at its best. Robert Chambers, in his book Whose Reality Counts?, identifies these norms and postulates their extension to every form of network, in order to capture its creative spirit and aid in the realization of its goals.[6] A network should ideally foster four fundamental elements in order to inculcate an environment of trust, encouragement and the overall actualization of its purpose. These elements are: diversity, or the encouragement of a multitude of narratives from diverse sources; dynamism, or the ability of participants to retain their individual identities while maintaining a facilitative structure; democracy, or an equitable system of decision making to enable the efficient working of the net; and finally, decentralization, or the feasibility of enjoying local specifics on a global platform.[7]

In order to attain these ideal elements, it is integral to strengthen certain aspects of practice by performing specific and focused functions. These include ensuring a clear, broad consensus, which secures the co-joining of a common purpose. Additionally, centralization, in the form of an overarching set of rules, must be kept to a minimum in order to facilitate greater flexibility while still providing the necessary support structure. The building of trust and solid relationships between participants is prioritized to enhance creative ideation in a supportive environment. Joint activities, more than being output oriented, are seen as the knots that tie together the entire web of support. Input and participation are the foremost objectives of the network, in keeping with the understanding that "contribution brings gain".[8]

Significant management issues that plague networks include the practical aspects of bringing the network into function through efficient leadership and the consolidation of a common vision. A balanced approach would entail a common consultation on the goals of the network, the sources of funding and an agreed-upon structure within which the network would operate. It is also important to create alliances outside the sector of familiarity and to ensure an inclusive environment for members across regions, allowing them to retain their localized individuality while affording them a global platform.[9]

      III. Structure

The structural informality of a network is essential to its sustenance. Networks must therefore ensure that they embody a non-hierarchical structure, devoid of bureaucratic interference and insulated from a centralized system of control and supervision. This requires an internal system of checks and balances, consisting of periodic reviews and assessments. Networks must accordingly limit the supervisory powers of the secretariat. The secretariat should coordinate the network's activities and allocate appropriate areas of engagement according to the relative strengths of the participating members.

One form of network structure, postulated within a particular research study, is the threads, knots and nets model.[10] It consists of members within a network bound together by threads of relationship, communication and trust. These threads represent the commonality that binds together the participants of the particular network. They are established through common ideas and voluntary participation in the process of communication and conflict resolution.[11]

The knots represent the combined activities in which the participants engage, with the common goal of realizing a singular purpose. These knots signify an optimum level of activity, wherein members of the network are able to support, inspire and confer tangible benefits on each other. The net represents the entire structure of the network, constructed through a confluence of relationships and common activities.[12] The structure is autonomous in nature and allows participants to contribute without losing their individual identities. It is also dynamic and flexible, incorporating new elements with relative ease. It is therefore a collaboration that affords its members the opportunity to expand without losing its purpose. The maintenance of such a structure requires constant review and repair, with adequate awareness of weak links or "threads" and the capability and willingness to knot them together with new participants, thereby extending the net.
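The threads-knots-nets model lends itself to a simple graph rendering: members are nodes, threads are weighted edges, and "review and repair" amounts to flagging the weakest edges. The sketch below is a toy illustration of that idea only; the member names, strength scores and threshold are all invented assumptions, not part of the model as described in the source.

```python
# Toy sketch of the threads/knots/nets model: threads are member pairs with
# an assumed relationship-strength score in [0, 1]. Weak threads are the
# links a network would prioritize for "review and repair".

def weak_threads(threads, threshold=0.3):
    """Return member pairs whose relationship strength falls below threshold."""
    return sorted(pair for pair, strength in threads.items() if strength < threshold)

threads = {
    ("NGO-A", "NGO-B"): 0.9,   # frequent joint activities ("knots")
    ("NGO-A", "NGO-C"): 0.2,   # little recent contact
    ("NGO-B", "NGO-C"): 0.5,
}
print(weak_threads(threads))  # [('NGO-A', 'NGO-C')]
```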

For example, the Global Alliance for Vaccines and Immunization used a system of organizational "milestones" to monitor progress and keep the network focused. It requires a sustained institutional effort to fulfill its mandate of "the right of every child to be protected against vaccine-preventable diseases" and brings together international organizations, civil society and private industry.[13] As postulated within the Critical Choices research study of the United Nations, clearly defined milestones are integral to sustaining an effective support mechanism for donors and ensuring that all relevant participants are on board.[14] Milestones also allow donors to be made aware of the tangible outcomes achieved by the network. Interim goals that are achievable within a short span of time confer a sense of legitimacy on the network, allowing it to deliver on its mandate early on. Setting milestones requires an in-depth focus and a nuanced understanding of specific aspects of larger problems; delivering early results on these problems builds a foundational base of trust on which a possibly long-drawn-out consultative process can be anchored.[15]
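A minimal sketch of the milestone-tracking idea described above: the network records which interim goals have been met, giving donors a concrete view of progress. The milestone names here are illustrative assumptions, not drawn from the GAVI example.

```python
# Hypothetical milestone register: milestone name -> completed flag.
# Reporting (completed, total) gives donors a simple progress summary.

def progress(milestones):
    """Return (completed, total) for a dict of milestone -> done flag."""
    done = sum(1 for completed in milestones.values() if completed)
    return done, len(milestones)

milestones = {
    "recruit founding members": True,
    "publish first newsletter": True,
    "hold regional consultation": False,
}
print(progress(milestones))  # (2, 3)
```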

A network might often find alliances outside its sector of operation. For example, Greenpeace was able to make its voice heard in international climate change negotiations by engaging with private insurance companies and enlisting their support.[16] The organization looked to the private sector to mobilize resources and enlist the requisite expertise for its various projects.[17]

      A. Funding

The financial support a network receives is essential for its sustenance. The initial seed money can be obtained from a single source; however, cross-sectoral financing is necessary to build a consensus on the issues that form part of the network's mandate. The World Commission on Dams (WCD), for example, obtains funding from multiple sources in order to retain its credibility. These sources include government agencies, multilateral organizations, business associations and NGOs, with no single donor contributing more than 10% of the total funding it receives.[18] However, the difficulty with this model of funding is the relative complexity of assimilating a number of smaller contributions, which may take away from the network's capacity to expand its reach and enhance the scope of its work. Cross-sectoral funding is less of a fundamental requirement for networks whose primary mandate is implementation, such as the Global Environment Facility (GEF), whose legitimacy is derived from intergovernmental treaties and which is therefore funded only by governments.[19] The GEF has only recently broadened its sources of funding to include external contributions from the private sector.

A network can also be funded through the objective it seeks to achieve in the course of its activities. For example, Rugmark, an international initiative that seeks to mitigate the use of child labor in South Asia, uses an external on-site monitoring system to verify and provide labels certifying the production of carpets without the use of child labor.[20] The monitors of this system are trained by Rugmark, and carpet producers must sign a binding agreement undertaking not to employ children below the age of 14 in order to receive the certification. The funds generated from these carpets, for which American and European importers pay 1% of the import value, are used to provide rehabilitation and education facilities for children in affected areas. The use of these funds is reported regularly.[21]

Funding must be sustained over a period of years, which is a difficult task for networks that require an overall consensus of participants. The greatest outcomes of a network are not tangible solutions to the problem but the facilitation of an environment that allows stakeholders to derive a tangible solution. Thus, the elements of trust, communication and collaboration are integral to the efficient functioning of the network. However, the lack of tangible outcomes exposes funders to financial risks. The best way to reduce such risks is to institute an uncompromising time limit for the initiative, within which it must achieve tangible results or solutions that can be implemented. A less stringent approach would be to incorporate a system of periodic review and assessment of the network's accomplishments, subsequent to which recommendations may be made for a further course of action.[22]

      B. Relationships

A three-year study conducted by Newell & Swan drew definitive conclusions with respect to inter-organizational collaboration between participants within a network. The study identified three types of trust: companion trust, the trust that exists within the goodwill and friendship between participants; competence trust, wherein the competence of other participants to carry out the tasks assigned to them is agreed upon; and commitment trust, which is predicated on contractual or inter-institutional agreements.[23] While companion and competence trust are easily identifiable, commitment trust is more subjective, as it is determined by the agreement surrounding core values and overall identifiable aims. Sheppard & Tuchinsky refer to an identification-based trust grounded in a collective understanding of shared values. Such trust requires significant investment but, they argue, "The rewards are commensurably greater and the benefits go beyond quantity, efficiency and flexibility."[24] Powell postulates, "Trust and other forms of social capital are moral resources that operate in a fundamentally different manner than physical capital. The supply of trust increases, rather than decreases, with use: indeed, trust can be depleted if not used."[25]

Karl Weick endorses the "maintenance of tight control values and beliefs which allow for local adaptation within centralized systems."[26] The autonomy that participants within a network enjoy is therefore considered close to sacred, allowing them to engage with each other on an equitable footing while still maintaining their individual identities. Freedman and Reynders believe that networks place a so-called 'premium' on "the autonomy of those linked through the network … networks provide a structure through which different groups - each with their own organizational styles, substantive priorities, and political strategies - can join together for common purposes that fill needs felt by each."[27] Consequently, the lower the level of centralized control within a network, the greater the requirement of trust. Allen Nan resonates with this idea, as is evident from her review of coordinating conflict-resolution NGOs. She believes that these NGOs are most effective when "beginning with a loose voluntary association which grows through relationship building, gradually building more structure and authority as it develops. No NGO wants to give away its authority until it trusts a networking body of people that it knows."[28]

      C. Communication and Collaboration

The binding force of any network is the relationships between participants and their interactions with organizations outside the network. Research has shown that face-to-face interaction works best; although email may be practical, face-to-face meetings at regular intervals build trust amongst participants.[29] It is, however, important to prevent networks from turning into 'self-selecting oligarchies'; to avoid this, a balance must be drawn between goodwill and trust in others' competence, along with a common understanding of differently hierarchized values.[30]

There is also a pressing need to develop a relationship vocabulary, as suggested by Taylor, which would be of particular use within transnational networks and afford a deeper understanding of cross-cultural relationships.[31]

      D. Participation

A significant issue that networks today must address is how to inculcate and subsequently maintain participation in the network's activities. This includes providing incentives to participants, encouraging diversity and enabling greater creative inflow across sectors to generate innovative output. Participation involves three fundamental elements: action, which includes active contribution in the form of talking, listening, commenting, responding and sharing information; process, which aids in an equitable system of decision making and the construction of relationships; and the values underpinning these two elements, which include spreading equality, inculcating openness and including previously excluded communities or individuals.[32] Participation itself envisages a three-leveled definition: participation as contribution, where people offer a tangible input; participation as an organizational process, where people organize themselves to influence certain pre-existing processes; and participation as a form of empowerment, where people seek to gain power and authority from participating.

In order to create an autonomous system for evaluating and monitoring the nature and context of participation, a network would have to systematically incorporate a few fundamental processes: enabling an understanding of the network's dynamism through established criteria for monitoring members' levels of participation; creating an explicit checklist of the qualifications of this participation, such as the contributions of the participants, the limits of commitment and the available resources that must be shared and distributed; acknowledging the importance of relationships as fundamental to the success of any network; building a capacity for facilitative and shared leadership; and tracing the changes that occur when the advocacy and lobbying activities of individuals are linked, using these individuals as participants who have the power to influence policy and development at various levels.[33] Finally, the recognition that utilizing the combined faculties of the network would aid in effecting further change is vital to sustaining active participation in the network.[34] It is common for networks to stagnate simply because of a lack of clarity on what a network really is or what it entails. Significant misconceptions about the activities of a network, such as the idea that a network "works solely as a resource center, to provide information, material and papers, rather than as forums for two way exchanges of information and experiences," contribute to misunderstandings regarding its participation requirements.[35] To facilitate an active, participatory function of learning, a network needs to be more than a resource center that seeks to meet the needs of beneficiaries. While meeting these needs is essential, development projects tend to obfuscate the benefit/input relationship within a network, thus significantly depleting its dynamism.[36]

One method of moving away from the needs-based model is to create a tripartite framework, as was done within a particular research study.[37] This involves a Contributions Assessment, a Weaver's Triangle for Networks and an identification of channels of participation.

A Contributions Assessment is an analysis of what the participants within a network are willing to contribute. It enables the network to assess what resources it has access to and how those resources may be distributed amongst the participants, multiplied or exchanged.[38] This system is predicated on assessing what participants have to offer, as opposed to what they need. It challenges the long-held notion that an evaluation is required to identify problems for which recommendations are then made, and instead seeks to focus on moments of excellence and enable a discussion of the factors that contributed to them.[39] It thus places a value on the best of "what is", as opposed to trying to find a plausible "what ought to be". This approach allows participants to recognize that they are in fact the real "resource centre" of the network, and encourages them to act accordingly.

A Contributions Assessment may be practically incorporated through a few steps. It must focus on contributions, after a discussion of who the contributors may be. The aims of the network must be clarified, along with a specification of the contributions required, such as newsletters, a conference or policy analysis. The members of the network must be clear on what they would like to contribute to the network and how such contributions might be delivered. Finally, the secretariat must be able to innovate on how it can elicit more contributions from members in a more effective manner. [40]
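The steps above can be sketched as a small data model. This is an illustrative reading only, not drawn from the study: the record shape and names (MemberOffer, pool_contributions) are hypothetical, but the logic captures the core idea of pooling what members offer so the secretariat can see which aims are resourced and where gaps remain.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class MemberOffer:
    """What one member is willing to contribute (hypothetical record)."""
    member: str
    contributions: list  # e.g. ["newsletter article", "policy analysis"]

def pool_contributions(offers):
    """Group offered contributions by type, showing which network aims
    are already resourced by members and which have no contributor."""
    pool = defaultdict(list)
    for offer in offers:
        for item in offer.contributions:
            pool[item].append(offer.member)
    return dict(pool)

offers = [
    MemberOffer("Org A", ["newsletter article", "policy analysis"]),
    MemberOffer("Org B", ["newsletter article"]),
    MemberOffer("Org C", ["conference hosting"]),
]
print(pool_contributions(offers))
# {'newsletter article': ['Org A', 'Org B'], 'policy analysis': ['Org A'], 'conference hosting': ['Org C']}
```

Starting from offers rather than needs mirrors the assessment's premise: the output is a map of capacity, not a list of problems.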

The Weaver's Triangle has been adapted to be applied within networks and enables participants to understand what the aims and activities of the network are. It identifies the overall aim of the network and the change the network seeks to bring about to the status quo. It then lays out the objectives of the network in the form of specific statements about the differences that the network seeks to bring about. Finally, the network would have to explain why a particular activity has been chosen. [41] The base of the triangle reflects the specific activities that the network seeks to engage in to achieve those objectives. The triangle is further divided in two, to ensure that action aims and process aims have equal weightage; this allows for the facilitation of exchange and connection between the members of the network. [42]

The Circles of Participation is an idea that has been put forth by the Latin American and Caribbean Women's Health Network (LACWHN). [43] This network has three differentiated categories of membership, which it uses to determine the degree of commitment of an organization to the network: R refers to members who receive the women's health journal; P refers to members who actively participate in events and campaigns and who act as advisors for specific topics; and PP refers to permanent participants within the network at national and international levels, who also receive the journal. This categorization allows the network to assess its dynamism and growth, with members moving through the categories depending on their levels of participation. [44]
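The three circles can be expressed as a simple classification. The category labels follow the text; the classification rule itself is an illustrative assumption (the text does not specify how LACWHN assigns members), chosen so that a member always sits in the innermost circle it qualifies for.

```python
from enum import Enum

class Circle(Enum):
    """LACWHN-style membership circles (labels from the text)."""
    R = "receives the women's health journal"
    P = "participates in events and campaigns; topic advisor"
    PP = "permanent participant, national and international"

def classify(receives_journal, active_in_events, permanent):
    """Assign a member to a circle; the ordering rule is an assumption:
    a member is placed in the deepest circle it qualifies for."""
    if permanent:
        return Circle.PP
    if active_in_events:
        return Circle.P
    if receives_journal:
        return Circle.R
    return None

print(classify(receives_journal=True, active_in_events=False, permanent=False))
# Circle.R
```

Re-running such a classification over time is what lets the network track members moving between circles as their participation changes.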

An important space for contributions to the network is the newsletter. Contributions can be accepted from various sources, provided they meet established quality checks, while ensuring a balance between the regions of origin of the members of the network, ensuring a balance between the policy and programme activities of the members, and keeping the centralized editorial process to a minimum. This is in keeping with the ideal of a decentralized system of expression that allows each member to retain its individuality while still contributing to the aims of the network. The Women's Global Network on Reproductive Rights (WGNRR) sought to create a similar system of publication to measure the success of its linkages, the levels of empowerment amongst members in terms of strategizing and enabling localized action, and the allocation of space in a fair and equitable manner. [45] Another network, Creative Exchange, customizes its information flow so that each member only receives the information it expresses interest in.[46] This prevents members from being overburdened with unnecessary information.
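A Creative Exchange-style customized information flow amounts to interest-based routing. The sketch below is a minimal illustration of that idea; the topic-tagging scheme and all names are assumptions, not a description of Creative Exchange's actual system.

```python
def route_items(items, interests):
    """Deliver to each member only the newsletter items matching its
    declared interests, so no member is overburdened with the rest."""
    return {
        member: [item["title"] for item in items if item["topic"] in topics]
        for member, topics in interests.items()
    }

# Hypothetical newsletter items, each tagged with a topic.
items = [
    {"title": "Funding call", "topic": "funding"},
    {"title": "Policy brief", "topic": "advocacy"},
]
# Each member declares the topics it wants to receive.
interests = {"Org A": {"funding"}, "Org B": {"funding", "advocacy"}}
print(route_items(items, interests))
# {'Org A': ['Funding call'], 'Org B': ['Funding call', 'Policy brief']}
```

The declared-interest dictionary doubles as a record of member priorities, which is itself useful data when assessing the network's reach.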

The activities of the network which do not directly pass through the secretariat or the coordinator can be monitored efficiently by keeping in close contact with new entrants to the network and capturing the essence of the activities that occur on its fringes. This would allow an assessment of the diversity of the network. For example, Creative Exchange sends out short follow-up emails to determine the number and nature of contacts that have been made subsequent to a particular item in the newsletter. The UK Conflict Development and Peace Network (CODEP) records the newest subscribers to the network after every issue of its newsletter, and AB Colombia sends out weekly news summaries electronically, which are available for free to recipients who provide details of their professional engagements and why or how they wish to use these summaries. [47] This enables the mapping of the type of recipients the information reaches.

      E. Leadership and Coordination

      Sarason and Lorentz postulate four distinguishing characteristics that capture the creativity and expertise required by individuals leading and coordinating networks.[48] Knowledge of the territory or a broad understanding of the type of members, the resources available and the needs of the members is extremely important to facilitate an ideal environment of mutual trust and open dialogue between the members. Scanning the network for fluidity and assessing openings, making connections and innovating solutions would enable an efficient leadership that would contribute to the overall dynamism of the network. In addition to this, perceiving strengths and building on assets of existing resources would allow the network to capitalize on its strengths. Finally, the coordinators of a network must be a resource to all members of the network and thus enable them to create better and more efficient systems. They must therefore exercise their personal influence over members wherever required for the overall benefit of the network. Practically, a beneficial leadership would also require an inventive approach by providing fresh and interesting solutions to immediate problems. A sense of clarity, transparency and accountability would also encourage members of the network to participate more and engage with each other. It is important for the leadership within a network to deliver on expectations, while building consensus amongst its members.

A shared objective, a collaborative setting and a constant review of strategies are important to maintain linkages within a network. Responsible relationships, underpinned by values and supported by flows of relevant information, allow those who are engaged within the network to analyse the relevant work effectively and fruitfully. In addition to this, respect for the autonomy of the network is essential.

      F. Inclusion

Public policy networks are more often than not saturated with the economic and social elite of the developed world. A network across the Global South would have to change this norm and extend its ambit of membership to grassroots organizations, which might not otherwise have had the resources or the opportunity to be part of a network.[49] Networks can achieve their long-term goals only if they are driven by a willingness to include organizations from across economic demographics. This would ensure that their output is the result of a collaborative process that takes into account cross-cultural norms and differentials across economic demographics.

The participation of diverse actors reflects a policy-making process that has given due regard to on-the-ground realities and is sensitive to the concerns of differently placed interest groups. Networks have been accused of catering only to the needs of industrial countries and subscribing to the values of the Global North, thus stunting local development and enforcing double standards. This tarnishes the legitimacy of the processes inculcated within the network itself. It is therefore all the more essential that a network focused on the Global South have a diverse collection of members from across backgrounds and economic contexts. Additionally, the accountability of the network to civil society depends on the nature of the links it maintains with the public. Inclusion thus fosters a sense of legitimacy and accountability. Including local institutions from the beginning would also increase the chances that the solutions provided by the network are effectively implemented. Local inclusion affords a sense of responsibility and ensures that the network remains sustainable in the long run. Allowing local stakeholders to take ownership of the network, participate in the formulation of policies, engage in planning and facilitate participation would enable significant public policy issues to be addressed efficiently. [50] Networks would thus need to create avenues for local institutions and civil society to engage in a democratic form of decision making.

      III. Evaluation

The evaluation of a network is most efficiently carried out through a checklist, such as the one formulated within a research study for the purpose of evaluating its own network. [51]

This checklist enumerates the various elements that have to be taken into consideration while evaluating the success of a network, as follows:

      FIG 1.[52]

      1. What is a network?

      'Networks are energising and depend crucially on the motivation of members'

      (Networks for Development, 2000:35)

      This definition is one that is broadly shared across the literature, although it is more detailed than some.

       

      A network has:

      • A common purpose derived from shared perceived need for action
      • Clear objectives and focus
      • A non-hierarchical structure

A network encourages

      • Voluntary participation and commitment
      • The input of resources by members for benefit of all

      A network provides

      • Benefit derived from participation and linking

       

      2. What does a network do?

      • Facilitate shared space for exchange, learning, development - the capacity-building aspect
• Act for change in areas where none of the members is working in a systematic way - the advocacy, lobbying and campaigning aspect
      • Include a range of stakeholders - the diversity/ broad-reach aspect

       

      3. What are the guiding principles and values?

      • Collaborative action
      • Respect for diversity
      • Enabling marginalised voices to be heard
      • Acknowledgement of power differences, and commitment to equality

      4. How do we do what we do, in accordance with our principles and values?

      Building Participation

      • Knowing the membership, what each can put in, and what each seeks to gain
      • Valuing what people can put in
      • Making it possible for them to do so
      • Seeking commitment to a minimum contribution
      • Ensuring membership is appropriate to the purpose and tasks
      • Encouraging members to be realistic about what they can give
      • Ensuring access to decision-making and opportunities to reflect on achievements
      • Keeping internal structural and governance requirements to a necessary minimum.

       

      Building Relationships and Trust

      • Spending time on members getting to know each other, especially face-to-face
      • Coordination point/secretariat has relationship-building as vital part of work
      • Members/secretariat build relations with others outside network - strategic individuals and institutions

       

      Facilitative Leadership (may be one person, or rotating, or a team)

• Emphasis on quality of input rather than control
• Knowledgeable about issues, context and opportunities
• Enabling members to contribute and participate
• Defining a vision and articulating aims
• Balancing the creation of forward momentum and action, with generating consensus
• Understanding the dynamics of conflict and how to transform relations
• Promoting regular monitoring and participatory evaluation
• Have the minimum structure and rules necessary to do the work. Ensure governance is light, not strangling. Give members space to be dynamic
• Encourage all those who can make a contribution to the overall goal to do so, even if it is small.

      Working toward decentralised and democratic governance

      • At the centre, make only the decisions that are vital to continued functioning. Push decision-making outwards.
      • Ensure that those with least resources and power have the opportunity to participate in a meaningful way.

       

      Building Capacity

      • Encourage all to share the expertise they have to offer. Seek out additional expertise that is missing.

       

      5. What are the evaluation questions that we can ask about these generic qualities? How do each contribute to the achievement of your aims and objectives?

      Participation

      • What are the differing levels or layers of participation across the network?
      • Are people participating as much as they are able to and would like?
• Is the membership still appropriate to the work of the network? (Purpose and membership may have evolved over time.)
      • Are opportunities provided for participation in decision-making and reflection?
      • What are the obstacles to participation that the network can do something about?

      Trust

      • What is the level of trust between members? Between members and secretariat?
      • What is the level of trust between non-governing and governing members?
      • How do members perceive levels of trust to have changed over time?
      • How does this differ in relation to different issues?
      • What mechanisms are in place to enable trust to flourish? How might these be strengthened?

       

      Leadership

      • Where is leadership located?
      • Is there a good balance between consensus-building and action?
      • Is there sufficient knowledge and analytical skill for the task?
      • What kind of mechanism is in place to facilitate the resolution of conflicts?

       

      Structure and control

      • How is the structure felt and experienced? Too loose, too tight, facilitating, strangling?
      • Is the structure appropriate for the work of the network?
      • How much decision-making goes on?
      • Where are most decisions taken? Locally, centrally, not taken?
      • How easy is it for change in the structure to take place?

       

      Diversity and dynamism

      • How easy is it for members to contribute their ideas and follow-through on them?
• If you map the scope of the network through the membership, how far does it reach? Is this as broad as intended? Is it too broad for the work you are trying to do?

      Democracy

      • What are the power relationships within the network? How do the powerful and less powerful interrelate? Who sets the objectives, has access to the resources, participates in the governance?

      Factors to bear in mind when assessing sustainability

      • Change in key actors, internally or externally; succession planning is vital for those in central roles
      • Achievement of lobbying targets or significant change in context leading to natural decline in energy;
      • Burn out and declining sense of added value of network over and above every-day work.
      • Membership in networks tends to be fluid. A small core group can be a worry if it does not change and renew itself over time, but snapshots of moments in a network's life can be misleading. In a flexible, responsive environment members will fade in and out depending on the 'fit' with their own priorities. Such changes may indicate dynamism rather than lack of focus.
      • Decision-making and participation will be affected by the priorities and decision-making processes of members' own organisations.
      • Over-reaching, or generating unrealistic expectations may drive people away
      • Asking same core people to do more may diminish reach, reduce diversity and encourage burn-out

      V. Learning and Recommendations

In order for a network to work optimally, several factors need to be taken into consideration and certain specific processes have to be incorporated into its regular functioning. These include, for example:

• Ensuring that the evaluation of the network occurs at periodic intervals, with the requisite attention to detail and efficiency, to enable an in-depth recalibration of the functions and processes of the network. To this effect, evaluation specialists must be engaged not just at times of crisis or instability but as accompaniments to the various processes undertaken by the network. This would enable a holistic development of the network.
      • It is also important to understand the underlying values that define the unique nature of the network. The coordination of the network, its functions and its activities are intrinsically linked to these values and recognition of this element of the network would enable a greater functionality in the overall operation of the network.
      • A strong relationship between the members of the network, predicated on trust and open dialogue is essential for its efficient functioning. This would allow the accumulation of innovative ideas and dynamic thought to direct the future activities of the network.
• The secretariat or coordinator of the network must be able to engage the members in monitoring and evaluating the progress of the network. One method of enabling this coordination is through the institution of 'participant observer' methods at international conferences or meetings, which allow the members of the network to report back on the work they have done that is linked to the work of other members.
      • The autonomy of a network and its decentralized mechanism of functioning are integral to retain the individuality of its members, who seek to pursue institutional objectives. The members seek to facilitate creative thinking and share ideas and this must be supported by financial resources. A strong bond of trust between the members of a network is therefore essential to enable long term commitments and the flourishing of interpersonal communication between members.
      • It is important that the subject area of operation of the network be comprehensively defined before the network comes into existence.
• As seen with the experience of Canadian knowledge networks, it is beneficial to be selective in inviting participants to the network; following a rigorous process of review and selection ensures that only the best candidates are selected, facilitating effective partnerships with other networks as a result of demonstrable expertise within a particular field.
      • The management of a network must be disciplined, with clearly demarcated project deadlines and an optimum level of transparency and accountability. At the helm of leadership of every successful network, there has been intelligent, decisive and facilitative exchange, which is essential in securing a durable and potentially expandable space for the network to operate in.

      A. Canadian Perspectives

A study of Canadian experiences was conducted by examining the Centers of Excellence and the Networks of Centers of Excellence (NCEs), which were funded through three federal granting councils.[53] An initial observation made in the course of this study was that each network is intrinsically different and no uniform description would fit all of them. The objectives of the Networks of Centers of Excellence Program are, broadly: to encourage fundamental and applied research in fields critical to the economic development of Canada; to encourage the development and retention of world-class scientists and engineers specializing in essential technologies; to manage multidisciplinary, cross-sectoral national research programs which integrate stakeholder priorities through established partnerships; and finally, to accelerate the exchange of research results within networks by accelerating technology transfers made to users for social and economic development. [54] Extensive interviews carried out in the course of the research conducted by the ARA Consulting Group Inc. drew particularly relevant conclusions with respect to the NCEs.

Firstly, they have been able to produce significant "cultural shifts" among the researchers associated with the network. This is attributed to the network facilitating a collaborative effort amongst researchers, as opposed to their previous way of working, which was largely in isolation. The benefits of this collaboration include providing innovative ideas and leading the research itself in unprecedented directions. This equips Canada with the capability to compete at a global level in its research endeavors. The culture shift has also made researchers more aware of the problems that plague industry and has instigated more in-depth research into the development of the industrial sector. Government initiatives that have attempted to cohesively apply academic research to industry have had limited success. The NCEs, however, have managed to successfully dismantle the barriers between these two seemingly disparate fields. This has resulted in a faster and more effective system of knowledge dissemination, and in durable, self-sustaining economic development that takes place at a faster rate. The NCEs have also been able to contribute to healthcare, wellness and overall sustainable development through their cross-sectoral research approach, a model that can be used worldwide.

Another tangible effect has been that the relationship between industry and academic research is evolving into a positive and collaborative exchange, as opposed to its previous state, which was largely isolationist, bordering on confrontational.[55] A possible cause of this is the increased representation of companies in the establishment of networks, resulting in them influencing the course of research. This has not been met with any resistance from academic researchers, who remain driven by the imperative of open publication. [56] Besides influencing the style of management, industrial representation has also brought about an increase in the level of private-sector financial contributions made to NCEs. It is believed that these NCEs may even be able to support themselves within the next 7-8 years through the funding they receive from the commercialization of their research.

A third benefit that has emerged is the faster rate of production of new knowledge and innovative thinking. This is the result of collaborative techniques, made more efficient through the use of modern technology. The increasing number of multi-authored, cross-institutional scholarly publications made available by the NCEs is evidence of this trend. The rate and quantity of technology transfers have also increased exponentially as a result. Knowledge networks also facilitate the mobilization of human resources and address cross-disciplinary problems, resulting in efficient and synergistic solutions. Their low-cost, fast-paced approach has been instrumental in constructing an understanding of, and capacity to engage in, sustainable development.

The significant contributions to sustainable development include the Canadian Genetic Diseases Network, which has discovered two specific genes that cause early-onset Alzheimer's disease. The Sustainable Forest Management Network has claimed that its research has a considerable level of influence on the industrial approach to sustainability. The Canadian Bacterial Disease Network conducts research on bacterially caused diseases which are mostly prevalent in developing countries, with a view to producing antibiotics and vaccines that may be able to successfully combat these diseases. TeleLearning, another such network, is working on the creation of software environments which will form the basis of technology-based education in the future. [57] The greatest advantage of these knowledge networks is that they have been able to surpass traditional disciplinary barriers and have emerged at the forefront of interdisciplinary articulation, which is emerging as the path to future breakthroughs in the applied sciences and technology. The NCEs have also been able to provide diverse working environments for graduate students, who have been able to work under scientists with different specializations and across different departments. The students have also been able to interact with government and industry representatives, giving them far greater exposure to the field and equipping them to avail of a wide range of employment opportunities.

The corporate style of management incorporated within the NCEs encourages a sense of discipline and an enthusiasm for innovation. The Boards of Directors at NCEs function as typical corporate boards. Researchers are therefore required to provide regular reports and meet deadlines to achieve predetermined goals that have been agreed upon. The new paradigm of sustainable development and the fluid transfer of knowledge requires this structure of management, even within a previously strictly academically oriented environment. NCEs have been incorporated as non-profit corporations largely for legal reasons, such as the ownership of intellectual property.

Participation in these networks is restricted and is open only by invitation, in the form of a submission of project proposals under a particular theme, with the final selection being made subject to a rigorous process of evaluation. This encourages the participants of the network to exercise a degree of discipline and carry out their activities in a constructive, time-bound manner.

      B. Perceived Challenges

These knowledge networks, although extremely beneficial in the long run, do have certain specific issues that need to be addressed. Firstly, most formal knowledge networks do not have a formalized communication strategy. While they do make use of various forms of telecommunication, this communication is in no way formally directed or specific. Although some networks have managed to set up a directed communications strategy, supplemented by the involvement of specifically communications-based networks (such as CANARIE), there is still a long way to go in this area.

As is evident with most academic endeavors in recent years, efficient and sustained development, both in terms of the economy as well as self-sustenance, requires a smooth transition to close collaboration with industry. Although the NCEs have made progress in this area, a lesson that can be learnt is that knowledge networks do require a collaborative arrangement between researchers, industry and the financial sector. [58] The nature of this collaboration cannot be predicted before tangible research outputs are developed that reflect the relevance of academia to the industrial and financial sectors. One network, PENCE, has mandated that its board of directors include a representative of the financial sector. This is a step forward in opening the doors to greater collaboration and mutually assured growth and sustainable development in academia as well as the industrial and financial sectors.

As with all knowledge networks, there is a continuous need to expand the focus areas to cover more fields and instigate research in neglected areas. The largest number of networks has been in healthcare and health-associated work. However, there is a pressing need for networks to be established in other fields as well, such as those related to environmental issues, social dynamics and the general quality of life. [59]

The Canadian experience has resulted in a nuanced understanding of the specific actions that need to be taken to strengthen knowledge networks across the spectrum. Firstly, there is a pressing need, in building new knowledge networks, to strengthen the institutions upon which the networks are based. These include universities and research institutions, which have been weakened both financially and academically over the past few years. The NCE Program, on the face of it, seems to strengthen universities by attracting funding for research endeavors that would otherwise not be available to them. While this may be true, it tends to obfuscate the true nature of a university as an intellectual community by portraying it as a funding source for research and equipment.[60] The deteriorating role of the university in fostering research and laying the foundation of an intellectual community is compounded by the competition posed by the NCEs, which tend to threaten its stature as a multidisciplinary and graduate institution. Another aspect that needs to be considered is the role of knowledge networks in fostering sustainable development not only on a national or regional scale but at a global level. This can be effectuated by allowing the amalgamation of academia and industry through ample representation, a model that has proven effective within the NCEs. This is all the more relevant today, when multinational corporations hold considerable sway over the global economy, so much so that the role of governments in regulating this economy is gradually decreasing. Multilateral investment treaties and agreements are reflective of this.

The final issue is the long-standing debate between public good and proprietary knowledge. Canadian knowledge networks are of the opinion that knowledge must be freely disseminated. However, certain networks, including the NCEs, grant the exclusive right to develop and apply this knowledge to specific industry affiliates. On one hand, this facilitates further investment in the research, which creates better products, new jobs and further social development. It is, however, predicated on a fine balance: allowing this development without widening the already disparate socio-economic gaps that exist between developed and developing countries. Thus the balance between public good and proprietary knowledge must be effectively managed through the regulatory role discharged by governments and the decision-making faculties of these knowledge networks. [61]

Establishing international linkages across networks based in different regions of the world would also help to ensure effective partnerships and the creation of a new, self-sustaining structure. This would bring new prospects of funding into sustainable development activities and engage industrial affiliates with international development activities.

      C. Donor Perspectives

The International Development Research Centre (IDRC), based in Canada, has also been instrumental in setting up support structures for networks. The IDRC has remained consistent in its emphasis on networks as mechanisms for linking scientists engaged with similar problems across the globe, rather than as mechanisms to fund research in countries. This has afforded the IDRC a greater level of flexibility in responding to the needs of developing countries, as well as to the financial pressures within Canada to deliver superior technical support with a reduction in overheads. The IDRC sees networking as an indispensable aspect of scientific pursuit and of adapting technology in the most effective manner. It is currently supporting four specific types of networks: horizontal networks, which link together institutions with similar areas of specialization; vertical networks, which work on disparate aspects of the same problem or on different but interrelated problems; information networks, which provide a centralized information service to members, enabling them to exchange information as necessary; and finally, training networks, which provide supervisory services to independent participants within the network.[62]

      (I) Internal Evaluations

      Visits undertaken by the coordinator, or by specific representatives of the member or donor as applicable, need to be monitored. This would expedite the identification of problems and aid in deriving tangible solutions efficiently. The criteria for assessment would vary depending on the goals of the organization. Donors may pose questions about the cost effectiveness of a particular pattern of research and may seek a formal report on this aspect. A more extensive model of donor evaluation may even include assessments of the monitoring and coordination of specific functions.

      (II) External Evaluations

      A system of external evaluation would be useful for assessing data on the operations of programs and their objectives. It would also engage newer participants by injecting fresh ideas and insights into the management and scope of the network. The most extensive method of network evaluation was postulated by Valverde [63] and reviewed by Faris [64]. It aims to analyse the particular constraints and specific elements that influence the execution of network programs. The method identifies threats, opportunities, strengths and weaknesses, which then inform future recommendations. The Valverde method makes use of both formal and informal data, varied according to the type of network and the management structure it employs.[65]

      (III) Financial Viability

      A network almost always requires external resources to aid in the setting up and coordination of its activities. Donor agencies must recognize the long-term commitment that is required in this respect. It is therefore essential that the period for which funding will be made available be clarified at the outset, to leave agencies ample time to plan for the possible cessation of external financial support. [66] As the findings of the research study conclude, although most networks are offered external support, it is primarily technology transfer and information networks that have been able to generate the bulk of funding in this respect. They have obtained this financial assistance from a variety of sources, including participating organizations as well as governments. [67] The funding for purely research-oriented networks, however, is inconsistent, and such networks would have to plan in advance for a possible cessation of financial support.[68]

      (IV) Adaptability

      From the perspective of donors, the degree of adaptability and responsiveness of a network is especially relevant in assessing its coordination, control and leadership. A network plagued by ineffective leadership and a lack of coordination is unable to adapt to changing circumstances and meet the needs of its participants. A combination of collaborative effort, a localized approach and far-sighted leadership instills in the participants of the network a sense of comfort in its processes, and in the donors a faith in its ability to address topical issues and remain relevant.

      (V) The Exchange of Information

      As noted by Akhtar, a network is created to respond to the growing need to improve channels of information exchange and communication. [69] Information needs to be tailored to suit its users and must be disseminated accordingly. The study concluded that information networks engaged in the transfer of technology are inefficient at disseminating internally derived information and at recognizing the needs of their users.[70] Given that these networks are especially user-oriented, this systemic failure is extremely problematic. There is also a need to review the mechanism of transferring strategic research techniques and the approaches employed in dealing with developing countries. Special attention must be paid to the beneficiaries of a particular network so that the research conducted is directed towards that demographic. This is especially relevant for information networks, which, from the evaluation, appear to be generating data without considering who would be using these services.[71]

      (VI) Capacity Building

      Facilitating the training of individuals, both formally and informally, has led to an enhanced level of research, reporting and project design. There is, however, a need to tailor this training to suit the needs of the participants of a particular network. Networks that have been able to provide inputs not ordinarily available locally have instigated the establishment of national and regional institutions. [72]

      (VII) Cost Effectiveness

      Networks also need to employ the most cost-effective mechanism of delivering support services to national programs. A network must work in a manner that allows for enough individual enterprise but at the same time follows a collaborative model, so as to generate more effective and relevant research within a short span of time and with minimum resources. The Caribbean Technology Consultancy Services (CTCS), for example, was found to be far more cost effective, and in fact 50% cheaper, than the services of the United Nations Industrial Development Organization. [73] Similarly, the evaluators of the LAAN found that funding a network was significantly cheaper than funding individual research projects.[74]


      [1] Castells, Manuel (2000) "Toward a Sociology of the Network Society", Contemporary Sociology, Vol 29 (5), p. 693-699

      [2] Reinicke, Wolfgang H & Francis Deng, et al (2000) Critical Choices: The United Nations, Networks and the Future of Global Governance, IDRC, Ottawa

      [3] Supra n.1, p. 697

      [4] Ibid

      [5] Supra n.1, p. 61

      [6] Chambers, Robert (1997) Whose Reality Counts? Putting the First Last, Intermediate Technology Publications, London

      [7] Ibid

      [8] Chisholm, Rupert F (1998) Developing Network Organizations: Learning from Practice and Theory, Addison Wesley

      [9] Brown, L. David (1993) "Development Bridging Organizations and Strategic Management for Social Change", Advances in Strategic Management 9

      [10] Madeline Church et al, Participation, Relationships and Dynamic Change: New Thinking on Evaluating the Work of International Networks, Development Planning Unit, University College London (2002), p. 16

      [11] Ibid

      [12] Ibid

      [13] Reinicke, Wolfgang H & Francis Deng, et al (2000) Critical Choices: The United Nations, Networks and the Future of Global Governance, IDRC, Ottawa, p. 61

      [14] Ibid

      [15] Ibid

      [16] Supra n.13, p. 65

      [17] Ibid

      [18] Supra n.13, p. 62

      [19] Ibid

      [20] Supra n.13, p. 63

      [21] Ibid

      [22] Supra n.13, p. 64

      [23] Newell, Sue & Jacky Swan (2000) "Trust and Inter-organizational Networking", Human Relations, Vol 53 (10)

      [24] Sheppard, Blair H & Marla Tuchinsky (1996) "Micro-OB and the Network Organisation" in Kramer, R. and Tyler, T. (eds) Trust in Organisations, Sage

      [25] Powell, Walter W (1996) "Trust-based forms of governance" in Kramer, R. and Tyler, T. (eds) Trust in Organisations, Sage

      [26] Stern, Elliot (2001) "Evaluating Partnerships: Developing a Theory Based Framework", Paper for European Evaluation Society Conference 2001, Tavistock Institute

      [27] Freedman, Lynn & Jan Reynders (1999) "Developing New Criteria for Evaluating Networks" in Karl, M. (ed) Measuring the Immeasurable: Planning, Monitoring and Evaluation of Networks, WFS

      [28] Allen Nan, Susan (1999) "Effective Networking for Conflict Transformation", Draft Paper for International Alert/UNHCR Working Group on Conflict Management and Prevention

      [29] Supra n.10, p. 20

      [30] Ibid

      [31] Taylor, James (2000) "So Now They Are Going To Measure Empowerment!", paper for INTRAC 4th International Workshop on the Evaluation of Social Development, Oxford, April

      [32] Karl, Marilee (2000) Monitoring and Evaluating Stakeholder Participation in Agriculture and Rural Development Projects: A Literature Review, FAO

      [33] Supra n.10, p. 25

      [34] Ibid

      [35] Supra n.10, p. 26

      [36] Ibid

      [37] Supra n.10, p. 27

      [38] Ludema, James D, David L Cooperrider & Frank J Barrett (2001) "Appreciative Inquiry: the Power of the Unconditional Positive Question" in Reason, P. & Bradbury, H. (eds) Handbook of Action Research, Sage

      [39] Ibid

      [40] Supra n.10, p. 29

      [41] Ibid

      [42] Ibid

      [43] Sida (2000) Webs Women Weave, Sweden, p. 131-135

      [44] Ibid

      [45] Dutting, Gisela & Martha de la Fuente (1999) "Contextualising our Experiences: Monitoring and Evaluation in the Women's Global Network for Reproductive Rights" in Karl, M. (ed) Measuring the Immeasurable: Planning, Monitoring and Evaluation of Networks, WFS

      [46] Supra n.10, p. 30

      [47] Supra n.10, p. 32

      [48] Allen Nan, Susan (1999) "Effective Networking for Conflict Transformation", Draft Paper for International Alert/UNHCR Working Group on Conflict Management and Prevention

      [49] Supra n.13, p. 67

      [50] Supra n.13, p. 68

      [51] Supra n.10, p. 36

      [52] See Madeline Church et al, Participation, Relationships and Dynamic Change: New Thinking on Evaluating the Work of International Networks, Development Planning Unit, University College London (2002), p. 36-37

      [53] The three granting councils are: the Natural Sciences and Engineering Research Council (NSERC), the Social Sciences and Humanities Research Council (SSHRC), and the Medical Research Council (MRC).

      [54] Howard C. Clark, Formal Knowledge Networks: A Study of Canadian Experiences, International Institute for Sustainable Development, 1998, p. 16

      [55] Ibid, p. 18

      [56] Ibid, p. 18

      [57] Ibid, p. 19

      [58] Ibid, p. 21

      [59] Ibid, p. 22

      [60] Ibid, p. 31

      [61] Ibid

      [62] Terry Smutylo and Saidou Koala, Research Networks: Evolution and Evaluation from a Donor's Perspective, p. 232

      [63] Valverde, C. (1988) Agricultural Research Networking: Development and Evaluation, International Service for National Agricultural Research, The Hague, Netherlands. Staff Notes (18-26 November 1988)

      [64] Faris, D.G (1991) Agricultural Research Networks as Development Tools: Views of a Network Coordinator, IDRC, Ottawa, Canada, and International Crops Research Institute for the Semi-Arid Tropics, Patancheru, Andhra Pradesh, India

      [65] Supra n.62

      [66] Terry Smutylo and Saidou Koala, Research Networks: Evolution and Evaluation from a Donor's Perspective, p. 233

      [67] Ibid

      [68] Ibid

      [69] Akhtar, S. (1990) "Regional Information Networks: Some Lessons from Latin America", Information Development 6 (1): 35-42

      [70] Ibid, p. 242

      [71] Ibid, p. 242

      [72] Ibid, p. 243

      [73] Stanley, J.L and Elwela, S.S.B (1988) Evaluation Report for the Caribbean Technology Consultancy Services (CTCS), CTCS Network Project (1985-1988), IDRC, Ottawa, Canada

      [74] Moreau, L. (1991) Evaluation of Latin American Aquaculture Network, IDRC, Ottawa, Canada

      Summary of the Public Consultation by Vigyan Foundation, Oxfam India and G.B. Pant Institute, Allahabad

      by Vipul Kharbanda last modified Jan 28, 2016 03:22 PM
      On December 22nd and 23rd a public consultation was organized by the Vigyan Foundation, Oxfam India and the G.B. Pant Institute at the G.B. Pant Social Science Institute, Allahabad, to discuss issues related to making Allahabad a Smart City under the Smart City scheme of the Central Government. An agenda for the same is attached herewith.

      The Centre for Internet and Society, Bangalore (CIS) is researching the 100 Smart City Scheme from the perspective of Big Data, seeking to understand the role of Big Data in smart cities in India as well as the impact of its generation and use. CIS is also examining whether the current legal framework is adequate to deal with these new technologies. It was against this background that CIS attended part of the workshop.

      At the outset the organizers noted that there would be no discussion of technology and its adoption in this particular workshop. The format involved a speaker providing his or her viewpoint on the topic concerned, and the discussion revolved mainly around problems relating to traffic, parking, roads, drainage, etc.; there was no discussion of technology or of how to utilise it to solve these problems. From discussions CIS has had with people closely involved in these public consultations, our impression is that the solutions to these problems are not very complicated and require only some intent and execution, which, if achieved, would go a long way towards improving the city's infrastructure. This perspective raises the question of whether India needs 'smart cities' to improve the lives of residents, or whether basic urban solutions are adequate and are in fact needed to lay the foundation for any potential smart city that might be established in the future.

      It is quite interesting to see the difference in the levels at which the debate on smart cities is happening: when the central government talks about smart cities, it highlights technology and aspects such as smart meters and smart grids, while the discussion on the ground in the actual cities is currently at a much more basic stage. For example, the government website for the smart city project, while describing a smart city, mentions a number of “smart solutions” such as “electronic service delivery”, “smart meters” for water and electricity, “smart parking”, “intelligent traffic management”, “tele-medicine”, etc. Even in the major public service announcements on the smart city project, the government's effort seems to be to focus on these “smart solutions”, projecting technology as the answer to urban problems. However, those in the cities themselves appear more concerned with adequate parking, adequate water supply, proper roads, waste disposal, etc. This difference in approach is representative of the yawning gap between the mindspace of those who conceive and market these schemes on the one hand, and, on the other, those tasked with implementing them and the realities of what Indian cities need in order to address problems of infrastructure and functioning. The silver lining in this scenario, at least on a personal level, is that the people on the ground are not blindly turning to technology to solve their problems but are actually trying to find the best solutions, whether technology-based or not.


      CIS's Comments on the CCWG-Accountability Draft Proposal

      by Pranesh Prakash last modified Jan 29, 2016 03:17 PM
      The Centre for Internet & Society (CIS) gave its comments on the failures of the CCWG-Accountability draft proposal as well as the processes that it has followed.

      We at the Centre for Internet and Society wish to express our dismay at the consistent way in which the CCWG-Accountability has failed to take critical inputs from organizations like ours (and others, some instances of which have been highlighted in Richard Hill's submission) into account, and has failed even to capture, in any document prepared by the CCWG, the concerns and misgivings about the process that we expressed in our submission on the CCWG-Accountability's 2nd Draft Proposal on Work Stream 1 Recommendations. We cannot support the proposal in its current form.

      Time for Comments

      We believe, firstly, that the 21-day comment period was too short and will effectively prevent many groups and categories of people from participating meaningfully in the process, which flies in the face of the values that ICANN claims to uphold. This extremely short period amounts to procedural unsoundness and restrains educated discussion on the way forward, especially given that the draft has altered quite drastically in the aftermath of ICANN55.

      Capture of ICANN and CCWG Process

      Participation in the accountability-cross-community mailing list clearly shows that the process is dominated by developed countries (of the top 30 non-staff posters to the list, 26 were from the ‘WEOG’ UN grouping, 14 of them from the USA, with only 1 from Asia-Pacific, 2 from Africa, and 1 from Latin America), by males (27 of the 30 non-staff posters), and by industry/commercial interests (17 of the top 30 non-staff posters).  If this isn’t “capture”, what is?  There is no stress test that overcomes this reality of capture of ICANN by Western industry interests.  The community is only nominally multistakeholder, while being grossly under-representative of developing nations, of women and minority genders, and of communities that are neither business nor technical communities.  For instance, of the 1010 ICANN-accredited registrars, 624 are from the United States and only 7 from the 54 countries of Africa.

      Culling statistics from the accountability-cross-community mailing list, we find that, as far as one could ascertain from public records, of the top 30 posters (excluding ICANN staff):

      • 57% were from a single country: the United States of America.
      • 87% were from countries in the WEOG UN grouping (which includes Western Europe, the US, Canada, Israel, Australia, and New Zealand), all of them developed countries. None of the substantive participants were from the Eastern European Group, only 1 was from Asia-Pacific, and only 1 was from GRULAC (the Latin American and Caribbean Group).
      • 90% (27) were male and 10% (3) were female.
      • 57% were identifiable as primarily being from industry or the technical community, with only 2 (7%) readily identifiable as representing governments.

      This lack of global multistakeholder representation greatly damages the credibility of the entire process, since it gains its legitimacy by claiming to represent the global multistakeholder Internet community.

      Bogey of Governmental Capture

      With respect to Stress Test 18, dealing with the GAC, the report proposes that the ICANN Bylaws, specifically Article XI, Section 2, be amended to create a provision whereby the Board, by a two-thirds vote, may reject full GAC consensus advice. This amendment is not warranted by the fear of government capture or the fear that ICANN will become a government-led body; given that GAC advice is non-binding, that is not a possibility. Given the state of affairs described above, it is clear that for much of the world, governments are the only channel through which people can effectively engage within the ICANN ecosystem. Nullifying the effectiveness of GAC advice therefore harms the goal of fostering a multistakeholder ecosystem and contributes to strengthening the kind of industry capture described above.

      Jurisdiction

      All discussions on the Sole Designator Model seem predicated on the unflinching certainty of ICANN’s jurisdiction remaining in California, as the legal basis of that model is drawn from Californian corporate law.  To quote from Annexe 12 of the draft report itself:

      "Jurisdiction directly influences the way ICANN’s accountability processes are structured and operationalized. The fact that ICANN today operates under the legislation of the U.S. state of California grants the corporation certain rights and implies the existence of certain accountability mechanisms. It also imposes some limits with respect to the accountability mechanisms it can adopt. The topic of jurisdiction is, as a consequence, very relevant for the CCWG-Accountability. ICANN is a public benefit corporation incorporated in California and subject to California state laws, applicable U.S. federal laws and both state and federal court jurisdiction."

      Jurisdiction has been placed within the mandate of WS2, to be dealt with after the transition.  However, there is no analysis in the 3rd Draft of how the Sole Designator Model would continue to be upheld if future Work Stream 2 discussions led to a consensus that ICANN’s jurisdiction needed to shift. If ICANN moved to, say, Delaware or Geneva, would the Sole Designator Model still have a basis in law?  This issue therefore needs to be addressed before the model is adopted; otherwise there is a risk either of the model being rendered infructuous in the future, or of it foreclosing open debate and discussion in Work Stream 2.

      Right of Inspection

      We strongly support the incorporation of the right of inspection under this model, as per Section 6333 of the California Corporations Code, as a fundamental bylaw. As there is a severe gap between the claims ICANN makes about its own transparency and the transparency it actually upholds, we opine that the right of inspection needs to be extended to each member of the ICANN community.

      Timeline for WS2 Reforms

      We support the CCWG’s commitment to the review of the DIDP process, which it has committed to enhancing in WS2. Our research on this matter indicates that ICANN has in practice been able to deflect most requests for information. It has regularly invoked its ‘internal processes’ and ‘discussions with stakeholders’ clauses, as well as clauses on protecting the financial interests of third parties (together over 50% of the total non-disclosure clauses ever invoked - see chart below), to avoid providing information on pertinent matters such as its compliance audits and reports of abuse to registrars. We believe that even if ICANN is legally a private entity, and not at the same level as a state, it nonetheless plays the role of regulating an enormous public good, namely the Internet. There is therefore a great onus on ICANN to be far more open about the information it provides. Finally, it is extremely disturbing that ICANN has extended full disclosure to only 12% of the requests it receives; an astonishing 88% of requests have been denied, partly or wholly. See "Peering behind the veil of ICANN's DIDP (II)".

      In its present form, the report offers little analysis of the WS2 timeline; it merely states that:

      "The CCWG-Accountability expects to begin refining the scope of Work Stream 2 during the upcoming ICANN 55 Meeting in March 2016. It is intended that Work Stream 2 will be completed by the end of 2016."

      Without further clarity and specification of the WS2 timeline, meaningful reform cannot be initiated. Therefore we urge the CCWG to come up with a clear timeline for transparency processes.

      The Internet Has a New Standard for Censorship

      by Jyoti Panday last modified Jan 30, 2016 09:17 AM
      The introduction of the new 451 HTTP Error Status Code for blocked websites is a big step forward in cataloguing online censorship, especially in a country like India where access to information is routinely restricted.
      Featured image credit: span112/Flickr, CC BY 2.0.

      The article was published in the Wire on January 29, 2016.


      Ray Bradbury’s dystopian novel Fahrenheit 451 opens with the declaration, “It was a pleasure to burn.” The six unassuming words offer a glimpse into the mindset of the novel’s protagonist, ‘the fireman’ Guy Montag, who burns books. Montag occupies a world of totalitarian state control over the media where learning is suppressed and censorship prevails. The title alludes to the ‘temperature at which book paper catches fire and burns,’ an apt reference to the act of violence committed against citizens through the systematic destruction of literature. It is tempting to think about the novel solely as a story of censorship. It certainly is. But it is also a story about the value of intellectual freedom and the importance of information.

      Published in 1953, Bradbury’s story predates home computers, the Internet, Twitter and Facebook, and yet it anticipates the evolution of these technologies as tools for censorship. When a state seeks to censor speech, it uses the most effective and easiest mechanisms available. In Bradbury’s dystopian world, burning books did the trick; in today’s world, governments achieve this by blocking access to information online. The majority of the world’s Internet users encounter censorship, even if the contours of control vary with each country’s policies and infrastructure.

      Online censorship in India

      In India, information access blockades have become commonplace and are increasingly enforced across the country to maintain political stability, for economic reasons, in defence of national security, or to preserve social values. Last week, the Maharashtra Anti-Terrorism Squad blocked 94 websites that were allegedly radicalising the youth to join the militant group ISIS. Memorably, in 2015 the NDA government’s ham-fisted attempts at enforcing a ban on online pornography resulted in widespread public outrage. Instead of revoking the ban, the government issued yet another vaguely worded and in many senses astonishing order. As reported by Medianama, the revised order delegates to private intermediaries the responsibility of determining whether banned websites should remain unavailable.

      The state’s shifting reasons for blocking access to information reflect its tendentious attitude towards speech and expression. Free speech in India is messily contested, and the judiciary normally acts as a check on the executive’s proclivity for banning. For instance, in 2010 the Supreme Court upheld the Bombay High Court’s decision to revoke the ban on American author James Laine’s book on Shivaji, which, according to the state government, contained material promoting social enmity. In the context of communications technology, however, the traditional role of the courts is increasingly being passed on to private intermediaries.

      This delegation of authority is evident in the government notifying intermediaries to proactively filter content for ‘child pornography’ in the revised order issued to deal with websites blocked as a result of its crackdown on pornography. Such screening and filtering requires intermediaries to make a determination on the legality of content in order to avoid direct liability. As international best practices such as the Manila Principles on Intermediary Liability point out, such screening is slow and costly, and intermediaries are incentivised to simply limit access to information.

      Blocking procedures and secrecy

      The constitutional validity of Section 69A of the Information Technology Act, 2000 (as amended in 2008), which grants the executive the power to block access to information unchecked and in secrecy, was challenged in Shreya Singhal v. Union of India. Curiously, the Supreme Court upheld Section 69A, reasoning that the provisions were narrowly drawn with adequate safeguards, and noted that any procedural inconsistencies may be challenged through writ petitions under Article 226 of the Constitution. Unfortunately, as past instances of blocking under Section 69A reveal, the provisions are littered with procedural deficiencies, amplified manifold by the authorities responsible for interpreting and implementing the orders.

      Problematically, an opaque confidentiality requirement built into the blocking rules mandates secrecy in requests and recommendations for blocking and places written orders outside the purview of public scrutiny. As there is no comprehensive list of blocked websites or of the legal orders, the public has to rely on ISPs leaking orders, or on media reports, to understand the censorship regime in India. RTI applications requesting further information on the implementation of these safeguards have at best yielded incomplete information.

      Historically, the courts in India have held that Article 19(1)(a) of the Constitution of India is as much about the right to receive information as it is to disseminate, and when there is a chilling effect on speech, it also violates the right to receive information. Therefore, if a website is blocked citizens have a constitutional right to know the legal grounds on which access is being restricted. Just like the government announces and clarifies the grounds when banning a book, users have a right to know the grounds for restrictions on their speech online.

      Unfortunately, under the present blocking regime in India, there is no easy way for a service provider to comply with a blocking order while also notifying users that censorship has taken place. The ‘Blocking Rules’ require notice to the “person or intermediary”, implying that notice may be sent to either the originator or the intermediary. Further, the confidentiality clause raises the presumption that nobody beyond the intermediaries ought to know about a block.

      Naturally, intermediaries interested in self-preservation and in avoiding conflict with the government become complicit in maintaining the secrecy of blocking orders. As a result, it is often difficult to determine why content is inaccessible, and users often mistake censorship for a technical problem in accessing content. Consequently, pursuing legal recourse or trying to hold the government accountable for its censorious activity becomes a challenge. In failing to consider the constitutional merits of the confidentiality clause, the Supreme Court has shied away from addressing the over-broad reach of the executive.

      Secrecy in removing or blocking access is a global problem that limits the transparency expected of ISPs. Across many jurisdictions, intermediaries are legally prohibited from publicising filtering orders as well as information relating to content or service restrictions. In the United Kingdom, for example, ISPs are prohibited from revealing blocking orders related to terrorism and surveillance. In South Korea, the Korean Communications Standards Commission holds meetings that are open to the public; however, the sheer volume of censorship (close to 10,000 URLs a month) makes public oversight unwieldy.

      As the Manila Principles note, providing users with an explanation and reasons for placing restrictions on their speech and expression increases civic engagement. Transparency standards will empower citizens to demand more accountability on content regulation from the companies and governments they interact with. It is worth noting that for conduits, as opposed to content hosts, it may not always be technically feasible to provide a notice when content is unavailable due to filtering. A new standard helps improve transparency for network-level intermediaries and for websites bound by confidentiality requirements. The recently introduced HTTP error code is a critical step forward in cataloguing censorship on the Internet.

      A standardised code for censorship

      On December 21, 2015, the Internet Engineering Steering Group (IESG), the body responsible for reviewing and approving the internet's operating standards, approved the publication of 'An HTTP Status Code to Report Legal Obstacles', which defines status code 451. The code provides intermediaries with a standardised way to notify users when a website is unavailable following a legal order. Publishing the code allows intermediaries to be transparent about their compliance with court and executive orders across jurisdictions and is a huge step forward for capturing online censorship. HTTP code 451 was proposed by software engineer Tim Bray, and its number is an homage to Ray Bradbury's novel Fahrenheit 451.

      Bray began developing the code after being inspired by a blog post by Terence Eden calling for a censorship error code. The code's official status comes after two years of discussions within the technical community and is the result of campaigning by transparency and civil society advocates who have been pushing for clearer labelling of internet censorship. Initially, the code received pushback from within the technical community for reasons enumerated by Mark Nottingham, Chair of the IETF HTTP Working Group, on his blog. However, sites soon began using the code on an experimental and unsanctioned basis, and faced with increasing demand and feedback, the working group accepted the code.

      The HTTP code 451 works as a machine-readable flag and has immense potential as a tool for organisations and users who want to quantify and understand censorship on the internet. Cataloguing online censorship is a challenging, time-consuming and expensive task. By flagging legal blocks at the network level, HTTP code 451 mitigates the opacity created by the confidentiality obligations built into blocking or licensing regimes and reduces the cost of identifying blocking orders.

      The code creates a distinction between websites blocked following a court or executive order and information that is inaccessible due to technical errors. If implemented widely, Bray's new code will help prevent confusion around blocked sites. The code addresses ISPs' misleading and inaccurate usage of error 403 'Forbidden' (which indicates that the server received and understood the request but refuses to act on it) and 404 'Not Found' (which indicates that the requested resource could not be found but may be available again in the future).
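      To make the distinction concrete, the sketch below stands up a minimal in-process HTTP server that answers every request with status 451, including the advisory 'blocked-by' Link header suggested by the specification, and a client that reads the status back and labels it. This is an illustration only: the handler, URL and messages are invented, not drawn from any real blocking order.

```python
# Minimal sketch: a local HTTP server returning 451, and a client that
# distinguishes a legal block (451) from the ambiguous 403/404 codes.
import http.server
import threading
import urllib.error
import urllib.request

class BlockedHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The specification suggests a Link header naming the blocking authority.
        self.send_response(451, "Unavailable For Legal Reasons")
        self.send_header("Link",
                         '<https://example.org/legal-demand>; rel="blocked-by"')
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Blocked by legal order.\n")

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def fetch_status(url):
    """Return the HTTP status code for url, even for error responses."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

server = http.server.HTTPServer(("127.0.0.1", 0), BlockedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

status = fetch_status(f"http://127.0.0.1:{port}/any-page")
label = {451: "legal block", 403: "forbidden", 404: "not found"}.get(status, "other")
print(status, label)  # prints: 451 legal block
server.shutdown()
```

      A measurement tool crawling a list of URLs could tally 451 responses in exactly this way, turning otherwise opaque blocking into countable, machine-readable events.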

      Adoption of the new standard is optional, though at present there are no laws in India that prevent intermediaries from adopting it. Implementing a standardised machine-readable flag for censorship will go a long way in bolstering the accountability of ISPs, which have in the past blocked an entire domain instead of the specified URL. Adoption of the standard by ISPs will also improve the understanding of the burden imposed on intermediaries by censoring and filtering content, as presently there is no clarity on what constitutes compliance. Of course, censorious governments may prohibit the use of the code, for example by issuing an order that specifies not only that a page be blocked but also precisely which HTTP return code should be used. Such sanctions, however, should be viewed as evidence of systematic rights violations and totalitarian tendencies.

      In India, where access to software code repositories such as GitHub and SourceForge is routinely restricted, the need for such a code is obvious. The use of the code will improve confidence in blocking practices, allowing users to understand the grounds on which their right to information is being restricted. Improving transparency around censorship is the only way to build trust between the government and its citizens about the laws and policies applicable to internet content.

      Nature of Knowledge

      by Scott Mason — last modified Jan 30, 2016 11:42 AM

      Introduction

      In 2008 Chris Anderson infamously proclaimed the 'end of theory'. Writing for Wired Magazine, Anderson predicted that the coming age of Big Data would create a 'deluge of data' so large that the scientific methods of hypothesis, sampling and testing would be rendered 'obsolete' [1]. For him and others, the hidden patterns and correlations revealed through Big Data analytics enable us to produce objective and actionable knowledge about complex phenomena not previously possible using traditional methodologies. As Anderson himself put it, 'there is now a better way. Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot' [2].

      In spite of harsh criticism of Anderson's article from across the academy, his uniquely (dis)utopian vision of the scientific utility of Big Data has since become increasingly mainstream, with regular interventions from politicians and business leaders evangelising about Big Data's potentially revolutionary applications. Nowhere is this bout of data-philia more apparent than in India, where the government recently announced the launch of 'Digital India', a multi-million dollar project which aims to harness the power of public data to increase the efficiency and accessibility of public services [3]. In spite of the ambitious promises associated with Big Data, however, many theorists remain sceptical about its practical benefits and express concern about its potential implications for conventional scientific epistemologies. For them, the increased prominence of Big Data analytics in science does not signal a paradigmatic transition to a more enlightened data-driven age, but a hollowing out of the scientific method and an abandonment of causal knowledge in favour of shallow correlative analysis. In response, they emphasise the continued importance of theory and specialist knowledge to science, and warn against what they see as the uncritical adoption of Big Data in public policy-making [4]. In this article I will examine the challenges posed by Big Data technologies to established scientific epistemologies, as well as the possible implications of these changes for public policy-making. Beginning with an exploration of some of the ways in which Big Data is changing our understanding of scientific research and knowledge, I will argue that claims that Big Data represents a new paradigm of scientific inquiry are predicated upon a number of implicit assumptions about the nature of knowledge.
Through a critique of these assumptions I will highlight some of the potential risks that an over-reliance on Big Data analytics poses for public policy-making, before finally making the case for a more nuanced approach to Big Data, one which emphasises the continued importance of theory to scientific research.

      Big Data: The Fourth Paradigm?

      "Revolutions in science have often been preceded by revolutions in measurement".

      In his book The Structure of Scientific Revolutions, Thomas Kuhn describes scientific paradigms as 'universally recognized scientific achievements that, for a time, provide model problems and solutions for a community of researchers'[5]. Paradigms as such designate a field of intelligibility within a given discipline, defining what kinds of empirical phenomena are to be observed and scrutinized, the types of questions which can be asked of those phenomena, and how those questions are to be structured, as well as the theoretical frameworks within which the results can be analysed and interpreted. In short, they 'constitute an accepted way of interrogating the world and synthesizing knowledge common to a substantial proportion of researchers in a discipline at any one moment in time'[6]. Periodically, however, Kuhn argues, these paradigms can become destabilised by the development of new theories or the discovery of anomalies that cannot be explained through reference to the dominant paradigm. In such instances, Kuhn claims, the scientific discipline is thrown into a period of 'crisis', during which new ideas and theories are proposed and tested, until a new paradigm is established and gains acceptance from the community.

      More recently, the computer scientist Jim Gray adopted and developed Kuhn's concept of the paradigm shift, charting the history of science through the evolution of four broad paradigms: experimental science, theoretical science, computational science and exploratory science [7]. Unlike Kuhn, however, who proposed that paradigm shifts occur as the result of anomalous empirical observations which scientists are unable to account for within the existing paradigm, Gray suggested that transitions in scientific practice are in fact primarily driven by advances and innovations in methods of data collection and analysis. The emergence of the experimental paradigm, according to Gray, can therefore be traced back to ancient Greece and China, when philosophers began to describe their empirical observations using natural rather than spiritual explanations. Likewise, the transition to the theoretical paradigm of science can be located in the 17th century, during which time scientists began to build theories and models which made generalizations based upon their empirical observations. Thirdly, Gray identifies the emergence of a computational paradigm in the latter part of the 20th century, in which advanced techniques of simulation and computational modelling were developed to help solve equations and explore fields of inquiry, such as climate modelling, which would have been impossible using experimental or theoretical methods. Finally, Gray proposed that we are today witnessing a transition to a 'fourth paradigm of science', which he termed the exploratory paradigm. Although it also utilises advanced computational methods, unlike the previous computational paradigm, which developed programs based upon established rules and theories, Gray suggested that within this new paradigm scientists begin with the data itself, designing programs to mine enormous databases in search of correlations and patterns; in effect, using the data to discover the rules [8].

      The implications of this shift are potentially significant for the nature of knowledge production, and are already beginning to be seen across a wide range of sectors. In the retail sector, for example, data mining and algorithmic analysis are already being used to help predict items that a customer may wish to purchase based upon previous shopping habits[9]. Here, unlike with traditional research methodologies, the analysis does not presuppose or hypothesise a relationship between items which it then attempts to prove through a process of experimentation; instead, the relationships are identified inductively through the processing and reprocessing of vast quantities of data alone. By starting with the data itself, Big Data analysts circumvent the need for predictions or hypotheses about what one is likely to find; as Dyche observes, 'mining Big Data reveals relationships and patterns that we didn't even know to look for'[10]. Similarly, by focussing primarily on the search for correlations and patterns as opposed to causation, Big Data analysts also reject the need for interpretive theory to frame the results; instead, researchers claim, the outcomes are inherently meaningful and interpretable by anyone without the need for domain-specific or contextual knowledge. For example, Joh observes how Big Data is being used in policing and law enforcement to help make better decisions about the allocation of police resources. By looking for patterns in the crime data, police forces are able to make accurate predictions about the localities and times in which crimes are most likely to occur and dispatch their officers accordingly[11]. Such analysis, according to Big Data proponents, requires no knowledge of the cause of the crime, nor of the social or cultural context within which it is being perpetrated; instead, predictions and assessments are made purely on the basis of patterns and correlations identified within the historical data by statistical modelling.
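      The correlation-first logic described above can be sketched in a few lines. The toy example below (variable names and numbers are invented purely for illustration) scans every pair of variables in a small dataset for strong correlations without stating any hypothesis in advance.

```python
# Sketch of hypothesis-free correlation mining: rank every pair of variables
# by the strength of their correlation, with no model stated up front.
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily observations, one list per variable.
data = {
    "temperature": [30, 32, 35, 33, 31, 36],
    "ice_cream_sales": [110, 118, 132, 125, 114, 140],
    "burglaries": [12, 11, 15, 13, 12, 16],
}

# Rank all variable pairs by absolute correlation -- the data "speaks first".
pairs = sorted(
    ((a, b, pearson(data[a], data[b])) for a, b in combinations(data, 2)),
    key=lambda t: -abs(t[2]),
)
for a, b, r in pairs:
    print(f"{a} ~ {b}: r = {r:.2f}")
```

      Notably, such a scan ranks temperature against burglaries just as readily as temperature against ice cream sales, which is precisely why, as the article goes on to argue, the output still requires interpretation.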

      In summary then, Gray's exploratory paradigm represents a radical inversion of the deductive scientific method, allowing researchers to derive insights directly from the data itself without the use of hypothesis or theory. Thus it is claimed, by enabling the collection and analysis of datasets of unprecedented scale and variety Big Data allows analysts to 'let the data speak for itself'[12], providing exhaustive coverage of social phenomena, and revealing correlations that are inherently meaningful and interpretable by anyone without the need for specialised subject knowledge or theoretical frameworks.

      For Gray and others, this new paradigm is made possible only by the recent exponential increase in the generation and collection of data, as well as by the emergence of new forms of data science known collectively as "Big Data". For them, the 'deluge of data' produced by the increase in the number of internet-enabled devices, as well as the nascent development of the internet of things, presents scientists and researchers with unprecedented opportunities to utilise data in new and innovative ways to develop insights across a wide range of sectors, many of which would have been unimaginable even 10 years ago. Furthermore, advances in computational and statistical methods, as well as innovations in data visualization and methods of linking datasets, mean that scientists can now utilise the data available to its full potential, or as Professor Gary King quipped, 'Big Data is nothing compared to a big algorithm'[13].

      These developments in statistical and computational analysis, combined with the velocity, variety and quantity of data available to analysts, have therefore allowed scientists to pursue new types of research, generating new forms of knowledge and facilitating a radical shift in how we think about "science" itself. As Boyd and Crawford note, 'Big Data [creates] a profound change at the levels of epistemology and ethics. Big Data reframes key questions about the constitution of knowledge, the processes of research, how we should engage with information, and the nature and the categorization of reality . . . [and] stakes out new terrains of objects, methods of knowing, and definitions of social life'[14]. For many, these changes in the nature of knowledge production provide opportunities to improve decision-making, increase efficiency, and encourage innovation across a broad range of sectors, from healthcare and policing to transport and international development[15]. For others, however, many of the claims of Big Data are premised upon some questionable methodological and epistemological assumptions, some of which threaten to impoverish the scientific method and undermine scientific rigour [16].

      Assumptions of Big Data

      Given its bold claims, the allure of Big Data in both the public and private sectors is perhaps understandable. However, despite the radical and rapid changes to research practice and methodology, there has seemingly been a lack of reflexive and critical reflection concerning the epistemological implications of the research practices used in Big Data analytics. And yet implicit within this vision of the future of scientific inquiry lie a number of important and arguably problematic epistemological and ontological assumptions, most notably:

      - Big Data can provide comprehensive coverage of a phenomenon, capturing all relevant information.

      - Big Data does not require hypotheses, a priori theory, or models to direct the data collection or research questions.

      - Big Data analytics do not require theoretical framing in order to be interpretable. The data is inherently meaningful, transcends domain-specific knowledge, and can be understood by anyone.

      - Correlative knowledge is sufficient to make accurate predictions and guide policy decisions.

      For many, these assumptions are highly problematic and call into question the claims that Big Data makes about itself. I will now look at each one in turn, before proposing their possible implications for Big Data in policy-making.

      Firstly, whilst Big Data may appear to be exhaustive in its scope, it can only be considered so in the context of the particular ontological and methodological framework chosen by the researcher. No data set, however large, can capture all information relevant to a given phenomenon. Indeed, even if it were somehow possible to capture all relevant quantifiable data within a specific domain, Big Data analytics would still be unable to fully account for the multifarious variables which are unquantifiable or undatafiable. As such, Big Data does not provide an omniscient 'god's-eye view'; instead, much like any other scientific sample, it must be seen to provide the researcher with a singular and limited perspective from which he or she can observe a phenomenon and draw conclusions. It is important to recognise that this vantage point provides only one of many possible perspectives, and is shaped by the technologies and tools used to collect the data, as well as by the ontological assumptions of the researchers. Furthermore, as with any other scientific sample, it is also subject to sampling bias and is dependent upon the researcher to make subjective judgements about which variables are relevant to the phenomena being studied and which can be safely ignored.

      Secondly, the claim by Big Data analysts to be able to generate insights directly from the data signals a worrying divergence from the deductive scientific methods which have been hegemonic within the natural sciences for centuries. For Big Data enthusiasts such as Prensky, 'scientists no longer have to make educated guesses, construct hypotheses and models, and test them with data-based experiments and examples. Instead, they can mine the complete set of data for patterns that reveal effects, producing scientific conclusions without further experimentation'[17]. Whereas deductive reasoning begins with general statements or hypotheses and then proceeds to observe relevant data, equipped with certain assumptions about what should be observed if the theory is to be proven valid, inductive reasoning conversely begins with empirical observations of specific examples from which it attempts to draw general conclusions. The more data collected, the greater the probability that the general conclusions generated will be accurate; however, regardless of the quantity of observations, no amount of data can ever conclusively prove causality between two variables, since it is always possible that the conclusions may in future be falsified by an anomalous observation. For example, a researcher who had only ever observed white swans might reasonably draw the conclusion that 'all swans are white'; whilst they would be justified in making such a claim, it would nevertheless be comprehensively disproven the day a black swan was discovered. This is what David Hume called the 'problem of induction'[18], and it strikes at the foundation of Big Data's claims to be able to provide explanatory and predictive analysis of complex phenomena, since any projections made are reliant upon the 'principle of the uniformity of nature', that is, the assumption that a sequence of events will always occur as it has in the past.
      As a result, although Big Data may be well suited to providing detailed descriptive accounts of social phenomena, without theoretical grounding it remains unable to prove causal links between variables, and is therefore limited in its ability to provide robust explanatory conclusions or accurate predictions about future events.

      Finally, just as Big Data enthusiasts claim that theory and hypotheses are not needed to guide data collection, so too they insist that human interpretation or framing is no longer required for the processing and analysis of the data. Within this new paradigm, therefore, 'the data speaks for itself' [19], and specialised knowledge is not needed to interpret the results, which are now supposedly rendered comprehensible to anyone with even a rudimentary grasp of statistics. Furthermore, the results, we are told, are inherently meaningful, transcending culture, history and social context, and providing pure, objective facts uninhibited by philosophical or ideological commitments.

      Initially inherited from the natural sciences, this radical form of empiricism thus presupposes the existence of an objective social reality occupied by static and immutable entities whose properties are directly determinable through empirical investigation. In this way, Big Data reduces the role of social science to the perfunctory calculation and analysis of the mechanical processes of pre-formed subjects, in much the same way as one might calculate the movement of the planets or the interaction of balls on a billiard table. Whilst proponents of Big Data claim that such an approach allows them to produce objective knowledge, by cleansing the data of any kind of philosophical or ideological commitment, it nevertheless has the effect of restricting both the scope and character of social scientific inquiry; projecting onto the field of social research meta-theoretical commitments that have long been implicit in the positivist method, whilst marginalising those projects which do not meet the required levels of scientificity or erudition.

      This commitment to an empiricist epistemology and methodological monism is particularly problematic in the context of studies of human behaviour, where actions cannot be calculated and anticipated using quantifiable data alone. In such instances, a certain degree of qualitative analysis of social, historical and cultural variables may be required in order to make the data meaningful by embedding it within a broader body of knowledge. The abstract and intangible nature of these variables requires a great deal of expert knowledge and interpretive skill to comprehend. It is therefore vital that the knowledge of domain specific experts is properly utilized to help 'evaluate the inputs, guide the process, and evaluate the end products within the context of value and validity'[20].

      Despite these criticisms, however, Big Data is, perhaps unsurprisingly, becoming increasingly popular within the business community, lured by the promise of cheap and actionable scientific knowledge capable of making operations more efficient, reducing overheads and producing better, more competitive services. Perhaps most alarming from the perspective of Big Data's epistemological and methodological implications, however, is the increasingly prominent role Big Data is playing in public policy-making. As I will now demonstrate, whilst Big Data can offer useful inputs into public policy-making processes, the assumptions implicit within Big Data methodologies pose a number of risks to the effectiveness, as well as the democratic legitimacy, of public policy-making. Following an examination of these risks, I will argue for a more reflexive and critical approach to Big Data in the public sector.

      Big Data and Policy-Making: Opportunities and Risks

      In recent years, Big Data has begun to play an increasingly important role in public policy-making. Across the globe, government-funded projects designed to harvest and utilise vast quantities of public data are being developed to help improve the efficiency and performance of public services, as well as to better inform policy-making processes. At first glance, Big Data would appear to be the holy grail for policy-makers - enabling truly evidence-based policy-making, founded upon pure and objective facts, undistorted by political ideology or expedience. Furthermore, in an era of government debt and diminishing budgets, Big Data promises not only to produce more effective policy, but also to deliver on the seemingly impossible task of doing more with less, improving public services whilst simultaneously reducing expenditure.

      In the Indian context, the government's recently announced 'Digital India' project promises to harness the power of public data to help modernise India's digital infrastructure and increase access to public services. The use of Big Data is seen as central to the project's success. However, despite the commendable aspirations of Digital India, many commentators remain sceptical about the extent to which Big Data can truly deliver on its promises of better, more efficient public services, whilst others have warned of the risk to public policy of an uncritical and hasty adoption of Big Data analytics [21]. Here I argue that the epistemological and methodological assumptions implicit within the discourse around Big Data threaten to undermine the goal of evidence-based policy-making, and in the process widen already substantial digital divides.

      It has long been recognised that science and politics are deeply entwined. For many social scientists, the results of social research can never be entirely neutral, but are conditioned by the particular perspective of the researcher. As Sheila Jasanoff observed, 'Most thoughtful advisers have rejected the facile notion that giving scientific advice is simply a matter of speaking truth to power. It is well recognized that in thorny areas of public policy, where certain knowledge is difficult to come by, science advisers can offer at best educated guesses and reasoned judgments, not unvarnished truth' [22]. Nevertheless, 'unvarnished truth' is precisely what Big Data enthusiasts claim to be able to provide. For them, the capacity of Big Data to derive results and insights directly from the data, without any need for human framing, allows policy-makers to incorporate scientific knowledge directly into their decision-making processes without worrying about the 'philosophical baggage' usually associated with social scientific research.

      However, in order to be meaningful, all data requires a certain level of interpretative framing. As such, far from cleansing science of politics, Big Data simply shifts responsibility for the interpretation and contextualisation of results away from domain experts - who possess the requisite knowledge to make informed judgements regarding the significance of correlations - to bureaucrats and policy-makers, who are more likely to emphasise those results and correlations which support their own political agenda. Thus, whilst the discourse around Big Data may promote the notion of evidence-based policy-making, in reality the vast quantities of correlations generated by Big Data analytics simply broaden the range of 'evidence' from which politicians can choose to support their arguments; giving new meaning to Mark Twain's witticism that there are 'lies, damned lies, and statistics'.

      Similarly, for many, an over-reliance on Big Data analytics for policy-making risks producing public policy which is blind to the unquantifiable and intangible. As discussed above, Big Data's neglect of theory and contextual knowledge in favour of strict empiricism marginalises qualitative studies which emphasise the importance of traditional social scientific categories such as race, gender, and religion, in favour of a purely quantitative analysis of relational data. For many, however, consideration of issues such as gender, race, and religious sensitivity can be just as important to good public policy-making as quantitative data, helping to contextualise the insights revealed in the data and provide more explanatory accounts of social relations. They warn that neglect of such considerations as part of policy-making processes can have significant implications for the quality of the policies produced[23]. Firstly, although Big Data can provide unrivalled accounts of "what" people do, without a broader understanding of the social context in which they act, it fundamentally fails to deliver robust explanations of "why" they do it. This problem is especially acute in the case of public policy-making since, without any indication of the motivations of individuals, policy-makers have no basis upon which to intervene to incentivise more positive outcomes. Secondly, whilst Big Data analytics can help decision-makers to design more cost-effective policy, by, for example, ensuring better use of scarce resources, efficiency and cost-effectiveness are not the only metrics by which good policy can be judged. Public policy, regardless of the sector, must consider and balance a broad range of issues during the policy process, including matters such as race, gender and community relations.
      Normative and qualitative considerations of this kind are not subject to a simplistic 1-0 quantification, but instead require a great deal of contextual knowledge and insight to navigate successfully.

      Finally, to the extent that policy-makers are today attempting to harvest and utilise individual citizens' personal data as direct inputs into the policy-making process, Big Data-driven policy can, in a very narrow sense, be considered to offer a rudimentary form of direct democracy. At first glance this would appear to help democratise political participation, allowing public services to become automatically optimised to better meet the needs and preferences of citizens without the need for direct political participation. In societies such as India, however, where there exist high levels of inequality in access to information and communication technologies, there remain large discrepancies in the quantities of data produced by individuals. In a Big Data world in which every byte of data is collected, analysed and interpreted in order to make important decisions about public services, therefore, those who produce the greatest amounts of data are better placed to have their voices heard, whilst those who lack access to the means to produce data risk becoming disenfranchised, as policy-making processes become configured to accommodate the needs and interests of a privileged minority. Similarly, using user-generated data as the basis for policy decisions also leaves systems vulnerable to coercive manipulation. That is, once it has become apparent that a system has been automated on the basis of user inputs, groups or individuals may change their behaviour in order to achieve a certain outcome. Given these problems, it is essential that in seeking to utilise new data resources for policy-making we avoid an uncritical adoption of Big Data techniques, and instead, as I argue below, encourage a more balanced and nuanced approach to Big Data.

      Data-Driven Science: A more Nuanced Approach?

      Although an uncritical embrace of Big Data analytics is clearly problematic, it is not immediately obvious that a stubborn commitment to traditional knowledge-driven deductive methodologies would necessarily be preferable. Whilst deductive methods have formed the basis of scientific inquiry for centuries, the particular utility of this approach is largely derived from its ability to produce accurate and reliable results in situations where the quantities of data available are limited. In an era of ubiquitous data collection however, an unwillingness to embrace new methodologies and forms of analysis which maximise the potential value of the volumes of data available would seem unwise.

      For Kitchin and others, however, it is possible to reap the benefits of Big Data without compromising scientific rigour or the pursuit of causal explanations. Challenging the 'either/or' propositions which favour either scientific modelling and hypothesis or data correlations, Kitchin instead proposes a hybrid approach which utilises the combined advantages of inductive, deductive and so-called 'abductive' reasoning to develop theories and hypotheses directly from the data[24]. As Patrick W. Gross commented, 'In practice, the theory and the data reinforce each other. It's not a question of data correlations versus theory. The use of data for correlations allows one to test theories and refine them' [25].

Like the radical empiricism of Big Data, 'data-driven science', as Kitchin terms it, introduces an aspect of inductivism into the research design, seeking to develop hypotheses and insights 'born from the data' rather than 'born from theory'. Unlike the empiricist approach, however, the identification of patterns and correlations is not considered the ultimate goal of the research process. Instead these correlations simply form the basis for a new type of hypothesis generation, with more traditional deductive testing then used to assess the validity of the results. Put simply, rather than interpreting the data deluge as the 'end of theory', data-driven science attempts to harness its insights to develop new theories using alternative data-intensive methods of theory generation.
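This two-step structure lends itself to a compact illustration. The Python sketch below uses entirely synthetic data and hypothetical variable names; it is not drawn from Kitchin or any cited source, but simply illustrates the workflow the text describes: an inductive scan for correlations on an exploration sample, followed by conventional deductive testing of the resulting hypothesis on held-out data.

```python
# A minimal, hypothetical sketch of the two-step "data-driven science"
# workflow: inductive pattern-mining on one sample, then a deductive
# test of the resulting hypothesis on held-out data. The dataset and
# the planted signal (column 7) are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n, p = 1000, 50
X = rng.normal(size=(n, p))             # 50 candidate variables
y = 0.5 * X[:, 7] + rng.normal(size=n)  # only column 7 truly matters

# Split the data so that pattern-mining and testing use different samples.
X_explore, X_test = X[: n // 2], X[n // 2:]
y_explore, y_test = y[: n // 2], y[n // 2:]

# Inductive step: scan every variable for the strongest correlation
# with the outcome ("born from the data", not from theory).
corrs = [np.corrcoef(X_explore[:, j], y_explore)[0, 1] for j in range(p)]
candidate = int(np.argmax(np.abs(corrs)))

# Deductive step: treat "variable `candidate` is associated with y" as a
# hypothesis and test it on the held-out sample with a permutation test,
# so the earlier data-dredging cannot inflate the result.
observed = abs(np.corrcoef(X_test[:, candidate], y_test)[0, 1])
perm = [abs(np.corrcoef(rng.permutation(X_test[:, candidate]), y_test)[0, 1])
        for _ in range(1000)]
p_value = (1 + sum(r >= observed for r in perm)) / (1 + len(perm))

print("selected variable:", candidate, "held-out p-value:", p_value)
```

The split between exploration and test data is what keeps the inductive step from contaminating the deductive one: a correlation found by scanning fifty variables proves nothing by itself, but it becomes a testable hypothesis once confronted with data it has never seen.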

Furthermore, unlike the new empiricism, data is not collected indiscriminately from every available source in the hope that the sheer size of the dataset will unveil some hidden pattern or insight. Instead, in keeping with more conventional scientific methods, various sampling techniques are utilised, 'underpinned by theoretical and practical knowledge and experience as to whether technologies and their configurations will capture or produce appropriate and useful research material'[26]. Similarly, analysis of the data once collected does not take place within a theoretical vacuum, nor are all relationships deemed to be inherently meaningful; instead existing theoretical frameworks and domain-specific knowledge are used to help contextualise and refine the results, identifying those patterns that can be dismissed as well as those that require closer attention.

Thus for many, data-driven science provides a more nuanced approach to Big Data, allowing researchers to harness the power of new sources of data whilst maintaining the pursuit of explanatory knowledge. In doing so, it can help to avoid the risks of an uncritical adoption of Big Data analytics for policy-making, providing new insights while retaining the 'regulating force of philosophy'.

      Conclusion

Since the publication of The Structure of Scientific Revolutions, Kuhn's notion of the paradigm has been widely criticised for producing a homogenous and overly smooth account of scientific progress, one which ignores the clunky and often accidental nature of scientific discovery and innovation. Indeed, the notion of the 'paradigm shift' is in many ways typical of a self-indulgent and somewhat egotistical tendency amongst many historians and theorists to interpret events contemporaneous to themselves as being of great historical significance. Historians throughout the ages have always perceived themselves as living through periods of great upheaval and transition. In actual fact, as many have noted, history, and the history of science in particular, rarely advances in a linear or predictable way, nor can progress, when it does occur, be so easily attributed to specific technological innovations or theoretical developments. As such we should remain very sceptical of claims that Big Data represents a historic and paradigmatic shift in scientific practice. Such claims exhibit more than a hint of technological determinism and often ignore the substantial limitations of Big Data analytics. In contrast to these claims, it is important to note that technological advances alone do not drive scientific revolutions; the impact of Big Data will ultimately depend on how we decide to use it as well as the types of questions we ask of it.

Big Data holds the potential to augment and support existing scientific practices, creating new insights and helping to better inform public policy-making processes. However, contrary to the hyperbole surrounding its development, Big Data does not represent a silver bullet for intractable social problems, and if adopted uncritically and without consideration of its consequences it risks not only diminishing scientific knowledge but also jeopardising our privacy and creating new digital divides. It is critical, therefore, that we see through the hyperbole and headlines to reflect critically on the epistemological consequences of Big Data as well as its implications for policy-making, a task which, in spite of the pace of technological change, is unfortunately only just beginning.

      Bibliography

      Anderson C (2008) The end of theory: The data deluge makes the scientific method obsolete. Wired, 23 June 2008. Available at: http://www.wired.com/science/discoveries/magazine/16-07/pb_theory (accessed 31 October 2015).

      Bollier D (2010) The Promise and Peril of Big Data. The Aspen Institute. Available at: http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf (accessed 19 October 2015).

Bowker, G., (2014) The Theory-Data Thing, International Journal of Communication 8, 1795-1799

      Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679.

      Cukier K (2010) Data, data everywhere. The Economist, 25 February (accessed 5 November 2015).

      Department of Electronics and Information Technology (2015) Digital India, [ONLINE] Available at: http://www.digitalindia.gov.in/. [Accessed 13 December 15].

Dyche J (2012) Big data 'Eurekas!' don't just happen, Harvard Business Review Blog. 20 November. Available at: http://blogs.hbr.org/cs/2012/11/eureka_doesnt_just_happen.html

      Hey, T., Tansley, S., and Tolle, K (eds)., (2009) The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond: Microsoft Research, pp. xvii-xxxi.

      Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

      Hume, D., (1748), Philosophical Essays Concerning Human Understanding (1 ed.). London: A. Millar.

      Jasanoff, S., (2013) Watching the Watchers: Lessons from the Science of Science Advice, Guardian 8 April 2013, available at: http://www.theguardian.com/science/political-science/2013/apr/08/lessons-science-advice

Joh, E (2014) 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 89: 35, https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      Kuhn T (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

      Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

      McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

      Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/

Prensky M (2009) H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate 5(3), Available at: http://www.innovateonline.info/index.php?view=article&id=705

      Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014)

      Shaw, J., (2014) Why Big Data is a Big Deal, Harvard Magazine March-April 2014, available at: http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal



      [1] Anderson, C (2008) "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", WIRED, June 23 2008, www.wired.com/2008/06/pb-theory/

[2] Ibid.

      [3] Department of Electronics and Information Technology (2015) Digital India, [ONLINE] Available at: http://www.digitalindia.gov.in/. [Accessed 13 December 15].

[4] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679; Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [5] Kuhn T (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.

[6] Ibid.

      [7] Hey, T., Tansley, S., and Tolle, K (eds)., (2009) The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond: Microsoft Research, pp. xvii-xxxi.

[8] Ibid.

[9] Dyche J (2012) Big data 'Eurekas!' don't just happen, Harvard Business Review Blog. 20 November. Available at: http://blogs.hbr.org/cs/2012/11/eureka_doesnt_just_happen.html

[10] Ibid.

[11] Joh, E (2014) 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 89: 35, https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1

      [12] Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

      [13] King quoted in Shaw, J., (2014) Why Big Data is a Big Deal, Harvard Magazine March-April 2014, available at: http://harvardmagazine.com/2014/03/why-big-data-is-a-big-deal

      [14] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679.

[15] Joh, E (2014) 'Policing by Numbers: Big Data and the Fourth Amendment', Washington Law Review, Vol. 89: 35, https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1319/89WLR0035.pdf?sequence=1; Raghupathi, W., & Raghupathi, V. Big data analytics in healthcare: promise and potential. Health Information Science and Systems, (2014); Morris, D. Big data could improve supply chain efficiency-if companies would let it, Fortune, August 5 2015, http://fortune.com/2015/08/05/big-data-supply-chain/; Hilbert, M. Big Data for Development: From Information- to Knowledge Societies (2013). Available at SSRN: http://ssrn.com/abstract=2205145

[16] Boyd D and Crawford K (2012) Critical questions for big data. Information, Communication and Society 15(5): 662-679; Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

[17] Prensky M (2009) H. sapiens digital: From digital immigrants and digital natives to digital wisdom. Innovate 5(3), Available at: http://www.innovateonline.info/index.php?view=article&id=705

      [18] Hume, D., (1748), Philosophical Essays Concerning Human Understanding (1 ed.). London: A. Millar.

      [19] Mayer-Schonberger V and Cukier K (2013) Big Data: A Revolution that Will Change How We Live, Work and Think. London: John Murray

      [20] McCue, C., Data Mining and Predictive Analysis: Intelligence Gathering and Crime Analysis, Butterworth-Heinemann, (2014)

[21] Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

      [22] Jasanoff, S., (2013) Watching the Watchers: Lessons from the Science of Science Advice, Guardian 8 April 2013, available at: http://www.theguardian.com/science/political-science/2013/apr/08/lessons-science-advice

[23] Bowker, G., (2014) The Theory-Data Thing, International Journal of Communication 8, 1795-1799

[24] Kitchin, R (2014) Big Data, new epistemologies and paradigm shifts, Big Data & Society, April-June 2014: 1-12

[25] Gross quoted in Ibid.

[26] Ibid.

      Facebook's Fall from Grace: Arab Spring to Indian Winter

      by Sunil Abraham last modified Feb 11, 2016 03:51 PM
Facebook’s Free Basics has been permanently banned in India! The Indian telecom regulator, TRAI, has issued the world’s most stringent net neutrality regulation! To be more accurate, there is more to come from TRAI in terms of net neutrality regulations, especially on throttling and blocking, but if the discriminatory tariff regulation is anything to go by, we can expect quite a tough regulatory stance against other net neutrality violations as well.

      The article was published in First Post on February 9, 2016. It can be read here.


      Even the regulations it cites in the Explanatory Memorandum don’t go as far as it does. The Dutch regulation will have to be reformulated in light of the new EU regulations and the Chilean regulator has opened the discussion on an additional non-profit exception by allowing Wikipedia to zero-rate its content in partnership with telecom operators.

Bravo to Nikhil Pahwa, Apar Gupta, Raman Chima, Kiran Jonnalagadda and the thousands of volunteers at Save The Internet and associated NGOs, movements, entrepreneurs and activists who mobilized millions of Indians to stand up and petition TRAI to preserve some of the foundational underpinnings of the Internet. And finally bravo to Facebook for having completely undermined any claim to responsible stewardship of our information society through its relentless, shrill and manipulative campaign filled with staggeringly preposterous lies. Having completely lost the trust of the Indian public and policy-makers, Facebook has only itself to blame for polarizing what was quite a nuanced debate in India through its hyperbole, and for setting the stage for this firm action by TRAI.

And most importantly, bravo to RS Sharma and his team at TRAI for the notification of the “Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016”, aka the differential pricing regulations. The regulation exemplifies six regulatory best practices that I briefly explore below.

Transparency and Agility: Two months from start to finish, what an amazing turnaround! TRAI was faced with an unprecedented public outcry as well as comments and counter-comments. Despite visible and invisible pressures, from the initial temporary ban on Free Basics onwards, RS Sharma’s calm, collected and clear interactions with different stakeholders helped him regain the credibility that was lost with the publication of the earlier consultation paper on the Regulatory Framework for Over-the-top (OTT) services. Despite being completely snowed under electronically by what Rohin Dharmakumar dubbed Facebook’s DDOS attack, he gave Facebook one last opportunity to do the right thing, which it of course spectacularly blew.

Brevity and Clarity: The regulation fits onto three A4-sized pages and is a joy to read. Clarity is often a result of brevity, but not always. At the core of this regulation is a single sentence which prohibits discriminatory tariffs on the basis of content unless the tariff applies to a “data service over closed electronic communications network”. And unlike many other laws and regulations, this regulation has only one exemption from the prohibition on offering or charging discriminatory tariffs: for “emergency services” or during a “grave public emergency”. Even the best lawyers will find it difficult to drive trucks through that one. And even if imaginative engineers architect a technical circumvention, TRAI says “if such a closed network is used for the purpose of evading these regulations, the prohibition will nonetheless apply”. Again, a clear signal that the spirit is more important than the letter of the regulation when it comes to enforcement.

Certainty and Equity: Referencing the noted scholar Barbara van Schewick, TRAI explains that a case-by-case approach based on principles [standards] or rules would “fail to provide much needed certainty to industry participants… service providers may refrain from deploying network technology” and perversely “lead to further uncertainty as service providers undergoing [the] investigation would logically try to differentiate their case from earlier precedents”. Our submission from the Centre for Internet and Society had called for more exemptions, but TRAI went with a much cleaner solution as it did not want to provide “a relative advantage to well-financed actors and will tilt the playing field against those who do not have the resources to pursue regulatory or legal actions”.

What next? Hopefully the telecom operators and Facebook will have the grace to abide by the regulation without launching a legal challenge. And hopefully TRAI will issue equally clear regulations on throttling and blocking to conclude the “Regulatory Framework for Over-the-top Services” consultation process. Critically, TRAI must forbear from introducing any additional regulatory burdens on OTTs, a.k.a. Internet companies, based on unfounded allegations of regulatory arbitrage. There are some legitimate concerns around issues like taxation and liability, but those have to be addressed by other arms of the government. To address the digital divide, there are also issues outside net neutrality, such as shared spectrum, unlicensed spectrum and shared backhaul infrastructure, that TRAI must prioritize for regulation and deregulation.

      Without doubt other regulators from the global south will be inspired by India’s example and will hopefully take firm steps to prevent the rise of additional and unnecessary gatekeepers and gatekeeping practices on the Internet. The democratic potential of the Internet must be preserved through enlightened and appropriate regulation informed by principles and evidence.


The writer is Executive Director, Centre for Internet and Society, Bengaluru. He says CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world.

      Database on Big Data and Smart Cities International Standards

      by Vanya Rakesh last modified Feb 11, 2016 03:49 PM
      The Centre for Internet and Society is in the process of mapping international standards specifically around Big Data, IoT and Smart Cities. Here is a living document containing a database of some of these key globally accepted standards.

1. International Organisation for Standardization: ISO/IEC JTC 1 Working group on Big Data (WG 9)

      ● Background

      - The International Organization for Standardization /International Electrotechnical Commission (ISO/IEC) Joint Technical Committee (JTC) 1, Information Technology announced the creation of a Working Group (WG) focused on standardization in connection with big data.

      - JTC 1 is the standards development environment where experts come together to develop worldwide standards on Information and Communication Technology (ICT) for integrating diverse and complex ICT technologies.[1]

- The American National Standards Institute (ANSI) holds the secretariat to JTC 1, and the ANSI-accredited U.S. Technical Advisory Group (TAG) Administrator to JTC 1 is the InterNational Committee for Information Technology Standards (INCITS)[2], an ANSI member and accredited standards developer (ASD). INCITS has formed a technical committee on Big Data to serve as the U.S. TAG to JTC 1/WG 9 on Big Data, pending approval of a New Work Item Proposal (NWIP). INCITS/Big Data will address standardization in the areas assigned to JTC 1/WG 9. [3]

      - Under U.S. leadership, WG 9 on Big Data will serve as the focus of JTC 1's big data standardization program.

      ● Objective

- Identify standardization gaps.

      - Develop foundational standards for Big Data.

      - Develop and maintain liaisons with all relevant JTC 1 entities

      - Grow the awareness of and encourage engagement in JTC 1 Big Data standardization efforts within JTC 1. [4]

      ● Status

- JTC 1 has appointed Mr. Wo Chang to serve as Convenor of the JTC 1 Working Group on Big Data.

      - The WG has set up a Study Group on Big Data.

      2. International Organisation for Standardization: ISO/IEC JTC 1 Study group on Big Data

      ● Background

      - The ISO/IEC JTC1 Study Group on Big Data (JTC1 SGBD) was created by Resolution 27 at the November, 2013 JTC1 Plenary at the request of the USA and other national bodies for consideration of Big Data activities across all of JTC 1.

- A Study Group (SG) is an ISO mechanism by which the convener of a Working Group (WG) under a sub-committee appoints a smaller group of experts to do focused work in a specific area, concentrating attention on a major topic and expanding the manpower of the committee.

      - The goal of an SG is to create a proposal suitable for consideration by the whole WG, and it is the WG that will then decide whether and how to progress the work.[5]

      ● Objective

JTC 1 establishes a Study Group on Big Data for consideration of Big Data activities across all of JTC 1 with the following objectives:

- Mapping the existing landscape: Map the existing ICT landscape of key technologies and relevant standards/models/studies/use cases and scenarios for Big Data from JTC 1, ISO, IEC and other standards-setting organizations,

- Identify key terms: Identify key terms and definitions commonly used in the area of Big Data,

- Assess the status of Big Data standardization: Assess the current status of Big Data standardization and market requirements, identify standards gaps, and propose standardization priorities to serve as a basis for future JTC 1 work, and

      - Provide a report with recommendations and other potential deliverables to the 2014 JTC 1 Plenary. [6]

      ● Current Status

- The study group released a preliminary report in 2014, which can be accessed here: http://www.iso.org/iso/big_data_report-jtc1.pdf.

3. The National Institute of Standards and Technology Big Data Interoperability Framework:

      ● Background

      - NIST is leading the development of a Big Data Technology Roadmap which aims to define and prioritize requirements for interoperability, portability, reusability, and extensibility for big data analytic techniques and technology infrastructure to support secure and effective adoption of Big Data.

- To help develop the ideas in the Big Data Technology Roadmap, NIST created the Public Working Group for Big Data, which released seven volumes of the Big Data Interoperability Framework on September 16, 2015.[7]

      ● Objective

      - To advance progress in Big Data, the NIST Big Data Public Working Group (NBD-PWG) is working to develop consensus on important, fundamental concepts related to Big Data.

      ● Status

- The results are reported in the NIST Big Data Interoperability Framework series of volumes. Under the framework, seven volumes have been released by NIST, available here: http://bigdatawg.nist.gov/V1_output_docs.php

      4. IEEE Standards Association

      ● Background:

- The IEEE Standards Association has introduced a number of standards related to big-data applications.

      ● Status:

      The following standard is under development:

      - IEEE P2413

      "IEEE Standard for an Architectural Framework for the Internet of Things (IoT)" defines the relationships among devices used in industries, including transportation and health care. It also provides a blueprint for data privacy, protection, safety, and security, as well as a means to document and mitigate architecture divergence.[8]

      5. ITU

      ● Background:

- The International Telecommunications Union (ITU) has announced its first standard for big data services, Recommendation ITU-T Y.3600 "Big data - cloud computing based requirements and capabilities", recognizing the need for strong technical standards, given the growth of big data, to ensure that processing tools are able to achieve powerful results in the areas of collection, analysis, visualization, and more.[9]

      ● Objective:

- Recommendation Y.3600 provides requirements, capabilities and use cases of cloud computing based big data, as well as its system context. Cloud computing based big data provides the capabilities to collect, store, analyze, visualize and manage varieties of large-volume datasets which cannot be rapidly transferred and analysed using traditional technologies.[10]

      - It also outlines how cloud computing systems can be leveraged to provide big-data services.

      ● Status:

- The standard was released in 2015 and is available here: http://www.itu.int/rec/T-REC-Y.3600-201511-I.

      Smart cities

      1. ISO Standards on Smart Cities

      ● Background:

- ISO, the International Organization for Standardization, established a strategic advisory group for Smart Cities in 2014, comprising a wide range of international experts, to advise ISO on how to coordinate current and future Smart City standardization activities, in cooperation with other international standards organizations, to benefit the market.[11]

      - Seven countries, China, Germany, UK, France, Japan, Korea and USA, are currently involved in the research.

      ● Objective:

- The main aims are to formulate a definition of a Smart City

      - Identify current and future ISO standards projects relating to Smart Cities

- Examine the involvement of potential stakeholders, city requirements and potential interface problems. [12]

      ● Status:

- ISO/TC 268, which is focused on sustainable development in communities, has one working group developing city indicators and another developing metrics for smart community infrastructures. In early 2016 this committee will be joined by a new IEC systems committee. The first standard produced by ISO/TC 268 is ISO/TR 37150:2014.

      - ISO/TR 37150:2014 Smart community infrastructures -- Review of existing activities relevant to metrics: this standard provides a review of existing activities relevant to metrics for smart community infrastructures. The concept of smartness is addressed in terms of performance relevant to technologically implementable solutions, in accordance with sustainable development and resilience of communities, as defined in ISO/TC 268. ISO/TR 37150:2014 addresses community infrastructures such as energy, water, transportation, waste and information and communications technology (ICT). It focuses on the technical aspects of existing activities which have been published, implemented or discussed. Economic, political or societal aspects are not analyzed in ISO/TR 37150:2014.[13]

- ISO 37120:2014 provides city leaders and citizens with a set of clearly defined city performance indicators and a standard approach for measuring each. Though some indicators will be more helpful for some cities than others, cities can now consistently apply these indicators and accurately benchmark their city services and quality of life against other cities.[14] This new international standard was developed using the framework of the Global City Indicators Facility (GCIF), which has been extensively tested by more than 255 cities worldwide. This is a demand-led standard, driven and created by cities, for cities. ISO 37120 defines and establishes definitions and methodologies for a set of indicators to steer and measure the performance of city services and quality of life. The standard includes a comprehensive set of 100 indicators - of which 46 are core - that measure a city's social, economic, and environmental performance. [15]

The GCIF global network supports the newly constituted World Council on City Data - a sister organization of the GCI/GCIF - which allows for independent, third-party verification of ISO 37120 data.[16]

- ISO/TS 37151 and ISO/TR 37152 Smart community infrastructures -- Common framework for development & operation: outlines 14 categories of basic community needs (from the perspective of residents, city managers and the environment) to measure the performance of smart community infrastructures. These are typical community infrastructures, like energy, water, transportation, waste and information and communication technology systems, which have been optimized with sustainable development and resilience in mind. [17] The committee responsible for this document is ISO/TC 268, Sustainable development in communities, Subcommittee SC 1, Smart community infrastructures. The objective is to develop international consensus on harmonised metrics to evaluate the smartness of key urban infrastructure.[18]

- ISO 37101 Sustainable development of communities -- Management systems -- Requirements with guidance for resilience and smartness: By setting out requirements and guidance to attain sustainability, with the support of methods and tools including smartness and resilience, it can help communities improve in a number of areas, such as: developing holistic and integrated approaches instead of working in silos (which can hinder sustainability), fostering social and environmental changes, improving health and wellbeing, encouraging responsible resource use, and achieving better governance. [19] The objective is to develop a Management System Requirements Standard reflecting consensus on an integrated, cross-sector approach drawing on existing standards and best practices.

- ISO 37102 Sustainable development & resilience of communities -- Vocabulary. The objective is to establish a common set of terms and definitions for standardization in sustainable development, resilience and smartness in communities, cities and territories, since there is a pressing need for harmonization and clarification. This would provide a common language for all interested parties and stakeholders at the national, regional and international levels and would improve the ability to conduct benchmarks and to share experiences and best practices.

- ISO/TR 37121 Inventory & review of existing indicators on sustainable development & resilience in cities: a common set of indicators usable by every city in the world and covering most issues related to sustainability, resilience and quality of life in cities. [20]

      - ISO/TR 12859:2009 gives general guidelines to developers of intelligent transport systems (ITS) standards and systems on data privacy aspects and associated legislative requirements for the development and revision of ITS standards and systems. [21]

2. International Organisation for Standardization: ISO/IEC JTC 1 Working group on Smart Cities (WG 11)

      ● Background:

- The WG serves as the focus of and proponent for JTC 1's Smart Cities standardization program, and works to develop foundational standards for the use of ICT in Smart Cities - including the Smart City ICT Reference Framework and an Upper Level Ontology for Smart Cities - to guide Smart Cities efforts throughout JTC 1, upon which other standards can be developed.[22]

      ● Objective:

      - To develop a set of ICT related indicators for Smart Cities in collaboration with ISO/TC 268.

      - Identify JTC 1 (and other organization) subgroups developing standards and related material that contribute to Smart Cities.

      - Grow the awareness of, and encourage engagement in, JTC 1 Smart Cities standardization efforts within JTC 1.

      ● Status

      - Ms Yuan Yuan is the Convenor of this Working group.

      - The purpose was to provide a report with recommendations to the JTC 1 Plenary in the year 2014, to which a preliminary report was submitted. [23]

      3. International Organisation for Standardization: ISO/IEC JTC 1 Study Group (SG1) on Smart Cities

      ● Background:

- The Study Group (SG) on Smart Cities was established in 2013.[24] SG 1 will explicitly consider the work going on in the following committees: ISO/TMB/AG on Smart Cities, IEC/SEG 1, ITU-T/FG SSC and ISO/TC 268. [25]

      ● Objective :

      - To examine the needs and potentials for standardization in this area.

      ● Status:

      - SG 1 is paying particular attention to monitoring cloud computing activities, which it sees as the key element of the Smart Cities infrastructure. DIN's Information Technology and Selected IT Applications Standards Committee (NIA (www.nia.din.de)) is formally responsible for ISO/IEC JTC1 /SG 1, but an autonomous national mirror committee on Smart Cities does not yet exist and the work is being overseen by DIN's Smart Grid steering body. [26]

      - A preliminary report was released in 2014, available at http://www.iso.org/iso/smart_cities_report-jtc1.pdf

      4. ITU

      ● Background:

      - ITU members have established an ITU-T Study Group titled "ITU-T Study Group 20: IoT and its applications, including smart cities and communities" [27]

      - ITU-T has also established a Focus Group on Smart Sustainable Cities (FG-SSC).

      ● Objective:

      - The study group will address the standardization requirements of Internet of Things (IoT) technologies, with an initial focus on IoT applications in smart cities.

      - The focus group shall assess the standardization requirements of cities aiming to boost their social, economic and environmental sustainability through the integration of information and communication technologies (ICTs) in their infrastructures and operations.

      - The Focus Group will act as an open platform for smart-city stakeholders - such as municipalities; academic and research institutes; non-governmental organizations (NGOs); and ICT organizations, industry forums and consortia - to exchange knowledge in the interests of identifying the standardized frameworks needed to support the integration of ICT services in smart cities.[28]

      ● Status:

      - The study group will develop standards that leverage IoT technologies to address urban-development challenges.

      - The FG-SSC concluded its work in May 2015 by approving 21 Technical Specifications and Reports. [29]

      - So far, ITU-T SG 5 FG-SSC has issued the following reports- Technical report "An overview of smart sustainable cities and the role of information and communication technologies", Technical report "Smart sustainable cities: an analysis of definitions", Technical report "Electromagnetic field (EMF) considerations in smart sustainable cities", Technical specifications "Overview of key performance indicators in smart sustainable cities", Technical report "Smart water management in cities".[30]

      5. PRIPARE Project :

      ● Background:

      - The 7001 - PRIPARE Smart City Strategy aims to ensure that ICT solutions integrated in EIP smart cities will be compliant with future privacy regulation.

      - PRIPARE aims to develop a privacy and security-by-design software and systems engineering methodology, using the combined expertise of the research community and taking into account multiple viewpoints (advocacy, legal, engineering, business).

      ● Objective:

      - The mission of PRIPARE is to facilitate the application of a privacy- and security-by-design methodology that will contribute to unhindered usage of the Internet in the face of disruptions, censorship and surveillance; to support its practice by the ICT research community in preparation for industry practice; and to foster a risk-management culture through educational material targeted at a diversity of stakeholders.

      ● Status:

      - Liaison is currently on-going so that it becomes a standard (OASIS and ISO).[31]

      6. BSI-UK

      ● Background:

      - In the UK, the British Standards Institution (BSI) has been commissioned by the UK Department of Business, Innovation and Skills (BIS) to conceive a Smart Cities Standards Strategy to identify vectors of smart city development where standards are needed.

      - The standards would be developed through a consensus-driven process under the BSI to ensure good practice is shared between all the actors. [32]

      ● Objective:

      The BIS launched the City's Standards Institute to bring together cities and key industry leaders and innovators:

      - To work together in identifying the challenges facing cities,

      - To provide solutions to common problems, and

      - To define the future of smart city standards.[33]

      ● Status:

      The following standards and publications help address various issues for a city to become a smart city:

      - The development of a standard on Smart city terminology (PAS 180)

      - The development of a Smart city framework standard (PAS 181)

      - The development of a Data concept model for smart cities (PAS 182)

      - A Smart city overview document (PD 8100)

      - A Smart city planning guidelines document (PD 8101)

      - BS 8904 Guidance for community sustainable development provides a decision-making framework that helps set objectives in response to the needs and aspirations of city stakeholders

      - BS 11000 Collaborative relationship management

      - BSI BIP 2228:2013 Inclusive urban design - A guide to creating accessible public spaces.

      7. Spain

      ● Background:

      - AENOR, the Spanish standards developing organization (SDO), has issued two new standards on smart cities: the UNE 178303 and UNE-ISO 37120. These standards joined the already published UNE 178301.

      ● Objective:

      - The texts, prepared by the Technical Committee of Standardization of AENOR on Smart Cities (AEN / CTN 178) and sponsored by the SETSI (Secretary of State for Telecommunications and Information Society of the Ministry of Industry, Energy and Tourism), aim to encourage the development of a new model of urban services management based on efficiency and sustainability.

      ● Status:

      Some of the standards that have been developed are:

      - UNE 178301 on Open Data evaluates the maturity of open data created or held by the public sector so that its reuse is provided in the field of Smart Cities.

      - UNE 178303 establishes the requirements for proper management of municipal assets.

      - UNE-ISO 37120 which collects the international urban sustainability indicators.

      - Following the publication of these standards, 12 other draft standards on Smart Cities have just been made public, most of them corresponding to public services such as water, electricity and telecommunications, and multiservice city networks. [34]

      8. China

      ● Background:

      Several national standardization committees and consortia have started standardization work on Smart Cities, including:

      - China National IT Standardization TC (NITS),

      - China National CT Standardization TC,

      - China National Intelligent Transportation System Standardization TC,

      - China National TC on Digital Technique of Intelligent Building and Residence Community of the Standardization Administration, and the China Strategic Alliance of Smart City Industrial Technology Innovation[35]

      ● Objective:

      - In the year 2014, all the ministries involved in building smart cities in China joined with the Standardization Administration of China to create working groups whose job is to manage and standardize smart city development, though their activities have not been publicized. [36]

      ● Status:

      - China will continue to promote international standards in building smart cities and improve the competitiveness of its related industries in global market.

      - Also, China's Standardization Administration has joined hands with National Development and Reform Commission, Ministry of Housing and Urban-Rural Development and Ministry of Industry and Information Technology in establishing and implementing standards for smart cities.

      - When building smart cities, the country will adhere to the ISO 37120 and by the year 2020, China will establish 50 national standards on smart cities. [37]

      9. Germany

      ● Background :

      - As members of the European Innovation Partnership (EIP) for Smart Cities and Communities, DKE (German Commission for Electrical, Electronic & Information Technologies) and DIN (German Institute for Standardization) have developed a joint roadmap and Smart Cities recommendations for action in Germany.

      ● Objective:

      - Its purpose is to highlight the need for standards and to serve as a strategic template for national and international standardization work in the field of smart city technology.

      - The Standardization Roadmap highlights the main activities required to create smart cities. [38]

      ● Status:

      - An updated version of the standardization roadmap was released in the year 2015. [39]

      10. Poland

      ● Background:

      - A coordination group on Smart and Sustainable Cities and Communities (SSCC) was set up in the beginning of 2014 to monitor any national standardization activities.

      ● Objective:

      - It was decided to put forward a proposal to form a group at the Polish Committee for Standardization (PKN) providing recommendations for smart sustainable city standardization in Poland.

      ● Status:

      It has two thematic groups:

      - GT 1-2 on terminology and Technical Bodies in PKN. Its scope covers a collection of English terms and their Polish equivalents related to smart and sustainable development of cities and communities, to allow better communication among various smart city stakeholders. This includes preparing the list of Technical Bodies (OT) in PKN involved in standardization activities related to specific aspects of smart and sustainable local development, and making proposals concerning the allocation of standardization work to the relevant OT in PKN.

      - GT 3 for gathering information and the development and implementation of a work programme. Its scope includes identifying stakeholders in Poland and gathering information on any national "smart city" initiatives having an impact on environment-friendly development, sustainability, and liveability of a city. The group is also tasked with developing a work programme for GZ 1 based on identified priorities for Poland. Finally, it aims to communicate and disseminate activities so as to make the results of GZ 1 visible. [40]

      11. Europe

      ● Background:

      - In 2012, the European standardization organizations CEN and CENELEC founded the Smart and Sustainable Cities and Communities Coordination Group (SSCC-CG), which is a Coordination Group established to coordinate standardization activities and foster collaboration around standardization work. [41]

      ● Objective:

      - The aim of the CEN-CENELEC-ETSI (SSCC-CG) is to coordinate and promote European standardization activities relating to Smart Cities and to advise the CEN and CENELEC (Technical) and ETSI Boards on standardization activities in the field of Smart and Sustainable Cities and Communities.

      - The scope of the SSCC-CG is to advise on European interests and needs relating to standardization on Smart and Sustainable cities and communities.

      ● Status:

      - Originally conceived to be completed by the end of 2014, SSCC-CG's mandate has been extended by the European standards organizations CEN, CENELEC and ETSI by a further two years and will run until the end of 2016.[42]

      - The SSCC-CG does not develop standards, but reports directly to the management boards of the standardization organizations and plays an advisory role. Current members of the SSCC-CG include representatives of the relevant technical committees, the CEN/CENELEC secretariat, the European Commission, the European associations and the national standardization organizations.[43]

      - CEN/CENELEC/ETSI Joint Working Group on Standards for Smart Grids: The aim of this document is to provide a strategic report outlining the standardization requirements for implementing the European vision of smart grids, taking particular account of the initiatives of the European Commission's Smart Grids Task Force. It provides an overview of standards, current activities, fields of action, international cooperation and strategic recommendations.[44]

      12. Singapore

      ● Background:

      - In the year 2015, SPRING Singapore, the Infocomm Development Authority of Singapore (IDA) and the Information Technology Standards Committee (ITSC), under the purview of the Singapore Standards Council (SSC), have laid out an Internet of Things (IoT) Standards Outline in support of Singapore's Smart Nation initiative.

      ● Objective:

      - Recognising the importance of standards in laying the foundation for a nation empowered by big data, analytics technology and sensor networks, in light of Singapore's vision of becoming a Smart Nation.

      ● Status:

      Three types of standards - sensor network standards, IoT foundational standards and domain-specific standards - have been identified under the IoT Standards Outline. Singapore actively participates in the ISO Technical Committee (TC) working on smart city standards.[45]


      [1] ISO/IEC JTC 1, Information Technology, http://www.iso.org/iso/jtc1_home.html

      [2] The InterNational Committee for Information Technology Standards, JTC 1 Working Group on Big Data, http://www.incits.org/committees/big-data

      [3] ISO/IEC JTC 1 Forms Two Working Groups on Big Data and Internet of Things, 27th January 2015, https://www.ansi.org/news_publications/news_story.aspx?menuid=7&articleid=5b101d27-47b5-4540-bca3-657314402591

      [4] JTC 1 November 2014 Resolution 28 - Establishment of a Working Group on Big Data, and Call for Participation, 20th January 2015, http://jtc1sc32.org/doc/N2601-2650/32N2625-J1N12445_JTC1_Big_Data-call_for_participation.pdf

      [5] SD-3: Study Group Organizational Information, https://isocpp.org/std/standing-documents/sd-3-study-group-organizational-information

      [6] ISO/IEC JTC 1 Study Group on Big Data (BD-SG), http://jtc1bigdatasg.nist.gov/home.php

      [7] NIST Released V1.0 Seven Volumes of Big Data Interoperability Framework (September 16, 2015),http://bigdatawg.nist.gov/home.php

      [8] Standards That Support Big Data, Monica Rozenfeld, 8th September 2014, http://theinstitute.ieee.org/benefits/standards/standards-that-support-big-data

      [9] ITU releases first ever big data standards, Madolyn Smith, 21st December 2015, http://datadrivenjournalism.net/news_and_analysis/itu_releases_first_ever_big_data_standards#sthash.m3FBt63D.dpuf

      [10] ITU-T Y.3600 (11/2015) Big data - Cloud computing based requirements and capabilities, http://www.itu.int/itu-t/recommendations/rec.aspx?rec=12584

      [11] ISO Strategic Advisory Group on Smart Cities - Demand-side survey, March 2015, http://www.platform31.nl/uploads/media_item/media_item/41/62/Toelichting_ISO_Smart_cities_Survey-1429540845.pdf

      [12] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [13] ISO/TR 37150:2014 Smart community infrastructures -- Review of existing activities relevant to metrics, http://www.iso.org/iso/catalogue_detail?csnumber=62564

      [14] Dissecting ISO 37120: Why this new smart city standard is good news for cities, 30th July 2014, http://smartcitiescouncil.com/article/dissecting-iso-37120-why-new-smart-city-standard-good-news-cities

      [15] World Council for City Data, http://www.dataforcities.org/wccd/

      [16] Global City Indicators Facility, http://www.cityindicators.org/

      [17] How to measure the performance of smart cities, Maria Lazarte, 5th October 2015

      http://www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref2001

      [18] http://iet.jrc.ec.europa.eu/energyefficiency/sites/energyefficiency/files/files/documents/events/slideslairoctober2014.pdf

      [19] A standard for improving communities reaches final stage, Clare Naden, 12th February 2015,

      http://www.iso.org/iso/news.htm?refid=Ref1932

      [20] http://iet.jrc.ec.europa.eu/energyefficiency/sites/energyefficiency/files/files/documents/events/slideslairoctober2014.pdf

      [21] ISO/TR 12859:2009 Intelligent transport systems -- System architecture -- Privacy aspects in ITS standards and systems, http://www.iso.org/iso/catalogue_detail.htm?csnumber=52052

      [22] ISO/IEC JTC 1 Information technology, WG 11 Smart Cities, http://www.iec.ch/dyn/www/f?p=103:14:0::::FSP_ORG_ID,FSP_LANG_ID:12973,25

      [23] Work of ISO/IEC JTC1 Smart Cities Study Group, https://interact.innovateuk.org/documents/3158891/17680585/2+JTC1+Smart+Cities+Group/e639c7f6-4354-4184-99bf-31abc87b5760

      [24] JTC1 SAC - Meeting 13 , February 2015, http://www.finance.gov.au/blog/2015/08/05/jtc1-sac-meeting-13-february-2015/

      [25] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [26] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [27] ITU standards to integrate Internet of Things in Smart Cities, 10th June 2015, https://www.itu.int/net/pressoffice/press_releases/2015/22.aspx

      [28] ITU-T Focus Group Smart Sustainable Cities, https://www.itu.int/dms_pub/itu-t/oth/0b/04/T0B0400004F2C01PDFE.pdf

      [29] Focus Group on Smart Sustainable Cities, http://www.itu.int/en/ITU-T/focusgroups/ssc/Pages/default.aspx

      [30] The German Standardization Roadmap Smart City Version 1.1, May 2015, https://www.vde.com/en/dke/std/documents/nr_smartcity_en_v1.1.pdf

      [31] 7001 - PRIPARE Smart City Strategy, https://eu-smartcities.eu/commitment/7001

      [32] Financing Tomorrow's Cities: How Standards Can Support the Development of Smart Cities, http://www.longfinance.net/groups7/viewdiscussion/72-financing-financing-tomorrow-s-cities-how-standards-can-support-the-development-of-smart-cities.html?groupid=3

      [33] BSI-Smart Cities, http://www.bsigroup.com/en-GB/smart-cities/

      [34] New Set of Smart Cities Standards in Spain, https://eu-smartcities.eu/content/new-set-smart-cities-standards-spain

      [35] Technical Report, M2M & ICT Enablement in Smart Cities, Telecommunication Engineering Centre, Department of Telecommunications, Ministry of Communications and Information Technology, Government of India, November 2015, http://tec.gov.in/pdf/M2M/ICT%20deployment%20and%20strategies%20for%20%20Smart%20Cities.pdf

      [36] Smart City Development in China, Don Johnson, 17th June 2014, http://www.chinabusinessreview.com/smart-city-development-in-china/

      [37] China to continue develop standards on smart cities, 17th December 2015, http://www.chinadaily.com.cn/world/2015wic/2015-12/17/content_22732897.htm

      [38] The German Standardization Roadmap Smart City, April 2014, https://www.dke.de/de/std/documents/nr_smart%20city_en_version%201.0.pdf

      [39] This version of the Smart City Standardization Roadmap, Version 1.1, is an incremental revision of Version 1.0. In Version 1.1, a special focus is placed on giving an overview of current standardization activities and interim results, thus illustrating German ambitions in this area.

      [40] SSCC-CG Final report Smart and Sustainable Cities and Communities Coordination Group, January 2015, https://www.etsi.org/images/files/SSCC-CG_Final_Report-recommendations_Jan_2015.pdf

      [41] Orchestrating infrastructure for sustainable Smart Cities , http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf

      [42] Urbanization- Why do we need standardization?, http://www.din.de/en/innovation-and-research/smart-cities-en

      [43] CEN-CENELEC-ETSI Coordination Group 'Smart and Sustainable Cities and Communities' (SSCC-CG), http://www.cencenelec.eu/standards/Sectors/SmartLiving/smartcities/Pages/SSCC-CG.aspx

      [44] Final report of the CEN/CENELEC/ETSI Joint Working Group on Standards for Smart Grids, https://www.etsi.org/WebSite/document/Report_CENCLCETSI_Standards_Smart%20Grids.pdf

      [45] SPRING Singapore Supported Close to 600 Companies in Standards Adoption, and Service Excellence Projects , 12th August 2015, http://www.spring.gov.sg/NewsEvents/PR/Pages/Internet-of-Things-(IoT)-Standards-Outline-to-Support-Smart-Nation-Initiative-Unveiled-20150812.aspx

      India Electronics Week 2016 & the IoT Show

      by Vanya Rakesh last modified Feb 12, 2016 03:12 AM
      The India Electronics Week 2016 was held at the Bangalore International Exhibition Centre from 11th-13th January 2016, along with Bangalore's biggest IoT Exhibition and Conference, bringing the global electronics industry together. The event also had the EFY Expo 2016, supported by the Department of Electronics and Information Technology & the Ministry of Communications and Information Technology, Government of India.

      Expo

      The show catered to manufacturers, developers and technology leaders interested in the domestic as well as global markets by displaying their products and services. EFY Expo was a catalyst for accelerated growth and value addition, acquisition of technology, and joint ventures between Indian and global players to enable the growth of electronics manufacturing in the country.

      Conference
      CIS had the opportunity to attend the conference on Smart Cities on the 13th, with experts discussing smart governance, risk assessment of IoT, the role of IoT in smart cities, and building smart cities with everything as a service.

      The session started with a talk on building secure and flexible IoT platforms, which emphasised the need to focus on risk and security. Several issues requiring attention from a security perspective were raised: first, with IoT present everywhere, the focus must be on end-to-end security; second, there must be resilient IoT standards addressing authentication and device management, and industry and government must adopt open standards to keep the ecosystem flexible; finally, platforms must be secured and employ encryption to ensure trusted execution of software.

      This was followed by a session on Smart Governance, discussing the changing nature of society, where people are moving from being connected with people to being connected to devices. From the perspective of smart governance, the talk was divided into segments: Government to Government, Government to Business, Government to Employees and Government to Citizens. For smart cities, several e-governance initiatives have been undertaken so far, apart from e-delivery of services. After the Smart Cities Mission was announced, the Central Government sent the State Governments several indicators of smart governance, such as telecare (for example, Karnataka had a telejob portal), smart parking, smart grids, etc. From the business point of view, areas were suggested in which companies should build in-house competence in order to build efficient and successful smart cities, among them smarter education, buildings, environment and transportation. It was suggested that smart governance can be ensured by regularly measuring outcomes, redefining the gaps, and analysing these gaps against clearly laid-out policies. The key challenges to the implementation of smart governance include:

      • The inherent IoT challenges
      • Government departments working in silos
      • Lack of clarity in objectives
      • Lack of transparency
      • No standardized platforms
      • Data privacy- the issue of personal data being stored in Government repository
      • Scalable infrastructure
      • Growing population

      A survey studying the success rate of e-governance projects in India found that 50% of them were complete failures, while 35% were partial failures. It is therefore important to ponder these challenges, which may create roadblocks to smart governance and raise concerns about projects like smart cities.

      RIOT (Risk Assessment of Internet of Things) - a session to understand security issues in IoT and discuss secure IoT implementation. In smart cities, IoT has huge potential, which may face roadblocks due to the lack of open platforms, the lack of an ecosystem of sensors, gateways and platforms, and the challenges of integration with existing systems. IoT security issues, meanwhile, such as the absence of set standards, the lack of motivation for security, and little awareness of such issues, need due attention. This requires checks at several levels, for example at the IoT surface level in devices, the cloud or the mobile. Another important area here is data privacy and security for IoT implementation.

      Everything as a service (EaaS) - an insight into what it takes to build a smart city with EaaS, the various components that go into it, how they interact, and how it can be implemented. This session highlighted the importance of data in a city: it can provide information about disasters (enabling the Government to plan and act accordingly), traffic in the city, waste levels, a city health map, etc. With multiple actors using the same data, the use of such information in a smart city varies across sectors:

      • Smart Government- transparency, accountability and better decision-making
      • Smart Mobility- intelligent traffic management, safer roads
      • Smart Healthcare- health maps, better emergency services
      • Smart Living- safety and security, better quality of life
      • Smart Utilities- resource conservation, resilience
      • Smart Environment- better waste management, air quality monitoring.

      Everything as a service is treated as an attribute or state in which there is a nexus between users and the state. For this, information is drawn from data already captured, or new data is captured by opening up existing sources (telecom operators, machines, citizens, hospitals, etc.) or by installing new sensors. Here, the need for data privacy and government policy was emphasised. For EaaS, there is an urgent need to standardize the interfaces between the sensor network, data publishers, insight providers and service providers in a smart city.

      The conference gave insight into the industry's perspective on smart cities, the actors involved, and the issues and challenges envisioned by private companies in the development of smart cities in India. The companies see IoT as an integral part of the project, with data security, privacy, and the need to formulate or adopt standards for implementing IoT in both new and existing structures being key for the Smart Cities Mission in India.

      There is No Such Thing as Free Basics

      by Subhashish Panigrahi last modified Feb 14, 2016 11:37 AM
      India will no longer see the rain of Free Basics advertisements on billboards, with images of farmers and common people explaining how much they could benefit from this Facebook project, because the Telecom Regulatory Authority of India (TRAI) has taken a historic step by banning differential pricing of data services.

      The article was published in Bangalore Mirror on February 9, 2016.


      In its notes, TRAI explained: "In India, given that a majority of the population are yet to be connected to the Internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users' Internet experience." Moreover, violating this ban will attract a fine of Rs 50,000 for every day of contravention.

      Facebook's earlier plan was to launch Free Basics in India by making a few websites, mostly Facebook partners, available for free. The company not only advertised heavily on billboards and in commercials across the nation, it also embedded a campaign inside Facebook asking users to vote in support of Free Basics.

      TRAI criticised Facebook's attempt at such manipulative public provocation. Facebook was also heavily criticised by many policy and Internet advocates, including non-profit groups like the Free Software Movement of India and the Savetheinternet.in campaign.

      The latter two collectives strongly discouraged Free Basics by mobilising public opinion; the Savetheinternet.in campaign was used to send over 10 lakh emails to TRAI asking it to disallow Free Basics.

      Furthermore, on Republic Day, 500 start-ups, including major ones like Cleartrip, Zomato, Practo, Paytm and Cleartax, wrote to Prime Minister Narendra Modi requesting continued support for Net Neutrality, the principle that all websites be treated equally.

      Stand-up comedy groups like AIB and East India Comedy created humorous but informative videos, which went viral, explaining the regulatory debate and supporting net neutrality.

      Technology critic and Quartz writer Alice Truong reacted, saying: "Zuckerberg almost portrays net neutrality as a first-world problem that doesn't apply to India because having some service is better than no service."

      On differential pricing, Nikhil Pahwa, founder of the news portal Medianama, in his opinion piece in the Times of India, highlighted how Aircel in India, Grameenphone in Bangladesh and Orange in Africa provided free access to the Internet with the sole motive of enabling access, and criticised Facebook's walled Internet, which confines users within Facebook alone.

      Had differential pricing been allowed, it would have adversely affected start-ups and smaller content-based companies, as they could never have paid the high price a partner service provider would charge to make their services available for free.

      On the other hand, tech giants like Facebook could easily have captured the entire market. Since its inception, the Facebook-run non-profit Internet.org has run into numerous controversies because of the hidden motives behind its claimed support for a social cause.

      The government's decision has been widely welcomed, both within the country and outside.

      In support of the move, Renata Avila, Web We Want programme manager at the World Wide Web Foundation, said:

      "As the country with the second largest number of Internet users worldwide, this decision will resonate around the world.

      "It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can't create a two-tier Internet — one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web."

      A Case for Greater Privacy Paternalism?

      by Amber Sinha — last modified Feb 20, 2016 07:28 AM
      This is the second part of a series of three articles exploring the issues with the privacy self management framework and potential alternatives.

      The first part of the series can be accessed here.


      Background

      The current data privacy protection framework across most jurisdictions is built around a rights-based approach, which trusts the individual to have the wherewithal to make informed decisions about her interests and well-being.[1] In his book The Phantom Public, published in 1925, Walter Lippmann argues that the rights-based approach rests on the idea of a sovereign and omnicompetent citizen who can direct public affairs; this idea, however, is a mere phantom or abstraction.[2] Jonathan Obar, Assistant Professor of Communication and Digital Media Studies in the Faculty of Social Science and Humanities at the University of Ontario Institute of Technology, states that Lippmann's thesis remains equally relevant to current models of self-management, particularly for privacy.[3] In the previous post, Scott Mason and I looked at the limitations of a 'notice and consent' regime for privacy governance. Having established the deficiencies of the existing framework for data protection, I will now look at some of the proposed alternatives that may address these issues.

      In this article, I will look at paternalistic solutions posed as alternatives to the privacy self-management regime. I will examine theories of paternalism and libertarianism in the context of privacy, with reference to the works of some of the leading philosophers of jurisprudence and political science. The article will attempt to clarify the main concepts and the arguments put forward by both proponents and opponents of privacy paternalism. The first alternative draws on Anita Allen's thesis in her book Unpopular Privacy,[4] which deals with the question of whether individuals have a moral obligation to protect their own privacy. Allen expands the idea of rights to protect one's own interests and duties towards others into the notion that we may have certain duties not only towards others but also towards ourselves, because of their overall impact on society. In the next section, we will look at the idea of 'libertarian paternalism' as put forth by Cass Sunstein and Richard Thaler[5] and what its impact could be on privacy governance.

      Paternalism

      Gerald Dworkin, Professor Emeritus at the University of California, Davis, defines paternalism as the "interference of a state or an individual with another person, against their will, and defended or motivated by a claim that the person interfered with will be better off or protected from harm."[6] Any act of paternalism involves some limitation on the autonomy of the subject of the regulation, usually without the subject's consent, and is premised on the belief that the act will either improve the subject's welfare or prevent it from diminishing.[7] Seana Shiffrin, Professor of Philosophy and Pete Kameron Professor of Law and Social Justice at UCLA, takes a broader view of paternalism and includes within its scope not only measures aimed at improving the subject's welfare, but also the replacement of the subject's judgement in matters which may otherwise have lain legitimately within the subject's control.[8] In that sense, Shiffrin's view is interesting because it dispenses both with the requirement of active interference and with the requirement that the act be premised on the subject's well-being.

      The central premise of John Stuart Mill's On Liberty is that the only justifiable purpose for exerting power over the will of an individual is to prevent harm to others. "His own good, either physical or moral," according to Mill, "is not a sufficient warrant." Various scholars over the years, however, have found Mill's absolute prohibition problematic and support some degree of paternalism. John Rawls's principle of fairness, for instance, has been argued to be inherently paternalistic. In a nutshell, what makes paternalism controversial is that it involves coercion or interference, which in any theory of normative ethics or political science needs to be justified against certain identified criteria. Staunch opponents of paternalism believe that this justification can never be met. Most scholars, however, do not argue that all forms of paternalism are untenable, and the bulk of scholarship on paternalism is devoted to formulating the conditions under which this justification is satisfied.

      According to Peter de Marneffe, Professor of Philosophy at the School of Historical, Philosophical and Religious Studies, Arizona State University, paternalism interferes with self-autonomy in two ways.[9] The first is the prohibition principle, under which a person's autonomy is violated by prohibiting her from making a choice. The second is the opportunity principle, which undermines a person's autonomy by reducing her opportunities to make a choice. Both must be predicated upon a finding that the paternalistic act will lead to welfare or greater autonomy. According to de Marneffe, such acts of paternalism are justified under three conditions: the benefits to welfare should be substantial, they should be evident, and they must outweigh the benefits of self-autonomy.[10]

      There are two main strands of argument against paternalism.[11] The first argues that interference with the choices of informed adults will always be inferior to letting them decide for themselves, since each person is the 'best judge' of his or her interests. The second strand does not engage with the question of whether paternalists can make better decisions for individuals, but states that any benefit derived from a paternalistic act is outweighed by the harm of violating self-autonomy. Most proponents of soft paternalism build on this premise by trying to demonstrate that not all paternalistic acts violate self-autonomy. There are various forms of paternalism that we do not question despite their interference with our autonomy: seat-belt laws and restrictions on tobacco advertising are a few of them. If we locate arguments for self-autonomy in the Kantian framework, autonomy refers not just to the ability to do what one chooses, but to rational self-governance.[12] This theory automatically "opens the door for justifiable paternalism."[13] In this essay, I assume that certain forms of paternalism are justified. In the remaining two sections, I will look at two different theories advocating greater paternalism in the context of privacy governance and try to examine the merits of, and issues with, such measures.

      A moral obligation to protect one's privacy

      Modest Paternalism

      In her book Unpopular Privacy,[14] Anita Allen states that people do not place enough emphasis on the value of privacy. The right of individuals to exercise their free will and, under the 'notice and consent' regime, give up their privacy as they deem fit is, according to her, problematic. Data protection law in most jurisdictions is designed to be largely value-neutral: it does not sit in judgement on the nature of the information being revealed or on how the collector uses it. Its primary emphasis is on providing the data subject with information about these matters and allowing him to make informed decisions. In my previous post, Scott Mason and I discussed that, as online connectivity becomes increasingly important to participation in modern life, the choice to withdraw completely is becoming less and less of a genuine option.[15] Lamenting that people put little emphasis on privacy and often give away information which, upon retrospection and due consideration, they would feel they ought not to have disclosed, Allen proposes what she calls 'modest paternalism', in which regulations mandate that individuals may not waive their privacy in certain limited circumstances.

      Allen acknowledges the tension between her arguments in favor of paternalism and her avowed support for the liberal ideals of autonomy and limited government interference. However, she tries to make a case for greater paternalism in the context of privacy. She begins by categorizing privacy as a "primary good" essential for "self respect, trusting relationships, positions of responsibility and other forms of flourishing." In another article, Allen states that this "technophilic generation appears to have made disclosure the default rule of everyday life."[16] Relying on various anecdotes and examples of individuals' disregard for privacy, she argues that privacy is so "neglected in contemporary life that democratic states, though liberal and feminist, could be justified in undertaking a rescue mission that includes enacting paternalistic privacy laws for the benefit of un-eager beneficiaries." She does state that in most cases it may be more advantageous to educate and incentivise individuals towards choices that favor greater privacy protection; in exceptional cases, however, paternalism would be justified as a tool to ensure greater privacy.

      A duty towards oneself

      In an article for the Harvard Symposium on Privacy in 2013, Allen states that laws generally provide a framework built around individuals' rights of self-protection and duties towards others. G. A. Cohen describes Robert Nozick's views, which represent this libertarian philosophy, as follows: "The thought is that each person is the morally rightful owner of himself. He possesses over himself, as a matter of moral right, all those rights that a slaveholder has over a chattel slave as a matter of legal right, and he is entitled, morally speaking, to dispose over himself in the way such a slaveholder is entitled, legally speaking, to dispose over his slave."[17] On the libertarian philosophy espoused by Nozick, then, everyone is licensed to abuse themselves in the same manner slaveholders abused their slaves.

      Allen asks whether there is a duty towards oneself and, if such a duty exists, whether it should be reflected in policy or law. She accepts that a range of philosophers consider the idea of duties to oneself illogical or untenable.[18] Allen, however, relies on the works of scholars such as Lara Denis, Paul Eisenberg and Daniel Kading, who have located such a duty. She develops a schematic of two kinds of duties: first-order duties that require us to protect ourselves for the sake of others, and second-order, derivative duties to protect ourselves. Throughout the essay, she relies on the Kantian framework of the categorical imperative to build the moral thrust of her arguments. A Kantian view of paternalism would justify acts which interfere with an individual's autonomy in order to prevent her from exercising her autonomy irrationally, and draw her towards rational ends that agree with her conception of the good.[19] Allen, however, goes one step further and locates the genesis of duties both to others (perfect duties) and to oneself (imperfect duties) in the categorical imperative. Her main thesis is that there are certain situations in which we have a moral duty to protect our own privacy, where failure to do so would have an impact on either specific others or society at large.

      Issues

      Having built this interesting and somewhat controversial premise, Allen does not sufficiently expand upon it to present a nuanced solution. She provides a number of anecdotes but does not formulate any criteria for when privacy duties could be self-regarding. Her test for what kinds of paternalistic acts are justified is also extremely broad: she argues for paternalism where it protects privacy rights that "enhance liberty, liberal ways of life, well-being and expanded opportunity." She neither clearly defines the threshold at which policy should move from incentives to regulatory mandate, nor elaborates upon what forms of paternalism would both protect privacy and avoid unnecessary interference with the rights of individuals.[20]

      Nudge and libertarian paternalism

      What is nudge?

      In 2008, Richard Thaler and Cass Sunstein published their book Nudge: Improving Decisions About Health, Wealth, and Happiness.[21] The central thesis of the book is that in order to make most of our decisions, we rely on a menu of options made available to us; the order and structure of these choices is what Thaler and Sunstein call "choice architecture." According to them, choice architecture has a significant impact on the choices we make. The book looks at examples ranging from the layout of a cafeteria and the position of restrooms to how framing a retirement plan as opt-in or opt-out influences which plans are chosen. Choice architecture influences our behavior without coercion or a set of incentives, as conventional public policy theory would have us expect. The book draws on work by cognitive scientists such as Daniel Kahneman[22] and Amos Tversky[23] as well as Thaler's own research in behavioral economics.[24] The key takeaway from cognitive science and behavioral economics used in the book is that choice architecture influences our actions in predictable ways and leads to predictably irrational behavior. Thaler and Sunstein believe this presents great potential for policy-makers, who can tweak the choice architecture in their specific domains to influence the decisions made by its subjects and nudge them towards behavior that is beneficial to them and/or society.

      The great attraction of Thaler and Sunstein's argument is that it offers a compromise between forbearance and mandatory regulation. If we identify the two ends of the policy spectrum as (a) paternalists, who believe in maximum interference through legal regulations that coerce behavior towards the stated goals of policy, and (b) libertarians, who believe in a free-market theory that relies on individuals making decisions in their best interests, then 'nudging' falls somewhere in the middle, leading to the oxymoronic yet strangely apt phrase "libertarian paternalism." The idea is to design choices in such a way that they influence decision-making so as to increase individual and societal welfare. In his book The Laws of Fear, Cass Sunstein argues that the anti-paternalistic position is incoherent, as "there is no way to avoid effects on behavior and choices."

      Proponents of libertarian paternalism refute the commonly posed question of who decides the optimal and desirable outcomes of choice architecture by stating that this form of paternalism promotes not a perfectionist standard of welfare but an individualistic and subjective one. According to them, choices are not prohibited, cordoned off or made to carry significant barriers. However, it is often difficult to conclude what is better for people's welfare, even from their own point of view, and the claim that nudges lead to choices that make people better off by their own standards seems more and more untenable. What nudges do is lead people towards the broad forms of welfare which the choice architects believe make people's lives better in the longer term.[25]

      How could nudges apply to privacy?

      Our previous post echoes Thaler and Sunstein's assertion that traditional rational choice theory, which assumes that individuals will make rationally optimal choices in their self-interest when provided with a set of incentives and disincentives, is largely a fiction. We have argued that this assertion holds true for the privacy protection principles of notice and informed consent. Daniel Solove has argued that insights from cognitive science, particularly the theory of nudges, would be an acceptable compromise between the inefficacy of privacy self-management and the dangers of paternalism.[26] His rationale is that while nudges influence choice, they are not overly paternalistic, in that they still give the individual the option of making choices contrary to those sought by the choice architecture. This is an important distinction, and it demonstrates that 'nudging' is less coercive than paternalistic policies as we generally understand them.

      One nudging technique which makes a lot of sense in the context of data protection policy is the use of defaults, which relies on the oft-mentioned status quo bias.[27] Thaler and Sunstein discuss defaults with respect to encouraging retirement savings plans and organ donation, but they would apply equally to privacy. A number of data collectors have maximum disclosure as their default setting, and users rarely invest the effort needed to understand and change these settings. A rule mandating that data collectors set privacy-protective defaults, so that the most sensitive information is subject to the least degree of disclosure unless the user chooses otherwise, would ensure greater privacy protection.
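      The default-setting nudge described above can be illustrated with a short sketch. The setting names below are hypothetical, invented purely for illustration; the point is only that the most sensitive fields start at the least-disclosure value, so that the status quo bias works in favour of privacy, and disclosure requires a deliberate opt-in:

```python
from dataclasses import dataclass


@dataclass
class PrivacySettings:
    # Hypothetical settings for a social platform. Because of the
    # status quo bias, most users keep whatever is preset, so the
    # defaults are chosen to minimise disclosure.
    profile_visible_to: str = "friends"   # rather than "everyone"
    location_sharing: bool = False        # sensitive: off by default
    ad_personalisation: bool = False      # sensitive: off by default

    def opt_in(self, setting: str) -> None:
        """Disclosure requires an explicit, deliberate action by the user."""
        if setting == "location_sharing":
            self.location_sharing = True
        elif setting == "ad_personalisation":
            self.ad_personalisation = True
        else:
            raise ValueError(f"unknown or non-optional setting: {setting}")


# A new account starts in the privacy-protective configuration;
# inaction preserves privacy, action is needed to disclose.
user = PrivacySettings()
assert user.location_sharing is False
user.opt_in("location_sharing")
assert user.location_sharing is True
```

      The design choice here mirrors the retirement-savings example: the regulator does not prohibit disclosure, it only arranges the choice environment so that the path of least resistance is the privacy-protective one.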

      Ryan Calo and Dr. Victoria Groom explored an alternative to the traditional notice and consent regime at the Center for Internet and Society, Stanford University.[28] They conducted a two-phase experimental study. In the first phase, a standard privacy notice was compared with a control condition and a simplified notice, to see whether improving readability affected users' responses. In the second phase, the notice was compared with five notice strategies, four of which were intended to enhance privacy-protective behavior and one to lower it. Shara Monteleone and her team used a similar approach with a much larger sample size.[29] One of the primary behavioral insights used was that when we perform repetitive activities, including accepting online terms and conditions or privacy notices, we tend to use our automatic or fast thinking instead of reflective or slow thinking.[30] Changing these behaviors requires leveraging individuals' automatic behavior.

      Alessandro Acquisti, Professor of Information Technology and Public Policy at the Heinz College, Carnegie Mellon University, has applied methodologies from behavioral economics to investigate privacy decision-making.[31] He highlights a variety of factors that distort decision-making, such as "inconsistent preferences and frames of judgment; opposing or contradictory needs (such as the need for publicity combined with the need for privacy); incomplete information about risks, consequences, or solutions inherent to provisioning (or protecting) personal information; bounded cognitive abilities that limit our ability to consider or reflect on the consequences of privacy-relevant actions; and various systematic (and therefore predictable) deviations from the abstractly rational decision process." Taking the example of social networking sites collecting sensitive information, Acquisti looks at three kinds of policy solutions: (a) a hard paternalistic approach, which bans making certain kinds of information visible on the site; (b) a usability approach, which entails designing the system in the way that makes it most intuitive and easy for users to decide whether to provide the information; and (c) a soft paternalistic approach, which seeks to aid decision-making by providing additional information, such as how many people would have access to the information if provided, and by setting defaults such that the information is not visible to others unless the user explicitly chooses otherwise. The last two approaches are typically cited as examples of nudging approaches to privacy.

      Another method is to use tools that lead to decreased disclosure of information. For example, tools like the Social Media Sobriety Test[32] or Mail Goggles[33] block certain sites during user-set hours in which one expects to be at one's most vulnerable; the online services remain blocked unless the user can pass a dexterity examination.[34] Rebecca Balebako and her team are building privacy-enhancing tools for Facebook and Twitter that nudge users towards restricting with whom they share their location on Facebook and restricting their tweets to smaller groups of people.[35] Ritu Gulia and Dr. Sapna Gambhir have suggested nudges for social networking websites that randomly display pictures of the people who will have access to the information, to emphasise the public or private setting of a post.[36] These approaches try to address the myopia bias, whereby we choose immediate access to a service over long-term privacy harms.
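      The blocking mechanism these tools share can be sketched in a few lines. The hours and the idea of a pass/fail "sobriety" check below are illustrative assumptions, not the actual implementation of any of the tools named above: the service is simply unavailable during a user-defined window unless a deliberate extra step is completed, which substitutes reflective for automatic behavior.

```python
from datetime import time

# Hypothetical "vulnerable hours" chosen by the user, e.g. late night.
BLOCK_START = time(23, 0)
BLOCK_END = time(5, 0)


def in_blocked_window(now: time, start: time = BLOCK_START,
                      end: time = BLOCK_END) -> bool:
    """True if `now` falls in the user-set window (which may wrap midnight)."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end


def may_post(now: time, passed_dexterity_check: bool) -> bool:
    # Outside the window access is unrestricted; inside it, the user must
    # first complete an extra deliberate step (the dexterity test).
    return (not in_blocked_window(now)) or passed_dexterity_check


assert may_post(time(14, 0), passed_dexterity_check=False)       # afternoon: allowed
assert not may_post(time(23, 30), passed_dexterity_check=False)  # late night: blocked
assert may_post(time(23, 30), passed_dexterity_check=True)       # deliberate override
```

      Note that, like the defaults nudge, this preserves freedom of choice: the user sets the window herself and can still post during it, but only after an action that engages slow, reflective thinking.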

      The use of nudges as envisioned in the examples above is in some ways an extension of existing research advocating design standards that make privacy notices more easily intelligible.[37] However, studies show only an insignificant improvement from those methods. Nudging, in that sense, goes one step further: instead of trying to make notices more readable and enable informed consent, the design standard is intended simply to lead to the choices that the architects deem optimal.

      Issues with nudging

      One of the primary justifications that Thaler and Sunstein put forward for nudging is that choice architecture is ubiquitous: the manner in which options are presented to us impacts how we make decisions, whether it was intended to or not, and there is no such thing as a neutral architecture. This inevitability, according to them, makes a strong case for nudging people towards choices that will lead to their well-being. However, this assessment does not support their further argument that libertarian paternalism nudges people towards choices they would make from their own point of view. It is my contention that various examples of libertarian paternalism put forth by Thaler and Sunstein do in fact interfere with our self-autonomy, as the choice architecture leads us not to the options we would choose for ourselves in a fictional neutral environment, but to those options the architects believe are good for us. This substitution of judgement would satisfy Seana Shiffrin's definition of paternalism. Second, the fact that there is no such thing as a neutral architecture is not, by itself, justification enough for nudging. If we view the issue purely from the standpoint of normative ethics, assuming that coercion and interference are undesirable, intentional interference is much worse than unintentional interference.

      However, there are certain nudges that rely primarily on providing information, dispensing advice and rational persuasion.[38] Freedom of choice is preserved in these circumstances. Libertarians may argue that even in these circumstances the shaping of choice is problematic. This issue, J. S. Blumenthal-Barby argues, is adequately addressed by the publicity condition, a concept Thaler and Sunstein borrow from John Rawls.[39] The principle states that officials should never use a technique they would be uncomfortable defending to the public; nudging is no exception. However, this seems like a simplistic solution to a complex problem. Nudges are meant to rely on inherent psychological tendencies, leveraging the theories of automatic and subconscious thinking described by Daniel Kahneman in his book Thinking, Fast and Slow.[40] In that sense, while transparency is desirable, it may not be very effective.

      Other commentators note that while behavioral economics can show why people make certain decisions, it may not be able to reliably predict how people will behave in different circumstances; the burden of extrapolating these observations into meaningful nudges may prove too heavy.[41] The most oft-quoted criticism of nudging, however, is that it relies on officials to formulate the desired goals towards which the choice architecture will lead us.[42] The judgements of these officials could be flawed and subject to influence by large corporations.[43] These concerns echo the best-judge argument made against all forms of paternalism, mentioned earlier in this essay. J. S. Blumenthal-Barby, Assistant Professor at the Center for Medical Ethics and Health Policy, Baylor College of Medicine, also examines the claim that choice architects will be susceptible to the same biases while designing the choice environment.[44] The first argument in response is that experts who extensively study decision-making may be less prone to these errors. Second, Blumenthal-Barby argues that even with errors and biases, a choice architecture which attempts to right the wrongs of a random and unstructured choice environment is the preferable option.[45]

      Conclusion

      Most libertarians will find problematic the notion that individuals should be prevented from sharing some information about themselves. Anita Allen's idea of self-regarding duties is at odds with how rights and duties are understood in most jurisdictions. Her attempt to locate an ethical duty to protect one's own privacy, while interesting, is not backed by a formulation of how such a duty would work. While she relies largely on a Kantian framework, her definition of paternalism, as can be drawn from her writing, is broader than that articulated by Kant himself. On the other hand, Thaler and Sunstein's book Nudge and their related writings do attempt to build a framework for how nudging would work, and answer some of the questions they anticipated would be raised against the idea of libertarian paternalism.

      By and large, I feel that Thaler and Sunstein's idea of libertarian paternalism could be justified in the context of privacy and data protection governance. It would be fair to say that the first two of de Marneffe's conditions for justified paternalism[46] are largely satisfied by nudges that ensure greater privacy protection: if nudges can ensure greater privacy protection, their benefits are both substantial and evident. The larger question is whether these purported benefits outweigh the cost in lost self-autonomy. Given the numerous ways in which the 'notice and consent' framework is ineffective and leads to very little informed consent, it can be argued that there is little exercise of autonomy to begin with, and hence that the loss of self-autonomy is not substantial. Some of the conceptual issues which cast doubt on the ability of nudges to solve complex problems remain unanswered, and we will have to wait for more analysis by both cognitive scientists and policy-makers. However, given the growing inefficacy of the existing privacy protection framework, it would be a good idea to begin using insights from cognitive science and behavioral economics to ensure greater privacy protection.

      The current value-neutrality of data protection law with respect to the kind of data collected and its use, and its complete reliance on the data subject to make an informed choice, is, in my opinion, an idea that has run its course. Rather than focusing solely on controls at the stage of data collection, I believe we need a more robust theory of how to govern the subsequent uses of data. This will be the focus of the next part of this series, in which I will look at the greater use of a risk-based approach to privacy protection.



      [1] With invaluable inputs from Scott Mason.

      [2] Walter Lippmann, The Phantom Public, Transaction Publishers, 1925.

      [3] Jonathan Obar, Big Data and the Phantom Public: Walter Lippmann and the fallacy of data privacy self management, Big Data and Society, 2015, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2239188

      [4] Anita Allen, Unpopular Privacy: What we must hide?, Oxford University Press USA, 2011.

      [5] Richard Thaler and Cass Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness, Yale University Press, 2008.

      [7] Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 29.

      [8] Seana Shiffrin, Paternalism, Unconscionability Doctrine, and Accommodation, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2682745

      [9] Peter de Marneffe, Self Sovereignty and Paternalism, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 58.

      [10] Id.

      [11] Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 74.

      [12] Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 115.

      [13] Ibid at 116.

      [14] Anita Allen, Unpopular Privacy: What we must hide?, Oxford University Press USA, 2011.

      [15] Janet Vertesi, My Experiment Opting Out of Big Data Made Me Look Like a Criminal, 2014, available at http://time.com/83200/privacy-internet-big-data-opt-out/

      [16] Anita Allen, Privacy Law: Positive Theory and Normative Practice, available at http://harvardlawreview.org/2013/06/privacy-law-positive-theory-and-normative-practice/ .

      [17] G A Cohen, Self ownership, world ownership and equality, available at http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=3093280

      [19] Michael Cholbi, Kantian Paternalism and suicide intervention, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013.

      [20] Eric Posner, Liberalism and Concealment, available at https://newrepublic.com/article/94037/unpopular-privacy-anita-allen

      [21] Richard Thaler and Cass Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness, Yale University Press, 2008.

      [22] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

      [23] Daniel Kahneman, Paul Slovic and Amos Tversky, Judgment under uncertainty: heuristics and biases, Cambridge University Press, 1982; Daniel Kahneman and Amos Tversky, Choices, Values and Frames, Cambridge University Press, 2000.

      [24] Richard Thaler, Advances in behavioral finance, Russell Sage Foundation, 1993.

      [25] Thaler, Sunstein and Balz, Choice Architecture, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1583509.

      [26] Daniel Solove, Privacy self-management and consent dilemma, 2013 available at http://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2093&context=faculty_publications

      [27] Frederik Borgesius, Behavioral sciences and the regulation of privacy on the Internet, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2513771.

      [28] Ryan Calo and Dr. Victoria Groom, Reversing the Privacy Paradox: An experimental study, available at http://ssrn.com/abstract=1993125

      [29] Shara Monteleone et al, Nudges to Privacy Behavior: Exploring an alternative approach to privacy notices, available at http://publications.jrc.ec.europa.eu/repository/bitstream/JRC96695/jrc96695.pdf

      [30] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

      [31] Alessandro Acquisti, Nudging Privacy, available at http://www.heinz.cmu.edu/~acquisti/papers/acquisti-privacy-nudging.pdf

      [34] Rebecca Balebako et al, Nudging Users towards privacy on mobile devices, available at https://www.andrew.cmu.edu/user/pgl/paper6.pdf.

      [35] Id.

      [36] Ritu Gulia and Dr. Sapna Gambhir, Privacy and Privacy Nudges for OSNs: A Review, available at http://www.ijircce.com/upload/2014/march/14L_Privacy.pdf

      [37] Annie I. Anton et al., Financial Privacy Policies and the Need for Standardization, 2004 available at https://ssl.lu.usi.ch/entityws/Allegati/pdf_pub1430.pdf; Florian Schaub, R. Balebako et al, "A Design Space for effective privacy notices" available at https://www.usenix.org/system/files/conference/soups2015/soups15-paper-schaub.pdf

      [38] Daniel Hausman and Bryan Welch argue that these cases are mistakenly characterized as nudges. They believe that nudges do not try to inform the automatic system, but manipulate the inherent cognitive biases. Daniel Hausman and Bryan Welch, Debate: To Nudge or Not to Nudge, Journal of Political Philosophy 18(1).

      [39] Ryan Calo, Code, Nudge or Notice, available at

      [40] Daniel Kahneman, Thinking, fast and slow, Farrar, Straus and Giroux, 2011.

      [41] Evan Selinger and Kyle Powys Whyte, Nudging cannot solve complex policy problems.

      [42] Mario J. Rizzo & Douglas Glen Whitman, The Knowledge Problem of New Paternalism, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1310732; Pierre Schlag, Nudge, Choice Architecture, and Libertarian Paternalism, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1585362.

      [43] Edward L. Glaeser, Paternalism and Psychology, available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=917383.

      [44] J. S. Blumenthal-Barby, Choice Architecture: A mechanism for improving decisions while preserving liberty?, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice, Cambridge University Press, 2013.

      [45] Id.

      [46] According to de Marneffe, there are three conditions under which such acts of paternalism are justified - the benefits of welfare should be substantial, evident and must outweigh the benefits of self-autonomy. Peter de Marneffe, Self Sovereignty and Paternalism, from Christian Coons and Michael Weber, ed., Paternalism: Theory and Practice; Cambridge University Press, 2013. at 58.

      Internet Freedom

      by Sunil Abraham and Vidushi Marda — last modified Feb 15, 2016 02:51 AM
      The modern medium of the web is an open-sourced, democratic world in which equality is an ideal, which is why what is most important is Internet freedom.

      The article by Sunil Abraham and Vidushi Marda was published by Asian Age on February 14, 2016.


      What would have gone wrong if India’s telecom regulator Trai had decided to support programmes like Facebook’s Free Basics and Airtel’s Zero Rating, instead of issuing the regulation that prohibits discriminatory tariffs? Here are some possible scenarios, had discriminatory tariffs been allowed as they are in some countries.

      Possible impact on elections

Facebook would have continued to amass its product — eyeballs. Indian eyeballs would be more valuable than others for three reasons:

1. Facebook would have an additional layer of surveillance thanks to the Free Basics proxy server, which stores the time, the site URL and the data transferred for all the other destinations featured in the walled garden.

2. As part of Digital India, most government entities would set up Facebook pages, and a majority of the interaction with citizens would happen on social media rather than on the websites of government entities; consequently, Facebook would know what is and is not working in governance.

3. Given the financial disincentive to leave the walled garden, the surveillance would be total.

What would this mean for democracies? Eight years ago, Facebook began to engineer the News Feed to show more posts of a user’s friends voting in order to influence voting behaviour. It introduced the “I’m Voting” button into 61 million users’ feeds during the 2010 US congressional elections to increase voter turnout, and found that this kind of social pressure caused people to vote. Facebook has also admitted to populating feeds with posts from friends with similar political views. During the 2012 US presidential elections, Facebook was able to increase voter turnout by altering 1.9 million news feeds.

Indian eyeballs may not be that lucrative in terms of advertising. But these users are extremely valuable to political parties and others interested in influencing elections. Facebook’s notifications to users when their friends signed on to the “Support Free Basics” campaign were configured so that users were informed more often than for other campaigns. In other words, Facebook is not just another player on its own platform. Given that electoral margins are often slim, would Facebook be tempted to try to install a government of its choice in India during the 2019 general elections?

      In times of disasters

      Most people defending Free Basics and defending forbearance as the regulatory response in 2015/16 make the argument that “95 per cent of Internet users in developing countries spend 95 per cent of their time on Facebook”.

This is not too far from the truth: as LirneAsia demonstrated in 2012, most people using Facebook in Indonesia did not even know they were using the Internet. In other words, they argue that regulators should ignore the fringe user and fringe usage and focus only on the mainstream. The cognitive bias they are appealing to is that smaller numbers are less important.

Since all the sublime analogies in the Net Neutrality debate have been taken, forgive us for using a scatological one. The argument is the same as saying that since we spend only 5 per cent of our day in toilets, only 5 per cent of our home’s real estate should be devoted to them.

Everyone agrees that it is far easier to live in a house without a bedroom than in a house without a toilet. Even extremely low-probability or ‘Black Swan’ events can be terribly important! Imagine you are an Indian at the bottom of the pyramid. You cannot afford to pay for data on your phone and, as a result, you rarely and nervously stray out of the walled garden of Free Basics.

During a natural disaster, you are able to use the Facebook Safety Check feature to mark yourself safe, but the volunteers organising both offline and online rescue efforts are using a wider variety of platforms, tools and technologies.

Since you are unfamiliar with the rest of the Internet, you are ill-equipped when you try to organise a rescue for yourself and your loved ones.

      Content and carriage converge

Some people argue that TRAI should have stayed off the issue since the Competition Commission of India (CCI) is sufficient to tackle Net Neutrality harms. However, it is unclear whether predatory pricing by Reliance, which has only a 9 per cent market share, would cross the competition law threshold for market dominance. Interestingly, just before the Trai notification, the Ambani brothers signed a spectrum-sharing pact; they have been sharing optic fibre since 2013.

Will a content-sharing pact follow these carriage pacts? As media diversity researcher Alam Srinivas notes: “If their plans succeed, their media empires will span across genres such as print, broadcasting, radio and digital. They will own the distribution chains such as cable, direct-to-home (DTH), optic fibre (terrestrial and undersea), telecom towers and multiplexes.”

What does this convergence vision of the Ambani brothers mean for media diversity in India? In the absence of net neutrality regulation, could they use their dominance in broadcast media to reduce choice on the Internet? Could they use a non-neutral provisioning of the Internet to increase their dominance in broadcast media? When a single wire or the very same radio spectrum delivers radio, TV, games and Internet to your home, what under competition law will be considered a substitutable product? What would be the relevant market? At the Centre for Internet and Society (CIS), we argue that competition law principles with a lower threshold should be applied to networked infrastructure through infrastructure-specific non-discrimination regulations, like the one that Trai just notified, in order to protect digital media diversity.

Was an absolute prohibition the best response for TRAI? With only two possible exemptions — closed communication networks and emergencies — the regulation is very clear and brief. However, as our colleague Pranesh Prakash has said, TRAI has over-regulated, using a sledgehammer where a scalpel would have sufficed. In CIS’ official submission, we had recommended a series of tests to determine whether a particular type of zero rating should be allowed or forbidden. That test may be legally sophisticated; but, as TRAI argues, it is clear and simple rules that result in regulatory equity. A possible alternative to a complicated multi-part legal test is the leaky walled garden proposal. Remember, an absolute prohibition based on the precautionary principle is merited only for very dangerous technologies whose harms are large-scale and irreversible.

However, as far as network neutrality harms go, it may be sufficient to insist that for every MB consumed within Free Basics, Reliance be mandated to provide a data top-up of 3 MB.

This would have three advantages. One, it would be easy to articulate in a brief regulation, reducing the possibility of litigation. Two, it would be easy for the consumer who is harmed to monitor the mitigation measure. Three, based on empirical data, the regulator could increase or decrease the proportion of the mitigation measure.

This is an example of what Prof. Christopher T. Marsden calls positive, forward-looking network neutrality regulation: positive in the sense that the emphasis is on obligations rather than prohibitions and punitive measures, and forward-looking in the sense that no new technology or business model is prohibited outright.

      What is Net neutrality?

According to this principle, service providers and governments should not discriminate between different data on the Internet, but should treat all traffic alike. They cannot give preference to one set of apps or websites while restricting others.

      • 2006: TRAI invites opinions regarding the regulation of net neutrality from various telecom industry bodies and stakeholders
      • Feb. 2012: Sunil Bharti Mittal, CEO of Bharti Airtel, suggests services like YouTube should pay an interconnect charge to network operators, saying that if telecom operators are building highways for data then there should be a tax on the highway
      • July 2012: Bharti Airtel’s Jagbir Singh suggests large Internet companies like Facebook and Google should share revenues with telecom companies.
      • August 2012: Data from M-Lab shows You Broadband, Airtel and BSNL throttling traffic of P2P services like BitTorrent
      • Feb. 2013: Killi Kruparani, Minister of State for Communications and Information Technology, says the government will look into the legality of VoIP services like Skype
      • June 2013: Airtel starts offering select Google services to cellular broadband users for free, fixing a ceiling of 1GB on the data
      • Feb. 2014: Airtel operations CEO Gopal Vittal says companies offering free messaging apps like Skype and WhatsApp should be regulated
      • August 2014: TRAI rejects proposal from telecom companies to make messaging application firms share part of their revenue with the carriers/government
      • Nov. 2014: Trai begins investigation on Airtel implementing preferential access with special packs for WhatsApp and Facebook at rates lower than standard data rates
      • Dec. 2014: Airtel launches 2G, 3G data packs with VoIP data excluded in the pack, later launches VoIP pack.
      • Feb. 2015: Facebook launches Internet.org with Reliance communications, aiming to provide free access to 38 websites through single app
      • March 2015: Trai publishes consultation paper on regulatory framework for over the top services, explaining what net neutrality in India will mean and its impact, invited public feedback
      • April 2015: Airtel launches Airtel Zero, a scheme in which apps sign up with Airtel to have their content delivered free across the network. Flipkart, which was in talks to join the scheme, pulled out after users began giving its app poor ratings when the news broke
      • April 2015: Ravi Shankar Prasad, Communication and information technology minister announces formation of a committee to study net neutrality issues in the country
      • 23 April 2015: Many organisations under the Free Software Movement of India protest in various parts of the country. As a countermeasure, the Cellular Operators Association of India launches a campaign saying its aim is to connect unconnected citizens, and demands that VoIP apps be treated as cellular operators
      • 27 April 2015: Trai releases the names and email addresses of the users, numbering in the millions, who responded to the consultation paper. The hacker group Anonymous India claims to take down Trai’s website in retaliation, though the government did not confirm the attack
      • Sept. 2015: Facebook rebrands Internet.org as Free Basics, launches in the country with massive ads across major newspapers in the country. Faces huge backlash from public
      • Feb. 2016: Trai rules in favour of net neutrality, barring telecom operators from charging different rates for data services.

      The writers work at the Centre for Internet and Society, Bengaluru. CIS receives about $200,000 a year from WMF, the organisation behind Wikipedia, a site featured in Free Basics and zero-rated by many access providers across the world

      Free Speech and the Law on Sedition

      by Siddharth Narrain — last modified Feb 17, 2016 09:13 AM
      Siddharth Narrain explains how the law in India has addressed sedition.

Sedition is an offence that criminalizes speech that is construed to be disloyal to or threatening to the state. The main legal provision in India is section 124A of the Indian Penal Code, which criminalizes speech that “brings or attempts to bring into hatred or contempt, or excites or attempts to excite disaffection” towards the government. The law makes a distinction between “disapprobation” (lawful criticism of the government) and “disaffection” (expressing disloyalty or enmity, which is proscribed).

The British introduced this law in 1870, as a part of their efforts to curb criticism of colonial rule and to stamp out any dissent. Many famous nationalists, including Bal Gangadhar Tilak and Mahatma Gandhi, were tried and imprisoned for sedition. After a spirited debate, the Constituent Assembly decided not to include ‘sedition’ as a specific exception to Article 19(1)(a). However, section 124A IPC remained on the statute book. After the First Amendment to the Constitution introduced the words “in the interests of public order” into the exceptions to Article 19(1)(a), it became extremely difficult to challenge the constitutionality of section 124A.

      In 1962, the Supreme Court upheld the constitutionality of the law in the Kedarnath Singh case, but narrowed the scope of the law to acts involving intention or tendency to create disorder, or disturbance of law and order, or incitement to violence. Thus the Supreme Court provided an additional safeguard to the law: not only was constructive criticism or disapprobation allowed, but if the speech concerned did not have an intention or tendency to cause violence or a disturbance of law and order, it was permissible.

However, even though the law allows for peaceful dissent and constructive criticism, over the years various governments have used section 124A to curb dissent. The trial and conviction of the medical doctor and human rights activist Binayak Sen led to a renewed call for the scrapping of this law. In the Aseem Trivedi case, where a cartoonist was arrested for his work around the theme of corruption, the Bombay High Court laid down guidelines to be followed by the government in arrests under section 124A. The court reaffirmed the law laid down in Kedarnath Singh, and held that for a prosecution under section 124A, a legal opinion in writing must be obtained from the law officer of the district (it did not specify who this was), followed within two weeks by a legal opinion in writing from the state public prosecutor. This adds to the existing procedural safeguard under section 196 of the Code of Criminal Procedure (CrPC), which says that courts cannot take cognizance of offences punishable under section 124A IPC unless the Central or State government has given sanction or permission to proceed.

      The serious nature of section 124A is seen in the light of the punishment associated with it. Section 124A is a cognizable (arrests can be made without a warrant), non-bailable and non-compoundable offence. Punishment for the offence can extend up to life imprisonment. Because of the seriousness of the offence, courts are often reluctant to grant bail. Sedition law is seen as an anachronism in many countries including the United Kingdom, and it has been repealed in most Western democracies.

      IMPORTANT CASE LAW

Kedarnath Singh v. State of Bihar, AIR 1962 SC 955, Supreme Court, 5 judges

      Medium: Offline

Brief Facts: Kedarnath Singh, a member of the Forward Communist Party, was prosecuted for sedition over a speech that he made criticising the government for its capitalist policies. Singh challenged the constitutionality of the sedition law. The Supreme Court clubbed Singh’s case with other similar cases where persons were prosecuted under the sedition law.

      Held: The law is constitutional and covered written or spoken words that had the implicit idea of subverting the government by violent means. However, this section would not cover words that were used as disapprobation of measures of the government that were meant to improve or alter the policies of the government through lawful means. Citizens can criticize the government as long as they are not inciting people to violence against the government with an intention to create public disorder. The court drew upon the Federal Court’s decision in Niharendru Dutt Majumdar where the court held that offence of sedition is the incitement to violence or the tendency or the effect of bringing a government established by law into hatred or contempt or creating disaffection in the sense of disloyalty to the state. While the Supreme Court upheld the validity of section 124A, it limited its application to acts involving intention or tendency to create disorder, or a disturbance of law and order, or incitement to violence.

Balwant Singh and Anr v. State of Punjab, AIR 1995 SC 1785

      Brief Facts: The accused had raised the slogan “Khalistan Zindabad” outside a cinema hall just after the assassination of Prime Minister Indira Gandhi.

      Held: The slogans raised by the accused had no impact on the public. Two individuals casually raising slogans could not be said to be exciting disaffection towards the government. Section 124A would not apply to the facts and circumstances of this case.

      Sanskar Marathe v. State of Maharashtra & Ors, Criminal Public Interest Litigation No. 3 of 2015, Bombay High Court, 2 judges

      Medium: Online and Offline

Brief Facts: The case arose out of the arrest of Aseem Trivedi, a political cartoonist who was involved with the India Against Corruption movement. Trivedi was arrested in 2012 in Mumbai for sedition and for insulting national emblems. The court considered the question of how it could intervene to prevent the misuse of section 124A.

Held: The cartoons were in the nature of political satire, and there was no allegation of incitement to violence, or tendency or intention to create public disorder. The Court issued guidelines to all police personnel in the form of preconditions for prosecutions under section 124A:

      • Words, signs or representations must bring the government into hatred or contempt, or must cause, or attempt to cause, disaffection, enmity or disloyalty to the government.
      • The words, signs or representations must also be an incitement to violence, or must be intended or tend to create public disorder or a reasonable apprehension of public disorder.
      • Words, signs or representations, just by virtue of being against politicians or public officials, cannot be said to be against the government; they must show the public official as a representative of the government.
      • Disapproval or criticism of the government in order to bring about a change of government through lawful means does not amount to sedition.
      • Obscenity or vulgarity by itself is not a factor to be taken into account while deciding whether a word, sign or representation violates section 124A.
      • In order to prosecute under section 124A, the government has to obtain a legal opinion in writing from the law officer of the district (the judgment does not specify who this is) and, within the next two weeks, a legal opinion in writing from the public prosecutor of the state.

      Free Speech and Public Order

      by Gautam Bhatia — last modified Feb 18, 2016 06:23 AM
      In this post, Gautam Bhatia has explained the law on public order as a reasonable restriction to freedom of expression under Article 19(2) of the Constitution of India.

      Article 19(2) of the Constitution authorises the government to impose, by law, reasonable restrictions upon the freedom of speech and expression “in the interests of… public order.” To understand the Supreme Court’s public order jurisprudence, it is important to break down the sub-clause into its component parts, and focus upon their separate meanings. Specifically, three terms are important: “reasonable restrictions”, “in the interests of”, and “public order”.

      The Supreme Court’s public order jurisprudence can be broadly divided into three phases. Phase One (1949 – 1950), which we may call the pre-First Amendment Phase, is characterised by a highly speech-protective approach and a rigorous scrutiny of speech-restricting laws. Phase Two (1950 – 1960), which we may call the post-First Amendment Expansionist Phase, is characterised by a judicial hands-off approach towards legislative and executive action aimed at restricting speech. Phase Three (1960 - present day), which we may call the post-First Amendment Protectionist phase, is characterised by a cautious, incremental move back towards a speech-protective, rigorous-scrutiny approach. This classification is broad-brush and generalist, but serves as a useful explanatory device.

      Before the First Amendment, the relevant part of Article 19(2) allowed the government to restrict speech that “undermines the security of, or tends to overthrow, the State.” The scope of the restriction was examined by the Supreme Court in Romesh Thappar vs State of Madras and Brij Bhushan vs State of Delhi, both decided in 1950. Both cases involved the ban of newspapers or periodicals, under state laws that authorised the government to prohibit the entry or circulation of written material, ‘in the interests of public order’. A majority of the Supreme Court struck down the laws. In doing so, they invoked the concept of “over-breadth”: according to the Court, “public order” was synonymous with public tranquility and peace, while undermining the security of, or tending to overthrow the State, referred to acts which could shake the very foundations of the State. Consequently, while acts that undermined or tended to overthrow the State would also lead to public disorder, not all acts against public order would rise to the level of undermining the security of the State. This meant that the legislation proscribed acts that, under Article 19(2), the government was entitled to prohibit, as well as those that it wasn’t. This made the laws “over-broad”, and unconstitutional. In a dissenting opinion, Fazl Ali J. argued that “public order”, “public tranquility”, “the security of the State” and “sedition” were all interchangeable terms, that meant the same thing.

      In Romesh Thappar and Brij Bhushan, the Supreme Court also held that the impugned legislations imposed a regime of “prior restraint” – i.e., by allowing the government to prohibit the circulation of newspapers in anticipation of public disorder, they choked off speech before it even had the opportunity to be made. Following a long-established tradition in common law as well as American constitutional jurisprudence, the Court held that a legislation imposing prior restraint bore a heavy burden to demonstrate its constitutionality.

The decisions in Romesh Thappar and Brij Bhushan led to the passage of the First Amendment, which substituted the phrase “undermines the security of, or tends to overthrow, the State” with “public order”, added an additional restriction in the interests of preventing an incitement to an offence, and – importantly – added the word “reasonable” before “restrictions”.

      The newly-minted Article 19(2) came to be interpreted by the Supreme Court in Ramji Lal Modi vs State of UP (1957). At issue was a challenge to S. 295A of the Indian Penal Code, which criminalised insulting religious beliefs with an intent to outrage religious feelings of any class. The challenge made an over-breadth argument: it was contended that while some instances of outraging religious beliefs would lead to public disorder, not all would, and consequently, the Section was unconstitutional. The Court rejected this argument and upheld the Section. It focused on the phrase “in the interests of”, and held that being substantially broader than a term such as “for the maintenance of”, it allowed the government wide leeway in restricting speech. In other words, as long as the State could show that there was some connection between the law, and public order, it would be constitutional. The Court went on to hold that the calculated tendency of any speech or expression aimed at outraging religious feelings was, indeed, to cause public disorder, and consequently, the Section was constitutional. This reasoning was echoed in Virendra vs State of Punjab (1957), where provisions of the colonial era Press Act, which authorised the government to impose prior restraint upon newspapers, were challenged. The Supreme Court upheld the provisions that introduced certain procedural safeguards, like a time limit, and struck down the provisions that didn’t. Notably, however, the Court upheld the imposition of prior restraint itself, on the ground that the phrase “in the interests of” bore a very wide ambit, and held that it would defer to the government’s determination of when public order was jeopardised by speech or expression.

In Ramji Lal Modi and Virendra, the Court had rejected the argument that the State can only impose restrictions on the freedom of speech and expression if it demonstrates a proximate link between speech and public order. The Supreme Court had focused closely on the breadth of the phrase “in the interests of”, but had not subjected the reasonableness requirement to any analysis. In earlier cases such as State of Madras vs V.G. Row, the Court had stressed that in order to be “reasonable”, a restriction would have to take into account the nature and scope of the right, the extent of infringement, and proportionality. This analysis failed to figure in Ramji Lal Modi and Virendra. However, in Superintendent, Central Prison vs Ram Manohar Lohia, the Supreme Court changed its position, and held that there must be a “proximate” relationship between speech and public disorder, and that it must not be remote, fanciful or far-fetched. Thus, for the first time, the breadth of the phrase “in the interests of” was qualified, presumably from the perspective of reasonableness. In Lohia, the Court also stressed again that “public order” was of narrower ambit than mere “law and order”, and would require the State to discharge a high burden of proof, along with evidence.

Lohia marks the start of the third phase in the Court’s jurisprudence, in which the link of proximity between speech and public disorder has gradually been refined. In Babulal Parate vs State of Maharashtra (1961) and Madhu Limaye vs Sub-Divisional Magistrate (1970), the Court upheld prior restraints under S. 144 of the CrPC, while clarifying that the Section could only be used in cases of emergency. Section 144 of the CrPC empowers executive magistrates (officials of the executive, such as district magistrates) to pass very wide-ranging preventive orders, and is primarily used to prohibit assemblies at certain times in certain areas, when it is considered that the situation is volatile and could lead to violence. In Babulal Parate and Madhu Limaye, the Supreme Court upheld the constitutionality of Section 144, but also clarified that its use was restricted to situations where there was a proximate link between the prohibition and the likelihood of public disorder.

In recent years, the Court has further refined its proximity test. In S. Rangarajan vs P. Jagjivan Ram (1989), the Supreme Court required proximity to be akin to a “spark in a powder keg”. Most recently, in Arup Bhuyan vs State of Assam (2011), the Court read down a provision of TADA criminalizing membership of a banned association to apply only to cases where an individual was responsible for incitement to imminent violence (a standard borrowed from the American case of Brandenburg vs Ohio).

Lastly, in 2015, we saw the first instance of the application of Section 144 of the CrPC to online speech. The wide wording of the section was used in Gujarat to pre-emptively block mobile internet services in the wake of Hardik Patel’s Patidar agitation for reservations. Despite the fact that website blocking is specifically provided for by Section 69A of the IT Act and its accompanying rules, the Gujarat High Court upheld the state action.

      The following conclusions emerge:

(1)  “Public order” under Article 19(2) is a term of art, and refers to a situation of public tranquility/public peace that goes beyond simple law-breaking.

(2)  Prior restraint in the interests of public order is justified under Article 19(2), subject to a test of proximity; by virtue of the Gujarat High Court judgment in 2015, prior restraint extends to the online sphere as well.

(3)  The proximity test requires the relationship between speech and public disorder to be imminent, like a spark in a powder keg.


      World Trends in Freedom of Expression and Media Development

      by Pranesh Prakash last modified Feb 17, 2016 05:03 PM
The United Nations Educational, Scientific and Cultural Organisation (UNESCO) published a book in 2014 that examines free speech, expression and media development. The book contains a Foreword by Irina Bokova, Director-General, UNESCO. Pranesh Prakash contributed to the Independence: Introduction - Global Media chapter. The book was edited by Courtney C. Radsch.

      Foreword

      Tectonic shifts in technology and economic models have vastly expanded the opportunities for press freedom and the safety of journalists, opening new avenues for freedom of expression for women and men across the world. Today, more and more people are able to produce, update and share information widely, within and across national borders. All of this is a blessing for creativity, exchange and dialogue.

      At the same time, new threats are arising. In a context of rapid change, these are combining with older forms of restriction to pose challenges to freedom of expression, in the shape of controls not aligned with international standards for protection of freedom of expression and rising threats against journalists.

      These developments raise issues that go to the heart of UNESCO’s mandate “to promote the flow of ideas by word and image” between all peoples, across the world. For UNESCO, freedom of expression is a fundamental human right that underpins all other civil liberties, that is vital for the rule of law and good governance, and that is a foundation for inclusive and open societies. Freedom of expression stands at the heart of media freedom and the practice of journalism as a form of expression aspiring to be in the public interest.

      At the 36th session of the General Conference (November 2011), Member States mandated UNESCO to explore the impact of change on press freedom and the safety of journalists. For this purpose, the Report has adopted four angles of analysis, drawing on the 1991 Windhoek Declaration, to review emerging trends through the conditions of media freedom, pluralism and independence, as well as the safety of journalists. At each level, the Report has also examined trends through the lens of gender equality.

The result is a portrait of change across the world, at all levels, featuring as much opportunity as challenge. The business of media is undergoing a revolution with the rise of digital networks, online platforms, internet intermediaries and social media. New actors are emerging, including citizen journalists, who are redrawing the boundaries of the media. At the same time, the Report shows that traditional news institutions continue to be agenda-setters for media and public communications in general – even as they are also engaging with the digital revolution. The Report also highlights the mix of old and new challenges to media freedom, including increasing cases of threats against the safety of journalists.

      The pace of change raises questions about how to foster freedom of expression across print, broadcast and internet media and how to ensure the safety of journalists. The Report draws on a rich array of research and is not prescriptive -- but it sends a clear message on the importance of freedom of expression and press freedom on all platforms.

      To these ends, UNESCO is working across the board, across the world. This starts with global awareness raising and advocacy, including through World Press Freedom Day. It entails supporting countries in strengthening their legal and regulatory frameworks and in building capacity. It means standing up to call for justice every time a journalist is killed, to eliminate impunity. This is the importance of the United Nations Plan of Action on the Safety of Journalists and the Issue of Impunity, spearheaded by UNESCO and endorsed by the UN Chief Executives Board in April 2012. UNESCO is working with countries to take this plan forward on the ground. We also seek to better understand the challenges that are arising – most recently, through a Global Survey on Violence against Female Journalists, with the International News Safety Institute, the International Women’s Media Foundation, and the Austrian Government.

      Respecting freedom of expression and media freedom is essential today, as we seek to build inclusive, knowledge societies and a more just and peaceful century ahead. I am confident that this Report will find a wide audience, in Member States, international and regional organizations, civil society and academia, as well as with the media and journalists, and I wish to thank Sweden for its support to this initiative. This is an important contribution to understanding a world in change, at a time when the international community is defining a new global sustainable development agenda, which must be underpinned and driven by human rights, with particular attention to freedom of expression.

      Executive Summary

      Freedom of expression in general, and media development in particular, are core to UNESCO’s constitutional mandate to advance ‘the mutual knowledge and understanding of peoples, through all means of mass communication’ and to promote ‘the free flow of ideas by word and image.’ For UNESCO, press freedom is a corollary of the general right to freedom of expression. Since 1991, the year of the seminal Windhoek Declaration, which was endorsed by the UN General Assembly, UNESCO has understood press freedom as designating the conditions of media freedom, pluralism and independence, as well as the safety of journalists. It is within this framework that this report examines progress as regards press freedom, including in regard to gender equality, and makes sense of the evolution of media actors, news media institutions and journalistic roles over time.

      This report has been prepared on the basis of a summary report on the global state of press freedom and the safety of journalists, presented to the General Conference of UNESCO Member States in November 2013, on the mandate of the decision by Member States taken at the 36th session of the General Conference of the Organization.[*]

      The overarching global trend with respect to media freedom, pluralism, independence and the safety of journalists over the past several years is that of disruption and change brought on by technology, and to a lesser extent, the global financial crisis. These trends have impacted traditional economic and organizational structures in the news media, legal and regulatory frameworks, journalism practices, and media consumption and production habits. Technological convergence has expanded the number of and access to media platforms as well as the potential for expression. It has enabled the emergence of citizen journalism and spaces for independent media, while at the same time fundamentally reconfiguring journalistic practices and the business of news.

      The broad global patterns identified in this report are accompanied by extensive unevenness within the whole.  The trends summarized above, therefore, go hand in hand with substantial variations between and within regions as well as countries.

      Download the PDF


      [*]. 37 C/INF.4 16 September 2013 “Information regarding the implementation of decisions of the governing bodies”. http://unesdoc.unesco.org/images/0022/002230/223097e.pdf; http://unesdoc.unesco.org/images/0022/002230/223097f.pdf

      Net Neutrality Advocates Rejoice As TRAI Bans Differential Pricing

      by Subhashish Panigrahi last modified Feb 23, 2016 02:10 AM
      India will no longer see Free Basics advertisements on billboards, with images of farmers and ordinary people explaining how much they benefited from this Facebook project.

      The article by Subhashish Panigrahi was published by Odisha TV on February 9, 2016.


      That is because the Telecom Regulatory Authority of India (TRAI) has taken a historic step by banning discriminatory pricing of data services. In its explanatory notes, TRAI said: “In India, given that a majority of the population are yet to be connected to the internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users’ internet experience.” Moreover, violating the ban will cost a service provider Rs. 50,000 per day.

      Facebook planned to launch Free Basics in India by making a few websites – mostly Facebook partners – available for free. The company not only advertised aggressively on billboards and in commercials across the nation, it also embedded a campaign inside Facebook asking users to vote in support of Free Basics. TRAI criticized Facebook’s attempt to manipulate public opinion. Facebook was also heavily challenged by many policy and internet advocates, including non-profits like the Free Software Movement of India and the Savetheinternet.in campaign. The two collectives strongly discouraged Free Basics by moulding public opinion against it; Savetheinternet.in alone facilitated over 2.4 million emails to TRAI asking it to disallow Free Basics. Furthermore, 500 Indian start-ups, including major names like Cleartrip, Zomato, Practo, Paytm and Cleartax, wrote to India’s Prime Minister Narendra Modi on Republic Day requesting continued support for net neutrality – a concept that advocates equal treatment of websites. Stand-up comedians like Abish Mathew and groups like All India Bakchod and East India Comedy created humorous but informative videos explaining the regulatory debate and supporting net neutrality. These went viral.

      Technology critic and Quartz writer Alice Truong reacted to Free Basics, saying: “Zuckerberg almost portrays net neutrality as a first-world problem that doesn’t apply to India because having some service is better than no service.”

      The decision of the Indian government has been largely welcomed in the country and outside. In support of the move, Web We Want programme manager at the World Wide Web Foundation Renata Avila said: “As the country with the second largest number of Internet users worldwide, this decision will resonate around the world. It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can’t create a two-tier Internet – one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web.”

      There have been mixed responses on social media, both in support of and in opposition to the TRAI decision. Josh Levy, Advocacy Director at Access Now, welcomed it, saying, “India is now the global leader on #NetNeutrality. New rules are stronger than those in EU and US.”

      Had differential pricing been allowed, it would have adversely affected start-ups and smaller content-based companies, which could never have managed to pay the high price a partner service provider would charge to make their services available for free. Tech giants like Facebook, on the other hand, could easily have captured the entire market. Since its inception, Facebook’s Internet.org initiative has courted controversy over the motives behind its claimed support for a social cause.

      ‘A Good Day for the Internet Everywhere': India Bans Differential Data Pricing

      by Subhashish Panigrahi last modified Feb 25, 2016 01:21 AM
      India distinguished itself as a global leader on network neutrality on February 8, when regulators officially banned “differential pricing”, a practice through which telecommunications service providers could charge discriminatory tariffs for data services based on content.

      The article was published by Global Voices on February 9, 2016


      In short, this means that Internet access in India will remain an open field, where users should be guaranteed equal access to any website they want to visit, regardless of how they connect to the Internet.

      In its ruling, the Telecom Regulatory Authority of India (TRAI) commented:

      In India, given that a majority of the population are yet to be connected to the internet, allowing service providers to define the nature of access would be equivalent of letting TSPs shape the users’ internet experience.

      The decision of the Indian government has been welcomed largely in the country and outside. In support of the move, the World Wide Web Foundation's Renata Avila, also a Global Voices community member, wrote:

      As the country with the second largest number of Internet users worldwide, this decision will resonate around the world. It follows a precedent set by Chile, the United States, and others which have adopted similar net neutrality safeguards. The message is clear: We can’t create a two-tier Internet – one for the haves, and one for the have-nots. We must connect everyone to the full potential of the open Web.

      A blow for Facebook's “Free Basics”

      While the new rules should long outlast this moment in India's Internet history, the ruling should immediately force Facebook to cancel the local deployment of “Free Basics”, a smart phone application that offers free access to Facebook, Facebook-owned products like WhatsApp, and a select suite of other websites for users who do not pay for mobile data plans.

      Facebook's efforts to deploy and promote Free Basics as what it described as a remedy to India's lack of “digital equality” have encountered significant backlash. Last December, technology critic and Quartz writer Alice Truong reacted to Free Basics saying:

      Zuckerberg almost portrays net neutrality as a first-world problem that doesn’t apply to India because having some service is better than no service.

      When TRAI solicited public comments on the matter of differential pricing, Facebook responded with an aggressive advertising campaign on billboards and in television commercials across the nation. It also embedded a campaign inside Facebook, asking users to write to TRAI in support of Free Basics.

      TRAI criticized Facebook for what it seemed to regard as manipulation of the public. Facebook was also heavily challenged by many policy and open Internet advocates, including non-profits like the Free Software Movement of India and the Savetheinternet.in campaign. These two collectives strongly discouraged Free Basics by mobilising public opinion; Savetheinternet.in alone facilitated a campaign in which citizens sent over 2.4 million emails to TRAI urging the agency to put a stop to differential pricing.

      Alongside these efforts, 500 Indian startups including major ones like Cleartrip, Zomato, Practo, Paytm and Cleartax also wrote to India's prime minister Narendra Modi requesting continued support for net neutrality—on the Indian Republic Day January 26.

      Stand-up comedians like Abish Mathew and groups like All India Bakchod and East India Comedy created humorous and informative videos explaining the regulatory debate and supporting net neutrality, which went viral.

      Had differential pricing been officially legalized, it would have adversely affected startups and content-based smaller companies, who most likely could never manage to pay higher prices to partner with service providers to make their service available for free. This would have paved the way for tech-giants like Facebook to capture the entire market. And this would be no small gain for a company like Facebook: India represents the world's largest market of Internet users after the US and China, where Facebook remains blocked.

      The Internet responds

      There have been mixed responses on social media, both supporting and opposing the ruling. Among open Internet advocates in both India and the US, the response was celebratory:

      There are also those like Panuganti Rajkiran who opposed the ruling:

      A terrible decision.. The worst part here is the haves deciding for the have nots what they can have and what they cannot.

      Soumya Manikkath says:

      So all is not lost in the world, for the next two years at least. Do come back with a better plan, dear Facebook, and we'll rethink, of course.

      The ruling leaves an open pathway for companies to offer consumers free access to the Internet, provided that this access is truly open and does not limit one's ability to browse any site of her choosing.

      Bangalore-based Internet policy expert Pranesh Prakash noted that this work must continue until India is truly — and equally — connected:

      Comments by the Centre for Internet and Society on the Report of the Committee on Medium Term Path on Financial Inclusion

      by Vipul Kharbanda last modified Mar 01, 2016 01:53 PM
      Apart from item-specific suggestions, CIS would like to make one broad comment with regard to the suggestions dealing with linking of Aadhaar numbers with bank accounts. Aadhaar is increasingly being used by government departments as a means to prevent fraud; however, there is a serious dearth of evidence that Aadhaar linkage actually prevents leakages in government schemes. The same argument applies when Aadhaar numbers are sought to be utilized to prevent leakages in the banking sector.

       

      The Centre for Internet and Society (CIS) is a non-governmental organization which undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives.

      In the course of its work CIS has also extensively researched and written about the Aadhaar scheme of the Government of India, especially from a privacy and technical point of view. CIS was part of the Group of Experts on Privacy constituted by the Planning Commission under the chairmanship of Justice A.P. Shah, and was instrumental in drafting a major part of the Group's report. Against this background, CIS would like to mention that it is not an expert on banking policy in general and does not wish to comment on the purely banking-related recommendations of the Committee. We would like to limit our recommendations to the areas in which we have some expertise, and would therefore be commenting only on certain Recommendations of the Committee.

      Before giving our individual comments on the relevant recommendations, CIS would like to make one broad comment with regard to the suggestions dealing with linking of Aadhaar numbers with bank accounts. Aadhaar is increasingly being used by government departments as a means to prevent fraud; however, there is a serious dearth of evidence that Aadhaar linkage actually prevents leakages in government schemes. The same argument applies when Aadhaar numbers are sought to be utilized to prevent leakages in the banking sector.

      Another problem with linking bank accounts to Aadhaar numbers, even if the linkage is not mandatory, is over-compliance. When the RBI issues an advisory to (optionally) link Aadhaar numbers with bank accounts, a number of banks may implement the advisory too strictly and refuse service to customers (especially marginal customers) whose accounts are not linked to their Aadhaar numbers, perhaps because of technical problems in the registration procedure. This would deny those individuals access to the banking sector, contrary to the aims and objectives of the Committee and the RBI's stated policy of improving access to banking.

      Individual Comments

      Recommendation 1.4 - Given the predominance of individual account holdings, the Committee recommends that a unique biometric identifier such as Aadhaar should be linked to each individual credit account and the information shared with credit information companies. This will not only be useful in identifying multiple accounts, but will also help in mitigating the overall indebtedness of individuals who are often lured into multiple borrowings without being aware of its consequences.

      CIS Comment: The Committee's discussion preceding this recommendation revolves around the total incidence of indebtedness in rural areas and the Debt-to-Asset ratio representing repayment capacity. However, the Committee has not discussed any evidence indicating that borrowing from multiple banks leads to greater indebtedness for individual account holders in the rural sector. Without identifying the problem through evidence, the Committee has suggested linking bank accounts with Aadhaar numbers as a solution.

      Recommendation 2.2 - On the basis of cross-country evidence and our own experience, the Committee is of the view that to translate financial access into enhanced convenience and usage, there is a need for better utilization of the mobile banking facility and the maximum possible G2P payments, which would necessitate greater engagement by the government in the financial inclusion drive.

      CIS Comment: The drafting of the recommendation suggests that the RBI is batting for DBT rather than the subsidy model. However, an examination of the discussion in the report suggests that the Committee has not discussed or examined the subsidy model vis-à-vis the direct benefit transfer (DBT) model here (though it does recommend DBT in the chapter on G2P payments); it is only trying to say that where government-to-person money transfers have to take place, they should take place using mobile banking, payment wallets or other such technologies, which have been successful in various countries across the world.

      Recommendation 3.1 - The Committee recommends that in order to increase formal credit supply to all agrarian segments, the digitization of land records should be taken up by the states on a priority basis.

      Recommendation 3.2 - In order to ensure actual credit supply to the agricultural sector, the Committee recommends the introduction of Aadhaar-linked mechanism for Credit Eligibility Certificates. For example, in Andhra Pradesh, the revenue authorities issue Credit Eligibility Certificates to Tenant Farmers (under ‘Andhra Pradesh Land Licensed Cultivators Act No 18 of 2011'). Such tenancy /lease certificates, while protecting the owner’s rights, would enable landless cultivators to obtain loans. The Reserve Bank may accordingly modify its regulatory guidelines to banks to directly lend to tenants / lessees against such credit eligibility certificates.

      CIS Comment: The Committee in its discussion before the recommendation 3.2 has discussed the problems faced by landless farmers, however there is no discussion or evidence which suggests that an Aadhaar linked Credit Eligibility Certificate is the best solution, or even a solution to the problem. The concern being expressed here is not with the system of a Credit Eligibility Certificate, but with the insistence on linking it to an Aadhaar number, and whether the system can be put in place without linking the same to an Aadhaar number.

      Recommendation 6.11 - Keeping in view the indebtedness and rising delinquency, the Committee is of the view that the credit history of all SHG members would need to be created, linking it to individual Aadhaar numbers. This will ensure credit discipline and will also provide comfort to banks.

      CIS Comment: There is no discussion in the Report of the reasons for the increase in indebtedness of SHGs. While the recommendation of creating credit histories for SHG members is laudable and very welcome, the Report brings out no logical reason why these need to be linked to individual Aadhaar numbers, or how such linkage would solve any problems.

      Recommendation 6.13 - The Committee recommends that bank credit to MFIs should be encouraged. The MFIs must provide credit information on their borrowers to credit bureaus through Aadhaar-linked unique identification of individual borrowers.

      CIS Comment: Since the discussion preceding this recommendation clearly identifies multiple lending as one of the problems in the microfinance sector, and suggests better credit information on borrowers as a possible solution, this recommendation per se seems sound. However, we would still point out that the RBI may consider alternative means of obtaining borrower credit histories rather than relying on Aadhaar numbers alone.

      Recommendation 7.3 - Considering the widespread availability of mobile phones across the country, the Committee recommends the use of application-based mobiles as PoS for creating necessary infrastructure to support the large number of new accounts and cards issued under the PMJDY. Initially, the FIF can be used to subsidize the associated costs. This will also help to address the issue of low availability of PoS compared to the number of merchant outlets in the country. Banks should encourage merchants across geographies to adopt such application-based mobile PoS through some focused education and PoS deployment drives.

      Recommendation 7.5 - The Committee recommends that the National Payments Corporation of India (NPCI) should ensure faster development of a multi-lingual mobile application for customers who use non-smart phones, especially for users of NUUP; this will address the issue of linguistic diversity and thereby promote its popularization and quick adoption.

      Recommendation 7.8 - The Committee recommends that pre-paid payment instrument (PPI) interoperability may be allowed for non-banks to facilitate ease of access to customers and promote wider spread of PPIs across the country. It should however require non-bank PPI operators to enhance their customer grievance redressal mechanism to deal with any issues thereof.

      Recommendation 7.9 - The Committee is of the view that for non-bank PPIs, a small-value cashout may be permitted to incentivize usage with the necessary safeguards including adequate KYC and velocity checks.

      CIS Comments: While CIS supports the effort to use technology and mobile phones to increase banking penetration and improve access to the formal financial sector in rural and semi-rural areas, sufficient security mechanisms should be put in place while rolling out these services, keeping in mind the low levels of education and technical sophistication prevalent in those areas.

      Recommendation 8.1 - The Committee recommends that the deposit accounts of beneficiaries of government social payments, preferably all deposit accounts across banks, including the ‘in-principle’ licensed payments banks and small finance banks, be seeded with Aadhaar in a time-bound manner so as to create the necessary eco-system for cash transfer. This could be complemented with the necessary changes in the business correspondent (BC) system (see Chapter 6 for details) and increased adoption of mobile wallets to bridge the ‘last mile’ of service delivery in a cost-efficient manner at the convenience of the common person. This would also result in significant cost reductions for the government besides promoting financial inclusion.

      CIS Comment: While the report of the Committee has already given several examples of how cash transfers made directly into bank accounts (rather than requiring beneficiaries to be at a particular place at a particular time) could be more efficient as well as economical, the Committee is making the same point again here, in the chapter that deals specifically with government-to-person payments. However, even before this recommendation, there has been no discussion of the need for linking or “seeding” beneficiaries' deposit accounts with Aadhaar numbers, let alone of how it would solve any problems.

      Recommendation 10.6 - Given the focus on technology and the increasing number of customer complaints relating to debit/credit cards, the National Payments Corporation of India (NPCI) may be invited to SLBC meetings. They may particularly take up issues of Aadhaar-linkage in bank and payment accounts.

      CIS Comment: There is no discussion of why this recommendation has been made; more particularly, there is no discussion at all of why issues of Aadhaar linkage in bank and payment accounts need to be taken up.

      NN_Conference Report.pdf

      by Prasad Krishna last modified Feb 27, 2016 08:07 AM

      NN_Conference Report.pdf — PDF document, 1049 kB (1075119 bytes)

      Adoption of Standards in Smart Cities - Way Forward for India

      by Vanya Rakesh last modified Apr 11, 2016 03:04 AM
      With a paradigm shift towards the concept of ‘smart cities’ globally as well as in India, such cities have been defined by several international standardization bodies and countries; however, no uniform definition has been adopted globally. Standards are the glue that allows infrastructures to link and operate efficiently, making technologies interoperable and efficient.

      Click here to download the full file

      Globally, the pace of urbanization is increasing exponentially. The world’s urban population is projected to rise from 3.6 billion to 6.3 billion between 2011 and 2050. A proposed solution has been the development of sustainable cities that improve efficiency and integrate infrastructure and services [1]. It has been estimated that during the next 20 years, 30 Indians will leave rural India for urban areas every minute, necessitating smart and sustainable cities to accommodate them [2]. The Smart Cities Mission of the Ministry of Urban Development was announced in 2014; 100 cities were selected in 2015, and 20 of them were chosen for the first phase of the project in 2016. The Mission [3] lists the “core infrastructural elements” that a smart city would incorporate: adequate water supply, assured electricity, sanitation, efficient public transport, affordable housing (especially for the poor), robust IT connectivity and digitisation, e-governance and citizen participation, a sustainable environment, safety and security for citizens, and health and education.
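The scale these projections imply can be sanity-checked with quick arithmetic (a back-of-the-envelope sketch; the growth-rate formula and rounding are my own, not figures from the cited reports):

```python
# Back-of-the-envelope check of the urbanization figures cited above.

# World urban population: 3.6 billion (2011) -> 6.3 billion (2050).
years = 2050 - 2011
implied_cagr = (6.3 / 3.6) ** (1 / years) - 1  # compound annual growth rate, ~1.4%/year
print(f"Implied urban growth rate: {implied_cagr:.2%} per year")

# "30 Indians leave rural India every minute", sustained for 20 years:
migrants = 30 * 60 * 24 * 365 * 20  # about 315 million people
print(f"Implied rural-to-urban migrants over 20 years: {migrants / 1e6:.0f} million")
```

Even rough figures like these make the planning challenge concrete: a migration flow on the order of hundreds of millions of people is what the Mission's "core infrastructural elements" must absorb.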

      With a paradigm shift towards the concept of ‘smart cities’ globally as well as in India, such cities have been defined by several international standardization bodies and countries; however, no uniform definition has been adopted globally. The envisioned modern and smart city promises delivery of high-quality services to citizens and will harness data-capture and communication-management technologies. The performance of such cities would be monitored on the basis of their physical as well as social structure, comprising smart approaches and solutions for utilities and transport.

      Standards are the glue that allows infrastructures to link and operate efficiently, as they make technologies interoperable. Interoperability is essential, and to ensure smart integration of the various systems in a smart city, internationally agreed standards, including technical specifications and classifications, must be adhered to. International standards ensure seamless interaction between components from different suppliers and technologies [4].

      Standardized indicators benefit smart cities in the following ways:

      1. Effective governance and efficient delivery of services.
      2. International and Local targets, benchmarking and planning.
      3. Informed decision making and policy formulation.
      4. Leverage for funding and recognition in international entities.
      5. Transparency and open data for investment attractiveness.
      6. A reliable foundation for use of big data and the information explosion to assist cities in building core knowledge for city decision-making, and enable comparative insight.
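The benchmarking and comparative insight these standardized indicators enable can be sketched in a few lines of code (the indicator names and values below are invented placeholders for illustration, not actual ISO 37120 indicators or WCCD data):

```python
# Minimal sketch: comparing cities on standardized, ISO 37120-style
# indicators. Indicator names and values are hypothetical.

cities = {
    "City A": {"PM2.5 (ug/m3)": 95, "Transit trips per capita": 45},
    "City B": {"PM2.5 (ug/m3)": 60, "Transit trips per capita": 120},
}

def benchmark(cities, indicator, lower_is_better=False):
    """Rank cities on one indicator, best first. The comparison is
    meaningful only because every city reports the same standardized
    metric, measured the same way."""
    return sorted(cities, key=lambda c: cities[c][indicator],
                  reverse=not lower_is_better)

print(benchmark(cities, "PM2.5 (ug/m3)", lower_is_better=True))
```

The point of the sketch is that ranking, target-setting, and open-data comparisons all become trivial once the indicator definitions are shared; without a common standard, each city's figures would be incomparable.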

      The adoption of standards for smart cities has been advocated across the world, as they are perceived to be an effective tool to foster the development of cities. The Director of the ITU Telecommunication Standardization Bureau, Chaesub Lee, is of the view that “Smart cities will employ an abundance of technologies in the family of the Internet of Things (IoT) and standards will assist the harmonized implementation of IoT data and applications, contributing to effective horizontal integration of a city’s subsystems” [5].

      Smart Cities standards in India

      The National Association of Software and Services Companies (NASSCOM) partnered with Accenture [6] to prepare a report, ‘Integrated ICT and Geospatial Technologies Framework for 100 Smart Cities Mission’ [7], exploring the role of ICT in developing smart cities [8] after the announcement of the Mission by the Indian government. The report, released in May 2015, lists 55 global standards, keeping in view several city sub-systems – urban planning, transport, governance, energy, climate and pollution management, etc. – which could be applicable to smart cities in India.

      Though NASSCOM is working closely with the Ministry of Urban Development to create a sustainable model for smart cities [9], given the lack of regulatory standards for smart cities, the Bureau of Indian Standards (BIS) has undertaken the task of formulating standardised guidelines for central and state authorities on the planning, design and construction of smart cities, by setting up a technical committee under its Civil Engineering department. Adoption of these standards by implementing agencies would, however, be voluntary; they are intended to complement internationally available documents in this area [10].

      Developing national standards in line with these international standards would enable interoperability (i.e. devices and systems working together) and provide a roadmap for addressing key issues like data protection, privacy and other risks inherent in the digital delivery and use of public services in the envisioned smart cities. These issues call for comprehensive data management standards in India to instil public confidence and trust [11].

      Key International Smart Cities Standards

      The following are key internationally accepted and recognized smart cities standards, developed by leading organisations and the national standardization bodies of several countries, which India could adopt or use as the basis for national standards.

      The International Organization for Standardization (ISO) - Smart Cities Standards

      ISO has been instrumental in advocating and developing standards for smart cities that safeguard people's right to a liveable and sustainable environment. The ISO Smart Cities Strategic Advisory Group uses the following working definition: A ‘Smart City’ is one that dramatically increases the pace at which it improves its social, economic and environmental (sustainability) outcomes, responding to challenges such as climate change, rapid population growth, and political and economic instability by fundamentally improving how it engages society, how it applies collaborative leadership methods, how it works across disciplines and city systems, and how it uses data information and modern technologies in order to transform services and quality of life for those in and involved with the city (residents, businesses, visitors), now and for the foreseeable future, without unfair disadvantage of others or degradation of the natural environment. [For details see ISO/TMB Smart Cities Strategic Advisory Group Final Report, September 2015 (ISO Definition, June 2015)].

      The ISO Technical Committee 268 works on standardization in the field of Sustainable Development in Communities [12], encouraging the development and implementation of holistic, cross-sector and area-based approaches to sustainable development in communities. The Committee comprises three Working Groups [13]:

• Working Group 1: System Management (ISO 37101). This standard sets out requirements, guidance and supporting techniques for sustainable development in communities. It is designed to help communities of all kinds manage their sustainability, smartness and resilience, improve their contribution to sustainable development, and assess their performance in this area [14].
• Working Group 2: City Indicators. The key smart city standards developed by ISO TC 268 WG 2 are:

      ISO 37120 Sustainable Development of Communities — Indicators for City Services and Quality of Life

One of the key standards, and an important step in this regard, was ISO 37120:2014, developed under ISO Technical Committee 268 (Standardization in the field of Sustainable Development in Communities). It provides clearly defined city performance indicators, divided into core and supporting indicators, as a benchmark for city services and quality of life, along with a standard approach for measuring each, for use by city leaders and citizens [15]. The standard is global in scope and can help cities prioritize budgets, improve operational transparency, and support open data and applications [16]. It follows the principles set out in, and can be used in conjunction with, ISO 37101 [17].

ISO 37120, published in 2014, was the first ISO standard on global city indicators. It was developed from a set of indicators created and extensively tested by the Global City Indicators Facility (GCIF, a project of the University of Toronto) and its 250+ member cities worldwide. GCIF is committed to building standardized city indicators for performance management, including a database of comparable statistics that allows cities to track their effectiveness on everything from planning and economic growth to transportation, safety and education [18].

The World Council on City Data (WCCD) [19], a sister organization of the GCI/GCIF, was established in 2014 to operationalize ISO 37120 across cities globally. The standard encompasses 100 indicators developed around 17 themes to support city services and quality of life, and reported data is accessible through the WCCD Open City Data Portal, which allows for cutting-edge visualizations and comparisons. Indian cities are not yet listed with the WCCD [20].

      The indicators are listed under the following heads [21]:

      1. Economy
      2. Education
      3. Environment
      4. Energy
      5. Finance
      6. Fire and Emergency Responses
      7. Governance
      8. Health
      9. Safety
      10. Shelter
      11. Recreation
      12. Solid Waste
      13. Telecommunication and innovation
      14. Transportation
      15. Urban Planning
      16. Waste water
      17. Water and Sanitation

This International Standard is applicable to any city, municipality or local government that undertakes to measure its performance in a comparable and verifiable manner, irrespective of its size, location or level of development. City indicators can serve as critical tools for city managers, politicians, researchers, business leaders, planners, designers and other professionals [22]. The WCCD highlights the need for cities to have a set of globally standardized indicators to [23]:

      1. Manage and make informed decisions through data analysis
      2. Benchmark and target
      3. Leverage Funding with senior levels of government
      4. Plan and establish new frameworks for sustainable urban development
      5. Evaluate the impact of infrastructure projects on the overall performance of a city.
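As a rough illustration of how ISO 37120-style reporting enables this kind of benchmarking, the sketch below models a city's indicator report as structured records. The theme names follow the 17 headings listed above; the indicator labels and figures are invented for illustration and are not verbatim from the standard.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    theme: str    # one of the 17 ISO 37120 themes, e.g. "Transportation"
    name: str     # indicator label (illustrative, not from the standard text)
    value: float
    unit: str
    core: bool    # ISO 37120 distinguishes core from supporting indicators

# Hypothetical report for one city; all figures are invented.
report = [
    Indicator("Transportation", "km of public transit per 100,000 population",
              12.4, "km", True),
    Indicator("Environment", "fine particulate matter (PM2.5) concentration",
              38.0, "ug/m3", True),
    Indicator("Education", "student/teacher ratio", 28.5, "ratio", False),
]

# Because every city fills the same schema, reports become comparable:
# selecting only the core indicators yields the mandatory benchmark set.
core_only = [i for i in report if i.core]
print(len(core_only))  # 2
```

The point of the fixed schema is that a WCCD-style portal can aggregate such records across hundreds of cities and compare them theme by theme, which free-form municipal statistics do not allow.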

ISO/DTR 37121: Inventory and Review of Existing Indicators on Sustainable Development and Resilience in Cities

The second standard under ISO TC 268 WG 2 is ISO 37121, which defines additional indicators related to sustainable development and resilience in cities, including smart cities, smart grids, economic resilience, green buildings, political resilience, and protection of biodiversity. The complete list can be viewed on the Resilient Cities website [24].

Working Group 3: Terminology. No publicly available documents so far detail the status of this group's activities. ISO Technical Committee 268 also includes Sub-Committee 1 (Smart Community Infrastructure) [25], comprising the following Working Groups: 1) WG 1: Infrastructure Metrics, and 2) WG 2: Smart Community Infrastructure.

      The key Smart Cities Standards developed by ISO under this are:

• ISO/TS 37151:2015 Smart community infrastructures — Principles and Requirements for Performance Metrics
  In 2015, ISO released a new technical specification for smart cities, ISO/TS 37151:2015, on principles and requirements for performance metrics. The purpose of standardization in the field of smart community infrastructures, such as energy, water, transportation, waste, and information and communications technology (ICT), is to promote international trade in community infrastructure products and services and to improve sustainability in communities by establishing harmonized product standards [26]. The metrics in this standard will support city and community managers in planning and measuring performance, and in comparing and selecting procurement proposals for products and services aimed at improving community infrastructures [27].
  This Technical Specification gives principles and specifies requirements for the definition, identification, optimization, and harmonization of community infrastructure performance metrics, and gives recommendations for analysis regarding the interoperability, safety and security of community infrastructures [28]. It supports the use of ISO 37120 [29].

      • ISO/TR 37150:2014 Smart Community Infrastructures - Review of Existing Activities Relevant to Metrics
  This standard addresses community infrastructures such as energy, water, transportation, waste, and information and communications technology (ICT). Smart community infrastructures take into consideration environmental impact, economic efficiency and quality of life by using ICT and renewable energies to achieve integrated management and optimized control of infrastructures. Integrating smart community infrastructures helps improve the lifestyles of a community's citizens by, for example, reducing costs, increasing mobility and accessibility, and reducing environmental pollutants.
  ISO/TR 37150 reviews relevant metrics for smart community infrastructures and gives stakeholders a better understanding of the smart community infrastructures available around the world, helping to promote international trade in community infrastructure products and to share information about leading-edge technologies that improve sustainability in communities [30]. This standard, along with the above-mentioned standards [31], supports the multi-billion-dollar smart city technology industry.

Several other ISO Working Groups developing standards applicable to smart and sustainable cities are listed on our website [32].

The International Telecommunication Union (ITU)

The ITU is another global body working on the development of smart city standards.

In 2015, the ITU formed a study group to tackle standardization requirements for the Internet of Things (IoT), with an initial focus on IoT applications in smart cities that address urban development challenges [33], and to enable the coordinated development of IoT technologies, including machine-to-machine communications and ubiquitous sensor networks. The group, titled “ITU-T Study Group 20: IoT and its applications, including smart cities and communities”, was established to develop standards that leverage IoT technologies to address urban-development challenges, along with mechanisms for the interoperability of IoT applications and the datasets employed by various vertically oriented industry sectors [34].

ITU-T also concluded a focus group on smart sustainable cities in May 2015, which acted as an open platform for smart city stakeholders to exchange knowledge in the interests of identifying the standardized frameworks needed to support the integration of ICT services in smart cities. Its parent group, ITU-T Study Group 5, has agreed on the following definition of a smart sustainable city:
"A smart sustainable city is an innovative city that uses information and communication technologies (ICTs) and other means to improve quality of life, efficiency of urban operation and services, and competitiveness, while ensuring that it meets the needs of present and future generations with respect to economic, social, environmental as well as cultural aspects".

      UK - British Standards Institution

Apart from the global standards-setting organisations, many countries have been developing their own standards to address the growth of smart cities. In the UK, the British Standards Institution (BSI) has been commissioned by the UK Department for Business, Innovation and Skills (BIS) to conceive a Smart Cities Standards Strategy that identifies the vectors of smart city development where standards are needed. The standards are developed through a consensus-driven process under the BSI to ensure good practice is shared among all actors. The BSI also launched the Cities Standards Institute to bring cities together with key industry leaders and innovators to identify the challenges facing cities, provide solutions to common problems, and define the future of smart city standards [35].

• PAS 181 Smart city framework - Guide to establishing strategies for smart cities and communities establishes a good-practice framework for city leaders to develop, agree and deliver smart city strategies that can transform their city's ability to meet future challenges and deliver its goals. The smart city framework (SCF) does not describe a one-size-fits-all model for the future of UK cities; rather, it focuses on the enabling processes by which the innovative use of technology and data, together with organizational change, can help deliver the diverse visions for future UK cities in more efficient, effective and sustainable ways [36].

• PD 8101 Smart cities - Guide to the role of the planning and development process gives guidance on planning new developments under smart city plans and provides an overview of the key issues to be considered and prioritized. The document is for use by local authority planning and regeneration officers to identify good practice in a UK context and the tools they can use to implement it. The aim is to enable new developments to be built in a way that supports smart city aspirations at minimal cost [37].

• PAS 182 Smart city concept model - Guide to establishing a model for data establishes an interoperability and data-sharing framework between agencies in smart cities, so that:

  1. information can be shared and understood between organizations and people at each level of the city;
  2. the derivation of data in each layer can be linked back to data in the previous layer; and
  3. the impact of a decision can be observed back in operational data.

  The smart city concept model (SCCM) provides a framework that can normalize and classify information from many sources, so that data sets can be discovered and combined to build a better picture of the needs and behaviours of a city's citizens (residents and businesses), helping to identify issues and devise solutions. PAS 182 is aimed at organizations that provide services to communities in cities and manage the resulting data, as well as decision-makers and policy developers in cities [38].
• PAS 180 Smart cities - Vocabulary helps build a strong foundation for future standardization and good practice by providing an industry-agreed understanding of smart city terms and definitions for use in the UK. It provides a working definition of a smart city: “Smart Cities” is a term denoting the effective integration of physical, digital and human systems in the built environment to deliver a sustainable, prosperous and inclusive future for its citizens [39]. The standard aims to improve communication and understanding of smart cities by providing a common language for developers, designers, manufacturers and clients. It also defines smart city concepts across the different infrastructure and system elements used in all service-delivery channels, and is intended for city authorities and planners, buyers of smart city services and solutions [40], and product and service providers.
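The normalization idea behind the PAS 182 concept model described above can be sketched in a few lines. This is not the normative PAS 182 model; the concept label, field names and the two sample agency feeds below are all hypothetical, chosen only to show how mapping heterogeneous records onto one shared shape lets datasets from different city agencies be discovered and combined.

```python
# Two hypothetical agency feeds describing the same location with
# different field names (a common obstacle to city-wide analysis).
transport_feed = {"stop_id": "S17", "lat": 51.5, "lon": -0.12, "boardings": 4200}
waste_feed = {"site": "Depot-3", "latitude": 51.5, "longitude": -0.12, "tonnes": 18.0}

def to_concept(record, mapping, concept):
    """Normalise a source record into one shared 'place plus observation' shape,
    in the spirit of a city-wide concept model."""
    return {
        "concept": concept,
        "place": (record[mapping["lat"]], record[mapping["lon"]]),
        "observation": record[mapping["value"]],
    }

normalised = [
    to_concept(transport_feed,
               {"lat": "lat", "lon": "lon", "value": "boardings"}, "PLACE"),
    to_concept(waste_feed,
               {"lat": "latitude", "lon": "longitude", "value": "tonnes"}, "PLACE"),
]

# Once both feeds share one shape, a city-wide query can relate them by place.
same_place = normalised[0]["place"] == normalised[1]["place"]
print(same_place)  # True
```

The design choice this illustrates is that interoperability comes from agreeing on the shared concepts and mappings once, rather than writing a bespoke translation between every pair of agency systems.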

       

      Endnotes

      [1] See: http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf.

      [2] See: http://www.ibm.com/smarterplanet/in/en/sustainable_cities/ideas/.

      [3] See: http://www.thehindubusinessline.com/economy/smart-cities-mission-welcome-to-tomorrows-world/article8163690.ece.

      [4] See: http://www.iec.ch/whitepaper/pdf/iecWP-smartcities-LR-en.pdf.

      [5] See: http://www.iso.org/iso/news.htm?refid=Ref2042.

      [6] See: http://www.livemint.com/Companies/5Twmf8dUutLsJceegZ7I9K/Nasscom-partners-Accenture-to-form-ICT-framework-for-smart-c.html.

      [7] See: http://www.nasscom.in/integrated-ict-and-geospatial-technologies-framework-100-smart-cities-mission.

      [8] See: http://www.cxotoday.com/story/nasscom-creates-framework-for-smart-cities-project/.

      [9] See: http://www.livemint.com/Companies/5Twmf8dUutLsJceegZ7I9K/Nasscom-partners-Accenture-to-form-ICT-framework-for-smart-c.html.

      [10] See: http://www.business-standard.com/article/economy-policy/in-a-first-bis-to-come-up-with-standards-for-smart-cities-115060400931_1.html.

      [11] See: http://www.longfinance.net/groups7/viewdiscussion/72-financing-financing-tomorrow-s-cities-how-standards-can-support-the-development-of-smart-cities.html?groupid=3.

      [12] See: http://www.iso.org/iso/iso_technical_committee?commid=656906.

      [13] See: http://cityminded.org/wp-content/uploads/2014/11/Patricia_McCarney_PDF.pdf.

      [14] See: http://www.iso.org/iso/news.htm?refid=Ref1877.

      [15] See: http://smartcitiescouncil.com/article/new-iso-standard-gives-cities-common-performance-yardstick.

      [16] See: http://smartcitiescouncil.com/article/dissecting-iso-37120-why-new-smart-city-standard-good-news-cities.

      [17] See: http://www.iso.org/iso/catalogue_detail?csnumber=62436.

      [18] See: http://www.cityindicators.org/.

      [19] See: http://www.dataforcities.org/.

      [20] See: http://news.dataforcities.org/2015/12/world-council-on-city-data-and-hatch.html.

      [21] See: http://news.dataforcities.org/2015/12/world-council-on-city-data-and-hatch.html.

      [22] See: http://www.iso.org/iso/37120_briefing_note.pdf.

      [23] See: http://www.dataforcities.org/wccd/.

      [24] See: http://resilient-cities.iclei.org/fileadmin/sites/resilient-cities/files/Webinar_Series/HERNANDEZ_-_ICLEI_Resilient_Cities_Webinar__FINAL_.pdf.

      [25] See: http://www.iso.org/iso/iso_technical_committee?commid=656967.

      [26] See: https://www.iso.org/obp/ui/#iso:std:iso:ts:37151:ed-1:v1:en.

      [27] See: http://www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref2001&utm_medium=email&utm_campaign=ISO+Newsletter+November&utm_content=ISO+Newsletter+November+CID_4182720c31ca2e71fa93d7c1f1e66e2f&utm_source=Email%20marketing%20software&utm_term=Read%20more.

      [28] See: http://www.iso.org/iso/37120_briefing_note.pdf.

      [29] See: http://standardsforum.com/isots-37151-smart-cities-metrics/.

      [30] See: http://www.iso.org/iso/executive_summary_iso_37150.pdf.

      [31] See: http://standardsforum.com/isots-37151-smart-cities-metrics/.

      [32] See: http://cis-india.org/internet-governance/blog/database-on-big-data-and-smart-cities-international-standards.

      [33] See: http://smartcitiescouncil.com/article/itu-takes-internet-things-standards-smart-cities.

      [34] See: https://www.itu.int/net/pressoffice/press_releases/2015/22.aspx.

      [35] See: http://www.bsigroup.com/en-GB/smart-cities/.

      [36] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-181-smart-cities-framework/.

      [37] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PD-8101-smart-cities-planning-guidelines/.

      [38] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-182-smart-cities-data-concept-model/.

      [39] See: http://www.iso.org/iso/smart_cities_report-jtc1.pdf.

      [40] See: http://www.bsigroup.com/en-GB/smart-cities/Smart-Cities-Standards-and-Publication/PAS-180-smart-cities-terminology/.
