Blog

by kaeru — last modified Mar 25, 2013 11:14 AM

GSMA Partners Meeting

by Prasad Krishna last modified May 05, 2014 07:03 AM

GSMA_Meeting_9.04.2014.pdf — PDF document, 105 kB (107973 bytes)

Identity and Privacy

by Prasad Krishna last modified May 06, 2014 04:00 AM

Identity and Privacy.zip — ZIP archive, 3969 kB (4064327 bytes)

National Security and Privacy

by Prasad Krishna last modified May 06, 2014 04:03 AM

National Security and Privacy.zip — ZIP archive, 1701 kB (1742258 bytes)

Transparency and Privacy

by Prasad Krishna last modified May 06, 2014 04:07 AM

Transperancy and Privacy.zip — ZIP archive, 1225 kB (1255316 bytes)

Networks: What You Don’t See is What You (for)Get

by Nishant Shah last modified May 28, 2014 09:30 AM
When I start thinking about DML (digital media and learning) and other such “networks” that I am plugged into, I often get a little confused about what to call them.

Banner image credit: Alexander Baxevanis

The blog entry was originally published on DML Central on April 17, 2014, and mirrored on the Hybrid Publishing Lab on May 13, 2014.


Are we an ensemble of actors? A cluster of friends? A conference of scholars? A committee of decision makers? An array of perspectives? A group of associates? A play-list of voices? I do not pose these questions rhetorically, though I do enjoy rhetoric. I want to look at this inability to name collectives, and the confusions and ambiguity it produces, as central to our conversations around digital thinking. In particular, I want to look at the notion of the network. Because I am sure that if we were to go for the most neutralised digital term to characterise this collection that we all weave in and out of, it would have to be the network. We are a network.[1]

But what does it mean to say that we are a network? The network is a very strange thing. Especially within the realms of the Internet, which in itself purports to be a giant network, the network is self-explanatory, self-referential and completely denuded of meaning. A network is benign and, like the digital that foregrounds the network aesthetic, inscrutable. You cannot really touch a network or name it. You cannot shape it or define it. You can produce momentary snapshots of it, but you can never contain it or limit it. The network cannot be held or materially felt.

And yet, the network touches us. We live within networked societies. We engage in networking – network as a verb. We are a network – network as a noun. We belong to networks – network as a collective. In all these poetic mechanisms of network, there is perhaps the core of what we want to talk about today – the tension between the local and the global and the way in which we will understand the Internet and then the frameworks of governance and policy that surround it.

Let me begin with a genuine question. What predates the network? Because the network is a very new word. The first etymological trace of network as a verb is from 1887, within broadcast and communications models, to talk about an outreach – as in 'to cover with a network.' The idea of a network as a noun is older: in the 1550s, 'net-like arrangements of threads, wires, etc.' were first identified as a network. In the second half of the industrial 19th century, the term network was used for understanding an extended, complex, interlocking system. The idea of a network as a set of connected people emerged in the latter half of the 20th century. I am pointing at these references to remind us that the ubiquitous presence of the network, as a practice, as a collective, and as a metaphor that seeks to explain the rest of the world around us, is a relatively new phenomenon. And we need to be aware of the fact that the network, especially as it is understood in computing and digital technologies, is a particular model through which objects, individuals and the transactions between them are imagined.

For anybody who looks at the network itself – especially the digital network that we have accepted as the basis on which everything from social relationships on Facebook to global financial arcs are defined – we know that the network is in a state of crisis.

Networks of crises: The Bangalore North East Exodus

Let me illustrate the multiple ways in which the relationship between networks and crisis has been imagined through a particular story. In August 2012, I woke up one morning to realise that I was living in a city of crisis. Bangalore, which is one of my homes, where the largest preoccupations to date have been about bad roads, stray dogs, and occasionally, the lack of a nightlife, was suddenly a space that people wanted to flee and occupy simultaneously.

Through the technology-mediated gossip mill that produced rumours faster than the speed of a digital click, imaginations of terror, danger, and material harm found currency. The city suddenly witnessed thousands of people running away from it, heading back to their imagined homelands. It was called the North East exodus, where, following an ethnic-religious clash between two traditionally hostile communities in Assam, there were rumours that the large North East Indian community in Bangalore was going to be attacked by certain Muslim factions at the end of Ramadan.
The media spectacle of the exodus around questions of religion, ethnicity, regionalism and belonging only emphasised that we now live in a new state of connectedness – a network society that can no longer be controlled, contained or corrected by official authorities and their voices. Despite a barrage of messages from law enforcement and security authorities – on email, on large screens on the roads, and on our cell phones – there was a growing anxiety and a spiralling information explosion that produced an imaginary situation of precariousness and bodily harm. For me, this event was one of the first signals of how to imagine the network society in a crisis, especially when it came to Bangalore, which is supposed to represent the Silicon dreams of an India that is shining brightly. While there is much to be unpacked about the political motivations and the ecologies of fear that our migrant lives in global cities are enshrined in, I want to focus specifically on what the emergence of this network society means.

There is an imagination, especially in cities like Bangalore, of digital technologies as necessarily plugging us into larger networks of global information consumption. The idea that technology plugs us into transnational circuits is so pervasive that it tunes us only toward an idea of connectedness that is always outward looking, expanding the scope of nation, community and body.

However, the ways in which information circulated during this phenomenon remind us that digital networks are also embedded in local practices of living and survival. Most of the time, these networks are so natural and such an integral part of the crucial mechanics of urban life that they appear as habits, without any presence or visibility. In times of crises – perceived or otherwise – these networks make themselves visible, to show that they are also inward looking. But in this production of hyper-visible spectacles, the network works incessantly to make itself invisible.

Which is why, in the case of the North East exodus, the steps leading to the resolution of the crisis – a crisis constructed and fuelled by networks – are interesting. As government and civil society efforts to control the rumours and panic reached an all-time high and people continued to flee the city, the government eventually went in to regulate the technology itself. There were expert panel discussions about whether digital technologies were to blame for this rumour mill. There was a ban on mass-messaging and a cap on the number of messages that each mobile phone subscriber could send per day. The Information and Broadcasting Ministry, along with the Information Technology cell, started monitoring and punishing people for false and inflammatory information.

Network as Crisis: The unexpected visibility of a network

What, then, was the nature of the crisis in this situation? It is a question worth exploring. We would imagine that this was a crisis about the nationwide building of mega-cities filled with immigrant bodies that are not allowed their differences because they all have to be cosmopolitan and mobile bodies. The crisis could have been read as one of neo-liberal flatness in imagining the nation and its fragments, which hides the inherent and historical sites of conflict under the seductive rhetoric of economic development. And yet, when we look at how the resolutions were operationalised, it looks as if the crisis was the appearance and visibility of hitherto hidden local networks of information and communication.

In her analysis of networks, Brown University’s Wendy Chun posits that this is why networks are an opaque metaphor. If the function of metaphor is to explain, through familiarity, objects which are new to us, the network as an explanatory paradigm presents a new conundrum. While the network presumes an exteriority that it seeks to present, while the network allows for a subjective interiority of the actor and its decisions, while the network grants visibility and form to the everyday logic of organisation, what the network actually seeks to explain is itself. Or, in less evocative terms, the network is not only the framework through which we analyse, but also the object of analysis. Once the network has been deployed as a paradigm through which to understand a crisis, once the network has made itself visible, all our efforts are directed at explaining and strengthening it and, almost like digital mothers, comforting the network back into its peaceful existence as infrastructure. We develop better tools to regulate the network. We define new parameters to mine the data more effectively. We develop policies to govern the network, and govern through it, with greater transparency and ease.

Thus, in the case of the North East exodus, instead of addressing the larger issues of conservative parochialism, an increasing backlash by right-wing governments and a growing hostility that emerges from these cities that nobody possesses and nobody belongs to, the efforts were directed at blaming technology as the site where the problem is located and the network as the object that needs to be controlled. What emerged was a series of corrective mechanisms and a set of redundant regulations that capped the number of text messages people could send per day or policed the Internet for the spreading of rumours. The entire focus was on information management, as if the mass exodus of people from the NE Indian states, and the sense of fragility that the city had been immersed in, were all due to the pervasive and ubiquitous information gadgets and their ability to proliferate in p2p (peer-to-peer) environments outside of the government’s control. This lack of exteriority to the network is something that very few critical voices have pointed out.

Duncan Watts, a pioneer of network science, working through the logic of nodes, traffic and edges, has suggested there is a great problem in the ways in which we understand the process of network making. I am paraphrasing his complex mathematical text that explains the production of physical networks – what he calls small worlds – and pointing out his strong critique of how social scientists engage with networks. In the social sciences’ imagination of networks, there is a messy exteriority – fuzzy, complex and often not reducible to patterns or basic principles. The network is a distilling of the messy exteriority, a representation of the complex interplay between different objects and actors, and a visual mapping of things as they are. Which is to say, we imagine there is a material reality and the network is a tool by which this reality, or at least parts of it, is mapped and represented to us in patterns which can help us understand its true nature.

Drawing from practices of network modelling and building, Watts showed that we have the equation wrong. The network is not a representation of reality but the ontology of reality. The network is not about trying to make sense of an exteriority. Instead, the network is an abstract and ideological map that constructs reality in a particular way. In other words, the network precedes the real, and because of its ability to produce objective, empiricist and reductive principles (constantly filtering out that which is not important to the logic or the logistics of the network design), it then gives us a reality that is produced through the network principles. To make it clear, the network representation is not the derivative of the real but the blueprint of the real. And the real as we access it, through these networked tools, is not the raw and messy real but one that is constructed and shaped by the network in those ways. The network, then, needs to be understood, examined and critiqued, not as something that represents the natural, but as something that shapes our understanding of the natural itself.
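To give a concrete, if simplified, sense of the 'small worlds' Watts is known for, here is a minimal sketch of the Watts–Strogatz small-world model using the networkx library. This is purely illustrative: the node count, neighbourhood size and rewiring probability are arbitrary assumptions, not figures drawn from Watts' work or from this essay.

```python
# Illustrative sketch of the Watts-Strogatz "small world" model (assumes networkx).
# Parameter values are arbitrary and chosen only for demonstration.
import networkx as nx

n, k, p = 1000, 6, 0.05  # nodes, neighbours per node, rewiring probability

# Start from a regular ring lattice and randomly rewire a small fraction of edges,
# keeping the graph connected so path lengths are well defined.
G = nx.connected_watts_strogatz_graph(n, k, p)

# A few random "shortcuts" make average path lengths short while local
# clustering stays high: the small-world signature.
print("average shortest path length:", nx.average_shortest_path_length(G))
print("average clustering coefficient:", nx.average_clustering(G))
```

The point of the sketch is the same one the essay makes: the model does not passively map a messy social reality; it generates a tidy 'reality' from a handful of design choices.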

In the case of the Bangalore North East exodus, the network and its visibility created a problem for us – and the problem was that the network, which is supposed to be infrastructure, and hence by nature invisible, had suddenly become visible. We needed to make sure that it was shamed, blamed, named and tamed so that we could go back to our everyday practices of regulation, governance and policy.

The Intersectional Network

What I want to emphasise, then, is that the binary of the local versus the global, or the local working in tandem with the global, or the quaintly hybridised glocal, is not very generative in thinking of policy and politics around the Internet. What we need is to recognise what gets hidden in this debate. What becomes visible when it is not supposed to? What remains invisible despite all our efforts? And how do we develop a framework that actually moves beyond these binary modes of thinking, where the resolution is either to collapse them or to pretend that they do not exist in the first place? Working with frameworks like the network makes us aware of the ways in which these ideas of the global and the local are constructed and continue to remain the focus of our conversations, rendering invisible the real questions at hand.

Hence, we need to think of networks not as spaces of intersection, but as in need of intersections. Networks, because of their predatory, expanding nature and their constant interaction with the edges, often appear dynamic and inclusive. We need to now think of networks as in need of intersections – of intersectional networks. Developing intersections of temporality, of geography and of contexts is a start. But we need to move one step beyond – and look at the couplings of aspiration, inspiration, autonomy, control, desire, belonging and precariousness that often mark the new digital subjects. And our policies, politics and regulations will have to be tailored not only to stop the person abandoning her life and running to a place of safety, not only to stop the rumours within information and communication networks, not only to create stop-gap measures curbing the flows of gossip, but to actually account for the human conditions of life and living.


[1]. This post has grown from conversations across three different locations. The first draft of this talk was presented at the Habits of Living Conference, organised by the Centre for Internet & Society and Brown University, in Bangalore. A version of this talk benefited from inputs at the University of California Humanities Research Institute in Irvine, which helped sharpen its focus. The responses to this story at the Milton Wolf Seminar at the America Austria Foundation, Austria, helped make it more concrete in relation to the challenges that the “network” throws up for our digital modes of thinking. I am very glad to be able to put the talk into writing this time, and look forward to more responses.

Filtering content on the internet

by Chinmayi Arun last modified May 06, 2014 09:33 AM

The op-ed was published in the Hindu on May 2, 2014.


On May 5, the Supreme Court will hear Kamlesh Vaswani’s infamous anti-pornography petition again. The petition makes some rather outrageous claims. Watching pornography ‘puts the country’s security in danger’ and it is ‘worse than Hitler, worse than AIDS, cancer or any other epidemic,’ it says. This petition has been pending before the Court since February 2013, and seeks a new law that will ensure that pornography is exhaustively curbed.

Disintegrating into binaries

The petition assumes that pornography causes violence against women and children. The trouble with such a claim is that the debate disintegrates into binaries; the two positions being that pornography causes violence or that it does not. The fact remains that the causal link between violence against women and pornography is yet to be proven convincingly and remains the subject of much debate. Additionally, since the term pornography refers to a whole range of explicit content, including homosexual adult pornography, it cannot be argued that all pornography objectifies women or glamorises violent treatment of them.

Allowing even for the petitioner’s legitimate concern about violence against women, it is interesting to note that of all the remedies available, he seeks the one which is authoritarian but may not have any impact at all. Mr. Vaswani could have, instead, encouraged the state to do more toward its international obligations under the Convention on the Elimination of Discrimination against Women (CEDAW). CEDAW’s General Recommendation No. 19 is about violence against women and recommends steps to be taken to reduce violence against women. These include encouraging research on the extent, causes and effects of violence, and adopting preventive measures, such as public information and education programmes, to change attitudes concerning the roles and status of men and women.

Child pornography

Although different countries disagree about the necessity of banning adult pornography, there is general international consensus about the need to remove child pornography from the Internet. Children may be harmed in the making of pornography, and would at the very minimum have their privacy violated to an unacceptable degree. Being minors, they are not in a position to consent to the act. Each act of circulation and viewing adds to the harmful nature of child pornography. Therefore, an argument can certainly be made for the comprehensive removal of this kind of content.

Indian policy makers have been alive to this issue. The Information Technology Act (IT Act) contains a separate provision for material depicting children explicitly or obscenely, stating that those who circulate such content will be penalised. The IT Act also criminalises watching child pornography (whereas watching regular pornography is not a crime in India).

Intermediaries are obligated to take down child pornography once they have been made aware that they are hosting it. Organisations or individuals can proactively identify and report child pornography online. Other countries have tried, with reasonable success, systems using hotlines, verification of reports and co-operation of internet service providers to take down child pornography. However, these systems have also sometimes resulted in the removal of other legitimate content.

Filtering speech on the Internet

Child pornography can be blocked or removed using the IT Act, which permits the government to send lists of URLs of illegal content to internet service providers, requiring them to remove this content. Even private parties can send notices to online intermediaries informing them of illegal content and thereby making them legally accountable for such content if they do not remove it. However, none of this will be able to ensure the disappearance of child pornography from the Internet in India.

Technological solutions like filtering software that screens or blocks access to online content, whether at the state, service provider or user level, can at best make child pornography inaccessible to most people. Anyone with more than amateur skill will be able to circumvent such barriers, since they hold only until better technology enables circumvention.

Additionally, attempts at technological filtering usually affect even speech that is not targeted by the filtering mechanism. Therefore, any system for filtering or blocking content from the Internet needs to build in safeguards to ensure that processes designed to remove child pornography do not end up being used to remove political speech or other constitutionally protected speech.
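For a concrete sense of why filtering both misses targeted content and catches untargeted content, here is a deliberately naive, hypothetical sketch of blocklist-style URL filtering of the kind described above. The domain names and the blocklist are invented for illustration; real filtering systems at the state or ISP level are far more elaborate, but the structural limits are similar.

```python
# Hypothetical, deliberately naive blocklist filter (illustration only).
from urllib.parse import urlparse

BLOCKLIST = {"banned-example.com", "another-banned-example.net"}  # made-up domains

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the blocklist."""
    host = urlparse(url).hostname or ""
    return host in BLOCKLIST

# Under-blocking: the same content served from a new or mirrored domain passes through.
print(is_blocked("http://banned-example.com/page"))               # True
print(is_blocked("http://mirror-of-banned.example/page"))         # False

# Over-blocking: every page on a listed host is blocked, including lawful speech.
print(is_blocked("http://banned-example.com/legitimate-article"))  # True
```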

In the Vaswani case, the government has correctly explained to the Supreme Court that any greater attempt to monitor pornography is not technologically feasible. It has pointed out that human monitoring of content will delay transmission of data substantially, will slow down the Internet, and will also be ineffective, since the illegal content can easily be moved to other servers in other countries.

Making intermediaries liable for the content they host will undo the safe harbour protection granted to them by the IT Act. Without it, intermediaries like Facebook will actually have to monitor all the content they host, and the resources required for such monitoring will reduce the content that makes its way online. This would seriously impact the extensiveness and diversity of content available on the Internet in India. Additionally, when demands are made for the removal of legitimate content, profit-making internet companies will be disinclined to risk litigation much in the same way as Penguin was reluctant to defend Wendy Doniger’s book.

If the Supreme Court makes the mistake of creating a positive obligation to monitor Internet content for intermediaries, it will effectively kill the Internet in India.

(Chinmayi Arun is research director, Centre for Communication Governance, National Law University, Delhi, and fellow, Centre for Internet and Society, Bangalore)

Round-table on User Safety on the Internet

by Prasad Krishna last modified May 06, 2014 09:53 AM

Agenda_roundtable discussion-Bengaluru (1).pdf — PDF document, 258 kB (264900 bytes)

European Court of Justice rules Internet Search Engine Operator responsible for Processing Personal Data Published by Third Parties

by Jyoti Panday last modified May 14, 2014 02:18 PM
The Court of Justice of the European Union has ruled that "an internet search engine operator is responsible for the processing that it carries out of personal data which appear on web pages published by third parties.” The decision adds to the conundrum of maintaining a balance between freedom of expression, protecting personal data and intermediary liability.

The ruling is expected to have considerable impact on reputation- and privacy-related takedown requests, as under the decision data subjects may approach the operator directly to seek removal of links to web pages containing personal data. Until now, the burden of proof has rested on users; the new rules reverse it, placing the obligation for content regulation on companies rather than users.

A win for privacy?

The ECJ ruling addresses Mario Costeja González's complaint, filed in 2010 against Google Spain and Google Inc., requesting that personal data relating to him appearing in search results be protected and that data which was no longer relevant be removed. Referring to Directive 95/46/EC of the European Parliament, the court said that Google and other search engine operators should be considered 'controllers' of personal data. Following the decision, Google will be required to consider takedown requests for personal data, regardless of the fact that the processing of such data is carried out without distinction in respect of information other than the personal data.

The decision—which cannot be appealed—raises important questions about how this ruling will be applied in practice and its impact on the information available online in countries outside the European Union. The decree forces search engine operators such as Google, Yahoo and Microsoft's Bing to make judgement calls on the fairness of the information published through their services, which reach over 500 million people across the twenty-eight-nation bloc of the EU.

The ECJ ruled that search engines, 'as a general rule,' should place the right to privacy above the public's right to information. Under the verdict, links to irrelevant and out-of-date data need to be erased upon request, placing search engines in the role of controllers of information—beyond their earlier role as arbitrators linking to data that already existed in the public domain. The verdict highlights the power of search engines to retrieve controversial information while limiting their capacity to do so in the future.

The ruling calls for maintaining a balance between the legitimate interest of internet users in accessing personal information and upholding the data subject’s fundamental rights, but does not directly address either issue. The court also recognised that the data subject's rights override the interest of internet users, with exceptions pertaining to the nature of the information, its sensitivity for the data subject's private life and the role of the data subject in public life. Acknowledging that data belongs to the individual and is not the right of the company, European Commissioner Viviane Reding hailed the verdict as "a clear victory for the protection of personal data of Europeans".

The Court stated that if data is deemed irrelevant at the time of the case, even if it was lawfully processed initially, it must be removed, and that the data subject has the right to approach the operator directly for the removal of such content. The liability issue is further complicated by the fact that search engines such as Google do not publish the content; rather, they point to information that already exists in the public domain—raising questions about the degree of liability on account of third-party content displayed on their services.

The ECJ ruling is based on the case originally filed against Google Spain. González argued that a search for his name returned links to two pages originally published in 1998 on the website of the Spanish newspaper La Vanguardia. The Spanish Data Protection Agency did not require La Vanguardia to take down the pages; however, it did order Google to remove links to them. Google appealed this decision, following which the National High Court of Spain sought advice from the European court. The definition of Google as the controller of information raises important questions about the distinction between the liability of publishers and the liability of processors of information such as search engines.

The 'right to be forgotten'

The decision also brings to the fore the ongoing debate and fragmented opinions within the EU on the right of the individual to be forgotten. The 'right to be forgotten' has evolved from the European Commission's wide-ranging plans to overhaul its 1995 Data Protection Directive. The plans for the law included allowing people to request removal of personal data, with an obligation of compliance for service providers unless there were 'legitimate' reasons to do otherwise. Technology firms, rallying around issues of freedom of expression and censorship, have expressed concerns about the reach of the bill. Privacy-rights activists and European officials have upheld the notion of the right to be forgotten, highlighting the right of individuals to protect their honour and reputation.

These issues have been controversial among EU member states, with the UK's Ministry of Justice claiming the law 'raises unrealistic and unfair expectations' and seeking to opt out of the privacy laws. The opinion of the Advocate General of the European Court, Niilo Jääskinen, that the individual's right to seek removal of content should not be upheld if the information was published legally, contradicts the verdict of the ECJ ruling. The European Court of Justice's move is surprising to many; as Richard Cumbley, information-management and data protection partner at the law firm Linklaters, puts it, “Given that the E.U. has spent two years debating this right as part of the reform of E.U. privacy legislation, it is ironic that the E.C.J. has found it already exists in such a striking manner."

The economic implications of enforcing a liability regime in which search engine operators censor legal content in their results aside, the decision might also have a chilling effect on freedom of expression and access to information. Google called the decision “a disappointing ruling for search engines and online publishers in general” and said the company would take time to analyze the implications. While the implications of the decision are yet to be determined, it is important to bear in mind that while decisions like these are public, the refinements that Google and other search engines will have to make to their technology, and the judgement calls on the fairness of the information available online, are not.

The ECJ press release is available here and the actual judgement is available here.

Net Neutrality, Free Speech and the Indian Constitution – III: Conceptions of Free Speech and Democracy

by Gautam Bhatia last modified May 27, 2014 10:21 AM
In this 3 part series, Gautam Bhatia explores the concept of net neutrality in the context of Indian law and the Indian Constitution.

In the modern State, effective exercise of free speech rights is increasingly dependent upon an infrastructure that includes newspapers, television and the internet. Access to a significant part of this infrastructure is determined by money. Consequently, if what we value about free speech is the ability to communicate one’s message to a non-trivial audience, financial resources influence both who can speak and, consequently, what is spoken. The nature of the public discourse – what information and what ideas circulate in the public sphere – is contingent upon a distribution of resources that is arguably unjust and certainly unequal.

There are two opposing theories about how we should understand the right to free speech in this context. Call the first one of these the libertarian conception of free speech. The libertarian conception takes as given the existing distribution of income and resources, and consequently, the unequal speaking power that that engenders. It prohibits any intervention designed to remedy the situation. The most famous summary of this vision was provided by the American Supreme Court, when it first struck down campaign finance regulations, in Buckley v. Valeo: “the concept that government may restrict the speech of some [in] order to enhance the relative voice of others is wholly foreign to the First Amendment.” This theory is part of the broader libertarian worldview, which would restrict government’s role in a polity to enforcing property and criminal law, and views any government-imposed restriction on what people can do within the existing structure of these laws as presumptively wrong.

We can tentatively label the second theory as the social-democratic theory of free speech. This theory focuses not so much on the individual speaker’s right not to be restricted in using their resources to speak as much as they want, but upon the collective interest in maintaining a public discourse that is open, inclusive and home to a multiplicity of diverse and antagonistic ideas and viewpoints. Often, in order to achieve this goal, governments regulate access to the infrastructure of speech so as to ensure that participation is not entirely skewed by inequality in resources. When this is done, it is often justified in the name of democracy: a functioning democracy, it is argued, requires a thriving public sphere that is not closed off to some or most persons.

Surprisingly, one of the most powerful judicial statements for this vision also comes from the United States. In Red Lion v. FCC, while upholding the “fairness doctrine”, which required broadcasting stations to cover “both sides” of a political issue, and provide a right of reply in case of personal attacks, the Supreme Court noted:

“[Free speech requires] preserv[ing] an uninhibited marketplace of ideas in which truth will ultimately prevail, rather than to countenance monopolization of that market, whether it be by the Government itself or a private licensee. … It is the right of the public to receive suitable access to social, political, esthetic, moral, and other ideas and experiences which is crucial here.”

What of India? In the early days of the Supreme Court, it adopted something akin to the libertarian theory of free speech. In Sakal Papers v. Union of India, for example, it struck down certain newspaper regulations that the government was defending on grounds of opening up the market and allowing smaller players to compete, holding that Article 19(1)(a) – in language similar to what Buckley v. Valeo would hold, more than fifteen years later – did not permit the government to infringe the free speech rights of some in order to allow others to speak. The Court continued with this approach in its next major newspaper regulation case, Bennett Coleman v. Union of India, but this time, it had to contend with a strong dissent from Justice Mathew. After noting that “it is no use having a right to express your idea, unless you have got a medium for expressing it”, Justice Mathew went on to hold:

What is, therefore, required is an interpretation of Article 19(1)(a) which focuses on the idea that restraining the hand of the government is quite useless in assuring free speech, if a restraint on access is effectively secured by private groups. A Constitutional prohibition against governmental restriction on the expression is effective only if the Constitution ensures an adequate opportunity for discussion… Any scheme of distribution of newsprint which would make the freedom of speech a reality by making it possible the dissemination of ideas as news with as many different facets and colours as possible would not violate the fundamental right of the freedom of speech of the petitioners. In other words, a scheme for distribution of a commodity like newsprint which will subserve the purpose of free flow of ideas to the market from as many different sources as possible would be a step to advance and enrich that freedom. If the scheme of distribution is calculated to prevent even an oligopoly ruling the market and thus check the tendency to monopoly in the market, that will not be open to any objection on the ground that the scheme involves a regulation of the press which would amount to an abridgment of the freedom of speech.

In Justice Mathew’s view, therefore, freedom of speech is not only the speaker’s right (the libertarian view), but a complex balancing act between the listeners’ right to be exposed to a wide range of material, as well as the collective, societal right to have an open and inclusive public discourse, which can only be achieved by preventing the monopolization of the instruments, infrastructure and access-points of speech.

Over the years, the Court has moved away from the majority opinions in Sakal Papers and Bennett Coleman, and steadily come around to Justice Mathew’s view. This is particularly evident from two cases in the 1990s: in Union of India v. The Motion Picture Association, the Court upheld various provisions of the Cinematograph Act that imposed certain forms of compelled speech on moviemakers while exhibiting their movies, on the ground that “to earmark a small portion of time of this entertainment medium for the purpose of showing scientific, educational or documentary films, or for showing news films has to be looked at in this context of promoting dissemination of ideas, information and knowledge to the masses so that there may be an informed debate and decision making on public issues. Clearly, the impugned provisions are designed to further free speech and expression and not to curtail it.”

LIC v. Manubhai D. Shah is even more on point. In that case, the Court upheld a right of reply in an in-house magazine, “because fairness demanded that both view points were placed before the readers, however limited be their number, to enable them to draw their own conclusions and unreasonable because there was no logic or proper justification for refusing publication… the respondent’s fundamental right of speech and expression clearly entitled him to insist that his views on the subject should reach those who read the magazine so that they have a complete picture before them and not a one sided or distorted one…” This goes even further than Justice Mathew’s dissent in Bennett Coleman, and the opinion of the Court in Motion Picture Association, in holding that not merely is it permitted to structure the public sphere in an equal and inclusive manner, but that it is a requirement of Article 19(1)(a).

We can now bring the threads of the separate arguments in the three posts together. In the first post, we found that public law and constitutional obligations can be imposed upon private parties when they discharge public functions. In the second post, it was argued that the internet has replaced the park, the street and the public square as the quintessential forum for the circulation of speech. ISPs, in their role as gatekeepers, now play the role that government once did in controlling and keeping open these avenues of expression. Consequently, they can be subjected to public law free speech obligations. And lastly, we discussed how the constitutional conception of free speech in India, as the Court has gradually evolved it over many years, is a social-democratic one that requires keeping open a free and inclusive public sphere. And if there is one thing that fast-lanes over the internet threaten, it is certainly a free and inclusive (digital) public sphere. A combination of these arguments provides us with an arguable case for imposing obligations of net neutrality upon ISPs, even in the absence of statutory or regulatory obligations, grounded within the constitutional guarantee of the freedom of speech and expression.

For the previous post, please see: http://cis-india.org/internet-governance/blog/-neutrality-free-speech-and-the-indian-constitution-part-2.

_____________________________________________________________________________________________________

Gautam Bhatia — @gautambhatia88 on Twitter — is a graduate of the National Law School of India University (2011), and presently an LLM student at the Yale Law School. He blogs about the Indian Constitution at http://indconlawphil.wordpress.com. Here at CIS, he will be blogging on issues of online freedom of speech and expression.

Global Governance Reform Initiative

by Prasad Krishna last modified May 27, 2014 09:45 AM

Conference Program_GGRI_ FINAL.pdf — PDF document, 984 kB (1007636 bytes)

Net Freedom Campaign Loses its Way

by Sunil Abraham last modified May 27, 2014 11:07 AM
A recent global meet was a victory for governments and the private sector over civil society interests.

The article was published in the Hindu Businessline on May 10, 2014.


One word to describe NetMundial: Disappointing! Why? Because despite the promise, human rights on the Internet are still insufficiently protected. Snowden’s revelations starting last June threw the global Internet governance processes into crisis.

Things came to a head in October, when Brazil’s President Dilma Rousseff, horrified to learn that she was under NSA surveillance for economic reasons, called for the organisation of a global conference called NetMundial to accelerate Internet governance reform.

The NetMundial was held in São Paulo on April 23-24 this year. The result was a statement described as “the non-binding outcome of a bottom-up, open, and participatory process involving … governments, private sector, civil society, technical community, and academia from around the world.” In other words — it is international soft law with no enforcement mechanisms.

The statement emerges from “broad” rather than full consensus: governments such as India, Cuba and Russia, as well as civil society representatives, expressed deep dissatisfaction at the closing plenary. Since it is not binding international law, only time will tell whether each member of the different stakeholder groups will regulate itself.

Again, not easy, because the outcome document does not specifically prescribe what each stakeholder can or cannot do — it only says what internet governance (IG) should or should not be. And finally, there’s no global consensus yet on the scope of IG. The substantive consensus was disappointing in four important ways:

Mass surveillance: Civil society was hoping that the statement would make mass surveillance illegal. After all, global violation of the right to privacy by the US was the raison d'être of the conference.

Instead, the statement legitimised “mass surveillance, interception and collection” as long as it was done in compliance with international human rights law. This was clearly the most disastrous outcome.

Access to knowledge: The conference was not supposed to expand intellectual property rights (IPR) or enforcement of these rights. After all, a multilateral forum, WIPO, was meant to address these concerns. But in the days before the conference the rights-holders lobby went into overdrive and civil society was caught unprepared.

The end result — “freedom of information and access to information”, or the right to information in India, was qualified by the “rights of authors and creators”. Right to information laws across the world, including in India, contain almost a dozen exemptions, including IPR. The only thing to be grateful for is that this limitation did not find its way into the language on freedom of expression.

Intermediary liability: The language that limits liability for intermediaries basically provides for a private censorship regime without judicial oversight, and without explicit language protecting the rights to freedom of expression and privacy. Even though the private sector chants Hillary Clinton's Internet freedom mantra, it cares only for its own bottom line.

Net neutrality: Even though there was little global consensus, some optimistic sections of civil society were hoping that domestic best practice on network neutrality in Brazil’s Internet Bill of Rights — also known as the Marco Civil, which was signed into law during the inaugural ceremony of NetMundial — would make it into the statement. Unfortunately, this did not happen.

For almost a decade since the debate between the multi-stakeholder and multilateral model started, the multi-stakeholder model had produced absolutely nothing outside ICANN (Internet Corporation for Assigned Names and Numbers, a non-profit body), its technical fraternity and the standard-setting bodies.

The multi-stakeholder model is governance with the participation (and consent — depending on who you ask) of those stakeholders who are governed. In contrast, in the multilateral system, participation is limited to nation-states.

Civil society divisions

The inability of multi-stakeholderism to deliver also resulted in the fragmentation of global civil society regulars at Internet Governance Forums.

But in the run-up to NetMundial more divisions began to appear. If we ignore nuances, we could divide them into three groups. One, the ‘outsiders’, best exemplified by Jérémie Zimmermann of La Quadrature du Net. Jérémie ran an online campaign, organised a protest during the conference and did everything he could to prevent NetMundial from being sanctified by civil society consensus.

Two, the ‘process geeks’ — for these individuals and organisations, process was more important than principles. Most of them were as deeply invested in the multi-stakeholder model as ICANN and the US government, and some have been riding the ICANN gravy train for years.

Even worse, some were suspected of being astroturfers bootstrapped by the private sector and the technical community. None of them were willing to rock the boat. For the ‘process geeks’, seeing politicians and bureaucrats queue up like civil society to speak at the mike was the crowning achievement.

Three, the ‘principles geeks’ perhaps best exemplified by the Just Net Coalition who privileged principles over process. Divisions were also beginning to sharpen within the private sector. For example, Neville Roy Singham, CEO of Thoughtworks, agreed more with civil society than he did with other members of the private sector in his interventions.

In short, the ‘outsiders’ could not care less about the outcome and will do everything to discredit it; the ‘process geeks’ stood in ovation when the outcome document was read at the closing plenary; and the ‘principles geeks’ returned devastated.

For the multi-stakeholder model to survive it must advance democratic values, not undermine them.

This will only happen if there is greater transparency and accountability. Individuals, organisations and consortia that participate in Internet governance processes need to disclose lists of donors including those that sponsor travel to these meetings.

Civil Society - Privacy Bill

by Prasad Krishna last modified May 27, 2014 11:34 AM

privacy bill related story.pdf — PDF document, 999 kB (1023754 bytes)

FOEX Live: May 26-27, 2014

by Geetha Hariharan last modified May 27, 2014 12:42 PM
A selection of news from across India implicating online freedom of expression and use of digital technology

Media reports across India are focusing on the new government and its Cabinet portfolios. In the midst of the celebration of and grief over the regime change, we found many reports indicating that civil society is wary of the new government’s stance towards Internet freedoms.

Andhra Pradesh:

Andhra MLA and All India Majlis-e-Ittihad ul-Muslimin member Akbaruddin Owaisi has been summoned to appear before a Kurla magistrate’s court on grounds of alleged hate speech and an intention to harm the harmony between Hinduism and Islam. Complainant Gulam Hussain Khan saw an online video of a December 2012 speech by Owaisi and filed a private complaint with the court. “I am prima facie satisfied that it disclosed an offence punishable under Section(s) 153A and 295A of the Indian Penal Code,” the Metropolitan Magistrate said.

Goa:

A Goa Sessions Judge has dismissed shipbuilding diploma engineer Devu Chodankar’s application for anticipatory bail. On the basis of an April 26 complaint by CII state president Atul Pai Kane, the Goa cybercrime cell registered a case against Chodankar for allegedly posting matter on a Facebook group with the intention of promoting enmity between religious groups in view of the 2014 general elections. The Judge noted, inter alia, that Sections 153A and 295A of the Indian Penal Code were attracted, and that it is necessary to find out whether, on the Internet, “there is any other material which could be considered as offensive or could create hatred among different classes of citizens of India”.

Karnataka:

Syed Waqas, an MBA student from Bhatkal pursuing an internship in Bangalore, was picked up for questioning along with four of his friends after Belgaum social activist Jayant Tinaikar filed a complaint. The cause of the complaint was an MMS allegedly derogatory to Prime Minister Narendra Modi. After interrogation, the Khanapur (Belgaum) police let Waqas off on the grounds that he was not the originator of the MMS and that Mr. Tinaikar had provided an incorrect mobile phone number.

In another part of the country, Digvijaya Singh is vocal about the Indian police’s zealous policing of anti-Modi comments, noting that they were all but invisible when former Prime Minister Dr. Manmohan Singh was the target of abusive remarks.

Kerala:

The Anti-Piracy Cell of the Kerala Police plans to target those uploading pornographic content onto the Internet and selling it through memory cards. A circular to this effect has been issued to all police stations in the state, and civil society cooperation is requested.

In other news, Ernakulam MLA Hibi Eden inaugurated “Hibi on Call”, a public outreach programme that allows constituents to reach the MLA directly. A call on 1860 425 1199 registers complaints.

Maharashtra:

Mumbai police are investigating pizza delivery by an unmanned drone, which they consider a security threat.

Tamil Nadu:

Small and home-run businesses in Chennai are flourishing with the help of Whatsapp and Facebook: Mohammed Gani helps his customers match bangles with Whatsapp images, Ayeesha Riaz and Bhargavii Mani send cakes and portraits to Facebook-initiated customers. Even doctors spread information and awareness using Facebook. In Madurai, you can buy groceries online, too.

Opinion:

Chethan Kumar fears that freedom of expression in Indian cyberspace is being strangled through the continued use of the ‘infamous’ Section 66A of the Information Technology Act, 2000 (as amended in 2008). Sunil Garodia expresses similar concerns, noting a number of arrests made under Section 66A.

However, Ankan Bose has a different take; he believes there is a thin but clear line between freedom of expression and a ‘freedom to threaten’, and believes Devu Chodankar and Syed Waqas may have crossed that line. For more on Section 66A, please see here.

While Nikhil Pahwa is cautious of the new government’s stance towards Internet freedoms, given the (as yet) mixed signals of its ministers, Shaili Chopra ruminates on the new government’s potential dive into a “digital mutiny and communications revolution” and wonders about Modi’s social media management strategy. For Kashmir Times reader Hardev Singh, even Kejriwal’s arrest for allegedly defaming Nitin Gadkari will lead to a chilling effect on freedom of expression.

Elsewhere, the Hindustan Times is intent on letting Prime Minister Narendra Modi know that his citizens demand their freedom of speech and expression. Civil society and media all over India express their concerns for their freedom of expression in light of the new government.

Legislating for Privacy - Part II

by Bhairav Acharya last modified May 28, 2014 09:59 AM
Apart from the conflation of commercial data protection and privacy, the right to privacy bill has ill-informed and poorly drafted provisions to regulate surveillance.

The article was published in the Hoot on May 20, 2014.



In October 2010, the Department of Personnel and Training ("DOPT") of the Ministry of Personnel, Public Grievances and Pensions released an ‘Approach Paper’ towards drafting a privacy law for India. The Approach Paper claims to be prepared by a leading Indian corporate law firm that, to the best of my knowledge, has almost no experience of criminal procedure or constitutional law. The Approach Paper resulted in the drafting of a Right to Privacy Bill, 2011 ("DOPT Bill") which, although it has suffered several leaks, has neither been published for public feedback nor sent to the Cabinet for political clearance prior to introduction in Parliament.

Approach Paper and DOPT Bill

The first article in this two-part series broadly examined the many legal facets of privacy. Notions of privacy have long informed law in common law countries and have been statutorily codified to protect bodily privacy, territorial or spatial privacy, locational privacy, and so on. These fields continue to evolve and advance; for instance, the legal imperative to protect intimate body privacy from violation has now expanded to include biometric information, and the protection given to the content of personal communications that developed over the course of the twentieth century is now expanding to encompass metadata and other ‘information about information’.

The Approach Paper suffers from several serious flaws, the largest of which is its conflation of commercial data protection and privacy. It ignores the diversity of privacy law and jurisprudence in the common law, instead concerning itself wholly with commercial data protection. This creates a false equivalency, albeit one that could be rectified by re-naming the endeavour to describe commercial data protection only.

However, there are other errors. The paper claims that no right of action exists for privacy breaches between citizens inter se. This is false: the civil wrongs of nuisance, interference with enjoyment, invasion of privacy, and other similar torts and actionable claims operate to redress privacy violations. In fact, in the case of Ratan Tata v. Union of India, which is currently being heard by the Supreme Court of India, at least two parties are arguing that privacy is already adequately protected by civil law. Further, the criminal offences of nuisance and defamation, amongst others, and the recently introduced crimes of stalking and voyeurism, all create rights of action for privacy violations. These measures are incomplete – that is not contested; the premise of these articles is the need for better privacy protection law – but denying their existence is not useful.

The shortcomings of the Approach Paper are reflected in the draft legislation it resulted in. A major concern with the DOPT Bill is its amateur treatment of surveillance and interception of communications. This is inevitable, for the Approach Paper does not consider this area at all, although there is sustained and critical global and national attention to the issues that attend surveillance and communications privacy. For an effort to propose privacy law, this lapse is quite astonishing. The Approach Paper does not even examine whether Parliament is competent to regulate surveillance, although the DOPT Bill wades into this contested turf.

Constitutionality of Interceptions

In a federal country, laws are weighed by the competence of their legislatures and struck down for overstepping their bounds. In India, the powers to legislate arise from entries that are contained in three lists in Schedule VII of the Constitution. The power to legislate in respect of intercepting communications traditionally emanates from Entry 31 of the Union List, which vests the Union – that is, Parliament and the Central Government – with the power to regulate “Posts and telegraphs; telephones, wireless, broadcasting and other like forms of communication” to the exclusion of the States. Hence, the Indian Telegraph Act, 1885, and the Indian Post Office Act, 1898, both Union laws, contain interception provisions. However, after this scheme had held the field for more than a century, the Supreme Court overturned it in Bharat Shah’s case in 2008.

The case challenged the telephone interception provisions of the Maharashtra Control of Organised Crime Act, 1999 ("MCOCA"), a State law that appeared to transgress into legislative territory reserved for the Union. The Supreme Court held that Maharashtra’s interception provisions were valid and arose from powers granted to the States – that is, State Assemblies and State Governments – by Entries 1 and 2 of the State List, which deal with “public order” and “police” respectively. This cleared the way for several States to frame their own communications interception regimes in addition to Parliament’s existing laws. The question of what happens when the two regimes clash has not been answered yet. India’s federal scheme anticipates competing inconsistencies between Union and State laws, but only when these laws derive from the Concurrent List which shares legislative power. In such an event, the ‘doctrine of repugnancy’ privileges the Union law and strikes down the State law to the extent of the inconsistency.

In competitions between Union and State laws that do not arise from the Concurrent List but instead from the mutually exclusive Union and State Lists, the ‘doctrine of pith and substance’ tests the core substance of the law and traces it to one of the two Lists. Hence, in a conflict, a Union law whose substance was traceable to an entry in the State List would be struck down, and vice versa.

However, the doctrine permits incidental interferences that are not substantive. For example, as in a landmark 1946 case, a State law validly regulating moneylenders may incidentally deal with promissory notes, a Union field, since the interference is not substantive. Since surveillance is a police activity, and since “police” is a State subject, a Union surveillance law must take care to remain within the pale of constitutionality by only incidentally affecting police procedure. Conversely, State surveillance laws were required to steer clear of the Union’s exclusive interception power until Bharat Shah’s case dissolved this distinction without answering the many questions it threw up.

Since the creation of the Republic, India’s federal scheme was premised on the notion that the Union and State Lists were exclusive of each other. Conceptually, the Union and the States could not have competing laws on the same subject. But Bharat Shah upset this premise; it located the interception power in both the Lists and did not enunciate a new doctrine to resolve their (inevitable) future conflict. This both disturbs Indian constitutional law and goes to the heart of surveillance and privacy law.

Three Principles of Interception

Apart from the important questions regarding legislative competence and constitutionality, the DOPT Bill proposed weak, ill-informed, and poorly drafted provisions to regulate surveillance and interceptions. It serves no purpose to further scrutinise the 2011 DOPT Bill. Instead, at this point, it may be constructive to set out the broad contours of a good interceptions regulation regime. Some clarity on the concepts: intercepting communications means capturing the content and metadata of oral and written communications, including letters, couriers, telephone calls, facsimiles, SMSs, internet telephony, wireless broadcasts, emails, and so on. It does not include activities such as visual capture of images, location tracking or physical surveillance; these are separate aspects of surveillance, of which interception of communications is only one part.

Firstly, all interceptions of communications must be properly sanctioned. In India, under Rule 419A of the Indian Telegraph Rules, 1951, the Home Secretary – an unelected career bureaucrat – or a junior officer deputised by the Home Secretary, with even less accountability, authorises interceptions. In certain circumstances, even senior police officers can authorise interceptions. Copies of the interception orders are supposed to be sent to a Review Committee, consisting of three more unelected bureaucrats, for bi-monthly review. No public information exists, despite exhaustive searching, regarding who authorises interception orders, how many are issued, and whether the interceptions are appropriate.

The Indian system derives from outdated United Kingdom law that also enables executive authorities to order interceptions. But the UK has constantly revisited and revised its interception regime; its present avatar is governed by the Regulation of Investigatory Powers Act, 2000 ("RIPA"). RIPA creates a significant oversight mechanism headed by an independent commissioner, who monitors interceptions and whose reports are tabled in Parliament, and provides for quasi-judicial scrutiny by a tribunal composed of judges and senior independent lawyers, which hears public complaints, cancels interceptions, and awards monetary compensation. Put together, even though the current UK interceptions system is executively sanctioned, it is balanced by independent and transparent quasi-judicial authorities.

In the United States, all interceptions are judicially sanctioned because American constitutional philosophy – the separation of powers doctrine – requires state action to be checked and balanced. Hence, ordinary interceptions of criminals’ communications, as also extraordinary interceptions of perceived national security threats, are authorised only by judges, who are ex hypothesi independent, although, as the PRISM affair teaches us, independence can be subverted. In comparison, India’s interception regime is incompatible with its democracy and must be overhauled to establish independent and transparent authorities to properly sanction interceptions.

Secondly, no interceptions should be sanctioned but upon ‘probable cause’. Simply described, probable cause is the standard that convinces a reasonable person of the existence of criminality necessary to warrant interception. Probable cause is an American doctrine that flows from the US Constitution’s Fourth Amendment, which protects the right of people to be secure in places in which they have a reasonable expectation of privacy. There is no equivalent standard in UK law, except perhaps the common law test of reasonability that attaches to all government action that abridges individual freedoms. Even if a coherent ‘reasonable suspicion’ test could be coalesced from the common law, I think it would fall short of the strictness that the probable cause doctrine imposes on the executive. The probable cause requirement is therefore stronger than the ordinary constraint of reasonability but weaker than the standard of proof beyond reasonable doubt upon which courts may convict. In this spectrum of acceptable standards, India’s current law in section 5(2) of the Indian Telegraph Act, 1885 is the weakest, for it permits interceptions merely “on the occurrence of any public emergency or in the interest of public safety”, which determination is left to the “satisfaction” of a bureaucrat. And, under Rule 419A(2) of the Telegraph Rules, the only imposition on the bureaucrat when exercising this satisfaction is that the order “contain reasons” for the interception.

Thirdly, all interceptions should be warranted. This point refers not to the necessity or otherwise of the interception, but to the framework within which it should be conducted. Warrants should clearly specify the name and identity of the person whose communications are sought to be intercepted. The target person’s identity should be linked to the specific means of communication upon which the suspected criminal conversations take place. Therefore, if the warrant lists one person’s name but another person’s telephone number – which, because of the general ineptness of many police forces, is not uncommon – the warrant should be rejected and the interception cancelled. And, by extension, the specific telephone number or email account should be specified. A warrant against a person called Rahul Kumar, for instance, cannot be executed against all Rahul Kumars in the vicinity, nor against all the telephones that the one specific Rahul Kumar uses, but only against the one specific telephone number that is used by the one specific Rahul Kumar. Warrants should also specify the duration of the interception, the officer responsible for its conduct and thereby liable for its abuse, and other safeguards. Some of these concerns were addressed in 2007 when the Telegraph Rules were amended, but not all.

A law that fails to substantially meet the standards of these principles is liable, perhaps in the not too distant future, to be read down or struck down by India’s higher judiciary. But, besides the threat of judicial review, a democratic polity must protect the freedoms and diversity of its citizens by holding itself to the highest standards of the rule of law, where the law is just.

Accountability of ICANN

by Geetha Hariharan last modified May 28, 2014 10:45 AM
Smarika Kumar's post on submissions to NETmundial

PDF document icon S. Kumar, Accountability of ICANN.pdf — PDF document, 135 kB (138513 bytes)

FOEX Live: May 28-29, 2014

by Geetha Hariharan last modified May 29, 2014 08:58 AM
A selection of news from across India with a bearing on online freedom of expression and use of digital technology

Media focus on the new government and its ministries and portfolios has been extensive, and to my knowledge, few newspapers or online sources have reported violations of freedom of speech. However, on his first day in office, the new I&B Minister, Prakash Javadekar, acknowledged the importance of press freedom, avowing that it was the “essence of democracy”. He has assured that the new government will not interfere with press freedom.

Assam:

A FICCI discussion in Guwahati, attended among others by Microsoft and PricewaterhouseCoopers, focused on the role of information technology in governance.

Goa:

Following the furore over allegedly inflammatory, ‘hate-mongering’ Facebook posts by shipping engineer Devu Chodankar, a group of Goan netizens formed a ‘watchdog forum’ to police “inappropriate and communally inflammatory content” on social media. Diana Pinto feels, however, that some ‘compassion and humanism’ ought to have prompted only a stern warning in Devu Chodankar’s case, and not an FIR.

Karnataka:

Syed Waqar was released by Belgaum police after questioning revealed he was a recipient of the anti-Modi MMS. The police are still tracing the original sender.

Madhya Pradesh:

The cases of Shaheen Dhada and Rinu Srinivasan, and recently of Syed Waqar and Devu Chodankar have left Indore netizens overly cautious about “posting anything recklessly on social media”. Some feel it is a blow to democracy.

Maharashtra:

In Navi Mumbai, the Karjat police seized several computers, hard disks and blank CDs from the premises of the Chandraprabha Charitable Trust in connection with an investigation into sexual abuse of children at the Trust’s school-shelter. The police seek to verify whether the accused recorded any obscene videos of child sexual abuse.

In Mumbai, even as filmmakers, filmgoers, artistes and LGBT people celebrated the Kashish Mumbai International Queer Film Festival, all remained apprehensive of the new government’s social conservatism, and were aware that the films portrayed acts now illegal in India.

Manipur:

At the inauguration of the 42nd All Manipur Shumang Leela Festival, V.K. Duggal, State Governor and Chairman of the Manipur State Kala Akademi, warned that the art form was under threat in the digital age, as Manipuri films are replacing it in popularity.

Rajasthan:

Following the lead of the Lok Sabha, the Rajasthan state assembly has adopted a digital conference and voting system to make the proceedings in the House more efficient and transparent.

Seemandhra:

Seemandhra Chief Minister designate N. Chandrababu Naidu promised a repeat of his hi-tech city miracle ‘Cyberabad’ in Seemandhra.

West Bengal:

The West Bengal government has hired the PSU Urban Mass Transit Company Limited to study, install and operationalize an Intelligent Transport System for public transport in Kolkata. GPS will give passengers real-time information on bus routes and availability. While private telecom operators have offered free services to the transport department, there are no reports of an end-date or of estimated expenditure for the project.

News and Opinion:

Over a week ago, Avantika Banerjee wrote a speculative post on the new government’s stance towards Internet policy. At Fair Observer, Gurpreet Mahajan laments that community politics in India has made a lark of banning books.

India’s Computer Emergency Response Team (CERT-In) has detected high-level virus activity in Microsoft’s Internet Explorer 8, and recommends upgrading to Explorer 11.

Twitter is projected to have 400 million users by 2018, and India and Indonesia are expected to outdo the United Kingdom in user base. India saw nearly 60% growth in user base this year, and Twitter played a major role in Elections 2014. India will have over 18.1 million users by 2018.

Elsewhere in the world:

Placing a bet on the ‘Internet of Everything’, Cisco CEO John Chambers predicted a “brutal consolidation” of the IT industry in the next five years. A new MarketsandMarkets report suggests that the value of the ‘Internet of Things’ may reach US $1423.09 billion by 2020 at an estimated CAGR of 4.08% from 2014 to 2020.

China’s Xinhua News Agency announced its month-long campaign to fight “infiltration from hostile forces at home and abroad” through instant messaging. Message providers WeChat, Momo, Mi Talk and Yixin have expressed their willingness to cooperate in targeting those engaging in fraud, or in spreading ‘rumours’, violence, terrorism or pornography. In March this year, WeChat deleted at least 40 accounts with political, economic and legal content.

Thailand’s military junta interrupted national television broadcast to deny any role in an alleged Facebook-block. The site went down briefly and caused alarm among netizens.

Snowden continues to maintain that he is not a Russian spy, and has no relationship with the Russian government.

Search and Seizure and the Right to Privacy in the Digital Age: A Comparison of US and India

by Divij Joshi last modified Jun 02, 2014 06:45 AM
The development of information technology has transformed the way in which individuals make everyday transactions and communicate with the world around them. These interactions and transactions are recorded and stored – constantly available for access by the individual and by the company through which the service was used.

For example, the ubiquitous smartphone is, above and beyond a communication device, a device which can maintain a complete record of its user’s communications data, photos, videos and documents, and a multitude of other deeply personal information, such as application data (including location tracking) or financial data. As computers and phones increasingly allow us to keep massive amounts of personal information accessible at the touch of a button or screen (a standard smartphone can hold anything between 500 MB and 64 GB of data), the growing reliance on computers as information silos also exponentially increases the harms associated with the loss of control over such devices and the information they contain. This vulnerability is especially visceral against the backdrop of law enforcement and the use of coercive state power to maintain security, juxtaposed with the individual’s right to secure their privacy.

American Law - The Fourth Amendment Protection against Unreasonable Search and Seizure

The right to conduct a search and seizure of persons or places is an essential part of investigation and the criminal justice system. The societal interest in maintaining security gives the state a mandate to do all things necessary to keep law and order, including acquiring all possible information for the investigation of criminal activities; that mandate is restricted, however, in recognition of the perils of state-endorsed coercion and its implications for individual liberty. Digitally stored information is increasingly becoming a major site of investigative information and is thus essential to modern investigation techniques. Further, specific crimes which have emerged out of the changing scenario, namely crimes related to the internet, require investigation almost exclusively at the level of digital evidence. The role of courts and policy makers, then, is to balance the state’s mandate to procure information with the citizens’ right to protect it.

The scope of this mandate is what is currently being considered by the Supreme Court of the United States, which began hearing arguments in Riley v. California[1] and United States v. Wurie[2] on the 29th of April, 2014. At issue is the question of whether the police should be allowed to search the cell phones of individuals upon arrest, without obtaining a specific warrant for such a search. The cases concern instances where the accused was arrested on account of a minor infraction and a warrantless search was conducted, which included the search of cell phones in their possession. The information revealed in the phones ultimately led to evidence of further crimes and the conviction of the accused for graver crimes. The appeal is for suppression of the evidence so obtained, on the grounds that the search violates the Fourth Amendment of the American Constitution. Although there has been a plethora of conflicting decisions by various lower courts (including the judgements in Wurie and Riley),[3] the Federal Supreme Court will, for the first time, be deciding whether cell phone searches should require a higher burden under the Fourth Amendment.

At the core of the issue are considerations of individual privacy and the right to limit the state’s interference in private matters. The Fourth Amendment to the Constitution of the United States expressly grants protection against unreasonable searches and seizures.[4] However, without a clear definition of what is unreasonable, it has been left to the courts to interpret the situations in which the right to non-interference trumps the interest in obtaining information, leading to vast and varied jurisprudence on the issue. The jurisprudence stems from the wide Fourth Amendment protection against unreasonable government interference, where the general rule is that any warrantless search is unreasonable unless covered by certain exceptions. The standard for protection under the Fourth Amendment is a subjective one, determined by the state of mind of the individual rather than by objective qualifiers such as physical location; it extends to all situations in which individuals have a reasonable expectation of privacy, i.e., situations where individuals can legitimately expect privacy, a test that is not purely dependent upon the physical space being searched.[5]

Therefore, the requirement of reasonableness is generally only fulfilled when a search is conducted after obtaining a warrant from a neutral magistrate, by demonstrating probable cause to believe that evidence of unlawful activity would be found upon such search. A warrant is, therefore, an important limitation on the search powers of the police. Further, the protection excludes roving or general searches and requires particularity of the items to be searched. The restriction derives its power from the exclusionary rule, which bars evidence obtained through unreasonable search or seizure – whether obtained directly or through additional warrants based upon such evidence – from being used in subsequent prosecutions. However, several exceptions to the general rule have evolved, which include cases where the search takes place upon the lawful arrest of an accused, a practice justified by the possibility of hidden weapons upon the accused or of the destruction of important evidence.[6]

The appeal, if successful, would carve out an exception to the rule that any search upon lawful arrest is always reasonable, by creating a caveat for the search of computer devices like smartphones. If the court does so, it would be an important recognition of the fact that evolving technologies have transmuted the concept of privacy to beyond physical space, and that legal rules and standards that applied to privacy even twenty years ago are now anachronistic in an age where individuals can record their entire lives on an iPhone. Searching a person nowadays leads not only to the recovery of calling cards or cigarettes, but also of phones and computers which can be the digital record of a person’s life, something which could not have been contemplated when the laws were drafted. Cell phone and computer searches are the equivalent of searches of thousands of documents, photos and personal records, and the expectation of privacy in such cases is much higher than in regular searches. Courts have already recognized that cell phones and laptop computers are objects in which the user may have a reasonable expectation of privacy, treating them as analogous to a “closed container” which the police cannot search and hence bringing them under the protection of the Fourth Amendment.[7]

On the other hand, cell phones and computers also hold data which could be instrumental in investigating criminal activity, and with technologies like remote wipes of computer data available, such data is always at risk of destruction if the investigation is delayed. Going by the oral arguments being heard now, the Court seems to be carving out a specific principle applicable to new technologies. The Court is likely to introduce subtleties specific to the technology involved – for example, it may seek to develop different principles for smartphones (at issue in Riley) and the more basic kind of cell phones (at issue in Wurie), or it may recognize that only certain kinds of information may be accessed,[8] or it may even evolve a rule that would allow seizure, but not a search, of the cell phone before a search warrant is obtained.[9] Recognizing that transformational technology needs to be reflected in technology-specific legal principles is an important step in keeping law and technology in step, and the additional recognition of a higher threshold for digital evidence and privacy would go a long way in securing digital privacy in the future.

Search and Seizure in India

Indian jurisprudence on privacy is a wide departure from that in the USA. Though it is difficult to strictly compartmentalize the many facets of the right to privacy, there is no express or implicit mention of such a right in the text of the Indian Constitution. Although courts have recognized the importance of procedural safeguards in protecting against unreasonable governmental interference, the recognition of the intrinsic right to privacy as non-interference – which may be different from the instrumental interests that criminal procedure seeks to protect (such as protection against the misuse of police power) – is sorely lacking. The general law providing for the state’s power of search and seizure of evidence is found in the Code of Criminal Procedure, 1973.

Section 93 provides the general procedure for search. It allows a magistrate to issue a warrant for the search of any “document or thing”, including a warrant for the general search of an area, where the magistrate believes it is required for the purpose of an investigation. Particularity of the search warrant is not a requirement under S. 93(2), and hence a warrant may authorise a general or roving search of a place. Section 100, which further provides for the search of a closed place, includes certain safeguards such as the presence of witnesses and the requirement of a warrant before a police officer may be allowed ingress into the closed place. However, under S. 165 and S. 51 of the Code, the requirement of a search warrant is dispensed with. S. 165 provides for an officer in charge of a police station, or any other officer duly authorized by him, to conduct the search of any place without a warrant as long as he has reasonable grounds to believe that such a search is required for the purposes of an investigation and that a search warrant cannot be obtained without undue delay. Further, the officer conducting such a search must, as far as possible, record the reasons for this belief in writing prior to conducting the search. Section 51 provides another express exception to the requirement of search warrants, by allowing the search of a person arrested lawfully provided that the arrested person may not or cannot be admitted to bail, and requires any seized items to be recorded in a search memo. As long as these conditions are fulfilled, the police have an unqualified authority to search a person upon arrest. Therefore, where the arrestee can be admitted to bail as per the warrant or, in cases of warrantless arrest, as per the law, the search and seizure of such a person may not be regular, and the evidence so collected would be subject to greater scrutiny by the court. However, beyond these minimal protections, there is no additional procedural protection of individual privacy, and the search powers of the police are extremely wide and discretionary. In fact, the exclusionary rule is specifically absent as a protection as well, which means that, unlike under the Fourth Amendment, non-compliance with the procedural requirements of search does not by itself vitiate the proceedings or suppress the evidence so found, but only amounts to an irregularity which is simply another factor to be considered in evaluating the evidence.[10]

The extent to which the Fourth Amendment protection against unreasonable governmental interference can be read into the Indian Constitution is also uncertain. A direct imputation of the Fourth Amendment into the Indian Constitution has been rejected by the Supreme Court.[11] Though allusions to the Fourth Amendment have mostly been invoked on facts where unreasonable intrusions into the homes of persons were challenged, the indirect imputation of the right to privacy into Article 21 of the Constitution – invoking the right to privacy as a right to non-interference and a right to live with dignity – suggests that the considerations for privacy under the Constitution are not merely objective, or physical, but depend on the subjective facts of the situation, i.e., its effect on the right to live with dignity (analogous to the reasonable expectation of privacy test laid down in Katz).[12] Further, the Court has specifically struck down provisions for search and seizure which confer particularly wide and discretionary powers on the executive without judicial scrutiny, holding that searches must be subject to the doctrine of proportionality and that a provision authorising search must be founded on something akin to probable cause.[13] The Fourth Amendment protection against unreasonable interference in private matters by the state is a useful standard to assess privacy, since it treats privacy as an intrinsic right as well as an instrumental one, i.e., privacy as non-interference is a good in itself, notwithstanding the rights it helps achieve, like the freedom of movement or speech.

Regarding digital privacy in particular, Indian law and policy have failed to stand up to the challenges that new technologies pose to privacy and have in fact been regressive, engaging in surveillance of communications and allowing governmental access to digital records of online communications (including emails, website logs, etc.) without judicial scrutiny and accountability.[14] In an age of transformative technology, with privacy placed at much greater risk, laws which were once deemed reasonable are now completely inadequate in guaranteeing the freedom and liberty encapsulated by the right to privacy. The disparity is even more pronounced in the investigation of cyber-crimes, which rely almost exclusively on digital evidence – such as those substantively enumerated under the Information Technology Act – but are investigated under the general procedure laid down in the Code of Criminal Procedure, discussed above. The procedures for investigation of cyber-crimes and the search and seizure of digital evidence require special consideration and must be brought in line with changing norms. Although S. 69 and 69B lay down provisions for the investigation of certain crimes,[15] which require search upon an order by a competent authority, i.e. the Secretary to the Department of IT in the Government of India, powers of search and seizure are also present in several other rules, such as rule 3(9) of the Information Technology (Due diligence observed by intermediaries guidelines) Rules, 2011, which allows access to information from intermediaries upon a simple written order by any agency or person lawfully authorised for investigative, protective, cyber security or intelligence activity; or rule 6 of the draft Reasonable Security Practices Rules, 2011, framed under Section 43A of the Information Technology Act, under which any government agency may, for the prevention, detection, investigation, prosecution and punishment of offences, obtain any personal data from an intermediary “body corporate” which stores such data. The rules framed for the investigation of digital evidence, therefore, do not inspire much confidence where safeguarding privacy is concerned. In the absence of specific guidelines or amendments to the procedures for search and seizure of digital evidence, the inadequacy of applying archaic standards leads to unreasonable intrusions into individual privacy and liberties – an incongruity which requires remedy by the courts and legislature of the country.


[1]. http://www.supremecourt.gov/oral_arguments/argument_transcripts/13-132_h315.pdf

[2]. http://www.supremecourt.gov/oral_arguments/argument_transcripts/13-212_86qd.pdf

[3]. In Wurie, the motion to suppress was allowed, while in Riley it was denied. Also see US v Jacob Finley and US v Abel Flores-Lopez, where the motion to suppress was denied.

[4]. The Fourth Amendment to the Constitution of the United States of America: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

[5]. Katz v United States, 389 U.S. 347, 352 (1967).

[6]. Stephen Saltzburg, American Criminal Procedure

[7]. United States v Chan, 830 F. Supp. 531,534 (N.D. Cal. 1993).

[8]. A factor considered in US v Abel Flores-Lopez, where the court held that the search of call history in a cell phone did not constitute a sufficient infringement of privacy to require the burden of a warrant.

[9]. The decision in Smallwood v. Florida, No. SC11-1130, before the Florida Supreme Court, made such a distinction.

[10]. State Of Maharashtra v. Natwarlal Damodardas Soni, AIR 1980 SC 593; Radhakrishnan v State of UP, 1963 Supp. 1 S.C.R. 408

[11]. M.P. Sharma v Satish Chandra, AIR 1954 SC 300

[12]. Kharak Singh v State of UP, (1964) 1 SCR 332; Gobind v State of Madhya Pradesh, 1975 AIR 1378

[13]. District Registrar and Collector v. Canara Bank, AIR 2005 SC 186, which related to S.73 of the Andhra Pradesh Stamps Act which allowed ‘any person’ to enter into ‘any premises’ for the purpose of conducting a search.

[14]. S. 69 and 69B of the Information Technology (Amendment) Act, 2008.

[15]. Procedures and Safeguards for Monitoring and collecting traffic data or information rules 2009, available at http://cis-india.org/internet-governance/resources/it-procedure-and-safeguard-for-monitoring-and-collecting-traffic-data-or-information-rules-2009

Two Arguments Against the Constitutionality of Section 66A

by Gautam Bhatia — last modified Jun 04, 2014 03:42 AM
Gautam Bhatia explores the constitutionality of Section 66A in light of recent events.

In the immediate aftermath of the elections, free speech issues have come to the fore again. In Goa, a Facebook user was summoned for a post warning of a second holocaust if Modi was elected to power. In Karnataka, an MBA student was likewise arrested for circulating an MMS that showed Modi’s face morphed onto a corpse, with the slogan “Abki baar antim sanskaar”. These arrests have reopened the debate about the constitutional validity of Section 66A of the IT Act, which is the legal provision governing online speech in India. Section 66A criminalises, among other things, the sending of information that is “grossly offensive or menacing in character” or causes “annoyance or inconvenience”. The two instances cited above raise – not for the first time – the concern that when it comes to implementation, Section 66A is unworkable to the point of being unconstitutional.

Like all legal provisions, Section 66A must comply with the fundamental rights chapter of the Indian Constitution. Article 19(1)(a) guarantees the freedom of speech and expression, and Article 19(2) permits reasonable restrictions in the interests of – inter alia – “public order, decency or morality”. Presumably, the only way in which Section 66A can be justified is by showing that it falls within the category of “public order” or of “morality”. The precedent of the Supreme Court, however, has interpreted Article 19(2) in far narrower terms than the ones that Section 66A uses. The Court has held that “public order” may only be invoked if there is a direct and immediate relation between the offending speech and a public order disturbance – such as, for instance, a speaker making an incendiary speech to an excited mob, advocating imminent violence (the Court has colloquially stated the requirement to be a “spark in a powder keg”). Similarly, while the Court has never precisely defined what “morality” – for the purposes of Article 19(2) – means, the term has been invoked where (arguably) pornographic materials are concerned – and never simply because speech has “offended” or “menaced” someone. Indeed, the rhetoric of the Court has consistently rejected the proposition that the government can prohibit individuals from offending one another.

This raises two constitutional problems with Section 66A: the problems of overbreadth and vagueness. Both doctrines have been developed to their fullest in American free speech law, but the underlying principles are universal.

A statute is overbroad when it potentially includes within its prohibitions both speech that it is entitled to prohibit, and speech that it is not. In Gooding v. Wilson, a Georgia statute criminalized the use of “opprobrious words or abusive language”. In defending the statute, the State of Georgia argued that its Courts had read it narrowly, limiting its application to “fighting words” – i.e., words that by their very nature tended to incite an imminent breach of the peace, something that was indisputably within the power of the State to prohibit. The Supreme Court rejected the argument and invalidated the statute. It found that the words “opprobrious” and “abusive” had greater reach than “fighting words”. Thus, since the statute left “wide open the standard of responsibility, so that it [was] easily susceptible to improper application”, the Court struck it down.

A statute is vague when persons of “ordinary intelligence… have no reasonable opportunity to know what is prohibited.” In Grayned v. Rockford, the American Supreme Court noted that a vague law “impermissibly delegates basic policy matters to policemen, judges, and juries for resolution on an ad hoc and subjective basis, with the attendant dangers of arbitrary and discriminatory application.” There are, therefore, a number of problems with vague laws: one of the fundamental purposes of law is to allow citizens to plan their affairs with a degree of certainty, and vagueness in legislation prevents that. Equally importantly, vague laws leave a wide scope of implementing power with non-elected bodies, such as the police – leading to the fear of arbitrary application.

While overbreadth and vagueness are problems that affect legislation across the board, they assume a particular urgency when it comes to free speech. This is because, as the American Supreme Court has recognized on a number of occasions, speech-regulating statutes must be scrutinized with specific care because of the chilling effect: when speech is penalized, people will – out of fear and caution – exercise self-censorship, and the political discourse will be impoverished. If we accept – as the Indian Courts have – that a primary reason for guaranteeing free expression rights is their indispensability to democracy, then the danger of self-censorship is one that we should be particularly solicitous of. Hence, when speech-regulating statutes do proscribe expression, they must be clear and narrowly drawn, in order to avoid the chilling effect. As the American Supreme Court memorably framed it, “free speech needs breathing space to survive.” Overbroad and vague speech-restricting statutes are particularly pernicious in denying it that breathing space.

There seems to be little doubt that Section 66A is both overbroad and vague. However ill-judged a holocaust comparison or a morphed corpse-image may be, neither of them is a spark in a powder keg that will lead to an immediate breach of public order – nor is either “immoral” in the way of explicit pornography. We can therefore see, clearly, that the implementation of the law leaves almost unbounded discretion to officials such as the police, provides room for unconstitutional interpretations, and is so vaguely framed that it is almost impossible to know, in advance, which actions fall within the rule and which ones are not covered by it. If there is such a thing as over-breadth and vagueness par excellence, then Section 66A is surely it!

At various times in its history, the Supreme Court has acknowledged the problems of overbreadth, vagueness and the chilling effect, but it has never directly incorporated them into Indian law. As we have seen, each of these elements is connected to the others: overbroad and vague speech-regulating statutes are problematic because of the chilling effect. Since Section 66A is presently being challenged before the Supreme Court, there is a great opportunity for the Court both to get rid of this unconstitutional law and to strengthen the foundations of our free speech jurisprudence.


Gautam Bhatia — @gautambhatia88 on Twitter — is a graduate of the National Law School of India University (2011), and presently an LLM student at the Yale Law School. He blogs about the Indian Constitution at http://indconlawphil.wordpress.com. Here at CIS, he blogs on issues of online freedom of speech and expression.

CIS Statement at ICANN 49's Public Forum

by Pranesh Prakash last modified Jun 04, 2014 05:31 AM
This was a statement made by Pranesh Prakash at the ICANN 49 meeting (on March 27, 2014), arguing that ICANN's bias towards North America and Western Europe results in a lack of legitimacy, and hoping that the IANA transition process provides an opportunity to address this.

Good afternoon. My name is Pranesh Prakash, and I'm with the Yale Information Society Project and the Centre for Internet and Society.

I am extremely concerned about the accountability of ICANN to the global community. Due to various decisions made by the US government relating to ICANN's birth, ICANN has had a troubled history with legitimacy. While it has managed to gain and retain the confidence of the technical community, it still lacks political legitimacy due to its history. The NTIA's decision has presented us an opportunity to correct this.

However, ICANN can't hope to do so without going beyond the current ICANN community, which, while nominally 'multistakeholder' and open to all, grossly under-represents those parts of the world that aren't North America and Western Europe.

Of the 1010 ICANN-accredited registrars, 624 are from the United States, and 7 from the 54 countries of Africa. In a session yesterday, a large number of the policies that favour entrenched incumbents from richer countries were discussed. But without adequate representation from poorer countries, and adequate representation from the rest of the world's Internet population, there is no hope of changing these policies.

This is true not just of the business sector, but of all the 'stakeholders' that are part of global Internet policymaking, whether they follow the ICANN multistakeholder model or another. A look at the members of the Internet Architecture Board, for instance, would reveal how skewed the technical community can be, whether in terms of geographic or gender diversity.

Without greater diversity within the global Internet policymaking communities, there is no hope of equity, respect for human rights -- civil, political, cultural, social and economic -- and democratic functioning, no matter how 'open' the processes seem to be, and no hope of ICANN accountability either.

WSIS+10 Final Agreed Draft

by Prasad Krishna last modified Jun 04, 2014 10:12 AM

PDF document icon WSIS10-StatementOutcomes-A,B,C 28-05-2014.pdf — PDF document, 695 kB (711942 bytes)

FOEX Live: June 1-7, 2014

by Geetha Hariharan last modified Jun 07, 2014 01:33 PM
A weekly selection of news on online freedom of expression and digital technology from across India (and some parts of the world).

Delhi NCR:

Following a legal notice from Dina Nath Batra, publisher Orient BlackSwan “set aside… for the present” Communalism and Sexual Violence: Ahmedabad Since 1969 by Dr. Megha Kumar, citing the need for a “comprehensive assessment”. Dr. Kumar’s book is part of the ‘Critical Thinking on South Asia’ series, and studies communal and sexual violence in the 1969, 1985 and 2002 riots of Ahmedabad. Orient BlackSwan insists this is a pre-release assessment, while Dr. Kumar contests that her book went to print in March 2014 after extensive editing and peer review. Dina Nath Batra’s civil suit led Penguin India to withdraw Wendy Doniger’s The Hindus: An Alternative History earlier this year.

The Delhi Police’s Facebook page aimed at reaching out to Delhi residents hailing from the North East proved to be popular.

Goa:

Shipbuilding engineer Devu Chodankar’s ordeal continued. Chodankar, in a statement to the cyber crime cell of the Goa police, clarified that his allegedly inflammatory statements were directed against the induction of the Sri Ram Sene’s Pramod Muthalik into the BJP. Chodankar’s laptop, hard-disk and mobile Internet dongle were seized.

Jammu & Kashmir:

Chief Minister Omar Abdullah announced the withdrawal of a four-year-old SMS ban in the state. The ban was instituted in 2010 following widespread protests, and while it was lifted for post-paid subscribers six months later, pre-paid connections were banned from SMSes until now.

Maharashtra:

In a move to contain public protests over ‘objectionable posts’ about Chhatrapati Shivaji, Dr. B.R. Ambedkar and the late Bal Thackeray (comments upon whose death led to the arrests of Shaheen Dhada and Renu Srinivasan under Section 66A), Maharashtra police will take action against even those who “like” such posts. ‘Likers’ may be charged under the Information Technology Act and the Criminal Procedure Code, say Nanded police.

A young Muslim man was murdered in Pune, apparently in connection with the online publication of ‘derogatory’ pictures of Chhatrapati Shivaji and Bal Thackeray. Members of Hindu extremist groups reportedly celebrated his murder. Pune’s BJP MP, Anil Shirole, said, “some repercussions are natural”. Members of the Hindu Rashtra Sena were held for the murder, but it seems that the photographs were uploaded from foreign IP addresses. Across Maharashtra, 187 rioting cases have been registered against a total of 710 persons, allegedly in connection with the offensive Facebook posts.

On a lighter note, Bollywood hopes for a positive relationship with the new government on matters such as film censorship, tax breaks and piracy.

News & Opinion:

Shocking the world, Vodafone reported the existence of secret, direct-access wires that enable government surveillance on citizens. India is among 29 governments that sought access to its networks, says Vodafone.

I&B Minister Prakash Javadekar expressed his satisfaction with media industry self-regulation, and stated that while cross-media ownership is a matter for debate, it is the legality of transactions such as the Reliance-Network18 acquisition that is important.

Nikhil Pahwa of Medianama wrote of a ‘right to be forgotten’ request they received from a user in light of the recent European Court of Justice ruling. The right raises a legal dilemma in India, LiveMint reports. Medianama also comments on the Maharashtra police’s decision to take action against Facebook ‘likes’, noting that, at the very least, a like and a comment do not amount to the same thing.

The Hindu was scorching in its editorial on the Pune murder, warning that the new BJP government stands to lose public confidence if it does not clearly demonstrate its opposition to religious violence. The Times of India agrees.

Sanjay Hegde wrote of Section 66A of the Information Technology Act, 2000 (as amended in 2008) as a medium-focused criminalization of speech. dnaEdit also published its criticism of Section 66A.

Ajit Ranade of the Mumbai Mirror comments on India as a ‘republic of hurt sentiments’, criminalizing exercises of free speech through defamation, hate speech, sedition and Section 66A. But in this hurt and screaming republic, dissent is crucial and must stay alive.

A cyber security expert is of the opinion that the police find it difficult to block webpages with derogatory content, as servers are located outside India. But data localization will not help India, writes Jayshree Bajoria.

Dharma Adhikari tries to analyze the combined impact of converging media ownership, corporate patronage of politicians and elections, and recent practices of forced and self-censorship and criminalization of speech.

Elsewhere in the world:

In Pakistan, Facebook has been criticized for blocking pages of a Pakistani rock band and several political groups, primarily left-wing. Across the continent in Europe, Google is suffering from a popularity dip.

The National Council for Peace and Order, the military government in Thailand, has not only taken over the government but also controls the media. The military cancelled its meetings with Google and Facebook. Thai protesters staged a quiet dissent. The Asian Human Rights Commission condemned the coup. For an excellent take on the coup and its dangers, please redirect here. For a round-up of editorials and op-eds on the coup, redirect here.

China has cracked down on Google, affecting Gmail, Translate and Calendar. It is speculated that the move is connected to the 25th anniversary of the Tiananmen Square protests and government reprisal. At the same time, a Tibetan filmmaker who was jailed for six years for his film, Leaving Fear Behind, has been released by Chinese authorities. Leaving Fear Behind features a series of interviews with Tibetans of the Qinghai province in the run-up to the controversial Beijing Olympics in 2008.

Japan looks set to criminalize possession of child pornography. According to reports, the proposed law does not extend to comics or animations or digital simulations.

Egypt’s police is looking to build a social media monitoring system to track expressions of dissent, including “profanity, immorality, insults and calls for strikes and protests”.

Human rights activists asked Facebook to deny its services to the election campaign of Syrian President Bashar al-Assad, ahead of elections on June 3.

Call for inputs:

The Law Commission of India seeks comments from stakeholders and citizens on media law. The consultation paper may be found here. The final date for submission is June 19, 2014.

____________________________________________________________________________________________________________

For feedback and comments, Geetha Hariharan is available by email at [email protected] or on Twitter, where her handle is @covertlight.

Free Speech and Contempt of Court – I: Overview

by Gautam Bhatia — last modified Jun 08, 2014 03:29 PM
Gautam Bhatia explores an under-theorised aspect of India's free speech jurisprudence: the contempt power that equips courts to "protect the dignity of the Bench". In this introductory post, he examines jurisprudence from the US and England to inform our analysis of Indian law.

On May 31, the Times of India reported some observations of a two-judge bench of the Supreme Court on its contempt powers. The Court noted that the power to punish for contempt was necessary to “secure public respect and confidence in the judicial process”, and also went on – rather absurdly – to lay down the requirements, in terms of timing, tone and tenor, of a truly “contrite” apology. This opinion, however, provides us with a good opportunity to examine one of the most under-theorised aspects of Indian free speech law: the contempt power.

Indeed, the contempt power finds express mention in the Constitution. Article 19(2) permits the government to impose reasonable restrictions upon the freedom of speech and expression “… in relation to contempt of court.” The legislation governing contempt powers is the 1971 Contempt of Courts Act. Contempt as a civil offence involves willful disobedience of a court order. Contempt as a criminal offence, on the other hand, involves either an act or expression (spoken, written or otherwise visible) that does one of three things: scandalises, or tends to scandalize, or lowers, or tends to lower, the authority of any court; prejudices or interferes (or tends to interfere) with judicial proceedings; or otherwise obstructs, or tends to obstruct, the administration of justice. As we can see, contempt can – broadly – take two forms: first, obstructing the proceedings of the Court by acts such as disobeying an order, holding up a hearing through absence or physical/verbal disturbance etc. This is straightforward enough. More problematically, however, contempt also covers instances of what we may call “pure speech”: words or other forms of expression about the Court that are punished for no other reason but their content. In particular, “scandalising the Court” seems to be particularly vague and formless in its scope and ambit.

“Scandalising the court” is a common law term. The locus classicus is the 1900 case of R v. Gray, which – in language that the Contempt of Courts Act has largely adopted – defined it as “any act done or writing published calculated to bring a Court or a judge of the Court into contempt, or to lower his authority.” The basic idea is that if abusive invective against the Court is permitted, then people will lose respect for the judiciary, and justice will be compromised.

It is obvious that this argument is flawed in many respects, and we shall analyse the Supreme Court’s problematic understanding of its contempt powers in the next post. First, however, it is instructive to examine the fate of contempt powers in the United States – which, like India, constitutionally guarantees the freedom of speech – and in England, whose model India has consciously followed.

America’s highly speech-protective Courts have taken a dim view of contempt powers. Three cases stand out. Bridges v. California involved a contempt of court accusation against a labour leader for calling a Court decision “outrageous”, and threatening a strike if it was upheld. Reversing his prior conviction, the Supreme Court noted that “public interest is much more likely to be kindled by a controversial event of the day than by a generalization, however penetrating, of the historian or scientist.” Given the strong public interest, the burden of justifying restrictions upon this speech was particularly high. The Court identified two possible justifications: respect for the judiciary, and the orderly administration of justice. On the first, it observed that “an enforced silence, however limited, solely in the name of preserving the dignity of the bench would probably engender resentment, suspicion, and contempt much more than it would enhance respect.” On the second, it held that since striking itself was entirely legal, it was no argument that the threat of a strike would illegally intimidate a judge and subvert the course of justice. Throughout the case, the Court stressed that unfettered speech on matters of public interest was of paramount value, and could only be curtailed if there was a “clear and present danger” that substantively evil consequences would result from allowing it.

Similarly, in Garrison v. Louisiana, an attorney accused certain judges of inefficiency and laziness. Reversing his conviction, the Supreme Court took note of “the paramount public interest in a free flow of information to the people concerning public officials, their servants…. few personal attributes are more germane to fitness for office than dishonesty, malfeasance, or improper motivation, even though these characteristics may also affect the official's private character.” Consequently, it held that only those statements could be punished that the author either knew were false, or made with reckless disregard for the truth. And lastly, in Landmark Communications v. Virginia, the Court held that “the operations of the courts and the judicial conduct of judges are matters of utmost public concern”, and endorsed Justice Frankfurter’s prior statement that speech cannot be punished when the purpose is simply “to protect the court as a mystical entity or the judges as individuals or as anointed priests set apart from the community and spared the criticism to which in a democracy other public servants are exposed.”

What stands out here is the American Courts’ rejection of the ideas that preserving the authority of judges by suppressing certain forms of speech is an end in itself, and that the Courts must be insulated to some greater degree than other officials of government. Consequently, it must be shown that the impugned expression presents a clear and present danger to the administration of justice, before it can be punished.

Now to England. The last successful prosecution of the offence was in 1931. In 2012, the Law Commission published a paper on contempt powers, in which it expressly recommended abolishing the offence of “scandalising the Court”; its recommendations were accepted, and the offence was abolished in 2013. Admittedly, the offence remains on the statute books in many commonwealth nations, although two months ago – in April 2014 – the Privy Council gave it a highly circumscribed interpretation while adjudicating a case on appeal from Mauritius: there must, it held, be a “real risk of undermining public confidence in the administration of justice” (something akin to clear and present danger?), and the Prosecution must demonstrate that the accused either intended to do so, or acted in reckless disregard of whether or not he was doing so.

What is particularly interesting is the Law Commission’s reasoning in its recommendations. Tracing the history of the offence back to 18th century England, it noted that the original justification was to maintain a “haze of glory” around the Courts, and that it was crucial that the Courts not only be universally impartial, but also perceived to be so. Consequently, the Law Commission observed that this language suggests that “to be impartial” and “to be universally thought so” are two independent requirements, implying that the purpose of the offence is not confined to preventing the public from getting the wrong idea about the judges, and that where there are shortcomings, it is equally important to prevent the public from getting the right idea. Obviously, this was highly problematic.

The Law Commission also noted the adverse impact of the law on free speech: the well-known chilling effect, whereby people would self-censor even justified criticism. This was exacerbated by the vagueness of the offence, which left unclear the intent requirement and the status of defences based on truth and public interest. The Law Commission was concerned, as well, about the inherently self-serving nature of the offence, which gives judges the power to sit in judgment over speech and expression that is directly critical of them. Lastly, the Law Commission noted that the basic point of contempt powers was similar to that of seditious libel: to ensure the good reputation of the State (or, in the case of scandalising, the judges) by controlling what could be said about them. With the abolition of seditious libel, the raison d’être of scandalising the Court was also – now – weakened.

We see, therefore, that the United States has rejected sweeping contempt powers as unconstitutional. England, which created the offence that India incorporated into its law, stopped prosecuting people for it in 1931, and formally abolished it last year. And even when its hands have been bound by the law that it is bound to enforce, the Privy Council has interpreted the offence in as narrow a manner as possible, in order to remain solicitous of free speech concerns. Unfortunately, as we shall see in the next essay, all these developments have utterly passed our Courts by.


Gautam Bhatia — @gautambhatia88 on Twitter — is a graduate of the National Law School of India University (2011), and presently an LLM student at the Yale Law School. He blogs about the Indian Constitution at http://indconlawphil.wordpress.com. Here at CIS, he blogs on issues of online freedom of speech and expression.

A Review of the Functioning of the Cyber Appellate Tribunal and Adjudicatory Officers under the IT Act

by Divij Joshi last modified Jul 03, 2014 05:43 AM
Tribunals and quasi-judicial bodies are a regular feature of the Indian judicial system, as they provide simpler and less onerous methods of dispute resolution, especially for disputes that relate to technical areas and often require technical knowledge and familiarity with specialised factual scenarios.

Further, quasi-judicial bodies are not subject to the same procedural restrictions as regular courts, which makes the adjudication of disputes easier. The Information Technology Act, 2000, which regulates several important aspects of electronic information, including private electronic transactions as well as civil and criminal offences relating to computers and electronic information, contemplates a specialised dispute resolution mechanism for disputes relating to the offences detailed under the Act. The Act provides for the establishment of quasi-judicial bodies, namely adjudicating officers under S.46, to hear disputes arising out of Chapter IX of the Act, that is, contraventions of a civil nature under S.43, 43A, 44 and 45, as well as criminal offences described under Chapter XI of the Act. The adjudicating officer has the power both to award compensation as damages in a civil remedy and to impose penalties for contraventions of the Act,[1] and therefore has powers of both civil and criminal courts. The first appellate body provided in the Act, i.e. the authority to which any party not satisfied with the decision of the adjudicating officer can appeal, is the Cyber Appellate Tribunal, consisting of a Chairperson and such other members as may be prescribed by the Central Government.[2] A second appeal, if a party is aggrieved by the decision of the Cyber Appellate Tribunal, may be filed before the High Court having jurisdiction, within 60 days from the date of communication of the order.[3]

Functioning of the Offices of the State Adjudicating Officers and the Cyber Appellate Tribunal

The office of the adjudicating officer is established under S.46 of the IT Act, which provides that the person appointed to the post must be a government officer of a rank not below that of a Director or an equivalent rank, and must have experience in the field of Information Technology as well as legal or judicial experience.[4] In most cases, the appointed adjudicating officer is the Principal Secretary of the Department of Information Technology in the state.[5] The decisions of these adjudicating officers determine the scope and meaning of several provisions of the IT Act, and are instrumental in developing the law in this field and filling a lacuna in the interpretation of these important provisions, particularly in areas such as data protection and privacy.[6] However, despite the large number of cyber-crime cases being registered across the country,[7] there is a dearth of available judgements on the adjudication of disputes under Sections 43, 43A, 44 and 45 of the Act. Of all the states, only the websites of the Departments of Information Technology in Maharashtra[8], Tamil Nadu[9], New Delhi[10] and Haryana[11] carry reported judgements or orders of the Adjudicating Officers. The adjudicating officer in Maharashtra, Rajesh Aggarwal, has done a particularly commendable job, having disposed of 51 cases under the IT Act, with 20 cases still pending.

The first Cyber Appellate Tribunal set up by the Central Government is located at New Delhi. Although a second branch of the Tribunal was to be set up in Bangalore, no efforts seem to have been made in this regard.[12] Further, the position of the Chairperson of the Appellate Tribunal has been left vacant since 2011, after the appointed Chairperson attained the age of superannuation and retired. Although judicial and technical members have been appointed at various points, the Tribunal cannot hold hearings without a Chairperson. A total of 17 judgements had been passed by the Cyber Appellate Tribunal before the Chairperson's retirement, while the backlog of cases continues to grow.[13] Despite a writ petition being filed before the Karnataka High Court and the Secretary of the Department of IT coming on record to state that the Chairperson would be appointed within six months (of September 2013), no action seems to have been taken in this regard, and the lacuna in the judicial mechanism under the IT Act continues. The proper functioning of adjudicating officers and the Cyber Appellate Tribunal is particularly necessary for a just judicial system in light of the provisions of the Act (namely, Section 61) which bar the jurisdiction of ordinary civil courts in claims below Rs. 5 crore, where the adjudicating officer or the CAT is empowered.[14]

Analysis of Cases Filed under Section 43A

Section 43A of the Information Technology Act was inserted by the 2008 Amendment, and is the principal provision under the Act governing the protection of personal information held by body corporates. Section 43A provides that “body corporates” handling “sensitive personal data” must implement reasonable security practices for the protection of this information. If a body corporate is negligent in providing or maintaining such reasonable security practices, it is liable to pay compensation for the loss caused.[15] Rule 3 of the Draft Reasonable Security Practices Rules defines sensitive personal data as including: passwords; user details as provided at the time of registration or thereafter; information related to financial information such as bank account, credit card, debit card or other payment instrument details of the users; physiological and mental health conditions; medical records and history; biometric information; information received by a body corporate for processing, stored or processed under lawful contract or otherwise; and call data records.[16]

All the decisions available for an analysis of Section 43A are from the adjudicating officer in Maharashtra, Mr. Rajesh Aggarwal, whose orders, despite his having no judicial experience, display cogent analysis and a sound grasp of the legal issues involved in the cases, which is commendable for a quasi-judicial officer.

One class of cases, constituting a major chunk of the claims, is where the complainant claims against a bank for the fraudulent transfer of funds from the complainant's account to another account. In most of these cases, the adjudicating officer examined the bank's compliance with the “Know Your Customer” norms and guidelines framed by the Reserve Bank of India for the prevention of banking fraud. Where such compliance was found to be lacking, and information enabling access to the complainant's bank account was thereby allowed to reach fraudsters, the bank was presumed to have been negligent in the handling of “sensitive personal information”[17] by failing to provide reasonable security practices, and was consequently liable for compensation under S.43A, notwithstanding that the complainant had also contributed to compromising certain personal information by responding to phishing mails[18] or divulging information to other third parties.[19] These instances clearly fall within the scope of Section 43A, which protects “information related to financial information such as Bank account/ credit card /debit card /other payment instrument details of the users” as sensitive personal data from negligent handling by body corporates. The decisions of the adjudicating officer must be applauded for placing a higher duty of care on banks to protect the informational privacy of their customers, given that banks ought to be well equipped to deal with intimate financial information, and for holding them accountable for the lack of proper mechanisms to counter bank fraud using stolen information. This is reflected in the compensation the banks have been held liable to pay, not only as indemnification for losses, but also as punitive damages.[20]

In Nirmalkumar Bhagerwal v IDBI Bank and Meenal Bhagerwal, the sensitive financial information of the complainant, namely, the bank statement, had been accessed by the complainant's wife. In holding the bank liable for divulging it, and holding that access to personal information by a spouse is also covered under S.43A, the officer appears to have treated the loss of privacy on account of such negligence as a ‘wrongful loss’ deserving compensation. One anomalous decision of the officer was where the operator of an ATM was held liable for fraudulent credit card transactions at that machine, owing to the absence of “reasonable security practices” such as security personnel or CCTV footage, and therefore for causing the loss of “sensitive personal data”. However, it is difficult to see how ATM operators can be held liable for failing to protect sensitive information from being divulged, when the case is simply one of a person fraudulently using a credit card.

Another class of cases, generally linked with those above, concerns complaints against cell phone providers for divulging information through falsely procured SIM cards. In such instances, the officer has held that by negligently allowing the issuance of duplicate SIM cards, the phone company has enabled access to sensitive personal data and thus caused wrongful loss to the complainant. This interpretation of Section 43A is somewhat confusing. The officer seems to have interpreted the provision to cover carriers of information that was originally sent through the computer resource of the banking companies. In this reading, the telecom companies are imputed the status of “handlers” of sensitive personal information, and the communications infrastructure through which the information is sent is the “computer resource” which they operate for the purposes of the Act. Therefore, through their negligence, they are abetting the offence under S.43A.[21]

For example, in the case of Sanjay Govind Dhandhe v ICICI and Vodafone, the officer remarked: “A SIM card is a veritable key to person’s sensitive financial and personal information. Realizing this, there are clear guidelines issued by the DOT regarding the issuance of SIM cards. The IT Act also intends to ensure that electronic personal and sensitive data is kept secured and reasonable measures are used to maintain its confidentiality and integrity. It is extremely crucial that Telecom companies actively follow strict security procedures while issuing SIM cards, especially in wake of the fact that mobiles are being increasingly used to undertake financial transactions. In many a case brought before me, financial frauds have been committed by fraudsters using the registered mobile numbers of the banks’ account holders.” Therefore, intermediaries such as telecom companies, which handle the data only peripherally, are held to the same standards for ensuring its privacy. The adjudicating officer has also held telephone companies liable for negligently divulging itemised phone bills, treating them as Call Data Records, which again fall squarely within the scope of the Reasonable Security Practices Rules.[22]

Note:

"Credentek v Insolutions (http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_Credentek_Vs_Insolutions-28012014.pdf) . This case holds that banks and the National Payments Corporation of India were liable under S. 43A for divulging information relating to transactions by their customers to a software company which provides services to these banks using the data, without first making them sign non-disclosure agreements. The NCPI was fined a nominal amount of Rs. 10,000."


[1]. Section 46, Information Technology Act, 2000.

[2]. Section 48 and 49 of the Information Technology Act, 2000 (Amended as of 2008).

[3]. Section 62, IT Act. However, the High Court may extend this period if there was sufficient cause for the delay.

[4]. S. 46(3), Information Technology Act, “No person shall be appointed as an adjudicating officer unless he possesses such experience in the field of Information Technology and Legal or Judicial experience as may be prescribed by the Central Government.”

[5]. From whatever data is available, the adjudicating officers in the states of Maharashtra, New Delhi, Haryana, Tamil Nadu and Karnataka are all secretaries to the respective state departments relating to IT.

[6]. See http://cis-india.org/internet-governance/blog/analysis-of-cases-filed-under-sec-48-it-act-for-adjudication-maharashtra; Also see the decision of the Karnataka adjudicating officer which held that body corporates are not persons under S.43 of the IT Act, and thus cannot be liable for compensation or even criminal action for offences under that Section, available at http://www.naavi.org/cl_editorial_13/adjudication_gpl_mnv.pdf.

[7]. Maharashtra Leads in War Against Cyber Crime, The Times of India, available at http://timesofindia.indiatimes.com/city/mumbai/Maharashtra-leads-in-war-against-cyber-crime/articleshow/30579310.cms. (18th February, 2014).

[8]. https://it.maharashtra.gov.in/1089/IT-Act-Judgements

[9]. http://www.tn.gov.in/documents/atoz/J

[10]. http://www.delhi.gov.in/wps/wcm/connect/DoIT_IT/doit_it/it+home/orders+of+adjudicating+officer

[11]. http://haryanait.gov.in/cyber.htm

[12]. Bangalore Likely to Host Southern Chapter of Cyber Appellate Tribunal, The Hindu, http://www.thehindu.com/news/national/karnataka/bangalore-is-likely-to-host-southern-chapter-of-cyber-appellate-tribunal/article3381091.ece (2nd May, 2013).

[13]. http://catindia.gov.in/Judgement.aspx

[14]. Section 61 of the IT Act – ‘No court shall have jurisdiction to entertain any suit or proceeding in respect of any matter which an adjudicating officer appointed under this Act or the Cyber Appellate Tribunal constituted under this Act is empowered by or under this Act to determine and no injunction shall be granted by any court or other authority in respect of any action taken or to be taken in pursuance of any power conferred by or under this Act. Provided that the court may exercise jurisdiction in cases where the claim for injury or damage suffered by any person exceeds the maximum amount which can be awarded under this Chapter.’

[15]. Section 43A, Information Technology Act, 2000 – ‘Compensation for failure to protect data (Inserted vide ITAA 2006) Where a body corporate, possessing, dealing or handling any sensitive personal data or information in a computer resource which it owns, controls or operates, is negligent in implementing and maintaining reasonable security practices and procedures and thereby causes wrongful loss or wrongful gain to any person, such body corporate shall be liable to pay damages by way of compensation, to the person so affected. (Change vide ITAA 2008)

Explanation: For the purposes of this section (i) "body corporate" means any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities (ii) "reasonable security practices and procedures" means security practices and procedures designed to protect such information from unauthorized access, damage, use, modification, disclosure or impairment, as may be specified in an agreement between the parties or as may be specified in any law for the time being in force and in the absence of such agreement or any law, such reasonable security practices and procedures, as may be prescribed by the Central Government in consultation with such professional bodies or associations as it may deem fit. (iii) "sensitive personal data or information" means such personal information as may be prescribed by the Central Government in consultation with such professional bodies or associations as it may deem fit.’

[16]. Draft Reasonable Security Practices Rules under Section 43A of the IT Act, available at http://www.huntonfiles.com/files/webupload/PrivacyLaw_Reasonable_Security_Practices_Sensitive_Personal_Information.pdf.

[17]. Ravindra Gunale v Bank of Maharashtra, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_RavindraGunale_Vs_BoM&Vodafone_20022013.PDF. Ram Techno Pack v State Bank of India, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_RamTechno_Vs_SBI-22022013.pdf.

Srinivas Signs v IDBI, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_SreenivasSigns_Vs_IDBI-18022014.PDF.

Raju Dada Raut v ICICI Bank, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_RajuDadaRaut_Vs_ICICIBank-13022013.pdf

Pravin Parkhi v SBI Cards, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_PravinParkhi_Vs_SBICardsPayment-30122013.PDF.

[18]. Sourabh Jain v ICICI, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_SourabhJain_Vs_ICICI&Idea-22022013.PDF.

[19]. Poona Automobiles v Punjab National Bank, https://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_PoonaAuto_Vs_PNB-22022013.PDF

[20]. Amit Patwardhan v Bank of Baroda, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudicaton_AmitPatwardhan_Vs_BankOfBaroda-30122013.PDF.

[21]. Ravindra Gunale v Bank of Maharashtra, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_RavindraGunale_Vs_BoM&Vodafone_20022013; Raju Dada Raut v ICICI Bank, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_RajuDadaRaut_Vs_ICICIBank-13022013.pdf.

[22]. Rohit Maheshwari v Vodafone, http://it.maharashtra.gov.in/Site/Upload/ACT/DIT_Adjudication_RohitMaheshwari_Vs_Vodafone&ors-04022014.PDF.

CIS Comments: Enhancing ICANN Accountability

by Geetha Hariharan last modified Jun 10, 2014 01:03 PM
On May 6, 2014, ICANN published a call for public comments on "Enhancing ICANN Accountability". This comes in the wake of the IANA stewardship transition spearheaded by ICANN and related concerns about ICANN's external and internal accountability mechanisms. The Centre for Internet and Society contributed to the call for comments.

Introduction:

On March 14, 2014, the US National Telecommunications and Information Administration announced its intent to transition key Internet domain name functions to the global multi-stakeholder Internet governance community. ICANN was tasked with the development of a proposal for transition of IANA stewardship, for which ICANN subsequently called for public comments. At NETmundial, ICANN President and CEO Fadi Chehadé acknowledged that the IANA stewardship transition and improved ICANN accountability were inter-related issues, and announced the impending launch of a process to strengthen and enhance ICANN accountability in the absence of US government oversight. The subsequent call for public comments on “Enhancing ICANN Accountability” may be found here.

Suggestions for improved accountability:

In this submission, the Centre for Internet and Society (“CIS”) limits its suggestions for improved ICANN accountability to matters of reactive or responsive transparency on the part of ICANN towards the global multi-stakeholder community. We propose the creation and implementation of a robust “freedom of information” or “right to information” process within ICANN, accompanied by an independent review mechanism.

Article III of ICANN's Bylaws notes that “ICANN and its constituent bodies shall operate to the maximum extent feasible in an open and transparent manner and consistent with procedures designed to ensure fairness”. As part of this, Article III(2) requires that ICANN make publicly available information on, inter alia, ICANN’s budget, annual audit, financial contributors and the amount of their contributions, as well as information on accountability mechanisms and the outcome of specific requests and complaints regarding the same. Such accountability mechanisms include reconsideration (Article IV(2)), independent review of Board actions (Article IV(3)), periodic reviews (Article IV(4)) and the Ombudsman (Article V).

Further, ICANN’s Documentary Information Disclosure Policy (“DIDP”) sets forth a process by which members of the public may request information “not already publicly available”. ICANN is to respond to such requests, either by disclosing the information or by denying the request, within 30 days. Denials under the DIDP may be appealed through the reconsideration or independent review procedures, to the extent applicable.

While ICANN has historically been prompt in its responses to DIDP requests, CIS is of the view that, absent the commitments in the Affirmation of Commitments (“AoC”) following the IANA stewardship transition, it would be desirable to amend and strengthen the response and appeal procedures for the DIDP and other, broader disclosures. Our concerns stem from the fact that, first, the substantive scope of appeal under the DIDP, in terms of the documents that may be requested, is unclear (say, contracts or financial documents regarding payments to Registries or Registrars, or a detailed, granular break-up of ICANN’s revenue and expenditures); and second, that grievances with decisions of the Board Governance Committee or the Independent Review Panel cannot be appealed.

Therefore, CIS proposes a mechanism based on “right to information” best practices, which have resulted in transparent and accountable governance at the governmental level.

First, we propose that designated members of ICANN staff shoulder responsibility to respond to information requests. The identity of such members (information officers, say) ought to be made public, including in the response document.

Second, an independent, third party body should be constituted to sit in appeal over information officers’ decisions to provide or decline to provide information. Such body may be composed of nominated members from the global multi-stakeholder community, with adequate stakeholder-, regional- and gender-representation. However, such members should not have held prior positions in ICANN or its related organizations. During the appointed term of the body, the terms and conditions of service ought to remain beyond the purview of ICANN, similar to globally accepted principles of an independent judiciary. For instance, the Constitution of India forbids any disadvantageous alteration of privileges and allowances of judges of the Supreme Court and High Courts during tenure.

Third, and importantly, punitive measures ought to follow unreasonable, unexplained or illegitimate denials of requests by ICANN information officers. In order to ensure compliance, penalties should be made continuing (a certain prescribed fine for each day of information-denial) on concerned officers. Such punitive measures are accepted, for instance, in Section 20 of India’s Right to Information Act, 2005, where the review body may impose continuing penalties on any defaulting officer.

Finally, exceptions to disclosure should be finite and time-bound. Any and all information exempted from disclosure should be clearly set out (and not merely as categories of exempted information). Further, all exempted information should be made public after a prescribed period of time (say, 1 year), after which any member of the public may request for the same if it continues to be unavailable.
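Taken together, the proposals above amount to a small, rule-based workflow: a request is logged against a named information officer, a response is due within a fixed window (the DIDP currently works to 30 days), unexplained default attracts a continuing per-day penalty on the officer (as under Section 20 of India's RTI Act), and any exemption carries a sunset after which the material becomes public. The sketch below is a minimal model of that workflow; the per-day fine, the 30-day window and the one-year sunset are illustrative parameters drawn from the discussion above, not figures proposed or adopted by ICANN.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

RESPONSE_WINDOW = timedelta(days=30)    # DIDP-style response deadline (illustrative)
DAILY_PENALTY = 250                     # continuing per-day fine on a defaulting officer (illustrative)
EXEMPTION_SUNSET = timedelta(days=365)  # "say, 1 year" before exempted material is published

@dataclass
class InformationRequest:
    officer: str                        # named information officer responsible for the response
    filed_on: date
    responded_on: Optional[date] = None
    exempted_on: Optional[date] = None  # date on which an exemption was claimed, if any

    def response_due(self) -> date:
        return self.filed_on + RESPONSE_WINDOW

    def penalty_accrued(self, today: date) -> int:
        """Continuing penalty accrued by the officer for each day past the deadline."""
        end = self.responded_on or today
        days_late = (end - self.response_due()).days
        return max(days_late, 0) * DAILY_PENALTY

    def exemption_expired(self, today: date) -> bool:
        """True once exempted material should be made public under the sunset rule."""
        return self.exempted_on is not None and today >= self.exempted_on + EXEMPTION_SUNSET

# Example: a request filed on 1 March and still unanswered on 15 April
# is 15 days overdue, so the officer has accrued 15 days of penalty.
req = InformationRequest(officer="officer-1", filed_on=date(2014, 3, 1))
print(req.response_due(), req.penalty_accrued(date(2014, 4, 15)))
```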

CIS hopes that ICANN shall deliver on its promise to ensure and enhance its accountability and transparency to the global multi-stakeholder community. To that end, we hope our suggestions may be positively considered.

Comment repository:

All comments received by ICANN during the comment period (May 6, 2014 to June 6, 2014) may be found at this link.

Free Speech and Contempt of Courts – II: Article 19(1)(a) and Indian Law

by Gautam Bhatia last modified Jun 16, 2014 05:48 AM
Gautam Bhatia continues his examination of free speech implications of the law of contempt: the power that equips courts to "protect the dignity of the Bench".

Towards the end of the last post, we saw how the Law Commission traced the genealogy of the “scandalising the Court” offence, inasmuch as it sought to protect the “standing of the judiciary”, to that of seditious libel. The basic idea is the same: if people are allowed to criticise state institutions in derogatory terms, then they can influence their fellow-citizens who, in turn, will lose respect for those institutions. Consequently, the authority of those institutions will be diminished, and they will be unable to effectively perform their functions. Hence, we prevent that eventuality by prohibiting certain forms of speech when it concerns the functioning of the government (seditious libel) or the Courts (scandalising the Court). This, of course, often ties the judges into knots, in determining the exact boundary between strident – but legitimate – criticism, and sedition/scandalising the Court.

Seditious libel, of course, went out in the United States with the lapse of the Sedition Act in 1800, and was abolished in England in 2009. Notoriously, it still remains on the statute books in India, in the form of S. 124A of the Indian Penal Code. An examination of the Supreme Court’s sedition jurisprudence would, therefore, be apposite. Section 124A makes it an offence to bring or attempt to bring into hatred or contempt, or excite or attempt to excite, disaffection towards the government. The locus classicus is Kedar Nath Singh v. State of Bihar. I have analysed the case in detail elsewhere, but briefly, Kedar Nath Singh limited the scope of S. 124A to incitement to violence, or the fostering of public disorder, within the clear terms of Article 19(2). In other words, a prosecution for sedition, if it is to succeed, must satisfy the Court’s public order jurisprudence under Article 19(2). The public order test itself – as we discussed previously on this blog, in a post about Section 66A – was set out in highly circumscribed terms in Ram Manohar Lohia’s Case, which essentially required a direct and imminent degree of proximity between the speech or expression and the breach of public order (in that case, the Court refused to sustain the conviction of a speaker who expressly encouraged an audience to break the law). Subsequently, in S. Rangarajan v. P. Jagjivan Ram, the Court noted that the relation ought to be like that of a “spark in a powder keg” – something akin to inciting an enraged mob to immediate violence. What the Court has clearly rejected is the argument that it is permissible to criminalise speech and expression simply because its content might lower the authority of the government in the eyes of the public, which, in turn, could foster a disrespect for law and the State, and lead to breaches of public order.

Unfortunately, however, when it comes to contempt and scandalising, the Court has adopted exactly the chain of reasoning that it has rejected in the public order cases. As early as 1953, in Aswini Kumar Ghose v. Arabinda Bose, the Court observed that “it is obvious that if an impression is created in the minds of the public that the Judges in the highest Court in the land act on extraneous considerations in deciding cases, the confidence of the whole community in the administration of justice is bound to be undermined and no greater mischief than that can possibly be imagined.”

Subsequently, in D.C. Saxena v. CJI, the Court held that “Any criticism about judicial system or the judges which hampers the administration of justice or which erodes the faith in the objective approach of the judges and brings administration of justice to ridicule must be prevented. The contempt of court proceedings arise out of that attempt. Judgments can be criticised. Motives to the judges need not be attributed. It brings the administration of justice into disrepute. Faith in the administration of justice is one of the pillars on which democratic institution functions and sustains.” Notice the chain of causation the Court is working with here: it holds faith in the administration of justice as a necessary pre-requisite to the administration of justice, and prohibits criticism that would cause other people to lose their faith in the judiciary. This is exactly akin to a situation in which I make an argument advocating Marxist theory, and I am punished because some people, on reading my article, might start to hold the government in contempt, and attempt to overthrow it by violent means. Not only is it absurd, it is also entirely disrespectful of individual autonomy: it is based on the assumption that the person legally and morally responsible for a criminal act is not the actor, but the person who convinced the actor, through words and arguments, to break the law – as though individuals are incapable of weighing up competing arguments and coming to decisions of their own accord. Later on, in the same case, the Court holds that scandalising includes “all acts which bring the court into disrepute or disrespect or which offend its dignity or its majesty or challenge its authority.” As we have seen before, however, disrepute or disrespect of an institution cannot in itself be a ground for punishment, unless there is something more. That something more is actual disruption of justice, which is presumably caused by people who have lost their confidence in the judiciary; but in eliding disrepute/disrespect with obstruction of justice, the Court entirely fails to consider the individual agency involved in crossing that bridge, agency that is not that of the original speaker. This is why, again, in its sedition cases, the Court has gone out of its way to actually require a proximate relation between “disaffection” and public order breaches, in order to save the section from unconstitutionality. Its contempt jurisprudence, on the other hand, shows no such regard. It is perhaps telling that the Court, one paragraph on, adopts the “blaze of glory” formulation that was used in an 18th century, pre-democratic English case.

Indeed, the Court draws an express analogy with sedition, holding that “malicious or slanderous publication inculcates in the mind of the people a general disaffection and dissatisfaction on the judicial determination and indisposes in their mind to obey them.” Even worse, it then takes away even the basic protection of mens rea, holding that all that matters is the effect of the impugned words, regardless of the intention or recklessness with which they were uttered. The absence of mens rea, along with the absence of any meaningful proximity requirement, makes for a very dangerous cocktail – an offence that can cover virtually any activity that the Court believes has a “tendency” to certain outcomes: “Therefore, a tendency to scandalise the court or tendency to lower the authority of the court or tendency to interfere with or tendency to obstruct the administration of justice in any manner or tendency to challenge the authority or majesty of justice, would be a criminal contempt. The offending act apart, any tendency if it may lead to or tends to lower the authority of the court is a criminal contempt. Any conduct of the contemnor which has the tendency or produces a tendency to bring the judge or court into contempt or tends to lower the authority of the court would also be contempt of the court.”

The assumption implicit in these judgments – that the people need to be protected from certain forms of speech, because they are incompetent at making up their own minds about it in a reasonable manner – was made express in Arundhati Roy’s Case, in 2002. After making observations about how confidence in the Courts could not be allowed to be “tarnished” at any cost, the Court noted that “the respondent has tried to cast an injury to the public by creating an impression in the mind of the people of this backward country regarding the integrity, ability and fairness of the institution of judiciary”, and observed that the purpose of the offence was to protect the (presumably backward) public by maintaining its confidence in the judiciary, the offence having been enacted keeping in mind “the ground realities and prevalent socio-economic system in India, the vast majority of whose people are poor, ignorant, uneducated, easily liable to be misled. But who acknowledly (sic) have the tremendous faith in the dispensers of Justice.” So easy, indeed, to mislead, that there was no need for any evidence to demonstrate it: “the well-known proposition of law is that it punishes the archer as soon as the arrow is shot no matter if it misses to hit the target. The respondent is proved to have shot the arrow, intended to damage the institution of the judiciary and thereby weaken the faith of the public in general and if such an attempt is not prevented, disastrous consequences are likely to follow resulting in the destruction of rule of law, the expected norm of any civilised society.”

The American legal scholar, Vince Blasi, has outlined a “pathological perspective” of free speech. According to him, heightened protection of speech – even to the extent of protecting worthless speech – is important, because when the government passes laws to regulate speech that is hostile towards it, it will, in all likelihood, over-regulate purely out of self-interest, sometimes even unconsciously so. This is why, if the Courts err, they ought to err on the side of speech-protection, because it is quite likely that the government has over-estimated public order and other threats that stem out of hostile speech towards government itself. The pathological perspective is equally – if not more – applicable in the realm of contempt of Court, because here the Court is given charge of regulating speech hostile towards itself. Keenly aware of the perils of speech suppression that lie in such situations, we have seen that the United States and England have abolished the offence, and the Privy Council has interpreted it extremely narrowly.

The Indian Supreme Court, however, has gone in precisely the opposite direction. It has used the Contempt of Court statute to create a strict-liability criminal offence, with boundlessly manipulable categories, which is both overbroad and vague, entirely inconsistent with the Court’s own free speech jurisprudence, and at odds with free speech in a liberal democracy.


Gautam Bhatia — @gautambhatia88 on Twitter — is a graduate of the National Law School of India University (2011), and presently an LLM student at the Yale Law School. He blogs about the Indian Constitution at http://indconlawphil.wordpress.com. Here at CIS, he blogs on issues of online freedom of speech and expression.

Content Removal on Facebook — A Case of Privatised Censorship?

by Jessamine Mathew last modified Jun 16, 2014 05:23 AM
Any activity on Facebook, be it creating an account, posting a picture or status update, or creating a group or page, is bound by Facebook’s Terms of Service and Community Guidelines. These contain a list of content that is prohibited from being published on Facebook, ranging from hate speech to pornography to violations of privacy.
Content Removal on Facebook — A Case of Privatised Censorship?

Jessamine Mathew

Facebook removes content largely on the basis of requests either by the government or by other users. The Help section of Facebook deals with warnings and blocking of content. It says that Facebook only removes content that violates Community Guidelines and not everything that has been reported.

I conducted an experiment to primarily look at Facebook’s process of content removal and also to analyse what kind of content they actually remove.

  1. I put up a status which contained personal information about a person on my Friend List (the information was false). I then asked several people (including the person about whom the status was made) to report the status, on the grounds of harassment or violation of privacy rights. Seven people reported the status. Within half an hour of the reports being made, I received the following notification:
    "Someone reported your post for containing harassment and 1 other reason."

    The notification also contained the option to delete my post and said that Facebook would look into whether it violated their Community Guidelines.

    A day later, all those who had reported the status received notifications stating the following:

    "We reviewed the post you reported for harassment and found it doesn't violate our Community Standards."

    I received a similar notification as well.
  2. I, along with around thirteen others, reported a Facebook page which contained pictures of my friend and a few other women with lewd captions in various regional languages. We reported the page for harassment and bullying and also for humiliating someone we knew. The report was made on 24 March, 2014. On 30 April, 2014, I received a notification stating the following:

    "We reviewed the page you reported for harassment and found it doesn't violate our Community Standards.

    Note: If you have an issue with something on the Page, make sure you report the content (e.g. a photo), not the entire Page. That way, your report will be more accurately reviewed."

    I then reported each picture on the page for harassment and received a series of notifications on 5 May, 2014 which stated the following:

    "We reviewed the photo you reported for harassment and found it doesn't violate our Community Standards."

These incidents stand in stark contrast with Facebook's repeated removal of content which it finds objectionable. In 2013, a gay man’s photograph protesting the Supreme Court’s December judgment was taken down. In 2012, Facebook removed artwork by a French artist which featured a nude woman. In the same year, Facebook removed photographs of a child who was born with a birth defect and banned the mother from accessing Facebook completely. Facebook also removed the picture of a breast cancer survivor who had posted the tattoo she got following her mastectomy. Following this, however, Facebook issued an apology and stated that mastectomy photographs are not in violation of its Content Guidelines. Even in the sphere of political discourse and dissent, Facebook has bowed to government pressure and removed pages and content, as evidenced by the ban on the Facebook page of the progressive Pakistani band Laal and other anti-Taliban pages. Following much social media outrage, Facebook soon revoked this ban. These are just a few examples of how harmless content has been taken down by Facebook, in a biased exercise of its powers.

After incidents of content removal have been made public through news reports and complaints, Facebook often apologises for removing content and issues statements that the removal was an “error.” In some cases, they edit their policies to address specific kinds of content after a takedown (like the reversal of the breastfeeding ban).

On the other hand, however, Facebook is notorious for refusing to take down content that is actually objectionable, as partially evidenced by my own experiences listed above. There have been complaints about Facebook’s refusal to remove misogynistic content which glorifies rape and domestic violence through a series of violent images and jokes. One such page was finally removed, not because of its content but because the administrators had used fake profiles. When asked, a spokesperson said that censorship “was not the solution to bad online behaviour or offensive beliefs.” While this may be true, the question that needs answering is why Facebook decides to draw these lines only when it comes to certain kinds of ‘objectionable’ content and not others.

All of these examples point to a certain arbitrariness in Facebook’s censorship policies. It seems that Facebook is far more concerned with removing content that might cause public or governmental outrage, or that defies some internal morality code, than with protecting the rights of those who may be harmed by such content, as its Statement of Policies so clearly spells out.

There are many aspects of the review and takedown process that remain hazy, such as who exactly reviews the content that is reported and what standards they are made to employ. In 2012, it was revealed that Facebook outsourced its content reviews to oDesk and provided the reviewers with a 17-page manual which listed what kind of content was appropriate and what was not. A bare reading of the leaked document gives one a sense of Facebook’s aversion to sex and nudity, and of its comparative neglect of other harm-inducing content, such as harassment through the misuse of posted content and of what it categorises as hate speech.

In the process of monitoring the acceptability of content, Facebook takes upon itself the role of a private censor, with no accountability or transparency in its working. A Reporting Guide was published to increase transparency in its content review procedures. The Guide reveals that Facebook provides an option whereby the user whose content is removed can appeal the decision in “some cases.” However, the lack of clarity on what these cases are, or what the appeal process involves, frustrates the purpose of this provision and leaves it open to misuse. Additionally, Facebook reserves the right to remove content with or without notice depending upon the severity of the violation; there is no indication of how severe a violation must be to warrant removal without notice. In most of the cases above, the user was not notified that their content had been found offensive and would be liable for takedown. Although Facebook publishes a transparency report, it only contains a record of takedowns following government requests, not those prompted by private users of Facebook. The unbridled power that Facebook exercises over our personal content, despite clearly stating that all content posted belongs to the user alone, threatens freedom of expression on the site. What is required is a proper implementation of the policies Facebook claims to employ, along with a systematic record of the procedure used to remove content, one that is in consonance with natural justice.
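One concrete way to read that closing demand is as a call for a machine-readable audit trail: every report, the specific guideline it was assessed against, the decision taken, whether the poster was notified, and whether an appeal was available. The sketch below is a hypothetical record structure for such a log; it is not Facebook's actual review system, and all field names and categories are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

@dataclass
class TakedownRecord:
    """One entry in a hypothetical, publicly auditable content-review log."""
    content_id: str
    reported_for: str        # e.g. "harassment", "privacy violation"
    guideline_applied: str   # the specific community standard assessed
    decision: str            # "removed" or "retained"
    reviewer_role: str       # internal reviewer, outsourced reviewer, etc.
    poster_notified: bool    # was the poster told before or after removal?
    appeal_available: bool
    decided_at: datetime = field(default_factory=datetime.utcnow)

def transparency_summary(log: List[TakedownRecord]) -> Dict[str, int]:
    """Aggregate counts suitable for a transparency report that covers
    user reports as well as government requests."""
    summary = {"removed": 0, "retained": 0, "removed_without_notice": 0}
    for entry in log:
        summary[entry.decision] += 1
        if entry.decision == "removed" and not entry.poster_notified:
            summary["removed_without_notice"] += 1
    return summary
```

A log of this kind, published alongside the existing transparency report, would make it possible to check whether stated policies (notice, appeal, consistent application of guidelines) are actually followed in practice.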

FOEX Live: June 8-15, 2014

by Geetha Hariharan last modified Jun 16, 2014 10:22 AM
A weekly selection of news on online freedom of expression and digital technology from across India (and some parts of the world). Please email relevant news/cases/incidents to geetha[at]cis-india.org.

Karnataka:

A Hindu rightwing group demanded the arrest of a prominent activist who, during a speech on the much-debated Anti-superstition Bill, made allegedly blasphemous comments.

Kerala:

On June 10, the principal and six students of Government Polytechnic at Kunnamkulam, Thrissur, were arrested for publishing a photograph of Prime Minister Narendra Modi alongside photographs of Hitler, Osama bin Laden and Ajmal Kasab, under the rubric ‘negative faces’. An FIR was registered against them for various offences under the Indian Penal Code including defamation (Section 500), printing or engraving matter known to be defamatory (Section 501), intentional insult with intent to provoke breach of peace (Section 504), and concealing design to commit offence (Section 120) read with Section 34 (acts done by several persons in furtherance of common intention). The principal was later released on bail.

In a similarly unsettling incident, on June 14, 2014, a case was registered against the principal and 11 students of Sree Krishna College, Guruvayur, for using “objectionable and unsavoury” language in a crossword in relation to PM Narendra Modi, Rahul Gandhi, Shashi Tharoor, etc. Those arrested were later released on bail.

Maharashtra:

Facebook posts involving objectionable images of Dr. B.R. Ambedkar led to arson and vandalism in Pune. Police have sought details of the originating IP address from Facebook.

A Pune-based entrepreneur has set up a Facebook group to block ‘offensive’ posts against religious leaders. The Social Peace Force will use Facebook’s ‘Report Spam’ option to seek the take-down of ‘offensive’ material.

Deputy Chief Minister Ajit Pawar suggested a ban on social media in India, and retracted his statement post-haste.

Punjab:

A bailable warrant was issued against singer Kailash Kher for failing to appear in court in relation to a case. The singer is alleged to have hurt the religious sentiments of the Hindu community in a song, and a case was registered under Sections 295A and 298 of the Indian Penal Code.

Uttar Pradesh:

The presence of a photograph on Facebook, in which an accused in a murder case is found posing with an illegal firearm, resulted in a case being registered against him under the IT Act.

News & Opinion:

Authors, civil society activists and other concerned citizens issued a joint statement questioning Prime Minister Modi’s silence over arrests and attacks on the exercise of free speech and dissent. Signatories include Aruna Roy, Romila Thapar, Baba Adhav, Vivan Sundaram, Mrinal Pande, Jean Dreze, Jayati Ghosh, Anand Patwardhan and Mallika Sarabhai.

In response to the Mumbai police’s decision to take action against those who ‘like’ objectionable or offensive content on Facebook, experts say the freedom to ‘like’ or ‘share’ posts or tweets is fundamental to freedom of expression. Moreover, India’s defamation laws for print and the Internet need harmonization.

While supporting freedom of expression, Minister for Information and Broadcasting Prakash Javadekar cautioned the press and users of social media to exercise these freedoms responsibly, in the interest of unity and peace. The Minister has also spoken out in favour of free publication, in light of recent legal action against academic work and other books.

Infosys, India’s leading IT company, served defamation notices on the Economic Times, the Times of India and the Financial Express, for “loss of reputation and goodwill due to circulation of defamatory articles”. Removal of the articles and an unconditional apology were sought, and Infosys claimed damages amounting to Rs. 2000 crore. On a related note, Dr. Ashok Prasad argues that criminal defamation is a violation of freedom of speech.

Drawing on examples from the last three years, Ritika Katyal analyses the rise in violence and legal action in India against dissent and ‘hurt sentiment’, and concludes that Prime Minister Narendra Modi has both the responsibility and the ability to “rein in Hindu hardliners”.

Discretionary powers resting with the police under the vaguely and broadly drafted Section 66A, Information Technology Act, are dangerous and unconstitutional, say experts.

Providing an alternative view, the Hindustan Times comments that the police ought to “pull up their socks” and understand the social media in order to effectively police objectionable and offensive content on the Internet.

Keeping Track:

Indconlawphil’s Free Speech Watch keeps track of violations of freedom of expression in India.

Multi-stakeholder Models of Internet Governance within States: Why, Who & How?

by Geetha Hariharan last modified Jun 16, 2014 02:27 PM
Internet governance, for long a global exercise, has found new awareness within national frameworks in recent times. Especially relevant for developing countries, effective national IG mechanisms are important to raise awareness and ensure multi-stakeholder participation at technical, infrastructural and public policy levels.

This post is a surface-level overview of national IG bodies, and is intended to inform introductory thoughts on national IG mechanisms.

A Short Introduction

The previous decade has seen a proliferation of regional, sub-regional and national initiatives for Internet governance (IG). Built primarily on the multi-stakeholder model, these initiatives aim at creating dialogue on issues of regional, local or municipal importance. In Asia, Bangladesh has instituted a national IGF, the Bangladesh IGF, with the stated objective of creating a national multi-stakeholder forum that is specialized in Internet governance issues, and to facilitate informed dialogue on IG policy issues among stakeholders. India, too, is currently in the process of instituting such a forum. At this juncture, it is useful to consider the rationale and modalities of national IG bodies.

The Internet has long been considered a space of non-governmental, multi-stakeholder, decentralized, bottom-up governance. The Declaration of Independence of Cyberspace, John Perry Barlow’s defiant articulation of the Internet’s freedom from governmental control, is a classic instance of this. The Internet is a “vast ocean”, we claimed; “no one owns it”.[1] Even today, members of the technical community insist that everyone ought to “let techies do their job”: a plea, if you will, grounded in the complexity of cyber-walls and cyber-borders (or in their absence).

But as Prof. Milton Mueller argues in Ruling the Root, the Internet has always been a contentious resource: battles over its governance (or specifically, the governance of the DNS root, both the root-zone file and the root servers) have leapt from the naïveté of the Declaration of Independence to a private-sector-led, contract-based exploitation of Internet resources. The creation of ICANN was a crucial step in this direction, following arbitrary policy choices by the entities then managing the naming and numbering resources of the Internet.

The mushrooming of parallel tracks of Internet governance is further evidence of the malleability of the space. As of today, various institutions – inter-governmental and multi-stakeholder – extend their claims to governance. ICANN, the World Summit on the Information Society, the World Conference on International Telecommunications, the Internet Governance Forum and the Working Group on Enhanced Cooperation under the ECOSOC Committee for Science, Technology and Development are a few prominent tracks. The WSIS process has absorbed various UN special bodies (the ITU, UNESCO, UNCTAD and UNDP are but a few), with UNESCO instituting a separate study on Internet-related issues. A proposal for a multilateral Committee on Internet-Related Policies remains stillborn.

Amongst these, the Internet Governance Forum (IGF) remains a strong contender for a truly multi-stakeholder process facilitating dialogue on IG. The IGF was set up following the recommendation of the Working Group on Internet Governance (WGIG), constituted after the Geneva phase of the WSIS.

Rationale: Why Have National IG bodies?

The issue of national multi-stakeholder cooperation/collaboration in IG is not new; it has been alive since the early 2000s. The Tunis Agenda, in paragraph 80, encourages the “development of multi-stakeholder processes at the national, regional and international levels to discuss and collaborate on the expansion and diffusion of the Internet as a means to support development efforts to achieve internationally agreed development goals and objectives, including the Millennium Development Goals” (emphasis supplied).

In its June 2005 Report, the Working Group on Internet Governance (WGIG) emphasizes that “global Internet governance can only be effective if there is coherence with regional, subregional and national-level policies”. Towards this end it recommends that “coordination be established among all stakeholders at the national level and a multi-stakeholder national Internet governance steering committee or similar body be set up” (emphasis supplied). The IGF, whose creation the WGIG recommended, has since been commended for its impact on the proliferation of national IGFs.

The rationale, then, was that multi-stakeholder steering committees at the national level would help to create a cohesive body to coordinate positions on Internet governance. In Reforming Internet Governance, WGIG member Waudo Siganga writes of the Internet Steering Committee of Brazil as a model, highlighting lessons that states (especially developing countries) may learn from CGI.br.

The Brazilian Internet Steering Committee (CGI.br) was set up in 1995 and is responsible, inter alia, for the management of the .br domain, distribution of Internet addresses and administration of metropolitan Internet exchange points. CERT.br ensures network security and extends support to network administrators. Siganga writes that CGI.br is a “well-structured multistakeholder entity, having representation from government and democratically chosen representatives of the business sector, scientific and technological community and an Internet expert”.

Why is CGI.br a model for other states? First, CGI.br exemplifies how countries can structure, in an effective manner, a body that creates awareness about IG issues at the national level. Moreover, the multi-stakeholder nature of CGI.br shows how participation can be harnessed effectively to build capacity across domestic players. This also reflects the multi-stakeholder aspects of Internet governance at the global level, clarifying and implementing the WSIS standards (for instance). Especially in developing countries, where awareness of and coordination for Internet governance is lacking at the national level, national IG committees can bridge the gap between awareness and participation. Such awareness can translate into local solutions for local issues, as well as contribute to an informed, cohesive stance at the global level.

Stakeholders: Populating a national IG body

A national IG body – be it a steering committee, an IGF or another forum – should ideally involve all relevant stakeholders. As noted before, since its inception the Internet has not been subject to exclusive governmental regulation. The World Summit on the Information Society recognized this, but negotiations amongst stakeholders resulted in the delegation of roles and responsibilities: the controversial and much-debated paragraph 35 of the Tunis Agenda reads:

  1. Policy authority for Internet-related public policy issues is the sovereign right of States. They have rights and responsibilities for international Internet-related public policy issues.
  2. The private sector has had, and should continue to have, an important role in the development of the Internet, both in the technical and economic fields.
  3. Civil society has also played an important role on Internet matters, especially at community level, and should continue to play such a role.
  4. Intergovernmental organizations have had, and should continue to have, a facilitating role in the coordination of Internet-related public policy issues.
  5. International organizations have also had and should continue to have an important role in the development of Internet-related technical standards and relevant policies.

This position remains endorsed by the WSIS process; the recent WSIS+10 High Level Event endorsed by acclamation the WSIS+10 Vision for WSIS Beyond 2015, which “respect mandates given by Tunis Agenda and respect for the multi-stakeholder principles”. In addition to government, the private sector and civil society, the technical community is identified as a distinct stakeholder group. Academia has also found a voice, as demonstrated by stakeholder-representation at NETmundial 2014.

A study for the Internet Society (ISOC), Assessing National Internet Governance Arrangements, authored by David Souter, maps IG stakeholders at the global, regional and national levels. At the global level, primary stakeholders include ICANN (a not-for-profit, private-sector corporation involved in the governance and technical coordination of the DNS), the IETF, IAB and W3C (technical standards), governments and civil society organizations, all of which participate with different levels of involvement in the IGF, ICANN, ITU, etc.

At the national/municipal level, the list of stakeholders is as comprehensive. Governmental stakeholders include: (1) relevant Ministries (in India, these are the Ministry of Information and Broadcasting, and the Ministry of Communications and Information Technology – the Department of Electronics and Information Technology under the MCIT is particularly relevant), and (2) regulators, statutory and independent (the Telecom Regulatory Authority of India, for example). At the national level, these typically seek inputs from other stakeholders while making recommendations to governments, which then enact laws or make policy. In India, for instance, the TRAI conducts consultations prior to making recommendations to the government.

Within the private sector, there may be companies (1) on the supply side, such as infrastructure networks, telecommunications service companies, Internet Service Providers, search engines, social networks, cybercafés, etc., and (2) on the demand side, online businesses, advertising/media, financial service providers and other businesses that use the Internet. There may also be national registries managing ccTLDs, such as Registro.br or the National Internet Exchange of India (NIXI). There may also be the press and news corporations, representing both corporate and public interest under specific circumstances (media ownership and freedom of expression, for distinct examples).
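These registries are visible in the technical fabric of the Internet itself: the root zone delegates each ccTLD to nameservers operated by or for the national registry (Registro.br for .br, and the registry associated with NIXI for .in). The snippet below, which assumes the third-party dnspython package is installed, simply asks the public DNS which nameservers those delegations point to; it is offered only as an illustration of where registry operation surfaces technically, not as part of any study discussed here.

```python
# Look up ccTLD delegations in the public DNS, using the third-party
# dnspython package (pip install dnspython).
import dns.resolver

for cctld in ("br", "in"):
    answer = dns.resolver.resolve(cctld, "NS")  # NS records published via the root zone
    servers = sorted(str(record.target) for record in answer)
    print(f".{cctld} is served by: {', '.join(servers)}")
```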

Civil society organisations, including consumer organisations, think-tanks and grassroots organisations, participate at various levels of policy-making in the formal institutional structure, and are crucial in representing users and public interest. The complexity of stakeholders may be seen from Souter’s report, and this enumeration is but a superficial view of the national stakeholder-population.

Processes: Creating effective national IG bodies

National IG bodies – be they steering committees, IGFs, consultative/working groups or other forums – may be limited by formal institutional governmental settings. While constrained by the responsibility-gradient in paragraph 35 of the Tunis Agenda, an effective national IG body requires robust multi-stakeholder participation, as Souter notes, in technical governance, infrastructure and public policy issues. Its effectiveness also lies in governmental acceptance of its expertise and recommendations; in short, in the translation of the IG body's decisions into policy.

How do these stakeholders interact at the national level? In addition to the Brazilian example (CGI.br), an ISOC study by Souter and Monica Kerretts-Makau, Internet Governance in Kenya: An Assessment, provides a detailed answer. At the technical level, the registry KENIC manages the .ke domain, while the Kenya Computer Incident Response Team Coordination Centre coordinates national responses to incidents and collaborates internationally on cyber-security issues. A dedicated IPv6 task force was also created to promote Kenya's transition to IPv6.

At the infrastructural level, both the government and the private sector play important roles. Directly, ministries and government departments consult with infrastructure providers in creating policy. In India, for instance, the TRAI conducts multi-stakeholder consultations on issues such as telecom tariffs, colocation tariffs for submarine cable stations, mobile towers, etc. The government may also take the lead in creating infrastructure, such as the national optic fibre networks in India and Kenya, as well as in creating investment opportunities, such as liberalizing FDI. At the public policy level, there may be consultations initiated by government bodies (such as the TRAI or the Law Commission), in which other stakeholders participate.

As one can see, consultations are initiated by government through ministries, regulators, law commissions or specially constituted committees. Several countries have also set up national IGFs, which typically involve all major stakeholders in voluntary participation and form a discussion forum for existing and emerging IG issues. National IGFs have been considered particularly useful for creating awareness within the country, and may best address IG issues at the domestic policy level. However, Prof. Mueller writes that what is necessary are “reliable mechanisms for consistently feeding the preferences expressed in these forums to actual global policy-making institutions like ICANN, RIRs, WIPO, and WTO which impact distributional outcomes”.


[1] M. Mueller, Ruling the Root: Internet Governance and the Taming of Cyberspace 57 (2002).

Comments to ICANN Supporting the DNS Industry in Underserved Regions

by Jyoti Panday last modified Jul 04, 2014 06:48 AM
Towards exploring ideas and strategies to help promote the domain name industry in regions that have typically been underserved, ICANN published a call for public comments on May 14, 2014. In particular, ICANN sought comments on existing barriers to Registrar Accreditation and operation, and suggestions on how these challenges might be mitigated. CIS submitted comments in response, which will be used to determine next steps to support the domain name industry in underserved regions.

Domain names and the DNS are used in virtually every aspect of the Internet, and without the DNS, the Internet as we know it would not exist. The DNS root zone has economic value, and ICANN's contract with Verisign stipulates that domain names may be sold only via ICANN-accredited registrars. By virtue of its indirect control of the root, ICANN has the power and capacity to influence the decisions of entities involved in the management and operation of the DNS, including registrars.
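
To make the DNS's role concrete, here is a minimal, self-contained sketch of the lookup that sits behind virtually every connection: a human-readable name is resolved into the IP addresses that machines actually use. It relies only on the Python standard library, and "icann.org" is merely an illustrative domain.

```python
# Minimal illustration of what the DNS does for every Internet transaction:
# resolving a human-readable name into machine-usable IP addresses.
# Uses only the Python standard library; "icann.org" is an arbitrary example.
import socket

def resolve(name: str, port: int = 443) -> list[str]:
    """Return the unique IP addresses the local resolver finds for `name`."""
    addresses = set()
    for _family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(name, port):
        addresses.add(sockaddr[0])  # first element of sockaddr is the IPv4/IPv6 address
    return sorted(addresses)

if __name__ == "__main__":
    print(resolve("icann.org"))
```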

Too far, too many?

We acknowledge some of ICANN's efforts at improvement, in particular with reference to barriers to participation in DNS-related business in regions such as Africa and the Middle East, including the creation of a fellowship program and the increased availability of translated materials. However, despite these efforts, the gaps in the distribution of DNS registrars and registries across the world have become an issue of heightened concern.

This is particularly true in light of the distribution of registrars: of the 1,124 ICANN-accredited registrars, North America has a total of 765. The US and Canada together have more than double the number of registrars of the rest of the world taken collectively. To put things further into perspective, 725 of these registrars are from the United States alone, and 7 from the 54 countries of Africa.

A barrier to ICANN's capacity-building initiatives has been a lack of trust, given the general view that ICANN focuses on policies that favour entrenched incumbents from richer countries. Without adequate representation from poorer countries, and from the rest of the world's Internet population, there is no hope of changing these policies or establishing trust. The entire region of Latin America and the Caribbean, comprising a population of 542.4 million internet users[1] in 2012, has only 22 registrars spread across a total of 10 countries. Europe, covering a population of 518.5 million internet users[2], has 158 registrars, 94 of which are spread across Germany, the UK, France, Spain and the Netherlands. The figures paint the most dismal picture with respect to South Asia, in particular India, where just 16 registrars cater to a population of internet users that is expected to reach 243 million by June 2014[3].
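
The concentration described above can be checked with simple arithmetic. The sketch below uses only the figures quoted in the preceding paragraphs; it is a back-of-the-envelope verification, not an authoritative ICANN dataset.

```python
# Back-of-the-envelope check of the registrar concentration described above,
# using only the figures quoted in the text (not an authoritative ICANN dataset).
TOTAL_REGISTRARS = 1124
NORTH_AMERICA = 765        # US + Canada
UNITED_STATES = 725
AFRICA = 7                 # across 54 countries

rest_of_world = TOTAL_REGISTRARS - NORTH_AMERICA   # 359
print(f"North America's share: {NORTH_AMERICA / TOTAL_REGISTRARS:.0%}")
print(f"North America vs rest of world: {NORTH_AMERICA / rest_of_world:.2f}x")  # > 2, i.e. 'more than double'
print(f"US alone vs all of Africa: {UNITED_STATES / AFRICA:.0f}x")
```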

While we welcome ICANN's research and outreach initiatives with regard to the DNS ecosystem in underserved regions, without the crucial first step of clarifying the metrics that constitute an underserved region, these efforts might not have their intended impact. ICANN cannot hope to identify strategies towards bridging the gaps that exist in the DNS ecosystem without going beyond the current ICANN community, which, while nominally 'multistakeholder' and open to all, grossly under-represents those parts of the world that are not North America and Western Europe.

The lack of registries in the developing world is another significant issue that needs to be highlighted and addressed. The top 5 gTLD registries are in the USA, and it is important that users and the community feel that the fees being collected are fair compensation for the services provided. As registries operate in captive markets allocated by ICANN, we invite ICANN to improve its financial accountability by enabling its stakeholders to assess the fees collected on these registrations.

Multistakeholderism—community and consensus

As an organization that holds itself out as a champion of the bottom-up policy development process, and as a private corporation fulfilling a public interest function, ICANN is in a unique position to establish new norms for managing common resources. In theory, and under ICANN's extensive governance rules, the Board is a legislative body that is only supposed to approve the consensus decisions of the community, while the staff wield executive control. In reality, however, both the Board and the staff have been criticised for decisions that are not backed by the community.

The formal negotiations between ICANN and the Registrar Stakeholder Group Negotiating Team (Registrar NT) over the new Registrar Accreditation Agreement (RAA) are an example of a process that has a multistakeholder approach but fails on values of deliberation and pluralistic decision-making.[4] ICANN staff insisted on including a "proposed Revocation (or 'blow up') Clause that would have given them the ability to unilaterally terminate all registrar accreditations", as well as another proposal seeking to give the ICANN Board the ability to unilaterally amend the RAA (identical to a proposal inserted in the gTLD registry agreement, a clause met with strong opposition not only from the Registry Stakeholder Group but from the broader ICANN community).

Both proposals undermine the multistakeholder approach of the ICANN governance framework, as they seek more authority for the Board rather than for the community, or protections for registrars and, more importantly, registrants. The proposed amendments to the RAA were not issues raised by law enforcement, the GAC or the GNSO, but by ICANN staff, and they received considerable pushback from the Registrar NT. The bottom-up policy-making process at ICANN has also been questioned with reference to the ruling on vertical integration between registries and registrars, where the community could not even approach consensus.[5] Concerns have also been raised about the extent of the power granted to special advisory bodies handpicked by the ICANN president, the inadequacy of existing accountability mechanisms for providing a meaningful external check on Board decisions, and the lack of representation of underserved regions on these special bodies. ICANN must evolve its accountability mechanisms to go beyond the opportunity to comment on proposed policy, and extend to a role for stakeholders in decision-making, which is presently a privilege reserved for staff rather than an outcome of bottom-up consensus.

ICANN was created as a consensus-based organisation that would enable the Internet, its stakeholders and its beneficiaries to move forward in the most streamlined, cohesive manner.[6] Through its management of the DNS, ICANN undertakes public governance duties, and it is crucial that it upholds the democratic values entrenched in the multistakeholder framework. Bottom-up policy-making extends beyond passive participation and has an impact on the direction of policy. Presently, while anyone can comment on policy issues, only a few have a say in which comments are integrated into outcomes and action. We would like to stress not just the need to improve and introduce checks and balances within the ICANN ecosystem, but also the importance of integrating accountability and transparency practices at all levels of decision-making.

Bridging the gap

We welcome the Africa Strategy working group and the public community process initiated by ICANN towards building the domain name industry in Africa, and we are sure there will be lessons applicable to many other underserved regions. In the context of this report, CIS wants to examine the existing criteria of the accreditation process. As ICANN's role evolves and its revenues grow across the DNS and the larger Internet landscape, it is important, in our view, that ICANN review and evolve its processes for accreditation and assess whether they are as relevant today as they were when launched.

The relationship between ICANN and every accredited registrar is governed by the individual RAA, which sets out the obligations of both parties, and we recommend simplifying and improving these agreements. The RAA language is complex, technical and not relevant to all regions, and presently there are no online forms for the accreditation process. While ICANN's working language will remain English, the present framing has an American bias. We recommend creating an online application process and simplifying the language, keeping it contextual to the region. It would also help if ICANN invested in introducing some standardization across forms; this would reduce the time and effort it takes to work through complex legal documents, and contribute to the growth of the DNS business.

The existing accreditation process requires applicants to procure working capital of US$70,000 or more for the ICANN accreditation to become effective. Applicants are also required to obtain, and maintain for the length of the accreditation, commercial general liability insurance with a policy limit of US$500,000 or more. The working capital and insurance requirements are quite high and create a barrier to entry for underserved regions into the DNS ecosystem.

In the absence of appropriate local mechanisms, registrars resort to using US companies for insurance, creating additional foreign currency pressures for themselves. The commercial general liability insurance requirement, moreover, is not limited to their functioning as registrars, which is perhaps not the most appropriate option. ICANN should, and must, increase efforts towards helping registrars find suitable insurance providers and towards scaling down the working capital requirement. Solutions may lie in exploring variable fee structures adjusted against profits, derived after considering factors such as the cost of managing domain names and sub-domain names, expansion needs, ICANN obligations and services, the financial capacities of LDCs, and financial help pledged to disadvantaged groups or countries.
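
One way to read the "variable fee structure" suggestion is as a sliding scale tied to a registrar's region and profits. The sketch below is purely hypothetical: the region names, adjustment factors and the 2% profit surcharge are invented for illustration and are not an ICANN proposal or CIS recommendation of specific numbers.

```python
# Purely hypothetical sketch of a variable capital/fee requirement.
# The tiers and multipliers are invented for illustration only.
BASE_WORKING_CAPITAL_USD = 70_000   # the current flat requirement cited above

# Hypothetical regional adjustment factors (e.g. reflecting LDC status,
# market size and the financial capacity of the region).
REGION_FACTOR = {
    "north_america": 1.0,
    "europe": 1.0,
    "latin_america": 0.5,
    "south_asia": 0.4,
    "africa_ldc": 0.25,
}

def required_capital(region: str, annual_profit_usd: float) -> float:
    """Scale the base requirement by region, with a small surcharge on profits."""
    base = BASE_WORKING_CAPITAL_USD * REGION_FACTOR[region]
    profit_component = 0.02 * max(annual_profit_usd, 0)  # 2% of profits, hypothetical
    return base + profit_component

print(required_capital("africa_ldc", annual_profit_usd=10_000))      # 17700.0
print(required_capital("north_america", annual_profit_usd=500_000))  # 80000.0
```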

Presently, the start-up capital required is too high for developing countries, and this is reflected in the number of registries in these areas. Any effort to improve the DNS ecosystem in underserved regions must tackle this by scaling down the capital requirement in proportion to the needs of the region.

Another issue that ICANN should consider is that users obtaining sub-domain names from local registrars located in their own country are usually taxed on the transaction, whereas online registration through US registrars spares users from paying taxes in their country.[7] This could create a perverse incentive to register sub-domain names online with US registrars. ICANN should push forward efforts to ensure that registrars are sustainable, by providing incentives for registering in underserved regions and by helping to maintain a critical mass of registrants. The Business Constituency (BC), the voice of commercial Internet users within ICANN, could play a role in this, and ICANN should endeavour either to expand the BC's function or to create a separate constituency for the representation of underserved regions.


[1] Internet Users and Population stats 2012. http://www.internetworldstats.com/stats2.htm

[2] Internet Users and Population stats 2012. http://www.internetworldstats.com/stats4.htm

[3] Times of India IAMAI Report. http://timesofindia.indiatimes.com/tech/tech-news/India-to-have-243-million-internet-users-by-June-2014-IAMAI/articleshow/29563698.cms

[4] Mar/07/2013 - Registrar Stakeholder Group Negotiating Team (Registrar NT) Statement Regarding ICANN RAA Negotiations. http://www.icannregistrars.org/calendar/announcements.php

[5] Kevin Murphy, Who runs the internet? An ICANN 49 primer. http://domainincite.com/16177-who-runs-the-internet-an-icann-49-primer

[6] Stephen Ryan, Governing Cyberspace: ICANN, a Controversial Internet Standards Body http://www.fed-soc.org/publications/detail/governing-cyberspace-icann-a-controversial-internet-standards-body

[7] Open Root-Financing LDCs in the WSIS process. See: http://www.open-root.eu/about-open-root/news/financing-ldcs-in-the-wsis-process

Vodafone Report Explains Government Access to Customer Data

by Joe Sheehan last modified Jun 19, 2014 10:38 AM
Vodafone Group PLC, the world's second largest mobile carrier, released a report on Friday, June 6, 2014, disclosing the extent to which governments can request its customers' data.

The Law Enforcement Disclosure Report, a section of a larger annual Sustainability Report, begins by asserting that Vodafone "customers have a right to privacy which is enshrined in international human rights law and standards and enacted through national laws."

However, the report continues, Vodafone is incapable of fully protecting its customers' right to privacy, because it is bound by the laws of the various countries in which it operates: "If we do not comply with a lawful demand for assistance, governments can remove our license to operate, preventing us from providing services to our customers." The report goes into detail about the laws in each of the 29 nations where the company operates.

Vodafone's report is one of the first published by a multinational service provider. Compiling such a report was especially difficult, according to the report, for a few reasons. Because no comparable report had been published before, Vodafone had to figure out for itself the "complex task" of determining what information it could legally publish in each country. This difficulty was compounded by the fact that Vodafone operates physical infrastructure and thus sets up a business in each of the countries where it provides services. This means that Vodafone is subject to the laws and operating licenses of each nation where it operates, unlike a search engine such as Google, which can provide services across international borders while remaining subject to United States law, where it is incorporated.

The report is an important step forward for consumer privacy. First, the Report shows that the company is aware of the tension between government authorities and its customers, and of the pivotal role it can play in honoring the privacy of its users by providing information about government access wherever it legally can. Additionally, by providing users insight into the challenges the company faces when addressing and responding to law enforcement requests, the Report gives a brief overview of the legal qualifications that must be met in each country to access customer data. Vodafone's report has also encouraged other telecom companies to disclose similar information to the public. For instance, Deutsche Telekom AG, a large European and American telecommunications company, said Vodafone's report had led it to consider releasing a report of its own.

Direct Government Access

The report revealed that six countries had constructed secret wires or "pipes" which allow them access to customers' private data. This means that the governments of these six countries have immediate access to Vodafone's network without any due process, oversight or accountability for these opaque practices. Essentially, the report reveals, in order to operate in one of these jurisdictions, a communications company must ensure that authorities have real-time and direct access to all personal customer data at any time, without any specific justification. The report does not name these six nations, for legal reasons.

"These pipes exist, the direct access model exists,” Vodafone's group privacy officer, Stephen Deadman, told the Guardian. “We are making a call to end direct access as a means of government agencies obtaining people's communication data. Without an official warrant, there is no external visibility. If we receive a demand we can push back against the agency. The fact that a government has to issue a piece of paper is an important constraint on how powers are used."

Data Organization

Vodafone's Report lists the aggregate number of requests it received in each country where it operates, and groups these requests into two major categories. The first is lawful interception, in which the government directly listens to or reads the content of a communication. In the past, this type of action was called wiretapping, but it now includes reading the content of text messages, emails and other communications.

The second data point Vodafone provides is the number of communications data requests it receives from each country. These are requests for the metadata associated with customer communications, such as the numbers a customer has been texting and the time stamps on their texts and calls.
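
The distinction between the two categories is easiest to see as data. The sketch below is illustrative only: the field names are hypothetical and do not reflect Vodafone's, or any operator's, actual record schema.

```python
# Illustrative only: hypothetical fields, not any operator's actual schema.
# A "communications data" (metadata) request covers records like this one...
communications_data_record = {
    "calling_number": "+44XXXXXXXXXX",
    "called_number": "+91XXXXXXXXXX",
    "start_time": "2014-06-06T10:15:00Z",
    "duration_seconds": 120,
    "cell_id": "1234-5678",   # rough location of the handset
}

# ...whereas a lawful interception warrant targets the content itself.
lawful_interception_target = "the audio of the call, or the text of the message"
```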

It is worth noting that all of the numbers Vodafone reports are warrant statistics rather than target statistics. Vodafone, according to the report, has chosen to include the number of times a government sent a request to Vodafone to "intrude into the private affairs of its citizens, not the extent to which those warranted activities then range across an ever-expanding multiplicity of devices, accounts and apps."

Data Construction

However, in many cases, laws in the various countries in which Vodafone operates prohibit it from publishing all or part of the aforementioned data. In fact, this is the rule rather than the exception. The majority of countries, including India, prohibit Vodafone from releasing the number of data requests it receives. Other countries publish the numbers themselves, so Vodafone has chosen not to reprint their statistics either; this is because Vodafone wants to encourage governments to take responsibility for informing their citizens of the statistics themselves.

The report also describes the process Vodafone went through to determine the legality of publishing these statistics, which was not always straightforward. For example, in Germany, when Vodafone's legal team examined the legislation governing whether it could publish statistics on government data requests, it concluded that the laws were unclear and asked German authorities for advice on how to proceed. It was informed that publishing any such statistics would be illegal, so no German numbers were included in the report. Since then, however, other local carriers have released similar statistics, and the situation remains unresolved.

Other companies have also recently released reports. Twitter, a microblogging website, Facebook, a social networking website, and Google, a search engine with social network capabilities, have all released comparable reports, but their reports differ from Vodafone's in a number of ways. While Twitter, Google and Facebook all specified the percentage of requests granted, Vodafone released no similar statistics. However, Vodafone prepared discussions of the legal constraints that each country imposes on telecom companies, giving readers an understanding of what authorities must do in each country to access customer data, a component left out of other recent reports. Once again, Vodafone's report differs from those of Google, Facebook and Twitter because, while Vodafone opens a business in each of the countries where it operates and is subject to their laws, Google, Facebook and Twitter are all Internet companies and so are governed only by United States law.

Google disclosed that it received 27,427 requests over a six-month period ending in December 2013, and also noted that the number of requests has increased consistently in each six-month period since data began being compiled in 2009, when fewer than half as many requests were being made. On the other hand, Google said that the percentage of requests it complied with (64% over the most recent period) had declined significantly since 2010, when it complied with 76% of requests.

Google went into less detail when explaining the process non-American authorities had to go through to access data, but did note that a Mutual Legal Assistance Treaty was the primary way governments outside the United States could force the release of user data. Such a treaty is an agreement between the United States and another government to help each other with legal proceedings. However, the report indicated that Google might disclose user information in situations where it was not legally compelled to, and did not go into detail about how or when it does so. Thus, given the difficulty of obtaining assistance under a Mutual Legal Assistance Treaty in addition to local warrants or subpoenas, it seems likely that Google complies with many more non-American data requests than it is legally forced to.

Facebook has only released two such reports so far, for the two six month periods in 2013, but they too indicated an increasing number of requests, from roughly 26,000 to 28,147. Facebook plans to continue issuing reports every six months.

Twitter has also seen an increase of 22% in government requests between this and the previous reporting period, six months ago. Twitter attributes this increase in requests to an increase in users internationally, and it does seem that the website has a similarly growing user base, according to charts released by Twitter. It is worth noting that while large nations such as the United States and India are responsible for the majority of government requests, smaller nations such as Bulgaria and Ecuador also order telecom and Internet companies to turn over data.

Vodafone's Statistics

Though Vodafone's report does not print statistics for the majority of the countries it covers, the few numbers it did publish shed some light on the behavior of governments in countries where publishing such statistics is illegal. For the countries where Vodafone does release data, the number of government requests for Vodafone data was much higher than for Google data. For instance, Italy requested Vodafone data 605,601 times, while requesting Google data only 896 times. This suggests that other countries, such as India, could be looking at many more customers' data through telecom companies like Vodafone than through Internet companies like Google.

Vodafone stressed that they were not the only telecom company that was being forced to share customers’ data, sometimes without warrants. In fact, such access was the norm in countries where authorities demanded it.

India and the Reports

India is one of the most prolific requesters of data, second only to the United States in the number of requests for data from Facebook, and fourth after the United States, France and Germany in the number of requests for data from Google. In the most recent six-month period, India requested data from Google 2,513 times, Facebook 3,598 times and Twitter 19 times. The percentage of requests granted varies widely from country to country. For example, while Facebook complies with 79% of United States authorities' requests, it grants only 50% of India's requests. Google responds to 83% of US requests but only 66% of India's.

Facebook also provides data on the number of content restrictions each country requests. A content restriction request is one in which an authority asks Facebook to take down a particular status, photo, video or other web content and no longer display it on its site. India, with 4,765 requests, is the country that most often asks Facebook to remove content.

While Vodafone's report publishes no statistics on Indian data requests, because such disclosure would be illegal, it does discuss the legal considerations the company faces. In India, the report explains, several laws govern Internet communications. The Information Technology Act (ITA) of 2000 is the parent legislation governing information technology in India. The ITA allows certain members of Indian national or state governments to order the interception of a phone call or other communication in real time, for a number of reasons. According to the report, an interception can be ordered "if the official in question believes that it is necessary to do so in the: (a) interest of sovereignty and integrity of India; (b) the security of the State; (c) friendly relations with foreign states; (d) public order; or (e) the prevention of incitement of offences." In short, it is fairly easy for a high-ranking official to order a wiretap in India.

The report goes on to detail Indian authorities’ abilities to request other customer data beyond a lawful interception. The Code of Criminal Procedure allows a court or police officer to ask Vodafone and other telecom companies to produce “any document or other thing” that the officer believes is necessary for any investigation. The ITA extends this ability to any information stored in any computer, and requires service providers to extend their full assistance to the government. Thus, it is not only legally simple to order a wiretapping in India; it is also very easy for authorities to obtain customer web or communication data at any time.

It is clear that Indian laws governing communication have very few protections in place for consumer privacy. However, many in India hope to change this reality. The Group of Experts chaired by Justice A.P. Shah and the Department of Personnel and Training, along with other concerned groups, have been working towards drafting privacy legislation for India. According to the Report of the Group of Experts on Privacy, the legislation would fix the 50 or so privacy laws in India that are outdated and unable to protect citizens' privacy when they use modern technology.

On the other hand, the Indian government is moving forward with a number of plans to further infringe the privacy of civilians. For example, the Central Monitoring System, a clandestine electronic surveillance program, gives India’s security agencies and income tax officials direct access to communications data in the country. The program began in 2007 and was announced publicly in 2009 to little fanfare and muted public debate. The system became operational in 2013.

Conclusion

Vodafone's report indicates that it is concerned about protecting its customers' privacy, and its disclosure report is an important step forward for consumer web and communication privacy. The report stresses that company practice and government policy need to come together to protect citizens' privacy; businesses cannot do it alone. However, the report also reveals what companies can do to effect privacy reform. By challenging authorities' ability to access customer data, as well as publishing information about these powers, companies bring the issue to the government's attention and open it up to public debate. Through Vodafone's report, the public can see why their governments are making surveillance decisions. Yet, in India, there is still little adoption of transparent business practices such as these. Perhaps if more companies were transparent about the level of government surveillance their customers are subjected to, their practices and policies for responding to requests from law enforcement, and the laws and regulations they are subject to, the public would press the government for stronger privacy safeguards and protections.

UN Human Rights Council urged to protect human rights online

by Geetha Hariharan last modified Jun 19, 2014 01:28 PM
63 civil society groups urged the UN Human Rights Council to address global challenges to freedom of expression, privacy and other human rights on the Internet. Centre for Internet & Society joined in the statement, delivered on behalf of the 63 groups by Article 19.

The 26th session of the United Nations Human Rights Council (UNHRC) is currently ongoing (June 10-27, 2014). On June 19, 2014, 63 civil society groups joined together to urge the United Nations Human Rights Council to protect human rights online and address global challenges to their realization. The Centre for Internet & Society joined in support of the statement ("the Civil Society Statement"), which was delivered by Article 19 on behalf of the 63 groups.

In its consensus resolution A/HRC/20/8 (2012), the UNHRC affirmed that the "same rights that people have offline must also be protected online, in particular freedom of expression, which is applicable regardless of frontiers and through any media of one’s choice". India, a current member of the UNHRC, stood in support of resolution 20/8. The protection of human rights online was also a matter of popular agreement at NETmundial 2014, which similarly emphasised the importance of protecting human rights online in accordance with international human rights obligations. Moreover, the WSIS+10 High Level Event, organised by the ITU in collaboration with other UN entities, emphasized the criticality of expanding access to ICTs across the globe, including infrastructure, affordability and reach.

The Civil Society Statement at HRC26 highlights the importance of retaining the Internet as a global resource – a democratic, free and pluralistic platform. However, the recent record of freedom of expression and privacy online has resulted in a deficit of trust and of free, democratic participation. Turkey, Malaysia, Thailand, Egypt and Pakistan have blocked web pages and social media content, while Edward Snowden's revelations have heightened awareness of human rights violations on the Internet.

At a time when governance of the Internet and its institutions is evolving, a human rights centred perspective is crucial. Openness and transparency - both in the governance of Internet institutions and rights online - are crucial to continuing growth of the Internet as a global, democratic and free resource, where freedom of expression, privacy and other rights are respected regardless of location or nationality. In particular, the Civil Society Statement calls attention to principles of necessity and proportionality to regulate targeted interception and collection of personal data.

The UNHRC, comprising 47 member states, is called upon to address these global challenges. Guided by resolutions A/HRC/20/8 and A/RES/68/167, the WSIS+10 High Level Event Outcome Documents (especially operative paragraphs 2, 8 and 11 of the Vision Document) and the forthcoming report of the UN High Commissioner for Human Rights on privacy in the digital age, the UNHRC, as well as other states, may seize the opportunity to put forth a strong case for human rights online in our post-2015, development-centred world.

Civil Society Statement:

The full oral statement can be accessed here.

UNHRC Civil Society Statement (26th Session)

by Geetha Hariharan last modified Jun 19, 2014 01:24 PM
Statement endorsed by 63 civil society groups, urging the UNHRC to address challenges to human rights online.

PDF document icon A19 joint oral statement on Internet & human rights - 19 June 2014.pdf — PDF document, 33 kB (34373 bytes)

Free Speech and Source Protection for Journalists

by Gautam Bhatia — last modified Jun 19, 2014 08:10 PM
Gautam Bhatia explores journalistic source protection from the perspective of the right to freedom of speech & expression. In this post, he articulates clearly the centrality of source protection to press freedoms, and surveys the differing legal standards in the US, Europe and India.

In the previous post, we discussed Vincent Blasi’s pathological perspective on free speech. The argument forms part of a broader conception that Blasi calls the “checking value of the First Amendment”. Blasi argues that the most important role of free speech is to “check” government abuses and reveal to the public information that government wants to keep secret from them. Naturally, in this model – which is a specific application of the democracy-centred theory of free speech – the press and the media become the most important organs of a system of free expression.

In addition to the checking value of free speech, there is another consideration that is now acknowledged by Courts in most jurisdictions, including our Supreme Court. When we speak about the “right” to free speech, we do not just mean – as might seem at first glance – the right of speakers to speak unhindered. We also mean the rights of listeners and hearers to receive information. A classic example is the Indian Supreme Court's opinion in LIC v. Manubhai D. Shah, which used Article 19(1)(a) to vest a right of reply in a person who had been criticised in a newspaper editorial, on the ground of providing a balanced account to readers. Furthermore, instruments like the ICCPR and the ECHR make this clear in the text of the free speech right as well. For instance, Article 19 of the ICCPR states that “everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds.”

In addition to the individual right to receive information and ideas, free speech need not be understood exclusively in the language of a right at all. Free speech also serves as a public good – that is to say, a society with a thriving system of free expression is, all things considered, better off than a society without it. The unique value that free speech serves, as a public good, is in creating an atmosphere of accountability and openness that goes to the heart of the constitutive ideals of modern liberal democracies. As Justice Hugo Black noted, a good system of free speech rests on the assumption that “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Unsurprisingly, he went on to add immediately after that “a free press is a condition of a free society.”

If free speech is about the right to receive information, and about the public good of a society in which information circulates freely and widely, then the vehicles of information occupy a central position in any theory or doctrine about the scope of the constitutional right. In our societies, the press is perhaps the most important of those vehicles.

Establishing the crucial role of the free press in free speech theory is important to understand a crucial issue that has largely gone unaddressed in Indian constitutional and statutory law: that of source-protection laws for journalists. A source-protection law exempts journalists from having to compulsorily reveal their sources when ordered to do so by government or by courts. Such exemptions form part of ordinary Indian statutory law: under the Indian Evidence Act, for example, communications between spouses are “privileged” – that is, inadmissible as evidence in Court.

The question came up before the US Supreme Court in Branzburg v. Hayes. In a 5-4 split, the majority ruled against an unqualified reporters' privilege that could be invoked in all circumstances. However, all the justices understood the importance of the issue. Justice White, writing for the majority, held that government must “convincingly show a substantial relation between the information sought and a subject of overriding and compelling state interest.” Justice Powell's concurring opinion emphasised that the balance must be struck on a case-by-case basis. Since Branzburg, there has been no federal legislation dealing with source protection. A number of states have, however, passed “shield laws”, albeit with broad national security exceptions.

Perhaps the reason for the American Supreme Court's reticence lies in its reluctance – notwithstanding Justice Black's ringing oratory – to place journalists on any kind of special pedestal above the rest of the public. The European Court of Human Rights, however, has felt no such compunctions. In Goodwin v. UK, the ECHR made it clear that the press serves a crucial function as a “public watchdog” (a consistent theme in the ECHR's jurisprudence). Compelled disclosure of sources would have a chilling effect on the functioning of the press, since sources would be hesitant to speak – and journalists would be reluctant to jeopardise their sources – if it were easy to get a court order requiring disclosure. Consequently, the ECHR – which is normally hesitant to intervene in domestic matters and accords a wide margin of appreciation to states – found the UK to be in violation of the Convention. Journalists can only be compelled to reveal their sources if there is an “overriding requirement in the public interest.”

Where both the United States and Europe have recognised the importance of source-protection, and the simple fact that some degree of source protection is essential if the press is to perform its checking – or watchdog – function effectively, Indian jurisprudence on the issue is negligible. The Law Commission has twice proposed some manner of a shield law, but no concrete action has been taken upon its recommendations.

In the absence of any law, Article 19(1)(a) could play a direct role in the matter. As argued at the beginning of this post, the Supreme Court has accepted the democracy-based justification for free speech, as well as the individual right to receive information. Both these arguments necessarily make the role of the press crucial, and the role of the press is dependent on maintaining the confidentiality of sources. Thus, there ought to be an Article 19(1)(a) right that journalists can invoke against compelled disclosure. If this is so, then any disclosure can only be required through law; and the law, in turn, must be a reasonable restriction in the interests of public order, which has normally been given a narrow interpretation by the Supreme Court in cases such as Ram Manohar Lohia.

It is unclear, however, whether the Courts will be sympathetic. As this article points out, while the Supreme Court has yet to rule on this issue, various High Courts have ordered disclosure, seemingly without much concern for the free speech implications. One thing is evident though: either a strong shield law, or a definitive Supreme Court ruling, is required to fill the current vacuum that exists.


Gautam Bhatia — @gautambhatia88 on Twitter — is a graduate of the National Law School of India University (2011), and has just received an LLM from the Yale Law School. He blogs about the Indian Constitution at http://indconlawphil.wordpress.com. Here at CIS, he blogs on issues of online freedom of speech and expression.

WSIS+10 High Level Event: A Bird's Eye Report

by Geetha Hariharan last modified Jun 20, 2014 03:57 PM
The WSIS+10 High Level Event was organised by the ITU and collaborating UN entities on June 9-13, 2014. It aimed to evaluate progress on the implementation of WSIS Outcomes from Geneva 2003 and Tunis 2005, and to envision a post-2015 Development Agenda. Geetha Hariharan attended the event on CIS' behalf.

The World Summit on Information Society (WSIS) +10 High Level Event (HLE) was hosted at the ITU Headquarters in Geneva, from June 9-13, 2014. The HLE aimed to review the implementation and progress made on information and communication technology (ICT) across the globe, in light of WSIS outcomes (Geneva 2003 and Tunis 2005). Organised in three parallel tracks, the HLE sought to take stock of progress in ICTs in the last decade (High Level track), initiate High Level Dialogues to formulate the post-2015 development agenda, as well as host thematic workshops for participants (Forum track).

The High Level Track:

Opening Ceremony, WSIS+10 High Level Event (Source)

The High Level track opened officially on June 10, 2014, and culminated with the endorsement by acclamation (as is ITU tradition) of two Outcome Documents. These were: (1) WSIS+10 Statement on the Implementation of WSIS Outcomes, taking stock of ICT developments since the WSIS summits, (2) WSIS+10 Vision for WSIS Beyond 2015, aiming to develop a vision for the post-2015 global information society. These documents were the result of the WSIS+10 Multi-stakeholder Preparatory Platform (MPP), which involved WSIS stakeholders (governments, private sector, civil society, international organizations and relevant regional organizations).

The MPP met in six phases, convened as an open, inclusive consultation among WSIS stakeholders. It was not without its misadventures. While ITU Secretary General Dr. Hamadoun I. Touré consistently lauded the multi-stakeholder process, and Ambassador Janis Karklins urged all parties, especially governments, to “let the UN General Assembly know that the multi-stakeholder model works for Internet governance at all levels”, participants in the process shared stories of discomfort, disagreement and discord amongst stakeholders on various IG issues, not least human rights on the Internet, surveillance and privacy, and multi-stakeholderism. Richard Hill of the Association for Proper Internet Governance (APIG) and the Just Net Coalition writes that, like NETmundial, the MPP was rich in a diversity of views and knowledge exchange, but stakeholders failed to reach consensus on crucial issues. Indeed, Prof. Vladimir Minkin, Chairman of the MPP, expressed his dismay at the lack of consensus over Action Line C9; a compromise on C9 was agreed upon later.

Some members of civil society expressed their satisfaction with the extensive references to human rights and rights-centred development in the Outcome Documents. While governmental opposition was seen as frustrating, they felt that the MPP had sought and achieved a common understanding, a sentiment echoed by the ITU Secretary General. Indeed, even Iran, a state that had expressed major reservations during the MPP and felt itself unable to agree with the text, agreed that the MPP had worked hard to draft a document beneficial to all.

Concerns around the MPP did not affect the review of ICT developments over the last decade. High Level Panels with Ministers of ICT from states such as Uganda, Bangladesh, Sweden, Nigeria, Saudi Arabia and others, heads of the UN Development Programme, UNCTAD, Food and Agriculture Organisation, UN-WOMEN and others spoke at length of rapid advances in ICTs. The focus was largely on ICT access and affordability in developing states. John E. Davies of Intel repeatedly drew attention to innovative uses of ICTs in Africa and Asia, which have helped bridge divides of affordability, gender, education and capacity-building. Public-private partnerships were the best solution, he said, to affordability and access. At a ceremony evaluating implementation of WSIS action-lines, the Centre for Development of Advanced Computing (C-DAC), India, won an award for its e-health application MOTHER.

The Outcome Documents themselves shall be analysed in a separate post. But in sum, the dialogue around Internet governance at the HLE centred around the success of the MPP. Most participants on panels and in the audience felt this was a crucial achievement within the realm of the UN, where the Tunis Summit had delineated strict roles for stakeholders in paragraph 35 of the Tunis Agenda. Indeed, there was palpable relief in Conference Room 1 at the CICG, Geneva, when on June 11, Dr. Touré announced that the Outcome Documents would be adopted without a vote, in keeping with ITU tradition, even if consensus was achieved by compromise.

The High Level Dialogues:

Prof. Vladimir Minkin delivers a statement. (Source)

The High Level Dialogues on developing a post-2015 Development Agenda, based on WSIS action lines, were active on June 12. Introducing the Dialogue, Dr. Touré lamented the Millennium Development Goals as a “lost opportunity”, emphasizing the need to alert the UN General Assembly and its committees as to the importance of ICTs for development.

As on previous panels, there was intense focus on access, affordability and reach in developing countries, with Rwanda and Bangladesh expounding upon their successes in implementing ICT innovations domestically. The world is more connected than it was in 2005, and the ITU in 2014 is no longer what it was in 2003, said speakers. But we lack data on ICT deployment across the globe, said Minister Knutssen of Sweden, reminding the gathering of the need to engage all stakeholders in this task. Speakers on multiple panels, including the Rwandan Minister for ICT, Marilyn Cade of ICANN and Petra Lantz of the UNDP, emphasized the need for 'smart engagement' and capacity-building for ICT development and deployment.

A crucial session on cybersecurity saw Dr. Touré envision a global peace treaty accommodating multiple stakeholders. On the panel were Minister Omobola Johnson of Nigeria, Prof. Udo Helmbrecht of the European Union Agency for Network and Information Security (ENISA), Prof. A.A. Wahab of Cybersecurity Malaysia and Simon Muller of Facebook. The focus was primarily on building laws and regulations for secure communication and business, while child protection was equally considered.

The lack of laws/regulations for cybersecurity (child pornography and jurisdictional issues, for instance), or other legal protections (privacy, data protection, freedom of speech) in rapidly connecting developing states was noted. But the question of cross-border surveillance and wanton violations of privacy went unaddressed except for the customary, unavoidable mention. This was expected. Debates in Internet governance have, in the past year, been silently and invisibly driven by the Snowden revelations. So too, at WSIS+10 Cybersecurity, speakers emphasized open data, information exchange, data ownership and control (the right to be forgotten), but did not openly address surveillance. Indeed, Simon Muller of Facebook called upon governments to publish their own transparency reports: A laudable suggestion, even accounting for Facebook’s own undetailed and truncated reports.

In a nutshell, the post-2015 Development Agenda dialogues repeatedly emphasized the importance of ICTs in global connectivity, and their impact on GDP growth and socio-cultural change and progress. The focus was on taking this message to the UN General Assembly, engaging all stakeholders and creating an achievable set of action lines post-2015.

The Forum Track:

Participants at the UNESCO session on its Comprehensive Study on Internet-related Issues (Source)

The HLE was organized as an extended version of the WSIS Forum, which hosts thematic workshops and networking opportunities, much like any other conference. Running in parallel sessions over 5 days, the WSIS Forum hosted sessions by the ITU, UNESCO, UNDP, ICANN, ISOC, APIG, etc., on issues as diverse as the WSIS Action Lines, the future of Internet governance, the successes and failures of WCIT-2012, UNESCO’s Comprehensive Study on Internet-related Issues, spam and a taxonomy of Internet governance.

Detailed explanation of each session I attended is beyond the scope of this report, so I will limit myself to the interesting issues raised.

At ICANN’s session on its own future (June 9), Ms. Marilyn Cade emphasized the importance of national and regional IGFs for both issue-awareness and capacity-building. Mr. Nigel Hickson spoke of engagement at multiple Internet governance fora: “Internet governance is not shaped by individual events”. In light of criticism of ICANN’s apparent monopoly over IANA stewardship transition, this has been ICANN’s continual response (often repeated at the HLE itself). Also widely discussed was the role of stakeholders in Internet governance, given the delineation of roles and responsibilities in the Tunis Agenda, and governments’ preference for policy-monopoly (At WSIS+10, Indian Ambassador Dilip Sinha seemed wistful that multilateralism is a “distant dream”).

This discussion bore greater fruit in a session on Internet governance ‘taxonomy’. The session saw Mr. George Sadowsky, Dr. Jovan Kurbalija, Mr. William Drake and Mr. Eliot Lear (there is surprisingly no official profile-page on Mr. Lear) expound on dense structures of Internet governance, involving multiple methods of classification of Internet infrastructure, CIRs, public policy issues, etc. across a spectrum of ‘baskets’ – socio-cultural, economic, legal, technical. Such studies, though each attempting clarity in Internet governance studies, indicate that the closer you get to IG, the more diverse and interconnected the eco-system gets. David Souter’s diagrams almost capture the flux of dynamic debate in this area (please see pages 9 and 22 of this ISOC study).

There were, for the most part, insightful interventions from session participants. Mr. Sadowsky questioned the effectiveness of the Tunis Agenda's delineation of stakeholder roles, while Mr. Lear pleaded that techies be allowed to do their jobs without interference. Ms. Anja Kovacs raised pertinent concerns about including voiceless minorities in a 'rough consensus' model. Across sessions, questions of mass surveillance, privacy and data ownership rose from participants. The protection of human rights on the Internet – especially freedom of expression and privacy – made continual appearances, across issues like spam (Question 22-1/1 of ITU-D Study Group 1) and cybersecurity.

Conclusion:

The HLE was widely attended by participants across WSIS stakeholder groups. At the event, a great many relevant questions, such as the future of ICTs, inclusions in the post-2015 Development Agenda, the value of multi-stakeholder models, and human rights such as free speech and privacy, were raised across the board. Not only were these raised, but cognizance was taken of them by Ministers, members of the ITU and other collaborating UN bodies, private sector entities such as ICANN, the technical community, such as ISOC and the IETF, as well as (obviously) civil society.

Substantively, the HLE did not address mass surveillance and privacy, nor the expanding roles of WSIS stakeholders and beyond. Procedurally, the MPP failed to reach consensus comfortably on several issues, and a compromise had to be brokered.

But perhaps the big change at the HLE was the positive attitude to multi-stakeholder models from many quarters, not least ITU Secretary General Dr. Hamadoun Touré. His repeated calls for acceptance of multi-stakeholderism left many members of civil society surprised and tentatively pleased. Going forward, it will be interesting to track the ITU's, the rest of the UN's and, of course, member states' stances on multi-stakeholderism at the ITU Plenipot, the WSIS+10 Review and the UN General Assembly session, at the least.

Forum Track

by Geetha Hariharan last modified Jun 20, 2014 01:18 PM
UNESCO Session on Comprehensive Study on Internet-related Issues (June 11, 2014)
Forum Track
Full-size image: 65.8 KB

High Level Track

by Geetha Hariharan last modified Jun 20, 2014 01:20 PM
Opening Ceremony, WSIS+10 High Level Event
High Level Track
Full-size image: 48.4 KB

High Level Dialogues

by Geetha Hariharan last modified Jun 20, 2014 01:23 PM
Prof. Vladimir Minkin delivers a statement.
High Level Dialogues
Full-size image: 28.1 KB

IANA Transition - Descriptive Brief

by Geetha Hariharan last modified Jun 22, 2014 03:32 AM
A brief describing the IANA transition process so far, and outlining the Indian government's views on the same.

PDF document icon CIS - IANA Transition - Descriptive Brief - FINAL (1).pdf — PDF document, 486 kB (497879 bytes)

NTIA Announcement

by Geetha Hariharan last modified Jun 22, 2014 03:11 AM
IANA Oversight Mechanism
NTIA Announcement
Full-size image: 198.6 KB

Understanding IANA Stewardship Transition

by Smarika Kumar — last modified Jun 22, 2014 03:23 AM
Smarika Kumar describes the process of the IANA stewardship transition, and enumerates what the NTIA announcement does and does not do.

NTIA Announcement and ICANN-convened Processes:

On 14 March 2014, the National Telecommunications and Information Administration (NTIA) of the US Government announced “its intent to transition key Internet domain name functions to the global multistakeholder community”. These key Internet domain name functions refer to the Internet Assigned Numbers Authority (IANA) functions. For this purpose, the NTIA asked the Internet Corporation for Assigned Names and Numbers (ICANN) to “convene global stakeholders to develop a proposal to transition the current role played by NTIA in the coordination of the Internet’s domain name system (DNS)”. This was welcome news for the global Internet community, which has been criticising unilateral US Government oversight of Critical Internet Resources for many years now. The NTIA further announced that the IANA transition proposal must have broad community support and should address the following four principles:

  • Support and enhance the multistakeholder model;
  • Maintain the security, stability, and resiliency of the Internet DNS;
  • Meet the needs and expectation of the global customers and partners of the IANA services; and
  • Maintain the openness of the Internet.

Subsequently, during ICANN49 in Singapore (March 23-27, 2014), ICANN held a flurry of discussions to gather initial community feedback from participants, and came up with a Draft Proposal of the Principles, Mechanisms and Process to Develop a Proposal to Transition NTIA's Stewardship of the IANA Functions on 8 April 2014. The draft was open to public comments until 8 May 2014, a deadline later extended to 31 May 2014. Responses from various stakeholders were collected in this very short period, and some of them were incorporated into a Revised Proposal issued by ICANN on 6 June 2014. ICANN also unilaterally issued a Scoping Document defining the scope of the process for developing the proposal, and specifying what was not part of that scope. The Scoping Document came under severe criticism from various commentators, but was not amended.

ICANN also initiated a separate but parallel process to discuss enhancement of its accountability on 6 May 2014. This was launched upon widespread distress over the fact that ICANN had excluded its role as operator of the IANA functions from the Scoping Document, as well as over questions of accountability raised by the community at ICANN49 in Singapore. In the absence of ICANN’s contractual relationship with NTIA to operate the IANA functions, it remains unclear how ICANN will stay accountable upon the transition. The accountability process looks to address this through the ICANN community. The issue of ICANN accountability is thus envisioned to be coordinated within ICANN itself, through an ICANN Accountability Working Group comprising community members and a few subject matter experts.

What are the IANA Functions?

The Internet Assigned Numbers Authority (IANA) functions consist of three separate tasks:

  1. Maintaining a central repository for protocol name and number registries used in many Internet protocols.
  2. Co-ordinating the allocation of Internet Protocol (IP) and Autonomous System (AS) numbers to the Regional Internet Registries, which then distribute IP and AS numbers to ISPs and others within their geographic regions.
  3. Processing root zone change requests for Top Level Domains (TLDs) and maintaining the Root Zone WHOIS database, which consists of publicly available information for all TLD registry operators.

The first two of the abovementioned functions are operated by ICANN in consonance with policy developed at the Internet Engineering Task Force (IETF) and the Address Supporting Organisation (ASO) respectively; the ASO operates within the ICANN structure, while the IETF is an independent standards body.

The performance of the last of these functions is distributed between ICANN and Verisign. NTIA has a Cooperative Agreement with Verisign to perform the related root zone management functions. These root zone management functions are the management of the root zone “zone signing key” (ZSK), as well as implementation of changes to and distribution of the DNS authoritative root zone file, which is the authoritative registry containing the list of names and addresses for all top level domains.
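To make the root zone’s role concrete, here is a minimal sketch (not part of the brief or of the NTIA/ICANN documents) that queries a root server directly and prints the delegation the root zone holds for one top level domain. It assumes the third-party dnspython library is installed; the server address used is the published address of a.root-servers.net, and “.in” is chosen purely as an example.

    # Minimal illustration of what the authoritative root zone provides:
    # the name-server delegation (NS records) for each top level domain.
    # Assumes "dnspython" is installed (pip install dnspython).
    import dns.message
    import dns.query

    ROOT_SERVER = "198.41.0.4"   # a.root-servers.net, one of the thirteen root server identities
    TLD = "in."                  # example: India's country-code TLD

    # Ask the root server which name servers are authoritative for the TLD.
    query = dns.message.make_query(TLD, "NS")
    response = dns.query.udp(query, ROOT_SERVER, timeout=5)

    # A referral from the root carries the delegation in the authority section;
    # print whatever record sets the response contains.
    for rrset in response.authority + response.answer:
        print(rrset)

The output mirrors the entries published for that TLD in the root zone file described above.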

Currently, the US Government oversees this entire set of operations by contracting with ICANN as well as Verisign to execute the IANA functions. Though the US Government generally does not interfere in the operations of either ICANN or Verisign in their roles as operators of the IANA functions, it exercises oversight over both operators through these contracts.

Import of the NTIA Announcement:

The NTIA announcement of 14 March intends to initiate the withdrawal of NTIA oversight of the IANA functions, in order to move towards global multistakeholder governance. NTIA has asked ICANN to initiate a process to decide what such global multistakeholder governance of the IANA functions may look like. The following diagram presents the current governance structure of the IANA functions and the areas that the NTIA announcement seeks to change:

NTIA Announcement

The IANA Oversight Mechanism (Source)

What does the NTIA Announcement NOT DO?

The NTIA announcement DOES NOT frame a model for governance of the IANA functions once it withdraws its oversight role. NTIA has asked ICANN to convene a process that would work out the details of the IANA transition and propose an administrative structure for the IANA functions once the NTIA withdraws its oversight role. But what this new administrative structure would look like has not itself been addressed in the NTIA announcement. As per the announcement, the new administrative structure is yet to be decided by the global multistakeholder community, in accordance with the four principles outlined by the NTIA, through a process which ICANN shall convene.

The NTIA announcement DOES NOT limit discussions and participation in the IANA transition process to the ICANN community. NTIA has asked ICANN to convene “global stakeholders to develop a proposal to transition” the IANA functions. This means the participation of all global stakeholders, including governments and civil society, is sought for the IANA transition process. In the announcement, ICANN has been asked “to work collaboratively with the directly affected parties, including the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Society (ISOC), the Regional Internet Registries (RIRs), top level domain name operators, VeriSign, and other interested global stakeholders”. This, however, does not signify that discussions and participation in the development of a proposal for IANA transition need to be limited to the ICANN community or the technical community. In fact, ICANN has itself said that the list of events provided as the “Timeline of Events” in its Draft Proposal of 8 April 2014, for engagement in the development of a proposal for IANA transition, is non-exhaustive. This means a proposal for IANA transition can be developed by different stakeholders, including governments and civil society, in fora appropriate to their working, including at the IGF and WSIS+10.

The NTIA announcement DOES NOT mean devolution of IANA functions administration upon ICANN. NTIA chooses ICANN and Verisign to operate the IANA functions. If NTIA withdraws from its role, the question whether ICANN or Verisign should operate the IANA functions at all becomes an open one, and should be subject to deliberation. By merely asking ICANN to convene the process, the NTIA announcement in no way assigns any administration of IANA functions to ICANN. It must be remembered that the NTIA announcement says that key Internet domain name functions shall transition to the global multistakeholder community, and not the ICANN community.

The NTIA announcement DOES NOT prevent the possibility of removal of ICANN from its role as operator of the IANA functions. While ICANN has tried to frame the Scoping Document in language that prevents any discussion of its role as operator of the IANA functions, the question whether ICANN should continue in its operator role remains an open one. At least 12 submissions made in response to ICANN’s Draft Proposal by varied stakeholders in fact call for the separation of ICANN’s role as policy maker (through the IETF, ASO, gNSO and ccNSO) from ICANN’s role as the operator of the IANA functions. Such calls for separation come from the private sector, civil society and the technical community, among others. Such separation was also endorsed in the final NETmundial outcome document (paragraph 27). Governments have, in general, expressed no opinion on such separation in response to ICANN’s Draft Proposal. It is, however, urged that governments express their opinion in favour of such separation to prevent the consolidation of both policy making and implementation within ICANN, which would increase the potential for the ICANN Board to abuse its powers.


Smarika Kumar is a graduate of the National Law Institute University, Bhopal, and a member of the Alternative Law Forum, a collective of lawyers aiming to integrate alternative lawyering with critical research, alternative dispute resolution, pedagogic interventions and sustained legal interventions in social issues. Her areas of interest include interdisciplinary research on the Internet, issues affecting indigenous peoples, eminent domain, traditional knowledge and pedagogy.

CIS Policy Brief: IANA Transition Fundamentals & Suggestions for Process Design

by Geetha Hariharan last modified Jul 08, 2014 08:39 AM
In March 2014, the US government announced that it would transfer oversight of IANA functions to an as-yet-indeterminate global multi-stakeholder body. This policy brief, written by Smarika Kumar and Geetha Hariharan, explains the process concisely.

Short Introduction:

In March 2014, the National Telecommunications and Information Administration (NTIA) announced its intention to transition key Internet domain name functions to the global multi-stakeholder community. Currently, the NTIA oversees coordination and implementation of IANA functions through contractual arrangements with ICANN and Verisign, Inc.

The NTIA will not accept a government-led or inter-governmental organization as steward of the IANA functions. It requires the IANA transition proposal to have broad community support, and to be in line with the following principles: (1) support and enhance the multi-stakeholder model; (2) maintain the security, stability, and resiliency of the Internet DNS; (3) meet the needs and expectation of the global customers & partners of IANA services; (4) maintain the openness of the Internet.

ICANN was charged with developing a proposal for IANA transition. It initiated a call for public input in April 2014. Lamentably, the scoping document for the transition did not include questions of ICANN’s own accountability and interests in IANA stewardship, including whether it should continue to coordinate the IANA functions. Public Input received in May 2014 revolved around the composition of a Coordination Group, which would oversee IANA transition. Now, ICANN will hold an open session on June 26, 2014 at ICANN-50 to gather community feedback on issues relating to IANA transition, including composition of the Coordination Group.

CIS Policy Brief:

CIS' Brief on IANA Transition Fundamentals explains the process further, and throws light on the Indian government's views. To read the brief, please go here.

Suggestions for Process Design

As convenor of the IANA stewardship transition, ICANN has sought public comments on issues relating to the transition process. We suggest certain principles for open, inclusive and transparent process design, summarised below.

Short Introduction:

In March 2014, the US government, through the National Telecommunications and Information Administration (NTIA), announced its intention to transition key Internet domain name functions (IANA) to the global multi-stakeholder community. The NTIA announcement states that it will not accept a government-led or intergovernmental organization solution to replace its own oversight of IANA functions. The Internet Corporation for Assigned Names and Numbers (ICANN) was charged with developing a Proposal for the transition.

At ICANN-49 in Singapore (March 2014), ICANN rapidly gathered inputs from its community to develop a draft proposal for IANA transition. It then issued a call for public input on the Draft Proposal in April 2014. Some responses were incorporated to create a Revised Proposal, published on June 6, 2014.

Responses had called for transparent composition of an IANA transition Coordination Group, a group comprising representatives of ICANN’s Advisory Committees and Supporting Organizations, as well as Internet governance organizations such as the IAB, IETF and ISOC. Also, ICANN was asked to have a neutral, facilitative role in IANA transition. This is because, as the current IANA functions operator, it has a vested interest in the transition. Tellingly, ICANN’s scoping document for IANA transition did not include questions of its own role as IANA functions operator.

ICANN is currently deliberating the process to develop a Proposal for IANA transition. At ICANN-50, ICANN will hold a governmental high-level meeting and a public discussion on IANA transition, where comments and concerns can be voiced. In addition, discussion in other Internet governance fora is encouraged.

CIS Policy Brief:

CIS' Brief on IANA Transition Principles explains our recommendations for transition process-design. To read the brief, please go here.

IANA Transition: Suggestions for Process Design

by Smarika Kumar — last modified Jun 22, 2014 09:15 AM
With analysis of community-input and ICANN processes, Smarika Kumar offers concrete suggestions for process design. She urges the Indian government to take a stronger position in matters of IANA transition.

Introduction:

On 14 March 2014, the NTIA of the US Government announced its intention to transition key Internet domain name functions to the global multistakeholder community. These key Internet domain name functions comprise functions executed by the Internet Assigned Numbers Authority (IANA), which is currently contracted to ICANN by the US Government. The US Government specified that the IANA transition proposal must have broad community support and should address the following four principles:

  1. Support and enhance the multistakeholder model;
  2. Maintain the security, stability, and resiliency of the Internet DNS;
  3. Meet the needs and expectation of the global customers and partners of the IANA services; and
  4. Maintain the openness of the Internet.

Additionally, the US Government asked ICANN to convene a multistakeholder process to develop the transition plan for IANA. In April 2014, ICANN issued a Scoping Document for this process, which outlined the scope of the process as well as what, in ICANN’s view, should not be a part of it. In the spirit of ensuring broad community consensus, ICANN issued a Call for Public Input on the Draft Proposal of the Principles, Mechanisms and Process to Develop a Proposal to Transition NTIA’s Stewardship of IANA Functions on 8 April 2014, upon which the Government of India made its submission.

ICANN is currently deliberating the process for the development of a proposal for transition of the IANA functions from the US Government to the global multistakeholder community, a step which would have implications for Internet users all over the world, including India. The outcome of this process will be a proposal for IANA transition. The Scoping Document and the process for development of the proposal are extremely limited and exclusionary, hurried, and work in ways which could potentially further ICANN’s own interests instead of global public interests. Accordingly, the Government of India is recommended to take a stand on the following key points concerning the suggested process.

Submissions by the Government of India thus far have, however, failed to comment on the process being initiated by ICANN to develop a proposal for IANA transition. While the actual outcome of the process, in the form of a proposal for transition, is an important issue for deliberation, we hold that it is of immediate importance that the Government of India, along with all governments of the world, pay particular attention to the way ICANN is conducting the process itself. Scrutiny of this process is of immense significance in order to ensure that the democratic and representative principles sought by the GoI in Internet governance are upheld within the process of developing the IANA transition proposal. How the governance of the IANA functions will be structured will be an outcome of this process. Therefore, if one expects democratic, representative and transparent governance of the IANA functions as the outcome, it is absolutely essential to ensure that the process itself is democratic, representative and transparent.

Issues and Recommendations:

Ensuring adequate representation and democracy of all stakeholders in the process for developing the proposal for IANA transition is essential to ensuring representative and democratic outcomes. Accordingly, one must take note of the following issues and recommendations concerning the process.

Open, inclusive deliberation by global stakeholders must define the Scope of the Process for developing the proposal for IANA transition:

The current Scoping Document was issued by ICANN to outline the scope of the process by which the proposal for IANA transition would be deliberated. The Scoping Document was framed unilaterally by ICANN, without involvement of the global stakeholder community, and excluding all governments of the world, including the USA. Although this concern was voiced by a number of submissions to the Public Call by ICANN on the Draft Proposal, it was not reflected in ICANN’s Revised Proposal of 6 June 2014, which merely states that the Scoping Document outlines the “focus of this process.” Such a statement is not enough, because the focus as well as the scope of the process needs to be decided in a democratic, representative and transparent manner by the global stakeholder community, including all governments.

This unilateral approach to outlining which aspects of IANA transition should be allowed for discussion, and which should not, itself defeats the multistakeholder principle on which ICANN and the US Government claim the process is based. Additionally, the global community consensus which the US Government hopes will be the outcome of such a process is inconceivable when the scope of the process is decided in a unilateral and undemocratic manner. Accordingly, the current Scoping Document should be treated only as a draft, and should be made open to public comment and discussion by the global stakeholder community, so that the scope of the process reflects the concerns of global stakeholders, and not just of ICANN or the US Government.

Accountability of ICANN must be linked to IANA Transition within Scope of the Process:

ICANN Accountability must not run merely as a parallel process, since ICANN accountability has direct impact on IANA transition. The current Scoping Document states, “NTIA exercises no operational role in the performance of the IANA functions. Therefore, ICANN’s role as the operator of the IANA functions is not the focus of the transition: it is paramount to maintain the security, stability, and resiliency of the DNS, and uninterrupted service to the affected parties.” However this rationale to exclude ICANN’s role as operator of IANA from the scope of the process is not sound because NTIA does choose to appoint ICANN as the operator of IANA functions, thereby playing a vicarious operational role in the performance of IANA functions.

The explicit exclusion of ICANN’s role as operator of the IANA functions from the scope of the process works to serve ICANN’s own interests by preventing discussion of alternate models in which ICANN does not play the operator role. In effect, this presumes that in the absence of NTIA stewardship ICANN will control the IANA functions. Such a presumption raises disturbing questions regarding ICANN’s accountability as the IANA functions operator. If discussions on ICANN’s role as operator of the IANA functions are to be excluded from the process of developing the proposal for IANA transition, this also implies the exclusion of discussions regarding ICANN’s accountability as operator of these functions.

Although ICANN announced a process to enhance its accountability on 6 May 2014, this was designed as a separate, parallel process, de-linked from the IANA transition process. As shown above, ICANN’s accountability, its role as convenor of the IANA transition process, and its role as current and/or potential future operator of the IANA functions are intrinsically linked, and must not be discussed in separate but parallel processes. It is recommended that ICANN accountability in the absence of NTIA stewardship, and ICANN’s role as the operator of the IANA functions, be included within the Scoping Document as part of the scope of the IANA transition process. This is to ensure that no IANA transition is executed without securing ICANN’s accountability if and as it continues as the operator of the IANA functions, so that democracy and transparency are brought to the governance of the IANA functions.

Misuse or appearance of misuse of its convenor role by ICANN to influence the outcome of the Process must not be allowed:

ICANN has been designated the convenor role by the US Government on the basis of its unique position as the current IANA functions contractor and the global co-ordinator for the DNS. However, it is this unique position itself which creates a potential for abuse of the process by ICANN. As the current contractor of the IANA functions, ICANN has an interest in the outcome of the process being conducive to ICANN; in other words, ICANN is prima facie an interested party in the IANA transition process, and may tend to steer the process towards an outcome favourable to itself. ICANN has already been attempting to set the scope of the process unilaterally, thus abusing its position as convenor. ICANN has also been trying to separate the discussions on IANA transition and its own accountability by running them as parallel processes, as well as attempting to prevent questions on its role as operator of the IANA functions by excluding it from the Scoping Document. Such instances provide a strong rationale for defining the limitations of the role of ICANN as convenor.

Although ICANN’s Revised Proposal of 6 June 2014, stating that ICANN will have a neutral role and that the Secretariat will be independent of ICANN staff, is welcome, additional safeguards need to be put in place to avoid conflicts of interest or the appearance of conflicts of interest. The Revised Proposal itself was issued unilaterally: ICANN incorporated some of the comments made on its Draft Proposal into the revised draft, but excluded others without providing a rationale. For instance, comments regarding inclusion of ICANN’s role as the operator of the IANA functions within the Scoping Document were ignored by ICANN in its Revised Proposal.

It is accordingly suggested that ICANN should limit its role to merely facilitating discussions and not extend it to reviewing or commenting on emerging proposals from the process. ICANN should further not compile comments on drafts to create a revised draft at any stage of the process. Additionally, ICANN staff must not be allowed to be a part of any group or committee which facilitates or co-ordinates the discussion regarding IANA transition.

Components of Diversity Principle should be clearly enunciated in the Draft Proposal:

The Diversity Principle was included by ICANN in the Revised Proposal of 6 June 2014 subsequent to submissions by various stakeholders who raised concerns regarding developing world participation, representation and lack of multilingualism in the process. This is laudable. However, past experience with ICANN processes has shown that many representatives from developing countries as well as from stakeholder communities outside of the ICANN community are unable to productively involve themselves in such processes because of lack of multilingualism or unfamiliarity with its way of functioning. This often results in undemocratic, unrepresentative and non-transparent decision-making in such processes.

In such a scenario, merely mentioning diversity as a principle is not adequate to ensure abundant participation by developing countries and non-ICANN community stakeholders in the process. Concrete mechanisms need to be devised to include adequate and fair geographical, gender, multilingual and developing-country participation and representation at all levels, so that the process is not dominated by North American or European entities alone. Accordingly, all discussions in the process should be translated in situ into the multiple native languages of participants, so that everyone participating in the process can understand what is going on. Adequate time must be given for the issues under discussion to be translated and circulated widely amongst all stakeholders of the world before a decision is taken or a proposal is framed. To concretise its diversity principle, ICANN should also set aside funds and develop a programme, with community support, for capacity building for stakeholders in developing nations to ensure their fruitful involvement in the process.

The Co-ordination Group must be made representative of the global multistakeholder community:

Currently, the Co-ordination Group includes representatives from the ALAC, ASO, ccNSO, GNSO, gTLD registries, GAC, ICC/BASIS, IAB, IETF, ISOC, NRO, RSSAC and SSAC. Most of these representatives belong to the ICANN community, and the Group is not representative of the global multistakeholder community, including governments. This falls short even of the multistakeholder model which the US Government has announced for the transition, and of the spirit of multistakeholder participation at NETmundial.

It is recommended that the Co-ordination Group be made democratic and representative, so as to include the larger global stakeholder community, including governments, civil society and academia, with suitably diverse representation across geography, gender and developing nations. An adequate number of seats on the Group must be granted to each stakeholder group so that they can each co-ordinate discussions within their own communities and ensure wider and more inclusive participation.

Framing of the Proposal must allow adequate time:

All stakeholder communities must be permitted adequate time to discuss and develop consensus. Different stakeholder communities have different processes of engagement within their communities, and may take longer to reach a consensus than others. If democracy and inclusiveness are to be respected, then each stakeholder must be allowed enough time to reach a consensus within its own community, unlike the short time given to comment on the Draft Proposal. The process must not be rushed to benefit a few.


Smarika Kumar is a graduate of the National Law Institute University, Bhopal, and a member of the Alternative Law Forum, a collective of lawyers aiming to integrate alternative lawyering with critical research, alternative dispute resolution, pedagogic interventions and sustained legal interventions in social issues. Her areas of interest include interdisciplinary research on the Internet, issues affecting indigenous peoples, eminent domain, traditional knowledge and pedagogy.

IANA Transition Recommendatory Brief

by Geetha Hariharan last modified Jun 22, 2014 09:21 AM
Policy brief with recommendations for process-design principles for IANA transition

PDF document icon *CIS - IANA Recommendatory Brief.pdf — PDF document, 497 kB (509647 bytes)

FOEX Live: June 16-23, 2014

by Geetha Hariharan last modified Jun 24, 2014 10:23 AM
A weekly selection of news on online freedom of expression and digital technology from across India (and some parts of the world).

A quick and non-exhaustive perusal of this week’s content shows that many people are worried about the state of India’s free speech following police action on account of posts derogatory to or critical of the Prime Minister. Lawyers, journalists, former civil servants and other experts have joined in expressing this worry.

While a crackdown on freedom of expression would indeed be catastrophic and possibly unconstitutional, fears are so far based on police action in only 4 recent cases: Syed Waqar in Karnataka, Devu Chodankar in Goa and two cases in Kerala where college students and principals were arrested for derogatory references to Modi. Related developments in Pune, such as the murder of a young Muslim man on his way home from prayer and the creation of a Social Peace Force of citizens to police offensive Facebook content, perhaps ought to be explored more carefully and deeply.

Kerala:

In the Assembly, State Home Minister Ramesh Chennithala said that the State government did not approve of the registration of cases against students on grounds of anti-Modi publications. The Minister said denunciation of political opponents through cartoons and write-ups was common practice in Kerala, and “booking the authors for this was not the state government’s policy”.

Maharashtra:

Nearly 20,000 people have joined the Social Peace Force, a Facebook group that aims to police offensive content on the social networking site. The group owner’s stated aim is to target religious posts that may provoke riots, not political ones. Subjective determinations of what qualifies as ‘offensive content’ remain a troubling issue.

Tamil Nadu:

In Chennai, 101 people, including filmmakers, writers, civil servants and activists, have signed a petition requesting Chief Minister J. Jayalalithaa to permit safe screening of the Indo-Sri Lankan film “With You, Without You”. The petition comes after theatres cancelled shows of the film following threatening calls from some Tamil groups.

Telangana:

The K. Chandrasekhar Rao government has blocked two Telugu news channels for airing content that was “derogatory, highly objectionable and in bad taste”.

The Telangana government’s decision to block news channels has its supporters. Padmaja Shaw considers the mainstream Andhra media contemptuous and disrespectful of “all things Telangana”, while Madabushi Sridhar concludes that Telugu channel TV9’s coverage violates the dignity of the legislature.

West Bengal:

Arrests over seemingly anti-Modi posts have led to worry among citizens about speaking freely on the Internet. Section 66A poses a particular threat.

News & Opinion:

The Department of Telecom is preparing a draft of the National Telecom Policy, in which it plans to treat broadband Internet as a basic right. The Policy, which will include deliberations on affordable broadband access for end users, will be finalised in 100 days.

While addressing a CII CEO’s Roundtable on Media and Industry, Information and Broadcasting Minister Prakash Javadekar promised a transparent and stable policy regime, operating on a time-bound basis. He promised that efforts would be streamlined to ensure speedy and transparent clearances.

A perceived increase in police action against anti-Modi publications or statements has many people worried. But the Prime Minister himself was once a fierce proponent of dissent; in protest against the then-UPA government’s blocking of webpages, Modi changed his display picture to black.

Medianama wonders whether the Mumbai police’s Cyber Lab and helpline to monitor offensive content on the Internet is actually a good idea.

G. Sampath wonders why critics of the Prime Minister Narendra Modi can’t voluntarily refrain from exercising their freedom of speech, and allow India to be an all-agreeable development haven. Readers may find his sarcasm subtle and hard to catch.

Experts in India mull over whether Section 79 of the Information Technology Act, 2000, carries a loophole enabling users to exercise a ‘right to be forgotten’. Some say Section 79 does not prohibit user requests to be forgotten, while others find it unsettling to provide private intermediaries such powers of censorship.

Some parts of the world:

Sri Lanka has banned public meetings or rallies intended to promote religious hatred.

In Pakistan, Twitter has restored accounts and tweets that were taken down last month on allegations of being blasphemous or ‘unethical’.

In Myanmar, an anti-hate speech network has been proposed throughout the country to raise awareness and opposition to hate speech and violence.


For feedback, comments and any incidents of online free speech violation you are troubled or intrigued by, please email Geetha at geetha[at]cis-india.org or on Twitter at @covertlight.

Free Speech and Civil Defamation

by Gautam Bhatia last modified Jul 08, 2014 08:31 AM
Does defamation become a tool in powerful hands to suppress criticism? Gautam Bhatia examines the strict and unrealistic demands of defamation law, and concludes that defamation suits are a weapon to silence dissent and bad press.

Previously on this blog, we have discussed one of the under-analysed aspects of Article 19(2) – contempt of court. In the last post, we discussed the checking – or “watchdog” – function of the press. There is yet another under-analysed part of 19(2) that we now turn to – one which directly implicates the press, in its role as public watchdog. This is the issue of defamation.

Unlike contempt of court – which was a last-minute insertion by Ambedkar, before the second reading of the draft Constitution in the Assembly – defamation was present in the restrictions clause since the Fundamental Rights Sub-Committee’s first draft, in 1947. Originally, it accompanied libel and slander, before the other two were dropped for the simpler “reasonable restrictions… in the interests of… defamation.” Unlike the other restrictions, which provoked substantial controversy, defamation did not provoke extended scrutiny by the Constituent Assembly.

In hindsight, that was a lapse. In recent years, defamation lawsuits have emerged as a powerful weapon against the press, used primarily by individuals and corporations in positions of power and authority, and invariably as a means of silencing criticism. For example, Hamish McDonald’s The Polyester Prince, a book about the Ambanis, was unavailable in Indian bookshops because of threats of defamation lawsuits. In January, Bloomsbury withdrew The Descent of Air India, which was highly critical of ex-Aviation Minister Praful Patel, after the latter filed a defamation lawsuit. Around the same time, Sahara initiated a Rs. 200 crore lawsuit against Tamal Bandyopadhyay, a journalist with Mint, for his forthcoming book, Sahara: The Untold Story. Sahara even managed to get a stay order from a Calcutta High Court judge, who cited one paragraph from the book and ruled that “Prima facie, the materials do seem to show the plaintiffs in poor light.” The issue has since been settled out of court. Yet there is no guarantee that Bandyopadhyay would have won on merits, even with the absurd amount claimed as damages, given that a Pune court awarded damages of Rs. 100 crore to former Justice P.B. Sawant against the Times Group, for a fifteen-second clip by a TV channel that accidentally showed his photograph next to the name of a judge who was an accused in a scam. What utterly takes the cake, though, is Infosys recently serving legal notices on three journalistic outlets, asking for damages worth Rs. 2,000 crore for “loss of reputation and goodwill due to circulation of defamatory articles.”

Something is very wrong here. The plaintiffs are invariably politicians or massive corporate houses, and the defendants are invariably journalists or newspapers. The subject is always critical reporting. The damages claimed (and occasionally awarded) are astronomical, enough to cripple or destroy any business, while the actual harm is speculative. These factors, combined with a broken judicial system in which trials take an eternity to progress, leading to the prospect of a lawsuit hanging perpetually over one’s head and financial ruin just around the corner, clearly have the potential to create a highly effective chilling effect upon newspapers when it comes to critical speech on matters of public interest.

One of the reasons that this happens, of course, is that extant defamation law allows it to happen. Under defamation law, as long as a statement is published, is defamatory (that is, tending to lower the reputation of the plaintiff in the minds of reasonable people) and refers to the plaintiff, a prima facie case of defamation is made out. The burden then shifts to the defendant to argue a justification, such as truth, or fair comment, or privileged communication. Notice that defamation, in this form, is a strict liability offence: that is, the publisher cannot save himself even if he has taken due care in researching and writing his story. Even an inadvertent factual error can result in liability. Furthermore, there are many things that straddle a very uncomfortable barrier between “fact” and “opinion” (“opinions” are generally not punishable for defamation): for example, if I call you “corrupt”, have I made a statement of fact, or one of opinion? Much of reporting – especially political reporting – falls within this slipstream.

The legal standard of defamation, therefore, puts almost all the burden upon the publisher, a burden that will often be impossible to discharge – as well as potentially penalising the smallest error. Given the difficulty in fact-checking just about everything, as well as the time pressures under which journalists operate, this is an unrealistic standard. What makes things even worse, however, is that there is no cap on damages, and that the plaintiff need not even demonstrate actual harm in making his claims. Judges have the discretion to award punitive damages, which are meant to serve both as an example and as a deterrent. When Infosys claims 2000 crores, therefore, it need not show that there has been a tangible drop in its sales, or that it has lost an important and lucrative contract – let alone showing that the loss was caused by the defamatory statement. All it needs to do is make abstract claims about loss of goodwill and reputation, which are inherently difficult to verify either way, and it stands a fair chance of winning.

A combination of onerous legal standards and crippling amounts in damages makes the defamation regime a very difficult one for journalists to operate freely in. We have discussed before the crucial role that journalists play in a system of free speech whose underlying foundation is the maintenance of democracy: a free press is essential to maintaining a check upon the actions of government and other powerful players, by subjecting them to scrutiny and critique, and ensuring that the public is aware of important facts that government might be keen to conceal. In chilling journalistic speech, therefore, defamation laws strike at the heart of Article 19(1)(a). When considering what the appropriate standards ought to be, a Court therefore must consider the simple fact that if defamation – as it stands today – is compromising the core of 19(1)(a) itself, then it is certainly not a “reasonable restriction” under 19(2) (some degree of proportionality is an important requirement for 19(2) reasonableness, as the Court has held many times).

This is not, however, a situation unique to India. In Singapore, for instance, “[political] leaders have won hundreds of thousands of dollars in damages in defamation cases against critics and foreign publications, which they have said are necessary to protect their reputations from unfounded attacks” – the defamation lawsuit, indeed, was reportedly a legal strategy used by Lee Kuan Yew against political opponents.

Particularly in the United States, the European Union and South Africa, however, this problem has been recognised, and acted upon. In the next post, we shall examine some of the legal techniques used in those jurisdictions, to counter the chilling effect that strict defamation laws can have on the press.

We discussed the use of civil defamation laws as weapons to stifle a free and critical press. One of the most notorious of such instances also birthed one of the most famous free speech cases in history: New York Times v. Sullivan. This was at the peak of the civil rights movement in the American South, which was accompanied by widespread violence and repression of protesters and civil rights activists. A full-page advertisement was taken out in the New York Times, titled Heed Their Rising Voices, which detailed some particularly reprehensible acts by the police in Montgomery, Alabama. It also contained some factual errors. For example, the advertisement mentioned that Martin Luther King Jr. had been arrested seven times, whereas he had only been arrested four times. It also stated that the Montgomery police had padlocked students into the university dining hall, in order to starve them into submission. That had not actually happened. On this basis, Sullivan, the Montgomery police commissioner, sued for libel. The Alabama courts awarded 500,000 dollars in damages. Because five other people in a situation similar to Sullivan’s were also suing, the total amount at stake was three million dollars – enough to potentially bankrupt the New York Times, and certainly enough to stop it from publishing about the civil rights movement.

In his book about the Sullivan case, Make No Law, Anthony Lewis notes that the stakes in the case were frighteningly high. The civil rights movement depended, for its success, upon stirring public opinion in the North. The press was just the vehicle to do it, reporting as it did on excessive police brutality against students and peaceful protesters, practices of racism and apartheid, and so on. Sullivan was a legal strategy to silence the press, and its weapon of choice was defamation law.

In a 9 – 0 decision, the Supreme Court found for the New York Times, and changed the face of free speech law (and, according to Lewis, saved the civil rights movement). Writing for the majority, Justice Brennan made the crucial point that in order to survive, free speech needed “breathing space” – that is, the space to make errors. Under defamation law, as it stood, “the pall of fear and timidity imposed upon those who would give voice to public criticism [is] an atmosphere in which the First Amendment freedoms cannot survive.” And under the burden of proving truth, “would-be critics of official conduct may be deterred from voicing their criticism, even though it is believed to be true and even though it is, in fact, true, because of doubt whether it can be proved in court or fear of the expense of having to do so. They tend to make only statements which "steer far wider of the unlawful zone." For these reasons, Justice Brennan laid down an “actual malice” test for defamation – that is, insofar as the statement in question concerned the conduct of a public official, it was actionable for defamation only if the publisher either knew it was false, or published it with “reckless disregard” for its veracity. After New York Times, this standard has expanded, and the press has never lost a defamation case.

There are some who argue that in its zeal to protect the press against defamation lawsuits by the powerful, the Sullivan court swung the opposite way. In granting the press a near-unqualified immunity to say whatever it wanted, it subordinated the legitimate interests of people in their reputation and their dignity to an intolerable degree, and ushered in a regime of media unaccountability. This is evidently what the South African courts felt. In Khumalo v. Holomisa, Justice O’Regan accepted that the common law of defamation would have to be altered so as to reflect the new South African Constitution’s guarantees of the freedom of speech. Much like Justice Brennan, she noted that the media are “important agents in ensuring that government is open, responsive and accountable to the people as the founding values of our Constitution require”, as well as the chilling effect of requiring journalists to prove the truth of everything they said. Nonetheless, she was not willing to go as far as the American Supreme Court did. Instead, she cited a previous decision of the Supreme Court of Appeal, and incorporated a “reasonableness standard” into defamation law. That is, “if a publisher cannot establish the truth, or finds it disproportionately expensive or difficult to do so, the publisher may show that in all the circumstances the publication was reasonable. In determining whether publication was reasonable, a court will have regard to the individual’s interest in protecting his or her reputation in the context of the constitutional commitment to human dignity. It will also have regard to the individual’s interest in privacy. In that regard, there can be no doubt that persons in public office have a diminished right to privacy, though of course their right to dignity persists. It will also have regard to the crucial role played by the press in fostering a transparent and open democracy. The defence of reasonable publication avoids therefore a winner-takes-all result and establishes a proper balance between freedom of expression and the value of human dignity. Moreover, the defence of reasonable publication will encourage editors and journalists to act with due care and respect for the individual interest in human dignity prior to publishing defamatory material, without precluding them from publishing such material when it is reasonable to do so.”

The South African Constitutional Court thus adopts a middle path between the two opposite zero-sum games of traditional defamation law and American First Amendment law. A similar effort was made in the United Kingdom – the birthplace of the common law of defamation – with the passage of the 2013 Defamation Act. Under English law, the plaintiff must now show that there is likely to be “serious harm” to his reputation, and there is also a public interest exception.

While South Africa and the UK try to tackle the problem at the level of standards for defamation, the European Court of Human Rights has taken another, equally interesting tack: limiting the quantum of damages. In Tolstoy Miloslavsky v. United Kingdom, it found a 1.5 million pound damages award “disproportionately large”, and held that there was a violation of the Convention’s free speech guarantee that could not be justified as necessary in a democratic society.

Thus, constitutional courts the world over have noticed the adverse impact traditional defamation law has on free speech and a free press. They have devised a multiplicity of ways to deal with this, some more speech-protective than others: from America’s absolutist standards, to South Africa’s “reasonableness” and the UK’s “public interest” exceptions, to the ECHR’s limitation of damages. It is about time that the Indian Courts took this issue seriously: there is no dearth of international guidance.


Gautam Bhatia — @gautambhatia88 on Twitter — is a graduate of the National Law School of India University (2011), and has just received an LLM from the Yale Law School. He blogs about the Indian Constitution at http://indconlawphil.wordpress.com. Here at CIS, he blogs on issues of online freedom of speech and expression.

An Evidence based Intermediary Liability Policy Framework: Workshop at IGF

by Jyoti Panday last modified Jul 04, 2014 06:41 AM
CIS is organising a workshop at the Internet Governance Forum 2014. The workshop will be an opportunity to present and discuss ongoing research on the changing definition of intermediaries and their responsibilities across jurisdictions and technologies, and to contribute to a comprehensive framework for liability that is consistent with the capacity of the intermediary and with international human rights standards.

The Centre for Internet and Society, India and the Centre for Internet and Society, Stanford Law School, USA, will be organising a workshop to analyse the role of intermediary platforms in relation to freedom of expression, freedom of information and freedom of association at the Internet Governance Forum 2014. The aim of the workshop is to highlight the increasing importance of digital rights and broad legal protections of stakeholders in an increasingly knowledge-based economy. The workshop will discuss public policy issues associated with Internet intermediaries, in particular their roles, legal responsibilities and related liability limitations in the context of the evolving nature and role of intermediaries in the Internet ecosystem.

Online Intermediaries: Setting the context

The Internet has facilitated unprecedented access to information and amplified avenues for expression and engagement by removing the limits of geographic boundaries and enabling diverse sources of information and online communities to coexist. Against the backdrop of a broadening base of users, the role of intermediaries that enable economic, social and political interactions between users in a globally networked communication space is ubiquitous. Intermediaries are essential to the functioning of the Internet, as many producers and consumers of content on the Internet rely on the action of some third party – the so-called intermediary. Such intermediation ranges from the mere provision of connectivity, to more advanced services such as providing online storage space for data, acting as platforms for storage and sharing of user generated content (UGC), or providing links to other Internet content.

Online intermediaries enhance economic activity by reducing costs, by inducing competition through lowering the barriers to participation in the knowledge economy, and by fuelling innovation through their contribution to the wider ICT sector as well as through their key role in operating and maintaining Internet infrastructure to meet the network capacity demands of new applications and of an expanding base of users.

Intermediary platforms also provide social benefits, empowering users and improving choice through social and participative networks, or through web services that enable creativity and collaboration amongst individuals. By enabling platforms for self-expression and cooperation, intermediaries also play a critical role in establishing digital trust, protecting human rights such as freedom of speech and expression and privacy, and upholding fundamental values such as freedom and democracy.

However, the economic and social benefits of online intermediaries are conditional on a framework protecting intermediaries against legal liability for the communication and distribution of content which they enable.

Intermediary Liability

Over the last decade, right holders, service providers and Internet users have been locked in a debate on the potential liability of online intermediaries. The debate has raised global concerns on issues such as the extent to which Internet intermediaries should be held responsible for content produced by third parties using their Internet infrastructure, and how the resultant liability would affect online innovation and the free flow of knowledge in the information economy.

Given the impact of their services on communications, intermediaries find themselves either directly liable for their own actions, or indirectly (or “secondarily”) liable for the actions of their users. Requiring intermediaries to monitor the legality of online content poses an insurmountable task. Even if monitoring the legality of content against all applicable legislation were possible, the costs of doing so would be prohibitively high. Therefore, placing liability on intermediaries can deter their willingness and ability to provide services, hindering the development of the Internet itself.

The economics of intermediaries depend on scale, and the cost of evaluating the legality of an individual post often exceeds the profit from hosting that speech; in the absence of judicial oversight, this can lead to a private censorship regime. Intermediaries that are liable for content or face legal exposure have powerful incentives to police content and limit user activity to protect themselves. The result is the curtailing of legitimate expression, especially where the obligations relating to, and the definition of, illegal content are vague. Content policing mandates impose significant compliance costs, limiting the innovation and competitiveness of such platforms.

More importantly, placing liability on intermediaries has a chilling effect on freedom of expression online. Gatekeeping obligations imposed on service providers threaten democratic participation and the expression of views online, limiting the potential of individuals and restricting freedoms. Imposing liability can also indirectly lead to the death of anonymity and pseudonymity, pervasive surveillance of users' activities and extensive collection of users' data, and would ultimately undermine digital trust between stakeholders.

Thus, effectively, imposing liability on intermediaries creates a chilling effect on Internet activity and speech, creates new barriers to innovation and stifles the Internet's potential to promote broader economic and social gains. To avoid these issues, legislators have defined 'safe harbours', limiting the liability of intermediaries under specific circumstances.

Online intermediaries do not have direct control over what information is exchanged via their platforms, and might not be aware of illegal content per se. Limited liability regimes, a key framework for online intermediaries, carve out exceptions from liability rules for third-party intermediaries in order to address this asymmetry of information between content producers and intermediaries.

However, it is important to note that significant differences exist concerning the subjects of these limitations, the scope of their provisions, and their procedures and modes of operation. 'Notice and takedown' procedures are at the heart of the safe harbour model and can be subdivided into two approaches:

a. Vertical approach, where the liability regime applies to specific types of content, exemplified by the US Digital Millennium Copyright Act (DMCA)

b. Horizontal approach, based on the EU E-Commerce Directive (ECD), where different levels of immunity are granted depending on the type of activity at issue

Current framework

Globally, three broad but distinct models of liability for intermediaries have emerged within the Internet ecosystem:

1. Strict liability model, under which intermediaries are liable for third party content; this model is used in countries such as China and Thailand

2. Safe harbour model, granting intermediaries immunity provided they comply with certain requirements

3. Broad immunity model that grants intermediaries broad or conditional immunity from liability for third party content and exempts them from any general requirement to monitor content.

While the models described above can provide useful guidance for the drafting or improvement of current legislation, they are limited in their scope and application as they fail to account for the different roles and functions of intermediaries. Legislators and courts are facing increasing difficulties in interpreting these regulations and adapting them to a new economic and technical landscape that involves unprecedented levels of user generated content and new kinds of online intermediaries.

The nature and role of intermediaries change considerably across jurisdictions, and in relation to social, economic and technical contexts. In addition to the dynamic nature of intermediaries, the different categories of Internet intermediaries are frequently not clear-cut, with actors often playing more than one intermediation role. Several of these intermediaries offer a variety of products and services and may have a number of roles; conversely, several intermediaries may perform the same function. For example, blogs, video services and social media platforms are considered to be 'hosts', while search engine providers have been treated as both 'hosts' and 'technical providers'.

This limitation of existing models, their failure to recognise that different types of intermediaries perform different functions or roles and therefore should bear different liability, poses an interesting area for research and global deliberation. Establishing a classification of intermediaries will also help analyse existing patterns of influence in relation to content, for example when the removal of content by upstream intermediaries results in undue over-blocking.

Distinguishing intermediaries on the basis of their roles and functions in the Internet ecosystem is critical to ensuring a balanced system of liability and addressing concerns for freedom of expression. Rather than the highly abstracted view of intermediaries as providing a single unified service of connecting third parties, the definition of intermediaries must expand to include the specific role and function they have in relation to users' rights. A successful intermediary liability regime must balance the needs of producers, consumers, affected parties and law enforcement, address the risk of abuses for political or commercial purposes, safeguard human rights and contribute to the evolution of uniform principles and safeguards.

Towards an evidence based intermediary liability policy framework

This workshop aims to bring together leading representatives from a broad spectrum of stakeholder groups to discuss liability related issues and ways to enhance Internet users’ trust.

Questions to address at the panel include:

1. What are the varying definitions of intermediaries across jurisdictions?

2. What are the specific roles and functions that allow for classification of intermediaries?

3. How can we ensure the legal framework keeps pace with technological advances and the changing roles of intermediaries?

4. What are the gaps in existing models in balancing innovation, economic growth and human rights?

5. What could be the respective role of law and industry self-regulation in enhancing trust?

6. How can we enhance multi-stakeholder cooperation in this space?

Confirmed Panel:

Technical Community: Malcolm Hutty, Internet Service Providers Association (ISPA)
Civil Society: Gabrielle Guillemin, Article 19
Academic: Nicolo Zingales, Assistant Professor of Law at Tilburg University
Intergovernmental: Rebecca MacKinnon, Consent of the Networked, UNESCO project
Civil Society: Anriette Esterhuysen, Association for Progressive Communications (APC)
Civil Society: Francisco Vera, Advocacy Director, Derechos Digitales
Private Sector: Titi Akinsanmi, Policy and Government Relations Manager, Google Sub-Saharan Africa
Legal: Martin Husovec, Max Planck Institute

Moderator(s): Giancarlo Frosio, Centre for Internet and Society (CIS) and Jeremy Malcolm, Electronic Frontier Foundation

Remote Moderator: Anubha Sinha, New Delhi

TLD

by Geetha Hariharan last modified Jul 01, 2014 12:38 PM
Part of a web address
TLD
Full-size image: 230.6 KB

ICANN’s Documentary Information Disclosure Policy – I: DIDP Basics

by Vinayak Mithal — last modified Jul 01, 2014 01:01 PM
In a series of blogposts, Vinayak Mithal analyses ICANN's reactive transparency mechanism, comparing it with freedom of information best practices. In this post, he describes the DIDP and its relevance for the Internet community.

The Internet Corporation for Assigned Names and Numbers (“ICANN”) is a non-profit corporation incorporated in the state of California and vested with the responsibility of managing the DNS root, the generic and country-code Top Level Domain name system, the allocation of IP addresses and the assignment of protocol identifiers. As an internationally organized corporation with its own multi-stakeholder community of Advisory Groups and Supporting Organisations, ICANN is a large and intricately woven governance structure. To that end, ICANN undertakes through its Bye-laws that “in performing its functions ICANN shall remain accountable to the Internet community through mechanisms that enhance ICANN’s effectiveness”. While many of its documents, such as its Annual Reports, financial statements and minutes of Board meetings, are public, ICANN has instituted the Documentary Information Disclosure Policy (“DIDP”), which, like the RTI in India, is a mechanism through which the public is granted access to documents held by ICANN that are not otherwise publicly available. It is this policy – the DIDP – that I propose to study.

In a series of blogposts, I propose to introduce the DIDP to unfamiliar ears and to analyse it against certain freedom of information best practices. Further, I will analyse ICANN’s responsiveness to DIDP requests to test the effectiveness of the policy. Before undertaking that analysis, however, it is useful to set out what the DIDP is and why it is crucial to ICANN’s present and future accountability.

What is the DIDP?

One of the core values of the organization, enshrined under Article I Section 4.10 of the Bye-laws, notes that “in performing its functions ICANN shall remain accountable to the Internet community through mechanisms that enhance ICANN’s effectiveness”. Further, Article III of the ICANN Bye-laws, which sets out the transparency standard required of the organization, states at the outset: “ICANN and its constituent bodies shall operate to the maximum extent feasible in an open and transparent manner and consistent with procedures designed to ensure fairness”.

Accordingly, ICANN is under an obligation to maintain a publicly accessible website with information relating to its Board meetings, pending policy matters, agendas, budget, annual audit report and other related matters. It is also required to maintain on its website information about the availability of accountability mechanisms, including reconsideration, independent review and Ombudsman activities, as well as information about the outcome of specific requests and complaints invoking these mechanisms.

Pursuant to Article III of the ICANN Bye-laws on transparency, ICANN also adopted the DIDP for the disclosure of otherwise unavailable documents and their publication over the Internet. This is essential in order to safeguard the effectiveness of its international multi-stakeholder operating model and its accountability towards the Internet community. Upon a request made by a member of the public, ICANN undertakes to furnish documents that are in its possession, custody or control and that are not otherwise publicly available, provided they do not fall under any of the defined conditions for non-disclosure. Such information can be requested via an email to [email protected].

Procedure

  • Upon receipt of a DIDP request, it is reviewed by the ICANN staff.
  • Relevant documents are identified and the appropriate staff members are interviewed.
  • The documents so identified are then assessed to determine whether they fall within the ambit of the conditions for non-disclosure.
    • If they do, a review is conducted as to whether, under the particular circumstances, the public interest in disclosing the documentary information outweighs the harm that may be caused by such disclosure.
    • Documents which are considered responsive and appropriate for public disclosure are posted on the ICANN website.
    • Where publication of a requested document is appropriate but premature at the time of response, this is indicated in the response, and the requester is notified once the document is published.
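
The procedure above can be sketched roughly as follows. This is a minimal illustration only, assuming that the conditions for non-disclosure and the public-interest balancing test can be reduced to simple yes/no checks; the class and function names are hypothetical and are not ICANN's own.

# Rough, illustrative sketch of the DIDP review flow described above; not an ICANN tool.
from dataclasses import dataclass

@dataclass
class RequestedDocument:
    title: str
    meets_nondisclosure_condition: bool   # falls under one of the 12 conditions
    public_interest_outweighs_harm: bool  # balancing test, if a condition applies
    publication_premature: bool           # appropriate to publish, but not yet

def didp_outcome(doc: RequestedDocument) -> str:
    if doc.meets_nondisclosure_condition and not doc.public_interest_outweighs_harm:
        return "withheld under a condition for non-disclosure"
    if doc.publication_premature:
        return "to be published later; requester notified on publication"
    return "posted on the ICANN website"

print(didp_outcome(RequestedDocument("Board briefing", True, False, False)))
print(didp_outcome(RequestedDocument("Vendor contract annex", False, False, True)))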

Time Period and Publication

The response to a DIDP request is prepared by the staff and made available to the requester via email within 30 days of receipt of the request. The request and the response are also posted on the DIDP page http://www.icann.org/en/about/transparency in accordance with the posting guidelines set forth at http://www.icann.org/en/about/transparency/didp.

Conditions for Non-Disclosure

There are certain circumstances under which ICANN may refuse to provide the documents requested by the public. The conditions identified by ICANN are categorized under 12 heads and include internal information, third-party contracts, non-disclosure agreements, drafts of reports and documents, confidential business information, trade secrets, information protected under attorney-client or other privilege, and information relating to the security and stability of the Internet, among others.

Moreover, ICANN may refuse to provide information that is not designated under the specified conditions for non-disclosure if, in its opinion, the harm in disclosing the information outweighs the public interest in disclosure. Further, ICANN may decline requests for information that is already publicly available, as well as requests to create or compile summaries of documented information.

Grievance Redressal Mechanism

A requestor aggrieved by the response received has a right to appeal any decision denying information through the Reconsideration Request procedure or the Independent Review procedure, established under Sections 2 and 3 of Article IV of the ICANN Bye-laws respectively. The application for reconsideration is made to the Board, which has designated a Board Governance Committee for this purpose. The Independent Review is conducted by an independent third party and examines Board actions alleged to be inconsistent with the Articles of Incorporation or Bye-laws of ICANN.

Why does the DIDP matter?

The breadth of ICANN’s work and its intimate relationship to the continued functioning of the Internet must be appreciated before our analysis of the DIDP can be of help. ICANN manages the registration and operation of generic and country-code Top Level Domains (TLDs) across the world. This is a TLD:

[Image: a web address annotated to show its TLD (source linked in the original post)]

The operation of many gTLDs, such as .com, .biz or .info, takes place under a contract between ICANN and the entity to which such operation is delegated. For instance, Verisign operates the .com Registry. Any organization that wishes to allow others to register new domain names under a gTLD (sub-domains such as ‘benefithealth’ in the above example) must apply to ICANN to become an ICANN-accredited Registrar. GoDaddy, for instance, is one such ICANN-accredited Registrar. Someone like you or me, who wants a website – say, vinayak.com – buys it from GoDaddy, which has a contract with ICANN under which it pays periodic sums for the registration and renewal of individual domain names. When I buy from an ICANN-accredited Registrar, the Registrar informs the Registry Operator (say, Verisign), which then adds the new domain name (vinayak.com) to its registry list, after which it can be accessed on the Internet.

ICANN’s reach doesn’t stop here, technically. To add a new gTLD, an entity has to apply to ICANN, after which the gTLD has to be added to the root file of the Internet. The root file, which has the list of all TLDs (or all ‘legitimate’ TLDs, some would say), is amended by Verisign under its tripartite contract with the US Government and ICANN, after which Verisign updates the file in its ‘A’ root server. The other 12 root servers use the same root file as the Verisign root server. Effectively, this means that only ICANN-approved TLDs (and all sub-domains such as ‘benefithealth’ or ‘vinayak’) are available across the Internet, on a global scale. Or at least, ICANN-approved TLDs have the most and widest reach. ICANN similarly manages country-code TLDs, such as .in for India, .pk for Pakistan or .uk for the United Kingdom.
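
To make the hierarchy concrete, the short sketch below splits a domain name into its labels to show where the TLD sits, and then resolves a live name with Python's standard library; resolution only succeeds once a name has been provisioned through the registrar/registry chain and is reachable via the ICANN-coordinated root. The example domain and the helper function are illustrative additions, not drawn from the post.

# Illustrative only: split a domain into its labels and resolve a live name.
import socket

def split_domain(name: str):
    """Return (sub-domain labels, second-level label, TLD) for a domain name."""
    labels = name.strip(".").split(".")
    tld = labels[-1]                               # e.g. "com" in vinayak.com
    second_level = labels[-2] if len(labels) > 1 else ""
    return labels[:-2], second_level, tld

subs, sld, tld = split_domain("benefithealth.example.com")
print(f"TLD: .{tld} | second-level domain: {sld} | sub-domains: {subs}")

try:
    # Succeeds only for names that registries actually serve through the DNS root.
    print(socket.gethostbyname("icann.org"))
except socket.gaierror:
    print("name not found in the DNS")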

All of this leads us to wonder whether the extent of ICANN’s voluntary and reactive transparency is sufficient for an organization of such scale, one whose impact on the Internet is perhaps as great as that of governments. In the next post, I will analyse the DIDP’s conditions for non-disclosure of information against certain freedom of information best practices.


Vinayak Mithal is a final year student at the Rajiv Gandhi National University of Law, Punjab. His interests lie in Internet governance and other aspects of tech law, which he hopes to explore during his internship at CIS and beyond. He may be reached at [email protected].

PMA Policy and COAI Recommendations

by Dipankar Das last modified Jul 02, 2014 06:45 AM

Introduction

The Ministry of Communications and Information Technology on the 10th of February, 2012 released a notification [1] in the Official Gazette outlining the Preferential Market Access [2] Policy for Domestically Manufactured Electronic Goods, 2012. The Policy applies to the procurement of telecom products by Government Ministries/Departments and to such electronics as are deemed to have security implications, the latter making the policy applicable to private bodies as well. The Notification reasoned that preferential access was to be given to domestically manufactured electronic goods predominantly for security reasons. Each Ministry or Department was to notify the products that had security implications, with reasons, after which the notified agencies would be required to procure the same from domestic manufacturers. The policy was also meant to apply to the procurement of electronic goods by Government Ministries/Agencies for governmental purposes, except Defence. Each Ministry would be required to notify its own percentage of such procurement, which could not be less than 30%, and also had to specify the value addition required for a particular product to qualify as a domestically manufactured product, with the policy again specifying the minimum standards. The policy also covered the procurement of electronic hardware as a service from Managed Service Providers (MSPs).

The procurement was to be done according to the policies of each procuring agency. The tender was to be apportioned according to the notified procurement percentage, and the preference part was to be allotted to the domestic manufacturer at the lowest bid price. If there were no bidders who were domestic manufacturers, or if the tender was not severable, then it was to be awarded to the foreign manufacturer and the percentage adjusted against other electronic procurement for that period.
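
As a rough illustration of that apportionment rule, the sketch below splits a tender given a notified procurement percentage and falls back to the foreign manufacturer when there is no domestic bidder or the tender is not severable. All names and figures are hypothetical; the actual policy operates through each agency's own procurement procedures.

# Hypothetical illustration of the PMA tender apportionment described above.
def apportion_tender(total_value, pma_percent, domestic_bidder_exists, severable):
    """Return (domestic_share, foreign_share, percent_to_carry_over)."""
    preference_part = total_value * pma_percent / 100.0
    if domestic_bidder_exists and severable:
        # The preference part goes to the domestic manufacturer at the lowest bid price.
        return preference_part, total_value - preference_part, 0.0
    # No domestic bidder, or tender not severable: the whole tender goes to the
    # foreign manufacturer and the percentage is adjusted against other
    # electronic procurement in the same period.
    return 0.0, total_value, float(pma_percent)

print(apportion_tender(100.0, 30, True, True))    # (30.0, 70.0, 0.0)
print(apportion_tender(100.0, 30, False, True))   # (0.0, 100.0, 30.0)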

Telecom equipment that qualifies as domestically manufactured telecom products for preferential market access includes: encryption and UTM platforms, Core/Edge/Enterprise routers, Managed leased line network equipment, Ethernet Switches, IP based Soft Switches, Media gateways, Wireless/Wireline PABXs, CPE, 2G/3G Modems, Leased-line Modems, Set Top Boxes, SDH/Carrier Ethernet/Packet Optical Transport Equipments, DWDM systems, GPON equipments, Digital Cross connects, small size 2G/3G GSM based Base Station Systems, LTE based broadband wireless access systems, Wi-Fi based broadband wireless access systems, microwave radio systems, software defined radio and cognitive radio systems, repeaters, IBS and distributed antenna systems, satellite based systems, copper access systems, network management systems, security and surveillance communication systems (video and sensor based), and optical fibre cable.

The Policy also mentioned the creation of a self-certification system through which the vendor would declare domestic value addition. The checks would be done by laboratories accredited by the Department of Information Technology. The policy was to remain in force for a period of 10 years, and any dispute concerning the nature of a product was to be referred to the Department of Information Technology.

International and Domestic Response to the Policy

There was large-scale opposition, mostly from international quarters, to the mooting of this policy. Besides business houses, organizations such as the Office of the United States Trade Representative criticized the policy as being harmful to the global market and in violation of World Trade Organization guidelines.[3] Criticism also poured in from domestic bodies in the form of recommendations for modifying the policy, largely on three grounds: (i) the high domestic value addition requirement and the method of calculating it, (ii) the lack of a link between manufacturing and security, and (iii) the application of the policy to the private sector.

The Cellular Operators Association of India (COAI), in a letter dated March 15, 2012 to the Secretary of the Department of Telecommunications and Chairman of the Telecom Commission, expressed its views on telecom manufacturing in the country.[4] The COAI stated that such development had to be undertaken realistically and holistically so that the whole eco-system developed as a comprehensive whole. In that regard it also forwarded a study commissioned by COAI and conducted by M/s. Booz and Company titled “Telecom Manufacturing Policy – Developing an Actionable Roadmap”. The report was a comprehensive study of the telecom industry and outlined the challenges and opportunities that lay on its development trajectory. It also discussed Government involvement in the development process. The Report, while citing the Indian telecom industry's share of around 3% [5] of the global market, highlighted the fact that no country could be self-sufficient in technology. It further talked about the development of local clusters in order to cut costs and encourage manufacturing, while ensuring that the PMA Policy was consistent with WTO guidelines. It further recommended opening up foreign investment and making capital available to ensure the growth of innovation. Finally, it highlighted the lack of a connection between manufacturing and security, stressing instead proper certification, checks and the development of a comprehensive CIIP framework across all sensitive networks for security purposes.

In a further letter to the Joint Secretary of the Department of Information Technology dated April 25, 2012, the COAI expressed some reservations concerning the draft guidelines that had been published along with the notification.[6] While stressing that a higher value addition would be impossible given the lack of basic manufacturing capabilities for the development of technological units, it also highlighted the need to redefine the Bill of Materials (BOM), which had been left ambiguous and open to exploitation. It further pointed out that allowing every Ministry to make its own specifications would lead to inconsistent definitions and an administrative challenge, and hence that such matters should be handled by a central body. Furthermore, it opined that the calculation of BOMs and value additions should be done using the concept of substantial transformation, as given in the Booz study. While discouraging the use of disincentives, it stated that a single Ministry should be in charge of specifying incentives, to avoid confusion and for the sake of ease of business.

In another letter, to a Member of the Department of Telecommunications dated July 12, 2012, the COAI stressed the futility of demanding high value additions, as the same was impossible under the present scenario.[7] The manufacturing sector was lacking and had to be comprehensively developed, backed by fiscal incentives and comprehensive policies. Even so, it stressed that no country could become self-reliant and that policies like the PMA were reminiscent of the “license and permit raj” era. It further said that such policies should be consistent with WTO guidelines and should not give undue preference to domestic manufacturers to the detriment of other manufacturers. Countering the security argument, it said that security had already been addressed by the DoT License Amendment of May 31, 2011, whereby all equipment on the network would have to comply with the “Safe to Connect” standard, and again stressed the lack of any link between manufacturing and security. For the calculation of value addition, it suggested an alternative to the method proposed by the Government, as the latter would lead to disclosure of sensitive commercial information contained in the BOMs. The COAI said that the three stages laid out in the substantial transformation approach (as mentioned in the Booz study) should be used for calculating the VA. It made several proposals to develop the telecom manufacturing industry in India, including the provision of fiscal incentives, the development of telecom clusters, and comprehensive policies leading to harmonization with laws and the creation of SEZs, among other benefits.

In October 2012, the Government released a draft notification notifying products on security considerations in furtherance of the PMA Policy.[8] The document outlined the minimum PMA and VA specifications for a range of products. It also stated several security reasons for pursuing such a policy and asserted that India had to be completely self-reliant for its active telecom products. It also contained data on the predicted growth of the telecom market in India. The COAI thereafter released a document commenting on the Government's draft notification.[9]

Besides highlighting the fact that the COAI had still not received a response to its earlier comments, it again stressed the lack of a link between security and manufacturing. It reiterated its point on the impossibility of complete self-reliance on any nation’s part, and stressed the need to involve other stakeholders in the promulgation of such policies. It also made changes to the notified list of equipment, reclassifying it according to technology and listing only equipment with meaningful volumes. Furthermore, it suggested changes to the calculation of value addition: materials sourced from local suppliers and in-house assembly should count as local material, and the calculation should be done for the complete order and not for each item in the order. It further recommended that a study be conducted and the industry involved in predicting demand, as the existing projections were dated and needed revision. The Government thereafter released a revised notification[10] on October 5, 2012, but it did not incorporate many of the changes that the COAI had proposed.

Thereafter, in April 2013, DeitY released draft guidelines[11] for providing preference to domestically manufactured electronic products in Government procurement, in furtherance of the second part of the PMA Policy. The guidelines, besides defining several terms such as BOM, prescribed a minimum of 20% domestic procurement while leaving the specifications to individual Ministries. They recommended the establishment of a technical committee by the concerned Ministry or Department to recommend value addition for products. They followed a BOM-based calculation of value addition, while leaving certification to DeitY-certified laboratories notified for the purpose by the concerned Ministry/Department. DeitY was the nodal ministry for monitoring the implementation of the policy, while particular monitoring was left to each Ministry or Department concerned. The annexures included indicative lists of generic and telecom products and a format for self-certification regarding domestic value addition in an electronic product.

The COAI thereafter released a revised draft containing its own comments on April 15, 2013.[12] The COAI pointed out faults in the definition of BOM. It highlighted the difficulty of splitting R&D by country, and also stressed the impracticality of using the BOM in the calculation of value addition, as the same was confidential business information. As it had suggested earlier, it reiterated that the substantial transformation process should be used for calculating value addition. While removing the lists of equipment mentioned, it further pointed out that the disqualification clause in the format for self-certification would be a very harsh disincentive and would drive manufacturers away, and suggested that there should instead be incentives for compliance.

The COAI, along with the Association of Unified Telecom Service Providers of India (AUSPI), sent a letter dated January 24, 2013 to the Secretary, DoT containing their inputs on the Draft List of Security Sensitive Telecom Products for Preferential Market Access (PMA).[13] It again stressed that security and manufacturing were not related and that the security aspect had been dealt with by the “Safe to Connect” requirement mandated by the DoT License Amendment. It spoke of the impossibility of arriving at VA figures until VA is defined according to internationally accepted norms. Further, it opined that if the Government had security concerns it should consider VA at the network level, in the configurations as they would be deployed in the network or its segments, rather than at the element or subsystem level, as the latter would leave too many calculations open and procuring entities would find it very difficult to ascertain whether they meet the PMA requirement. It further stressed the need to comply with WTO guidelines and to rely on certification standards rather than pursue the missing link between manufacturing and security through a PMA Policy. Finally, it suggested grouping telecom products for the policy based on technology rather than individual products.

Pursuant to a Round Table Conference organized by the Department of Information Technology, AUSPI and COAI sent another letter dated April 15, 2013 to the Secretary, Department of Information Technology.[14] It reiterated several points that both AUSPI and COAI had been making to the Government on the telecom manufacturing policy. It cited the examples of other manufacturing nations to reiterate that no country could be completely self-reliant in manufacturing electronics, and that such positions would only create an environment not conducive to global business. It further stressed the need to change the manner of calculating VA, and argued that every Department should notify its own list of products having security implications and that the list of telecom equipment should be deleted from the draft guidelines being issued by DeitY to ensure better implementation.

A major change came on July 8, 2013, when the Prime Minister’s Office issued a press release withdrawing the PMA policy for review and withholding all the notifications that had been issued in that regard.[15] It said that the revised proposal would incorporate a detailed provision for project/product/sector specific security standards, alternative modes of security certification, and a roadmap for the build-up of domestic testing capacity. It further noted that the revised proposal on PMA in the private sector for security related products would not have domestic manufacturing requirements, percentage based or otherwise, and that the revised proposal would incorporate a centralised clearing house mechanism for all notifications under the PMA Policy.

The COAI thereafter, on November 7, 2013, sent a letter to the DoT containing feedback on the list of items slated for Government procurement.[16] It noted that there were 23 products to which PMA was applicable and pointed out that there were no local manufacturers for many of the products notified. It also asked the Government to take steps to ensure that fiscal incentives were given to encourage the manufacturing sector, which was beset by several costs, such as landing costs, that acted as impediments to its development. It stressed the tiered development of the industry needed to ensure holistic and comprehensive growth that would result in the manufacture of local products. It requested that the Government "focus on right enablers (incentives, ecosystem, infrastructure, taxation) as the outcome materializes once all of these converge."

The COAI sent a further letter dated November 13, 2013 to the DoT concerning the investment required in the telecom manufacturing industry.[17] It noted the projected investment requirement of USD 152 bn in the telecom sector and that the Government had projected that 92% of this investment would have to come from the private sector. The COAI, while stressing the need for the Government and private industry to work in tandem, suggested that the Government devise methods to attract investment in the telecom sector from international telecom players and that the Telecom Equipment Manufacturing Council meet to review and revise methods for attracting such investment.

Pursuant to the PMO directive, DeitY released a revised PMA Policy on the 23rd of December, 2013.[18] While there have been a few major changes, not all of the recommendations made by various bodies have been adopted.[19] The major changes in the revised policy include the exemption of the private sector from the policy and the withdrawal of the PMA requirement for equipment notified for security reasons. The manner of calculating domestic value addition has not changed, though there has been a reduction in the percentage of value addition needed for a product to qualify as a domestic product. Another addition is a two-tiered implementation mechanism for the Policy. Under Tier-I, a National Planning and Monitoring Council for Electronic Products will design a 10-year roadmap for the implementation of the policy, including the notification of products and subsequent procurement. Under Tier-II, Ministries and Departments will issue notifications specifying products and their technical qualifications, after approval by the Council. The earlier notifications under the 2012 Policy, including the notification of 23 telecom products by the Department of Telecom,[20] remain valid until revised further.[21]


[1]. No. 8(78)/2010-IPHW. Available at http://www.dot.gov.in/sites/default/files/5-10-12.PDF (accessed 03 June, 2014).

[2]. Preferential Market Access

[3]. See The PMA Debate, DataQuest at http://www.dqindia.com/dataquest/feature/191001/the-pma-debate/page/1 (accessed June 2014).

[4]. The letter is available at http://www.coai.com/Uploads/MediaTypes/Documents/letter-to-dit-on-pma-notification.pdf (accessed  June, 2014).

[5]. Around $17bn.

[6]. The letter is available at http://www.coai.com/Uploads/MediaTypes/Documents/letter-to-dit-on-pma-notification.pdf (accessed  June, 2014).

[7]. The letter is available at http://www.coai.com/Uploads/MediaTypes/Documents/coai-to-dot-on-enhancing-domestic-manufacturing-of-telecom-equipment-bas.pdf (accessed  June, 2014).

[8]. The notification no. 18-07/2010-IP can be found at http://www.coai.com/Uploads/MediaTypes/Documents/DoT-draft-notification-on-Policy-for-preference-to-domestically-manufactured-telecom-products-in-procurement-October-2012.pdf  (accessed  June, 2014).

[9]. The commented COAI draft can be found at http://www.coai.com/Uploads/MediaTypes/Documents/Annexure-1-Comments-on-draft-notification-by-DoT.pdf (accessed  June, 2014).

[10]. Available at http://www.coai.com/Uploads/MediaTypes/Documents/dots-notification-on-telecom-equipment-oct-5,-2012.pdf (accessed June, 2014).

[11]. The draft guidelines can be found at http://www.coai.com/Uploads/MediaTypes/Documents/pma_draft-govt-procurement-guidelines-april-2013.pdf (accessed June, 2014).

[12]. The COAI commented draft can be found at http://www.coai.com/Uploads/MediaTypes/Documents/pma-draft-security-guidelines-15-april-2013.pdf (accessed June, 2014).

[13]. The letter can be found at http://www.coai.com/Uploads/MediaTypes/Documents/jac-007-to-dot-on-Januarys-list-of-telecom-products-final.pdf (accessed June, 2014).

[14]. The letter can be found at http://www.coai.com/Uploads/MediaTypes/Documents/jac-to-moc-on-pma.pdf (accessed June, 2014).

[15]. The press release can be found at http://www.coai.com/Uploads/MediaTypes/Documents/pmo-on-pma.pdf (accessed June, 2014).

[16]. The letter can be found at http://www.coai.com/Uploads/MediaTypes/Documents/COAI-letter-to-DoT-on-Feedback-on-List-of-Items-for-Govt-Procurement.pdf (accessed June, 2014).

[17]. The letter can be found at http://www.coai.com/Uploads/MediaTypes/Documents/COAI-letter-to-DoT-on-Investments-Required-(TEMC)-Nov%2013-2013.pdf (accessed June, 2014).

[18]. The Notification No. 33(3)/2013-IPHW can be found at http://deity.gov.in/sites/upload_files/dit/files/Notification_Preference_DMEPs_Govt_%20Proc_23_12_2013.pdf (accessed June, 2014).

[19]. For more information, see http://electronicsb2b.com/policy-corner/revised-preferential-market-access-policy/# (accessed June, 2014).

[20]. The notification has been mentioned and discussed above.

[21]. A list of notifications dealing with electronic products except telecom products can be found on the website of DeitY at http://deity.gov.in/esdm/pma (accessed June, 2014).

Whistle Blowers Protection Act, 2014

by Prasad Krishna last modified Jul 02, 2014 08:00 AM

PDF document icon The Whistle Blowers Protection Act, 2011.pdf — PDF document, 125 kB (128487 bytes)

Models for Surveillance and Interception of Communications Worldwide

by Bedavyasa Mohanty last modified Jul 10, 2014 07:50 AM
This is an evaluation of laws and practices governing surveillance and interception of communications in 9 countries. The countries evaluated represent a diverse spectrum not only in terms of their global economic standing but also their intrusive surveillance capabilities. The analysis is limited to the procedural standards followed by these countries for authorising surveillance and provisions for resolving interception related disputes.
For each country listed below, the governing legislation and its relevant provisions are given first, followed by the model under which interception is authorised.
1. Australia Telecommunications (Interception and Access) Act, 1979
  • Governs interception of communications
  • Relevant provisions: S. 3, 7, 6A, 34, 46
Surveillance Devices Act, 2004
  • Establishes procedure for obtaining warrants and for use of surveillance devices
  • Relevant Provisions: S.13, 14
  • Authorisation for surveillance is granted in the form of a warrant from a Judge or a nominated member of the Administrative Appeals Tribunal
  • The warrant issuing authority must be satisfied that information obtained through interception shall assist in the investigation of a serious crime
  • The Acts provide a list of prescribed offences for which interception of communication may be authorized
  • The Acts also specify certain federal and state law enforcement agencies that may undertake surveillance
2. Brazil Federal Law No. 9,296, 1996:
  • Regulates wiretapping
  • Authorisation for interception is granted on a Judge’s order for a period of 15 days at a time
  • Interception is only allowed for investigations into serious offences like drug smuggling, corruption, murder and kidnapping
3. Canada Criminal Code, 1985
  • Governs general rules of criminal procedure including search and seizure protocols
  • Relevant Provision: §§ 184.2, 184.4
  • Grants power to intercept communication by obtaining authorisation from a provincial court judge or a judge of the superior court
  • Before granting his authorisation, the judge must be satisfied that either the originator of the communication or the recipient thereof  has given his/her consent to the interception
  • Under exceptional circumstances, however, a police officer owing to the exigency of the situation may intercept communication without prior authorisation
4. France Loi d'orientation et de programmation pour la performance de la sécurité intérieure (LOPPSI 2), 2011:
  • Authorises use of video surveillance and interception of communications
  • Relevant Provisions: Article 36
Loi de Programmation Militaire (LPM), 2013:
  • Authorises surveillance for protection of national security and prevention of terrorism
  • Interception of communication under LOPPSI 2 requires previous authorization from an investigating Judge after consultation with the Public Prosecutor
  • Such authorization is granted for a period of 4 months which is further extendable by another 4 months
  • Interception of communication under LPM does not require prior sanction from an investigating judge and is instead provided by the Prime Minister’s office
  • Information that can be intercepted under LPM includes not only metadata but also content and geolocation services
5. Germany Gesetz zur Beschränkung des Brief-, Post- und Fernmeldegeheimnisses (G10 Act), 2001
  • Imposes restrictions on the right to privacy and authorizes surveillance for protecting freedom and democratic order, preventing terrorism and illegal drug trade
  • Relevant Provisions: §3
The German Code of Criminal Procedure (StPO), 2002
  • Lays down search and seizure protocol and authorizes interception of telecommunications for criminal prosecutions
  • Relevant Provisions: §§ 97, 100a
  • Authorises warrantless surveillance by specific German agencies like the Bundesnachrichtendienst (Federal Intelligence Service)
  • Lays down procedure that must be followed while undertaking surveillance and intercepting communications
  • Authorises sharing of intercepted intelligence for criminal prosecutions
  • Mandates ex post notification to persons whose privacy has been violated but no judicial remedies are available to such persons
  • The Code of Criminal Procedure authorises interception of communication of a person suspected of being involved in a serious offence only on the order of a court upon application by the public prosecution office
6. Pakistan Pakistan Telecommunications Reorganisation Act, 1996:
  • Controls the flow of false and fabricated information and protects national security
  • Relevant Provisions: § 54
Investigation for Fair Trial Act, 2013:
  • Regulates the powers of law enforcement and intelligence agencies regarding covert surveillance and interception of communications
  • Relevant Provisions:  §§ 6,7, 8, 9
  • Authorisation for interception is provided by the federal government. No formal legal structure to monitor surveillance exists
  • Interception can be authorized in the interest of national security and on the apprehension of any offence
  • Requests for filtering and blocking of content are routed through the Inter-Ministerial Committee for the Evaluation of Websites, a confidential regulatory body
  • Under the Fair Trial Act, interception can only be authorised on application to the Federal Minister for Interior, who shall then permit the application to be placed before a High Court Judge
  • The warrant shall be issued by a judge only on his satisfaction that interception will aid in the collection of evidence and that a reasonable threat of the commission of a scheduled offence exists
7. South Africa The Regulation of Interception of Communications and Provision of Communication-related Information Act, 2002
  • Regulates and authorizes monitoring and interception of telecommunications services
  • Relevant Provisions: §§ 16, 22
  • Warrant for intercepting communications and installing surveillance devices is granted by a designated judge
  • The warrant is issued on satisfaction of the judge that the investigation relates to a serious offence or that the information gathering is vital to public health or safety, national security or compelling national economic interests
8. United Kingdom Regulation of Investigatory Powers Act, 2000:
  • Authorises interception of communications and surveillance
  • Relevant Provisions: §§ 5, 6, 65
  • Authorisation for interception is granted in the form of a warrant by the Secretary of State or in certain special cases by a ‘senior officer’
  • Communications can be intercepted only if it is necessary to do so in the interest of national security or for the purpose of preventing and detecting serious crimes
  • Complaints of alleged illegal surveillance are heard by the Investigatory Powers Tribunal
9. United States Electronic Communications Privacy Act, 1986 (Title III, Omnibus Crime Control and Safe Streets Act)
  • Governs authorisation for wiretapping and interception
  • Relevant Provisions: §18
  • Authorisation for interception can be granted by a district court or federal appeals court on an application by a law enforcement officer duly signed by the Attorney General
  • The application mandates obtaining the information through a service provider before intruding upon an individual’s privacy

Reading the Fine Script: Service Providers, Terms and Conditions and Consumer Rights

by Jyoti Panday last modified Jul 04, 2014 06:31 AM
This year, an increasing number of incidents related to consumer rights and service providers have come to light. This blog sets out the facts of the cases and discusses the main issues at stake, namely the role and responsibilities of providers of platforms for user-created content with regard to consumer rights.

On 1st July, 2014 the Federal Trade Commission (FTC) filed a complaint against T-Mobile USA,[1] accusing the service provider of 'cramming' customers' bills with millions of dollars of unauthorized charges. Recently, another service provider received flak from regulators and users worldwide after the publication of a paper, 'Experimental evidence of massive-scale emotional contagion through social networks'.[2] The paper described Facebook's experiment on more than 600,000 users, conducted to determine whether manipulating user-generated content would affect the emotions of its users.

In both incidents, terms that should ensure the protection of users' legal rights were instead used to obtain consent for actions by the service providers that consumers did not anticipate when agreeing to the terms and conditions (T&Cs). More precisely, both cases point to the underlying issue of how users are bound by T&Cs and, in a mediated online landscape, highlight the need to pay attention to the regulations that govern the online engagement of users.

I have read and agree to the terms

In his statement, Chief Executive Officer John Legere might have referred to T-Mobile as "the most pro-consumer company in the industry",[3] but the FTC investigation's revelation that many customers never authorized the charges suggests otherwise. The FTC investigation also found that T-Mobile received 35-40 per cent of the amount charged for subscriptions, which were made largely through innocuous-seeming services that customers had been signed up to without their knowledge or consent. Last month news broke that just under 700,000 users 'unknowingly' participated in the Facebook study, and while the legality and ethics of the experiment are being debated, what is clear is that Facebook violated consumer rights by not providing the choice to opt in or out, or even the knowledge of such social or psychological experiments, to its users.

Both incidents boil down to the sensitive question of consent. While binding agreements around the world work on the condition of consent, how do we define it and what are the implications of agreeing to the terms?

Terms of Service: Conditions are subject to change

A legal necessity, the existing terms of service (TOS), as they are also known, are deeply broken as an acceptance mechanism. The policies of online service providers are often too long and, with no shorter or multilingual versions available, require substantial effort on the part of the user to go through in detail. A 2008 Carnegie Mellon study estimated it would take an average user 244 hours every year to go through the policies they agree to online.[4] Building on the study, the Atlantic's Alexis C. Madrigal estimated that reading all of the privacy policies an average Internet user encounters in a year would take 76 working days.[5]

The cost in time is multiplied by the fact that terms of service change with technology, making it very hard for a user to keep track of all of the changes over time. Moreover, many service providers do not even commit to the obligation of notifying users of changes in the TOS. Microsoft, Skype, Amazon and YouTube are examples of service providers that have not committed to any obligation to notify users of changes, and often there are no mechanisms in place to ensure that service providers keep users updated.

Facebook has said that the recent social experiment is perfectly legal under its TOS,[6] but the fairness of the conditions of users' consent remains debatable. Facebook takes a broad copyright license that goes beyond its operating requirements, such as the right to 'sublicense'. The license also does not end when users stop using the service, unless the content has been deleted by everyone else.

More importantly, since 2007 Facebook has made major changes to its lengthy TOS roughly every year.[7] And while many point out that Facebook is transparent, since it solicits feedback before changing its terms, its accountability remains questionable, as the results are not binding unless 30% of the actual users vote. Facebook can and does track users and share their data across websites, and it has no obligation or mechanism to inform users of takedown requests.

Courts in different jurisdictions, under different laws, may come to different conclusions regarding these practices, especially about whether changing terms without notifying users is acceptable. Living in a society more protective of consumer rights is, however, no safeguard, as TOS often include a choice-of-law clause which allows companies to select the jurisdiction whose laws govern the terms.

The recent experiment bypassed the need for informed user consent due to Facebook's Data Use Policy,[8] which states that once an account has been created, user data can be used for 'internal operations, including troubleshooting, data analysis, testing, research and service improvement.' While users worldwide may be outraged, legally Facebook acted within its rights, as the decision fell within the scope of the T&Cs that users consented to. The incident's most positive impact might be in taking the questions of Facebook's responsibilities towards protecting users, including informing them of the usage of their data and of changes in data privacy terms, to a worldwide audience.

My right is bigger than yours

Most TOS agreements, written by lawyers to protect the interests of the companies, add to the complexities of privacy in an increasingly user-generated digital world. Often intentionally complicated, these agreements conflict with existing data and user rights across jurisdictions and chip away at rights like ownership, privacy and even the ability to sue. With conditions that allow for a change in terms at any time, existing users do not have ownership or control over their data.

In April, the New York Times reported on updates to the legal policy of General Mills (GM), the multibillion-dollar food company.[9] The update broadly asserted that consumers interacting with the company in a variety of ways and venues could no longer sue GM, but must instead submit any complaint to “informal negotiation” or arbitration. Since then, GM has backtracked and clarified that the “online communities” mentioned in the policy referred only to those online communities hosted by the company on its own websites.[10] Clarification aside, as Julia Duncan, Director of Federal Programs at the American Association for Justice, points out, the updated terms were so broad that they were open to wide interpretation, and anything that consumers purchased from the company could have been held to this clause.[11]

Data and whose rights?

Following the Snowden revelations, data privacy has become a contentious issue in the EU, and TOS that allow service providers to unilaterally alter the terms of the contract will face many challenges in the future. In March, Edward Snowden sent his testimony to the European Parliament calling for greater accountability and highlighted that we live in "a global, interconnected world where, when national laws fail like this, our international laws provide for another level of accountability."[12] Following the testimony came the European Parliament's vote in favor of new safeguards on the personal data of EU citizens when it is transferred to non-EU countries.[13] The new regulations seek to give users more control over their personal data, including the right to ask for data from companies that control it, and seek to place the burden of proof on the service providers.

The regulation places responsibility on companies, including third parties involved in data collection, transfer and storage, and requires greater transparency concerning requests for information. The amendment reinforces the data subject's right to seek erasure of data and obliges the concerned parties to communicate any rectification of data. Earlier this year, the European Court of Justice (ECJ) also ruled in favor of the 'right to be forgotten'.[14] The ECJ ruling recognised that data subjects' rights override the interest of Internet users, with exceptions pertaining to the nature of the information, its sensitivity for the data subject's private life and the role of the data subject in public life.

In May, the Norwegian Consumer Council filed a complaint with the Norwegian Consumer Ombudsman, “… based on the discrepancies between Norwegian Law and the standard terms and conditions applicable to the Apple iCloud service...”, which it argued were “...in breach of the law regarding control of marketing and standard agreements.”[15] The Council based its complaint on the results of a study, published earlier this year, which found that terms were hazy and varied across services including iCloud, Dropbox, Google Drive, Jotta Cloud and Microsoft OneDrive. The study found that Google's TOS allow users' content to be used for purposes other than storage, including by partners, and that Google retains rights of usage even after the service is cancelled. None of the providers guarantees that data is safe from loss, while many have the ability to terminate an account without notice. All of the service providers can change their terms of service, but only Google and Microsoft give advance notice.

The study also found service providers lacking with respect to European privacy standards, with many allowing for the browsing of user content. Tellingly, Google had received a fine in January from the French Data Protection Authority, which stated that Google "permits itself to combine all the data it collects about its users across all of its services without any legal basis."

To blame or not to blame

Facebook is facing a probe by the UK Information Commissioner's Office to assess whether the experiment conducted in 2012 violated data privacy laws.[16] The FTC has asked the court to order T-Mobile USA to stop mobile cramming, provide refunds and give up any revenues from the practice. The existing mechanisms of online consent do not simplify the task of agreeing to multiple documents and services at once, a complexity that is multiplied by the involvement of third parties.

Unsurprisingly, T-Mobile's Legere termed the FTC lawsuit misdirected and blamed the companies providing the text services for the cramming.[17] He felt those providers should be held accountable, despite allegations that T-Mobile's billing practices made it difficult for consumers to detect that they were being charged for unauthorized services, and that T-Mobile shared revenues with the third-party providers. Interestingly, this is the first action against a wireless carrier for cramming; the FTC has a precedent of going after the smaller companies that provide the services.

The FTC charged T-Mobile USA with deceptive billing practices for lumping the crammed charges into totals for 'use charges' and 'premium services' and for failing to highlight that a portion of the charge went towards third-party services. Further, the company urged customers to take complaints to vendors and was not forthcoming with refunds. For now, T-Mobile may be able to share the blame, but the incident calls its accountability into question, especially as it has since entered a pact with other carriers in the USA, including Verizon and AT&T, agreeing to stop billing customers for third-party services. Even when practices such as cramming are deemed illegal, it does not necessarily mean that harm has been prevented. Often users bear the burden of claiming refunds, litigation comes at a cost, and even after being fined, companies may have succeeded in profiting from their actions.

Conclusion

Unfair terms and conditions may arise when service providers include terms that are difficult to understand or vague in their scope. TOS that prevent users from taking legal action, or that negate the service provider's liability for actions that have a direct bearing on users, are also considered unfair. More importantly, any term that is hidden until after the contract is signed, or that gives the provider the right to change the contract to its own benefit, or that grants the service provider wider rights than the user, such as a term that makes it very difficult for users to end a contract, creates an imbalance. These issues are further complicated when the companies controlling and profiting from data are doing so with user-generated data provided free to the platform.

In the knowledge economy, web companies play a decisive role: even though they work for profit, that profit is derived from the knowledge held by individuals and groups. In aggregating human knowledge, they collect the outcomes of individual choices and provide opportunities for feedback on them. Consent becomes a critical part of the equation when harnessing individual information. In France, for instance, consent is one of the four conditions necessary to form a valid contract (article 1108 of the Code Civil).

These cases highlight the complexities inherent in the existing mechanisms of online consent. The question of consent has many underlying layers, such as reasonable notice and the contractual obligations related to consent explored in case law in Canada, which looked at whether clauses of TOS were communicated reasonably to the user, a topic for another blog. For now, we must remember that by creating and organising social knowledge that furthers human activity, service providers serve a powerful function. And as the saying goes, with great power comes great responsibility.


[1] 'FTC Alleges T-Mobile Crammed Bogus Charges onto Customers’ Phone Bills', published 1 July, 2014. See: http://www.ftc.gov/news-events/press-releases/2014/07/ftc-alleges-t-mobile-crammed-bogus-charges-customers-phone-bills

[2] 'Experimental evidence of massive-scale emotional contagion through social networks', Adam D. I. Kramer, Jamie E. Guillory and Jeffrey T. Hancock, published March 25, 2014. See: http://www.pnas.org/content/111/24/8788.full.pdf+html?sid=2610b655-db67-453d-bcb6-da4efeebf534

[3] 'U.S. sues T-Mobile USA, alleges bogus charges on phone bills', Reuters, published 1st July, 2014. See: http://www.reuters.com/article/2014/07/01/us-tmobile-ftc-idUSKBN0F656E20140701

[4] 'The Cost of Reading Privacy Policies', Aleecia M. McDonald and Lorrie Faith Cranor, published I/S: A Journal of Law and Policy for the Information Society 2008 Privacy Year in Review issue. See: http://lorrie.cranor.org/pubs/readingPolicyCost-authorDraft.pdf

[5] 'Reading the Privacy Policies You Encounter in a Year Would Take 76 Work Days', Alexis C. Madrigal, published The Atlantic, March 2012 See: http://www.theatlantic.com/technology/archive/2012/03/reading-the-privacy-policies-you-encounter-in-a-year-would-take-76-work-days/253851/

[6] Facebook Legal Terms. See: https://www.facebook.com/legal/terms

[7] 'Facebook's Eroding Privacy Policy: A Timeline', Kurt Opsahl, published by the Electronic Frontier Foundation, April 28, 2010. See: https://www.eff.org/deeplinks/2010/04/facebook-timeline

[8] Facebook Data Use Policy. See: https://www.facebook.com/about/privacy/

[9] 'When ‘Liking’ a Brand Online Voids the Right to Sue', Stephanie Strom, published in New York Times on April 16, 2014 See: http://www.nytimes.com/2014/04/17/business/when-liking-a-brand-online-voids-the-right-to-sue.html?ref=business

[10] 'Explaining our website privacy policy and legal terms', published April 17, 2014. See: http://www.blog.generalmills.com/2014/04/explaining-our-website-privacy-policy-and-legal-terms/

[11] 'General Mills Amends New Legal Policies', Stephanie Strom, published in New York Times. See: http://www.nytimes.com/2014/04/18/business/general-mills-amends-new-legal-policies.html?_r=0

[12] Edward Snowden Statement to European Parliament published March 7, 2014. See: http://www.europarl.europa.eu/document/activities/cont/201403/20140307ATT80674/20140307ATT80674EN.pdf

[13] 'Progress on EU data protection reform now irreversible following European Parliament vote', published 12 March 2014. See: http://europa.eu/rapid/press-release_MEMO-14-186_en.htm

[14] European Court of Justice rules Internet Search Engine Operator responsible for Processing Personal Data Published by Third Parties, Jyoti Panday, published on CIS blog on May 14, 2014. See: http://cis-india.org/internet-governance/blog/ecj-rules-internet-search-engine-operator-responsible-for-processing-personal-data-published-by-third-parties

[15] 'Complaint regarding Apple iCloud's terms and conditions', published on 13 May 2014. See: http://www.forbrukerradet.no/_attachment/1175090/binary/29927

[16] 'Facebook faces UK probe over emotion study' See: http://www.bbc.co.uk/news/technology-28102550

[17] Our Reaction to the FTC Lawsuit See: http://newsroom.t-mobile.com/news/our-reaction-to-the-ftc-lawsuit.htm

Research Advisory Network Agenda

by Prasad Krishna last modified Jul 03, 2014 06:38 AM

PDF document icon RAN Agenda June 18 (RANdraft).pdf — PDF document, 508 kB (520708 bytes)

Domain Name System Forum 2014

by Prasad Krishna last modified Jul 03, 2014 09:03 AM

PDF document icon DNS_2014_marketing _brochure_agenda_small.pdf — PDF document, 1857 kB (1902206 bytes)

The Constitutionality of Indian Surveillance Law: Public Emergency as a Condition Precedent for Intercepting Communications

by Bedavyasa Mohanty last modified Aug 04, 2014 04:52 AM
Bedavyasa Mohanty analyses the nuances of interception of communications under the Indian Telegraph Act and the Indian Post Office Act. In this post he explores the historical bases of surveillance law in India and examines whether the administrative powers of intercepting communications are Constitutionally compatible.

Introduction

State authorised surveillance in India derives its basis from two colonial legislations: §26 of the Indian Post Office Act, 1898 and §5 of the Telegraph Act, 1885 (hereinafter the Act) provide for the interception of postal articles[1] and messages transmitted via telegraph[2] respectively. Both of these sections, which are analogous, provide that the powers laid down therein can only be invoked on the occurrence of a public emergency or in the interest of public safety. The task of issuing orders for the interception of communications is vested in an officer authorised by the Central or the State government. This blog examines whether the preconditions set by the legislature for allowing interception act as adequate safeguards. The second part of the blog analyses the limits of the discretionary power given to such authorised officers to intercept and detain communications.

Surveillance by law enforcement agencies constitutes a breach of a citizen’s Fundamental Rights to privacy and to the Freedom of Speech and Expression. Any such breach must therefore be justified against compelling civil rights concerns. The right to privacy in India has long been considered too ‘broad and moralistic’[3] to be defined judicially. The judiciary, though, has been careful not to assign it an unbounded interpretation. It has recognised that a breach of privacy has to be balanced against a compelling public interest[4] and has to be decided on a careful examination of the facts of each case. In the same breath, Indian courts have also legitimised surveillance by the state as long as such surveillance is lawful, unobtrusive and within bounds.[5] While determining what constitutes legal surveillance, courts have rejected “prior judicial scrutiny” as a mandatory requirement and have held that administrative safeguards are sufficient to legitimise an act of surveillance.[6]

Conditions Precedent for Ordering Interception

§§5(2) of the Telegraph Act and 26(2) of the Indian Post Office Act outline a two-tiered test that must be satisfied before telegraphs or postal articles may be intercepted. The first tier consists of conditions sine qua non: the “occurrence of public emergency” or “the interests of public safety.” The second set of requirements under the provisions is “the interests of the sovereignty and integrity of India, the security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of an offence.” While vesting the power of interception in administrative officials, the sections contemplate a legal fiction in which a public emergency exists and interception is in the interest of the sovereignty, integrity or security of the state, the maintenance of public order or friendly relations with foreign states. The term “public emergency,” however, has not been clearly defined by the legislature or by the courts. The provisions thus vest in a delegated official the arbitrary power to order the interception of communications in violation of Fundamental Rights.

Tracing the History of the Expression “Public Emergency”

The origins of the laws governing interception can be traced back to English laws of the late 19th century, specifically one that imposed a penalty on a postal officer who delayed or intercepted a postal article.[7] This law guided the drafting of the Indian Telegraph Act in 1885, which legitimised the interception of communications by the state. The expression “public emergency” appeared in the original Telegraph Act of 1885 and has been adopted in that form in all subsequent renderings of provisions relating to interception. Despite the contentious and vague nature of the expression, no consensus regarding its interpretation seems to have been arrived at. One of the first post-independence analyses of this provision was undertaken by the Law Commission in 1968. The Law Commission, in its 38th Report on the Indian Post Office Act, raised concerns about the constitutionality of the expression. It was of the opinion that a term not defined in the Constitution cannot serve as a reasonable ground for the suspension of Fundamental Rights.[8] It further urged that a state of public emergency must be of such a nature that it is not secret and is apparent to a reasonable man.[9] It thus challenged the operation of the Act in its then-current form, under which the determination of a public emergency was left to the discretion of a delegated administrative official. The Commission, in conclusion, implored the legislature to amend the laws relating to interception to bring them in line with the Constitution. This led to the Telegraph (Amendment) Act of 1981. Questions regarding the true meaning of the expression and its potential misuse were raised in both houses of Parliament during the passage of the amendment. The Law Ministry, however, did not issue any additional clarifications regarding the terms used in the Act. Instead, the Government claimed that the expressions used in the Act are “exactly those that are used in the Constitution.”[10] It may be of interest to note here that the Constitution of India neither uses nor defines the term “public emergency.” Naturally, it is not contemplated as a ground for reasonably restricting the Fundamental Rights provided under Article 19(1).[11] Similarly, concerns regarding the potential misuse of the powers were met with the logically incompatible and factually inaccurate position that the law had not been misused in the past.[12]

Locating “Public Emergency” within a Proclamation of Emergency under the Constitution (?)

Public emergency is not equivalent to a proclamation of emergency under Article 352 of the Constitution, if only because the expression was first used in legislation over six decades before the drafting of the Indian Constitution began. Besides, orders for the interception of communications have also been passed when the state was not under a proclamation of emergency. Moreover, public emergency is not the only prerequisite prescribed under the Act. §5(2) states that an order for interception can be passed either on the occurrence of a public emergency or in the interest of public safety. Therefore, the thresholds for the satisfaction of both have to be similar or comparable. If the threshold for establishing a public emergency is understood to be as high as a proclamation of emergency, then any order for interception could easily be passed under the guise of public safety, and the public emergency condition would be rendered redundant. Public emergency is therefore a condition separate from a proclamation of emergency.

In a similar vein the Supreme Court has also clarified[13] that terms like “public emergency” and “any emergency,” when used as statutory prerequisites, refer to the occurrence of different kinds of events. These terms cannot be equated with one another merely on the basis of the commonality of one word.

The Supreme Court in Hukam Chand v. Union of India[14] correctly stated that the terms public emergency and public safety must “take colour from each other.” However, the court erred in defining public emergency as a situation that “raises problems concerning the interest of the public safety, the sovereignty and integrity of India, the security of the State, friendly relations with foreign States or public order or the prevention of incitement to the commission of an offence.” This circular definition does not lend any clarity to the interpretive murk surrounding the term. The Act envisages public emergency as a sine qua non that must exist prior to a determination that there is a threat to public order or to the sovereignty and integrity of the state. The court’s interpretation, on the other hand, would suggest that a state of public emergency can be said to exist only when public order and the sovereignty and integrity of the state are already threatened. Therefore, while conditions precedent exist for the exercise of powers under §5(2) of the Act, there are no objective standards against which they are to be tested.

Interpretation of Threshold Requirements

A similar question arose before the House of Lords in Liversidge v. Anderson.[15] The case examined the vires of an Act that vested an administrative authority with the conditional power to detain a person if there was reasonable cause to believe that the person was of hostile origin. Lord Atkin, dissenting from the majority opinion, stated in no uncertain terms that the power vested in the Secretary of State was conditional and not absolute. When a conditional authority is vested in an administrative official but there are no prescriptive guidelines for the determination of the preconditions, the statute has the effect of vesting an absolute power in a delegated official. This view was also upheld by the Supreme Court in State of Madhya Pradesh v. Baldeo Prasad.[16] The court was of the opinion that a statute must not only provide adequate safeguards for the protection of innocent citizens but also require the administrative authority to be satisfied as to the existence of the conditions precedent laid down in the statute before making an order. If the statute failed to do so in respect of any condition precedent, then the law suffered from an infirmity and was liable to be struck down as invalid.[17] Leaving the question of the existence of a public emergency to the sole determination of an administrative official therefore amounts to an absolute and arbitrary power and is ultra vires the Constitution.

Interestingly, in its original unamended form, §5 contained a proviso stating that the determination of a public emergency was the sole prerogative of the secretary of state and that such a finding could not be challenged before a court of law. It is this provision that the government repealed through the Telegraph (Amendment) Act of 1981 to bring the law in line with Constitutional principles. The preceding discussion shows that the amendment did not have the effect of rectifying the law’s constitutional infirmities. Nonetheless, the original Telegraph Act and its subsequent amendment are vital for understanding the compatibility of surveillance standards with Constitutional principles. The draconian proviso in the original Act, vesting absolute powers in an administrative official, illustrates that the legislative intent behind the drafting of a 130-year-old law cannot be relied on in today’s context. Vague terms like public emergency that have been thoughtlessly adopted from a draconian law find no place in a state that seeks to guarantee its citizens the rights of free speech and expression.

Conclusion

The interception of communications under the Telegraph Act and the Indian Post Office Act violates not only one’s privacy but also one’s freedom of speech and expression. Besides, orders for the tapping of telephones violate not only the privacy of the individual in question but also that of the person he/she is communicating with. Considering the serious nature of this breach, it is absolutely necessary that the powers enabling such interception are not only constitutionally authorised but also adequately safeguarded. The Fundamental Rights declared by Article 19(1) cannot be curtailed on any ground outside the relevant provisions of Cls. 2-6.[18] The restrictive clauses in Cls. (2)-(6) of Article 19 are exhaustive and are to be strictly construed.[19] Public emergency is not one of the conditions enumerated under Article 19 for curtailing fundamental freedoms. Moreover, the regime lacks adequate safeguards, vesting absolute discretionary power in a non-judicial administrative authority. Even setting aside the massive potential for misuse of these powers, it is difficult to conceive that the interception provisions would withstand a scrutiny of constitutionality.

Over the course of the last few years, India has been dangerously toeing the line that keeps it from turning into a totalitarian surveillance state.[20] In 2011, India was the third most intrusive state,[21] with 1,699 requests for removal made to Google; in 2012 that number increased to 2,529.[22] The media is abuzz with reports about the Intelligence Bureau wanting Internet Service Providers to log all customer details[23] and random citizens being videotaped by the Delhi Police for “looking suspicious.” It becomes essential under these circumstances to question where the state’s power ends and a citizen’s privacy begins. Most of the information regarding projects like the CMS and the CCTNS is murky and unconfirmed. But under the pretext of national security, government officials have refused to divulge any information regarding the kind of data included within these systems and whether any accountability measures exist. For instance, there have been conflicting opinions from various ministers regarding whether the internet would also be under the supervision of the CMS.[24] Even more importantly, citizens are unaware of what rights and remedies are available to them in instances of violation of their privacy.

The intelligence agencies that have been tasked with handling information collected under these systems have not been created under any legislation and are therefore not subject to any parliamentary oversight. Attempts like the Intelligence Services (Powers and Regulation) Bill, 2011 have been shelved and not revisited since their introduction. The intelligence agencies that have been created through executive orders enjoy vast and unbridled powers that make them accountable to no one.[25] Before vesting Indian law enforcement agencies with sensitive information that can so readily be misused, it is essential to ensure that a mechanism exists to check the use and misuse of that power. A three-judge bench of the Supreme Court has recently decided to entertain a Public Interest Litigation aimed at subjecting the intelligence agencies to auditing by the Comptroller and Auditor General of India. But the PIL, even if successful, will only manage to scratch the surface of the wide and unbridled powers enjoyed by the Indian intelligence agencies. The question of the constitutionality of interception powers, however, has not been subjected to as much scrutiny as is necessary. Especially at a time when the government is rumoured to have already obtained the capability for mass dragnet surveillance, such a determination by the Indian courts cannot come soon enough.


[1] Indian Post Office Act, 1898, § 26

[2] Indian Telegraph Act, 1885 § 5(2)

[3] PUCL v. Union of India, AIR 1997 SC 568

[4] Govind vs. State of Madhya Pradesh, (1975) 2 SCC 148

[5] Malak Singh vs. State Of Punjab & Haryana, AIR 1981 SC 760

[6] Supra note 3

[7] Law Commission, Indian Post Office Act, 1898 (38th Law Commission Report) para 84

[8] ibid

[9] id

[10] Lok Sabha Debates, Minister of Communications, Shri H.N. Bahuguna, August 9, 1972

[11] The Constitution of India, Article 358- Suspension of provisions of Article 19 during emergencies

[12] Lok Sabha Debates, Minister of Communications, Shri H.N. Bahuguna, August 9, 1972

[13] Hukam Chand v. Union of India, AIR 1976 SC 789

[14] ibid

[15] Liversidge v. Anderson [1942] A.C. 206

[16] State of M.P. v. Baldeo Prasad, AIR 1961 (SC) 293 (296)

[17] ibid

[18] Ghosh O.K. v. Joseph E.X., AIR 1963 SC 812; 1963 Supp. (1) SCR 789

[19] Sakal Papers (P) Ltd. v. Union of India, AIR 1962 SC 305 (315); 1962 (3) SCR 842

[20] See Notable Observations- July to December 2012, Google Transparency Report, available at http://www.google.com/transparencyreport/removals/government/ (last visited on July 2, 2014) (a 90% increase in Content removal requests by the Indian Government in the last year)

[21] Willis Wee, Google Transparency Report: India Ranks as Third ‘Snoopiest’ Country, July 6, 2011 available at http://www.techinasia.com/google-transparency-report-india/ (last visited on July 2, 2014)

[22] See Notable Observations- July to December 2012, Google Transparency Report, available at http://www.google.com/transparencyreport/removals/government/ (last visited on July 2, 2014) (a 90% increase in Content removal requests by the Indian Government in the last year)

[23] Joji Thomas Philip, Intelligence Bureau wants ISPs to log all customer details, December 30, 2010 http://articles.economictimes.indiatimes.com/2010-12-30/news/27621627_1_online-privacy-internet-protocol-isps (last visited on July 2, 2014)

[24] Deepa Kurup, In the dark about ‘India’s Prism’ June 16, 2013 available at http://www.thehindu.com/sci-tech/technology/in-the-dark-about-indias-prism/article4817903.ece

[25] Saikat Dutta, We, The Eavesdropped May 3, 2010 available at http://www.outlookindia.com/article.aspx?265191 (last visited on July 2, 2014)

Facebook and its Aversion to Anonymous and Pseudonymous Speech

by Jessamine Mathew — last modified Jul 04, 2014 07:53 AM
Jessamine Mathew explores Facebook's "real name" policy and its implications for the right to free speech.

The power to be unidentifiable on the internet has been a major reason for the sheer number of its users. Most of the internet can now be freely used by anybody under a pseudonym, without the fear of being recognised by anybody else. These conditions further free expression and protect privacy on the internet, which is particularly important for those who use the internet to communicate political dissent or to engage in any other activity that would be deemed controversial in a society yet is not illegal. For example, an internet forum for homosexuals in India, discussing the various issues that surround homosexuality, may prove far more fruitful if contributors are given the option of being unidentifiable, considering the stigma that surrounds homosexuality in India and the recent setting aside of the Delhi High Court decision reading down Section 377 of the Indian Penal Code. The possibility of being anonymous or pseudonymous exists on many internet fora, but on Facebook, the world’s largest internet space for building connections and free expression, pseudonymous accounts are given no sanction, as Facebook follows a real name policy. And as the recent decision of a New York judge, disallowing Facebook from contesting warrants for the private information of over 300 of its users, shows, there are clear threats to freedom of expression and privacy.

On the subject of using real names, Facebook’s Community Standards state: “Facebook is a community where people use their real identities. We require everyone to provide their real names, so you always know who you're connecting with. This helps keep our community safe.” Facebook’s Marketing Director, Randi Zuckerberg, bluntly dismissed the idea of online anonymity as one that “has to go away”, claiming that people would “behave much better” if they were made to use their real names. Apart from being narrow-minded, this view fails to recognise that there are many different kinds of expression on the internet, from the stories of sexual abuse victims to the views of political commentators, or indeed whistleblowers, many of whom may prefer to use the platform without being identified. It has been held in many cases that there is a right to anonymity, as it furthers free speech without the fear of retaliation or humiliation (see Talley v. California).

While Facebook’s rationale for wanting users to register accounts with their own names is based on the goal of maintaining the security of other users, the policy is still a serious infringement of users’ freedom of expression, particularly when anonymous speech has been protected by various countries. Facebook has evolved from a private space for college students to connect with each other into a very public platform where not just social connections but also discussions take place, often with a heavily political theme. Facebook has been described as instrumental in facilitating communication during the Arab Spring, providing a space for citizens to effectively communicate with each other and organise movements. Connections on Facebook are no longer of a purely social nature but have extended to the political and legal spheres as well, with the platform being used to promote movements throughout the country. In India, Facebook was, along with Twitter, the most widely adopted medium for discourse on the political future of the country before, during and after the 2014 elections. Earlier, in 2011, Facebook was used intensively during the India Against Corruption movement: pages were created, pictures and videos uploaded, and comments posted by approximately 1.5 million people in India. In 2012, Facebook was also used to protest against the Delhi gang rape, with many coming forward with their own stories of sexual assault, providing support to the victim, organising rallies and marches, and protesting against the poor level of safety of women in Delhi.

Much as with its content policy, Facebook exhibits a number of discrepancies in implementing its ban on pseudonyms. Salman Rushdie found that his Facebook account had been suspended, and when it was reinstated after he sent proof of identity, Facebook changed his name to the name on his passport, Ahmed Rushdie, instead of the name he popularly goes by. Through a series of tweets, he criticised this move by Facebook to force him to display his birth name. Eventually Facebook changed his name back to Salman Rushdie, but not before serious questions were raised regarding Facebook’s policies. The Moroccan activist Najat Kessler’s account was also suspended because it was suspected that she was using a fake name. Nor has Facebook stopped at suspending individual user accounts: it has also removed pages and groups because their creators used pseudonyms to create and operate them. This was seen in the case of Wael Ghonim, who created a group that helped mobilise citizens in Egypt in 2011. Ghonim was a Google executive who did not want his online activism to affect his professional life and hence operated under a pseudonym. Facebook temporarily removed the group due to his pseudonymity but later reinstated it.

While Facebook performs its due diligence when it comes to some accounts, it has done nothing about the overwhelmingly large number of obviously fake accounts, ranging from Santa Claus to Jack the Ripper. On my own Facebook friend list, there are people who have entered the names of fictional characters as their own, clearly violating the real name policy. I once reported a pseudonymous account that used the real name of another person. Facebook thanked me for reporting the account but also said that I would “probably not hear back” from them. The account still exists with the same name. The redundancy of the requirement lies in the fact that Facebook does not ask users to upload some form of identification when they register with the site, but only when it suspects them of using a pseudonym. Since Facebook also implements its policies largely on the basis of complaints by other users or the government, the real name policy makes many political dissidents and social activists targets of abuse on the internet.

Further, Articles 21 and 22 of the ICCPR guarantee the rights to peaceful assembly and to freedom of association. As governments increasingly crack down on physical assemblies of people fighting for democracy or against legislation or conditions in a country, the internet has proved to be an extremely useful tool for facilitating this assembly without forcing people to endure the wrath of governmental authorities. A large factor in the popularity of internet gatherings is the way in which powerful opinions can be voiced without the fear of immediate detection. Facebook has become the coveted online space for this kind of assembly, but its policies, and more particularly their faulty implementation, reduce the flow of communication on the site.

Of course, Facebook’s fears of cyberbullying and harassment are likely to materialise if there is absolutely no check on the identity of users. A possible solution to the conflict between requiring real names to keep the community safe and still allowing individuals to be present on the network without the fear of identification would be to ask users to register with their own names but allow them to choose a fictional name that is the name other Facebook users see. Under this model, Facebook can also deal with the issue of safety through its system of reporting against other users. If a pseudonymous user has been reported by a substantial number of people for harassment or any other cause, then Facebook may either suspend the account or remove the offensive content. If the victim of harassment chooses to approach a judicial body, then Facebook may reveal the real name of the user so that due process may be followed. At the same time, users who utilise the website to present their views and participate in the online process of protest, or contribute to free expression in any other way, can do so without the fear of being detected or targeted. Safety on the site can be maintained even without forcing users to reveal their real names to the world. The system that Facebook currently follows neither curbs the presence of fake accounts nor promotes completely free expression on the site.
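
A minimal sketch of the model proposed above, written in Python purely for illustration: the account record here is a hypothetical structure, not Facebook's actual data model or API. It stores the verified legal name privately, shows only a user-chosen display name publicly, and releases the legal name only against a reference to a judicial order.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    legal_name: str           # verified at registration, never shown publicly
    display_name: str         # pseudonym visible to other users
    report_count: int = 0     # abuse reports filed by other users

    def public_profile(self):
        # Other users only ever see the display name.
        return {"name": self.display_name}

    def reveal_legal_name(self, court_order_id: Optional[str]):
        # The real name is disclosed only when a judicial order is produced.
        if not court_order_id:
            raise PermissionError("legal name is released only against a judicial order")
        return self.legal_name

account = Account(legal_name="Ahmed Rushdie", display_name="Salman Rushdie")
print(account.public_profile())                      # {'name': 'Salman Rushdie'}
print(account.reveal_legal_name("order/2014/123"))   # disclosed under due process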

Free Speech and Surveillance

by Gautam Bhatia — last modified Jul 07, 2014 04:59 AM
Gautam Bhatia examines the constitutionality of surveillance by the Indian state.

The Indian surveillance regime has been the subject of discussion for quite some time now. Its nature and scope are controversial. The Central Monitoring System, through which the government can obtain direct access to call records, appears to have the potential to be used for bulk surveillance, although official claims emphasise that it will only be implemented in a targeted manner. The Netra system, on the other hand, is certainly about dragnet collection, since it detects the communication, via electronic media, of certain “keywords” (such as “attack”, “bomb”, “blast” and “kill”), no matter what context they are used in, and no matter who is using them.
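
To illustrate why context-free keyword detection of this sort is so blunt an instrument, here is a minimal sketch in Python. It is purely hypothetical and is not the Netra implementation, whose design is not public; it simply flags any message containing a listed keyword, regardless of sender or sense.

KEYWORDS = {"attack", "bomb", "blast", "kill"}

def flag_message(text):
    # Flag a message if any listed keyword appears in it, ignoring context entirely.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & KEYWORDS)

messages = [
    "The movie was a blast, you should see it",    # innocuous, but flagged
    "This deadline is going to kill me",           # innocuous, but flagged
    "Meeting at 6 pm tomorrow",                    # not flagged
]

for m in messages:
    print(flag_message(m), "-", m)

Both of the first two messages are flagged even though neither has anything to do with a threat, which is exactly the concern with dragnet keyword matching.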

Surveillance is quintessentially thought to raise concerns about privacy. Over a series of decisions, the Indian Supreme Court has read in the right to privacy into Article 21’s guarantee of the right to life and personal liberty. Under the Supreme Court’s (somewhat cloudy) precedents, privacy may only be infringed if there is a compelling State interest, and if the restrictive law is narrowly tailored – that is, it does not infringe upon rights to an extent greater than it needs to, in order to fulfill its goal. It is questionable whether bulk surveillance meets these standards.

Surveillance, however, does not only involve privacy rights. It also implicates Article 19 – in particular, the Article 19(1)(a) guarantee of the freedom of expression, and the 19(1)(c) guarantee of the freedom of association.

Previously on this blog, we have discussed the “chilling effect” in relation to free speech. The chilling effect doctrine evolved in the context of defamation cases, where a combination of exacting standards of proof and prohibitive damages contributed to a culture of self-censorship, in which people would refrain from voicing even legitimate criticism for fear of ruinous defamation lawsuits. The chilling effect, however, is not restricted merely to defamation, but arises in free speech cases more generally, where vague and over-broad statutes often leave the border between the permitted and the prohibited unclear.

Indeed, a few years before it decided New York Times v. Sullivan, which brought the chilling effect doctrine into defamation and free speech law, the American Supreme Court applied a very similar principle in a surveillance case. In NAACP v. Alabama, the National Association for the Advancement of Colored People (NAACP), which was heavily engaged in the civil rights movement in the American deep South, was ordered by the State of Alabama to disclose its membership list. The NAACP challenged this, and the Court held in its favour. It specifically connected freedom of speech, freedom of association, and the impact of surveillance upon both:

“Effective advocacy of both public and private points of view, particularly controversial ones, is undeniably enhanced by group association, as this Court has more than once recognized by remarking upon the close nexus between the freedoms of speech and assembly. It is beyond debate that freedom to engage in association for the advancement of beliefs and ideas is an inseparable aspect of the “liberty” assured by the Due Process Clause of the Fourteenth Amendment, which embraces freedom of speech. Of course, it is immaterial whether the beliefs sought to be advanced by association pertain to political, economic, religious or cultural matters, and state action which may have the effect of curtailing the freedom to associate is subject to the closest scrutiny… it is hardly a novel perception that compelled disclosure of affiliation with groups engaged in advocacy may constitute [an] effective a restraint on freedom of association… this Court has recognized the vital relationship between freedom to associate and privacy in one’s associations. Inviolability of privacy in group association may in many circumstances be indispensable to preservation of freedom of association, particularly where a group espouses dissident beliefs.”

In other words, if persons are not assured of privacy in their association with each other, they will tend to self-censor both who they associate with, and what they say to each other, especially when unpopular groups, who have been historically subject to governmental or social persecution, are involved. Indeed, this was precisely the argument that the American Civil Liberties Union (ACLU) made in its constitutional challenge to PRISM, the American bulk surveillance program. In addition to advancing a Fourth Amendment argument from privacy, the ACLU also made a First Amendment freedom of speech and association claim, arguing that the knowledge of bulk surveillance had made – or at least, was likely to have made – politically unpopular groups wary of contacting it for professional purposes (the difficulty, of course, is that any chilling effect argument effectively requires proving a negative).

If this argument holds, then it is clear that Articles 19(1)(a) and 19(1)(c) are prima facie infringed in cases of bulk – or even other forms of – surveillance. Two conclusions follow: first, that any surveillance regime needs statutory backing. Under Article 19(2), reasonable restrictions upon fundamental rights can only be imposed by law, and not by executive fiat (the same argument applies to Article 21 as well).

Assuming that a statutory framework is brought into force, the crucial issue then becomes whether the restriction is a reasonable one, in service of one of the stated 19(2) interests. The relevant part of Article 19(2) permits reasonable restrictions upon the freedom of speech and expression “in the interests of… the security of the State [and] public order.” The Constitution does not, however, provide a test for determining when a restriction can be legitimately justified as being “in the interests of” the security of the State, and of public order. There is not much relevant precedent with respect to the first sub-clause, but there happens to be an extensive – although conflicted – jurisprudence dealing with the public order exception.

One line of cases – characterised by Ramji Lal Modi v. State of UP and Virendra v. State of Punjab – has held that the phrase “in the interests of” is of very wide ambit, and that the government has virtually limitless scope to make laws ostensibly for securing public order (this extends to prior restraint as well, something that Blackstone, writing in the 18th century, found to be illegal!). The other line of cases, such as Superintendent v. Ram Manohar Lohia and S. Rangarajan v. P. Jagjivan Ram, has required the government to satisfy a stringent burden of proof. In Lohia, for instance, Ram Manohar Lohia’s conviction for encouraging people to break a tax law was reversed, the Court holding that the relationship between restricting free speech and a public order justification must be “proximate”. In Rangarajan, the Court used the evocative image of a “spark in a powder keg” to characterise the degree of proximity required. It is evident that under the broad test of Ramji Lal Modi, a bulk surveillance system is likely to be upheld, whereas under the narrow test of Lohia, it is almost certain not to be.

Thus, if the constitutionality of surveillance comes to Court, three issues will need to be decided: first, whether Articles 19(1)(a) and 19(1)(c) have been violated. Secondly – and if so – whether the “security of the State” exception is subject to the same standards as the “public order” exception (there is no reason why it should not be). And thirdly, which of the two lines of precedent represents the correct understanding of Article 19(2)?


Gautam Bhatia — @gautambhatia88 on Twitter — is a graduate of the National Law School of India University (2011), and has just received an LLM from the Yale Law School. He blogs about the Indian Constitution at http://indconlawphil.wordpress.com. Here at CIS, he blogs on issues of online freedom of speech and expression.

FOEX Live

by Geetha Hariharan last modified Jul 07, 2014 12:36 PM
Selections of news on online freedom of expression and digital technology from across India (and some parts of the world)


For feedback, comments and any incidents of online free speech violation you are troubled or intrigued by, please email Geetha at geetha[at]cis-india.org or on Twitter at @covertlight.

Delhi High Court Orders Blocking of Websites after Sony Complains Infringement of 2014 FIFA World Cup Telecast Rights

by Anubha Sinha last modified Jul 08, 2014 07:02 AM
Of late the Indian judiciary has been issuing John Doe orders to block websites, most recently in Multi Screen Media v. Sunit Singh and Others. The order mandated the blocking of 472 websites, out of which approximately 267 websites had been blocked as on July 7, 2014. This trend is an extremely dangerous one because it encourages flagrant censorship by intermediaries based on a judicial order that does not provide for the blocking of specific URLs, but instead for the blocking of entire websites.

The High Court of Delhi on June 23, 2014 issued a John Doe injunction restraining more than 400 websites from broadcasting 2014 FIFA World Cup matches. News reports indicate that the single-judge bench of Justice V. Kameswar Rao directed the Department of Telecom to issue appropriate directions to ISPs to block the websites that Multi Screen Media provided, as well as “any other website identified by the plaintiff” in the future. On July 4, Justice G. S. Sistani permitted reducing the list to 219 websites.

Background

Multi Screen Media (MSM) is the official broadcaster for the ongoing 2014 FIFA World Cup tournament. FIFA (the governing body) had exclusively licensed rights to MSM, including live, delayed, highlights, on-demand and repeat broadcasts of the FIFA matches. MSM complained that the defendants were hosting, streaming and providing access to the matches, thereby infringing MSM’s exclusive broadcast and reproduction rights.

The court in the instant order held that the defendants had prima facie infringed MSM’s broadcasting rights, which are guaranteed by section 37 of the Copyright Act, 1957. In an over-zealous attempt to pre-empt infringement, the court called for a blanket ban on all websites identified by MSM. Further, the court directed the concerned authorities to ensure that ISPs complied with this order and blocked the websites presently mentioned by MSM, as well as other websites which may subsequently be notified by MSM.

Where the Court went Wrong

The court stated that MSM successfully established a prima facie case, and on that basis granted a sweeping injunction ordering the blocking of 471 second-level domains. I’d like to point out numerous flaws with the order:

1. Unsatisfactory “prima facie case”
In my opinion, the court could have scrutinised the list of websites provided by MSM more carefully. There is nothing in the order to suggest that evidence was proffered by MSM in support of the list. The order reveals that the list was prepared by MarkScan, a “consulting boutique dedicated to (the client’s) IP requirements in the cyberspace and the Indian sub-continent.” At a cursory glance, the list throws up names such as docs.google.com, goo.gl and ad.ly (which provide only URL shortening services), torrent indexing websites, IP addresses, online file streaming websites, etc. Evidently, perfectly legitimate websites have been targeted by an ill-conducted search and a shoddily prepared list, which may lead to the blocking of legitimate content in the absence of any verification by the court. 471 of the 472 websites mentioned in the first list are second-level domains, and 23 websites have been listed twice.

2. Generic order which abysmally fails to identify specific infringing URLs

Out of the 472 websites (list provided in the order by MarkScan)-

471 are file streaming websites, video sharing websites, file lockers, URL shorteners, file storage websites; only one is a specific URL [http://www.24livestreamtv.com/brazil-2014-fifa-world-cup-football-%20%C2%A0%C2%A0live-streaming-online-t ].

Breakdown of the list in the July 23rd Order

The order calls for the blocking of complete websites. This is in complete contradiction to the 2012 Madras High Court order in R K Productions v BSNL, which held that only the particular URL where the infringing content is kept should be blocked, rather than the entire website. The Madras High Court order had also made it mandatory for complainants to provide the exact URLs where they find illegal content, so that ISPs could block only that content and not the entire site. MSM did not adhere to this, and I have serious doubts whether the defendants brought the distinguishing Madras High Court judgment to the attention of the bench. The entire situation is akin to MarkScan scamming MSM by providing their client a dodgy list, and MSM scamming the court and the public at large.
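
The distinction the Madras High Court drew can be made concrete with a small illustrative script in Python. The sample entries below are hypothetical stand-ins, not items from the actual MarkScan list; the point is simply that a bare second-level domain blacks out an entire site, while a URL with a path points at one specific page.

from urllib.parse import urlparse

entries = [
    "docs.google.com",                                        # bare domain
    "goo.gl",                                                 # bare domain
    "http://www.example-streams.com/fifa-2014-final-live",    # hypothetical specific URL
]

def classify(entry):
    # Entries without a scheme are treated as bare host names.
    parsed = urlparse(entry if "//" in entry else "//" + entry)
    if parsed.path in ("", "/"):
        return "entire website would be blocked"
    return "only this URL would be blocked"

for e in entries:
    print(e, "->", classify(e))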

3. Lack of Transparency – Different blocking messages on different ISPs

The message displayed uniformly on blocked websites was:

"This website/URL has been blocked until further notice either pursuant to court orders or on the directions issued by the Department of Telecommunications."

I observed that a few websites showed the message “Error 404 – File or Directory not found” without the blocking message (above) on the network provider Reliance, and the same Error 404 with the blocking message on the network provider Airtel, highlighting the non-transparent manner in which the order is being implemented. Further, neither message indicates when the block will end.
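
The inconsistency described above could be checked systematically from connections on different ISPs. The following is a minimal sketch using the third-party requests library; the URL is a placeholder, not one of the sites actually listed in the order. Running the same script once on each network and comparing the output would show how differently the block is presented.

import requests  # third-party; install with: pip install requests

def probe(url):
    # Fetch the URL on the current network and show how the block (if any) is served.
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
        snippet = resp.text[:200].replace("\n", " ")
        print(url, "-> HTTP", resp.status_code, ":", snippet)
    except requests.RequestException as exc:
        print(url, "-> request failed:", exc)

probe("http://example-blocked-site.com/")  # placeholder URL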

Legality of John Doe orders in Website Blocking

It is pertinent to reiterate the ‘misuse’ of John Doe orders to block websites in India. The judiciary has erred in applying the John Doe order to protect copyrighted content on the internet. While the R K Productions v BSNL case appears reasonable in permitting the blocking of URL-specific content only, the application of John Doe orders to block entire websites remains unfounded in law. Ananth Padmanabhan, in a three-part study (Parts I, II and III), had earlier analysed the improper use of John Doe injunctions to block websites in India. The John Doe order was conceived by US courts to pre-emptively remedy the irreparable damage suffered by copyright holders on account of unidentified/unnamed infringers. The interim injunction allowed the collection of evidence from infringers, who were later identified as specific defendants, and the final relief was granted accordingly. The courts routinely advocated judicious use of the order, and ensured that the identified defendants were informed of their right to apply to the court within twenty-four hours for a review of the order and of their right to claim damages in an appropriate case. The John Doe order, therefore, applied against primary infringers per se.

On the other hand, whilst extending this remedy in India, the courts have unfortunately placed the onus on the conduit, i.e. the ISP, to block websites. This is tantamount to providing final relief at the interim stage, since all content definitely gets blocked; however, it hardly helps in identifying the actual infringer on the internet. The court is prematurely doling out blocking remedies to the complaining party which, legally speaking, should be meted out only at the final disposition of the case, after careful examination of the evidence available. Thus, the intent of a John Doe order is miserably lost in such an application. Moreover, this places an arbitrary amount of power in the hands of intermediaries, since ISPs may or may not choose to approach the court for directions to block only the specific URLs that provide access to infringing content.

CIS 12A Certificate

by Prasad Krishna last modified Jul 10, 2014 05:38 AM

PDF document icon CIS 12a certificate.pdf — PDF document, 264 kB (270571 bytes)

CIS PAN Copy

by Prasad Krishna last modified Jul 10, 2014 05:49 AM

PDF document icon CIS pan copy.pdf — PDF document, 2609 kB (2672616 bytes)

Registration under FCRA

by Prasad Krishna last modified Jul 10, 2014 05:51 AM

PDF document icon CIS FCRA registration certficate.pdf — PDF document, 2005 kB (2053255 bytes)

GNI and IAMAI Launch Interactive Slideshow Exploring Impact of India's Internet Laws

by Jyoti Panday last modified Jul 17, 2014 12:01 PM
The Global Network Initiative and the Internet and Mobile Association of India have come together to explain how India’s Internet and technology laws impact economic innovation and freedom of expression.

The Global Network Initiative (GNI) and the Internet and Mobile Association of India (IAMAI) have launched an interactive slide show exploring the impact of existing Internet laws on users and businesses in India. The slide show, created by Newsbound and to which the Centre for Internet and Society (CIS) has contributed its comments, explains the existing legislative mechanisms prevalent in India, maps the challenges of the regulatory environment and highlights areas where such mechanisms can be strengthened.

Foregrounding the difficulties of content regulation, the slides aim to inform users and the public of the constraints of the current legal mechanisms in place, including safe harbour and notice and take down provisions. Highlighting Section 79(3) and the Intermediary Liability Rules issued in 2011, the slide show identifies some of the challenges faced by Internet platforms, such as the broad interpretation of the legislation by the executive branch.

Challenges highlighted in the slide show include uniform Terms of Service that do not consider the type of service being provided by the platform, uncertain requirements for taking down content, and compliance obligations related to information disclosure. The slide show further addresses the issues of over-compliance and misuse of the legal notice and take down system introduced under Section 79 of the Information Technology Act and the Information Technology (Intermediaries Guidelines) Rules, 2011.

The Rules were created with the purpose of providing guidelines for the ‘post-publication redressal mechanism’ for expression as envisioned in the Constitution of India. However, since their introduction, the Rules have been criticised extensively by both the national and the international media for not conforming to principles of natural justice and freedom of expression. Critics have pointed out that by not recognising the different functions performed by different intermediaries, and by not providing safeguards against the misuse of such mechanisms for suppressing legitimate expression, the Rules have a chilling effect on freedom of expression.

Under the current Rules, the third-party provider/creator of information is not given a chance to be heard by the intermediary, nor is the intermediary required to give a reasoned decision to the creator whose content has been taken down. The take down procedure also does not have any provision for restoring removed information, such as a counter notice filing mechanism or an appeal to a higher authority. Further, the content criteria for removal include terms like 'disparaging' and 'objectionable', which are not defined and prima facie seem to be beyond the reasonable restrictions envisioned by the Constitution of India. With uncertainty in the content criteria and no safeguards to prevent abuse, complainants may send frivolous complaints and suppress legitimate expression without any fear of repercussions.

Most importantly, the redressal mechanism under the Rules shifts the burden of censorship, previously the exclusive domain of the judiciary or the executive, onto private intermediaries. Often, private intermediaries do not have sufficient legal resources to subjectively determine the legitimacy of a legal claim, resulting in over-compliance to limit liability. The slide show cites the 2011 CIS research carried out by Rishabh Dara, which examined whether the Rules lead to a chilling effect on online free expression, to highlight the issue of over-compliance and self-censorship.

The initiative is timely, given the change of guard in India, and stresses not only the economic impact of fixing the Internet legal framework, but also the larger impact on users’ rights and freedom of expression. The initiative calls for a legal environment for the Internet that enables innovation, protects the rights of users, and provides clear rules and regulations for businesses large and small.

See the slideshow here: How India’s Internet Laws Can Help Propel the Country Forward

Other GNI reports and resources:

Closing the Gap: Indian Online Intermediaries and a Liability System Not Yet Fit for Purpose

Strengthening Protections for Online Platforms Could Add Billions to India’s GDP

First Privacy and Surveillance Roundtable

by Anandini K Rathore last modified Aug 09, 2014 04:13 AM
The Privacy and Surveillance Roundtables are a CIS initiative, in partnership with the Cellular Operators Association of India (COAI), as well as local partners. From June 2014 – November 2014, CIS and COAI will host seven Privacy and Surveillance Roundtable discussions across multiple cities in India. The Roundtables will be closed-door deliberations involving multiple stakeholders.

Through the course of these discussions we aim to deliberate upon the current legal framework for surveillance in India, and to discuss possible alternative frameworks. The provisions of the draft CIS Privacy Bill 2013, the International Principles on the Application of Human Rights to Communication Surveillance, and the Report of the Group of Experts on Privacy will be used as background material and entry points into the discussion. The recommendations and dialogue from each roundtable will be compiled and submitted to the Department of Personnel and Training.

The first of seven proposed roundtable meetings on “Privacy and Surveillance” conducted by the Centre for Internet and Society in collaboration with the Cellular Operators Association of India and the Council for Fair Business Practices was held in Mumbai on the 28th of June, 2014.

The roundtable’s discussion centered on the Draft Privacy Protection Bill formed by CIS in 2013, which contains provisions on the regulation of interception and surveillance and its implications on individual privacy. Other background documents to the event included the Report of the Group of Experts on Privacy, and the International Principles on the Application of Human Rights to Communications Surveillance.

Background and Context

The Chair of the Roundtable began by giving a brief background of surveillance regulation in India, focusing primarily on telegraphic, postal and electronic surveillance.

Why a surveillance regime now?

A move to review the existing privacy laws in India came in the wake of the Indo-EU Free Trade Agreement negotiations, where a data adequacy assessment conducted by the European Commission found India’s data protection policies and practices inadequate for India to be granted data secure status by the EU. The EU’s data protection regime is, in contrast, fairly strong, governed by the framework of the EU Data Protection Directive, 1995.

In response to this, the Department of Personnel and Training, which drafted the Right to Information Act, 2005 and the Whistleblowers Protection Act, 2011, was given the task of drafting a Privacy Bill. Although the initial draft of the Bill was made available to the public, as per reports the second draft has been shared selectively with certain security agencies and not with service providers or the public.

Discussion

The Chair began the discussion by posing certain preliminary questions to the Roundtable:

  • What should a surveillance law contain and how should it function?
  • If the system is warrant based, who would be competent to execute it?
  • Can any government department be allowed a surveillance request?


A larger question posed was whether the concerns and questions above would be rendered irrelevant by the possible enforcement of a Central Monitoring System in the near future. As per reports, the Central Monitoring System would allow the government to intercept communications independently, without going through service providers, thus, in effect, shielding such interception from the public entirely.

The CIS Privacy Protection Bill’s Regulatory Mechanism

The discussion then focused on the type of regulatory mechanism that a privacy and surveillance regime in India should have in place. The participants did not favour either a quasi-judicial body or a self-regulatory system, opting instead for a strict regulatory regime.

The CIS Draft Privacy Protection Bill proposes a regime that consists of a Data Protection Regulation Authority that is similar to the Telecom Regulatory Authority of India, including the provision for an appellate body. The Bill envisions that the Authority will act as an adjudicating body for all complaints relating to the handling of personal data in addition to forming and reviewing rules on personal data protection.

Although the Draft Bill dealt with privacy and surveillance under one regulatory authority, the Chair proposed a division between the two frameworks, as the former is governed primarily by civil law while the latter is regulated by criminal law and procedure. Though a 2014 leaked version of the government’s Privacy Bill addresses surveillance and privacy under one regulation, as per reports the Department of Personnel and Training is also considering creating two separate regulations: one for data protection and one for surveillance.

Authorities in Other Jurisdictions

The discussion then moved to comparing the regulatory authorities within other jurisdictions and the procedures followed by them. The focus was largely on the United States and the United Kingdom, which have marked differences in their privacy and surveillance systems.

In the United Kingdom, for example, a surveillance order is reviewed by an independent Commissioner, followed by an appellate tribunal which has the power to award compensation. In contrast, the United States follows a far less transparent system, which governs foreigners and citizens under separate legislation. A secret court was set up under FISA; an independent review process, however, exists for such orders within this framework.

The Authority for Authorizing Surveillance in India

The authority for regulating requests for interceptions of communication under the Draft CIS Privacy Protection Bill is a magistrate. As per the procedure, an authorised officer must approach the Magistrate for approval of a warrant for surveillance. Two participants felt that a Magistrate is not the appropriate authority to regulate surveillance requests as it would mean vesting power in a few people, who are not elected via a democratic process.

In the present regime, the interception of telecommunications under Indian law is governed by the Telegraph Act, 1885 and the Telegraph Rules, 1951. Section 5(2) of the Act and Rule 419A of the Telegraph Rules permit interception only after an order of approval from the Home Secretary of the Union Government or of the State Government, which, in urgent cases, can be granted by an officer of Joint Secretary level or above in the Ministry of Home Affairs of the Union or the State Government concerned.

Although most participants felt confident that a judicial authority rather than an executive authority would serve as the best platform for regulating surveillance, there was debate on what level of Magistrate would be apt for receiving and authorising surveillance requests, or whether the judge should be a Magistrate at all. Certain participants felt that even District Magistrates would not have the competence and knowledge to adjudicate on these matters. The possibility of making High Court judges the authorities responsible for authorising surveillance requests was also suggested. To this suggestion, participants noted that there are currently not enough High Court judges for such a system.

The next issue raised was whether the judges of the surveillance system should be independent, and, if the orders of the courts are to be kept secret, whether this would compromise the independence of such regulators. As part of this discussion, questions were raised about the procedures under the Foreign Intelligence Surveillance Act, the US legislation governing the surveillance of foreign individuals, and whether such secrecy could be afforded in India. During the discussions, certain stakeholders felt that a system of surveillance regulation in India should be kept secret in the interests of national security. Others highlighted that this is the existing practice in India, giving the example of Intelligence Bureau and Research and Analysis Wing orders, which are completely private, adding, however, that none of the surveillance regulations in India have provisions on disclosure.

When can interception of communications take place?

The interception of communications under the CIS Privacy Protection Bill is governed by the submission of a report by an authorised officer to a Magistrate, who issues a warrant for such surveillance. Under the relevant provision, the threshold for warranting surveillance is suspicious conduct. Several participants felt that the term ‘suspicious conduct’ was too wide and discretionary to justify the interception of communication and suggested a far higher threshold for surveillance. Citing the Amar Singh case, a participant stated that a good way to ‘raise the bar’ and avoid frivolous interception requests would be to require officers submitting interception requests to file affidavits. A participant suggested that authorising officers could be held responsible for issuing frivolous interception requests. Some participants agreed, but felt that there is a need for a higher and stronger standard for interception before provisions are made for penalising an officer. As part of this discussion, a stakeholder added that the term “person”, i.e. the subject of surveillance, needed to be defined within the Bill.

The discussion then moved to comparing other jurisdictions’ thresholds for permitting surveillance. The Chair explained that the US follows the rule of probable cause, which requires a reasonable suspicion coupled with circumstances that could prove such a suspicion true. The UK follows the standard of ‘reasonable suspicion’, a comparatively lower threshold than probable cause. In India, the standard for telephonic interception under the Telegraph Act, 1885 is the “occurrence of any public emergency or in the interest of public safety”, on the satisfaction of the Home Secretary/administrative officer.

The participants, while rejecting the standard of ‘suspicious conduct’ and agreeing that a stronger threshold was needed, were unable to offer other possible alternatives.

Multiple warrants, Storing and sharing of Information by Governmental Agencies

The provision for interception in the CIS Privacy Protection Bill stipulates that a request for surveillance should be accompanied by warrants previously issued with respect to that individual. The recovery of prior warrants implies the sharing of information about surveillance warrants across multiple governmental agencies, which, certain participants agreed, could prevent the duplication of warrants.

Participants briefly discussed how the Central Monitoring System will allow for a permanent log of all surveillance activities to be recorded and stored, and the privacy implications of this. It was noted that as per reports, the hardware purported to be used for interception by the CMS is Israeli, and is designed to store a log of all metadata.

A participant stated that the automation component of the Centralised Monitoring System may be positive, considering that authentication of requests, i.e. tracing the source of an interception, may be made easier with such a system.

Conditions prior to issuing warrant

The CIS Privacy Protection Bill states that a Magistrate should be satisfied of either a reasonable threat to national security, defence or public order; or a cognisable offence, the prevention, investigation or prosecution of which is necessary in the public interest. When discussing these standards, certain participants felt that the inclusion of ‘cognizable offences’ was too broad, whereas others suggested that the offences that would necessarily require an interception should be listed. This led to further discussion on what kind of categorisation should be followed and whether there would be any requirement for disclosure if the list were narrowed down to graver and more serious offences.

The Chair also posed the question of whether the term ‘national security’ should be elaborated upon, highlighting the lack of a definition in spite of two landmark Supreme Court judgments on national security legislations, the Terrorist and Disruptive Activities Act, 1985 and the Prevention of Terrorism Act, i.e. Kartar Singh v Union of India [1] and PUCL v Union of India.[2]

Kinds of information and degree of control

The discussion then focused on the kinds of information that can be intercepted and collected. A crucial distinction was made here between content data and metadata, the former being the content of the communication itself and the latter being information about the communication. As per Indian law, only content data is regulated and not metadata. On whether a warrant should be issued by a Magistrate in his chambers or in camera, most participants agreed that in chambers was the better alternative. However, under the CIS Privacy Protection Bill, in-chamber proceedings have been made optional, which stakeholders agreed should be discretionary depending on the case and its sensitivity.

Evidentiary Value

The foundation of this discussion, the Chair noted, is the evidentiary value given to information collected from interception of communications. For instance, the United States follows the exclusionary rule, also known as the “fruit of the poisonous tree rule”, where evidence collected from an improper investigation discredits the evidence itself as well as further evidence found on the basis of it.

Indian courts, however, allow for the admission of evidence collected through improper means, as do courts in the UK. In Malkani v State of Maharashtra[3] the Supreme Court stated that an electronically recorded conversation can be admissible as evidence, and that evidence collected from an improper investigation can be relied upon for the discovery of further evidence, thereby negating the application of the exclusionary rule.

Emergent Circumstances: who should the authority be?

The next question posed to the participants was who the apt authority would be to allow surveillance in emergent circumstances. The CIS Privacy Protection Bill places this power with the Home Secretary, stating that if the Home Secretary is satisfied of a grave threat to national security, defence or public order, he can permit surveillance. The existing law under the Telegraph Act, 1885 uses the term ‘unavoidable circumstances’ for such situations, without elaborating on what this amounts to; in these cases an officer not below the rank of a Joint Secretary evaluates the request. In response to this question, a stakeholder suggested that the issuing authority should be limited to the police and administrative services alone. Under the CIS Privacy Protection Bill, a review committee for such decisions relating to interception comprises senior administrative officials at both the Central and State Government level. A participant suggested that the review committee should also include the Defence Secretary and the Home Secretary.

Sharing of Information

The CIS Privacy Protection Bill states that information gathered from surveillance should not be shared amongst persons, with the exception that if the information is sensitive in terms of national security or prejudicing an investigation, an authorised officer can share it with an authorised officer of any other competent organisation.

A participant highlighted that this provision is lacking an authority for determining the sharing of information. Another participant noted that the sharing of information should be limited amongst certain governmental agencies, rather than to ‘any competent organisation.’

Proposals for Telecommunication Service Providers

In the Indian interception regime, although surveillance orders are passed by the Government, the actual interception of communication is done by the service provider. Certain proposals have been introduced to protect service providers from liability. For example, an execution provision ensures that a warrant is not served on a service provider more than seven days after it is issued. In addition, an indemnity provision prevents any action being taken against a service provider in a court of law, and indemnifies them against any losses that arise from the execution of the warrant, but not from acts outside the scope of the warrant. During discussions, stakeholders felt that the standard should be a blanket indemnity without any conditions, to reassure service providers.

Under the Indian interception regime, a service provider must also ensure the confidentiality of the content and metadata of intercepted communications. To this, a participant suggested that in situations of information collection, a service provider may have a policy for obtaining customer consent prior to the interception. The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 are clearer in this respect, as they allow for the disclosure of information to governmental agencies without consent.

Another participant mentioned that laws on information disclosure and collection, such as the IT Act, the Right to Information Act and the recently enacted Whistleblower’s Protection Act, 2011, contain inconsistencies and need to be harmonised. Other stakeholders agreed, though they stated that surveillance regulations should prevail over other laws in case of any inconsistency.

Conclusions

The inputs from the Bombay Roundtable seem to point towards a more regulated approach, with the addition of a review system to enhance accountability. While most stakeholders agreed that national security is a criterion that takes precedence over concerns of privacy vis-à-vis surveillance, there is a concomitant need to define the limits of permissible interception. The view here is that a judicial model would prove to be a better system than the executive one; however, there is no clear answer as yet on who would constitute this model. While the procedure for interception was covered in depth, the nature of the information itself was covered only briefly, and more discussion would be welcome in forthcoming sessions.

Click to download the Report (PDF, 188 Kb)


[1]. 1994 4 SCC 569.

[2]. (1997) 1 SCC 301.

[3]. [1973] 2 S.C.R. 417.

Bombay Report

by Prasad Krishna last modified Jul 18, 2014 06:03 AM

PDF document icon Bombay Report.pdf — PDF document, 188 kB (192615 bytes)

Private Censorship and the Right to Hear

by Chinmayi Arun last modified Jul 22, 2014 05:57 AM
Very little recourse is available against publishers or intermediaries if these private parties censor an author’s content unreasonably.

The article was published in the Hoot on July 17, 2014 and also mirrored on the website of Centre for Communication Governance.


DNA newspaper's removal of Rana Ayyub's brave piece on Amit Shah, with no explanation, is shocking. It is reminiscent of the role that media owners played in censoring journalists during the Emergency, prompting L.K. Advani to say, "You were asked to bend, but you crawled."

The promptitude with which some media houses are weeding out political writing that might get them into trouble should make us reconsider the way we think about the freedom of the press. Discussions of press freedom often concentrate on the individual's right to speak, but might be more effective if they also accommodated another perspective - the audience's right to hear.

It is fortunate that Ayyub's piece was printed and reached its audience before attempts were made to bury it. Its removal was counterproductive, making DNA's decision a good example of what is popularly known as the Streisand Effect (when an attempt to censor or remove information has the unintended consequence of publicising the information even more widely).

The controversy that has emerged from DNA removing the article has generated much wider attention for it now that it has appeared on multiple websites, its readership expanding as outrage at its removal ricochets around the Internet.

This incident is hardly the first of its kind. Just weeks ago, news surfaced of Rajdeep Sardesai being pressurised to alter his news channel's political coverage before the national election. The Mint reported that the people pressurising Sardesai wanted a complete blackout of Kejriwal and the Aam Aadmi Party from CNN-IBN. Had he capitulated, significant news of great public interest would have been lost to a large audience. CNN-IBN's decision would have been put down to editorial discretion, and we the public would have been none the wiser.

Luckily for their audience, Sardesai and Sagarika Ghose quit the channel that they built from scratch instead of compromising their journalistic integrity.  However, the league of editors who choose to crawl remains large, their decisions protected by the Indian constitution.

The freedom of the press in India only protects the press from the government's direct attempts to influence it. Both big business and the state have far more instruments at their disposal than just direct ownership or censorship diktats. These include withdrawal of lucrative advertisements, defamation notices threatening journalists with enormous fines and imprisonment, and sometimes even physical violence. Who can forget how Tehelka magazine's exposure of large-scale government wrongdoing resulted in its financiers being persecuted by the Enforcement Directorate, with one of them even being jailed for some time?

The instruments of harassment work best when the legal notices are sent to third party publishers or intermediaries. Unlike the authors who may wish to defend their work or modify it a little to make it suitable for publication, a publishing house or web platform would usually prefer to avoid expensive litigation. Third party publishers will often remove legitimate content to avoid spending time and money fighting for it. Pressurising them is a fairly effective way to silence authors and journalists.

Consider the different news outlets and publishing houses that control what reaches us as news or commentary. If they can be forced to bury content, citing editorial discretion, consider what this means for the quality of news that reaches the Indian public. Indira Gandhi understood this weakness of the press, and successfully controlled the Indian media by managing the proprietors.

Although media ownership still remains concentrated in a few hands, the disruptive element that still offers some hope of free public dialogue is the Internet where, through blogs, small websites and social media, journalists can still get access to the public sphere. This means that when DNA deletes Rana Ayyub's article, copies of it are immediately posted in other places.

However, online journalism is also vulnerable. Online intermediaries which receive content blocking and take down orders tend to over-comply rather than risk litigation. Like publishers, these intermediaries can easily prevent speakers from reaching their audiences. Just look at the volume of information online that is dependent on third party intermediaries such as Rediff, Facebook, WordPress or Twitter. The only thing that keeps the state and big business from easily controlling the flow of information on the Internet is that it is difficult to exert cross-border pressure on online intermediaries located outside India.

However, the ease with which most of the mainstream media is controlled makes it easy to construct a bubble of fiction around audiences, leaving them in blissful ignorance of how little they really know. Very little recourse is available against publishers or intermediaries if these private parties censor an author's content unreasonably. Unlike state censorship, private censorship is invisible, and is protected by the online and offline intermediaries' right to their editorial choices.

Ordinarily, there is nothing wrong with editorial discretion or even with a media house choosing a particular slant to its stories. However, it is important for the public to have access to a healthy range of perspectives and interests, with a diversity of content. If news of public significance is regularly filtered out, it affects the state of our democracy. Citizens cannot participate in governance without access to important information.

It is, therefore, vital to acknowledge the harm caused by private censorship. A democracy is endangered when a few parties disproportionately control access to the public sphere. We need to think of how to ensure that the voices of journalists and scholars reach their audience. Media freedom should be seen in the context of the right of the audience, the Indian public, to receive information.

UK’s Interception of Communications Commissioner — A Model of Accountability

by Joe Sheehan last modified Jul 24, 2014 06:08 AM
The United Kingdom maintains sophisticated electronic surveillance operations through a number of government agencies, ranging from military intelligence organizations to police departments to tax collection agencies. However, all of this surveillance is governed by one set of national laws outlining specifically what surveillance agencies can and cannot do.

The primary law that governs government investigations is the Regulation of Investigatory Powers Act 2000, abbreviated as RIPA 2000.

To ensure that this law is being followed and surveillance operations in the United Kingdom are not conducted illegally, RIPA 2000 Part I establishes an Interception of Communications Commissioner, who is tasked with inspecting surveillance operations, assessing their legality, and compiling an annual report for the Prime Minister.

On April 8, 2014 the current Commissioner, Rt Hon. Sir Anthony May, laid the 2013 annual report before the House of Commons and the Scottish Parliament. In its introduction, the report notes that it is responding to concerns raised as a result of Edward Snowden’s actions, especially misuse of powers by intelligence agencies and invasion of privacy. The report also acknowledges that the laws governing surveillance, and particularly RIPA 2000, are difficult for the average citizen to understand, so it includes a narrative outline of relevant provisions in an attempt to make the legislation clear and accessible. However, the report points out that while the Commissioner had complete access to any documents or investigative records necessary to construct the report, he was unable to publish surveillance details indiscriminately, due to confidentiality concerns in a report being issued to the public. (It is worth noting that though the Commissioner is one man, he has an entire agency working under him, so it is possible that he himself did not do or write all that the report attributes to him.) As a whole, the report outlines a series of thorough audits of surveillance operations, and concludes that the overwhelming majority of surveillance in the UK is conducted entirely legally, and that the small minority of incorrectly conducted surveillance appears to be unintentional. Looking beyond the borders of the United Kingdom, the report represents a powerful model, for governments across the globe, of an initiative to ensure transparency in surveillance efforts.

The Role of the Commissioner

The report begins in the first person, by outlining the role of the Commissioner. May’s role, he writes, is primarily to audit the interception of data, both to satisfy his own curiosity and to prepare a report for the Prime Minister. Thus, his primary responsibility is to review the lawfulness of surveillance actions, and to that end, his organization possesses considerable investigative powers. He is also tasked with ensuring that prisons are legally administered, though he makes this duty an afterthought in his report.

Everyone associated with surveillance or interception in the government must disclose whatever the Commissioner asks for. In short, he seems well equipped to carry out his work. The Commissioner has a budget of £1,101,000, almost all of which (£948,000) is dedicated to staff salaries.

The report directly addresses questions about the Commissioner’s ability to carry out his duties. Does the Commissioner have full access to whatever materials or data he needs to conduct his investigations? The report asks, and answers bluntly: yes. It is likely, the report concludes, that the Commissioner also has sufficient resources to adequately carry out his duties. Is the Commissioner fully independent of other government interests? Again, he answers his own question: yes. Finally, the report asks if the Commissioner should be more open in his reports to the public about surveillance, and he responds that the sensitivity of the material prohibits him from disclosing more, but that the report adequately addresses public concern regardless. There is a degree to which this question-and-answer routine seems self-congratulatory, but it is good to see that the Commissioner is considering these questions as he carries out his duties.

Interception of Communications

The report first goes into detail about the Commissioner’s audits of communications interception operations, where interception means wiretapping or reading the actual content of text messages, emails, or other communications, as opposed to the metadata associated with communications, such as timestamps and numbers contacted. In this section, the report outlines the steps necessary to conduct an interception: an interception requires a warrant, and only a Secretary of State (one of five officials) can authorize an interception warrant. Moreover, the only people who can apply for such warrants are the directors of various intelligence, police, and revenue agencies. In practice, the Secretaries of State have senior staff who read warrant applications and present those they deem worthy to the Secretary for his or her signature, as their personal signature is required for authorization.

For a warrant to be granted, it must meet a number of criteria. First, interception warrants must be necessary in the interests of national security, to prevent or detect serious crime, or to safeguard the economic wellbeing of the UK. Additionally, a warrant can be granted if it is necessary for similar reasons in other countries with mutual assistance agreements with the UK. Warrants must be proportionate to the ends sought. Finally, interception warrants for communications inside the UK must specify either a person or a location where the interception will take place. Warrants for communications outside of the UK require no such specificity.
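The criteria above can be read as a simple conjunctive checklist. As a purely illustrative sketch, and not the statutory test, the following Python snippet encodes the decision logic described in this section; all field names are hypothetical.

    # Illustrative checklist of the warrant criteria described above.
    # Field names are hypothetical; this is not a legal test.
    ALLOWED_PURPOSES = {
        "national_security",
        "prevent_or_detect_serious_crime",
        "economic_wellbeing_of_uk",
        "mutual_assistance_agreement",
    }

    def warrant_may_be_granted(application):
        necessary = application["purpose"] in ALLOWED_PURPOSES
        proportionate = application["proportionate_to_ends"]
        # Domestic interceptions must name a person or a location;
        # warrants for communications outside the UK need not.
        specific = (not application["inside_uk"]) or bool(
            application.get("person") or application.get("location")
        )
        return necessary and proportionate and specific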

In 2013, 2,760 interception warrants were authorized, 19% fewer than in 2012. The Commissioner inspected 26 different agencies and examined 600 different warrants throughout 2013. He gave inspected agencies a report on his findings after each inspection, so they could see whether or not they were following the law. He concluded that the agencies that undertake interception “do so lawfully, conscientiously, effectively, and in our national interest,” and that warrants adequately met the application and authorization requirements outlined in RIPA 2000.

Communications Data

The report goes on to discuss communications data collection, where communications data refers to metadata: not the content of the communications itself, but data associated with it, such as call durations or a list of email recipients. The Commissioner explains that a metadata warrant is easier to obtain than an interception warrant. Designated officials within the respective surveillance organizations read and grant metadata warrant applications, instead of one of the Secretaries of State who grant interception warrants. Additionally, the requirements for a metadata warrant are looser than for interception warrants. Metadata warrants must still be necessary, but for a broader range of purposes, ranging from collecting taxes and protecting public health to any purpose specified by a Secretary of State.

The relative ease of obtaining a metadata warrant is consistent with a higher number of warrants approved. In 2013, 514,608 metadata warrants were authorized, down from 570,135 in 2012. Local law enforcement applied for 87.5% of those warrants while intelligence agencies accounted for 11.5%. Only a small minority of requests was sent from the revenue office or other departments.

The purposes of these warrants were similarly concentrated. 76.9% of metadata warrants were issued for prevention or detection of crime. Protecting national security justified 11.4% of warrants and another 11.4% of warrants were issued to prevent death or injury. 0.2% of warrants were to identify people who had died or otherwise couldn’t identify themselves, 0.11% of warrants were issued to protect the economic wellbeing of the United Kingdom, and 0.02% of warrants were associated with tax collection. The Commissioner identified less than 0.01% of warrants as being issued in a miscarriage of justice, a very low proportion.

The Commissioner inspected metadata surveillance efforts, conducting 75 inspections in 2013, and classified the practices of those operations inspected as good, fair or poor. 4% of operations had poor practices. He noticed two primary errors. The first was that data was occasionally requested on an incorrect communications address, and the second was that he could not verify that some metadata was not being stored past its useful lifetime. May highlighted that RIPA 2000 does not give concrete lengths for which data should be stored, as Section 15(3) states only that data must be deleted “as soon as there are no longer grounds for retaining it as necessary for any of the authorized purposes.”  He noted that he was only concerned because some metadata was being stored for longer periods than associated interception data. As May put it, “I have yet to satisfy myself fully that some of these periods are justified and in those cases I required the agencies to shorten their retention periods or, if not, provide me with more persuasive reasons.” The Commissioner seems determined that this practice will either be eliminated or better justified to him in the near future.

Indian Applications

The United Kingdom’s Interception of Communications Commissioner has powers similar to those of the Indian Privacy Commissioner suggested by the Report of the Group of Experts on Privacy. Similar to the United Kingdom, it is recommended that a Privacy Commissioner in India have investigative powers in the execution of its charter, and that the Privacy Commissioner represent citizen interests, ensuring that data controllers are in line with the stipulated regulations. The Report also broadly states that “with respect to interception/access, audio & video recordings, the use of personal identifiers, and the use of bodily or genetic material, the Commissioner may exercise broad oversight functions.” In this way, the Report touches upon the need for oversight of surveillance, and suggests that this responsibility may be undertaken by the Privacy Commissioner, but does not clearly place this responsibility with the Privacy Commissioner. This raises the question of whether India should adopt a model similar to the United Kingdom’s and create a privacy commissioner, responsible primarily for overseeing and enforcing data protection standards, and a separate surveillance commissioner, responsible for overseeing and enforcing standards relating to surveillance measures. When evaluating the different approaches, a number of considerations should be kept in mind:

  1. Law enforcement and security agencies are the exception to a number of data protection standards including access and disclosure.
  2. There is a higher level of ‘sensitivity’ around issues relating to surveillance than data protection and each needs to be handled differently.
  3. The ‘competence’ required to deliberate on issues related to data protection is different from the ‘competence’ required to deliberate on issues related to surveillance.

Additionally, this raises the question of whether India needs a separate regulation governing data protection and a separate regulation governing surveillance.

Allegations of Wrongdoing

It is worth noting that though May describes surveillance operations conducted in compliance with the law, many other organizations have accused the UK government of abusing its powers and spying on citizens and internet users in illegal ways. GCHQ, the government’s communications surveillance center, has come under particular fire. The organization has been accused of indiscriminate spying and of introducing malware into citizens’ computers, among other things. Led by the NGO Privacy International, internet service providers around the world have recently lodged complaints against GCHQ, alleging that it uses malicious software to break into their networks. Many of these complaints are based on the information brought to light in Edward Snowden’s document leaks. Privacy International alleges that malware distributed by GCHQ enables access to any stored content, logging keystrokes and “the covert and unauthorized photography or recording of the user and those around him,” which they claim is similar to physically searching through someone’s house unbeknownst to them and without permission. They also accuse GCHQ malware of leaving devices open to attacks by others, such as identity thieves.

Snowden’s files also indicate a high level of collaboration between GCHQ and the NSA. According to the Guardian, which analyzed and reported on many of the Snowden files, the NSA has in past years paid GCHQ to conduct surveillance operations through the US program called Prism. Leaked documents report that the British intelligence agency used Prism to generate 197 intelligence reports in the year to May 2012. Prism is not mentioned at all in the Interception of Communications Commissioner’s report. In fact, while the report’s introduction explains that it will attempt to address details revealed in Snowden’s leaked documents, very little of what those documents indicate is later referenced in the report. May ignores the plethora of accusations of GCHQ wrongdoing.

Thus, while May’s tone appears genuine and sincere, the details of his report do little to dispel fears of widespread surveillance. It is unclear whether May is being totally forthcoming in his report, especially when he devotes so little energy to directly responding to concerns raised by Snowden’s leaks.

Conclusion

May wrapped up his report with some reflections on the state of surveillance in the United Kingdom. He concluded that RIPA 2000 protects consumers in an internet age, though small incursions are imaginable, and especially lauded the law for its technological neutrality. That is, RIPA 2000 is a strong law because it deals with surveillance in general and not with any specific technologies, like telephones or Facebook, whose use changes over time. The Commissioner was also satisfied that powers were not being misused in the United Kingdom. He reported a small number of unintentional errors and some confusion about the duration of data retention. However, any data storage mistakes seemed to stem from an unspecific law.

Despite May’s report of surveillance run by the books, other UK groups have accused GCHQ, the government’s communications surveillance center, of indiscriminate spying and introducing malware into citizens’ computers. Privacy International has submitted a claim arguing that a litany of malware is employed by GCHQ to log detailed personal data such as keystrokes. The fact that May’s report does little to disprove these claims casts the Commissioner in an uncertain light. It is unclear whether surveillance is being conducted illegally or, as the report suggests, all surveillance of citizens is being conducted as authorized.

Still, the concept of a transparency report and audit of a nation’s surveillance initiatives is a step towards government accountability done right, and should serve as a model for enforcement methods in other nations. May’s practice of giving feedback to the organizations he inspects allows them to improve, and the public report he releases serves as a deterrent to illegal surveillance activity. The Interception of Communications Commissioner, provided he reports truthfully and accurately, is what gives the safeguards built into the UK’s interception regime strength and accountability. In other nations looking to establish privacy protections, a similar role would help balance surveillance provisions with safeguards and accountability, ensuring that citizens’ fundamental rights, including the right to privacy, are not compromised.

IAMCR 2014 Conference

by Prasad Krishna last modified Jul 28, 2014 08:08 AM

PDF document icon IAMCR2014.pdf — PDF document, 7513 kB (7693808 bytes)

Thinking about Internet Regulation

by Prasad Krishna last modified Jul 29, 2014 09:26 AM

PDF document icon Thinking about Internet Regulation.pdf — PDF document, 115 kB (118269 bytes)

Innovation Ecosystem

by Prasad Krishna last modified Jul 29, 2014 09:36 AM

PDF document icon Innovation Ecosystem.pdf — PDF document, 2738 kB (2804264 bytes)

CIS Cybersecurity Series (Part 18) – Lobsang Gyatso Sither

by Purba Sarkar last modified Jul 31, 2014 05:34 AM
CIS interviews Lobsang Gyatso Sither, Tibetan field coordinator and activist, as part of the Cybersecurity Series.

“The digital arms trade and the digital arms race, that is going on right now is a huge problem, in terms of what is happening around the world. A lot of people talk about digital arms like it’s just digital technology; it’s just surveillance technology; it’s just censorship technology; it’s just technology; it doesn’t kill anyone, but the fact of the matter is that it does kill. It’s as bad as a gun; it’s as bad as a weapon. It's the same thing in my opinion and it has to be restricted; it has to be curtailed, it has to be controlled so that it doesn’t go to places where there are no human rights and where there are rampant human rights violations. People know what it is going to be used for and it is going to be used for human rights violations and that is something that has be kept in mind before the whole aspect of digital arms trade and it has to be treated as any other arms trade.”

Centre for Internet and Society presents its eighteenth installment of the CIS Cybersecurity Series. 

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic. 

Lobsang Gyatso Sither is a Tibetan born in exile dedicated to increasing cybersecurity among Tibetans inside Tibet and in the diasporas. He has helped to develop community-specific technologies and educational content and deploys them via training and public awareness campaigns at the grassroots level. Lobsang works with key communicators and organizations in the Tibetan community, including Voice of Tibet Radio and the Tibetan Centre for Human Rights and Democracy.

 

This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

CIS Cybersecurity Series (Part 19) – Lobsang Sangay

by Purba Sarkar last modified Jul 31, 2014 05:40 AM
CIS interviews Lobsang Sangay, Prime Minister of the Central Tibetan Administration, as part of the Cybersecurity Series.

“If there is already freedom of speech in a democratic country, then anonymous commentary could be misplaced in many instances. Because if the country is democratic, it has freedom of speech, and the laws protect you when you speak out. Then I think the citizens also have responsibilities. Democracy not only means freedom, but it also means duties. Your duty is to say who you are and criticize the government, or the employer, or the policy or whatever, in your name. So anonymity is misplaced in that sense, in most of the instances. Having said that, if a particular country or a government restricts freedom of speech, then you have no option but to be anonymous  because just by speaking out, you are committing a crime and hence you are liable. For example, in Tibet, even if you paste a poster on the wall, saying just two words ‘human right’, you will be arrested and you will go behind bars. Even if you just shout a slogan, you will be arrested and you will be in prison.”

Centre for Internet and Society presents its nineteenth installment of the CIS Cybersecurity Series. 

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic. 

Dr. Lobsang Sangay took office as Sikyong (Prime Minister) of the Central Tibetan Administration in Dharamsala, India, in 2011. He was born in a Tibetan refugee settlement in northern India. As a Fulbright scholar, he was the first Tibetan to receive a doctorate from the Harvard Law School in 2004. He worked as a senior fellow at Harvard University for a number of years during which he organized landmark conferences between the Dalai Lama and Chinese scholars. An expert on Tibet, international human rights law, democratic constitutionalism and conflict resolution, Dr Sangay has lectured at various universities and think-tanks throughout Europe, Asia and North America.


This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

Second Privacy and Surveillance Roundtable

by Anandini K Rathore last modified Aug 09, 2014 04:10 AM
On July 4, 2014, the Centre for Internet and Society in association with the Cellular Operators Association of India organized a privacy roundtable at the India International Centre. The primary aim was to gain inputs on what would constitute an ideal surveillance regime in India.

Introduction: About the Privacy and Surveillance Roundtables

The Privacy and Surveillance Roundtables are a CIS initiative, in partnership with the Cellular Operators Association of India (COAI), as well as local partners. From June 2014 to November 2014, CIS and COAI will host seven Privacy and Surveillance Roundtable discussions across multiple cities in India. The Roundtables will be closed-door deliberations involving multiple stakeholders. Through the course of these discussions we aim to deliberate upon the current legal framework for surveillance in India, and discuss possible frameworks for surveillance in India. The provisions of the draft CIS Privacy Bill 2013, the International Principles on the Application of Human Rights to Communication Surveillance, and the Report of the Group of Experts on Privacy will be used as background material and entry points into the discussion. The recommendations and dialogue from each roundtable will be compiled and submitted to the Department of Personnel and Training.

The second Privacy and Surveillance Roundtable was held in New Delhi at the India International Centre by the Centre for Internet and Society in collaboration with the Cellular Operators Association of India on the 4th of July, 2014.

The aim of the discussion was to gain inputs on what would constitute an ideal surveillance regime in India, working with the CIS Draft Privacy Protection Bill, the Report of the Group of Experts on Privacy prepared by the Justice Shah committee, and the International Principles on the Application of Human Rights to Communications Surveillance.

Background and Context: Privacy and Surveillance in India

The discussion began with the Chair giving an overview of the legal framework that governs communications interception under Indian law. The interception of telecommunications is governed by Section 5(2) of the Telegraph Act, 1885 and Rule 419A of the Telegraph Rules, 1951. The framework under the Act has remained the same since it was drafted in 1885. An amendment to the Telegraph Rules in 1996, in light of the directions given under PUCL v Union of India, was possibly the first change to this colonial framework, barring a brief amendment in 1961.

During the drafting of the Act, the only two Indian members of the drafting committee objected to the wide scope given to interception under Section 5(2). In 1968, however, the 30th Law Commission Report, studying Section 5(2), came to the conclusion that the standards in the Act may be unconstitutional, given that grounds such as ‘public emergency’ were too wide in nature, and called for a relook at the provision.

The interception of postal mail is governed by Section 26 of the Post Office Act, 1898, while the interception of modern forms of communication that use electronic information and traffic data is governed by Sections 69 and 69B of the Information Technology Act, 2000. The interception of telephonic conversations is governed by Section 5(2) of the Indian Telegraph Act, 1885 and Rule 419A of the Telegraph Rules framed under it.

What ought the law to be?
With the passage of time, the Chair noted, the concept of the law has changed from its original colonial perspective. Cases such as Maneka Gandhi v Union of India highlighted that an acceptable law must be one that is ‘just, fair and reasonable’. From judgments such as these, one can infer that any surveillance law should not be arbitrary and must comply with the principles of criminal procedure. Although this is the ideal, recent matters at the heart of surveillance and privacy, such as the Nira Radia matter, currently sub judice, will hopefully clarify the scope of surveillance that is considered permissible in India.

Why is it important now?
In India, the need to adopt legislation on privacy came in the wake of the Indo-EU Free Trade Agreement negotiations, where a data adequacy assessment conducted by the European Commission showed that India’s data protection practices were weak. In response to this, the Department of Personnel and Training drafted a Privacy Bill, of which two drafts have been made, though the later draft has not been made available to the public.

The formulation of a privacy proposal in India is not entirely new. For example, in 1980, former Union minister V.N. Gadgil proposed a bill to limit reportage on public personalities. Much of this bill was based on a bill suggested in the House of Lords in 1960 by Lord Mancroft to prevent uncontrolled reporting. The Chair noted here that in India privacy has developed comprehensively as a concept largely in response to the reporting practices of the media.

Although the right to privacy has been recognised as an implicit part of the right to life under the Constitution, the National Commission to Review the Working of the Constitution, set up in February 2000, suggested the addition of a separate and distinct fundamental right to privacy under Article 21-B, along the lines of Article 8 of the European Convention on Human Rights.

While these are notable efforts in the development of privacy, the Chair raised the question of whether India is merely 'inheriting' reports and negotiations, without translating such standards into practice and law.

Discussions

Cloud-based storage and surveillance

Opening up the discussion on electronic interception, a participant asked about the applicability of a privacy regulation to cloud-based services. Cloud-based storage is of increasing relevance given that the cloud permits foreign software companies to store large amounts of customer information at little or no cost.

Indian jurisdiction, however, would be limited to servers that reside in India or to service providers whose communications originate or terminate in India. Moving the servers back to India is a possible solution; however, it could have negative economic implications. In terms of telecommunications, any communications that originate or terminate using Indian satellites are protected from foreign interception.

Before delving into further discussion, the Chair posed the question as to what kind of society we would like to live in, contrasting the individual-based society principle with the community-based principle. While the former is followed by most Western nations as a form of governance, Oriental and/or Asian traditions follow the community-based principle, where the larger focus is on community rights. However, it would be incorrect to say that the latter system does not protect rights such as privacy, as Western perceptions often seem to imply. For example, the Chair pointed out that the oldest Hindu laws, such as the Manu Smriti, protected personal privacy.

Regulatory models for surveillance


After the preliminary discussion, the Chair posed the fundamental question of how a government can regulate surveillance. During the discussion, a comparison was made between the UK model, the US modus operandi, i.e. the rule of probable cause coupled with exhaustion of other remedies, and the Indian rule based on Section 5(2) of the Telegraph Act, 1885. In the United States, wiretaps cannot be conducted without a judge’s authorization. For example, the Foreign Intelligence Surveillance Act, which governs the surveillance of foreign persons, has secret courts. In addition, a participant added that surveillance requests in the US are rarely, if ever, rejected. While on paper the US model seems acceptable, most participants were wary of the practicability of such a system in India, citing that a judiciary shielded entirely from public scrutiny cannot be truly independent. The UK follows an interception regime regulated by the Executive, the beginnings of which lay in its Telegraph Act of 1861, on which the Indian Telegraph Act is based. However, the interception regime of the UK has constantly changed with a steady re-evaluation of the law. Surveillance in the UK is regulated by the Regulation of Investigatory Powers Act of 2000 (RIPA); in addition, draft bills are pending on data retention and on the admissibility of intercepted communications as evidence.

In contrast, India follows an executive framework, where the Home Secretary gives authorization for conducting wiretaps. In emergent circumstances this procedure can be bypassed, and an officer not below the rank of a Joint Secretary can pass an order.

Participants agreed that the current system is grossly inadequate, and the Chair asked whether a system based on both a warrant and a judicial order would be appropriate for India.

Considering the judicial model as a possible option, participants discussed the level of the judiciary apt for regulating matters of surveillance in India. While participants felt that High Court judges would be favourable, the immense backlog at the High Court level and the lack of judges pose a challenge and risk inefficiency. If one were to accept the magistrate system, the Chair added, there are executive magistrates within the hierarchy who are not judicial officers. To this, a participant posed the question of whether a judicial model is truly a workable one and whether it should be abandoned. In response, a participant reiterated the Maneka Gandhi ratio that “A law must be just, fair and reasonable and be established to the satisfaction of a judicially trained mind.”

It was then discussed how the alternative executive model is followed in India, and how sources disclose that police officers often use (and sometimes misuse) dedicated powers under Section 5(2), despite Rule 419A having narrowed down the scope of authority. A participant disagreed here, stating that most orders for the interception of communications are passed by the Home Secretary.

When the People’s Union for Civil Liberties challenged Section 5(2) of the Telegraph Act, the Supreme Court held that it did not stand the test of Maneka Gandhi and proposed the setting up of a review committee under its guidelines, which was institutionalised following an amendment in 2007 to the Telegraph Rules.

Under Rule 419A, a review committee comprises officials such as the Cabinet Secretary, the Secretary of the Department of Telecommunications, the Secretary of the Department of Law and Justice and the Secretary of the Information Technology and Communication ministry at the Centre, and the Chief Secretary, the Law Secretary and an officer not below the rank of a Principal Secretary at the State level. A participant suggested that the Home Secretary should also be placed in the review committee to explain the reasons for allowing the interception.

Although Rule 419A states that the Review Committee sits twice a month, the actual review time, according to conflicting reports, is somewhere between a day and a week. The government mandates that such surveillance cannot continue for more than 180 days.

In contrast to the Indian regime, the UK has a Commissioner who reviews the reasons for the interception along with the volume of communication, among other elements. The reports of such interceptions are made public after the Commissioner decides whether they should be classified or declassified, and individuals can challenge such interception at the Appellate Tribunal.

A participant asked whether in India, such a provision exists for informing the person under surveillance about the interception. A stakeholder answered that a citizen can find out whether somebody is intercepting his or her communications via the government but did not elaborate on how.

Authorities for authorizing interception

On the subject of the regulatory model, a participant asked whether magistrates would be competent enough to handle matters on interception. It was pointed out that although this is subjective, it can be said that a lower court judge does not apply the principles of constitutional law, which include privacy, among other rights.

Having rejected the possibility of High Court judges earlier in the discussion, certain participants felt that setting up a tribunal to handle issues related to surveillance could be a good option, considering the subject matter and the specialisation of judges. Yet, it was pointed out that the problem with any judicial system is delay, which occurs not merely inordinately but strategically, with multiple applications being filed in multiple forums. In response, a participant suggested a more federal model with greater checks and balances, which certain others felt can only be found in an executive system.

The CIS Privacy Protection Bill and surveillance

Section 6 of the CIS Privacy Protection Bill lists the procedure for applying to a magistrate for a warrant for interception. One of the grounds listed in the Bill is the disclosure of all previously issued warrants with respect to the concerned person.

Under Section 7 of the Bill, cognisable offences that impact public interest are listed as grounds for interception. Considering the wide range of offences that are cognisable, there was debate on whether they all constitute serious enough offences to justify the interception of communications. For example, the bouncing of a cheque under the Negotiable Instruments Act is a cognisable offence in the public interest, but is it a serious enough offence to justify the interception of communications? How this should then be classified, so as to avoid arbitrary classifications while addressing national security, was another question raised by the Chair.

The example of Nira Radia and the fact that the income tax authorities requested the surveillance demonstrates the subsisting lack of a framework for limiting access to information in India. A participant suggested that a solution could be to define the government agencies empowered to intercept communications and identify the offences that justify the interception of communications under Section 7 of the CIS Privacy Protection Bill.

During the discussion, it was pointed out that the Government Privacy Bill, 2011 gives a broad mandate to conduct interception that goes beyond the reasonable restrictions under Article 19(2) of the Constitution. For example, alongside grounds for interception like friendly relations with other States, security and public order, there are also vague grounds such as the protection of the rights and freedoms of others and any other purpose mentioned within the Act.

Although the Justice Shah report did not recommend that “any other purpose within the Act” be a ground for interception, it did recommend “protection of the freedom of others” continue to be listed as a permissible ground for the interception of communications.

Meta-data and surveillance


Under Section 17 of the Draft Bill, metadata can be intercepted on grounds of national security or the commission of an offence. Metadata is not protected under Rule 419A of the Telegraph Rules, and a participant asked why this is the case. The Chair then posed to the participants the question of whether there should be a distinction between the two forms of data at all.

While participants agreed that Telecommunication Service Providers store metadata and not content data, there is a need, according to certain participants, to circumscribe the limits of permissible metadata collection. These participants advocated a uniform standard of protection for both metadata and content data, whereas another participant felt that there needs to be a distinction between the two. Certain participants also stressed that defining what amounts to metadata is essential in this regard.

The Chair moved on to discussing the provisions relating to communication service providers under Chapter V. It was noted that this section will be irrelevant however, if the Central Monitoring System comes into force, as it will allow interception to be conducted by the Government independent of service providers.

Data Retention and Surveillance


Data can be classified into two kinds for the purposes of interception, i.e. content data and metadata. Content data represents the content of the communication itself, whereas metadata is information about the communication.
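To make the distinction concrete, the following is a minimal illustrative sketch, in Python, of how a single intercepted call record might be split into content data and metadata; the field names are hypothetical and are not drawn from any Indian regulation or from the CIS Bill.

    # Illustrative only: hypothetical fields, not drawn from any statute or licence.
    intercepted_record = {
        "content": "audio of the conversation, or the body of an email or SMS",
        "metadata": {
            "caller": "+91XXXXXXXXXX",      # originating number
            "recipient": "+91YYYYYYYYYY",   # terminating number
            "start_time": "2014-07-04T10:15:00+05:30",
            "duration_seconds": 180,
            "cell_tower_id": "DEL-0421",    # approximate location
        },
    }

    # Content data is the communication itself; metadata describes it.
    content_data = intercepted_record["content"]
    meta_data = intercepted_record["metadata"]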

Telecommunications service providers are legally required to retain metadata for the previous year under the Unified Access Service Licence (UASL) terms, although no maximum time limit on retention has been legally established.

A participant highlighted that the principle of necessity has been ignored completely in India and there is currently a practice of mass data collection. In particular, metadata is collected freely by companies, as it is not considered an invasion of privacy.

Another stakeholder mentioned that nodal officers set up under every Telecommunication Service Provider are summoned to court to explain how the intercepted data was obtained. The participant mentioned that Telecom Service Providers are reluctant to explain the process of each interception, questioning why they must be involved in judicial proceedings regarding the admissibility of evidence when they merely supply the data.

A participant asked where a grievance redressal mechanism could fit within the current surveillance framework in India. In response, it was noted that with a Magistrate model, a separate procedure cannot be prescribed as the Code of Criminal Procedure would apply. However, if tribunals were to be created, a procedure that deals with the concerns of multiple stakeholders would be apt.

A doubt raised by a stakeholder was whether prior sanction could be invoked by public servants against surveillance. Its applicability must be seen on a case-to-case basis, although for the most part, prior sanction would not be applicable, considering that public officials accused of offences would not be entitled to it.

Section 14 of the CIS Privacy Protection Bill prohibits the sharing of information collected by surveillance with persons other than authorised authorities in an event of national security or the commission of a cognisable offence. Participants agreed that the wording of the section was too wide and could be misused.

A participant also pointed out that in practice, such parameters on disclosure are futile as even in civil family matters, metadata is shared between the service provider and the individuals that request it.

With relation to metadata, a participant suggested a maximum retention period of two years. As pointed out earlier, a service provider must retain Call Detail Records for at least one year; however, there is no limit placed on retention, and destruction of the data is left to the discretion of the service provider. Participants generally agreed that a great deal more clarity is needed, as the UASL currently states merely that Internet Protocol Detail Records (IPDR) should be maintained for a year.
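As a purely illustrative sketch of what a bounded retention rule could look like in practice, the Python snippet below purges records older than a hypothetical one-year minimum / two-year maximum window; the field names and the two-year ceiling are assumptions drawn from the discussion above, not from any existing licence condition.

    from datetime import datetime, timedelta, timezone

    # Hypothetical retention window discussed above: records are kept for
    # at least one year, and destroyed after two years at the latest.
    MIN_RETENTION = timedelta(days=365)
    MAX_RETENTION = timedelta(days=730)

    def purge_expired(records, now=None):
        """Return only those metadata records still within the retention ceiling.

        Each record is assumed to be a dict with a timezone-aware
        'created_at' datetime.
        """
        now = now or datetime.now(timezone.utc)
        return [r for r in records if now - r["created_at"] <= MAX_RETENTION]

    # Example: a record created 800 days ago would be dropped,
    # while one created 100 days ago would be kept.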

Duties of the Service Provider


Under the CIS Privacy Protection Bill, the duties of Telecommunication Service Providers broadly include ‘measures to protect privacy and confidentiality’, without further elaboration. A participant mentioned that applicable and specific privacy practices for different industries need to be defined. Another participant stressed that such practices should be based on principles rather than on technology, citing rapidly evolving technology and the obsolete government standards that ISPs are currently meant to follow as security practices.

Another area that needs attention according to a participant is the integrity of information after interception is conducted. Participants also felt that audit practices by Telecommunication Service Providers should be confined to examining the procedures followed by the company, and not examine content, which is currently the practice according to other participants.

A participant also mentioned that standards need not be prescribed for telcos, considering that the Department of Telecommunications conducts technical audits. Another participant felt that the existing system of audits is inadequate and that perhaps a different model standard should be suggested. The Chair suggested that a model akin to the Statement on Auditing Standards, with trained persons acting as auditors, could fare better and give security to telcos by ensuring immunity from proceedings based on compliance with the standards.

The next issue discussed was whether surveillance requests can be ignored by telcos, and whether telcos can be held liable for repeatedly ignoring interception requests. A stakeholder replied that although there are no rules for such compliance, a hierarchical acquiescence exists which negates any flexibility.

Admissibility of Evidence


The significance given to intercepted communications as evidence was the next question put forth by the Chair. For example, in the US the ‘fruit of the poisonous tree’ rule is followed, where evidence that has been improperly obtained is inadmissible in law, as is further evidence found on the basis of it. In India, however, intercepted communications are accorded full evidentiary value, irrespective of how such evidence is procured. The 1972 Supreme Court judgment in Malkani v State of Maharashtra reiterated a seminal UK judgment, Kuruma, Son of Kanju v. R, which stated that if evidence is admissible it is irrelevant how it was obtained.

Participants suggested more interaction with the actual investigative process of surveillance, which includes prosecutors and investigators to gain a better understanding of how evidence is collected and assessed.

Conclusions

The Roundtable in Delhi was not a discussion on surveillance trapped in theory but a practical exposition of the realities of governance and surveillance. There seemed to be two perspectives on the regulatory model, both supported with workable solutions, although the overall agreement was on an organised executive model with accountability and a review system. In addition, inputs on technology and its bearing on the surveillance regime were informative. A clear difference of opinion was presented on the kind of protection metadata should be accorded. In addition, feedback from stakeholders on how surveillance is conducted at the service provider level highlights the need for an overhaul of the regime, incorporating multiple stakeholder concerns.


1994 4 SCC 569

The definition of telegraph was expanded by the Telegraph Laws (Amendment) Act, 1961 under Section 3(1AA) to: ‘telegraph’ means any appliance, instrument, material or apparatus used or capable of use for transmission or reception of signs, signals, writing, images and sounds or intelligence of any nature by wire, visual or other electro-magnetic emissions, radio waves or Hertzian waves, galvanic, electric or magnetic means.

Explanation.—’Radio waves’ or ‘Hertzian waves’ means electromagnetic waves of frequencies lower than 3,000 giga-cycles per second propagated in space without artificial guide;]

1978 AIR 597

Art 21-B: “Every person has a right to respect for his private and family life, his home and his correspondence.” Accessed at <http://lawmin.nic.in/ncrwc/finalreport/v1ch3.htm>

Article 8 of the European Convention on Human Rights mentions

1. Everyone has the right to respect for his private and family life, his home and his correspondence.

2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals or for the protection of the rights and freedoms of others.

Article 8 was invoked in Rajagopal v State of Tamil Nadu (1995 AIR 264)

PUCL v Union of India, (1997) 1 SCC 301

IPDR measures bandwidth and monitors internet traffic.

[1955] A.C. 197

BPM Agenda

by Prasad Krishna last modified Aug 06, 2014 06:15 AM

PDF document icon BPM14_Agenda.pdf — PDF document, 551 kB (565214 bytes)

Telecom Chapters

by Prasad Krishna last modified Aug 19, 2014 03:47 AM

ZIP archive icon Telecom Chapters.zip.zip — ZIP archive, 2468 kB (2527810 bytes)

Privacy in Healthcare: Policy Guide

by Tanvi Mani last modified Aug 31, 2014 03:18 PM
The Health Policy Guide seeks to understand the legal regulations governing data flow in the health sector, particularly hospitals, and how these regulations are implemented. Towards this objective, the research reviews data practices in a variety of public and private hospitals and diagnostic labs. The research is based on legislation, case law, publicly available documents, and anonymous interviews.

Click to download the PDF (320 Kb)


Introduction

To date, there exists no universally acceptable definition of the right to privacy. It is a continuously evolving concept whose nature and extent are largely context driven. There are numerous aspects to the right to privacy, each different from the other in terms of the circumstances in which it is invoked. Bodily privacy, however, is to date the most guarded facet of this vastly expansive right. The privacy over one’s own body, including the organs, genetic material and biological functions that make up one’s health, is an inherent right that does not, as in the case of other forms of privacy such as communication or transactional privacy, emanate from the State. It is a right that has its foundations in the Natural Law conceptions of the Right to Life, which, although regulated by the State, can at no point be taken away by it except under extreme circumstances of a superseding Right to Life of a larger number of people.

The deliberation leading to the construction of a universally applicable Right to Privacy has, up until now, been framed only in terms of its interpretation as an extension of the Fundamental Right to Life and Liberty guaranteed under Article 21, and of the freedoms of expression and movement under Articles 19(1)(a) and (b) of the Constitution of India. While this may be a valid interpretation, it narrows the ambit of the right to one that can only be exercised against the State. The Right to Privacy, however, has much larger implications in spheres that are often removed from the State. There is thus a pressing need to create an efficient and durable structure of law and policy that regulates the protection of privacy in institutions that may not always be agents of the State.

It is in this regard that the following analysis studies the existing conceptions of privacy in the Healthcare sector. It aims to study the existing mechanisms of privacy protection and their pragmatic application in everyday practices. Further, it determines definitive policy gaps in the existing framework and endeavors to provide effective recommendations to not only redress these shortcomings but also create a system that is efficient in its fulfillment of the larger objective of the actualization of the Right to Privacy at an individual, state and institutional level.

Purpose

The purpose of this research study is to formulate a comprehensive guide that maps the synthesis, structure and implementation of privacy regulations within the healthcare sector in India. It traces the domestic legislation pertaining to various aspects of the healthcare sector and the specific provisions of the law that facilitate the protection of the privacy of individuals who furnish their personal information as well as genetic material to institutions of healthcare, either for the purpose of seeking treatment or to contribute to research studies. It is however imperative that the nature and extent of the information collected be restricted through the establishment of requisite safeguards at an institutional level that percolate down to everyday practices of data collection, handling and storage within healthcare institutions. The study thus aims to collate the existing systems of privacy protection in the form of laws, regulations and guidelines and compare these with actual practices in government and private hospitals and diagnostic laboratories to determine whether these laws are in fact effective in meeting the required standards of privacy protection. Further, the study also broadly looks at International practices of privacy protection and offers recommendations to better the existing mechanisms of delimiting unnecessary intrusions on the privacy of patients.

Importance

The Indian healthcare sector, although at par with international standards in its methods of diagnosis, treatment and the use of contemporary technology, is still nascent in the nature and extent of its interaction with the law. A number of aspects of healthcare lie on the somewhat blurred line between the interest of the public and the sole right of the individual seeking treatment. One such aspect is the slowly evolving right to privacy. The numerous facets of this right have come to the fore largely through unique case law reflective of a dynamic social structure, one that seeks to reconcile the socio-economic rights that once governed society with the individual interests that it has slowly come to realize. The right of an individual to decide whether to disclose the nature of his disease, the liberty of a woman not to be compelled to undergo a blood test, the bodily autonomy to decide whether or not to bear children, decisional privacy with regard to the termination of a pregnancy, and the custodial rights of two individuals to their child are contentious aspects of healthcare that have constructed the porous interface between the right to privacy and the need for medical treatment. It is in this context that this study delves into the existing basic structure of domestic legislation, case law and regulations and their subsequent application in order to determine important gaps in the formulation of law and policy. The study thus aims to draw relevant conclusions to fill these gaps, through recommendations sourced from international best practice, in order to construct a broad framework upon which future policy considerations and amendments to the existing law can be based.

Methodology

This research study was undertaken in two major parts. The first part assesses domestic legislation and its efficacy in the current context. This is done by identifying the relevant provisions within each Act that are in consonance with the broader privacy principles highlighted in the A.P. Shah Committee report on privacy protection[1]. This part of the research paper is based on secondary sources, both books and online resources. The second part of the paper analyses the actual practices with regard to the assimilation, organization, use and storage of personal data in government and private hospitals and diagnostic laboratories. Three private hospitals, a prominent government hospital and a diagnostic laboratory were taken into consideration for this study. The information was provided by the concerned personnel at the medical records departments of these institutions through a survey conducted on the condition of anonymity. The information provided was analyzed and collated according to the compliance of these institutions' practices with the principles of privacy envisioned in the Report of the Group of Experts on Privacy.

The Embodiment of Privacy Regulation within Domestic Legislation

This section of the study analyses the viability of an approach that takes into account the efficacy of domestic legislation in regulating practices pertaining to the privacy of individuals in the healthcare sector. This approach perceives the letter and spirit of the law as the foundational structure upon which internal practices, self-regulation and the effective implementation of policy considerations aimed at creating an atmosphere of effective privacy regulation take shape within institutions that offer healthcare services. To this effect, domestic legislation that provides for the protection of a patient's privacy has been examined. The law has been further studied with respect to its tendency to percolate into the everyday practices, regulations and guidelines that private and government hospitals adhere to. The extent of its permeation into actual practice, in light of its efficacy in fulfilling the preambulatory objectives of ensuring safe and unobtrusive practices within which a patient is allowed to recover and seek treatment, has also been examined.

The term ‘Privacy’ is used in a multitude of domestic legislations, primarily in the context of the fiduciary relationship between a doctor and a patient. This fiduciary relationship emanates from a reasonable expectation of mutual trust between the doctor and his patients and is established through the Indian Medical Council Act, 1956, specifically Section 20A of the Act, which provides for the code of ethics that a doctor must adhere to at all times. Privacy within the healthcare sector includes a number of aspects, including but not limited to informational privacy (e.g., confidentiality, anonymity, secrecy and data security); physical privacy (e.g., modesty and bodily integrity); associational privacy (e.g., intimate sharing of death, illness and recovery); proprietary privacy (e.g., self-ownership and control over personal identifiers, genetic data, and body tissues); and decisional privacy (e.g., autonomy and choice in medical decision-making).

Privacy violations stem from policy and information gaps: violations in the healthcare sector that stem from policy formulation as well as implementation gaps[2] include the disclosure of personal health information to third parties without consent, inadequate notification to a patient of a data breach, unlimited or unnecessary collection of personal health data, collection of personal health data that is not accurate or relevant, failure to specify the purpose of collecting data, refusal to provide medical records upon request by the client, provision of personal health data for public health, research, and commercial uses without de-identification of the data, and improper security standards, storage and disposal. The disclosure of personal health information has the potential to be embarrassing, stigmatizing or discriminatory.[3] Furthermore, various goods such as employment, life insurance, and medical insurance could be placed at risk[4] if the flow of medical information were not restricted.[5]

Disclosure of personal health information is permitted and does not amount to a violation of privacy in the following situations: 1) during referral; 2) when demanded by the court or by the police on a written requisition; 3) when demanded by insurance companies, as provided by the Insurance Act, when the patient has relinquished his rights on taking the insurance; 4) when required under specific provisions of workmen's compensation cases, consumer protection cases, or for income tax authorities;[6] 5) disease registration; 6) communicable disease investigations; 7) vaccination studies; or 8) drug adverse event reporting.[7]

The following domestic legislations have been studied and the relevant provisions of each Act highlighted in order to analyse their compliance with the basic principles of privacy as laid out in the A.P. Shah Committee report on privacy.

Mental Health Act, 1987[8]
The provisions under the Act pertaining to the protection of the privacy of the patient have been examined. The principles embodied within the Act include aspects of the law that determine the nature and extent of oversight exercised by the relevant authorities over the collection of information, the limitation on the collection of data and the restrictions on the disclosure of the data collected. The principle of oversight is embodied within the provisions that allow for the inspection of records in psychiatric hospitals and nursing homes only by officers authorized by the State Government.[9] The limitation on the collection of information is imposed through the monthly inspection of living conditions by a psychiatrist and two social workers. This includes analyzing the living conditions of every patient and the administrative processes of the psychiatric hospital and/or psychiatric nursing home.[10] Additionally, visitors must maintain a book recording their observations and remarks.[11] Medical certificates may be issued by a doctor, containing information regarding the nature and degree of the mental disorder as reasons for the detention of a person in a psychiatric hospital or psychiatric nursing home.[12] Lastly, the disclosure of personal records of any facility under this Act by inspecting officers is prohibited.[13]

Pre-Conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994 [14]
The Act was instituted in light of the prevalent public interest consideration of preventing female foeticide. However, it is imperative that the provisions of the Act stop short of unnecessarily intrusive techniques and do not violate the basic human requirement of privacy in an inherently personal sphere. The procedure that a mother has to follow in order to avail of pre-natal diagnostic testing requires mandatory consent and the disclosure of her age, abortion history and family history. These conditions require a woman to reveal sensitive information concerning family history of mental retardation or physical deformities.[15] A special concern for privacy and confidentiality should be exercised with regard to the disclosure of genetic information.[16]

Medical Termination of Pregnancy Act, 1971 [17]
Although the right to an abortion is afforded to a woman within the construct of her inherent right to bodily privacy, decisional privacy (e.g., autonomy and choice in medical decision-making) is not afforded to patients and their families with regard to determining the sex of the baby. The sections of the Act that have been examined lay down the provisions available within the Act to facilitate the protection of a woman's right to privacy during the possible termination of a pregnancy. These include the principles pertaining to the choice and consent of the patient to undergo the procedure, a limit on the amount of information that can be collected from the patient, the prevention of disclosure of sensitive information and the security measures in place to prevent unauthorized access to this information. The Medical Termination of Pregnancy Regulations, 2003 supplement the Act and provide relevant restrictions within everyday practices of data collection, use and storage in order to protect the privacy of patients. The Act mandates the written consent of the patient in order to facilitate an abortion. Consent implies that the patient is aware of all her options and has been counselled about the procedure, the risks and post-abortion care.[18] The Act prohibits the disclosure of matters relating to treatment for termination of pregnancy to anyone other than the Chief Medical Officer of the State.[19] The register of women who have terminated their pregnancies, as maintained by the hospital, must be destroyed on the expiry of a period of five years from the date of the last entry.[20] The Act also emphasizes the security of the information collected. The medical practitioner assigns a serial number to the woman terminating her pregnancy.[21] Additionally, the admission register is stored in the safe custody of the head of the hospital.[22]

Indian Medical Council (Professional conduct, Etiquette and Ethics) Regulations, 2002 (Code of Ethics Regulations, 2002)
The Medical Council of India (MCI) Code of Ethics Regulations[23] set the professional standards for medical practice. These provisions regulate the nature and extent of doctor-patient confidentiality. They also establish universally recognized norms pertaining to consent to a particular medical procedure and set the institutionally acceptable limit for intrusive procedures or the gathering of excessively personal information when it is not mandatorily required for the said procedure. The provisions addressed under these regulations pertain to the security of the information collected by medical practitioners and the nature of doctor-patient confidentiality.

Physicians are obliged to protect the confidentiality of patients during all stages of the procedure and with regard to all aspects of the information provided by the patient to the doctor, including information relating to their personal and domestic lives.[24] The only exceptions to this mandate of confidentiality are if the law requires the revelation of certain information, or if there is a serious and identifiable risk to a specific person and/or community, as in the case of a notifiable disease.

Ethical Guidelines for Biomedical Research on Human Subjects [25]
The provisions for the regulation of privacy pertaining to biomedical research include aspects of consent as well as a limitation on the information that may be collected and its subsequent use. These provisions aim to regulate the protection of privacy during clinical trials and other methods of research. The principle of informed consent is an integral part of this set of guidelines. The privacy-related information included in the participant/patient information sheet includes: the choice to prevent the use of their biological sample, the extent to which confidentiality of records can be maintained and the consequences of a breach of confidentiality, possible current and future uses of the biological material and of the data to be generated from the research, whether the material is likely to be used for secondary purposes or shared with others, and the risk of discovery of biologically sensitive information and publications, including photographs and pedigree charts.[26] The Guidelines require special concern for privacy and confidentiality when conducting genetic family studies.[27] The protection of privacy and maintenance of confidentiality, specifically surrounding identity and records, must be maintained when using the information or genetic material provided by participants for research purposes.[28] The Guidelines require investigators to maintain the confidentiality of epidemiological data due to the particular concern that some population-based data may also have implications for issues like national security or public safety.[29] All documentation and communication of the Institutional Ethics Committee (IEC) must be dated, filed and preserved according to written procedures. Data of individual participants can be disclosed in a court of law under the orders of the presiding judge, if there is a threat to a person's life, by communication to the drug registration authority regarding cases of severe adverse reaction, and by communication to the health authority if there is a risk to public health.[30]

Insurance Regulatory and Development Authority (Third Party Administrators) Health Services Regulations, 2001
The provisions of these regulations that have been addressed within the scope of the study govern the practices of third party administrators (TPAs) within the healthcare sector so as to ensure their compliance with the basic principles of privacy. An exception to the confidentiality-of-information clause in the code of conduct requires TPAs to provide relevant information to any court of law/tribunal, the Government, or the Authority in the case of any investigation carried out or proposed to be carried out by the Authority against the insurance company, TPA or any other person, or for any other reason.[31] In July 2010, the IRDA notified the Insurance Regulatory and Development Authority (Sharing of Database for Distribution of Insurance Products) Regulations.[32] These regulations restrict referral companies from providing details of their customers without their prior consent.[33] TPAs must maintain the confidentiality of the data collected by them in the course of their agreements, maintain proper records of all transactions carried out on behalf of an insurance company, and refrain from trading in information and the records of their business.[34] TPAs must keep records for a period of not less than three years.[35]

IRDA Guidelines on Outsourcing of Activities by Insurance Companies [36]
These guidelines require the insurer to take appropriate steps to ensure that third party service providers protect confidential information of both the insurer and its clients from intentional or inadvertent disclosure to unauthorized persons.[37]

Exceptions to the Protection of Privacy
The legal provisions with regard to privacy, confidentiality and secrecy are often superseded by public interest considerations. The right to privacy, although recognized in the course of Indian jurisprudence and embodied within domestic legislation, is often overruled when faced with situations that involve the larger interest of a greater number of people. This is in keeping with India's policy goals as a social welfare state and aids in the effectuation of its utilitarian ideals, which do not allow individual interest to at any point surpass the interest of the masses.

Epidemic Diseases Act, 1897 [38]
Implicit within the formulation of this Act is the assumption that in the case of infectious diseases, the right to privacy of infected individuals must give way to the overriding interest of protecting public health.[39] This can be ascertained not only from the black letter of the law but also from its spirit. Thus, in both an absolute positivist and a more liberal interpretation, at the crux of the legislation lies the fundamental covenant of the preservation of public health, even at the cost of the privacy of a select few individuals.[40]

Policy and Regulations

National Policy for Persons with Disabilities, 2006[41]
The following provisions provide for the incorporation of privacy considerations into prevalent practices with regard to persons with disabilities. The National Sample Survey Organization collects the following information on persons with disabilities at least once in five years: the socio-economic and cultural context, the cause of disabilities, early childhood education methodologies and all matters connected with disabilities.[42] This data is collected by non-medical investigators.[43] There is thus an inherent limit on the information collected. Additionally, this information is used only for the purpose for which it has been collected.

The Special Employment Exchange, as established under The Persons with Disabilities (Equal Opportunities, Protection of Rights and Full Participation) Act, 1995, collects and furnishes information in registers regarding provisions for employment. Access to such data is limited to any person authorized by the Special Employment Exchange, as well as persons authorized by general or special order of the Government, to access, inspect, question and copy any relevant record, document or information in the possession of any establishment.[44] When conducting research on persons with disabilities, consent is required from the individual or their family members or caregivers.[45]

HIV Interventions
In 1992, the Government of India instituted the National AIDS Control Organization (NACO) for the prevention and control of AIDS. NACO aims to control the spread of HIV in India through the implementation of Targeted Interventions (TIs) for most at risk populations (MARPs), primarily sex workers, men who have sex with men, and people who inject drugs.[46] The Targeted Interventions system of testing under this organization has, however, raised numerous concerns about policy gaps in the maintenance of the confidentiality and privacy of persons living with HIV/AIDS. The shortcomings in the existing policy framework include the lack of a limitation on, and subsequent confidentiality of, the information collected. Project staff in TIs record the name, address and other contact information of MARPs and share this data with the Technical Support Unit and State AIDS Control Societies.[47] Proof of address and identity documents are required to get enrolled in government ART programs.[48] Peer-educators operate under a system known as line-listing, used to make referrals and conduct follow-ups. Peer-educators have to follow up with those who have not gone for testing at regular intervals.[49] This practice can result in peer-educators noticing and concluding that the names missing are those who have tested positive.[50] Although voluntary in nature, the policy encourages the fulfillment of numerical targets, and in doing so supports unethical ways of testing.[51]

The right to privacy is an essential requirement for persons living with HIV/AIDS due to the potentially stigmatizing and discriminatory impact of the revelation of this sensitive information, in any form.[52] The lack of privacy rights often fuels the spread of the disease and exacerbates its impact on high risk communities. Fears emanating from a privacy breach or a disclosure of data often deter people from getting tested and seeking medical care. The impacts of such disclosure of sensitive information, including the revelation of test results to individuals other than the person being tested, include low self-esteem, fear of loss of support from family and peers, loss of earnings especially for female and transgender sex workers, fear of incrimination for illicit sex or drug use, and the insensitivity of counselors.[53] HIV positive individuals live in constant fear of their positive status being leaked. They also shy away from treatment as they fear people might see them taking their medicines and thereby guess their status. Thus breaches in confidentiality and policy gaps in privacy regulation, especially with respect to diseases such as HIV, also prevent people from seeking out treatment.[54]

Case Law

The following cases have been used to deliberate upon important points of contention within the ambit of the implementation and impact of privacy regulations in the healthcare sector. This includes the nature and extent of privacy enjoyed by the patient and instances wherein the privacy of the patient can be compromised in light of public interest considerations.

Mr. Surupsingh Hrya Naik vs. State of Maharashtra,[55] (2007)

The decision in this case held that the RTI Act, 2005 would supersede the Medical Council Code of Ethics. The health records of an individual in judicial custody should be made available under the Act and can only be denied in exceptional cases, for valid reasons.

Since the Code of Ethics Regulations are only delegated legislation, it was held in Mr. Surupsingh Hrya Naik v. State of Maharashtra[56] that they would not prevail over the Right to Information Act, 2005 (RTI Act) unless the information sought falls under the exceptions contained in Section 8 of the RTI Act. This case dealt with the important point of contention of whether making health records public under the RTI Act would constitute a violation of the right to privacy. These health records were required to determine why the convict in question was allowed to stay in a hospital as opposed to prison. In this context the Bombay High Court held that the Right to Information Act supersedes the regulations that mandate the confidentiality of a person's, or in this case a convict's, medical records. It was held that the medical records of a person sentenced or convicted or remanded to police or judicial custody, if during that period such person is admitted to a hospital or nursing home, should be made available to the person asking for the information, provided such hospital or nursing home is maintained by the State, a Public Authority or any other Public Body. It is only in rare and exceptional cases, and for good and valid reasons recorded in writing, that the information may be denied.

Radiological & Imaging Association v. Union of India,[57] (2011)
On 14 January 2011 a circular was issued by the Collector and District Magistrate, Kolhapur requiring radiologists and sonologists to submit an on-line form 'F' under the PNDT Rules. This was challenged by the Radiological and Imaging Association, inter alia, on the ground that it violates the privacy of their patients. Deciding the above issue, the Bombay High Court held that the images stored in the silent observer are not transmitted on-line to any server and thus remain embedded in the ultra-sound machine. Further, the silent observer is to be opened only on the request of the Collector or the civil surgeon, in the presence of the concerned radiologist/sonologist/doctor in charge of the ultra-sound clinic. In light of these considerations, and the fact that the 'F' form submitted on-line goes only to the Collector and District Magistrate, there is no violation of the doctor's duty of confidentiality or the patient's right to privacy. It was further observed that the contours of the right to privacy must be circumscribed by the compelling public interest flowing through each and every provision of the PC&PNDT Act, when read against the background of the declining sex ratio over the last five decades.

The use of a silent observer system on a sonography machine has requisite safeguards and does not violate privacy rights. The declining sex ratio of the country was considered a compelling public interest that could supersede the right to privacy.

Smt. Selvi and Ors. v. State of Karnataka (2010)
The Supreme Court held that involuntary subjection of a person to narco analysis, polygraph test and brain-mapping violates the ‘right against self-incrimination' which finds its place in Article 20(3)[58] of the Constitution. [59] The court also found that narco analysis violated individuals’ right to privacy by intruding into a “subject’s mental privacy,” denying an opportunity to choose whether to speak or remain silent, and physically restraining a subject to the location of the tests and amounted to cruel, inhuman or degrading treatment.[60]

The Supreme Court found that Narco-analysis violated an individuals’ right to privacy by intruding into a “subject’s mental privacy,” denying an opportunity to choose whether to speak or remain silent.

Neera Mathur v. Life Insurance Corporation (LIC),[61] (1991)
In this case the plaintiff contested a wrongful termination after she availed of maternity leave. LIC required women applicants to furnish personal details such as their menstrual cycles, conceptions, pregnancies, etc. at the time of appointment. Such a requirement was held to go against the modesty and self-respect of women. The Court held that the termination was only because of the disclosures in the application, which were held to be intrusive, embarrassing and humiliating. LIC was directed to delete such questions.

The Court did not refer to the term privacy; it used the terms personal details, modesty and self-respect, but did not specifically link them to the right to life or any other fundamental right. These terms (modesty and self-respect) are not usually connected to privacy, although they may describe the harm which results from an intrusion into one's privacy.

The Supreme Court held that questions related to an individual's reproductive issues are personal details and should not be asked in service application forms.

Ms. X vs. Mr. Z & Anr.,[62] (2001)
In this case, the Delhi High Court held that an aborted foetus was not a part of the body of a woman and allowed the DNA test of the aborted foetus at the instance of the husband. The application for a DNA test of the foetus was contested by the wife on the ground of the right to privacy. In this regard the court noted that the Supreme Court had previously decided that a party may be directed to provide blood as a DNA sample but cannot be compelled to do so; the Court may only draw an adverse inference against a party who refuses to follow the direction of the Court in this respect. The position of the court in this case was that the claim that the preservation of the foetus in the laboratory of the All India Institute of Medical Sciences violates the petitioner's right to privacy could not be entertained, as the foetus had previously been voluntarily discharged from her body with her consent. It was the foetus that she herself had discharged that was sought to be subjected to a DNA test. Thus, in light of the particular facts and the context of the case, it was held that the petitioner did not have any right of privacy.

A woman’s right to privacy does not extend to a foetus, which is no longer a part of her body. The right to privacy may arise from a contract as well as a specific relationship, including a marital relationship. The principle in this case has been laid down in broad enough terms that it may be applied to other body parts which have been disassociated from the body of the individual.

It is important to note here that the fact that the Court is relying upon the principles laid down in the case of R. Rajagopal seems to suggest that the Court is treating organic tissue preserved in a public hospital in the same manner as it would treat a public document, insofar as the exception to the right to privacy is concerned.

B.K. Parthasarthi vs. Government of Andhra Pradesh,[63] (1999)
In this case, the Andhra Pradesh High Court was to decide the validity of a provision in the Andhra Pradesh Panchayat Raj Act, 1994 which stipulated that any person having more than two children should be disqualified from contesting elections. This clause was challenged on a number of grounds, including the ground that it violated the right to privacy. The Court, in deciding upon the right to privacy and the right to reproductive autonomy, held that the impugned provision, i.e. Section 19(3) of the said Act, does not directly compel anyone to stop procreation, but only disqualifies any person who is otherwise eligible to seek election to the various public offices coming within the ambit of the Andhra Pradesh Panchayat Raj Act, 1994, or declares persons already holding such offices to be disqualified from continuing in them if they procreate more than two children. Therefore, the submission made on behalf of the petitioners that the right to privacy is infringed is untenable and must be rejected.

The right of reproductive autonomy is a component of the right to privacy. A provision disqualifying a person from standing for elections due to the number of children he or she has had does not violate the right to privacy, as the object of the legislation is not to violate the autonomy of an individual but to mitigate population growth in the country. Measures to control population growth shall be considered legal unless they impermissibly violate a fundamental right.

Mr. X v. Hospital Z, Supreme Court of India,[64] (1998 and 2002)
The petitioner was engaged to be married and thereafter, during tests for another illness at the hospital, it was found that the petitioner was HIV positive. This information was released by the doctor to the petitioner's family and through them to the family of the girl to whom the petitioner was engaged, all without the consent of the petitioner. The Court held that:

“The Right to privacy is not treated as absolute and is subject to such action as may be lawfully taken for the prevention of crime or disorder or protection of health or morals or protection of rights and freedoms of others.”

The right to privacy is not absolute and is subject to such action as may be lawfully taken for the prevention of crime or disorder, or for the protection of health or morals, or for the protection of the rights and freedoms of others.

The decision in this case could be interpreted to extend the principle of disclosure to the person at risk to other communicable and life-threatening diseases as well. However, a positivist interpretation would render this principle applicable only to HIV+ cases.

M. Vijaya v. Chairman and Managing Director, Singareni Collieries Co. Ltd. [65] (2001)
The petitioner alleged that she had contracted HIV due to the negligence of the authorities of the Maternity and Family Welfare Hospital, Godavarikhani, a hospital under the control of Singareni Collieries Company Ltd. (SCCL), in failing to conduct the relevant precautionary blood tests before the transfusion of blood from her brother (the donor) into her body when she was operated on for a hysterectomy (chronic cervicitis) at the hospital. The petition was initially filed as a Public Interest Litigation, which the court duly expanded in order to address the problem of the lack of adequate precautionary measures in hospitals, thereby also dealing with issues of medical confidentiality and the privacy of HIV patients. The court thus deliberated upon the conflict between the right to privacy of an HIV infected person and the duty of the state to prevent further transmission, and held:

In the interests of the general public, it is necessary for the State to identify HIV positive cases and any action taken in that regard cannot be termed as unconstitutional. Under Article 47 of the Constitution, the State is under an obligation to take all steps for the improvement of public health. A law designed to achieve this object, if fair and reasonable, in our opinion, will not be in breach of Article 21 of the Constitution of India.


However, another aspect of the matter is whether compelling a person to take an HIV test amounts to a denial of the right to privacy. The Court analyzed the existing domestic legislation to arrive at the conclusion that there is no general law that can compel a person to undergo an HIV/AIDS test. However, specific provisions under the Prison Laws[66] provide that as soon as a prisoner is admitted to prison, he is required to be examined medically and a record of the prisoner's health is to be maintained in a register. Further, under the ITP Act, sex workers can also be compelled to undergo an HIV/AIDS test.[67]

Additionally, under Sections 269 and 270 of the Indian Penal Code, 1860, a person can be punished for the negligent act of spreading infectious diseases.

The right to privacy of a person suspected to be HIV+ would be subordinate to the power and duty of the state to identify HIV+ patients in order to protect public interest and improve public health. However, any law designed to achieve this object must be fair and reasonable. In a conflict between the individual's privacy right and the public's right in dealing with cases of HIV/AIDS, the Roman law principle 'Salus Populi est Suprema Lex' (regard for the public welfare is the highest law) applies when there is a necessity.

After mapping the legislation that permits the invasion of bodily privacy, the Court concluded that it is not comprehensive enough to enable the State to collect information regarding patients of HIV/AIDS and devise appropriate strategies, and that the State should therefore draft new legislation in this regard. Further, the Court gave certain directions to the state regarding how to handle the epidemic of HIV/AIDS, one of which was that the “Identity of patients who come for treatment of HIV+/AIDS should not be disclosed so that other patients will also come forward for taking treatment.”

Sharda v. Dharmpal,[68] (2003)

The basic question in this case was whether a party to a divorce proceeding can be compelled to undergo a medical examination. The wife in the divorce proceeding refused to submit herself to a medical examination to determine whether she was of unsound mind, on the ground that such an act would violate her right to personal liberty. Discussing the balance between protecting the right to privacy and other principles that may be involved in matrimonial cases, such as the 'best interest of the child' where child custody is also in issue, the Court held:

If the best interest of a child is in issue in the case then the patient’s right to privacy and confidentiality would get limited. The right to privacy of an individual would be subordinate to the power of a court to arrive at a conclusion in a matrimonial dispute and the right of a party to protect his/her rights in a Court of law would trump the right to privacy of the other.

"Privacy" is defined as "the state of being free from intrusion or disturbance in one's private life or affairs". However, the right to privacy in India, is only conferred through an extensive interpretation of Article 21 and cannot therefore in any circumstance be considered an absolute right. Mental health treatment involves disclosure of one's most private feelings However, like any other privilege the psychotherapist-patient privilege is not absolute and may only be recognized if the benefit to society outweighs the costs of keeping the information private. Thus if a child's best interest is jeopardized by maintaining confidentiality the privilege may be limited.” Thus, the power of a court to direct medical examination of a party to a matrimonial litigation in a case of this nature cannot beheld to violate the petitioner’s right to privacy.

Regulation of Privacy in Government and Private Hospitals and Diagnostic Laboratories

A. Field Study
The hospitals chosen for the analysis of the efficacy of these legislations include prominent government hospitals, private hospitals and diagnostic centers. These institutes were chosen because of their widely accredited status as centers of medical research and cutting edge treatment. They also have a long-standing reputation built on their staff of experienced and skilled on-call doctors and surgeons. The private hospitals chosen had patient welfare centers that addressed the concerns of patients, including questions and doubts relating to, but not limited to, confidentiality and consent. The government hospitals had a public relations office that addressed the concerns of discharged patients. They also provided counseling services to patients to help them address concerns related to their treatment that they might want kept confidential. The diagnostic laboratory has an HR department that addresses similar concerns, as well as a patient welfare manager who addresses the concerns and queries of the patient prior to and during the procedure.

The following section describes the practices followed by government and private hospitals, as well as diagnostic laboratories, in their endeavor to comply with the basic principles of privacy as laid down in the A.P. Shah Committee report on privacy.

(i) Notice

Through an analysis of the information provided by government and private hospitals and diagnostic laboratories, relevant conclusions were drawn with regard to the nature, process and method in which patient information is recorded. Through interviews of various medical personnel, including administrative staff in the patient welfare and medical records departments, we observed an environment of openness and accountability within the structure of the patient registration system.

In government hospitals, the patient is notified of all types of information that are collected, both personal information and medical history. The patient admission form as well as the patient consent form is filled out by the patient or the attending relative accompanying the patient, and assistance is provided by the attending staff members, who explain the required details in a language that the patient is able to understand. The patient is notified of the purpose for which such information is collected and the procedure that he/she might have to undergo depending on the injury or illness. The patient is not, however, notified of the method by which he/she may correct or withdraw the information provided; there is no protocol for the correction or withdrawal of information once it has been provided. The patient is at all times notified of the extent and nature of doctor-patient confidentiality, including the fact that his/her personal information will not be shared even with immediate relatives, insurance companies, consulting doctors who are not directly involved with the treatment, or any unauthorized third party without requisite consent from the patient. The patient is informed that in some cases the medical records of the patient will have to be shared with consulting doctors and that all of the patient's medical records will be provided to insurance companies, but this will only be done with the consent of the patient.

The same system of transparency and accountability extends to private hospitals and diagnostic laboratories as well. In private hospitals, the patient is informed of all the information that is collected and the purpose for which such information may be collected. Diagnostic laboratories have specific patient consent forms for specific types of procedures, which the patient has to fill out depending on the required tests. These forms contain provisions regarding the confidential nature of all the information provided. This information can only be accessed by the patient and, with the consent of the patient, by the consulting doctor. Both private hospitals and diagnostic laboratories have a specific protocol and procedure in place to correct or withdraw information that has been provided; in order to do so the patient has to contact the medical records department with requisite proof of the correct information. Private hospitals inform patients of the nature and extent of doctor-patient confidentiality at every stage of the registration process. Some private hospitals provide patient safety brochures which inform patients about the nature and extent of consent and confidentiality, even with regard to consulting doctors and insurance agencies. If the patient does not want certain information revealed to insurance agencies, the hospital will retain such records and refrain from providing them to third party insurance agencies. Thus, all information provided by the patient remains confidential at the behest of the patient.

(ii) Choice and Consent

Choice and consent are two integral aspects of the regulation of privacy within the healthcare sector. Government and private hospitals as well as diagnostic laboratories have specific protocols in place to ensure that the consent of the patient is taken at every stage of the procedure. The consent of the patient can also be withdrawn just prior to the procedure, even if consent has already been given by the patient in writing. The choice of the patient is also given ample importance at all stages of the procedure. The patient can refuse to provide any information that is not mandatorily required for the treatment, provided basic information regarding his identity and contact information in case of emergency correspondence has been given.

(iii) Collection Limitation

The information collected from the patient in both government and private hospitals is used solely for the purpose of which the patient has been informed. In case this information is used for any other purpose, the patient is informed of this new purpose as well. Patient records in both government and private hospitals are stored in the Medical Records Department as hard copies and, in some cases, as scanned soft copies as well. These medical records are all stored within the facility. The duration for which the records are stored ranges from a minimum of two years to a maximum of ten years in most private hospitals; some private hospitals store these records for life. Government hospitals store these records, only as hard copies, for a term of thirty years, after which the records are discarded. Private hospitals make medical records accessible to any medical personnel who ask for them, provided requisite proof of identity and reasons for accessing the records are provided, along with an attested letter of authorization from the doctor who is or has been involved in the treatment of the patient. Government hospitals, however, do not let any medical personnel access these records except the doctor involved in the treatment of that particular patient. Both private and government hospitals are required to share the medical records of the patient with insurance companies. Government hospitals only share patient records with nationalized insurance agencies such as the Life Insurance Corporation of India (LIC), not with private insurance agencies. The insurance claim forms that are required before medical records are provided to the insurance companies mandatorily require the signature of the patient. The patient is thus informed that his records will be shared with the insurance agencies, and his signature is proof of his implied consent to the sharing of these records with the company with which he has filed a health insurance claim.

Diagnostic laboratories collect patient information solely for the purpose of the particular test that they have been asked to conduct by the treating or consulting doctor. Genetic samples (blood, semen, urine, etc.) are collected at one time and the various tests required are conducted on these samples. In case of any additional testing required on these samples, the patient is informed. Additional testing is conducted only in critical cases and in cases where the referring doctor requests it. In critical cases, where immediate testing is required and the patient is unreachable, the testing is conducted without informing the patient; the patient is mandatorily informed after the test that such additional testing was conducted. The patient sample is stored for one week within the same facility. Patient records are digitized and can only be accessed by the patient, who is provided with a username and password with which he can access only his own records. The information is stored for a minimum of two years. This information can be made available to medical personnel only if they have the required lab number, the patient's name, and the reason for which it needs to be accessed; they thus require the permission of the authorities at the facility as well as the permission and consent of the patient to access such records. The medical test records of a patient are kept completely confidential. Even insurance companies cannot access such records unless they are provided to the company by the patient himself. In critical cases, however, the patient's information and test results are shared with the treating or referring doctor without the consent of the patient.

(iv) Purpose Limitation

In government and private hospitals, the information is only used for the purpose for which it is collected; there is thus a direct and relevant connection between the information collected and the purpose for which it is used. Additional information is collected to gauge the medical history of the patient that may be relevant to the disease being treated. The information is never deleted after it has been used for the purpose for which it was collected; the medical records of the patient are kept for extended periods in hard copy as well as soft copy versions. There is a provision for informing the patient in case the information is used for any purpose other than the purpose for which it was collected. Consent of the patient is taken at all stages of collecting and utilizing the information provided by him.

Diagnostic laboratories have a database of all the information collected, which is saved on the server. The information is mandatorily deleted two years after it has been used for the purpose for which it was collected. In case the information is used for any purpose other than the purpose for which it was collected, for example in critical cases where additional tests have to be conducted, the patient is always informed of the same.

(v) Access and Correction

In private hospitals, the patient is allowed to access his own records during his stay at the hospital. He is given a copy of his file upon discharge from the hospital in the form of a discharge summary. If he needs to access the original records at a later stage, he can do so by filing a request at the Medical Records Department of the hospital. A patient can make amendments or corrections to his records by providing requisite proof to substantiate the amended information. The patient, however, at no stage can confirm whether the hospital is holding or processing personal information about him or her, beyond the provisions available for the amendment or correction of the information held.

The medical records of a patient in a government hospital are completely sealed. A patient has no access to his own records. Only the concerned doctor who was treating the patient during his stay at the hospital can access the records of the patient; this doctor must be associated with the hospital and must have been directly involved in the patient's treatment. The patient is allowed to amend information in his medical records, but only generic information such as the spelling of his name, his address, telephone number, etc. The patient is at no point allowed to access his own records and therefore cannot confirm whether the hospital is holding or processing any information about him/her. The patient is only provided with a discharge summary that includes his personal information, the details of his disease and the treatment provided, in simple language.

Diagnostic laboratories have an online database of patient records. The patient is given a username and a password and can access the information at any point. The patient may also amend or correct any information provided by contacting the Medical records department for the same. The patient can at any time view the status of his record and confirm if it is being held or processed by the hospital. A copy of such information can be obtained by the patient at any time.

(vi) Disclosure of Information

Private hospitals are extremely cautious with regard to the disclosure of patient information. Medical records of patients cannot be accessed by anyone except the doctor treating that particular patient or consulting on the case. The patient is informed whenever his records are disclosed, even to doctors. Usually, even immediate relatives of the patient cannot access the patient's records without the consent of the patient, except in cases where the condition of the patient is critical. The patient is always informed about the type and extent of information that may be disclosed whenever it is disclosed. No information about the patient is made publicly available at any stage. The patient can refuse to consent to the sharing of information collected from him/her with non-authorized agencies; in no circumstance, however, is such information shared with non-authorized agencies. Some private hospitals also provide the patient with patient safety brochures highlighting the extent of doctor-patient confidentiality and the patient's rights, including the right to withdraw consent at any stage and to refuse access to records by unauthorized agencies.

In government hospitals, the medical records of the patient can only be disclosed to authorized agencies with the prior approval of the patient. The patient is made aware of the type and extent of information that is collected from him/her and mandatorily shared with authorized bodies such as insurance agencies or the treating doctor. No information about the patient is made publicly available. In cases where information is shared with insurance agencies or any other authorized body, the patient gives an undertaking, via a letter, of his consent to such disclosure. The insurance companies use medical records only for verification purposes and have to do so at the facility; they cannot take any original documents or make copies of the records without the consent of the patient as provided in the undertaking.

Diagnostic Laboratories provide information regarding the patient’s medical records only to the concerned or referred doctor. The patient is always informed of any instance where his information may be disclosed and the consent of the patient is always taken for the same. No information is made available publicly or shared with unauthorized agencies at any stage. Information regarding the patient’s medical records is not even shared with insurance companies.

Government and private hospitals provide medical records of patients to the police only when a summons has been issued by a judge. Diagnostic laboratories likewise do not provide information regarding a patient's records to any law enforcement agency unless there is a summons from a judge specifying exactly the nature and extent of the information required.

Patients are not made aware of laws which may govern the disclosure of information in private and government hospitals as well as in diagnostic laboratories. The patient is merely informed that the information provided by him to the medical personnel will remain confidential.

(vii) Security

The security measures put in place to ensure the safety of the collected information are not adequately specified in the forms or during the collection of information from the patient in government or private hospitals. Diagnostic laboratories, however, do provide the patient with information regarding the security measures put in place to ensure the confidentiality of the information.

(viii) Openness

The information made available to the patient at government and private hospitals and diagnostic laboratories is easily intelligible. At every stage of the procedure the explicit consent of the patient is obtained. In government and private hospitals the signature of the patient is obtained on consent forms at every stage, and the nature and extent of the procedure is explained to the patient in a language that he understands and is comfortable speaking. The information provided is detailed and couched in simple terms, so that the patient understands at all stages the nature of any procedure he is consenting to undergo.

(ix) Accountability

Private hospitals and Diagnostic laboratories have internal and external audit mechanisms in place to check the efficacy of privacy measures. They both have grievance redress mechanisms in the form of patient welfare cells and complaint cells. There is an assigned officer in place to take patient feedback and address and manage the privacy concerns of the patient.

Government hospitals do not have an internal or external audit mechanism in place to check the efficacy of privacy measures. There is however a grievance redressal mechanism in government hospitals in the form of a Public Relations Office that addresses the concerns, complaints, feedback and suggestions of the patients. There is an officer in charge of addressing and managing the privacy concerns of patients. This officer also offers counseling to the patients in case of privacy concerns regarding sensitive information.

International Best Practices and Recommendations

A. European Union
A proposal for an official EU data protection regulation[69] was published in January 2012. A key objective was to introduce a uniform data protection regime across all member states. The regulation, once implemented, would be directly applicable in all member states and leave no room for alteration or amendment.

The regulation calls for Privacy Impact Assessments[70] when there are specific risks to privacy, which would include profiling and sensitive data relating to health, genetic material or biometric information. This is an important step towards evaluating the nature and extent of privacy regulation required for various procedures, and would be effective in creating a systematic structure for the implementation of these regulations. The regulation also establishes the need for explicit consent for sensitive personal data. The basis for this is the inherent imbalance in the positions of the data subject and the data controller, or in simpler terms the patient and the hospital or the life sciences company conducting the research. Thus, implied consent is not enough,[71] and testing should proceed only when there is explicit informed consent.

Embedded within the regulation is the right to be forgotten,[72] whereby patients can request that their data be deleted after they have been discharged or the clinical trial has concluded. In the Indian scenario, patient information is kept for extended periods of time and can be subject to unauthorized access and misuse. The deletion of patient information once it has served the purpose for which it was collected is thus imperative to the creation of an environment of privacy protection.

Article 81 of the regulation specifies that health data may be processed only for three broad purposes:[73]

a) Preventive or occupational medicine, medical diagnosis, and the care, treatment or management of healthcare services, where the data are processed by healthcare professionals subject to the obligation of professional secrecy;

b) Considerations of public interest bearing a direct nexus to public health, for example protection against cross-border threats to health or ensuring a high standard of quality and safety for medicinal products or services; and

c) Other reasons of public interest, such as social protection.

An added concern is the nature and extent of consent. The consent obtained during a clinical trial may not always be sufficient to cover additional research, even where the data are adequately coded, since it may not be possible to anticipate additional research while the initial research is being carried out. Article 83[74] of the regulation prohibits the use of collected data for any purpose other than that for which it was collected.

Lastly, the regulation restricts the transfer of data outside the EEA unless an additional level of data protection is ensured. If a court located outside the EU makes a request for the disclosure of personal data, prior authorization must be obtained from the local data protection authority before such a transfer is made. It is imperative that a similar mechanism be implemented within Indian legislation, as currently there is no mechanism to regulate the cross-border transfer of personal data.

B. The United States of America
The Health Maintenance Organization Act, 1973[75] was enacted with a view to keeping up with rapid developments in the information technology sector. The digitization of personal information led to new forms of threats to a patient's privacy. In the face of this threat, the overarching goal of providing effective and yet unobtrusive healthcare remains paramount.

To this effect, several important federal regulations have been implemented. These include the Privacy and Security Rules under the Health Insurance Portability and Accountability Act (HIPAA), 1996[76] and the State Alliance for eHealth (2007).[77] The HIPAA privacy rules address the use and subsequent disclosure of a patient's personal information by healthcare plans, medical providers and clearinghouses. Insurance agencies were the primary agents involved in obtaining a patient's information for purposes such as treatment, payment, managing healthcare operations, medical research and subcontracting. HIPAA requires insurance agencies to implement administrative safeguards, such as policies, guidelines, regulations or rules, to monitor and control both inter- and intra-organizational access.

Apart from HIPAA, approximately 60 laws related to privacy in the healthcare sector have been enacted across more than 34 states. These legislations have been instrumental in creating awareness about privacy requirements in the healthcare sector and improving the efficiency of data collection and transfer. Similar legislative initiatives are required in the Indian context to aid in the creation of a regulated and secure environment for the protection of privacy within the healthcare sector.

C. Australia
Australia has a comprehensive law that deals with sectoral regulation of the right to privacy. An amendment to the Privacy Act 1988[78] applies to all healthcare providers and was made applicable from 21st December 2001. The Privacy Act includes the following practices:

a. A stringent requirement for informed consent prior to the collection of health related information

b. A provision regarding the information that needs to be provided to individuals before information is collected from them

c. The considerations that have to be taken into account before the transfer of information to third parties such as insurance agencies, including the specific instances wherein this information can be passed on

d. The details that must be included in the healthcare service provider's privacy policy

e. The securing and storing of information; and

f. Providing individuals with a right to access their health records.

These provisions are in keeping with the National Privacy Principles,[79] which represent the minimum standards of privacy regulation with respect to the handling of personal information in the healthcare sector. These guidelines are advisory in nature and have been issued by the Privacy Commissioner in exercise of his power under Section 27(1)(e)[80] of the Privacy Act.

The Act also embodies similar privacy principles, which include limitations on collection; a defined use and purpose for the information collected; a specific set of circumstances and an established protocol for the disclosure of information to third parties, including the nature and extent of such disclosure; maintenance of the accuracy of the data collected; requisite security measures to ensure the data collected is protected at all times; transparency, accountability and openness in the administrative functioning of the healthcare provider; and the patient's access to his own records for the purpose of viewing, corroboration or correction.

Additionally, the Act includes a system of identifiers, under which an organization assigns a number to an individual in order to identify that person's data for the organization's operations. Further, the Act provides for anonymity, whereby individuals have the option not to identify themselves while entering into transactions with an organization. The Act also restricts the transfer of personal data outside Australia and establishes conclusive and stringent limits on the extent of collection of personal and sensitive data. These principles, although only broadly similar to those highlighted in the A.P. Shah Committee report, can be used to streamline the regulations pertaining to privacy in the healthcare sector and make them more efficient.

Key Recommendations

It is imperative that privacy concerns relating to the transnational flow of private data be addressed in the most efficient way possible. This would involve international cooperation and collaboration, including clear provisions and the development of coherent minimum standards for international data transfer agreements. Such an exchange of ideas and multilateral deliberation would also result in more efficient application of the provisions of privacy legislation within domestic jurisdictions.

There is a universal need for the development of a foundational structure for the physical collection, use and storage of human biological specimens (in contrast to the personal information that may be derived from those specimens), as these are extremely important aspects of biomedical research and clinical trials. The need for Privacy Impact Assessments would also arise in the context of clinical trials, research studies and the gathering of biomedical data.

Further, patients should be allowed to request the deletion of their personal information once it has served the purpose for which it was obtained. The keeping of records for extended periods of time by hospitals and laboratories is unnecessary and can often result in unauthorized access to, and subsequent misuse of, such data.
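To make the retention point concrete, the following is a minimal, hypothetical Python sketch of a purge routine that deletes records once their stated purpose is fulfilled and an assumed two-year retention window, echoing the laboratory practice described earlier, has elapsed. The names (Record, purge_expired) and the sample dates are illustrative only.

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION = timedelta(days=2 * 365)  # assumed two-year retention window

@dataclass
class Record:
    patient_id: str
    collected_on: date
    purpose: str
    purpose_fulfilled: bool

def purge_expired(records, today=None):
    """Keep only records whose purpose is unfulfilled or whose retention window is still open."""
    today = today or date.today()
    return [
        r for r in records
        if not r.purpose_fulfilled or (today - r.collected_on) <= RETENTION
    ]

records = [
    Record("p1", date(2011, 1, 10), "diagnosis", True),      # old and fulfilled: purged
    Record("p2", date(2014, 3, 1), "diagnosis", True),       # fulfilled but within retention: kept
    Record("p3", date(2010, 6, 5), "ongoing trial", False),  # purpose not yet served: kept
]
print(purge_expired(records, today=date(2014, 8, 1)))
```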

There is a definitive need to incorporate safeguards to protect patients' data once it has been accessed by third parties such as insurance companies. In the Indian context, insurance agencies often have unrestricted access to a patient's medical records; however, there is a definitive lack of sufficient safeguards to ensure that this information is not released to, or accessed by, unauthorized persons, whether within these insurance agencies or among outsourced consultants.

A system of identifiers, which allocates a specific number to an individual's data so that the data can only be accessed using that number or series of numbers, can be incorporated into the Indian system as well; this would simplify the administrative process and thus increase its efficacy. It would also afford individuals the privilege of anonymity while entering into transactions with specific healthcare institutions.
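A hypothetical sketch of how such an identifier scheme could be wired is shown below: personal details live only in a separate mapping, while operational records are keyed solely by a randomly assigned identifier, so routine processing never needs to touch the name. All class and function names here (IdentifierRegistry, enrol) are illustrative assumptions, not drawn from any existing system.

```python
import secrets

class IdentifierRegistry:
    """Hypothetical pseudonymisation sketch: records are keyed by a random identifier,
    and the identifier-to-person mapping is held separately."""

    def __init__(self):
        self._id_to_person = {}   # identifier -> personal details (kept apart from records)
        self._records = {}        # identifier -> list of records

    def enrol(self, personal_details):
        identifier = secrets.token_hex(8)  # random identifier stands in for the person
        self._id_to_person[identifier] = personal_details
        self._records[identifier] = []
        return identifier

    def add_record(self, identifier, record):
        # Routine processing sees only the identifier, never the name.
        self._records[identifier].append(record)

    def records_for(self, identifier):
        return list(self._records[identifier])


registry = IdentifierRegistry()
pid = registry.enrol({"name": "A. Patient", "address": "..."})
registry.add_record(pid, {"test": "blood sugar", "result": "normal"})
print(pid, registry.records_for(pid))
```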

An important means of responding to public concerns over the potential unauthorized use of personal information gathered for research could be the issuance of certificates of confidentiality, as in the United States, where they protect sensitive information on research participants from forced disclosure.[81]

Additionally, it is imperative that frequent discussions, deliberations, conferences and roundtables take place involving multiple stakeholders from the healthcare sector, insurance companies, patients' rights advocacy groups and the government. This would help evolve a comprehensive policy for the protection of privacy in the healthcare sector in an efficient and collaborative manner.

Conclusions

The right to privacy has been embodied in a multitude of domestic legislations pertaining to the healthcare sector. The privacy principles envisioned in the A.P. Shah Committee report have also been incorporated into the everyday practices of healthcare institutions to the greatest possible extent. There are, however, significant gaps in policy formulation, which essentially fails to account for data once it has been collected, or for its subsequent transfer. There is thus an urgent need for institutional collaboration to redress these gaps; recommendations for the same have been made in this report. For an effective framework to be laid down, however, the State still needs to play an active role in enabling engagement between different institutions in both the private and public domains, across a multitude of sectors, including insurance companies, the online servers used to host databases of patient records, and civil action groups that demand patient privacy while at the same time seeking access to records under the Right to Information Act. The collaborative efforts of these multiple stakeholders will ensure the creation of a strong foundational framework upon which the right to privacy can be efficiently constructed.


[1] . Report of the group of experts on Privacy chaired by Justice A.P Shah <http://planningcommission.nic.in/reports/genrep/rep_privacy.pdf> [Accessed on 14th May 2014]

[2] . Nissenbaum, H. (2004). Privacy as Contextual Integrity. Washington Law Review, 79(1), 101-139.

[3] . Ibid.

[4] . Thomas, J. (2009). Medical Records and Issues in Negligence, Indian Journal of Urology : IJU : Journal of the Urological Society of India, 25(3), 384-388. doi:10.4103/0970-1591.56208.

[5] . Ibid

[6] . Plaza, J., & Fischbach, R. (n.d.). Current Issues in Research Ethics: Privacy and Confidentiality. Retrieved December 5, 2011, from http://ccnmtl.columbia.edu/projects/cire/pac/foundation/index.html.

[7] . Ibid.

[8] . The Mental Health Act, 1987 <https://sadm.maharashtra.gov.in/sadm/GRs/Mental%20health%20act.pdf> [Accessed on 14th May 2014]

[9] . The Mental Health Act, 1987, s. 13(1).

[10] .The Mental Health Act, 1987, s. 38.

[11] .The Mental Health Act, 1987, s. 40.

[12] .The Mental Health Act, 1987, s. 21(2).

[13] .The Mental Health Act, 1987, s. 13(1), Proviso.

[14] . Also see the Pre-Conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Rules, 1996.

[15] . Pre-Conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994, s. 4(3).

[16] . Pre-Conception and Pre-Natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994, s. 4(2). Pre-natal diagnostic techniques shall be conducted for the purposes of detection of: chromosomal abnormalities, genetic metabolic diseases, haemoglobinopathies, sex-linked genetic diseases, congenital anomalies any other abnormalities or diseases as may be specified by the Central Supervisory Board.

[17] .Medical Termination of Pregnancy Amendment Act, 2002, Notification on Medical Termination of Pregnancy (Amendment) Act, Medical Termination of Pregnancy Regulations, 2003 and Medical Termination of Pregnancy Rules, 2003.

[18] .Medical Termination of Pregnancy Act, 1971 (Amended in 2002), s. 2(4) and 4, and Medical Termination of Pregnancy Rules, 2003, Rule 8

[19] .Medical Termination of Pregnancy Regulations, 2003, Regulation 4(5).

[20] .Medical Termination of Pregnancy Regulations, 2003, Regulation 5.

[21] .Medical Termination of Pregnancy Regulations, 2003, Regulation 4(2).

[22] .Medical Termination of Pregnancy Regulations, 2003, Regulations 4(2) and 4(4).

[24] . Code of Ethics Regulations, 2002 Chapter 2, Section 2.2.

[25] .Ethical Guidelines for Biomedical Research on Human Subjects. (2006) Indian Council of Medical Research New Delhi.

[26] . Informed Consent Process, Ethical Guidelines for Biomedical Research on Human Subjects (2006). Indian Council of Medical Research, New Delhi, p. 21.

[27] . Statement of Specific Principles for Human Genetics Research, Ethical Guidelines for Biomedical Research on Human Subjects (2000). Indian Council of Medical Research, New Delhi, p. 62.

[28] . General Ethical Issues, Ethical Guidelines for Biomedical Research on Human Subjects (2006). Indian Council of Medical Research, New Delhi, p. 29.

[29] . Statement of Specific Principles for Epidemiological Studies, Ethical Guidelines for Biomedical Research on Human Subjects (2000). Indian Council of Medical Research, New Delhi, p. 56.

[30] . Statement of General Principles, Principle IV, and Essential Information on Confidentiality for Prospective Research Participants, Ethical Guidelines for Biomedical Research on Human Subjects (2006). Indian Council of Medical Research, New Delhi, p. 29.

[31] . The IRDA (Third Party Administrators - Health Services) Regulations 2001, (2001), Chapter 5. Section 2.

[32] . The IRDA (Sharing Of Database for Distribution of Insurance Products) Regulations 2010.

[33] . The IRDA (Sharing Of Database For Distribution Of Insurance Products) Regulations 2010.

[34] . The IRDA (Sharing Of Database For Distribution Of Insurance Products) Regulations 2010

[35] . List of TPAs Updated as on 19th December, 2011, Insurance Regulatory and Development Authority (2011), http://www.irda.gov.in/ADMINCMS/cms/NormalData_Layout.aspx?page=PageNo646 (last visited Dec 19, 2011).

[36] . The IRDA, Guideline on Outsourcing of Activities by Insurance Companies, (2011).

[37] . The IRDA, Guideline on Outsourcing of Activities by Insurance Companies, (2011), Section 9.11. P. 8.

[38] .The Epidemic Diseases Act, 1897.

[39] .The Epidemic Diseases Act, 1897. s. 2.1.

[40] .The Epidemic Diseases Act, 1897, s. 2.2(b).

[41] . The National Policy for Persons with Disabilities, 2006, Persons with Disabilities (Equal Opportunities, Protection of Rights and Full Participation) Act, 1995, Persons with Disabilities (Equal Opportunities, Protection of Rights and Full Participation) Rules, 1996.

[42] . Research, National Policy for Persons with Disabilities, 1993.

[43] . Survey of Disabled Persons in India. (December 2003) National Sample Survey Organization. Ministry of Statistics and Programme Implementation. Government of India.

[44] .Persons With Disabilities (Equal Opportunities, Protection of Rights and Full Participation) Act. 1995, Section 35.

[45]. Research. National Policy for Persons with Disabilities, 2003.

[46]. http://www.lawyerscollective.org/files/Anti%20rights%20practices%20in%20Targetted%20Interventions.pdf

[47]. http://www.lawyerscollective.org/files/Anti%20rights%20practices%20in%20Targetted%20Interventions.pdf

[48]. Aneka, Karnataka Sexual Minorities Forum. (2011)“Chasing Numbers, Betraying People: Relooking at HIV Services in Karnataka”, p.22.

[49]. Aneka, Karnataka Sexual Minorities Forum. (2011)“Chasing Numbers, Betraying People: Relooking at HIV Services in Karnataka”, p.16.

[50]. Aneka, Karnataka Sexual Minorities Forum. (2011)“Chasing Numbers, Betraying People: Relooking at HIV Services in Karnataka”, p.16.

[51]. Aneka, Karnataka Sexual Minorities Forum. (2011)“Chasing Numbers, Betraying People: Relooking at HIV Services in Karnataka”, p.14.

[52]. http://www.hivaidsonline.in/index.php/HIV-Human-Rights/legal-issues-that-arise-in-the-hiv-context.html

[53]. Chakrapani et al, (2008) ‘HIV Testing Barriers and Facilitators among Populations at-risk in Chennai, India’, INP, p 12.

[54]. Aneka, Karnataka Sexual Minorities Forum. (2011)“Chasing Numbers, Betraying People: Relooking at HIV Services in Karnataka”, p.24.

[58] . 'No person accused of any offence shall be compelled to be a witness against himself' (the 'right to silence').

[59] . http://indiankanoon.org/doc/338008/

[60] . http://www.hrdc.net/sahrdc/hrfeatures/HRF205.pdf

[61] . AIR 1992 SC 392.

[62] . 96 (2002) DLT 354.

[63] .AIR 2000 A.P 156.

[66] .See Sections 24, 37, 38 and 39 of The Prisons Act, 1894 (Central Act 9 of 1894) Rules 583 to 653 (Chapter XXXV) and Rules 1007 to 1014 (Chapter LVII) of Andhra Pradesh Prisons Rules, 1979

[67] .Section 10-A,17(4) ,19(2) Immoral Traffic (Prevention) Act 1956

[69] . http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf

[70] . Article 33, Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) <http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf> [Accessed on 14th May, 2014]

[71] . Article 4 (Definition of “Data Subject’s Consent”) and Article 7, Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) <http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf> [Accessed on 14th May, 2014].

[72] . Article 17, “Safeguarding Privacy in a Connected World – A European Data Protection Framework for the 21st Century”, COM(2012) 9 final. Based on Article 12(b), EU Directive 95/46/EC – The Data Protection Directive <http://www.dataprotection.ie/docs/EU-Directive-95-46-EC-Chapter-2/93.htm> [Accessed on 14th May, 2014]

[73] . Article 81, Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) <http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf> [Accessed on 14th May, 2014]

[74] . Article 83, Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) <http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf> [Accessed on 14th May, 2014]

[75] . Health Maintenance Organization Act, 1973, Notes and Brief Reports, available at http://www.ssa.gov/policy/docs/ssb/v37n3/v37n3p35.pdf [Accessed on 14th May 2014].

[76] . Health Insurance Portability and Accountability Act, 1996 available at http://www.hhs.gov/ocr/privacy/hipaa/administrative/statute/hipaastatutepdf.pdf [Accessed on 14th May 2014]

[77] . Illinois Alliance for Health Innovation plan available at http://www2.illinois.gov/gov/healthcarereform/Documents/Alliance/Alliance%20011614.pdf [Accessed on 14th May 2014]

[78] . The Privacy Act 1988 available at http://www.comlaw.gov.au/Series/C2004A03712 [Accessed on 14th May 2014]

[79] . Schedule 1, Privacy Act 1988 [Accessed on 14th May 2014]

[80] .Section 27(e), Privacy Act 1988 [Accessed on 14th May 2014]

[81] . Guidance on Certificates of Confidentiality, Office of Human Research Protections, U.S Department of Health and Human Services available at http://www.hhs.gov/ohrp/policy/certconf.pdf [Accessed on 14th May, 2014].

Privacy in Healthcare Chapter

by Prasad Krishna last modified Aug 19, 2014 05:13 AM

PDF document icon Privacy in Healthcare - Policy Guide - Draft 2.pdf — PDF document, 320 kB (328554 bytes)

Learning to Forget: The ECJ's Decision on the Right to be Forgotten and its Implications

by Divij Joshi last modified Aug 19, 2014 05:24 AM
“The internet never forgets” is a proposition which is equally threatening and promising.

The phrase reflects the dichotomy presented by the extension on the lease of public memory granted by the internet – as information is more accessible and more permanent, letting go of the past is becoming increasingly difficult. The question of how to govern information on the internet – a space which is growing increasingly important in society and also one that presents a unique social environment – is one that persistently challenges courts and policy makers. A recent decision by the European Court of Justice, the highest judicial authority of the European Union, perfectly encapsulates the way the evolution of the internet is constantly changing our conceptions of individual privacy and the realm of information. On the 13th of May, 2014, the ECJ in its ruling in Google v Costeja,[1] effectively read a “right to be forgotten” into existing EU data protection law. The right, broadly, provides that an individual may be allowed to control the information available about them on the web by removing such information in certain situations – known as the right to erasure. In certain situations such a right is non-controversial, for example, the deletion of a social media profile by its user. However, the right to erasure has serious implications for the freedom of information on the internet when it extends to the removal of information not created by the person to whom it pertains.

Privacy and Perfect Memory

The internet has, in a short span, become the biggest and arguably the most important tool for communication on the planet. However, a peculiar and essential feature of the internet is that it acts as a repository and a reflection of public memory – usually, whatever is once made public and shared on the internet remains available for access across the world without an expiry date. From public information on social networks to comments on blog posts, home addresses, telephone numbers and candid photos, personal information is disseminated all across the internet, perpetually ready for access – and often without the possibility of correcting or deleting what was divulged. This aspect means that the internet is now an ever-growing repository of personal data, indexed and permanently filed. This unlimited capacity for information has a profound impact on society and on the shaping of social relations.

The core of the internet lies in its openness, accessibility and the ability to share information with ease – almost any information is now a Google search away for any person. The openness of information on the internet prevents history from being corrupted and facts from being manipulated, and encourages unprecedented freedom of information. However, these virtues often become a peril when considering the vast amount of personal data that the internet now holds. This “perfect memory” of the internet means that people are perpetually at risk of being constantly scrutinized and tied to their pasts, particularly a generation of users that has been active on the internet since childhood.[2] Consider the example of online criminal databases in the United States, which regularly and permanently upload criminal records of convicted offenders even after their release, records which remain accessible to all future employers;[3] or the example of the Canadian psychotherapist who was permanently banned from the United States after an internet search revealed that he had experimented with LSD in his past;[4] or the cases of “revenge porn” websites, which (in most cases legally) publicly host deeply private photos or videos of persons, often with their personal information, for the specific purpose of causing them deep embarrassment.[5]

These examples show that, due to the radically unrestricted spread of personal data across the web, people are no longer able to control how, by whom and in what context their personal data is viewed. This creates the vulnerability of the data collectively being “mined” for purposes of surveillance, and of individuals being unable to control the way personal data is revealed online, thereby losing autonomy over that information.

The Right to be Forgotten and the ECJ judgement in Costeja

The problems highlighted above were the considerations behind the European Union data protection regulation drafted in 2012, which specifically provides for a right to be forgotten, as well as behind the judgement of the European Court of Justice in Google Spain v Mario Costeja González.

The petitioner in this case sought the removal of links related to attachment proceedings against his property, which showed up upon entering his name in Google’s search engine. After Google refused to remove the links, he approached the Spanish Data Protection Agency (the AEPD) for an order directing their removal. The AEPD accepted the complaint against Google Inc. and ordered the removal of the links. On appeal to the Spanish High Court, three questions were referred to the European Court of Justice. The first related to the applicability of the data protection directive (Directive 95/46/EC) to search engines, i.e. whether they could be said to be “processing personal data” under Articles 2(a) and (b) of the directive,[6] and whether they could be considered data controllers under Article 2(d) of the directive. The court found that, because search engines retrieve, record and organize data, and make it available for viewing (as a list of results), they can be said to process data. Further, interpreting the definition of “data controller” broadly, the court found that ‘it is the search engine operator which determines the purposes and means of that activity and thus of the processing of personal data that it itself carries out within the framework of that activity and which must, consequently, be regarded as the “controller”’[7] and that ‘it is undisputed that that activity of search engines plays a decisive role in the overall dissemination of those data in that it renders the latter accessible to any internet user making a search on the basis of the data subject’s name, including to internet users who otherwise would not have found the web page on which those data are published.’[8] The latter reasoning highlights the particular role of search engines, as indexers of data, in increasing the accessibility and visibility of data from multiple sources, lending to the “database” effect, which could allow the structured profiling of an individual; this justifies imposing the same (and even higher) obligations on search engines as on other data controllers, notwithstanding that the search engine operator has no knowledge of the personal data which it is processing.

The second question related to the territorial scope of the directive, i.e. whether Google Inc., being the parent company based in the US, came within the court’s jurisdiction, which only extends to member states of the EU. The court held that Google Spain, a subsidiary of Google Inc. which promotes and sells advertising for the parent company, was an “establishment” of Google Inc. in the EU even though it did not itself carry on the specific activity of processing personal data; because Google Inc. processed data “in the context of the activities” of that establishment, which were specifically directed towards the inhabitants of a member state (here Spain), it came within the scope of EU law. The court also reaffirmed a broad interpretation of the data protection law in the interests of the fundamental right to privacy, and therefore imputed policy considerations in interpreting the directive.[9]

The third question was whether Google Spain was in breach of the data protection directive, specifically Articles 12(b) and 14(1)(a), which state that a data subject may object to the processing of data by a data controller, and may enforce such a right against the data controller, as long as the conditions for removal are met. The reasoning for enforcing such a claim against search engines in particular can be found in paragraphs 80 and 84 of the judgement, where the court holds that “(a search engine) enables any internet user to obtain through the list of results a structured overview of the information relating to that individual that can be found on the internet — information which potentially concerns a vast number of aspects of his private life and which, without the search engine, could not have been interconnected or could have been only with great difficulty — and thereby to establish a more or less detailed profile of him”, and that “[g]iven the ease with which information published on a website can be replicated on other sites and the fact that the persons responsible for its publication are not always subject to European Union legislation, effective and complete protection of data subjects could not be achieved if the latter had to obtain first or in parallel the erasure of the information relating to them from the publishers of websites.” In fact, the court seems to apply a higher threshold for search engines due to their peculiar nature as indexes and databases.[10]

Under the court’s conception of the right of erasure, search engines are mandated to remove content upon request by individuals when the information is deemed to be personal data that is “inadequate, irrelevant or excessive in relation to the purposes of the processing, that they are not kept up to date, or that they are kept for longer than is necessary unless they are required to be kept for historical, statistical or scientific purposes,”[11] notwithstanding that the publication itself is lawful and causes no prejudice to the data subject. The court reasoned that where the data qualified on any of the above grounds, it would violate Article 6 of the directive, which requires that data be processed “fairly and lawfully”, that they are “collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes”, that they are “adequate, relevant and not excessive in relation to the purposes for which they are collected and/or further processed”, that they are “accurate and, where necessary, kept up to date” and, finally, that they are “kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the data were collected or for which they are further processed”.[12] Therefore, the court held that, due to the nature of the information, the data subject has a right to no longer have such information linked to his or her name in a list of results following a search made on that name. The grounds laid down by the court, i.e. relevancy, inadequacy, etc., are very broad, yet such a broad conception is necessary in order to deal effectively with problems of the nature described above.

The judgement of the ECJ concludes by applying a balancing test between the rights of the data subject on the one hand, and the economic rights of the data controller as well as the general right of the public to information on the other. It states that, generally, as long as the information meets the criteria laid down by the directive, the right of the data subject trumps both of these rights. However, it adds an important caveat: the balance may depend, “in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information, an interest which may vary, in particular, according to the role played by the data subject in public life.” This crucial point on the balancing of the two rights directly affected by the judgement was only summarily dealt with by the ECJ, without giving any real clarity as to what standards to apply or laying down specific guidelines for the application of the new rule.[13] In doing so, it effectively left the decision about what is in the public interest, and how the rights are to be balanced, to the search engines themselves. Delegating such a task to a private party takes away from the idea of the internet as a common resource which should be developed for the benefit of the larger internet community as a whole, by allowing it to be governed and controlled by private stakeholders, and therefore paves an uncertain path for this crucial aspect of internet governance.

Implications of the ECJ ruling

The decision has far-reaching consequences for both privacy and freedom of information on the internet. Google began implementing the decision through a form submission process, which requires the individual to specify which links to remove and why, and which verifies via photo identification that the request comes from the individual concerned; it has also constituted an expert panel to oversee implementation (similar to the process for removing links which infringe copyright law).[14] Google has since received more than 91,000 requests for removal, pertaining to 328,000 links, of which it has approved more than half.[15] In light of such large volumes of data to process, the practical implementation of the ruling has inevitably been problematic. The implementation has been criticized both for implicating free speech on the internet and for disregarding the spirit of the right to be forgotten. On the first count, Google has been criticized for taking down several links that are clearly in the public interest, including several opinion pieces on politicians and corporate leaders, which amounts to censorship of a free press.[16] On the second count, EU privacy watchdogs have been critical of Google’s decision to notify the sources of the removed content, which prompts further speculation on the issue; privacy regulators have also challenged Google’s claim that the decision is restricted to the localised versions of its websites, since the same content can be accessed through any other version of the search engine, for example by switching over to “Google.com”.[17]

This second question also raises complicated issues about the standards for free speech and privacy which should apply on the internet. If the EU wishes for Google Inc. to remove all links from all versions of its search engine, it is, in essence, applying a balancing of privacy and free speech peculiar to the EU (one which evolved from a specific historical and social context, and from laws emerging out of the EU) across the entire world, a balance radically different from the standard applicable in the USA or India, for example. In spirit, therefore, although the judgement seeks to protect individual privacy, the vagueness of the ruling and the lack of guidelines have had enormous negative implications for the freedom of information. In light of these problems, the uproar caused in the two months since the decision is to be expected, especially amongst news media sites, which are most affected by this ruling. However, the faulty application of the ruling does not necessarily mean that the right to be forgotten is a concept which should be buried. Proposed solutions such as the archiving of data, or limited restrictions instead of erasure, may be of some help in maintaining a balance between the two rights.[18] EU regulators hope to end the confusion by drafting comprehensive guidelines for the search engines, pursuant to meetings with various stakeholders, which should come out by the end of the year.[19] Until then, the confusion will most likely continue.

Is there a Right to be Forgotten in India?

Indian law is notorious for its lackadaisical approach towards both freedom of information and privacy on the internet. The law, mostly governed by the Information Technology Act, is vague and broad, and the essence of most of it is controlled by rules enacted by non-legislative bodies pursuant to various sections of the Act. The “right to be forgotten” in India can probably be found within this framework, specifically under Rule 3(2) of the Information Technology (Intermediaries Guidelines) Rules, 2011, made under Section 79 of the IT Act. Under this rule, intermediaries can be held liable for content which is “invasive of another’s privacy”. Read with the broad definition of intermediaries under the same rules (which specifically includes search engines) and of “affected person”, the applicable law for the takedown of online content is far broader and vaguer than the standard laid down in Costeja. It remains to be seen whether the EU’s interpretation of privacy and the “right to be forgotten” would further the chilling effect caused by these rules.


[1] Google Spain v Mario Costeja González, C‑131/12, available at http://curia.europa.eu/juris/document/document.jsf?text=&docid=152065&pageIndex=0&doclang=en&mode=req&dir=&occ=first&part=1&cid=264438.

[2] See Viktor Mayer-Schönberger, Delete: The Virtue of Forgetting in the Digital Age (Princeton, 2009).

[3] For example, See http://mugshots.com/; and http://www.peoplesearchpro.com/resources/background-check/criminal-records/

[4] LSD as Therapy? Write about It, Get Barred from US, (April, 2007), available at http://thetyee.ca/News/2007/04/23/Feldmar/

[5] It’s nearly impossible to get revenge porn off the internet, (June, 2014), available at http://www.vox.com/2014/6/25/5841510/its-nearly-impossible-to-get-revenge-porn-off-the-internet

[6] Article 2(a) - “personal data” shall mean any information relating to an identified or identifiable natural person (“data subject”); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity;

Article 2(b) - “ processing of personal data” (“processing”) shall mean any operation or set of operations which is performed upon personal data, whether or not by automatic means, such as collection, recording, organisation, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, blocking, erasure or destruction;

[7] ¶36, judgment.

[8] The court also recognizes the implications on data profiling through the actions of search engines organizing results in ¶37.

[9] ¶74 judgment.

[10] In ¶83, the court notes that the processing by a search engine affect the data subject additionally to publication on a webpage; ¶87 - Indeed, since the inclusion in the list of results, displayed following a search made on the basis of a person’s name, of a web page and of the information contained on it relating to that person makes access to that information appreciably easier for any internet user making a search in respect of the person concerned and may play a decisive role in the dissemination of that information, it is liable to constitute a more significant interference with the data subject’s fundamental right to privacy than the publication on the web page.

[11] ¶92, judgment.

[12] ¶72, judgment.

[13] ¶81, judgment.

[14] The form is available at https://support.google.com/legal/contact/lr_eudpa?product=websearch

[15] Is Google intentionally overreacting on the right to be forgotten? (June, 2014), available at http://www.pcpro.co.uk/news/389602/is-google-intentionally-overreacting-on-right-to-be-forgotten.

[16] Will the right to be forgotten extend to Google.com?, (July, 2014), available at http://www.pcpro.co.uk/news/389983/will-right-to-be-forgotten-extend-to-google-com.

[17] The right to be forgotten is a nightmare to enforce, (July, 2014), available at http://www.forbes.com/sites/kashmirhill/2014/07/24/the-right-to-be-forgotten-is-a-nightmare-to-enforce.

[18] Michael Hoven, Balancing privacy and speech in the right to be forgotten, available at http://jolt.law.harvard.edu/digest/privacy/balancing-privacy-and-speech-in-the-right-to-be-forgotten#_edn15

[19] EU poses 26 questions on the right to be forgotten, (July, 2014), available at http://www.cio-today.com/article/index.php?story_id=1310024135B0

Surveillance and Privacy Law Roundtable Invite

by Prasad Krishna last modified Aug 25, 2014 09:24 AM

PDF document icon New Delhi Invite.pdf — PDF document, 1207 kB (1235970 bytes)

The Aadhaar Case

by Vipul Kharbanda last modified Sep 05, 2014 09:12 AM
In 2012 a writ petition was filed by Justice K.S. Puttaswamy in the Supreme Court of India challenging the policy of the government in making an Aadhaar card for every person in India and its later plans to link various government benefit schemes to the same.

Over time a number of other cases have been filed in the Supreme Court challenging the Aadhaar mechanism and/or its procedure, most of which have now been linked to the main petition filed by Justice Puttaswamy.[1] This means that the Supreme Court now hears all these cases together (i.e. at the same time), since they throw up similar questions and involve the same or similar issues. While hearing the case, the court passed an interim order on September 23, 2013, directing that no person should suffer on account of not having an Aadhaar card and that Aadhaar cards should not be issued to any illegal immigrants. The relevant extract from the order of the court is reproduced below:

"No person should suffer for not getting the Aadhaar card in spite of the fact that some authority had issued a circular making it mandatory and when any person applies to get the Aadhaar card voluntarily, it may be checked whether that person is entitled for it under the law and it should not be given to any illegal immigrant."[2]

It must be noted that the above order was only an interim measure taken by the Supreme Court till the time it finally decided all the issues involved in the case, which is still pending in the Supreme Court.

In November 2013, during one of the hearings of the matter, the Supreme Court came to the conclusion that the matter was important enough for all the states and union territories to be impleaded as parties to the case, and passed an order to this effect.[3] This was probably because Aadhaar cards are to be issued across the entire country, making this a national issue; the court may therefore have thought that any state with concerns regarding the issue should have the opportunity to present its case.

In another petition, filed by the Unique Identification Authority of India (UIDAI), the Supreme Court on March 24, 2014 reiterated its earlier order and held that no person shall be deprived of any service merely for lacking an Aadhaar number if he/she is otherwise eligible for the service. A direction was issued to all government authorities and departments to modify their forms, circulars, etc., so as not to compulsorily require an Aadhaar number. In the same order the Supreme Court also, as an interim measure, restrained the UIDAI from transferring any biometric data to any agency without the written consent of the person.[4] After passing these orders the Supreme Court linked this case as well to the petition filed by Justice Puttaswamy, on which final arguments were being heard in February 2014 and which so far do not appear to have concluded.

Note: Please note that the case is still being heard by the Supreme Court, and the orders given so far and explained in this blog are all interim measures until the case is finally disposed of. The status of the cases can be seen at the following link:

http://courtnic.nic.in/supremecourt/casestatus_new/caseno_new_alt.asp

The names and number of the cases that have been covered in this blog are given below:

  • W.P(C) No. 439 of 2012 titled S. Raju v. Govt. of India and Others pending before the D.B. of the High Court of Judicature at Madras.
  • PIL No. 10 of 2012 titled Vickram Crishna and Others v. UIDAI and Others pending before the High Court of Judicature at Bombay.
  • W.P. No. 833 of 2013 titled Aruna Roy & Anr v. Union of India & Ors.
  • W.P. No. 829 of 2013 titled S.G. Vombatkere & Anr v. Union of India & Ors.
  • Petition(s) for Special Leave to Appeal (Crl) No(s).2524/2014 titled Unique Identification Authority of India & another v. Central Bureau of Investigation.

All the above cases have now been linked with the ongoing Supreme Court case of K. Puttaswamy v. Union of India.


[1] W.P(C) No. 439 of 2012 titled S. Raju v. Govt. of India and Others pending before the D.B. of the High Court of Judicature at Madras and PIL No. 10 of 2012 titled Vickram Crishna and Others v. UIDAI and Others pending before the High Court of Judicature at Bombay were transferred to the Supreme Court vide Order dated September 23, 2013. Also W.P. No. 833 of 2013 titled Aruna Roy & Anr Vs Union of India & Ors, W.P. No. 829 of 2013 titled S G Vombatkere & Anr Vs Union of India & Ors and Petition(s) for Special Leave to Appeal (Crl) No(s).2524/2014 titled Unique Identification Authority of India & another v. Central Bureau of Investigation.

Surat’s Massive Surveillance Network Should Cause Concern, Not Celebration

by Joe Sheehan last modified Sep 06, 2014 03:05 AM
The blog post examines the surveillance network of Surat, a city in Gujarat state in India.

The Surveillance System

Surat, a city in the state of Gujarat, has recently unveiled a comprehensive closed-circuit camera surveillance system that spans almost the entire city.  This makes Surat the first Indian city to have a modern, real-time CCTV system, with eye-tracking software and night vision cameras, along with intense data analysis capabilities that older systems lack.

Similar systems are planned for cities across India, from Delhi to Punjab, even those that already have older CCTV programs in place. Phase I of the system, which has now been completed, consists of 104 CCTV cameras installed at 23 locations and a 280 square foot video wall at the police control room. The video wall is one of the largest in the country, according to the Times of India.

Narendra Modi, then the Gujarat chief minister, launched the project in January 2013, though the project was originally conceptualized by police commissioner Rakesh Asthana, who has cited the CCTV system at Scotland Yard as his inspiration.

Phase II of the surveillance project will involve the installation of 550 cameras at 282 locations, and in the future, police plan to install over 5000 cameras across the city. Though other security systems, like those in Delhi, rely on lines from the state owned service provider MTNL, with limited bandwidth for their CCTV network, the Surat system has its own dedicated cables.

The security system was financed through a unique Public-Private Partnership (PPP) model, in which a coalition of businesses, including many manufacturing units and representatives of Surat’s textile industry, came together to prevent crime. The many jewelers in the city also hoped it would limit thefts. In this model, businesses interested in joining the coalition contribute Rs 25 lakh as a one-time fee, and the combined fees, along with some public financing, go towards constructing the city-wide system. The chairman of the coalition is always the Commissioner of Surat Police. Members of the coalition not only get a tax break, but also believe they are helping to create a safer city for their industries to thrive in.

Arguments for the System

Bomb blasts in Ahmedabad in 2008 led the Gujarat police to consider setting up surveillance systems not just in Ahmedabad, according to Scroll.in, but in many cities including Surat. Terror attacks in Mumbai in 2008 and at the Delhi High Court in 2011 lent momentum to surveillance efforts, as did international responses to terror, such as the United Kingdom’s intensive surveillance efforts in response to the 2005 bombings in London. The UK’s security system has become so comprehensive that Londoners are caught on camera over 300 times a day on average. The UK’s CCTV systems cost over £500 million between 2008 and 2012, and only one crime has been solved in London per 1,000 cameras each year, according to 2008 Metropolitan Police figures.

However, citizens in London may feel safer in their surveillance state knowing that the Home Office of the United Kingdom regulates how CCTV systems are used to ensure that cameras are being used to protect and not to spy. The UK’s Surveillance Camera Code of Practice outlines a thorough system of safeguards that make CCTV implementation less open to abuse. India currently has no comparable regulation.

The combination of government worries about terrorism and business owners’ desire to prevent crime led to Surat’s unique PPP, journalist Saurav Datta’s article in Scroll.in continues. Though the Surat Municipal Corporation invested Rs 2 crore, business leaders demonstrated their support for the surveillance system by donating the remaining Rs 10 crore required to build the first phase. Phase II will cost Rs 21 crore, with the state government investing Rs 3 crore and business groups donating the other Rs 18 crore. This finance model demonstrates both public and private support for the CCTV system.
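As a quick sanity check on these figures, the short Python snippet below tallies the reported phase costs; the per-member count in the last line is only an illustrative inference from the Rs 25 lakh one-time fee, not a number reported in the article.

```python
LAKH = 100_000
CRORE = 100 * LAKH

# Phase I: Surat Municipal Corporation investment plus business donations
phase1 = 2 * CRORE + 10 * CRORE          # Rs 12 crore in total
# Phase II: state government investment plus business donations
phase2 = 3 * CRORE + 18 * CRORE          # Rs 21 crore, matching the reported cost

# If the whole Phase I business share came from Rs 25 lakh one-time fees,
# it would correspond to this many contributing members (an inference only).
implied_members = (10 * CRORE) // (25 * LAKH)

print(phase1 // CRORE, phase2 // CRORE, implied_members)  # -> 12 21 40
```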

Why CCTV systems may do more harm than good

Despite hopes that surveillance through CCTV systems may prevent terrorism and crime, evidence suggests that it is not the golden bullet its proponents believe it to be. In the UK, for example, where surveillance is practiced extensively, the number of crimes captured on camera dropped significantly in 2010, because there were so many cameras that combing through all the hours of footage was proving to be an exercise in futility for many officers. According to Saurav Datta’s article on Scroll.in, potential offenders in London either dodge cameras or carry out their acts in full view of them, which detracts from the argument that cameras deter crime. Additionally, prosecutors allege that the CCTV systems are of little value in court, because the quality of the footage is so low that it cannot provide conclusive proof of identities.

A 2008 study in San Francisco showed that surveillance cameras produce only a placebo effect: they do not deter crime, they just move it down the block, away from the cameras. In Los Angeles, more dramatically, there was little evidence that CCTV cameras helped detect crime, because in high-traffic areas the number of cameras and operators required is so high; and because the city’s system was privately funded, the California Research Bureau’s report noted that it was open to exploitation by private interests pursuing their own goals. Surat’s surveillance efforts are largely privately funded too, a vulnerability that could lead to miscarriages of justice if private security contractors were to gain access to security footage.

More evidence of the ineffectiveness of CCTV surveillance comes from the Boston Marathon bombing of 2013 and the attack on the Indian Parliament in 2001. In the case of the Boston bombing, release of CCTV footage to the general public led to rampant and unproductive speculation about the identity of the bomber, which resulted in innocent spectators being unfairly painted with suspicion.

India’s lack of regulation over CCTVs also makes Surat’s new system susceptible to misuse. There is currently no strong legislation that protects citizens filmed on CCTV from having their images exploited or used inappropriately. Only the police will have access to the recordings, Surat officials say, but the police themselves cannot always be trusted to adequately respect the rights of the citizens they are trying to protect.

The Report of the Group of Experts on Privacy acknowledges the lack of regulations on CCTV surveillance, and recommends that CCTV footage be legally protected from abuse. However, the Report notes that regulating CCTV surveillance to the standards of the National Privacy Principles it establishes earlier in the report may not be possible for a number of reasons. First, it will be difficult to limit the quantity of information collected because the cameras are simply recording video of public spaces, and it is unlikely that individuals will be able to access security footage of themselves. However, issues of consent and choice can be addressed by indicating at entryways to monitored spaces that CCTV surveillance is taking place.

Surat is not the first place in India to experiment with mass CCTV surveillance. Goa has mandated surveillance cameras in beach huts to monitor the huts and deter and detect crime. The rollout is slow and ongoing, and some of the penalties the cameras are intended to enforce seem too severe, such as potentially three months in prison for having too many beach chairs. More worryingly, there are still no laws ensuring that the footage will only be used for its proper law-enforcement objectives. Clear oversight is needed in Goa just as it is in Surat.  The Privacy Commissioner outlined by the Report of the Group of Experts could be well suited to overseeing the proper administration of CCTV installations, just as the Commissioner would oversee digital surveillance.

Concerns of privacy and civil liberties appear to have flown out the window in Surat, with little public debate. It is unclear that Surat’s surveillance efforts will achieve any of their desired effects, but without needed safeguards they will present an opportunity for abuse. Perhaps CCTV initiatives need to be subjected to a little bit more scrutiny.

CIS Cybersecurity Series (Part 20) – Saumil Shah

by Purba Sarkar last modified Sep 06, 2014 05:03 AM
CIS interviews Saumil Shah, security expert, as part of the Cybersecurity Series.
“If you look at the evolution of targets, from the 2000s to the present day, the shift has been from the servers to the individual. Back in 2000, the target was always servers. Then as servers started getting harder to crack, the target moved to the applications hosted on the servers, as people started using e-commerce applications even more. Eventually, as they started getting harder to crack, the attacks moved to the user's desktops and the user's browsers, and now to individual user identities and to the digital personas.”

Centre for Internet and Society presents its twentieth installment of the CIS Cybersecurity Series.

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

Saumil Shah is a security expert based in Ahmedabad. He has been working in the field of security and security related software development for more than ten years, with a focus on web security and hacking.

Video

This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

CIS Cybersecurity Series (Part 21) – Gyanak Tsering

by Purba Sarkar last modified Sep 06, 2014 05:08 AM
CIS interviews Gyanak Tsering, Tibetan monk in exile, as part of the Cybersecurity Series.

“I have three mobile phones but I use only one to exchange information to and from Tibet. I don't give that number to anyone and nobody knows about it. High security forces me to use three phones. Usually a mobile phone can be tracked easily in many ways, especially by the network provider but my third mobile phone is not registered so that makes sure that the Chinese government cannot track me. The Chinese have a record of all mobile phone numbers and they can block them at anytime. But my third number cannot be traced and that allows me to communicate freely. This is only for security reasons so that my people in Tibet don't get into trouble.”

Centre for Internet and Society presents its twenty-first installment of the CIS Cybersecurity Series.

The CIS Cybersecurity Series seeks to address hotly debated aspects of cybersecurity and hopes to encourage wider public discourse around the topic.

Gyanak Tsering is a Tibetan monk in exile, studying at Kirti Monastery, Dharamshala. He came to India in 1999, and has been using the internet and mobile phone technology, since 2008, to securely transfer information to and from Tibet. Tsering adds a new perspective to the cybersecurity debate and explains how his personal security is interlinked with internet security and mobile phone security.

Video

This work was carried out as part of the Cyber Stewards Network with aid of a grant from the International Development Research Centre, Ottawa, Canada.

Code of Civil Procedure

by Prasad Krishna last modified Sep 06, 2014 03:05 PM

ZIP archive icon Code of Civil Procedure and Code of Criminal Procedure.zip — ZIP archive, 2849 kB (2918196 bytes)

Freedom of Expression

by Prasad Krishna last modified Sep 06, 2014 03:25 PM

ZIP archive icon FREEDOM OF EXPRESSION CASES.zip — ZIP archive, 443 kB (454516 bytes)

Identity Cases

by Prasad Krishna last modified Sep 06, 2014 03:27 PM

ZIP archive icon IDENTITY_CASES.zip — ZIP archive, 897 kB (919034 bytes)

National Security Cases

by Prasad Krishna last modified Sep 06, 2014 03:30 PM

ZIP archive icon NATIONAL SECURITY CASES.zip — ZIP archive, 1482 kB (1517572 bytes)

Consumer Protection

by Prasad Krishna last modified Sep 07, 2014 03:58 AM

ZIP archive icon CONSUMER PROTECTION.zip — ZIP archive, 10 kB (10698 bytes)

Transparency and Privacy

by Prasad Krishna last modified Sep 07, 2014 04:05 AM

ZIP archive icon TRANSPARENCY AND PRIVACY.zip — ZIP archive, 2063 kB (2113510 bytes)

Healthcare

by Prasad Krishna last modified Sep 07, 2014 04:09 AM

ZIP archive icon HEALTHCARE.zip — ZIP archive, 1701 kB (1742100 bytes)

Telecom Cases

by Prasad Krishna last modified Sep 08, 2014 02:20 AM

ZIP archive icon TELECOM CHAPTER.zip — ZIP archive, 661 kB (677745 bytes)

Zero Draft of Content Removal Best Practices White Paper

by Jyoti Panday last modified Sep 10, 2014 07:11 AM
The EFF and CIS Intermediary Liability Project aims to create a set of principles for intermediary liability in consultation with groups of Internet-focused NGOs and the academic community.

The draft paper has been created to frame the discussion and will be made available for public comments and feedback. The draft document and the views represented here are not representative of the positions of the organisations involved in the drafting.

http://tinyurl.com/k2u83ya

3 September  2014

Introduction

The purpose of this white paper is to frame the discussion at several meetings between groups of Internet-focused NGOs that will lead to the creation of a set of principles for intermediary liability.

The principles that develop from this white paper are intended as a civil society contribution to help guide companies, regulators and courts, as they continue to build out the legal landscape in which online intermediaries operate. One aim of these principles is to move towards greater consistency with regards to the laws that apply to intermediaries and their application in practice.

There are three general approaches to intermediary liability that have been discussed in much of the recent work in this area, including CDT’s 2012 report, “Shielding the Messengers: Protecting Platforms for Expression and Innovation,” which divides approaches to intermediary liability into three models: 1. Expansive Protections Against Liability for Intermediaries, 2. Conditional Safe Harbor from Liability, 3. Blanket or Strict Liability for Intermediaries.[1]

This white paper argues in the alternative that (a) the “expansive protections against liability” model is preferable, but likely not attainable given the current state of play in the legal and policy space; (b) the white paper therefore supports “conditional safe harbor from liability” operating via a ‘notice-to-notice’ regime if possible, and a ‘notice and action’ regime if ‘notice-to-notice’ is deemed impossible; and (c) all of the other principles discussed in this white paper should apply to whatever model for intermediary liability is adopted, unless those principles are facially incompatible with the model that is finally adopted.

As further general background, this white paper works from the position that there are three general types of online intermediaries: Internet Service Providers (ISPs), search engines, and social networks. As outlined in the recent draft UNESCO Report (from which this white paper draws extensively):

“With many kinds of companies operating many kinds of products and services, it is important to clarify what constitutes an intermediary. In a 2010 report, the Organization for Economic Co-operation and Development (OECD) explains that Internet intermediaries “bring together or facilitate transactions between third parties on the Internet. They give access to, host, transmit and index content, products and services originated by third parties on the Internet or provide Internet-based services to third parties.”

Most definitions of intermediaries explicitly exclude content producers. The freedom of expression advocacy group Article 19 distinguishes intermediaries from “those individuals or organizations who are responsible for producing information in the first place and posting it online.”  Similarly, the Center for Democracy and Technology explains that “these entities facilitate access to content created by others.”  The OECD emphasizes “their role as ‘pure’ intermediaries between third parties,” excluding “activities where service providers give access to, host, transmit or index content or services that they themselves originate.”  These views are endorsed in some laws and court rulings.  In other words, publishers and other media that create and disseminate original content are not intermediaries. Examples of such media entities include a news website that publishes articles written and edited by its staff, or a digital video subscription service that hires people to produce videos and disseminates them to subscribers.

For the purpose of this case study we will maintain that intermediaries offer services that host, index, or facilitate the transmission and sharing of content created by others. For example, Internet Service Providers (ISPs) connect a user’s device, whether it is a laptop, a mobile phone or something else, to the network of networks known as the Internet. Once a user is connected to the Internet, search engines make a portion of the World Wide Web accessible by allowing individuals to search their database. Search engines are often an essential go-between between websites and Internet users. Social networks connect individual Internet users by allowing them to exchange messages, photos, videos, as well as by allowing them to post content to their network of contacts, or the public at large. Web hosting providers, in turn, make it possible for websites to be published and to be accessed online.”[2]

General Principles for ISP Governance - Content Removals

The discussion that follows below outlines nine principles to guide companies, government, and civil society in the development of best practices related to the regulation of online content through intermediaries, as norms, policies, and laws develop in the coming years. The nine principles are: Transparency, Consistency, Clarity, Mindful Community Policy Making, Necessity and Proportionality in Content Restrictions, Privacy, Access to Remedy, Accountability, and Due Process in both Legal and Private Enforcement. Each principle contains subsections that expand upon the theme of the principle to cover more specific issues related to the rights and responsibilities of online intermediaries, government, civil society, and users.

Principle I: Transparency

“Transparency enables users’ right to privacy and right to freedom of expression. Transparency of laws, policies, practices, decisions, rationale, and outcomes related to privacy and restrictions allow users to make informed choices with respect to their actions and speech online. As such - both governments and companies have a responsibility in ensuring that the public is informed through transparency initiatives.” [3]

Government Transparency

  • In general, governments should publish transparency reports:

As part of the democratic process, the citizens of each country have a right to know how their government is applying its laws, and a right to provide feedback about the government’s legal interpretations of its laws. Thus, all governments should be required to publish online transparency reports that provide information about all requests issued by any branch or agency of government for the removal or restriction of online content. Further, governments should allow for the submission of comments and suggestions via a webform hosted on the same webpage where that government’s transparency report is hosted. There should also be some legal mechanism that requires the government to look at the feedback provided by its citizens, ensure that relevant feedback is passed along to legislative bodies, and provide for action to be taken on the citizen-provided feedback where appropriate. Finally, and where possible, the raw data that constitutes each government’s transparency report should be made available online, for free, in a common file format such as .csv, so that civil society may have easy access to it for research purposes. (A minimal sketch of how researchers might work with such a dataset follows this list of recommendations.)

  • Governments should be more transparent about content orders that they impose on ISPs
    The legislative process proceeds most effectively when the government knows how the laws that it creates are applied in practice and is able to receive feedback from the public about how those laws should change further, or remain the same. Relatedly, regulation of the Internet is most effective when the legislative and judicial branches are aware of what the other is doing. For all of these reasons, governments should publish information about all of the court orders and executive requests for content removals that they send to online intermediaries. Publishing all of this information in one place necessarily requires that some single entity within the government collects the information, which will have the benefits of giving the government a holistic view of how it is regulating the internet, encouraging dialogue between different branches of government about how best to create and enforce internet content regulation, and encouraging dialogue between the government and its citizens about the laws that govern internet content and their application.
  • Governments should make the compliance requirements they impose on ISPs public
    Each government should maintain a public website that publishes as complete a picture as possible of the content removal requests made by any branch of that government, including the judicial branch. The availability of a public website of this type will further many of the goals and objectives discussed elsewhere in this section. The website should be biased towards high levels of detail about each request and towards disclosure that requests were made, subject only to limited exceptions for compelling public policy reasons, where the disclosure bias conflicts directly with another law, or where disclosure would reveal a user’s PII. The information should be published periodically, ideally more than once a year. The general principle should be: the more information made available, the better. On the same website where a government publishes its ‘Transparency Report,’ that government should attempt to provide a plain-language description of its various laws related to online content, to provide users notice about what content is lawful vs. unlawful, as well as to show how the laws that it enacts in the Internet space fit together. Further, and as discussed in section “b,” infra, government should provide citizens with an online feedback mechanism so that they may participate in the legislative process as it applies to online content.
  • Governments should give their citizens a way to provide input on these policies
    Private citizens should have the right to provide feedback on the balancing between their civil liberties and other public policies such as security that their government engages in on their behalf. If and when these policies and the compliance requirements they impose on online intermediaries are made publicly available online, there should also be a feedback mechanism built into the site where this information is published. This public feedback mechanism could take a number of different forms, like, for example, a webform that allowed users to indicate their level of satisfaction with prevailing policy choices by choosing amongst several radio buttons, while also providing open text fields to allow the user to submit clarifying comments and specific suggestions. In order to be effective, this online feedback mechanism would have to be accompanied by some sort of legal and budgetary apparatus that would ensure that the feedback was monitored and given some minimum level of deference in the discussions and meetings that led to new policies being created.
  • Government should meet users concerned about its content policies in the online domain. Internet users, as citizens of both the internet and their country of origin, have a natural interest in defining and defending their civil liberties online; government should meet them there to extend the democratic process to the Internet. Denying Internet users a voice in the policymaking processes that determine their rights undermines government credibility and negatively influences users’ ability to freely share information online. As such, content policies should be posted in general terms online and users should have the ability to provide input on those policies online.
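
To make the raw-data recommendation above concrete, the following is a minimal sketch in Python, assuming a purely hypothetical CSV layout (the file name and the “agency” and “action_taken” columns are illustrative assumptions, not any actual government schema), of how a researcher might aggregate a published removal-request dataset:

    import csv
    from collections import Counter

    # Hypothetical column names; a real transparency-report dataset would define its own schema.
    def summarise_requests(path):
        by_agency = Counter()
        by_outcome = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                by_agency[row["agency"]] += 1          # which body asked for the removal
                by_outcome[row["action_taken"]] += 1   # e.g. removed / partially removed / rejected
        return by_agency, by_outcome

    if __name__ == "__main__":
        agencies, outcomes = summarise_requests("removal_requests_2014.csv")
        print("Requests by agency:", agencies.most_common(5))
        print("Outcomes:", dict(outcomes))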

    ISP Transparency
    “The transparency practices of a company impact users’ freedom of expression by providing insight into the scope of restriction that is taking place in a specific jurisdiction. Key areas of transparency for companies include: specific restrictions, aggregate numbers related to restrictions, company imposed regulations on content, and transparency of applicable law and regulation that the service provider must abide by.”[4]

    “Disclosure by service providers of notices received and actions taken can provide an important check against abuse. In addition to providing valuable data for assessing the value and effectiveness of a N&A system, creating the expectation that notices will be disclosed may help deter fraudulent or otherwise unjustified notices. In contrast, without transparency, Internet users may remain unaware that content they have posted or searched for has been removed pursuant to a notice of alleged illegality. Requiring notices to be submitted to a central publication site would provide the most benefit, enabling patterns of poor quality or abusive notices to be readily exposed.”[5] Therefore, ISPs at all levels should publish transparency reports that include:

    • Government Requests

    All requests from government agencies and courts should be published in a periodic transparency report, accessible on the intermediary’s website, that publishes information about the requests the intermediary received and what the intermediary did with them in the highest level of detail that is legally possible. The more information that is provided about each request, the better the understanding that the public will have about how laws that affect their rights online are being applied. That said, steps should be taken to prevent the disclosure of personal information in relation to the publication of transparency reports. Beyond redaction of personal information, however, the maximum amount of information about each request should be published, subject as well to the (ideally minimal) restrictions imposed by applicable law. A thorough Transparency Report published by an ISP or online intermediary should include information about the following categories of requests (a minimal data-model sketch of these categories follows the list below):

  • Police and/or Executive Requests
    This category includes all requests to the intermediary from an agency that is wholly a part of the national government; from police departments, to intelligence agencies, to school boards from small towns. Surfacing information about all requests from any part of the government helps to avoid corruption and/or inappropriate exercises of governmental power by reminding all government officials, regardless of their rank or seniority, that information about the requests they submit to online intermediaries is subject to public scrutiny.
  • Court Orders
    This category includes all orders issued by courts and signed by a judicial officer. It can include ex-parte orders, default judgments, court orders directed at an online intermediary, or court orders directed at a third party presented to the intermediary as evidence in support of a removal request. To the extent legally possible, detailed information should be published about these court orders, detailing the type of court order each request was, its constituent elements, and the action(s) that the intermediary took in response to it. All personally identifying information should be redacted from any court orders that are published by the intermediary as part of a transparency report before publication.
  • First Party
    Information about court orders should be further broken down into two groups; first party and third party. First party court orders are orders directed at the online intermediary in an adversarial proceeding to which the online intermediary was a party.
  • Third Party
    As mentioned above, ‘third party’ refers to court orders that are not directed at the online intermediary, but rather a third party such as an individual user who posted an allegedly defamatory remark on the intermediary’s platform. If the user who obtains a court order approaches an online intermediary seeking removal of content with a court order directed at the poster of, say, the defamatory content, and the intermediary decides to remove the content in response to the request, the online intermediary that decided to perform the takedown should publish a record of that removal. To be accepted by an intermediary, third party court orders should be issued by a court of appropriate jurisdiction after an adversarial legal proceeding, contain a certified and specific statement that certain content is unlawful, and specifically identify the content that the court has found to be unlawful, by specific, permalinked URL where possible.
  • This type of court order should be broken out separately from court orders directed at the applicable online intermediary in companies’ transparency reports, because merely providing aggregate numbers that do not distinguish between the two types gives users an inaccurate impression that a government is attempting to censor more content than it actually is. The idea of including first party court orders to remove content as a subcategory of ‘government requests’ is that a government’s judiciary speaks on behalf of the government, making determinations about what is permitted under the laws of that country. This analogy does not hold for court orders directed at third parties: when the court made its determination of legality on the content in question, it did not contemplate that the intermediary would remove the content. As such, the court likely did not weigh the relevant public interest and policy factors, including the importance of freedom of expression or the precedential value of its decision. Therefore, the determination does not fairly reflect an attempt by the government to censor content and should not be considered as such.

    Instead, and especially considering that these third party court orders may be the basis for a number of content removals, third party court orders should be counted separately and presented with some published explanation in the company’s transparency report as to what they are and why the company has decided to remove content pursuant to its receipt of one.

    Private-Party Requests
    Private-party requests are requests to remove content that are not issued by a government agency or accompanied by a court order. Some examples of private party requests include copyright complaints submitted pursuant to the Digital Millennium Copyright Act or complaints based on the laws of specific countries, such as laws banning holocaust denial in Germany.

    Policy/TOS Enforcement
    To give users a complete picture of the content that is being removed from the platforms that they use, corporate transparency reports should also provide information about the content that the intermediary removes pursuant to its own policies or terms of service, though there may not be a legal requirement to do so.

    User Data Requests
    While this white paper is squarely focused on liability for content posted online and best practices for deciding when and how content should be removed from online services, corporate transparency reports should also provide information about requests for user data from executive agencies, courts, and others.
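
As a purely illustrative sketch of the reporting taxonomy above (the class and field names are assumptions, not an existing schema or standard), an intermediary might model its transparency-report records along these lines, keeping first-party and third-party court orders separate as recommended:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class RequestCategory(Enum):
        EXECUTIVE = "police/executive request"
        COURT_ORDER_FIRST_PARTY = "court order directed at the intermediary"
        COURT_ORDER_THIRD_PARTY = "court order directed at a third party"
        PRIVATE_PARTY = "private-party request (e.g. a copyright notice)"
        TOS_ENFORCEMENT = "removal under the intermediary's own policies"
        USER_DATA = "request for user data"

    @dataclass
    class RemovalRecord:
        category: RequestCategory
        country: str
        items_requested: int
        items_removed: int
        legal_basis: Optional[str] = None  # statute or policy cited, if any

        def published_entry(self) -> dict:
            # Only aggregate, non-identifying fields are published; personal
            # information never enters the public report.
            return {
                "category": self.category.value,
                "country": self.country,
                "items_requested": self.items_requested,
                "items_removed": self.items_removed,
                "legal_basis": self.legal_basis,
            }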

    Principle II: Consistency

  • Legal requirements for ISPs should be consistent, based on a global legal framework that establishes baseline limitations on legal immunity
    Broad variation amongst the legal regimes of the countries in which online intermediaries operate increases compliance costs for companies and may discourage them from offering their services in some countries due to the high costs of localized compliance. Reducing the number of speech platforms that citizens have access to limits their ability to express themselves. Therefore, to ensure that citizens of a particular country have access to a robust range of speech platforms, each country should work to harmonize the requirements that it imposes upon online intermediaries with the requirements of other countries. While a certain degree of variation between what is permitted in one country as compared to another is inevitable, all countries should agree on certain limitations to intermediary liability, such as the following:
  • Conduits should be immune from claims about content that they neither created nor modified
    As noted in the 2011 Joint Declaration on Freedom of Expression and the Internet, “[n]o one who simply provides technical Internet services such as providing access, or searching for, or transmission or caching of information, should be liable for content generated by others, which is disseminated using those services, as long as they do not specifically intervene in that content or refuse to obey a court order to remove that content, where they have the capacity to do so (‘mere conduit principle’).”[6]
  • Court orders should be required for the removal of content that is related to speech, such as defamation removal requests
    In the Center for Democracy and Technology’s Additional Responses Regarding Notice and Action, CDT outlines the case against allowing notice and action procedures to apply to defamation removal requests. They write:
  • “Uniform notice-and-action procedures should not apply horizontally to all types of illegal content. In particular, CDT believes notice-and-takedown is inappropriate for defamation and other areas of law requiring complex legal and factual questions that make private notices especially subject to abuse. Blocking or removing content on the basis of mere allegations of illegality raises serious concerns for free expression and access to information. Hosts are likely to err on the side of caution and comply with most if not all notices they receive, because evaluating notices is burdensome and declining to comply may jeopardize their protection from liability. The risk of legal content being taken down is especially high in cases where assessing the illegality of the content would require detailed factual analysis and careful legal judgments that balance competing fundamental rights and interests. Intermediaries will be extremely reluctant to exercise their own judgment when the legal issues are unclear, and it will be easy for any party submitting a notice to claim a good faith belief that the content in question is unlawful. In short, the murkier the legal analysis, the greater the potential for abuse.

    To reduce this risk, removal of or disablement of access to content based on unadjudicated allegations of illegality (i.e., notices from private parties) should be limited to cases where the content at issue is manifestly illegal – and then only with necessary safeguards against abuse as described above.

    CDT believes that online free expression is best served by narrowing what is considered manifestly illegal and subject to takedown upon private notice. With proper safeguards against abuse, for example, notice-and-action can be an appropriate policy for addressing online copyright infringement. Copyright is an area of law where there is reasonable international consensus regarding what is illegal and where much infringement is straightforward. There can be difficult questions at the margins – for example concerning the applicability of limitations and exceptions such as “fair use” – but much online infringement is not disputable.

    Quite different considerations apply to the extension of notice-and-action procedures to allegations of defamation or other illegal content. Other areas of law, including defamation, routinely require far more difficult factual and legal determinations. There is greater potential for abuse of notice-and-action where illegality is less manifest and more disputable. If private notices are sufficient to have allegedly defamatory content removed, for example, any person unhappy about something that has been written about him or her would have the ability and incentive to make an allegation of defamation, creating a significant potential for unjustified notices that harm free expression. This and other areas where illegality is more disputable require different approaches to notice and action. In the case of defamation, CDT believes “notice” for purposes of removing or disabling access to content should come only from a competent court after full adjudication.

    In cases where it would be inappropriate to remove or disable access to content based on untested allegations of illegality, service providers receiving allegations of illegal content may be able to take alternative actions in response to notices. Forwarding notices to the content provider or preserving data necessary to facilitate the initiation of legal proceedings, for example, can pose less risk to content providers’ free expression rights, provided there is sufficient process to allow the content provider to challenge the allegations and assert his or her rights, including the right to speak anonymously.”[7]

    Principle III: Clarity

  • All notices that request the removal of content should be clear and meet certain minimum requirements
    The Center for Democracy and Technology outlined requirements for clear notices in a notice and action system in response to a European Commission public comment period on a revised notice and action regime.[8] A minimal sketch of how a host might check a notice against these requirements follows the quoted list. They write:
  • “Notices should include the following features:

    1. Specificity. Notices should be required to specify the exact location of the material – such as a specific URL – in order to be valid. This is perhaps the most important requirement, in that it allows hosts to take targeted action against identified illegal material without having to engage in burdensome search or monitoring. Notices that demand the removal of particular content wherever it appears on a site without specifying any location(s) are not sufficiently precise to enable targeted action.
    2. Description of alleged illegal content. Notices should be required to include a detailed description of the specific content alleged to be illegal and to make specific reference to the law allegedly being violated. In the case of copyright, the notice should identify the specific work or works claimed to be infringed.
    3. Contact details. Notices should be required to contain contact information for the sender. This facilitates assessment of notices’ validity, feedback to senders regarding invalid notices, sanctions for abusive notices, and communication or legal action between the sending party and the poster of the material in question.
    4. Standing: Notices should be issued only by or on behalf of the party harmed by the content. For copyright, this would be the rightsholder or an agent acting on the rightsholderʼs behalf. For child sexual abuse images, a suitable issuer of notice would be a law enforcement agency or a child abuse hotline with expertise in assessing such content. For terrorism content, only government agencies would have standing to submit notice.
    5. Certification: A sender of a notice should be required to attest under legal penalty to a good-faith belief that the content being complained of is in fact illegal; that the information contained in the notice is accurate; and, if applicable, that the sender either is the harmed party or is authorized to act on behalf of the harmed party. This kind of formal certification requirement signals to notice-senders that they should view misrepresentation or inaccuracies on notices as akin to making false or inaccurate statements to a court or administrative body.
    6. Consideration of limitations, exceptions, and defenses: Senders should be required to certify that they have considered in good faith whether any limitations, exceptions, or defenses apply to the material in question. This is particularly relevant for copyright and other areas of law in which exceptions are specifically described in law.
    7. An effective appeal and counter-notice mechanism. A notice-and-action regime should include counter-notice procedures so that content providers can contest mistaken and abusive notices and have their content reinstated if its removal was wrongful.
    8. Penalties for unjustified notices. Senders of erroneous or abusive notices should face possible sanctions. In the US, senders may face penalties for knowingly misrepresenting that content is infringing, but the standard for “knowingly misrepresenting” is quite high and the provision has rarely been invoked.  A better approach might be to use a negligence standard, whereby a sender could be held liable for damages or attorneys’ fees for making negligent misrepresentations (or for repeatedly making negligent misrepresentations). In addition, the notice-and-action system should allow content hosts to ignore notices from senders with an established record of sending erroneous or abusive notices or allow them to demand more information or assurances in notices from those who have in the past submitted erroneous notices. (For example, hosts might be deemed within the safe harbor if they require repeat abusers to specifically certify that they have actually examined the alleged infringing content before sending a notice).”[9]
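
The following is a minimal sketch, using hypothetical field and function names, of how a host might check an incoming notice against the minimum features quoted above before acting on it. Items 7 and 8 concern the surrounding regime (appeals and penalties) rather than the notice itself, so they do not appear as fields:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RemovalNotice:
        urls: List[str]              # 1. exact location(s) of the material
        description: str             # 2. description of the allegedly illegal content
        law_cited: str               # 2. the law allegedly being violated
        sender_contact: str          # 3. contact details of the sender
        sender_standing: str         # 4. e.g. "rightsholder", "agent", "law enforcement"
        certified_good_faith: bool   # 5. attestation under legal penalty
        exceptions_considered: bool  # 6. limitations, exceptions and defences considered

    def notice_defects(notice: RemovalNotice) -> List[str]:
        # Returns a list of defects; an empty list means the notice meets the
        # minimum bar and can proceed to substantive review.
        defects = []
        if not notice.urls:
            defects.append("no specific URL identified")
        if not (notice.description and notice.law_cited):
            defects.append("content or legal basis not described")
        if not notice.sender_contact:
            defects.append("no contact details for the sender")
        if not notice.sender_standing:
            defects.append("sender's standing not stated")
        if not notice.certified_good_faith:
            defects.append("no good-faith certification")
        if not notice.exceptions_considered:
            defects.append("no consideration of limitations, exceptions, or defences")
        return defects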
  • All ISPs should publish their content removal policies online and keep them current as they evolve
    The UNESCO report states, by way of background, that “[c]ontent restriction practices based on Terms of Service are opaque. How companies remove content based on Terms of Service violations is more opaque than their handling of content removals based on requests from authorized authorities. When content is removed from a platform based on company policy, [our] research found that all companies provide a generic notice of this restriction to the user, but do not provide the reason for the restriction. Furthermore, most companies do not provide notice to the public that the content has been removed. In addition, companies are inconsistently open about removal of accounts and their reasons for doing so.”[10]
  • There are legitimate reasons why an ISP may want to have policies that permit less content, and a narrower range of content, than is technically permitted under the law, such as maintaining a product that appeals to families. However, if a company is going to go beyond the minimal legal requirements in terms of content that it must restrict, the company should have clear policies that are published online and kept up-to-date to provide its users notice of what content is and is not permitted on the company’s platform. Notice to the user about the types of content that are permitted encourages her to speak freely and helps her to understand why content that she posted was taken down if it must be taken down for violating a company policy.

  • When content is removed, a clear notice should be provided in the product that explains in simple terms that content has been removed and why
    This subsection works in conjunction with “ii,” above. If content is removed for any reason, either pursuant to a legal request or because of a violation of company policy, a user should be able to learn that content was removed if they try to access it. Requiring an on-screen message that explains that content has been removed and why is the post-takedown accompaniment to the pre-takedown published online policy of the online intermediary: both work together to show the user what types of content are and are not permitted on each online platform. Explaining to users why content has been removed in sufficient detail may also spark their curiosity as to the laws or policies that caused the content to be removed, resulting in increased civic engagement in the internet law and policy space, and a community of citizens that demands that the companies and governments it interacts with are more responsive to how it thinks content regulation should work in the online context.
  • The UNESCO report provides the following example of how Google provides notice to its users when a search result is removed, which includes a link to a page hosted by Chilling Effects:[11]

    “When search results are removed in response to government or copyright holder demands, a notice describing the number of results removed and the reasons for their removal is displayed to users (see screenshot below) and a copy of the request is sent to the independent non-profit organization ChillingEffects.org, which archives and publishes the request.  When possible the company also contacts the website’s owners.”[12]

    This is an example of the message that is displayed when Google removes a search result pursuant to a copyright complaint.[13]

  • Requirements that governments impose on intermediaries should be as clear and unambiguous as possible
    Imposing liability on internet intermediaries without providing clear guidance as to the precise type of content that is not lawful and the precise requirements of a legally sufficient notice encourages intermediaries to over-remove content. As Article 19 noted in its 2013 report on intermediary liability:
  • “International bodies have also criticized ‘notice and takedown’ procedures as they lack a clear legal basis. For example, the 2011 OSCE report on Freedom of Expression on the internet highlighted that: Liability provisions for service providers are not always clear and complex notice and takedown provisions exist for content removal from the Internet within a number of participating States. Approximately 30 participating States have laws based on the EU E-Commerce Directive. However, the EU Directive provisions rather than aligning state level policies, created differences in interpretation during the national implementation process. These differences emerged once the national courts applied the provisions.

    These procedures have also been criticized for being unfair. Rather than obtaining a court order requiring the host to remove unlawful material (which, in principle at least, would involve an independent judicial determination that the material is indeed unlawful), hosts are required to act merely on the say-so of a private party or public body. This is problematic because hosts tend to err on the side of caution and therefore take down material that may be perfectly legitimate and lawful. For example, in his report, the UN Special Rapporteur on freedom of expression noted:

    [W]hile a notice-and-takedown system is one way to prevent intermediaries from actively engaging in or encouraging unlawful behavior on their services, it is subject to abuse by both State and private actors. Users who are notified by the service provider that their content has been flagged as unlawful often has little recourse or few resources to challenge the takedown. Moreover, given that intermediaries may still be held financially or in some cases criminally liable if they do not remove content upon receipt of notification by users regarding unlawful content, they are inclined to err on the side of safety by overcensoring potentially illegal content. Lack of transparency in the intermediaries’ decision-making process also often obscures discriminatory practices or political pressure affecting the companies’ decisions. Furthermore, intermediaries, as private entities, are not best placed to make the determination of whether a particular content is illegal, which requires careful balancing of competing interests and consideration of defenses.”[14]

    Considering the above, if liability is to be imposed on intermediaries for certain types of unlawful content, the legal requirements that outline what is unlawful content and how to report it must be clear. Lack of clarity in this area will result in over-removal of content by rational intermediaries that want to minimize their legal exposure and compliance costs. Over-removal of content is at odds with the goals of freedom of expression.

    The UNESCO Report made a similar recommendation, stating that; “Governments need to ensure that legal frameworks and company policies are in place to address issues arising out of intermediary liability. These legal frameworks and policies should be contextually adapted and be consistent with a human rights framework and a commitment to due process and fair dealing. Legal and regulatory frameworks should also be precise and grounded in a clear understanding of the technology they are meant to address, removing legal uncertainty that would provide opportunity for abuse.”[15]

    Similarly, the 2011 Joint Declaration on Freedom of Expression and the Internet states:

    “Consideration should be given to insulating fully other intermediaries, including those mentioned in the preamble, from liability for content generated by others under the same conditions as in paragraph 2(a). At a minimum, intermediaries should not be required to monitor user-generated content and should not be subject to extrajudicial content takedown rules which fail to provide sufficient protection for freedom of expression (which is the case with many of the ‘notice and takedown’ rules currently being applied).”[16]

    Principle IV: Mindful Community Policy Making

    “Laws and regulations as well as corporate policies are more likely to be compatible with freedom of expression if they are developed in consultation with all affected stakeholders – particularly those whose free expression rights are known to be at risk.”[17] To be effective, policies should be created through a multi-stakeholder consultation process that gives voice to the communities most at risk of being targeted for the information they share online. Further, both companies and governments should embed an ‘outreach to at-risk communities’ step into both legislative and policymaking processes to be especially sure that their voices are heard. Finally, civil society should work to ensure that all relevant stakeholders have a voice in both the creation and revision of policies that affect online intermediaries. In the context of corporate policymaking, civil society can use strategies from activist investing to encourage investors to make the human rights and freedom of expression policies of Internet companies part of the calculus that investors use to decide where to place their money. Considering the above:

  • Human rights impact assessments, considering the impact of the proposed law or policy on various communities from the perspectives of gender, sexuality, sexual preference, ethnicity, religion, and freedom of expression, should be required before:
    1. New laws are written that govern content issues affecting ISPs or conduct that occurs primarily online. “Protection of online freedom of expression will be strengthened if governments carry out human rights impact assessments to determine how proposed laws or regulations will affect Internet users’ freedom of expression domestically and globally.”[18]
    2. Intermediaries enact new policies. “Protection of online freedom of expression will be strengthened if companies carry out human rights impact assessments to determine how their policies, practices, and business operations affect Internet users’ freedom of expression. This assessment process should be anchored in robust engagement with stakeholders whose freedom of expression rights are at greatest risk online, as well as stakeholders who harbor concerns about other human rights affected by online speech.”[19]
  • Multi-stakeholder consultation processes should precede any new legislation that will apply to content issues affecting online intermediaries or online conduct
    “Laws and regulations as well as corporate policies are more likely to be compatible with freedom of expression if they are developed in consultation with all affected stakeholders – particularly those whose free expression rights are known to be at risk.”[20]
  • Civil society and public interest groups should encourage responsible investment in companies who implement policies that reflect best practices for internet intermediaries
    “Over the past thirty years, responsible investors have played a powerful role in incentivizing companies to improve environmental sustainability, supply chain labor practices, and respect for human rights of communities where companies physically operate. Responsible investors can also play a powerful role in incentivizing companies to improve their policies and practices affecting freedom of expression and privacy by developing metrics and criteria for evaluating companies on these issues in the same way that they evaluate companies on other “environmental, social, and governance” criteria.”[21]
    Principle V: Necessity and Proportionality in Content Restriction

  • Content should only be restricted when there is a legal basis for doing so, or the removal is performed in accordance with a clear, published policy of the ISP
    As CDT outlined in its 2012 intermediary liability report, “[a]ctions required of intermediaries must be narrowly tailored and proportionate, to protect the fundamental rights of Internet users. Any actions that a safe-harbor regime requires intermediaries to take must be evaluated in terms of the principle of proportionality and their impact on Internet users’ fundamental rights, including rights to freedom of expression, access to information, and protection of personal data. Laws that encourage intermediaries to take down or block certain content have the potential to impair online expression or access to information. Such laws must therefore ensure that the actions they call for are proportional to a legitimate aim, no more restrictive than is required for achievement of the aim, and effective for achieving the aim. In particular, intermediary action requirements should be narrowly drawn, targeting specific unlawful content rather than entire websites or other Internet resources that may support both lawful and unlawful uses.”[22]
  • When content must be restricted, it should be restricted in the most minimal way possible (i.e., prefer domain removals to IP-blocking)
    There are a number of different ways that access to content can be restricted. Examples include hard deletion of the content from all of a company’s servers, blocking the download of an app or other software program in a particular country, blocking the content on all IP addresses affiliated with a particular country (“IP-Blocking”), removing the content from a particular domain of a product (i.e., removing a link from the .fr version of a search engine while it remains accessible on the .com version), blocking content from a ‘version’ of an online product that is accessible through a ‘country’ or ‘language’ setting on that product, or some combination of the last three options (i.e., an online product that directs the user to a version of the product based on the country that their IP address is coming from, but where the user can alter a URL or manipulate a drop-down menu to show her a different ‘country version’ of the product, providing access to content that may otherwise be inaccessible).
  • While almost all of the different types of content restrictions described above can be circumvented by technical means such as the use of proxies, IP-cloaking, or Tor, the average internet user does not know that these techniques exist, much less how to use them. Of the different types of content restrictions described above, a domain removal, for example, is easier for an individual user to circumvent than IP-Blocked content, because you only have to change the URL of the product you are using (e.g., to the “.com” version) to see content that has been locally restricted. To get around an IP-block, you would have to be sufficiently savvy to employ a proxy or cloak your true IP address.

    Therefore, the technical means used to restrict access to controversial content has a direct impact on the magnitude of the actual restriction on speech. The more restrictive the technical removal method, the fewer people will have access to that content. To preserve access to lawful content, online intermediaries should choose the least restrictive means of complying with removal requests, especially when the removal request is based on the law of a particular country that makes certain content unlawful that is not unlawful in other countries. Further, when building new products and services, intermediaries should build in removal capabilities that minimally restrict access to controversial content.
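
    A minimal sketch of this ‘least restrictive means’ choice, assuming a simplified and purely hypothetical ordering of restriction methods:

    # Ordered from least to most restrictive reach; method names are illustrative only.
    RESTRICTION_METHODS = [
        "remove from the requesting country's domain version only",
        "block for the product's country/language setting",
        "IP-block for addresses geolocated to the requesting country",
        "global removal from all servers",
    ]

    def choose_restriction(unlawful_everywhere: bool, requesting_country: str) -> str:
        # Illustrative logic only: a request grounded in one country's law should
        # never trigger a measure broader than that country's jurisdiction.
        if unlawful_everywhere:
            return RESTRICTION_METHODS[-1]
        return RESTRICTION_METHODS[0] + " ({})".format(requesting_country)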

  • If content is restricted due to its illegality in a particular country, the geographical scope of the content restriction should be as minimal as possible
    Building on the discussion in “ii,” supra, a user should be able to access content that is lawful in her country even if it is not lawful in another country. Different countries have different laws and it is often difficult for intermediaries to determine how to effectively respond to requests and reconcile the inherent conflicts that result. For example, content that denies the holocaust is illegal in certain countries, but not in others. If an intermediary receives a request to remove content based on the laws of a particular country and determines that it will comply because the content is not lawful in that country, it should not restrict access to the content such that it cannot be accessed by users in other countries where the content is lawful. To respond to a request based on the law of a particular country by blocking access to that content for users around the world, or even users of more than one country, essentially allows for extraterritorial application of the laws of the country that the request came from. While it is preferable to standardize and limit the legal requirements imposed on online intermediaries throughout the world, to the extent that this is not possible, the next-best option is to limit the application of laws that are interpreted to declare certain content unlawful to the users that live in that country. Therefore, intermediaries should choose the technical means of content restriction that is most narrowly tailored to limit the geographical scope and impact of the removal.
  • The ability of conduits (telecommunications/internet service providers) to filter content should be minimized to the extent technically and legally possible
  • The 2011 Joint Declaration on Freedom of Expression and the Internet made the following points about the dangers of allowing filtering technology:

    “Mandatory blocking of entire websites, IP addresses, ports, network protocols or types of uses (such as social networking) is an extreme measure – analogous to banning a newspaper or broadcaster – which can only be justified in accordance with international standards, for example where necessary to protect children against sexual abuse.

    Content filtering systems which are imposed by a government or commercial service provider and which are not end-user controlled are a form of prior censorship and are not justifiable as a restriction on freedom of expression.

    Products designed to facilitate end-user filtering should be required to be accompanied by clear information to end-users about how they work and their potential pitfalls in terms of over-inclusive filtering.”[23]

    In short, filtering at the conduit level is a blunt instrument that should be avoided whenever possible. Similar to how conduits should not be legally responsible for content that they neither host nor modify (the ‘mere conduit’ rule discussed supra), conduits should technically restrict their ability to filter content such that it would be inefficient for government agencies to contact them to have content filtered. Mere conduits are not able to assess the context surrounding the controversial content that they are asked to remove and are therefore not the appropriate party to receive takedown requests. Further, when mere conduits have the technical ability to filter content, they open themselves to pressure from government to exercise that capability. Therefore, mere conduits should limit or not build in the capability to filter content.

  • Notice and notice, or notice and judicial takedown, should be preferred to notice and takedown, which should be preferred to unilateral removal
    Mechanisms for content removal that involve intermediaries acting without any oversight or accountability, or those which only respond to the interests of the party requesting removal, are unlikely to do a very good job at balancing public and private interests. A much better balance is likely to be struck through a mechanism where power is distributed between the parties, and/or where an independent and accountable oversight mechanism exists.
  • Considered in this way, there is a continuum of content removal mechanisms that ranges from those that are the least balanced and accountable to those that are more so. The least accountable is the unilateral removal of content by the intermediary, without legal compulsion, in response to a request received, without affording the uploader of the content the right to be heard or access to remedy.

    Notice and takedown mechanisms fit next along the continuum, provided that they incorporate, as the DMCA attempts to do, an effective appeal and counter-notice mechanism. Where notice and takedown falls short, however, is that the cost and incentive structure is weighted towards removal of content in cases of doubt or dispute, resulting in more content being taken down, and staying down, than would be socially optimal.

    A better balance is likely to be struck by a “notice and notice” regime, which provides strong social incentives for those whose content is reported to be unlawful to remove the content, but does not legally compel them to do so. If legal compulsion is required, a court order must be separately obtained. (A minimal sketch of such a workflow appears at the end of this discussion.)

    Canada is an example of a jurisdiction with a notice and notice regime, though limited to copyright content disputes. Although this regime is now established in legislation, it formalizes a previous voluntary regime, whereby major ISPs would forward copyright infringement notifications received from rightsholders to subscribers, but without removing any content and without releasing subscriber data to the rightsholders absent a court order. Under the new legislation additional record-keeping requirements are imposed on ISPs, but otherwise the essential features of the regime remain unchanged.

    Analysis of data collected during this voluntary regime indicates that it has been effective in changing the behavior of allegedly infringing subscribers.  A 2010 study by the Entertainment Software Association of Canada (ESAC) found that 71% of notice recipients did not infringe again, whereas a similar 2011 study by Canadian ISP Rogers found 68% only received one notice, and 89% received no more than two notices, with only 1 subscriber in 800,000 receiving numerous notices.[24] However, in cases where a subscriber has a strong good faith belief that the notice they received was wrong, there is no risk to them in disregarding the erroneous notice – a feature that does not apply to notice and takedown.

    Another similar way in which public and private interests can be balanced is through a notice and judicial takedown regime, whereby the rightsholder who issues a notice about offending content must have it assessed by an independent judicial (or perhaps administrative) authority before the intermediary will respond by taking the content down.

    An example of this is found in Chile, again limited to the case of copyright.[25] Introduced in 2010 in response to Chile's Free Trade Agreement with the United States, the system is broadly similar to the DMCA, with the critical difference that intermediaries are not required to take material down in order to benefit from a liability safe harbor until such time as a court order for removal of the material is made. Responsibility for evaluating the copyright claims made is therefore shifted from intermediaries onto the courts.

    Although this requirement does impose a burden on the rightsholder, it serves a purpose by disincentivizing the issuance of automated or otherwise unjustified notices that are more likely to restrict or chill freedom of expression. In cases where there is no serious dispute about the legality of the content, it is unlikely that the lawsuit would be defended. In any case, the legislation authorizes the court to issue a preliminary injunction on an ex parte basis, on condition of payment of a bond.
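    Returning to the notice and notice model described above, the following is a minimal, illustrative sketch (in Python) of the workflow it implies on the ISP side. The class and method names are assumptions for illustration only, not a description of any ISP's actual system; the essential points it encodes are that the notice is forwarded to the subscriber, no content is removed, a record is kept, and subscriber data is released only against a court order.

        from dataclasses import dataclass, field
        from datetime import datetime
        from typing import List, Optional

        # Illustrative sketch only: names and fields are assumptions, not any ISP's real system.

        @dataclass
        class InfringementNotice:
            complainant: str      # e.g. a rightsholder
            subscriber_ip: str    # identifies the subscriber internally; never disclosed to the complainant
            work_described: str
            received_at: datetime = field(default_factory=datetime.utcnow)

        @dataclass
        class ForwardingRecord:
            notice: InfringementNotice
            forwarded_at: datetime

        class NoticeAndNoticeISP:
            """Core obligations of a notice and notice regime: forward the notice,
            keep a record, remove nothing, disclose nothing absent a court order."""

            def __init__(self):
                self.records: List[ForwardingRecord] = []   # record-keeping requirement

            def handle_notice(self, notice: InfringementNotice) -> ForwardingRecord:
                # Forward the complaint to the subscriber; the content stays up.
                self._send_to_subscriber(notice)
                # Keep a record so a court can later compel identification if needed.
                record = ForwardingRecord(notice=notice, forwarded_at=datetime.utcnow())
                self.records.append(record)
                return record

            def disclose_subscriber_identity(self, notice: InfringementNotice,
                                             court_order: Optional[str]) -> str:
                # Subscriber data is released only on production of a court order.
                if not court_order:
                    raise PermissionError("No disclosure without a court order")
                return self._lookup_subscriber(notice.subscriber_ip)

            def _send_to_subscriber(self, notice: InfringementNotice) -> None:
                print(f"Forwarding notice from {notice.complainant} to subscriber")

            def _lookup_subscriber(self, ip: str) -> str:
                return f"subscriber-record-for-{ip}"

    Note that the class deliberately has no takedown method: under notice and notice, removal remains the subscriber's decision unless and until a court orders otherwise.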

  • Intermediaries should be allowed to charge for the time and expense associated with processing legal requests
    For an intermediary, it is time consuming and relatively expensive to understand the obligations that each country's legal regime imposes, and to determine accurately how each legal request should be handled. Especially for intermediaries with few resources, such as forum operators or owners of home Wi-Fi networks, the costs associated with being an intermediary can be prohibitive. Therefore, it should be within their rights to charge for their compliance costs if they are either below a certain user threshold or can demonstrate financial necessity in some way.
  • Legal requirements imposed on intermediaries should be a floor, not a ceiling - ISPs can adopt more restrictive policies to more effectively serve their users as long as they have published policies that explain what they are doing
    The Internet has space for a wide range of platforms and applications directed at different communities, with different needs and desires. A social networking site directed at children, for example, may reasonably want policies that are much more restrictive than those of a political discussion board. Therefore, legal requirements that compel intermediaries to take down content should be seen as a ‘floor,’ not a ‘ceiling,’ on the range and quantity of content that those intermediaries may remove. Intermediaries should retain control over their own policies as long as they are transparent about what those policies are, what type of content the intermediary removes, and why they removed certain pieces of content.
  • Principle VI: Privacy

  • It is important to protect the ability of Internet users to speak by narrowing, and making less ambiguous, the range of content that intermediaries can be held liable for, but it is also very important to make users feel comfortable sharing their views by ensuring that their privacy is protected. Protecting the user’s ability to share her views, especially when those views are controversial or have a direct bearing on important political issues, requires that the user can trust the intermediaries that she uses. This concept can be further broken down into three sub-principles:
  • The user’s personal information should be protected to the greatest extent possible given the state of the art in encryption, security, and policy
    Users will be less willing to speak on important topics if they have legitimate concerns that their data may be taken from them. As stated in the UNESCO Report, “[b]ecause of the amount of personal information held by companies and ability to access the same, a company’s practices around collection, access, disclosure, and retention are key. To a large extent a service provider’s privacy practices are influenced by applicable law and operating licenses required by the host government. These can include requirements for service providers to verify subscribers, collect and retain subscriber location data, and cooperate with law enforcement when requested. Outcome: The implications of companies trying to balance a user’s expectation for privacy with a government’s expectation for cooperation can be serious and are inadequately managed in all jurisdictions studied.”[26]
  • Where possible, ISPs should help to preserve the user’s right to speak anonymously
    An important aspect of an Internet user’s ability to exercise her right to free expression online is the ability to speak anonymously. Anonymous speech is one of the great advances of the Internet as a communications medium and should be preserved to the extent possible. As noted by Special Rapporteur Frank La Rue, “[i]n order for individuals to exercise their right to privacy in communications, they must be able to ensure that these remain private, secure and, if they choose, anonymous. Privacy of communications infers that individuals are able to exchange information and ideas in a space that is beyond the reach of other members of society, the private sector, and ultimately the State itself. Security of communications means that individuals should be able to verify that only their intended recipients, without interference or alteration, receive their communications and that the communications they receive are equally free from intrusion. Anonymity of communications is one of the most important advances enabled by the Internet, and allows individuals to express themselves freely without fear of retribution or condemnation.”[27]
  • The user’s PII should never be sold or used without her consent, and she should always know what is being done with it via an easily comprehensible dashboard
    The user’s trust in the online platform that she uses and relies upon is influenced not only by the relationships the intermediary maintains with the government, but also by those it maintains with other commercial entities. A user who feels that her data will be constantly shared with third parties, perhaps without her consent and/or for marketing purposes, will never feel that she is able to freely express her opinion. Therefore, it is the intermediary’s responsibility to ensure that its users know exactly what information it retains about them, who it shares that information with and under what circumstances, and how to change the way their data is shared. All of this information should be available on a dashboard that is comprehensible to the average user, and which gives her the ability to easily modify or withdraw her consent to the way her data is being shared, or to the amount of data, or specific data, that the intermediary retains about her.
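    To make concrete what such a dashboard implies at the data level, the following is a minimal, hypothetical sketch (in Python) of per-purpose consent that the user can inspect and withdraw. The class names, fields, and methods are assumptions for illustration; no intermediary's actual implementation is being described.

        from dataclasses import dataclass, field
        from typing import Dict, List

        # Hypothetical data model for a user-facing privacy dashboard (illustrative only).

        @dataclass
        class SharingGrant:
            recipient: str              # third party receiving the data
            purpose: str                # e.g. "analytics", "advertising"
            data_categories: List[str]
            consented: bool = True

        @dataclass
        class UserPrivacyDashboard:
            user_id: str
            retained_data: Dict[str, str] = field(default_factory=dict)   # category -> description
            grants: List[SharingGrant] = field(default_factory=list)

            def summary(self) -> List[str]:
                """What the intermediary holds about the user and with whom it is shared."""
                lines = [f"Data retained about you: {', '.join(self.retained_data) or 'none'}"]
                for g in self.grants:
                    status = "shared" if g.consented else "sharing withdrawn"
                    lines.append(f"{g.recipient} ({g.purpose}): {', '.join(g.data_categories)} [{status}]")
                return lines

            def withdraw_consent(self, recipient: str, purpose: str) -> None:
                """User-initiated withdrawal: downstream sharing must stop for this grant."""
                for g in self.grants:
                    if g.recipient == recipient and g.purpose == purpose:
                        g.consented = False

            def delete_category(self, category: str) -> None:
                """User asks the intermediary to stop retaining a category of data."""
                self.retained_data.pop(category, None)

    The design point the sketch tries to capture is that consent is recorded per recipient and per purpose, so withdrawal can be as granular as the sharing itself.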
  • Principle VII: Access to Remedy

  • As noted in the UNESCO Report, “Remedy is the third central pillar of the UN Guiding Principles on Business and Human Rights, placing an obligation both on governments and on companies to provide individuals access to effective remedy. This area is where both governments and companies are almost consistently lacking. Across intermediary types, across jurisdictions and across the types of restriction, individuals whose content is restricted and individuals who wish to access such content are offered little or no effective recourse to appeal restriction decisions, whether in response to government orders, third party requests or in accordance with company policy. There are no private grievance or due process mechanisms that are clearly communicated and readily available to all users, or consistently applied.”[28]

  • Any notice and takedown system is subject to abuse, and any company policy that results in the removal of content is subject to mistaken or inaccurate takedowns. Both are substantial problems that can only be remedied if users are able to tell the intermediary when it has improperly removed a specific piece of content, and if the intermediary has the technical and procedural ability to put the content back. However, the technical ability to reinstate content that was improperly removed may conflict with data retention laws. This conflict should be explored in more detail. In general, however, every time content is removed, there should be:

  • A clear mechanism through which users can request reinstatement of content
    When an intermediary decides to remove content, it should be immediately clear to the user that content has been removed and why it was removed (see discussion of in-product notice, supra). If the user disagrees with the content removal decision, there should be an obvious, online method for her to request reinstatement of the content.
  • Reinstatement of content should be technically possible
    When intermediaries (who are subject to intermediary liability) are building new products, they should build the capability to remove content into the product with a high degree of specificity, so as to allow for narrowly tailored content removals when a removal is legally required. Relatedly, all online intermediaries should build into their products the capability to reinstate content while maintaining compliance with data retention laws (a minimal sketch of such a removal record appears after this list).
  • Intermediaries should have policies and procedures in place to handle reinstatement requests
    Between the front end (online mechanism to request reinstatement of content) and the backend (technical ability to reinstate content) is the necessary middle layer, which consists of the intermediary’s internal policies and processes that allow for valid reinstatement requests to be assessed and acted upon. In line with the corporate ‘responsibility to respect’ human rights, and considered along with the human rights principle of ‘access to remedy,’ intermediaries should have a system in place from the time that an online product launches to ensure that reinstatement requests can be made and will be processed quickly and appropriately.
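    The sketch below illustrates one way the front end, middle layer and back end described above could fit together: removals are recorded with enough specificity to be reversed, content is soft-deleted rather than destroyed (subject to whatever data retention law requires), and a reinstatement request passes through an explicit review step. The class and method names are illustrative assumptions, not a description of any intermediary's real system.

        from dataclasses import dataclass
        from typing import Dict, Optional

        # Illustrative sketch of narrowly tailored removal with a reinstatement path.

        @dataclass
        class RemovalRecord:
            content_id: str
            legal_basis: str                 # e.g. "court order X", "ToS section Y"
            removed_payload: Optional[str]   # retained so reinstatement remains technically possible
            reinstatement_requested: bool = False
            reinstated: bool = False

        class ContentStore:
            def __init__(self):
                self.live: Dict[str, str] = {}                # content_id -> content
                self.removals: Dict[str, RemovalRecord] = {}

            def remove(self, content_id: str, legal_basis: str) -> RemovalRecord:
                """Back end: soft-delete a single item, recording why it was removed."""
                payload = self.live.pop(content_id, None)
                record = RemovalRecord(content_id, legal_basis, payload)
                self.removals[content_id] = record
                return record

            def request_reinstatement(self, content_id: str) -> None:
                """Front end: the uploader asks for the content back."""
                self.removals[content_id].reinstatement_requested = True

            def review_and_reinstate(self, content_id: str, request_valid: bool) -> bool:
                """Middle layer: internal policy review acts on the request."""
                record = self.removals[content_id]
                if request_valid and record.removed_payload is not None:
                    self.live[content_id] = record.removed_payload
                    record.reinstated = True
                return record.reinstated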
  • Principle VIII: Accountability

  • Governments must ensure that independent, transparent, and impartial accountability mechanisms exist to verify the practices of government and companies with regards to managing content created online
    “While it is important that companies make commitments to core principles on freedom of expression and privacy, make efforts to implement those principles through transparency, policy advocacy, and human rights impact assessments, it is also important that companies take these steps in a manner that is accountable to stakeholders. One way of doing this is by committing to external third party assurance to verify that their policies and practices are being implemented to a meaningful standard, with acceptable consistency wherever their service is offered. Such assurance gains further public credibility when carried out with the supervision and affirmation of multiple stakeholders including civil society groups, academics, and responsible investors. The Global Network Initiative provides one such mechanism for public accountability.  Companies not currently participating in GNI, or a process of similar rigor and multi-stakeholder involvement, should be urged by users, investors, and regulators to do so.”[29]
  • Civil society should encourage comparative studies between countries and between ISPs with regards to their content removal practices to identify best practices
    Civil society has the unique ability to look longitudinally across this issue to determine and compare how different intermediaries and governments are responding to content removal requests. Without information about how other governments and intermediaries are handling these issues, it will be difficult for each government or intermediary to learn how to improve its laws or policies. Therefore, civil society has an important role to play in improving human rights outcomes on online platforms by performing and sharing ongoing, comparative research.
  • Civil society should establish best practices and benchmarks against which ISPs and government can be measured, and should track governments and ISPs over time in public reports
    “A number of projects that seek, define and implement indicators and benchmarks for governments or companies are either in development (examples include: UNESCO’s Indicators of Internet Development project examining country performance, Ranking Digital Rights focusing on companies) or already in operation (examples include the Web Foundation’s Web Index, Freedom House’s Internet Freedom Index, etc.). The emergence of credible, widely-used benchmarks and indicators that enable measurement of country and company performance on freedom of expression will help to inform policy, practice, stakeholder engagement processes, and advocacy.”[30]
  • Principle IX: Due Process - In Both Legal and Private Enforcement

  • ISPs should always consider context before removing content, and governments and courts should always consider context before ordering that certain content be removed
    “Governments need to ensure that legal frameworks and company policies are in place to address issues arising out of intermediary liability. These legal frameworks and policies should be contextually adapted and be consistent with a human rights framework and a commitment to due process and fair dealing. Legal and regulatory frameworks should also be precise and grounded in a clear understanding of the technology they are meant to address, removing legal uncertainty that would provide opportunity for abuse.”[31]
  • Principles for Courts
  • An independent and impartial judiciary exists, at least in part, to preserve citizens' due process rights. Many have called for an increased reliance on courts to make determinations about the legality of content posted online, both to shift the censorship function away from unaccountable private actors and to ensure that only content that is actually unlawful is ordered removed. However, when courts lack an adequate technical understanding of how content is created and shared on the Internet, of the rights of the intermediaries that facilitate the posting of that content, and of which party should properly be ordered to remove unlawful content, they do not add value to the online ecosystem. Therefore, courts should keep certain principles in mind to preserve the due process rights of the users that post content and the intermediaries that host it.

  • Preserve due process for intermediaries - do not order them to do something before giving them notice and the opportunity to appear before the court
  • In a dispute between two private parties over a specific piece of content posted online, it may appear to the court that the easy solution is to order the intermediary who hosts the content to remove it. However, this approach does not extend any due process protections to the intermediary and does not adequately reflect the intermediary's status as something other than the creator of the content. If a court feels that it is necessary for an intermediary to intervene in a legal proceeding between two private parties, the court should provide the intermediary with proper notice and give them the opportunity to appear before the court before issuing any orders.

  • Necessity and proportionality of judicial determinations - judicial orders determining the illegality of specific content should be narrowly tailored to avoid over-removal of content
  • With regards to government removal requests, the UNESCO Report notes that “[o]ver-broad law and heavy liability regimes cause intermediaries to over-comply with government requests in ways that compromise users’ right to freedom of expression, or broadly restrict content in anticipation of government demands even if demands are never received and if the content could potentially be found legitimate even in a domestic court of law.”[32] Courts should follow the same principle: only order the removal of the bare minimum of content that is necessary to remedy the harm identified and nothing more.

  • Courts should clarify whether ISPs have to remove content in response to court orders directed to third parties, or only have to remove content when directly ordered to do so (first party court orders) after an adversarial proceeding to which the ISP was a party
  • See discussion of the difference between first party and third party court orders (supra, section a., “Transparency”). Ideally, any decision that courts reach on this issue would be consistent across different countries.

  • Questions - related unresolved issues that should be kicked to the larger group
  • How should the conflict between access to remedy and data retention laws that require content to be hard deleted after a certain period of time be resolved? I think access to remedy has to be subordinated to the data retention laws. Let's make that our draft position, but continue to flag it for discussion.
  • Should ISPs have to remove content in response to court orders directed to third parties, or only have to remove content when directly ordered to do so (first party court orders) after an adversarial proceeding to which the ISP was a party?  I think first party orders.  Let's make that our draft position, but continue to flag it for discussion.

  • [1] Center for Democracy and Technology, Shielding the Messengers: Protecting Platforms for Expression and Innovation at 4-15 (Version 2, 2012), available at https://www.cdt.org/files/pdfs/CDT-Intermediary-Liability-2012.pdf (see pp.4-15 for an explanation of these different models and the pros and cons of each).

    [2] UNESCO, “Fostering Freedom Online: The Roles, Challenges, and Obstacles of Internet Intermediaries” at 6-7 (Draft Version, June 16th, 2014) (Hereinafter “UNESCO Report”).

    [3] UNESCO Report at 56.

    [4] UNESCO Report at 37.

    [5] Center for Democracy and Technology, Additional Responses Regarding Notice and Action, available at https://www.cdt.org/files/file/CDT%20N&A%20supplement.pdf.

    [6] The United Nations (UN) Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information, Article 19, Global Campaign for Free Expression, and the Centre for Law and Democracy, JOINT DECLARATION ON FREEDOM OF EXPRESSION AND THE INTERNET at 2 (2011), available at http://www.osce.org/fom/78309 (Hereinafter “Joint Declaration on Freedom of Expression”).

    [7] Center for Democracy and Technology, Additional Responses Regarding Notice and Action, available at https://www.cdt.org/files/file/CDT%20N&A%20supplement.pdf.

    [8] Id.

    [9] Id.

    [10] UNESCO Report at 113-14.

    [11] ‘Chilling Effects’ is a website that allows recipients of ‘cease and desist’ notices to submit the notice to the site and receive information about their legal rights. For more information about ‘Chilling Effects’ see: http://www.chillingeffects.org.

    [12] Id. at 73. You can see an example of a complaint published on Chilling Effects at the following location. “DtecNet DMCA (Copyright) Complaint to Google,” Chilling Effects Clearinghouse, March 12, 2013, www.chillingeffects.org/notice.cgi?sID=841442.

    [13] UNESCO Report at 73.

    [14] Article 19, Internet Intermediaries: Dilemma of Liability (2013), available at http://www.article19.org/data/files/Intermediaries_ENGLISH.pdf.

    [15] UNESCO Report at 120.

    [16] Joint Declaration on Freedom of Expression and the Internet at 2.

    [17] Id.

    [18] Id.

    [19] Id. at 121.

    [20] Id. at 104.

    [21] Id. at 122.

    [22] Center for Democracy and Technology, Shielding the Messengers: Protecting Platforms for Expression and Innovation at 12 (Version 2, 2012), available at https://www.cdt.org/files/pdfs/CDT-Intermediary-Liability-2012.pdf.

    [23] Joint Declaration on Freedom of Expression at 2-3.

    [24] Geist, Michael, Rogers Provides New Evidence on Effectiveness of Notice-and-Notice System (2011), available at http://www.michaelgeist.ca/2011/03/effectiveness-of-notice-and-notice/

    [25] Center for Democracy and Technology, Chile’s Notice-and-Takedown System for Copyright Protection: An Alternative Approach (2012), available at https://www.cdt.org/files/pdfs/Chile-notice-takedown.pdf

    [26] UNESCO Report at 54.

    [27] “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue (A/HRC/23/40),” United Nations Human Rights, 17 April 2013, http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A.HRC.23.40_EN.pdf, § 24, p. 7.

    [28] UNESCO Report at 118.

    [29] UNESCO Report at 122.

    [30] Id.

    [31] UNESCO Report at 120.

    [32] Id. at 119.

    UID: A Data Subject's Registration Tale

    by Mukta Batra — last modified Sep 11, 2014 09:05 AM
    A person who registered for UIDAI shares their experience of registering for the UID Number, on the condition of anonymity.

    The registration process begins with filling in a form, which has a verification clause at the end. This is a statement that the data, including biometric data, is correct and is that of the registrant. The presence of the word ‘biometric’ in the verification clause creates tacit consent to the collection of biometric data.

    The data subject registered for the UID number as several utilities were being linked to the UID number at that time.

    The data subject pointed out three areas of concern: (i) optional data was being collected under protest; (ii) the subject's documents were being taken out of their sight for scanning; and (iii) the ownership of data.

    While registering for the UID number, data subjects have a choice not to link their UID numbers to bank accounts and to utilities such as gas connections. This data subject noticed that the data operator created these links by default and the data subject had to specifically request the de-linking. The data operator did not inform the data subject of the choice not to link the UID with these services. If this is the state of affairs for a conscious registrant, it is unlikely that those who cannot read will be informed of their right to choose. Their information will then be inadvertently linked and they will be denied the right to opt out of the linkage.

    This data subject additionally noted that their right to refuse to provide optional data on the registration form was blatantly disregarded by the enrolling agency. Despite protests against providing this information, the enroller forcibly entered information such as ‘ward number’, which was optional. The enroller justified these actions by stating that otherwise ‘the company will cut our salary’. Unfortunately, registrants do not know who the data collection company is.

    Where the data subjects do not know who collects their data and where it is going, there can be no accountability.

    This incident seems to show that the rules on personal information are being violated. The right to know the identity and address of the entity collecting the data,[1] the purpose of data collection,[2] and the restrictions on data use,[3] and the right not to disclose sensitive personal data,[4] are all granted by the Information Technology Rules. Data subjects also have the right to be informed of the intended recipients[5] and of the entities that will retain the data.[6] The data collector has failed to perform its corresponding duty to make such disclosures and has arguably limited the control of data subjects over their privacy.

    If this is what other UID registrations are like, then perhaps it is time to modify the process of data handling and processing. The law should be implemented better, and amended to enable better implementation, whether through greater state intervention or through severe liability when personal information is improperly handled.


    [1] R. 4(3)(d), Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

    [2] R. 4(3)(b), Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

    [3] R. 4(7), Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

    [4] R. 4(7), Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

    [5] R. 4(3)(c), Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

    [6] R. 4(3)(d), Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.

    UID and NPR: Towards Common Ground

    by Mukta Batra — last modified Oct 15, 2014 01:06 PM
    The UID (Unique Identification) and NPR (National Population Register) are both government identity schemes that aggregate personal data, including biometric data for the provision of an identification factor, and aim to link them with the delivery of public utility services.

    The differences between the two exist in terms of collection of data, the type of identification factor issued, authorities involved and the outcome.

    Despite the differences, there has been talk of combining the two schemes because of the overlap.[1] In the same breath, it has been argued that the two schemes are incompatible. [2]

    One of the UIDAI’s (Unique Identification Authority of India) functions is to harmonize the two schemes. [3]

    As it stands, the schemes are distinct. Enrolment for a UID does not lead to automatic enrolment in the NPR. The NPR website expressly states that even if a data subject has been enumerated in the census or has been granted a UID Number, it is necessary to visit a data collection centre to provide biometric data for the NPR.[4]

    UID and NPR: The Differences

    The Basis of identity/ Unit of Survey

    The most striking difference between the UID and NPR schemes is their notion of identity. The UID is individual based, whereas the NPR scheme focuses on the household or family as a composite unit. Thus, the UID seeks to enroll individuals, while the NPR gathers data on the members of a household or family as a composite unit during the census and later registers each person for an NPR Card on the basis of the census data. To this extent, analysis of the data gathered from the two schemes will be different and will require differing analytical tools. The definition of the data subject and of the population is different: in one scheme the unit is the individual, in the other it is the household/family. Though the family is the composite unit in the NPR, when the data is finally extracted it is unbundled to provide individuals with NPR cards; the family-based association is not lost, however, and it has been argued that this household association under the NPR should be used to calculate and provide subsidies. Some states have put on hold the transfer of cooking gas subsidy, which is calculated for each household, through Aadhaar-linked bank accounts.[5] If both schemes were merged, the basis for determining entitlement to subsidies would be non-uniform.

    Differences in Information Collection

    The UID and NPR have different procedures for the collection of information. Under the UID scheme, all data is collected at data collection centres, whereas NPR data is collected partly door to door and partly at collection centres.

    UID data is collected by the UIDAI itself or by private parties under contract; these contractors are often IT or online marketing service providers.[6] Data subjects were initially allowed to register through an introducer and without any documentation. This was later replaced by a verification system under which documents must be produced in order to register for the UID.

    The NPR involves a dual collection process. The first stage is the door-to-door collection of data as part of the census. This information is collected through a questionnaire; no supporting documents or proof are produced to verify it. Verification happens at a later stage, through public display of the information. This data is digitized. The data subjects then give their biometric data at data collection centres, on production of the census slip. The biometric data collectors are parties empanelled by the UIDAI and eligible to collect data under the UID scheme. A subject’s data is aggregated and then de-duplicated by the UIDAI.[7]

    This suggests two potential points of convergence. When data has already been collected for the UID number, the subject should not have to give their biometrics again for the NPR scheme; sharing biometrics across the schemes would reduce cost and redundancy. While sharing UID data with the NPR is feasible, the reverse is not, since the UID is optional and the NPR is not: if NPR data is to be shared with the UID, the subject should have the right to refuse. However, consent to using NPR data for the UID is a default YES in the UID form.[8] In practice, prohibiting the information sharing is not an option.

    Differences in Stated Purposes

    The NPR is linked to citizenship status. The NPR exercise is being conducted to create a national citizen register and to assist in identifying and preventing illegal immigration. The NPR card, a desired outcome, is intended to be a conduit for transactions relating to subsidies and public utilities.[9] So is the UID Number, which was created to provide the residents of India with an identity. The linkage and provision of subsidies through the NPR and UID cards have not taken off on a large scale, and there is a debate as to which will be more appropriate for direct benefit transfer, with some leaders proclaiming that the NPR scheme is more suited to it.[10] Since the UID Number is linked to direct benefit transfer but not to citizenship, benefits such as those under the MNREGA scheme may be availed of by non-citizens as well, even though only citizens are eligible for that scheme.[11]

    C. Chandramouli, the Registrar General and Census Commissioner of India, states that the conflict between the two schemes is only perceived, and results from a poor understanding of their differing objectives. The NPR, he states, is created to support national security through the creation of a citizen register, starting with a register of residents after authentication and verification of the residence of the subjects. The UID exercise, on the other hand, is to provide a number that can be used to correctly identify a person.[12]

    Difference in Legal Sanctity

    The UIDAI was set up through an executive notification, which sets out a few of its responsibilities, including assigning UID numbers, collating the UID and NPR schemes, laying down standards for interlinking with partner databases, and so on. However, the notification gives the UIDAI no express responsibility to collect, or to authorize the collection of, data under the scheme. The power to authorize the collection of biometrics is vested in the National Identification Authority of India (NIAI), which is to be set up under the National Identification Authority of India Bill (the NIAI Bill, at times referred to as the UID Bill).

    The NPR Scheme has been created pursuant to the 2004 Amendment of the Citizenship Act. Under S. 14A of the Citizenship Act, the central government has the power to compulsorily register citizens for an Identity Card. This gives the NPR exercise sanctity. However, no authority to collect biometric information has been given either under this Act or Rules framed under it.

    Future of Aadhaar

    The existence of both the UID and NPR Schemes leads to redundancy. Therefore, many have advocated for their merger. This seems impractical, as the standards in collection and management of data are not the same.

    For some time, it was thought that the Aadhaar Scheme would be scrapped. This belief was based on the present government’s opposition to the scheme during and before the election. This was further strengthened by the fact that they did not expressly mention the continuance of the scheme in their manifesto. The Cabinet Committee on UIDAI was disbanded and the enrolment for the UID Number was stopped, only to be resumed a short while later.[13]

    However, recent events show that the Aadhaar scheme will continue. First, the new government has stated that the UID scheme will continue. In support of the scheme, the government has made a budgetary allocation to enable, inter alia, its acceleration. The Government even intends to enact a law to give the scheme legal sanctity.[14]

    Second, the Government is assigning the UID Number new uses. To track the attendance of government employees, the Government will use a biometric attendance system linked to employees' UID Numbers.[15] The attendance records will be uploaded to a website to boost transparency.

    Third, direct benefit transfers under the UID will become more vigorous.

    The UID is already necessary for registration under the NPR, which is compulsory.

    Providing one’s UID Number for utilities such as cooking gas is also compulsory in several areas, despite the courts' diktat that it should not be so.[16]

    Conclusion

    The government is in favour of continuing both the schemes. Therefore, it is unlikely that either scheme will be scrapped or that the two schemes will be combined. The registration for UID is becoming compulsory by implication as it is required for direct benefit transfers and for utilities. Data collected under NPR is being shared with the UIDAI by default, when one registers for a UID number. However, the reverse is unlikely, as the UID collects secondary data, whereas NPR requires primary data, which it collects through physical survey and authentication. Perhaps the sharing of data could be incorporated when one goes to the data collection centre to submit biometrics for the NPR. The subject could fill in the UID form and submit verification documents at this stage, completing both exercises in one go. This will drastically reduce the combined costs of the two exercises.


    [1] Rajesh Aggarwal, Merging UID and NPR???, Igovernment, accessed 5 September, 2014 http://www.igovernment.in/igov/opinion/41631/merging-npr-uid; Bharti Jain, Rajnath Hints at Merger of NPR and Aadhar, Times of India, accessed 5 September, 2014 http://timesofindia.indiatimes.com/india/Rajnath-hints-at-merger-of-NPR-and-Aadhaar/articleshow/35740480.cms

    [2] Raju Rajagopal, The Aadhar-NPR Conundrum, Mint, accessed 5 September, 2014 http://www.livemint.com/Opinion/tvpoCYeHxrs2Z7EkAAu7bP/The-AadhaarNPR-conundrum.html .

    [3] Cl. 4 of the Notification on the creation of the UIDAI, No. A-43011/02/2009-Admin.1 of the Planning Commission of India, dated 28 January, 2009

    [4] FAQ for NPR, accessed: 3 September, 2014. http://censusindia.gov.in/2011-Common/FAQs.html

    [5] A Jolt for Aadhar: UPA Shouldn’t Have to Put on Hold its Only Good Idea,Business Standard, accessed 5 September, 2014 http://www.business-standard.com/article/opinion/a-jolt-for-aadhaar-114020301243_1.html

    [6] Prakash Chandra Sao, The Unique ID Project in India: An Exploratory Study, accessed: 21 August, 2014 http://subversions.tiss.edu/the-unique-id-project-in-india-an-exploratory-study/

    [7] NPR Activities, accessed 5 September, 2014, http://ditnpr.nic.in/NPR_Activities.aspx

    [8] R. Dinakaran, NPR and Aadhar- A Confused Process, The Hindu BusinessLine, accessed: 4 September, 2014 http://www.thehindubusinessline.com/blogs/blog-rdinakaran/npr-and-aadhaar-a-confused-process/article4940976.ece

    [9] More than sixty-five thousand NPR cards have been issued and biometric data of more than twenty-five lakh people has been captured, as on 28 August, 2014 http://censusindia.gov.in

    [10] NPR, not Aadhaar, best tool for cash transfer: BJP's Sinha, accessed: 3 September, http://www.moneycontrol.com/master_your_money/stocks_news_consumption.php?autono=1035033

    [11] Bharati Jain, NDA's national ID cards may kill UPA's Aadhaar, accessed 3 September, 2014 http://timesofindia.indiatimes.com/india/NDAs-national-ID-cards-may-kill-UPAs-Aadhaar/articleshow/36791858.cms

    [12] Id.

    [13] Aadhaar Enrolment Drive Begins Again, accessed 3 September, 2014 http://timesofindia.indiatimes.com/city/gurgaon/Aadhaar-enrolment-drive-begins-again/articleshow/38280932.cms

    [14] Mahendra Singh, Modi govt to give legal backing to Aadhaar, Times of India, http://timesofindia.indiatimes.com/india/Modi-govt-to-give-legal-backing-to-Aadhaar/articleshow/38336812.cms

    [15] Narendra Modi Government to Launch Website to Track Attendance of Central Government Employees, DNA, accessed: 4 September, 2014 http://www.dnaindia.com/india/report-narendra-modi-government-to-launch-website-to-track-attendance-of-central-government-employees-2014684

    [16] No gas supply without Aadhaar card, Deccan Chronicle, accessed: 4 September, 2014, http://www.deccanchronicle.com/140829/nation-current-affairs/article/no-gas-supply-without-aadhaar-card


    Note: This is an anonymous post.

    Biometrics: An ‘Angootha Chaap’ nation?

    by Mukta Batra — last modified Sep 19, 2014 06:12 AM
    This blog post throws light on the inconsistencies in biometric collection under the UID and NPR Schemes.

    Introduction

    Fingerprints and iris scans. The Unique Identification (UID) Number aims to serve as a proof of identity that can be easily verified and linked to subsidies and to bank accounts. Four years into its implementation, the UID Scheme seems to have the vote of confidence of the public. More than 65 Crore Indians have been granted UID Numbers,[1] and only a few have been concerned enough to seek clarity through Right to Information Requests to the UIDAI about the finances and legal authority backing the scheme.[2] Parallel to the UID scheme, the National Population Register scheme is also under way, with enrolment in some areas, such as Srinagar, Shimla and Panchkula, having reached 100% of the estimated population.[3]

    The NPR scheme is an offshoot of the census. It began in the 2010-11 census cycle, pursuant to the 2004 amendment of the Citizenship Act, under which national identity cards are to be issued. The desired outcome of the NPR scheme is an NPR card containing a chip embedded with three pieces of information: (i) biometric information, (ii) demographic information and (iii) the UID Number.

    Both the UID and NPR schemes aspire to be conduits that subsidies, utilities, and other benefits are routed through. While the UID and NPR schemes are distinct in terms of their legal sanctity, purpose and form, the harmonization of these two schemes is one of the UIDAI’s functions.

    There are substantial overlaps in the information collected and the purposes served, leading to the argument that having two schemes is redundant. The compatibility of the two schemes has been questioned, and it was initially thought that a merger would be unreasonable. While there has been speculation that the UID scheme may be terminated, or that it would be taken over by the Home Ministry, it has been reported that the new government has directed expedited enrolments under the UID scheme.[4]

    Both schemes are incomplete and suffer from uncertainties, including, but not limited to: their legality, safeguards against misuse of the data, the implementation of the schemes (including the collection and storage of biometric information), and their convergence or divergence.

    This blog will focus on understanding the process of collecting biometric data under each scheme, calling out similarities and differences, as well as areas in which data collected under one scheme is incompatible with the other. It will look at existing and missing safeguards in the collection of biometrics, the overlap in the collection of biometrics between the two schemes, and existing practice in the collection of biometrics. In doing so, the blog will highlight the lack of privacy safeguards for biometric information and conclude that, since the policies for data collection and use are unclear, data subjects do not know how their data is being collected, used, and shared between the UID and the NPR schemes.

    Unreliability of Biometric Data

    Biometric data has been characterized as unreliable.[5] It cannot always be successfully used to identify a person, especially in India, where manual labour degrades fingerprints[6] and nutritional deficiencies mar the iris. Even experts working with the UIDAI[7] admit that fingerprints are not always good indicators of identity. If the identification of a person fails, and identification is precisely what the UID seeks to accomplish, then the purpose of the UID is defeated.

    Biometric Data Collection under the UID Scheme

    In the current structure of the scheme, collected biometric information is stored by, and vests with, the UIDAI for an undefined period. The data, if used only for identification and authentication purposes as originally intended, could very well fail to serve that purpose. But amassing the personal data of the entire country is lucrative, particularly for the service providers who are tasked with manually collecting the data before it is fed into the UID system and encrypted. Most of the service providers that collect information, including biometric data, for the UID are engaged in information services such as IT or online marketing.[8]

    The below chart delineates the process followed for the collection of biometrics under the UID Scheme:

    [Chart: process followed for the collection of biometrics under the UID scheme]

    Under the NIAI Bill, all data collected or authenticated by the UIDAI, until the Bill is enacted and the National Identification Authority of India is created, vests with the UIDAI. In practice this means that the UIDAI owns the biometric data of the data-subject, without clear safeguards against misuse of the data.

    In the UID scheme, the collection of biometrics at the time of enrollment by the UIDAI is severely flawed for a number of reasons:

    1. Lack of clear legal authority and procedure for collection of biometrics: The only legal authority the UIDAI has to collect biometric information is via the notification of its constitution. Even then, the powers of the UIDAI are vague and broad. Importantly, the notification tells us nothing of how biometric data is to be collected and how it is to be used. These standards have only been developed by the UIDAI in an ad-hoc manner when the need arises or after a problem is spotted. The lack of purpose-specification is in violation of the law[9] and prevents the data subject from giving informed consent to data collection. This is discussed at a later stage.

    2. The collection of biometrics is regulated only through a Bill, which delegates the development of safeguards to Rules: The National Identification Authority of India (NIAI) Bill[10] confers on the National Identification Authority of India (NOT THE UIDAI) the power to make rules for the collection of biometric data and to prescribe standards for collection.[11] This is a rule-making power conferred under a Bill. Neither has the Bill been enacted, nor have rules for the collection of biometrics been framed and notified.

    3. Collection of biometric data only with implied consent: Though the collection of biometrics is mentioned in the enrolment form, explicit consent to the collection of biometrics is not obtained, and only implied consent may be inferred. The last line of the enrolment form is titled ‘CONSENT’ and is a declaration that all data, including biometric information, is true.[12]

    4. Collection of biometric data outsourced to third parties: Collection of biometric information under the UID scheme is outsourced to third parties through tenders. For instance, Accenture has been declared a biometric service provider under a contract with the UIDAI.[13] The third party may be a company, firm, educational institution or an accreditation agency. The eligibility criteria are quite straightforward; they relate to the entity's structure and previous experience with small projects.[14] Since the ability to protect the privacy of the data subject is entirely absent from the eligibility criteria, a successful bidder may not have adequate procedures in place, or sufficient experience in managing confidential data, to ensure the privacy of the data subject. By outsourcing the data collection, the UIDAI has arguably delegated a function it never had the legal authority to perform; the agency of the data collection is thus equally defective. To heighten the irregularity, these contracted agents can sub-contract the job of physical data collection.[15] This means that the data operator and the ground supervisors, who come into direct contact with the raw data, including biometric data, are appointed not by the government or the UIDAI but by a private agency that is further removed along the chain. The data operator scans the documents submitted for verification and has physical access to them.[16]

    5. Biometric data is admittedly vulnerable to sale and leakage: In an ongoing case in the Supreme Court of India, the National Capital Territory of Delhi has, in its counter-affidavit, admitted that data collected under the UID is vulnerable to sale and leakage.[17] To quote from the counter-affidavit: ‘..in any exercise of gathering identities whether it is by census authority… or through the present process… there is always a possibility of leakage. Enumerators can scan and keep copies of all the forms and sell them for a price.- this (sic) it can never be said that the data gathered… is safe.’[18] Anyone who has registered for the UID is therefore a candidate for identity theft or unsolicited commercial communication. This is also true for the NPR, as census data forms the basis of the NPR.

    Data collection under the NPR Scheme

    The declarations of courts that it is unnecessary to link the UID number to public utilities, and the admission by Delhi in the case that a data subject cannot be compelled to provide biometrics or to obtain a UID Number under the Aadhaar scheme,[19] are steps forward in ensuring the voluntariness of the UID. However, the UID Number is mandatory by implication: it is a pre-requisite for registration under the National Population Register, which is compulsory pursuant to S. 14-A of the Citizenship Act. The below diagram delineates the collection of biometric information under the NPR scheme:

    [Chart: data flow process for the collection of biometric information under the NPR scheme]

    Flaws in the collection of biometric data under the NPR scheme

    1. Compulsion: Registration in the NPR is legally mandated, and individuals who fail to register can face a penalty. As a note, the compulsion to register for the NPR is arguably untenable, as the Rules prescribe a penalty whereas the Act does not.[20] A word of caution is appropriate here: the penalty under the Rules stands until it is deleted by the legislature or declared void by the courts, and one may be held liable for refusing to register for the NPR, though the above argument may be a good defense.
    2. Duplication: Duplication is a problem under the NPR scheme. Biometric data is collected twice before the NPR exercise is completed: even if one has registered under the UID scheme and already given biometric information for the UID number, one has to give it again under the NPR scheme. Since the parties collecting biometric information for the NPR are empanelled by the UIDAI and the eligibility criteria are the same, the data is subject to the same or similar threats of leakage as arise when registering for the UID. The multi-level data collection only amplifies the admitted vulnerability of the data, as unauthorized actors can unlawfully access it at any stage. This, coupled with the fact that the UIDAI is tasked with harmonizing the NPR and UID schemes and that the data comes to the UIDAI for de-duplication, means that NPR data could be used by the UIDAI even where it does not result in a UID Number; there is nothing to disprove this possibility. This is a matter of concern, for someone who chooses not to register for a UID number in order to protect their privacy finds that their data falls into the hands of the UIDAI anyway.
    3. Biometric data collectors under the NPR scheme empanelled by the UIDAI: The service providers collecting biometric data under the NPR are selected through bids and need to be empanelled with the UIDAI.[21] Most enrolment agencies empanelled with the UIDAI are either IT or online marketing companies,[22] which heightens the fear of targeted marketing.
    4. Public display and verification: Under the NPR scheme, the biometric and demographic information and UID number of registrants are publicly displayed in their local area for verification.[23] However, it is a violation of privacy to have sensitive personal data, such as biometrics, put up publicly. Not only will the demographic information be readily accessible, but nothing will prohibit the creation of a mailing list or the collection of data either for data theft or for sending unsolicited commercial communication. The publicly available information is the kind of information that can be used for verification (Know Your Customer) and to authorize financial transactions. Since the personal information is displayed in the data subject's local area, it is arguably a more invasive violation of privacy, since members of the local area can make complex connections between the data subject and the data.
    5. Smart Card: The desired outcome of the NPR scheme is an NPR card. This card is to contain a chip embedded with information such as the UID Number, biometrics and demographic information. It is still unclear whether this information will be machine-readable; if so, it may be just a swipe away. However, this cannot be confirmed without information on the level of encryption and on how the data will be stored on the chip.

    Privacy safeguards available under the UID and NPR schemes are ad hoc and incomplete

    The safeguards under both the UID and NPR schemes are quite similar, since the UIDAI and its empanelled biometric service providers are involved in collecting biometric information for both the UID and the NPR.

    Pilot studies for the UID scheme, including for the use of biometrics, were not conducted in advance of implementation. In line with this, the enactment of legislation governing the UID and the implementation of policies on data handling and use will happen as and when the need arises. The development of safeguards in relation to the NPR will likewise be ad hoc.

    Also, the data standards for one scheme will potentially influence those of the other. For instance, a change in privacy standards for handling biometrics under the UID may affect the empanelment of biometric service providers, which will in turn affect the level of data security the NPR can seek to achieve.

    Because these regulations are developed ad hoc and after the fact, there is a risk that they may unreasonably curtail the rights of data subjects.

    The existing Indian laws on data protection and privacy are not comprehensive. Certain laws protect privacy only in specific situations. For instance, the IT Act and related rules protect privacy in relation to digital information.

    Any body that collects sensitive personal data, such as biometric data, or any other data for processing and storage, has a legal mandate under the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 to make certain disclosures BEFORE OR WHILE THE DATA IS COLLECTED. These include, inter alia, disclosure of (i) the purpose of information collection, (ii) the intended recipients of the information and (iii) the name and address of the collector and of the party retaining the data.[24]

    Under the Rules, the data collector has a duty to give the data subject an option to withhold personal sensitive information.[25] A conversation with a data subject shows that this safeguard has not been upheld. The subject also conveyed a lack of knowledge of who the collection agency was. This is a problem of lack of accountability, as the data path cannot be traced and the party responsible for misuse or breach of security cannot be held liable.

    Conclusion

    The data collection under the NPR and UID schemes shows several vulnerabilities. Apart from the vulnerabilities of biometric information itself, there is a real risk of misuse of the data and documents submitted for enrolment under these schemes. Since the data collectors are primarily online marketing or IT service providers, there is a likelihood that they will use this data for marketing.

    We can only hope that in time, data subjects will be able to withdraw their personal data from the UID database and surrender their UID number. We can only wait and watch to see (i) whether the UID Number remains a legal prerequisite for the NPR Card and (ii) whether the compulsion to register for the NPR is done away with.


    [1] https://portal.uidai.gov.in/uidwebportal/dashboard.do accessed: 21 August, 2014

    [2] As of January 2013, only 25 RTI requests were made to the UIDAI http://uidai.gov.in/rti/rti-requests.html accessed: 21 August, 2014

    [3] DIT-NPR Management Information System accessed: 22 August, 2014 http://nprmis.nic.in/NPRR33_DlyDigitPrgGraph.aspx

    [4] Cloud Still Hangs Over Aadhaar’s Future, Business Standard, accessed 28 August, 2014. http://www.business-standard.com/article/current-affairs/cloud-still-hangs-over-aadhaar-s-future-114081401131_1.html

    [5] Frost & Sullivan, Best Practices Guide to Biometrics, accessed: 13 August, 2014 http://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&cad=rja&uact=8&ved=0CD8QFjAE&url=http%3A%2F%2Fwww.frost.com%2Fprod%2Fservlet%2Fcpo%2F240303611&ei=6VbsU4m8HcK58gWx64DYDQ&usg=AFQjCNGqan81fX6qtG0S4VV6oh_B5R_QYg&sig2=cOOPm1JJ79AcJq2Gfq1_3Q&bvm=bv.73231344,d.dGc

    [6] Malavika Jayaram, “India’s Identity Crisis”, Internet Monitor 2013, reflections of a digital world, accessed: 13 August, 2014 http://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID2366840_code727672.pdf?abstractid=2366840&mirid=1

    [7] M. Vatsa, et al., “Analyzing Fingerprints of Indian Population Using Image Quality: A UIDAI Case Study”, accessed: 13 August, 2014 https://research.iiitd.edu.in/groups/iab/ICPR2010-Fingerprint.pdf

    [8] Prakash Chandra Sao, The Unique ID Project in India: An Exploratory Study, accessed: 21 August, 2014 http://subversions.tiss.edu/the-unique-id-project-in-india-an-exploratory-study/

    [9] R. 5(3) of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal data or Information) Rules, 2011, accessed: 20 August, 2013 http://deity.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf

    [10] National Identification Authority of India Bill, 2010 (Bill No. LXXV of 2010), accessed: 26 August,2014 http://164.100.24.219/BillsTexts/RSBillTexts/asintroduced/national%20ident.pdf

    [11] Clause 23 of the NIAI Bill, 2010

    [12] The UID enrolment form, accessed: 26 August, 2014 http://uidai.gov.in/images/uid_download/enrolment_form.pdf

    [13] Documents filed and relied on in Puttuswamy v Union of India

    [14] Request for empanelment, accessed: 28 August, 2014. http://uidai.gov.in/images/tenders/rfe_for_concurrent_evaluation_of_processoperation_at_enrolment_centers_13082014.pdf

    [15] This information is available from the documents filed and relied on in Puttuswamy v Union Of India, which is being heard in the Supreme Court of India

    [16] An anonymous registrant observes that the data was scanned behind a screen and was not visible from the registration counter. The registrant is concerned that, in addition to the collection of information for the UID, photocopies or digital copies could be taken for other uses and the registrant would not know.

    [17] Counter Affidavit filed in the Supreme Court of India on behalf of New Delhi in K. Puttuswamy v Union of India

    It is also admitted that the census is equally vulnerable. The information collected through the census is used for the NPR exercise.

    [18] Para. 48 in the Counter Affidavit filed by NCR Delhi.

    [19] Affidavit in K. Puttuswamy v Union of India.

    See also: FAQs: Enrollment Agencies, accessed 22 August, 2014 http://uidai.gov.in/faq.html?catid=37

    [20] Usha Ramanathan, A Tale of Two Turfs, The Statesman, accessed: 20 August, 2014 http://www.thestatesman.net/news/10497-a-tale-of-two-turfs-npr-and-uid.html?page=3

    [21] RFQ for Engaging MSP for Biometric Enrolment for the Creation of NPR, accessed: 26 August, 2014 http://ditnpr.nic.in/pdf/120102_RFQBiometricUrban_rebidding-Draft.pdf

    [22] Prakash Chandra Sao, The Unique ID Project in India: An Exploratory Study, accessed: 21 August, 2014 http://subversions.tiss.edu/the-unique-id-project-in-india-an-exploratory-study/

    [23] http://censusindia.gov.in/2011-Common/IntroductionToNpr.html, accessed: 26 August, 2014

    [24] R. 5(3) of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal data or Information) Rules, 2011, accessed: 20 August, 2013 http://deity.gov.in/sites/upload_files/dit/files/GSR313E_10511(1).pdf

    [25] R. 5(7) of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal data or Information) Rules, 2011.

    Centre for Internet and Society joins the Dynamic Coalition for Platform Responsibility

    by Jyoti Panday last modified Oct 07, 2014 10:54 AM
    The Centre for Internet and Society (CIS) has joined the Dynamic Coalition on Platform Responsibility, a multistakeholder cooperative engagement towards creating Due Diligence Recommendations for online platforms and Model Contractual Provisions to be enshrined in Terms of Service (ToS). This blog provides a brief background of the role of dynamic coalitions within the IGF structure, establishes the need for the coalition, and provides an update on the action plan and next steps for interested stakeholders.

    "Identify emerging issues, bring them to the attention of the relevant bodies and the general public, and, where appropriate, make recommendations."
    Tunis Agenda (Para 72.g)

    The first United Nations Internet Governance Forum (IGF), held in 2006, saw the emergence of the concept of the Dynamic Coalition, and a number of coalitions have been established over the years. The IGF is structured to bring together multistakeholder groups to,

    "Discuss public policy issues related to key elements of Internet governance in order to foster the sustainability, robustness, security, stability and development of the Internet."
    Tunis Agenda (Para 72.a)

    While IGF workshops allow various stakeholders to jointly analyse “hot topics” or to examine the progress made on such issues since the previous IGF, dynamic coalitions are informal, issue-specific groups comprising members of various stakeholder groups. With no strictures upon the objects, structure or processes of dynamic coalitions claiming association with the IGF, no formal institutional affiliation, and no access to the resources of the IGF Secretariat, IGF dynamic coalitions are open to the collaboration of anyone interested in contributing to their discussions. Currently, there are eleven active dynamic coalitions at the IGF, and they can be divided into three distinct types—networks, working groups and Birds of a Feather (BoFs).

    Workshops at the IGF are content-specific events that, though valuable in informing participants, are limited in their impact by being confined to the launch of a report or to the issues raised within the conference room. The coalitions, on the other hand, are expected to have a broader function, acting as a coalescing point for interested stakeholders to gather, analyse progress around identified issues and plan next steps. The coalitions can also make recommendations around issues; however, no mechanism has been developed so far by which these recommendations can be considered by the plenary body. The long-term nature of coalitions makes them perhaps best suited to engaging stakeholders in heterogeneous groups, building understanding and cooperation around emerging issues, and making recommendations to inform policy making.

    Platform Responsibility

    Social networks and other interactive online services give rise to 'cyber-spaces' where individuals gather, express their personalities and exchange information and ideas. The transnational and private nature of such platforms means that they are regulated through contractual provisions enshrined in the platforms' Terms of Service (ToS). The provisions delineated in the ToS not only extend to users irrespective of their geographical location; the private decisions undertaken by platform providers in implementing the ToS are also not subject to the constitutional guarantees framed under national jurisdictions.

    While ToS serve as binding agreements online, the absence of binding international rules in this area, despite the universal nature of the human rights at stake, is a real challenge, and it makes it necessary to engage in a multistakeholder effort to produce model contractual provisions that can be incorporated into ToS. The concept of 'platform responsibility' aims to encourage platform providers to offer intelligible and solid mechanisms, in line with the principles laid out in the UN Guiding Principles on Business and Human Rights, and to equip platform users with common and easy-to-grasp tools that guarantee the full enjoyment of their human rights online. The use of model contractual provisions in ToS may prove instrumental in fostering trust in online services for content production, use and dissemination; increased demand for such services may ultimately drive the market towards human-rights-compliant solutions.

    The Dynamic Coalition on Platform Responsibility

    To nurture a multistakeholder endeavour aimed at the elaboration of model contractual provisions, Mr Luca Belli (Council of Europe / Université Paris II), Ms Primavera De Filippi (CNRS / Berkman Center for Internet and Society) and Mr Nicolo Zingales (Tilburg University / Center for Technology and Society Rio) initiated and facilitated the creation of the Dynamic Coalition on Platform Responsibility (DCPR). The DCPR has over fifty individual and organisational members drawn from civil society, academia, the private sector and intergovernmental organisations, and it held its first meeting at the IGF in Istanbul. The meeting began with an overview of the concept of platform responsibility, highlighting relevant initiatives that the Council of Europe, the Global Network Initiative, Ranking Digital Rights and the Center for Democracy and Technology have undertaken in this regard. Existing problems, such as the difficulty of comprehending ToS and the lack of standardised redress across rights, were raised, along with the fundamental lack of due process and transparency in existing mechanisms.

    Online platforms' compliance with human rights is often framed around the duty of States to protect human rights, and Internet companies often do not give sufficient consideration to the effects of their business practices on users' fundamental rights, which undermines trust.

    The meeting focused its efforts on a call to identify the issues of process and substance, and the specific rights and challenges, to be addressed by the DCPR. The procedural issues raised concerned 'responsibility' in decision-making, e.g., giving users the right to be heard and to an effective remedy before an impartial decision-making body, and obtaining their consent for changes to the contractual terms. The concerns raised around substantive rights such as privacy and freedom of expression, e.g., disclosure of personal information and content removal, pointed to the need to promote 'responsibility' by establishing concrete mechanisms to deal with such issues.

    It was suggested that the concept of responsibility, including in cases of conflict between different rights, could be grounded in human rights case law, e.g., the jurisprudence of the European Court of Human Rights. It was also established that any framework evolving from this coalition would consider the distinctions between users (e.g., adults, children, and people with or without continuous access to the Internet) and between platforms (e.g., in terms of size and functionality).

    Action Plan

    The participants at the DCPR meeting agreed to establish a multistakeholder cooperative engagement that will go beyond dialogue and produce concrete proposals. In particular, participants suggested developing:

    1. Due Diligence Recommendations: Recommendations to online platforms with regard to processes of compliance with internationally agreed human rights standards.
    2. Model Contractual Provisions: Elaboration of a set of principles and provisions protecting platform users’ rights and guaranteeing transparent mechanisms to seek redress in case of violations.

    The DCPR will ground the development of these frameworks in a preliminary compilation of existing projects and initiatives dealing with the analysis of ToS compatibility with human rights standards. Members, participants and interested stakeholders are invited to highlight and share, by 10 October, relevant initiatives regarding:

    1. Processes of due diligence for human rights compliance;
    2. The evaluation of ToS compliance with human rights standards.

    Further to this compilation, a first draft recommendation regarding online platforms' due diligence will be circulated on the mailing list by 30 October 2014. CIS will contribute to the drafting, which will be led and elaborated by the DCPR coordinators. The draft will be open for comments via the DCPR mailing list until 30 November 2014, and we encourage you to sign up to the mailing list (http://lists.platformresponsibility.info/listinfo/dcpr).

    A second draft compiling the comments expressed via the mailing list will be developed and shared for comments by 10 December 2014. The final version of the recommendation will be drafted by 30 December. Subsequently, the first set of model contractual provisions will be elaborated, building upon this recommendation. A call for inputs will be issued in order to gather suggestions on the content of these provisions.

    Anvar v. Basheer and the New (Old) Law of Electronic Evidence

    by Bhairav Acharya last modified Dec 04, 2014 03:53 PM
    The Supreme Court of India revised the law on electronic evidence. The judgment will have an impact on the manner in which wiretap tapes are brought before a court.

    Read the original published by Law and Policy in India on September 25, 2014.


    The case

    On 18 September 2014, the Supreme Court of India delivered its judgment in the case of Anvar v. P. K. Basheer (Civil Appeal 4226 of 2012) to declare new law in respect of the evidentiary admissibility of the contents of electronic records. In doing so, Justice Kurian Joseph, speaking for a bench that included Chief Justice Rajendra M. Lodha and Justice Rohinton F. Nariman, overruled an earlier Supreme Court judgment in the 2005 case of State (NCT of Delhi) v. Navjot Sandhu alias Afsan Guru (2005) 11 SCC 600, popularly known as the Parliament Attacks case, and re-interpreted the application of sections 63, 65, and 65B of the Indian Evidence Act, 1872 (“Evidence Act”). To appreciate the implications of this judgment, a little background may be required.

    The hearsay rule

    The Evidence Act was drafted to codify principles of evidence in the common law. Traditionally, a fundamental rule of evidence is that oral evidence may be adduced to prove all facts, except documents, provided always that the oral evidence is direct. Oral evidence that is not direct is challenged by the hearsay rule and, unless it is saved by one of the exceptions to the hearsay rule, is inadmissible. In India, this principle is stated in sections 59 and 60 of the Evidence Act.

    The hearsay rule is both fundamental and complex; a proper examination would require a lengthy excursus, but a simple explanation should suffice. In the landmark House of Lords decision in R v. Sharp [1988] 1 All ER 65, Lord Havers – the controversial prosecutor who went on to become the Lord Chancellor – stated the hearsay rule thus: “Any assertion other than one made by a person while giving oral evidence in the proceedings is inadmissible as evidence of any fact or opinion asserted.” This definition was applied by courts across the common law world. Section 114 of the United Kingdom’s (UK) Criminal Justice Act, 2003, which modernised British criminal procedure, uses simpler language: “a statement not made in oral evidence in the proceedings.”

    Hearsay evidence is anything said outside a court by a person absent from a trial, but which is offered by a third person during the trial as evidence. The law excludes hearsay evidence because it is difficult or impossible to determine its truth and accuracy, which is usually achieved through cross examination. Since the person who made the statement and the person to whom it was said cannot be cross examined, a third person’s account of it is excluded. There are a few exceptions to this rule which need no explanation here; they may be left to another post.

    Hearsay in documents

    The hearsay rule is straightforward in relation to oral evidence but a little less so in relation to documents. As mentioned earlier, oral evidence cannot prove the contents of documents. This is because it would disturb the hearsay rule (since the document is absent, the truth or accuracy of the oral evidence cannot be compared to the document). In order to prove the contents of a document, either primary or secondary evidence must be offered.

    Primary evidence of the contents of a document is the document itself [section 62 of the Evidence Act]. The process of compelling the production of a document in court is called ‘discovery’. Upon discovery, a document speaks for itself. Secondary evidence of the contents of a document is, amongst other things, certified copies of that document, copies made by mechanical processes that insure accuracy, and oral accounts of the contents by someone who has seen that document. Section 63 of the Evidence Act lists the secondary evidence that may prove the contents of a document.

    Secondary evidence of documentary content is an attempt at reconciling the hearsay rule with the difficulties of securing the discovery of documents. There are many situations where the original document simply cannot be produced for a variety of reasons. Section 65 of the Evidence Act lists the situations in which the original document need not be produced; instead, the secondary evidence listed in section 63 can be used to prove its content. These situations arise when the original document (i) is in hostile possession; (ii) has been stipulated to by the prejudiced party; (iii) is lost or destroyed; (iv) cannot be easily moved, i.e. physically brought to the court; (v) is a public document of the state; (vi) can be proved by certified copies when the law narrowly permits; and (vii) is a collection of several documents.

    Electronic documents

    As documents came to be digitised, the hearsay rule faced several new challenges. While the law had mostly anticipated primary evidence (i.e. the original document itself) and had created special conditions for secondary evidence, increasing digitisation meant that more and more documents were electronically stored. As a result, the adduction of secondary evidence of documents increased. In the Anvar case, the Supreme Court noted that “there is a revolution in the way that evidence is produced before the court”.

    In India before 2000, electronically stored information was treated as a document and secondary evidence of these electronic ‘documents’ was adduced through printed reproductions or transcripts, the authenticity of which was certified by a competent signatory. The signatory would identify her signature in court and be open to cross examination. This simple procedure met the conditions of both sections 63 and 65 of the Evidence Act. In this manner, Indian courts simply adapted a law drafted over one century earlier in Victorian England. However, as the pace and proliferation of technology expanded, and as the creation and storage of electronic information grew more complex, the law had to change more substantially.

    New provisions for electronic records

    To bridge the widening gap between law and technology, Parliament enacted the Information Technology Act, 2000 (“IT Act”) [official pdf here] that, amongst other things, created new definitions of “data”, “electronic record”, and “computer”. According to section 2(1)(t) of the IT Act, an electronic record is “data, record or data generated, image or sound stored, received or sent in an electronic form or micro film or computer generated micro fiche” (sic).

    The IT Act amended section 59 of the Evidence Act to exclude electronic records from the probative force of oral evidence in the same manner as it excluded documents. This is the re-application of the documentary hearsay rule to electronic records. But, instead of submitting electronic records to the test of secondary evidence – which, for documents, is contained in sections 63 and 65 – it inserted two new evidentiary rules for electronic records in the Evidence Act: section 65A and section 65B.

    Section 65A of the Evidence Act creates special law for electronic evidence:

    65A. Special provisions as to evidence relating to electronic record. – The contents of electronic records may be proved in accordance with the provisions of section 65B.

    Section 65A of the Evidence Act performs the same function for electronic records that section 61 does for documentary evidence: it creates a separate procedure, distinct from the simple procedure for oral evidence, to ensure that the adduction of electronic records obeys the hearsay rule. It also secures other interests, such as the authenticity of the technology and the sanctity of the information retrieval procedure. But section 65A is further distinguished because it is a special law that stands apart from the documentary evidence procedure in sections 63 and 65.

    Section 65B of the Evidence Act details this special procedure for adducing electronic records in evidence. Sub-section (2) lists the technological conditions upon which a duplicate copy (including a print-out) of an original electronic record may be used: (i) at the time of the creation of the electronic record, the computer that produced it must have been in regular use; (ii) the kind of information contained in the electronic record must have been regularly and ordinarily fed in to the computer; (iii) the computer was operating properly; and, (iv) the duplicate copy must be a reproduction of the original electronic record.

    Sub-section (4) of section 65B of the Evidence Act lists additional non-technical qualifying conditions to establish the authenticity of electronic evidence. This provision requires the production of a certificate by a senior person who was responsible for the computer on which the electronic record was created, or is stored. The certificate must uniquely identify the original electronic record, describe the manner of its creation, describe the device that created it, and certify compliance with the technological conditions of sub-section (2) of section 65B.

    Non-use of the special provisions

    However, the special law and procedure created by sections 65A and 65B of the Evidence Act for electronic evidence were not used. Disappointingly, the cause of this non-use does not involve the law at all. India’s lower judiciary – the third tier of courts, where trials are undertaken – is vastly inept and technologically unsound. With exceptions, trial judges simply do not know the technology the IT Act comprehends. It is easier to carry on treating electronically stored information as documentary evidence. The reasons for this are systemic in India and, I suspect, endemic to poor developing countries. India’s justice system is decrepit and poorly funded. As long as the judicial system is not modernised, India’s trial judges will remain clueless about electronic evidence and the means of ensuring its authenticity.

    By bypassing the special law on electronic records, Indian courts have continued to apply the provisions of sections 63 and 65 of the Evidence Act, which pertain to documents, to electronically stored information. Simply put, the courts have basically ignored sections 65A and 65B of the Evidence Act. Curiously, this state of affairs was blessed by the Supreme Court in Navjot Sandhu (the Parliament Attacks case), which was a particularly high-profile appeal from an emotive terrorism trial. On the question of the defence’s challenge to the authenticity and accuracy of certain call data records (CDRs) that the prosecution relied on, which were purported to be reproductions of the original electronically stored records, a Division Bench of Justice P. Venkatarama Reddi and Justice P. P. Naolekar held:

    According to Section 63, secondary evidence means and includes, among other things, “copies made from the original by mechanical processes which in themselves ensure the accuracy of the copy, and copies compared with such copies”. Section 65 enables secondary evidence of the contents of a document to be adduced if the original is of such a nature as not to be easily movable. It is not in dispute that the information contained in the call records is stored in huge servers which cannot be easily moved and produced in the court. That is what the High Court has also observed at para 276. Hence, printouts taken from the computers/servers by mechanical process and certified by a responsible official of the service-providing company can be led into evidence through a witness who can identify the signatures of the certifying officer or otherwise speak to the facts based on his personal knowledge.

    Flawed justice and political expediency in wiretap cases

    The Supreme Court’s finding in Navjot Sandhu (quoted above) raised uncomfortable questions about the integrity of prosecution evidence, especially in trials related to national security or in high-profile cases of political importance. The state’s investigation of the Parliament Attacks was shoddy with respect to the interception of telephone calls. The Supreme Court’s judgment notes in prs. 148, 153, and 154 that the law and procedure of wiretaps was violated in several ways.

    The Evidence Act mandates a special procedure for electronic records precisely because printed copies of such information are vulnerable to manipulation and abuse. This is what the veteran defence counsel, Mr. Shanti Bhushan, pointed out in Navjot Sandhu [see pr. 148] where there were discrepancies in the CDRs led in evidence by the prosecution. Despite these infirmities, which should have disqualified the evidence until the state demonstrated the absence of mala fide conduct, the Supreme Court stepped in to certify the secondary evidence itself, even though it is not competent to do so. The court did not compare the printed CDRs to the original electronic record. Essentially, the court allowed hearsay evidence. This is exactly the sort of situation that section 65B of the Evidence Act intended to avoid by requiring an impartial certificate under sub-section (4) that also speaks to compliance with the technical requirements of sub-section (2).

    When the lack of a proper certificate regarding the authenticity and integrity of the evidence was pointed out, this is what the Supreme Court said in pr. 150:

    Irrespective of the compliance of the requirements of Section 65B, which is a provision dealing with admissibility of electronic records, there is no bar to adducing secondary evidence under the other provisions of the Evidence Act, namely, Sections 63 and 65. It may be that the certificate containing the details in sub-section (4) of Section 65B is not filed in the instant case, but that does not mean that secondary evidence cannot be given even if the law permits such evidence to be given in the circumstances mentioned in the relevant provisions, namely, Sections 63 and 65.

    In the years that followed, printed versions of CDRs were admitted in evidence if they were certified by an officer of the telephone company under sections 63 and 65 of the Evidence Act. The special procedure of section 65B was ignored. This has led to confusion and counter-claims. For instance, the 2011 case of Amar Singh v. Union of India (2011) 7 SCC 69 saw all the parties, including the state and the telephone company, dispute the authenticity of the printed transcripts of the CDRs, as well as the authorisation itself. Currently, in the case of Ratan Tata v. Union of India, Writ Petition (Civil) 398 of 2010, a compact disc (CD) containing intercepted telephone calls was introduced in the Supreme Court without following any of the procedure contained in the Evidence Act.

    Returning sanity to electronic record evidence, but at a price

    In 2007, the United States District Court for Maryland handed down a landmark decision in Lorraine v. Markel American Insurance Company, 241 FRD 534 (D. Md. 2007) that clarified the rules regarding the discovery of electronically stored information. In American federal courts, the law of evidence is set out in the Federal Rules of Evidence. Lorraine held that when electronically stored information is offered as evidence, the following tests need to be affirmed for it to be admissible: (i) is the information relevant; (ii) is it authentic; (iii) is it hearsay; (iv) is it original or, if it is a duplicate, is there admissible secondary evidence to support it; and (v) does its probative value survive the test of unfair prejudice?

    In a small way, Anvar does for India what Lorraine did for US federal courts. In Anvar, the Supreme Court unequivocally returned Indian electronic evidence law to the special procedure created under section 65B of the Evidence Act. It did this by applying the maxim generalia specialibus non derogant (“the general does not detract from the specific”), which is a restatement of the principle lex specialis derogat legi generali (“special law repeals general law”). The Supreme Court held that the provisions of sections 65A and 65B of the Evidence Act created special law that overrides the general law of documentary evidence [see pr. 19]:

    Proof of electronic record is a special provision introduced by the IT Act amending various provisions under the Evidence Act. The very caption of Section 65A of the Evidence Act, read with Sections 59 and 65B is sufficient to hold that the special provisions on evidence relating to electronic record shall be governed by the procedure prescribed under Section 65B of the Evidence Act. That is a complete code in itself. Being a special law, the general law under Sections 63 and 65 has to yield.

    By doing so, it disqualified oral evidence offered to attest secondary documentary evidence [see pr. 17]:

    The Evidence Act does not contemplate or permit the proof of an electronic record by oral evidence if requirements under Section 65B of the Evidence Act are not complied with, as the law now stands in India.

    The scope for oral evidence is offered later. Once electronic evidence is properly adduced according to section 65B of the Evidence Act, along with the certificate of sub-section (4), the other party may challenge the genuineness of the original electronic record. If the original electronic record is challenged, section 22A of the Evidence Act permits oral evidence as to its genuineness only. Note that section 22A disqualifies oral evidence as to the contents of the electronic record, only the genuineness of the record may be discussed. In this regard, relevant oral evidence as to the genuineness of the record can be offered by the Examiner of Electronic Evidence, an expert witness under section 45A of the Evidence Act who is appointed under section 79A of the IT Act.

    While Anvar is welcome for straightening out the messy evidentiary practice regarding electronically stored information that Navjot Sandhu had endorsed, it will extract a price from transparency and open government. The portion of Navjot Sandhu that was overruled dealt with wiretaps. In India, the wiretap empowerment is contained in section 5(2) of the Indian Telegraph Act, 1885 (“Telegraph Act”). The Telegraph Act is an inherited colonial law. Section 5(2) of the Telegraph Act was almost exactly duplicated thirteen years later by section 26 of the Indian Post Office Act, 1898. When the latter was referred to a Select Committee, P. Ananda Charlu – a prominent lawyer, Indian nationalist leader, and one of the original founders of the Indian National Congress in 1885 – criticised its lack of transparency, saying: “a strong and just government must not shrink from daylight”.

    Wiretap leaks have become an important means of discovering governmental abuse of power, corruption, and illegality. For instance, the massive fraud perpetrated through the under-selling of 2G spectrum by A. Raja, the former telecom minister, supposedly India’s most expensive corruption scandal, caught the public’s imagination only after wiretapped conversations were leaked. Some of these conversations were recorded on to a CD and brought to the Supreme Court’s attention. There is no way that a whistle-blower, or a person in possession of electronic evidence, can obtain the certification required by section 65B(4) of the Evidence Act without the state coming to know about it and, presumably, attempting to stop its publication.

    Anvar neatly ties up electronic evidence, but it will probably discourage public interest disclosures of iniquity.


    National Consultation on Media Law Schedule

    by Prasad Krishna last modified Sep 30, 2014 06:34 AM

    PDF document icon (National Consultation on Media Law)- Schedule.pdf — PDF document, 193 kB (197990 bytes)
