The Internet Has a New Standard for Censorship
This article was published in The Wire on January 29, 2016. The original can be read here.
Ray Bradbury’s dystopian novel Fahrenheit 451 opens with the declaration, “It was a pleasure to burn.” The six unassuming words offer a glimpse into the mindset of the novel’s protagonist, ‘the fireman’ Guy Montag, who burns books. Montag occupies a world of totalitarian state control over the media where learning is suppressed and censorship prevails. The title alludes to the ‘temperature at which book paper catches fire and burns,’ an apt reference to the act of violence committed against citizens through the systematic destruction of literature. It is tempting to think about the novel solely as a story of censorship. It certainly is. But it is also a story about the value of intellectual freedom and the importance of information.
Published in 1953, Bradbury’s story predates home computers, the Internet, Twitter and Facebook, and yet it anticipates the evolution of these technologies as tools for censorship. When the state seeks to censor speech, it uses the easiest and most effective mechanisms available. In Bradbury’s dystopian world, burning books did the trick; in today’s world, governments achieve the same end by blocking access to information online. The majority of the world’s Internet users encounter censorship, even if the contours of control vary depending on a country’s policies and infrastructure.
Online censorship in India
In India, information access blockades have become commonplace and are increasingly enforced across the country to maintain political stability, for economic reasons, in defence of national security or to preserve social values. Last week, the Maharashtra Anti-Terrorism Squad blocked 94 websites that were allegedly radicalising young people to join the militant group ISIS. Memorably, in 2015 the NDA government’s ham-fisted attempt at enforcing a ban on online pornography resulted in widespread public outrage. Instead of revoking the ban, the government issued yet another vaguely worded and in many senses astonishing order. As reported by Medianama, the revised order delegates to private intermediaries the responsibility of determining whether banned websites should remain unavailable.
The state’s shifting reasons for blocking access to information reflect its tendentious attitude towards speech and expression. Free speech in India is messily contested, and the judiciary normally acts as a check on the executive’s proclivity for banning. For instance, in 2010 the Supreme Court upheld the Bombay High Court’s decision to revoke the ban on American author James Laine’s book on Shivaji, which, according to the state government, contained material promoting social enmity. In the context of communications technology, however, the traditional role of the courts is increasingly being passed on to private intermediaries.
This delegation of authority is evident in the revised order issued to deal with websites blocked in the crackdown on pornography, which notifies intermediaries to proactively filter content for ‘child pornography’. Such screening and filtering requires intermediaries to make a determination on the legality of content in order to avoid direct liability. As international best practices such as the Manila Principles on Intermediary Liability point out, such screening is slow and costly, and intermediaries are incentivised to simply limit access to information.
Blocking procedures and secrecy
The constitutional validity of Section 69A of the Information Technology Act, 2000 (inserted by the 2008 amendment), which grants the executive the power to block access to information unchecked and in secrecy, was challenged in Shreya Singhal v. Union of India. Curiously, the Supreme Court upheld S69A, reasoning that the provisions were narrowly drawn with adequate safeguards, and noted that any procedural inconsistencies may be challenged through writ petitions under Article 226 of the Constitution. Unfortunately, as past instances of blocking under S69A reveal, the provisions are littered with procedural deficiencies, amplified manifold by the authorities responsible for interpreting and implementing the orders.
Problematically, an opaque confidentiality requirement built into the blocking rules mandates secrecy in requests and recommendations for blocking and places written orders outside the purview of public scrutiny. As there is no comprehensive list of blocked websites or of the legal orders behind them, the public has to rely on leaked orders from ISPs or on media reports to understand the censorship regime in India. RTI applications requesting further information on the implementation of these safeguards have at best yielded incomplete information.
Historically, the courts in India have held that Article 19(1)(a) of the Constitution is as much about the right to receive information as about the right to disseminate it, and that a chilling effect on speech also violates the right to receive information. Therefore, if a website is blocked, citizens have a constitutional right to know the legal grounds on which access is being restricted. Just as the government announces and clarifies the grounds when banning a book, users have a right to know the grounds for restrictions on their speech online.
Unfortunately, under the present blocking regime in India there is no easy way for a service provider to comply with a blocking order while also notifying users that censorship has taken place. The ‘Blocking Rules’ require notice to the “person or intermediary”, implying that notice may be sent to either the originator or the intermediary. Further, the confidentiality clause raises the presumption that nobody beyond the intermediaries ought to know about a block.
Naturally, intermediaries interested in self-preservation and in avoiding conflict with the government become complicit in maintaining the secrecy of blocking orders. As a result, it is often difficult to determine why content is inaccessible, and users often mistake censorship for a technical problem in accessing content. Consequently, pursuing legal recourse or holding the government accountable for its censorious activity becomes a challenge. In failing to consider the constitutional merits of the confidentiality clause, the Supreme Court has shied away from addressing the over-broad reach of the executive.
Secrecy in removing or blocking access is a global problem that limits the transparency expected from ISPs. Across many jurisdictions, intermediaries are legally prohibited from publicising filtering orders as well as information relating to content or service restrictions. In the United Kingdom, for example, ISPs are prohibited from revealing blocking orders related to terrorism and surveillance. In South Korea, the Korea Communications Standards Commission holds meetings that are open to the public; however, the sheer volume of censorship (close to 10,000 URLs a month) makes meaningful public oversight unwieldy.
As the Manila Principles note, providing users with an explanation and reasons for placing restrictions on their speech and expression increases civic engagement. Transparency standards will empower citizens to demand more accountability from the companies and governments they interact with when it comes to content regulation. It is worth noting that for conduits, as opposed to content hosts, it may not always be technically feasible to provide a notice when content is unavailable due to filtering. A new standard helps improve transparency for network-level intermediaries and for websites bound by confidentiality requirements. The recently introduced HTTP error code is a critical step forward in cataloguing censorship on the Internet.
A standardised code for censorship
On December 21, 2015, the Internet Engineering Steering Group (IESG), the body responsible for reviewing and approving the Internet’s operating standards, approved the publication of 451, ‘An HTTP Status Code to Report Legal Obstacles’. The code gives intermediaries a standardised way to notify users when a website is unavailable following a legal order. Publishing the code allows intermediaries to be transparent about their compliance with court and executive orders across jurisdictions, and is a huge step forward for capturing online censorship. HTTP code 451 was introduced by software engineer Tim Bray, and its name is an homage to Bradbury’s novel Fahrenheit 451.
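In practice, a server or ISP complying with a blocking order can return a response along the following lines. This is an illustrative sketch (the URLs and wording are hypothetical, not from an actual order); the specification also allows a “blocked-by” link identifying the entity implementing the block:

```http
HTTP/1.1 451 Unavailable For Legal Reasons
Link: <https://authority.example.gov/blocking-order>; rel="blocked-by"
Content-Type: text/html

<html>
  <head><title>Unavailable For Legal Reasons</title></head>
  <body>
    <p>This resource is unavailable in your jurisdiction under a legal order.</p>
  </body>
</html>
```

Unlike a generic error page, this response tells both the human reader and any automated tool exactly why the content is missing.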
Bray began developing the code after being inspired by a blog post by Terence Eden calling for a censorship error code. The code’s official status comes after two years of discussions within the technical community and is a result of campaigning by transparency and civil society advocates who have been pushing for clearer labelling of Internet censorship. Initially, the code received pushback from within the technical community, for reasons enumerated by Mark Nottingham, chair of the IETF HTTP Working Group, in his blog. However, sites soon began using the code on an experimental, unsanctioned basis, and faced with increasing demand and feedback, the code was accepted.
The HTTP code 451 works as a machine-readable flag and has immense potential as a tool for organisations and users who want to quantify and understand censorship on the internet. Cataloguing online censorship is a challenging, time-consuming and expensive task. The HTTP code 451 circumvents confidentiality obligations built into blocking or licensing regimes and reduces the cost of accessing blocking orders.
The code distinguishes websites blocked following a court or executive order from information that is inaccessible due to technical errors. If implemented widely, Bray’s new code will help prevent confusion around blocked sites. It addresses ISPs’ misleading and inaccurate use of error 403 ‘Forbidden’ (which indicates that the server understood the request but refuses to act on it) or 404 ‘Not Found’ (which indicates that the requested resource could not be found but may be available again in the future).
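Because the distinction is machine-readable, a censorship-measurement tool can separate legal blocks from ordinary errors with a few lines of code. A minimal sketch in Python (the `describe_block` helper is hypothetical, written for illustration; the 451 constant has been part of the standard library’s `http` module since Python 3.5):

```python
from http import HTTPStatus

def describe_block(status: int) -> str:
    """Map an HTTP status code to a human-readable reason for inaccessibility."""
    if status == HTTPStatus.UNAVAILABLE_FOR_LEGAL_REASONS:  # 451: legal order
        return "legally blocked"
    if status == HTTPStatus.FORBIDDEN:                      # 403: server refuses
        return "forbidden"
    if status == HTTPStatus.NOT_FOUND:                      # 404: resource missing
        return "not found"
    return "other"

# A monitoring crawler would call describe_block() on each response it receives
# and log "legally blocked" results as censorship events rather than outages.
print(describe_block(451))  # -> legally blocked
```

With the old practice of returning 403 or 404 for blocked pages, no such automated classification was possible.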
Adoption of the new standard is optional, and at present there are no laws in India that prevent intermediaries from doing so. Implementing a standardised, machine-readable flag for censorship will go a long way in bolstering the accountability of ISPs, which have in the past blocked an entire domain instead of the specified URL. Adoption of the standard by ISPs will also improve understanding of the burden that censoring and filtering content imposes on intermediaries, as presently there is no clarity on what constitutes compliance. Of course, censorious governments may prohibit the use of the code, for example by issuing an order that specifies not only that a page be blocked but also precisely which HTTP return code should be used. Such orders, however, should be viewed as evidence of systematic rights violations by totalitarian regimes.
In India, where access to software code repositories such as GitHub and SourceForge is routinely restricted, the need for such a code is obvious. Its use will improve confidence in blocking practices, allowing users to understand the grounds on which their right to information is being restricted. Improving transparency around censorship is the only way to build trust between the government and its citizens about the laws and policies applicable to Internet content.