
In Twitter India’s Arbitrary Suspensions, a Question of What Constitutes a Public Space

Posted by Torsha Sarkar at Dec 12, 2019 04:54 PM
A discussion is underway about the way social media platforms may have to operate within the tenets of constitutional protections of free speech.

The article by Torsha Sarkar was published in The Wire on December 7, 2019.


On October 26, 2019, Twitter suspended the account of senior advocate Sanjay Hegde. The reason? He had previously put up the famous photo of August Landmesser refusing to perform the Nazi salute amid a crowd at the Blohm+Voss shipyard.

According to the social media platform, the image violated Twitter’s ‘hateful imagery’ guidelines, even though the photo has been around for decades and is widely recognised as a symbol of resistance against blind authoritarianism.


August Landmesser. Photo: Public Domain

Twitter briefly revoked the suspension on October 27, but promptly suspended Hegde’s account again. This time, the action was prompted by Hegde quote-tweeting parts of a poem by Gorakh Pandey, titled ‘Hang him’, which was written in protest against the first death penalties handed to two peasant revolutionaries in independent India. On this occasion, Hegde was informed that his account would not be restored.

Spurred by what he believed was Twitter’s arbitrary exercise of power, he filed a legal notice with Twitter and asked the Ministry of Electronics and Information Technology (MeitY) to intervene in the matter. It is the substance of this request that is of particular interest.

In his complaint, Hegde first outlines how the content shared by him did not violate any of Twitter’s community guidelines. He then goes on to highlight how his fundamental right to disseminate and receive information under Article 19(1)(a) was obstructed by Twitter’s action. Here, he places reliance on several key decisions of the Indian and US Supreme Courts on media freedom, which lend weight to his argument that a citizen’s right to free speech is meaningless if control over speech is concentrated in the hands of a few private parties.

Vertical or horizontal?

One of the first things we learn about fundamental rights is that they are enforceable against the government, and that they allow the individual a remedy against the excesses of the all-powerful state. This understanding of fundamental rights is usually called the ‘vertical’ approach, where the state, or an allied public authority, is at the top and the individual, a non-public entity, is at the bottom.

However, there is another, albeit underdeveloped, thread of constitutional jurisprudence that argues that in certain circumstances these rights can be claimed against another private entity. This is called the ‘horizontal’ application of fundamental rights.

Seen in this light, Hegde’s contention essentially becomes this: a claim to an enforceable remedy against a private entity for allegedly violating his fundamental right. This is clearly an ask for the Centre to consider a horizontal application of Article 19(1)(a) against large social media companies.

What could this mean?

Lawyer Gautam Bhatia has argued that there are several ways in which a fundamental right can be enforced against another private entity. It must be noted that he derives this classification from existing judicial decisions, which is different from seeking an executive intervention. Nevertheless, it is interesting to consider the logic of his arguments as a thought exercise. Bhatia points out that one of the ways in which fundamental rights can be applied to a private entity is by deeming the concerned entity to be ‘state’ as per Article 12.

There is a considerable amount of jurisprudence on the nature of the test to determine whether the assailed entity is state. In 2002, the Supreme Court held that for an entity to be deemed state, it must be ‘functionally, financially and administratively dominated by or under the control of the Government’. If we go by this test, then a social media platform would most probably not come within the ambit of Article 12.

However, there is a thread of recent developments that might be interesting to consider. Earlier this year, a federal court of appeals in the US ruled that the First Amendment prohibits President Donald Trump, who used his Twitter account for government purposes, from blocking his critics. The court further held that when a public official uses their account for official purposes, the account ceases to be a mere private account. This judgment has a sharp bearing on the current discussion, and on the way social media platforms may have to operate within the tenets of constitutional protections of free speech.

Although the opinion of the federal court clearly noted that it did not concern itself with the application of First Amendment rights to the social media platforms themselves, one cannot help but wonder: if the court rules that certain spaces in a social media account are ‘public’ by default, and that politicians cannot exclude critics from those spaces, can the company itself block or impede certain messages? If the company does so, can an enforceable remedy then be claimed against it?


A US court ruled that Donald Trump cannot block people on his Twitter account. Photo: Reuters

What can be done?

Of course, there is no straight answer to this question. On one hand, social media platforms, owing to their enormous concentration of power and opaque moderation policies, have to a large extent become gatekeepers of online speech. If such power is left unchecked, then, as Hegde’s request demonstrates, a citizen’s free speech rights are meaningless.

On the other hand, if we definitively agree that in certain circumstances citizens should be allowed to claim remedies against these companies’ arbitrary exercise of power, are we setting ourselves up for a slippery slope? Would we make exceptions to the nature of spaces on social media based on who is using them? If we do, to what extent would we limit the company’s power to regulate speech in such spaces? How would such a limitation work in consonance with the company’s need to protect public officials from targeted harassment?

At this juncture, given the novelty of the situation, our decisions should also be measured. One way of addressing this obvious paradigm shift is by considering the idea of oversight structures more seriously.

I have previously written about the possibility of having an independent regulator as a compromise between overly stern government regulation and allowing social media companies free rein over what goes on their platforms. In light of recent events, this might be a useful alternative to consider.

Hegde had also asked MeitY to issue guidelines to ensure that any censorship of speech on these social media platforms is done in accordance with the principles of Article 19.

If we presume that certain social media platforms are large and powerful enough to be treated akin to public spaces, then having an oversight authority to arbitrate future disputes and ensure the enforcement of constitutional principles may just be the first step towards more evidence-based policymaking.