
Roundtable: Identifying and Limiting Hate Speech and Harassment Online

by Prasad Krishna last modified Aug 09, 2016 01:31 PM
Japreet Grewal attended this event organized by Software Freedom Law Centre at Constitution Club Of India, Rafi Marg, New Delhi on July 28, 2016.

See the original report published by SFLC here.


SFLC.in organized a roundtable discussion on 28 July 2016 in New Delhi to initiate a focused and collaborative dialogue around the increasingly important issues of online harassment and hate speech. This roundtable was intended as the first in a series of discussions on these issues, and was attended by representatives from various stakeholder groups, including intermediary platforms, civil society groups, and media houses, along with individuals who had personally experienced such online abuse and harassment. The core objective of this discussion was to recognize and understand the vast range of concerns that exist in this sphere, in an effort to develop a framework for the regulation of such activities without encroaching upon the right to freedom of expression. The discussion was conducted under the Chatham House Rule so as to facilitate an uninhibited exchange of views.

Over the course of the event, the complex and multifaceted nature of its overarching theme became apparent, as the discussion moved from underlying social constructs to the responsibilities of intermediary platforms, the adequacy of existing laws, the sensitization of everyday users, and the effective handling of grievances by law enforcement agencies. At the very outset, it was highlighted that social media platforms, with their increasing popularity, are being considered centralized hubs for businesses and others. However, individuals, communities, and institutions often find themselves at the receiving end of sustained abuse and threats, either on grounds of their actual or perceived characteristics, or over their online expression. The dynamic discussion that ensued brought to light significant concerns that would require a collaborative effort across stakeholder groups to address. For the sake of clarity, we are categorizing these learnings under the following heads:

  • Conceptual understanding of online harassment and hate speech: It was discussed at length that hate speech, and speech that culminates in harassment in the online sphere, are reflective of the social outlook of the country at large. Women were seen as more frequent targets of harassment in the form of rape threats, sexual remarks, and name-calling, whereas men are mostly called out for their beliefs and opinions. When discussing hate speech, it was considered important to take note of the power dynamics at play between stronger groups and vulnerable ones. Limiting such content gets especially complicated considering the apprehension that, in an effort to monitor hate speech and harassment, free speech may get stifled. The paradox of anonymity being an enabler of free speech, as well as the reason for unabashed harassment, adds yet another layer of complexity to the issue. Moreover, it was felt that a nuanced distinction needed to be made between systematic attacks by online mobs against a particular person, and hateful and/or harassing speech that engages on a one-to-one level. This all culminated in a realization that this issue goes beyond the online domain, into a societal mindset that is amplified on the Internet, and that the faint line between free speech and hateful and harassing speech is very difficult to pinpoint.

  • Role of intermediaries: It was the opinion of the representatives of intermediary platforms at the roundtable that the current legal frameworks in the country are sufficient to tackle this issue, and that they should operate in compliance with such laws. While the specific terms of service may differ in terms of permissible content, depending on the type of service being provided by the intermediary, these platforms do invariably keep a check on the content being generated and evaluate it for compliance with the applicable terms of service. Additionally, platforms that allow users to create and publish their own content give users various tools such as block, filter, un-follow, and other customized options to moderate the content they receive. Though the intermediaries, in their own words, 'are not a delete squad, but a compliance team', it was said that they ran the perpetual risk of either censoring content that should not have been censored, or not censoring enough of the content that should have been censored. This incentivizes them to exercise zero-tolerance policies in certain areas such as child sexual abuse or terrorism, and to resort to immediate take-down of content related to such themes. However, in spite of the sheer volume of material that is generated and reported, it was felt that a completely automated approach cannot be followed for filtering hateful and harassing content that violates terms of service. Taking down content and expression requires processing various factors that determine the context of that material, and this calls for a subjective approach that requires a set of human eyes. Therefore, the intermediaries do have tools for users that protect them from hateful and harassing speech, and they work with safety experts to ensure that users feel safe while using their services.

  • Adequacy of legal frameworks: A distinction was drawn over the course of the discussion between hate speech as a social concept and hate speech as a legal concept. For legal purposes, speech would not attract penalties unless it incites a real threat of violence and civic disorder. However, the law is not sufficiently equipped to deal with speech that does not incite violence, but causes psychological damage. It was undisputed that the concerns in this area cannot be solved by creating more statutes. Going down this road could lead to the creation of a Section 66A equivalent that would enable censorship through law and cause a chilling effect on freedom of expression. It was emphasized that the existing laws have adequate provisions, but that stricter implementation is required.

  • Response from law enforcement agencies: An evaluation of this point led to the conclusion that people who are harassed online, or are the targets of hate speech, are hesitant to approach the police and law enforcement agencies for help. There have been instances where the police were unable to help due to the limited application of laws in such cases, as mentioned above.

  • Possible remedies: As a part of this roundtable, SFLC.in had proposed a set of best practices aimed at limiting hateful and harassing content online. These were intended as self-regulatory measures that could be followed by intermediaries functioning as speech platforms, where users can create and publish content without pre-filtration. Among the measures discussed extensively was the practice of promoting 'counter speech' on the platforms most frequently used to spread hateful propaganda and harassment. This was generally seen as an effective counter-measure deserving further exploration, and one of the intermediaries mentioned a project they were formulating on 'counter radicalization'. However, concerns were raised with respect to the identification of areas that would benefit from counter speech, and its effectiveness against mob attacks. Another unique approach suggested by the participants was to 'vaccinate' first-time users by educating them about the enormity and complexity of the Internet, including introducing them to the reality that freedom of expression online often crosses over into hate speech and harassment. This would act as an initiation process for understanding the working of the Internet and the prevalence of hateful and harassing content on its numerous speech platforms, so that first-time users are not discouraged from using the Internet merely due to the presence of negative content. An interesting suggestion for the platforms was to work towards a mechanism that is more offender-centric and facilitates the tracking of repeat offenders, along with providing blocking tools for users.

This roundtable served to explore the many layers of hateful and harassing speech, which cut across the roles and responsibilities of various stakeholder groups and touch on concerns that are deeply entrenched in our societal outlook. The increasing frequency and volume of such content on the Internet is an indication of the urgent need to collaborate and develop a framework for limiting such speech, while balancing the fundamental right to freedom of expression. We thank all the participants and appreciate their valuable contributions, which facilitated a better understanding of the overall theme.