The Centre for Internet and Society
http://editors.cis-india.org
Submission to the Facebook Oversight Board: Policy on Cross-checks
http://editors.cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-policy-on-cross-checks
<b>The Centre for Internet & Society (CIS) submitted public comments to the Facebook Oversight Board on a policy consultation.</b>
<h2>Is a cross-check system needed?</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board</strong>: The Board should investigate the cross-check system as part of Meta’s larger problems with algorithmically amplified speech, and how such speech gets moderated.</p>
<p style="text-align: justify;"><strong>Explanation</strong>: The issues surrounding Meta’s cross-check system are not an isolated phenomena, but rather a reflection of the problems of algorithmically amplified speech, as well the lack of transparency in the company’s content moderation processes at large. At the outset, it must be stated that the majority of information on the cross-check system only became available after the media <a href="https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=article_inline">reports</a> published by the Wall Street Journal. While these reports have been extensive in documenting various aspects of the system, there is no guarantee that the disclosures obtained by them provides the complete picture regarding the system. Further, given that Meta has been found to purposely mislead the Board and the public on how the cross-check system operates, it is worth investigating the incentives that necessitate the cross-check system in the first place.</p>
<p style="text-align: justify;">Meta claims that the cross-check system works as a check for false positives: they “employ additional reviews for high-visibility content that may violate our policies.” Essentially they want to make sure that content that stays up on the platform and reaches a large audience, is following their content guidelines. However, previous disclosures have <a href="https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346">proven</a> policy executives have prioritized the company’s ‘business interests’ over removing content that violates their policies; and have <a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang">waited to act on known problematic content</a> until significant external pressure was built up, including in India. In this context, the cross-check system seems less like a measure designed to protect users who might be exposed to problematic content, and more as a measure for managing public perception of the company.</p>
<p style="text-align: justify;">Thus the Board should investigate both how content gains an audience on the platform, and how it gets moderated. Previous <a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang">whistleblower disclosures</a> have shown that the mechanics of algorithmically amplified speech, which prioritizes <a href="https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/">engagement and growth over safety</a>, are easily taken advantage of by bad actors to promote their viewpoints through artificially induced virality. The cross-check system and other measures of content moderation at scale would not be needed if it was harder to spread problematic content on the platform in the first place. Instead of focusing only on one specific system, the Board needs to urge Meta to re-evaluate the incentives that drive content sharing on the platform and come up with ways that make the platform safer.</p>
<h2 style="text-align: justify;">Meta’s Obligations under Human Rights Law</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board: </strong>The Board must consider the cross-check system to be violative of Meta’s obligations under the International Covenant of Civil and Political Rights (ICCPR). Additionally, the cross-check ranker must be incorporated with Meta’s commitments towards human rights, as outlined in its Corporate Human Rights Policy.</p>
<p style="text-align: justify;">Explanation: Meta’s content moderation, and by extension, its cross-check system, is bound by both international human rights law as well as the Board’s past decisions. At the outset, The system fails the three-pronged test of legality, legitimacy and necessity and proportionality, as delineated under Article 19(3) of the International Covenant of Civil and Political Rights (ICCPR). Firstly, this system has been “<a href="https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=article_inline">scattered throughout the company, without clear governance or ownership</a>”, which violates the legality principle, since there is no clear guidance on what sort of speech, or which classes of users, would deserve the treatment of this system. Secondly, there is no understanding about the legitimacy of aims with which this system had been set up in the first place, beyond Meta’s own assertions, which have been <a href="https://www.oversightboard.com/news/215139350722703-oversight-board-demands-more-transparency-from-facebook/">countered</a> by evidence to the contrary. Thirdly, the necessity and proportionality of the restriction has to be <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">read along</a> with the <a href="https://www.ohchr.org/en/issues/freedomopinion/articles19-20/pages/index.aspx">Rabat Plan of Action</a>, which requires that for a statement to become a criminal offense, a six-pronged test of threshold is to be applied: a) the social and political context, b) the speaker’s position or status in the society, c) intent to incite the audience against a target group, d) content and form of the speech, e) extent of its dissemination and f) likelihood of harm. As news reports have indicated, Meta has been utilizing the cross-check system to privilege speech from influential users, and in the process, have shielded inflammatory, inciting speech that would have otherwise qualified the Rabat threshold. As such, the third requirement is not fulfilled either.</p>
<p style="text-align: justify;">Additionally, Meta’s own <a href="https://about.fb.com/wp-content/uploads/2021/03/Facebooks-Corporate-Human-Rights-Policy.pdf">Corporate Human Rights Policy</a> commits to respecting human rights in line with the UN Guiding Principles on Business and Human Rights (UNGPs). Therefore, the cross-check ranker must incorporate these existing commitments to human rights, including:</p>
<ul>
<li style="text-align: justify;">The right to freedom of expression:, UN Special Rapporteur on freedom of opinion and expression report <a href="https://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/38/35">A/HRC/38/35</a> (2018); <a href="https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=25729&LangID=E">Joint Statement of international freedom of expression monitors on COVID-19 (March, 2020)</a>.</li></ul>
<p style="text-align: justify;">The Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression addresses the regulation of user-generated online content.</p>
<p>The Joint Statement was issued regarding governmental promotion and protection of access to, and the free flow of, information during the pandemic.</p>
<ul>
<li>The right to non-discrimination: International Convention on the Elimination of All Forms of Racial Discrimination (<a href="https://www.ohchr.org/EN/ProfessionalInterest/Pages/CERD.aspx">ICERD</a>), Articles 1 and 4.</li></ul>
<p>Article 1 of the ICERD defines racial discrimination.</p>
<p>Article 4 of the ICERD condemns propaganda and organisations that attempt to justify discrimination or are based on the idea of racial supremacism.</p>
<ul>
<li>Participation in public affairs and the right to vote: ICCPR Article 25.</li>
<li>The right to remedy: General Comment No. 31, Human Rights Committee (2004) (<a href="https://tbinternet.ohchr.org/_layouts/15/treatybodyexternal/Download.aspx?symbolno=CCPR%2fC%2f21%2fRev.1%2fAdd.13&Lang=en">General Comment 31</a>); UNGPs, Principle 22.</li></ul>
<p>The General Comment discusses the nature of the general legal obligation imposed on States Parties to the Covenant.</p>
<p style="text-align: justify;">Guiding Principle 22 states that where business enterprises identify that they have caused or contributed to adverse impacts, they should provide for or cooperate in their remediation through legitimate processes.</p>
<h2>Meta’s obligations to avoid political bias and false positives in its cross-check system</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board: </strong>The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to ensure that it is open about risks to user rights when there is involvement from the State in content moderation. Additionally, the Board must ask Meta to undertake a diversity and human rights audit of its existing policy teams, and commit to regular cultural training for its staff. Finally, the Board must investigate the potential conflicts of interest that arise when Meta’s policy team has any sort of nexus with political parties, and how that might impact content moderation.</p>
<p style="text-align: justify;">Explanation: For the cross-check system to be free from biases, it is important for Meta to come clear to the Board regarding the rationale, standards and processes of the cross check review, and report on the relative error rates of determinations made through cross check compared with ordinary enforcement procedures. It also needs to disclose to the Board in which particular situations it uses the system and in which it does not. Principle 4 under the Foundational Principles of the <a href="https://santaclaraprinciples.org/">Santa Clara Principles on Transparency and Accountability in Content Moderation</a> encourage companies to realize the risk to user rights when there is involvement from the State in processes of content moderation and asks companies to makes users aware that: a) a state actor has requested/participated in an action on their content/account, and b) the company believes that the action was needed as per the relevant law. Users should be allowed access to any rules or policies, formal or informal work relationships that the company holds with state actors in terms of content regulation, the process of flagging accounts/content and state requests to action.</p>
<p style="text-align: justify;">The Board must consider that erroneous lack of action (false positives) might not always be a system's flaw, but a larger, structural issue regarding how policy teams at Meta functions. As previous disclosures have <a href="https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346">proven</a>, the contours of what sort of violating content gets to stay up on the platform has been ideologically and politically coloured, as policy executives have prioritized the company’s ‘business interests’ over social harmony. In such light, it is not sufficient to simply propose better transparency and accountability measures for Meta to adopt within its content moderation processes to avoid political bias. Rather, the Board’s recommendations must focus on the structural aspect of the human moderator and policy team that is behind these processes. The Board must ask Meta to a) urgently undertake a diversity and human rights audit of its existing team and its hiring processes, b) commit to regular training to ensure that their policy staffs are culturally literate in the socio-political regions they work in. Further, the Board must seriously investigate the potential <a href="https://time.com/5883993/india-facebook-hate-speech-bjp/">conflicts of interest</a> that happen when regional policy teams of Meta, with nexus to political parties, are also tasked with regulating content from representatives of these parties, and how that impacts the moderation processes at large.</p>
<p style="text-align: justify;">Finally, in case decision <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">2021-001-FB-FBR</a>, the Board made a number of recommendations to Meta which must be implemented in the current situation, including: a) considering the political context while looking at potential risks, b) employment of specialized staff in content moderation while evaluating political speech from influential users, c) familiarity with the political and linguistic context d) absence of any interference and undue influence, e) public explanation regarding the rules Meta uses when imposing sanctions against influential users and f) the sanctions being time-bound.</p>
<h2 style="text-align: justify;">Transparency of the cross-check system</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board: </strong>The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to increase the transparency of its cross-check system.</p>
<p style="text-align: justify;"><strong>Explanation: </strong>There are ways in which Meta can increase the transparency of not only the cross-check system, but the content moderation process in general. The following recommendations draw from <a href="https://santaclaraprinciples.org/">The Santa Clara Principles</a> and the Board’s own previous decisions:</p>
<p style="text-align: justify;">Considering Principle 2 of the Santa Clara Principles: Understandable Rules and Policies, Meta should ensure that the policies and rules governing moderation of content and user behaviors on Facebook are<strong> clear, easily understandable, and available in the languages</strong> in which the user operates.</p>
<p style="text-align: justify;">Drawing from Principle 5 on Integrity and Explainability and from the Board’s recommendations in case decision <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">2021-001-FB-FBR</a> which advises Meta to“<em>Provide users with accessible information on how many violations, strikes and penalties have been assessed against them, and the consequences that will follow future violations</em>”, Meta should be able to <strong>explain the content moderation decisions to users in all cases</strong>: when under review, when the decision has been made to leave the content up, or take it down. We recommend that Meta keeps a publicly accessible running tally of the number of moderation decisions made on a piece of content till date with their explanations. This would allow third parties (like journalists, activists, researchers and the OSB) to keep Facebook accountable when it does not follow its own policies, as has previously been the case.</p>
<p style="text-align: justify;">In the same case decision, the Board has also previously recommended that Meta “<em>Produce more information to help users understand and evaluate the process and criteria for applying the newsworthiness allowance, including how it applies to influential accounts. The company should also clearly explain the rationale, standards and processes of the cross-check review, and report on the relative error rates of determinations made through cross-checking compared with ordinary enforcement procedures.</em>” Thus, Meta should <strong>publicly explain the cross check system </strong>in detail with examples, and make public the list of attributes that qualify a piece of content for secondary review.</p>
<p style="text-align: justify;">The Operational Principles further provide actionable steps that Meta can take to improve the transparency of their content moderation systems. Drawing from Principle 2: Notice and Principle 3: Appeals, Meta should make a satisfactory <strong>appeals process available </strong>to users - whether they be decisions to leave up or takedown content. The appeals process should be handled by context aware teams. Meta should then <strong>publish the results</strong> of the cross check system and the appeals processes as part of their transparency reports including data like total content actioned, rate of success in appeals and cross check process, decisions overturned and preserved etc, which would also satisfy the first Operational Principle: Numbers.</p>
<h2 style="text-align: justify;">Resources needed to improve the system for users and entities who do not post in English</h2>
<p style="text-align: justify;"><strong>Recommendations for the Board: </strong>The Board must urge Meta to urgently invest in resources to expand Meta’s content moderation services into the local contexts in which the company operates and invest in training data for local languages.</p>
<p style="text-align: justify;"><strong>Explanation: </strong>The cross-check system is not a fundamentally different problem than content moderation. It has been shown time and time again that Meta’s handling of content from non-Western, non-English language contexts is severely lacking. It has been shown how content hosted on the platform has been used to<a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang"> inflame existing tensions in developing countries</a>, <a href="https://www.wsj.com/articles/facebook-services-are-used-to-spread-religious-hatred-in-india-internal-documents-show-11635016354?mod=article_inline">promote religious hatred in India</a>, <a href="https://www.wsj.com/articles/burn-the-houses-rohingya-survivors-recount-the-day-soldiers-killed-hundreds-1526048545?mod=article_inline">genocide in Mynmar</a>, and continue to support <a href="https://www.wsj.com/articles/facebook-drug-cartels-human-traffickers-response-is-weak-documents-11631812953?mod=article_inline">human traffickers and drug cartels</a> on the platform even when these issues have been identified.</p>
<p style="text-align: justify;">There is an urgent need to invest resources to expand Meta’s content moderation services into the local contexts in which the company operates. The company should make all policies and rule documents available in the languages of its users; invest in creating automated tools that are capable of flagging content that is not posted in English; and add people familiar with the local contexts to provide context aware second level reviews. The Facebook Files show that even according to company engineering, <a href="https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184?mod=article_inline">automated content moderation</a> is still not very effective in identifying hate speech and other harmful content. Meta should focus on hiring, training and retaining human moderators who have knowledge of local contexts. Bias training of all content moderators, but especially those who will participate in the second level reviews in the cross check system is also extremely important to ensure acceptable decisions.</p>
<p style="text-align: justify;">Additionally, in keeping with Meta’s human rights commitments, the company should develop and publish a policy for responding to human rights violations when they are pointed out by activists, researchers, journalists and employees as a matter of due process. It should not wait for a negative news cycle to stir them into action <a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang">as it seems to have done in previous cases</a>.</p>
<h2 style="text-align: justify;">Benefits and limitations of automated technologies</h2>
<p style="text-align: justify;">Meta <a href="https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation%5C">recently changed</a> its moderation practice wherein it uses technology to prioritize content for human reviewers based on their severity index. Facebook <a href="https://transparency.fb.com/policies/improving/prioritizing-content-review/">has not specified</a> the technology it uses to prioritize high-severity content but its research record shows that it <a href="https://ai.facebook.com/blog/the-shift-to-generalized-ai-to-better-identify-violating-content">uses</a> a host of automated <a href="https://ai.facebook.com/tools#frameworks-and-tools">frameworks and tools</a> to detect violating content, including image recognition tools, object detection tools, natural language processing models, speech models and reasoning models. One such model is the <a href="https://ai.facebook.com/blog/community-standards-report/">Whole Post Integrity Embeddings</a> (“WPIE”) which can judge various elements in a given post (caption, comments, OCR, image etc.) to work out the context and the content of the post. Facebook also uses image matching models (SimSearchNet++) that are trained to match variations of an image with a high degree of precision and improved recall; multi-lingual masked language models on cross-lingual understanding such as <a href="https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/">XLM-R</a> that can accurately identify hate-speech and other policy-violating content across a wide range of languages. More recently, Facebook introduced its machine translation model called the <a href="https://analyticsindiamag.com/facebooks-new-machine-translation-model-works-without-help-of-english-data/">M2M-100</a> whose goal is to perform bidirectional translation between 7000 languages.</p>
<p style="text-align: justify;">Despite the advances in this field, there are inherent <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf">limitations</a> of such automated tools. <a href="https://www.theverge.com/2019/2/27/18242724/facebook-moderation-ai-artificial-intelligence-platforms">Experts</a> have repeatedly maintained that AI will get better at understanding context but it will not replace human moderators for the foreseeable future. One such instance where these limitations were <a href="https://www.politico.eu/article/facebook-content-moderation-automation/">exposed</a> was during the COVID-19 pandemic, when Facebook sent its human moderators home - the number of removals flagged as hate speech on its platform more than doubled to 22.5 million in the second quarter of 2020 but the number of successful content appeals was dropped to 12,600 from the 2.3 million figure for the first three months of 2020.</p>
<p style="text-align: justify;"><a href="https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184?mod=article_inline">The Facebook Files</a> show that Meta’s AI cannot consistently identify first-person shooting videos, racist rants and even the difference between cockfighting and car crashes. Its automated systems are only capable of removing posts that generate just 3% to 5% of the views of hate speech on the platform and 0.6% of all content that violates Meta’s policies against violence and incitement. As such, it is difficult to accept the company’s claim that nearly all of the hate speech it takes down was discovered by AI before it was reported by users.</p>
<p style="text-align: justify;">However, the benefits of such technology cannot be discounted, especially when one considers automated technology as a way of reducing <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">trauma</a> for human moderators. Using AI for prioritizing content for review can turn out to be effective for human moderators as it can increase their efficiency and reduce harmful effects of content moderation on them. Additionally, it can also limit the exposure of harmful content to internet users. Moreover, AI can also reduce the impact of harmful content on human moderators by allocating content to moderators on the basis of their exposure history. Theoretically, if the company’s claims are to be believed, using automated technology for prioritizing content for review can help to improve the mental health of Facebook’s human moderators.</p>
<hr />
<p>Click to download the file <a class="external-link" href="https://cis-india.org/internet-governance/policy-on-cross-checks">here</a>.</p>
Authors (in alphabetical order): Anamika Kundu, Digvijay Singh, Divyansha Sehgal and Torsha Sarkar | Tags: Freedom of Speech and Expression, Internet Freedom, Facebook, Internet Governance | Published: 2022-02-09 | Blog Entry
Panel discussion on 'How to Avoid Digital ID Systems That Put People at Risk: Lessons from Afghanistan' at Freedom Online Conference
http://editors.cis-india.org/internet-governance/news/panel-discussion-how-to-avoid-digital-id-systems-that-put-people-at-risk
<b>Amber Sinha participated as a panelist in a discussion on ‘How to Avoid Digital ID Systems That Put People at Risk: Lessons from Afghanistan’ at the Freedom Online Conference yesterday.</b>
<p style="text-align: justify; ">The Freedom Online Coalition (FOC) was established in 2011 in response to the growing recognition of the importance of the Internet for the enjoyment of human rights. Periodically, the FOC holds a multistakeholder Conference that aims to deepen the discussion on how online freedoms are helping to promote social, cultural and economic development. The ownership of the Conference program and outputs lies with the host country, most often the Chair of the Coalition during that year.</p>
<p style="text-align: justify; ">The aim of the panel was to use the lessons learned from the Afghanistan case to take a critical and realistic look at the implementation of digital identification programs around the world. A video of the panel can be <a class="external-link" href="https://www.freedomonlineconference.com/session/how-to-avoid-digital-id-systems-that-put-people-at-risk-lessons-from-afghanistan">accessed here</a>.</p>
Author: praskrishna | Tags: Freedom of Speech and Expression, Digital ID, Internet Governance | Published: 2021-12-03 | News Item
Right to Exclusion, Government Spaces, and Speech
http://editors.cis-india.org/internet-governance/blog/right-to-exclusion-government-spaces-and-speech
<b>The conclusion of the litigation surrounding Trump blocking his critics on Twitter brings to the forefront two less-discussed aspects of intermediary liability: a) whether social media platforms could be compelled to ‘carry’ speech under any established legal principles, thereby limiting their right to exclude users or speech, and b) whether users have a constitutional right to access the social media spaces of elected officials. This essay analyzes these issues under American law, and draws parallels for India in light of the ongoing litigation around the suspension of advocate Sanjay Hegde’s Twitter account.</b>
<p> </p>
<p>This article first appeared on the Indian Journal of Law and Technology (IJLT) blog, and can be accessed <a class="external-link" href="https://www.ijlt.in/post/right-to-exclusion-government-controlled-spaces-and-speech">here</a>. Cross-posted with permission. </p>
<p>---</p>
<h2><span class="s1">Introduction</span></h2>
<p class="p2"><span class="s1">On April 8, the Supreme Court of the United States (SCOTUS), vacated the judgment of the US Court of Appeals for Second Circuit’s in <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf%23page=1"><span class="s2"><em>Knight First Amendment Institute v Trump</em></span></a>. In that case, the Court of Appeals had precluded Donald Trump, then-POTUS, from blocking his critics from his Twitter account on the ground that such action amounted to the erosion of constitutional rights of his critics. The Court of Appeals had held that his use of @realDonaldTrump in his official capacity had transformed the nature of the account from private to public, and therefore, blocking users he disagreed with amounted to viewpoint discrimination, something that was incompatible with the First Amendment.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">The SCOTUS <a href="https://www.supremecourt.gov/opinions/20pdf/20-197_5ie6.pdf"><span class="s2">ordered</span></a> the case to be dismissed as moot, on account of Trump no longer being in office. Justice Clarence Thomas issued a ten-page concurrence that went into additional depth regarding the nature of social media platforms and user rights. It must be noted that the concurrence does not hold any direct precedential weightage, since Justice Thomas was not joined by any of his colleagues at the bench for the opinion. However, given that similar questions of public import, are currently being deliberated in the ongoing <em>Sanjay Hegde</em> <a href="https://www.barandbench.com/news/litigation/delhi-high-court-sanjay-hegde-challenge-suspension-twitter-account-hearing-july-8"><span class="s2">litigation</span></a> in the Delhi High Court, Justice Thomas’ concurrence might hold some persuasive weightage in India. While the facts of these litigations might be starkly different, both of them are nevertheless characterized by important questions of applying constitutional doctrines to private parties like Twitter and the supposedly ‘public’ nature of social media platforms.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">In this essay, we consider the legal questions raised in the opinion as possible learnings for India. In the first part, we analyze the key points raised by Justice Thomas, vis-a-vis the American legal position on intermediary liability and freedom of speech. In the second part, we apply these deliberations to the <em>Sanjay Hegde </em>litigation, as a case-study and a roadmap for future legal jurisprudence to be developed.<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">A flawed analogy</span></h2>
<p class="p2"><span class="s1">At the outset, let us briefly refresh the timeline of Trump’s tryst with Twitter, and the history of this litigation: the Court of Appeals decision was <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf%23page=1"><span class="s2">issued</span></a> in 2019, when Trump was still in office. Post-November 2020 Presidential Election, where he was voted out, his supporters <a href="https://indianexpress.com/article/explained/us-capitol-hill-siege-explained-7136632/"><span class="s2">broke</span></a> into Capitol Hill. Much of the blame for the attack was pinned on Trump’s use of social media channels (including Twitter) to instigate the violence and following this, Twitter <a href="https://blog.twitter.com/en_us/topics/company/2020/suspension"><span class="s2">suspended</span></a> his account permanently.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">It is this final fact that seized Justice Thomas’ reasoning. He noted that a private party like Twitter’s power to do away with Trump’s account altogether was at odds with the Court of Appeals’ earlier finding about the public nature of the account. He deployed a hotel analogy to justify this: government officials renting a hotel room for a public hearing on regulation could not kick out a dissenter, but if the same officials gather informally in the hotel lounge, then they would be within their rights to ask the hotel to kick out a heckler. The difference in the two situations would be that, <em>“the government controls the space in the first scenario, the hotel, in the latter.” </em>He noted that Twitter’s conduct was similar to the second situation, where it “<em>control(s) the avenues for speech</em>”. Accordingly, he dismissed the idea that the original respondents (the users whose accounts were blocked) had any First Amendment claims against Trump’s initial blocking action, since the ultimate control of the ‘avenue’ was with Twitter, and not Trump.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">In the facts of the case however, this analogy was not justified. The Court of Appeals had not concerned itself with the question of private ‘control’ of entire social media spaces, and given the timeline of the litigation, it was impossible for them to pre-empt such considerations within the judgment. In fact, the only takeaway from the original decision had been that an elected representative’s utilization of his social media account for official purposes transformed </span><span class="s3">only that particular space</span><span class="s1"><em> </em>into a public forum where constitutional rights would find applicability. In delving into questions of ‘control’ and ‘avenues of speech’, issues that had been previously unexplored, Justice Thomas conflates a rather specific point into a much bigger, general conundrum. Further deliberations in the concurrence are accordingly put forward upon this flawed premise.<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">Right to exclusion (and must carry claims)</span></h2>
<p class="p2"><span class="s1">From here, Justice Thomas identified the problem to be “<em>private, concentrated control over online content and platforms available to the public</em>”, and brought forth two alternate regulatory systems — common carrier and public accommodation — to argue for ‘equal access’ over social media space. He posited that successful application of either of the two analogies would effectively restrict a social media platform’s right to exclude its users, and “<em>an answer may arise for dissatisfied platform users who would appreciate not being blocked</em>”. Essentially, this would mean that platforms would be obligated to carry <em>all </em>forms of (presumably) legal speech, and users would be entitled to sue platforms in case they feel their content has been unfairly taken down, a phenomenon Daphne Keller <a href="http://cyberlaw.stanford.edu/blog/2018/09/why-dc-pundits-must-carry-claims-are-relevant-global-censorship"><span class="s2">describes</span></a> as ‘must carry claims’.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">Again, this is a strange place to find the argument to proceed, since the original facts of the case were not about ‘<em>dissatisfied platform users’,</em> but an elected representative’s account being used in dissemination of official information. Beyond the initial ‘private’ control deliberation, Justice Thomas did not seem interested in exploring this original legal position, and instead emphasized on analogizing social media platforms in order to enforce ‘equal access’, finally arriving at a position that would be legally untenable in the USA.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">The American law on intermediary liability, as embodied in Section 230 of the Communications Decency Act (CDA), has two key components: first, intermediaries are <a href="https://www.eff.org/issues/cda230"><span class="s2">protected</span></a> against the contents posted by its users, under a legal model <a href="https://www.article19.org/wp-content/uploads/2018/02/Intermediaries_ENGLISH.pdf"><span class="s2">termed</span></a> as ‘broad immunity’, and second, an intermediary does not stand to lose its immunity if it chooses to moderate and remove speech it finds objectionable, popularly <a href="https://intpolicydigest.org/section-230-how-it-actually-works-what-might-change-and-how-that-could-affect-you/"><span class="s2">known</span></a> as the Good Samaritan protection. It is the effect of these two components, combined, that allows platforms to take calls on what to remove and what to keep, translating into a ‘right to exclusion’. Legally compelling them to carry speech, under the garb of ‘access’ would therefore, strike at the heart of the protection granted by the CDA.<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">Learnings for India</span></h2>
<p class="p2"><span class="s1">In his petition to the Delhi High Court, Senior Supreme Court Advocate, Sanjay Hegde had contested that the suspension of his Twitter account, on the grounds of him sharing anti-authoritarian imagery, was arbitrary and that:<span class="Apple-converted-space"> </span></span></p>
<ol style="list-style-type: lower-alpha;" class="ol1"><li class="li2"><span class="s1">Twitter was carrying out a public function and would be therefore amenable to writ jurisdiction under Article 226 of the Indian Constitution; and</span></li><li class="li2"><span class="s1">The suspension of his account had amounted to a violation of his right to freedom of speech and expression under Article 19(1)(a) and his rights to assembly and association under Article 19(1)(b) and 19(1)(c); and</span></li><li class="li2"><span class="s1">The government has a positive obligation to ensure that any censorship on social media platforms is done in accordance with Article 19(2).<span class="Apple-converted-space"> </span></span></li></ol>
<p class="p3"><span class="s1"></span></p>
<p class="p5"><span class="s1">The first two prongs of the original petition are perhaps easily disputed: as previous <a href="https://indconlawphil.wordpress.com/2020/01/28/guest-post-social-media-public-forums-and-the-freedom-of-speech-ii/"><span class="s2">commentary</span></a> has pointed out, existing Indian constitutional jurisprudence on ‘public function’ does not implicate Twitter, and accordingly, it would be a difficult to make out a case that account suspensions, no matter how arbitrary, would amount to a violation of the user’s fundamental rights. It is the third contention that requires some additional insight in the context of our previous discussion.<span class="Apple-converted-space"> </span></span></p>
<h3><span class="s1">Does the Indian legal system support a right to exclusion?<span class="Apple-converted-space"> </span></span></h3>
<p class="p2"><span class="s1">Suing Twitter to reinstate a suspended account, on the ground that such suspension was arbitrary and illegal, is in its essence a request to limit Twitter’s right to exclude its users. The petition serves as an example of a must-carry claim in the Indian context and vindicates Justice Thomas’ (misplaced) defence of ‘<em>dissatisfied platform users</em>’. Legally, such claims perhaps have a better chance of succeeding here, since the expansive protection granted to intermediaries via Section 230 of the CDA, is noticeably absent in India. Instead, intermediaries are bound by conditional immunity, where availment of a ‘safe harbour’, i.e., exemption from liability, is contingent on fulfilment of statutory conditions, made under <a href="https://indiankanoon.org/doc/844026/"><span class="s2">section 79</span></a> of the Information Technology (IT) Act and the rules made thereunder. Interestingly, in his opinion, Justice Thomas had briefly visited a situation where the immunity under Section 230 was made conditional: to gain Good Samaritan protection, platforms might be induced to ensure specific conditions, including ‘nondiscrimination’. This is controversial (and as commentators have noted, <a href="https://www.lawfareblog.com/justice-thomas-gives-congress-advice-social-media-regulation"><span class="s2">wrong</span></a>), since it had the potential to whittle down the US' ‘broad immunity’ model of intermediary liability to a system that would resemble the Indian one.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">It is worth noting that in the newly issued Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, proviso to Rule 3(1)(d) allows for “<em>the removal or disabling of access to any information, data or communication link [...] under clause (b) on a voluntary basis, or on the basis of grievances received under sub-rule (2) [...]</em>” without dilution of statutory immunity. This does provide intermediaries a right to exclude, albeit limited, since its scope is restricted to content removed under the operation of specific sub-clauses within the rules, as opposed to Section 230, which is couched in more general terms. Of course, none of this precludes the government from further prescribing obligations similar to those prayed in the petition.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">On the other hand, it is a difficult proposition to support that Twitter’s right to exclusion should be circumscribed by the Constitution, as prayed. In the petition, this argument is built over the judgment in <a href="https://indiankanoon.org/doc/110813550/"><span class="s2"><em>Shreya Singhal v Union of India</em></span></a>, where it was held that takedowns under section 79 are to be done only on receipt of a court order or a government notification, and that the scope of the order would be restricted to Article 19(2). This, in his opinion, meant that “<em>any suo-motu takedown of material by intermediaries must conform to Article 19(2)</em>”.</span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">To understand why this argument does not work, it is important to consider the context in which the <em>Shreya Singhal </em>judgment was issued. Previously, intermediary liability was governed by the Information Technology (Intermediaries Guidelines) Rules, 2011 issued under section 79 of the IT Act. Rule 3(4) made provisions for sending takedown orders to the intermediary, and the prerogative to send such orders was on ‘<em>an affected person</em>’. On receipt of these orders, the intermediary was bound to remove content and neither the intermediary nor the user whose content was being censored, had the opportunity to dispute the takedown.</span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">As a result, the potential for misuse was wide-open. Rishabh Dara’s <a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf"><span class="s2">research</span></a> provided empirical evidence for this; intermediaries were found to act on flawed takedown orders, on the apprehension of being sanctioned under the law, essentially chilling free expression online. The <em>Shreya Singhal</em> judgment, in essence, reined in this misuse by stating that an intermediary is legally obliged to act <em>only when </em>a takedown order is sent by the government or the court. The intent of this was, in the court’s words: “<em>it would be very difficult for intermediaries [...] to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not.</em>”<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p5"><span class="s1">In light of this, if Hegde’s petition succeeds, it would mean that intermediaries would now be obligated to subsume the entirety of Article 19(2) jurisprudence in their decision-making, interpret and apply it perfectly, and be open to petitions from users when they fail to do so. This might be a startling undoing of the court’s original intent in <em>Shreya Singhal</em>. Such a reading also means limiting an intermediary’s prerogative to remove speech that may not necessarily fall within the scope of Article 19(2), but is still systematically problematic, including unsolicited commercial communications. Further, most platforms today are dealing with an unprecedented spread and consumption of harmful, misleading information. Limiting their right to exclude speech in this manner, we might be <a href="https://www.hoover.org/sites/default/files/research/docs/who-do-you-sue-state-and-platform-hybrid-power-over-online-speech_0.pdf"><span class="s2">exacerbating</span></a> this problem. <span class="Apple-converted-space"> </span></span></p>
<h3><span class="s1">Government-controlled spaces on social media platforms</span></h3>
<p class="p2"><span class="s1">On the other hand, the original finding of the Court of Appeals, regarding the public nature of an elected representative’s social media account and First Amendment rights of the people to access such an account, might yet still prove instructive for India. While the primary SCOTUS order erases the precedential weight of the original case, there have been similar judgments issued by other courts in the USA, including by the <a href="https://globalfreedomofexpression.columbia.edu/cases/davison-v-randall/"><span class="s2">Fourth Circuit</span></a> court and as a result of a <a href="https://knightcolumbia.org/content/texas-attorney-general-unblocks-twitter-critics-in-knight-institute-v-paxton"><span class="s2">lawsuit</span></a> against a Texas Attorney General.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">A similar situation can be envisaged in India as well. The Supreme Court has <a href="https://indiankanoon.org/doc/591481/"><span class="s2">repeatedly</span></a> <a href="https://indiankanoon.org/doc/27775458/"><span class="s2">held</span></a> that Article 19(1)(a) encompasses not just the right to disseminate information, but also the right to <em>receive </em>information, including <a href="https://indiankanoon.org/doc/438670/"><span class="s2">receiving</span></a> information on matters of public concern. Additionally, in <a href="https://indiankanoon.org/doc/539407/"><span class="s2"><em>Secretary, Ministry of Information and Broadcasting v Cricket Association of Bengal</em></span></a>, the Court had held that the right of dissemination included the right of communication through any media: print, electronic or audio-visual. Then, if we assume that government-controlled spaces on social media platforms, used in dissemination of official functions, are ‘public spaces’, then the government’s denial of public access to such spaces can be construed to be a violation of Article 19(1)(a).<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">Conclusion</span></h2>
<p class="p2"><span class="s1">As indicated earlier, despite the facts of the two litigations being different, the legal questions embodied within converge startlingly, inasmuch that are both examples of the growing discontent around the power wielded by social media platforms, and the flawed attempts at fixing it.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">While the above discussion might throw some light on the relationship between an individual, the state and social media platforms, many questions still continue to remain unanswered. For instance, once we establish that users have a fundamental right to access certain spaces within the social media platform, then does the platform have a right to remove that space altogether? If it does so, can a constitutional remedy be made against the platform? Initial <a href="https://indconlawphil.wordpress.com/2018/07/01/guest-post-social-media-public-forums-and-the-freedom-of-speech/"><span class="s2">commentary</span></a> on the Court of Appeals’ decision had contested that the takeaway from that judgment had been that constitutional norms had a primacy over the platform’s own norms of governance. In such light, would the platform be constitutionally obligated to <em>not </em>suspend a government account, even if the content on such an account continues to be harmful, in violation of its own moderation standards?<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">This is an incredibly tricky dimension of the law, made trickier still by the dynamic nature of the platforms, the intense political interests permeating the need for governance, and the impacts on users in the instance of a flawed solution. Continuous engagement, scholarship and emphasis on having a human rights-respecting framework underpinning the regulatory system, are the only ways forward.<span class="Apple-converted-space"> </span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space"><br /></span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space">---</span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space"><br /></span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space"></span></span></p>
<p>The author would like to thank Gurshabad Grover and Arindrajit Basu for reviewing this piece. </p>
Author: TorShark | Tags: Freedom of Speech and Expression, Intermediary Liability, Information Technology | Published: 2021-07-02 | Blog Entry
On the legality and constitutionality of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
http://editors.cis-india.org/internet-governance/blog/on-the-legality-and-constitutionality-of-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021
<b>This note examines the legality and constitutionality of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The analysis is consistent with previous work carried out by CIS on issues of intermediary liability and freedom of expression. </b>
<p><span id="docs-internal-guid-6127737f-7fff-b2eb-1b4a-ff9009a1050f"></span></p>
<p dir="ltr">On 25 February 2021, the Ministry of Electronics and Information Technology (Meity) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter, ‘the rules’). In this note, we examine whether the rules meet the tests of constitutionality under Indian jurisprudence, whether they are consistent with the parent Act, and discuss potential benefits and harms that may arise from the rules as they are currently framed. Further, we make some recommendations to amend the rules so that they stay in constitutional bounds, and are consistent with a human rights based approach to content regulation. Please note that we cover some of the issues that CIS has already highlighted in comments on previous versions of the rules.</p>
<p dir="ltr"> </p>
<p dir="ltr">The note can be downloaded <a class="external-link" href="https://cis-india.org/internet-governance/legality-constitutionality-il-rules-digital-media-2021">here</a>.</p>
Authors: Torsha Sarkar, Gurshabad Grover, Raghav Ahooja, Pallavi Bedi and Divyank Katira | Tags: Freedom of Speech and Expression, Internet Governance, Intermediary Liability, Internet Freedom, Information Technology | Published: 2021-06-21 | Blog Entry
New rules leave social media users vulnerable: Experts
http://editors.cis-india.org/internet-governance/news/deccan-herald-krupa-joseph-june-10-2021-new-rules-leave-social-media-users-vulnerable
<b>Experts analyse the implications of the government vs Twitter controversy for individual privacy.</b>
<p>The article by Krupa Joseph was <a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/new-rules-leave-social-media-users-vulnerable-experts-993460.html">published in the Deccan Herald</a> on 10 June 2021. Torsha Sarkar has been quoted.</p>
<hr />
<p style="text-align: justify; ">The government had notified the changes on February 25, and allowed social media companies three months to comply. Twitter and WhatsApp had then separately approached the Delhi High Court against the new regulations, fearing they could compromise user privacy.</p>
<p class="Default" style="text-align: justify; ">On Monday, the court gave Twitter three weeks to file a response to the government’s charge that it had not appointed a grievance officer as claimed.</p>
<p class="Default"><strong>Vague rules</strong></p>
<p class="Default" style="text-align: justify; ">Karthik Srinivasan, communications consultant, who uses his blog Beast of Traal to comment on social media, says the new rules are “vague and open-ended”.</p>
<p class="Default" style="text-align: justify; ">“Coupled with the fact that we still do not have a data protection law, the rules could be severely misused both by government and private entities,” he says.</p>
<p class="Default" style="text-align: justify; ">Users are particularly vulnerable in a country where anything and everything offends a lot of people, he says.</p>
<p class="Default"><strong>Law overreach</strong></p>
<p class="Default" style="text-align: justify; ">Torsha Sarkar, researcher with the Centre for Internet and Society, says the rules introduce additional obligations for social media platforms and classify intermediaries.</p>
<p style="text-align: justify; ">“Intermediaries with over five million users would have obligations to introduce traceability, instal automated filtering, provide detailed grievance redressal mechanisms, and publish compliance <span> reports detailing action taken on takedown orders,” she says.</span></p>
<p class="Default" style="text-align: justify; ">While some of these obligations are similar to those laid down internationally, some alterations are causing concern. The traceability requirement, for example, is highly contentious as it would erode user privacy.</p>
<p class="Default" style="text-align: justify; ">“It is also concerning that the user threshold, for a country like India, with such vast Internet usage, is set at a very low level. This means that even smaller social media platforms might becompelled to carry out economically crippling obligations,” she explains.</p>
<p class="Default" style="text-align: justify; ">The legislative overreach is seen in how the initial draft , which only covered entities like Twitter and Facebook, now seeks to cover digital news media and content curators like Netfl ixand Hulu, she says.</p>
<p class="Default">Stretching the scope of the legislation this way is undemocratic since it was not subject to any public consultation, she notes.</p>
<p class="Default"><b>Case in High Court</b></p>
<p class="Default" style="text-align: justify; ">Mishi Choudhary, technology lawyer and founder of SFLC.in, a legal services organisation specialising in law, technology and policy, says the IT rules notified by the government are unconstitutional. “In the garb of addressing misinformation and regulating technology companies, the government has been exceeding the powers granted through subordinate legislation and using it for political purposes,” she says. It is on these grounds that the Free and Open Source Software community has challenged the new rules in the Kerala High Court. “Technology companies need regulation but not at the expense of user rights,” she says.</p>
<p class="Default"><b>Congress </b><span>‘</span><b>toolkit</b><span>’ </span><b>row</b></p>
<p style="text-align: justify; ">A few weeks after social media platforms were asked to take down posts critical of thegovernment’s management of India’s Covid-19 crisis, Twitter once again found itself at thereceiving end. Last week, Twitter labelled a tweet by BJP leader Sambit Patra, accusing theCongress of working with a ‘toolkit, as ‘manipulated media’. Twitter says it gives the label totweets that include media (videos, audio, and images) that are “deceptively altered orfabricated”. The Delhi police then sent a notice to Twitter in connection and asked the micro-blogging site to explain the reasons for assigning the tag. The police also conducted raids onTwitter offices in India. Things escalated when Twitter said the government was intimidating it. The government hit back saying law-making was its privileges, and Twitter, being a social media platform, should not dictate legal policy framework.</p>
<p class="Default"><b>New rules</b></p>
<p class="Default" style="text-align: justify; ">Under the new IT rules, social media companies like Facebook, WhatsApp and Twitter will be responsible for identifying the originator of a flagged message within 36 hours. They also have to appoint a chief compliance officer, a nodal contact person and a resident grievance officer. Failing to comply with these rules would cause the platforms to lose their status as intermediaries, and make them liable for whatever is posted on their platforms.</p>
<p class="Default"> </p>
<p style="text-align: justify; "><span><br /></span></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/deccan-herald-krupa-joseph-june-10-2021-new-rules-leave-social-media-users-vulnerable'>http://editors.cis-india.org/internet-governance/news/deccan-herald-krupa-joseph-june-10-2021-new-rules-leave-social-media-users-vulnerable</a>
</p>
No publisher | Krupa Joseph | Freedom of Speech and Expression, Social Media, Internet Governance | 2021-06-14T11:27:53Z | News Item
Regulating Sexist Online Harassment as a Form of Censorship
http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-regulating-sexist-online-harassment
<b>This paper is part of a series under IT for Change’s project, Recognize, Resist, Remedy: Combating Sexist Hate Speech Online. The series, titled Rethinking Legal-Institutional Approaches to Sexist Hate Speech in India, aims to create a space for civil society actors to proactively engage in the remaking of online governance, bringing together inputs from legal scholars, practitioners, and activists. The papers reflect upon the issue of online sexism and misogyny, proposing recommendations for appropriate legal-institutional responses. The series is funded by EdelGive Foundation, India and International Development Research Centre, Canada.</b>
<h3>Introduction</h3>
<p style="text-align: justify; ">The proliferation of internet use was expected to facilitate greater online participation of women and <a class="external-link" href="https://ssrn.com/abstract=2039116">other marginalised groups</a>. However, over the past few years, as more and more people have come online, it is evident that social power in online spaces mirrors offline hierarchies. While identity and security thefts may be universal experiences, women and the LGBTQ+ community continue to face barriers to safety that men often do not, aside from structural barriers to access. Sexist harassment pervades the online experience of women, be it on dating sites, <a class="external-link" href="https://academic.oup.com/bjc/article/57/6/1462/2623986">online forums, or social media</a>.</p>
<p style="text-align: justify; ">In her book, <i><a class="external-link" href="https://yalebooks.yale.edu/book/9780300215120/twitter-and-tear-gas">Twitter and Tear Gas: The Power and Fragility of Networked Protest</a></i>, Zeynep Tufekci argues that the nature and impact of censorship on social media are very different. Earlier, censorship was enacted by restricting speech. But now, it also works in the form of organised harassment campaigns, which use the qualities of viral outrage to impose a disproportionate cost on the very act of speaking out. Therefore, censorship plays out not merely in the form of the removal of speech but through disinformation and hate speech campaigns.</p>
<p style="text-align: justify; ">In most cases, this censorship of content does not necessarily meet the threshold of hate speech, and free speech advocates have traditionally argued for counter speech as the most effective response to such speech acts. However, the structural and organised nature of harassment and extreme speech often renders counter speech ineffective. This paper will explore the nature of online sexist hate and extreme speech as a mode of censorship. Online sexualised harassment takes various forms including doxxing, cyberbullying, stalking, identity theft, incitement to violence, etc. While there are some regulatory mechanisms – either in law, or in the form of community guidelines that address them, this paper argues for the need to evolve a composite framework that looks at the impact of such censorious acts on online speech and regulatory strategies to address them.</p>
<hr />
<p style="text-align: justify; "><a href="http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf/at_download/file" class="external-link">Click on to read the full text</a> [PDF; 495 Kb]</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-regulating-sexist-online-harassment'>http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-regulating-sexist-online-harassment</a>
</p>
No publisher | amber | Freedom of Speech and Expression, Internet Governance, Censorship | 2021-05-31T09:56:31Z | Blog Entry
Regulating Sexist Online Harassment: A Model of Online Harassment as a Form of Censorship
http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf'>http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf</a>
</p>
No publisher | amber | Freedom of Speech and Expression, Internet Governance, Censorship | 2021-05-31T09:39:14Z | File
Response to Mozilla DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment Period
http://editors.cis-india.org/internet-governance/blog/response-to-mozilla-dns-over-https-doh-and-trusted-recursive-resolver-trr-comment-period
<b>CIS has submitted a response to Mozilla's DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment Period</b>
<p>This submission presents a response by the Centre for Internet & Society (CIS) to Mozilla’s DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment <a class="external-link" href="https://blog.mozilla.org/netpolicy/2020/11/18/doh-comment-period-2020/">Period</a> (hereinafter, the “Consultation”) released on November 18, 2020. CIS appreciates Mozilla’s consultations, and is grateful for the opportunity to put forth its views and comments.</p>
<p>Read the response <a class="external-link" href="https://cis-india.org/internet-governance/cis-mozilla-doh-trr/">here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/response-to-mozilla-dns-over-https-doh-and-trusted-recursive-resolver-trr-comment-period'>http://editors.cis-india.org/internet-governance/blog/response-to-mozilla-dns-over-https-doh-and-trusted-recursive-resolver-trr-comment-period</a>
</p>
No publisher | Gurshabad Grover, Divyank Katira | Freedom of Speech and Expression | 2021-01-19T07:35:24Z | Blog Entry
Mapping Web Censorship & Net Neutrality Violations
http://editors.cis-india.org/internet-governance/blog/mapping-web-censorship-net-neutrality-violations
<p>For over a year, researchers at the Centre
for Internet and Society have been studying website blocking by internet
service providers (ISPs) in India. We have learned that major ISPs
don’t always block the same websites, and also use different blocking
techniques. <strong>To take this study further, and map net neutrality violations by ISPs, we need your help.</strong>
We have developed CensorWatch, a research tool to collect empirical
evidence about what websites are blocked by Indian ISPs, and which
blocking methods are being used to do so. Read more about this project (<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/qxKoDnnG4cR8mPZaiOr8immlHKFilRoRSYOvX_26BcZRtiN_hoo5VrFfQHbDqaES1OV6jUM0RbWCZs1ODSHr_Pf9yeJFesRxxQvyUrZm4Tlcvdjmh232QQV3fOkmrj9wiVh5LQiW1LQAprvYWmHp_s-TW5ZdNXZY07QvlFR01dKzIxnv7TorEfkyazo" target="_blank">link</a>), <strong>download CensorWatch</strong> (<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/F9Wsq5zbx6VJKZxrsjYFy3Q5-jSkk0-3nr5hBfuyQiDUEKyEm_fLY6kh4W9MB7GOLoPZbowqsXDT17DEmFgMoFY4IIOEjxq0rNCtFeEc7b-0GSnRPeLDi9VmYX5WE1vGlwMvM7BPtyfmXD6lNdIWzAdjq_MpSqWRACk3JJNPhzqieJXoEoOnY8WH1rxR4HnJwDjyJHSkHgMTmWcm0POB_kDOtt2fk_GnXkkjv5LK7MxRZe8f" target="_blank">link</a>), and help determine if ISPs are complying with India’s net neutrality regulations.</p>
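<p><em>As a rough illustration of the kind of network test involved, the sketch below compares the DNS answer from the ISP’s default resolver with the answer from a trusted public resolver. This is a minimal sketch for explanation only, not CensorWatch’s actual implementation: it assumes the third-party dnspython package, uses a placeholder domain, and ignores confounds (for example, CDNs legitimately returning different addresses) that a real measurement tool must account for.</em></p>
<pre>
# Minimal sketch of DNS-tampering detection; NOT the CensorWatch implementation.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.exception
import dns.resolver

def resolve_a_records(domain, nameserver=None):
    """Return the set of A records for domain, optionally via a specific resolver."""
    resolver = dns.resolver.Resolver(configure=(nameserver is None))
    if nameserver is not None:
        resolver.nameservers = [nameserver]
    try:
        return {rr.address for rr in resolver.resolve(domain, "A")}
    except dns.exception.DNSException as err:
        return {"error: " + type(err).__name__}

domain = "example.com"  # placeholder test domain
isp_answer = resolve_a_records(domain)                            # ISP's default resolver
trusted_answer = resolve_a_records(domain, nameserver="1.1.1.1")  # public resolver

# A mismatch is only a hint of DNS-level interference: CDNs also vary answers
# for legitimate reasons, so real tools apply further checks before concluding.
if isp_answer != trusted_answer:
    print("answers differ:", isp_answer, "vs", trusted_answer)
else:
    print("answers match; no DNS-level interference observed")
</pre>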
<div>
<p><a class="external-link" href="https://play.google.com/store/apps/details?id=com.censorwatch.netprobesapp">Download CensorWatch on Google Play</a></p>
<div>
<div>
<div>Learn more about website blocking in India through our recent work on the issue —</div>
<ol><li>Using information from court orders,
user reports, and government orders, and running network tests from six
ISPs, Kushagra Singh, Gurshabad Grover and Varun Bansal presented the <strong>largest study of web blocking</strong>
in India. Through their work, they demonstrated that major ISPs in
India use different techniques to block websites, and that they don’t
block the same websites (<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/mgmW9wuVo0QjRGqm9DnDQiVT4lYy3lgY5maOgjAk05baH_NWtRSfznWooMtcTgQ2a059mWk91p_lMZqJAqaRHXZOLSEQQOAMeM5RowiyfY3giKQm3aDJoYnWw7VhAHeBjdkObBFF0PYWjoC1NJi21fSZyifOWm_CvlC3gq7nxbHtejEy" target="_blank">link</a>).</li><li>Gurshabad Grover and Kushagra Singh
collaborated with Simone Basso of the Open Observatory of Network
Interference (OONI) to study <strong>HTTPS traffic blocking in India</strong> by running experiments on the networks of three popular Indian ISPs: ACT Fibernet, Bharti Airtel, and Reliance Jio (<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/oP_eOysGeBOsgRW-5k8V-ReWU_DMUhykR2wN9ZAqndgHev3bxY1c8kSSviR3jjOMqzOJhP05AfK2CtHAH8-Zv21mU7uAW2ainkl5tmS-uZx3LG15MjZXbRQyE71871AouDuXY0hLTVEVG3ovaEvb8BSFOhJz7NpnTZdsY5vIOeBqSsaB31HJdMT8bNELQJ8VjhUoNw" target="_blank">link</a>).</li><li>For <em>The Leaflet</em>, Torsha Sarkar and Gurshabad Grover wrote about the <strong>legal framework of blocking in India</strong>
— Section 69A of the IT Act and its rules. They considered commentator
opinions questioning the constitutionality of the regime, whether
originators of content are entitled to a hearing, and whether Rule 16,
which mandates confidentiality of content takedown requests received by
intermediaries from the Government, continues to be operative (<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/WggQUDysA9mWPEzvGTRc43aPpKNmNjDcdEzj1ALhrbXgQWqnZRY9L9J45XXbJ3yCnX9-XIuYyRTQ588cBiYNQIs2KsfB0Dydz2QY4Z5VdMTdJ-RMr2M5uDqJ8Amr5gT3APy01bg8gNTyoEvdIcKryjrWnUFlTdxFAtohQ_AwVRjTbzC5FcAFhO9DdHOQV0Xp9X65At3tR17epGvo" target="_blank">link</a>).</li><li>In the <em>Hindustan Times</em>, Gurshabad Grover critically analysed <strong>the confidentiality requirement embedded within Section 69A of the IT Act</strong> and argued how this leads to internet users in India experiencing arbitrary censorship (<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/j75HVdd7j4huKQd0kP9lusNpz1ZL0CxXMEWeySOhsQZbcKECrEKfaq52LlB-QjnT1TIB1mjqhB0TyweA7rLCq41Rd_6uyBUo8-Uc4iHiHSXYxC06rhW7o7ZFtCt7bKdNldDWkoMhSD7x0daAhzcSdLSPbNBRSy1HkGEGZ7Z_11tovlleodez9gm60zyvkGNM1YMQSLZ4NZ0k8RD2zncGPoWXjsytI4YwnQyy_QZNSKOSdY2_X6GoVSugRZhmyWwWCpHpk-yDM7XJ0OF4GZlTUSgfhcfftJEGBlQlkQ" target="_blank">link</a>).</li><li>Torsha Sarkar, along with Sarvjeet Singh of the Centre for Communication Governance (CCG), spoke to <em>Medianama</em> delineating the <strong>procedural aspects of section 69A of the IT Act </strong>(<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/QAWrguo8Vx6X1PsmbTvCTYQ6U6nycGdSRg9gfDYFTRxUAa82nB6gYpuPyEE3VztSJzG2888ua224upBlg-k9Tu29TZdhl3ET71WwsKUfKxdyUPkLiY1A4jSD1p59sH0KXlQBqU10H38gDFHZ5WVsMCwZXLTISv9SvXIRx7Vu59U4HBV-hhB3BSpe_SApQnHQgPN0BIl0g852jSINvTI6Bh5HGNTWZ3nQWRn5H1vShoG4Q3VcZBWfewbc" target="_blank">link</a>).</li><li>Arindrajit Basu spoke to the <em>Times of India</em> about the <strong>geopolitical and regulatory implications</strong> of the Indian government’s move to ban fifty-nine Chinese applications from India (<a href="https://4jok2.r.ag.d.sendibm3.com/mk/cl/f/lICwdbQnezwqQKZHQ_Xso6Qp7735jleiJJJI88DgKZx348ewlSRWU1uFyEbtMwZOoJRS5MjHbX9KgklFrlc-jKTXKL2S4K5aCXEU2isCuFhwORAz_DnnBai7nr2pyiK0HmM0Eb3AD_JyTUwWtg9O6c0jV0Nf8cbTuT3FD7WypVO_NWUJ_GZVo7er10LMUXE_1EP_d2nh2uziuXXmM1JV-9NN6klSATsLa_tprf0bDNbNa_U4DHMm6oQvXFfVHj74jRhq3nKDkCzQeQZ_SRMxNNqIUIN5aMLGbQfBAziZ_E3hIYp-ptOQ7Y2cqF_4eiYdY20tBm5ltySmFBQQi5_nFQ" target="_blank">link</a>).</li></ol>
</div>
</div>
</div>
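<p><em>The OONI collaboration in item 2 above examined interference with HTTPS traffic. As a simplified, hypothetical illustration of how SNI-based filtering can be probed (this is not the methodology of that study), one can attempt TLS handshakes to the same server while varying the Server Name Indication value; a handshake that is reset for one name but not another suggests SNI-based filtering on the path.</em></p>
<pre>
# Rough sketch of an SNI-based blocking probe, using only the Python standard
# library. The server and test names below are placeholders.
import socket
import ssl

def handshake_ok(server, sni, port=443, timeout=5):
    """Attempt a TLS handshake to server while presenting sni."""
    context = ssl.create_default_context()
    context.check_hostname = False       # we only care whether the handshake
    context.verify_mode = ssl.CERT_NONE  # completes, not whether the cert matches
    try:
        with socket.create_connection((server, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=sni):
                return True
    except (OSError, ssl.SSLError):
        return False

server = "example.com"  # placeholder: a reachable TLS server
print("control SNI:", handshake_ok(server, "example.com"))
print("test SNI:   ", handshake_ok(server, "possibly-blocked-site.example"))
</pre>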
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/mapping-web-censorship-net-neutrality-violations'>http://editors.cis-india.org/internet-governance/blog/mapping-web-censorship-net-neutrality-violations</a>
</p>
No publisher | pranav | Freedom of Speech and Expression, Net Neutrality, Internet Governance, Censorship | 2020-10-05T07:59:47Z | Blog Entry
The State of Secure Messaging
http://editors.cis-india.org/internet-governance/blog/the-state-of-secure-messaging
<b>A look at the protections provided by and threats posed to secure communication online.</b>
<p><em>This blogpost was edited by Gurshabad Grover and Amber Sinha.</em></p>
<p dir="ltr">The current benchmark for secure communication online is
end-to-end encrypted messaging. It refers to a method of encryption
wherein the contents of a message are only readable by the devices of
the individuals, or endpoints, participating in the communication. All
other Internet intermediaries such as internet service providers,
internet exchange points, undersea cable operators, data centre
operators, and even the messaging service providers themselves cannot
read them. This is achieved through cryptographic <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">mechanisms</a>
that allow independent devices to establish a shared secret key over an
insecure communication channel, which they then use to encrypt and
decrypt messages. Common examples of end-to-end encrypted messaging are
applications like Signal and WhatsApp.</p>
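<p><em>To make the key-exchange step above concrete, here is a minimal sketch using the X25519 Diffie-Hellman primitive from the third-party Python cryptography library. It shows only the bare mechanism by which two parties arrive at the same secret; deployed protocols such as Signal’s layer authentication, key ratcheting, and much else on top of this step.</em></p>
<pre>
# Minimal sketch of establishing a shared secret over an insecure channel.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair and transmits only the public half.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key...
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared  # ...and both arrive at the same secret.

# The raw shared secret is then fed through a KDF to derive a message key.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"demo handshake").derive(alice_shared)
print("derived 256-bit key:", key.hex())
</pre>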
<p dir="ltr">This post attempts to give at-risk individuals, concerned
citizens, and civil society at large a more nuanced understanding of the
protections provided and threats posed to the security and privacy of
their communications online.</p>
<h4 dir="ltr">Threat Model</h4>
<p dir="ltr">The first step to assessing security and privacy is to
identify and understand actors and risks. End-to-end encrypted messaging
applications consider the following threat model:</p>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Device compromise: Can happen physically through loss or
theft, or remotely. Access to an individual’s device could be gained
through technical flaws or coercion (<a href="https://www.eff.org/wp/digital-privacy-us-border-2017">legal</a>, or <a href="https://xkcd.com/538/">otherwise</a>). It can be temporary or be made persistent by installing <a href="https://citizenlab.ca/2019/10/nso-q-cyber-technologies-100-new-abuse-cases/">malware</a> on the device.</p>
</li><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Network monitoring and interference: Implies access to data
in transit over a network. All Internet intermediaries have such
access. They may either actively interfere with the communication or
passively <a href="https://www.theatlantic.com/international/archive/2013/07/the-creepy-long-standing-practice-of-undersea-cable-tapping/277855/">observe</a> traffic.</p>
</li><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Server compromise: Implies access to the web server hosting
the application. This could be achieved through technical flaws,
insider access such as an employee, or through coercion (<a href="https://en.wikipedia.org/wiki/Investigatory_Powers_Act_2016">legal</a>, or otherwise). </p>
</li></ul>
<p dir="ltr">End-to-end encrypted messaging aims to offer complete
message confidentiality and integrity in the face of server and network
compromise, and some protections against device compromise. These are
detailed below.</p>
<h4 dir="ltr">Protections Provided</h4>
<p dir="ltr">Secure messaging services guarantee certain properties. For
mature services that have received adequate study from researchers, we
can assume them to be sound, barring implementation flaws which are
described later.</p>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Confidentiality: The contents of a message are kept private and the ciphers used are <a href="https://pthree.org/2016/06/19/the-physics-of-brute-force/">practically</a> unbreakable by adversaries.</p>
</li></ul>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Integrity: The contents of a message cannot be modified in transit.</p>
</li></ul>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Deniability: Aims to mimic unrecorded real-world
conversations where an individual can deny having said something.
Someone in possession of the chat transcript cannot <em>cryptographically</em>
prove that an individual authored a particular message. While some
applications feature such off-the-record messaging capabilities, the
legal applicability of such mechanisms is <a href="https://debian-administration.org/users/dkg/weblog/104">debatable</a>.</p>
</li></ul>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Forward and Future Secrecy: These properties aim to limit
the effects of a temporary compromise of credentials on a device.
Forward secrecy ensures messages collected over the network, which were
sent before the compromise, cannot be decrypted. Future secrecy ensures
messages sent post-compromise are protected. These mechanisms are easily
circumvented in practice as past messages are usually stored on the
device being compromised, and future messages can be obtained by gaining
persistent access during compromise. These properties are meant to
protect individuals <a href="https://hal.inria.fr/hal-01966560/document">aware</a> of these limitations in exceptional situations such as a journalist crossing a border.</p>
</li></ul>
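<p><em>The forward and future secrecy properties above are commonly achieved with key ratchets. The toy hash ratchet below is a simplification for illustration (real designs, such as the double ratchet, are far more involved): each message key is derived from a chain key through a one-way function, so a device that deletes old chain keys leaves past message keys unrecoverable even if the current key is stolen.</em></p>
<pre>
# Toy hash ratchet illustrating the idea behind forward secrecy; a deliberate
# simplification, not a real messaging protocol. Standard library only.
import hashlib
import hmac

def ratchet_step(chain_key):
    """Derive a message key, then advance the chain with a one-way function."""
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain_key

chain_key = hashlib.sha256(b"initial secret from the handshake").digest()
for i in range(3):
    # The old chain key is overwritten (discarded) at every step, so earlier
    # message keys cannot be recomputed from whatever is on the device later.
    message_key, chain_key = ratchet_step(chain_key)
    print("message", i, "key:", message_key.hex()[:16], "...")
</pre>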
<h4 dir="ltr">Shortcomings</h4>
<p dir="ltr">While secure messaging services offer useful protections
they also have some shortcomings. It is useful to understand these and
their mitigations to minimise risk.</p>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Metadata: Information about a communication such as <strong>who</strong> the participants are, <strong>when</strong> the messages are sent, <strong>where</strong> the participants are located, and <strong>what</strong>
the size of a message is can offer important contextual information
about a conversation. While some popular messaging services <a href="https://signal.org/blog/sealed-sender/">attempt</a>
to minimize metadata generation, metadata leakage, in general, is still
considered an open problem because such information can be gleaned by
network monitoring as well as from server compromise. Application
policies around whether such data is stored and for how long it is
retained can improve privacy. There are also <a href="https://ricochet.im/">experimental</a> approaches that use techniques like onion routing to hide metadata.</p>
</li></ul>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Authentication: This is the process of asserting whether an
individual sending or receiving a message is who they are thought to
be. Current messaging services trust application servers and cell
service providers for authentication, which means that they have the
ability to replace and impersonate individuals in conversations.
Messaging services offer advanced features to mitigate this risk, such
as notifications when a participant’s identity changes, and manual
verification of participants’ security keys through other communication
channels (in-person, mail, etc.).</p>
</li></ul>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Availability: An individual’s access to a messaging service
can be impeded. Intermediaries may delay or drop messages resulting in
what is called a denial of service attack. While messaging services are
quite resilient to such attacks, governments may censor or completely
shut down Internet access.</p>
</li></ul>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Application-level gaps: Capabilities offered by services in
addition to messaging, such as contact discovery, online status, and
location sharing are often <a href="https://www.forbes.com/sites/thomasbrewster/2017/01/22/whatsapp-facebook-backdoor-government-data-request/">not covered</a>
by end-to-end encryption and may be stored by the application server.
Application policies around how such information is gathered and
retained affect privacy.</p>
</li></ul>
<ul><li style="list-style-type: disc;" dir="ltr">
<p dir="ltr">Implementation flaws and backdoors: Software or hardware
flaws (accidental or intentional) on an individual’s device could be
exploited to circumvent the protections provided by end-to-end
encryption. For mature applications and platforms, accidental flaws are
difficult and <a href="https://arstechnica.com/information-technology/2019/09/for-the-first-time-ever-android-0days-cost-more-than-ios-exploits/">expensive</a> to exploit, and as such are only accessible to Government or other
powerful actors who typically use them to surveil individuals of
interest (and not for mass surveillance). Intentional flaws or backdoors
introduced by manufacturers may also be present. The only defence
against these is security researchers who rely on manual inspection to
examine software and network interactions to detect them.</p>
</li></ul>
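<p><em>Returning to the authentication shortcoming above: manual verification usually means comparing a short fingerprint of the conversation’s public keys over a separate channel. The sketch below is a hypothetical illustration of that idea only; actual schemes, such as Signal’s safety numbers, use their own dedicated constructions.</em></p>
<pre>
# Hypothetical sketch of out-of-band key verification: both users compute a
# short fingerprint over the pair of public keys and compare it in person or
# over a call. Standard library only; the keys below are placeholders.
import hashlib

def fingerprint(key_a, key_b):
    """Order-independent fingerprint of two public keys, grouped for reading."""
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()[:20]
    return " ".join(digest[i:i + 4] for i in range(0, 20, 4))

# Placeholder serialized public keys, as distributed by the messaging server.
alice_public = bytes.fromhex("aa" * 32)
bob_public = bytes.fromhex("bb" * 32)

# If the server had swapped in its own key to impersonate a participant, the
# fingerprints the two users read out to each other would no longer match.
print("compare out of band:", fingerprint(alice_public, bob_public))
</pre>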
<h4 dir="ltr">Messaging Protocols and Standards</h4>
<p dir="ltr">In the face of demands for exceptional access to encrypted
communication from governments, and risks of mass surveillance from both
governments and corporations, end-to-end encryption is important to
enable secure and private communication online. The Signal protocol, which is open and adopted by popular applications like WhatsApp and Signal, is considered a success story, as it brought end-to-end encryption to over a billion users and has become a de facto standard.</p>
<p dir="ltr">However, it is unilaterally developed and controlled by a single organisation. Messaging Layer Security (or <a href="https://datatracker.ietf.org/wg/mls/about/">MLS</a>)
is a working group within the Internet Engineering Task Force (IETF)
that is attempting to standardise end-to-end encryption through
participation of individuals from corporations, academia, and civil
society. The draft protocol offers the standard security properties
mentioned above, except for deniability which is still being considered.
It incorporates novel research that allows it to scale efficiently for
large groups up to thousands of participants, which is an improvement
over the Signal protocol. MLS aims to increase adoption further by
creating open standards and implementations, similar to the Transport
Layer Security (TLS) protocol used to encrypt much of the web today.
There is also a need to look beyond end-to-end encryption to address its
shortcomings, particularly around authentication and metadata leakage.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/the-state-of-secure-messaging'>http://editors.cis-india.org/internet-governance/blog/the-state-of-secure-messaging</a>
</p>
No publisher | divyank | Freedom of Speech and Expression, Encryption, IETF | 2020-07-17T08:12:15Z | Blog Entry
Donald Trump is attacking the social media giants; here’s what India should do differently
http://editors.cis-india.org/internet-governance/blog/donald-trump-is-attacking-the-social-media-giants-here2019s-what-india-should-do-differently
<b>For a robust and rights-respecting public sphere, India needs to ensure that large social media platforms receive adequate protections, and are made more responsible to their users.</b>
<p>This piece was first published at <a class="external-link" href="https://scroll.in/article/965151/donald-trump-is-attacking-the-social-media-giants-heres-what-india-should-do-differently">Scroll</a>. The authors would like to thank Torsha Sarkar for reviewing and editing the piece, and to Divij Joshi for his feedback.</p>
<hr />
<div id="article-contents" class="article-body">
<p>In retaliation for Twitter <a class="link-external" href="https://www.nytimes.com/2020/05/26/technology/twitter-trump-mail-in-ballots.html" rel="nofollow noopener" target="_blank">labelling</a> one of US President Donald Trump’s tweets as misleading, the White House issued an <a class="link-external" href="https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/" rel="nofollow noopener" target="_blank">executive order</a> on May 28 that seeks to dilute protections that social media companies in the US have with respect to third-party content on their platforms.</p>
<p>The
order argues that social media companies that engage in censorship stop
functioning as ‘passive bulletin boards’: they must consequently be
treated as ‘content creators’, and be held liable for content on their
platforms as such. The shockwaves of the decision soon reached India,
with news coverage of the event <a class="link-external" href="https://www.business-standard.com/article/companies/trump-twitter-spat-debate-rages-on-role-of-social-media-companies-120053100055_1.html" rel="nofollow noopener" target="_blank">starting</a> to <a class="link-external" href="https://economictimes.indiatimes.com/tech/internet/feud-between-donald-trump-and-jack-dorsey-can-have-long-lasting-effects-on-how-we-consume-media-in-india/articleshow/76111556.cms" rel="nofollow noopener" target="_blank">debate</a> the <a class="link-external" href="https://economictimes.indiatimes.com/tech/internet/trumps-move-against-social-media-cos-unlikely-to-change-indias-stand/articleshow/76094586.cms?from=mdr" rel="nofollow noopener" target="_blank">consequences</a> of Trump’s order on how India regulates internet services and social media companies.</p>
<p>The debate on the responsibilities of online platforms is not new to India, and recently took centre stage in December 2018 when the Ministry of Electronics and Information Technology (MeitY) published a draft set of guidelines that most online services – ‘intermediaries’ – must follow. The draft rules, which haven’t been notified yet, propose to significantly expand the obligations on intermediaries.</p>
<p>Trump’s executive order, however, comes in the context of content moderation practices by social media platforms, i.e. when platforms censor speech of their own volition, and not because of legal requirements. The position of content moderation under Indian law remains relatively under-discussed.</p>
<p>In contrast to
commentators who have implicitly assumed that Indian law permits content
moderation by social media companies, we believe Indian law fails to
adequately account for content moderation and curation practices
performed by social media companies. There may be adverse consequences
for the exercise of freedom of expression in India if this lacuna is not
filled soon.</p>
<h3 class="cms-block cms-block-heading">India vs US<br /></h3>
<p>A
useful starting point for the analysis is to compare how the US and
India regulate liability for online services. In the US, Section 230 of
the Communications Decency Act provides online services with broad
immunity from liability for third party content that they host or
transmit.</p>
<p>There are two critical components to what is generally referred to as Section 230.</p>
<p>First, providers of an ‘interactive computer service’, like your internet service provider or a company like Facebook, will not be treated as publishers or speakers of third-party content. This system has allowed online speech and the internet economy to <a class="link-external" href="https://law.emory.edu/elj/content/volume-63/issue-3/articles/how-law-made-silicon-valley.html" rel="nofollow noopener" target="_blank">flourish</a>, since it allows companies to focus on their service without constant paranoia about what users are transmitting through it.</p>
<p>The second part of Section 230 states that services are allowed to moderate and remove, in ‘good faith’, third-party content that they may deem offensive or obscene. This allows online services to institute their own community guidelines or content policies.</p>
<p>In India,
section 79 of the Information Technology Act is the analogous provision:
it grants intermediaries conditional ‘safe harbour’. This means
intermediaries, again like Facebook or your internet provider, are
exempt from liability for third-party content – like messages or videos
posted by ordinary people – provided their functioning meets certain
requirements, and they comply with the allied rules, known as
Intermediary Guidelines.</p>
<p>The notable and stark difference between
Indian law and Section 230 is that India’s IT Act is largely silent on
content moderation practices. As Rahul Matthan <a class="link-external" href="https://www.livemint.com/opinion/columns/shield-online-platforms-for-content-moderation-to-work-11591116270685.html" rel="nofollow noopener" target="_blank">points out</a>,
there is no explicit allowance in Indian law for platforms to take down
content based on their own policies, even if such actions are done in
good faith.</p>
<h3 class="cms-block cms-block-heading">Safe harbour</h3>
<p>One
may argue that the absence of an explicit permission does not
necessarily mean that any platform engaging in content moderation
practices will lose its safe harbour. However, the language of Section
79 and the allied rules may even create room for divesting social media
platforms of their safe harbour.</p>
<p>The first such indication lies in the conditions to qualify for safe harbour: intermediaries must not modify said content, must not select the recipients of particular content, and must take information down when it is brought to their notice by governments or courts.</p>
<p>Most of the conditions are an almost verbatim copy of the definition of a ‘mere conduit’ in the EU Directive on E-Commerce, 2000. This definition was meant to encapsulate the functioning of services like infrastructure providers, which transmit content without exerting any real control. Thus, by adopting this definition for all intermediaries, Indian law mostly considers internet services, even social media platforms, to be passive plumbing through which information flows.</p>
<p>It is easy to see how this narrow conception of online services is severely <a class="link-external" href="https://georgetownlawtechreview.org/wp-content/uploads/2018/07/2.2-Gilespie-pp-198-216.pdf" rel="nofollow noopener" target="_blank">lacking</a>.</p>
<p>Most prominent social media platforms remove or <a class="link-external" href="https://techcrunch.com/2019/12/16/instagram-fact-checking/" rel="nofollow noopener" target="_blank">hide</a> content, <a class="link-external" href="https://about.fb.com/news/2016/06/building-a-better-news-feed-for-you/" rel="nofollow noopener" target="_blank">algorithmically curate</a> news-feeds to make users keep coming back for more, and increasingly add <a class="link-external" href="https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html" rel="nofollow noopener" target="_blank">labels</a>
to content. If the law is interpreted strictly, these practices may be
adjudged to run afoul of the aforementioned conditions that
intermediaries need to satisfy in order to qualify for safe harbour.</p>
<h3 class="cms-block cms-block-heading">Platforms or editors?<br /></h3>
<p>For instance, it can be argued that social media platforms initiate transmission in some form when they pick and ‘suggest’ relevant third-party content to users. When it comes to newsfeeds, neither the content creator nor the consumer has as much control over how content is disseminated or curated as the platform does. By curating newsfeeds, social media platforms can be said to be essentially ‘selecting the receiver’ of transmissions.</p>
<p>The Intermediary
Guidelines further complicate matters by specifically laying out what is
not to be construed as ‘editing’ under the law. Under rule 3(3), the
act of taking down content pursuant to orders under the Act will not be
considered as ‘editing’ of said content.</p>
<p>Since the term ‘editing’
has been left undefined beyond the negative qualification, several
social media intermediaries may well qualify as editors. They use
algorithms that curate content for their users; like traditional news
editors, these algorithms use certain <a class="link-external" href="https://www.researchgate.net/profile/Michael_Devito/publication/302979999_From_Editors_to_Algorithms_A_values-based_approach_to_understanding_story_selection_in_the_Facebook_news_feed/links/5a19cc3d4585155c26ac56d4/From-Editors-to-Algorithms-A-values-based-approach-to-understanding-story-selection-in-the-Facebook-news-feed.pdf" rel="nofollow noopener" target="_blank">‘values’</a>
to determine what is relevant to their audiences. In other words, one
can argue that it is difficult to draw a bright line between editorial
and algorithmic acts.</p>
<p>To retain their safe harbour, the counter-argument that social media platforms can rely on is the fact that Rule 3(5) of the Intermediary Guidelines requires intermediaries to inform users that intermediaries reserve the right to take down user content that relates to a wide variety of acts, including content that threatens national security, or is “[...] grossly harmful, harassing, blasphemous, [etc.]”.</p>
<p>In practice, however, the
content moderation practices of some social media companies may go
beyond these categories. Additionally, the rule does not address the
legal questions created by these platforms’ curation of news-feeds.</p>
<p>The purpose of highlighting how Section 79 treats the practices of social media platforms is not to argue that these platforms should be held liable for user-generated content. Online spaces created by social media platforms have allowed individuals to express themselves and participate in political organisation and <a class="link-external" href="https://www.pewresearch.org/internet/2018/07/11/public-attitudes-toward-political-engagement-on-social-media/" rel="nofollow noopener" target="_blank">debate</a>.</p>
<p>A level of protection for intermediaries in the form of immunity from liability is therefore critical for the protection of several human rights, especially the right to freedom of speech. This piece only serves to highlight that Section 79 is antiquated and unfit to deal with modern online services. The interpretative dangers that exist in the provision create regulatory uncertainty for organisations operating in India.</p>
<h3 class="cms-block cms-block-heading">Dangers to speech<br /></h3>
<p>These dangers may not just be theoretical.</p>
<p>Only last year, Twitter CEO Jack Dorsey was <a class="link-external" href="https://www.hindustantimes.com/india-news/twitter-ceo-jack-dorsey-summoned-by-parliamentary-panel-on-feb-25-panel-refuses-to-hear-other-officials/story-8x9OUbNBo36uvp92L5nOKI.html" rel="nofollow noopener" target="_blank">summoned</a>
by the Parliamentary Committee on Information Technology to answer
accusations of the platform having a bias against ‘right-wing’ accounts.
More recently, BJP politician Vinit Goenka <a class="link-external" href="https://www.medianama.com/2020/06/223-vinit-goenka-twitter-khalistan/" rel="nofollow noopener" target="_blank">encouraged people to file cases against Twitter</a> for promoting separatist content.</p>
<p>Recent <a class="link-external" href="https://sflc.in/sites/default/files/reports/Intermediary_Liability_2_0_-_A_Shifting_Paradigm.pdf" rel="nofollow noopener" target="_blank">interventions</a>
from the Supreme Court have imposed proactive filtration and blocking
requirements on intermediaries, but these have been limited to
reasonable restrictions that may be imposed on free speech under Article
19 of India’s Constitution. Content moderation policies of
intermediaries like Twitter and Facebook go well beyond the scope of
Article 19 restrictions, and the apex court has not yet addressed this.</p>
<p>The
Delhi High Court, in Christian Louboutin v. Nakul Bajaj, has already
highlighted criteria for when e-commerce intermediaries can stake claim
to Section 79 safe harbour protections based on the active (or passive)
nature of their services. While the order came in the context of
intellectual property violations, nothing keeps a court from similarly
finding that Facebook and Twitter play an ‘active’ role when it comes to
content moderation and curation.</p>
<p>These companies may one day
find the ‘safe harbour’ rug pulled from under their feet if a court
reads section 79 more strictly. In fact, judicial intervention may not
even be required. The threat of such an interpretation may simply be
exploited by the government, and used as leverage to get social media
platforms to toe the government line.</p>
<h3 class="cms-block cms-block-heading">Protection and responsibility<br /></h3>
<p>Unfortunately, the amendments to the intermediary guidelines proposed in 2018 do not address the legal position of content moderation either. More recent developments <a class="link-external" href="https://www.medianama.com/2020/04/223-meity-information-technology-act-amendments/" rel="nofollow noopener" target="_blank">suggest</a> that MeitY may be contemplating amending the IT Act. This presents an opportunity for a more comprehensive reworking of the Indian intermediary liability regime than what is possible through delegated legislation like the intermediary rules.</p>
<p>Intermediaries, rather
than being treated uniformly, should be classified based on their
function and the level of control they exercise over the content they
process. For instance, network infrastructure should continue to be
treated as ‘mere conduits’ and enjoy broad immunity from liability for
user-generated content.</p>
<p>More complex services like search engines and online social media platforms can have differentiated responsibilities based on the extent to which they can contextualise and change content. The law should carve out an explicit permission for platforms to moderate content in good faith. Such an allowance should be accompanied by an outline of best practices that these platforms can follow to ensure <a class="link-external" href="https://santaclaraprinciples.org/" rel="nofollow noopener" target="_blank">transparency and accountability</a> to their users.</p>
<p>For a robust and rights-respecting public sphere, India needs to ensure that large social media platforms receive adequate protections, and are made more responsible to their users.</p>
<p><em>Anna Liz Thomas is a law
graduate and a policy researcher, currently working with the Centre for
Internet and Society. Gurshabad Grover manages research in the freedom
of expression and internet governance team at CIS</em>.</p>
</div>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/donald-trump-is-attacking-the-social-media-giants-here2019s-what-india-should-do-differently'>http://editors.cis-india.org/internet-governance/blog/donald-trump-is-attacking-the-social-media-giants-here2019s-what-india-should-do-differently</a>
</p>
No publisher | Anna Liz Thomas and Gurshabad Grover | Content takedown, Freedom of Speech and Expression, Intermediary Liability | 2020-06-25T09:07:52Z | Blog Entry
ICANN takes one step forward in its human rights and accountability commitments
http://editors.cis-india.org/internet-governance/blog/article-19-akriti-bopanna-and-ephraim-percy-kenyanito-december-16-2019-icann-takes-one-step-forward-in-its-human-rights-and-accountability-commitments
<b>Akriti Bopanna and Ephraim Percy Kenyanito take a look at ICANN's Implementation Assessment Report for the Workstream 2 recommendations and break down the key human rights considerations in it. Akriti chairs the Cross Community Working Party on Human Rights at ICANN and Ephraim works on Human Rights and Business for Article 19, leading their ICANN engagement.</b>
<p style="text-align: justify;">The article was first<a class="external-link" href="https://www.article19.org/resources/blog-icann-takes-one-step-forward-in-its-human-rights-and-accountability-commitments/"> published on Article 19</a> on December 16, 2019</p>
<hr style="text-align: justify;" />
<p style="text-align: justify;">ICANN is the international non-profit organization that brings together various stakeholders to create policies aimed at coordinating the Domain Name System. Some of these stakeholders include representatives from government, civil society, academia, the private sector, and the technical community.</p>
<p style="text-align: justify;">During the recently concluded 66th International Meeting of the Internet Corporation for Assigned Names and Numbers (ICANN) in Montreal (Canada); the ICANN board adopted by consensus the recommendations contained within the Work Stream 2 (WS2) Final Report. This report was generated as part of steps towards accountability after the September 30th 2016 U.S. government handing over of its unilateral control over ICANN, through its previous stewardship role of the Internet Assigned Names and Numbers Authority (IANA).</p>
<p style="text-align: justify;">The Workstream 2 Recommendations on Accountability are seen as a big step ahead in the incorporation of human rights in ICANN’s various processes, with over 100 recommendations on aspects ranging from diversity to transparency. An Implementation Team has been constituted which comprises the Co-chairs and the rapporteurs from the WS2 subgroups. They will primarily help the ICANN organization in interpreting recommendations of the groups where further clarification is needed on how to implement the same. As the next step, an Implementation Assessment Report has recently been published which looks at the various resources and steps needed. The steps are categorized into actions meant for one of the 3; the ICANN Board, Community and the ICANN organization itself. These will be funded by ICANN’s General Operating Fund, the Board and the org.</p>
<p style="text-align: justify;">The report is divided into the following 8 issues: 1) Diversity, 2) Guidelines for Good Faith, 3) Recommendations for a Framework of Interpretation for Human Rights, 4) Jurisdiction of Settlement of Dispute Issues, 5) Recommendations for Improving the ICANN Office of the Ombudsman, 6) Recommendations to increase SO/ AC Accountability, 7) Recommendations to increase Staff Accountability and 8) Recommendations to improve ICANN Transparency.</p>
<p style="text-align: justify;">This blog will take a look at the essential human rights related considerations of the report and how the digital rights community can get involved with the effectuation of the recommendations.</p>
<p style="text-align: justify;"><strong>Diversity</strong></p>
<p style="text-align: justify;">The core issues concerning the issue of diversity revolve around the need for a uniform definition of the parameters of diversity and a community discussion on the ones already identified; geographic representation, language, gender, age, physical disability, diverse skills and stakeholder constituency. An agreed upon definition of all of these is necessary before its Board approval and application consistently through the various parts of ICANN. In addition, it is also required to formulate a standard template for diversity data collection and report generation. This sub group’s recommendations are estimated to be implemented in 6-18 months. Many of the recommendations need to be analyzed for compliance with the General Data Protection Regulation (GDPR) such as collecting of information relating to disability. For now, the GDPR is only referenced with no further details on how steps considered will either comply or contrast the law.</p>
<p style="text-align: justify;"><strong>Good faith Guidelines</strong></p>
<p style="text-align: justify;">The Empowered Community (EC) which includes all the Supporting Organizations, At-Large-Advisory-Committee and Government Advisory Council, are called upon to conceptualize guidelines to be followed when individuals from the EC are participating in Board Removal Processes. Subsequent to this, the implementation will take 6-12 months.</p>
<p style="text-align: justify;"><strong>Framework of Interpretation for Human Rights</strong></p>
<p style="text-align: justify;">Central to the human rights conversation and finally approved, is the Human Rights Framework of Interpretation. However the report does not give a specific timeline for its implementation, only mentioning that this process will take more than 12 months. The task within this is to establish practices of how the core value of respecting human rights will be balanced with other core values while developing ICANN policies and execution of its operations. All policy development processes, reviews, Cross Community Working Group recommendations will need a framework to consider and incorporate human rights, in tandem with the Framework of Interpretation. It will also have to be shown that policies and recommendations sent to the Board have factored in the FOI.</p>
<p style="text-align: justify;"><strong>Transparency</strong></p>
<p style="text-align: justify;">The recommendations focus on the following four key areas as listed below:<br />1. Improving ICANN’s Documentary Information Disclosure Policy (DIDP).<br />2. Documenting and Reporting on ICANN’s Interactions with Governments.<br />3. Improving Transparency of Board Deliberations.<br />4. Improving ICANN’s Anonymous Hotline (Whistleblower Protection).</p>
<p style="text-align: justify;">The bulk of the burden for implementation is put on ICANN org with the community providing oversight and ensuring ICANN lives up to its commitments under various policies and laws. Subsequent to this, the implementation will take 6-12 months.</p>
<p style="text-align: justify;"><strong>How the ICANN community can contribute to this work</strong></p>
<p style="text-align: justify;">This is a defining moment on the future of ICANN and there are great opportunities for the ICANN multistakeholder community to continue shaping the future of the Internet. Some of the envisioned actions by the community include:</p>
<ul style="text-align: justify;">
<li>monitoring and assessing the performance of the various ICANN bodies, and acting on the recommendations that emerge from those accountability processes. This will only be done through the collaborative formulation of processes and procedures for PDPs, CCWGs, etc. to incorporate human rights considerations, and subsequently through implementation of the best practices suggested for improving SO/AC accountability and transparency;</li>
<li>conducting diversity assessments to inform objectives and strategies for diversity criteria;</li>
<li>supporting contracted parties through legal advice for change in their agreements when it comes to choice of law and venue recommendations;</li>
<li style="text-align: justify;">contributing to conversations where the Ombudsman can expand his/her involvement that go beyond current jurisdiction and authority</li></ul>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/article-19-akriti-bopanna-and-ephraim-percy-kenyanito-december-16-2019-icann-takes-one-step-forward-in-its-human-rights-and-accountability-commitments'>http://editors.cis-india.org/internet-governance/blog/article-19-akriti-bopanna-and-ephraim-percy-kenyanito-december-16-2019-icann-takes-one-step-forward-in-its-human-rights-and-accountability-commitments</a>
</p>
No publisher | Akriti Bopanna and Ephraim Percy Kenyanito | Freedom of Speech and Expression, ICANN, IANA, Internet Governance | 2019-12-19T11:35:16Z | Blog Entry
India’s record on internet shutdown gets bleaker; now blocked in 2 NE states
http://editors.cis-india.org/internet-governance/news/hindustan-times-december-11-2019-indias-record-on-internet-shutdown-gets-bleaker
<b>India reported over 100 internet shutdowns in 2018, according to an annual study by Freedom House, a US-based non-profit research organization.</b>
<p style="text-align: justify; ">The article was published in the <a class="external-link" href="https://www.hindustantimes.com/india-news/amid-anti-citizenship-bill-protests-internet-shutdown-in-tripura-arunachal/story-jqR4jxiJexKbKIivV6XZBP.html">Hindustan Times</a> on December 11, 2019. Pranesh Prakash was quoted.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">The internet shutdown on Tuesday in Arunachal Pradesh and Tripura amid spiraling protests against the <a href="https://www.hindustantimes.com/editorials/why-north-east-shouldn-t-be-wary-of-citizen-amendment-bill-opinion/story-JPYTnQROIi9cdXACK3k7KO.html" title="Citizenship (Amendment) Bill in the Northeast">Citizenship (Amendment) Bill in the Northeast</a> is the latest in a series of such shutdowns across India, which topped the list of countries that resorted to such measures in 2018.</p>
<p style="text-align: justify; ">India reported over 100 internet shutdown in 2018, according to an annual study of Freedom House, a US-based non-profit research organization. The study on the internet and digital media freedom was conducted in over 65 countries, which cover 87% of the world’s internet users</p>
<p style="text-align: justify; ">Police and administrative authorities have cited protests and other security reasons to routinely snap the internet in India.</p>
<p style="text-align: justify; ">The Centre promulgated the Temporary Suspension of Telecom Services (Public Emergency or Public Safety) Rules, 2017, under the Indian Telegraph Act, 1885, in August 2017 for legal sanction to the shutdowns.</p>
<p style="text-align: justify; ">As per the rules, Union home ministry secretary or secretaries of state home departments can order temporary suspension of the internet. An internet suspension order has to be taken up for review within five days.</p>
<p style="text-align: justify; ">Prior to 2017, authorities could shut down the internet under Section 144 of the Code of Criminal Procedure (CrPC), which empowers an executive magistrate to prohibit an assembly of over four people.</p>
<p style="text-align: justify; ">Section 5 (2) of the Telegraph Act, 1855, allowed the government to prevent transmission of any telegraphic message during a public emergency or in the interest of public safety.</p>
<p style="text-align: justify; ">The Kashmir Valley has remained under an internet shutdown since August 4. The shutdown was imposed hours ahead of the nullification of the Constitution’s Article 370 that gave Jammu and Kashmir special status.</p>
<p style="text-align: justify; ">Internet and phone lines were snapped ahead of Republic Day celebrations in 2010 in one of the first reported shutdowns in the Valley. Kashmir also holds the record for the longest shutdown when the internet was snapped for 133 days after the killing of Hizbul Mujahideen militant Burhan Wani in July 2016. The current shutdown, with 122 days and counting, is the second-longest.</p>
<p style="text-align: justify; ">The 100-day blackout in Darjeeling during the Gorkha agitation in 2016 is the third-longest internet shutdown in India.</p>
<p style="text-align: justify; ">Ahead of the verdict in the Ram Janmabhoomi-Babri Masjid title suit last month, the internet was shut down in parts of Maharashtra, Rajasthan, Haryana and Uttar Pradesh. The internet was shut down for three days in Gujarat during the agitation for a quota in jobs and educational institutes for the Patidar community in 2015.</p>
<p style="text-align: justify; ">As per the Software Freedom Law Centre, which provides free legal services to protect Free and Open Source Software, the total number of shutdowns in Indian since 2012 is more than 359. As per the tracker -- internetshutdowns.in -- which records such instances from newspaper clippings -- there have been 89 internet shutdowns in 2019, 134 in 2018, and 79 in 2017.</p>
<p style="text-align: justify; ">“As a part of this project, we track incidents of Internet shutdowns across India in an attempt to draw attention to the troubling trend of disconnecting access to Internet services, for reasons ranging from curbing unrest to preventing cheating in an examination,” it states as part of its purpose.</p>
<p style="text-align: justify; ">In September this year, the Kerala High Court held that access to the internet is a fundamental right. <span>According to Pranesh Prakash of the Centre for Internet Society, the shutdowns are largely unlawful.</span></p>
<p style="text-align: justify; ">“David Kaye, the UN special rapporteur on the right to freedom of opinion and expression, has condemned the shutdowns and noted that the principles of proportionality and necessity should be adhered to in case of shutdowns. Yet, there have been several instances where lives have been lost in Kashmir due to the lockdown,” he said.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/hindustan-times-december-11-2019-indias-record-on-internet-shutdown-gets-bleaker'>http://editors.cis-india.org/internet-governance/news/hindustan-times-december-11-2019-indias-record-on-internet-shutdown-gets-bleaker</a>
</p>
No publisher | Admin | Freedom of Speech and Expression, Internet Governance | 2019-12-15T05:51:20Z | News Item
In Twitter India’s Arbitrary Suspensions, a Question of What Constitutes a Public Space
http://editors.cis-india.org/internet-governance/blog/the-wire-torsha-sarkar-december-7-2019-twitter-arbitrary-suspension-public-space
<b>A discussion is underway about the way social media platforms may have to operate within the tenets of constitutional protections of free speech.</b>
<p style="text-align: justify; ">The article by Torsha Sarkar was <a class="external-link" href="https://thewire.in/tech/twitter-arbitrary-suspension-public-space">published in the Wire</a> on December 7, 2019.</p>
<hr />
<p style="text-align: justify; ">On October, 26 2019, Twitter suspended the account of senior advocate Sanjay Hegde. The reason? He had previously put up the famous photo of August Landmesser refusing to do the Nazi salute in a sea of crowd in the Blohm Voss shipyard.</p>
<p style="text-align: justify; ">According to the social media platform, the image violated Twitter’s ‘hateful imagery’ guidelines, despite the photo being around for decades and usually being recognised as a sign of resistance against blind authoritarianism.</p>
<p style="text-align: justify; "><img src="http://editors.cis-india.org/home-images/AugustLandmesser.png/@@images/bf841f6d-fd25-4bd8-b421-8e55d81c021b.png" alt="August Landmasser" class="image-inline" title="August Landmasser" /></p>
<p style="text-align: justify; "><i>August Landmesser. Photo: Public Domain</i></p>
<p style="text-align: justify; ">Twitter briefly revoked the suspension on October 27, but promptly suspended Hegde’s account again. This time, the action was prompted by Hegde quote-tweeting parts of a poem by Gorakh Pandey, titled ‘Hang him’, which was written in protest of the first death penalties given to two peasant revolutionaries in an independent India. This time, Hegde was informed that his account would not be restored.</p>
<p style="text-align: justify; ">Spurred by what he believed was Twitter’s arbitrary exercise of power, he proceeded to file a legal notice with Twitter, and <a href="https://www.livelaw.in/news-updates/sr-adv-sanjay-hegde-serves-legal-notice-on-twitter-for-restoration-of-account-149579">asked</a> the Ministry of Electronics and Information Technology (MeitY) to intervene in the matter. It is the subject matter of this ask that becomes of interest.</p>
<p style="text-align: justify; ">In his complaint, Hegde first outlines how the content shared by him did not violate any of Twitter’s community guidelines. He then goes on to highlight how his fundamental right of dissemination and receipt of information under Article 19(1)(a) were obstructed by the action of Twitter. Here, he places reliance to several key decisions of the Indian and the US Supreme court on media freedom, which provided thrust to his argument that a citizen’s right to free speech is meaningless if control was concentrated in the hands of a few private parties.</p>
<h3 style="text-align: justify; ">Vertical or horizontal?</h3>
<p style="text-align: justify; ">One of the first things we learn about fundamental rights is that they are enforceable against the government, and that they allow the individual to have a remedy against the excesses of the all-powerful state. This understanding of fundamental rights is usually called the ‘vertical’ approach – where the state, or the allied public authority is at the top and the individual, a non-public entity is at the bottom.</p>
<p style="text-align: justify; ">However, there is another, albeit underdeveloped, thread of constitutional jurisprudence that argues that in certain circumstances these rights can be claimed against another private entity. This is called the ‘horizontal’ application of fundamental rights.</p>
<p style="text-align: justify; ">In that note, Hegde’s contention essentially becomes this – claiming an enforceable remedy against the private entity for supposedly violating his fundamental right. This is clearly an ask for the Centre to consider a horizontal application of Article 19(1)(a) against large social media companies.</p>
<h3 style="text-align: justify; ">What could this mean?</h3>
<p style="text-align: justify; ">Lawyer Gautam Bhatia has <a href="https://indconlawphil.wordpress.com/2015/05/24/horizontality-under-the-indian-constitution-a-schema/">argued</a> that there are several ways in which a fundamental right can be enforced against another private entity. It must be noted that he derives this classification on the touchstone of existing judicial decisions, which is different from seeking an executive intervention. Nevertheless, it is interesting to consider the logic of his arguments as a thought exercise. Bhatia points out that one of the ways in which fundamental rights can be applied to a private entity is by assimilating the concerned entity as a ‘state’ as per Article 12.</p>
<p style="text-align: justify; ">There is a considerable amount of jurisprudence on the nature of the test to determine whether the assailed entity is state. In 2002, the Supreme Court <a href="https://indiankanoon.org/doc/471272/">held</a> that for an entity to be deemed state, it must be ‘functionally, financially and administratively dominated by or under the control of the Government’. If we go by this test, then a social media platform would most probably not come within the ambit of Article 12.</p>
<p style="text-align: justify; ">However, there is a thread of recent developments that might be interesting to consider. Earlier this year, a federal court of appeals in the US <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf#page=1">ruled</a> that the First Amendment prohibits President Donald Trump, who used his Twitter for government purposes, from blocking his critics. The court further held that when a public official uses their account for official purposes, then the account ceases to be a mere private account. This judgment has a sharp bearing in the current discussion, and the way social media platforms may have to operate within the tenets of constitutional protections of free speech.</p>
<p style="text-align: justify; ">Although the opinion of the federal court clearly noted that they did not concern themselves with the application of the First Amendment rights to the social media platforms, one cannot help but wonder – if the court rules that certain spaces in a social media account are ‘public’ by default, and that politicians cannot exclude critiques from those spaces, then <a href="https://www.forbes.com/sites/kalevleetaru/2017/08/01/is-social-media-really-a-public-space/#2ca9795b2b80">can</a> the company itself block or impede certain messages? If the company does it, can an enforceable remedy then be made against them?</p>
<p style="text-align: justify; "><img src="http://editors.cis-india.org/home-images/Trump.png/@@images/9bd98eba-124f-4be0-b60c-13482b76ae80.png" alt="Trump" class="image-inline" title="Trump" /></p>
<p style="text-align: justify; "><span style="text-align: center; "><i>A US court ruled that Donald Trump cannot block people on his Twitter account. Photo: Reuters</i></span></p>
<h3 style="text-align: justify; ">What can be done?</h3>
<p style="text-align: justify; ">Of course, there is no straight answer to this question. On one hand, social media platforms, owing to the enormous concentration of power and opaque moderating policies, have become gatekeepers of online speech to a large extent. If such power is left unchecked, then, as Hegde’s request demonstrates, a citizen’s free speech rights are meaningless.</p>
<p class="_yeti_done" style="text-align: justify; ">On the other hand, if we definitively agree that in certain circumstances, citizens should be allowed to claim remedies against these companies’ arbitrary exercise of power, then are we setting ourselves for a slippery slope? Would we make exceptions to the nature of spaces in the social media based on who is using it? If we do, then what would be the extent to which we would limit the company’s power of regulating speech in such space? How would such limitation work in consonance with the company’s need to protect public officials from targeted harassment?</p>
<p style="text-align: justify; ">At this juncture, given the novelty of the situation, our decisions should also be measured. One way of addressing this obvious paradigm shift is by considering the idea of oversight structures more seriously.</p>
<p style="text-align: justify; ">I have previously <a href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">written</a> about the possibility of having an independent regulator as a compromise between overtly stern government regulation and allowing social media companies to have free reign over the things that go on their platforms. In light of the recent events, this might be a useful alternative to consider.</p>
<p style="text-align: justify; ">Hegde had also asked the MeitY to issue guidelines to ensure that any censorship of speech in these social media platforms is to be done in accordance with the principles of Article 19.</p>
<p style="text-align: justify; ">If we presume that certain social media platforms are large and powerful enough to be treated akin to public spaces, then having an oversight authority to arbitrate and ensure the enforcement of constitutional principles for future disputes may just be the first step towards more evidence-based policymaking.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/the-wire-torsha-sarkar-december-7-2019-twitter-arbitrary-suspension-public-space'>http://editors.cis-india.org/internet-governance/blog/the-wire-torsha-sarkar-december-7-2019-twitter-arbitrary-suspension-public-space</a>
</p>
A Deep Dive into Content Takedown Timeframes
http://editors.cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes
<b>Since the 1990s, internet usage has grown massively, facilitated in part by the growing importance of intermediaries that act as gateways to the internet. Intermediaries such as Internet Service Providers (ISPs), web-hosting providers, social media platforms and search engines provide key services which propel social, economic and political development. However, these developments are offset by instances of users engaging with platforms in unlawful ways. The scale and openness of the internet make regulating such behaviour challenging, and in turn pose several interrelated policy questions.</b>
<p style="text-align: justify;">In this report, we will consider one such question by examining the appropriate time frame for an intermediary to respond to a government content removal request. The way legislations around the world choose to frame this answer has wider ramifications on issues of free speech and ease of carrying out operations for intermediaries. Through the course of our research, we found, for instance:</p>
<ol>
<li style="text-align: justify;">An one-size-fits-all model for illegal content may not be productive. The issue of regulating liability online contain several nuances, which must be considered for more holistic law-making. If regulation is made with only the tech incumbents in mind, then the ramifications of the same would become incredibly burdensome for the smaller companies in the market. </li>
<li style="text-align: justify;">Determining an appropriate turnaround time for an intermediary must also consider the nature and impact of the content in question. For instance, the Impact Assessment on the Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online cites research that shows that one-third of all links to Daesh propaganda were disseminated within the first one-hour of its appearance, and three-fourths of these links were shared within four hours of their release. This was the basic rationale for the subsequent enactment of the EU Terrorism Regulation, which proposed an one-hour time-frame for intermediaries to remove terrorist content.</li>
<li style="text-align: justify;">Understanding the impact of specific turnaround times on intermediaries requires the law to introduce in-built transparency reporting mechanisms. Such an exercise, performed periodically, generates useful feedback, which can be, in turn used to improve the system.</li></ol>
<div style="text-align: justify;"> </div>
<div style="text-align: justify;"><strong>Corrigendum: </strong>Please note that in the section concerning 'Regulation on Preventing the Dissemination of Terrorist Content Online', the report mentions that the Regulation has been 'passed in 2019'. At the time of writing the report, the Regulation had only been passed in the European Parliament, and as of May 2020, is currently in the process of a trilogue. </div>
<div style="text-align: justify;"> </div>
<div style="text-align: justify;"><strong>Disclosure</strong>: CIS is a recipient of research grants from Facebook India. </div>
<div style="text-align: justify;"> </div>
<hr />
<p style="text-align: justify;"><a class="external-link" href="http://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames">Click to download the research paper</a> by Torsha Sarkar (with research assistance from Keying Geng and Merrin Muhammed Ashraf; edited by Elonnai Hickok, Akriti Bopanna, and Gurshabad Grover; inputs from Tanaya Rajwade)</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes'>http://editors.cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes</a>
</p>