The Centre for Internet and Society
http://editors.cis-india.org
Right to Exclusion, Government Spaces, and Speech
http://editors.cis-india.org/internet-governance/blog/right-to-exclusion-government-spaces-and-speech
<b>The conclusion of the litigation surrounding Trump blocking his critics on Twitter brings to the forefront two less-discussed aspects of intermediary liability: a) whether social media platforms could be compelled to ‘carry’ speech under any established legal principles, thereby limiting their right to exclude users or speech, and b) whether users have a constitutional right to access the social media spaces of elected officials. This essay analyzes these issues under American law and draws parallels for India, in light of the ongoing litigation around the suspension of advocate Sanjay Hegde’s Twitter account.</b>
<p>This article first appeared on the Indian Journal of Law and Technology (IJLT) blog, and can be accessed <a class="external-link" href="https://www.ijlt.in/post/right-to-exclusion-government-controlled-spaces-and-speech">here</a>. Cross-posted with permission. </p>
<p>---</p>
<h2><span class="s1">Introduction</span></h2>
<p class="p2"><span class="s1">On April 8, the Supreme Court of the United States (SCOTUS), vacated the judgment of the US Court of Appeals for Second Circuit’s in <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf%23page=1"><span class="s2"><em>Knight First Amendment Institute v Trump</em></span></a>. In that case, the Court of Appeals had precluded Donald Trump, then-POTUS, from blocking his critics from his Twitter account on the ground that such action amounted to the erosion of constitutional rights of his critics. The Court of Appeals had held that his use of @realDonaldTrump in his official capacity had transformed the nature of the account from private to public, and therefore, blocking users he disagreed with amounted to viewpoint discrimination, something that was incompatible with the First Amendment.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">The SCOTUS <a href="https://www.supremecourt.gov/opinions/20pdf/20-197_5ie6.pdf"><span class="s2">ordered</span></a> the case to be dismissed as moot, on account of Trump no longer being in office. Justice Clarence Thomas issued a ten-page concurrence that went into additional depth regarding the nature of social media platforms and user rights. It must be noted that the concurrence does not hold any direct precedential weightage, since Justice Thomas was not joined by any of his colleagues at the bench for the opinion. However, given that similar questions of public import, are currently being deliberated in the ongoing <em>Sanjay Hegde</em> <a href="https://www.barandbench.com/news/litigation/delhi-high-court-sanjay-hegde-challenge-suspension-twitter-account-hearing-july-8"><span class="s2">litigation</span></a> in the Delhi High Court, Justice Thomas’ concurrence might hold some persuasive weightage in India. While the facts of these litigations might be starkly different, both of them are nevertheless characterized by important questions of applying constitutional doctrines to private parties like Twitter and the supposedly ‘public’ nature of social media platforms.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">In this essay, we consider the legal questions raised in the opinion as possible learnings for India. In the first part, we analyze the key points raised by Justice Thomas, vis-a-vis the American legal position on intermediary liability and freedom of speech. In the second part, we apply these deliberations to the <em>Sanjay Hegde </em>litigation, as a case-study and a roadmap for future legal jurisprudence to be developed.<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">A flawed analogy</span></h2>
<p class="p2"><span class="s1">At the outset, let us briefly refresh the timeline of Trump’s tryst with Twitter, and the history of this litigation: the Court of Appeals decision was <a href="https://int.nyt.com/data/documenthelper/1365-trump-twitter-second-circuit-r/c0f4e0701b087dab9b43/optimized/full.pdf%23page=1"><span class="s2">issued</span></a> in 2019, when Trump was still in office. Post-November 2020 Presidential Election, where he was voted out, his supporters <a href="https://indianexpress.com/article/explained/us-capitol-hill-siege-explained-7136632/"><span class="s2">broke</span></a> into Capitol Hill. Much of the blame for the attack was pinned on Trump’s use of social media channels (including Twitter) to instigate the violence and following this, Twitter <a href="https://blog.twitter.com/en_us/topics/company/2020/suspension"><span class="s2">suspended</span></a> his account permanently.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">It is this final fact that seized Justice Thomas’ reasoning. He noted that a private party like Twitter’s power to do away with Trump’s account altogether was at odds with the Court of Appeals’ earlier finding about the public nature of the account. He deployed a hotel analogy to justify this: government officials renting a hotel room for a public hearing on regulation could not kick out a dissenter, but if the same officials gather informally in the hotel lounge, then they would be within their rights to ask the hotel to kick out a heckler. The difference in the two situations would be that, <em>“the government controls the space in the first scenario, the hotel, in the latter.” </em>He noted that Twitter’s conduct was similar to the second situation, where it “<em>control(s) the avenues for speech</em>”. Accordingly, he dismissed the idea that the original respondents (the users whose accounts were blocked) had any First Amendment claims against Trump’s initial blocking action, since the ultimate control of the ‘avenue’ was with Twitter, and not Trump.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">In the facts of the case however, this analogy was not justified. The Court of Appeals had not concerned itself with the question of private ‘control’ of entire social media spaces, and given the timeline of the litigation, it was impossible for them to pre-empt such considerations within the judgment. In fact, the only takeaway from the original decision had been that an elected representative’s utilization of his social media account for official purposes transformed </span><span class="s3">only that particular space</span><span class="s1"><em> </em>into a public forum where constitutional rights would find applicability. In delving into questions of ‘control’ and ‘avenues of speech’, issues that had been previously unexplored, Justice Thomas conflates a rather specific point into a much bigger, general conundrum. Further deliberations in the concurrence are accordingly put forward upon this flawed premise.<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">Right to exclusion (and must carry claims)</span></h2>
<p class="p2"><span class="s1">From here, Justice Thomas identified the problem to be “<em>private, concentrated control over online content and platforms available to the public</em>”, and brought forth two alternate regulatory systems — common carrier and public accommodation — to argue for ‘equal access’ over social media space. He posited that successful application of either of the two analogies would effectively restrict a social media platform’s right to exclude its users, and “<em>an answer may arise for dissatisfied platform users who would appreciate not being blocked</em>”. Essentially, this would mean that platforms would be obligated to carry <em>all </em>forms of (presumably) legal speech, and users would be entitled to sue platforms in case they feel their content has been unfairly taken down, a phenomenon Daphne Keller <a href="http://cyberlaw.stanford.edu/blog/2018/09/why-dc-pundits-must-carry-claims-are-relevant-global-censorship"><span class="s2">describes</span></a> as ‘must carry claims’.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">Again, this is a strange place to find the argument to proceed, since the original facts of the case were not about ‘<em>dissatisfied platform users’,</em> but an elected representative’s account being used in dissemination of official information. Beyond the initial ‘private’ control deliberation, Justice Thomas did not seem interested in exploring this original legal position, and instead emphasized on analogizing social media platforms in order to enforce ‘equal access’, finally arriving at a position that would be legally untenable in the USA.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">The American law on intermediary liability, as embodied in Section 230 of the Communications Decency Act (CDA), has two key components: first, intermediaries are <a href="https://www.eff.org/issues/cda230"><span class="s2">protected</span></a> against the contents posted by its users, under a legal model <a href="https://www.article19.org/wp-content/uploads/2018/02/Intermediaries_ENGLISH.pdf"><span class="s2">termed</span></a> as ‘broad immunity’, and second, an intermediary does not stand to lose its immunity if it chooses to moderate and remove speech it finds objectionable, popularly <a href="https://intpolicydigest.org/section-230-how-it-actually-works-what-might-change-and-how-that-could-affect-you/"><span class="s2">known</span></a> as the Good Samaritan protection. It is the effect of these two components, combined, that allows platforms to take calls on what to remove and what to keep, translating into a ‘right to exclusion’. Legally compelling them to carry speech, under the garb of ‘access’ would therefore, strike at the heart of the protection granted by the CDA.<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">Learnings for India</span></h2>
<p class="p2"><span class="s1">In his petition to the Delhi High Court, Senior Supreme Court Advocate, Sanjay Hegde had contested that the suspension of his Twitter account, on the grounds of him sharing anti-authoritarian imagery, was arbitrary and that:<span class="Apple-converted-space"> </span></span></p>
<ol style="list-style-type: lower-alpha;" class="ol1"><li class="li2"><span class="s1">Twitter was carrying out a public function and would be therefore amenable to writ jurisdiction under Article 226 of the Indian Constitution; and</span></li><li class="li2"><span class="s1">The suspension of his account had amounted to a violation of his right to freedom of speech and expression under Article 19(1)(a) and his rights to assembly and association under Article 19(1)(b) and 19(1)(c); and</span></li><li class="li2"><span class="s1">The government has a positive obligation to ensure that any censorship on social media platforms is done in accordance with Article 19(2).<span class="Apple-converted-space"> </span></span></li></ol>
<p class="p3"><span class="s1"></span></p>
<p class="p5"><span class="s1">The first two prongs of the original petition are perhaps easily disputed: as previous <a href="https://indconlawphil.wordpress.com/2020/01/28/guest-post-social-media-public-forums-and-the-freedom-of-speech-ii/"><span class="s2">commentary</span></a> has pointed out, existing Indian constitutional jurisprudence on ‘public function’ does not implicate Twitter, and accordingly, it would be a difficult to make out a case that account suspensions, no matter how arbitrary, would amount to a violation of the user’s fundamental rights. It is the third contention that requires some additional insight in the context of our previous discussion.<span class="Apple-converted-space"> </span></span></p>
<h3><span class="s1">Does the Indian legal system support a right to exclusion?<span class="Apple-converted-space"> </span></span></h3>
<p class="p2"><span class="s1">Suing Twitter to reinstate a suspended account, on the ground that such suspension was arbitrary and illegal, is in its essence a request to limit Twitter’s right to exclude its users. The petition serves as an example of a must-carry claim in the Indian context and vindicates Justice Thomas’ (misplaced) defence of ‘<em>dissatisfied platform users</em>’. Legally, such claims perhaps have a better chance of succeeding here, since the expansive protection granted to intermediaries via Section 230 of the CDA, is noticeably absent in India. Instead, intermediaries are bound by conditional immunity, where availment of a ‘safe harbour’, i.e., exemption from liability, is contingent on fulfilment of statutory conditions, made under <a href="https://indiankanoon.org/doc/844026/"><span class="s2">section 79</span></a> of the Information Technology (IT) Act and the rules made thereunder. Interestingly, in his opinion, Justice Thomas had briefly visited a situation where the immunity under Section 230 was made conditional: to gain Good Samaritan protection, platforms might be induced to ensure specific conditions, including ‘nondiscrimination’. This is controversial (and as commentators have noted, <a href="https://www.lawfareblog.com/justice-thomas-gives-congress-advice-social-media-regulation"><span class="s2">wrong</span></a>), since it had the potential to whittle down the US' ‘broad immunity’ model of intermediary liability to a system that would resemble the Indian one.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">It is worth noting that in the newly issued Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, proviso to Rule 3(1)(d) allows for “<em>the removal or disabling of access to any information, data or communication link [...] under clause (b) on a voluntary basis, or on the basis of grievances received under sub-rule (2) [...]</em>” without dilution of statutory immunity. This does provide intermediaries a right to exclude, albeit limited, since its scope is restricted to content removed under the operation of specific sub-clauses within the rules, as opposed to Section 230, which is couched in more general terms. Of course, none of this precludes the government from further prescribing obligations similar to those prayed in the petition.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">On the other hand, it is a difficult proposition to support that Twitter’s right to exclusion should be circumscribed by the Constitution, as prayed. In the petition, this argument is built over the judgment in <a href="https://indiankanoon.org/doc/110813550/"><span class="s2"><em>Shreya Singhal v Union of India</em></span></a>, where it was held that takedowns under section 79 are to be done only on receipt of a court order or a government notification, and that the scope of the order would be restricted to Article 19(2). This, in his opinion, meant that “<em>any suo-motu takedown of material by intermediaries must conform to Article 19(2)</em>”.</span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">To understand why this argument does not work, it is important to consider the context in which the <em>Shreya Singhal </em>judgment was issued. Previously, intermediary liability was governed by the Information Technology (Intermediaries Guidelines) Rules, 2011 issued under section 79 of the IT Act. Rule 3(4) made provisions for sending takedown orders to the intermediary, and the prerogative to send such orders was on ‘<em>an affected person</em>’. On receipt of these orders, the intermediary was bound to remove content and neither the intermediary nor the user whose content was being censored, had the opportunity to dispute the takedown.</span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">As a result, the potential for misuse was wide-open. Rishabh Dara’s <a href="https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf"><span class="s2">research</span></a> provided empirical evidence for this; intermediaries were found to act on flawed takedown orders, on the apprehension of being sanctioned under the law, essentially chilling free expression online. The <em>Shreya Singhal</em> judgment, in essence, reined in this misuse by stating that an intermediary is legally obliged to act <em>only when </em>a takedown order is sent by the government or the court. The intent of this was, in the court’s words: “<em>it would be very difficult for intermediaries [...] to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not.</em>”<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p5"><span class="s1">In light of this, if Hegde’s petition succeeds, it would mean that intermediaries would now be obligated to subsume the entirety of Article 19(2) jurisprudence in their decision-making, interpret and apply it perfectly, and be open to petitions from users when they fail to do so. This might be a startling undoing of the court’s original intent in <em>Shreya Singhal</em>. Such a reading also means limiting an intermediary’s prerogative to remove speech that may not necessarily fall within the scope of Article 19(2), but is still systematically problematic, including unsolicited commercial communications. Further, most platforms today are dealing with an unprecedented spread and consumption of harmful, misleading information. Limiting their right to exclude speech in this manner, we might be <a href="https://www.hoover.org/sites/default/files/research/docs/who-do-you-sue-state-and-platform-hybrid-power-over-online-speech_0.pdf"><span class="s2">exacerbating</span></a> this problem. <span class="Apple-converted-space"> </span></span></p>
<h3><span class="s1">Government-controlled spaces on social media platforms</span></h3>
<p class="p2"><span class="s1">On the other hand, the original finding of the Court of Appeals, regarding the public nature of an elected representative’s social media account and First Amendment rights of the people to access such an account, might yet still prove instructive for India. While the primary SCOTUS order erases the precedential weight of the original case, there have been similar judgments issued by other courts in the USA, including by the <a href="https://globalfreedomofexpression.columbia.edu/cases/davison-v-randall/"><span class="s2">Fourth Circuit</span></a> court and as a result of a <a href="https://knightcolumbia.org/content/texas-attorney-general-unblocks-twitter-critics-in-knight-institute-v-paxton"><span class="s2">lawsuit</span></a> against a Texas Attorney General.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p4"><span class="s1">A similar situation can be envisaged in India as well. The Supreme Court has <a href="https://indiankanoon.org/doc/591481/"><span class="s2">repeatedly</span></a> <a href="https://indiankanoon.org/doc/27775458/"><span class="s2">held</span></a> that Article 19(1)(a) encompasses not just the right to disseminate information, but also the right to <em>receive </em>information, including <a href="https://indiankanoon.org/doc/438670/"><span class="s2">receiving</span></a> information on matters of public concern. Additionally, in <a href="https://indiankanoon.org/doc/539407/"><span class="s2"><em>Secretary, Ministry of Information and Broadcasting v Cricket Association of Bengal</em></span></a>, the Court had held that the right of dissemination included the right of communication through any media: print, electronic or audio-visual. Then, if we assume that government-controlled spaces on social media platforms, used in dissemination of official functions, are ‘public spaces’, then the government’s denial of public access to such spaces can be construed to be a violation of Article 19(1)(a).<span class="Apple-converted-space"> </span></span></p>
<h2><span class="s1">Conclusion</span></h2>
<p class="p2"><span class="s1">As indicated earlier, despite the facts of the two litigations being different, the legal questions embodied within converge startlingly, inasmuch that are both examples of the growing discontent around the power wielded by social media platforms, and the flawed attempts at fixing it.<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">While the above discussion might throw some light on the relationship between an individual, the state and social media platforms, many questions still continue to remain unanswered. For instance, once we establish that users have a fundamental right to access certain spaces within the social media platform, then does the platform have a right to remove that space altogether? If it does so, can a constitutional remedy be made against the platform? Initial <a href="https://indconlawphil.wordpress.com/2018/07/01/guest-post-social-media-public-forums-and-the-freedom-of-speech/"><span class="s2">commentary</span></a> on the Court of Appeals’ decision had contested that the takeaway from that judgment had been that constitutional norms had a primacy over the platform’s own norms of governance. In such light, would the platform be constitutionally obligated to <em>not </em>suspend a government account, even if the content on such an account continues to be harmful, in violation of its own moderation standards?<span class="Apple-converted-space"> </span></span></p>
<p class="p3"><span class="s1"></span></p>
<p class="p2"><span class="s1">This is an incredibly tricky dimension of the law, made trickier still by the dynamic nature of the platforms, the intense political interests permeating the need for governance, and the impacts on users in the instance of a flawed solution. Continuous engagement, scholarship and emphasis on having a human rights-respecting framework underpinning the regulatory system, are the only ways forward.<span class="Apple-converted-space"> </span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space"><br /></span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space">---</span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space"><br /></span></span></p>
<p class="p2"><span class="s1"><span class="Apple-converted-space"></span></span></p>
<p>The author would like to thank Gurshabad Grover and Arindrajit Basu for reviewing this piece. </p>
New intermediary guidelines: The good and the bad
http://editors.cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad
<b>In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about by the Rules, and how they line up with existing principles of best practice in content moderation, among others.</b>
<p>This article originally appeared in the Down to Earth <a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693">magazine</a>. Reposted with permission.</p>
<p>-------</p>
<p>The Government of India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules operate in supersession of the existing intermediary liability rules under the Information Technology (IT) Act, made back in 2011.</p>
<p>These intermediary liability (IL) rules will have a significant impact on our relationships with internet ‘intermediaries’, i.e. gatekeepers and gateways to the internet, including social media platforms and communication and messaging channels.</p>
<p>The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.</p>
<p>These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries has changed significantly.</p>
<p>The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.</p>
<p>Both Twitter and Facebook, for instance, have locked former United States President Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.</p>
<p><strong>What changed for the good?</strong></p>
<p>One of the immediate standouts of these rules is the more granular way in which they aim to approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.</p>
<p>Intermediaries in the directive were treated as ‘mere conduits’: dumb, passive carriers that did not play any active role in the content. While that might have been true of the internet when these laws and rules were first enacted, the internet today looks very different.</p>
<p>Not only is there a diversification of the services offered by these intermediaries, there is also a significant issue of scale, wielded by a few select players, whether through centralisation or the sheer size of their user bases. A broad, general mandate would, therefore, miss many of these nuances, leading to imperfect regulatory outcomes.</p>
<p>The new rules, therefore, envisage three types of entities:</p>
<ul><li>There are ‘intermediaries’ within the traditional, section 2(w) meaning of the IT Act. This is the broad umbrella term for all entities that fall within the ambit of the rules.</li><li>There are ‘social media intermediaries’ (SMIs): entities that enable online interaction between two or more users.</li><li>The rules identify ‘significant social media intermediaries’ (SSMIs): SMIs whose user base exceeds a threshold notified by the Central Government.</li></ul>
<p>The levels of obligations vary based on this hierarchy of classification. For instance, an SSMI is held to a much higher standard of transparency and accountability towards its users. SSMIs would have to publish six-monthly transparency reports, outlining how they dealt with requests for content removal, how they deployed automated tools to filter content, and so on.</p>
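<p>To make the reporting obligation concrete, here is a minimal sketch of how an SSMI might structure and aggregate its removal decisions for such a six-monthly report. All field names and categories below are illustrative assumptions made for the sketch, not terms prescribed by the 2021 Rules.</p>
<pre>
# Minimal sketch of transparency-report aggregation for a hypothetical SSMI.
# Field names and categories are illustrative, not drawn from the 2021 Rules.
from dataclasses import dataclass
from collections import Counter

@dataclass
class RemovalAction:
    source: str          # e.g. "court_order", "government_order", "user_grievance", "automated_tool"
    category: str        # e.g. "impersonation", "nudity", "copyright"
    user_notified: bool  # was the affected user given notice and a chance to contest?

def report(actions: list) -> dict:
    """Aggregate individual removal decisions into the kind of counts a
    six-monthly transparency report could publish."""
    return {
        "total_removals": len(actions),
        "by_source": dict(Counter(a.source for a in actions)),
        "by_category": dict(Counter(a.category for a in actions)),
        "users_notified": sum(a.user_notified for a in actions),
    }

print(report([
    RemovalAction("court_order", "copyright", True),
    RemovalAction("automated_tool", "nudity", False),
]))
</pre>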
<p>I have previously argued that transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorship. Legally mandating them is, then, perhaps a step in the right direction.</p>
<p>Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.</p>
<p>One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were to proactively filter all unlawful content. This was problematic on two counts:</p>
<ul><li>Developments in machine learning technologies are simply not mature enough to make this a possibility, which means there would always be a chance of legitimate and legal content being censored, leading to a general chilling effect on digital expression.</li><li>The technical and financial burden this would impose on intermediaries would have affected competition in the market.</li></ul>
<p>The new rules seem to have lessened this burden: first, by reducing the obligation from being mandatory to being on a best-endeavour basis; and second, by reducing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse imagery (CSAM) and duplicates of content already disabled or removed.</p>
<p>This specificity would be useful for better deployment of such technologies, since previous research has shown that it is considerably easier to train a machine learning tool on a corpus of CSAM or abuse imagery than on more contextual, subjective matters such as hate speech.</p>
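<p>A rough sketch of why this is so: matching uploads against known, already-adjudicated material reduces to a fingerprint lookup, with no contextual judgment involved. To hedge: production systems use robust perceptual hashes (such as PhotoDNA), which tolerate minor edits; the exact cryptographic hash below is a stand-in to keep the example self-contained, and the blocklist is hypothetical.</p>
<pre>
import hashlib

def fingerprint(data: bytes) -> str:
    # Production systems use perceptual hashes (e.g. PhotoDNA) so that a
    # re-encoded or slightly cropped image still matches; SHA-256 here is
    # only a self-contained stand-in.
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of fingerprints of content already adjudicated
# unlawful, or previously removed from the platform.
blocklist = {fingerprint(b"previously-removed content")}

def should_flag(upload: bytes) -> bool:
    # A set lookup: no subjective, contextual call of the kind that
    # hate-speech classification would demand.
    return fingerprint(upload) in blocklist

assert should_flag(b"previously-removed content")
assert not should_flag(b"fresh, unrelated content")
</pre>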
<p><strong>What should go?</strong></p>
<p>That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through a proposed three-tier self-regulatory structure and schedules outlining guidelines for the rating system these entities should deploy.</p>
<p>In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.</p>
<p>It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.</p>
<p>Notably, this entire chapter was missing from the old draft, and introducing it in the final form of the law without due public consultation is problematic.</p>
<p>Part III onwards of the rules, which broadly deals with the regulation of these entities, should therefore be put on hold and opened up for a period of public and stakeholder consultation, to adhere to the true spirit of democratic participation.</p>
<p><em>The author would like to thank Gurshabad Grover for his editorial suggestions. </em></p>
Remove misinformation, but be transparent please!
http://editors.cis-india.org/internet-governance/blog/remove-misinformation-but-be-transparent-please-1
<b>The Covid-19 pandemic has seen an extensive proliferation of misinformation and misleading information on the internet, which in turn has highlighted a heightened need for online intermediaries to promptly and effectively deploy their content removal mechanisms. This blogpost examines how this necessity may affect the best practices of transparency reporting and the obligations of accountability that these online intermediaries owe to their users, and formulates recommendations for preserving information regarding Covid-19 related content removal for future research.</b>
<p>This article first <a class="external-link" href="http://cyberbrics.info/remove-misinformation-but-be-transparent-please/">appeared</a> on the CyberBRICS website. The author would like to thank Gurshabad Grover for his feedback and review.</p>
<h2 dir="ltr">Introduction</h2>
<p dir="ltr">We are living through, to put it mildly, strange times. The ongoing pandemic has pinballed into a humanitarian crisis, revealing and deepening the severe class inequalities that exist today. The crisis has been exacerbated by an ‘infodemic’, as the World Health Organization (WHO) <a href="https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf">notes</a>: a massive abundance of information - occasionally inaccurate - has reduced the general perception of trust and reliability of online sources regarding the disease.</p>
<p dir="ltr">As a response to this phenomenon, in March, the Ministry of Electronics and Information Technology (MeitY) issued an <a href="https://meity.gov.in/writereaddata/files/advisory_to_curb_false_news-misinformation_on_corona_virus.pdf">advisory</a> to all social media platforms, asking them to “take immediate action to disable/remove [misinformation on Covid-19] hosted on their platforms on priority basis.” This advisory comes at a time when several prominent online platforms, including <a href="https://gadgets.ndtv.com/internet/news/google-india-announces-steps-to-help-combat-covid-19-misinformation-2211357">Google</a>, <a href="https://blog.twitter.com/en_us/topics/company/2020/An-update-on-our-continuity-strategy-during-COVID-19.html">Twitter</a> and <a href="https://about.fb.com/news/2020/03/combating-covid-19-misinformation/">Facebook</a> are also voluntarily stepping up to remove ‘harmful’ and misleading content relating to the pandemic. In the process, these intermediaries have started to increasingly rely on automated tools to carry out these goals, since their human moderator teams had to be sent home on lockdown norms. </p>
<p dir="ltr">While the intention behind these decisions is understandable, one must wonder how this new-found speed to remove content, prompted by the bid to rid the social media space of ‘fake news’ may affect the best practices of transparency reporting and obligations of accountability that these online intermediaries owe to their users. In this piece, we explore these issues in a little more detail. </p>
<h2 dir="ltr">What is transparency reporting? </h2>
<p dir="ltr">Briefly speaking, transparency reports, in the context of online intermediaries and social media companies, are periodic (usually annual or half-yearly) reports that map different policy enforcement decisions the company has taken regarding, among other things, surveillance and censorship. These decisions are either carried out unilaterally by the company, by third-party notices (in case of content that is infringing copyright, for instance), or at the behest of state authorities. For instance, Google’s <a href="https://transparencyreport.google.com/?hl=en">page</a> on transparency reporting describes the process as “[s]haring data that sheds light on how the policies and actions of governments and corporations affect privacy, security, and access to information.”x</p>
<p dir="ltr">To gauge the importance of transparency reporting in today’s age of the internet, it is perhaps potent to consider their history. In the beginning of the past decade, Google was one of the only online intermediaries <a href="https://transparencyreport.google.com/user-data/overview?hl=en&user_requests_report_period=series:requests,accounts;authority:IN;time:&lu=user_requests_report_period">providing</a> any kind of information regarding government requests for user data, or requests for removal of content. </p>
<p dir="ltr">Then, in 2013, the Snowden Leaks happened. This was a watershed moment in the internet’s history, inasmuch as it displayed that these online intermediaries were often excessively pliant with government requests for user information, <a href="https://www.forbes.com/sites/kashmirhill/2013/11/14/silicon-valley-data-handover-infographic/#25de6ae45365">allowing</a> them backdoor surveillance access. Of course, all of these companies denied these allegations. </p>
<p dir="ltr">However, from this moment onwards, online intermediaries began to roll out transparency reports in a bid to fix their damaged goodwill, and till last year, it was <a href="https://cis-india.org/internet-governance/files/A%20collation%20and%20analysis%20of%20government%20requests%20for%20user%20data%20%20and%20content%20removal%20from%20non-Indian%20intermediaries%20.pdf">noted</a> that these reports continued to be more detailed, at least in the context of data and content related to users located in the US. A notable exception to this rule was the tech giant Amazon, whose <a href="https://www.amazon.com/gp/help/customer/display.html?nodeId=GYSDRGWQ2C2CRYEF">reports</a> are essentially a PDF document of three pages, with no nuance regarding any of the verticals mentioned. </p>
<p dir="ltr">Done well, these reports are invaluable sources of information about things like the number of legal takedowns effectuated by the intermediary, the number of times the government asked for user information from the intermediary for law enforcement purposes, and so on. This in turn becomes a useful way of measuring the breadth of government and private censorship and surveillance. For instance, this <a href="https://indianexpress.com/article/india/govt-emergency-requests-to-facebook-for-user-data-more-than-double-in-2019-6407110/">report</a> shows that the government emergency reports sent to Facebook have doubled since 2019, which is concerning, since it is not clear what does the company mean by an ‘emergency’ request, and whether its understanding matches up with that provided under the Indian <a href="https://cis-india.org/internet-governance/resources/it-procedure-and-safeguards-for-interception-monitoring-and-decryption-of-information-rules-2009">law</a>. Which means that it becomes difficult, in turn, to ascertain the nature of information that the company is handing over to the government. </p>
<h3 dir="ltr">Best practices and where to find them</h3>
<p dir="ltr">While transparency reports are great repositories to gauge the breadth of government censorship and surveillance, one early challenge has been the lack of standardized reporting. Since these reports were mostly autonomous initiatives by online intermediaries, each of them had taken their own forms. This in turn, had made any comparison between them difficult.</p>
<p dir="ltr">This has since been addressed by a number of organizations, including Electronic Frontier Foundation (<a href="https://www.eff.org/wp/who-has-your-back-2019">EFF</a>), <a href="https://www.newamerica.org/oti/reports/transparency-reporting-toolkit-content-takedown-reporting/">New America</a> and <a href="https://www.accessnow.org/transparency-reporting-index/">Access Now</a>, all creating their own metrics for measuring transparency reports. More definitively, in the context of content removal in 2018, a group of academicians, organizations and experts had collaborated to form the ‘<a href="https://santaclaraprinciples.org/">Santa Clara Principles on Transparency and Accountability in Content Moderation</a>’ which have since received the <a href="https://santaclaraprinciples.org/open-letter/">endorsement</a> of around seventy human rights groups. Taken together, these standards and methodologies of analysing transparency reports present a considerable body of work, against which content removals can be mapped.</p>
<h2 dir="ltr">Content takedown in the time of pandemic</h2>
<p dir="ltr">In some of our previous research, we have <a href="https://cis-india.org/internet-governance/blog/torsha-sarkar-november-30-2019-a-deep-dive-into-content-takedown-timeframes">argued</a> how the speed of removal, or the time taken by an intermediary to remove ‘unlawful’ content, says nothing about the accuracy of the said action. Twitter, for instance, can say that it took some <a href="https://transparency.twitter.com/en/twitter-rules-enforcement.html">‘action’</a> against 584,429 reports of hateful conduct for a specified period; this does not always mean that all the action it took was accurate, or fair, since very little publicly available information is there to comprehensively gauge how effective or accurate are the removal mechanisms deployed by these intermediaries. The heightened pressure to deal with harmful content related to the pandemic, can contribute further to one, removal of perfectly legitimate content (as <a href="https://www.theverge.com/2020/3/17/21184445/facebook-marking-coronavirus-posts-spam-misinformation-covid-19">examples</a> from Facebook shows, and as YouTube has <a href="https://youtube-creators.googleblog.com/2020/03/protecting-our-extended-workforce-and.html">warned</a> in blogs), and two, towards increasing and deepening the information asymmetry regarding accurate data around removals. </p>
<p dir="ltr">Given the diverse nature of misinformation and conspiracy theories relating to the pandemic currently present on the internet, this offers a critical time to <a href="https://cdt.org/insights/covid-19-content-moderation-research-letter/">study</a> the relation between online information and the outcomes of a public health crisis. However, these efforts stand to be thwarted if reliable information around removals relating to the pandemic continue to be unavailable. </p>
<h3 dir="ltr">How to map removals in these times?</h3>
<p dir="ltr">One, as the industry body IAMAI <a href="https://www.medianama.com/wp-content/uploads/PR_social-media_7-April3.pdf">notes</a>, while positive, collaborative steps between social media companies and the government to curb misinformation are welcome, any form of takedown at the behest of the state must take the correct legal path, as mandated by the provisions of the Information Technology (IT) Act. Additionally, all information regarding content takedowns to remove fake news related to Covid-19 must be <a href="https://cdt.org/insights/covid-19-content-moderation-research-letter/">preserved and collected</a> separately by these companies, and subsequently represented in their transparency reports. </p>
<p dir="ltr">Two, if the recent case of Twitter fact-checking Donald Trump’s tweet on electoral ballots is any indication, an online intermediary’s suo motu enforcement of its internal speech norms may take different shapes, apart from the usual takedown/leave up binary, including fact-checking and showing warning labels for conspiratorial content (<a href="https://about.fb.com/news/2020/04/covid-19-misinfo-update/">Facebook</a> for instance, has taken to adopt measures that would connect verified sources of information to users interacting with Covid-19 related misinformation). Accordingly, information regarding these additional measures must be mapped, including the efficacy of these steps, and should be presented in the transparency reports. </p>
<p dir="ltr">Additionally, several of these companies have stepped up to <a href="https://www.eff.org/deeplinks/2020/05/santa-clara-principles-during-covid-19-more-important-ever">use</a> automated moderation tools and systems for quick response against the spread of disinformation on their platforms. However, as YouTube’s Creator Blog <a href="https://youtube-creators.googleblog.com/2020/03/protecting-our-extended-workforce-and.html">warns</a> its users, some of these removals may be erroneous, and the users would accordingly have to appeal these decisions. Therefore, while information regarding removals prompted by the use of these tools must be preserved, and represented separately, these numbers should also be expanded to include the error rates of these automated tools, and the rate at which posts removed by error are reinstated. </p>
<p dir="ltr">Three, as previous research on transparency reporting has <a href="https://cis-india.org/internet-governance/blog/torsha-sarkar-suhan-s-and-gurshabad-grover-october-30-2019-through-the-looking-glass">shown</a>, there is a substantive bridge between the information provided by these companies for users based in the US, and those based out of other countries. This is problematic on several counts. Due to the <a href="https://cis-india.org/internet-governance/blog/content-takedown-and-users-rights-1">expansive</a> <a href="https://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">issues</a> with the laws relating to content removal in India, this inadequate representation of information makes it impossible to gauge the practical ramifications of the opaque legal system, and accordingly, makes reforms difficult. In the current times, this lack of information may also paint an imperfect picture of government censorship. After all, the Indian government has, on multiple occasions, the dubious reputation of sending <a href="https://drive.google.com/drive/folders/1VqH8KzgTtbvF8jT2rtuhgrrOgph9XvCT">flawed</a> legal takedown notices and <a href="https://cpj.org/blog/2019/10/india-opaque-legal-process-suppress-kashmir-twitter.php">forcing</a> intermediaries to censor content nevertheless. </p>
<p dir="ltr">Therefore, this continued refusal to provide more nuanced information in the context of India would continue to facilitate these practices, and only increase the breadth of censorship of digital expression. </p>
<p dir="ltr">While the need to remove harmful information from social media platforms in this stage of the crisis might be necessary, such need must not circumvent the adherence to the minimum standards of transparency and accountability. If the Snowdean leaks are any indication, online companies can be made to change their policies during watershed moments in history. The current Covid-19 crisis is one such moment, both offline and online, and the need is more pressing than ever, for these companies to step up and do better. </p>
<p dir="ltr">Shared under Creative Commons BY-SA 4.0 license</p>
Why should we care about takedown timeframes?
http://editors.cis-india.org/internet-governance/blog/why-should-we-care-about-takedown-timeframes
<b>The issue of the content takedown timeframe - the time period an intermediary is allotted to respond to a legal takedown order - has received considerably less attention in conversations about intermediary liability. This article examines the importance of framing an appropriate timeframe to ensure that speech online is not over-censored, and frames recommendations towards the same.</b>
<p><em>This article first <a class="external-link" href="https://cyberbrics.info/why-should-we-care-about-takedown-timeframes/">appeared</a> in the CyberBRICS website. It has since been <a class="external-link" href="https://www.medianama.com/2020/04/223-content-takedown-timeframes-cyberbrics/">cross-posted</a> to the Medianama.</em></p>
<p><em>The findings and opinions expressed in this article are derived from the larger research report 'A deep dive into content takedown timeframes', which can be accessed <a class="external-link" href="https://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames">here</a>.</em></p>
<p><strong>Introduction</strong></p>
<p>Since the Ministry of Electronics and Information Technology (MeitY) proposed the draft amendments to the intermediary liability guidelines in December 2018, speculation regarding their potential effects has been abundant. This has included <a class="external-link" href="http://www.medianama.com/2020/01/223-traceability-accountability-necessary-intermediary-liability/">mapping</a> the requirement of traceability of originators vis-a-vis its chilling effect on free speech online, and <a class="external-link" href="http://cyberbrics.info/rethinking-the-intermediary-liability-regime-in-india/">critiquing</a> the proactive filtering requirement as potentially leading to censorship.</p>
<p>One aspect, however, that has received less attention is encoded within Rule 3(8) of the draft amendments. By virtue of that rule, the time limit given to intermediaries to respond to a legal content takedown request (“turnaround time”) has been reduced from 36 hours (as it was in the older version of the rules) to 24 hours. In essence, intermediaries, when faced with a takedown order from the government or a court, would now have to remove the content concerned within 24 hours of receipt of the notice.</p>
<p>Why is this important? Consider this: the <a class="external-link" href="http://indiacode.nic.in/bitstream/123456789/1999/3/A2000-21.pdf">definition</a> of an ‘intermediary’ within the Indian law encompasses a vast array of entities – cyber cafes, online marketplaces, internet service providers and more. Governance of intermediary liability norms would accordingly require varying levels of regulation, recognizing the different compositions of these entities. In that light, the content takedown requirement, and specifically the turnaround time, becomes problematic. Not only would many of the entities within the definition of intermediaries probably find it impossible to implement this obligation due to their technical architecture, the obligation also erases the nuances among the entities that would actually fall within its scope.</p>
<p>Each category of online content, and more importantly, each category of intermediary, is different, and any content takedown requirement must appreciate these differences. A smaller intermediary may find it more difficult than an incumbent to adhere to a stricter, shorter timeframe. A piece of ‘terrorist’ content may need to be treated with more urgency than something that is defamatory. These contextual cues are critical, and must accordingly be incorporated into any law on content takedown.</p>
<p>While making our submissions on the draft amendments, we found that there was a lack of research from the government’s side justifying the shortened turnaround time, nor was there any literature that focused on turnaround timeframes as a critical point of regulation of intermediary liability. Accordingly, I share some findings from our research in the subsequent sections, which throw light on certain nuances that must be considered before proposing any content takedown timeframe. It is important to note that our research has not yet found what an appropriate turnaround time should be in a given situation. However, the following findings will hopefully start a preliminary conversation that may ultimately lead us to the right answer.</p>
<p><strong>What to consider when regulating takedown time-frames?</strong></p>
<p>I classify the findings from our research into a chronological sequence: a) broad legal reforms, b) correct identification of the scope and extent of the law, c) institution of proper procedural safeguards, and d) post-facto review of the timeframe for evidence-based policy-making.</p>
<p><em>1. Broad legal reforms: Harmonize the law on content takedown.</em></p>
<p>The Indian law on content takedown is administered through two different provisions of the Information Technology (IT) Act, each with its own legal procedure and scope. While the 24-hour turnaround time would be applicable to the procedure under one of them, there would continue to <a class="external-link" href="http://cis-india.org/internet-governance/resources/information-technology-procedure-and-safeguards-for-blocking-for-access-of-information-by-public-rules-2009">exist</a> a completely different legal procedure under which the government could still effectuate content takedown. Under the latter, intermediaries would be given a 48-hour timeframe to respond to a government request with clarifications (if any).</p>
<p>Such differing procedures contribute to a confusing legal ecosystem surrounding content takedown, leading to arbitrary ways in which Indian users experience internet censorship. Accordingly, it is important to harmonize the existing law so that procedures and safeguards are seamless, and the regulatory process of content takedown is streamlined.</p>
<p><em>2. Correct identification of scope and extent of the law: Design a liability framework on the basis of the differences in the intermediaries, and the content in question.</em></p>
<p>As I have highlighted before, regulation of illegal content online cannot be <a class="external-link" href="https://blog.mozilla.org/netpolicy/2018/07/11/sustainable-policy-solutions-for-illegal-content/">one-size-fits-all</a>. Accordingly, a good law on content takedown must account for the nuances in the way intermediaries operate and for the diversity of speech online. More specifically, two levels of classification are critical.</p>
<p><em>One</em>, the law must make a fundamental classification between the intermediaries within the scope of the law. An obligation to remove illegal content can be implemented only by those entities whose technical architecture allows them to. While a search engine would be able to delink websites that are declared ‘illegal’, it would be absurd to expect a cyber cafe to follow a similar route of responding to a legal takedown order within a specified timeframe.</p>
<p>Therefore, one basis of classification must incorporate this difference in the technical architecture of these intermediaries. Apart from this, the law must also design liability for intermediaries on the basis of their user-base, annual revenue generated, and the reach, scope and potential impact of the intermediary’s actions.</p>
<p><em>Two, </em>it is important that the law recognizes that certain types of content require more urgent treatment than others. Several regulations across jurisdictions, including the NetzDG and the EU Regulation on Preventing the Dissemination of Terrorist Content Online, while problematic on their own counts, attempt either to limit their scope of application or to frame liability based on the nature of the content targeted.</p>
<p>The Indian law, on the other hand, encompasses within its scope a vast, varying array of ‘illegal’ content, which includes, on one hand, critical items like threats to ‘the sovereignty and integrity of India’ and, on the other, more subjective speech elements like ‘decency or morality’. While an expedited timeframe may be permissible for the former category of speech, it is difficult to justify the same for the latter: more contextual judgment may be needed to assess the legality of content alleged to be defamatory or obscene, making a shorter timeframe for such content problematic.</p>
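<p>A gradated rule of this kind is simple to express. The sketch below is purely illustrative: the categories and the hour values are invented for the example; only the uniform 24-hour deadline in draft Rule 3(8), discussed above, comes from the text.</p>
<pre>
# Illustrative turnaround times per content category, in hours. The values
# are assumptions for the sketch, not figures from the IT Act or the draft
# amendments, which propose a uniform 24 hours for everything.
TURNAROUND_HOURS = {
    "sovereignty_and_integrity": 6,  # expedited: high urgency, low ambiguity
    "csam": 6,
    "defamation": 72,                # contextual judgment needed
    "obscenity": 72,
}

def deadline_hours(category: str) -> int:
    # Categories not singled out fall back to a moderate default window.
    return TURNAROUND_HOURS.get(category, 48)

print(deadline_hours("csam"))        # 6
print(deadline_hours("defamation"))  # 72
</pre>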
<p><em>3. Institution of proper procedural safeguards: Make notices mandatory and make sanctions gradated</em>.</p>
<p>Apart from the correct identification of scope and extent, it is important that there are sufficient procedural safeguards to ensure that the interests of intermediaries and users are not curtailed. While these may seem ancillary to the main point, how the law chooses to legislate on these issues (or does not) has a direct bearing on the issue of content takedown and timeframes.</p>
<p>Firstly, while the Indian law mandates content takedown, it does not mandate a process through which a user is notified of such action being taken. The mere fact that an incumbent intermediary is able to respond to removal notifications within a specified timeframe does not imply that its actions have no ramifications for free speech. The ability to take down content does not translate into accuracy of the action taken, and the Indian law fails to take this into account.</p>
<p>Therefore, an additional obligation of informing users when their content has been taken down institutes due process in the procedure. In the context of legal takedowns, such notice mechanisms also <a class="external-link" href="http://www.eff.org/wp/who-has-your-back-2019">empower</a> users to draw attention to government censorship and targeting.</p>
<p>Secondly, a uniform timeframe of compliance, coupled with severe sanctions, distorts competition to the detriment of smaller intermediaries. While the current law does not clearly elaborate upon the nature of the sanctions that would be imposed, general principles of the doctrine of safe harbour dictate that upon failure to remove the content, the intermediary would be subject to the same level of liability as the person uploading the content. This threat of sanctions may have adverse effects on free speech online, resulting in potential <a class="external-link" href="http://cis-india.org/internet-governance/intermediary-liability-in-india.pdf">over-censorship</a> of legitimate speech.</p>
<p>Accordingly, sanctions should be restricted to instances of systematic violation. For critical content, the contours of what constitutes systematic violation may differ. The regulator must accordingly take into account the nature of the content which the intermediary failed to remove while assessing its liability.</p>
<p><em>4. Post-facto review of the time-frame for evidence based policy-making: Mandate transparency reporting.</em></p>
<p>Transparency reporting, apart from ensuring accountability for intermediary action, is also a useful tool for understanding the impact of the law, specifically in relation to the time period of response. The NetzDG, for all its criticism, has received <a class="external-link" href="https://www.article19.org/wp-content/uploads/2017/09/170901-Legal-Analysis-German-NetzDG-Act.pdf">support</a> for requiring intermediaries to produce bi-annual transparency reports. These reports provide important insight into the efficacy of any proposed turnaround time, which in turn helps us propose more nuanced reforms to the law.</p>
<p>However, to draw the optimal amount of information from these reports, it is important that reporting practices are standardized. There exists an international body of work proposing methodologies for standardizing transparency reports, including the Santa Clara Principles and the Electronic Frontier Foundation’s (EFF) ‘Who has your back?’ reports. We have also previously proposed a methodology that utilizes some of these pointers.</p>
<p>Additionally, due to the experimental nature of the provision, including a review clause in the law would ensure that the efficacy of the exercise can be periodically assessed. If the discussion in the preceding section is any indication, the issue of an appropriate turnaround time is currently in regulatory flux, with no correct answer. In such a scenario, periodic assessments compel policymakers and stakeholders to discuss the effectiveness of solutions and the nature of the problems faced, leading to <a class="external-link" href="http://www.livemint.com/Opinion/svjUfdqWwbbeeVzRjFNkUK/Making-laws-with-sunset-clauses.html">evidence-based</a> policymaking.</p>
<p><strong>Why should we care?</strong></p>
<p>There is a lot at stake in regulating any aspect of intermediary liability, and a lack of smart policy-making may end up harming the interests of any of the stakeholder groups involved. As the submissions on the draft amendments by various civil society and industry groups show, the updated turnaround time suffers from issues which, if not addressed, may lead to over-removal and a lack of due process in the content removal procedure.</p>
<p>Among others, these submissions pointed out that the shortened timeframe does not allow intermediaries sufficient time to scrutinize a takedown request to ensure that all technical and legal requirements are adhered to. This, in turn, may also prompt third-party action against users’ content. Additionally, the significantly short timeframe raises several implementational challenges. For smaller companies with fewer employees, such a timeframe can be burdensome from both a financial and a capability point of view. This, in turn, may result in over-censorship of speech online.</p>
<p>Failing to recognize and incorporate contextual nuances into any law on intermediary liability, therefore, may critically alter the way we interact with online intermediaries and, in a larger scheme, with the internet.</p>