The Centre for Internet and Society
http://editors.cis-india.org
These are the search results for the query, showing results 1 to 13.
World Press Freedom Day 2017
http://editors.cis-india.org/internet-governance/news/world-press-freedom-day-2017
<b>Udbhav Tiwari represented the Centre for Internet & Society at the World Press Freedom Day event organised by UNESCO and the Digital Empowerment Foundation (DEF) at UNESCO House, New Delhi on May 3, 2017.</b>
<p class="gmail-m_1334623882080896793moz-forward-container" style="text-align: justify; ">The event had the release of two reports, one on Violence against Journalists in South Asia and one of Internet Shutdowns in India, with a panel accompanying the last one. The panel was quite interesting, with perspectives from Osama Manzar and a Editor from The Hoot standing out in particular about how social media websites are being used for rapid response governance and how these bans negatively affect those attempts. The agenda for the event is attached to this email.</p>
<p class="gmail-m_1334623882080896793moz-forward-container" style="text-align: justify; "><a class="external-link" href="http://cis-india.org/internet-governance/files/human-rights-versus-national-security.pdf">Click to read</a> about the Internet Shutdown report from the event.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/world-press-freedom-day-2017'>http://editors.cis-india.org/internet-governance/news/world-press-freedom-day-2017</a>
</p>
Publisher: none · Author: praskrishna · Tags: Freedom of Speech and Expression, Internet Freedom, Internet Governance · Published: 2017-05-20 · Type: News Item

To preserve freedoms online, amend the IT Act
http://editors.cis-india.org/internet-governance/blog/hindustan-times-april-16-2019-gurshabad-grover-to-preserve-freedoms-online-amend-it-act
<b>Look into the mechanisms that allow the government and ISPs to carry out online censorship without accountability.</b>
<p style="text-align: justify; ">The article by Gurshabad Grover was published in the <a class="external-link" href="https://www.hindustantimes.com/analysis/to-preserve-freedoms-online-amend-the-it-act/story-aC0jXUId4gpydJyuoBcJdI.html">Hindustan Times</a> on April 16, 2019.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">The issue of blocking of websites and online services in India has gained much deserved traction after internet users reported that popular services like Reddit and Telegram were inaccessible on certain Internet Service Providers (ISPs). The befuddlement of users calls for a look into the mechanisms that allow the government and ISPs to carry out online censorship without accountability.</p>
<p style="text-align: justify; ">Among other things, Section 69A of the Information Technology (IT) Act, which regulates takedown and blocking of online content, allows both government departments and courts to issue directions to ISPs to block websites. Since court orders are in the public domain, it is possible to know this set of blocked websites and URLs. However, the process is much more opaque when it comes to government orders.</p>
<p style="text-align: justify; ">The Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009, issued under the Act, detail a process entirely driven through decisions made by executive-appointed officers. Although some scrutiny of such orders is required normally, it can be waived in cases of emergencies. The process does not require judicial sanction, and does not present an opportunity of a fair hearing to the website owner. Notably, the rules also mandate ISPs to maintain all such government requests as confidential, thus making the process and complete list of blocked websites unavailable to the general public.</p>
<p style="text-align: justify; ">In the absence of transparency, we have to rely on a mix of user reports and media reports that carry leaked government documents to get a glimpse into what websites the government is blocking. Civil society efforts to get the entire list of blocked websites have repeatedly failed. In response to the Right to Information (RTI) request filed by the Software Freedom Law Centre India in August 2017, the Ministry of Electronics and IT refused to provide the entire of list of blocked websites citing national security and public order, but only revealed the number of blocked websites: 11,422.</p>
<p style="text-align: justify; ">Unsurprisingly, ISPs do not share this information because of the confidentiality provision in the rules. A 2017 study by the Centre for Internet and Society (CIS) found all five ISPs surveyed refused to share information about website blocking requests. In July 2018, the Bharat Sanchar Nagam Limited rejected the RTI request by CIS which asked for the list of blocked websites.</p>
<p style="text-align: justify; ">The lack of transparency, clear guidelines, and a monitoring mechanism means that there are various forms of arbitrary behaviour by ISPs. First and most importantly, there is no way to ascertain whether a website block has legal backing through a government order because of the aforementioned confidentiality clause. Second, the rules define no technical method for the ISPs to follow to block the website. This results in some ISPs suppressing Domain Name System queries (which translate human-parseable addresses like ‘example.com’ to their network address, ‘93.184.216.34’), or using the Hypertext Transfer Protocol (HTTP) headers to block requests. Third, as has been made clear with recent user reports, users in different regions and telecom circles, but serviced by the same ISP, may be facing a different list of blocked websites. Fourth, when blocking orders are rescinded, there is no way to make sure that ISPs have unblocked the websites. These factors mean that two Indians can have wildly different experiences with online censorship.</p>
<p style="text-align: justify; ">Organisations like the Internet Freedom Foundation have also been pointing out how, if ISPs block websites in a non-transparent way (for example, when there is no information page mentioning a government order presented to users when they attempt to access a blocked website), it constitutes a violation of the net neutrality rules that ISPs are bound to since July 2018.</p>
<p style="text-align: justify; ">While the Supreme Court upheld the legality of the rules in 2015 in Shreya Singhal vs. Union of India, recent events highlight how the opaque processes can have arbitrary and unfair outcomes for users and website owners. The right to access to information and freedom of expression are essential to a liberal democratic order. To preserve these freedoms online, there is a need to amend the rules under the IT Act to replace the current regime with a transparent and fair process that makes the government accountable for its decisions that aim to censor speech on the internet.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/hindustan-times-april-16-2019-gurshabad-grover-to-preserve-freedoms-online-amend-it-act'>http://editors.cis-india.org/internet-governance/blog/hindustan-times-april-16-2019-gurshabad-grover-to-preserve-freedoms-online-amend-it-act</a>
</p>
Publisher: none · Author: gurshabad · Tags: Freedom of Speech and Expression, IT Act, Internet Governance, Internet Freedom · Published: 2019-04-16 · Type: Blog Entry

Submission to the Facebook Oversight Board: Policy on Cross-checks
http://editors.cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-policy-on-cross-checks
<b>The Centre for Internet & Society (CIS) submitted public comments to the Facebook Oversight Board on a policy consultation.</b>
<h2>Is a cross-check system needed?</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board</strong>: The Board should investigate the cross-check system as part of Meta’s larger problems with algorithmically amplified speech, and how such speech gets moderated.</p>
<p style="text-align: justify;"><strong>Explanation</strong>: The issues surrounding Meta’s cross-check system are not an isolated phenomena, but rather a reflection of the problems of algorithmically amplified speech, as well the lack of transparency in the company’s content moderation processes at large. At the outset, it must be stated that the majority of information on the cross-check system only became available after the media <a href="https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=article_inline">reports</a> published by the Wall Street Journal. While these reports have been extensive in documenting various aspects of the system, there is no guarantee that the disclosures obtained by them provides the complete picture regarding the system. Further, given that Meta has been found to purposely mislead the Board and the public on how the cross-check system operates, it is worth investigating the incentives that necessitate the cross-check system in the first place.</p>
<p style="text-align: justify;">Meta claims that the cross-check system works as a check for false positives: they “employ additional reviews for high-visibility content that may violate our policies.” Essentially they want to make sure that content that stays up on the platform and reaches a large audience, is following their content guidelines. However, previous disclosures have <a href="https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346">proven</a> policy executives have prioritized the company’s ‘business interests’ over removing content that violates their policies; and have <a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang">waited to act on known problematic content</a> until significant external pressure was built up, including in India. In this context, the cross-check system seems less like a measure designed to protect users who might be exposed to problematic content, and more as a measure for managing public perception of the company.</p>
<p style="text-align: justify;">Thus the Board should investigate both how content gains an audience on the platform, and how it gets moderated. Previous <a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang">whistleblower disclosures</a> have shown that the mechanics of algorithmically amplified speech, which prioritizes <a href="https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation/">engagement and growth over safety</a>, are easily taken advantage of by bad actors to promote their viewpoints through artificially induced virality. The cross-check system and other measures of content moderation at scale would not be needed if it was harder to spread problematic content on the platform in the first place. Instead of focusing only on one specific system, the Board needs to urge Meta to re-evaluate the incentives that drive content sharing on the platform and come up with ways that make the platform safer.</p>
<h2 style="text-align: justify;">Meta’s Obligations under Human Rights Law</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board: </strong>The Board must consider the cross-check system to be violative of Meta’s obligations under the International Covenant of Civil and Political Rights (ICCPR). Additionally, the cross-check ranker must be incorporated with Meta’s commitments towards human rights, as outlined in its Corporate Human Rights Policy.</p>
<p style="text-align: justify;">Explanation: Meta’s content moderation, and by extension, its cross-check system, is bound by both international human rights law as well as the Board’s past decisions. At the outset, The system fails the three-pronged test of legality, legitimacy and necessity and proportionality, as delineated under Article 19(3) of the International Covenant of Civil and Political Rights (ICCPR). Firstly, this system has been “<a href="https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353?mod=article_inline">scattered throughout the company, without clear governance or ownership</a>”, which violates the legality principle, since there is no clear guidance on what sort of speech, or which classes of users, would deserve the treatment of this system. Secondly, there is no understanding about the legitimacy of aims with which this system had been set up in the first place, beyond Meta’s own assertions, which have been <a href="https://www.oversightboard.com/news/215139350722703-oversight-board-demands-more-transparency-from-facebook/">countered</a> by evidence to the contrary. Thirdly, the necessity and proportionality of the restriction has to be <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">read along</a> with the <a href="https://www.ohchr.org/en/issues/freedomopinion/articles19-20/pages/index.aspx">Rabat Plan of Action</a>, which requires that for a statement to become a criminal offense, a six-pronged test of threshold is to be applied: a) the social and political context, b) the speaker’s position or status in the society, c) intent to incite the audience against a target group, d) content and form of the speech, e) extent of its dissemination and f) likelihood of harm. As news reports have indicated, Meta has been utilizing the cross-check system to privilege speech from influential users, and in the process, have shielded inflammatory, inciting speech that would have otherwise qualified the Rabat threshold. As such, the third requirement is not fulfilled either.</p>
<p style="text-align: justify;">Additionally, Meta’s own <a href="https://about.fb.com/wp-content/uploads/2021/03/Facebooks-Corporate-Human-Rights-Policy.pdf">Corporate Human Rights Policy</a> commits to respecting human rights in line with the UN Guiding Principles on Business and Human Rights (UNGPs). Therefore, the cross-check ranker must incorporate these existing commitments to human rights, including:</p>
<ul>
<li style="text-align: justify;">The right to freedom of expression:, UN Special Rapporteur on freedom of opinion and expression report <a href="https://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/38/35">A/HRC/38/35</a> (2018); <a href="https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=25729&LangID=E">Joint Statement of international freedom of expression monitors on COVID-19 (March, 2020)</a>.</li></ul>
<p style="text-align: justify;">The Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression addresses the regulation of user-generated online content.</p>
<p>The Joint Statement concerns governmental promotion and protection of access to, and the free flow of, information during the pandemic.</p>
<ul>
<li>The right to non-discrimination: International Convention on the Elimination of All Forms of Racial Discrimination (<a href="https://www.ohchr.org/EN/ProfessionalInterest/Pages/CERD.aspx">ICERD</a>), Articles 1 and 4.</li></ul>
<p>Article 1 of the ICERD defines racial discrimination.</p>
<p>Article 4 of the ICERD condemns propaganda and organisations that attempt to justify discrimination or are based on the idea of racial supremacism.</p>
<ul>
<li>Participation in public affairs and the right to vote: ICCPR Article 25.</li>
<li>The right to remedy: General Comment No. 31, Human Rights Committee (2004) (<a href="https://tbinternet.ohchr.org/_layouts/15/treatybodyexternal/Download.aspx?symbolno=CCPR%2fC%2f21%2fRev.1%2fAdd.13&Lang=en">General Comment 31</a>); UNGPs, Principle 22.</li></ul>
<p>The General Comment discusses the nature of the general legal obligation imposed on State Parties to the Covenant.</p>
<p style="text-align: justify;">Guiding Principle 22 states that where business enterprises identify that they have caused or contributed to adverse impacts, they should provide for or cooperate in their remediation through legitimate processes.</p>
<h2>Meta’s obligations to avoid political bias and false positives in its cross-check system</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board: </strong>The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to ensure that it is open about risks to user rights when there is involvement from the State in content moderation. Additionally, the Board must ask Meta to undertake a diversity and human rights audit of its existing policy teams, and commit to regular cultural training for its staff. Finally, the Board must investigate the potential conflicts of interest that arise when Meta’s policy team has any sort of nexus with political parties, and how that might impact content moderation.</p>
<p style="text-align: justify;">Explanation: For the cross-check system to be free from biases, it is important for Meta to come clear to the Board regarding the rationale, standards and processes of the cross check review, and report on the relative error rates of determinations made through cross check compared with ordinary enforcement procedures. It also needs to disclose to the Board in which particular situations it uses the system and in which it does not. Principle 4 under the Foundational Principles of the <a href="https://santaclaraprinciples.org/">Santa Clara Principles on Transparency and Accountability in Content Moderation</a> encourage companies to realize the risk to user rights when there is involvement from the State in processes of content moderation and asks companies to makes users aware that: a) a state actor has requested/participated in an action on their content/account, and b) the company believes that the action was needed as per the relevant law. Users should be allowed access to any rules or policies, formal or informal work relationships that the company holds with state actors in terms of content regulation, the process of flagging accounts/content and state requests to action.</p>
<p style="text-align: justify;">The Board must consider that erroneous lack of action (false positives) might not always be a system's flaw, but a larger, structural issue regarding how policy teams at Meta functions. As previous disclosures have <a href="https://www.wsj.com/articles/facebook-hate-speech-india-politics-muslim-hindu-modi-zuckerberg-11597423346">proven</a>, the contours of what sort of violating content gets to stay up on the platform has been ideologically and politically coloured, as policy executives have prioritized the company’s ‘business interests’ over social harmony. In such light, it is not sufficient to simply propose better transparency and accountability measures for Meta to adopt within its content moderation processes to avoid political bias. Rather, the Board’s recommendations must focus on the structural aspect of the human moderator and policy team that is behind these processes. The Board must ask Meta to a) urgently undertake a diversity and human rights audit of its existing team and its hiring processes, b) commit to regular training to ensure that their policy staffs are culturally literate in the socio-political regions they work in. Further, the Board must seriously investigate the potential <a href="https://time.com/5883993/india-facebook-hate-speech-bjp/">conflicts of interest</a> that happen when regional policy teams of Meta, with nexus to political parties, are also tasked with regulating content from representatives of these parties, and how that impacts the moderation processes at large.</p>
<p style="text-align: justify;">Finally, in case decision <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">2021-001-FB-FBR</a>, the Board made a number of recommendations to Meta which must be implemented in the current situation, including: a) considering the political context while looking at potential risks, b) employment of specialized staff in content moderation while evaluating political speech from influential users, c) familiarity with the political and linguistic context d) absence of any interference and undue influence, e) public explanation regarding the rules Meta uses when imposing sanctions against influential users and f) the sanctions being time-bound.</p>
<h2 style="text-align: justify;">Transparency of the cross-check system</h2>
<p style="text-align: justify;"><strong>Recommendation for the Board: </strong>The Board must urge Meta to adopt and implement the Santa Clara Principles on Transparency and Accountability to increase the transparency of its cross-check system.</p>
<p style="text-align: justify;"><strong>Explanation: </strong>There are ways in which Meta can increase the transparency of not only the cross-check system, but the content moderation process in general. The following recommendations draw from <a href="https://santaclaraprinciples.org/">The Santa Clara Principles</a> and the Board’s own previous decisions:</p>
<p style="text-align: justify;">Considering Principle 2 of the Santa Clara Principles: Understandable Rules and Policies, Meta should ensure that the policies and rules governing moderation of content and user behaviors on Facebook are<strong> clear, easily understandable, and available in the languages</strong> in which the user operates.</p>
<p style="text-align: justify;">Drawing from Principle 5 on Integrity and Explainability and from the Board’s recommendations in case decision <a href="https://www.oversightboard.com/decision/FB-691QAMHJ">2021-001-FB-FBR</a> which advises Meta to“<em>Provide users with accessible information on how many violations, strikes and penalties have been assessed against them, and the consequences that will follow future violations</em>”, Meta should be able to <strong>explain the content moderation decisions to users in all cases</strong>: when under review, when the decision has been made to leave the content up, or take it down. We recommend that Meta keeps a publicly accessible running tally of the number of moderation decisions made on a piece of content till date with their explanations. This would allow third parties (like journalists, activists, researchers and the OSB) to keep Facebook accountable when it does not follow its own policies, as has previously been the case.</p>
<p style="text-align: justify;">In the same case decision, the Board has also previously recommended that Meta “<em>Produce more information to help users understand and evaluate the process and criteria for applying the newsworthiness allowance, including how it applies to influential accounts. The company should also clearly explain the rationale, standards and processes of the cross-check review, and report on the relative error rates of determinations made through cross-checking compared with ordinary enforcement procedures.</em>” Thus, Meta should <strong>publicly explain the cross check system </strong>in detail with examples, and make public the list of attributes that qualify a piece of content for secondary review.</p>
<p style="text-align: justify;">The Operational Principles further provide actionable steps that Meta can take to improve the transparency of their content moderation systems. Drawing from Principle 2: Notice and Principle 3: Appeals, Meta should make a satisfactory <strong>appeals process available </strong>to users - whether they be decisions to leave up or takedown content. The appeals process should be handled by context aware teams. Meta should then <strong>publish the results</strong> of the cross check system and the appeals processes as part of their transparency reports including data like total content actioned, rate of success in appeals and cross check process, decisions overturned and preserved etc, which would also satisfy the first Operational Principle: Numbers.</p>
<h2 style="text-align: justify;">Resources needed to improve the system for users and entities who do not post in English</h2>
<p style="text-align: justify;"><strong>Recommendations for the Board: </strong>The Board must urge Meta to urgently invest in resources to expand Meta’s content moderation services into the local contexts in which the company operates and invest in training data for local languages.</p>
<p style="text-align: justify;"><strong>Explanation: </strong>The cross-check system is not a fundamentally different problem than content moderation. It has been shown time and time again that Meta’s handling of content from non-Western, non-English language contexts is severely lacking. It has been shown how content hosted on the platform has been used to<a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang"> inflame existing tensions in developing countries</a>, <a href="https://www.wsj.com/articles/facebook-services-are-used-to-spread-religious-hatred-in-india-internal-documents-show-11635016354?mod=article_inline">promote religious hatred in India</a>, <a href="https://www.wsj.com/articles/burn-the-houses-rohingya-survivors-recount-the-day-soldiers-killed-hundreds-1526048545?mod=article_inline">genocide in Mynmar</a>, and continue to support <a href="https://www.wsj.com/articles/facebook-drug-cartels-human-traffickers-response-is-weak-documents-11631812953?mod=article_inline">human traffickers and drug cartels</a> on the platform even when these issues have been identified.</p>
<p style="text-align: justify;">There is an urgent need to invest resources to expand Meta’s content moderation services into the local contexts in which the company operates. The company should make all policies and rule documents available in the languages of its users; invest in creating automated tools that are capable of flagging content that is not posted in English; and add people familiar with the local contexts to provide context aware second level reviews. The Facebook Files show that even according to company engineering, <a href="https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184?mod=article_inline">automated content moderation</a> is still not very effective in identifying hate speech and other harmful content. Meta should focus on hiring, training and retaining human moderators who have knowledge of local contexts. Bias training of all content moderators, but especially those who will participate in the second level reviews in the cross check system is also extremely important to ensure acceptable decisions.</p>
<p style="text-align: justify;">Additionally, in keeping with Meta’s human rights commitments, the company should develop and publish a policy for responding to human rights violations when they are pointed out by activists, researchers, journalists and employees as a matter of due process. It should not wait for a negative news cycle to stir them into action <a href="https://www.theguardian.com/technology/2021/apr/12/facebook-fake-engagement-whistleblower-sophie-zhang">as it seems to have done in previous cases</a>.</p>
<h2 style="text-align: justify;">Benefits and limitations of automated technologies</h2>
<p style="text-align: justify;">Meta <a href="https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation%5C">recently changed</a> its moderation practice wherein it uses technology to prioritize content for human reviewers based on their severity index. Facebook <a href="https://transparency.fb.com/policies/improving/prioritizing-content-review/">has not specified</a> the technology it uses to prioritize high-severity content but its research record shows that it <a href="https://ai.facebook.com/blog/the-shift-to-generalized-ai-to-better-identify-violating-content">uses</a> a host of automated <a href="https://ai.facebook.com/tools#frameworks-and-tools">frameworks and tools</a> to detect violating content, including image recognition tools, object detection tools, natural language processing models, speech models and reasoning models. One such model is the <a href="https://ai.facebook.com/blog/community-standards-report/">Whole Post Integrity Embeddings</a> (“WPIE”) which can judge various elements in a given post (caption, comments, OCR, image etc.) to work out the context and the content of the post. Facebook also uses image matching models (SimSearchNet++) that are trained to match variations of an image with a high degree of precision and improved recall; multi-lingual masked language models on cross-lingual understanding such as <a href="https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/">XLM-R</a> that can accurately identify hate-speech and other policy-violating content across a wide range of languages. More recently, Facebook introduced its machine translation model called the <a href="https://analyticsindiamag.com/facebooks-new-machine-translation-model-works-without-help-of-english-data/">M2M-100</a> whose goal is to perform bidirectional translation between 7000 languages.</p>
<p style="text-align: justify;">Despite the advances in this field, there are inherent <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf">limitations</a> of such automated tools. <a href="https://www.theverge.com/2019/2/27/18242724/facebook-moderation-ai-artificial-intelligence-platforms">Experts</a> have repeatedly maintained that AI will get better at understanding context but it will not replace human moderators for the foreseeable future. One such instance where these limitations were <a href="https://www.politico.eu/article/facebook-content-moderation-automation/">exposed</a> was during the COVID-19 pandemic, when Facebook sent its human moderators home - the number of removals flagged as hate speech on its platform more than doubled to 22.5 million in the second quarter of 2020 but the number of successful content appeals was dropped to 12,600 from the 2.3 million figure for the first three months of 2020.</p>
<p style="text-align: justify;"><a href="https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligence-11634338184?mod=article_inline">The Facebook Files</a> show that Meta’s AI cannot consistently identify first-person shooting videos, racist rants and even the difference between cockfighting and car crashes. Its automated systems are only capable of removing posts that generate just 3% to 5% of the views of hate speech on the platform and 0.6% of all content that violates Meta’s policies against violence and incitement. As such, it is difficult to accept the company’s claim that nearly all of the hate speech it takes down was discovered by AI before it was reported by users.</p>
<p style="text-align: justify;">However, the benefits of such technology cannot be discounted, especially when one considers automated technology as a way of reducing <a href="https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona">trauma</a> for human moderators. Using AI for prioritizing content for review can turn out to be effective for human moderators as it can increase their efficiency and reduce harmful effects of content moderation on them. Additionally, it can also limit the exposure of harmful content to internet users. Moreover, AI can also reduce the impact of harmful content on human moderators by allocating content to moderators on the basis of their exposure history. Theoretically, if the company’s claims are to be believed, using automated technology for prioritizing content for review can help to improve the mental health of Facebook’s human moderators.</p>
<hr />
<p>Click to download the file <a class="external-link" href="https://cis-india.org/internet-governance/policy-on-cross-checks">here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-policy-on-cross-checks'>http://editors.cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-policy-on-cross-checks</a>
</p>
Publisher: none · Authors (in alphabetical order): Anamika Kundu, Digvijay Singh, Divyansha Sehgal and Torsha Sarkar · Tags: Freedom of Speech and Expression, Internet Freedom, Facebook, Internet Governance · Published: 2022-02-09 · Type: Blog Entry

Submission to the Facebook Oversight Board in Case 2021-008-FB-FBR: Brazil, Health Misinformation and Lockdowns
http://editors.cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-in-case-2021-008-fb-fbr-brazil-health-misinformation-and-lockdowns
<b>In this note, we answer questions set out by the Board, pursuant to case 2021-008-FB-FBR, which concerned a post made by a Brazilian sub-national health official, and raised questions on health misinformation and enforcement of Facebook's community standards. </b>
<h1 style="text-align: justify;" dir="ltr">Background </h1>
<p dir="ltr">The <a href="https://about.fb.com/news/tag/oversight-board/">Oversight Board</a> is an expert body created to exercise oversight over Facebook’s content moderation decisions and enforcement of community guidelines. It is entirely independent from Facebook in its funding and administration and provides decisions on questions of policy as well as individual cases. It can also make recommendations on Facebook’s content policies. Its decisions are binding on Facebook, unless implementing them could violate the law. Accordingly, Facebook <a href="https://transparency.fb.com/oversight/oversight-board-cases/">implements</a> these decisions across identical content with parallel context, when it is technically and operationally possible to do so. </p>
<p dir="ltr">In June 2021, the Board made an <a href="https://oversightboard.com/news/170403765029629-announcement-of-case-2021-008-fb-fbr/">announcement</a> soliciting public comments on case 2021-008-FB-FBR, concerning a Brazilian state level medical council’s post questioning the effectiveness of lockdowns during the COVID-19 pandemic. Specifically, the post noted that lockdowns (i) are ineffective; (ii) lead to an increase in mental disorders, alcohol abuse, drug abuse, economic damage etc.; (iii) are against fundamental rights under the Brazilian Constitution; and (iv) are condemned by the World Health Organisation (“WHO”). These assertions were backed up by two statements (i) an alleged quote by Dr. Nabarro (WHO) stating that “the lockdown does not save lives and makes poor people much poorer”; and (ii) an example of how the Brazilian state of Amazonas had an increase in deaths and hospital admissions after lockdown. Ultimately, the post concluded that effective COVID-19 preventive measures include education campaigns about hygiene measures, use of masks, social distancing, vaccination and extensive monitoring by the government — but never the decision to adopt lockdowns. The post was viewed around 32,000 times and shared over 270 times. It was not reported by anyone. </p>
<p dir="ltr">Facebook did not take any action against the post, since it had opined that the post is not violative of its community standards. Moreover, WHO has also not advised Facebook to remove claims against lockdowns. In such a scenario, Facebook referred the case to the Oversight Board citing its public importance. </p>
<p dir="ltr">In its announcement, the Board sought answers on the following points: </p>
<ol><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether Facebook’s decision to take no action against the content was consistent with its Community Standards and other policies, including the Misinformation and Harm policy (which sits within the rules on <a href="https://www.facebook.com/communitystandards/credible_violence">Violence and Incitement</a>). </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether Facebook’s decision to take no action is consistent with the company’s stated values and human rights commitments. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether, in this case, Facebook should have considered alternative enforcement measures to removing the content (e.g., the <a href="https://www.facebook.com/communitystandards/false_news">False News</a> Community Standard places an emphasis on “reduce” and “inform,” including: labelling, downranking, providing additional context etc.), and what principles should inform the application of these measures. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">How Facebook should treat content posted by the official accounts of national or sub-national level public health authorities, including where it may diverge from official guidance from international health organizations. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Insights on the post’s claims and their potential impact in the context of Brazil, including on national efforts to prevent the spread of COVID-19. </p>
</li><li style="list-style-type: decimal;" dir="ltr">
<p dir="ltr">Whether Facebook should create a new Community Standard on health misinformation, as recommended by the Oversight Board in case decision <a href="https://oversightboard.com/decision/FB-XWJQBU9A/">2020-006-FB-FBR</a>.</p>
</li></ol>
<h1 style="text-align: justify;" dir="ltr">Submission to the Board</h1>
<p dir="ltr">Facebook’s decision to take no action against the post is consistent with its (i) <a href="https://www.facebook.com/communitystandards/credible_violence">Violence and Incitement</a> community standard read with the <a href="https://www.facebook.com/help/230764881494641">COVID-19 Policy Updates and Protections</a>; and (ii) <a href="https://www.facebook.com/communitystandards/false_news">False News</a> community standard. Facebook’s<a href="https://about.fb.com/news/2018/08/hard-questions-free-expression/"> website</a> as well as all of the Board’s <a href="https://oversightboard.com/decision/FB-6YHRXHZR/">past</a> <a href="https://oversightboard.com/decision/FB-QBJDASCV/">decisions</a> refer to the International Covenant on Civil and Political Rights’ (ICCPR) jurisprudence based <a href="https://www2.ohchr.org/english/bodies/hrc/docs/gc34.pdf">three-pronged test</a> of legality, legitimate aim, and necessity and proportionality in determining violations of Facebook’s community standards. Facebook must apply the same principles to guide the use of its enforcement actions too, keeping in mind the context, intent, tone and impact of the speech. </p>
<p dir="ltr">First, none of Facebook’s aforementioned rules contain explicit prohibitions on content questioning lockdown effectiveness. There is nothing to indicate that “misinformation”, which is undefined, includes within its scope information about the effectiveness of lockdowns. The World Health Organisation has also not advised against such posts. Applying the principle of legality, any person cannot reasonably foresee that such content is prohibited. Accordingly, Facebook’s community standards have not been violated, </p>
<p dir="ltr">Second, the post does not meet the threshold of causing “imminent” harm stipulated in the community standards. Case decision <a href="https://oversightboard.com/decision/FB-XWJQBU9A/">2020-006-FB-FBR</a>, notes that an assessment of “imminence” is made with reference to factors like context, speaker credibility, language etc. Presently, the post’s language and tone, including its quoting of experts and case studies, indicate that its intent is to encourage informed, scientific debate on lockdown effectiveness. </p>
<p dir="ltr">Third, Facebook’s false news community standard does contain any explicit prohibitions. Hence there is no question of its violation. Any decision to the contrary may go against the standard’s stated policy logic of not stifling public discourse, and create a chilling effect on posts questioning the lockdown efficacy. This will set a problematic precedent that Facebook will be mandated to implement.</p>
<p dir="ltr">Presently, Facebook cannot remove the post since no community standards have been violated. Facebook must not reduce the post’s circulation since this may stifle public discussion around lockdown effectiveness. Further, its removal would have resulted in violation of the user’s right to freedom of opinion and expression, as guaranteed by the Universal Declaration of Human Rights (UDHR) and the ICCPR, which are in turn part of Facebook’s Corporate Human Rights Policy. </p>
<p dir="ltr">Instead, Facebook can provide additional context along with the post through its “<a href="https://about.fb.com/news/2018/04/inside-feed-article-context/">related articles</a>” feature, by showing fact checked articles talking about the benefits of lockdown. This approach is the most beneficial since (i) it is less restrictive than reducing circulation of the post; (ii) it balances interests better than not taking any actions by allowing people to be informed about both sides of the debate on lockdowns so that they can make an informed assessment. </p>
<p dir="ltr">Further, Facebook’s treatment of content posted by official accounts of national or sub-national health authorities should be circumscribed by its updated <a href="https://transparency.fb.com/features/approach-to-newsworthy-content/">Newsworthy Content Policy</a>, and the Board’s decision in the <a href="https://oversightboard.com/decision/FB-691QAMHJ/">2021-001-FB-FBR</a>, which had adopted the <a href="https://www.ohchr.org/en/issues/freedomopinion/articles19-20/pages/index.aspx">Rabat Plan of Action</a> to determine whether a restriction on freedom of expression is required to prevent incitement. The Rabat Plan of Action proposes a six-prong test, that considers: a) the social and political context, b) status of the speaker, c) intent to incite the audience against a target group, d) content and form of the speech, e) extent of its dissemination and f) likelihood of harm, including imminence. Apart from taking these factors into consideration, Facebook must <a href="https://transparency.fb.com/features/approach-to-newsworthy-content/">perform</a> a balancing test to determine whether the public interest of the information in the post outweighs the risks of harm. </p>
<p dir="ltr">In the Board’s decision in <a href="https://oversightboard.com/decision/FB-XWJQBU9A/">2020-006-FB-FBR</a>, it was recommended to Facebook to: a) set out a clear and accessible Community Standard on health misinformation, b) consolidate and clarify existing rules in one place (including defining key terms such as misinformation) and c) provision of "detailed hypotheticals that illustrate the nuances of interpretation and application of [these] rules" to provide further clarity for users. Following this, Facebook has <a href="https://assets.documentcloud.org/documents/20491921/covid-19-response-full.pdf">notified</a> its implementation measures, where it has fully implemented these recommendations, thereby bringing it into compliance.</p>
<p dir="ltr">Finally, Brazil is one of the <a href="https://www.bbc.com/news/world-51235105">worst affected</a> countries in the pandemic. It has also been <a href="https://www.ft.com/content/ea62950e-89c0-4b8b-b458-05c90a55b81f">struggling </a>to combat the spread of fake news during the pandemic. President Bolsanaro has been <a href="https://www.hrw.org/news/2021/01/28/brazil-crackdown-critics-covid-19-response">criticised</a> for <a href="https://www.theguardian.com/commentisfree/2020/feb/07/democracy-and-freedom-of-expression-are-under-threat-in-brazil">curbing free speech</a> by using a dictatorship-era <a href="http://www.iconnectblog.com/2021/02/undemocratic-legislation-to-undermine-freedom-of-speech-in-brazil/">national security law</a>., and questioned on his handling of the pandemic, including his own controversial <a href="https://www.bbc.com/news/world-latin-america-56479614">statements </a>questioning lockdown effectiveness. In such a scenario, the post may be perceived in a political colour rather than as an attempt at scientific discussion. However, it is unlikely that the post will lead to any-knee jerk reactions, since people are already familiar with the lockdown debate on which much has already been said and done. A post like this which merely reiterates one side of an ongoing debate is not likely to cause people to take any action to violate lockdown.</p>
<p dir="ltr">For detailed explanation on these questions, please see <a class="external-link" href="https://cis-india.org/internet-governance/facebook-oversight-board-submission-brazil">here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-in-case-2021-008-fb-fbr-brazil-health-misinformation-and-lockdowns'>http://editors.cis-india.org/internet-governance/blog/submission-to-the-facebook-oversight-board-in-case-2021-008-fb-fbr-brazil-health-misinformation-and-lockdowns</a>
</p>
Publisher: none · Authors: Tanvi Apte and Torsha Sarkar · Tags: Internet Freedom, Misinformation, Intermediary Liability, Information Technology · Published: 2021-07-01 · Type: Blog Entry

Resurrecting the marketplace of ideas
http://editors.cis-india.org/internet-governance/blog/hindu-businessline-february-19-2019-arindrajit-basu-resurrecting-the-marketplace-of-ideas
<b>There is no ‘silver bullet’ for regulating content on the web. It requires a mix of legal and empirical analysis.</b>
<p style="text-align: justify; ">The article by Arindrajit Basu was published in <a class="external-link" href="https://www.thehindubusinessline.com/opinion/resurrecting-the-marketplace-of-ideas/article26313605.ece">Hindu Businessline</a> on February 19, 2019.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">A century after the ‘marketplace of ideas’ first found its way into a US Supreme Court judgment through the dissenting opinion of Justice Oliver Wendell Holmes Jr <i>(Abrams v United States, 1919</i>), the oft-cited rationale for free speech is arguably under siege.</p>
<p style="text-align: justify; ">The increasing quantity and range of online speech hosted by internet platforms coupled with the shock waves sent by revelations of rampant abuse through the spread of misinformation has lead to a growing inclination among governments across the globe to demand more aggressive intervention by internet platforms in filtering the content they host.</p>
<p style="text-align: justify; ">Rule 3(9) of the Draft of the Information Technology [Intermediary Guidelines (Amendment) Rules] 2018 released by the Ministry of Electronics and Information Technology (MeiTy) last December follows the interventionist regulatory footsteps of countries like Germany and France by mandating that platforms use “automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content.”</p>
<p style="text-align: justify; ">Like its global counterparts, this rule, which serves as a pre-condition for granting immunity to the intermediary from legal claims arising out of user-generated communications, might not only have an undue ‘chilling effect’ on free speech but is also a thoroughly uncooked policy intervention.</p>
<h2 style="text-align: justify; ">Censorship by proxy</h2>
<p style="text-align: justify; ">Rule 3(9) and its global counterparts might not be in line with the guarantees enmeshed in the right to freedom of speech and expression for three reasons. First, the vague wording of the law and the abstruse guidelines for implementation do not provide clarity, accessibility and predictability — which are key requirements for any law restricting free speech .The NetzDG-the German law, aimed at combating agitation and fake news, has attracted immense criticism from civil society activists and the UN Special Rapporteur David Kaye on similar grounds.</p>
<p style="text-align: justify; ">Second, as proved by multiple empirical studies across the globe, including one conducted by CIS on the Indian context, it is likely that legal requirements mandating that private sector actors make determinations on content restrictions can lead to over-compliance as the intermediary would be incentivised to err on the side of removal to avoid expensive litigation.</p>
<p style="text-align: justify; ">Finally, by shifting the burden of determining and removing ‘unlawful’ content onto a private actor, the state is effectively engaging in ‘censorship by proxy’. As per Article 12 of the Constitution, whenever a government body performs a ‘public function’, it must comply with all the enshrined fundamental rights.</p>
<p style="text-align: justify; ">Any individual has the right to file a writ petition against the state for violation of a fundamental right, including the right to free speech.</p>
<p style="text-align: justify; ">However, judicial precedent on the horizontal application of fundamental rights, which might enable an individual to enforce a similar claim against a private actor has not yet been cemented in Indian constitutional jurisprudence.</p>
<p style="text-align: justify; ">This means that any individual whose content has been wrongfully removed by the platform may have no recourse in law — either against the state or against the platform.</p>
<h2 style="text-align: justify; ">Algorithmic governmentality</h2>
<p style="text-align: justify; ">Using automated technologies comes with its own set of technical challenges even though they enable the monitoring of greater swathes of content. The main challenge to automated filtering is the incomplete or inaccurate training data as labelled data sets are expensive to curate and difficult to acquire, particularly for smaller players.</p>
<p style="text-align: justify; ">Further, an algorithmically driven solution is an amorphous process.</p>
<p style="text-align: justify; ">Through it is hidden layers and without clear oversight and accountability mechanisms, the machine generates an output, which corresponds to assessing the risk value of certain forms of speech, thereby reducing it to quantifiable values — sacrificing inherent facets of dignity such as the speaker’s unique singularities, personal psychological motivations and intentions.</p>
<h2 style="text-align: justify; ">Possible policy prescriptions</h2>
<p style="text-align: justify; ">The first step towards framing an adequate policy response would be to segregate the content needing moderation based on the reason for them being problematic.</p>
<p style="text-align: justify; ">Detecting and removing information that is false might require the crafting of mechanisms that are different from those intended to tackle content that is true but unlawful, such as child pornography.</p>
<p style="text-align: justify; ">Any policy prescription needs to be adequately piloted and tested before implementation. It is also likely that the best placed prescription might be a hybrid amalgamation of the methods outlined below.</p>
<p style="text-align: justify; ">Second, it is imperative that the nature of intermediaries to which a policy applies are clearly delineated. For example, Whatsapp, which offers end-to-end encrypted services would not be able to filter content in the same way internet platforms like Twitter can.</p>
<p style="text-align: justify; ">The first option going forward is user-filtering, which as per a recent paper written by Ivar Hartmann, is a decentralised process, through which the users of an online platform collectively endeavour to regulate the flow of information.</p>
<p style="text-align: justify; ">Users collectively agree on a set of standards and general guidelines for filtering. This method combined with an oversight and grievance redressal mechanism to address any potential violation may be a plausible one.</p>
<p style="text-align: justify; ">The second model is enhancing the present model of self-regulation. Ghonim and Rashbass recommend that the platform must publish all data related to public posts and the processes followed in a certain post attaining ‘viral’ or ‘trending’ status or conversely, being removed.</p>
<p style="text-align: justify; ">This, combined with Application Programme Interfaces (APIs) or ‘Public Interest Algorithms’, which enables the user to keep track of the data-driven process that results in them being exposed to a certain post, might be workable if effective pilots for scaling are devised.</p>
<p style="text-align: justify; ">The final model that operates outside the confines of technology are community driven social mechanisms. An example of this is Telengana Police Officer Remi Rajeswari’s efforts to combat fake news in rural areas by using Janapedam — an ancient form of story-telling — to raise awareness about these issues.</p>
<p style="text-align: justify; ">Given the complex nature of the legal, social and political questions involved here, the quest for a ‘silver-bullet’ might be counter-productive.</p>
<p style="text-align: justify; ">Instead, it is essential for us to take a step back, frame the right questions to understand the intricacies in the problems involved and then, through a mix of empirical and legal analysis, calibrate a set of policy interventions that may work for India today.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/hindu-businessline-february-19-2019-arindrajit-basu-resurrecting-the-marketplace-of-ideas'>http://editors.cis-india.org/internet-governance/blog/hindu-businessline-february-19-2019-arindrajit-basu-resurrecting-the-marketplace-of-ideas</a>
</p>
Publisher: none · Author: basu · Tags: Freedom of Speech and Expression, Internet Freedom, Internet Governance · Published: 2019-02-22 · Type: Blog Entry

On the legality and constitutionality of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
http://editors.cis-india.org/internet-governance/blog/on-the-legality-and-constitutionality-of-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021
<b>This note examines the legality and constitutionality of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The analysis is consistent with previous work carried out by CIS on issues of intermediary liability and freedom of expression. </b>
<p><span id="docs-internal-guid-6127737f-7fff-b2eb-1b4a-ff9009a1050f"></span></p>
<p dir="ltr">On 25 February 2021, the Ministry of Electronics and Information Technology (Meity) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (hereinafter, ‘the rules’). In this note, we examine whether the rules meet the tests of constitutionality under Indian jurisprudence, whether they are consistent with the parent Act, and discuss potential benefits and harms that may arise from the rules as they are currently framed. Further, we make some recommendations to amend the rules so that they stay in constitutional bounds, and are consistent with a human rights based approach to content regulation. Please note that we cover some of the issues that CIS has already highlighted in comments on previous versions of the rules.</p>
<p dir="ltr"> </p>
<p dir="ltr">The note can be downloaded <a class="external-link" href="https://cis-india.org/internet-governance/legality-constitutionality-il-rules-digital-media-2021">here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/on-the-legality-and-constitutionality-of-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021'>http://editors.cis-india.org/internet-governance/blog/on-the-legality-and-constitutionality-of-the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021</a>
</p>
Publisher: none · Authors: Torsha Sarkar, Gurshabad Grover, Raghav Ahooja, Pallavi Bedi and Divyank Katira · Tags: Freedom of Speech and Expression, Internet Governance, Intermediary Liability, Internet Freedom, Information Technology · Published: 2021-06-21 · Type: Blog Entry

NGOs, individuals urge state CMs to curb Internet shutdown
http://editors.cis-india.org/internet-governance/news/times-of-india-april-4-2017-ngos-individuals-urge-state-cms-to-curb-internet-shutdown
<b>Amid rising instances of Internet curbs, a group of individuals and organisations have urged the chief ministers of 12 states to restrict only specific online content rather than resort to complete shutdowns.</b>
<p style="text-align: justify; ">The article was <a class="external-link" href="http://timesofindia.indiatimes.com/business/india-business/ngos-individuals-urge-state-cms-to-curb-internet-shutdown/articleshow/58011598.cms">published in the Times of India</a> on April 4, 2017.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; ">SFLC.in, a Delhi-based not-for-profit organisation, along with various Internet-related firms have sent letters in this regard to the chief ministers of these states impacted by Internet shutdowns.</p>
<p style="text-align: justify; ">The letters have been written to the chief ministers of Uttar Pradesh, <a class="key_underline" href="http://timesofindia.indiatimes.com/topic/Nagaland">Nagaland</a>, Manipur, Maharashtra, J&K, <a class="key_underline" href="http://timesofindia.indiatimes.com/topic/Jharkhand">Jharkhand</a>, Rajasthan, Meghalaya, <a class="key_underline" href="http://timesofindia.indiatimes.com/topic/Arunachal-Pradesh">Arunachal Pradesh</a>, <a class="key_underline" href="http://timesofindia.indiatimes.com/topic/Bihar">Bihar</a>, <a class="key_underline" href="http://timesofindia.indiatimes.com/topic/Gujarat">Gujarat</a> and Haryana.</p>
<p style="text-align: justify; ">"The Internet shutdowns are imposed using state power under Section 144 by these specific states and not by the Union Government. The central government is bound to follow the process under Section 69 IT act.</p>
<p style="text-align: justify; ">"These letters to the chief ministers of all 12 states, which have been affected by Internet shutdowns till date, are an effort by us to address the source of the problem," SFLC.in President and Legal Director Mishi Choudhary told .</p>
<p style="text-align: justify; ">As per Internet Shutdown tracker of SFLC, there have been 28 incidents of Internet closure in Jammu & Kashmir, 9 cases each in Gujarat and Haryana, 8 in Rajasthan, 3 Nagaland, 2 cases each in Uttar Pradesh, Bihar and Manipur and 1 incident each in Maharashtra, Jharkhand, Meghalaya and Arunachal Pradesh since 2012.</p>
<p style="text-align: justify; ">As per the tracker, far India has experienced a record number of 66 such incidents since 2012, with the number increasing more than two-fold from 14 in 2015 to 31 in 2016.</p>
<p style="text-align: justify; ">The letters sent to the chief ministers urge them to "take requisite action that would prohibit the issuance of orders that make Internet services entirely inaccessible for a particular area, and rather recommend that Section 69A and the procedure established by the rules therein be applied to limit the restriction to certain specific online content."</p>
<p style="text-align: justify; ">The signatories of the letters include the Centre for Internet and Society, Digital Empowerment Foundation, Internet Democracy Project, IT for Change and Society for Knowledge Commons, individuals like <a class="key_underline" href="http://timesofindia.indiatimes.com/topic/Anivar-Aravind">Anivar Aravind</a> (Executive Director, Indic Project), IIT Bombay professor <a class="key_underline" href="http://timesofindia.indiatimes.com/topic/Kannan-Moudgalya">Kannan Moudgalya</a> and others.</p>
<p style="text-align: justify; ">"We are hopeful that our efforts will make the government take in account the enormous effects of Internet shutdowns on the social-economic condition of our citizens and understand their plight," Choudhary said. PRS MKJ</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/times-of-india-april-4-2017-ngos-individuals-urge-state-cms-to-curb-internet-shutdown'>http://editors.cis-india.org/internet-governance/news/times-of-india-april-4-2017-ngos-individuals-urge-state-cms-to-curb-internet-shutdown</a>
</p>
No publisherpraskrishnaFreedom of Speech and ExpressionInternet FreedomInternet GovernanceCensorship2017-04-07T02:43:39ZNews ItemInternet Speech: Perspectives on Regulation and Policy
http://editors.cis-india.org/internet-governance/events/internet-speech-perspectives-on-regulation-and-policy
<b>The Centre for Internet & Society and the University of Munich (LMU), Germany, are jointly organizing an international symposium at the India Habitat Centre in New Delhi on April 5, 2019.</b>
<p><img src="http://editors.cis-india.org/home-images/FreeSpeechSymposium_Poster_02.jpg/@@images/89fe6323-7608-482a-8072-dc241e9f0fda.jpeg" alt="Free Speech Poster" class="image-inline" title="Free Speech Poster" /></p>
<hr />
<p><a class="external-link" href="http://cis-india.org/internet-governance/files/free-speech-symposium-agenda"><b>Click to download the agenda</b></a></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/events/internet-speech-perspectives-on-regulation-and-policy'>http://editors.cis-india.org/internet-governance/events/internet-speech-perspectives-on-regulation-and-policy</a>
</p>
No publisherakritiFreedom of Speech and ExpressionInternet GovernanceFeaturedInternet FreedomEvent2019-04-01T16:38:54ZEventInternet Freedom Festival 2017
http://editors.cis-india.org/internet-governance/news/internet-freedom-festival-2017
<b>The Global Unconference of the Internet Freedom Communities took place in Valencia, Spain, from March 6 to 10, 2017. The event was organized by the IFF. Vidushi Marda took part in the event on behalf of CIS.</b>
<p style="text-align: justify; ">Vidushi as part of her work with Working Group 1 (WG1) of the Freedom Online Coalition (FOC), organised a workshop along with Mallory Knodel from APC. This workshop was titled "Practical implementations of human rights respecting cybersecurity policy". Participants in the workshop were divided into groups to evaluate the recommendations developed by WG1 in light of existing cyber security policies from around the world. The recommendations can be found <a class="external-link" href="https://www.freedomonlinecoalition.com/wp-content/uploads/2014/04/FOC-WG1-Recommendations-Final-21Sept-2015.pdf">here</a> and the accompanying narrative document can be found <a class="external-link" href="https://www.freedomonlinecoalition.com/wp-content/uploads/2014/04/FOC-WG1-Narrative-Final-28-April-2016.pdf">here</a>.</p>
<p style="text-align: justify; ">The session ended up being productive - we received feedback from participants about the effectiveness of the recommendations, and also about aspects of these recommendations that needed revisiting/more work. A more detailed account of the session can be found at the <a class="external-link" href="https://internetfreedomfestival.org/wiki/index.php/Practical_implementations_of_human_rights_respecting_cybersecurity_policy">Wiki page</a>.</p>
<p style="text-align: justify; ">Vidushi also attended the following sessions:</p>
<ul>
<li><a class="external-link" href="https://internetfreedomfestival.org/wiki/index.php/Data_Protection_law_and_is_different_manifestations">Data Protection Law and its Different Manifestations</a></li>
<li><a class="external-link" href="https://internetfreedomfestival.org/wiki/index.php/Using_the_Ranking_Digital_Rights_Corporate_Accountability_Index_for_Advocacy_%26_Research">Using the Ranking Digital Rights Corporate Accountability Index for Advocacy & Research</a></li>
<li><a class="external-link" href="https://internetfreedomfestival.org/wiki/index.php/The_identity_we_can%27t_change:_a_new_wave_of_biometric_policies_around_the_world">The identity we can't change: a new wave of biometric policies around the world</a></li>
<li style="text-align: justify; "><a class="external-link" href="https://internetfreedomfestival.org/wiki/index.php/Enabling_free_speech_online_by_legal_defence:_the_need_for_skilled_lawyers_to_secure_the_free_flow_of_information_online">Enabling free speech online by legal defence: the need for skilled lawyers to secure the free flow of information online</a>: Vidushi channeled a discussion about Shreya Singhal v. Union of India as an important case study in understanding how legal defence has been used to secure rights online. She specifically spoke about the distinction made in the judgment b/w communications on the internet vs. communications elsewhere.</li>
</ul>
<p>For more information, <a class="external-link" href="https://internetfreedomfestival.org/">see here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/internet-freedom-festival-2017'>http://editors.cis-india.org/internet-governance/news/internet-freedom-festival-2017</a>
</p>
No publisherpraskrishnaInternet FreedomInternet Governance2017-03-29T11:24:38ZNews ItemIETF103
http://editors.cis-india.org/internet-governance/news/ietf-103
<b>The Internet Engineering Task Force (IETF) held IETF 103 in Bangkok from November 3 to 9, 2018. Gurshabad Grover attended the event.</b>
<p class="moz-txt-link-rfc2396E">In the IETF hackathon, Gurshabad collaborated with Alp Toker (from NetBlocks.org) to develop a client-side website for testing DNS over HTTPS (DoH) servers. The tool can be used for decentralised testing of DoH servers for censorship and measurement. The tool can be found <a class="external-link" href="https://netblocks.org/tmp/doh/">here</a>. The slide deck we used to present can be found <a class="external-link" href="https://datatracker.ietf.org/meeting/103/materials/slides-103-hrpc-hackathon-update-00">here</a>.</p>
<p class="moz-txt-link-rfc2396E" style="text-align: justify; ">In the meeting of the Human Rights Protocol Considerations (hrpc) research group, Niels ten Oever and Gurshabad presented a report from the hackathon. The video of the session is available on <a class="external-link" href="https://www.youtube.com/watch?v=Bd33Be_P-FY">YouTube</a>.</p>
<p class="moz-txt-link-rfc2396E" style="text-align: justify; ">In the same meeting, it was decided that Gurshabad will be becoming a co-editor (with Niels ten Oever) on 'Guidelines for Human Rights Protocol Considerations' (draft-irtf-hrpc-guidelines), which is an active Internet Draft detailing a methodology for conducting human rights reviews of protocols and networking standards.</p>
<p class="moz-txt-link-rfc2396E" style="text-align: justify; ">In the meeting of Registration Protocols Extensions (regext) working group, a human rights review I submitted of the 'Verification Code Extension for the Extensible Provisioning Protocol (EPP)'(draft-ietf-regext-verificationcode) was discussed at length. The video of the session is available on <a class="external-link" href="https://www.youtube.com/watch?v=RTpCpfBbIiI">YouTube</a>.</p>
<p class="moz-txt-link-rfc2396E" style="text-align: justify; ">Gurshabad participated in the meetings of several other working groups, including Software Updates for IoT Devices (SUIT), Transport Layer Security (tls), and Privacy Enhancements and Assessments Research Group (pearg).</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/ietf-103'>http://editors.cis-india.org/internet-governance/news/ietf-103</a>
</p>
No publisherAdminInternet FreedomInternet Governance ForumCensorship2018-12-14T02:05:18ZNews ItemFrom Virtual to Reliable: Exploring Freedom and Facts in the World of WWW (World Wide Web)
http://editors.cis-india.org/internet-governance/news/from-virtual-to-reliable-exploring-freedom-and-facts-in-the-world-of-www-world-wide-web
<b>An interactive seminar on internet freedom was organized by the Embassy of the Kingdom of the Netherlands and the Adaan Foundation on March 21, 2017 at the India International Centre in New Delhi. Saikat Dutta and Amber Sinha were panelists.</b>
<p>The seminar coincided with the inauguration of the World Press Photo Exhibition 2016. In total, there were four panelists. <a class="external-link" href="http://cis-india.org/internet-governance/files/interactive-seminar-on-internet-freedom">Read the agenda here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/from-virtual-to-reliable-exploring-freedom-and-facts-in-the-world-of-www-world-wide-web'>http://editors.cis-india.org/internet-governance/news/from-virtual-to-reliable-exploring-freedom-and-facts-in-the-world-of-www-world-wide-web</a>
</p>
No publisherpraskrishnaFreedom of Speech and ExpressionInternet FreedomInternet Governance2017-03-29T04:01:25ZNews ItemEuropean Summer School on Internet Governance
http://editors.cis-india.org/internet-governance/news/european-summer-school-on-internet-governance
<b>The 13th European Summer School on Internet Governance was held in Meissen, Germany, from 13 to 20 July 2019. Akriti Bopanna attended the school. The event was organized by EuroSSIG.</b>
<p>More information on the event can be <a class="external-link" href="https://eurossig.eu/eurossig/2019-edition/programme-2019/">accessed on this page</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/european-summer-school-on-internet-governance'>http://editors.cis-india.org/internet-governance/news/european-summer-school-on-internet-governance</a>
</p>
No publisherAdminCyber SecurityInternet GovernanceInternet Freedom2019-07-23T00:30:15ZNews ItemContent takedown and users' rights
http://editors.cis-india.org/internet-governance/blog/content-takedown-and-users-rights-1
<b>After Shreya Singhal v Union of India, commentators have continued to question the constitutionality of the content takedown regime under Section 69A of the IT Act (and the Blocking Rules issued under it). There has also been considerable debate around how the judgment has changed this regime: specifically, (i) whether originators of content are entitled to a hearing, (ii) whether Rule 16 of the Blocking Rules, which mandates confidentiality of content takedown requests received by intermediaries from the Government, continues to be operative, and (iii) the effect of Rule 16 on the rights of the originator and the public to challenge executive action. In this opinion piece, we attempt to answer some of these questions.</b>
<p style="text-align: justify;" class="normal"> </p>
<p style="text-align: justify;" class="normal">This article was first <a class="external-link" href="http://https://theleaflet.in/content-takedown-and-users-rights/">published</a> at the Leaflet. It has subsequently been republished by <a class="external-link" href="https://scroll.in/article/953146/how-india-is-using-its-information-technology-act-to-arbitrarily-take-down-online-content">Scroll.in</a>, <a class="external-link" href="https://kashmirobserver.net/2020/02/15/content-takedown-and-users-rights/">Kashmir Observer</a> and the <a class="external-link" href="https://cyberbrics.info/content-takedown-and-users-rights/">CyberBRICS blog</a>. </p>
<p style="text-align: justify;" class="normal"><strong><br /></strong></p>
<p style="text-align: justify;" class="normal"><strong>Introduction</strong></p>
<p style="text-align: justify;" class="normal">Last year, several Jio users from different states <a href="https://www.medianama.com/2019/03/223-indiankanoon-jio-block/">reported</a> that sites like Indian Kanoon, Reddit and Telegram were inaccessible through their connections. While attempting to access the website, the users were presented with a notice that the websites were blocked on orders from the Department of Telecommunications (DoT). When contacted by the founder of Indian Kanoon, Reliance Jio <a href="https://in.reuters.com/article/us-india-internet-idINKCN1RF14D">stated</a> that the website had been blocked on orders of the government, and that the order had been rescinded the same evening. However, in response to a Right to Information (RTI) request, the DoT <a href="https://twitter.com/indiankanoon/status/1218193372210323456">said</a> they had no information about orders relating to the blocking of Indian Kanoon.</p>
<p style="text-align: justify;" class="normal">Alternatively, consider that the Committee to Protect Journalists (CPJ) <a href="https://cpj.org/blog/2019/10/india-opaque-legal-process-suppress-kashmir-twitter.php">expressed concern</a> last year that the Indian government was forcing Twitter to suspend accounts or remove content relating to Kashmir. They reported that over the last two years, the Indian government suppressed a substantial amount of information coming from the area, and prevented Indians from accessing more than five thousand tweets.</p>
<p style="text-align: justify;" class="normal">These instances are <a href="https://www.hindustantimes.com/analysis/to-preserve-freedoms-online-amend-the-it-act/story-aC0jXUId4gpydJyuoBcJdI.html">symptomatic</a> of a larger problem of opaque and arbitrary content takedown in India, enabled by the legal framework under the Information Technology (IT) Act. The Government derives its powers to order intermediaries (entities storing or transmitting information on behalf of others, a definition which includes internet service providers and social media platforms alike) to block online resources through <a href="https://indiankanoon.org/doc/10190353/">section 69A</a> of the IT Act and the <a href="https://meity.gov.in/writereaddata/files/Information%20Technology%20%28%20Procedure%20and%20safeguards%20for%20blocking%20for%20access%20of%20information%20by%20public%29%20Rules%2C%202009.pdf">rules</a> [“the blocking rules”] notified thereunder. Apart from this, <a href="https://indiankanoon.org/doc/844026/">section 79</a> of the IT Act and its allied rules also prescribe a procedure for content removal. <a href="https://cis-india.org/internet-governance/files/a-deep-dive-into-content-takedown-frames">Conversations</a> with one popular intermediary revealed that the government usually prefers to use its powers under section 69A, possibly because of the opaque nature of the procedure that we highlight below.</p>
<p style="text-align: justify;" class="normal">Under section 69A, a content removal request can be sent by authorised personnel in the Central Government not below the rank of a Joint Secretary. The grounds for issuance of blocking orders under section 69A are: “<em>the interest of the sovereignty and integrity of India, defence of India, the security of the state, friendly relations with foreign states or public order or for preventing incitement to the commission of any cognisable offence relating to the above.</em>” Specifically, the blocking rules envisage the process of blocking to be largely executive-driven, and require strict confidentiality to be maintained around the issuance of blocking orders. This shrouds content takedown orders in a cloak of secrecy, and makes it impossible for users and content creators to ascertain the legitimacy or legality of the government action in any instance of blocking.</p>
<p style="text-align: justify;" class="normal"><strong>Issues</strong></p>
<p style="text-align: justify;" class="normal">The Supreme Court had been called to determine the constitutional validity of section 69A and the allied rules in <a href="https://indiankanoon.org/doc/110813550/"><em>Shreya Singhal v Union of India</em></a>. The petitioners had contended that as per the procedure laid down by these rules, there was no guarantee of pre-decisional hearing afforded to the originator of the information. Additionally, the petitioners pointed out that the safeguards built into section 95 and 96 of the Code of Criminal Procedure (CrPC), which allow state governments to ban publications and persons to initiate legal challenges to those actions respectively, were absent from the blocking procedures. Lastly, the petitioners assailed rule 16 of the blocking rules, which mandated confidentiality of blocking procedures, on the grounds that it was affecting their fundamental rights.</p>
<p style="text-align: justify;" class="normal">The Court, however, found little merit in these arguments. Specifically, the Court found that section 69A was narrowly drawn and had sufficient procedural safeguards, which included the grounds of issuance of a blocking order being specifically drawn, and mandating that the reasons of the website blocking be in writing, thus making it amenable to judicial review. Further, the Court also found that the provision of setting up of a review committee saved the law from being constitutional infirmity. In the Court’s opinion, the mere absence of additional safeguards, as the ones built into the CrPC, did not mean that the law was unconstitutional.</p>
<p style="text-align: justify;" class="normal">But do the ground realities align with the Court’s envisaged implementation of these principles? Apar Gupta, a counsel for the petitioners, <a href="https://indianexpress.com/article/opinion/columns/but-what-about-section-69a/">pointed</a> out that there was no recorded instance of pre-decisional hearing being granted to show that this safeguard contained in the rules was actually being implemented. However, Gautam Bhatia <a href="https://indconlawphil.wordpress.com/2015/03/25/the-supreme-courts-it-act-judgment-and-secret-blocking/">read</a> <em>Shreya Singhal </em>to make an important advance: that the right of hearing be mandatorily extended to the ‘originator’, i.e. the content creator.</p>
<p style="text-align: justify;" class="normal">Additionally, Bhatia also noted that the Court, while upholding the constitutionality of the procedure under section 69A, held that the “<em>reasons have to be recorded in writing in such blocking order so that they may be assailed in a writ petition under Article 226 of the Constitution.</em>”</p>
<p style="text-align: justify;" class="normal">There are two important takeaways from this. <em>Firstly</em>, he argued that the broad contours of the judgment invoke an established constitutional doctrine — that the fundamental right under Article 19(1)(a) does not merely include the right of expression, but also the <em>right of access to information. </em>Accordingly, the right of challenging a blocking order was not only vested in the originator or the concerned intermediary, but may rest with the general public as well. And <em>secondly</em>, by the doctrine of necessary implication, it followed that for the general public to challenge any blocking order under Article 226, the blocking orders must be made public. While Bhatia concedes that public availability of blocking orders may be an over-optimistic reading of the judgment, recent events suggest that even the commonly-expected result, i.e. that the content creators having the right to a hearing, has not been implemented by the Government.</p>
<p style="text-align: justify;" class="normal">Consider the <a href="https://internetfreedom.in/delhi-hc-issues-notice-to-the-government-for-blocking-satirical-dowry-calculator-website/">blocking</a> of the satirical website DowryCalculator.com in September 2019 on orders from the government. The website displayed a calculator that suggests a ‘dowry’ depending on the salary and education of a prospective groom: even if someone misses the satire, the contents of the website are not immediately relatable to any grounds of removal listed under section 69A of the IT Act.</p>
<p style="text-align: justify;" class="normal"> Tanul Thakur, the creator of the website, was not granted a hearing despite the fact that he had publicly claimed the ownership of the website at various times and that the website had been covered widely by the press. The information associated with the domain name also publicly lists Thakur’s name and contact information. Clearly, the government made no effort to contact Thakur when passing the order. Perhaps even more worryingly, when he <a href="https://internetfreedom.in/delhi-hc-issues-notice-to-the-government-for-blocking-satirical-dowry-calculator-website/">tried</a> to access a copy of the blocking order by filing a RTI, the MeitY cited the confidentiality rule to deny him the information.</p>
<p style="text-align: justify;" class="normal">This incident documents a fundamental problem plaguing the rules: the confidentiality clause is still being used to deny disclosure of key information on content takedown orders. The government has also used the provision to deny citizens a list of blocked websites , as responses to RTI requests have proven <a href="https://cis-india.org/internet-governance/blog/rti-application-to-bsnl-for-the-list-of-websites-blocked-in-india">time</a> and <a href="https://sflc.in/deity-provides-list-sites-blocked-2013-withholds-orders">again</a>.</p>
<p style="text-align: justify;" class="normal">Clearly, the Supreme Court’s rationale in considering Section 69A and the blocking rules as constitutional is not one that is implemented in reality. The confidentiality clause is preventing legal challenges to content blocking in totality: content creators are unable access the orders, and hence are unable to understand the executive’s reasoning in ordering their content to be blocked from public access.</p>
<p style="text-align: justify;" class="normal">As we noted earlier, the grounds of issuing a blocking order under section 69A pertain to certain reasonable restrictions on expression permitted by Article 19(2), which are couched in broad terms. The government’s implementation of section 69A and the rules make it impossible for any judicial review or accountability on the conformity of blocking orders with the mentioned grounds under the rules, or any reasonable restriction at all.</p>
<p style="text-align: justify;" class="normal"><strong>The Way Forward</strong></p>
<p style="text-align: justify;" class="normal">From the opacity of proceedings under the law, to the lack of information regarding the same on public domain, the Indian content takedown regime leaves a lot to be desired from both the government and intermediaries at play. </p>
<p style="text-align: justify;" class="normal">First, we believe the Supreme Court’s decision in <em>Shreya Singhal v. Union of India</em> casts an obligation on the government to attempt to contact the content creator if they are passing a content takedown order to an intermediary. <em>Second</em>, even if the content creator is unavailable for a hearing at that instance, the confidentiality clause should not be used to prevent future disclosure of information to the content creator, so that affected citizens can access and challenge these orders.</p>
<p style="text-align: justify;" class="normal">While we wait for legal reform, intermediaries can also step up to ensure the rights of users online are upheld. On receiving formal orders, intermediaries should <a href="https://cis-india.org/internet-governance/blog/torsha-sarkar-suhan-s-and-gurshabad-grover-october-30-2019-through-the-looking-glass">assess</a> the legality of the received request. This should involve ensuring that only authorised agencies and personnel have sent the content removal orders, that the order specifically mentions what provision the government is exercising the power under, and that the content removal requests relate to the grounds of removal that are permissible under section 69A. For instance, intermediaries should refuse to entertain content removal requests under section 69A of the IT Act if they relate to obscenity, a ground not covered by the provision.</p>
<p style="text-align: justify;" class="normal">The representatives of the intermediary should also push for the committee to grant a hearing to the content creator. Here, the intermediary can act as a liaison between the uploader and the governmental authorities.</p>
<p style="text-align: justify;" class="normal">The Supreme Court’s recent decision in <a href="https://indiankanoon.org/doc/82461587/"><em>Anuradha Bhasin v. Union of India</em></a><em> </em>offers a glimmer of hope for user rights online<em>. </em>While the case primarily challenged the orders imposing section 144 of the CrPC and a communication blockade in Jammu and Kashmir, the final decision does affirm the fundamental principle that government-imposed restrictions on the freedom of expression and assembly must be made available to the public and affected parties to enable challenges in a court of law.</p>
<p style="text-align: justify;" class="normal"> The judiciary has yet another opportunity to consider the provision and the rules: late last year, Tanul Thakur <a href="https://internetfreedom.in/delhi-hc-issues-notice-to-the-government-for-blocking-satirical-dowry-calculator-website/">approached</a> the Delhi High Court to challenge the orders passed by the government to ISPs to block his website. One hopes that the future holds robust reforms to the content takedown regime.</p>
<p style="text-align: justify;" class="normal"> We live in an era where the ebb and flow of societal discourse is increasingly channeled through intermediaries on the internet. In the absence of a mature, balanced and robust framework that enshrines the rule of law, we risk arbitrary modulation of the marketplace of ideas by the executive.</p>
<p style="text-align: justify;" class="normal"><em> </em></p>
<p style="text-align: justify;" class="normal"><em>Torsha Sakar and Gurshabad Grover are researchers at the Centre for Internet and Society.</em></p>
<p style="text-align: justify;" class="normal"><em>Disclosure: The Centre for Internet and Society is a recipient of research grants from Facebook and Google.</em></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/content-takedown-and-users-rights-1'>http://editors.cis-india.org/internet-governance/blog/content-takedown-and-users-rights-1</a>
</p>
No publisherTorsha Sarkar, Gurshabad GroverInternet FreedomInternet GovernanceIntermediary LiabilityCensorship2020-02-17T05:18:25ZBlog Entry