The Centre for Internet and Society
http://editors.cis-india.org
These are the search results for the query, showing results 51 to 65.
Beyond Public Squares, Dumb Conduits, and Gatekeepers: The Need for a New Legal Metaphor for Social Media
http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-beyond-public-squares-dumb-conduits-and-gatekeepers
<b>In the past few years, social networking sites have come to play a central role in intermediating the public’s access to and deliberation of information critical to a thriving democracy. In stark contrast to early utopian visions which imagined that the internet would create a more informed public, facilitate citizen-led engagement, and democratize media, what we see now is the growing association of social media platforms with political polarization and the entrenchment of racism, homophobia, and xenophobia.</b>
<p style="text-align: justify; ">There is a dire need to think of regulatory strategies that look beyond the ‘dumb conduit’ metaphors that justify safe harbor protection to social networking sites. Alongside, it is also important to critically analyze the outcomes of regulatory steps such that they do not adversely impact free speech and privacy. By surveying the potential analogies of company towns, common carriers, and editorial functions, this essay provides a blueprint for how we may envision differentiated intermediary liability rules to govern social networking sites in a responsive manner.</p>
<h2>Introduction</h2>
<p style="text-align: justify; ">Only months after Donald Trump’s 2016 election victory — a feat mired in controversy over alleged Russian interference using social media, specifically Facebook — Mark Zuckerberg remarked that his company has grown to serve a role more akin to government, rather than a corporation. Zuckerberg argued that Facebook was responsible for creating guidelines and rules that governed the exchange of ideas of over two billion people online. Another way to look at the same argument is to acknowledge that, today, a quarter of the world’s population (and of India) are subject to the laws of Facebook’s terms and conditions and privacy policies, and public discourse around the globe is shaped within the constraints and conditions they create. Social media platforms, like Facebook, wield hitherto unimaginable power to catalyze public opinions, causing a particular narrative to gather steam — that Big Tech can pose an existential threat to democracy.</p>
<p style="text-align: justify; "><span>This, of course, is in absolute contrast to the early utopian visions which imagined that the internet would create a more informed public, facilitate citizen-led engagement, and democratize media. Instead, what we see now is the growing association of social media platforms with political polarization and the entrenchment of racism, homophobia, and xenophobia. The regulation of social networking sites has emerged as one of the most important and complex policy problems of this time. In this essay, I will explore the inefficacy of the existing regulatory framework, and provide a blueprint for how to think of appropriate regulatory metaphors to revisit it.</span></p>
<hr />
<ul>
<li><a class="external-link" href="https://itforchange.net/digital-new-deal/2020/11/01/beyond-public-squares-dumb-conduits-and-gatekeepers-the-need-for-a-new-legal-metaphor-for-social-media/"> Click on to read the article</a> published by IT for Change</li>
<li><a href="http://editors.cis-india.org/internet-governance/files/beyond-public-squares-dumb-conduits-and-gatekeepers.pdf" class="external-link">Download the PDF</a> (34,328 Kb) to read the full article, pages 126 - 138.</li>
</ul>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-beyond-public-squares-dumb-conduits-and-gatekeepers'>http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-beyond-public-squares-dumb-conduits-and-gatekeepers</a>
</p>
No publisher | amber | Social Media, Internet Governance | 2021-05-31T10:23:36Z | Blog Entry
Beyond Public Squares, Dumb Conduits, and Gatekeepers: The Need for a New Legal Metaphor for Social Media
http://editors.cis-india.org/internet-governance/files/beyond-public-squares-dumb-conduits-and-gatekeepers.pdf
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/files/beyond-public-squares-dumb-conduits-and-gatekeepers.pdf'>http://editors.cis-india.org/internet-governance/files/beyond-public-squares-dumb-conduits-and-gatekeepers.pdf</a>
</p>
No publisher | amber | Internet Governance | 2021-05-31T10:19:33Z | File
Regulating Sexist Online Harassment as a Form of Censorship
http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-regulating-sexist-online-harassment
<b>This paper is part of a series under IT for Change’s project, Recognize, Resist, Remedy: Combating Sexist Hate Speech Online. The series, titled Rethinking Legal-Institutional Approaches to Sexist Hate Speech in India, aims to create a space for civil society actors to proactively engage in the remaking of online governance, bringing together inputs from legal scholars, practitioners, and activists. The papers reflect upon the issue of online sexism and misogyny, proposing recommendations for appropriate legal-institutional responses. The series is funded by EdelGive Foundation, India and International Development Research Centre, Canada.</b>
<p><span>Introduction</span></p>
<p style="text-align: justify; ">The proliferation of internet use was expected to facilitate greater online participation of women and <a class="external-link" href="https://ssrn.com/abstract=2039116">other marginalised groups</a>. However, over the past few years, as more and more people have come online, it is evident that social power in online spaces mirrors offline hierarchies. While identity and security thefts may be universal experiences, women and the LGBTQ+ community continue to face barriers to safety that men often do not, aside from structural barriers to access. Sexist harassment pervades the online experience of women, be it on dating sites, <a class="external-link" href="https://academic.oup.com/bjc/article/57/6/1462/2623986">online forums, or social media</a>.</p>
<p style="text-align: justify; ">In her book, <i><a class="external-link" href="https://yalebooks.yale.edu/book/9780300215120/twitter-and-tear-gas">Twitter and Tear Gas: The Power and Fragility of Networked Protest</a></i>, Zeynep Tufekci argues that the nature and impact of censorship on social media are very different. Earlier, censorship was enacted by restricting speech. But now, it also works in the form of organised harassment campaigns, which use the qualities of viral outrage to impose a disproportionate cost on the very act of speaking out. Therefore, censorship plays out not merely in the form of the removal of speech but through disinformation and hate speech campaigns.</p>
<p style="text-align: justify; ">In most cases, this censorship of content does not necessarily meet the threshold of hate speech, and free speech advocates have traditionally argued for counter speech as the most effective response to such speech acts. However, the structural and organised nature of harassment and extreme speech often renders counter speech ineffective. This paper will explore the nature of online sexist hate and extreme speech as a mode of censorship. Online sexualised harassment takes various forms including doxxing, cyberbullying, stalking, identity theft, incitement to violence, etc. While there are some regulatory mechanisms – either in law, or in the form of community guidelines that address them, this paper argues for the need to evolve a composite framework that looks at the impact of such censorious acts on online speech and regulatory strategies to address them.</p>
<hr />
<p style="text-align: justify; "><a href="http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf/at_download/file" class="external-link">Click on to read the full text</a> [PDF; 495 Kb]</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-regulating-sexist-online-harassment'>http://editors.cis-india.org/internet-governance/blog/it-for-change-amber-sinha-regulating-sexist-online-harassment</a>
</p>
No publisher | amber | Freedom of Speech and Expression, Internet Governance, Censorship | 2021-05-31T09:56:31Z | Blog Entry
Regulating Sexist Online Harassment: A Model of Online Harassment as a Form of Censorship
http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf'>http://editors.cis-india.org/internet-governance/files/it-for-change-february-2021-amber-sinha-regulating-sexist-online-harassment.pdf</a>
</p>
No publisher | amber | Freedom of Speech and Expression, Internet Governance, Censorship | 2021-05-31T09:39:14Z | File
Women on Covid lists get lewd calls and messages
http://editors.cis-india.org/internet-governance/news/deccan-herald-may-21-2021-krupa-joseph-women-on-covid-lists-get-lewd-calls-and-messages
<b>Perverts are eating into precious time in the middle of a pandemic and adding to the overall anxiety.</b>
<p><span>Women are getting lewd calls and messages when they share their phone numbers to seek and offer pandemic-related help.</span></p>
<p style="text-align: justify; ">On April 15, Shasvathi Siva tweeted about how her number, shared on blood donation and social media groups, received obscene photos and video calls from strangers.</p>
<p>When she spoke about the harassment on Instagram, she ended up receiving more abuse from men.</p>
<p>With the second wave of the pandemic raging, many patients and families are turning to social media to search for medicines, oxygen, and even hospital beds.</p>
<p style="text-align: justify; ">Ambika Tandon, senior researcher, Centre for Internet and Society, says, “There are many stories of how prominent and outspoken women like journalists and activists have received hate speech and messages threatening violence.” What is shocking, she says, is not the harassment, but that it is not stopping even during a medical emergency.</p>
<hr />
<p><a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/women-on-covid-lists-get-lewd-calls-and-messages-988523.html"> Click to read</a> the complete coverage in Deccan Herald on May 21, 2022.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/deccan-herald-may-21-2021-krupa-joseph-women-on-covid-lists-get-lewd-calls-and-messages'>http://editors.cis-india.org/internet-governance/news/deccan-herald-may-21-2021-krupa-joseph-women-on-covid-lists-get-lewd-calls-and-messages</a>
</p>
No publisher | praskrishna | Gender, Internet Governance | 2021-05-24T06:35:20Z | News Item
Comments and recommendations to the Guidelines for “Influencer Advertising on Digital Media”
http://editors.cis-india.org/internet-governance/blog/comments-and-recommendations-to-the-guidelines-for-201cinfluencer-advertising-on-digital-media201d
<b>In February, the Advertising Standards Council of India (ASCI) issued draft rules for the regulation of digital influencers, with an aim to "understand the peculiarities of [online] advertisements and the way consumers view them", as well as to ensure that "consumers must be able to distinguish when something is being promoted with an intention to influence their opinion or behaviour for an immediate or eventual commercial gain". In light of this, we presented our responses.</b>
<p dir="ltr"><em><br /></em></p>
<p dir="ltr"><em>The authors would like to thank Merrin Muhammed for research assistance, and Pranav MB for editorial assistance. </em></p>
<h2>Introduction</h2>
<p dir="ltr">The Centre for Internet and Society (CIS) is a non-profit research organisation that works extensively on policy issues relating to privacy, freedom of expression, accessibility for persons with diverse abilities, access to knowledge, intellectual property rights and openness. In the past, CIS has also engaged with and contributed to an extensive body of work in India, concerning intermediary liability, regulation of social media and platform governance. The research at CIS seeks to understand the reconfiguration of social processes and structures through the internet and digital media technologies, and vice versa.</p>
<p dir="ltr">Please find below our recommendations for the Guidelines for "Influencer advertising on digital media" [“the Guidelines”]. The first section summarizes a few of our specific comments and concerns with the Guidelines, while the second section brings up a few other general observations that the ASCI ought to take into account. CIS is grateful for the opportunity to submit its views. </p>
<h2>High-level comments</h2>
<h3 dir="ltr">Operation of these Guidelines vis-a-vis the Consumer Protection Act, 2019</h3>
<p dir="ltr">The Consumer Protection Act, 2019 [“the Act”], makes provisions for regulating ‘advertisements’ and ‘endorsements.’ For instance, section 2(1) of the Act defines advertisements as: </p>
<p dir="ltr">“[...] any audio or visual publicity, representation, endorsement or pronouncement made by means of light, sound, smoke, gas, print, electronic media, internet or website and includes any notice, circular, label, wrapper, invoice or such other documents;”</p>
<p dir="ltr">Further, section 2(18) of the Act defines endorsement, in relation to an advertisement as:</p>
<p dir="ltr">“[...] (i) any message, verbal statement, demonstration; or </p>
<p dir="ltr">(ii) depiction of the name, signature, likeness or other identifiable personal characteristics of an individual; or </p>
<p dir="ltr">(iii) depiction of the name or seal of any institution or organisation, </p>
<p dir="ltr">which makes the consumer to believe that it reflects the opinion, finding or experience of the person making such endorsement.”</p>
<p dir="ltr">Additionally the Central Consumer Protection Authority (CCPA) is vested with the power to conduct investigations in instances of false or misleading advertisements, order discontinuation or modification of advertisements, and impose penalties. </p>
<p dir="ltr">We believe these provisions are expansive enough to cover those aspects of influencer advertising that the ASCI is intending to regulate. In light of this, it is important for the ASCI to clarify how the Complaints Procedure set up in the original ‘The Code for Self Regulation’ would operate vis-a-vis the power of the CCPA. </p>
<h2>Proposed Guidelines</h2>
<h3>Definition</h3>
<p><em><strong>More specific definitions for Digital Media </strong></em></p>
<p dir="ltr">While it is commendable that the Guidelines identify a multitude of entities and services to encompass the definition for ‘Digital Media,’ we must highlight that these definitions are currently ambiguous. For instance, the Guidelines do not make it clear what Near Video on Demand, Subscription Video on Demand, Pay Per View, etc. are. These are pertinent details that would help consumers identify the nature of the viewed content, as well as allow influencers and brands to make clearer advertisement decisions. </p>
<p dir="ltr">Additionally, in light of the notification of The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [“the 2021 rules”], which encompass online curated content providers (OCCPs), it is important for the Guidelines to clarify the relationship between its identified Digital Media entities and the OCCPs under the relevant law. While we recognize that the obligations for the different entities under the Guidelines and the 2021 rules are distinct, the lack of clarification might lead to a confusing ecosystem of regulatory obligations for entities that can be assuaged at this stage. </p>
<p><em><strong>Influencer </strong></em></p>
<p dir="ltr">The Guidelines define “Influencers” as “someone who has access to an audience and the power to affect their audience's purchasing decisions or opinions about a product, service, brand or experience, because of the influencer's authority, knowledge, position, or relationship with their audience, An influencer can intervene in an editorial context or in collaboration with a brand to publish content.” Although this definition is all encompassing, it could lead to confusion among users of social media on the matter of whether they are Influencers or not, since the Guidelines don’t mention any specific audience thresholds that serve as a prerequisite for qualifying under the Guidelines. The confusion also extends to the existing definition of “Celebrities” under the ASCI Guidelines For Celebrities In Advertising. </p>
<p dir="ltr">The Guidelines For Celebrities In Advertising state that: </p>
<p dir="ltr">“Celebrities” are defined as famous and well-known people who are from the field of Entertainment and Sports and would also include other famous and well-known personalities like Doctors, Authors, Activists, Educationists, etc. who get compensated for appearing in advertising. </p>
<p dir="ltr">The definition is substantiated by an endnote which states that a celebrity is one who is </p>
<p dir="ltr">“*Compensated Rs. 20 lakhs or above as per current limit for appearing in a single advertisement or a campaign or per year, whichever is more AND / OR is listed in top 100 celebrities as per any one of the current and immediate past list of Forbes or the Times or Celebrity track report of Hansa Research or any such list which is intended to be indicative and not exhaustive.”</p>
<p dir="ltr">We believe that a more clearer definition of “Influencers” similar to the definition of “Celebrities” in the Guidelines with markers such as verification, number of followers, income from posts per year etc., could be used to highlight who these Guidelines apply to. This will benefit the Influencer, the user, and the complaint handling authority.</p>
<p dir="ltr"><em><strong>Details of specific media channels</strong></em></p>
<p dir="ltr">In the chapter ‘Ready reckoner for specific media channels,’ the Guidelines mention a catalogue of places and instances where such disclosure ought to be made, for specific media channels. While the Guidelines mention the exact details for Facebook, and Instagram (including reels, stories, etc.), these details are missing for some of the other media channels mentioned, including Twitter, Pinterest, and Snapchat. </p>
<p dir="ltr">For Twitter, the Guidelines state: “Include the disclosure label or tag at the beginning of the body of the message as a tag.” Similar directions are given for promotions to be done via Pinterest. and Snapchat, where the disclosure is ought to be in the ‘message.’ However, the main method of communication on all these platforms is via other methods, and not ‘messages.’ Since this direction does not clarify where the disclosure ought to be, it has the potential to create confusion for both influencers, and brands on how best to comply with the Guidelines. Hence, we believe that the Guidelines should be updated to reflect the exact specifications of the media channels, and the places where the disclosures ought to be made. </p>
<h2>Other Comments</h2>
<h3 dir="ltr"><em>The need for some guidelines on advertisements directed at children</em></h3>
<p dir="ltr">It is estimated that as of February 2021, 10.6 percent of Instagram users in India are from the age group of 13-17 years. Hence there is a need to look at responsible advertising as well as think of the products that the influencers advertise. Additionally, a large number of influencers’ posts are targeted at children and teenagers, which increases their responsibility connected to advertisements. The draft Personal Data Protection Bill, 2019 prohibits guardian data fiduciaries, i.e. data fiduciaries who operate commercial websites, or online services directed at children (or process large volumes of personal data of children) from profiling, tracking, or behavioural monitoring of, or targeted advertising directed at, children and undertaking any other processing of personal data that can cause significant harm to the child. Though this is a good move, the obligation to not target advertisements at children is not extended to all data fiduciaries. While we do understand that it is difficult to gauge which posts are being viewed by children, the Guidelines could recommend that the Influencers who are aware of their main demographic being children, or teenagers, must take more care in the products they endorse, and take greater care to make the children aware that the post they are sharing is an advertisement. </p>
<p dir="ltr">Additionally we suggest that based on the control that the brands have in terms of content and decision making, and choose the influencer they want to engage with the brands could also ensure the correct audience for their product. Hencer along with the influencer the brand should also take care to ensure who the influencers main demographic are and see if the product is suited for that age group. </p>
<p dir="ltr">A PDF version of this response can be accessed <a class="external-link" href="https://cis-india.org/internet-governance/files/influencers-guidelines-comments">here</a>.</p>
<p dir="ltr"> </p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/comments-and-recommendations-to-the-guidelines-for-201cinfluencer-advertising-on-digital-media201d'>http://editors.cis-india.org/internet-governance/blog/comments-and-recommendations-to-the-guidelines-for-201cinfluencer-advertising-on-digital-media201d</a>
</p>
No publisherTorsha Sarkar and Shweta MohandasDigital AdvertisementsInternet Governance2021-04-05T09:58:12ZBlog EntryNew intermediary guidelines: The good and the bad
http://editors.cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad
<b>In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about by the Rules, and how they line up with existing principles of best practices in content moderation, among others.</b>
<p>This article originally appeared in the Down to Earth <a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693">magazine</a>. Reposted with permission.</p>
<hr />
<p>The Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules operate in supersession of the existing intermediary liability rules under the Information Technology (IT) Act, made back in 2011.</p>
<p>These IL rules would have a significant impact on our relationships with internet ‘intermediaries’, i.e. gatekeepers and gateways to the internet, including social media platforms and communication and messaging channels.</p>
<p>The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.</p>
<p>These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries changed significantly.</p>
<p>The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.</p>
<p>Both Twitter and Facebook, for instance, have locked the former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.</p>
<p><strong>What changed for the good?</strong></p>
<p>One of the immediate standouts of these rules is the more granular way in which they aim to approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.</p>
<p>Intermediaries in the directive were treated more like ‘simple conduits’ or dumb, passive carriers who did not play any active role in the content. While that might have been the truth of the internet when these laws and rules were first enacted, the internet today looks much different.</p>
<p>Not only is there a diversification of services offered by these intermediaries, there is also a significant issue of scale, wielded by a few select players, either through centralisation or through the sheer size of their user bases. A broad, general mandate would, therefore, miss out on many of these nuances, leading to imperfect regulatory outcomes.</p>
<p>The new rules, therefore, envisage three types of entities:</p>
<ul><li>There are the ‘intermediaries’ within the traditional, section 2(w) meaning of the IT Act. This is the broad umbrella term for all entities that fall within the ambit of the rules.</li><li>There are the ‘social media intermediaries’ (SMIs): entities which enable online interaction between two or more users.</li><li>The rules identify ‘significant social media intermediaries’ (SSMIs): entities with user thresholds as notified by the Central Government.</li></ul>
<p>The levels of obligation vary based on these hierarchies of classification. For instance, an SSMI would be held to a much higher standard of transparency and accountability towards its users. SSMIs would have to comply by publishing six-monthly transparency reports, in which they outline how they dealt with requests for content removal, how they deployed automated tools to filter content, and so on.</p>
<p>I have previously argued how transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorships. Legally mandating this is then perhaps a step in the right direction.</p>
<p>Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.</p>
<p>One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries would effectively have been required to filter for all unlawful content. This was problematic on two counts:</p>
<ul><li>Developments in machine learning technologies are simply not mature enough to make this a possibility, which means there would always be a chance that legitimate and legal content would get censored, leading to a general chilling effect on digital expression.</li><li>The technical and financial burden this would impose on intermediaries would have impacted competition in the market.</li></ul>
<p>The new rules seem to have lessened this burden: first, by reducing the obligation from a mandatory one to a best-endeavour basis; and second, by reducing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse material (CSAM), and duplicates of already disabled or removed content.</p>
<p>This specificity would be useful for better deployment of such technologies, since previous research has shown that it is considerably easier to train a machine learning tool on a corpus of CSAM or abuse than on more contextual, subjective matters such as hate speech.</p>
<p><strong>What should go?</strong></p>
<p>That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through proposals for a three-tiered self-regulatory body and schedules outlining guidelines for the rating system these entities should deploy.</p>
<p>In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.</p>
<p>It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.</p>
<p>Notably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultation is problematic.</p>
<p>Part III and onwards of the rules, which broadly deal with the regulation of these entities, therefore, should be put on hold and opened up for a period of public and stakeholder consultations to adhere to the true spirit of democratic participation.</p>
<p><em>The author would like to thank Gurshabad Grover for his editorial suggestions. </em></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad'>http://editors.cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad</a>
</p>
No publisher | TorShark | IT Act, Intermediary Liability, Internet Governance, Censorship, Artificial Intelligence | 2021-03-15T13:52:46Z | Blog Entry
Response to the Pegasus Questionnaire issued by the SC Technical Committee
http://editors.cis-india.org/internet-governance/blog/response-to-pegasus-questionnaire-issued-by-sc-technical-committee
<b>On March 25, 2022, the Supreme Court-appointed Technical Committee, constituted to examine the allegations of unauthorised surveillance using the Pegasus software, released a questionnaire seeking responses and comments from the general public.</b>
<p style="text-align: justify; ">The questionnaire had 11 questions and the responses had to be submitted through an online form- which was available <a class="external-link" href="https://pegasus-india-investigation.in/invitation-to-comment/-">here</a>. The last date for submitting the response was March 31, 2022. CIS had submitted the following responses to the questions in the questionnaire. Access the <b><a href="http://editors.cis-india.org/internet-governance/response-to-the-pegasus-investigation" class="internal-link">Response to the Questionnaire</a></b></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/response-to-pegasus-questionnaire-issued-by-sc-technical-committee'>http://editors.cis-india.org/internet-governance/blog/response-to-pegasus-questionnaire-issued-by-sc-technical-committee</a>
</p>
No publisher | Anamika Kundu, Digvijay, Arindrajit Basu, Shweta Mohandas and Pallavi Bedi | IT Act, Surveillance, Internet Governance, Privacy | 2022-04-13T14:45:41Z | Blog Entry
CIS Seminar Series
http://editors.cis-india.org/internet-governance/blog/cis-seminar-series
<b>The CIS seminar series will be a venue for researchers to share works-in-progress, exchange ideas, identify avenues for collaboration, and curate research. We also seek to mitigate the impact of Covid-19 on research exchange, and foster collaborations among researchers and academics from diverse geographies. Every quarter we will be hosting a remote seminar with presentations, discussions and debate on a thematic area. </b>
<p style="text-align: justify; ">The first seminar series was held on 7th and 8th October on the theme of <a href="https://cis-india.org/internet-governance/blog/cis-seminar-series-information-disorder">‘Information Disorder: Mis-, Dis- and Malinformation’</a>,</p>
<h3 style="text-align: justify; ">Theme for the Second Seminar (to be held online)</h3>
<h3>Moderating Data, Moderating Lives: Debating visions of (automated) content moderation in the contemporary</h3>
<p style="text-align: justify; ">Artificial Intelligence (AI) and Machine Learning (ML) based approaches have become increasingly popular as “solutions” to curb the extent of mis-, dis-, and mal-information, hate speech, online violence and harassment on social media. The pandemic and the ensuing work-from-home policies forced many platforms to shift to automated moderation, which further highlighted the inefficacy of existing models <a href="https://www.zotero.org/google-docs/?u73Lwx">(Gillespie, 2020)</a> in dealing with the surge in misinformation and harassment. These efforts, however, raise a range of interrelated concerns, such as freedom and regulation of speech in the privately public sphere of social media platforms; algorithmic governance, censorship and surveillance; the relation between virality, hate, algorithmic design and profits; and the social, political and cultural implications of ordering social relations through the computational logics of AI/ML.</p>
<p style="text-align: justify; ">On the one hand, large-scale content moderation approaches (including automated AI/ML-based approaches) have been deemed “necessary” given the enormity of data generated <a href="https://www.zotero.org/google-docs/?JHQ0rF">(Gillespie, 2020)</a>; on the other hand, they have been regarded as “technological fixtures” offered by Silicon Valley <a href="https://www.zotero.org/google-docs/?YLFnLm">(Morozov, 2013)</a>, or as “tyrannical” in that they erode existing democratic measures <a href="https://www.zotero.org/google-docs/?Ia8JYp">(Harari, 2018)</a>. Alternatively, decolonial, feminist and postcolonial approaches insist on designing AI/ML models that centre the voices of those excluded, to sustain and further civic spaces on social media <a href="https://www.zotero.org/google-docs/?1Sa8vf">(Siapera, 2022)</a>.</p>
<p style="text-align: justify; ">From a global south perspective, issues around content moderation foreground the hierarchies built into existing knowledge infrastructures. First, platforms remain unwilling to moderate content in under-resourced languages of the global south, citing technological difficulties. Second, given the scale and reach of social media platforms and inefficient moderation models, the work is outsourced to workers in the global south, who are meant to do the dirty work of scavenging content off these platforms for the global north. Such concerns allow us to interrogate techno-solutionist approaches as well as their critiques situated in the global north. These realities demand that we articulate a different relationship with AI/ML while also being critical of AI/ML as an instrument of social empowerment for those at the “bottom of the pyramid” <a href="https://www.zotero.org/google-docs/?bvx6mV">(Arora, 2016)</a>.</p>
<p style="text-align: justify; ">The seminar invites scholars interested in articulating nuanced responses to content moderation that take into account the harms perpetrated by algorithmic governance of social relations and irresponsible intermediaries, while being cognizant of the harmful effects of mis-, dis-, and mal-information, hate speech, online violence and harassment on social media.</p>
<p style="text-align: justify; ">We invite abstract submissions that respond to these complexities vis-a-vis content moderation models or propose provocations regarding automated moderation models and their in/efficacy in furthering egalitarian relationships on social media, especially in the global south.</p>
<p style="text-align: justify; ">Submissions can reflect on the following themes using legal, policy, social, cultural and political approaches. The list is not exhaustive, and abstracts addressing other ancillary concerns are most welcome:</p>
<ul>
<li>Metaphors of (content) moderation: mediating utopia, dystopia, scepticism surrounding AI/ML approaches to moderation.</li>
<li>From toxic to healthy, from purity to impurity: Interrogating gendered, racist, colonial tropes used to legitimize content moderation </li>
<li>Negotiating the link between content moderation, censorship and surveillance in the global south</li>
<li>Whose values decide what is and is not harmful? </li>
<li>Challenges of building moderation models for under-resourced languages.</li>
<li>Content moderation, algorithmic governance and social relations. </li>
<li>Communicating algorithmic governance on social media to the not so “tech-savvy” among us.</li>
<li>Speculative horizons of content moderation and the future of social relations on the internet. </li>
<li>Scavenging abuse on social media: Immaterial/invisible labour for making for-profit platforms safer to use.</li>
<li>Do different platforms moderate differently? Interrogating content moderation on diverse social media platforms, and multimedia content.</li>
<li>What should and should not be automated? Understanding prevalence of irony, sarcasm, humour, explicit language as counterspeech.</li>
<li>Maybe we should not automate: Alternative, bottom-up approaches to content moderation</li>
</ul>
<h3>Seminar Format</h3>
<p>We are happy to welcome abstracts for one of two tracks:</p>
<p><strong>Working paper presentation</strong></p>
<p style="text-align: justify; ">A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.</p>
<p><strong>Coffee-shop conversations</strong></p>
<p style="text-align: justify; ">In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.</p>
<p style="text-align: justify; ">We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.</p>
<p>All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.</p>
<p>Please send your abstracts to <a href="mailto:workshops@cis-india.org">workshops@cis-india.org</a>.</p>
<h3>Timeline</h3>
<ol>
<li>Abstract Submission Deadline: 18th April</li>
<li>Results of Abstract review: 25th April</li>
<li>Full submissions (of draft papers): 25th May</li>
<li>Seminar date: Tentative 31st May</li>
</ol>
<h3>References</h3>
<p style="text-align: justify; "><a href="https://www.zotero.org/google-docs/?ZHb88g">Arora, P. (2016). Bottom of the Data Pyramid: Big Data and the Global South. <i>International Journal of Communication</i>, <i>10</i>(0), 19.</a></p>
<p style="text-align: justify; "><a href="https://www.zotero.org/google-docs/?ZHb88g">Gillespie, T. (2020). Content moderation, AI, and the question of scale. <i>Big Data & Society</i>, <i>7</i>(2), 2053951720943234. https://doi.org/10.1177/2053951720943234</a></p>
<p style="text-align: justify; "><a href="https://www.zotero.org/google-docs/?ZHb88g">Harari, Y. N. (2018, August 30). <i>Why Technology Favors Tyranny</i>. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/</a></p>
<p style="text-align: justify; "><a href="https://www.zotero.org/google-docs/?ZHb88g">Morozov, E. (2013). <i>To save everything, click here: The folly of technological solutionism</i> (First edition). PublicAffairs.</a></p>
<p style="text-align: justify; "><a href="https://www.zotero.org/google-docs/?ZHb88g">Siapera, E. (2022). AI Content Moderation, Racism and (de)Coloniality. <i>International Journal of Bullying Prevention</i>, <i>4</i>(1), 55–65. https://doi.org/10.1007/s42380-021-00105-7</a></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/cis-seminar-series'>http://editors.cis-india.org/internet-governance/blog/cis-seminar-series</a>
</p>
Cheshta Arora | Internet Governance, Machine Learning, Artificial Intelligence, Event, Seminar | 2022-04-11T15:19:11Z | Blog Entry

Pandemic Technology takes its Toll on Data Privacy
http://editors.cis-india.org/internet-governance/blog/deccan-herald-aman-nair-and-pallavi-bedi-june-13-2021-pandemic-technology-takes-its-toll-on-data-privacy
<b>The absence of any legal framework has meant these tools are now being used for purposes beyond managing the pandemic.</b>
<p style="text-align: center; ">The article by Aman Nair and Pallavi Bedi was <a class="external-link" href="https://www.deccanherald.com/specials/pandemic-technology-takes-its-toll-on-data-privacy-996870.html">published in the Deccan Herald</a> on June 13, 2021.</p>
<hr />
<p style="text-align: center; "><img src="http://editors.cis-india.org/home-images/ArogyaSetuApp.jpg" alt="Arogya Setu App" class="image-inline" title="Arogya Setu App" /></p>
<p style="text-align: center; "><span class="discreet">People show the Aarogya Setu app installed on their phones while travelling by the special New Delhi-Bilaspur train from New Delhi Railway Station. Credit: PTI File Photo<br /></span></p>
<p style="text-align: center; "><img src="http://editors.cis-india.org/home-images/CovidCertificate.jpg/@@images/672b385b-d0b0-49af-953d-ae96a42be117.jpeg" alt="Covid Certificate" class="image-inline" title="Covid Certificate" /></p>
<p style="text-align: center; "><span class="discreet">Jabalpur: A beneficiary shows his certificate on his mobile phone after receiving COVID-19 vaccine dose, at Gyan Ganga College in Jabalpur, Saturday, May 15, 2021. (PTI Photo)</span></p>
<p style="text-align: justify; ">At a time when technology is spawning smart solutions to combat Covid-19 worldwide, India’s digital response to the pandemic has stoked concerns that surveillance could pose threats to the privacy of the personal data collected. Be it apps or drones, there is widespread criticism that digital tools are being misused to share information without knowledge or consent. At the other end of the spectrum, the great urban-rural digital divide is hampering the already sluggish vaccination drive, exposing vulnerable populations to a fast-mutating virus.</p>
<p style="text-align: justify; ">Last year, the Centre, states and municipal corporations launched more than 70 apps relating to Covid-19, demonstrating the country’s digital-driven approach to handling the pandemic. Chief among these was the central government’s contact tracing app Aarogya Setu. Launched under the Digital India programme, the app quickly came under scrutiny over data privacy.</p>
<p style="text-align: justify; ">As per its privacy policy, Aarogya Setu collects personal details such as name, age, sex, profession and location. As there is no underlying legislation forming its basis, and in the absence of a personal data protection bill, serious privacy concerns regarding the collection, storage and use of personal data have been raised.</p>
<p style="text-align: justify; ">The government has attempted to mitigate these concerns with reassurances that the data will be used solely in tracing the spread of the virus. However, recent reports from the Kulgam district of Jammu and Kashmir point to the sharing of application data with police. This demonstrates how easy it is to use personal data for purposes other than which it was collected, and presents a serious threat to citizen privacy.</p>
<p style="text-align: justify; ">Though Aarogya Setu was initially launched as ‘consensual’ and ‘voluntary’, it soon became mandatory for individuals to download the app for various purposes such as air and rail travel (this order was subsequently withdrawn) and for government officials. Initially it was also mandatory for the private sector, but this was later watered down to state that employers should, on a ‘best effort basis', ensure that the app is downloaded by all employees having compatible phones. However, the ‘best effort basis’ soon translated into mandatory imposition for certain individuals, especially those working in the ‘gig economy’.</p>
<p style="text-align: justify; ">Several states had also launched apps for various purposes, ranging from contact tracing of suspected Covid patients to monitoring the movement of quarantined patients. As a report by the Centre for Internet and Society observed, given the attention on Aarogya Setu, most of the apps launched by state governments escaped scrutiny and public attention. Most of these apps either did not have a privacy policy, or the policy was vague and often did not provide important details such as who was collecting the data, the time period for retaining the data, and whether personal data could be shared with other departments, most notably law enforcement.</p>
<p style="text-align: justify; ">Apart from contact tracing apps, the pandemic also ushered in a wave of other apps and digital tools by the government. These include systems such as drones to check whether people are following Covid-19 norms and facial recognition cameras to report to the police whether someone has broken quarantine. Similar to Aarogya Setu, these tools have also largely been brought about in the absence of a legal and regulatory framework. The absence of any legal framework has meant these tools are now being used for purposes beyond managing the pandemic.</p>
<p style="text-align: justify; ">The government is now planning to use facial recognition technology along with Aadhaar to authenticate people before giving them vaccine shots.</p>
<p style="text-align: justify; ">Aarogya Setu is now linked with the vaccination process. Beneficiaries have been provided an option to register through Aarogya Setu. The pandemic has also provided a means for the government to bring in changes to health policies and introduce the National Health Data Management Policy for the creation of a Unique Health Identity Number for citizens.</p>
<h3 style="text-align: justify; ">Vaccination and digital platforms</h3>
<p style="text-align: justify; ">The use of digital technology has extended to the vaccination process through the deployment of the Covid Vaccine Intelligence Network (Co-WIN) platform. During the first phase of inoculation, beneficiaries were required to register on the Co-WIN app, while in the subsequent phases, registration was to be done on the Co-WIN website. The beneficiary is required to upload a photo identity proof.</p>
<p style="text-align: justify; ">While Aadhaar has been identified as one of the seven documents that can be uploaded for this, the Health Ministry has clarified that Aadhaar is not mandatory for registration either through Co-WIN or through Aarogya Setu. However, as per media reports, certain vaccination centres still seem to insist on Aadhaar identity even though beneficiaries may have used another identity proof to register on the Co-WIN website.</p>
<p style="text-align: justify; ">It is also pertinent to note that the website did not have a privacy policy till the Delhi High Court issued directions on June 2, 2021. The privacy policy hyperlinked on the Co-WIN app directed the user to the Health Data Policy of the National Health Data Management Policy, 2020.</p>
<p style="text-align: justify; ">The vaccination drive has been used as a means to push the health identity project forward, as beneficiaries who have opted to provide Aadhaar identity proof have also been provided with a health identity number on their vaccination certificate. It is interesting to note that Co-WIN’s privacy policy now states that if the beneficiary uses Aadhaar as identity proof, they can 'opt' to get a Unique Health ID. However, as a recent report revealed, health identity numbers have already been generated for certain beneficiaries without obtaining their consent for the purpose.</p>
<h3 style="text-align: justify; ">Have the apps been successful?</h3>
<p style="text-align: justify; ">One could argue that privacy concerns are a worthwhile tradeoff in order to contain the spread of the pandemic. But it is worth examining how successful these technologies have been. In reality, the use of digital technology at every stage of combating the pandemic has clearly highlighted the extent of our digital divide. As per data from TRAI, there are around 750 million Internet subscribers in India, only a little more than half of India’s estimated 1.3 billion citizens, and this gap has had a significant impact on the efficacy of the government’s strategies. Aarogya Setu has fallen far short of its goal of near-universal adoption, with limited uptake in much of the country, which has severely limited its efficacy in tracing the spread of the virus. Research from Maulana Azad Medical College has cited socio-economic inequalities, educational barriers and the lack of smartphone penetration as the key causes behind the app’s limited success, pointing back to the digital divide. Moreover, the app has also brought with it a host of associated problems, including lateral surveillance and function creep caused by the addition of new features, all of which, along with the previously mentioned privacy concerns, have served to hamper public trust and adoption.</p>
<p style="text-align: justify; ">A similar situation is seen in the case of vaccination and the Centre’s Co-WIN web portal. The need for registration, first on the Co-WIN app and later on the Co-WIN web portal, has disproportionately affected those who either have no or limited digital access. Many of them belong to vulnerable groups such as migrant and informal sector workers (mainly from disadvantaged castes), LGBTQIA + individuals, sex workers and both urban and rural poor. These issues have also been acknowledged by the Supreme Court, which raised serious concerns about the government being able to achieve its stated object of universal vaccination.</p>
<p style="text-align: justify; ">As the inoculation exercise opened up to the 18-45 age group, it increasingly favoured the urban population, who possessed the technological and digital literacy to either create or access a host of tools. One need only look at the wave of automated Co-WIN bots that arose as soon as the vaccination process was expanded to see how these dynamics manifested.</p>
<p style="text-align: justify; ">Ultimately, the digital-driven approach that the governments have adopted has resulted in a number of issues, most notably data privacy and exclusion. Going forward, government strategies must actively account for these factors and ensure that citizens’ rights are adequately protected.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/deccan-herald-aman-nair-and-pallavi-bedi-june-13-2021-pandemic-technology-takes-its-toll-on-data-privacy'>http://editors.cis-india.org/internet-governance/blog/deccan-herald-aman-nair-and-pallavi-bedi-june-13-2021-pandemic-technology-takes-its-toll-on-data-privacy</a>
</p>
Aman Nair and Pallavi Bedi | Health Tech, Privacy, Internet Governance, Technological Protection Measures, Covid19, Healthcare | 2021-06-26T06:52:52Z | Blog Entry

Insult to Kannada shows Google AI in a poor light
http://editors.cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light
<b>A Google search for ‘the ugliest language in India’ yielded ‘Kannada’ as the answer late last week, causing widespread outrage.</b>
<p>The article by Krupa Joseph was <a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/insult-to-kannada-shows-google-ai-in-a-poor-light-995307.html">published in Deccan Herald</a> on June 8, 2021. Pranesh Prakash and Shweta Mohandas have been quoted.</p>
<hr />
<p>Google has since apologised, saying the answer does not reflect its views, but questions still remain about why this happened at all, and who drafted the answer.</p>
<p style="text-align: justify; ">“When artificial intelligence gets it wrong, things can go really wrong,” says tech entrepreneur Hari Prasad Nadig, who has worked on Kannada in free and open source software.</p>
<p style="text-align: justify; ">“Usually, you would expect Google to give an answer based on citings from multiple sources, and at least one or two credible sources. Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.</p>
<h3 style="text-align: justify; ">Fallible process</h3>
<p style="text-align: justify; ">Pranesh Prakash, Centre for Internet and Society, Bengaluru, says the incident exposes the fallibility of the process by which Google selects its “featured snippets”.</p>
<p style="text-align: justify; ">“It is not an opinion that Google or its employees or its algorithms have come up with, but rather an existing opinion that Google wrongly amplified,” he says. It demonstrates that the snippets that Google features as ‘facts’ aren’t necessarily based on facts, he says.</p>
<h3 style="text-align: justify; ">Periodic checks</h3>
<p style="text-align: justify; ">Shweta Mohandas, researcher with the Centre for Internet and Society, says Google does not create content, but only provides content that is available on the Internet.</p>
<p style="text-align: justify; ">“Hence, the biases come from the tags that are then used to train the AI. There should be periodic checks on the data fed into the system,” she says. Such blunders can be prevented if the tags and results are audited periodically, and a mechanism is put in place to enable people to report them, she says.</p>
<h3 style="text-align: justify; ">Who was up to mischief?</h3>
<p style="text-align: justify; ">The answer was created on a financial services website whose owners aren’t revealing their names. Pavanaja UB, CEO, Vishva Kannada Softech, says the answer was attributed to a website called debt consolidations questions.com, but he was unable to find this post anywhere on the site. “This is a website registered in Russia and it offers questions and answers on many topics. But this particular page could not be found. Maybe it was removed following the outrage,” he says.</p>
<p style="text-align: justify; ">Pavanaja believes this was a deliberate attempt to upset people. “The website lists no information about the owner and gives no contact details. Even if such a question did exist on the page before, how did it get to the top of the Google search results?” he wonders.</p>
<p style="text-align: justify; ">He suggests that someone planted the answer and kept searching for it until it reached the top. “But who would take so much effort?” he says.</p>
<h3 style="text-align: justify; ">Furore and after</h3>
<p>‘Kannada’ came up as an answer to a query in Google about ‘the ugliest language in India’.</p>
<p style="text-align: justify; ">Aravind Limbavali, minister for Kannada and Culture, demanded an apology from Google, and threatened legal action against the company “for maligning the image of our beautiful language.”</p>
<p>Google removed the answer and issued a statement:</p>
<p style="text-align: justify; ">“We know this is not ideal, but we take swift corrective action when we are made aware of an issue and are continually working to improve our algorithms. Naturally, these are not reflective of the opinions of Google, and we apologise for the misunderstanding and hurting any sentiments."</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light'>http://editors.cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light</a>
</p>
Krupa Joseph | Internet Governance, Artificial Intelligence | 2021-06-26T05:25:38Z | News Item

Twitter's India troubles show tough path ahead for digital platforms
http://editors.cis-india.org/internet-governance/news/dw-june-21-2021-aditya-sharma-twitter-india-troubles-show-tough-path-ahead-for-digital-platforms
<b>Twitter is in a standoff with Indian authorities over the government's new digital rules. Critics see the rules as an attempt to curb free speech, while others say more action is needed to hold tech giants accountable.
</b>
<p style="text-align: justify; ">The blog by Aditya Sharma <a class="external-link" href="https://www.dw.com/en/twitters-india-troubles-show-tough-path-ahead-for-digital-platforms/a-57980916">was published by DW</a> on 21 June 2021. Torsha Sarkar was quoted.</p>
<hr style="text-align: justify; " />
<p style="text-align: justify; "><img src="http://editors.cis-india.org/home-images/Intermediary.jpg/@@images/08eb8de3-4fd6-408f-94d2-3f202da0e730.jpeg" alt="Intermediary" class="image-right" title="Intermediary" /></p>
<p style="text-align: justify; ">Twitter holds a relatively low share of India's social media market. But since 2017, the country has emerged as Twitter's fastest-growing market, becoming critical to its global expansion plans.</p>
<p style="text-align: justify; ">In February, the Indian government <a href="https://www.dw.com/en/india-targets-twitter-whatsapp-with-new-regulatory-rules/a-56708566">introduced new guidelines</a> to regulate digital content on rapidly growing social media platforms.</p>
<p style="text-align: justify; ">The so-called Intermediary Guidelines are aimed at regulating content on internet platforms such as Twitter and Facebook, making them more accountable to legal requests for the removal of posts and sharing information about the originators of messages.</p>
<p style="text-align: justify; ">Employees at these companies can be held criminally liable for not complying with the government's requests.</p>
<p style="text-align: justify; ">Large social media firms must also set up mechanisms to address grievances and appoint executives to liaise with law enforcement under the new rules, as well as appoint an India-based compliance officer who would be held criminally liable for the content on their platforms.</p>
<p style="text-align: justify; ">The Indian government says the rules empower "users who become victims of defamation, morphed images, sexual abuse," among other online crimes. It also said that the rules seek to tackle the problem of disinformation.</p>
<p style="text-align: justify; ">But critics fear that the rules could be used to target government opponents and make sure dissidents don't use the platforms.</p>
<p style="text-align: justify; ">Social media companies were expected to comply with the new rules by May 25.</p>
<p style="text-align: justify; ">Some Indian media reports have recently said that Twitter lost its status as an "intermediary" and the legal protection that came with it, due to its failure to comply with the new rules.</p>
<h3 style="text-align: justify; ">Failure to comply and serious implications</h3>
<p style="text-align: justify; ">Apar Gupta, the executive director of the Internet Freedom Foundation, a New Delhi-based digital rights advocacy group, says failure to comply with the rules could threaten Twitter's India operations.</p>
<p style="text-align: justify; ">"Not complying with the rules would pose a real risk to Twitter's operational environment," he told DW.</p>
<p style="text-align: justify; ">"It will need to go to court to defend itself each time criminal prosecutions are launched against it," he added.</p>
<p style="text-align: justify; ">The first case against Twitter was filed last week; the company was charged with failing to stop the spread of a video on its platform that allegedly incited "hate and enmity" between two religious groups.</p>
<h3 style="text-align: justify; ">'Heavy censorship'</h3>
<p style="text-align: justify; ">Gupta says adhering to all the government's demands would substantially change Twitter.</p>
<p style="text-align: justify; ">"Absolute compliance would mean heavy censorship of individual tweets, removal of the manipulated media tags, and blocking/suspension of accounts at the government's behest," he said.</p>
<p style="text-align: justify; ">Torsha Sarkar, policy officer at the Bengaluru-based Centre for Internet and Society, fears that Twitter might at times be compelled to overcomply with government demands, threatening user rights.</p>
<p style="text-align: justify; ">"This can be either by over-complying with flawed information requests, thereby selling out its users, or taking down content that offends the majoritarian sensibilities," she told DW.</p>
<p style="text-align: justify; ">Last week, three special rapporteurs appointed by a top UN human rights body expressed "serious concerns" that certain parts of the guidelines "may result in the limiting or infringement of a wide range of human rights."</p>
<p style="text-align: justify; ">They urged New Delhi to review the rules, adding that they did not conform to India's international human rights obligations and could threaten the digital rights of Indians.</p>
<h3 style="text-align: justify; ">Twitter's balancing act</h3>
<p style="text-align: justify; ">It is not the first time that Twitter has been accused of giving in to government pressure to censor content on its platform.</p>
<p style="text-align: justify; ">At the height of the long-running farmer protests, <a href="https://www.dw.com/en/farmer-protests-india-blocks-prominent-twitter-accounts-detains-journalists/a-56411354">Twitter blocked hundreds of tweets</a> and accounts, including the handle of a prominent news magazine. It subsequently unblocked them following public outrage.</p>
<p style="text-align: justify; ">The US company stopped short of complying with demands to block the accounts of activists, politicians and journalists, arguing that such a move would "violate their fundamental right to free expression under Indian law."</p>
<p style="text-align: justify; ">According to local media reports, Twitter's Indian executives were reportedly threatened with fines and imprisonment if the accounts were not taken down.</p>
<h3 style="text-align: justify; ">Special police notify Twitter offices</h3>
<p style="text-align: justify; ">Last month, the labeling of a tweet by a politician from the ruling BJP as "manipulated media" prompted a special unit of the <a href="https://www.dw.com/en/india-police-visit-twitter-offices-over-manipulated-tweet/a-57650193">Delhi police to visit Twitter's offices</a> in the capital and neighboring Gurgaon. Police notified the offices about an investigation into the labeling of the post.</p>
<p style="text-align: justify; ">Twitter India's managing director, Manish Maheswari, was said to have been asked to appear before the police for questioning, according to media reports.</p>
<p style="text-align: justify; ">Some Twitter employees have refused to talk about the ongoing tensions for fear of government reprisals.</p>
<p style="text-align: justify; ">"Such kind of intimidation does not happen every day. (But) Everyone at Twitter India is terrified," people familiar with the matter told DW on the condition of anonymity.</p>
<h3 style="text-align: justify; ">Big Tech VS sovereign power?</h3>
<p style="text-align: justify; ">Those calling for better regulation of tech giants say transnational <a href="https://www.dw.com/en/india-social-media-conflict/a-57702394">social media companies like Twitter lack accountability</a>, blaming them for the alleged inaction against online abuse and disinformation campaigns.</p>
<p style="text-align: justify; ">"The problem with these rules is that they centralize greater power toward the government without providing for the objective benefit of rights toward users," Gupta said.</p>
<p style="text-align: justify; ">"If Twitter were to comply with these rules, it would make a bad situation worse," he said.</p>
<p style="text-align: justify; ">Twitter is unlikely to ditch a major market such as India.</p>
<p style="text-align: justify; ">Sarkar from the Centre for Internet and Society said "It might be difficult to say how the powers of big tech are going to collide with sovereign nations, especially in light of flawed legal interventions around the world."</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/dw-june-21-2021-aditya-sharma-twitter-india-troubles-show-tough-path-ahead-for-digital-platforms'>http://editors.cis-india.org/internet-governance/news/dw-june-21-2021-aditya-sharma-twitter-india-troubles-show-tough-path-ahead-for-digital-platforms</a>
</p>
Aditya Sharma | Social Media, Internet Governance, Intermediary Liability, Information Technology | 2021-06-26 | News Item
The Geopolitics of Cyberspace: A Compendium of CIS Research
http://editors.cis-india.org/internet-governance/blog/arindrajit-basu-september-24-2021-the-geopolitics-of-cyberspace-compendium-of-cis-research
<b>Cyberspace is undoubtedly shaping and disrupting commerce, defence and human relationships all over the world. Opportunities such as improved access to knowledge, connectivity, and innovative business models have been equally met with nefarious risks including cyber-attacks, disinformation campaigns, government driven digital repression, and rabid profit-making by ‘Big Tech.’ Governments have scrambled to create and update global rules that can regulate the fair and equitable uses of technology while preserving their own strategic interests.</b>
<p style="text-align: justify;">With a rapidly digitizing economy and clear interests in shaping global rules that favour its strategic interests, India stands at a crucial juncture on various facets of this debate. How India governs and harnesses technology, coupled with how India translates these values and negotiates its interests globally, will surely have an impact on how similarly placed emerging economies devise their own strategies. The challenge here is to ensure that domestic technology governance as well as global engagements genuinely uphold and further India’s democratic fibre and constitutional vision.</p>
<p style="text-align: justify;">Since 2018, researchers at the Centre for Internet and Society have produced a body of research including academic writing, at the intersection of geopolitics and technology covering global governance regimes on trade and cybersecurity, including their attendant international law concerns, the digital factor in bilateral relationships (with a focus on the Indo-US and Sino-Indian relationships). We have paid close focus to the role of emerging technologies in this debate, including AI and 5G as well as how private actors in the technology domain, operating across national jurisdictions, are challenging and upending traditionally accepted norms of international law, global governance, and geopolitics.</p>
<p style="text-align: justify;">The global fissures in this space matter fundamentally for individuals who increasingly use digital spaces to carry out day to day activities: from being unwitting victims of state surveillance to harnessing social media for causes of empowerment to falling prey to state-sponsored cyber attacks, the rules of cyber governance, and its underlying politics. Yet, the rules are set by a limited set of public officials and technology lawyers within restricted corridors of power. Better global governance needs more to be participatory and accessible. CIS’s research and writing has been cognizant of this, and attempted to merge questions of global governance with constitutional and technical questions that put individuals and communities centre-stage.</p>
<p>Research and writing produced by CIS researchers and external collaborators from 2018 onward is detailed in the appended compendium.</p>
<h2>Compendium</h2>
<h3>Global cybersecurity governance and cyber norms</h3>
<p style="text-align: justify;"><em>Two decades since a treaty governing state behaviour in cyberspace was mooted by Russia, global governance processes have meandered along. The security debate has often been polarised along “Cold War” lines but the recent amplification of cyberspace governance as developmental, social and economic has seen several new vectors added to this debate. This past year two parallel processes at the United Nations General Assembly’s First Committee on Disarmament and International Security-United Nations Group of Governmental Experts (UN-GGE) and the United Nations Open Ended Working Group managed to produce consensus reports but several questions on international law, norms and geopolitical co-operation remain. India has been a participant at these crucial governance debates. Both the substance of the contribution, along with its implications remain a key focus area for our research.</em></p>
<p style="text-align: justify;"><em>Edited Volumes</em></p>
<ul>
<li>Karthik Nachiappan and Arindrajit Basu, <a href="https://www.india-seminar.com/2020/731.htm">India and Digital World-Making</a>, <em>Seminar </em>731, 1 July 2020 <em>(featuring contributions from Manoj Kewalramani, Gunjan Chawla, Torsha Sarkar, Trisha Ray, Sameer Patil, Arun Vishwanathan, Vidushi Marda, Divij Joshi, Asoke Mukerji, Pallavi Raghavan, Karishma Mehrotra, Malavika Raghavan, Constantino Xavier, Rajen Harshé, and Suman Bery</em>)</li></ul>
<p><em><br />Long-Form Articles</em></p>
<ol>
<li>Arindrajit Basu and Elonnai Hickok, <a href="https://cis-india.org/internet-governance/blog/arindrajit-basu-and-elonnai-hickok-november-30-2018-cyberspace-and-external-affairs"><em>Cyberspace and External Affairs: A Memorandum for India</em></a> (Memorandum, Centre for Internet and Society, 30 Nov 2018) </li>
<li><a href="https://cis-india.org/internet-governance/blog/the-potential-for-the-normative-regulation-of-cyberspace-implications-for-india"><em>The Potential for the Normative Regulation of Cyberspace</em></a><em> </em>(White Paper, Centre for Internet and Society, 30 July 2018) </li>
<li>Arindrajit Basu and Elonnai Hickok <a href="https://cis-india.org/internet-governance/blog/conceptualizing-an-international-security-regime-for-cyberspace"><em>Conceptualizing an International Security Architecture for cyberspace</em></a><em> </em>(Briefings of the Global Commission on the Stability of Cyberspace, Bratislava, Slovakia, May 2018)</li>
<li>Sunil Abraham, Mukta Batra, Geetha Hariharan, Swaraj Barooah, and Akriti Bopanna,<a href="https://cis-india.org/internet-governance/files/indias-contribution-to-internet-governance-debates"> India's contribution to internet governance debates</a> (NLUD Student Law Journal, 2018)</li></ol>
<p><em><br />Blog Posts and Op-eds</em></p>
<ul>
<li>Arindrajit Basu, Irene Poetranto, and Justin Lau, <a href="https://carnegieendowment.org/2021/05/19/un-struggles-to-make-progress-on-securing-cyberspace-pub-84491">The UN struggles to make progress in cyberspace</a><em>, Carnegie Endowment for International Peace</em>, May 19th, 2021</li>
<li>André Barrinha and Arindrajit Basu, <a href="https://directionsblog.eu/could-cyber-diplomacy-learn-from-outer-space/">Could cyber diplomacy learn from outer space</a>, <em>EU Cyber Direct</em>, 20th April 2021</li>
<li>Arindrajit Basu and Pranesh Prakash<strong>, </strong><a href="https://www.thehindu.com/opinion/lead/patching-the-gaps-in-indias-cybersecurity/article34000336.ece">Patching the gaps in India’s cybersecurity</a>, <em>The Hindu, </em>6th March 2021</li>
<li>Arindrajit Basu and Karthik Nachiappan, <a href="https://www.leidensecurityandglobalaffairs.nl/articles/will-india-negotiate-in-cyberspace">Will India negotiate in cyberspace?</a>, Leiden Security and Global Affairs blog, December 16, 2020</li>
<li>Elizabeth Dominic, <a href="https://cis-india.org/internet-governance/blog/the-debate-over-internet-governance-and-cyber-crimes-west-vs-the-rest">The debate over internet governance and cybercrimes: West vs the rest?</a>,<em> Centre for Internet and Society, </em>June 08, 2020</li>
<li>Arindrajit Basu, <a href="https://www.lawfareblog.com/indias-role-global-cyber-policy-formulation"><em>India’s role in Global Cyber Policy Formulation</em></a><em>, Lawfare, Nov 7, 2019</em></li>
<li>Pukhraj Singh, <a href="https://cis-india.org/internet-governance/blog/guest-post-before-cyber-norms-let2019s-talk-about-disanalogy-and-disintermediation">Before cyber norms, let's talk about disanalogy and disintermediation</a>, <em>Centre for Internet and Society, </em>Nov 15th, 2019</li>
<li>Arindrajit Basu and Karan Saini, <a href="https://mwi.usma.edu/setting-international-norms-cyber-conflict-hard-doesnt-mean-stop-trying/">Setting International Norms of Cyber Conflict is Hard, But that Doesn’t Mean that We Should Stop Trying</a><em>, Modern War Institute, </em>30th Sept, 2019</li>
<li>Arindrajit Basu, <a href="https://www.orfonline.org/expert-speak/politics-by-other-means-fostering-positive-contestation-and-charting-red-lines-through-global-governance-in-cyberspace-56811/"><em>Politics by other means: Fostering positive contestation and charting red lines through global governance in cyberspace</em></a><em> (Digital Debates, </em>Volume 6, 2019<em>)</em></li>
<li>Arindrajit Basu<em>, </em><a href="https://thewire.in/trade/will-the-wto-finally-tackle-the-trump-card-of-national-security">Will the WTO Finally Tackle the ‘Trump’ Card of National Security?</a><em> (The Wire, </em>8th May 2019<em>)</em></li></ul>
<p><em>Policy Submissions</em></p>
<ol>
<li>Arindrajit Basu, <a href="https://cis-india.org/internet-governance/blog/cis-comments-on-pre-draft-of-the-report-of-the-un-open-ended-working-group">CIS Submission to OEWG </a>(Centre for Internet and Society, Policy Submission, 2020)</li>
<li>Aayush Rathi, Ambika Tandon, Elonnai Hickok, and Arindrajit Basu. “<a href="https://cis-india.org/internet-governance/blog/cis-submission-to-un-high-level-panel-on-digital-cooperation">CIS Submission to UN High-Level Panel on Digital Cooperation</a>.” Policy submission. Centre for Internet and Society, January 2019.</li>
<li>Arindrajit Basu, Gurshabad Grover, and Elonnai Hickok. “<a href="https://cis-india.org/internet-governance/blog/arindrajit-basu-gurshabad-grover-elonnai-hickok-january-22-2019-response-to-gcsc-on-request-for-consultation">Response to GCSC on Request for Consultation: Norm Package Singapore</a>.” Centre for Internet and Society, January 17, 2019.</li>
<li>Arindrajit Basu and Elonnai Hickok. <a href="https://cis-india.org/internet-governance/files/gcsc-response.">Submission of Comments to the GCSC Definition of ‘Stability of Cyberspace’</a> (Centre for Internet and Society, September 6, 2019)</li></ol>
<h3>Digital Trade and India's Political Economy</h3>
<p style="text-align: justify;"><em>The modern trading regime and its institutions were born largely into a world bereft of the internet and its implications for cross-border flow and commerce. Therefore, regulatory ambitions at the WTO have played catch up with the technological innovation that has underpinned the modern global digital economy. Driven by tech giants, the “developed” world has sought to restrict the policy space available to the emerging world to impose mandates regarding data localisation, source code disclosure, and taxation - among other initiatives central to development. At the same time emerging economies have pushed back, making for a tussle that continues to this day. Our research has focussed both on issues of domestic political economy and data governance,and the implications these domestic issues have on how India and other emerging economies negotiate at the world stage.</em></p>
<p><em>Long-Form articles and essays</em></p>
<ol>
<li>Arindrajit Basu, Elonnai Hickok and Aditya Chawla, <a href="https://cis-india.org/internet-governance/blog/the-localisation-gambit-unpacking-policy-moves-for-the-sovereign-control-of-data-in-india">The Localisation Gambit: Unpacking policy moves for the sovereign control of data in India</a> (Centre for Internet and Society, March 19, 2019)</li>
<li>Arindrajit Basu,<a href="about:blank">Sovereignty in a datafied world: A framework for Indian diplomacy</a> in Navdeep Suri and Malancha Chakrabarty (eds) <em>A 2030 Vision for India’s Economic Diplomacy </em>(Observer Research Foundation 2021) </li>
<li>Amber Sinha, Elonnai Hickok, Udbhav Tiwari and Arindrajit Basu, <a href="https://cis-india.org/internet-governance/files/mlat-report">Cross Border Data-Sharing and India </a>(Centre for Internet and Society, 2018)</li></ol>
<p><em>Blog posts and op-eds </em></p>
<ul>
<li>Arindrajit Basu,<a class="external-link" href="http://www.hinrichfoundation.com/research/article/wto/can-the-wto-build-consensus-on-digital-trade/"> Can the WTO build consensus on digital trade,</a> Hinrich Foundation,October 05,2021<br /></li><li>Amber Sinha, <a href="https://thewire.in/tech/twitter-modi-government-big-tech-new-it-rules">The power politics behind Twitter versus Government of India</a>, <em>The Wire</em>, June 03, 2021</li>
<li>Karthik Nachiappan and Arindrajit Basu, <a href="https://www.thehindu.com/opinion/op-ed/shaping-the-digital-world/article32224942.ece?homepage=true">Shaping the Digital World</a>, <em>The Hindu</em>, 30th July 2020</li>
<li>Arindrajit Basu and Karthik Nachiappan, <a href="https://www.india-seminar.com/2020/731/731_arindrajit_and_karthik.htm"><em>India and the global battle for data governance</em></a>, Seminar 731, 1st July 2020</li>
<li>Amber Sinha and Arindrajit Basu, <a href="https://scroll.in/article/960676/analysis-reliance-jio-facebook-deal-highlights-indias-need-to-revisit-competition-regulations">Reliance Jio-Facebook deal highlights India’s need to revisit competition regulations</a>, <em>Scroll</em>, 30th April 2020</li>
<li>Arindrajit Basu and Amber Sinha, <a href="https://thediplomat.com/2020/04/the-realpolitik-of-the-reliance-jio-facebook-deal/">The realpolitik of the Reliance-Jio Facebook deal</a>, <em>The Diplomat</em>, 29th April 2020</li>
<li>Arindrajit Basu, <a href="https://thediplomat.com/2020/01/the-retreat-of-the-data-localization-brigade-india-indonesia-and-vietnam/"><em>The Retreat of the Data Localization Brigade: India, Indonesia, Vietnam</em></a><em>, The Diplomat</em>, Jan 10, 2020</li>
<li>Amber Sinha and Arindrajit Basu, <a href="https://www.epw.in/engage/article/politics-indias-data-protection-ecosystem"><em>The Politics of India’s Data Protection Ecosystem</em></a>, <em>EPW Engage</em>, 27 Dec 2019</li>
<li>Arindrajit Basu and Justin Sherman, <a href="https://www.lawfareblog.com/key-global-takeaways-indias-revised-personal-data-protection-bill">Key Global Takeaways from India’s Revised Personal Data Protection Bill</a>, <em>Lawfare</em>, Jan 23, 2020</li>
<li>Nikhil Dave, “<a href="https://cis-india.org/internet-governance/geo-economic-impacts-of-the-coronavirus-global-supply-chains-part-i">Geo-Economic Impacts of the Coronavirus: Global Supply Chains</a>.” <em>Centre for Internet and Society</em>, June 16, 2020.</li></ul>
<h3>International Law and Human Rights</h3>
<p style="text-align: justify;"><em>International law and human rights are ostensibly technology neutral, and should lay the edifice for digital governance and cybersecurity today. Our research on international human rights has focussed on global surveillance practices and other internet restrictions employed by a variety of nations, and the implications this has for citizens and communities in India and similarly placed emerging economies. CIS researchers have also contributed to, and commented on World Intellectual Property Organization negotiations at the intersection of international Intellectual Property (IP) rules and the human rights.</em></p>
<p><em>Long-form articles</em></p>
<ol>
<li>Arindrajit Basu, <a href="https://cis-india.org/internet-governance/extra-territorial-surveillance-and-the-incapacitation-of-human-rights">Extra Territorial Surveillance and the incapacitation of international human rights law</a>, 12 NUJS LAW REVIEW 2 (2019)</li>
<li>Gurshabad Grover and Arindrajit Basu, “<a href="https://cyberlaw.ccdcoe.org/wiki/Scenario_24:_Internet_blockage">Internet Blockage</a>” (Scenario contribution to NATO CCDCOE Cyber Law Toolkit, 2021)</li>
<li>Arindrajit Basu and Elonnai Hickok, <a href="https://www.ijlt.in/journal/conceptualizing-an-international-framework-for-active-private-cyber-defence">Conceptualizing an international framework for active private cyber defence</a> (Indian Journal of Law and Technology, 2020)</li><li>Arindrajit Basu, <a class="external-link" href="http://www.orfonline.org/wp-content/uploads/2021/10/Digital-Debates__CyFy2021.pdf">Challenging the dogmatic inevitability of extraterritorial state surveillance</a> in Trisha Ray and Rajeswari Pillai Rajagopalan (eds), Digital Debates: CyFy Journal 2021 (New Delhi: ORF and Global Policy Journal, 2021)<br /></li></ol>
<p><em>Blog Posts and op-eds</em></p>
<ul>
<li>Arindrajit Basu, “<a href="https://www.medianama.com/2020/08/223-american-law-on-mass-surveillance-post-schrems-ii/">Unpacking US Law And Practice On Extraterritorial Mass Surveillance In Light Of Schrems II</a>”, <em>Medianama</em>, 24th August 2020</li>
<li>Anubha Sinha, “World Intellectual Property Organisation: Notes from the Standing Committee on Copyright Negotiations (<a href="https://cis-india.org/a2k/blogs/wipo-sccr-41-notes-from-day-1">Day 1</a>, <a href="https://cis-india.org/a2k/blogs/wipo-sccr-41-notes-from-day-2">Day 2</a>, <a href="https://cis-india.org/a2k/blogs/wipo-sccr-41-notes-from-day-3-and-day-4-1">Day 3 and 4</a>)”, July 2021</li><li>Raghav Ahooja and Torsha Sarkar, <a class="external-link" href="http://www.lawfareblog.com/how-not-regulate-internet-lessons-indian-subcontinent">How (not) to regulate the internet: Lessons from the Indian Subcontinent</a>, Lawfare, September 23, 2021<br /></li></ul>
<h3>Bilateral Relationships</h3>
<p style="text-align: justify;"><em>Technology has become a crucial factor in shaping bilateral and plurilateral co-operation and competition. Given the geopolitical fissures and opportunities since 2020, our research has focussed on how technology governance and cybersecurity could impact the larger ecosystem of Indo-China and India-US relations. Going forward, we hope to undertake more research on technology in plurilateral arrangements, including the Quadrilateral Security Dialogue. </em></p>
<ul>
<li>Arindrajit Basu and Justin Sherman, <a href="https://thediplomat.com/2021/03/the-huawei-factor-in-us-india-relations/">The Huawei Factor in US-India Relations</a>,<em>The Diplomat</em>, 22 March 2021</li>
<li>Aman Nair, “<a href="https://cis-india.org/internet-governance/blog/tiktok-it2019s-time-for-biden-to-make-a-decision-on-his-digital-policy-with-china">TikTok: It’s Time for Biden to Make a Decision on His Digital Policy with China</a>,” <em>Centre for Internet and Society</em>, January 22, 2021</li>
<li>Arindrajit Basu and Gurshabad Grover, <a href="https://thediplomat.com/2020/10/india-needs-a-digital-lawfare-strategy-to-counter-china/">India Needs a Digital Lawfare Strategy to Counter China</a>, <em>The Diplomat</em>, 8th October 2020</li>
<li>Anam Ajmal, <a href="https://timesofindia.indiatimes.com/blogs/toi-edit-page/the-app-ban-will-have-an-impact-on-the-holding-companies-global-power-projection-begins-at-home/">The app ban will have an impact on the holding companies...global power projection begins at home</a>, <em>Times of India</em>, July 7th, 2020 (Interview with Arindrajit Basu)</li>
<li>Justin Sherman and Arindrajit Basu, <a href="https://thediplomat.com/2020/03/trump-and-modi-embrace-but-remain-digitally-divided/">Trump and Modi embrace, but remain digitally divided</a>, <em>The Diplomat</em>, March 05th, 2020</li></ul>
<h3>Emerging Technologies</h3>
<p style="text-align: justify;"><em>Governance needs to keep pace with the technological challenges posed by emerging technologies, including 5G and AI. To do so an interdisciplinary approach that evaluates these scientific advances in line with the regimes that govern them is of utmost importance. While each country will need to regulate technology through the lens of their strategic interests and public policy priorities, it is clear that geopolitical tensions on standard-setting and governance models compels a more global outlook.</em></p>
<p><em>Long-Form reports</em></p>
<ol>
<li>Anoushka Soni and Elizabeth Dominic,<a href="https://cis-india.org/internet-governance/legal-and-policy-implications-of-autonomous-weapons-systems"> Legal and Policy implications of Autonomous weapons systems</a> (Centre for Internet and Society, 2020)</li>
<li>Aayush Rathi, Gurshabad Grover, and Sunil Abraham,<a href="https://cis-india.org/internet-governance/blog/regulating-the-internet-the-government-of-india-standards-development-at-the-ietf"> Regulating the internet: The Government of India & Standards Development at the IETF</a> (Centre for Internet and Society, 2018)</li></ol>
<p><em>Blog posts and op-eds</em></p>
<ul>
<li>Aman Nair, <a href="https://cis-india.org/internet-governance/blog/would-banning-chinese-telecom-companies-make-5g-secure-in-india">Would banning Chinese telecom companies make 5G secure in India?</a>, <em>Centre for Internet and Society</em>, 22nd December 2020</li>
<li>Arindrajit Basu and Justin Sherman<strong>, </strong><a href="https://www.lawfareblog.com/two-new-democratic-coalitions-5g-and-ai-technologies">Two New Democratic Coalitions on 5G and AI Technologies</a>, <em>Lawfare</em>, 6th August 2020</li>
<li>Nikhil Dave, <a href="https://cis-india.org/internet-governance/blog/the-5g-factor.">The 5G Factor: A Primer</a>, <em>Centre for Internet and Society,</em> July 20, 2020.</li>
<li>Gurshabad Grover, <a href="https://indianexpress.com/article/opinion/columns/huawei-ban-india-united-states-china-5755232/">The Huawei bogey</a>, <em>Indian Express</em>, May 30th, 2019</li>
<li>Arindrajit Basu and Pranav MB, <a href="https://cis-india.org/internet-governance/blog/what-is-the-problem-with-2018ethical-ai2019-an-indian-perspective">What is the problem with ‘Ethical AI’? An Indian perspective</a>, Centre for Internet and Society, July 21, 2019</li></ul>
<hr />
<p style="text-align: justify;"><em>(This compendium was drafted by Arindrajit Basu with contributions from Anubha Sinha. Aman Nair, Gurshabad Grover, and Pranav MB reviewed the draft and provided vital insight towards its conceptualization and compilation</em>. Dishani Mondal and Anand Badola provided important inputs at earlier stages of the process towards creating this compendium)</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/arindrajit-basu-september-24-2021-the-geopolitics-of-cyberspace-compendium-of-cis-research'>http://editors.cis-india.org/internet-governance/blog/arindrajit-basu-september-24-2021-the-geopolitics-of-cyberspace-compendium-of-cis-research</a>
</p>
arindrajit | Cyber Security, Internet Governance, Cyberspace | 2021-11-15 | Blog Entry
A Guide to Drafting Privacy Policy under the Personal Data Protection Bill, 2019
http://editors.cis-india.org/internet-governance/blog/shweta-reddy-september-17-2021-a-guide-to-drafting-privacy-policy-under-personal-data-protection-bill
<b>The Personal Data Protection Bill, 2019, (PDP Bill) which is currently being deliberated by the Joint Parliamentary Committee, is likely to be tabled in the Parliament during the winter session of 2021.</b>
<p style="text-align: justify;">The Bill in its current form, doesn’t have explicit transitory provisions i.e. a defined timeline for the enforcement of the provisions of the Bill post its notification as an enforceable legislation. Since the necessary subject matter expertise may be limited on short notice and out of budget for certain companies, we intend to release a series of guidance documents that will attempt to simplify the operational requirements of the legislation.</p>
<p style="text-align: justify;">Certain news reports had earlier suggested that the Joint Parliamentary Committee reviewing the Bill has proposed <a class="external-link" href="https://economictimes.indiatimes.com/news/politics-and-nation/parliamentary-panel-examining-personal-data-protection-bill-recommends-89-changes/articleshow/80138488.cms">89 new amendments and a new clause</a>. The nature and content of these amendments so far remain unclear. However, we intend to start the series by addressing some frequently asked questions around meeting the requirements of publishing a privacy notice and shall make the relevant changes post notification of the new Bill. The solutions provided in this guidance document are mostly based on international best practices and any changes in the solutions based on Indian guidelines and the revised PDP Bill will be redlined in the future.</p>
<p style="text-align: justify;">The frequently asked questions and other specific examples on complying with the requirements of publishing a privacy policy have been compiled based on informal discussions with stakeholders, unsolicited queries from smaller organizations and publicly available details from conferences on the impact of the Bill. We intend to conduct extensive empirical analysis of additional queries or difficulties faced by smaller organizations towards achieving compliance post the notification of the new Bill. Regardless, any smaller organizations(NGOs, start-ups etc.) interested in discussing compliance related queries can get in touch with us.</p>
<hr />
<p style="text-align: justify;">Click to download the <a href="http://editors.cis-india.org/internet-governance/guide-to-personal-data-protection-bill.pdf" class="internal-link">full report here</a>. The report was reviewed by Pallavi Bedi and Amber Sinha.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/shweta-reddy-september-17-2021-a-guide-to-drafting-privacy-policy-under-personal-data-protection-bill'>http://editors.cis-india.org/internet-governance/blog/shweta-reddy-september-17-2021-a-guide-to-drafting-privacy-policy-under-personal-data-protection-bill</a>
</p>
shwetar | Internet Governance, Data Protection, Privacy | 2021-09-20 | Blog Entry
Media Market Risk Ratings: India
http://editors.cis-india.org/internet-governance/blog/gdi-and-cis-torsha-sarkar-pranav-m-bidare-and-gurshabad-grover-july-12-2021-media-market-risk-ratings-india
<b>The Centre for Internet and Society (CIS) and the Global Disinformation Index (GDI) are launching a study into the risk of disinformation on digital news platforms in India, creating an index that is intended to serve donors and brands with a neutral assessment of news sites that they can utilise to defund disinformation.</b>
<h2>Introduction</h2>
<p style="text-align: justify;">The harms of disinformation are proliferating around the globe—threatening our elections, our health, and our shared sense of facts.</p>
<p style="text-align: justify;">The infodemic laid bare by COVID-19 conspiracy theories clearly shows that disinformation costs peoples’ lives. Websites masquerading as news outlets are driving and profiting financially from the situation.</p>
<p style="text-align: justify;">The goal of the Global Disinformation Index (GDI) is to cut off the revenue streams that incentivise and sustain the spread of disinformation. Using both artificial and human intelligence, the GDI has created an assessment framework to rate the disinformation risk of news domains.</p>
<p style="text-align: justify;">The GDI risk rating provides advertisers, ad tech companies and platforms with greater information about a range of disinformation flags related to a site’s <strong>content</strong> (i.e. reliability of content), <strong>operations</strong> (i.e. operational and editorial integrity) and <strong>context</strong> (i.e. perceptions of brand trust). The findings in this report are based on the human review of these three pillars: <strong>Content, Operations</strong>, and <strong>Context</strong>.</p>
<p style="text-align: justify;">A site’s disinformation risk level is based on that site’s aggregated score across all of the reviewed pillars and indicators. A site’s overall score ranges from zero (maximum risk level) to 100 (minimum risk level). Each indicator that is included in the framework is scored from zero to 100. The output of the index is therefore the site’s overall disinformation risk level, rather than the truthfulness or journalistic quality of the site.</p>
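<p style="text-align: justify;">As a rough illustration of the aggregation just described, the sketch below combines per-indicator scores (each 0 to 100) into an overall site score and maps that score to a risk band. The indicator names, equal weighting, and band thresholds here are illustrative assumptions for the sketch, not the GDI's published methodology.</p>

```python
# Hypothetical sketch of a GDI-style risk aggregation.
# Pillar names, equal weights, and band cut-offs are assumptions,
# not the GDI's actual scoring rules.

def aggregate_score(indicator_scores, weights=None):
    """Combine per-indicator scores (each 0-100) into one site score (0-100)."""
    if weights is None:
        weights = {name: 1.0 for name in indicator_scores}  # equal weighting
    total_weight = sum(weights[name] for name in indicator_scores)
    weighted = sum(score * weights[name] for name, score in indicator_scores.items())
    return weighted / total_weight

def risk_band(score):
    """Map an overall score (0 = maximum risk, 100 = minimum risk) to a band.

    Thresholds are illustrative; the report only defines the endpoints.
    """
    if score < 25:
        return "maximum"
    if score < 50:
        return "high"
    if score < 75:
        return "medium"
    if score < 95:
        return "low"
    return "minimum"

# Example: a site scored on the three pillars named in the report.
site = {"content": 62, "operations": 40, "context": 55}
score = aggregate_score(site)
print(round(score, 1), risk_band(score))  # prints: 52.3 medium
```

<p style="text-align: justify;">The key design point is that the output is a single aggregated risk level per site, not a verdict on any individual article's truthfulness, which matches how the index is described above.</p>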
<h2 style="text-align: justify;">Key Findings</h2>
<p>In reviewing the media landscape for India, the assessment found that:</p>
<p><strong><em>Nearly a third of the sites in our sample had a high risk of disinforming their online users.</em></strong></p>
<ul>
<li style="text-align: justify;">Eighteen sites were found to have a high disinformation risk rating. This group includes sites that are published in all the three languages in our scope: English, Hindi and Bengali.</li>
<li style="text-align: justify;">Around half of the websites in our sample had a ‘medium’ risk rating. No site performed exceptionally on all fronts, resulting in no sites having a minimum risk rating. On the other hand, no site performed so poorly as to earn a maximum risk rating.</li></ul>
<p><strong><em>Only a limited number of Indian sites present low levels of disinformation risks.</em></strong></p>
<ul>
<li>No website was rated as having a ‘minimum’ disinformation risk.</li>
<li>Eight sites were rated with a ‘low’ level of disinformation risk. Seven of these websites served content primarily in English, and one in Hindi.</li></ul>
<p><strong><em>The media sites assessed in India tend to perform very poorly on publishing transparent operational checks and balances.</em></strong></p>
<ul>
<li style="text-align: justify;">Over one-third of the sites in our sample published little information about their ownership structure, and also failed to be transparent about their revenue sources.</li>
<li style="text-align: justify;">Only ten of the sites in our sample publish any information about their policies on how they correct errors in their reporting.</li></ul>
<p><strong><em>Association with traditional media was not a significant factor in determining disinformation risk.</em></strong></p>
<ul>
<li>On average, websites associated with TV or print did not perform any differently when compared to websites that solely serve digital content.</li></ul>
<p style="text-align: justify;">The findings show that on the whole, Indian websites can substantially increase their trustworthiness by taking measures to address these shortfalls in their operational checks and balances. For example, they could increase transparency on the structure of their businesses and have clear policies on how they address errors in their reporting. Both of these measures are in line with universal standards of good journalistic practices, as agreed by the Journalism Trust Initiative.</p>
<hr />
<p style="text-align: justify;">Click to download the <a href="http://editors.cis-india.org/internet-governance/media-market-risk-ratings.pdf" class="internal-link">full report here</a>. To read the report in Hindi, <a class="external-link" href="https://cis-india.org/internet-governance/resources/media-bazaar-jokhim-rating.pdf">click here</a>. The authors extend their thanks to Anna Liz Thomas, Sanah Javed, Sagnik Chatterjee, and Raghav Ahooja for their assistance.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/gdi-and-cis-torsha-sarkar-pranav-m-bidare-and-gurshabad-grover-july-12-2021-media-market-risk-ratings-india'>http://editors.cis-india.org/internet-governance/blog/gdi-and-cis-torsha-sarkar-pranav-m-bidare-and-gurshabad-grover-july-12-2021-media-market-risk-ratings-india</a>
</p>
<p>Torsha Sarkar, Pranav M Bidare, and Gurshabad Grover · Blog Entry · 2022-01-25</p>