The Centre for Internet and Society
http://editors.cis-india.org
Unpacking Algorithmic Infrastructures: Mapping the Data Supply Chain in the Healthcare Industry in India
http://editors.cis-india.org/raw/unpacking-algorithmic-infrastructures
<b>The Unpacking Algorithmic Infrastructures project, supported by a grant from the Notre Dame-IBM Tech Ethics Lab, aims to study the AI data supply chain infrastructure in healthcare in India and to critically analyse the auditing frameworks used to develop and deploy AI systems in healthcare. It will map the prevalence of AI auditing practices within the sector to arrive at an understanding of frameworks that may be developed to check for ethical considerations, such as algorithmic bias and harm within healthcare systems, especially against marginalised and vulnerable populations.</b>
<p style="text-align: justify; ">There has been an increased interest in health data in India over the recent years, where health data policies encourage sharing of data with different entities, at the same time, there has been a growing interest in deployment of Al in healthcare from startups, hospitals, as well as multinational technology companies.</p>
<p style="text-align: justify; ">Given the invisibility of algorithmic infrastructures that underlie the digital economy and the important decisions these technologies can make about patients' health, it's important to look at how these systems are developed, how data flows within them, how these systems are tested and verified and what ethical considerations inform their deployment.</p>
<p style="text-align: justify; "><img src="http://editors.cis-india.org/home-images/ResearchersWork.png/@@images/00a848c7-b7f7-41b4-8bd9-45f2928fd44e.png" alt="Researchers at Work" class="image-inline" title="Researchers at Work" /></p>
<p style="text-align: justify; "><strong>The </strong><strong>Unpacking Algorithmic Infrastructures</strong> project, supported by a grant from the Notre Dame-IBM Tech Ethics Lab, aims to study the Al data supply chain infrastructure in healthcare in India, and aims to critically analyse auditing frameworks that are utilised to develop and deploy AI systems in healthcare. It will map the prevalence of Al auditing practices within the sector to arrive at an understanding of frameworks that may be developed to check for ethical considerations - such as algorithmic bias and harm within healthcare systems, especially against marginalised and vulnerable populations.</p>
<h3 style="text-align: justify; ">Research Questions</h3>
<ol>
<li style="text-align: justify; ">To what extent organisations take ethical principles into account when developing AI , managing the training and testing dataset, and while deploying the AI in the healthcare sector.</li>
<li style="text-align: justify; ">What best practices for auditing can be put in place based on our critical understanding of AI data supply chains and auditing frameworks being employed in the healthcare sector.</li>
<li style="text-align: justify; ">What is a possible auditing framework that is best suited to organisations in the majority world.</li>
</ol>
<h3>Research Design and Methods</h3>
<p>For this study, we will use a comprehensive mixed methods approach. We will survey professionals working towards designing, developing and deploying AI systems for healthcare in India, across technology and healthcare organizations. We will also undertake in-depth interviews with experts who are part of key stakeholder groups.</p>
<p>We hereby invite researchers, technologists, healthcare professionals, and others working at the intersection of Artificial Intelligence and Healthcare to speak to us and help inform the study. You may contact Shweta Mohandas at <a href="mailto:shweta@cis-india.org">shweta@cis-india.org</a>.</p>
<hr />
<p>Research Team: Amrita Sengupta, Chetna V. M., Pallavi Bedi, Puthiya Purayil Sneha, Shweta Mohandas and Yatharth.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/raw/unpacking-algorithmic-infrastructures'>http://editors.cis-india.org/raw/unpacking-algorithmic-infrastructures</a>
</p>
Authors: Amrita Sengupta, Chetna V. M., Pallavi Bedi, Puthiya Purayil Sneha, Shweta Mohandas and Yatharth · Topics: Health Tech, RAW Blog, Research, Data Protection, Healthcare, Researchers at Work, Artificial Intelligence · Published: 2024-01-05

‘Techplomacy’ and the negotiation of AI standards for the Indo-Pacific
http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific
<b>Researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific.</b>
<p>This is a modified version of a post that appeared in <a href="https://www.aspistrategist.org.au/high-time-for-australia-and-india-to-step-up-their-tech-diplomacy/"><strong>The Strategist</strong></a>.</p>
<p><strong>By Arindrajit Basu with inputs from and review by Amrita Sengupta and Isha Suri</strong></p>
<hr />
<p style="text-align: justify; "><span>Later this month, UN member states elected American candidate Doreen Bogdan-Martin "</span><a href="https://www.brookings.edu/blog/techtank/2022/08/12/the-most-important-election-you-never-heard-of/">the most important election you have never heard off</a><span>" to elect the next secretary-general of the International Telecommunications Union (ITU). While this technical body's work may be esoteric, the election was fiercely contested with Russian candidate (and former Huawei executive; aptly reflecting the geopolitical competition that is underway in determining the “</span><a href="https://www.lowyinstitute.org/the-interpreter/election-future-internet">future of the internet”</a><span> through the technical standards that underpin it. The “Internet Protocol” (IP) that is the set of rules governing the communication and exchange of data over the internet itself is being subjected to political contestation between a Sino-Russian vision that would see the standard give way to greater government control and a US vision ostensibly rooted in more inclusive multi-stakeholder participation.</span></p>
<p style="text-align: justify; ">As critical and emerging technologies take the geopolitical centre-stage, the global tug of war over the development, utilisation, and deployment is playing out most ferociously at standard-setting organisations, an arms’ length away from the media limelight. Powerful state and non-state actors alike are already seeking to shape standards in ways that suit their economic, political, and normative priorities. It is time for emerging economies, middle powers and a wider array of private actors and members from the civil society to play a more meaningful and tangible role in the process.</p>
<h3><strong>What are standards and why do they matter</strong></h3>
<p style="text-align: justify; ">Simply put, standards are blueprints or protocols with requirements which ‘standardise’ products and related processes around the world, thus ensuring that they are interoperable, safe and sustainable. For example, USB, WiFi or a QWERTY keyboard can be used around the world because they are built on technical standards that enable equipment produced adopting these standards to be used around the world.Standards are negotiated both domestically-at domestic standard-setting bodies such as the Bureau of Indian Standards (BIS) or Standards Australia (SA) or global standard-development organisations such as the International Telecommunications Union (ITU) or the International Standardisation Organisation (ISO). While standards are not legally binding unless they are explicitly imposed as requirements in a legislation, they have immense coercive value. Not adhering to recognised standards means that certain products may not reach markets as they are not compatible with consumer requirements or cannot claim to meet health or safety expectations. The harmonisation of internationally recognised standards serves as the bedrock for global trade and commerce. Complying with a global standard is particularly critical because of its applicability across several markets. Further, international trade law proclaims that World Trade Organisation (WTO) members can impose trade restrictive domestic measures only on the basis of published or soon to be published international standards.(Article 2.4 of the <a href="https://www.wto.org/english/tratop_e/tbt_e/tbt_e.htm">Technical Barriers to Trade</a> Agreement)</p>
<p style="text-align: justify; ">Shaping global standards is of immense geopolitical and economic value to states and the private sector alike. States that are able to ‘export’ their domestic technological standards internationally enable their companies to reap a significant economic advantage because it is cheaper for them to adopt global standards. Further, companies draw huge revenue by holding patents to technologies that are essential to comply with a certain standard popularly known as Standard Essential Patents or SEPs and licensing them to other players who want to enter the market. For context, IPlytics <a href="https://www.lightreading.com/5g/nokia-boasts-of-essential-5g-patents-milestone/d/d-id/773445">estimated</a> that cumulative global royalty income from licensing SEPs was USD 20 billion in 2020, anticipated to increase significantly in the coming years due to massive technological upgradation currently underway.</p>
<p style="text-align: justify; ">China’s push for dominance to influence the 5G standard at the Third Generation Partnership Project (3GPP) illustrates how prioritising standards-setting both through domestic industrial policy and foreign policy could provide rich economic and geopolitical dividends. After failing to meaningfully influence the setting of the 3G and 4G standards,the Chinese government commenced a national effort that sought to harmonise domestic standards, improve government coordination of standard-setting efforts, and obtain a first movers advantage over other nations developing their own domestic 5G standards. This was combined with a diplomatic push that saw vigorous private sector <a href="https://asia.nikkei.com/Politics/International-relations/China-leads-the-way-on-global-standards-for-5G-and-beyond">participation </a>(Huawei put in 20 5G related proposals whereas Ericsson and Nokia put in just 16 and 10 respectively);</p>
<p style="text-align: justify; ">packing key leadership positions in Working Groups with representatives from Chinese companies and institutions; and ensuring that all Chinese participants vote in unison for any proposal. It is no surprise therefore that Chinese companies now lead the way on 5G with Huawei <a href="https://insights.greyb.com/company-with-most-5g-patents/">owning</a> the most number of 5G patents and has <a href="https://www.cfr.org/blog/china-huawei-5g">finalised</a> more 5G contracts than any other company despite restrictions placed on Huawei’s gear by some countries. As detailed in its “Make in China”strategy, China will now activelyapply its winning strategy to other standard-setting avenues as well</p>
<h3><span>Standards for Artificial Intelligence</span></h3>
<p style="text-align: justify; ">A number of institutions, including private actors such as Huawei and Cloud Walk have contributed to China’s 2018 <a href="https://cset.georgetown.edu/publication/artificial-intelligence-standardization-white-paper-2021-edition/">AI standardisation white paper</a> that was revised and updated in 2021.The white paper maps the work of SDOs in the field of AI standards and outlines a number of recommendations on how Chinese actors can use global SDOs to boost industrial competitiveness and globally promote “Chinese wisdom.” While there are cursory references to the role of standards in furthering “ethics” and “privacy,” the document does not outline how China will look to promote these values at SDOs.</p>
<p style="text-align: justify; "><span>Artificial Intelligence (AI) is a general purpose technology that has various outcomes and use-cases.Top down regulation of AI by governments is emerging across jurisdictions but this may not keep pace with the rapidly evolving technology being developed by the private sector or adequately check the diversity of use-cases. On the other hand, private sector driven self-regulatory initiatives focussing on ‘ethical AI’ are very broad and provide too much leeway to technology companies to evade the law. Technical standards offer a middle ground where multiple stakeholders can come together to devise uniform requirements on various stages of the AI development lifecycle. Of course, technical standards must co-exist with government driven regulation as well as self regulatory codes to holistically govern the deployment of AI globally. However, while the first two modes of regulation has received plenty of attention from policy-makers and scholars alike, AI standard-setting is an emerging field that has yet to be concretely evaluated from a strategic and diplomatic perspective.</span></p>
<h3><strong>Introducing a new CIS-ASPI project</strong></h3>
<p style="text-align: justify; ">This is why researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific. Given the immense economic value of shaping global technical standards, it is imperative that SDOs not be dominated only by the likes of the US, Europe or China. The standards likely to impact a majority of nations, devised only from the purview of a few countries may be context agnostic to the needs of emerging economies. Further, there are values at stake here. An excessive focus on security, accuracy or quality of AI-driven products may make some technology palatable across the world even if the technology undermines core democratic values such as privacy, and anti-discrimination. China’s<a href="https://www.ft.com/content/c3555a3c-0d3e-11ea-b2d6-9bf4d1957a67"> efforts</a> at shaping Facial Recognition Technology (FRT) standards at the ITU have been criticised for moving beyond mere technical specifications into the domain of policy recommendations despite there being a lack of representation of experts on human rights, consumer protection or data protection at the ITU. Accordingly, diversity of representation in terms of expertise, gender, and nationality at SDOs, including in leadership positions, are aspects our project will explore with an eye towards creating more inclusive participation.</p>
<p style="text-align: justify; "><span>Through this project ,we hope to identify how key stakeholders drive these initiatives and how technological standards can be devised in line both with core democratic values and strategic priorities. Through extensive consultations with several stakeholder groups, we plan to offer learning products to policy makers and technical delegates alike to enable Australian and Indian delegates to serve as ambassadors for our respective nations.</span></p>
<p style="text-align: justify; "><span>For more information on this new and exciting project funded by the Australian Departmentfor Foreign Affairs and Trade as part of the Australia India Cyber and Critical Technology Partnership grants, visit </span><a href="http://www.aspi.org.au/techdiplomacy">www.aspi.org.au/techdiplomacy</a><span> and https://www.internationalcybertech.gov.au/AICCTP-grant-round-two</span></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific'>http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific</a>
</p>
Author: Arindrajit Basu · Topics: Internet Governance, Artificial Intelligence · Published: 2022-10-21

AI in the Future of Work
http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work
<b>Artificial Intelligence and allied technologies form part of what is being called the fourth Industrial Revolution.</b>
<p style="text-align: justify; ">Some analysts <a href="https://workofthefuturecongress.mit.edu/wp-content/uploads/2019/06/w25682.pdf">project the loss of jobs</a> as AI replaces humans, especially in job roles that consist of repetitive tasks that are easier to automate. Another prediction is that AI, as preceding technologies, will <a href="https://www.ilo.org/wcmsp5/groups/public/---dgreports/---cabinet/documents/publication/wcms_647306.pdf">enhance and complement</a> human capability, rather than replacing it at large scales. AI at the workplace includes a wide range of technologies, from <a href="https://www.infosys.com/human-amplification/Documents/manufacturing-ai-perspective.pdf">machine-to-machine interactions on the factory floor</a>, to automated decision-making systems.</p>
<p style="text-align: justify; ">Some analysts <a href="https://workofthefuturecongress.mit.edu/wp-content/uploads/2019/06/w25682.pdf">project the loss of jobs</a> as AI replaces humans, especially in job roles that consist of repetitive tasks that are easier to automate. Another prediction is that AI, as preceding technologies, will <a href="https://www.ilo.org/wcmsp5/groups/public/---dgreports/---cabinet/documents/publication/wcms_647306.pdf">enhance and complement</a> human capability, rather than replacing it at large scales. AI at the workplace includes a wide range of technologies, from <a href="https://www.infosys.com/human-amplification/Documents/manufacturing-ai-perspective.pdf">machine-to-machine interactions on the factory floor</a>, to automated decision-making systems.</p>
<h3 style="text-align: justify; ">Studying the Platform Economy</h3>
<p style="text-align: justify; ">The platform economy, in particular, is dependent on AI in the design of aggregator platforms that form a two-way market between customers and workers. Platforms deploy AI at a number of different stages, from recruitment to assignment of tasks to workers. AI systems often reflect existing social biases, as they are built using biased datasets, and by non-diverse teams that are not attuned to such biases. This has been the case in the platform economy as well, where biased systems impact the ability of marginalised workers to access opportunities. To take an example, Amazon’s algorithm to filter workers’ resumes was <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G">biased against women</a> because it was trained on 10 years of hiring data, and ended up reflecting the underrepresentation of women in the tech industry. That is not to say that algorithms introduce biases where they didn’t exist earlier, but that they take existing biases and hard code them into systems in a systematic and predictable manner.</p>
<p style="text-align: justify; ">Biases are made even more explicit in marketplace platforms, that allow employers to review workers’ profiles and skills for a fee. In a study of platforms offering home-based services in India, we found that marketplace platforms offer filtering mechanisms which allow employers to filter workers by demographic characteristics such as gender, age, religion, and in one case, caste (the research publication is forthcoming). The design of the platform itself, in this case, encourages and enables discrimination of workers. One of the leading platforms in India had ‘Hindu maid’ and ‘Hindu cook’ as its top search term, reflecting the ways in which employers from the dominant religion are encouraged to discriminate against workers from minority religions in the Indian platform economy.</p>
<p style="text-align: justify; ">Another source of bias in the platform economy are rating and pricing systems, which can reduce the quality and quantum of work offered to marginalised workers. Rating systems exist across platform types - those that offer on-demand or location-based work, microwork platforms, and marketplace platforms. They allow customers and employers to rate workers on a scale, and are most often one-way feedback systems to review a worker’s performance (as our forthcoming research discusses, we found very few examples of feedback loops that also allow workers to rate employers). Rating systems <a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf">have been found</a> to be a source of anxiety for workers, as they can be rated poorly for unfair reasons, including their demographic characteristics. Most platforms penalise workers for poor ratings, and may even stop them from accessing any tasks at all if their ratings fall below a certain threshold. Without adequate grievance redressal mechanisms that allow workers to contest poor ratings, rating systems are prone to reflect customer biases while appearing neutral. It is difficult to assess the level of such bias without companies releasing data comparing ratings of workers by their demographic characteristics, but it <a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf">has been argued</a> that there is ample evidence to believe that demographic characteristics will inevitably impact workers ratings due to widespread biases.</p>
<h3>Searching for a Solution</h3>
<p style="text-align: justify; ">It is clear that platform companies need to be pushed into solving for biases and making their systems more fair and non-discriminatory. Some companies, such as Amazon in the example above, have responded by suspending algorithms that are proven to be biased. However, this is a temporary fix, as companies rarely seek to drop such projects indefinitely. In the platform economy, where algorithms are central to the business model of companies, complete suspension is near impossible. Amazon also tried another quick fix - it <a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G">altered the algorithm</a> to respond neutrally to terms such as ‘woman’. This is a process known as debiasing the model, through which any biased connections (such as between the word ‘woman’ and downgrading) being made by the algorithm are explicitly removed. Another solution is diversifying or debiasing datasets. In this example, the algorithm could be fed a larger sample of resumes and decision-making logics from industries that have a higher representation of women.</p>
<p style="text-align: justify; ">Another set of solutions could be drawn from anti-discrimination law, which prohibit discrimination at the workplace. In India, anti-discrimination laws protect against wage inequality, as well as discrimination at the stage of recruitment for protected groups such as transgender persons. While it can be argued that biased rating systems lead to wage inequality, there are several barriers to applying anti-discrimination law for workers in the platform economy. One, most jurisdictions, including India, protect only employees from discrimination, not self-employed contractors. Another challenge is the lack of data to prove that rating or recruitment algorithms are discriminatory, without which legal recourse is impossible. <a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf">Rosenblat et al.</a> (2016) discuss these challenges in the context of the US, suggesting solutions such as addressing employment misclassification or modifying pleading requirements to bring platform workers under the protection of the law.</p>
<p style="text-align: justify; ">Feminist principles point to structural shifts that are required to ensure robust protections for workers. Analysing algorithmic systems from a feminist lens indicates several points in the design at which interventions must be focused to ensure impact. The teams designing algorithms need to be made more diverse, along with integrating an explicit focus on assessing the impact of systems at the stage of design. Companies need to be more transparent with their data, and encourage independent audits of their systems. Corporate and government actors must be held to account to fix broken AI systems.</p>
<hr />
<p style="text-align: justify; "><span>Ambika Tandon is a Senior Researcher at the <a href="https://cis-india.org/">Centre for Internet & Society (CIS)</a> in India, where she studies the intersections of gender and technology. She focuses on women’s work in the digital economy, and the impact of emerging technologies on social inequality. She is also interested in developing feminist methods for technology research. Ambika tweets at <a href="https://twitter.com/AmbikaTandon">@AmbikaTandon</a>.</span></p>
<p style="text-align: justify; ">The blog was originally <a class="external-link" href="https://ethicalsource.dev/blog/ai-in-the-future-of-work/">published in the Organization for Ethical Source</a></p>
<p>
For more details visit <a href='http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work'>http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work</a>
</p>
Author: Ambika Tandon · Topics: CIS, RAW, Researchers at Work, Artificial Intelligence, Future of Work · Published: 2021-12-07

Practicing Feminist Principles
http://editors.cis-india.org/raw/practicing-feminist-principles
<b>AI can serve to challenge social inequality and dismantle structures of power.</b>
<p style="text-align: justify; "><span>Artificial intelligence systems have been heralded as a tool to purge our systems of social biases, opinions, and behaviour, and produce ‘hard objectivity’. However, on the contrary, it has become evident that AI systems can sharpen inequalities and bias by hard coding it. If left unattended, automated decision-making can be dangerous and dystopian.</span></p>
<p style="text-align: justify; "><strong>However, when appropriated by feminists, AI can serve to challenge social inequality and dismantle structures of power. There are many routes to such appropriation – resisting authoritarian uses through movement-building and creating our own alternative systems that harness the strength of AI towards achieving social change.</strong></p>
<p style="text-align: justify; "><strong>Feminist principles can be a handy framework to understand and transform the impact of AI systems. Key principles include reflexivity, participation, intersectionality, and working towards structural change.</strong> When operationalised, these principles can be used to enhance the capacities of local actors and institutions working towards developmental goals. They can also be used to theoretically ground collective action against the use of AI systems by institutions of power.</p>
<p style="text-align: justify; "><strong>Reflexivity</strong> in the design and implementation of AI would imply a check on the privilege and power, or lack thereof, of the various stakeholders involved in an ecosystem. By being reflexive, designers can take steps to account for power hierarchies in the process of design. A popular example of the impact of power differentials is in national statistics. Collected largely by male surveyors speaking to male heads of households, national statistics can often undervalue or misrepresent women’s labour and health. See Data2x. “<a class="external-link" href="https://www.data4sdgs.org/sites/default/files/2017-09/Gender%20Data%20-%20Data4SDGs%20Toolbox%20Module.pdf">Gender Data: Sources, Gaps, and Measurement Opportunities</a>,” March 2017 and Statistics Division. “Gender, Statistics and Gender Indicators Developing a Regional Core Set of Gender Statistics and Indicators in Asia and the Pacific.” <a class="external-link" href="https://www.unescap.org/sites/default/files/Framework-and-Indicator-set.pdf">United Nations Economic and Social Commission for Asia and the Pacific, 2013</a>. <span>AI systems would need to be reflexive of such gaps and plan steps to mitigate them.</span></p>
<p style="text-align: justify; "><strong>Participation</strong> as a principle focuses on the process. A participatory process would account for the perspectives and lived experiences of various stakeholders, including those most impacted by its deployment. <strong>In the health ecosystem, for instance, this would include policymakers, public and private healthcare providers, frontline workers, and patients. A health information system with a bottom-up design would account for metrics of success determined by not just high-level organisations such as the World Health Organisation and national governments, but also by providers and frontline workers</strong>. Among other benefits, participation in designing AI systems also leads to buy-in and ownership of the technology right at the outset, promoting widespread adoption.</p>
<p style="text-align: justify; "><strong>Intersectionality</strong> calls for addressing the social difference in the datasets, design, and deployment of AI. <strong>Research across fields has shown the perpetuation of inequality based on gender, income, race, and other characteristics through AI that is based on biased datasets.</strong></p>
<p style="text-align: justify; ">The most critical principle is to ensure that AI systems are working to challenge inequality, including inequality perpetrated by patriarchal, racist, and capitalist systems. Aligning with feminist objectives means that systems that have objectives that do not align with feminist goals – such as those that enhance state capacities to surveil and police – would immediately be excluded. Systems that are designed to exclude and oppress will not work to further feminist goals, even if they integrate other progressive elements such as intersectional datasets or dynamic consent architecture (which would allow users to opt in and out easily).</p>
<p style="text-align: justify; ">We must work towards decreasing social inequality and achieve egalitarian outcomes in and through its practice. Thus, while explicitly feminist projects such as those that produce better datasets or advocate for participatory mechanisms are of course practicing this principle, I would argue that it is also practiced by any project that furthers feminist goals. Take for example AI projects that aim to reduce hate speech and misinformation online. Given that women and other marginalised groups are often at the receiving end of violence, such work can be classified as feminist even if it doesn’t actively target gender-based violence.</p>
<p style="text-align: justify; ">All technology is embedded in social relations. Practicing feminist principles in the design of AI only serves to account for these social relations and design better, more robust systems. <strong>Feminist practitioners can mobilise these to ensure a future of AI with inclusive, community-owned, participatory systems, combined with collective challenges to systems of domination.</strong></p>
<hr />
<h3>References</h3>
<p>Haraway, Donna. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14, no. 3 (1988): 575–99. https://doi.org/10.2307/3178066.</p>
<p>Link to the original article <a class="external-link" href="https://feministai.pubpub.org/pub/practicing-feminist-principles/release/1?readingCollection=c218d365">here</a></p>
<p>
For more details visit <a href='http://editors.cis-india.org/raw/practicing-feminist-principles'>http://editors.cis-india.org/raw/practicing-feminist-principles</a>
</p>
Author: Ambika Tandon · Topics: Gender, Welfare, and Privacy; CIS; RAW; Researchers at Work; Artificial Intelligence · Published: 2021-12-07

Finding Needles in Haystacks - Discussing the Role of Automated Filtering in the New Indian Intermediary Liability Rules
http://editors.cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules
<b>On 25 February 2021, the Government of India notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new Rules broaden the scope of entities that can be considered intermediaries to include curated-content platforms (such as Netflix) as well as digital news publications. This blogpost analyses the rule on automated filtering, in the context of the growing use of automated content moderation.</b>
<p class="p1"><span class="s1">This article first <a class="external-link" href="https://www.law.kuleuven.be/citip/blog/finding-needles-in-haystacks/">appeared</a> on KU Leuven's Centre for IT and IP Law (CiTiP) blog. Cross-posted with permission.</span></p>
<p class="p1"><span class="s1">----</span></p>
<p class="p1"><span class="s1">Matthew Sag in his 2018 <a href="https://scholarship.law.nd.edu/cgi/viewcontent.cgi?article=4761&context=ndlr"><span class="s2">paper</span></a> on internet safe harbours discussed how the internet shifted knowledge-sharing away from the traditional gatekeepers of knowledge (publishing houses) that used to decide what knowledge could be showcased, to a system where everybody with access to the internet can showcase their work. A “<em>content creator</em>” today ranges from a legacy media company to any person with a smartphone and an internet connection. In a similar trajectory, with the increase in websites and mobile apps and the functions that they serve, the scope of what counts as an internet intermediary has widened all over the world. </span></p>
<p class="p1"><span class="s1"><strong>Who is an Intermediary?</strong></span></p>
<p class="p1"><span class="s1">In India the definition of “<em>intermediary</em>” is found under Section 2(w) of the <a href="https://www.meity.gov.in/writereaddata/files/itbill2000.pdf"><span class="s2">Information Technology (IT) Act 2000</span></a>, which defines an intermediary as <em>“with respect to any particular electronic records, means any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecoms service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-marketplaces and cyber cafes”.</em> The all-encompassing nature of this definition has allowed the Act, together with the Guidelines published periodically (<a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf"><span class="s2">2011</span></a>, <a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf"><span class="s2">2018</span></a> and <a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf"><span class="s2">2021</span></a>), to keep pace with the dynamic nature of intermediaries. With more websites, social media companies and content creators online today, there is a need to look at ways in which intermediaries can remove illegal content or content that goes against their community guidelines.</span></p>
<p class="p1"><span class="s1">Along with the definition of an intermediary, the IT Act, under Section 79, provides exemptions that grant internet intermediaries safe harbour from liability for third-party content, and further empowers the central government to make Rules that act as guidelines for intermediaries to follow. The Intermediary Liability (IL) Rules hence seek to regulate content, lay down safe harbour provisions for intermediaries and internet service providers, and provide a conducive environment that keeps up with the changing nature of the internet and internet intermediaries. Under this provision, India has so far published three versions of the IL Rules: the first in<a href="https://www.meity.gov.in/writereaddata/files/GSR314E_10511%25281%2529_0.pdf"><span class="s2"> 2011</span></a>, followed by draft amendments in<a href="https://www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf"><span class="s2"> 2018</span></a> and finally the latest <a href="https://www.meity.gov.in/writereaddata/files/Intermediary_Guidelines_and_Digital_Media_Ethics_Code_Rules-2021.pdf"><span class="s2">2021 </span></a>version, which supersedes the 2011 Rules. </span></p>
<p class="p1"><span class="s1"><strong>The Growing Use of Automated Content Moderation </strong></span></p>
<p class="p1"><span class="s1">Each version of the Rules introduced changes meant to keep them abreast of the changing face of the internet and the changing nature of both content and content creators. The 2018 version of the Rules accordingly showcased a push towards automated content filtering. The text of Rule 3(9) reads as follows: “<em>The Intermediary shall deploy technology based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content</em>”.</span></p>
<p class="p1"><span class="s1">Under Rule 3(9), intermediaries were required to deploy automated tools or appropriate mechanisms to proactively identify and remove or disable public access to unlawful content. However, neither the 2018 IL Rules nor the parent Act (the IT Act) specified which content could be deemed unlawful. The 2018 Rules also failed to establish the specific responsibilities of the intermediaries, instead relying on vague terms like “<em>appropriate mechanisms</em>” and “<em>appropriate controls</em>”. Hence, though the Rules mandated the use of automated tools, neither they nor the IT Act provided clear guidelines on what could be removed. </span></p>
<p class="p1"><span class="s1">The lack of clear guidelines, and of a list of removable content, left it to intermediaries to decide which content, if not actively removed, could cost them their immunity. It has been previously documented that the lack of clear guidelines in the 2011 version of the <a href="https://cis-india.org/internet-governance/chilling-effects-on-free-expression-on-internet"><span class="s2">Rules</span></a> led intermediaries to over-comply with take-down notices, often taking down content that did not warrant it. The existing tendency to over-comply, combined with automated filtering, could have resulted in a number of <a href="https://cis-india.org/internet-governance/how-india-censors-the-web-websci#:~:text=One%2520of%2520the%2520primary%2520ways,certain%2520websites%2520for%2520its%2520users."><span class="s2">unwarranted take-downs</span></a>.</span></p>
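The over-compliance dynamic described above is structural to blunt automated filtering. As a purely illustrative sketch (the blocklist and sample posts below are invented, not drawn from any real deployment), consider the simplest possible keyword filter a risk-averse intermediary might run:

```python
# Hypothetical sketch of a naive keyword-based proactive filter.
# The blocklist and sample posts are invented for illustration only;
# real moderation pipelines are far more elaborate, but the
# false-positive problem shown here is structural.

BLOCKLIST = {"riot", "attack"}  # hypothetical "unlawful content" terms

def should_remove(post: str) -> bool:
    """Flag a post if any blocklisted term appears in it."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

# A legitimate news report trips the filter exactly as incitement would:
print(should_remove("Police disperse riot in the old city, reports say"))  # True
print(should_remove("Farmers hold peaceful march in the capital"))         # False
```

Because such a filter cannot distinguish reporting about a riot from incitement to riot, an intermediary whose immunity depends on removal will err towards taking both down.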
<p class="p1"><span class="s1">While the 2018 Rules mandated the deployment of automated tools, the year 2020 (possibly due to pandemic-induced work-from-home safety protocols and global lockdowns) saw major social media companies announcing a move towards fully automated content <a href="https://www.medianama.com/2020/03/223-facebook-content-moderation-coronavirus-medianamas-take/"><span class="s2">moderation</span></a>. Though automated content removal seems like the right step considering the <a href="https://www.businessinsider.in/tech/news/facebook-content-moderator-who-quit-reportedly-wrote-a-blistering-letter-citing-stress-induced-insomnia-among-other-trauma/articleshow/82075608.cms"><span class="s2">trauma</span></a> that human moderators have had to go through, the algorithms now used to remove content rely on the parameters, practices and data from earlier removals made by human moderators. More recently in India, with the emergence of the second wave of COVID-19, the Ministry of Electronics and Information Technology has <a href="https://www.thehindu.com/news/national/govt-asks-social-media-platforms-to-remove-100-covid-19-related-posts/article34406733.ece"><span class="s2">asked</span></a> social media platforms to remove “<em>unrelated, old and out of the context images or visuals, communally sensitive posts and misinformation about COVID19 protocols</em>”.</span></p>
<p class="p1"><span class="s1"><strong>The New IL Rules - A ray of hope?</strong></span></p>
<p class="p3"><span class="s3">The 2021 version of the IL Rules provides a more nuanced approach to the use of automated content filtering compared to the earlier versions. Rule 4(4) now requires only “</span><span class="s1">significant social media intermediaries” to use automated tools to identify and take down content pertaining to “child sexual abuse material” or “depicting rape”, or any information identical to content that has already been removed through a take-down notice. The Rules define a social media intermediary as an “<em>intermediary which primarily or solely enables interaction between two or more users and allows them to create, upload, share, disseminate, modify or access information using its services</em>”. The Rules also go a step further to create another type of intermediary, the significant social media intermediary, defined as one “<em>having a number of registered users in India above such threshold as notified by the Central Government</em>”. Hence which social media intermediaries qualify as significant ones could change at any time.</span></p>
<p class="p1"><span class="s4">Along with adding a new threshold (qualifying as a significant social media intermediary), the Rules, in contrast to the 2018 version, also emphasise the need for such removal to be </span><span class="s1">proportionate to the interests of freedom of speech and expression and the privacy of users. The Rules also call for “<em>appropriate human oversight</em>” as well as a periodic review of the tools used for content moderation. By using the term “<em>shall endeavour</em>”, the Rules reduce the pressure on the intermediary to set up these mechanisms. The requirement is now on a best-effort basis, as opposed to the word “<em>shall</em>” in the 2018 version of the Rules, which made it mandatory.</span></p>
<p class="p1"><span class="s1">Although the Rules now narrow down the instances where automated content removal can take place, the concerns around over-compliance and censorship still loom. One reason for concern is that the Rules still fail to require intermediaries to set up a mechanism for redress or appeal against such removal. Additionally, the provision that automated systems may remove content that has previously been taken down creates cause for worry, as the propensity of intermediaries to over-comply and take down content has already been documented. This brings us back to the earlier issue of social media companies’ automated systems removing legitimate news sources. Though the 2021 Rules try to clarify certain provisions related to automated filtering, such as the addition of safeguards, they also suffer from vague provisions that could cause issues related to compliance. Terms such as “<em>proportionate</em>” and “<em>having regard to free speech</em>” fail to lay down definitive directions for the intermediaries (in this case SSMIs) to comply with. Additionally, as stated earlier, whether an intermediary qualifies as an SSMI can change at any time, based either on a change in the number of users or a change in the user threshold mandated by the government. The absence of human intervention during removal, vague guidelines and the fear of losing safe harbour protections add to the already increasing trend of censorship on social media. With proactive filtering through automated means, content can be removed almost immediately, meaning that certain content creators might not even be able to post their content <a href="https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online"><span class="s2">online</span></a>. With India’s current trend of new internet users, some of these creators would also be <a href="https://timesofindia.indiatimes.com/business/india-business/for-the-first-time-india-has-more-rural-net-users-than-urban/articleshow/75566025.cms"><span class="s2">first-time users</span></a> of the internet.</span></p>
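The one part of Rule 4(4) that is technically straightforward is the removal of information “identical” to previously removed content, which in practice is a fingerprint-matching exercise. The sketch below assumes exact-match SHA-256 hashing purely for simplicity; production systems generally rely on perceptual hashing (PhotoDNA-style), precisely because an exact hash misses even a one-byte alteration:

```python
import hashlib

# Minimal sketch of "identical content" re-removal via exact hashing.
# Assumption for illustration: SHA-256 over the raw bytes. Real systems
# use perceptual hashes so that trivially altered copies still match.

removed_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Fingerprint content by its SHA-256 digest."""
    return hashlib.sha256(content).hexdigest()

def take_down(content: bytes) -> None:
    """Record the hash of content removed via a take-down notice."""
    removed_hashes.add(fingerprint(content))

def is_reupload(content: bytes) -> bool:
    """Check a new upload against previously removed content."""
    return fingerprint(content) in removed_hashes

take_down(b"bytes of an image removed by notice")
print(is_reupload(b"bytes of an image removed by notice"))   # True: exact copy
print(is_reupload(b"bytes of an image removed by notice!"))  # False: one byte differs
```

The worry about over-compliance follows directly from this design: whatever was wrongly taken down once is then wrongly taken down again, automatically, unless some redress mechanism exists to purge bad fingerprints from the database.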
<p class="p3"><span class="s1"><strong>Conclusion</strong></span></p>
<p class="p3"><span class="s1">The need for automated removal of content is understandable, based not only on the sheer volume of content but also on the nightmare stories of the toll it takes on human content moderators, who otherwise have to go through hours of disturbing content. Though the Indian Intermediary Liability Guidelines have improved on earlier versions by moving away from mandating proactive filtering, there still needs to be consideration of how these technologies are used, and the laws should recognise the shift in the definition of who a content creator is. There need to be avenues of recourse against unfair removal of content, and a means of getting an explanation, via notices to the user, of why the content was removed. In the case of India, the notices should be in Indian languages as well, so that people are able to understand them. </span></p>
<p class="p3"><span class="s1">In the absence of further clear guidelines, the perils of over-censorship by intermediaries seeking to stay out of trouble could lead to further stifling not just of freedom of speech but also of access to information. In addition, the fear of content being taken down, or even of potential prosecution, could mean that people resort to self-censorship, preventing them from exercising their fundamental rights to freedom of speech and expression as guaranteed by the Indian Constitution. We hope that the next version of the Rules takes a more nuanced approach to automated content removal and provides adequate and specific safeguards, ensuring a conducive environment for both intermediaries and content creators. </span></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules'>http://editors.cis-india.org/internet-governance/blog/finding-needles-in-haystacks-discussing-the-role-of-automated-filtering-in-the-new-indian-intermediary-liability-rules</a>
</p>
By Shweta Mohandas and Torsha Sarkar · Internet Governance · Intermediary Liability · Artificial Intelligence · 2021-08-03 · Blog Entry

New intermediary guidelines: The good and the bad
http://editors.cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad
<b>In pursuance of the government releasing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, this blogpost offers a quick rundown of some of the changes brought about by the Rules, and how they line up with existing principles of best practice in content moderation, among others. </b>
<p>This article originally appeared in the Down to Earth <a class="external-link" href="https://www.downtoearth.org.in/blog/governance/new-intermediary-guidelines-the-good-and-the-bad-75693">magazine</a>. Reposted with permission.</p>
<p>-------</p>
<p>The Government of India has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These rules operate in supersession of the existing intermediary liability rules under the Information Technology (IT) Act, made back in 2011.</p>
<p>These IL rules would have a significant impact on our relationships with internet ‘intermediaries’, i.e. the gatekeepers of and gateways to the internet, including social media platforms and communication and messaging channels.</p>
<p>The rules also make a bid to include entities that have not traditionally been considered ‘intermediaries’ within the law, including curated-content platforms such as Netflix and Amazon Prime as well as digital news publications.</p>
<p>These rules are a significant step up from the draft version of the amendments floated by the Union government two years ago; in this period, the relationship between governments around the world and major intermediaries changed significantly.</p>
<p>The insistence of these entities in the past that they are not ‘arbiters of truth’, for instance, has not always held water in their own decision-making.</p>
<p>Both Twitter and Facebook, for instance, have locked the former United States president Donald Trump out of their platforms. Twitter has also resisted fully complying with government censorship requests in India, spilling into an interesting policy tussle between the two entities. It is in the context of these changes, therefore, that we must consider the new rules.</p>
<p><strong>What changed for the good?</strong></p>
<p>One of the immediate standouts of these rules is the more granular way in which they aim to approach the problem of intermediary regulation. The previous draft — and in general the entirety of the law — had continued to treat ‘intermediaries’ as a monolithic entity, entirely definable by section 2(w) of the IT Act, which in turn derived much of its legal language from the EU E-commerce Directive of 2000.</p>
<p>Intermediaries in the directive were treated more like ‘mere conduits’, or dumb, passive carriers that did not play any active role in the content. While that might have been true of the internet when these laws and rules were first enacted, the internet today looks much different.</p>
<p>Not only is there a diversification of the services offered by these intermediaries, there is also a significant issue of scale, wielded by a few select players through centralisation or the sheer size of their user bases. A broad, general mandate would, therefore, miss many of these nuances, leading to imperfect regulatory outcomes.</p>
<p>The new rules, therefore, envisage three types of entities:</p>
<ul><li>There are the ‘intermediaries’ within the traditional, section 2(w) meaning of the IT Act. This would be the broad umbrella term for all entities that fall within the ambit of the rules.</li><li>There are the ‘social media intermediaries’ (SMIs): entities that enable online interaction between two or more users.</li><li>The rules identify ‘significant social media intermediaries’ (SSMIs), meaning entities with user thresholds as notified by the Central Government.</li></ul>
<p>The levels of obligations vary based on this hierarchy of classification. For instance, an SSMI would be held to a much higher standard of transparency and accountability towards its users. SSMIs would have to publish six-monthly transparency reports outlining how they dealt with requests for content removal, how they deployed automated tools to filter content, and so on.</p>
<p>I have previously argued that transparency reports, when done well, are an excellent way of understanding the breadth of government and social media censorship. Legally mandating them is perhaps a step in the right direction.</p>
<p>Some other requirements under this transparency principle include giving notice to users whose content has been disabled, allowing them to contest such removal, etc.</p>
<p>One of the other rules from the older draft that had raised a significant amount of concern was the proactive filtering mandate, under which intermediaries were required to filter for essentially all unlawful content. This was problematic on two counts:</p>
<ul><li>Developments in machine learning technologies are simply not advanced enough to make this a possibility, which means there would always be a chance that legitimate and legal content would get censored, leading to a general chilling effect on digital expression; and</li><li>The technical and financial burden this would impose on intermediaries would have impacted competition in the market.</li></ul>
<p>The new rules seem to have lessened this burden: first, by reducing it from a mandatory requirement to a best-endeavour basis; and second, by reducing the ambit of ‘unlawful content’ to only include content depicting sexual abuse, child sexual abuse material (CSAM) and content duplicating already disabled or removed content.</p>
<p>This specificity would be useful for better deployment of such technologies, since previous research has shown that it is considerably easier to train a machine learning tool on a corpus of CSAM or abuse than on more contextual, subjective matters such as hate speech.</p>
<p><strong>What should go?</strong></p>
<p>That being said, it is concerning that the new rules choose to bring online curated content platforms (OCCPs) within the ambit of the law, through proposals for a three-tiered self-regulatory body and schedules outlining guidelines about the rating system these entities should deploy.</p>
<p>In the last two years, several attempts have been made by the Internet and Mobile Association of India (IAMAI), an industry body consisting of representatives of these OCCPs, to bring about a self-regulatory code that fills in the supposed regulatory gap in the Indian law.</p>
<p>It is not known if these stakeholders were consulted before the enactment of these provisions. Some of this framework would also apply to publishers of digital news portals.</p>
<p>Noticeably, this entire chapter was also missing from the old draft, and introducing it in the final form of the law without due public consultations is problematic.</p>
<p>Part III and onwards of the rules, which broadly deal with the regulation of these entities, therefore, should be put on hold and opened up for a period of public and stakeholder consultations to adhere to the true spirit of democratic participation.</p>
<p><em>The author would like to thank Gurshabad Grover for his editorial suggestions. </em></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad'>http://editors.cis-india.org/internet-governance/blog/new-intermediary-guidelines-the-good-and-the-bad</a>
</p>
By TorShark · IT Act · Intermediary Liability · Internet Governance · Censorship · Artificial Intelligence · 2021-03-15 · Blog Entry

CIS Seminar Series
http://editors.cis-india.org/internet-governance/blog/cis-seminar-series
<b>The CIS seminar series will be a venue for researchers to share works-in-progress, exchange ideas, identify avenues for collaboration, and curate research. We also seek to mitigate the impact of Covid-19 on research exchange, and foster collaborations among researchers and academics from diverse geographies. Every quarter we will be hosting a remote seminar with presentations, discussions and debate on a thematic area. </b>
<p style="text-align: justify; ">The first seminar was held on 7th and 8th October on the theme of <a href="https://cis-india.org/internet-governance/blog/cis-seminar-series-information-disorder">‘Information Disorder: Mis-, Dis- and Malinformation’</a>.</p>
<h3 style="text-align: justify; ">Theme for the Second Seminar (to be held online)</h3>
<h3>Moderating Data, Moderating Lives: Debating visions of (automated) content moderation in the contemporary</h3>
<p style="text-align: justify; ">Artificial Intelligence (AI) and Machine Learning (ML) based approaches have become increasingly popular as “solutions” to curb the extent of mis-, dis- and mal-information, hate speech, online violence and harassment on social media. The pandemic and the ensuing work-from-home policies forced many platforms to shift to automated moderation, which further highlighted the inefficacy of existing models <a href="https://www.zotero.org/google-docs/?u73Lwx">(Gillespie, 2020)</a> in dealing with the surge in misinformation and harassment. These efforts, however, raise a range of interrelated concerns: freedom and regulation of speech in the privately public sphere of social media platforms; algorithmic governance, censorship and surveillance; the relation between virality, hate, algorithmic design and profits; and the social, political and cultural implications of ordering social relations through the computational logics of AI/ML.</p>
<p style="text-align: justify; ">On one hand, large-scale content moderation approaches (that include automated AI/ML-based approaches) have been deemed “necessary” given the enormity of data generated <a href="https://www.zotero.org/google-docs/?JHQ0rF">(Gillespie, 2020)</a>, on the other hand, they have been regarded as “technological fixtures” offered by the Silicon Valley <a href="https://www.zotero.org/google-docs/?YLFnLm">(Morozov, 2013)</a>, or “tyrannical” as they erode existing democratic measures <a href="https://www.zotero.org/google-docs/?Ia8JYp">(Harari, 2018)</a>. Alternatively, decolonial, feminist and postcolonial approaches insist on designing AI/ML models that centre voices of those excluded to sustain and further civic spaces on social media (<a href="https://www.zotero.org/google-docs/?1Sa8vf">Siapera, 2022)</a>.</p>
<p style="text-align: justify; ">From the global south perspective, issues around content moderation foreground the hierarchies inbuilt in the existing knowledge infrastructures. First, platforms remain unwilling to moderate content in under-resourced languages of the global south citing technological difficulties. Second, given the scale and reach of social media platforms and inefficient moderation models, the work is outsourced to workers in the global south who are meant to do the dirty work of scavenging content off these platforms for the global north. Such concerns allow us to interrogate the techno-solutionist approaches as well as their critiques situated in the global north. These realities demand that we articulate a different relationship with AI/ML while also being critical of AI/ML as an instrument of social empowerment for those at the “bottom of the pyramid” <a href="https://www.zotero.org/google-docs/?bvx6mV">(Arora, 2016)</a>.</p>
<p style="text-align: justify; ">The seminar invites scholars interested in articulating nuanced responses to content moderation that take into account the harms perpetrated by algorithmic governance of social relations and irresponsible intermediaries, while being cognizant of the harmful effects of mis-, dis- and mal-information, hate speech, online violence and harassment on social media.</p>
<p style="text-align: justify; ">We invite abstract submissions that respond to these complexities vis-a-vis content moderation models or propose provocations regarding automated moderation models and their in/efficacy in furthering egalitarian relationships on social media, especially in the global south.</p>
<p style="text-align: justify; ">Submissions can reflect on the following themes using legal, policy, social, cultural and political approaches. Also, the list is not exhaustive and abstracts addressing other ancillary concerns are most welcome:</p>
<ul>
<li>Metaphors of (content) moderation: mediating utopia, dystopia, scepticism surrounding AI/ML approaches to moderation.</li>
<li>From toxic to healthy, from purity to impurity: Interrogating gendered, racist, colonial tropes used to legitimize content moderation </li>
<li>Negotiating the link between content moderation, censorship and surveillance in the global south</li>
<li>Whose values decide what is and is not harmful? </li>
<li>Challenges of building moderation models for under-resourced languages.</li>
<li>Content moderation, algorithmic governance and social relations. </li>
<li>Communicating algorithmic governance on social media to the not so “tech-savvy” among us.</li>
<li>Speculative horizons of content moderation and the future of social relations on the internet. </li>
<li>Scavenging abuse on social media: Immaterial/invisible labour for making for-profit platforms safer to use.</li>
<li>Do different platforms moderate differently? Interrogating content moderation on diverse social media platforms, and multimedia content.</li>
<li>What should and should not be automated? Understanding prevalence of irony, sarcasm, humour, explicit language as counterspeech.</li>
<li>Maybe we should not automate: Alternative, bottom-up approaches to content moderation</li>
</ul>
<h3>Seminar Format</h3>
<p>We are happy to welcome abstracts for one of two tracks:</p>
<p><strong>Working paper presentation</strong></p>
<p style="text-align: justify; ">A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.</p>
<p><strong>Coffee-shop conversations</strong></p>
<p style="text-align: justify; ">In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.</p>
<p style="text-align: justify; ">We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.</p>
<p>All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.</p>
<p>Please send your abstracts to <a href="mailto:workshops@cis-india.org">workshops@cis-india.org</a>.</p>
<h3>Timeline</h3>
<div id="_mcePaste"><ol>
<li>Abstract Submission Deadline: 18th April</li>
<li>Results of Abstract review: 25th April</li>
<li>Full submissions (of draft papers): 25th May</li>
<li>Seminar date: Tentative 31st May</li>
</ol></div>
<h3>References</h3>
<p class="MsoNormal" style="text-align:justify; "><span><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>Arora, P. (2016). Bottom of the Data Pyramid: Big Data and the Global South. </span></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><i><span>International Journal of Communication</span></i></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>, </span></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><i><span>10</span></i></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>(0), 19.</span></a></span><span> </span></p>
<p class="MsoNormal" style="text-align:justify; "><span><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>Gillespie, T. (2020). Content moderation, AI, and the question of scale. </span></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><i><span>Big Data & Society</span></i></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>, </span></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><i><span>7</span></i></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>(2), 2053951720943234. https://doi.org/10.1177/2053951720943234</span></a></span><span> </span></p>
<p class="MsoNormal" style="text-align:justify; "><span><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>Harari, Y. N. (2018, August 30). </span></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><i><span>Why Technology Favors Tyranny</span></i></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/</span></a></span><span> </span></p>
<p class="MsoNormal" style="text-align:justify; "><span><a href="https://www.zotero.org/google-docs/?ZHb88g"><span>Morozov, E. (2013). </span></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><i><span>To save everything, click here: The folly of technological solutionism</span></i></a><a href="https://www.zotero.org/google-docs/?ZHb88g"><span> (First edition). PublicAffairs.</span></a></span><span> </span></p>
<p><a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; ">Siapera, E. (2022). AI Content Moderation, Racism and (de)Coloniality. </a><a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; "><i>International Journal of Bullying Prevention</i></a><a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; ">, </a><a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; "><i>4</i></a><a href="https://www.zotero.org/google-docs/?ZHb88g" style="text-align: justify; ">(1), 55–65. https://doi.org/10.1007/s42380-021-00105-7</a></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/cis-seminar-series'>http://editors.cis-india.org/internet-governance/blog/cis-seminar-series</a>
</p>
No publisher · Cheshta Arora · Internet Governance · Machine Learning · Artificial Intelligence · Event · Seminar · 2022-04-11T15:19:11Z · Blog Entry
Insult to Kannada shows Google AI in a poor light
http://editors.cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light
<b>A Google search for ‘the ugliest language in India’ yielded ‘Kannada’ as the answer late last week, causing widespread outrage.
</b>
<p>The article by Krupa Joseph was <a class="external-link" href="https://www.deccanherald.com/metrolife/metrolife-your-bond-with-bengaluru/insult-to-kannada-shows-google-ai-in-a-poor-light-995307.html">published in Deccan Herald</a> on June 8, 2021. Pranesh Prakash and Shweta Mohandas have been quoted.</p>
<hr />
<p>Google has since apologised, saying the answer does not reflect its views, but questions still remain about why this happened at all, and who drafted the answer.</p>
<p style="text-align: justify; ">“When artificial intelligence gets it wrong, things can go really wrong, says tech entrepreneur,”Hari Prasad Nadig, who has worked on Kannada in free and open source soft ware.“Usually, you would expect Google to give an answer based on citings from multiple sources,and at least one or two credible sources.</p>
<p style="text-align: justify; ">Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.</p>
<p style="text-align: justify; ">“Usually, you would expect Google to give an answer based on citings from multiple sources, and at least one or two credible sources. Google’s AI should be good enough not to draw answers from opinionated sources,” he says. Google shouldn’t even try to answer prejudiced questions like this in the first place, and the answer shows how flawed it is, he told Metrolife.</p>
<h3 style="text-align: justify; ">Fallible process</h3>
<p style="text-align: justify; ">Pranesh Prakash, Centre for Internet and Society, Bengaluru, says the incident exposes the fallibility of the process by which Google selects its “featured snippets”.</p>
<p style="text-align: justify; ">“It is not an opinion that Google or its employees or its algorithms have come up with, but rather an existing opinion that Google wrongly amplified,” he says.It demonstrates that the snippets that Google features as ‘facts’ aren’t necessarily based on facts, he says.</p>
<h3 style="text-align: justify; ">Periodic checks</h3>
<p style="text-align: justify; ">Shweta Mohandas, researcher with the Center for Internet and Society, says Google does not create content, but only provides content that is available on the Internet.</p>
<p style="text-align: justify; ">“Hence, the biases come from the tags, then used to train the AI. There should be periodic checks on the data fed into the system,” she says. Such blunders can be prevented if the tags and results are audited periodically, and a mechanism is put in place to enable people to report them, she says.</p>
<h3 style="text-align: justify; ">Who was upto mischief?</h3>
<p style="text-align: justify; ">The answer was created on a financial services website whose owners aren’t revealing their names Pavanaja UB, CEO, Vishva Kannada Softech, says the answer was attributed to a website called debt consolidations questions.com — but he was unable to find this post anywhere on the site.“This is a website registered in Russia and it offers questions and answers on many topics. But this particular page could not be found. Maybe it was removed following the outrage,” he says. Pavanaja believes this was a deliberate attempt to upset people. “The website lists no information about the owner and gives no contact details. Even if such a question did exist on the page before, how did it get to the top of the Google search results?” he wonders.</p>
<p style="text-align: justify; ">He suggests that someone planted the answer and kept searching for it until it reached the top.“But who would take so much effort?” he says.</p>
<h3 style="text-align: justify; ">Furore and after</h3>
<p>‘Kannada’ came up as an answer to a query in Google about ‘the ugliest language in India’.</p>
<p style="text-align: justify; ">Aravind Limbavali, minister for Kannada and Culture, demanded an apology from Google, and threatened legal action against the company “for maligning the image of our beautiful language.”</p>
<p>Google removed the answer and issued a statement:</p>
<p style="text-align: justify; ">“We know this is not ideal, but we take swift corrective action when we are made aware of an issue and are continually working to improve our algorithms. Naturally, these are not reflective of the opinions of Google, and we apologise for the misunderstanding and hurting any sentiments."</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light'>http://editors.cis-india.org/internet-governance/news/deccan-herald-june-8-2021-krupa-joseph-insult-to-kannada-shows-google-ai-in-a-poor-light</a>
</p>
No publisher · Krupa Joseph · Internet Governance · Artificial Intelligence · 2021-06-26T05:25:38Z · News Item
The Wolf in Sheep's Clothing: Demanding your Data
http://editors.cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data
<b>The increasing digitalization of the economy and ubiquity of the Internet, coupled with developments in Artificial Intelligence (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors.</b>
<p> </p>
<p>This piece was originally published in <a class="external-link" href="https://telecom.economictimes.indiatimes.com/tele-talk/the-wolf-in-sheep-s-clothing-demanding-your-data/4497">The Economic Times Telecom</a>, on 8 September, 2020.</p>
<p>The increasing digitalization of the economy and ubiquity of the <a href="https://telecom.economictimes.indiatimes.com/tag/internet">Internet</a>, coupled with developments in <a href="https://telecom.economictimes.indiatimes.com/tag/artificial+intelligence">Artificial Intelligence</a> (AI) and Machine Learning (ML), have given rise to transformational business models across several sectors. These developments have changed the very structure of existing sectors, with a few dominant firms straddling many sectors. The position of these firms is entrenched due to the large amounts of data they hold, the sophisticated algorithms that deliver highly targeted services and content, and their global nature.<br /><br /></p>
<p>Such data-based network businesses are generally multi-sided platforms subject to network effects and winner-takes-all phenomena, often making traditional competition regulation inappropriate. In addition, there has been concern that such companies hurt competition because they own the large amounts of data, collected globally, on which new services are predicated. Also, since users are reluctant to share their data across multiple platforms, new companies find it very challenging to emerge. Several of the large companies are of US origin. Regions and countries such as the EU, the UK, and India are concerned that while these companies benefit from the data of their citizens or their <a href="https://telecom.economictimes.indiatimes.com/tag/devices">devices</a>, SMEs and other companies in their own countries find it increasingly difficult to remain viable or achieve scale. With the objective of supporting enterprises, including SMEs, in their own countries, Europe, the UK, and India are at different stages of data regulation initiatives.<br /><br /></p>
<p>In India, the <a href="https://telecom.economictimes.indiatimes.com/tag/personal+data+protection">Personal Data Protection</a> (PDP) Bill, 2019 deals with the framework for collecting, managing and transferring the personal data of Indian citizens, including mandating the sharing of anonymized data of individuals and non-personal data for better targeting of services or policy making. In addition, the Report by the Committee of Experts (CoE) on Non-Personal Data (NPD) came up with a framework for regulating NPD. Since the NPD Report is more recent, this article analyzes some aspects of it.<br /><br /></p>
<p>According
to CoE, non-personal data could be of two types. First, data or
information which was never about an individual (e.g. weather data).
Second, data or information that once was related to an individual (e.g.
mobile number) but has now ceased to be identifiable due to the removal
of certain identifiers through the process of ‘anonymisation’. However,
it may be possible to recover the personal data from such anonymized
data and therefore, the distinction between personal and non-personal is
not clean. In any case, the PDP bill 2019 deals with personal data. If
the CoE felt that some aspect of personal data (including anonymized
data) were not adequately dealt with, it should work to strengthen it.
The current approach of the CoE is bound to create confusion and
overlapping jurisdiction. Since anonymized data is required to be
shared, there are disincentives to anonymization, causing greater risk
to individual privacy.<br /><br /></p>
<p>A new class of business based on a “<em>horizontal classification cutting across different industry sectors</em>” is defined. This refers to any business that derives “<em>new or additional economic value from data, by collecting, storing, processing, and managing data</em>”
based on a certain threshold of data collected/processed that will be
defined by the regulatory authority that is outlined in the report. The
CoE also recommends that “<em>Data Businesses will provide, within India, open access to meta-data and regulated access to the underlying data</em>” without any remuneration. Further, “<em>By
looking at the meta-data, potential users may identify opportunities
for combining data from multiple Data Businesses and/or governments to
develop innovative solutions, products and services. Subsequently, data
requests may be made for the detailed underlying data</em>”.<br /><br /></p>
<p>With increasing digitalization, today almost every business is a data business. The problem with such categorization lies in the definition of thresholds. It is likely that even a small video sharing app or an AR/VR app would store, collect, process, and transmit more data by volume than, say, a mid-sized bank. Further, with the increasing embedding of <a href="https://telecom.economictimes.indiatimes.com/tag/iot">IoT</a> in various aspects of our lives and businesses (smart manufacturing, logistics, banking, etc.), the amount of data captured by even small entities can be huge.<br /><br /></p>
<p>The private sector, driven by profitability, identifies innovative business models, risks capital, and finds unique ways of capturing and melding different data sets. Such innovation is necessary to sustain economic growth. The private sector would also like legal protection over these aspects of its businesses, including the unique IPR that may be embedded in its processing of data or its business processes. Mandating such onerous sharing requirements, as the CoE does, is going to kill any private initiative. Any regulatory regime must balance the need to provide a secure environment that protects incumbents’ data with the need to make it available to SMEs and other businesses.<br /><br /></p>
<p>Metadata provides insight into a company’s databases and processes; it is a source of competitive advantage for any company, and it is not without a context. The power to demand such disclosure rests with the proposed NPD Regulator, who would evaluate the purpose of each request. In practice, purposes are open to interpretation, and the structure of the appeal mechanism is going to stall any such sharing. Would such sharing mandates not interfere with existing Intellectual Property Rights? Or the freedom to contract? Any innovation could easily be made available to a competitor that front-ends itself with a start-up. To mandate making such data available would not be fair. Further, how would the NPD Regulator even ensure that such data is used for the purpose (which the proposed regulator is supposed to evaluate) for which it is sought? In Europe, where such <a href="https://telecom.economictimes.indiatimes.com/tag/data+sharing">data sharing</a> mandates are being considered, the focus is on public data. For private entities, sharing is largely based on voluntary contributions. Compulsory sharing is mandated only in restricted situations where market failure is not addressed through the Competition Act, and provided the legitimate interest of the data holder and existing legal provisions are taken into account.<br /><br /></p>
<p>Further, the compliance requirements for such Data Businesses are very onerous and make a mockery of the government’s “minimum government” framework. The CoE recommends that all Data Businesses, whether government, NGO, or private, are “<em>to disclose data elements collected, stored and processed, and data-based services offered</em>”. As if this were not enough, the CoE further recommends that “<em>Every Data Business must declare what they do and what data they collect, process and use, in which manner, and for what purposes (like disclosure of data elements collected, where data is stored, standards adopted to store and secure data, nature of data processing and data services provided). This is similar to disclosures required by pharma industry and in food products</em>”. Such disclosures are necessary in those industries because the companies in them deal with critical aspects of human life. But are such requirements necessary for all activities and businesses? As long as organizations collect and process data legally, within sectoral regulation, why should such information have to be “reported”? Such bureaucratic processes and reporting requirements will only burden existing legitimate businesses and give rise to a thriving regulatory license raj.<br /><br /></p>
<p>Further questions that arise are: How is any
compliance agency going to make sure that all the underlying metadata is
made available in a timely manner? As companies respond to a dynamic
environment, their analysis and analytical tools change and so does the
metadata. This inherent aspect of businesses raises the question: At
what point in time should companies make their meta-data available? How
will the compliance be monitored?<br /><br /></p>
<p>Conclusion: The CoE needs to create an enabling and facilitating environment for data sharing. The incentives for different types of entities to participate and contribute must be recognized. Adequate provisions for the risks and liabilities arising out of data sharing need to be thought through. National initiatives on data sharing should not create an onerous reporting regime, as envisaged by the CoE, even if digital.<br /><br /></p>
<p class="article-disclaimer"><em>DISCLAIMER:
The views expressed are solely of the author and ETTelecom.com does not
necessarily subscribe to it. ETTelecom.com shall not be responsible for
any damage caused to any person/organisation directly or indirectly.</em></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data'>http://editors.cis-india.org/internet-governance/blog/the-wolf-in-sheeps-clothing-demanding-your-data</a>
</p>
No publisher · Rekha Jain · Internet Governance · Data Protection · Artificial Intelligence · 2020-11-10T17:44:13Z · Blog Entry
Comments on NITI AAYOG Working Document: Towards Responsible #AIforAll
http://editors.cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall
<b>The NITI Aayog Working Document on Responsible AI for All released on 21st July 2020 serves as a significant statement of intent from NITI Aayog, acknowledging the need to ensure that any conception of “Responsible AI” must fulfill constitutional responsibilities, incorporated through workable principles. However, as it is a draft document for discussion, it is important to highlight next steps for research and policy levers to build upon this report.</b>
<div> </div>
<div>Read our comments in their entirety <a href="http://editors.cis-india.org/internet-governance/comments-to-aiforall-pdf" class="internal-link" title="Comments to AIForAll pdf">here</a>.</div>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall'>http://editors.cis-india.org/internet-governance/blog/comments-on-niti-aayog-working-document-towards-responsible-aiforall</a>
</p>
No publisher · Shweta Mohandas, Arindrajit Basu and Ambika Tandon · Internet Governance · Artificial Intelligence · 2020-08-18T06:25:18Z · Blog Entry
Towards Algorithmic Transparency
http://editors.cis-india.org/internet-governance/blog/towards-algorithmic-transparency
<b>This policy brief examines the issue of transparency as a key ethical component in the development, deployment, and use of Artificial Intelligence.</b>
<p> </p>
<p>This brief proposes a framework that seeks to overcome the challenges in preserving transparency when dealing with machine learning algorithms, and suggests solutions such as the incorporation of audits, and ex ante approaches to building interpretable models right from the design stage. Read the full report <a href="http://editors.cis-india.org/internet-governance/algorithmic-transparency-pdf" class="internal-link" title="Algorithmic Transparency PDF">here</a>.</p>
<p>The Regulatory Practices Lab at CIS aims to produce regulatory policy suggestions focused on India but with global application, in an agile and targeted manner, and to promote transparency around practices affecting digital rights.<br />The Regulatory Practices Lab is supported by Google and Facebook.<br /><br /></p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/towards-algorithmic-transparency'>http://editors.cis-india.org/internet-governance/blog/towards-algorithmic-transparency</a>
</p>
No publisher · Radhika Radhakrishnan and Amber Singh · Regulatory Practices Lab · Internet Governance · Featured · Algorithms · Transparency · Artificial Intelligence · 2020-07-15T13:16:44Z · Blog Entry
Ethics and Human Rights Guidelines for Big Data for Development Research
http://editors.cis-india.org/raw/bd4d-ethics-human-rights-guidelines
<b>This is a four-part review of guideline documents for ethics and human rights in big data for development research. This research was produced as part of the Big Data for Development network supported by International Development Research Centre, Canada</b>
<p> </p>
<h4>Part #1 - Review of Principles of Ethics in Biomedical Science: <a href="http://editors.cis-india.org/raw/bd4d-guideline-documents/biomedicalscience" class="internal-link" title="CIS_BD4D_Guideline01_MS+AS_BiomedicalScience PDF">Download</a> (PDF)</h4>
<h4>Part #2 - Review of Principles of Ethics in Computer Science: <a href="http://editors.cis-india.org/raw/bd4d-guideline-documents/computerscience" class="internal-link" title="CIS_BD4D_Guideline02_RS+AS_ComputerScience PDF">Download</a> (PDF)</h4>
<h4>Part #3 - Summary of Review of Codes of Ethics for Big Data and AI: <a href="http://editors.cis-india.org/raw/bd4d-guideline-documents/AIEthicsReview" class="internal-link" title="CIS_BD4D_Guideline03_AS+PT_BigDataAIEthicsReview_SummaryNotes PDF">Download</a> (PDF)</h4>
<h4>Part #4 - Extended Review of Codes of Ethics for Big Data and AI: <a href="http://editors.cis-india.org/raw/bd4d-guideline-documents/ExtendedNotes" class="internal-link" title="CIS_BD4D_Guideline04_PT+PB_BigDataAIEthicsReview_ExtendedNotes PDF">Download</a> (PDF)</h4>
<hr />
<p>The rapid expansion in the volume, velocity, and variety of data available, together with the development of innovative forms of statistical analytics, is generally referred to as “big data”, though there is no single agreed-upon definition of the term. Big data promises to provide new insights and solutions across a wide range of sectors. Despite enormous optimism about the scope and variety of big data’s potential applications, many remain concerned about its widespread adoption, with some scholars suggesting it could generate as many harms as benefits. The predecessor disciplines of data science, such as computer science, applied mathematics, and statistics, have traditionally stayed outside the scope of ethical frameworks, based on the assumption that they do not involve humans as subjects of their research. While critical study of big data is still in its infancy, there is a growing belief that there are significant discontinuities between the rapid growth in big data and the ethical frameworks that exist to govern its use. In this set of documents, we look at these issues in detail.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/raw/bd4d-ethics-human-rights-guidelines'>http://editors.cis-india.org/raw/bd4d-ethics-human-rights-guidelines</a>
</p>
No publisher · Amber Sinha, Manjri Singh, Rajashri Seal, Pranav Bhaskar Tiwari, Pranav M Bidare · Researchers at Work · BD4D · RAW Research · Big Data for Development · Artificial Intelligence · 2020-05-20T07:56:48Z · Blog Entry
Panelist at launch of Google-UNESCAP AI Report
http://editors.cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report
<b>Arindrajit Basu was a speaker at the panel launching the Google-UNESCAP AI Report at the GovInsider Forum held at the United Nations Convention Centre in Bangkok on October 16, 2019. </b>
<p>Click to <a class="external-link" href="http://cis-india.org/internet-governance/files/launch-the-ai-report">view the agenda</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report'>http://editors.cis-india.org/internet-governance/news/panelist-at-launch-of-google-unescap-ai-report</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-11-02T06:48:25Z · News Item
Farming the Future: Deployment of Artificial Intelligence in the agricultural sector in India
http://editors.cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future
<b>This case study was published as a chapter in the joint UNESCAP-Google publication titled Artificial Intelligence in Public Service Delivery. The chapter in its final form would not have been possible without the efforts and very useful interventions by our colleagues at Digital Asia Hub, Google, and UNESCAP.</b>
<p><img src="http://editors.cis-india.org/home-images/Findings.jpg" alt="Findings" class="image-inline" title="Findings" /></p>
<hr />
<p> </p>
<p style="text-align: justify; ">Although agriculture is a critical sector for India’s economic development, it continues to face many challenges including a lack of <span>modernization of agricultural methods, fragmented landholdings, erratic rainfalls, overuse of groundwater and a lack of access to </span><span>information on weather, markets and pricing. As state governments create policies and frameworks to mitigate these challenges, the </span><span>role of technology has often come up as a potential driver of positive change.</span></p>
<p style="text-align: justify; "><span>Farmers in the southern Indian states of Karnataka and Andhra Pradesh are facing significant challenges. For hundreds of years,these farmers have relied on traditional agricultural methods to make sowing and harvesting decisions, but now volatile weather patterns and shifting monsoon seasons are making such ancient wisdom obsolete. Farmers are unable to predict weather patterns or crop yields accurately, making it difficult for them to make informed financial and operational decisions associated with planting and harvesting. Erratic weather patterns particularly affect those farmers who reside in remote areas, cut off from meaningful accessto infrastructure and information. In addition to a lack of vital weather information, farmers may lack information about market conditions and may then sell their crops to intermediaries at below-market prices.</span></p>
<p style="text-align: justify; "><span>Against this backdrop, the state governments and local partners in southern India teamed up with Microsoft to develop predictive AI services to help smallholder farmers to improve their crop yields and give them greater price control. Since 2016 three applications have been developed and applied for use in these communities, two of which are discussed in this case study: the AI-sowing app and the price forecasting model.</span></p>
<hr />
<p style="text-align: justify; "><a class="external-link" href="https://www.unescap.org/sites/default/files/publications/AI%20Report.pdf">Click to read</a> the report here.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future'>http://editors.cis-india.org/internet-governance/blog/artificial-intelligence-in-the-delivery-of-public-services-elonnai-hickok-pranav-bidare-arindrajit-basu-siddharth-october-16-2019-farming-the-future</a>
</p>
No publisher · Elonnai Hickok, Arindrajit Basu, Siddharth Sonkar and Pranav M B · Internet Governance · Artificial Intelligence · 2019-10-16T13:41:02Z · Blog Entry
AI Opera - AI as a total work of art
http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art
<b>On October 11, 2019, Shweta Mohandas and Mira were invited as panelists for the 'AI Opera- AI as a total work of art' event organized by Goethe as part of the India Week Hamburg 2019 held in Bangalore. CIS was an event partner. </b>
<p style="text-align: justify; ">The panel had to present different perspectives and possibilities of Artificial Intelligence (AI). The discussion was facilitated by German artist, performer and filmmaker Christoph Faulhaber. For more info, <a class="external-link" href="https://www.goethe.de/ins/in/en/sta/ban/ver.cfm?fuseaction=events.detail&event_id=21670394">click here</a>.</p>
<p>
For more details visit <a href='http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art'>http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art</a>
</p>
No publisher · Admin · Internet Governance · Artificial Intelligence · 2019-10-14T14:30:56Z · News Item