<?xml version="1.0" encoding="utf-8" ?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns="http://purl.org/rss/1.0/">
<channel rdf:about="http://editors.cis-india.org/search_rss">
  <title>Centre for Internet and Society</title>
  <link>http://editors.cis-india.org</link>
  
  <description>These are the search results for the query, showing results 61 to 71.</description>
  
  
  
  
  <image rdf:resource="http://editors.cis-india.org/logo.png"/>

  <items>
    <rdf:Seq>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/news/ai-in-healthcare"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/news/ai-for-good-workshop"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/raw/ai-hype-cycles-and-artistic-subversions"/>
        <rdf:li rdf:resource="http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific"/>
    </rdf:Seq>
  </items>

</channel>


    <item rdf:about="http://editors.cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space">
    <title>Amazon launches Machine Learning-based platform for healthcare space</title>
    <link>http://editors.cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space</link>
    <description>
        &lt;b&gt;Amazon’s Comprehend Medical platform uses a new HIPAA-eligible machine learning service to process unstructured medical text and information such as dosages, symptoms and signs, and patient diagnosis.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The article by Kul Bhushan was published in the &lt;a class="external-link" href="https://www.hindustantimes.com/tech/nov-28-amazon-launches-machine-learning-driven-platform-for-healthcare-space/story-3EuXjDiVO8NLBxjOMKkopO.html"&gt;Hindustan Times&lt;/a&gt; on November 28, 2018.&lt;/p&gt;
&lt;hr style="text-align: justify; " /&gt;
&lt;p style="text-align: justify; "&gt;Aiming to push deeper into the health space, Amazon has introduced new &lt;a href="https://www.hindustantimes.com/topic/machine-learning"&gt;Machine Learning&lt;/a&gt; (ML) software that analyses medical records to improve patient treatment and reduce overall expenditure.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Unveiled at the company’s re:Invent cloud conference in Las Vegas, Amazon’s Comprehend Medical platform uses a new “HIPAA-eligible machine learning service that allows developers to process unstructured medical text and identify information such as patient diagnosis, treatments, dosages, symptoms and signs, and more.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Comprehend Medical helps health care providers, insurers, researchers, and clinical trial investigators as well as health care IT, biotech, and pharmaceutical companies to improve clinical decision support, streamline revenue cycle and clinical trials management, and better address data privacy and protected health information (PHI) requirements,” explains the company on its &lt;a href="https://aws.amazon.com/blogs/machine-learning/introducing-medical-language-processing-with-amazon-comprehend-medical/" rel="nofollow"&gt;website&lt;/a&gt;.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Amazon aims to reduce the time spent manually analysing a patient’s medical data. The company hopes the software will ultimately empower users to make more informed decisions about their health, and even help with tasks such as scheduling care visits.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;“Unlocking this information from medical language makes a variety of common medical use cases easier and cost-effective, including: clinical decision support (e.g., getting a historical snapshot of a patient’s medical history), revenue cycle management (e.g., simplifying the time-intensive manual process of data entry), clinical trial management (e.g., by identifying and recruiting patients with certain attributes into clinical trials), building population health platforms, and helping address (PHI) requirements (e.g., for privacy and security assurance.),” the company added.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Amazon also pointed out that medical institutions such as Seattle’s Fred Hutchinson Cancer Research Center and Roche Diagnostics have already implemented the software.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Amazon’s expansion into the healthcare space comes after it acquired health-focused startup PillPack for $1 billion earlier this year. Apart from Amazon, other technology companies like Apple and Microsoft are investing in the healthcare space.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Apple is already offering the HealthKit and CareKit platforms for developing health-focused apps. Earlier this year, the company launched the &lt;a href="https://www.hindustantimes.com/tech/apple-watch-series-4-launched-with-ecg-compatibility-new-design/story-2LqdNq7YjAXGU3HEH5om8N.html"&gt;Apple Watch Series 4 with ECG support&lt;/a&gt;. Microsoft, however, has a deeper footprint in the health segment. The company is building a range of Artificial Intelligence-based tools for healthcare.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;For instance, Microsoft’s Project InnerEye uses machine learning technology to build tools for automatic, quantitative analysis of three-dimensional radiological images.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;According to various reports, Artificial Intelligence is set to make a big impact on the healthcare industry. A 2017 Accenture report &lt;a href="https://www.accenture.com/t20171215T032059Z__w__/us-en/_acnmedia/PDF-49/Accenture-Health-Artificial-Intelligence.pdf" rel="nofollow" target="_blank"&gt;predicted&lt;/a&gt; that AI applications could create $150 billion in annual savings for the United States alone.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Back in India, the adoption of AI in healthcare is growing. According to a report by the Centre for Internet and Society India, “the use of AI in healthcare in India is increasing with new startups and large ICT companies offering AI solutions for healthcare challenges in the country.”&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Bengaluru-based startup mfine has developed an AI-based healthcare platform that learns medical standards, protocols, and diagnosis and treatment methods to help doctors with the necessary data and analysis. The company raised $4.2 million in funding earlier this year.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space'&gt;http://editors.cis-india.org/internet-governance/news/hindustan-times-november-28-2018-kul-bhushan-amazon-launches-machine-learning-based-platform-for-healthcare-space&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2018-12-03T00:23:06Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art">
    <title>AI Opera: AI as a total work of art</title>
    <link>http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art</link>
    <description>
        &lt;b&gt;On October 11, 2019, Shweta Mohandas and Mira were invited as panelists for the 'AI Opera: AI as a total work of art' event organized by Goethe as part of the India Week Hamburg 2019 held in Bangalore. CIS was an event partner.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;The panel presented different perspectives and possibilities of Artificial Intelligence (AI). The discussion was facilitated by German artist, performer, and filmmaker Christoph Faulhaber. For more info, &lt;a class="external-link" href="https://www.goethe.de/ins/in/en/sta/ban/ver.cfm?fuseaction=events.detail&amp;amp;event_id=21670394"&gt;click here&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art'&gt;http://editors.cis-india.org/internet-governance/news/ai-opera-ai-as-a-total-work-of-art&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-14T14:30:56Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work">
    <title>AI in the Future of Work</title>
    <link>http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work</link>
    <description>
        &lt;b&gt;Artificial Intelligence and allied technologies form part of what is being called the fourth Industrial Revolution.&lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;Some analysts &lt;a href="https://workofthefuturecongress.mit.edu/wp-content/uploads/2019/06/w25682.pdf"&gt;project the loss of jobs&lt;/a&gt; as AI replaces humans, especially in job roles that consist of repetitive tasks that are easier to automate. Another prediction is that AI, as preceding technologies, will &lt;a href="https://www.ilo.org/wcmsp5/groups/public/---dgreports/---cabinet/documents/publication/wcms_647306.pdf"&gt;enhance and complement&lt;/a&gt; human capability, rather than replacing it at large scales. AI at the workplace includes a wide range of technologies, from &lt;a href="https://www.infosys.com/human-amplification/Documents/manufacturing-ai-perspective.pdf"&gt;machine-to-machine interactions on the factory floor&lt;/a&gt;, to automated decision-making systems.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Studying the Platform Economy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The platform economy, in particular, is dependent on AI in the design of aggregator platforms that form a two-way market between customers and workers. Platforms deploy AI at a number of different stages, from recruitment to assignment of tasks to workers. AI systems often reflect existing social biases, as they are built using biased datasets, and by non-diverse teams that are not attuned to such biases. This has been the case in the platform economy as well, where biased systems impact the ability of marginalised workers to access opportunities. To take an example, Amazon’s algorithm to filter workers’ resumes was &lt;a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G"&gt;biased against women&lt;/a&gt; because it was trained on 10 years of hiring data, and ended up reflecting the underrepresentation of women in the tech industry. That is not to say that algorithms introduce biases where they didn’t exist earlier, but that they take existing biases and hard code them into systems in a systematic and predictable manner.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Biases are made even more explicit in marketplace platforms, which allow employers to review workers’ profiles and skills for a fee. In a study of platforms offering home-based services in India, we found that marketplace platforms offer filtering mechanisms which allow employers to filter workers by demographic characteristics such as gender, age, religion, and in one case, caste (the research publication is forthcoming). The design of the platform itself, in this case, encourages and enables discrimination against workers. One of the leading platforms in India had ‘Hindu maid’ and ‘Hindu cook’ as its top search terms, reflecting the ways in which employers from the dominant religion are encouraged to discriminate against workers from minority religions in the Indian platform economy.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Another source of bias in the platform economy is rating and pricing systems, which can reduce the quality and quantum of work offered to marginalised workers. Rating systems exist across platform types - those that offer on-demand or location-based work, microwork platforms, and marketplace platforms. They allow customers and employers to rate workers on a scale, and are most often one-way feedback systems to review a worker’s performance (as our forthcoming research discusses, we found very few examples of feedback loops that also allow workers to rate employers). Rating systems &lt;a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf"&gt;have been found&lt;/a&gt; to be a source of anxiety for workers, as they can be rated poorly for unfair reasons, including their demographic characteristics. Most platforms penalise workers for poor ratings, and may even stop them from accessing any tasks at all if their ratings fall below a certain threshold. Without adequate grievance redressal mechanisms that allow workers to contest poor ratings, rating systems are prone to reflect customer biases while appearing neutral. It is difficult to assess the level of such bias without companies releasing data comparing ratings of workers by their demographic characteristics, but it &lt;a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf"&gt;has been argued&lt;/a&gt; that there is ample evidence to believe that demographic characteristics will inevitably impact workers’ ratings due to widespread biases.&lt;/p&gt;
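The mechanism described above can be made concrete with a small simulation (all numbers are hypothetical, chosen only for illustration): two groups of workers with identical underlying performance, one of which faces a modest customer-rating bias, end up with very different deactivation rates under a hard rating threshold.

```python
import random

random.seed(0)

def simulate_ratings(n_workers, bias, n_ratings=50, threshold=4.6):
    """Simulate workers of identical true quality whose customer ratings
    are shifted down by a group-level bias, then return the fraction whose
    average rating falls below a deactivation threshold."""
    deactivated = 0
    for _ in range(n_workers):
        # identical underlying performance for everyone; only the bias differs
        ratings = [min(5, max(1, random.gauss(4.8 - bias, 0.3)))
                   for _ in range(n_ratings)]
        if sum(ratings) / n_ratings < threshold:
            deactivated += 1
    return deactivated / n_workers

# same work quality, but one group's ratings are shifted by customer bias
print(simulate_ratings(1000, bias=0.0))   # group facing no rating bias
print(simulate_ratings(1000, bias=0.2))   # group facing a 0.2-star bias
```

Even a small, systematic shift in ratings translates into a large gap in deactivations, while the threshold rule itself appears neutral.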
&lt;h3&gt;Searching for a Solution&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;It is clear that platform companies need to be pushed to address biases and make their systems fairer and non-discriminatory. Some companies, such as Amazon in the example above, have responded by suspending algorithms that are proven to be biased. However, this is a temporary fix, as companies rarely seek to drop such projects indefinitely. In the platform economy, where algorithms are central to the business model of companies, complete suspension is near impossible. Amazon also tried another quick fix: it &lt;a href="https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G"&gt;altered the algorithm&lt;/a&gt; to respond neutrally to terms such as ‘woman’. This is a process known as debiasing the model, through which biased associations made by the algorithm (such as between the word ‘woman’ and downgrading) are explicitly removed. Another solution is diversifying or debiasing datasets. In this example, the algorithm could be fed a larger sample of resumes and decision-making logics from industries that have a higher representation of women.&lt;/p&gt;
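For embedding-based models, the "explicitly removed" step described above is often implemented by projecting out a learned bias direction. A minimal sketch of this hard-debiasing idea follows; the 3-d vectors and the crude two-word bias direction are purely illustrative assumptions, not a real model.

```python
import numpy as np

def debias(vector, bias_direction):
    """Remove the component of an embedding that lies along a learned
    bias direction (hard debiasing by orthogonal projection)."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, b) * b

# toy 3-d embeddings (hypothetical numbers, for illustration only)
woman = np.array([0.9, 0.2, -0.4])
man   = np.array([0.9, 0.2,  0.4])
bias_dir = woman - man                    # crude "gender direction"

engineer = np.array([0.5, 0.8, 0.3])      # leans toward one group
neutral = debias(engineer, bias_dir)

# after projection, "engineer" no longer correlates with the bias direction
print(np.dot(neutral, bias_dir / np.linalg.norm(bias_dir)))
```

After the projection, the debiased vector has zero component along the bias direction, so downstream similarity scores can no longer pick up that particular association; associations encoded along other directions are untouched, which is why debiasing is only a partial fix.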
&lt;p style="text-align: justify; "&gt;Another set of solutions could be drawn from anti-discrimination law, which prohibits discrimination at the workplace. In India, anti-discrimination laws protect against wage inequality, as well as discrimination at the stage of recruitment for protected groups such as transgender persons. While it can be argued that biased rating systems lead to wage inequality, there are several barriers to applying anti-discrimination law to workers in the platform economy. First, most jurisdictions, including India, protect only employees from discrimination, not self-employed contractors. Another challenge is the lack of data to prove that rating or recruitment algorithms are discriminatory, without which legal recourse is impossible. &lt;a href="https://datasociety.net/pubs/ia/Discriminating_Tastes_Customer_Ratings_as_Vehicles_for_Bias.pdf"&gt;Rosenblat et al.&lt;/a&gt; (2016) discuss these challenges in the context of the US, suggesting solutions such as addressing employment misclassification or modifying pleading requirements to bring platform workers under the protection of the law.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Feminist principles point to structural shifts that are required to ensure robust protections for workers. Analysing algorithmic systems from a feminist lens indicates several points in the design at which interventions must be focused to ensure impact. The teams designing algorithms need to be made more diverse, along with integrating an explicit focus on assessing the impact of systems at the stage of design. Companies need to be more transparent with their data, and encourage independent audits of their systems. Corporate and government actors must be held to account to fix broken AI systems.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Ambika Tandon is a Senior Researcher at the &lt;a href="https://cis-india.org/"&gt;Centre for Internet &amp;amp; Society (CIS)&lt;/a&gt; in India, where she studies the intersections of gender and technology. She focuses on women’s work in the digital economy, and the impact of emerging technologies on social inequality. She is also interested in developing feminist methods for technology research. Ambika tweets at &lt;a href="https://twitter.com/AmbikaTandon"&gt;@AmbikaTandon&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The blog was originally &lt;a class="external-link" href="https://ethicalsource.dev/blog/ai-in-the-future-of-work/"&gt;published by the Organization for Ethical Source&lt;/a&gt;.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work'&gt;http://editors.cis-india.org/raw/oes-ambika-tandon-ai-in-the-future-of-work&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>ambika</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>CISRAW</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    
    
        <dc:subject>Future of Work</dc:subject>
    

   <dc:date>2021-12-07T01:51:42Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/news/ai-in-healthcare">
    <title>AI in Healthcare</title>
    <link>http://editors.cis-india.org/internet-governance/news/ai-in-healthcare</link>
    <description>
        &lt;b&gt;The Center for Information Technology and Public Policy (CITAPP) and the International Institute of Information Technology Bangalore (IIITB) invited Radhika Radhakrishnan for a talk at IIIT-Bangalore on September 13, 2019. &lt;/b&gt;
        &lt;p style="text-align: justify; "&gt;In her talk, she critically questioned, from a feminist standpoint, the dominant narrative of “AI for social good” that has been widely adopted by various stakeholders in India (including the private sector, non-profits, and the Indian State). Specific to healthcare in India, such a narrative has been employed towards solving development challenges (such as a shortage of medical practitioners in remote regions of the country) through the introduction of AI applications targeted at the sick-poor. Through her research and fieldwork, she analysed the layers of expropriation and experimentation that come into play when AI technologies become a method of using 'diverse' bodies and medical records of the sick-poor as ‘data’ to train proprietary AI algorithms at a low cost, in the absence of effective State regulatory mechanisms. She argued that structural challenges (such as the lack of incentives for medical practitioners to join public healthcare) get reframed into opportunities to substitute labour (people) with capital (technology) through the innovation of “spectacular technologies” such as AI. Throughout the talk, she also highlighted the methodologies she used to conduct this research.&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/news/ai-in-healthcare'&gt;http://editors.cis-india.org/internet-governance/news/ai-in-healthcare&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Industry 4.0</dc:subject>
    
    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-09-19T16:15:24Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit">
    <title>AI for Social Good Summit</title>
    <link>http://editors.cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit</link>
    <description>
        &lt;b&gt;Arindrajit Basu was a speaker at the event co-organized by Google AI and United Nations ESCAP on December 13, 2018 in Bangkok, Thailand.&lt;/b&gt;
        &lt;p class="moz-quote-pre" style="text-align: justify; "&gt;Arindrajit spoke on the panel "How can governments use AI in Public Service Delivery" along with Malavika Jayaram, Jake Lucci, Punit Shukla, Simon Schmooly, and Gal Oren. He presented CIS research on AI in agriculture in Karnataka, which will soon be published as part of a compendium documenting case studies worldwide.&lt;/p&gt;
&lt;p class="moz-quote-pre" style="text-align: justify; "&gt;&lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-for-social-good-summit"&gt;Click to read more&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit'&gt;http://editors.cis-india.org/internet-governance/news/unescap-and-google-ai-december-13-bangkok-ai-for-social-good-summit&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2018-12-25T01:02:01Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india">
    <title>AI for Healthcare: Understanding Data Supply Chain and Auditability in India</title>
    <link>http://editors.cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india</link>
    <description>
        &lt;b&gt;This report aims to understand the prevalence and use of AI auditing practices in the healthcare sector. By mapping the data supply chain underlying AI technologies, the study aims to unpack i) how AI systems are developed and deployed to achieve healthcare outcomes and, ii) how AI audits are perceived and implemented by key stakeholders in the healthcare ecosystem. &lt;/b&gt;
        
&lt;p dir="ltr"&gt;Read our full report &lt;a href="http://editors.cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india-pdf" class="internal-link" title="AI for Healthcare: Understanding Data Supply Chain and Auditability in India PDF"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p dir="ltr"&gt;The use of artificial intelligence (AI) technologies constitutes a significant development in the Indian healthcare sector, with industry and government actors showing keen interest in designing and deploying these technologies. Even as key stakeholders explore ways to incorporate AI systems into their products and workflows, a growing debate on the accessibility, success, and potential harms of these technologies continues, along with several concerns over their large-scale adoption. A recurring question in India and the world over is whether these technologies serve a wider interest in public health. For example, the discourse on ethical and responsible AI in the context of emerging technologies and their impact on marginalised populations, climate change, and labour practices has been especially contentious.&lt;/p&gt;
&lt;p dir="ltr"&gt;For the purposes of this study, we define AI in healthcare as the use of artificial intelligence and related technologies to support healthcare research and delivery. The use cases include assisted imaging and diagnosis, disease prediction, robotic surgery, automated patient monitoring, medical chatbots, hospital management, drug discovery, and epidemiology. The emergence of AI auditing mechanisms is an essential development in this context, with several stakeholders ranging from big-tech to smaller startups adopting various checks and balances while developing and deploying their products. While auditing as a practice is neither uniform nor widespread within healthcare or other sectors in India, it is one of the few available mechanisms that can act as guardrails in using AI systems.&lt;/p&gt;
&lt;p id="docs-internal-guid-874e64d9-7fff-d16c-ed57-d245c7214bec" dir="ltr"&gt;Our primary research questions are as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What is the current data supply chain infrastructure for organisations operating in the healthcare ecosystem in India?&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What auditing practices, if any, are being followed by technology companies and healthcare institutions?&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;What best practices can organisations based in India adopt to improve AI auditability?&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p id="docs-internal-guid-28d92dc2-7fff-c54b-addb-63beee845252" dir="ltr"&gt;This was a mixed methods study, comprising a review of available literature in the field, followed by quantitative and qualitative data collection through surveys and in-depth interviews. The findings from the study offer essential insights into the current use of AI in the healthcare sector, the operationalisation of the data supply chain, and policies and practices related to health data sourcing, collection, management, and use. It also discusses ethical and practical challenges related to privacy, data protection and informed consent, and the emerging role of auditing and other related practices in the field. Some of the key learnings related to the data supply chain and auditing include:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Technology companies, medical institutions, and medical practitioners rely on an equal mix of proprietary and open sources of health data, and there is significant reliance on datasets from the Global North.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Data quality checks exist but are seen as an additional burden, with the removal of personally identifiable information being a priority during processing.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Collaboration between medical practitioners and AI developers remains limited, as does feedback between the users and developers of these technologies.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;There is a heavy reliance on external vendors to develop AI models, with many models replicated from existing systems in the Global North.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Healthcare professionals are hesitant to integrate AI systems into their workflows, with a significant gap stemming from a lack of training and infrastructure to integrate these systems successfully.&lt;/p&gt;
&lt;/li&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;The understanding and application of audits are not uniform across the sector, with many stakeholders prioritising more mainstream and intersectional concepts such as data privacy and security in their scope.&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
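One of the findings above notes that removing personally identifiable information is treated as a priority during data processing. A minimal rule-based de-identification sketch is shown below; the patterns and placeholder labels are illustrative assumptions, not a compliance tool, and real de-identification requires far broader coverage (names, addresses, record numbers, and so on).

```python
import re

# illustrative patterns only; real PII removal needs much broader coverage
PATTERNS = {
    "PHONE": re.compile(r"\b\d{10}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE":  re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def deidentify(text):
    """Replace matches of each PII pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Patient seen on 04/11/2021, contact 9876543210, mail a.b@example.com"
print(deidentify(record))
# → Patient seen on [DATE], contact [PHONE], mail [EMAIL]
```

Typed placeholders (rather than outright deletion) preserve the structure of the record, which keeps the text usable for downstream model training while hiding the identifying values.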
&lt;p dir="ltr"&gt;Based on these findings, this report offers a set of recommendations addressed to different stakeholders such as healthcare professionals and institutions, AI developers, technology companies, startups, academia, and civil society groups working in health and social welfare. These include:&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Improve data management across the AI data supply chain&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Adopt standardised data-sharing policies&lt;/em&gt;. This would entail building a standardised policy that adopts an intersectional approach to include all stakeholders and areas where data is collected to ensure their participation in the process. This would also require robust feedback loops and better collaboration between the users, developers, and implementers of the policy (medical professionals and institutions), and technologists working in AI and healthcare.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Emphasise not just data quantity but also data quality&lt;/em&gt;. Given that the limited quantity and quality of Indian healthcare datasets present significant challenges, institutions engaged in data collection must consider their interoperability to make them available to diverse stakeholders and ensure their security. This would include recruiting additional support staff for digitisation to ensure accuracy and safety and maintain data quality.&lt;span class="Apple-tab-span"&gt; &lt;/span&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Streamline AI auditing as a form of governance&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Standardise the practice of AI auditing&lt;/em&gt;. A certain level of standardisation in AI auditing would contribute to the growth and contextualisation of these practices in the Indian healthcare sector. Similarly, it would also aid in decision-making among implementing institutions.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Build organisational knowledge and inter-stakeholder collaboration&lt;/em&gt;. It is imperative to build knowledge and capacity among technical experts, healthcare professionals, and auditors on the technical details of the underlying architecture and socioeconomic realities of public health. Hence, collaboration and feedback are essential to enhance model development and AI auditing.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Prioritise transparency and public accountability in auditing standards&lt;/em&gt;. Given that most healthcare institutions procure externally developed AI systems, some form of internal or external AI audit would contribute to better public accountability and transparency of these technologies.&lt;/p&gt;
&lt;ul&gt;
&lt;li style="list-style-type: disc;" dir="ltr"&gt;
&lt;p dir="ltr"&gt;Centre public good in India’s AI industrial policy&lt;/p&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Adopt focused and transparent approaches to investing in and financing AI projects&lt;/em&gt;. An equitable distribution of AI spending and associated benefits is essential to guarantee that these investments and their applications extend beyond private healthcare, and that implementation approaches prioritise the public good. This would involve investing in entire AI life cycles instead of merely focusing on development and promoting transparent public–private partnerships.&lt;/p&gt;
&lt;p dir="ltr"&gt;&lt;em&gt;Strengthen regulatory checks and balances for AI governance.&lt;/em&gt;&lt;br /&gt;While an overarching law to regulate AI technologies may still be under debate, existing regulations may be amended to bring AI within their ambit. Furthermore, all regulations must be informed by stakeholder consultations to guarantee that the process is transparent, addresses the rights and concerns of all the parties involved, and prioritises the public good.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india'&gt;http://editors.cis-india.org/internet-governance/blog/ai-for-healthcare-understanding-data-supply-chain-and-auditability-in-india&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Amrita Sengupta (PI), Shweta Mohandas (Co-PI), (In alphabetical order) Abhineet Nayyar, Chetna VM, Puthiya Purayil Sneha, Yatharth</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Health Tech</dc:subject>
    
    
        <dc:subject>RAW Publications</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Featured</dc:subject>
    
    
        <dc:subject>Healthcare</dc:subject>
    
    
        <dc:subject>Homepage</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2024-11-30T08:17:48Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/news/ai-for-good-workshop">
    <title>AI for Good Workshop</title>
    <link>http://editors.cis-india.org/internet-governance/news/ai-for-good-workshop</link>
    <description>
&lt;b&gt;Pranav Manjesh Bidare attended a workshop on AI for Good, organised by Swissnex India and Wadhwani AI, in Bangalore on May 22, 2019.&lt;/b&gt;
        &lt;p&gt;The workshop was a forerunner to the &lt;a class="external-link" href="https://aiforgood.itu.int/"&gt;AI for Good Global Summit&lt;/a&gt;. More recommendations can be made at  &lt;a class="moz-txt-link-freetext" href="https://www.policykitchen.com/group/19/stream"&gt;https://www.policykitchen.com/group/19/stream&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/news/ai-for-good-workshop'&gt;http://editors.cis-india.org/internet-governance/news/ai-for-good-workshop&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Admin</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-06-05T14:47:27Z</dc:date>
   <dc:type>News Item</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival">
    <title>AI for Good</title>
    <link>http://editors.cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival</link>
    <description>
&lt;b&gt;CIS organised a workshop titled ‘AI for Good’ at the Unbox Festival in Bangalore from 15th to 17th February, 2019. The workshop was led by Shweta Mohandas and Saumyaa Naidu. In the hour-long workshop, the participants were asked to imagine an AI-based product to bring forward the idea of ‘AI for social good’.&lt;/b&gt;
        &lt;p&gt;The report was edited by Elonnai Hickok.&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;The workshop was aimed at examining the current narratives around AI and imagining how these may transform with time. It raised questions about how we can build an AI for the future, and traced the implications relating to social impact, policy, gender, design, and privacy.&lt;/p&gt;
&lt;h3&gt;Methodology&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The rationale for conducting this workshop in a design festival was to ensure a diverse mix of participants. The participants in the workshop came from varied educational and professional backgrounds who had different levels of understanding of technology. The workshop began with a discussion on the existing applications of artificial intelligence, and how people interact and engage with it on a daily basis. This was followed by an activity where the participants were provided with a form and were asked to conceptualise their own AI application which could be used for social good. The participants were asked to think about a problem that they wanted the AI application to address and think of ways in which it would solve the problem. They were also asked to mention who will use the application. It prompted participants to provide details of the AI application in terms of the form, colour, gender, visual design, and medium of interaction (voice/ text). This was intended to nudge the participants into thinking about the characteristics of the application, and how it will lend to the overall purpose. The form was structured and designed to enable participants to both describe and draw their ideas. The next section of the form gave them multiple pairs of principles. They were asked to choose one principle from each pair. These were conflicting options such as ‘Openness’ or ‘Proprietary’, and ‘Free Speech’ or ‘Moderated Speech’. The objective of this section was to illustrate how a perceived ideal AI that satisfies all stakeholders can be difficult to achieve, and that the AI developers at times may be faced with a decision between profitability and user rights.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;Participants were asked to keep their responses anonymous. These responses were then collected and discussed with the group. The activity led to the participants engaging in a discussion on the principles mentioned in the form. Questions around where the input data to train the AI would come from, or what type of data the application will collect were discussed. The responses were used to derive implications on gender, privacy, design, and accessibility.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="http://editors.cis-india.org/home-images/ConceptualiseAI.jpg" alt="Conceptualise AI" class="image-inline" title="Conceptualise AI" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Responses&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;img src="http://editors.cis-india.org/home-images/Responses.jpg" alt="" class="image-inline" title="" /&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Analysis&lt;/h3&gt;
&lt;p&gt;While the responses were varied, they shared a few key similarities, discussed in the observations below.&lt;/p&gt;
&lt;h3&gt;Participants’ Familiarity with AI&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The participants’ understanding of AI was based on what they read and heard from various sources. While discussing the examples of AI, the participants were familiar with not just the physical manifestation of AI such as robots, but also AI software. However when asked to define an AI the most common explanations were, bots, software, and the use of algorithms to make decisions using large amounts of data. The participants were optimistic of the way AI could be used for social good. However, some of them showed concern about the implications on privacy.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Perception of AI Among Participants&lt;/h3&gt;
&lt;p class="Normal1"&gt;With the workshop, our aim was to have the participants reflect on their perception of AI based on their exposure to the narratives around AI by companies and the government.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were given the brief to imagine an AI that could solve a problem or be used for social good. Most participants considered AI to be a positive tool for social impact. It was seen as a problem solver. The ideas conceptualised by the participants varied from countering fake news, wildlife conservation, resource distribution, and mental health. This brought to focus the range of areas that were seen as pertinent for an AI intervention. Most of the responses dealt with concerns that affect humans directly, the one aimed at wildlife conservation being the only exception.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;On being asked, who will use the AI application, it was interesting to note that all the responses considered different stakeholders such as individuals, non profits, governments and private companies to be the end user. However, it was interesting that through the discussion the harms that might be caused by the use of AI by these stakeholders were not brought up. For example, the use of AI for resource distribution did not take into consideration the fact that the government could provide unequal distribution based on the existing biased datasets.&lt;/span&gt; &lt;a name="fr1"&gt;&lt;/a&gt; &lt;span&gt;Several of the AI applications were conceptualised to work without any human intervention. For example, one of the ideas proposed was to use AI as a mental health counsellor which was conceptualised as a chatbot that would learn more about human psychology with each interaction. It was assumed that such a service would be better than a human psychologist who can be emotionally biased. Similarly, while discussing the idea behind the use of AI for preventing the spread of fake news, the participant believed that the indication coming from an AI would have greater impact than one coming from a human. They believed that the AI could provide the correct information and prevent the spread of fake news. &lt;/span&gt;&lt;span&gt;By discussing these cases we were able to highlight that the complete reliance on technology could have severe consequences.&lt;/span&gt;&lt;a name="fr2"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Form and Visual Design of the AI Concepts&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;In most cases, the participants decided the form and visual design of their AI concepts keeping in mind its purpose. For instance, the therapy providing AI mentioned earlier, was envisioned as a textual platform, while a ‘clippy type’ add on AI tool was thought of for detecting fake news. Most participants imagined the AI application to have a software form, while the legal aid AI application was conceptualised to have a human form. This revealed that the participants perceived AI to be both a software and a physical device such as a robot.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Accessibility of the Interfaces&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;The purpose of including the type of interface (voice or text) while conceptualising the AI application was to push the participants towards thinking about accessibility features. We aimed to have the participants think about the default use of the interface, both in terms of language and accessibility. The participants though cognizant of the need to have a large number of users, preferred to have only textual input into the interface, not anticipating the accessibility concerns.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;The choices between access vs cost, and accessibility vs scalability were also questioned by the participants during the workshop. They enquired about the meaning of the terms as well as discussed the difficulty in having an all inclusive interface. Some of the responses consisted only of text inputs, especially for sensitive issues involving interactions, such as for therapy or helplines. This exercise made the participants think about the end user as well as the ‘AI for all’ narrative. We decided to add these questions that made the participants think about how the default ability, language, and technological capability of the user is taken for granted, and how simple features could help more people interact with the application. This discussion led to the inference that there is a need to think about accessibility by design during the creation of the application and not as an afterthought.&lt;a name="fr3"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Biases Based on Gender&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;We intended for the participants to think about the inherent biases that creep into creating an AI concept. These biases were evident from deciding identifiably male names, to deciding a male voice when the application needed to be assertive, or a female voice and name for when it was dealing with school children. Most of the other participants either did not mention the gender or they said that the AI could be gender neutral or changeable.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;These observations are also revealing of the existing narrative around AI. The popular AI interfaces have been noted to exemplify existing gender stereotypes. For example, the virtual assistants were given female identifiable names and default female voices such as Siri, Alexa, and Cortana. The more advanced AI were given male identifiable names and default male voices such as Watson, Holmes etc.&lt;a name="fr4"&gt;&lt;/a&gt; &lt;span&gt;Although these concerns have been pointed out by several researchers, there needs to be a visible shift towards moving away from existing gender biases.&lt;/span&gt;&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Concerns around Privacy&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Though the participants were aware of the privacy implications of data driven technologies, they were unsure of how their own AI concept could deal with questions of privacy. The participants voiced concerns about how they would procure the data to train the AI but were uncertain about their data processing practices. This included how they would store the data, anonymise the data, or prevent third parties from accessing it. For example, during the activity, it was pointed out to the participants that there would be sensitive data collected in applications such as therapy provision, legal aid for victims of abuse, and assistance for people with social anxiety. In these cases, the participants stated that they would ensure that the data was shared responsibly, but did not consider the potential uses or misuses of this shared data.&lt;/p&gt;
&lt;h3 style="text-align: justify; "&gt;Choices between Principles&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;This part of the exercise was intended to familiarise the participants with certain ethical and policy questions about AI, as well as to look at the possible choices that AI developers have to make. Along with discussing the broader questions around the form and interface of AI, we wanted the participants to also look at making decisions about the way the AI would function. The intent behind this component of the exercise was to encourage the participants to question the practices of AI companies, as well as understand the implications of choices while creating an AI. As the language in this section was based on law and policy, we spent some time describing the terms to the participants. Even as some of the options presented by us were not exhaustive or absolute extremes, we placed this section to demonstrate the complexity in creating an AI that is beneficial for all. We intended for the participants to understand that an AI that is profitable to the company, free for people, accessible, privacy respecting, and open source, though desirable may be in competition with other interests such as profitability and scalability.&lt;/p&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The participants were urged to think about how decisions regarding who can use the service, how much transparency and privacy the company will provide, are also part of building an AI. Taking an example from the responses, we talked about how having a closed proprietary software in case of AI applications such as providing legal aid to victims of abuse would deter the creation of similar applications. However, after the terms were explained, the participants mostly chose openness over proprietary software, and access over paid services.&lt;/p&gt;
&lt;h3 class="Normal1" style="text-align: justify; "&gt;Conclusion&lt;/h3&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;The aim of this exercise was to understand the popular perception of AI. The participants had varied understanding of AI, but were familiar with the term. They also knew of the popular products that claim to use AI. Since the exercise was designed for people as an introduction to AI policy, we intended to keep questions around data practices out of the concept form. Eventually, with this exercise, we, along with the participants, were able to look at how popular media sells AI as an effective and cheaper solution to social issues. The exercise also allowed the participants to understand certain biases with gender, language, and ability. It also shed light on how questions of access and user rights should be placed before the creation of a technological solution. New technologies such as AI are being featured as problem solvers by companies, the media and governments. However, there is a need to also think about how these technologies can be exclusionary, misused, or how they amplify existing socio economic inequities.&lt;/p&gt;
&lt;hr /&gt;
&lt;p class="Normal1" style="text-align: justify; "&gt;&lt;span&gt;[1]. &lt;/span&gt;&lt;a class="external-link" href="https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html"&gt;https://www.bizjournals.com/sanfrancisco/news/2019/08/26/maximizing-the-potential-of-ai-starts-with-trust.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[2]. &lt;a class="external-link" href="https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/"&gt;https://qz.com/1023448/if-youre-not-a-white-male-artificial-intelligences-use-in-healthcare-could-be-dangerous/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[3]. &lt;a class="external-link" href="https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition"&gt;https://www.vox.com/the-goods/2018/11/29/18118469/instagram-accessibility-automatic-alt-text-object-recognition&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;[4]. &lt;a class="external-link" href="https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied"&gt;https://www.theguardian.com/pwc-partner-zone/2019/mar/26/why-are-virtual-assistants-always-female-gender-bias-in-ai-must-be-remedied&lt;/a&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival'&gt;http://editors.cis-india.org/internet-governance/blog/ai-for-good-event-report-on-workshop-conducted-at-unbox-festival&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Saumyaa Naidu</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2019-10-13T05:32:28Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward">
    <title>AI and Manufacturing and Services in India: Looking Forward</title>
    <link>http://editors.cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward</link>
    <description>
&lt;b&gt;This Report provides an overview of the proceedings of the Roundtable on Artificial Intelligence (AI) in Manufacturing and Services: Looking Forward (hereinafter referred to as ‘the Roundtable’), conducted at The Energy and Resources Institute (TERI) in Bangalore on January 19, 2018.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Event Report: &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-and-manufacturing-services"&gt;Download&lt;/a&gt; (PDF)&lt;/h4&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify;"&gt;The Roundtable comprised of participants from different sides of the AI and manufacturing and services spectrum including practitioners, representatives from multinational companies, think tanks, academicians, and researchers. The Roundtable discussed various questions regarding AI in the manufacturing and services industry in India.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;The round of discussions began with initial observations from the in progress research that the Centre for Internet and Society (CIS) is undertaking, on the use of AI in manufacturing and services. Some of the uses of AI that the research had thus far identified across various sectors included AI platforms in IT services for accurate forecasting for businesses, AI driven automation of routine tasks in manufacturing and production, and AI driven analytics for forecasting in the agriculture sector. The discussion then proceeded to the benefits of using AI - including efficient and effective results, precision, and automation of repetitive maintenance tasks. The draft research also acknowledges that although the use of AI is beneficial in many ways, there are also some key concerns around job displacement, privacy, lack of awareness, and a needed capacity to fully understand and use new AI technologies. The draft research also identified a few key AI initiatives in India, such as Wipro Holmes, TCS Ignio, and G.E, that were providing solutions to help automating software maintenance tasks and helping in the smooth working of SAP (Systems, Applications &amp;amp; Products) operations. Innovative uses of AI in areas such as crop production (M.I.T.R.A.) and dairy optimization (StellApps) were also identified.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;To understand the present state of AI and impact of the same, the session was opened to discussion on the following questions: See the &lt;a class="external-link" href="http://cis-india.org/internet-governance/files/ai-and-manufacturing-services"&gt;&lt;strong&gt;full report here.&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward'&gt;http://editors.cis-india.org/internet-governance/blog/ai-and-manufacturing-and-services-in-india-looking-forward&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>Shweta Mohandas and Pranav M. Bidare</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2018-02-14T11:13:56Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/raw/ai-hype-cycles-and-artistic-subversions">
    <title>A.I. Hype Cycles and Artistic Subversions</title>
    <link>http://editors.cis-india.org/raw/ai-hype-cycles-and-artistic-subversions</link>
    <description>
        &lt;b&gt;Gene Kogan will give a talk on "A.I. hype cycles and artistic subversions" on Friday, January 22, 2016 at the Centre for Internet and Society office, 6 pm - 8 pm.&lt;/b&gt;
        
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;img src="http://www.genekogan.com/images/style-transfer/ml_egypt_crab_maps.jpg" alt="Gene Kogan - Style Transfer - Mona Lisa" width="800" /&gt;&lt;/p&gt;
&lt;h6&gt;Mona Lisa restyled by Egyptian hieroglyphs, the Crab Nebula, and Google Maps. &lt;a href="http://www.genekogan.com/works/style-transfer.html"&gt;Style Transfer&lt;/a&gt;. Gene Kogan.&lt;/h6&gt;
&lt;p&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;Recent years have seen a resurgence of popular interest in machine learning and artificial intelligence, as emerging methods have set new scientific benchmarks and introduced classes of neural networks capable of imitating human behavior, among other impressive feats. More importantly, the study of these algorithms is rapidly crossing over into mainstream culture and industry as AI applications begin to inhabit more of our daily lives. Numerous initiatives have appeared, attempting to demystify and make these previously obscure research tracks more accessible to the public. Open source software like Torch, Theano, and TensorFlow have equipped amateurs with the same software which is achieving state-of-the-art results in industry and academia.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;This talk will examine the most recent wave of artistic projects applying these methods in various cultural contexts, producing troves of machine-hallucinated text, images, sounds, and videos, demonstrating a previously unseen capacity for imitating human style and sensibility. These experimental works attempt to show the capacity of these machines for producing aesthetically meaningful media, yet challenging and subverting them to illuminate their most obscure and counterintuitive properties.&lt;/p&gt;
&lt;p&gt;A recent article by the speaker about this: &lt;a href="http://bit.ly/1OhFcQr"&gt;From Pixels to Paragraphs: How artistic experiments with deep learning guard us from hype&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Relevant projects by the speaker that will be presented include: &lt;a href="http://bit.ly/1RyUH76"&gt;Style Transfer&lt;/a&gt;, &lt;a href="http://bit.ly/1QDNxOI"&gt;A Book from the Sky 天书&lt;/a&gt;, &lt;a href="http://bit.ly/1QDNClo"&gt;Learning to Generate Text and Audio&lt;/a&gt;, and &lt;a href="http://bit.ly/1QDNG4D"&gt;Deepdream Prototypes&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;Gene Kogan&lt;/h2&gt;
&lt;p style="text-align: justify;"&gt;Gene Kogan is an artist and programmer who is interested in generative systems and applications of emerging technology in artistic and expressive contexts. He writes code for live music, performance, and visual art. He contributes to numerous open-source software projects and frequently gives workshops and demonstrations on topics related to code and art.&lt;/p&gt;
&lt;p style="text-align: justify;"&gt;He is a contributor to openFrameworks, Processing, and p5.js, an adjunct professor at Bennington College and NYU, a former resident at Eyebeam Art &amp;amp; Technology Center, and a former Fulbright scholar in Bangalore, India, 2012-2013.&lt;/p&gt;

        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/raw/ai-hype-cycles-and-artistic-subversions'&gt;http://editors.cis-india.org/raw/ai-hype-cycles-and-artistic-subversions&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>sharath</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Generative Art</dc:subject>
    
    
        <dc:subject>Art</dc:subject>
    
    
        <dc:subject>Practice</dc:subject>
    
    
        <dc:subject>Machine Learning</dc:subject>
    
    
        <dc:subject>Researchers at Work</dc:subject>
    
    
        <dc:subject>Event</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2016-01-01T07:52:20Z</dc:date>
   <dc:type>Event</dc:type>
   </item>


    <item rdf:about="http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific">
    <title>‘Techplomacy’ and the negotiation of AI standards for the Indo-Pacific</title>
    <link>http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific</link>
    <description>
        &lt;b&gt;Researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific.&lt;/b&gt;
&lt;p&gt;This is a modified version of the post that appeared in &lt;a href="https://www.aspistrategist.org.au/high-time-for-australia-and-india-to-step-up-their-tech-diplomacy/"&gt;&lt;strong&gt;The Strategist&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;By Arindrajit Basu, with inputs from and review by Amrita Sengupta and Isha Suri&lt;/strong&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Later this month, UN member states elected  American candidate Doreen Bogdan-Martin "&lt;/span&gt;&lt;a href="https://www.brookings.edu/blog/techtank/2022/08/12/the-most-important-election-you-never-heard-of/"&gt;the most important election you have never heard off&lt;/a&gt;&lt;span&gt;" to elect the next secretary-general of the International Telecommunications Union (ITU). While this technical body's work may be esoteric, the election was  fiercely contested with  Russian candidate (and former Huawei executive; aptly reflecting the geopolitical competition that is underway in determining the “&lt;/span&gt;&lt;a href="https://www.lowyinstitute.org/the-interpreter/election-future-internet"&gt;future of the internet”&lt;/a&gt;&lt;span&gt; through the technical standards that underpin it. The  “Internet Protocol” (IP) that is the set of rules governing the communication and exchange of data over the internet itself is being subjected to political contestation between a Sino-Russian vision that would see the standard give way to greater government control and a US vision ostensibly rooted in more inclusive multi-stakeholder participation.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;As critical and emerging technologies take the geopolitical centre-stage, the global tug of war over the development, utilisation, and deployment  is playing out most ferociously at standard-setting organisations, an arms’ length away from the media limelight. Powerful state and non-state actors alike are already seeking to shape standards in ways that suit their economic, political, and normative priorities. It is time for emerging economies, middle powers and a wider array of private actors and members from the civil society to play a more meaningful and tangible role in the process.&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;What are standards and why do they matter?&lt;/strong&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;Simply put, standards are blueprints or protocols with requirements which ‘standardise’ products and related processes around the world, thus ensuring that they are interoperable, safe and sustainable. For example, USB, WiFi or a QWERTY keyboard can be used around the world because they are built on technical standards that enable equipment produced adopting these standards to be used around the world.Standards are negotiated both domestically-at domestic standard-setting bodies such as the Bureau of Indian Standards (BIS) or Standards Australia (SA) or global standard-development organisations such as the International Telecommunications Union (ITU) or the International Standardisation Organisation (ISO). While standards are not legally binding  unless they are explicitly imposed as requirements in a legislation, they have immense coercive value. Not adhering to recognised standards means that certain products may not reach markets as they are not compatible with consumer requirements or cannot claim to meet health or safety expectations. The harmonisation of internationally recognised standards serves as  the bedrock for global trade and commerce. Complying with a global standard is particularly critical because of its applicability across several markets. Further, international trade law proclaims that World Trade Organisation (WTO) members can impose trade restrictive domestic measures only on the basis of published or soon to be published international standards.(Article 2.4 of the &lt;a href="https://www.wto.org/english/tratop_e/tbt_e/tbt_e.htm"&gt;Technical Barriers to Trade&lt;/a&gt; Agreement)&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;Shaping global standards is of immense geopolitical and economic value to states and the private sector alike. States that are able to ‘export’ their domestic technological standards internationally enable their companies to reap a significant economic advantage because it is cheaper for them to adopt global standards. Further, companies draw huge revenue by holding patents to technologies that are essential to comply with a certain standard popularly known as Standard Essential Patents or SEPs and licensing them to other players who want to enter the market. For context, IPlytics &lt;a href="https://www.lightreading.com/5g/nokia-boasts-of-essential-5g-patents-milestone/d/d-id/773445"&gt;estimated&lt;/a&gt; that cumulative global royalty income from licensing SEPs was USD 20 billion in 2020, anticipated to increase significantly in the coming years due to massive technological upgradation currently underway.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;China’s push for dominance to influence the 5G standard at the Third Generation Partnership Project (3GPP) illustrates how prioritising standards-setting both through domestic industrial policy and foreign policy could provide rich economic and geopolitical dividends. After failing to meaningfully influence the setting of the 3G and 4G standards,the Chinese government commenced a national effort that sought to harmonise domestic standards, improve government coordination of standard-setting efforts, and obtain a first movers advantage over other nations developing their own domestic 5G standards. This was combined with a diplomatic push that saw vigorous private sector &lt;a href="https://asia.nikkei.com/Politics/International-relations/China-leads-the-way-on-global-standards-for-5G-and-beyond"&gt;participation &lt;/a&gt;(Huawei put in 20 5G related proposals whereas Ericsson and Nokia put in just 16 and 10 respectively);&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;packing key leadership positions in Working Groups with representatives from Chinese companies and institutions; and ensuring that all Chinese participants vote in unison for any proposal. It is no surprise therefore that Chinese companies now lead the way on 5G with Huawei &lt;a href="https://insights.greyb.com/company-with-most-5g-patents/"&gt;owning&lt;/a&gt; the most number of 5G patents and has &lt;a href="https://www.cfr.org/blog/china-huawei-5g"&gt;finalised&lt;/a&gt; more 5G contracts than any other company despite restrictions placed on Huawei’s gear by some countries. As detailed in its “Make in China”strategy, China will now activelyapply its winning strategy to other standard-setting avenues as well&lt;/p&gt;
&lt;h3&gt;&lt;span&gt;Standards for Artificial Intelligence&lt;/span&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;A  number of institutions, including private actors such as Huawei and Cloud Walk have contributed to China’s 2018 &lt;a href="https://cset.georgetown.edu/publication/artificial-intelligence-standardization-white-paper-2021-edition/"&gt;AI standardisation white paper&lt;/a&gt; that was revised and updated in 2021.The white paper maps the work of SDOs in the field of AI standards and outlines a number of recommendations on how Chinese actors can use global SDOs to boost industrial competitiveness and globally promote “Chinese wisdom.” While there are cursory references to the role of standards in furthering “ethics” and “privacy,” the document does not outline how China will look to promote these values at SDOs.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Artificial Intelligence (AI) is a general purpose technology that has various outcomes and use-cases.Top down regulation of AI by governments is emerging across jurisdictions but this may not keep pace with the rapidly evolving technology  being developed by the private sector or adequately check the diversity of use-cases. On the other hand, private sector driven self-regulatory initiatives focussing on ‘ethical AI’ are very broad and provide too much leeway to technology companies to evade the law. Technical standards offer a middle ground where multiple stakeholders can come together to devise uniform requirements on various stages of the AI development lifecycle. Of course, technical standards must co-exist with government driven regulation as well as self regulatory codes to holistically govern the deployment of AI globally. However, while the first two modes of regulation has received plenty of attention from policy-makers and scholars alike, AI standard-setting is an emerging field that has yet to be concretely evaluated from a strategic and diplomatic perspective.&lt;/span&gt;&lt;/p&gt;
&lt;h3&gt;&lt;strong&gt;Introducing a new CIS-ASPI project&lt;/strong&gt;&lt;/h3&gt;
&lt;p style="text-align: justify; "&gt;This is why researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific. Given the immense economic value of shaping global technical standards, it is imperative that SDOs not be dominated only by the likes of the US, Europe or China. The standards likely to impact a majority of nations, devised only from the purview of  a few countries may be context agnostic to the needs of emerging economies. Further, there are values at stake here. An excessive focus on security, accuracy or quality of AI-driven products may make some technology  palatable across the world even if the technology  undermines core democratic values such as privacy, and anti-discrimination. China’s&lt;a href="https://www.ft.com/content/c3555a3c-0d3e-11ea-b2d6-9bf4d1957a67"&gt; efforts&lt;/a&gt; at shaping Facial Recognition Technology (FRT) standards at the ITU have been criticised for moving beyond mere technical specifications into the domain of policy recommendations despite there being a lack of representation of experts on human rights, consumer protection or data protection at the ITU. Accordingly, diversity of representation in terms of expertise, gender, and nationality at SDOs, including in leadership positions, are aspects our project will explore with an eye towards creating more inclusive participation.&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;Through this project ,we hope to identify how key stakeholders drive these initiatives and how technological standards can be devised in line both with core democratic values and strategic priorities. Through extensive consultations with several stakeholder groups, we plan to offer learning products to policy makers and technical delegates alike to enable Australian and Indian delegates to serve as ambassadors for our respective nations.&lt;/span&gt;&lt;/p&gt;
&lt;p style="text-align: justify; "&gt;&lt;span&gt;For more information on this new and exciting project funded by the Australian Departmentfor Foreign Affairs and Trade as part of the Australia India Cyber and Critical Technology Partnership grants, visit &lt;/span&gt;&lt;a href="http://www.aspi.org.au/techdiplomacy"&gt;www.aspi.org.au/techdiplomacy&lt;/a&gt;&lt;span&gt; and https://www.internationalcybertech.gov.au/AICCTP-grant-round-two&lt;/span&gt;&lt;/p&gt;
        &lt;p&gt;
        For more details visit &lt;a href='http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific'&gt;http://editors.cis-india.org/internet-governance/blog/techplomacy-and-negotiation-of-ai-standards-for-indo-pacific&lt;/a&gt;
        &lt;/p&gt;
    </description>
    <dc:publisher>No publisher</dc:publisher>
    <dc:creator>arindrajit</dc:creator>
    <dc:rights></dc:rights>

    
        <dc:subject>Internet Governance</dc:subject>
    
    
        <dc:subject>Artificial Intelligence</dc:subject>
    

   <dc:date>2022-10-21T17:16:10Z</dc:date>
   <dc:type>Blog Entry</dc:type>
   </item>




</rdf:RDF>
