Blog
Notes for India as the digital trade juggernaut rolls on
The article by Arindrajit Basu was published in The Hindu on February 8, 2022.
Despite the cancellation of the Twelfth Ministerial Conference (MC12) of the World Trade Organization (WTO) late last year (originally scheduled for November 30 to December 3, 2021) due to COVID-19, digital trade negotiations continue their ambitious march forward. On December 14, Australia, Japan, and Singapore, co-convenors of the plurilateral Joint Statement Initiative (JSI) on e-commerce, welcomed the ‘substantial progress’ made at the talks over the past three years and stated that they expected a convergence on more issues by the end of 2022.
Holding out
But therein lies the rub: even though JSI members account for over 90% of global trade, and the initiative welcomes newer entrants, over half of WTO members (largely from the developing world) continue to opt out of these negotiations. They fear being arm-twisted into accepting global rules that could etiolate domestic policymaking and economic growth. India and South Africa have led the resistance and been the JSI’s most vocal critics. India has thus far resisted pressures from the developed world to jump onto the JSI bandwagon, largely through coherent legal argumentation against the JSI and a long-term developmental vision. Yet, given the increasingly fragmented global trading landscape and the rising importance of the global digital economy, can India tailor its engagement with the WTO to better accommodate its economic and geopolitical interests?
Global rules on digital trade
The WTO emerged in a largely analogue world in 1994. It was only at the Second Ministerial Conference (1998) that members agreed on core rules for e-commerce regulation. A temporary moratorium was imposed on customs duties relating to the electronic transmission of goods and services. This moratorium has been renewed continuously, over consistent opposition from India and South Africa. They argue that the moratorium imposes significant costs on developing countries, which are unable to benefit from the revenue that customs duties would bring.
The members also agreed to set up a work programme on e-commerce across four issue areas at the General Council: goods, services, intellectual property, and development. Frustrated by a lack of progress in the two decades that followed, 70 members brokered the JSI in December 2017 to initiate exploratory work on the trade-related aspects of e-commerce. Several countries, including developing countries, signed up in 2019 despite holding contrary views to most JSI members on key issues. Surprise entrants, China and Indonesia, argued that they sought to shape the rules from within the initiative rather than sitting on the sidelines.
India and South Africa have rightly pointed out that the JSI contravenes the WTO’s consensus-based framework, where every member has a voice and vote regardless of economic standing. Unlike the General Council Work Programme, which India and South Africa have attempted to revitalise in the past year, the JSI does not include all WTO members. For the process to be legally valid, the initiative must either build consensus or negotiate a plurilateral agreement outside the aegis of the WTO.
India and South Africa’s positioning strikes a chord at the heart of the global trading regime: how to balance the sovereign right of states to shape domestic policy with international obligations that would enable them to reap the benefits of a global trading system.
A contested regime
There are several issues upon which the developed and developing worlds disagree. One such issue concerns international rules relating to the free flow of data across borders. Several countries, both within and outside the JSI, have imposed data localisation mandates that compel corporations to store and process data within territorial borders. This is a key policy priority for India. Several payment card companies, including Mastercard and American Express, were prohibited from issuing new cards for failure to comply with a 2018 financial data localisation directive from the Reserve Bank of India. The Joint Parliamentary Committee (JPC) on data protection has recommended stringent localisation measures for sensitive personal data and critical personal data in India’s data protection legislation. However, for nations and industries in the developed world looking to access new digital markets, these restrictions impose unnecessary compliance costs, thus arguably hampering innovation and supposedly amounting to unfair protectionism.
There is a similar disagreement regarding domestic laws that mandate the disclosure of source codes. Developed countries believe that this hampers innovation, whereas developing countries believe it is essential for algorithmic transparency and fairness — which was another key recommendation of the JPC report in December 2021.
India’s choices
India’s global position is reinforced through narrative building by political and industrial leaders alike. Data sovereignty is championed as a means of resisting ‘data colonialism’, the exploitative economic practices and intensive lobbying of Silicon Valley companies. Policymaking for India’s digital economy is at a critical juncture. Surveillance reform, personal data protection, algorithmic governance, and non-personal data regulation must be galvanised through evidence-based insights, and work for individuals, communities, and aspiring local businesses — not just established larger players.
Hastily signing trading obligations could reduce the space available to frame appropriate policy. But sitting out trade negotiations will mean that the digital trade juggernaut will continue unchecked, through mega-regional trading agreements such as the Regional Comprehensive Economic Partnership (RCEP) and the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). India could risk becoming an unwitting standard-taker in an already fragmented trading regime and lose out on opportunities to shape these rules instead.
Alternatives exist; negotiations need not mean compromise. For example, exceptions to digital trade rules, such as ‘legitimate public policy objective’ or ‘essential security interests’, could be negotiated to preserve policymaking where needed while still acquiescing to the larger agreement. Further, any outcome need not be an all-or-nothing arrangement. Taking a cue from the Digital Economy Partnership Agreement (DEPA) between Singapore, Chile, and New Zealand, India can push for a framework where countries can pick and choose modules with which they wish to comply. These combinations can be amassed incrementally as emerging economies such as India work through domestic regulations.
Despite its failings, the WTO plays a critical role in global governance and is vital to India’s strategic interests. Negotiating without surrendering domestic policy-making holds the key to India’s digital future.
Arindrajit Basu is Research Lead at the Centre for Internet and Society, India. The views expressed are personal. The author would like to thank The Clean Copy for edits on a draft of this article.
CIS Comments and Recommendations on the Data Protection Bill, 2021
After nearly two years of deliberations and a few changes in its composition, the Joint Parliamentary Committee (JPC), on 17 December 2021, submitted its report on the Personal Data Protection Bill, 2019 (2019 Bill). The report also contains a new version of the law titled the Data Protection Bill, 2021 (2021 Bill). Although there were no major revisions from the previous version other than the inclusion of all data under the ambit of the bill, some provisions were amended.
This document is a revised version of the comments we provided on the 2019 Bill on 20 February 2020, with updates based on the amendments in the 2021 Bill. Through this document we aim to shed light on the issues that we highlighted in our previous comments that have not yet been addressed, along with additional comments on sections that have become more relevant since the pandemic began. In several instances our previous comments have either not been addressed or only partially been addressed; in such instances, we reiterate them.
These general comments should be read in conjunction with our previous recommendations for the reader to get a comprehensive overview of what has changed from the previous version and what has remained the same. This document can also be read while referencing the new Data Protection Bill 2021 and the JPC’s report to understand some of the significant provisions of the bill.
Read on to access the comments | Review and editing by Arindrajit Basu. Copy editing: The Clean Copy; Shared under Creative Commons Attribution 4.0 International license
How Function Of State May Limit Informed Consent: Examining Clause 12 Of The Data Protection Bill
The blog post was published in Medianama on February 18, 2022. This is the first of a two-part series by Amber Sinha.
In 2018, hours after the Committee of Experts led by Justice Srikrishna released its report and draft bill, I wrote an opinion piece providing my quick take on what was good and bad about the bill. A section of my analysis focused on Clause 12 (then Clause 13), which provides for non-consensual processing of personal data for state functions. I called this provision a ‘carte blanche’ which effectively allowed the state to process a citizen’s data for practically all interactions between them without having to deal with the inconvenience of seeking consent. My former colleague, Pranesh Prakash, pointed out that this was not a correct interpretation of the provision, as I had missed the significance of the word ‘necessary’, which was inserted to act as a check on the powers of the state. He also pointed out, correctly, that in its construction this provision is equivalent to the position in the European General Data Protection Regulation (Article 6(1)(e)), and is perhaps even more restrictive.
While I agree with what Pranesh says above (his claims are largely factual, and there can be no basis for disagreement), my view of Clause 12 has not changed. While Clause 35 has been the focus of considerable discourse and analysis, for good reason, I continue to believe that Clause 12 remains among the most dangerous provisions of this bill, and I will try to unpack why here.
The Data Protection Bill 2021 has a chapter on the grounds for processing personal data, and one of those grounds is consent by the individual. The rest of the grounds deal with various situations in which personal data can be processed without seeking consent from the individual. Clause 12 lays down one of the grounds. It allows the state to process data without the consent of the individual in the following cases —
a) where it is necessary to respond to a medical emergency
b) where it is necessary for the state to provide a service or benefit to the individual
c) where it is necessary for the state to issue any certification, licence or permit
d) where it is necessary under any central or state legislation, or to comply with a judicial order
e) where it is necessary for any measures during an epidemic, outbreak, or other threat to public health
f) where it is necessary for safety measures during a disaster or a breakdown of public order
In order to carry out (b) and (c), there is also the added requirement that the state function must be authorised by law.
Twin restrictions in Clause 12
The use of the words ‘necessary’ and ‘authorised by law’ is intended to impose checks on the powers of the state. The first restriction seeks to limit actions to only those cases where the processing of personal data would be necessary for the exercise of the state function. This should mean that if the state function can be exercised without the non-consensual processing of personal data, then it should be exercised that way. Therefore, while acting under this provision, the state should only process my data if it needs to do so to provide me with the service or benefit. The second restriction means that this ground would apply only to those state functions which are authorised by law, meaning only those functions which are supported by validly enacted legislation.
What we need to keep in mind regarding Clause 12 is that the requirement of ‘authorised by law’ does not mean that legislation must provide for that specific kind of data processing. It simply means that the larger state function must have legal backing. The danger is how these provisions may be used with broad mandates. If the activity in question is the non-consensual collection and processing of, say, demographic data of citizens to create state resident hubs which will assist in the provision of services such as healthcare, housing, and other welfare functions, all that may be required is that the welfare functions are authorised by law.
Scope of privacy under Puttaswamy
It would be worthwhile, at this point, to delve into the nature of the restrictions on privacy that the landmark Puttaswamy judgement permits the state to impose. The judgement clearly identifies the principles of informed consent and purpose limitation as central to informational privacy. As discussed repeatedly during the course of the hearings and in the judgement, privacy, like any other fundamental right, is not absolute. However, restrictions on the right must be reasonable in nature. In the case of Clause 12, the restrictions on privacy in the form of denial of informed consent need to be tested against a constitutional standard. In Puttaswamy, the bench was not required to provide a legal test to determine the extent and scope of the right to privacy, but it does provide sufficient guidance for us to contemplate how the limits and scope of the constitutional right to privacy could be determined in future cases.
The Puttaswamy judgement clearly states that “the right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.” By locating the right not just in Article 21 but also in the entirety of Part III, the bench clearly requires that “the drill of various Articles to which the right relates must be scrupulously followed.” This means that where transgressions on privacy relate to different provisions in Part III, the different tests under those provisions will apply along with those in Article 21. For instance, where the restrictions relate to personal freedoms, the tests under both Article 19 (right to freedoms) and Article 21 (right to life and liberty) will apply.
In the case of Clause 12, the three tests laid down by Justice Chandrachud are most operative —
a) the existence of a “law”
b) a “legitimate State interest”
c) the requirement of “proportionality”.
The first test is already reflected in the use of the phrase ‘authorised by law’ in Clause 12. The test under Article 21 would imply that the function of the state should not merely be authorised by law, but that the law, in both its substance and procedure, must be ‘fair, just and reasonable.’ The next test is that of ‘legitimate state interest’. In its report, the Joint Parliamentary Committee places emphasis on Justice Chandrachud’s use of “allocation of resources for human development” in an illustrative list of legitimate state interests. The report claims that the ground of functions of the state thus satisfies the legitimate state interest test. We do not dispute this claim.
Proportionality and Clause 12
It is the final test of ‘proportionality’ articulated by the Puttaswamy judgement that is most operative in this context. Unlike Clauses 42 and 43, which include the twin tests of necessity and proportionality, the committee has chosen to employ only one of these tests in Clause 12. Proportionality is a commonly employed ground in European jurisprudence and in common law countries such as Canada and South Africa, and it is also an integral part of Indian jurisprudence. As commonly understood, the proportionality test consists of three parts —
a) the limiting measures must be carefully designed, or rationally connected, to the objective
b) they must impair the right as little as possible
c) the effects of the limiting measures must not be so severe on individual or group rights that the legitimate state interest, albeit important, is outweighed by the abridgement of rights.
The first test is similar to the test of proximity under Article 19. The test of ‘necessity’ in Clause 12 must be viewed in this context. It must be remembered that the test of necessity is not limited only to situations where it may not be possible to obtain consent while providing benefits. My reservations about the sufficiency of this standard stem from observations made in the report, as well as from the relatively small body of jurisprudence on this term in Indian law.
The Srikrishna Report interestingly mentions three kinds of scenarios where consent should not be required — where it is not appropriate, necessary, or relevant for processing. The report goes on to give an example of inappropriateness: in cases where data is being gathered to provide welfare services, there is an imbalance of power between the citizen and the state. Having made that observation, the committee inexplicably arrives at the conclusion that the response to this problem is to further erode the power available to citizens by removing the need for consent altogether under Clause 12. There is limited jurisprudence on the standard of ‘necessity’ under Indian law. The Supreme Court has articulated this test as ‘having reasonable relation to the object the legislation has in view.’ If we look elsewhere for guidance on how to read ‘necessity’, the European Court of Human Rights in Handyside v United Kingdom held that it is neither “synonymous with indispensable” nor does it have the “flexibility of such expressions as admissible, ordinary, useful, reasonable or desirable.” In short, there must be a pressing social need to satisfy this ground.
However, the other two tests of proportionality do not find a mention in Clause 12 at all. There is no requirement of ‘narrow tailoring’, that is, that the scope of non-consensual processing must impair the right as little as possible. It is doubly unfortunate that this test does not find a place since, unlike necessity, ‘narrow tailoring’ is a test well understood in Indian law. This means that while there is a requirement to show that processing personal data was necessary to provide a service or benefit, there is no requirement to process data in a way that minimises non-consensual processing. The fear is that as long as there is a reasonable relation between the processing of data and the object of the function of state, state authorities and other bodies authorised by it do not need to bother with obtaining consent.
Similarly, the third test of proportionality is also not represented in this provision. That test requires a balancing between the abridgement of individual rights and the legitimate state interest in question, and requires that the first must not outweigh the second. The absence of this test leaves Clause 12 devoid of any such consideration. Therefore, as long as the test of necessity is met under this law, the state need not weigh the denial of consent against the service or benefit that is being provided.
The collective implication of leaving out ‘proportionality’ from Clause 12 is to provide very wide discretionary powers to the state by setting an extremely low threshold for circumventing informed consent. In the next post, I will demonstrate the ease with which Clause 12 can allow indiscriminate data sharing by focusing on the Indian government’s digital healthcare schemes.
Clause 12 Of The Data Protection Bill And Digital Healthcare: A Case Study
The blog post was published in Medianama on February 21, 2022. This is the second in a two-part series by Amber Sinha.
In the previous post, I looked at provisions on non-consensual data processing for state functions under the most recent version of recommendations by the Joint Parliamentary Committee on India’s Data Protection Bill (DPB). The true impact of these provisions can only be appreciated in light of ongoing policy developments and real-life implications.
To appreciate the significance of the dilutions in Clause 12, let us consider the Indian state’s range of schemes promoting digital healthcare. In July 2018, NITI Aayog, a central government policy think tank in India, released a strategy and approach paper (Strategy Paper) on the formulation of the National Health Stack, which envisions the creation of a federated, application programming interface (API)-enabled health information ecosystem. While the Ministry of Health and Family Welfare has focused on the creation of Electronic Health Records (EHR) Standards for India during the last few years and also identified a contractor for the creation of a centralised health information platform (IHIP), this Strategy Paper advocates a completely different approach, described as a Personal Health Records (PHR) framework. In 2021, the National Digital Health Mission (NDHM) was launched, under which a citizen has the option to obtain a digital health ID, a unique identifier that will carry all of a person’s health records.
A Stack Model for Big Data Ecosystem in Healthcare
A stack model, as envisaged in the Strategy Paper, consists of several layers of open APIs connected to each other, often tied together by a unique health identifier. The open nature of the APIs allows public and private actors to build solutions on top of the stack that are interoperable with all of its parts. It is, however, worth considering both this ‘openness’ and the role that the state plays in it.
Even though the APIs are themselves open, they are part of a pre-decided technological paradigm, built by private actors and blessed by the state. Even though innovators can build on it, the options available to them are limited by the information architecture created by the stack model. When such a technological paradigm is created for healthcare reform and health data, the stack model poses additional challenges. By tying the stack model to a unique identity without appropriate processes in place for access control, information siloing, and encrypted communication, it creates tremendous privacy and security concerns. The broad language of Clause 12 of the DPB needs to be looked at in this context.
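To make the architectural point concrete, here is a minimal sketch, assuming a toy model in which a few layers of registries are keyed to one shared health identifier. The registry names, record fields, and the traverse function are illustrative assumptions, not the actual National Health Stack or NDHM design; the point is simply that once several layers share one identifier, any party holding it can, absent access controls, assemble a full cross-registry profile.

```python
# A hypothetical, simplified model of a "stack" of registries tied together by
# one unique health identifier. All names and fields below are assumptions for
# illustration only.

HEALTH_ID = "hid-0001"

# Each "layer" of the stack exposes its own records, keyed to the same ID.
REGISTRY_LAYERS = {
    "facility_registry":  {HEALTH_ID: {"last_visit": "2021-11-02", "facility": "PHC-17"}},
    "insurance_registry": {HEALTH_ID: {"policy": "PMJAY", "claims": 2}},
    "lab_reports":        {HEALTH_ID: {"hba1c": 7.2}},
}

def traverse(health_id: str) -> dict:
    """Collect everything linked to a single health ID across all layers."""
    profile = {}
    for layer, records in REGISTRY_LAYERS.items():
        if health_id in records:
            profile[layer] = records[health_id]
    return profile

# With no access control, siloing, or consent check, one identifier is enough
# to pull a combined profile out of every layer.
print(traverse(HEALTH_ID))
```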
Clause 12 allows non-consensual processing of personal data where it is necessary “for the performance of any function of the state authorised by law” in order to provide a service or benefit from the State. In the previous post, I had highlighted the import of the use of only ‘necessity’ to the exclusion of ‘proportionality’. Now, we need to consider its significance in light of the emerging digital healthcare apparatus being created by the state.
The National Health Stack and National Digital Health Mission together envision an intricate system of data collection and exchange which, in a regulatory vacuum, would ensure unfettered access to sensitive healthcare data for both the state and private actors registered with the platforms. The Stack framework relies on repositories where data may be accessed from multiple nodes within the system. Importantly, the Strategy Paper also envisions health data fiduciaries that would facilitate consent-driven interaction between the entities that generate health data and the entities that want to consume health records for delivering services to the individual. The cast of characters includes the National Health Authority; healthcare providers and insurers who access the National Health Electronic Registries; unified data from different programmes such as the National Health Resource Repository (NHRR), the NIN database, NIC, and the Registry of Hospitals in Network of Insurance (ROHINI); and private actors such as Swasth and iSpirt, who assist the Mission as volunteers. The currency that government and private actors are interested in is data.
The promised benefits of healthcare data in an anonymised and aggregated form range from disease surveillance and pharmacovigilance to health scheme management systems and nutrition management, benefits which have only been more acutely emphasised during the pandemic. However, the pandemic has also normalised the sharing of sensitive healthcare data with a variety of actors, without much thought given to much-needed data minimisation practices.
The potential misuses of healthcare data include greater state surveillance and control, as well as predatory and discriminatory practices by private actors, all of which can rely on Clause 12 to do away with even the pretence of informed consent so long as the processing of data is deemed necessary by the state and its private sector partners to provide any service or benefit.
Subclause (e) in Clause 12, which was added in the last version of the Bill drafted by MeitY and has been retained by the JPC, allows processing wherever it is necessary for ‘any measures’ to provide medical treatment or health services during an epidemic, outbreak or threat to public health. Yet again, the overly broad language used here is designed to ensure that any annoyances of informed consent can be easily brushed aside wherever the state intends to take any measures under any scheme related to public health.
Effectively, how does the framework under Clause 12 alter the consent and purpose limitation model? Data protection laws introduce an element of control by tying purpose limitation to consent. Individuals provide consent for specified purposes, and data processors are required to respect that choice. Where there is no consent, the purposes of data processing are sought to be limited by the necessity principle in Clause 12. The state (or authorised parties) must be able to demonstrate that the processing is necessary for the exercise of a state function, and data must only be processed for those purposes which flow out of this necessity. However, unlike the consent model, this provides an opportunity to keep reinventing purposes for different state functions, as the sketch below illustrates.
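A minimal sketch of this contrast, assuming a toy consent store and a toy necessity check; the purposes, the state function, and the function names below are hypothetical and are not drawn from the Bill or from any real system.

```python
# Purpose limitation under the consent model versus a "necessity" ground.
# All names and structures here are illustrative assumptions.

CONSENTS = {
    # data principal -> purposes they have explicitly consented to
    "resident-001": {"issue ration card"},
}

def may_process_with_consent(principal: str, purpose: str) -> bool:
    # Consent model: processing is allowed only for purposes the individual agreed to.
    return purpose in CONSENTS.get(principal, set())

def may_process_under_necessity(purpose: str, state_function: str) -> bool:
    # Necessity-style ground: no reference to the individual's consent at all;
    # the only question is whether the purpose is claimed to be necessary for a
    # broad state function authorised by law. New purposes can keep being
    # attached to the same function.
    claimed_necessary_for = {
        "issue ration card": "welfare delivery",
        "build resident data hub": "welfare delivery",  # a reinvented purpose
    }
    return claimed_necessary_for.get(purpose) == state_function

print(may_process_with_consent("resident-001", "build resident data hub"))        # False
print(may_process_under_necessity("build resident data hub", "welfare delivery"))  # True
```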
In the absence of a data protection law, data collected by one agency is shared indiscriminately with other agencies and used for multiple purposes beyond the purpose for which it was collected. The consent and purpose limitation model would have addressed this issue. But, by having a low threshold for non-consensual processing under Clause 12, this form of data processing is effectively being legitimised.
Nothing to Kid About – Children's Data Under the New Data Protection Bill
The article was originally published in the Indian Journal of Law and Technology
For children, the internet has shifted from being a form of entertainment to a medium to connect with friends and seek knowledge and education. However, each time they access the internet, data about them and their choices are recorded by companies and unknown third parties. The growth of EdTech apps in India has raised growing concerns regarding children's data privacy, prompting the creation of a self-regulatory body, the Indian EdTech Consortium. More recently, the Advertising Standards Council of India has also started looking at passing a draft regulation to keep a check on EdTech advertisements.
The Joint Parliamentary Committee (JPC), tasked with drafting and revising the Data Protection Bill, had to consider the many changes that had taken place after the release of the 2019 version of the Bill. The most significant change was the removal of the word “personal” from the title of the Bill, a move to create a comprehensive Data Protection Bill that covers both personal and non-personal data; certain other provisions of the Bill also featured additions and removals. The JPC, in its revised version of the Bill, has removed an entire class of data fiduciaries – the guardian data fiduciary – which was tasked with greater responsibility for managing children's data. The JPC justified this removal by stating that consent from the guardian of the child is enough to meet the ends for which children's personal data are processed by the data fiduciary. While thought has been given to how consent is given by the guardian on behalf of the child, there was no change to the age of consent in the Bill. Keeping the age of consent under the Bill the same as the age of majority for entering into a contract under the Indian Contract Act, 1872 – 18 years – reveals the disconnect the law has with the ground reality of how children interact with the internet.
In the current state of affairs, where Indian children are navigating the digital world on their own, there is a need to look deeply at the processing of children’s data as well as at ways to ensure that children have information about consent and informational privacy. By placing the onus of granting consent on parents, the PDP Bill fails to look at how consent works in a privacy policy–based consent model and how this, in turn, harms children in the long run.
1. Age of Consent
By setting the age of consent at 18 years, the Data Protection Bill, 2021 brings all individuals under 18 years of age under one umbrella without making a distinction between the internet usage of a 5-year-old child and that of a 16-year-old teenager. There is a need to look at the current internet usage habits of children and assess whether requiring parental consent is reasonable or even practical. It is also pertinent to note that the law in the offline world does make a distinction between age and maturity. For example, it has been highlighted that Section 82 of the Indian Penal Code, read with Section 83, states that any act by a child under the age of 12 years shall not be considered an offence, while the maturity of those aged between 12 and 18 years will be decided by the court (individuals between the ages of 16 and 18 years can also be tried as adults for heinous crimes). Similarly, child labour laws in the country allow children above the age of 14 years to work in non-hazardous industries, which would qualify them to fall under Section 13 of the Bill, which deals with employee data.
A 2019 report suggests that two-thirds of India’s internet users are in the 12–29 years age group, accounting for about 21.5% of the total internet usage in metro cities. With the emergence of cheaper phones equipped with faster processing and low internet data costs, children are no longer passive consumers of the internet. They have social media accounts and use several applications to interact with others and make purchases. There is a need to examine how children and teenagers interact with the internet as well as the practicality of requiring parental consent for the usage of applications.
Most applications that require age data ask users to type in their date of birth; it is not difficult for a child to input a suitable date that makes it appear that they are over 18. In such cases they are still children, but the content presented to them would be that meant for adults, including content that might be disturbing or that involves alcohol and gambling. Additionally, in their privacy policies, applications sometimes state that they are not suited for, and may not be used by, users under 18. Here, data fiduciaries avoid liability by placing the onus on the user to declare their age and to properly read and understand the privacy policy.
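As a minimal sketch of how such a self-declared age gate typically works (the function name and the 18-year cut-off below are assumptions for illustration, not taken from any specific application), the check reduces to comparing a user-typed date of birth against a threshold, which a child can satisfy simply by entering an earlier year:

```python
# A toy sketch of a self-declared date-of-birth "age gate". The function name
# and cut-off are illustrative assumptions; the point is that the check relies
# entirely on what the user chooses to type in.

from datetime import date
from typing import Optional

def passes_age_gate(dob: date, minimum_age: int = 18, today: Optional[date] = None) -> bool:
    """Return True if the self-declared date of birth implies the minimum age."""
    today = today or date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= minimum_age

# A 12-year-old who enters their real birth date is turned away...
print(passes_age_gate(date(2013, 5, 1), today=date(2022, 2, 18)))  # False
# ...but passes immediately by shifting the year back, with no further verification.
print(passes_age_gate(date(1990, 5, 1), today=date(2022, 2, 18)))  # True
```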
Reservations about the age of consent under the Bill have also been highlighted by some members of the JPC through their dissenting opinions. MP Ritesh Pandey suggested that the age of consent be reduced to 14 years, keeping the best interests of children in mind as well as to support children in benefiting from technological advances. Similarly, MP Manish Tiwari, in his dissenting opinion, suggested regulating data fiduciaries based on the type of content they provide or the data they collect.
2. How is the 2021 Bill Different from the 2019 Bill?
The 2019 draft of the Bill provided for a class of data fiduciaries called guardian data fiduciaries – entities that operate commercial websites or online services directed at children or which process large volumes of children’s personal data. This class of fiduciaries was barred from profiling, tracking, and behavioural monitoring of children, from directing targeted advertising at children, and from undertaking any other processing of personal data that could cause significant harm to the child. As per Chapter IV, any violation could attract a penalty of up to INR 15 crore or 4 per cent of the worldwide turnover of the data fiduciary for the preceding financial year, whichever is higher. Beyond these prohibitions, however, this separate class of data fiduciaries did not have any additional responsibilities. It is also unclear whether a data fiduciary that does not, by definition, fall within such a category would be allowed to engage in activities that could cause ‘significant harm’ to children.
The new Bill also does not provide any mechanism for age verification; it only lays down the considerations to be taken into account when verification processes are framed. Furthermore, the JPC has suggested that the consent options available to the child when they attain the age of majority, i.e. 18 years, should be included within the rules framed by the Data Protection Authority instead of being introduced as an amendment to the Bill.
3. In the Absence of a Guardian Data Fiduciary
The 2018 and 2019 drafts of the PDP Bill consider a child to be any person below the age of 18 years. For a child to access online services, the data fiduciary must first verify the age of the child and obtain consent from their guardian. The Bill does not provide an explicit process for age verification apart from stating that regulations shall be drafted in this regard; the 2019 Bill states that the Data Protection Authority shall specify codes of practice in this matter. Taking best practices into account, there is a need for ‘user-friendly and privacy-protecting age verification techniques’ to encourage safe navigation across the internet. This will require looking at technological developments and different standards worldwide. There is also a need to hold companies accountable for the protection of children’s online privacy and for the harms that their algorithms cause children, and to make sure that such harms do not continue.
The JPC, in the 2021 version of the Bill, removed the provisions about guardian data fiduciaries, stating that there was no advantage in creating a different class of data fiduciary. As per the JPC, even those data fiduciaries that did not fall within the said classification would need to comply with the rules pertaining to the personal data of children, i.e. with Section 16 of the Bill. Section 16 requires the data fiduciary to verify the child’s age and obtain consent from the parent or guardian; the manner of age verification has, however, not been spelt out. Furthermore, since ‘significant data fiduciaries’ is an existing class, there is still a need to comply with rules related to data processing. The JPC also removed the phrases “in the best interests of, the child” and “is in the best interests of, the child” under sub-clause 16(1), stating that the entire Bill concerns the rights of the data principal and that the use of such terms dilutes the purpose of the legislation and could give way to manipulation by the data fiduciary.
Conclusion
Over the past two years, there has been a significant increase in applications that are targeted at children. There has been a proliferation of EdTech apps, which ideally should bear more responsibility as they process children's data. We recommend that, instead of creating a separate category, such fiduciaries collecting children's data or providing services to children be treated as ‘significant data fiduciaries’ that need to take up additional compliance measures.
Furthermore, any blanket prohibition on tracking children may obstruct safety measures that could be implemented by data fiduciaries. These fears are also rising in other jurisdictions, as such prohibitions are likely to restrict data fiduciaries from using software that screens for content such as Child Sexual Abuse Material as well as for online predatory behaviour. Additionally, concerning the age of consent under the Bill, the JPC could look at international best practices and come up with ways to make sure that children can use the internet and have rights over their data, which would enable them to grow up with more awareness about data protection and privacy. One such example is the Children's Online Privacy Protection Rule (COPPA) in the US, whose rules apply to operators of websites and online services directed at children under 13 that collect personal information from them, as well as to operators of general-audience services that have actual knowledge that they collect personal information from such children. A combination of this system and the significant data fiduciary classification could be one possible way to ensure that children’s data and privacy are protected online.
The authors are researchers at the Centre for Internet and Society and thank their colleague Arindrajit Basu for his inputs.
Response to MeitY's India Digital Ecosystem Architecture 2.0 Comment Period
This submission presents a response by the Centre for Internet & Society (CIS) to MeitY's India Digital Ecosystem Architecture 2.0 Comment Period (hereinafter, the “Consultation”) released in February 2022. CIS appreciates MeitY's consultations, and is grateful for the opportunity to put forth its views and comments.
Read the response here
Cybernorms: Do they matter IRL (In Real Life): Event Report
During the first half of the year, multilateral forums including the United Nations made some progress in identifying norms, rules, and principles to guide responsible state behaviour in cyberspace, even though the need for political compromise between opposing geopolitical blocs stymied progress to a certain extent.
There is certainly a need to formulate more concrete rules and norms. However, at the same time, the international community must assess the extent to which existing norms are being implemented by states and non-state actors alike. Applying agreed norms to "real life" throws up challenges of interpretation and enforcement, to which the only long-term solution remains regular dialogue and exchange both between states and other stakeholders.
This was the thinking behind the session titled "Cybernorms: Do They Hold Up IRL (in Real Life)?", organised at RightsCon 2021 by four non-governmental organisations: the Association for Progressive Communications (APC), the Centre for Internet & Society (CIS), Global Partners Digital (GPD), and Research ICT Africa (RIA). Cyber norms do not work unless states and other actors call out violations of norms, actively observe and implement them, and hold each other accountable. As the organisers of the event, we devised hypothetical scenarios based on three real-life examples of large-scale incidents and engaged with discussants who sought to apply agreed cyber norms to them. We chose to create scenarios without referring to real states as we wanted the discussion to focus on the implementation and interpretation of norms rather than the specific political situation of each actor.
Through this interactive exercise involving an array of expert stakeholders (including academics, civil society, the technical community, and governments) and communities from different regions, we sought to answer whether and how the application of cyber norms can mitigate harms, especially to vulnerable communities, and identify possible gaps in current normative frameworks. For each scenario, we aimed to diagnose whether cyber norms have been violated, and if so, what could and should be done, by identifying the next steps that can be taken by all the stakeholders present. For each scenario, we highlight why we chose it, outline the main points of discussion, and articulate key takeaways for norm implementation and interpretation. We hope this exercise will feed into future conversations around both norm creation and enforcement by serving as a framework for guiding optimal norm enforcement.
Read the full report here
CIS Seminar Series
The first seminar in the series was held on 7th and 8th October on the theme of ‘Information Disorder: Mis-, Dis- and Malinformation’.
Theme for the Second Seminar (to be held online)
Moderating Data, Moderating Lives: Debating visions of (automated) content moderation in the contemporary
Artificial Intelligence (AI) and Machine Learning (ML) based approaches have become increasingly popular as “solutions” to curb the extent of mis-, dis-, and mal-information, hate speech, online violence, and harassment on social media. The pandemic and the ensuing work-from-home policies forced many platforms to shift to automated moderation, which further highlighted the inefficacy of existing models (Gillespie, 2020) in dealing with the surge in misinformation and harassment. These efforts, however, raise a range of interrelated concerns such as the freedom and regulation of speech on the privately public sphere of social media platforms; algorithmic governance, censorship, and surveillance; the relation between virality, hate, algorithmic design, and profits; and the social, political, and cultural implications of ordering social relations through the computational logics of AI/ML.
On the one hand, large-scale content moderation approaches (including automated AI/ML-based approaches) have been deemed “necessary” given the enormity of the data generated (Gillespie, 2020); on the other hand, they have been regarded as “technological fixes” offered by Silicon Valley (Morozov, 2013), or as “tyrannical” in that they erode existing democratic measures (Harari, 2018). Alternatively, decolonial, feminist, and postcolonial approaches insist on designing AI/ML models that centre the voices of those excluded, in order to sustain and further civic spaces on social media (Siapera, 2022).
From the global south perspective, issues around content moderation foreground the hierarchies inbuilt in the existing knowledge infrastructures. First, platforms remain unwilling to moderate content in under-resourced languages of the global south citing technological difficulties. Second, given the scale and reach of social media platforms and inefficient moderation models, the work is outsourced to workers in the global south who are meant to do the dirty work of scavenging content off these platforms for the global north. Such concerns allow us to interrogate the techno-solutionist approaches as well as their critiques situated in the global north. These realities demand that we articulate a different relationship with AI/ML while also being critical of AI/ML as an instrument of social empowerment for those at the “bottom of the pyramid” (Arora, 2016).
The seminar invites scholars interested in articulating nuanced responses to content moderation that take into account the harms perpetrated by the algorithmic governance of social relations and by irresponsible intermediaries, while being cognizant of the harmful effects of mis-, dis-, and mal-information, hate speech, online violence, and harassment on social media.
We invite abstract submissions that respond to these complexities vis-a-vis content moderation models or propose provocations regarding automated moderation models and their in/efficacy in furthering egalitarian relationships on social media, especially in the global south.
Submissions can reflect on the following themes using legal, policy, social, cultural, and political approaches. The list is not exhaustive, and abstracts addressing other ancillary concerns are most welcome:
- Metaphors of (content) moderation: mediating utopia, dystopia, scepticism surrounding AI/ML approaches to moderation.
- From toxic to healthy, from purity to impurity: Interrogating gendered, racist, colonial tropes used to legitimize content moderation
- Negotiating the link between content moderation, censorship and surveillance in the global south
- Whose values decide what is and is not harmful?
- Challenges of building moderation models for under-resourced languages.
- Content moderation, algorithmic governance and social relations.
- Communicating algorithmic governance on social media to the not so “tech-savvy” among us.
- Speculative horizons of content moderation and the future of social relations on the internet.
- Scavenging abuse on social media: Immaterial/invisible labour for making for-profit platforms safer to use.
- Do different platforms moderate differently? Interrogating content moderation on diverse social media platforms, and multimedia content.
- What should and should not be automated? Understanding prevalence of irony, sarcasm, humour, explicit language as counterspeech.
- Maybe we should not automate: Alternative, bottom-up approaches to content moderation
Seminar Format
We are happy to welcome abstracts for one of two tracks:
Working paper presentation
A working paper presentation would ideally involve a working draft that is presented for about 15 minutes followed by feedback from workshop participants. Abstracts for this track should be 600-800 words in length with clear research questions, methodology, and questions for discussion at the seminar. Ideally, for this track, authors should be able to submit a draft paper two weeks before the conference for circulation to participants.
Coffee-shop conversations
In contrast to the formal paper presentation format, the point of the coffee-shop conversations is to enable an informal space for presentation and discussion of ideas. Simply put, it is an opportunity for researchers to “think out loud” and get feedback on future research agendas. Provocations for this should be 100-150 words containing a short description of the idea you want to discuss.
We will try to accommodate as many abstracts as possible given time constraints. We welcome submissions from students and early career researchers, especially those from under-represented communities.
All discussions will be private and conducted under the Chatham House Rule. Drafts will only be circulated among registered participants.
Please send your abstracts to [email protected].
Timeline
- Abstract Submission Deadline: 18th April
- Results of Abstract review: 25th April
- Full submissions (of draft papers): 25th May
- Seminar date: 31st May (tentative)
References
Arora, P. (2016). Bottom of the Data Pyramid: Big Data and the Global South. International Journal of Communication, 10(0), 19.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. https://doi.org/10.1177/2053951720943234
Harari, Y. N. (2018, August 30). Why Technology Favors Tyranny. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism (First edition). PublicAffairs.
Siapera, E. (2022). AI Content Moderation, Racism and (de)Coloniality. International Journal of Bullying Prevention, 4(1), 55–65. https://doi.org/10.1007/s42380-021-00105-7
Personal Data Protection Bill must examine data collection practices that emerged during pandemic
The article by Shweta Mohandas and Anamika Kundu was originally published by news nine on November 29, 2021.
The Personal Data Protection Bill (PDP Bill) is expected to be introduced during the winter session of Parliament, and the report of the Joint Parliamentary Committee (JPC) was adopted by the committee on Monday. The report of the JPC comes after almost two years of deliberation and secrecy over what the final version of the Personal Data Protection Bill would look like. Since the publication of the 2019 version of the PDP Bill, the COVID-19 pandemic and the accompanying public safety measures have opened the way for a number of new organisations and reasons to collect personal data that did not exist in 2019. Hence, along with the changes suggested by multiple civil society organisations and the dissent notes submitted by members of the JPC, the new version of the PDP Bill must also account for how data processing has changed over these two years.
Concerns with the bill
At the outset, there are certain parts of the PDP Bill that need to be revised in order to uphold the spirit of privacy and individual autonomy laid out in the Puttaswamy judgement. The two sets of provisions that need to be brought in line with the privacy judgement are the ones that allow for the non-consensual processing of data by the government and by employers. The PDP Bill in its current form provides wide-ranging exemptions which allow government agencies to process citizens' data in order to fulfil their responsibilities.
In the 2018 version of the bill, drafted by the Justice Srikrishna Committee, exemptions granted to the State with regard to the processing of data were subject to a four-pronged test, which required the processing to be (i) authorised by law; (ii) in accordance with the procedure laid down by the law; (iii) necessary; and (iv) proportionate to the interests being achieved. This four-pronged test was in line with the principles laid down by the Supreme Court in the Puttaswamy judgement. The 2019 version of the PDP Bill diluted this principle by retaining only the 'necessity' requirement and removing the others, which is not in consonance with the test laid down by the Supreme Court in Puttaswamy.
Section 35 was also widely discussed in the panel meetings, where members argued for the removal of 'public order' as a ground for exemption. The panel also pressed for 'judicial or parliamentary oversight' of such exemptions. The final report did not accept these suggestions, citing the need to balance national security with the liberty and privacy of the individual. There ought to be prior judicial review of the written order exempting a governmental agency from any provisions of the bill; allowing the government to claim an exemption whenever it is satisfied that doing so is "necessary or expedient" can be misused.
Another clause that sidesteps the data principal concerns employee data. Section 13 of the current version of the bill provides the employer with leeway to process employee data (other than sensitive personal data) without consent on two grounds: when consent is not appropriate, or when obtaining consent would involve disproportionate effort on the part of the employer.
Such personal data can only be collected for recruitment, termination, attendance, the provision of any service or benefit, and the assessment of performance. This covers almost all of the activities that require data about the employee. Although the 2019 version of the bill excludes the non-consensual collection of sensitive personal data (a provision that was missing in the 2018 version), there is still a lot of scope to improve this provision and give employees further rights over their data. At the outset, the bill does not define 'employee' and 'employer', which could result in confusion as there is no single definition of these terms across Indian labour laws.
Additionally, the bill distinguishes between the employee and the consumer, with the consumer of the same company or service having greater rights over their data than an employee: the consumer, as a data principal, has the option to use any other product or service and the right to withdraw consent at any time, whereas for an employee the consequence of refusing or withdrawing consent could be termination of employment. It is understood that there is a requirement for employee data to be collected and that consent does not work the same way as it does in the case of a consumer.
The bill could ensure that employers have some responsibility towards the data they collect from employees, such as ensuring that the data are used only for the purposes for which they were collected, that employees know how long their data will be retained, and that they know whether the data are being processed by third parties. It is also worth mentioning that the Indian government is India's largest employer, spanning a variety of agencies and public enterprises.
Concerns highlighted by JPC Members
Coming back to the members of the JPC who moved dissent notes, specifically with regard to governmental exemptions: Jairam Ramesh filed a dissent note, and many other opposition members followed suit. While Jairam Ramesh praised the JPC's functioning, he disagreed with certain aspects of the report. According to him, the 2019 bill is designed in a manner where the right to privacy is given importance only in the case of private activities. He raised concerns regarding the unbridled powers given to the government to exempt itself from any of the provisions.
The amendment suggested by him would require parliamentary approval before any exemption could take effect. He also added that Section 12 of the bill, which provides certain scenarios where consent is not needed for the processing of personal data, should have been made 'less sweeping'. Gaurav Gogoi's note stated that the exemptions would create a surveillance state and similarly criticised Sections 12 and 35 of the bill. He also mentioned that there ought to be parliamentary oversight of the exemptions provided in the bill.
On the same issue, Congress leader Manish Tiwari noted that the bill creates 'parallel universes' - one for the private sector, which needs to be compliant, and the other for the State, which can exempt itself. He has opposed the entire bill, stating that it has an "inherent design flaw". He has raised specific objections to 37 clauses and stated that any blanket exemption to the state goes against the Puttaswamy judgement.
In their joint dissent note, Derek O'Brien and Mahua Moitra said that there is a lack of adequate safeguards to protect the data principals' privacy and that there was a lack of time and opportunity for stakeholder consultations. They also pointed out that the independence of the DPA will cease to exist under the present provision allowing the government the power to choose its members and chairperson. Amar Patnaik objected to the lack of inclusion of state-level authorities in the bill; without such bodies, he says, there would be federal override.
Conclusion
While a number of issues have been highlighted by civil society, members of the JPC, and the media, the new version of the bill also needs to take into account the shifts that have taken place in view of the pandemic. It should take into consideration the new data collection practices that have emerged during the pandemic, be comprehensive, and leave very few provisions to be decided later by the Rules.
Comments to the draft Motor Vehicle Aggregators Scheme, 2021
CIS, established in Bengaluru in 2008 as a non-profit organisation, undertakes interdisciplinary research on internet and digital technologies from public policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and regulatory practices around internet, technology, and society in India, and elsewhere.
CIS is grateful for the opportunity to submit its comments to the draft Scheme. Please find below our thematically organised comments.
Click here to read more.
Decoding India’s Central Bank Digital Currency (CBDC)
In her budget speech presented in the Parliament on 1 February 2022, the Finance Minister of India – Nirmala Sitharaman – announced that India will launch its own Central Bank Digital Currency (CBDC) from the financial year 2022–23. The lack of information regarding the Indian CBDC project has resulted in limited discussions in the public sphere. This article is an attempt to briefly discuss the basics of CBDCs such as the definition, necessity, risks, models, and associated technologies so as to shed more light on India’s CBDC project.
1. What is a CBDC?
Before delving into the various aspects of a CBDC, we must first define it. A CBDC in its simplest form has been described by the RBI as “the same as currency issued by a central bank but [which] takes a different form than paper (or polymer). It is sovereign currency in an electronic form and it would appear as liability (currency in circulation) on a central bank’s balance sheet. The underlying technology, form and use of a CBDC can be moulded for specific requirements. CBDCs should be exchangeable at par with cash.”
2. Policy Goals
Launching any CBDC involves the setting up of infrastructure, which comes with notable costs. It is therefore imperative that the CBDC provides significant advantages that can justify the investment it entails. Some of the major arguments in favour of CBDCs and their relevance in the Indian context are as follows.
Financial Inclusion: In countries with underdeveloped banking and payment systems, proponents believe that CBDCs can boost financial inclusion through the provision of basic accounts and an electronic payment system operated by the central bank. However, financial inclusion may not be a powerful motive in India, where, according to some surveys, at least one member in 99% of rural and urban households has a bank account. Even the US Federal Reserve recognises that further research is needed to assess the potential of CBDCs to expand financial inclusion, especially among underserved and lower-income households.
Access to Payments: It is claimed that CBDCs provide scope for improving the existing payments landscape by offering fast and efficient payment services to users. Further, supporters claim that a well-designed, robust, open CBDC platform could enable a wide variety of firms to compete to offer payment services. It could also enable them to innovate and generate new capabilities to meet the evolving needs of an increasingly digitalised economy. However, it is not yet clear exactly how CBDCs would achieve this objective and whether there would be any noticeable improvements in the payment systems space in India, which already boasts a fairly advanced and well-developed payment systems market.
Increased System Resilience: Countries with a highly developed digital payments landscape are aware of their reliance on electronic payment systems. The operational resilience of these systems is of critical importance to the entire payments landscape. The CBDC would not only act as a backup to existing payment systems in case of an emergency but also reduce the credit risk and liquidity risk, i.e., the risk that payment system providers will turn insolvent and run out of liquidity. Such risks can also be mitigated through robust regulatory supervision of the entities in the payment systems space.
Increasing Competition: A CBDC has the potential to increase competition in the country’s payments sector in two main ways: (i) directly, by providing an alternative payment system that competes with existing private players, and (ii) by providing an open platform for private players, thereby reducing entry barriers for newer players offering more innovative services at lower costs.
Addressing Illicit Transactions: Cash offers a level of anonymity that is not always available with existing payment systems. If a CBDC offers the same level of anonymity as cash, it would pose a greater CFT/AML (Combating the Financing of Terrorism/Anti-Money Laundering) risk. However, if appropriate CFT/AML requirements are built into the design of the CBDC, it could address some of the concerns regarding its usage in illegal transactions. Such CFT/AML requirements are already being followed by existing banks and payment system providers.
Reduced Costs: If a CBDC is adopted to the extent that it begins to act as a substitute for cash, it could allow the central bank to print less currency, thereby saving costs on printing, transporting, storing, and distributing currency. Such a cost reduction is not exclusive to CBDCs but can also be achieved through the widespread adoption of existing payment systems.
Reduction in Private Virtual Currencies (VCs): Central banks are of the view that a widely used CBDC will provide users with an alternative to existing private cryptocurrencies and thereby limit various risks, including credit risks, volatility risks, risk of fraud, etc. However, if a CBDC does not offer the same level of anonymity or potential for high returns on investment that is available with existing VCs, it may not be considered an attractive alternative.
Serving Future Needs: Several central banks see the potential for “programmable money” that can be used to conduct transactions automatically on the fulfilment of certain conditions, rules, or events. Such a feature may be used for automatic routing of tax payments to authorities at the point of sale, shares programmed to pay dividends directly to shareholders, etc. Specific programmable CBDCs can also be issued for certain types of payments such as toward subway fees, shared bike fees, or bus fares. This characteristic of CBDCs has huge potential in India in terms of delivery of various subsidies.
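To make the idea of "programmable money" more concrete, the following is a minimal, purely illustrative sketch (in Python) of a payment rule that automatically routes a tax component to the authority at the point of sale. All names (wallets, rate, functions) are hypothetical and do not correspond to any announced CBDC design or API.

```python
# Illustrative sketch only: a hypothetical "programmable payment" rule that
# automatically routes a tax component to the authority at the point of sale.
# None of these names correspond to an actual CBDC system or API.

from dataclasses import dataclass


@dataclass
class Transfer:
    payer: str
    payee: str
    amount: float  # in CBDC units


def settle_with_tax(purchase: Transfer, tax_rate: float, tax_authority: str) -> list[Transfer]:
    """Split a purchase into a merchant payment and an automatic tax remittance."""
    tax_due = round(purchase.amount * tax_rate, 2)
    return [
        Transfer(purchase.payer, purchase.payee, purchase.amount - tax_due),
        Transfer(purchase.payer, tax_authority, tax_due),  # routed automatically
    ]


if __name__ == "__main__":
    sale = Transfer(payer="buyer-wallet", payee="merchant-wallet", amount=100.0)
    for leg in settle_with_tax(sale, tax_rate=0.18, tax_authority="tax-authority-wallet"):
        print(leg)
```

The same conditional pattern could, in principle, extend to the other examples mentioned above, such as purpose-restricted units for transit fares or targeted subsidy delivery.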
3. Potential Risks
As with most things, CBDCs have certain drawbacks and risks that need to be considered and mitigated in the designing phase itself. A successful and widely adopted CBDC could change the structure and functions of various stakeholders and institutions in the economy.
Both private and public sector banks rely on bank deposits to fund their loan activities. Since bank deposits offer a safe and risk-free way to park one’s savings, a large number of people utilise this facility, thereby providing banks with a large pool of funds that is utilised for lending activities. A CBDC could offer the public a safer alternative to bank deposits since it eliminates even the small risk of the bank becoming insolvent, making it more secure than regular bank deposits. A widely accepted CBDC could adversely affect bank deposits, thereby reducing the availability of funds for lending by banks and adversely affecting credit facilities in the economy. Further, since a CBDC is a safer form of money, in times of stress, people may opt to convert funds stored in banks into safer CBDCs, which might cause a bank run. However, these issues can be mitigated by making CBDC holdings non-interest-bearing, thus reducing their attractiveness as an alternative to bank deposits. Further, in times of monetary stress, the central bank could impose restrictions on the amount of bank money that can be converted into the CBDC, just as it has done in the case of cash withdrawals from specific banks when it finds that such banks are undergoing extreme financial stress.
If a significantly large portion of a country’s population adopts a private digital currency, it could seriously hamper the ability of the central bank to carry out several crucial functions, such as implementing the monetary policy, controlling inflation, etc.
It may be safe to say that the question of how CBDCs may affect the economy in general and more specifically, the central bank’s ability to implement monetary policy, seigniorage, financial stability, etc. requires further research and widespread consultation to mitigate any potential risk factors.
4. The Role of the Central Bank in a CBDC
The next issue that requires attention when dealing with CBDCs is the role and level of involvement of the central bank. This would depend not only on the number of additional functions that the central bank is comfortable adopting but also on the maturity of the fintech ecosystem in the country. Broadly speaking, there are three basic models concerning the role of the central bank in CBDCs:
(i) Unilateral CBDCs: Where the central bank performs all the functions right from issuing the CBDC to carrying out and verifying transactions and also dealing with the users by maintaining their accounts.
(ii) Hybrid or Intermediate Model: In this model, the CBDCs are issued by the central bank, but private firms carry out some of the other functions such as providing wallets to end users, verifying transactions, updating ledgers, etc. These private entities will be regulated by the central bank to ensure that there is sufficient supervision.
(iii) Synthetic CBDCs: In this model, the CBDC itself is not issued by the central bank but by private players. However, these CBDCs are backed by central bank liabilities, thus providing the sovereign stability that is the hallmark of a CBDC.
The above models could also be modified to suit the needs of the economy; e.g., the second model could be modified so that user-facing functions are offered not only by private players but also by the central bank or another public sector enterprise. Such a scenario has the potential to offer services at a reduced price (perhaps with reduced functionalities), thereby fulfilling the financial inclusion and cost reduction policy goals mentioned above.
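For readers who prefer a compact view, the short Python sketch below restates the division of functions under the three models described above as a simple lookup table. The role assignments merely paraphrase the descriptions in this article; they are not taken from any official RBI or other design document.

```python
# Illustrative mapping of who performs which CBDC function under the three
# models described above. The assignments restate this article's descriptions
# and are not drawn from any official design document.

CBDC_MODELS = {
    "unilateral": {
        "issuance": "central bank",
        "transaction_verification": "central bank",
        "user_accounts_and_wallets": "central bank",
    },
    "hybrid": {
        "issuance": "central bank",
        "transaction_verification": "regulated private firms",
        "user_accounts_and_wallets": "regulated private firms",
    },
    "synthetic": {
        "issuance": "private players (backed by central bank liabilities)",
        "transaction_verification": "private players",
        "user_accounts_and_wallets": "private players",
    },
}


def who_does(model: str, function: str) -> str:
    """Return the actor responsible for a given function under a given model."""
    return CBDC_MODELS[model][function]


if __name__ == "__main__":
    print(who_does("hybrid", "user_accounts_and_wallets"))  # regulated private firms
```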
5. Role of Blockchain Technology
While it is true that the entire concept of a CBDC evolved from cryptocurrencies and that popular cryptocurrencies like Bitcoin and Ether are based on blockchain technology, recent research seems to suggest that blockchain may not necessarily be the default technology for a CBDC. Additionally, different jurisdictions have their own views on the merits and demerits of this technology, for example, the Bahamas and the Eastern Caribbean Central Bank have DLT-based systems; however, China has decided that DLT-based systems do not have adequate capacity to process transactions and store data to meet its system requirements.
Similarly, a project by the Massachusetts Institute of Technology (MIT) Digital Currency Initiative and the Federal Reserve Bank of Boston, titled “Project Hamilton”, which explores the CBDC design space and its technical challenges and opportunities, has surmised that a distributed ledger operating under the jurisdiction of different actors is not necessarily crucial. It found that, even if controlled by a single actor, a DLT architecture has downsides such as performance bottlenecks and significantly reduced transaction throughput and scalability compared to other options.
6. Conclusion
Although a CBDC potentially offers some advantages, launching one is an expensive and complicated proposition, requiring in-depth research and detailed analyses of a large number of issues, only some of which have been highlighted here. Therefore, before launching a CBDC, central banks issue white papers and consult with the public in addition to major stakeholders, conduct pilot projects, etc. to ensure that the issue is analysed from all possible angles. Although the Reserve Bank of India is examining various issues such as whether the CBDC would be retail or wholesale, the validation mechanism, the underlying technology to be used, distribution architecture, degree of anonymity, etc., it has not yet released any consultation papers or confirmed the completion of any pilot programmes for the CBDC project.
It is, therefore, unclear whether there has been any detailed cost–benefit analysis by the government or the RBI regarding its feasibility and benefits over existing payment systems and whether such benefits justify the costs of investing in a CBDC. For example, several of the potential advantages discussed here, such as financial inclusion and improved payment systems, may not be relevant in the Indian context, while others, such as reduced costs and a reduction in illegal transactions, may be achieved by improving the existing systems. It must be noted that the current system of distribution of central bank money has worked well over the years, and any systemic changes should be made only if the potential upside justifies such fundamental changes.
The Government of India has already announced the launch of the Indian CBDC in early 2023, but the lack of public consultation on such an important project is a matter of concern. The last time the RBI took a major decision in the crypto space without consulting stakeholders was when it banned financial institutions from having any dealings with crypto entities. On that occasion, the circular imposing the ban was struck down by the Supreme Court as violating the fundamental right to trade and profession. It is, therefore, imperative that the government and the Reserve Bank conduct wide-ranging consultations with experts and the public to conduct a detailed and thorough cost–benefit analysis to determine the feasibility of such a project before deciding on the launch of an Indian CBDC.
Response to the Pegasus Questionnaire issued by the SC Technical Committee
The questionnaire had 11 questions, and the responses had to be submitted through an online form, which was available here. The last date for submitting the response was March 31, 2022. CIS submitted the following responses to the questions in the questionnaire. Access the Response to the Questionnaire.
Rethinking Acquisition of Digital Devices by Law Enforcement Agencies
Read the article originally published in RGNUL Student Research Review (RSRR) Journal
Abstract
The Criminal Procedure Code was created in the 1970s, when the right to privacy was largely unacknowledged. Following the Puttaswamy I (2017) judgement of the Supreme Court affirming the right to privacy, these antiquated codes must be re-evaluated. Today, the police can acquire digital devices through summons and gain direct access to a person’s life, despite the summons mechanism having been intended for targeted, narrow enquiries. Once in possession of a device, the police attempt to circumvent the right against self-incrimination by demanding biometric passwords, arguing that the right does not cover biometric information. However, due to the extent of information available on digital devices, courts ought to be cautious and strive to limit the power of the police to compel such disclosures, taking into consideration the right to privacy judgement.
Keywords: Privacy, Criminal Procedural Law, CrPc, Constitutional Law
Introduction
New challenges confront the Indian criminal investigation framework, particularly in the context of law enforcement agencies (LEAs) acquiring digital devices and their passwords. Criminal procedure codes delimiting police authority and procedures were created before the widespread use of digital devices and are no longer pertinent to the modern age due to the magnitude of information available on a single device. A single device could provide more information to LEAs than a complete search of a person’s home; yet, the acquisition of a digital device is not treated with the severity and caution it deserves. Following the affirmation of the right to privacy in Puttaswamy I (2017), criminal procedure codes must be revamped, taking into consideration that the acquisition of a person’s digital device constitutes a major infringement on their right to privacy.
Acquisition of digital devices by LEAs through summons
Section 91 of the Criminal Procedure Code (CrPc) grants powers to a court or police officer in charge of a police station to compel a person to produce any form of document or ‘thing’ necessary and desirable to a criminal investigation. In Rama Krishna v State, ‘necessary’ and ‘desirable’ have been interpreted as any piece of evidence relevant to the investigation or a link in the chain of evidence. Abhinav Sekhri, a criminal law litigator and writer, has argued that the wide wording of this section allows summons to be directed towards the retrieval of specific digital devices.
As summons are target-specific, the section has minimal safeguards. However, several issues arise in the context of summons regarding digital devices. In the current day, access to a user’s personal device can provide comprehensive insight into their life and personality due to the vast amounts of private and personal information stored on it. In Riley v California, the Supreme Court of the United States (SCOTUS) observed that due to the nature of the content present on digital devices, summons for them are equivalent to a roving search, i.e., demanding the simultaneous production of all contents of the home, bank records, call records, and lockers. The Riley decision correctly highlights the need for courts to recognise that digital devices ought to be treated distinctly compared to other forms of physical evidence due to the repository of information stored on digital devices.
The burden the state must meet in order to issue summons is low, as the relevancy requirement is easily satisfied. As noted in Riley, police must identify which evidence on a device is relevant; yet, due to the sheer amount of data on phones, it is very easy for police to claim that there will surely be some connection between the content on the device and the case. Given the wide range of offences available for Indian LEAs to cite, it is also easy for them to argue that the content on the device is relevant to any number of possible offences. LEAs rarely face consequences for charging the accused with a huge roster of offences – even if many of them are baseless – leaving the system prone to abuse. The Indian Supreme Court in its judgement in Canara Bank noted that the burden of proof must be higher for LEAs when investigations violate the right to privacy. Tarun Krishnakumar notes that the trickle-down effect of Puttaswamy I will lead to new privacy challenges with regard to summons to appear in court. Puttaswamy I will provide the bedrock and constitutional framework within which future challenges to the criminal process will be undertaken. It is important for courts to recognise the transformative potential of the Puttaswamy judgement to help ensure that the right to privacy of citizens is safeguarded. The colonial logic of policing – wherein criminal procedure law was merely a tool to maximise the interest of the state at the cost of the people – must be abandoned. Courts ought to devise a framework under Section 91 to ensure that summons are narrowly framed to target specific information or content within digital devices. Additionally, summons for digital devices should be issued by a judicial authority and not a police authority. Prior judicial warrants would require LEAs to demonstrate their need for the digital device; on estimating the impact on privacy, the authority can issue a suitably tailored summons. Currently, the only consideration is whether the item will furnish evidence relevant to the investigation; however, judges ought to balance the need for the digital device in the LEA’s investigation against the user’s right to privacy, dignity, and autonomy.
Puttaswamy I provides a triple test encompassing legality, necessity, and proportionality to test privacy claims. Legality requires that the measure be prescribed by law; necessity analyses whether it is the least restrictive means available to the state; and proportionality checks whether the objective pursued by the measure is proportionate to the degree of infringement of the right. The relevance standard, as mentioned before, is inadequate as it does not provide enough safeguards against abuse. The police can issue summons based on the slightest of suspicions and thus get access to a digital device, following which they can conduct a roving enquiry of the device to find evidence of any other offence, unrelated to the original cause of suspicion.
Unilateral police summons of digital devices cannot pass the triple test, as they are grossly disproportionate and lack any form of safeguard against police overreach. The current system has no mechanism for overseeing the LEAs; as long as LEAs themselves are of the view that they require the device, they can acquire it. In Riley, SCOTUS has already held that the warrantless search of digital devices constitutes a violation of the right to privacy. India ought to also adopt a requirement of a prior judicial warrant for the procurement of devices by LEAs. A re-imagined criminal process would have to abide by the triple test, in particular proportionality, wherein the benefit claimed by the state ought not to be disproportionate to the impact on the fundamental right to privacy; further, a framework must be proposed to provide safeguards against abuse.
Compelling the production of passwords of devices
In police investigations, gaining possession of a physical device is merely the first step in acquiring the data on the device, as the LEAs still require the passcodes needed to unlock the device. LEAs compelling the production of passcodes to gain access to potentially incriminating data raises obvious questions regarding the right against self-incrimination; however, in the context of digital devices, several privacy issues may crop up as well.
In Kathi Kalu Oghad, the SC held that compelling the production of fingerprints of an accused person to compare them with fingerprints discovered by the LEA in the course of their investigation does not violate the right to protection against self-incrimination of the accused. It has been argued that the ratio in the judgement prohibits the compelling of disclosure of passwords and biometrics for unlocking devices because Kathi Kalu Oghad only dealt with the production of fingerprints in order to compare the fingerprints with pre-existing evidence, as opposed to unlocking new evidence by utilising the fingerprint. However, the judgement deals with self-incrimination and does not address any privacy issues.
The right against self-incrimination approach alone may not be enough to resolve all concerns. Firstly, there may be varying levels of protection provided to different forms of password protection on digital devices; text- and pattern-based passcodes are inarguably protected under Art. 20(3) of the Constitution, whereas the protection of biometrics-based passcodes relies upon the correct interpretation of the Kathi Kalu Oghad precedent. Secondly, Art. 20(3) only protects the accused in investigations; it does not apply when the digital devices of persons who are not accused are acquired by LEAs and the passcodes of those devices are demanded.
Therefore, considering the aforementioned points, it is pertinent to remember that the right against self-incrimination does not exist in a vacuum separate from privacy. It originates from the concept of decisional autonomy – the right of individuals to make decisions about matters intimate to their life without interference from the state and society. Puttaswamy I observed that decisional autonomy is the bedrock of the right to privacy, as privacy allows an individual to make these intimate decisions away from the glare of society and/or the state. This has heightened importance in this context, as interference with such autonomy could lead to the person in question facing criminal prosecution. The SC in Selvi v Karnataka and Puttaswamy I has repeatedly affirmed that the right against self-incrimination and the right to privacy are linked concepts, with the court observing that the right to remain silent is an integral aspect of decisional autonomy.
In Virendra Khanna, the Karnataka High Court (HC) dealt with the privacy and self-incrimination concerns caused by LEAs compelling the disclosure of passwords. The HC brushed aside concerns related to privacy by noting that the right to privacy is not absolute, that state interest and the protection of law and order are exceptions to it (para 5.11), and that the unlawful disclosure of material to third parties could be an actionable wrong (para 15). The court’s interpretation of privacy effectively provides a free pass for the police to interfere with the right to privacy under the pretext of a criminal investigation. This conception of privacy is inadequate, as the issue of proportionality is avoided and the court does not attempt to ensure that the interference is proportionate to the outcome.
US courts also see the compelling of production of passcodes as an issue of self-incrimination as well as privacy. In its judgement in Application for a Search Warrant, a US court observed that compelling the disclosure of passcodes existed at an intersection of the right to privacy and self-incrimination; the right against self-incrimination serves to protect the privacy interests of suspects.
Disclosure of passwords to digital devices amounts to an intrusion into the privacy of the suspect, as the collective contents of the digital device effectively provide LEAs with a means to observe a person’s mind and identity. Police investigative techniques cannot override fundamental rights and must respect the personal autonomy of suspects – particularly, the choice between silence and speech. Through the production of passwords, LEAs can effectively get a snapshot of a suspect’s mind. This is analogous to the polygraph and narco-analysis tests struck down as unconstitutional by the SC in Selvi because they violate decisional autonomy.
As Sekhri has noted, a criminal process that reflects the aspirations of the Puttaswamy judgement would require LEAs to first explain with reasonable detail the material they wish to find in the digital devices. Secondly, they must provide a timeline for the investigation to ensure that individuals are not subjected to inexhaustible investigations, with police roving through their devices indefinitely. Thirdly, such a criminal process must demand a higher burden to be discharged by the state where the privacy of the individual is infringed upon. These aspirations should form the bedrock of a system of judicial warrants that LEAs ought to be required to comply with if they wish to compel the disclosure of passwords from individuals. The framework proposed above is similar to the Virendra Khanna guidelines in that it provides a system of checks and balances to ensure that the intrusion on privacy is carried out proportionately; additionally, it would require LEAs to show a real need to access the device. The independent eyes of a judicial magistrate provide a mechanism of oversight and a check against abuse of power by LEAs.
Conclusion
The criminal law apparatus is the most coercive power available to the state, and, therefore, privacy rights will become meaningless unless they can withstand it. Several criminal procedures in the country are rooted in colonial statutes, in which the rights of the populace being policed were never a consideration; hence, a radical shift is required. Post-1947, and especially post-Puttaswamy, ignoring and refusing to respect the rights of the population can no longer be justified, and significant reformulation is necessary to guarantee meaningful protections to device owners. There is a need to ensure that the rights of individuals are protected, especially when the motivation for their infringement is the supposedly noble intentions of the criminal justice system. Failing to defend the right to privacy in these moments would be an invitation for the power of the state to expand and inevitably become absolute.
CCTVs in Public Spaces and the Data Protection Bill, 2021
The article by Anamika Kundu and Digvijay S. Chaudhary was originally published by RGNUL Student Research Review on April 20, 2022
Introduction
In recent times, Indian cities have seen an expansion of state-deployed CCTV cameras. According to a recent report, Delhi was considered the most surveilled city in the world in terms of CCTVs deployed, surpassing even the most surveilled cities in China. Delhi was not the only Indian city on that list; Chennai and Mumbai also featured. In Hyderabad as well, the development of a Command and Control Centre aims to link the city’s surveillance infrastructure in real time. Even though studies have shown that there is little correlation between CCTVs and crime control, the deployment of CCTV cameras has been justified on the basis of national security and crime deterrence. Such activity entails the collection and retention of audio-visual/visual information of all individuals frequenting spaces where CCTV cameras are deployed. This information could be used to identify them (directly or indirectly) based on their looks or other attributes. Potential risks associated with the misuse and processing of such personal data also arise. These risks include large-scale profiling, criminal abuse (law enforcement misusing CCTV information for personal gain), and discriminatory targeting (law enforcement disproportionately focusing on a particular group of people). As these devices capture the personal data of individuals, this article examines the data protection safeguards available to data principals against CCTV surveillance employed by the State in a public space under the proposed Data Protection Bill, 2021 (the “DPB”).
Safeguards Available Under the Data Protection Bill, 2021
To deploy CCTV surveillance, the measures and compliance requirements listed under the DPB have to be followed. Obligations of data fiduciaries under Chapter II, such as consent (clause 11), the notice requirement (clause 7), and fair and reasonable processing (clause 5), are common to all data processing entities across a variety of activities. Similarly, as the DPB follows the principles of data minimisation (clause 6), storage limitation (clause 9), purpose limitation (clause 5), lawful and fair processing (clause 4), transparency (clause 23), and privacy by design (clause 22), these safeguards too are common to all data processing entities/activities. If a data fiduciary processes the personal data of children, it has to comply with the standards stated under clause 16.
Under the DPB, compliance differs on the basis of the grounds and purpose of data processing. As such, if compliance standards differ, so does the availability of safeguards under the DPB. Of relevance to this article, there are three standards of compliance under the DPB wherein the safeguards available to a data principal differ. First, cases which fall under Chapter III and hence do not require consent; Chapter III lists grounds for the processing of personal data without consent. Second, cases which fall under the exemption clauses in Chapter VIII; in such cases, the DPB or some of its provisions would be inapplicable. Clause 35 under Chapter VIII gives power to the Central Government to exempt any agency from the application of the DPB, while clause 36 exempts certain provisions for certain kinds of processing of personal data. Third, cases which do not fall under either of the above Chapters; in such cases, all safeguards available under the DPB would be available to the data principals. Consequently, the safeguards available to data principals under each of these standards are different. We will go through each of these separately.
First, if the grounds for processing CCTV information are such that they fall under the scope of Chapter III of the DPB, wherein the consent requirement is done away with, then the notice requirement has to reflect such purpose, meaning that even if consent is not necessary in certain cases, other requirements under the DPB would still apply. Here, we must note that CCTV deployment by the state on such a large scale may be justified on the basis of the conditions stated under clauses 12 and 14 of the DPB – specifically, the performance of a state function authorised by law, and public interest. The requirement under clause 12 of being “authorised by law” simply means that the state function should have legal backing. Deployment of CCTVs is most likely to fall under clause 12, as various states have enacted legislation providing for CCTV deployment in the name of public safety. As a result, even if clause 12 takes away the requirement of consent in certain cases, data principals should be able to exercise all rights accorded to them under the DPB (Chapter V) except the right to data portability under clause 19.
Second, the processing of personal data via CCTVs by government agencies could be exempted from the DPB under clause 35 in certain cases. Another exemption that is particularly concerning with regard to the use of CCTVs is the one provided under clause 36(a). Clause 36(a) says that the provisions of Chapters II-VII would not apply where data is processed in the interest of the prevention, detection, investigation, and prosecution of any offence under the law. Chapters II-VII govern the obligations of data fiduciaries, the grounds where consent would not be required, the personal data of children, the rights of data principals, transparency and accountability measures, and restrictions on the transfer of personal data outside India, respectively. In these cases, the requirement of fair and reasonable processing under clause 5 would also not apply. As a broad justification provided for the deployment of CCTVs by the government is crime control, it is possible that the clause 36(a) justification could be used to exempt the processing of CCTV footage from the above-mentioned safeguards.
From the above discussion, the following can be concluded. First, if the grounds of processing fall under Chapter III, then the standards of fair and reasonable processing, the notice requirement, and all rights except the right to data portability under clause 19 would be available to data principals. Second, if the grounds of processing fall under clause 36, then the consent requirement, the notice requirement, and the rights under the DPB would be unavailable, as that clause mandates the non-application of those chapters; in such a case, even the requirement of processing in a fair and reasonable manner stands suspended. Third, if the grounds for processing CCTV information do not fall under Chapter III, then all obligations listed under Chapter II would have to be followed, and the data principal would be able to exercise all the rights available under Chapter V of the DPB.
Constitutional Standards
When the Supreme Court recognised privacy as a fundamental right in Puttaswamy v. Union of India (“Puttaswamy”), it located the principles of informed consent and purpose limitation as central to informational privacy. It recognised that privacy inheres not in spaces but in an individual. It also recognised that privacy is not an absolute right and that certain restrictions may be imposed on its exercise. Before listing the constitutional standards that activities infringing privacy must adhere to, it is important to answer whether there exists a reasonable expectation of privacy in footage captured by State-deployed CCTVs in public spaces.
In Puttaswamy, the court recognised that privacy is not denuded in public spaces. Writing for the plurality judgement, Chandrachud J. recognised that the notion of a reasonable expectation of privacy has elements both of a subjective and objective nature. Defining these concepts, he writes, “Privacy at a subjective level is a reflection of those areas where an individual desire to be left alone. On an objective plane, privacy is defined by those constitutional values which shape the content of the protected zone where the individual ought to be left alone…hence while the individual is entitled to a zone of privacy, its extent is based not only on the subjective expectation of the individual but on an objective principle which defines a reasonable expectation.” Note how in the above sentences, the plurality judgement recognises “a reasonable expectation” to be inherent in “constitutional values”. This is important as the meaning of what’s reasonable is to be constituted according to constitutional values and not societal norms. A second consideration that the phrase “reasonable expectation of privacy” requires is that an individual’s reasonable expectation is allied to the purpose for which the information is provided, as held in the case of Hyderabad v. Canara Bank (“Canara Bank”). Finally, the third consideration in defining the phrase is that it is context dependent. For example, in the case of In the matter of an application by JR38 for Judicial Review (Northern Ireland) 242 (2015) (link here), the UK Supreme Court was faced with a scenario where the police published the CCTV footage of the appellant involved in riotous behaviour. The question before the court was: “Whether the publication of photographs by the police to identify a young person suspected of being involved in riotous behaviour and attempted criminal damage can ever be a necessary and proportionate interference with that person’s article 8 [privacy] rights?” The majority held that there was no reasonable expectation of privacy in the case because of the nature of the criminal activity the appellant was involved in. However, the majority’s formulation of this conclusion was based on the reasoning that “expectation of privacy” was dependent on the “identification” purpose of the police. The court stated, “Thus, if the photographs had been published for some reason other than identification, the position would have been different and might well have engaged his rights to respect for his private life within article 8.1”. Therefore, as the purpose of publishing the footage was “identification” of the wrongdoer, the reasonable expectation of privacy stood excluded. The Canara Bank case was relied on by the SC in Puttaswamy. The plurality judgement in Puttaswamy also quoted the above paragraphs from the UK Supreme Court judgement.
Finally, the SC in the Aadhaar case laid down the factors of “reasonable expectation of privacy.” Relying on those factors, the Supreme Court observed that demographic information and photographs do not raise a reasonable expectation of privacy. It further held that face photographs for the purpose of identification are not covered by a reasonable expectation of privacy. As this author has recognised, the majority in the Aadhaar case misconstrued the “reasonable expectation of privacy” to lie not in constitutional values, as held in Puttaswamy, but in societal norms. Even with the misapplication of the Puttaswamy principles by the majority in Aadhaar, it is clear that the exclusion of a “reasonable expectation of privacy” in face photographs is valid only for the purpose of “identification”. For purposes other than “identification”, there should exist a reasonable expectation of privacy in CCTV footage. Having recognised the existence of a “reasonable expectation of privacy” in CCTV footage, let us see how the safeguards mentioned under the DPB stand up to the constitutional standards of privacy laid down in Puttaswamy.
The bench in Puttaswamy located privacy not only in Article 21 but in the entirety of Part III of the Indian Constitution. Where a transgression of privacy relates to different provisions under Part III, the tests evolved under those Articles would apply. Puttaswamy recognised that national security and crime control are legitimate state objectives. However, it also recognised that any limitation on the right must satisfy the proportionality test, which requires a legitimate state aim, rational nexus, necessity, and a balancing of interests. Infringement of the right to privacy occurs under the first and second standards. The first requirement of proportionality stands justified, as national security and crime control have been recognised to be legitimate state objectives. However, it must be noted that the EU Guidelines on Processing of Personal Data through Video Devices state that the mere purpose of “safety” or “for your safety” is not sufficiently specific and is contrary to the principle that personal data shall be processed lawfully, fairly, and in a transparent manner in relation to the data subject. The second requirement is a rational nexus. As stated above, there is little correlation between crime control and surveillance measures. Even if the state justifies a rational nexus between the state aim and the action employed, it is at the necessity stage of the proportionality test that CCTV surveillance measures fail (as explained by this author). Necessity requires us to draw up a list of alternatives and their impact on an individual, and then perform a balancing analysis with regard to those alternatives. Here, judicial scrutiny of the exemption order under clause 35 is a viable alternative that respects individual rights while not interfering with the state’s aim.
Conclusion
Informed consent and purpose limitation were stated to be central principles of informational privacy in Puttaswamy. Among the three standards we identified, the principles of informed consent and purpose limitation remain available only in the third standard. In the first standard, even though the requirement of consent becomes unavailable, the principle of purpose limitation would still be applicable to the processing of such data. The second standard is of particular concern, wherein neither of those principles is available to data principals. It is worth mentioning here that in large-scale monitoring activities such as CCTV surveillance, the safeguards the DPB lists out would inevitably have an implementation flaw. The reason is that, in scenarios where individuals refuse consent for large-scale CCTV monitoring, what alternatives would the government offer to those individuals? Practically, CCTV surveillance would fall under the clause 12 standard, where consent would not be required. Even in those cases, would the notice requirement safeguard be diminished to “you are under surveillance” notices? When we talk about the exercise of rights available under the DPB, how would an individual effectively exercise their rights when the data processing is not limited to a particular individual? These questions arise because the safeguards under the DPB (and data protection laws in general) are based on individualistic notions of privacy. Interestingly, individual use cases of CCTVs have also increased with the increase in state use of CCTVs. Deployment of CCTVs for personal or domestic purposes would be exempt from the above-mentioned compliances, as that would fall under the exemption provision of clause 36(d). Two additional concerns arise in relation to the processing of data collected by CCTVs – the JPC report’s inclusion of Non-Personal Data (“NPD”) within the ambit of the DPB, and the government’s plan to develop a National Automated Facial Recognition System (“AFRS”). A significant part of the data collected by CCTVs would fall within the ambit of NPD. With the JPC’s recommendation, it will be interesting to follow the processing standards for NPD under the DPB. The AFRS has been imagined as a national database of photographs gathered from various agencies to be used in conjunction with facial recognition technology. The use of facial recognition technology with CCTV cameras raises concerns surrounding biometric data and the risks of large-scale profiling. Indeed, clause 27 of the DPB reflects this risk and mandates a data protection impact assessment to be undertaken by the data fiduciary for processing involving new technologies, large-scale profiling, or the use of biometric data by such technologies; however, the DPB does not define what “new technology” means. Concerns around biometric data are outside the scope of the present article; however, it would be interesting to look at how the use of facial recognition technology with CCTVs could impact the safeguards under the DPB.
Comments to the Draft National Health Data Management Policy 2.0
This is a joint submission on behalf of (i) Access Now, (ii) Article 21, (iii) Centre for New Economic Studies, (iv) Centre for Internet and Society, (v) Internet Freedom Foundation, (vi) Centre for Justice, Law and Society at Jindal Global Law School, (vii) Priyam Lizmary Cherian, Advocate, High Court of Delhi, (ix) Swasti-Health Catalyst, (x) Population Fund of India.
At the outset, we would like to thank the National Health Authority (NHA) for inviting public comments on the draft version of the National Health Data Management Policy 2.0 (NDHM Policy 2.0) (Policy). We have not provided comments on each section/clause, but have instead highlighted specific broad concerns which we believe are essential to address prior to the launch of NDHM Policy 2.0.
Read on to view the full submission here
Issue Brief_Regulating Crypto-asset advertising in India
CIS Issue Brief on regulating Crypto-asset advertising in India
Over the past decade, crypto-assets have established themselves within the digital global zeitgeist. Crypto-asset (alternatively referred to as cryptocurrency) trading and investments continue to skyrocket, with centralised crypto exchanges seeing upwards of USD 14 trillion (or around INR 1086 trillion) in trading volume.
One of the key elements behind this exponential growth and embedding of crypto-assets into the global cultural consciousness has been the marketing and advertising efforts of crypto-asset providers and crypto-asset-related service providers. In India alone, crypto-exchange advertisements have permeated into all forms of media and seem to be increasing as the market continues to mature. At the same time, however, financial regulators such as the RBI have consistently pointed out concerns associated with crypto-assets, even going so far as to warn consumers and investors of the dangers that may arise from investing in crypto-assets through a multitude of circulars.
In light of this, we analyse the regulations governing crypto-assets in India by examining the potential and actual limitations posed by them. We then compare them with the regulations governing the advertising of another financial instrument, mutual funds. Finally, we perform a comparative analysis of crypto-asset advertising regulations in four jurisdictions - the EU, Singapore, Spain, and the United Kingdom - and identify clear and actionable recommendations that policymakers can implement to ensure the safety and fairness of crypto-asset advertising in India.
The full issue brief can be accessed here.
Making Voices Heard
We believe that voice interfaces have the potential to democratise the use of the internet by addressing limitations related to reading and writing on digital text-only platforms and devices. This report examines the current landscape of voice interfaces in India, with a focus on concerns related to privacy and data protection, linguistic barriers, and accessibility for persons with disabilities (PwDs).
The report features a visual mapping of 23 voice interfaces and technologies publicly available in India, along with a literature survey; a policy brief towards the development and use of voice interfaces and a design brief documenting best practices and users’ needs, both with a focus on privacy, languages, and accessibility considerations; and a set of case studies on three voice technology platforms. Read and download the full report here
Credits
Research: Shweta Mohandas, Saumyaa Naidu, Deepika Nandagudi Srinivasa, Divya Pinheiro, and Sweta Bisht.
Conceptualisation, Planning, and Research Inputs: Sumandro Chattapadhyay, and Puthiya Purayil Sneha.
Illustration: Kruthika NS (Instagram @theworkplacedoodler). Website Design: Saumyaa Naidu. Website Development: Sumandro Chattapadhyay and Pranav M Bidare.
Review and Editing: Puthiya Purayil Sneha, Divyank Katira, Pranav M Bidare, Torsha Sarkar, Pallavi Bedi, and Divya Pinheiro.
Copy Editing: The Clean Copy
Working paper on Non-Financial Use Cases of Blockchain Technology
Ever since its initial conceptualisation in 2009, blockchain technology has been synonymous with financial products and services - most notably crypto-assets like Bitcoin. However, while often associated with the financial sector, blockchain technology represents an opportunity for multiple industries to reinvent and improve their legacy processes. In India, the 2020 Discussion Paper on Blockchain Technology by the NITI Aayog as well as the National Blockchain Strategy of 2021 by the Ministry of Electronics and Information Technology have attempted to articulate this opportunity. These documents examine the potential benefits that would arise from blockchain’s introduction across multiple non-financial sectors.
This working paper examines three specific use cases mentioned in the above-mentioned government documents: land record management, certification verification, and pharmaceutical supply chain management. We provide an overview of what blockchain technology is and document the ongoing attempts to integrate blockchain technology into the aforementioned fields. We also assess the possible costs and benefits associated with blockchain’s introduction and draw insights from instances of such integration in other jurisdictions.
The full working paper can be found here.
The Government’s Increased Focus on Regulating Non-Personal Data: A Look at the Draft National Data Governance Framework Policy
Introduction
Non-Personal Data (‘NPD’) can be understood as any information that does not relate to an identified or identifiable natural person. The origin of such data can be both human and non-human. Human NPD is data which has been anonymised in such a way that the person to whom the data relates cannot be re-identified. Non-human NPD means any data that did not relate to a human being in the first place, for example, weather data. The government has demonstrated a gradual but growing interest in NPD in recent times. This new focus on regulating non-personal data can be attributed to the economic incentive it provides. In its report released in 2018, the Srikrishna Committee agreed that NPD holds considerable strategic or economic interest for the nation; however, it left the questions surrounding NPD to a future committee.
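As a rough illustration of the distinction drawn above, the following Python sketch (with hypothetical field names) shows a naive conversion of a personal-data record into human-origin NPD by dropping direct identifiers. In practice, anonymisation is considerably harder, since quasi-identifiers such as location or timestamps may still permit re-identification, which is precisely the concern raised later in this piece.

```python
# Minimal sketch, with hypothetical field names, of converting a personal-data
# record into human-origin NPD by dropping direct identifiers. Real-world
# anonymisation is far harder: quasi-identifiers (pin code, age, timestamps)
# can still allow re-identification.

DIRECT_IDENTIFIERS = {"name", "phone", "aadhaar_number", "email"}


def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}


if __name__ == "__main__":
    personal = {"name": "A. Kumar", "phone": "98xxxxxx01",
                "pin_code": "560001", "commute_mode": "bus"}
    print(strip_identifiers(personal))  # {'pin_code': '560001', 'commute_mode': 'bus'}
```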
History of NPD Regulation
In 2020, the Ministry of Electronics and Information Technology (‘MEITY’) constituted an expert committee (‘NPD Committee’) to study various issues relating to NPD and to make suggestions on its regulation. The NPD Committee differentiated NPD into human and non-human NPD based on the data’s origin: human NPD includes all information that has been stripped of any personally identifiable information, while non-human NPD means any information that did not contain any personally identifiable information in the first place (e.g., weather data). The final report of the NPD Committee is awaited, but the Committee came out with a revised draft of its recommendations in December 2020. In its December 2020 report, the NPD Committee proposed the creation of a Non-Personal Data Authority (‘NPDA’), as it felt this is a new and emerging area of regulation. Thereafter, the Joint Parliamentary Committee on the Personal Data Protection Bill, 2019 (‘JPC’) came out with its version of the Data Protection Bill, amending the short title of the PDP Bill, 2019 to the Data Protection Bill, 2021 and widening the ambit of the Bill to include all types of data. The JPC report focuses only on human NPD, noting that non-personal data is essentially derived from one of three sets of data - personal data, sensitive personal data, and critical personal data - which is either anonymised or in some way converted into non-re-identifiable data.
On February 21, 2022, MEITY came out with the Draft India Data Accessibility and Use Policy, 2022 (‘Draft Policy’). The Draft Policy was strongly criticised, mainly for its aim to monetise data through its sale and licensing to body corporates. The Draft Policy had stated that anonymised and non-personal data collected by the State that has “undergone value addition” could be sold for an “appropriate price”. During the consultation process, the Draft Policy was withdrawn several times and then finally removed from the website. The National Data Governance Framework Policy (‘NDGF Policy’) is a successor to this Draft Policy. There is a change in language from the Draft Policy, which mainly focused on monetary growth, to the NDGF Policy. The new NDGF Policy aims to regulate anonymised non-personal data (‘NPD’) kept with governmental authorities and make it accessible for research and for improving governance. It proposes the creation of an ‘India Datasets programme’, which will consist of the aforementioned datasets. While MEITY has opened the draft for public comments, there is a need to spell out the procedure so that stakeholders can draft recommendations on the NDGF Policy in an informed manner. Through this piece, we discuss the NDGF Policy in terms of issues related to the absence of a comprehensive data protection framework in India and the jurisdictional overlap of authorities under the NDGF Policy and the DPB.
What the National Data Governance Framework Policy Says
Presently in India, NPD is stored across a variety of governmental departments and bodies. It is difficult to access and use this stored data for governmental functions without modernising the collection and management of governmental data. Through the NDGF Policy, the government aims to build an Indian repository of anonymised non-personal datasets and make it accessible both for improving governance and for encouraging research. It envisages the establishment of an Indian Data Office (‘IDO’) set up by MEITY, which shall be responsible for consolidating data access and the sharing of non-personal data across the government. In addition, it also mandates a Data Management Unit for every Ministry/department that would work closely with the IDO. The IDO will also be responsible for issuing protocols for sharing NPD. The policy further envisages an Indian Data Council (‘IDC’) whose functions would be to define frameworks for important datasets, finalise data and metadata standards, and review the implementation of the policy. The NDGF Policy provides a broad structure concerning the setting up of anonymisation standards, data retention policies, data quality, and a data-sharing toolkit. It states that these standards shall be developed and notified by the IDO, MEITY, or the Ministry in question, and must be adhered to by all entities.
The Data Protection Framework in India
The JPC, in its report, felt that it is simpler to enact a single law with a single regulator to oversee all the data that originates from any data principal and is in the custody of any data fiduciary. According to the JPC, the draft Bill deals with various kinds of data at various levels of security. The JPC also recommended that, since the Data Protection Bill (‘DPB’) will handle both personal and non-personal data, any further policy/legal framework on non-personal data may be made part of the same enactment instead of a separate legislation. The draft DPB states that what is to be done with NPD shall be decided by the government from time to time according to its policy. As such, neither the DPB, 2021 nor the NDGF Policy goes into the details of regulating NPD; both only provide a broad structure for facilitating the free flow of NPD, without taking into account the specific concerns that have been raised since the NPD Committee came out with its draft report on regulating NPD in December 2020.
Jurisdictional overlaps among authorities and other concerns
Under the NDGF Policy, all guidelines and rules shall be published by a body known as the Indian Data Management Office (‘IDMO’). The IDMO is set to function under MEITY and work with the Central government, state governments, and other stakeholders to set standards. Currently, there is no sign of when the DPB will be passed into law. According to the JPC, the reason for including NPD within the DPB was the impossibility of differentiating between personal data and NPD. There are also certain overlaps between the DPB and the NDGF Policy which the latter does not discuss, notably the overlap between the IDMO and the Data Protection Authority (‘DPA’) established under the DPB, 2021.
Under the DPB, the DPA is tasked with specifying codes of practice under clause 49. On the other hand, the NDGF Policy imagines the setting up of the IDO, the IDMO, and the IDC, which shall be responsible for issuing codes of practice on matters such as data retention, data anonymisation, and data quality standards. As such, there appears to be some overlap in the functions of the to-be-constituted DPA and the bodies envisaged under the NDGF Policy.
Furthermore, while the NDGF Policy aims to promote openness with respect to government data, there is a conflict with open government data (‘OGD’) principles when there is a price attached to such data. OGD is data which is collected and processed by the government for free use, reuse and distribution. Any database created by the government must be publicly accessible to ensure compliance with the OGD principles.
Conclusion
Streamlining datasets across different authorities is a huge challenge for the government, and hence the NDGF Policy in its current draft requires substantial clarification. The government can take inspiration from the European Union, which in 2018 came out with a principles-based approach coupled with self-regulation in its framework for the free flow of non-personal data. The guidance on the free flow of non-personal data defines non-personal data based on the origin of the data - data which originally did not relate to any personal data (non-human NPD) and data which originated from personal data but was subsequently anonymised (human NPD). The regulation further recognises the reality of mixed datasets and regulates only the non-personal part of such datasets; where the two parts are inextricably linked, the GDPR applies to the whole dataset. Moreover, any policy that seeks to govern the free flow of NPD ought to make it clear that in case of re-identification of anonymised data, such re-identified data would be considered personal data. Both the DPB, 2021 and the NDGF Policy fail to take this into account.
Central Bank Digital Currencies: A solution to India’s financial woes or just a piece of the puzzle?
Central Bank Digital Currencies (CBDCs) have, over the last couple of years, stepped firmly into the global financial spotlight. India is no exception to this trend, with both the Reserve Bank of India (RBI) and the Finance Minister referring to an Indian CBDC that is currently under development.
With the introduction of this CBDC a matter of when and not if, India and many other countries stand on the precipice of re-imagining their financial systems. It is therefore imperative that any attempt at introducing a CBDC is preceded by a detailed analysis of its scope, benefits, limitations, and how it has been implemented in other jurisdictions. This policy brief looks to achieve that by examining the form that a CBDC could take, what its policy goals would be in India, the considerations the RBI would have to account for and whether a CBDC would work in present-day India. Finally, it also looks at the case of Nigeria to draw insights that could also be applied to the introduction and operationalisation of a CBDC in the Indian context.
The full issue brief can be accessed here.
Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
These comments examine whether the proposed amendments adhere to established principles of constitutional law, intermediary liability, and other relevant legal doctrines. We thank the Ministry of Electronics and Information Technology (MEITY) for allowing us this opportunity. Our comments are divided into two parts. In the first part, we reiterate some of our comments on the existing version of the rules, which we believe remain relevant to the proposed amendments. In the second part, we provide issue-wise comments on concerns that we believe need to be addressed prior to finalising the amendments to the rules.
To access the full text of the Comments to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, click here
What Are The Consumer Protection Concerns With Crypto-Assets?
The article was published in Medianama on July 8, 2022
Crypto-asset regulation is at the forefront of Indian financial regulators’ minds. On the 6th of June, the Securities and Exchange Board of India (SEBI), in a response to the Parliamentary Standing Committee on Finance, expressed clear consumer protection concerns associated with crypto-assets.
This statement follows multiple notices issued by the Reserve Bank of India (RBI) warning consumers of the risks related to crypto-assets, and even a failed attempt to prevent banks from transacting with any individual trading crypto-assets. Yet, in spite of these multiple warnings, and a significant drop in trading volume due to the introduction of a new taxation structure, crypto-assets still have managed to establish themselves as a legitimate financial instrument in the minds of many.
Recent global developments, however, seem to validate the concerns held by both the RBI and SEBI.
The bear market that crypto finds itself in has sent shockwaves throughout the ecosystem, crippling some of the most established tokens in the space. Take, for example, the death spiral of the algorithmic stablecoin Terra USD and its sister token Luna—with Terra USD going from a top-10-traded crypto-token to being practically worthless. The volatility of token prices has had a significant knock-on effect on crypto-related services. Following Terra’s crash, the Centralised Finance Platform (CeFi) Celsius—which provided quasi-banking facilities for crypto holders—also halted all withdrawals. More recently, the crypto-asset hedge fund Three Arrows also filed for bankruptcy following its inability to meet its debt obligations and protect its assets from creditors looking to get their money back.
Underpinning these stories of failing corporations are the very real experiences of investors and consumers—many of whom have lost a significant amount of wealth. This has been a direct result of the messaging around crypto-assets. Crypto-assets have been promoted through popular culture as a means of achieving financial freedom and accruing wealth quickly. It is this narrative that lured numerous regular citizens to invest substantial portions of their income into crypto-asset trading. At the same time, the crypto-asset space is littered with a number of scams and schemes designed to trick unaware consumers. These schemes, primarily taking the form of ‘pump and dump’ schemes, represent a significant issue for investors in the space.
It seems, therefore, that any attempt to ensure consumer protection in the crypto-space must adopt two key strategies:
- First, it must re-orient the narrative away from crypto as a simple means of getting wealthy, and ensure that those consumers who invest in crypto do so with full knowledge of the risks associated with crypto-assets;
- Second, it must provide consumers with sufficient recourse in cases where they have been subject to fraud.
In this article, we examine the existing regulatory framework around grievance redressal for consumers in India—and whether these safeguards are sufficient to protect consumers trading crypto-assets. We further suggest practical measures that the government can adopt going forward.
What is the Current Consumer Protection Framework Around Crypto-assets?
Safeguards Under the Consumer Protection Act and E-commerce Rules
The increased adoption of e-commerce by consumers in India forced legislators to address the lack of regulation for the protection of consumer interests. This legislative expansion may extend to protecting the interests of investors and consumers trading in crypto-assets.
The groundwork for consumer welfare was laid in the new Consumer Protection Act, 2019 which defined e-commerce as the “buying or selling of goods or services including digital products over digital or electronic network.” It also empowered the Union Government to take measures and issue rules for the protection of consumer rights and interests, and the prevention of unfair trade practices in e-commerce.
Within a year, the Union Government exercised its power to issue operative rules known as the Consumer Protection (E-Commerce) Rules, 2020 (the “Rules”), which amongst other things, sought to prohibit unfair trade practices across all models of e-commerce. The Rules define an e-commerce entity as one which owns, operates or manages a digital or electronic facility or platform (which includes a website as well as mobile applications) for electronic commerce.
The definition of e-commerce is not limited to physical goods; it also covers services and digital products. So one can plausibly assume that the Rules would apply to a number of crypto-exchanges, as well as certain entities offering decentralized finance (DeFi) services. This is because crypto tokens, be they cryptocurrencies like Bitcoin, Ethereum, or Dogecoin, are not considered currency or securities under Indian law, but can plausibly be treated as digital products.
The fact that the digital products being traded on the e-commerce entity originated outside Indian territory would make no difference as far as the applicability of the Rules is concerned. The Rules apply even to e-commerce entities not established in India, but which systematically offer goods or services to consumers in India. The concept of systematically offering goods or services across territorial boundaries appears to have been taken from the E-evidence Directive of the European Union and seeks to target only those entities which intend to do substantial business within India while excluding those who do not focus on the Indian market and have only a minuscule presence here.
Additionally, the Rules impose certain duties and obligations on e-commerce entities, such as:
- The appointment of a nodal officer or a senior designated functionary who is resident in India, to ensure compliance with the provisions of the Consumer Protection Act;
- The prohibition on the adoption of any unfair trading practices, thereby making the most important requirements of consumer protection applicable to e-commerce;
- The establishment of a grievance redressal mechanism and specifying an outer limit of one month for redressal of complaints;
- The prohibition on imposing cancellation charges on the consumer, unless a similar charge is also borne by the e-commerce entity if it cancels the purchase order unilaterally for any reason;
- The prohibition on price manipulation to gain unreasonable profit by imposing an unjustified price on the consumers;
- The prohibition on discrimination between consumers of the same class or an arbitrary classification of consumers that affects their rights; etc.
The Rules also impose certain liabilities on e-commerce entities relating to the tracking of shipments, the accuracy of the information on the goods or services being offered, information and ranking of sellers, tracking complaints, and information regarding payment mechanisms. Most importantly, the Rules explicitly make the grievance redressal mechanism under the Consumer Protection Act, 2019 applicable to e-commerce entities in case they violate any of the requirements under the Rules.
What this means is that, at present, crypto-exchanges and crypto-service providers clearly fall within the ambit of consumer protection legislation in India. In practical terms, consumers can rest assured that in any crypto transaction their rights must be accounted for by the corporation.
With crypto-related scams exploding globally since 2021, it is likely that Indian investors will come into contact with, or be subject to, various scams and schemes in the crypto marketplace. It is therefore imperative that consumers and investors know the steps they can take in case they fall victim to a scam. Currently, any consumer who is the victim of a fraud or scam in the crypto space would, under the current legal regime, have two primary redressal remedies:
- Lodging a criminal complaint with the police, usually the cyber cell, regarding the fraud. It then becomes the police’s responsibility to investigate the case, trace the perpetrators, and ensure that they are held accountable under relevant legal provisions.
- Lodging a civil complaint before the consumer forum or even the civil courts claiming compensation and damages for the loss caused. In this process, the onus is on the consumer to follow up and prove that they have been defrauded.
Filing a consumer complaint may impose an extra burden on the consumer to prove the fraud—especially if the consumer is unable to get complete and accurate information regarding the transaction. Additionally, in most cases, a consumer complaint is filed when the perpetrator is still accessible and can be located by the consumer. However, if the perpetrator has absconded, the consumer would have no choice but to lodge a criminal complaint. That said, if the perpetrators have already absconded, it may be difficult even for the police to be of much help, considering the anonymity that is built into the technology.
Therefore, perhaps the best protection that can be afforded to the consumer is where the regulatory regime is geared towards the prevention of frauds and scams by establishing a licensing and supervisory regime for crypto businesses.
A Practical Guide to Consumer Protection and Crypto-assets
What is apparent is that existing regulations are not sufficient to cover the extent of protection that a crypto-investor would require. Ideally, this gap would be covered by dedicated legislation that looks to cover the range of issues within the crypto-ecosystem. However, in the absence of the (still pending) government crypto bill, we are forced to consider how consumers can currently be protected and made aware of the risks associated with crypto-assets.
On the question of informing customers of the risks associated, we must address one of the primary means through which consumers become aware of crypto-assets: advertising. Currently, crypto-asset advertising follows a code set down by the Advertising Standards Council of India, a self-regulating, non-government body. As such, there is currently no government body that enforces binding advertising standards on crypto and crypto-service providers.
While self-regulation has generally been an acceptable practice in the case of advertising, the advertising of financial products has differed slightly. For example, Schedule VI of the Securities and Exchange Board of India (Mutual Funds) Regulations, 1996, lays down detailed guidelines associated with the advertising of mutual funds. Crypto-assets can, depending on their form, perform similar functions to currencies, securities, and assets. Moreover, they carry a clear financial risk—as such their advertising should come under the purview of a recognised financial regulator. In the absence of a dedicated crypto bill, an existing regulator—such as SEBI or the RBI—should use their ad-hoc power to bring crypto-assets and their advertising under their purview.
This would allow the government not only to ensure that advertising guidelines are followed, but also to dictate the exact nature of these guidelines. It could then issue standards pertaining to disclaimers and prevent crypto service providers from advertising crypto as being easy to understand, as offering a guaranteed return on investment, or through other misleading messages.
Moreover, financial institutions such as the RBI and SEBI may consider increasing efforts to inform consumers of the financial and economic risks associated with crypto-assets by undertaking dedicated public awareness campaigns. Strongly enforced advertising guidelines, coupled with widespread and comprehensive awareness efforts, would allow the average consumer to understand the risks associated with crypto-assets, thereby re-orienting the prevailing narrative around them.
On the question of providing consumers with clear recourse, current financial regulators might consider setting up a joint working group to examine the extent of financial fraud associated with crypto-assets. Such a body can be tasked with providing consumers with clear information related to crypto-asset scams and schemes, how to spot them, and the next steps they must take in case they fall victim to one.
Aman Nair is a policy officer at the Centre for Internet & Society (CIS), India, focusing on fintech, data governance, and digital cooperative research. Vipul Kharbanda is a non-resident fellow at CIS, focusing on the fintech research agenda of the organisation.
Deployment of Digital Health Policies and Technologies: During Covid-19
Digitisation of public services in India began with taxation, land record keeping, and passport details recording, but it was soon extended to cover most governmental services - the latest being public health. The digitisation of the healthcare system in India had begun prior to the pandemic. However, given the push digital health has received in recent years, especially with an increase in the intensity of activity during the pandemic, we thought it was important to undertake a comprehensive study of India's digital health policies and their implementation. The project report comprises a desk-based review of the existing literature on digital health technologies in India and interviews with on-field healthcare professionals who are responsible for implementing these technologies on the ground.
The report by Privacy International and the Centre for Internet & Society can be accessed here.
Surveillance Enabling Identity Systems in Africa: Tracing the Fingerprints of Aadhaar
In this report, we identify the different external actors influencing this “developmental” agenda. These range from philanthropic organisations, private companies, and technology vendors to state and international institutions. Most notable among these is the World Bank, whose influence we investigated through case studies of Nigeria and Kenya. We also explored the role played by the “success” of the Aadhaar programme in India in shaping these new ID systems. A key characteristic of the growing “digital identity for development” trend is the consolidation of different databases that record beneficiary data for government programmes into one unified platform, accessed by a unique biometric ID. This “Aadhaar model” has emerged as a default model to be adopted in developing countries, with little concern for the risks it introduces. Read and download the full report here.
NHA Data Sharing Guidelines – Yet Another Policy in the Absence of a Data Protection Act
Reviewed and edited by Anubha Sinha
Launched in 2018, PM-JAY is a public health insurance scheme set to cover 10 crore poor and vulnerable families across the country for secondary and tertiary care hospitalisation. Eligible candidates can use the scheme to avail of cashless benefits at any public/private hospital falling under this scheme. Considering the scale and sensitivity of the data, the creation of a well-thought-out data-sharing document is a much-needed step. However, the document – though only a draft – has certain portions that need to be reconsidered, including parts that are not aligned with other healthcare policy documents. In addition, the guidelines should be able to work in tandem with the Personal Data Protection Act whenever it comes into force. With no prior intimation of the publication of the guidelines, and the provision of a mere 10 days for consultation, there was very little scope for stakeholders to submit their comments and participate in the consultation. While the guidelines pertain to the PM-JAY scheme, it is an important document to understand the government’s concerns and stance on the sharing of health data, especially by insurance companies.
Definitions: Ambiguous and incompatible with similar policy documents
The draft guidelines add to the list of health data–related policies that have been published since the beginning of the pandemic. These include three draft health data management policies published within two years, which have already covered the sharing and management of health data. The draft guidelines repeat the pattern of earlier policies on health data, wherein there is no reference to the policies that predated them; in this case, the guidelines fail to refer to the draft National Digital Health Data Management Policy (published in April 2022). To add to this, the document – by placing the definitions at the end – is difficult to read and understand, especially when terms such as ‘beneficiary’, ‘data principal’, and ‘individual’ are used interchangeably. In the same vein, the document uses the terms ‘data principal’ and ‘data fiduciary’, and the definitions of health data and personal data, from the 2019 PDP Bill, while also referring to the SPDI Rules under the IT Act and their definition of ‘sensitive personal data’. While the guidelines state that the IT Act and Rules will be the legislation to refer to for these guidelines, it is to be noted that the SPDI Rules under the IT Act cover ‘body corporates’, which, under the Explanation to Section 43A, is defined as “any company and includes a firm, sole proprietorship or other association of individuals engaged in commercial or professional activities”. It is difficult to attach responsibility and accountability to the organisations covered by the guidelines when they might not even fall under this definition.
With each new policy, civil society organisations have been pointing out the need to have a data protection act in place before introducing policies and guidelines that deal with the processing and sharing of individuals’ data. Ideally, these policies – even in draft form – should have been published after the Personal Data Protection Bill was enacted, to ensure consistency with the provisions of the law. For example, the guidelines introduce a new category of governance mechanism in the form of a data-sharing committee headed by a data-sharing officer (DSO). The responsibilities and powers of the DSO are similar to those of the data protection officer (DPO) under the draft PDP Bill as well as under the draft health data management policy. This, in turn, raises the question of whether the DSO and the DPOs under both the PDP Bill and the draft health data management policy will have the same responsibilities. Clarity on which of the policies are in force and how they intersect is needed to ensure smooth implementation. Ideally, the problem of multiple sources of definitions should be addressed at the drafting stage itself.
Guiding Principles: Need to look beyond privacy
The guidelines enumerate certain principles to govern the use, collection, processing, and transmission of the personal or sensitive personal data of beneficiaries. These principles include accountability, privacy by design, choice and consent, and openness/transparency. While these provisions are much needed, their explanation at times misses the mark of why the principles were added. For example, in the case of accountability, the guidelines state that the ‘data fiduciary’ shall be accountable for complying with measures based on the guiding principles. However, they do not specify who the fiduciaries would be accountable to or what the steps are to ensure accountability. Similarly, in the case of openness and transparency, the guidelines state that the policies and practices relating to the management of personal data will be available to all stakeholders. However, openness and transparency need to go beyond policies and practices and should consider other aspects of openness, including open data and the use of open-source software and open standards. This would add to transparency and help spell out the rights of the data principal, which the current draft considers merely from a privacy perspective. In the case of purpose limitation as well, the guidelines are tied to the privacy notice, which again puts the burden on the individual (in this case, the beneficiary) when the onus should actually be on the data fiduciary. Lastly, under the empowerment of beneficiaries, the guidelines state that the “data principal shall be able to seek correction, amendments, or deletion of such data where it is inaccurate”. The right to deletion should not be conditional on inaccuracy, especially when entering the scheme is optional and consent-based.
Data sharing with third parties without adequate safeguards
The guidelines outline certain cases where personal data can be collected, used, or disclosed without the consent of the individual. One of these cases is when the data is anonymised. However, the guidelines do not detail how this anonymisation would be achieved and ensured through the life cycle of the data, especially when the clause states that the data will also be collected without consent. The guidelines also state that the anonymised data could be used for public health management, clinical research, or academic research. The guidelines should have limited the scope of academic research or added certain criteria for gaining access to the data; the use of vague terminology could lead to this data (sometimes collected without consent) being de-anonymised or used for studies that could cause harm to the data principal or even a particular community. The guidelines state that the data can be shared as ‘protected health information’ with a government agency for oversight activities authorised by law, epidemic control, or in response to court orders. When such data is shared, care should be taken to ensure data minimisation and purpose limitation that go beyond the explanations added in the body of the guidelines. In addition, the guidelines introduce the concept of a ‘clean room’, defined as “a secure sandboxed area with access controls, where aggregated and anonymised or de-identified data may be shared for the purposes of developing inference or training models”. The definition does not state who will be developing these training models; it could be a cause for worry if AI companies or even insurance companies are able to use this data to train models that could eventually make decisions based on the results. The term ‘sandbox’ is explained in the now-withdrawn DPB, 2021 as “such live testing of new products or services in a controlled or test regulatory environment for which the Authority may or may not permit certain regulatory relaxations for a specified period for the limited purpose of the testing”. Neither the 2019 Bill nor the IT Act and Rules defines ‘sandbox’; the guidelines should ideally have spent more time explaining how the sandbox system in the ‘clean room’ would work.
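As an illustration of the kind of detail that is missing, one widely cited (though by no means sufficient) anonymisation check is k-anonymity: every combination of indirectly identifying attributes must be shared by at least k records, so that no individual stands out. The sketch below is offered purely as an illustration of what a lifecycle safeguard could look like; the field names, records, and threshold are hypothetical, and nothing in the guidelines prescribes this particular technique.

```python
from collections import Counter
from typing import Iterable, Mapping


def is_k_anonymous(records: Iterable[Mapping[str, str]],
                   quasi_identifiers: tuple[str, ...],
                   k: int = 5) -> bool:
    """Return True if every combination of quasi-identifier values
    appears in at least k records, i.e. no individual stands out."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())


# Hypothetical beneficiary records; field names are illustrative only.
records = [
    {"age_band": "30-40", "pincode_prefix": "1100", "diagnosis": "X"},
    {"age_band": "30-40", "pincode_prefix": "1100", "diagnosis": "Y"},
    {"age_band": "40-50", "pincode_prefix": "5600", "diagnosis": "X"},
]

print(is_k_anonymous(records, ("age_band", "pincode_prefix"), k=2))
# False - the lone 40-50/5600 record could single out an individual.
```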
Conclusion
The draft Data Sharing Guidelines are a welcome step in ensuring that the entities sharing and processing data have guidelines to adhere to, especially since the Data Protection Bill has not been passed yet. The mention of best practices for data sharing in the annexures, including practices for people who have access to the data, is a step in the right direction, which could be strengthened with regular training and sensitisation. While the guidelines are a good starting point, they still suffer from the issues that have been highlighted in similar health data policies, including the failure to refer to older policies, the addition of new entities, and the reliance on digital and mobile technology. The guidelines could have added more nuance to the consent and privacy-by-design sections to ensure other forms of notice, e.g., notice in audio form in different Indian languages. While PM-JAY aims to reach 10 crore poor and vulnerable families, there is a need to look at how to ensure that consent is obtained in the manner the guidelines themselves require: “free, informed, clear, and specific”.
Getting the (Digital) Indo-Pacific Economic Framework Right
The article was originally published in Directions on 16 September 2022.
It is still early days. Given the broad and noncommittal scope of the economic arrangement, it is unlikely that the IPEF will lead to a trade deal among members in the short run. Instead, experts believe that this new arrangement is designed to serve as a ‘framework or starting point’ for members to cooperate on geo-economic issues relevant to the Indo-Pacific, buoyed in no small part by the United States’ desire to make up lost ground and counter Chinese economic influence in the region.
United States Trade Representative (USTR) Katherine Tai has underscored the relevance of the Indo-Pacific digital economy to the US agenda with the IPEF. She has emphasized the importance of collaboratively addressing key connectivity and technology challenges, including standards on cross-border data flows, data localisation and online privacy, as well as the discriminatory and unethical use of artificial intelligence. This is an ambitious agenda given the divergence among members in terms of technological advancement, domestic policy preferences and international negotiating stances at digital trade forums. There is a significant risk that imposing external standards or values on this evolving and politically-contested digital economy landscape will not work, and may even undermine the core potential of the IPEF in the Indo-Pacific. This post evaluates the domestic policy preferences and strategic interests of the Framework’s member states, and how the IPEF can navigate key points of divergence in order to achieve meaningful outcomes.
State of domestic digital policy among IPEF members
Data localisation is a core point of divergence in global digital policymaking. It continues to dominate discourse and trigger dissent at all international trade forums, including the World Trade Organization. IPEF members have a range of domestic mandates restricting cross-border flows, which vary in scope, format and rigidity (see table below). Most countries only have a conditional data localisation requirement, meaning data can only be transferred to countries where it is accorded an equivalent level of protection – unless the individual whose data is being transferred consents to said transfer. Australia and the United States have sectoral localisation requirements for health and defence data respectively. India presently has multiple sectoral data localisation requirements. In particular, a 2018 Reserve Bank of India (RBI) directive imposed strict local storage requirements along with a 24-hour window for foreign processing of payments data generated in India. The RBI imposed a moratorium on the issuance of new cards by several US-based card companies until compliance issues with the data localisation directive were resolved. Furthermore, several iterations of India’s recently withdrawn Personal Data Protection Bill contained localisation requirements for some categories of personal data.
Indonesia and Vietnam have diluted the scopes of their data localisation mandates to apply, respectively, only to companies providing public services and to companies not complying with other local laws. These dilutions may have occurred in response to concerted pushback from foreign technology companies operating in these countries. In addition to sectoral restrictions on the transfer of geospatial data, South Korea retains several procedural checks on cross-border flows, including formalities regarding providing notice to individual users.
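The conditional localisation requirement described above amounts to a simple decision rule, sketched below for clarity. The adequacy list and the helper function are assumptions made for illustration, not any country's actual legal test, which typically involves further conditions and safeguards.

```python
# Transfer is permitted only to jurisdictions assessed as offering an
# equivalent level of protection, unless the data principal has consented.
ADEQUATE_JURISDICTIONS = {"NZ", "JP", "KR"}  # hypothetical adequacy findings


def transfer_permitted(destination: str, has_consent: bool) -> bool:
    return destination in ADEQUATE_JURISDICTIONS or has_consent


print(transfer_permitted("US", has_consent=False))  # False - transfer blocked
print(transfer_permitted("US", has_consent=True))   # True - consent override
```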
Moving on to another issue flagged by USTR Tai: while all IPEF members recognise the right to information privacy at an overarching or constitutional level, the legal and policy contours of data protection are at different stages of evolution in different countries. Japan, South Korea, Malaysia, New Zealand, the Philippines, Singapore, and Thailand have data protection frameworks in place. Data protection frameworks in India and Brunei are under consultation. Notably, the US does not have a comprehensive federal framework on data privacy, although there is a patchwork of data privacy regulations at both the federal and state levels.
Regulation and strategic thinking on artificial intelligence (AI) are also at varying levels of development among IPEF members. India has produced a slew of policy papers on Responsible Artificial Intelligence. The most recent policy paper published by NITI Aayog (the Indian government’s think tank) refers to constitutional values and endorses a risk-based approach to AI regulation, much like that adopted by the EU. The US National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, expressed concerns about the US ceding AI leadership ground to China. The NSCAI’s final report emphasised the need for US leadership of a ‘coalition of democracies’ as an alternative to China’s autocratic and control-oriented model. Singapore has also made key strides on trusted AI, launching A.I. Verify – the world’s first AI governance testing framework for companies that wish to demonstrate their use of responsible AI – as a minimum viable product.
IPEF and pipe dreams of digital trade
Some members of the IPEF are signatories to other regional trade agreements. With the exception of Fiji, India and the US, all the IPEF countries are members of the Regional Comprehensive Economic Partnership (RCEP), which also includes China. Five IPEF member countries are also members of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), the successor to the Trans-Pacific Partnership that President Trump backed out of in 2017. Several IPEF members also have bilateral or trilateral trading agreements among themselves, an example being the Digital Economy Partnership Agreement (DEPA) between Singapore, New Zealand and Chile.
All these ‘mega-regional’ trading agreements contain provisions on data flows, including prohibitions on domestic legal provisions that mandate local computing facilities or restrict cross-border data transfers. Notably, these agreements also incorporate exceptions to these rules. The CPTPP includes within its ambit an exception on the grounds of ‘legitimate public policy objectives’ of the member, while the RCEP incorporates an additional exception for ‘essential security interests’.
IPEF members are also spearheading multilateral efforts related to the digital economy: Australia, Japan and Singapore are working as convenors of the plurilateral Joint Statement Initiative (JSI) at the World Trade Organization (WTO), which counts 86 WTO members as parties. India (along with South Africa) vehemently opposes this plurilateral push on the grounds that the WTO is a multilateral forum functioning on consensus and a plurilateral trade agreement should not be negotiated within the aegis of the WTO. They fear, rightly, that such gambits close out the domestic policy space, especially for evolving digital economy regimes where keen debate and contestation exist among domestic stakeholders. While wary of the implications of the JSI, other IPEF members, such as Indonesia, have cautiously joined the initiative to ensure that they have a voice at the table.
It is unlikely that the IPEF will lead to a digital trade arrangement in the short run. Policymaking on issues as complex as the digital economy, which must respond to specific social, economic and (geo)political realities, cannot be steamrolled through external trade agreements. For instance, after the Los Angeles Ministerial, India opted out of the IPEF trade pillar, citing both its evolving domestic legislative framework on data and privacy and a broader lack of consensus among IPEF members on several issues, including digital trade. Commerce Minister Piyush Goyal explained that India would wait for the “final contours” of the digital trade track to emerge before making any commitments.
Besides, brokering a trade agreement through the IPEF runs a risk of redundancy. Already, there exists a ‘spaghetti bowl’ of regional trading agreements that IPEF members can choose from, in addition to forming bilateral trade ties with each other.
This is why Washington has been clear about calling the IPEF an ‘economic arrangement’ and not a trade agreement. Membership does not imply any legal obligations. Rather than duplicating ongoing efforts or setting unrealistic targets, the IPEF is an opportunity for all players to shape conversations, share best practices and reach compromises, which could feed back into ongoing efforts to negotiate trade deals. For example, several members of RCEP have domestic data localisation mandates that do not violate trade deals because the agreement carves out exceptions that legitimise domestic policy decisions. Exchanges on how these exceptions work in future trade agreements could be a part of the IPEF arrangement and nudge states towards framing digital trade negotiations through other channels, including at the WTO. Furthermore, states like Singapore that have launched AI self-governance mechanisms could share best practices on how these mechanisms were developed as well as evaluations of how they have helped policy goals be met. And these exchanges shouldn’t be limited to existing IPEF members. If the forum works well, countries that share strategic interests in the region with IPEF members, including, most notably, the European Union, may also want to get involved and further develop partnerships in the region.
Countering China
Talking shop on digital trade should certainly not be the only objective of the IPEF. The US has made it clear that they want the message emanating from the IPEF ‘to be heard in Beijing’. Indeed, the IPEF offers an opportunity for the reassertion of US economic interests in a region where President Trump’s withdrawal from the CPTPP has left a vacuum for China to fill. Accordingly, it is no surprise that the IPEF has representation from several regions of the Indo-Pacific: South Asia, Southeast Asia and the Pacific.
This should be an urgent policy priority for all IPEF members. Since its initial announcement in 2015, the Digital Silk Road (DSR), the digital arm of China’s Belt and Road Initiative, has spearheaded massive investments by the Chinese private sector (allegedly under close control of the Chinese state) in e-commerce, fintech, smart cities, data centres, fibre optic cables and telecom networks. This expansion has also happened in the Indo-Pacific, unhampered by China’s aggressive geopolitical posturing in the region through maritime land grabs in the South China Sea. With the exception of Vietnam, which remains wary of China’s economic expansionism, countries in Southeast Asia welcome Chinese investments, extolling their developmental benefits. Several IPEF members – including Indonesia, Malaysia and Singapore – have associations with Chinese private sector companies, predominantly Huawei and ZTE. A study evaluating Indonesia’s response to such investments indicates that while they are aware of the risks posed by Chinese infrastructure, their calculus remains unaltered: development and capacity building remain their primary focuses. Furthermore, on the specific question of surveillance, given evidence of other countries such as the US and Australia also using digital infrastructure for surveillance, the threat from China is not perceived as a unique risk.
Setting expectations and approaches
Still, the risks of excessive dependence on one country for the development of digital infrastructure are well known. While the IPEF cannot realistically expect to displace the DSR, it can be utilised to provide countries with alternatives. This can only be done by issuing carrots rather than sticks. A US narrative extolling ‘digital democracy’ is unlikely to gain traction in a region characterised by a diversity of political systems that is focused on economic and development needs. At the same time, an excessive focus on thorny domestic policy issues – such as data localisation and the pipe dream of yet another mega-regional trade deal – could risk derailing the geo-economic benefits of the IPEF.
Instead, the IPEF must focus on capacity building, training and private sector investment in infrastructure across the Indo-Pacific. The US must position itself as a geopolitically reliable ally, interested in the overall stability of the digital Indo-Pacific, beyond its own economic or policy preferences. This applies equally to other external actors, like the EU, who may be interested in engaging with or shaping the digital economic landscape in the Indo-Pacific.
Countering Chinese economic influence and complementing security agendas set through other fora – such as the Quadrilateral Security Dialogue – should be the primary objective of the IPEF. It is crucial that unrealistic ambitions seeking convergence on values or domestic policy do not undermine strategic interests and dilute the immense potential of the IPEF in catalysing a more competitive and secure digital Indo-Pacific.
Table: Domestic policy positions on data localisation and data protection
Demystifying Data Breaches in India
Edited by Arindrajit Basu and Saumyaa Naidu
India saw a 62% drop in data breaches in the first quarter of 2022. Yet, it ranked fifth on the list of countries most hit by cyberattacks, according to a 2022 report by Surfshark, a Netherlands-based VPN company. Another report on the cost of data breaches, researched by the Ponemon Institute and published by IBM, reveals that breaches of about 29,500 records on average between March 2021 and March 2022 pushed the average cost of a breach up from INR 165 million in 2021 to INR 176 million in 2022.
These statistics are certainly a cause for concern, especially in the context of India’s rapidly burgeoning digital economy shaped by the pervasive platformization of private and public services such as welfare, banking, finance, health, and shopping among others. Despite the rate at which data breaches occur and are reported in the media, there seems to be little information about how and when they are resolved. This post examines the discourse on data breaches in India with respect to their historical forms, with a focus on how the specific terminology to describe data security incidents has evolved in mainstream news media reportage.
While expert articulations of cybersecurity in general, and data breaches in particular, tend to dominate the public discourse on data privacy, this post aims to situate broader understandings of data breaches within the historical context of India's IT revolution and delve into the specific concepts and terminology that have shaped the broader discourse on data protection. The late 1990s and early 2000s offer a useful point of entry into the genesis of the data security landscape in India.
Data Breaches and their Predecessor Forms
The articulation of data security concerns around the late 1990s and early 2000s isn't always consistent in deploying the phrase ‘data breach’ to signal cybersecurity concerns in India. Terms such as ‘data/identity theft’ and ‘data leak’ figure prominently in the public articulation of concerns with the handling of personal information by IT systems, particularly in the context of business process outsourcing (BPO) and e-commerce activities. Other pertinent terms such as “security breach”, “data security”, and “cyberfraud” also capture the specificity of growing concerns around data outsourced to India. At the time, i.e., around the mid-2000s, regulatory frameworks were still evolving to accommodate and address the complexities arising from a dynamic reconfiguration of the telecommunications and IT landscape in India.
Some of the formative cases that instantiate the usage of the aforementioned terms are instructive for understanding shifts in the reporting of such incidents over time. The earliest incident from that period is a 2002 case involving the theft and attempted sale of source code by an IIT Kharagpur student, who was caught after offering the code to two undercover FBI agents working with the CBI. A straightforward case of data theft, it was framed by media stories at the time as a cybercrime involving the illegal sale of the source code of a software package, as software theft of intellectual property in the context of outsourcing, and as an instance of industrial espionage in poor nations without laws protecting foreign companies. This case became the basis of the earliest calls for the protection of data privacy and security in the context of the Indian BPO sector. The Indian IT Act, 2000 at the time only covered unauthorized access and data theft from computers and networks, without any provisions for data protection, interception or computer forgery. The BPO boom in India brought with it employment opportunities for India's English-speaking, educated youth, but in the absence of concrete data privacy legislation, the country was regarded as an unsafe destination for outsourcing, quite apart from the political ramifications concerning the loss of American jobs.
In a major 2005 incident, employees of the Mphasis BFL call centre in Pune extracted sensitive bank account information of Citibank's American customers to divert INR 1.90 crore into new accounts set up in India. Media coverage of this incident variously called it India's first outsourcing cyberfraud, a well-planned scam, a cybercrime in a globalized world, a financial fraud that required no hacking skills, and a case of data theft and misuse. Within the ambit of cybercrime, media reports of these incidents refer to them as cases of “fraud”, “scam” and “theft”.
Two other incidents in 2005 set the trend for a critical spotlight on data security practices in India. In a June 2005 incident, an employee of a Delhi-based BPO firm, Infinity e-systems, sold the account numbers and passwords of 1,000 bank customers to the British tabloid The Sun. The Indian newspaper Telegraph India carried an online story headlined “BPO Blot in British Backlash: Indian Sells Secret Data”, which reported that the employee, Karan Bahree, 24, was set up by a British journalist, Oliver Harvey. Harvey filmed Bahree accepting wads of cash for the stolen data. Bahree's theft of sensitive information is described both as a data fraud and as a leak in a 2005 BBC story by Soutik Biswas. Another story on the incident calls it a “scam” involving the leakage of credit card information. The use of the term ‘leak’ appears consistently across other media accounts, such as a 2005 story on Karan Bahree in the Times of India and another story in the Economic Times about the Australian Broadcasting Corporation's (ABC) sting operation, similar to the one in Delhi, which described the fraudsters' scam as a leak of Australians' online information. Another media account of the coverage describes the incident in more generic terms, as an “outsourcing crime”.
The other case concerned four former employees of Parsec Technologies who stole classified information and diverted calls from potential customers, causing a sudden drop in the productivity of call centres managed by the company in November 2005. Another call centre fraud came to light in 2009 through a BBC sting operation, in which British reporters went to Delhi and secretly filmed a deal with a man selling credit card and debit card details obtained from call centres that sold Symantec's Norton software. This BBC story uses the term “breach” to refer to the incident.
In the broader framing of these cases generally understood as cybercrime, which received transnational media coverage, the terms “fraud”, “leak”, “scam”, and “theft” appear interchangeably. The term “data breach” does not seem to be a popular or common usage in these media accounts of the BPO-related incidents. A broader sense of breach (of confidentiality, privacy) figures in the media reportage in implicitly racial terms of cultural trust, as a matter of ethics and professionalism and in the language of scandal in some cases.
These early cases typify a specific kind of cybercrime concerning the theft or misappropriation of outsourced personal data belonging to British or American residents. What is remarkable about these cases is the utmost sensitivity of the stolen personal information, including financial details, bank account and credit/debit card numbers, passwords, and in one case, source code. While these cases rang alarm bells about the Indian BPO sector's data security protocols, they also directed attention to concerns around the training of Indian employees on the ethics of data confidentiality and vetting through psychometric tests for character assessment. In the wake of these incidents, the National Association of Software and Service Companies (NASSCOM), an Indian non-governmental trade and advocacy group, launched a National Skills Registry for IT professionals in 2006 to enable employers to conduct background checks.
These data theft incidents earned India a global reputation as an unsafe destination for business process outsourcing, seen to be lacking both a culture of maintaining data confidentiality and concrete legislation for data protection at the time. Importantly, the incidents of data theft or misappropriation were also traceable back to a known source, a BPO employee or a group of malefactors, who often sold sensitive data belonging to foreign nationals to others in India.
The phrase “data leak” also caught on in another register, in the context of the widespread use of camera-equipped mobile phones in India. The 2004 Delhi MMS case offers an instance of a data leak, recapitulating the language of scandal in moralistic terms.
The Delhi MMS Case
The infamous 2004 incident involved two underage Delhi Public School (DPS) students who recorded themselves in a sexually explicit act on a cellular phone. After a falling out, the male student passed on the low-resolution clip, in which his female friend's face is visible, to a friend. The clip, distributed far and wide in India, ended up listed for sale on the well-known e-shopping and auction website baazee.com, leading to the arrest of the website's CEO, Avnish Bajaj, for hosting the listing. Another similar case in 2004 mimicked the mechanics of visual capture through hand-held MMS-enabled mobile phones: a two-minute MMS of a top South Indian actress taking a shower went viral on the Internet that year, by which time another MMS of two prominent Bollywood actors kissing had already done the rounds. The MMS case also marked the onset of a national moral panic around the amateur uses of mobile phone technologies, capable of corrupting young Indian minds under a sneaky regime of new media modernity. The case, though not strictly a classic data breach - which typically involves non-visual information stored in databases - became an iconic case of a data leak, framed in the media as a scandal that shocked the country, with calls for the regulation of mobile phone use in schools. It continued its scandalous afterlife in the 2009 Bollywood film Dev D and the 2010 film Love Sex Aur Dhokha.
Taken together, the BPO data thefts and frauds and the data leak scandals prefigure the contemporary discourse on data breaches in the second decade of the 21st century, or what may also be called the Decade of Datafication. The launch of the Indian biometric identity project, Aadhaar, in 2009, which linked access to public services and welfare delivery with biometric identification, resulted in large-scale data collection of the scheme’s subscribers. Such linking raised the spectre of state surveillance as alleged by the critics of Aadhaar, marking a watershed moment in the discourse on data privacy and protection.
Aadhaar Data Security and Other Data Breaches
Aadhaar was challenged in the Indian Supreme Court in 2012 when it was made mandatory for welfare and other services such as banking, taxation and mobile telephony. The national debate on the status of privacy as a cultural practice in Indian society and a fundamental right under the Indian Constitution led to two landmark judgments - the 2017 Puttaswamy ruling holding privacy to be a constitutional right subject to limitations, and the 2018 Supreme Court judgment holding mandatory Aadhaar to be constitutional only for welfare and taxation but not for other services.
While these judgments sought to rein in Aadhaar’s proliferating mandatory uses, biometric verification remained the most common mode of identity authentication with most organizations claiming it to be mandatory for various purposes. During the same period from 2010 onwards, a range of data security events concerning Aadhaar came to light. These included app-based flaws, government websites publishing Aadhaar details of subscribers, third party leaks of demographic data, duplicate and forged Aadhaar cards and other misuses.
In 2015, the Indian government launched its ambitious Digital India Campaign to provide government services to Indian citizens through online platforms. Yet, data security breach incidents continued to increase, particularly the trade in the sale and purchase of sensitive financial information related to bank accounts and credit card numbers. The online availability of a rich trove of data, accessible via a simple Google search without the use of any extractive software or hacking skills, within a thriving shadow economy of data buyers and sellers, makes India a particularly vulnerable digital economy, especially in the absence of robust legislation. The lack of awareness around digital crimes and low digital literacy further exacerbate the situation, given that datafication via government portals, e-commerce, and online apps has outpaced the enforcement of legislative frameworks for data protection and cybersecurity.
In the context of Aadhaar data security issues, the term “data leak” seems to have more traction in media stories followed by the term “security breach”. Given the complexity of the myriad ways in which Aadhaar data has been breached, terms such as data leak and exposure (of 11 crore Indian farmers’ sensitive information) add to the specificity of the data security compromise. The term “fraud” also makes a comeback in the context of Aadhaar-related data security incidents. These cases represent a mix of data frauds involving fake identities, theft of thumb prints for instance from land registries and inadvertent data leaks in numerous incidents involving government employees in Jharkhand, voter ID information of Indian citizens in Andhra Pradesh and Telangana and activist reports of Indian government websites leaking Aadhaar data.
Aadhaar-related data security events parallel the increase in corporate data breaches during the decade of datafication. The term “data leak” again alternates with the term “data breach” in most media accounts while other terms such as “theft” and “scam” all but disappear in the media coverage of corporate data breaches.
From 2016 onwards, incidents of corporate data breaches in India continued to rise. A massive debit card data breach involving YES Bank ATMs and point-of-sale (PoS) machines compromised through malware between May and July of 2016 resulted in the exposure of ATM PINs and non-personal identifiable information of customers; it went undetected for nearly three months. Another data leak in 2018 concerned a system run by Indane, a state-owned utility company, which allowed anyone to download private information on all Aadhaar holders, including their names, the services they were connected to, and their unique 12-digit Aadhaar numbers. Data breaches continued to be reported in India concurrently with the incidents of data mismanagement related to Aadhaar. Prominent data breaches between 2019 and 2021 included a cyberattack on the systems of airline data service provider SITA, resulting in the leak of Air India passenger data; the leakage of the personal details of Common Admission Test (CAT) applicants; the appearance of Domino's pizza customers' credit card details and order preferences on the dark web; COVID-19 patients' test results leaked by government websites; the sale of Juspay and Big Basket user data on the dark web; and an SBI data breach, among others.
The media reportage of these data breaches uses the term “cyberattack” to describe the activities of hackers and cybercriminals operating within a shadow economy or on the dark web. Recent examples of cyberattacks by hackers who leak user data for sale on the dark web include 8.2 terabytes of sensitive financial data (KYC details, Aadhaar numbers, credit/debit card details and phone numbers) belonging to 110 million users of the payments app MobiKwik, 180 million Domino's pizza orders (names, locations, emails, mobile numbers), and the data of Flipkart-owned Cleartrip's users. In these incidents again, three terms appear prominently in the media reportage - cyberattack, data breach, and leak. The term “data breach” remains the most frequently used label in the media coverage of lapses in data security. While it alternates with the term “leak” in the stories, “data breach” appears consistently across most headlines.
The exposure of sensitive, personal, and non-personal data by public and private entities in India is certainly a cause for concern, given the ongoing data protection legislative vacuum.
The media coverage of data breaches tends to emphasize the quantum of compromised user data aside from the types of data exposed. The media framing of these breaches in quantitative terms of financial loss as well as the magnitude and the number of breaches certainly highlights the gravity of these incidents but harm to individual users is often not addressed.
Evolving Terminology and the Source of Data Harms
The main difference in the media reportage of the BPO cybersecurity incidents during the early aughts and the contemporary context of datafication is the usage of the term, “data breach”, which figures prominently in contemporary reportage of data security incidents but not so much in the BPO-related cybercrimes.
The BPO incidents of data theft and the attendant fraud must be understood in the context of the anxieties brought on by a globalizing world of Internet-enabled systems and transnational communications. In most of these incidents, regarded as cybercrimes, the language of fraud and scam ventures further to attribute the illegal actions of identifiable malefactors to cultural factors such as a lack of ethics and professionalism. The usage of the term “data leak” in these media reports functions more specifically to underscore a broader lapse in data security as well as a lack of robust cybersecurity laws. The broader term “breach” is occasionally used to refer to these incidents, but the term “data breach” does not appear as such.
The term “data breach” gains more prominence in media accounts from 2009 onwards in the context of Aadhaar and the online delivery of goods and services by public and private players. The term “data breach” is often used interchangeably with the term “leak” within the broader ambit of cyberattacks in the corporate sector. The media reportage frames Aadhaar-related security lapses as instances of security/data breaches, data leaks, fraud, and occasionally scam.
In contrast to the handful of data security cases in the BPO sector, data breaches have abounded in the second decade of the twenty-first century. What further differentiates the BPO-related incidents from contemporary data breaches is the source of the data security lapse. Most corporate data breaches are attributable to the actions of hackers and cybercriminals, while the BPO security lapses were traceable to ex-employees or insiders with access to sensitive data. In the coverage of the BPO-related incidents, we also see such data security lapses attributed to cultural factors, including a lack of ethics and professionalism, often with racial overtones. The media reportage of the BBC and ABC sting operations suggests that Indian BPOs’ lack of preparedness to handle and maintain the confidentiality of foreigners’ personal data points to the absence of a privacy culture in India. Interestingly, this transnational attribution recurs in a different form in the national debate on Aadhaar and the claim that Indians don’t care about their privacy.
The question of the harms of data breaches to individuals is also an important one. In the discourse on contemporary data breaches, actual material harm to an individual user is rarely established in the media reportage; it is generally framed as potential harm that could be devastating given the sensitivity of the compromised data. The harm is reported predominantly as a function of organizational cybersecurity weakness or attributed to hackers and cybercriminals.
Reporting harm in collective terms, the number of accounts breached, the financial costs of a data breach, the sheer number of breaches and the global rankings of countries with the highest reported cases, certainly suggests a problem with cybersecurity and a lack of organizational preparedness. However, this collective framing of a data breach’s impact usually elides an individual user’s experience of harm. Even in the case of Aadhaar-related breaches, a mix of data leaking from government websites and other online portals and outright breaches, the notion of harm owing to exposed data isn’t clearly established. This is, however, different from the extensively documented cases of Aadhaar-related issues in which welfare benefits have been denied, identities stolen and legitimate beneficiaries erased from the system due to technological errors.
Future Directions of Research
This brief, qualitative foray into the media coverage of data breaches over two decades has aimed to trace the usage of various terms in two different contexts: the Indian BPO-related incidents and the contemporary context of datafication. It would be worth exploring at length the relationship between frequent reports of data breaches and the language used to convey harm, in a context where concrete data protection legislation remains absent. It would also be instructive to examine more exhaustively the specific uses of terms such as “fraud”, “leak”, “scam”, “theft” and “breach” in media reporting of such data security incidents. Such analysis would elucidate how media reportage shapes public perception of the safety of user data and the anticipation of attendant harm as data protection legislation continues to evolve.
Especially with Aadhaar, which represents a paradigm shift in identity verification through digital means, it would be useful to conduct a sentiment analysis of how biometric identity-related frauds, scams, and leaks are reported by the mainstream news media. A study of user attitudes and behaviours in response to the specific terminology of data security lapses, such as “breach”, “leak”, “fraud”, “scam”, “cybercrime”, and “cyberattack”, would further illuminate how lay users understand the gravity of a data security lapse. Such research would go beyond the expert understandings of data security incidents that tend to dominate media reportage, elucidate the concerns of lay users, and further clarify the cultural meanings of data privacy.
‘Techplomacy’ and the negotiation of AI standards for the Indo-Pacific
This is a modified version of the post that appeared in The Strategist
By Arindrajit Basu with inputs from and review by Amrita Sengupta and Isha Suri
UN member states recently elected the American candidate Doreen Bogdan-Martin as the next secretary-general of the International Telecommunication Union (ITU), in what had been called “the most important election you have never heard of”. While this technical body’s work may be esoteric, the election was fiercely contested by a Russian candidate (and former Huawei executive), aptly reflecting the geopolitical competition underway to determine the “future of the internet” through the technical standards that underpin it. Even the Internet Protocol (IP), the set of rules governing the communication and exchange of data over the internet, is subject to political contestation between a Sino-Russian vision that would give governments greater control over the standard and a US vision ostensibly rooted in more inclusive multi-stakeholder participation.
As critical and emerging technologies take geopolitical centre-stage, the global tug of war over their development, utilisation, and deployment is playing out most ferociously at standard-setting organisations, an arm’s length away from the media limelight. Powerful state and non-state actors alike are already seeking to shape standards in ways that suit their economic, political, and normative priorities. It is time for emerging economies, middle powers and a wider array of private actors and civil society members to play a more meaningful and tangible role in the process.
What are standards and why do they matter
Simply put, standards are blueprints or protocols whose requirements ‘standardise’ products and related processes around the world, ensuring that they are interoperable, safe and sustainable. USB, WiFi or a QWERTY keyboard, for example, can be used anywhere because equipment built to these standards is interoperable across the world. Standards are negotiated both domestically, at national standard-setting bodies such as the Bureau of Indian Standards (BIS) or Standards Australia (SA), and globally, at standard-development organisations such as the International Telecommunication Union (ITU) or the International Organization for Standardization (ISO). While standards are not legally binding unless explicitly imposed as requirements in legislation, they have immense coercive value: products that do not adhere to recognised standards may not reach markets because they are not compatible with consumer requirements or cannot claim to meet health or safety expectations. The harmonisation of internationally recognised standards serves as the bedrock for global trade and commerce. Complying with a global standard is particularly critical because of its applicability across several markets. Further, international trade law provides that World Trade Organisation (WTO) members may impose trade-restrictive domestic measures only on the basis of published or soon-to-be-published international standards (Article 2.4 of the Agreement on Technical Barriers to Trade).
Shaping global standards is of immense geopolitical and economic value to states and the private sector alike. States that are able to ‘export’ their domestic technological standards internationally give their companies a significant economic advantage, since those companies are already compliant with the global standard and face lower costs of adoption. Further, companies draw huge revenue by holding patents to technologies that are essential to complying with a standard (known as Standard Essential Patents, or SEPs) and licensing them to other players who want to enter the market. For context, IPlytics estimated that cumulative global royalty income from licensing SEPs was USD 20 billion in 2020, and it is anticipated to increase significantly in the coming years due to the massive technological upgradation currently underway.
China’s push to shape the 5G standard at the Third Generation Partnership Project (3GPP) illustrates how prioritising standard-setting, through both domestic industrial policy and foreign policy, can deliver rich economic and geopolitical dividends. After failing to meaningfully influence the setting of the 3G and 4G standards, the Chinese government commenced a national effort that sought to harmonise domestic standards, improve government coordination of standard-setting efforts, and obtain a first-mover advantage over other nations developing their own domestic 5G standards. This was combined with a diplomatic push that saw vigorous private sector participation (Huawei put in 20 5G-related proposals, whereas Ericsson and Nokia put in just 16 and 10 respectively); packing key leadership positions in Working Groups with representatives from Chinese companies and institutions; and ensuring that all Chinese participants voted in unison for any proposal. It is no surprise, therefore, that Chinese companies now lead the way on 5G, with Huawei owning the largest number of 5G patents and having finalised more 5G contracts than any other company despite restrictions placed on its gear by some countries. As detailed in its “Make in China” strategy, China will now actively apply this playbook to other standard-setting avenues as well.
Standards for Artificial Intelligence
A number of institutions, including private actors such as Huawei and CloudWalk, contributed to China’s 2018 AI standardisation white paper, which was revised and updated in 2021. The white paper maps the work of SDOs in the field of AI standards and outlines a number of recommendations on how Chinese actors can use global SDOs to boost industrial competitiveness and globally promote “Chinese wisdom.” While there are cursory references to the role of standards in furthering “ethics” and “privacy,” the document does not outline how China will look to promote these values at SDOs.
Artificial Intelligence (AI) is a general-purpose technology with a wide variety of outcomes and use-cases. Top-down regulation of AI by governments is emerging across jurisdictions, but it may not keep pace with the rapidly evolving technology being developed by the private sector or adequately address the diversity of use-cases. On the other hand, private sector-driven self-regulatory initiatives focussed on ‘ethical AI’ are very broad and give technology companies too much leeway to evade the law. Technical standards offer a middle ground where multiple stakeholders can come together to devise uniform requirements for various stages of the AI development lifecycle. Of course, technical standards must co-exist with government-driven regulation as well as self-regulatory codes to holistically govern the deployment of AI globally. However, while the first two modes of regulation have received plenty of attention from policy-makers and scholars alike, AI standard-setting is an emerging field that has yet to be concretely evaluated from a strategic and diplomatic perspective.
Introducing a new CIS-ASPI project
This is why researchers at the Australian Strategic Policy Institute have partnered with the Centre for Internet and Society (Bengaluru) to produce a ‘techplomacy guide’ on negotiating AI standards for stakeholders in the Indo-Pacific. Given the immense economic value of shaping global technical standards, it is imperative that SDOs not be dominated only by the likes of the US, Europe or China. Standards devised from the vantage point of only a few countries will nonetheless affect a majority of nations, and may be blind to the needs of emerging economies. Further, there are values at stake here. An excessive focus on the security, accuracy or quality of AI-driven products may make some technologies palatable across the world even if they undermine core democratic values such as privacy and non-discrimination. China’s efforts at shaping Facial Recognition Technology (FRT) standards at the ITU have been criticised for moving beyond mere technical specifications into the domain of policy recommendations, despite a lack of representation of experts on human rights, consumer protection or data protection at the ITU. Accordingly, diversity of representation at SDOs, in terms of expertise, gender and nationality, including in leadership positions, is an aspect our project will explore with an eye towards creating more inclusive participation.
Through this project, we hope to identify how key stakeholders drive these initiatives and how technological standards can be devised in line with both core democratic values and strategic priorities. Through extensive consultations with several stakeholder groups, we plan to offer learning products to policy makers and technical delegates alike, enabling Australian and Indian delegates to serve as ambassadors for our respective nations.
For more information on this new and exciting project, funded by the Australian Department of Foreign Affairs and Trade as part of the Australia India Cyber and Critical Technology Partnership grants, visit www.aspi.org.au/techdiplomacy and https://www.internationalcybertech.gov.au/AICCTP-grant-round-two
Big Tech’s privacy promise to consumers could be good news — and also bad news
It remains to be seen whether Google’s Privacy Sandbox project will be truly privacy-preserving. (Reuters Illustration: Francois Lenoir)
In February, Facebook, rebranded as Meta, stated that its revenue in 2022 was anticipated to fall by $10 billion due to steps undertaken by Apple to enhance user privacy on its mobile operating system. More specifically, Meta attributed this loss to the new App Tracking Transparency feature, which requires apps to request permission from users before tracking them across other apps and websites or sharing their information with third parties. Through this change, Apple effectively shut the door on “permissionless” internet tracking and gave consumers more control over how their data is used. Meta alleged that this would hurt small businesses that benefit from access to targeted advertising services, and charged Apple with abusing its market power by using its app store to disadvantage competitors under the garb of enhancing user privacy.
Access the full article published in the Indian Express on April 13, 2022
The Centre for Internet and Society’s comments and recommendations on the Digital Personal Data Protection Bill, 2022
High Level Comments
1. Rationale for removing the distinction between personal data and sensitive personal data is unclear.
All the earlier iterations of the Bill, as well as the rules made under Section 43A of the Information Technology Act, 2000[1], classified data into two categories: (i) personal data; and (ii) sensitive personal data. The 2022 version of the Bill removes this distinction and clubs all personal data under one umbrella heading of personal data. The rationale for this is unclear, as sensitive personal data means data that could reveal or relate to eminently private matters such as financial data, health data, sexual orientation and biometric data. Given its sensitive nature, such data is ordinarily accorded higher protection and safeguards in processing; by clubbing all data together as personal data, these higher protections, such as the requirement of explicit consent for processing sensitive personal data and the bar on processing such data for employment purposes, have also been removed.
2. No clear roadmap for the implementation of the Bill
The 2018 Bill had specified a roadmap for the different provisions of the Bill to come into effect from the date of the Act being notified.[2] It specifically stated the time period within which the Authority had to be established and the subsequent rules and regulations notified.
The present Bill does not specify any such blueprint; it provides no details on when the Bill will be notified or the time period within which the Board will be established and the specific Rules and regulations notified. Considering that certain provisions have been deferred to Rules to be framed by the Central Government, the absence or delayed notification of such rules and regulations will impact the effective functioning of the Bill. Provisions such as Section 10(1), which deals with verifiable parental consent for the data of children, Section 13(1), which states the manner in which a Data Principal can exercise the right to correction, and the process of selection and functioning of consent managers under Section 3(7), are a few such examples: when the Act becomes applicable, the Data Principal will have to wait for the Rules for these provisions to become operational, or to get clarity on the entities created by the Act.
The absence of any sunrise or sunset provision may disincentivise the political or industrial will to support or enforce the provisions of the Bill. An example of such a lack of political will was the Cyber Appellate Tribunal, established in 2006 to redress cyber fraud. The tribunal was virtually defunct from 2011 onwards, when its last chairperson retired, and it was eventually merged with the Telecom Dispute Settlement and Appellate Tribunal in 2017.
We recommend that the Bill clearly lay out a time period for the implementation of its different provisions, especially a time frame for the establishment of the Board. This is important to give full and effective effect to the individual's right to privacy. It is also important to ensure that individuals have an effective mechanism to enforce the right and seek recourse in case of any breach of obligations by data fiduciaries.
The Board must ensure that Data Principals and Data Fiduciaries are sufficiently aware of the provisions of this Bill before the provisions on punishment are brought into force. This will allow Data Fiduciaries to align their practices with the new legislation, and give the Board time to define and determine the provisions that the Bill has left to it. Additionally, penalties for offences should initially be enforced in a staggered manner, combined with measures such as warnings, to prevent first-time and inadvertent offenders, which could now include Data Principals as well, from paying a high price. This will also reassure smaller companies, startups and individuals who might otherwise hesitate to process data for fear of incurring penalties.
3. Independence of Data Protection Board of India.
The Bill proposes the creation of the Data Protection Board of India (Board) in place of the Data Protection Authority. Compared with the powers envisaged for the regulator under the 2018 and 2019 versions of the Personal Data Protection Bill, the Board to be created under this Bill has significantly curtailed powers. Under Clause 19(2), the strength and composition of the Board, the process of selection, the terms and conditions of appointment and service, and the removal of its Chairperson and other Members shall be such as may be prescribed by the Union Government at a later stage. Further, as per Clause 19(3), the Chief Executive of the Board will be appointed by the Union Government, which will also determine the terms and conditions of her service. The functions of the Board have also not been specified under the Bill; the Central Government may assign the functions to be performed by the Board.
In order to govern data protection effectively, there is a need for a responsive market regulator with a strong mandate, the ability to act swiftly, and adequate resources. The political nature of personal data also requires that the governance of data, particularly the rule-making and adjudicatory functions performed by the Board, be independent of the Executive.
Chapter Wise Comments and Recommendations
CHAPTER I- PRELIMINARY
● Definition: While the Bill adds a few new definitions, including terms such as gains, loss and consent manager, a few key definitions from the earlier versions of the Bill have been removed. The removal of certain definitions, e.g. sensitive personal data, health data, biometric data and transgender status, creates legal uncertainty about the application of the Bill.
Among the existing definitions, the definition of the term ‘harm’ has been significantly narrowed, removing harms such as surveillance from its ambit. Further, Clause 2(20) of the 2019 version of the Bill provided a non-exhaustive list of harms by using the phrase “harm includes”; the new definition instead reads “‘harm’, in relation to a Data Principal, means”, thereby excluding harms that are not currently apparent from the purview of the Act. We recommend that the definition of harm be made a non-exhaustive list.
CHAPTER II - OBLIGATIONS OF DATA FIDUCIARY
Notice: The revised clause on notice does away with the comprehensive requirements laid out under Clause 7 of the PDP Bill 2019. The current clause does not specify in detail what the notice should contain, beyond stating that the notice should be itemised. While it can be reasoned that the Data Fiduciary can piece together the contents of the notice from the rest of the Bill, such as the provisions on the rights of the Data Principal, the removal of a detailed list creates uncertainty for Data Fiduciaries. By leaving out the finer details of what a notice should contain, the Bill risks Data Fiduciaries omitting key information, which in turn means incomplete information for the Data Principal. Data Fiduciaries themselves might not know whether they are complying with the provisions of the Bill, and could end up being penalised. Moreover, by requiring less of the Data Fiduciary and processor, the burden falls on the Data Principal to ensure they know how their data is collected and processed. The purpose of this legislation is to create further rights for individuals and consumers; the Bill should therefore strive to put the individual at the forefront.
In addition, Clause 6(3) of the Bill states: “The Data Fiduciary shall give the Data Principal the option to access the information referred to in sub-sections (1) and (2) in English or any language specified in the Eighth Schedule to the Constitution of India.” While the inclusion of regional language notices is a welcome step, we suggest that the text be revised as follows: “The Data Fiduciary shall give the Data Principal the option to access the information referred to in sub-sections (1) and (2) in English and in any language specified in the Eighth Schedule to the Constitution of India.” Since the crux of notice is to inform the person before they give consent, a notice in a language the person cannot read would not lead to meaningful consent.
Consent
Clause 3 of the Bill states that a “request for consent would have the contact details of a Data Protection Officer, where applicable, or of any other person authorised by the Data Fiduciary to respond to any communication from the Data Principal for the purpose of exercise of her rights under the provisions of this Act.” Ideally, this provision should be part of the notice and should be mentioned in the section above. It is similar to Clause 7(1)(c) of the draft Personal Data Protection Bill 2019, which requires the notice to state “the identity and contact details of the data fiduciary and the contact details of the data protection officer, if applicable”.
Deemed Consent
The Bill introduces a new type of consent that was absent in earlier versions. We understand deemed consent to be a way of redefining non-consensual processing of personal data. The use of the term “deemed consent” and the provisions under this section, while more concise than the earlier versions, could create more confusion for Data Principals and Fiduciaries alike. The definition and the examples do not shed light on one of the key issues with voluntary consent - the absence of notice. The Bill is also silent on whether deemed consent can be withdrawn, and on whether the Data Principal has the same rights as those flowing from the processing of data to which they have expressly consented.
Personal Data Protection of Children
The age at which a person can legally consent in the online world has been intertwined with the age for contracting under the Indian Contract Act, i.e. 18 years. The Bill makes no distinction between a 5-year-old and a 17-year-old: both are treated in the same manner, and the same level of maturity is assumed for all persons under the age of 18. It is pertinent to note that the law in the offline world does recognise such distinctions and acknowledges changes in the level of maturity. As per Sections 82 and 83 of the Indian Penal Code, an act by a child under the age of 7 is not an offence, and an act by a child between 7 and 12 years is excused where the child has not attained sufficient maturity of understanding, while the maturity of those aged between 12 and 18 years is decided by the court (individuals between 16 and 18 years can also be tried as adults for heinous crimes). Similarly, child labour laws in the country allow children above the age of 14 years to work in non-hazardous industries.
There is a need to evaluate and rethink the idea that children are passive consumers of the internet and that the consent of the parent is therefore enough. Additionally, bracketing all individuals under the age of 18 as children fails to account for how teenagers and young people use the internet. This is all the more important in light of 2019 data suggesting that two-thirds of India’s internet users are in the 12–29 years age group, with those in the 12–19 age group accounting for about 21.5% of total internet usage in metro cities. Given that the pandemic has compelled students and schools to adopt and adapt to virtual schooling, reliance on the internet has become ubiquitous in education. Out of an estimated 504 million internet users, nearly one-third are aged under 19. As per the Annual Status of Education Report (ASER) 2020, more than one-third of all schoolchildren are pursuing digital education, either through online classes or recorded videos.
Instead of setting a blanket age for determining valid consent, we could look at alternative means of determining appropriate protections for children at different levels of maturity, similar to what has been developed by the U.K. Information Commissioner’s Office. Its Age Appropriate Design Code prescribes 15 standards that online services need to follow. It broadly applies to online services "provided for remuneration", including those supported by online advertising, that process the personal data of, and are "likely to be accessed" by, children under 18 years of age, even if those services are not targeted at children. This includes apps, search engines, social media platforms, online games and marketplaces, news or educational websites, content streaming services, and online messaging services.
Reservations about the definition of a child under the Bill have also been expressed by some members of the JPC in their dissenting opinions. MP Ritesh Pandey stated that, keeping in mind the best interests of the child, the Bill should consider a child to be a person who is less than 14 years of age. This would ensure that young people could benefit from advances in technology without parental consent and reduce the social barriers that young women face in accessing the internet. Similarly, Manish Tiwari in his dissenting note observed that the regulation of the processing of children’s data should be based on the type of content or data. The JPC Report observed that the Bill does not require the data fiduciary to take fresh consent once the child has attained the age of majority, nor does it give the child the option to withdraw consent upon reaching majority. It therefore made the following recommendations:
- Registration of data fiduciaries exclusively dealing with children’s data.
- Application of the Majority Act to a contract with a child.
- An obligation on the data fiduciary to inform a child to provide fresh consent three months before the child attains majority.
- Continuation of services until the child opts out or gives fresh consent upon attaining majority.
However, these recommendations have not been incorporated into the provisions of the Bill. The Bill is also silent on the status of non-consensual processing and deemed consent with respect to the data of children.
We recommend that fiduciaries whose services are targeted at children be considered Significant Data Fiduciaries. The Bill should also state that guardians can approach the Data Protection Board on behalf of a child. With these obligations in place, the age of mandatory parental consent could be reduced, and the data fiduciary could take on the added responsibility of informing children, in the simplest manner, how their data will be used. Such an approach places a responsibility on Data Fiduciaries when implementing services that will be used by children and allows children to be aware of data processing when they interact with technology.
CHAPTER III - RIGHTS AND DUTIES OF DATA PRINCIPAL
Rights of Data Principal
Clause 12(3) of the Bill, while providing the Data Principal the right to be informed of the identities of all the Data Fiduciaries with whom their personal data has been shared, also states that the Data Principal has the right to be informed of the categories of personal data shared. However, the current version of the Bill provides for only one category of data, namely personal data.
Clause 14 of the Bill deals with the right to grievance redressal and states that the Data Principal has the right to readily available means of registering a grievance. However, the notice provisions of the Bill do not require the details of a grievance officer or a grievance redressal mechanism to be mentioned. It is only in the additional obligations on Significant Data Fiduciaries that a Data Protection Officer is required to act as the point of contact for the grievance redressal mechanism under the provisions of this Bill. The Bill could ideally re-use the provisions of the IT Act SPDI Rules 2011, in which Rule 5(7) states: “Body corporate shall address any discrepancies and grievances of their provider of the information with respect to processing of information in a time bound manner. For this purpose, the body corporate shall designate a Grievance Officer and publish his name and contact details on its website. The Grievance Officer shall redress the grievances or provider of information expeditiously but within one month from the date of receipt of grievance.”
The above framing would not only bring clarity to Data Fiduciaries on what process to follow for grievance redressal, it would also reduce a significant burden on the Board.
Duties of Data Principals
The Bill, while listing the duties of the Data Principal, states that the “Data Principal shall not register a false or frivolous grievance or complaint with a Data Fiduciary or the Board”. However, it is very difficult for a Data Principal, and even for the Board, to determine what constitutes a “frivolous grievance”. Moreover, the absence of a defined notice provision and the inclusion of deemed consent mean that the Data Fiduciary may have more information about the matter than the Data Principal, making it easier for the Fiduciary to characterise a claim as false or frivolous. Clause 21(12) states that “At any stage after receipt of a complaint, if the Board determines that the complaint is devoid of merit, it may issue a warning or impose costs on the complainant.” Clause 25(1) further states that “If the Board determines on conclusion of an inquiry that non-compliance by a person is significant, it may, after giving the person a reasonable opportunity of being heard, impose such financial penalty as specified in Schedule 1, not exceeding rupees five hundred crore in each instance.” The term “person” here includes Data Principals, which means they too could be penalised under the provisions of the Bill, including for not complying with their duties.
CHAPTER IV- SPECIAL PROVISIONS
Transfer of Personal Data outside India
Clause 17 of the Bill removes the data localisation requirements imposed by the 2018 and 2019 Bills. Personal data can be transferred to countries notified by the central government. There is no requirement for a copy of the data to be stored locally and no prohibition on transferring sensitive personal data and critical data. While it is a welcome change that personal data can be transferred outside India, we would highlight the concerns in permitting unrestricted access to and transfer of all types of data. Certain data, such as defence and health data, do require sectoral regulation and ring-fencing of transfers.
Exemptions
Clause 18 of the Bill widens the scope of government exemptions. A blanket exemption has been given to the State under Clause 18(4) from deleting personal data even when the purpose for which the data was collected is no longer served or when retention is no longer necessary. The requirements of proportionality, reasonableness and fairness for the Central Government to exempt any department or instrumentality from the ambit of the Bill have been removed. By doing away with the four-pronged test, the provision is not in consonance with the test laid down by the Supreme Court and is also incompatible with an effective privacy regulation. There is also no provision for either prior judicial review of an order by a district judge, as envisaged by the Justice Srikrishna Committee Report, or post facto review of the order by an oversight committee, as laid down under the Indian Telegraph Rules, 1951[3] and the rules framed under the Information Technology Act[4]. The provision merely states that such processing of personal data shall be subject to such procedure, safeguards and oversight mechanisms as may be prescribed.
[1] Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.
[2] Clause 97 of the 2018 Bill states: “(1) For the purposes of this Chapter, the term ‘notified date’ refers to the date notified by the Central Government under sub-section (3) of section 1. (2) The notified date shall be any date within twelve months from the date of enactment of this Act. (3) The following provisions shall come into force on the notified date-(a) Chapter X; (b) Section 107; and (c) Section 108. (4) The Central Government shall, no later than three months from the notified date establish the Authority. (5) The Authority shall, no later than twelve months from the notified date notify the grounds of processing of personal data in respect of the activities listed in sub-section (2) of section 17. (6) The Authority shall, no later than twelve months from the notified date, issue codes of practice on the following matters-(a) notice under section 8; (b) data quality under section 9; (c) storage limitation under section 10; (d) processing of personal data under Chapter III; (e) processing of sensitive personal data under Chapter IV; (f) security safeguards under section 31; (g) research purposes under section 45; (h) exercise of data principal rights under Chapter VI; (i) methods of de-identification and anonymisation; (j) transparency and accountability measures under Chapter VII. (7) Section 40 shall come into force on such date as is notified by the Central Government for the purpose of that section. (8) The remaining provision of the Act shall come into force eighteen months from the notified date.”
[3] Rule 419A (16): The Central Government or the State Government shall constitute a Review Committee.
Rule 419 A(17): The Review Committee shall meet at least once in two months and record its findings whether the directions issued under sub-rule (1) are in accordance with the provisions of sub-section (2) of Section 5 of the said Act. When the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above it may set aside the directions and orders for destruction of the copies of the intercepted message or class of messages.
[4] Rule 22 of Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009: The Review Committee shall meet at least once in two months and record its findings whether the directions issued under rule 3 are in accordance with the provisions of sub-section (2) of section 69 of the Act and where the Review Committee is of the opinion that the directions are not in accordance with the provisions referred to above, it may set aside the directions and issue an order for destruction of the copies, including corresponding electronic record of the intercepted or monitored or decrypted information.
Comments to the proposed amendments to The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
Preliminary
In these comments, we examine the constitutional validity of the proposed amendments, as well as whether the language of the amendments provide sufficient clarity for its intended recipients. This commentary is in-line with CIS’ previous engagement with other iterations of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
General Comments
Ultra vires the parent act
Section 79(1) of the Information Technology (IT) Act states that the intermediary will not be held liable for any third-party information if the intermediary complies with the conditions laid out in Section 79(2). One of these conditions is that the intermediary observe “due diligence while discharging his duties under this Act and also observe such other guidelines as the Central Government may prescribe in this behalf.” Further, Section 87(2)(zg) empowers the central government to prescribe “guidelines to be observed by the intermediaries under sub-section (2) of section 79.”
A combined reading of Section 79(2) and Section 87(2)(zg) makes it clear that the power of the Central Government is limited to prescribing guidelines related to the due diligence to be observed by intermediaries while discharging their duties under the IT Act. However, the proposed amendments extend beyond the original scope of these provisions of the IT Act.
In particular, the IT Act does not prescribe any classification of intermediaries. Section 2(1)(w) of the Act defines an intermediary, “with respect to any particular electronic records”, as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record and includes telecom service providers, network service providers, internet service providers, web-hosting service providers, search engines, online payment sites, online-auction sites, online-market places and cyber cafes”. Intermediaries are thus treated as a single monolithic category with the same responsibilities and obligations.
The proposed amendments have now established a new category of intermediaries, namely online gaming intermediary. This classification comes with additional obligations, codified within Rule 4A of the proposed amendments, including enabling the verification of user-identity and setting up grievance redressal mechanisms. The additional obligations placed on online gaming intermediaries find no basis in the IT Act, which does not specify or demarcate between different categories of intermediaries.
The 2021 Rules have been prescribed under Section 87(1) and Sections 87(2)(z) and (zg) of the IT Act. These provisions do not empower the Central Government to amend Section 2(w) or create any classification of intermediaries. As the Supreme Court held in State of Karnataka and Another v. Ganesh Kamath & Ors: “It is a well settled principle of interpretation of statutes that conferment of rule making power by an Act does not enable the rule making authority to make a rule which travels beyond the scope of the enabling Act or which is inconsistent therewith or repugnant thereto.” In this light, we argue that the proposed amendments cannot go beyond the parent act or prescribe policies in the absence of any law/regulation authorising them to do so.
Recommendation
We recommend that a regulatory intervention seeking to classify intermediaries and prescribe regulations specific to the unique nature of specific intermediaries should happen through an amendment to the parent act. The amendment should prescribe additional responsibilities and obligations of online gaming intermediaries.
A note on the following sections
Since the legality of classifying intermediaries into further categories is in question, we recommend that our subsequent discussion of the language of the provisions relating to online gaming intermediaries be taken into account when formulating any new legislation relating to these entities.
Specific comments
Fact checking amendment
Amendment to Rule 3(1)(b)(v) states that intermediaries are obligated to ask their users to not host any content that is, inter alia, “identified as fake or false by the fact check unit at the Press Information Bureau of the Ministry of Information and Broadcasting or other agency authorised by the Central Government for fact checking”.
Read together with Rule 3(1)(c), which gives intermediaries the prerogative to terminate user access to their resources on non-compliance with their rules and regulations, Rule 3(1)(b)(v) essentially affirms the intermediary’s right to remove content that the Central Government deems to be ‘fake’. However, in the larger context of India’s intermediary liability framework, where intermediaries that do not comply with the legal framework of Section 79 lose their immunity, provisions such as Rule 3(1)(b)(v) compel intermediaries to actively censor content on the apprehension of legal sanctions.
In this light, we argue that Rule 3(1)(b)(v) is constitutionally invalid, inasmuch as Article 19(2), which prescribes the grounds on which the government may restrict the right to free speech, does not permit restricting speech on the ground that it is ostensibly “fake or false”. In addition, the net effect of this rule would be that the government becomes the ultimate arbiter of what is considered ‘truth’, and every contradiction of its narrative is deemed false. In a democratic system like India’s, this is not a tenable position, and it would go against a rich jurisprudence of constitutional history on the need for plurality.
For instance, in Indian Express Newspapers v Union of India, the Supreme Court held that ‘the freedom of the press rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.’ Applying this interpretation to the present case, it could be said that the government’s monopoly on deciding what constitutes “fake or false” in the online space would prevent citizens from accessing dissenting voices and counterpoints to government policies.
This is problematic when one considers that in the Indian context, freedom of speech and expression has always been valued for its instrumental role in ensuring a healthy democracy, and its power to influence public opinion. In the present case, the government, far from facilitating any such condition, is instead actively indulging in guardianship of the public mind (Sarkar et al, 2019).
Other provisions in the IT Act which permit censorship of content, including Section 69A, allow the government to do so only when the content is relatable to the grounds enumerated in Article 19(2) of the Constitution. In addition, in Shreya Singhal vs Union of India, where the constitutionality of Section 69A was challenged, the Supreme Court upheld the provision because of the legal safeguards inherent in it, including a hearing offered to the originator of the impugned content and the requirement that reasons for censoring content be recorded in writing.
In contrast, a fact check by the Press Information Bureau or by another authorised agency provides no such safeguards, and does not relate to any constitutionally recognized ground for restricting speech.
Recommendation
The proposed amendment to Rule 3(1)(b)(v) is unconstitutional, and should be removed from the final draft of the law.
Clarifications are needed for online games rules definitions
The definitions of an "online game" and "online gaming intermediary" are currently extremely unclear and require further clarification.
As the proposed amendments stand, online games are characterised by the user's “deposit with the expectation of earning winnings”. Both the deposit and the winnings can be “cash” or “in kind", which does not adequately draw a boundary around the type of games this amendment seeks to cover. Can the time invested by a player in playing a game count as an “in kind” deposit? If a game provides a virtual in-game currency that can be exchanged for internal power-ups, even if there are no cash or gift-card payouts, does that count as “in kind” winnings? The rules, as currently drafted, are vague in their reference to “in kind” deposits and payouts.
This definition of online games also does not differentiate between single-player, multiplayer and traditional games that have found an audience online, such as Candy Crush (single player), Minecraft (multiplayer collaborative) or chess (traditional). It is unclear whether these games were intended to fall within the purview of these amendments, and whether they are all subject to the same due diligence requirements as pay-to-play games. This, in conjunction with the proposed Rule 6A, which allows the Ministry to deem any other game an online game for the purposes of the rules, also gives the Ministry broad, unpredictable powers. This ambiguity hinders clear comprehension of the expectations among the target stakeholders, affecting the consistency and predictability of the implementation of the rules.
Similarly, "online gaming intermediaries" are also defined very broadly as an "intermediary that offers one or more than one online game". As defined, any intermediary that even hosts a link to a game is classified as an online gaming intermediary, since the game is now "offered" through the intermediary. As drafted, there does not seem to be a material distinction between an "intermediary" as defined by the Act and an "online gaming intermediary" as specified by these rules.
Recommendation
We recommend further clarification on the definitions of these terms, especially for “in kind” and “offers” which are currently extremely vague terms that provide overbroad powers to the Ministry.
Intermediaries and Games
"Online gaming intermediaries" are defined very broadly as "intermediary that offers one or more than one online game". Intermediaries are defined in the Act as "any person who on behalf of another person receives, stores or transmits that message or provides any service with respect to that message".
According to the media coverage (Barik, 2023) around these amendments, there appears to be an effort to classify gaming companies as "online gaming intermediaries", but the language of the drafted amendments does not support this. “Intermediary” status is conferred on a company because of its functional role in primarily offering third-party content. It is not a classification for different types of internet companies, and thus must not be used to make rules for entities that do not perform this function.
Not all gaming companies present a collection of games for their users to play. According to the drafted definition, multiple platforms where games might be present, such as an app store where game developers publish their games for access by users, a website that lists links to online games, a social media platform that acts as an intermediary between two users exchanging links to games, and a website that hosts games for users to access directly, may all be classified as an "online gaming intermediary" since they "offer" games to users. This is a rather broad range of companies and functions to be singularly classified as an "online gaming intermediary".
Recommendation
We recommend a thoroughly researched legislative solution to regulating gaming companies that operate online rather than through amendments to intermediary rules. If some companies are indeed to be classified as “online gaming intermediaries”, there is a need for further reasoning on which type of gaming companies and their functions are intermediary functions for the purposes of these Rules.
Comments can be downloaded here
Civil Society’s second opinion on a UHI prescription
The article originally published by Internet Freedom Foundation can be accessed here.
The National Health Authority (NHA) released the Consultation Paper on Operationalising Unified Health Interface (UHI) in India on December 14, 2022. The deadline for submission of comments was January 13, 2023. We collaborated with the Centre for Health Equity, Law & Policy, the Centre for Internet & Society, & the Forum for Medical Ethics Society to submit comments on the paper.
Background
The UHI is proposed to be a “foundational layer of the Ayushman Bharat Digital Health Mission (ABDM)” and is “envisioned to enable interoperability of health services in India through open protocols”. The ABDM, previously known as the National Digital Health Mission, was announced by the Prime Minister on the 74th Independence Day, and it envisages the creation of a National Digital Health Ecosystem with six key features: Health ID, Digi Doctor, Health Facility Registry, Personal Health Records, Telemedicine, and e-Pharmacy. After launching the programme in six Union Territories, the National Health Authority issued a press release on August 26, 2020 announcing the public consultation for the Draft Health Data Management Policy for NDHM. While the government has repeatedly claimed that creation of a health ID is purely voluntary, contrary reports have emerged. In our comments as part of the public consultation, our primary recommendation was that deployment of any digital health ID programme must be preceded by the enactment of general and sectoral data protection laws by the Parliament of India; and meaningful public consultation which reaches out to vulnerable groups which face the greatest privacy risks.
As per the synopsis document which accompanies the consultation paper, it aims to “seek feedback on how different elements of UHI should function. Inviting public feedback will allow for early course correction, which will in-turn engender trust in the network and enhance market adoption. The feedback received through this consultation will be used to refine the functionalities of UHI so as to limit any operational issues going forward.” The consultation paper contains a set of close-ended questions at the end of each section through which specific feedback has been invited from interested stakeholders. We have collaborated with the Centre for Health Equity, Law & Policy, the Centre for Internet & Society, & the Forum for Medical Ethics Society to draft the comments on this consultation paper.
Our main concern relates to the approach adopted by the Government of India and the concerned Ministries in drafting a consultation paper without explicitly outlining how the proposed UHI fits into the broader healthcare ecosystem or quantifying how it improves it; this renders the consultation paper and public engagement efforts inadequate. It also does not allow the public at large and other stakeholders to understand how the UHI may contribute to people’s access to quality care and to the realisation of their constitutional right to health and health care. The close-ended nature of the consultation process, wherein specific questions have been posed, restricts stakeholders from questioning the structure of the ABDM itself and forces us to engage only with its parts, thereby incorrectly assuming that there is support for the direction in which the ABDM is being developed.
Our submissions
A. General comments
a. Absence of underlying legal framework
Ensuring health data privacy requires legislation at three levels: comprehensive laws, sectoral laws and informal rules. Here, the existing proposal for data protection legislation, i.e., the draft Digital Personal Data Protection Bill, 2022 (DPDPB, 2022), which could act as the comprehensive legal framework, is inadequate to sufficiently protect health data. This inadequacy arises from the failure of the DPDPB, 2022 to give a higher degree of protection to sensitive personal data and from its allowing non-consensual processing of health data in certain situations under Clause 8, which relates to “deemed consent”. It may also be noted that the DPDPB, 2022 fails to specifically define either health or health data. Further, the proposed Digital Information Security in Healthcare Act, 2017, which could have acted as a sectoral law, is presently before the Parliament and has not been enacted. In this legislative vacuum, the absence of safeguards allows data capture by health insurance firms and subsequent exclusion or higher costs for vulnerable groups of people. Similarly, such data capture by other third parties potentially allows commercial interests to creep in at the cost of users of health care services and in breach of their privacy and dignity.
b. Issues pertaining to scope
Clarity is needed on whether UHI will be only providing healthcare services through private entities, or will also include the public health care system and various health care schemes and programs of the government, such as eSanjeevani.
c. Pre-existing concerns
- Exclusion: Access to health services through the Unified Health Interface should not be made contingent upon possessing an ABHA ID, as alluded to in the section on ‘UHI protocols in action: An example’ under Chapter 2(b). Such an approach is contrary to the Health Data Management Policy, which is based on individual autonomy and voluntary participation. Clause 16.4 of the Policy clearly states that nobody will “be denied access to any health facility or service or any other right in any manner by any government or private entity, merely by reason of not creating a Health ID or disclosing their Health ID…or for not being in possession of a Health ID.” Moreover, the National Medical Commission’s Guidelines for Telemedicine in India also do not create any obligation for the patient to possess an ABHA ID in order to access any telehealth service. The UHI should explicitly state that a patient can log in to the network using any identification, and not just an ABHA.
- Consent: As per media reports, registration for a UHID under the NDHM, which is an earlier version of the ABHA number under the ABDM, may have been voluntary on paper but it was being made mandatory in practice by hospital administrators and heads of departments. Similarly, reports suggest that people who received vaccination against COVID-19 were assigned a UHID number without their consent or knowledge.
- Function creep: In the absence of an underlying legal framework, concerns also arise that the health data under the NDHM scheme may suffer from function creep, i.e., the collected data being used for purposes other than those for which consent has been obtained. These concerns arise due to similar function creep taking place in the context of data collected by the Aarogya Setu application, which has now pivoted from being a contact-tracing application to the “health app of the nation”. Here, it must be noted that as per an RTI response dated June 8, 2022 from NIC, the Aarogya Setu Data Access And Knowledge Sharing Protocol “has been discontinued”.
- Issues with the Unified Payments Interface may be replicated by the UHI: The consultation paper cites the Unified Payments Interface (UPI) as “strong public digital infrastructure” which the UHI aims to leverage. However, a trend towards market concentration can be witnessed in UPI: the two largest entities, Google Pay and PhonePe, have seen their market shares hover around 35% and 47% (by volume) for some time now (their shares by value transacted are even higher). Meanwhile, the share of the NPCI’s own app (BHIM) has fallen from 40% in August 2017 to 0.74% in September 2021. Thus, if such a model is to be adopted, it is important to study the UPI model to understand such threats and ensure that a similar trend towards oligopoly or monopoly formation in UHI is addressed. This is all the more important in a country in which the decreasing share of the public health sector has led to skyrocketing healthcare costs for citizens.
B. Our response also addressed specific questions about search and discovery, service booking, grievance redressal, and fake reviews and scores. Our responses on these questions can be found in our comments here.
Our previous submissions on health data
We have consistently engaged with the government since the announcement of the NDHM in 2020. Some of our submissions and other outputs are linked below:
- IFF’s comment on the Draft Health Data Management Policy dated May 21, 2022 (link)
- IFF’s comments on the consultation Paper on Healthcare Professionals Registry dated July 20, 2021 (link)
- IFF and C-HELP Working Paper: ‘Analysing the NDHM Health Data Management Policy’ dated June 11, 2021 (link)
- IFF’s Consultation Response to Draft Health Data Retention Policy dated January 6, 2021 (link)
- IFF’s comments on the National Digital Health Mission’s Health Data Management Policy dated September 21, 2020 (link)
Important documents
- Response on the Consultation Paper on Operationalising Unified Health Interface (UHI) in India by Centre for Health Equity, Law & Policy, the Centre for Internet & Society, the Forum for Medical Ethics Society, & IFF dated January 13, 2023 (link)
- NHA’s Consultation Paper on Operationalising Unified Health Interface (UHI) in India dated December 14, 2022 (link)
- Synopsis of NHA’s Consultation Paper on Operationalising Unified Health Interface (UHI) in India dated December 14, 2022 (link)
CensorWatch: On the Implementation of Online Censorship in India
Abstract: State authorities in India order domestic internet service providers (ISPs) to block access to websites and services. We developed a mobile application, CensorWatch, that runs network tests to study inconsistencies in how ISPs conduct censorship. We analyse the censorship of 10,372 sites, with measurements collected across 71 networks from 25 states in the country. We find that ISPs in India rely on different methods of censorship with larger ISPs utilizing methods that are harder to circumvent. By comparing blocklists and contextualising them with specific legal orders, we find concrete evidence that ISPs in India are blocking different websites and engaging in arbitrary blocking, in violation of Indian law.
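As a rough illustration of the kind of network test such a tool runs (a minimal sketch of ours, not the CensorWatch implementation), the snippet below compares the answer given by the ISP-assigned system resolver for a hostname with the answer from a public DNS-over-HTTPS resolver. A mismatch is only a weak signal of DNS-based censorship, since CDNs and geo-load-balancing also produce differing answers; real measurement tools apply many additional checks. The test hostname is a placeholder.

# Illustrative sketch only (not the CensorWatch code): compare the system
# resolver's A records for a hostname with those returned by Google's public
# DNS-over-HTTPS JSON API. Differences are only a weak signal of DNS tampering.
import json
import socket
import urllib.request

def system_resolver_ips(hostname):
    # Resolve through the ISP-assigned (system) resolver.
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {info[4][0] for info in infos}

def doh_resolver_ips(hostname):
    # Resolve the same hostname via an encrypted DNS-over-HTTPS resolver.
    url = f"https://dns.google/resolve?name={hostname}&type=A"
    with urllib.request.urlopen(url, timeout=10) as resp:
        answers = json.load(resp).get("Answer", [])
    return {rr["data"] for rr in answers if rr.get("type") == 1}  # type 1 = A record

if __name__ == "__main__":
    host = "example.com"  # placeholder test site
    local, trusted = system_resolver_ips(host), doh_resolver_ips(host)
    if local and trusted and not (local & trusted):
        print(f"{host}: ISP answer {local} differs from DoH answer {trusted} - possible DNS tampering")
    else:
        print(f"{host}: no DNS-level discrepancy observed")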
The paper authored by Divyank Katira, Gurshabad Grover, Kushagra Singh and Varun Bansal appeared as part of the conference on Free and Open Communications on the Internet (FOCI '23) and can be accessed here.
The authors would like to thank Pooja Saxena and Akash Sheshadri for contributing to the visual design of CensorWatch; Aayush Rathi, Amber Sinha and Vipul Kharbanda for their valuable legal inputs; Internet Freedom Foundation for their support; and ipinfo.io for providing free access to their data and services. The work was made possible because of research grants to the Centre for Internet and Society from the MacArthur Foundation, Article 19, the East-West Management Institute and the New Venture Fund. Gurshabad Grover’s contributions were supported by a research fellowship from the Open Tech Fund.
CoWIN Breach: What Makes India's Health Data an Easy Target for Bad Actors?
The article was originally published in the Quint on 19 June 2023.
Last week, it was reported that due to an alleged breach of the CoWIN platform, details such as Aadhaar and passport numbers of Indians were made public via a Telegram bot.
While Minister of State for Information Technology Rajeev Chandrasekhar put out information acknowledging that there was some form of a data breach, there is no information on how the breach took place or when a past breach may have occurred.
This data leak is yet another example of our health records being exposed in the recent past – during the pandemic, there were reports of COVID-19 test results being leaked online. The leaked information included patients’ full names, dates of birth, testing dates, and names of centres in which the tests were held.
In December last year, five servers of the All India Institute of Medical Sciences (AIIMS) in Delhi came under a cyberattack, leaving sensitive personal data of around 3-4 crore patients compromised.
In such cases, the Indian Computer Emergency Response Team (CERT-In) is the agency responsible for looking into the vulnerabilities that may have led to such incidents. However, till date, CERT-In has not made its technical findings on such attacks publicly available.
The COVID-19 Pandemic Created Opportunity
The pandemic saw a number of digitisation policies being rolled out in the health sector; the most notable one being the National Digital Health Mission (or NDHM, later re-branded as the Ayushman Bharat Digital Mission).
Mobile phone apps and web portals launched by the central and state governments during the pandemic are also examples of this health digitisation push. The rollout of the COVID-19 vaccinations also saw the deployment of the CoWIN platform.
Initially, it was mandatory for individuals to register on CoWIN to get an appointment for vaccination, with no option for walk-in registration or on-site appointment booking. But the Centre subsequently modified this rule, and walk-in appointments and registrations became permissible from June 2021.
However, a study conducted by the Centre for Internet and Society (CIS) found that states such as Jharkhand and Chhattisgarh, which have low internet penetration, permitted on-site registration for vaccinations from the beginning.
The rollout of the NDHM also saw Health IDs being generated for citizens.
In several reported cases across states, this rollout happened during the COVID-19 vaccination process – without the informed consent of the concerned person.
The beneficiaries who have had their Health IDs created through the vaccination process had not been informed about the creation of such an ID or their right to opt out of the digital health ecosystem.
A Web of Health Data Policies
Even before the pandemic, India was working towards a Health ID and a health data management system.
The components of the umbrella National Digital Health Ecosystem (NDHE) are the National Digital Health Blueprint published in 2019 (NDHB) and the NDHM.
The Blueprint was created to implement the National Health Stack (published in 2018), which facilitated the creation of Health IDs, whereas the NDHM was drafted to drive the implementation of the Blueprint and to promote and facilitate the evolution of the NDHE.
The National Health Authority (NHA), established in 2018, has been given the responsibility of implementing the National Digital Health Mission.
2018 also saw the Digital Information Security in Healthcare Act (DISHA), which was to regulate the generation, collection, access, storage, transmission, and use of Digital Health Data ("DHD") and associated personal data.
However, since its call for public consultation, no progress has been made on this front.
In addition to documents that chalk out the functioning and the ecosystem of a digitised healthcare system, the NHA has released policy documents such as:
- the Health Data Management Policy (which was revised three times; the latest version released in April 2022)
- the Health Data Retention Policy (released in April 2021)
- Consultation paper on the Unified Health Interface (UHI) (released in December 2022)
Along with these policies, in 2022, the NHA released the NHA Data Sharing Guidelines for the Pradhan Mantri Jan Aarogya Yojana (PM-JAY) – India’s state health insurance policy.
However, these draft guidelines repeat the pattern of earlier policies on health data, wherein there is no reference to the policies that preceded them; the PM-JAY Data Sharing Guidelines, published in August 2022, did not even refer to the draft National Digital Health Data Management Policy (published in April 2022).
Interestingly, the recent health data policies do not mention CoWIN. Failing to cross-reference or mention preceding policies creates a lack of clarity on which documents are being used as guidelines by healthcare providers.
Can a Data Protection Bill Be the Solution?
The draft Data Protection Bill, 2021, defined health data as “…the data related to the state of physical or mental health of the data principal and includes records regarding the past, present or future state of the health of such data principal, data collected in the course of registration for, or provision of health services, data associated with the data principal to the provision of specific health services.”
However, this definition as well as the definition of sensitive personal data was removed from the current version of the Bill (Digital Personal Data Protection Bill, 2022).
Omitting these definitions from the Bill removes a category of data which, if collected, warrants increased responsibility and liability. The handling of health data, financial data, government identifiers, etc. needs to come with a higher level of responsibility, as these are sensitive details about a person.
The threats posed by such data being leaked are not limited to spam messages, fraud, and impersonation; companies that get hold of this coveted data can gather insights and train their systems and algorithms without needing to seek consent from anyone, and without facing the consequences of the harm caused.
While the current version of the draft DPDP Bill states that the data fiduciary shall notify the data principal of any breach, the draft Bill also states that the Data Protection Board “may” direct the data fiduciary to adopt measures that remedy the breach or mitigate harm caused to the data principal.
The Bill also prescribes penalties of up to Rs 250 crore if the data fiduciary fails to take reasonable security safeguards to prevent a personal data breach, and a penalty of up to Rs 200 crore if the fiduciary fails to notify the Data Protection Board and the data principal of such a breach.
While these steps, if implemented through legislation, would make organisations processing data take their data security more seriously, the removal of sensitive personal data from the definitions in the Bill would mean that data fiduciaries processing health data will not have to take additional steps beyond reasonable security safeguards.
The absence of a clear indication of security standards will affect data principals and fiduciaries.
Looking to bring more efficiency to governance systems, the Centre launched the Digital India Mission in 2015. The press release by the central government reporting the approval of the programme by the Cabinet of Ministers speaks of ‘cradle to grave’ digital identity as one of its vision areas.
The ambitious Universal Health ID and health data management policies are an example of this digitisation mission.
However, breaches like this are reminders that without proper data security measures, and without a clearly designated person responsible for data security, data remains vulnerable to attack.
While the UK and Australia have also seen massive data breaches in the past, India is at the start of its health data digitisation journey and has the ability to set up strong security measures, employ experienced professionals, and establish legal resources to ensure that data breaches are minimised and swift action can be taken in case of a breach.
The first step towards understanding the vulnerabilities would be to make the CERT-In reports on this breach public and to guide other institutions to check for similar weaknesses, so that they are better prepared for future breaches and attacks.
Health Data Management Policies - Differences Between the EU and India
This issue brief was reviewed and edited by Pallavi Bedi
Introduction
Health data has seen increased interest the world over, on account of the amount of information and inferences that can be drawn not just about a person but also about the population in general. The COVID-19 pandemic brought further focus on health data, and required players that earlier did not collect such data, including offices and public spaces, to collect it. This increased interest has led to further thought on how health data is regulated and a greater understanding of the sensitivity of such data, because of which countries are at various stages of regulating health data over and above their existing data protection regulations. These regulations look not only at ensuring the privacy of the individual but also at ways in which this data can be shared with companies, researchers, and public bodies to foster innovation and to monetise this valuable data. For a number of countries, however, the effort is still focused on the digitisation of health data. India has been in the process of implementing a nationwide health ID that can be used by a person to access all their medical records in one place. The National Health Authority (NHA) has also, since 2017, been publishing policies that look at the framework and ecosystem of health data, as well as the management and sharing of health data. However, these policies and a scattered implementation of the health ID are being carried out without a data protection legislation in place. In comparison, Europe, which already has an established health ID system and a data protection legislation (the GDPR), is looking at the next stage of health data management through the EU Health Data Space (EUHDS). Through this issue brief, we would like to highlight the differences in the approaches to health data management taken by the EU and India, and look at possible recommendations for India in creating a privacy-preserving health data management policy.
Background
EU Health Data Space
The EU Health Data Space (EUHDS) was proposed by the European Commission as a way to create an ecosystem that combines rules, standards, practices, and infrastructure around health data under a common governance framework. The EUHDS is set to rely on two pillars, namely MyHealth@EU and HealthData@EU: MyHealth@EU facilitates the easy flow of health data between patients and healthcare professionals within member states, while HealthData@EU facilitates the secondary use of data, giving policymakers and researchers access to health data to foster research and innovation.[1] The EUHDS aims to provide a trustworthy system to access and process health data, and builds on the General Data Protection Regulation (GDPR) and the proposed Data Governance Act.[2]
India’s health data policies:
The last few years have seen a flurry of health policies and documents being published and the creation of a framework for the evolution of a National Digital Health Ecosystem (NDHE). The components of this ecosystem are the National Digital Health Blueprint published in 2019 (NDHB) and the National Digital Health Mission (NDHM). The Blueprint was created to implement the National Health Stack (published in 2018), which facilitated the creation of Health IDs,[3] whereas the NDHM was drafted to drive the implementation of the Blueprint and to promote and facilitate the evolution of the NDHE.[4]
The National Health Authority (NHA), established in 2018, has been given the responsibility of implementing the National Digital Health Mission. 2018 also saw the draft Digital Information Security in Healthcare Act (DISHA), a proposed legislation that laid down provisions to regulate the generation, collection, access, storage, transmission, and use of Digital Health Data ("DHD") and associated personal data.[5] However, since its call for public consultation, no progress has been made on this front.
Along with these three strategy documents, the NHA has also released policy documents, most notably the Health Data Management Policy (which was revised three times; the latest version released in April 2022), the Health Data Retention Policy (released in April 2021), and the Consultation Paper on the Unified Health Interface (UHI) (released in March 2021). Along with this, in 2022 the NHA released the NHA Data Sharing Guidelines for the Pradhan Mantri Jan Aarogya Yojana (PM-JAY), India’s state health insurance policy.
However, these draft guidelines repeat the pattern of earlier policies on health data, wherein there is no reference to the policies that preceded them; the PM-JAY Data Sharing Guidelines, published in August 2022, did not even refer to the draft National Digital Health Data Management Policy (published in April 2022). As these examples show, the documents do not cross-reference or mention preceding health data documents, creating a lack of clarity about which documents are being used as guidelines by healthcare providers.
In addition to this, the Personal Data Protection Bill has been revised three times since its release in 2018. The latest version was published for public comments on November 18, 2022; this version removes the distinction between sensitive personal data and personal data and clubs all personal data under the single umbrella heading of personal data. The definitions of health and health data have also been deleted, creating further uncertainty with respect to health data, as the different policies mentioned above rely on the data protection legislation to define health data.
Comparison of the Health Data Management Approaches
Interoperability with Data Protection Legislations
At the outset, the key difference between the EU’s and India’s health data management policies is the legal backing that the EUHDS derives from the GDPR. The EUHDS has a strong base of rules for privacy and data protection, as it follows, draws from, and works in tandem with the General Data Protection Regulation (GDPR). Its provisions also build upon legislation such as the Medical Devices Regulation and the In Vitro Diagnostics Regulation. With particular respect to the GDPR, the EUHDS draws from the rights set out for the protection of personal data, including electronic health data.
The Indian health data policies, however, currently exist in the vacuum created by the multiple versions of the Data Protection Bill that have been published and then withdrawn or replaced. The current version, the Digital Personal Data Protection Bill, 2022, seems to take a step backward in terms of health data. It does away with sensitive personal data (of which health data was a part) and keeps only one category of data: personal data. The Bill can thus be construed as treating all personal data as needing the same level of protection, which is not the case in practice. The Bill does not, at the moment, mandate additional responsibilities for data fiduciaries[6] that deal with health data (something that was present in all the earlier versions of the Bill, and in data protection legislation across other jurisdictions); instead, it leaves the designation of Significant Data Fiduciaries (who have more responsibilities) to rules, based on the sensitivity of data, to be decided by the government at a later date.[7] In addition, the Bill does not define “health data”. This is a cause for worry because the existing health data policies also do not define health data, often relying on the definitions in the various versions of the Data Protection Bill.
Definitions and Scope
The EUHDS defines ‘personal electronic health data’ as data concerning health and genetic data as defined in Regulation (EU) 2016/679[8], as well as data referring to determinants of health, or data processed in relation to the provision of healthcare services, processed in an electronic form. By these parameters, health data would include not just data about the status of a person’s health, such as reports and diagnoses, but also data from medical devices.
In India, the Health Data Management Policy 2022 defines a “Personal Health Record” (PHR) as a health record that is initiated and maintained by an individual. The policy also states that a PHR would be able to reveal a complete and accurate summary of the health and medical history of an individual by gathering data from multiple sources and making it accessible online. However, there is no definition of health data which companies or users can rely on to know what falls under health data. The 2018, 2019, and 2021 versions of the data protection legislation contained definitions of the term health data; the 2022 version of the Bill, however, does away with the definition.
Health data and wearable devices
One of the forward-looking provisions of the EUHDS is the inclusion of devices that record health data within its scope. It also requires such devices to be added to registries to enable easy access and scrutiny. The document further requires voluntary labelling of wellness applications and registration of EHR systems and wellness applications. This matters not just from a regulatory point of view but also for data portability, so that people can control the data they share. In addition, where manufacturers of medical devices and high-risk AI systems declare interoperability with EHR systems, they will need to comply with the essential requirements on interoperability under the EUHDS.
In India, the Health Data Management Policy 2022, while listing the entities and individuals who are part of the ABDM ecosystem,[9] mentions medical device manufacturers but does not mention device sellers or use terms such as wellness applications or wearable devices. Currently, the regulation of medical devices falls under the purview of the Drugs and Cosmetics Act, 1940 (DCA) read along with the Medical Device Rules, 2017 (MDR). However, in 2020, possibly due to the pandemic, the Indian government, along with the Drugs Technical Advisory Board (DTAB), issued two notifications: the first expanded the scope of medical devices, which was earlier limited to only 37 categories excluding medical apps, and the second notified the Medical Device (Amendment) Rules, 2020. Together, these changes brought all medical devices under the DCA and expanded the categories of medical devices. It is still unclear, however, whether fitness tracker apps that come with devices are regulated, as the rules and the DCA still rely on the manufacturer to self-identify as a medical device.[10] This regulatory uncertainty has not brought about any change in how such data is used, and insurance companies at times encourage people to sync their fitness tracker data.[11]
Multiple use of health data
The EUHDS sets out two types of uses of data: primary and secondary use. In the document, the EU notes that while a number of organisations collect health data, this data is not made available for purposes other than those for which it was collected. To ensure that researchers, innovators, and policymakers can use this data, the EU encourages data holders to contribute by making the different categories of electronic health data they hold available for secondary use. The data eligible for secondary use would also include user-generated data, such as data from devices, applications, or other wearables and digital health applications. However, the regulation cautions against using this data for measures and decisions that are detrimental to the individual, such as increasing insurance premiums. The EUHDS also states that, as this is sensitive personal data, data access bodies should take care to ensure that shared data is processed in a privacy-preserving manner. This could include pseudonymisation, anonymisation, generalisation, suppression, and randomisation of personal data.
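To make the distinction between these techniques concrete, the minimal sketch below is our own illustration (the field names, the record, and the key handling are hypothetical and not drawn from the EUHDS text): it pseudonymises a direct identifier with a keyed hash and generalises a quasi-identifier into a coarser band. Production-grade anonymisation would need far stronger guarantees and governance around the key.

# Toy illustration only: pseudonymisation (keyed hashing of a direct identifier)
# and generalisation (coarsening a quasi-identifier). All names are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # would be held by the data access body

def pseudonymise(identifier):
    # Replace a direct identifier with a keyed hash; re-linking needs the key holder.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def generalise_age(age):
    # Coarsen an exact age into a 10-year band to reduce identifiability.
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

record = {"health_id": "91-1234-5678-9012", "age": 47, "diagnosis": "type 2 diabetes"}
shared = {
    "patient_ref": pseudonymise(record["health_id"]),
    "age_band": generalise_age(record["age"]),
    "diagnosis": record["diagnosis"],
}
print(shared)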
While the document states how important the secondary use of data is for public health, research, and innovation, it also requires that the data not be provided without adequate checks. The EUHDS requires the organisation seeking access to provide several pieces of information and to be evaluated by the data access body. The information should include the legitimate interest, the necessity, and the processing the data will undergo. Where the organisation is seeking pseudonymised data, it must explain why anonymised data would not be sufficient. To ensure a consistent approach across health data access bodies, the EUHDS states that the European Commission should support the harmonisation of data applications as well as data requests.
In India, while multiple health data documents state the need to share data for public interest, research, and innovation, not much thought has been given to ensuring that the data is not misused and that there is harmonisation between the bodies that provide the data. Most recently, the PM-JAY guidelines state that the NHA shall make aggregated and anonymised data available through a public dashboard for the purpose of facilitating health and clinical research, academic research, archiving, statistical analysis, policy formulation, the development and promotion of diagnostic solutions, and such other purposes as may be specified by the NHA. Such data can also be accessed through a request to the Data Sharing Committee,[12] which may share the information through secure modes, including clean rooms and other such secure modes specified by the NHA. However, the document does not explain what clean rooms are in this context.
The Health Data Management Policy 2022 states that data fiduciaries (data controllers/processors in the terms of the data protection legislation) can themselves make anonymised or de-identified data available in an aggregated form, based on technical processes and anonymisation protocols which may be specified by the NDHM in consultation with MeitY. The purposes mentioned in this policy include health and clinical research, academic research, archiving, statistical analysis, policy formulation, the development and promotion of diagnostic solutions, and such other purposes as may be specified by the NDHMP. The policy states that, in order to access the anonymised or de-identified data, the entity requesting it would have to provide relevant information such as its name, the purpose of use, and the contact details of a nodal person. While the policy does not go into detail about the scrutiny of the organisations seeking this data, it does state that the data will be provided on such terms as may be stipulated.
An issue arises, however, because the two documents published by the NHA do not prescribe a similar process for obtaining the data: the NDHMP requires the data fiduciary to share the data directly, while the PM-JAY guidelines require the data to be shared by the Data Sharing Committee, creating duplicate datasets as well as affecting the quality of the data being shared.
Recommendations for India
Need for a data protection legislation:
While the EUHDS is still a draft document and the end result could differ based on the consultations and deliberations, the document has a strong base with respect to privacy and data protection, built on earlier regulations and the GDPR. The definitions of what counts as health data and the parameters for managing the data create a more streamlined process for all stakeholders. More importantly, the GDPR and other regulations provide an avenue of recourse for people. In India, the health data policies and strategy documents have been published and enforced before a data protection legislation has been passed. In addition, India, unlike the EU, has only just begun looking at a universal health ID and the digitisation of the healthcare system; ideally, it would be better to take one step at a time and first address the issues that may arise from the universal health ID. Moreover, multiple policies without a strong data protection legislation providing parameters and definitions could mean that the health data management policies only benefit certain people. This also creates uncertainty about where an individual can go in case of harms caused by the processing of their data, and about which authority would govern questions around health data. The division of health data management between different documents also creates multiple silos of data management, which leads to data duplication and issues with data quality.
Secondary use of data
While both the EUHDS and India's Health Data Management Policy look at sharing health data with researchers and private organisations in order to foster innovation, dividing access to data based on who uses it is a good way to ensure that only interested parties can access the data. With respect to the health data policies in India, a number of policies talk about sharing anonymised data with researchers; however, because the documents are scattered, the same data could be shared by multiple health data entities, making it possible to identify people. For example, the Health Data Management Policy could share anonymised data on the health services used by a person, while the PM-JAY policy could share data about insurance covers, and a researcher could match the two and get closer to identifying individuals. Multiple studies have also shown that anonymisation is not permanent and can be broken. This is all the more concerning since the policies do not put limits or checks on who the researchers are or what the end goal of the data sought by them is; the policies seem to rely on anonymisation as the only check for privacy. Such data could be used to de-anonymise people, or be used by companies working with researchers to obtain large amounts of data to train their systems, which could in turn lead to greater surveillance, increased insurance scrutiny, and so on. The NHA and Indian health policymakers could look at the restrictions and checks that the EUHDS creates for the secondary use of data, and create systems of checks and categories of researchers and organisations seeking data, to ensure minimal risks to an individual’s data.
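A stylised sketch of how such dataset matching could work is below (the data, column names, and use of pandas are our assumptions, purely for illustration): two separately released "anonymised" extracts that share quasi-identifiers such as pin code, age band, and date of service can be joined back together, narrowing down the individuals behind the records.

# Hypothetical illustration of a linkage attack: two "anonymised" releases from
# different health data entities are joined on shared quasi-identifiers,
# recombining information that neither dataset exposes on its own.
import pandas as pd

# Extract released under one policy: services availed (no names).
services = pd.DataFrame({
    "pin_code": ["110001", "110001", "560034"],
    "age_band": ["40-49", "20-29", "40-49"],
    "visit_date": ["2023-01-12", "2023-01-12", "2023-02-03"],
    "diagnosis": ["type 2 diabetes", "fracture", "hypertension"],
})

# Extract released under another policy: insurance claims (no names either).
claims = pd.DataFrame({
    "pin_code": ["110001", "560034"],
    "age_band": ["40-49", "40-49"],
    "visit_date": ["2023-01-12", "2023-02-03"],
    "claim_amount_inr": [42000, 15500],
})

# Joining on quasi-identifiers links diagnosis to claim amount; a sparsely
# populated pin code / age band combination may then point to a single person.
linked = pd.merge(services, claims, on=["pin_code", "age_band", "visit_date"])
print(linked)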
Conclusion
While the EU Health Data Space has been criticised for facilitating the sharing of vast amounts of data with private companies and the collection of data by governments, codifying it in legislation does provide some means to regulate the flow of health data. While India does not have to emulate the EU and produce a similar document, it could look at the best practices and the issues being highlighted with the EUHDS. Indian lawmakers have looked to the GDPR for guidance on the draft data protection legislation; they could similarly do so with regard to health data and health data management. One possible way to ensure both the free flow of health data and the safeguards of a regulation could be to re-introduce DISHA, which, much like the EUHDS, could act as a legislation that anchors the multiple health data policies, providing a standard definition of health data, grievance redressal bodies, and adjudicating authorities and their functions. In addition, a legislation dedicated to health data would also reduce the burden on the yet-to-be-formed data protection authority.
[1] “European Health Data Space”, European Commission, 03 May 2022, https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en
[2]“European Health Data Space”
[3] “National Digital Health Blueprint”, Ministry of Health and Family Welfare Government of India, https://abdm.gov.in:8081/uploads/ndhb_1_56ec695bc8.pdf
[4] “National Digital Health Blueprint”
[5] “DISHA – India's Probable Response To The Law On Protection Of Digital Health Data”, Mondaq, accessed 13 June 2023, https://www.mondaq.com/india/healthcare/1059266/disha-india39s-probable-response-to-the-law-on-protection-of-digital-health-data
[6] “The Digital Personal Data Protection Bill 2022”, accessed 13 June 2023, https://www.meity.gov.in/writereaddata/files/The%20Digital%20Personal%20Data%20Potection%20Bill%2C%202022_0.pdf
[7]The Digital Personal Data Protection Bill 2022
[8] Regulation (EU) 2016/679 defines health data as follows: “Personal data concerning health should include all data pertaining to the health status of a data subject which reveal information relating to the past, current or future physical or mental health status of the data subject. This includes information about the natural person collected in the course of the registration for, or the provision of, health care services as referred to in Directive 2011/24/EU of the European Parliament and of the Council (1) to that natural person; a number, symbol or particular assigned to a natural person to uniquely identify the natural person for health purposes; information derived from the testing or examination of a body part or bodily substance, including from genetic data and biological samples; and any information on, for example, a disease, disability, disease risk, medical history, clinical treatment or the physiological or biomedical state of the data subject independent of its source, for example from a physician or other health professional, a hospital, a medical device or an in vitro diagnostic test.”
[9] For creating an integrated, uniform and interoperable ecosystem in a patient or individual centric manner, all the government healthcare facilities and programs, in a gradual/phased manner, should start assigning the same number for providing any benefit to individuals.
[10] For example a manufacturer of a fitness tracker which is capable of monitoring heart rate could state that the intended purpose of the device was fitness or wellness as opposed to early detection of heart disease thereby not falling under the purview of the regulation.
[11] “GOQii Launches GOQii Smart Vital 2.0, an ECG-Enabled Smart Watch with Integrated Outcome based Health Insurance & Life Insurance”, Healthcare Executive, accessed 13 June 2023, https://www.healthcareexecutive.in/blog/ecg-enabled-smart-watch
[12] The guidelines only state that the Committee will be responsible for ensuring compliance with the guidelines in relation to the personal data under its control, and do not go into the details of defining the Committee.
Deceptive Design in Voice Interfaces: Impact on Inclusivity, Accessibility, and Privacy
The original blog post can be accessed here.
Introduction
Voice Interfaces (VIs) have come a long way in recent years and are easily available as inbuilt technology with smartphones, downloadable applications, or standalone devices. In line with growing mobile and internet connectivity, there is now an increasing interest in India in internet-based multilingual VIs which have the potential to enable people to access services that were earlier restricted by language (primarily English) and interface (text-based systems). This current interest has seen even global voice applications such as Google Home and Amazon’s Alexa being available in Hindi (Singal, 2019) as well as the growth of multilingual voice bots for certain banks, hotels, and hospitals (Mohandas, 2022).
The design of VIs can have a significant impact on the behavior of the people using them. Deceptive design patterns, or design practices that trick people into taking actions they might otherwise not take (Tech Policy Design Lab, n.d.), have gradually become pervasive in most digital products and services. Their use in visual interfaces has been widely criticized by researchers (Narayanan, Mathur, Chetty, and Kshirsagar, 2020) and has prompted recent policy interventions (Schroeder and Lützow-Holm Myrstad, 2022). As VIs become more relevant and mainstream, it is critical to anticipate and address the use of deceptive design patterns in them. This article, based on our learnings from the study of VIs in India, examines the various types of deceptive design patterns in VIs and focuses on their implications in terms of linguistic barriers, accessibility, and privacy.
Potential deceptive design patterns in VIs
Our research findings suggest that VIs in India are still a long way from being inclusive, accessible, and privacy-preserving. While there has been some development in multilingual VIs in India, their compatibility has been limited to a few Indian languages (Mohandas, 2022; Naidu, 2022). The potential of VIs as a tool for people with vision loss and certain cognitive disabilities such as dyslexia is widely recognized (Pradhan, Mehta, and Findlater, 2018), but our conversations suggest that most developers and designers do not consider accessibility when conceptualizing a voice-based product, which leads to interfaces that do not understand non-standard speech patterns or have only text-based privacy policies (Mohandas, 2022). Inaccessible privacy policies full of legal jargon, along with the lack of regulations specific to VIs, also make people vulnerable to privacy risks.
Deceptive design patterns can be used by companies to further these gaps in VIs. As with visual interfaces, the affordances and attributes of VIs determine the ways in which they can be used to manipulate behavior. Kentrell Owens et al., in their recent research, lay down six unique properties of VIs that may be used to implement deceptive design patterns (Owens, Gunawan, Choffnes, Emami-Naeini, Kohno, and Roesner, 2022). Expanding upon these properties, and drawing from our research, we look at how they can be exacerbated in India.
Making processes cumbersome
VIs are often limited by their inability to share large amounts of information through voice. They thus operate in combination with a smartphone app or a website. This can be intentionally used by platforms to make processes such as changing privacy settings or accessing the full privacy notice inconvenient for people to carry out. In India, this is experienced while unsubscribing from services such as Amazon Prime (Owens et al., 2022). Amazon Echo Dot presently allows individuals to subscribe to an Amazon Prime membership using a voice command, but directs them to use the website in order to unsubscribe from the membership. This can also manifest in the form of canceling orders and changing privacy settings.
VIs follow a predetermined linear structure that ensures a tightly controlled interaction. People make decisions based on the information they are provided with at various steps. Changing their decision or switching contexts could involve going back several steps. People may accept undesirable actions from the VI in order to avoid this added effort (Owens et al., 2022). The urgency to make decisions at each step can also cause people to make unfavorable choices, such as granting consent to third-party apps. The VI may prompt advertisements and push the company’s preferred services in this controlled conversation structure, which the user cannot side-step. For example, while setting up the Google voice assistant on any device, it nudges people to sign into their Google account. This means the voice assistant gets access to their web and app activity and location history at this step. While the data management of Google accounts can be tweaked through the settings, this step may get skipped during a linear set-up structure. Voice assistants can also push people to opt into features such as ads personalisation, default news sources, and location tracking.
Making options difficult to find
Discoverability is another challenge for VIs. This means that people might find it difficult to discover available actions or options using just voice commands. This gap can be misused by companies to trick people into making undesirable choices. For instance, while purchasing items, the VI may suggest products that have been sponsored and not share full information on other cheaper products, forcing people to choose without complete knowledge of their options. Many mobile based voice apps in India use a combination of images or icons with the voice prompts to enable discoverability of options and potential actions, which excludes people with vision loss (Naidu, 2022). These apps comprise a voice layer added to an otherwise touch-based visual platform so that people are able to understand and navigate through all available options using the visual interface, and use voice only for purposes such as searching or narrating. This means that these apps cannot be used through voice alone, making them disadvantageous for people with vision loss.
Discreet integration with third parties
VIs can use the same voice for varying contexts. In the case of Alexa, Skills, which are apps on its platform, have the same voice output and invocation phrases as its own in-built features. End users find it difficult to differentiate between an interaction with Amazon and one with Skills, which are third-party applications. This can cause users to share information with third parties that they otherwise would not have (Mozilla Foundation, 2022). There are numerous Amazon Skills in Hindi, and people might not be aware that the developers of these Skills are not vetted by Amazon. This misunderstanding can create significant privacy or security risks if Skills are linked to contacts, banking, or social media accounts.
Lack of language inclusivity
The lack of local language support, colloquial translations, and accent coverage can lead to individuals not receiving clear and complete information. A VI’s failure to understand certain accents can also make people feel isolated (Harwell, 2018). While voice assistants and even voice bots in India are available in a few Indic languages, the default initial setup, privacy policies, and terms and conditions are still in English. The translated policies also use literary language that is difficult for people to understand, and miss out on colloquial terms. This could mean that a person might not have fully understood these notices and hence not have given informed consent. Such use of unclear language and the unavailability of information in Indic languages can be viewed as a deceptive design pattern.
Making certain choices more apparent
The different dimensions of voice, such as volume, pitch, rate, fluency, pronunciation, articulation, and emphasis, can be controlled and manipulated to implement deceptive design patterns. VIs may present the more privacy-invasive options more loudly or clearly, and the more privacy-preserving options more softly or quickly. They can also use tone modulations to shame people into making a specific choice (Owens et al., 2022). For example, media streaming platforms may mention the option to subscribe to a premium account to avoid ads at normal volume, while mentioning the option to keep ads at a lower volume. Companies have also been observed discreetly integrating product advertisements into voice assistants using tone. SKIN, a neurotargeting advertising strategy business, used a change in the voice assistant’s tone to suggest a dry throat in order to advertise a drink (Chatellier, Delcroix, Hary, and Girard-Chanudet, 2019).
The attribution of gender, race, class, and age through stereotyping can create a persona of the VI for the user. This can extend to personality traits, such as an extroverted or introverted, docile or aggressive character (Simone, 2020). The default use of female voices with a friendly and polite persona for voice assistants has drawn criticism for perpetuating harmful gender stereotypes (Cambre and Kulkarni, 2019). Although there is an option to change the wake word “Alexa” in Amazon’s devices, certain devices and third-party apps do not work with another wake word (Ard, 2021). Further, the projection of demographics can also be used to employ deceptive design patterns. For example, a VI persona that is constructed to create a perception of intelligence, reliability, and credibility can have a stronger influence on people’s decisions. Additionally, the effort to make voice assistants sound as human as possible, without letting people know that they are not talking to a human, could create a number of issues (X. Chen and Metz, 2019). First-time users might divulge sensitive information thinking that they are interacting with a person. This becomes more ethically challenging when persons with vision loss are not able to know who they are interacting with.
Recording without notification
Owens et al. note that VIs occupy physical spaces, due to which they have a much wider impact than a visual interface (Owens et al., 2022). The always-on nature of virtual assistants could result in a guest’s personal information being recorded without their knowledge or consent, as consent is only given at the setup stage by the owner of the device or smartphone.
Making personalization more convenient through data collection
VIs are trained to adapt to the experience and expertise of the user. Virtual assistants provide personalization and the possibility to download a number of skills, save payment information, and store phone contacts. In order to facilitate differentiation between multiple users on the same VI, individuals talking to the device are profiled based on their speech patterns and/or voice biometrics. This also helps in controlling or restricting content for children (Naidu, 2022). Commands are also tracked to identify and list their intent for future use. This growing pool of specific and verified data can be used to provide better targeted advertisements, and may also be shared with law enforcement agencies in certain cases. Recently, a payment gateway company was made to share customer information with law enforcement without the customer’s knowledge. This included not just information about the client but also sensitive personal data of the people who had used the gateway to transact with that customer. While providing such details is not illegal, and companies are meant to comply with requests from law enforcement, if more people knew that every conversation in the house could be accessible to law enforcement, they would make more informed choices about what the VI records.
Reducing friction in actions desired by the platform
One of the fundamental advantages of VIs is that they can reduce the several steps needed to perform an action to a single command. While this is helpful to people interacting with them, the feature can also be used to reduce friction from actions that the platform wants them to take. These actions could include sharing sensitive information, providing consent to further data sharing, and making purchases. An example of this can be seen where children have found it very easy to purchase items using Alexa (BILD, 2019).
Recommendations for Designers and Policymakers
Through these deceptive design patterns, VIs can obstruct and control information according to the preferences of the platform. This can have a heightened impact on people with less experience with technology. Presently, profitability is a key driving factor in the development and design of VI products. More importance is given to data-driven and technical approaches, and interfaces are often conceptualized by people with technical expertise, with little input from designers at the early stages (Naidu, 2022). Designers also focus more on the usability and functionality of the interfaces by enabling personalization, but are often not as sensitive to safeguarding the rights of the individuals using them. In order to tackle deceptive design, designers must work towards prioritizing ethical practice and building in more agency and control for people who use VIs.
Many of the potential deceptive design patterns can be addressed by designing for accessibility and inclusivity in a privacy preserving manner. This includes vetting third-party apps, providing opt-outs, and clearly communicating privacy notices. Privacy implications can also be prompted by the interface at the time of taking actions. There should be clear notice mechanisms such as a prominent visual cue to alert people when a device is on and recording, along with an easy way to turn off the ‘always listening’ mode. The use of different voice outputs for third party apps can also signal to people about who they are interacting with and what information they would like to share in that context.
Training data that covers a diverse population should be built for more inclusivity. A linear and time-efficient architecture is helpful for people with cognitive disabilities. But, this linearity can be offset by adding conversational markers that let the individual know where they are in the conversation (Pearl, 2016). This could address discoverability as well, allowing people to easily switch between different steps. Speech-only interactions can also allow people with vision loss to access the interface with clarity.
A number of policy documents, including the 2019 version of India’s Personal Data Protection Bill, emphasize the need for privacy by design. But they do not mention how deceptive design practices could be identified and avoided, nor do they prescribe penalties for using these practices (Naidu, Sheshadri, Mohandas, and Bidare, 2020). In the case of VIs in particular, there is a need to treat the voice data being collected as biometric data and to have related regulations in place to prevent harm to users. In terms of accessibility as well, there could be policies that require not just websites but also apps (including voice-based apps) to be compliant with international accessibility guidelines, and to conduct regular audits to ensure that the apps meet the accessibility threshold.
Detecting Encrypted Client Hello (ECH) Blocking
This blogpost was edited by Torsha Sarkar.
The Transport Layer Security (TLS) protocol, which is widely recognised as the lock sign in a web browser’s URL bar, encrypts the contents of internet connections when an internet user visits a website so that network intermediaries (such as Internet Service Providers, Internet Exchanges, undersea cable operators, etc.) cannot view the private information being exchanged with the website.
TLS, however, suffers from a privacy issue – the protocol transmits a piece of information known as the Server Name Indication (or SNI) which contains the name of the website a user is visiting. While the purpose of TLS is to encrypt private information, the SNI remains unencrypted – leaking the names of the websites internet users visit to network intermediaries, who use this metadata to surveil internet users and censor access to certain websites. In India, two large internet service providers – Reliance Jio and Bharti Airtel – have been previously found using the SNI field to block access to websites.
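As a small illustration of where the SNI sits (our sketch, not tied to any particular ISP's blocking setup), the snippet below opens an ordinary TLS connection from Python. The hostname passed as server_hostname is written into the SNI field of the ClientHello, and without ECH that string crosses the network in cleartext, even though everything after the handshake is encrypted. The hostname used is a placeholder.

# Illustrative only: a standard TLS connection in which the SNI
# (server_hostname) is sent unencrypted in the ClientHello. Any on-path
# intermediary can read the hostname, even though the page content itself is
# encrypted. ECH is designed to close exactly this gap.
import socket
import ssl

hostname = "example.com"  # placeholder site; this string is what leaks via the SNI
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as raw_sock:
    # server_hostname sets the SNI extension; absent ECH it is plaintext on the wire.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("Negotiated", tls_sock.version(), "with", tls_sock.getpeercert().get("subject"))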
Encrypted Client Hello (or ECH) is a new internet protocol that has been under development since 2018 at the Internet Engineering Task Force (IETF) and is now being tested for a small percentage of internet users before a wider rollout. It seeks to address this privacy limitation by encrypting the SNI information that leaks the names of visited websites to internet intermediaries. The ECH protocol significantly raises the bar for censors – the SNI is the last bit of unencrypted metadata in internet connections that censors can reliably use to detect which websites an internet user is visiting. After this protocol is deployed, censors will find it harder to block websites by interfering with network connections and will be forced to utilise blocking methods such as website fingerprinting and man-in-the-middle attacks that are either expensive and less accurate, or unfeasible in most cases.
We have been tracking the development of this privacy enhancement. To assist the successful deployment of the ECH protocol, we contributed a new censorship test to the Open Observatory of Network Interference (OONI) late last year. The new test attempts to connect to websites using the ECH protocol and records any interference from censors with the connection. As censors in some countries were found blocking a previous version of the protocol entirely, this test gives important early feedback to the protocol developers on whether censors are able to detect and block the protocol.
We conducted ECH tests during the first week of September 2023 from four popular Indian ISPs, namely Airtel, Atria Convergence Technologies (ACT), Reliance Jio, and Vodafone Idea, which account for around 95% of the Indian internet subscriber base. The results indicated that ECH connections to a popular website were successful and are not currently being blocked. This was the expected result, as the protocol is still under development. We will continue to monitor for interference from censors closer to the time of completion of the protocol to ensure that this privacy enhancing protocol is successfully deployed.
Digital Delivery and Data System for Farmer Income Support
Executive Summary
This study provides an in-depth analysis of two direct cash transfer schemes in India – Krushak Assistance for Livelihood and Income Augmentation (KALIA) and Pradhan Mantri Kisan Samman Nidhi (PM-KISAN) – which aim to provide income support to farmers. The paper examines the role of data systems in the delivery and transfer of funds to the beneficiaries of these schemes, and analyses their technological framework and processes.
We find that the use of digital technologies, such as direct benefit transfer (DBT) systems, can improve efficiency and ensure the timely transfer of funds. However, we observe that the technology-only system is not designed with the last-mile beneficiaries in mind; these people not only have little or no digital literacy but also face a lack of technological infrastructure, including internet connectivity, for accessing a system that is largely digital.
Necessary processes need to be implemented, and on-the-ground personnel strengthened, within the existing system to promptly address farmers’ grievances and other challenges.
This study critically analyses the direct cash transfer scheme and its impact on the beneficiaries. We find that despite the benefits of direct benefit transfer (DBT) systems, there have been many instances of failures, such as the exclusion of several eligible households from the database.
The study also looks at gender as one of the components shaping the impact of digitisation on beneficiaries. We also identify infrastructural and policy constraints, in sync with the technological framework adopted and implemented, that impact the implementation of digital systems for the delivery of welfare. These include a lack of reliable internet connectivity in rural areas and low digital literacy among farmers. We analyse policy frameworks at the central and state levels and find discrepancies between the discourse of these schemes and their implementation on the ground.
We conclude the study by discussing the implications of datafication, which is the process of collecting, analysing, and managing data, through the lens of data justice. Datafication can play a crucial role in improving the efficiency and transparency of income support schemes for farmers. However, it is important to ensure that the interests of primary beneficiaries are considered: the system should work as an enabling, not a disabling, factor. In many instances this does not appear to be the case, since the current system does not give primacy to the interests of farmers. We offer recommendations for policymakers and other stakeholders to strengthen these schemes and improve the welfare of farmers and end users.
DoT’s order to trace server IP addresses will lead to unintended censorship
This post was reviewed and edited by Isha Suri and Nishant Shankar.
In December 2023, the Department of Telecommunications (DoT) issued instructions to internet service providers (ISPs) to maintain and share a list of “customer owned” IP addresses that host internet services through Indian ISPs so that they can be immediately traced in case “they are required to be blocked as per orders of [the court], etc”.
For the purposes of the notification, tracing customer-owned IP addresses implies identifying the network location of a subset of web services that possess their own IP addresses, as opposed to renting them from the ISP. These web services purchase IP Transit from Indian ISPs in order to connect their servers to the internet. In such cases, it is not immediately apparent which ISP routes to a particular IP address, requiring some amount of manual tracing to locate the host and immediately cut off access to the service. The order notes that “It has been observed that many times it is time consuming to trace location of such servers specially in case the IP address of servers is customer owned and not allocated by the Licensed Internet Service Provider”.
This indicates not only that the DoT is blocking access to web services based on their IP addresses, but that it is doing so often enough for the manual tracing of IP addresses to be a time-consuming process.
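For context, looking up which network announces a given IP address is itself straightforward with public data sources. The hedged sketch below (the address is a placeholder, and it assumes the ipinfo.io JSON endpoint) fetches the organisation behind an address, which is roughly the kind of lookup that is currently done manually when a blocking order names a customer-owned IP address.

# Rough sketch: trace which network "owns" an IP address using the public
# ipinfo.io API. The address below is a placeholder from a documentation range.
import json
import urllib.request

def lookup_ip(ip):
    # Fetch organisation / AS details for an IP address from ipinfo.io.
    with urllib.request.urlopen(f"https://ipinfo.io/{ip}/json", timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    info = lookup_ip("198.51.100.7")  # placeholder address (TEST-NET-2 range)
    print(info.get("org", "unknown"), "-", info.get("country", "?"))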
While our legal framework allows courts and the government to issue content takedown orders, it is well documented that blocking web services based on their IP addresses is ineffectual and disruptive. An explainer on content blocking by the Internet Society notes, “Generally, IP blocking is a poor filtering technique that is not very effective, is difficult to maintain effectively, has a high level of unintended additional blockage, and is easily evaded by publishers who move content to new servers (with new IP addresses)”. The practice of virtual hosting is very common on the internet, which entails that a single web service can span multiple IP addresses and a single IP address can be shared by hundreds, or even thousands, of web services. Blocking access to a particular IP address can cause unrelated web services to fail in subtle and unpredictable ways, leading to collateral censorship. For example, a 2022 Austrian court order to block 11 IP addresses associated with 14 websites that engaged in copyright infringement rendered thousands of unrelated websites inaccessible.
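To see why IP blocking over-blocks, the short sketch below (the hostnames are placeholders) resolves a handful of sites and groups them by IP address; wherever several names map to the same address, blocking that one address would take all of them down.

# Illustrative only: resolve a few hostnames and group them by IP address.
# Hostnames sharing an address (common with CDNs and shared hosting) would all
# become unreachable if that single IP were blocked.
import socket
from collections import defaultdict

hostnames = ["example.com", "example.net", "example.org"]  # placeholder list

shared = defaultdict(set)
for name in hostnames:
    try:
        for info in socket.getaddrinfo(name, 443, family=socket.AF_INET):
            shared[info[4][0]].add(name)
    except socket.gaierror:
        pass  # skip names that do not resolve

for ip, names in shared.items():
    if len(names) > 1:
        print(f"Blocking {ip} would also block: {', '.join(sorted(names))}")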
The unintended effects of IP blocking have also been observed in practice in India. In 2021, US-based OneSignal Inc. approached the Delhi High Court challenging the blocking of one of its IP addresses by ISPs in India. Since OneSignal is an online marketing company, there did not appear to be any legitimate reason for it to be blocked. In response to the petition, the Government said that it had already issued unblocking orders for the IP address. There have also been numerous reports by internet users of inexplicable blocking of innocuous websites hosted on content delivery networks (which are known to share IP addresses between customers).
We urge the ISPs, government departments, and courts issuing and implementing website blocking orders to refrain from utilising overly broad censorship mechanisms like IP blocking, which can cause unrelated services on the internet to fail.
Information Disorders and their Regulation
In the last few years, ‘fake news’ has garnered interest across the political spectrum, as affiliates of both the ruling party and its opposition have seemingly partaken in its proliferation. The COVID-19 pandemic added to this phenomenon, allowing for xenophobic, communal narratives, and false information about health-protective behaviour to flourish, all with potentially deadly effects. This report maps and analyses the government’s regulatory approach to information disorders in India and makes suggestions for how to respond to the issue.
In this study, we gathered information by scouring general search engines, legal databases, and crime statistics databases to cull out data on a) regulations, notifications, ordinances, judgments, tender documents, and any other legal and quasi-legal materials that have attempted to regulate ‘fake news’ in any format; and b) news reports and accounts of arrests made for allegedly spreading ‘fake news’. Analysing this data allows us to determine the flaws and scope for misuse in the existing system. It also gives us a sense of the challenges associated with regulating this increasingly complicated issue while trying to avoid the pitfalls of the present system.
Click to download the full report here.
Reconfiguring Data Governance: Insights from India and the EU
The workshop aimed to compare and assess lessons on data governance from India and the European Union, and to make recommendations on how to design fit-for-purpose institutions for governing data and AI in both regions.
This policy paper collates key takeaways from the workshop by grounding them across three key themes: how we conceptualise data; how institutional mechanisms as well as community-centric mechanisms can work to empower individuals, and what notions of justice these embody; and finally a case study of enforcement of data governance in India to illustrate and evaluate the claims in the first two sections.
This report was a collaborative effort between researchers Siddharth Peter De Souza, Linnet Taylor, and Anushka Mittal at the Tilburg Institute for Law, Technology and Society (Netherlands); Swati Punia, Sristhti Joshi, and Jhalak M. Kakkar at the Centre for Communication Governance at the National Law University Delhi (India); and Isha Suri and Arindrajit Basu at the Centre for Internet & Society, India.
Click to download the report
India’s parental control directive and the need to improve stalkerware detection
This post was reviewed and edited by Amrita Sengupta.
Stalkerware is a form of surveillance targeted primarily at partners, employees and children in abusive relationships. These are software tools that enable abusers to spy on a person’s mobile device, allowing them to remotely access all data on the device, including calls, messages, photos, location history, browsing history, app data, and more. Stalkerware apps run hidden in the background without the knowledge or consent of the person being surveilled.[1] Such applications are easily available online and can be installed by anyone with little technical know-how and physical access to the device.
News reports indicate that the Ministry of Electronics and Information Technology (MeitY) is supporting the development of an app called “SafeNet”[2] that allows parents to monitor activity and set content filters on children’s devices. Following a directive from the Prime Minister’s office to “incorporate parental controls in data usage” by July 2024, the Internet Service Providers Association of India (ISPAI) has suggested that the app should come preloaded on mobile phones and personal computers sold in the country. The Department of Telecom is also asking schools to raise awareness about such parental control solutions.[3][4]
The beta version of the app is available for Android devices on the Google Play Store and advertises a range of functionalities including location access, monitoring website and app usage, call and SMS logs, screen time management and content filtering. The content filtering functionality warrants a separate analysis and this post will only focus on the surveillance capabilities of this app.
Applications like Safenet, that do not attempt to hide themselves and claim to operate with the knowledge of the person being surveilled, are sometimes referred to as “watchware”.[5] However, for all practical purposes, these apps are indistinguishable from stalkerware. They possess the same surveillance capabilities and can be deployed in the exact same ways. Such apps sometimes incorporate safeguards to notify users that their device is being monitored. These include persistent notifications on the device’s status bar or a visible app icon on the device’s home screen. However, such safeguards can be circumvented with little effort. The notifications can simply be turned off on some devices and there are third-party Android tools that allow app icons and notifications to be hidden from the device user, allowing watchware to be repurposed as stalkerware and operate secretly on a device. This leaves very little room for distinction between stalkerware and watchware apps.[6] In fact, the developers of stalkerware apps often advertise their tools as watchware, instructing users to only use them for legitimate purposes.
Even in cases where stalkerware applications are used in line with their stated purpose of monitoring minors’ internet usage, the effectiveness of a surveillance-centric approach is suspect. Our previous work on children’s privacy has questioned the treatment of all minors under the age of 18 as a homogenous group, arguing for a distinction between the internet usage of a 5-year-old child and a 17-year-old teenager. We argue that educating and empowering children to identify and report online harms is more effective than attempts to surveil them.[7][8] Most smartphones already come with options to enact parental controls on screen time and application usage[9][10], and the need for third-party applications with surveillance capabilities is not justified.
Studies and news reports show the increasing role of technology in intimate partner violence (IPV).[11][12] Interviews with IPV survivors and support professionals indicate an interplay of socio-technical factors, showing that abusers leverage the intimate nature of such relationships to gain access to accounts and devices to exert control over the victim. They also indicate the prevalence of “dual-use” apps such as child-monitoring and anti-theft apps that are repurposed by abusers to track victims.[13]
There is some data available that indicates the use of stalkerware apps in India. Kaspersky’s annual State of Stalkerware reports consistently place India among the top four countries with the highest number of infections detected by its products, with a few thousand infections reported each year between 2020 and 2023.[14][15][16][17] TechCrunch’s Spyware Lookup Tool, which compiles information from data leaks from more than nine stalkerware apps to notify victims, also identifies India as a hotspot for infections.[18] Avast, another antivirus provider, reported a 20% rise in the use of stalkerware apps during COVID-19 lockdowns.[19] The high incidence of intimate partner violence in India – the National Family Health Survey reports that about a third of all married women aged 18–49 years have experienced spousal violence [20] – also increases the risk of digitally-mediated abuse.
Survivors of digitally-mediated abuse often require specialised support in handling such cases to avoid alerting abusers and potential escalations. As part of our ongoing work on countering digital surveillance, we conducted an analysis of seven stalkerware applications, including two that are based in India, to understand and improve how survivors and support professionals can detect their presence on devices.
In some cases, where it is safe to operate the device, antivirus solutions can be of use. Antivirus tools can often identify the presence of stalkerware and watchware on a device, categorising them as a type of malware. We measured how effective various commercial antivirus solutions are at detecting stalkerware applications. Our results, detailed in the Appendix, indicate reasonably good coverage, with six of the seven apps flagged as malicious by various antivirus solutions. We found that Safenet, the newest app on the list, was not detected by any antivirus. We also compared the detection results with a similar study conducted in 2019 [21] and found that some newer versions of previously known apps saw lower rates of detection. This indicates that antivirus vendors need to analyse new apps and newer versions of existing apps more frequently, both to improve coverage and to understand how they evade detection.
In cases where the device cannot be operated safely, support workers use specialised forensic tools such as the Mobile Verification Toolkit [22] and Tinycheck [23], which can be used to analyse devices without modifying them. We conducted malware analysis on the stalkerware apps to document the traces they leave on devices and submitted them to an online repository of indicators of compromise (IOCs).[24] These indicators are incorporated into tools used by experts to detect stalkerware infections.
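As an illustration of how package-name indicators of this kind can be consumed, the sketch below lists the packages installed on an Android device over adb and flags any that match a set of identifiers. The identifiers shown are placeholders rather than real entries from the repository cited above, adb with USB debugging is assumed to be available, and actual investigations should rely on maintained IOC sets and tools such as MVT rather than this snippet.

```python
# A minimal sketch of package-based detection: compare the packages installed
# on a connected Android device (listed via adb) against a set of known-bad
# package identifiers. The identifiers below are placeholders, not entries
# from the stalkerware-indicators repository.
import subprocess

# Hypothetical indicator list, for illustration only.
KNOWN_STALKERWARE_PACKAGES = {
    "com.example.spyapp",
    "com.example.hiddenmonitor",
}

def list_installed_packages() -> set:
    """Return package names installed on the connected device (requires adb)."""
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line of output looks like "package:com.android.settings"
    return {line.split(":", 1)[1].strip() for line in out.splitlines() if ":" in line}

if __name__ == "__main__":
    hits = list_installed_packages() & KNOWN_STALKERWARE_PACKAGES
    if hits:
        print("Possible stalkerware packages found:", ", ".join(sorted(hits)))
    else:
        print("No known indicators matched (this does not guarantee a clean device).")
```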
Despite efforts to support survivors and stop the spread of stalkerware applications, the use of technology in abusive relationships continues to grow.[25] Making a surveillance tool like Safenet available for free, publicising it for widespread use, and potentially preloading it on mobile devices and personal computers sold in the country, is an ill-conceived way to enact parental controls and will lead to an increase in digitally-mediated abuse. The government should immediately take this application out of the public domain and work on developing alternate child protection policies that are not rooted in distrust and surveillance.
If you are affected by stalkerware, some resources are available here:
https://stopstalkerware.org/information-for-survivors/
https://stopstalkerware.org/resources/
Appendix
Our analysis covered two apps based in India, SafeNet and OneMonitar, and five others: Hoverwatch, TheTruthSpy, Cerberus, mSpy, and FlexiSPY. All samples were obtained directly from the developers’ websites. The details of the samples are as follows:
Name | File name | Version | Date sample was obtained | SHA-1 Hash
SafeNet | Safenet_Child.apk | 0.15 | 16th March, 2024 | d97a19dc2212112353ebd84299d49ccfe8869454
OneMonitar | ss-kids.apk | 5.1.9 | 19th March, 2024 | 519e68ab75cd77ffb95d905c2fe0447af0c05bb2
Hoverwatch | setup-p9a8.apk | 7.4.360 | 5th March, 2024 | 50bae562553d990ce3c364dc1ecf44b44f6af633
TheTruthSpy | TheTruthSpy.apk | 23.24 | 5th March, 2024 | 8867ac8e2bce3223323f38bd889e468be7740eab
Cerberus | Cerberus_disguised.apk | 3.7.9 | 4th March, 2024 | 75ff89327503374358f8ea146cfa9054db09b7cb
mSpy | bt.apk | 7.6.0.1 | 21st March, 2024 | f01f8964242f328e0bb507508015a379dba84c07
FlexiSPY | 5009_5.2.2_1361.apk | 5.2.2 | 26th March, 2024 | 5092ece94efdc2f76857101fe9f47ac855fb7a34
We analysed the network activity of these apps to check which web servers they send their data to. With the increasing popularity of Content Delivery Networks (CDNs) and cloud infrastructure, these results may not always give an accurate idea of where these apps originate, but they can sometimes offer useful information:
Name | Domain | IP Address[26] | Country | ASN Name and Number |
SafeNet | safenet.family | 103.10.24.124 | India | Amrita Vishwa Vidyapeetham, AS58703 |
OneMonitar | onemonitar.com | 3.15.113.141 | United States | Amazon.com, Inc., AS16509 |
OneMonitar | api.cp.onemonitar.com | 3.23.25.254 | United States | Amazon.com, Inc., AS16509 |
Hoverwatch | hoverwatch.com | 104.236.73.120 | United States | DigitalOcean, LLC, AS14061 |
Hoverwatch | a.syncvch.com | 158.69.24.236 | Canada | OVH SAS, AS16276 |
TheTruthSpy | thetruthspy.com | 172.67.174.162 | United States | Cloudflare, Inc., AS13335 |
TheTruthSpy | protocol-a946.thetruthspy.com | 176.123.5.22 | Moldova | ALEXHOST SRL, AS200019 |
Cerberus | cerberusapp.com | 104.26.9.137 | United States | Cloudflare, Inc., AS13335 |
mSpy | mspy.com | 104.22.76.136 | United States | Cloudflare, Inc., AS13335 |
mSpy | mobile-gw.thd.cc | 104.26.4.141 | United States | Cloudflare, Inc., AS13335 |
FlexiSPY | flexispy.com | 104.26.9.173 | United States | Cloudflare, Inc., AS13335 |
FlexiSPY | djp.bz | 119.8.35.235 | Hong Kong | HUAWEI CLOUDS, AS136907 |
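The lookups behind this table can be approximated with ordinary DNS resolution followed by a query to an IP metadata service such as ipinfo.io (the source cited in the endnotes). The sketch below is illustrative rather than a record of our exact tooling; the domain is a placeholder, and unauthenticated ipinfo.io queries are rate-limited.

```python
# Illustrative sketch of the lookups behind the table above: resolve a domain
# to its IPv4 addresses, then ask ipinfo.io which network (ASN/organisation)
# and country each address belongs to. The domain is a placeholder.
import socket
import requests

def resolve_and_lookup(domain: str) -> list:
    """Return (domain, ip, country, org) tuples for each resolved address."""
    ips = {info[4][0] for info in socket.getaddrinfo(domain, None, socket.AF_INET)}
    results = []
    for ip in sorted(ips):
        meta = requests.get(f"https://ipinfo.io/{ip}/json", timeout=10).json()
        results.append((domain, ip, meta.get("country"), meta.get("org")))
    return results

if __name__ == "__main__":
    for row in resolve_and_lookup("example.com"):  # placeholder domain
        print(" | ".join(str(field) for field in row))
```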
To understand whether commercial antivirus solutions are able to categorise stalkerware apps as malicious, we used VirusTotal, a tool that aggregates checks from over 70 antivirus scanners.[27] We uploaded the hash (i.e., unique signature) of each sample to VirusTotal and recorded the total number of detections by the various antivirus engines. We compared our results against a 2019 study by Citizen Lab [28] that looked at a similar set of apps, to identify changes in detection rates over time.
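For illustration, the kind of hash lookup described above can be scripted against VirusTotal’s v3 file endpoint, as in the minimal sketch below; it assumes a valid API key is available in the environment and uses the SafeNet sample hash from the earlier table. The detection results themselves are summarised in the table that follows.

```python
# A minimal sketch of a VirusTotal hash lookup: query the v3 API for a file
# hash and report how many engines flag it as malicious. Requires a VirusTotal
# API key in the VT_API_KEY environment variable.
import os
import requests

API_KEY = os.environ["VT_API_KEY"]  # assumed to be set in the environment
FILE_HASH = "d97a19dc2212112353ebd84299d49ccfe8869454"  # SafeNet sample SHA-1 from the table above

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{FILE_HASH}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print("Engines flagging the sample as malicious:", stats.get("malicious"))
print("Full verdict breakdown:", stats)
```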
Product | VirusTotal Detections (March 2024) | VirusTotal Detections (January 2019) (By Citizen Lab)
SafeNet [29] | 0/67 (0%) | N/A
OneMonitar [30] | 17/65 (26.1%) | N/A
Hoverwatch | 24/58 (41.4%) | 22/59 (37.3%)
TheTruthSpy | 38/66 (57.6%) | 0
Cerberus | 8/62 (12.9%) | 6/63 (9.5%)
mSpy | 8/63 (12.7%) | 20/63 (31.7%)
Flexispy [31] | 18/66 (27.3%) | 34/63 (54.0%)
We also checked whether Google’s Play Protect service [32], a malware detection tool built into Android devices that use Google’s Play Store, flagged these apps. These results were also compared with similar checks performed by Citizen Lab in 2019.
Product | Detected by Play Protect (March 2024) | Detected by Play Protect (January 2019) (By Citizen Lab)
SafeNet | no | N/A
OneMonitar | yes | N/A
Hoverwatch | yes | yes
TheTruthSpy | yes | yes
Cerberus | yes | no
mSpy | yes | yes
Flexispy | yes | yes
Endnotes
1. Definition adapted from Coalition Against Stalkerware, https://stopstalkerware.org/
2. https://web.archive.org/web/20240316060649/https://safenet.family/
5. https://github.com/AssoEchap/stalkerware-indicators/blob/master/README.md
6. https://cybernews.com/privacy/difference-between-parenting-apps-and-stalkerware/
7. https://timesofindia.indiatimes.com/blogs/voices/shepherding-children-in-the-digital-age/
8. https://blog.avast.com/stalkerware-and-children-avast
9. https://safety.google/families/parental-supervision/
10. https://support.apple.com/en-in/105121
11. R. Chatterjee et al., "The Spyware Used in Intimate Partner Violence," 2018 IEEE Symposium on Security and Privacy (SP), 2018, pp. 441-458.
13. D. Freed et al., "Digital technologies and intimate partner violence: A qualitative analysis with multiple stakeholders", PACM: Human-Computer Interaction: Computer-Supported Cooperative Work and Social Computing (CSCW), vol. 1, no. 2, 2017.
18. https://techcrunch.com/pages/thetruthspy-investigation/
19. https://www.thenewsminute.com/atom/avast-finds-20-rise-use-spying-and-stalkerware-apps-india-during-lockdown-129155
20. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10071919/
21. https://citizenlab.ca/docs/stalkerware-holistic.pdf
22. https://docs.mvt.re/en/latest/
23. https://tiny-check.com/
24. https://github.com/AssoEchap/stalkerware-indicators/pull/125
25. https://stopstalkerware.org/2023/05/15/report-shows-stalkerware-is-not-declining/
26. IP information provided by https://ipinfo.io/
27. https://docs.virustotal.com/docs/how-it-works
28. https://citizenlab.ca/docs/stalkerware-holistic.pdf
29. Sample was not known to VirusTotal, it was uploaded at the time of analysis
30. Sample was not known to VirusTotal, it was uploaded at the time of analysis
31. Sample was not known to VirusTotal, it was uploaded at the time of analysis
Consultation on Gendered Information Disorder in India
The event was convened by Amrita Sengupta (Research and Programme Lead, CIS), Yesha Tshering Paul (Researcher, CIS), Bishakha Datta (Programme Lead, POV) and Prarthana Mitra (Project Anchor, POV).* Download the event report here.
The event brought together experts, researchers and grassroots activists from Maharashtra and across the country to discuss their experiences with information disorder, and the multifaceted challenges posed by misinformation, disinformation and malinformation targeting gender and sexual identities.
Understanding Information Disorders: The consultation commenced with a look at the wide spectrum of information disorder by Yesha Tshering Paul and Amrita Sengupta. Misinformation[1] was highlighted as false information disseminated unintentionally, such as inaccurate COVID cures that spread rapidly during the pandemic. In contrast, disinformation involves the intentional spread of false information to cause harm, exemplified by instances like deepfake pornography. A less recognised form, malinformation, involves the deliberate misuse of accurate information to cause harm, as seen in the misleading representation of regret rates among trans individuals who have undergone gender-affirming procedures. Yesha noted that definitions of these concepts often vary, and stressed the importance of moving beyond definitions to centre user experiences of this phenomenon.
The central theme of this discussion was the concept of “gendered” information disorder, referring to the targeted dissemination of false or harmful online content based on gender and sexual identity. This form of digital misogyny intersects with other societal marginalisations, disproportionately affecting marginalised genders and sexualities. The session also emphasised the critical link between information disorders and gendered violence (both online and offline). Such disorders perpetuate stereotypes and gender-based violence, silence victims, and foster an environment that empowers perpetrators and undermines victims’ experiences.
Feminist Digital Infrastructure: Digital infrastructures shape our online spaces. Sneha PP (Senior Researcher, CIS) introduced the concept of feminist infrastructures as a potential solution that helps mediate discourse around gender, sexuality, and feminism in the digital realm. Participant discussions emphasised the need for accessible, inclusive, and design-conscious digital infrastructures that consider the intersectionality and systemic inequalities impacting content creation and dissemination. Strategies were discussed to address online gender-based violence and misinformation, focusing on survivor-centric approaches and leveraging technology for storytelling.
Gendered Financial Mis-/Dis-information: Garima Agrawal (Researcher, CIS) with inputs by Debarati Das (Co-Lead, Capacity Building at PoV) and Chhaya Rajput (Helpline Facilitator, Tech Sakhi) led the session by highlighting gender disparities in digital and financial literacy and access to digital devices and financial services in India, despite women constituting a higher percentage of new internet users. This makes marginalised users more vulnerable to financial scams. Drawing from the ongoing financial harms project at CIS, Garima spoke about the diverse manifestations of financial information disorders arising from misleading information that results in financial harm, ranging from financial influencers (and in some cases deepfakes of celebrities) endorsing platforms they do not use, to fake or unregulated loan and investment services deceiving users. Breakout groups of participants then analysed several case studies of real-life financial frauds that targeted women and the queer community to identify instances of misinformation, disinformation and malinformation. Emotional manipulation and the exploitation of trust were identified as key tactics used to deceive victims, with repercussions extending beyond monetary loss to emotional, verbal, and even sexual violence against these individuals.
Fact-Checking Fake News and Stories: The pervasive issue of fake news in India was discussed in depth, especially in the era of widespread social media usage. Only 41% of Indians trust the veracity of the information encountered online. Aishwarya Varma, who works at Webqoof (The Quint’s fact checking initiative) as a Fact Check Correspondent, led an informative session detailing the various accessible tools that can be used to fact-check and debunk false information. Participants engaged in hands-on activities by using their smartphones for reverse image searches, emphasising the importance of verifying images and their sources. Archiving was identified as another crucial aspect to preserve accurate information and debunk misinformation.
Gendered Health Mis-/Dis-information: This participant-led discussion highlighted structural gender biases in healthcare and limited knowledge about mental health and menstrual health as significant concerns, along with the discrimination and social stigma faced by the LGBTQ+ community in healthcare facilities. One participant brought up their difficulty accessing sensitive and non-judgmental healthcare, and the insensitivity and mockery faced by them and other trans individuals in healthcare facilities. Participants suggested the increased need for government-funded campaigns on sexual and reproductive health rights and menstrual health, and the importance of involving marginalised communities in healthcare related decision-making to bring about meaningful change.
Mis-/Dis-information around Sex, Sexuality, and Sexual Orientation: Paromita Vohra, Founder and Creative Director of Agents of Ishq—a multi-media project about sex, love and desire that uses various artistic mediums to create informational material and an inclusive, positive space for different expressions of sex and sexuality—led this session. She started with an examination of the term “disorder” and its historical implications, and highlighted how religion, law, medicine, and psychiatry had previously led to the classification of homosexuality as a “disorder”. The session delved into the misconceptions surrounding sex and sexuality in India, advocating for a broader understanding that goes beyond colonial knowledge systems and standardised sex education. She brought up the role of media in altering perspectives on factual events, and the need for more initiatives like Agents of Ishq to address the need for culturally sensitive and inclusive sexuality language and education that considers diverse experiences, emotions, and identities.
Artificial Intelligence and Mis-/Dis-information: Padmini Ray Murray, Founder of Design Beku—a collective that emerged from a desire to explore how technology and design can be decolonial, local, and ethical— talked about the role of AI in amplifying information disorder and its ethical considerations, stemming from its biases in language representation and content generation. Hindi and regional Indian languages remain significantly under-represented in comparison to English content, leading to skewed AI-generated content. Search results reflect the gendered biases in AI and further perpetuate existing stereotypes and reinforce societal biases. She highlighted the real-world impacts of AI on critical decision-making processes such as loan approvals, and the influence of AI on public opinion via media and social platforms. Participants expressed concerns about the ethical considerations of AI, and emphasised the need for responsible AI development, clear policies, and collaborative efforts between tech experts, policymakers, and the public.
* The Centre for Internet and Society undertakes interdisciplinary research on internet and digital technologies from policy and academic perspectives. Point of View focuses on sexuality, disability and technology to empower women and other marginalised genders to shape and inhabit digital spaces.
[1] Claire Wardle, Understanding Information Disorder (2020). https://firstdraftnews.org/long-form-article/understanding-information-disorder/.
Comments to the Draft Digital Competition Bill, 2024
We would like to thank the Ministry of Corporate Affairs for soliciting public comments on this important legislation and are grateful for this opportunity.
At the outset, CIS affirms the Committee’s approach to transition from a predominantly ex-post to an ex-ante approach for regulating competition in digital markets. The Committee’s assessment that the ex-post regime is too time-consuming for the digital domain has been substantiated by frequent and expensive delays in antitrust disputes, a fact that has also recently drawn the attention of the Ministry of Corporate Affairs. Nor is this limited to India: the ex-post regime has been found to be too time-consuming in other jurisdictions as well, and many countries are consequently moving towards an ex-ante regime for digital markets. This also allows India to be in harmony with both developing and developed countries, which makes regulating global competition more consistent and efficient. In fact, “international cooperation between competition authorities” and “greater coherence between regulatory frameworks” are key to facilitating global investigations and lowering the cost of doing business.
Moreover, by adopting a principles-based approach to designing the law’s obligations, the draft Bill also addresses the concern that ex-ante regulations, due to their prescriptive nature, tend to be sector-agnostic. That these principles are based on the findings of the Parliamentary Standing Committee’s (PSC) Report on ‘Anti-Competitive Practices by Big Tech Companies’ only lends them further evidentiary support. The draft DCB empowers the Commission to clarify the Obligations for different services, and also provides the CCI with the flexibility to undertake independent consultations to accommodate varying contexts and the needs of different core digital services. We do, however, have specific comments regarding the implementation of some of these provisions, which are elaborated in the accompanying document.
We would also like to emphasise that adequate enforcement of an ex-ante approach requires bolstering and strengthening regulatory capacity. Therefore, to minimise risks relating to underenforcement as well as overenforcement, the CCI, its Digital Markets and Data Unit (DMDU), and the Director General’s (DG) office will have to substantially increase their technical capacity. A comparison of the CCI’s current strength with its global counterparts that have adopted or are in the process of adopting an ex-ante approach to competition regulation reveals a stark picture. For example, the European Union (EU) had over 870 people in its DG COMP unit in 2022, and its DG CONNECT unit is expected to hire another 100 people in 2024 alone. Similarly, the United Kingdom’s Competition and Markets Authority (CMA) has a permanent staff of over 800, the Japan Fair Trade Commission (JFTC) has about 400 officials dedicated solely to regulating anti-competitive conduct, and South Korea’s KFTC has about 600 employees. In contrast, the CCI and DG, combined, have a sanctioned strength of only 195 posts, of which 71 remain vacant. Bridging this capacity gap through frequent and high-quality recruitment is, therefore, the need of the hour. Most importantly, there is a need to create a culture of interdisciplinary coordination across legal, technical, and economic domains.
Moreover, as we come to rely on an increasingly digitised economy, most technology companies will work with critical technology components, ranging from key infrastructure, algorithms, and Artificial Intelligence to business models based on data collection and processing practices. Consequently, there will be a need to bolster the CCI’s capacity in the technical domain by hiring and integrating new roles, including technologists, software and hardware engineers, product managers, UX designers, data scientists, investigative researchers, and subject matter experts dealing with new and emerging areas of technology. We therefore recommend that the CCI ensure that the proposed DMDU has the requisite diversity of skills to effectively use existing tools for enforcement and is also able to keep pace with new and emerging technological developments.
Along with this overall observation about the CCI’s capacity, we have also submitted detailed comments on specific clauses of the draft DCB. These submissions are structured across the following six categories: i) Classification of Core Digital Services; ii) Designation of a Systemically Significant Digital Enterprise (SSDE) and Associate Digital Enterprise (ADE); iii) Obligations on SSDEs and ADEs; iv) Powers of the Commission to Conduct an Inquiry; v) Penalties and Appeals; and vi) Powers of the Central Government. In addition to these suggestions, the detailed comments and their summarised version focus on three important gaps in the draft DCB – limited representation from workers’ groups and MSMEs, the exclusion of mergers and acquisitions (M&A) from the discussions, and the lack of a formalised framework for inter-regulatory coordination.
For our full comments, click here
For a detailed summary of our comments, click here
A Guide to Navigating Your Digital Rights
The Digital Rights Guide gives practical guidance on the laws and procedures that affect internet freedoms. It covers the following topics:
- Internet Shutdowns
- Content Takedown
- Surveillance
- Device Seizure
The Digital Rights Guide can be viewed here.
Legal Advocacy Manual
Click to download the manual.
Draft Circular on Digital Lending – Transparency in Aggregation of Loan Products from Multiple Lenders
Edited and reviewed by Amrita Sengupta
The Centre for Internet and Society (CIS) is a non-profit organisation that undertakes interdisciplinary research on the internet and digital technologies from policy and academic perspectives. Through its diverse initiatives, CIS explores, intervenes in, and advances contemporary discourse and practices around the internet, technology and society in India, and elsewhere.
CIS is grateful for the opportunity to submit comments on the “Draft Circular on Digital Lending: Transparency in Aggregation of Loan Products from Multiple Lenders” to the Reserve Bank of India. Over the last twelve years, CIS has worked extensively on research around privacy, online safety, cross-border data flows, security, and innovation. We welcome the opportunity to comment on the guidelines, and we hope that the final guidelines will consider the interests of all stakeholders to ensure that they protect the privacy and digital rights of all consumers, including marginalised and vulnerable users, while encouraging innovation and improved service delivery in the fintech ecosystem.
Introduction
The draft circular on ‘Transparency in Aggregation of Loan Products from Multiple Lenders’ is a much-needed and timely document that builds on the Guidelines on Digital Lending. Both documents have maintained the principles of customer centricity and transparency at their core. Reducing information asymmetry and deceptive patterns in the digital lending ecosystem is of utmost importance, given the adverse effects experienced by borrowers. Digital lending is one of the fastest-growing fintech segments in India,[1] having grown exponentially from nine billion U.S. dollars in 2012 to nearly 150 billion dollars by 2020, and is estimated to reach 515 billion USD by 2030.[2] At the same time, accessing digital credit through digital lending applications has been found to be associated with a high risk to financial and psychological health due to a host of practices that lead to over-indebtedness.[3] These include post-contract exploitation through hidden transaction fees, abusive debt collection practices, privacy violations, and fluctuations in interest rates. Both illegal or fraudulent and licensed lending service providers have been employing aggressive marketing and debt collection tactics[4] that exacerbate the risks of all the above harms.[5] With additional safeguards in place, the guidelines can provide a suitable framework to ensure borrowers have the opportunity and information needed to make an informed decision while accessing intermediated credit, and to reduce harmful financial and health-related consequences.
In this submission, we seek to provide comments on the broader issues the guidelines address. Our comments recommend additional safeguards, keeping in mind the gamut of services provided by lending service providers (LSPs). We frame our comments around two main concerns addressed by the draft guidelines: 1) reducing information asymmetry, and 2) market fairness. In addition, we share comments on a third concern that requires additional scrutiny: 3) data privacy and security.
Reducing Information Asymmetry
The guidelines aim to define the responsibilities of LSPs in maintaining transparency, so that borrowers are aware of the identity of the regulated entity (RE) providing the loan and can make informed decisions based on consistent information when weighing their options.
Comments: Guideline iii suggests that the digital view should include information that helps the borrower compare various loan offers. This includes “the name(s) of the regulated entity (RE) extending the loan offer, amount and tenor of loan, the Annual Percentage Rate (APR) and other key terms and conditions”, alongside a link to the key facts statement (KFS). The earlier ‘Guidelines on Digital Lending’ specify that the APR should be an all-inclusive cost covering margin, credit costs, operating costs, verification charges, processing fees, and so on, excluding only penalties and late payment charges.
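To illustrate why an all-inclusive APR matters to borrowers, the sketch below computes the effective annualised cost of a hypothetical loan by solving for the internal rate of return of its cash flows after upfront charges are deducted from the disbursed amount. All figures are made up, and this is a simplified illustration rather than the computation prescribed in the RBI guidelines.

```python
# A simplified illustration (not the RBI-prescribed method): the effective
# annualised cost of a loan rises once upfront charges are deducted from the
# amount actually received. We solve for the monthly IRR of the cash flows by
# bisection and annualise it. All figures are hypothetical.

def monthly_irr(net_disbursal: float, emi: float, months: int) -> float:
    """Find the monthly rate r such that the EMIs discount back to the net disbursal."""
    def pv(r: float) -> float:
        return sum(emi / (1 + r) ** t for t in range(1, months + 1))

    lo, hi = 0.0, 1.0
    for _ in range(100):          # bisection on the discount rate
        mid = (lo + hi) / 2
        if pv(mid) > net_disbursal:
            lo = mid              # rate too low: present value still too high
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical loan: Rs. 1,00,000 at a quoted 18% p.a. over 12 months,
# with Rs. 3,000 in processing and verification fees deducted upfront.
principal, months, quoted_annual = 100_000, 12, 0.18
emi = principal * (quoted_annual / 12) / (1 - (1 + quoted_annual / 12) ** -months)
r = monthly_irr(principal - 3_000, emi, months)
print(f"Quoted rate: {quoted_annual:.1%}, all-inclusive effective APR: {(1 + r) ** 12 - 1:.1%}")
```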
Recommendations: Not all users of digital lending services may be aware that the APR is inclusive of all non-contingent charges. Requiring digital loan aggregators to provide messages or notifications boosting consumer awareness of regulations and their rights can help reduce violations. We also recommend that this information be made available in multiple languages so that a wide range of users can access it. Further, we recommend that LSPs be held accountable for adhering to an inclusive platform design that allows easy access to this information.
Market Fairness
Guidelines ii-iv also outline practices to curb the anti-competitive placement of digital loan products by regulating the use of dark patterns and increasing transparency.
Comments: Section ii mandates that LSPs must disclose the approach used to determine the willingness of lenders to offer a loan. Whether this estimate includes factors associated with the customer profile, such as age, income, and occupation, should be clearly disclosed as well.
Recommendations: To improve transparency, loan aggregators may also be asked to share an overall rate of rejection or approval within the digital view, alongside the predictive estimate of the lender’s willingness.
While the ‘Guidelines on Digital Lending’[6] clearly state that LSPs must collect any fees from the REs and not from borrowers, further clarification should be provided on whether LSPs can charge fees for the loan aggregation service itself, i.e., for providing information about available loan products.
Privacy and Data Security
The earlier ‘Guidelines on Digital Lending’[7] require LSPs to store only minimal contact data about the customer and to provide consumers the ability to have their data removed, i.e., the right to be forgotten by the provider, once they are no longer seeking its services. Personal financial information is not to be stored by LSPs. It is the responsibility of REs to ensure that LSPs do not store extraneous customer data, and to stipulate clear policy guidelines regarding the storage and use of customer data.
Comments: It is important to ascertain the nature of anonymised and personally identifiable customer data that may be currently utilised by LSPs or processed on their platforms, in the course of providing a range of services within the digital credit ecosystem to borrowers and lenders.
Certain functions that loan aggregators perform may expand their role beyond that of a simple intermediary. LSPs also provide services assessing borrowers’ creditworthiness, payment services, and agent-led debt collection services for lenders. Some LSPs may be involved in more than one stage of the loan process, which may make them privy to additional personal information about a borrower. There may also be cases in which a consumer registers on an LSP’s platform without going ahead with any loan application. It is unclear who is responsible for maintaining data security and privacy, or for providing grievance redressal, in such situations.
Section ii allows LSPs to provide borrowers with estimates of lenders’ willingness. Some LSPs connecting REs with borrowers may also provide services using alternative and even non-financial data to assess the creditworthiness of thin-file credit seekers. Whether there are any restrictions on the use of AI tools in these processes, and on the handling of customer data, should also be clarified, with limits imposed where necessary. The right to be forgotten may be difficult to enforce with the use of certain machine learning and other artificial intelligence models. As innovation in credit scoring mechanisms continues, it is also important to bring such financial service providers under the ambit of guidelines for digital lending platforms.
Recommendations: The burden of maintaining privacy and data security should fall on aggregators of loan products in addition to regulated entities. Guidelines should limit the use of PII (and personal financial information, where applicable) for purposes other than connecting borrowers to a loan provider without consumer consent. Informed and explicit consumer consent should be sought for any additional purposes such as marketing, market research, product development, cross-selling, and the delivery of other financial and commercial services, including providing access to other loan products in the future.
Consumers are often required to register on a platform by providing contact details and other personal information. An initial digital view of available loan products could be displayed to all users without requiring registration, to help borrowers determine whether they would like to register for the LSP’s services. This can help reduce the amount of consumer contact information and other personally identifiable information (PII) collected by LSPs.
Emerging Risks
Emerging consumer risks within the digital lending ecosystem expose borrowers to additional harms such as over-indebtedness, fraud, data misuse, lack of transparency, and inadequate redress mechanisms.[8] These draft guidelines clearly lay out mechanisms to reduce risks arising from a lack of transparency. Similar efforts are needed to reduce data misuse – for instance, by delimiting the time period for which customer data may be retained – and to address the risk of over-indebtedness.
One of the biggest sources of consumer risk has been the debt recovery stage. Aggressive debt collection practices have had deleterious effects on consumers’ mental health and social standing, and have even led some to consider suicide. Extant guidelines assume that a recovery agent will be contacting the consumer.[9] However, LSPs may also set up automated payments and use digital communication such as app notifications, messages, and automated calls in the debt recovery process. The impact of repeated notifications and automated debt payments also needs to be considered in future iterations of guidelines addressing risk in the digital lending ecosystem.
[1] “Funding distribution of FinTech companies in India in second quarter of 2023, by segment”, Statista, accessed 30 May 2024, https://www.statista.com/statistics/1241994/india-fintech-companies-share-by-segment/
[2] Anushka Sengupta, “India’s digital lending market likely to grow $515 bn by 2030: Report”, Economic Times, 17 June 2023, https://bfsi.economictimes.indiatimes.com/news/fintech/indias-digital-lending-market-likely-to-grow-515-bn-by-2030-report/101057337
[3] “Mobile Instant Credit: Impacts, Challenges, and Lessons for Consumer Protection”, Center for Effective Global Action, September 2023, https://cega.berkeley.edu/wp-content/uploads/2023/09/FSP_Digital_Credit_Research_test.pdf
[4] Jinit Parmar, “Ruthless Recovery Agents, Aggressive Loan Outreach Put the Spotlight on Bajaj Finance”, Moneycontrol, 18 April 2023, https://www.moneycontrol.com/news/business/ruthless-recovery-agents-aggressive-loan-outreach-put-spotlight-on-bajaj-finance-10423961.html
[5] Prudhviraj Rupavath, “Suicide Deaths Mount after Unregulated Lending Apps Resort to Exploitative Recovery Practices”, Newsclick, 26 December 2020 https://www.newsclick.in/Suicide-Deaths-Mount-Unregulated-Lending-Apps-Resort-Exploitative-Recovery-Practices
Priti Gupta and Ben Morris, “India's loan scams leave victims scared for their lives”, BBC, 7 June 2022, https://www.bbc.com/news/business-61564038
[6] Section 4.1, Guidelines on Digital Lending, 2022.
[7] Section 11, Guidelines on Digital Lending, 2022.
[8] “The Evolution of the Nature and Scale of DFS Consumer Risks: A Review of Evidence”, CGAP, February 2022, https://www.cgap.org/sites/default/files/publications/slidedeck/2022_02_Slide_Deck_DFS_Consumer_Risks.pdf
[9] Section 2, Outsourcing of Financial Services - Responsibilities of regulated entities employing Recovery Agents, 2022.
Online Censorship: Perspectives From Content Creators and Comparative Law on Section 69A of the Information Technology Act
This paper was reviewed by Krishnesh Bapat and Torsha Sarkar.
Abstract: The Government of India has increasingly engaged in online censorship using powers in the Information Technology Act. The law lays out a procedure for online censorship that relies solely on the discretion of the executive. Using a constitutional and comparative legal analysis, we contend that the law has little to no oversight and lacks adequate due process for targets of censorship. Through semi-structured interviews with individuals whose content has been taken down by such orders, we shed light on experiences of content owners with government-authorised online censorship. We show that legal concerns about the lack of due process are confirmed empirically, and content owners are rarely afforded an opportunity for a hearing before they are censored. The law enabling online censorship (and its implementation) may be considered unconstitutional in how it inhibits avenues of remedy for targets of censorship or for the general public. We also show that online content blocking has far-reaching, chilling effects on the freedom of expression.
The paper is available on SSRN, and can also be downloaded here.
AI for Healthcare: Understanding Data Supply Chain and Auditability in India
Read our full report here.
The use of artificial intelligence (AI) technologies constitutes a significant development in the Indian healthcare sector, with industry and government actors showing keen interest in designing and deploying these technologies. Even as key stakeholders explore ways to incorporate AI systems into their products and workflows, a growing debate on the accessibility, success, and potential harms of these technologies continues, along with several concerns over their large-scale adoption. A recurring question in India and the world over is whether these technologies serve a wider interest in public health. For example, the discourse on ethical and responsible AI in the context of emerging technologies and their impact on marginalised populations, climate change, and labour practices has been especially contentious.
For the purposes of this study, we define AI in healthcare as the use of artificial intelligence and related technologies to support healthcare research and delivery. The use cases include assisted imaging and diagnosis, disease prediction, robotic surgery, automated patient monitoring, medical chatbots, hospital management, drug discovery, and epidemiology. The emergence of AI auditing mechanisms is an essential development in this context, with several stakeholders ranging from big-tech to smaller startups adopting various checks and balances while developing and deploying their products. While auditing as a practice is neither uniform nor widespread within healthcare or other sectors in India, it is one of the few available mechanisms that can act as guardrails in using AI systems.
Our primary research questions are as follows:
- What is the current data supply chain infrastructure for organisations operating in the healthcare ecosystem in India?
- What auditing practices, if any, are being followed by technology companies and healthcare institutions?
- What best practices can organisations based in India adopt to improve AI auditability?
This was a mixed methods study, comprising a review of available literature in the field, followed by quantitative and qualitative data collection through surveys and in-depth interviews. The findings from the study offer essential insights into the current use of AI in the healthcare sector, the operationalisation of the data supply chain, and policies and practices related to health data sourcing, collection, management, and use. It also discusses ethical and practical challenges related to privacy, data protection and informed consent, and the emerging role of auditing and other related practices in the field. Some of the key learnings related to the data supply chain and auditing include:
- Technology companies, medical institutions, and medical practitioners rely on an equal mix of proprietary and open sources of health data, and there is significant reliance on datasets from the Global North.
- Data quality checks exist, but they are seen as an additional burden, with the removal of personally identifiable information being a priority during processing.
- Collaboration between medical practitioners and AI developers remains limited, as does feedback between the users and developers of these technologies.
- There is a heavy reliance on external vendors to develop AI models, with many models replicated from existing systems in the Global North.
- Healthcare professionals are hesitant to integrate AI systems into their workflows, with a significant gap stemming from a lack of training and infrastructure to integrate these systems successfully.
- The understanding and application of audits are not uniform across the sector, with many stakeholders prioritising more mainstream and intersectional concepts such as data privacy and security in their scope.
Based on these findings, this report offers a set of recommendations addressed to different stakeholders such as healthcare professionals and institutions, AI developers, technology companies, startups, academia, and civil society groups working in health and social welfare. These include:
- Improve data management across the AI data supply chain

Adopt standardised data-sharing policies. This would entail building a standardised policy that adopts an intersectional approach to include all stakeholders and areas where data is collected to ensure their participation in the process. This would also require robust feedback loops and better collaboration between the users, developers, and implementers of the policy (medical professionals and institutions), and technologists working in AI and healthcare.

Emphasise not just data quantity but also data quality. Given that the limited quantity and quality of Indian healthcare datasets present significant challenges, institutions engaged in data collection must consider their interoperability to make them available to diverse stakeholders and ensure their security. This would include recruiting additional support staff for digitisation to ensure accuracy and safety and maintain data quality.

- Streamline AI auditing as a form of governance

Standardise the practice of AI auditing. A certain level of standardisation in AI auditing would contribute to the growth and contextualisation of these practices in the Indian healthcare sector. Similarly, it would also aid in decision-making among implementing institutions.

Build organisational knowledge and inter-stakeholder collaboration. It is imperative to build knowledge and capacity among technical experts, healthcare professionals, and auditors on the technical details of the underlying architecture and socioeconomic realities of public health. Hence, collaboration and feedback are essential to enhance model development and AI auditing.

Prioritise transparency and public accountability in auditing standards. Given that most healthcare institutions procure externally developed AI systems, some form of internal or external AI audit would contribute to better public accountability and transparency of these technologies.

- Centre public good in India’s AI industrial policy

Adopt focused and transparent approaches to investing in and financing AI projects. An equitable distribution of AI spending and associated benefits is essential to guarantee that these investments and their applications extend beyond private healthcare, and that implementation approaches prioritise the public good. This would involve investing in entire AI life cycles instead of merely focusing on development and promoting transparent public–private partnerships.

Strengthen regulatory checks and balances for AI governance. While an overarching law to regulate AI technologies may still be under debate, existing regulations may be amended to bring AI within their ambit. Furthermore, all regulations must be informed by stakeholder consultations to guarantee that the process is transparent, addresses the rights and concerns of all the parties involved, and prioritises the public good.
Technology-facilitated Gender-based Violence and Women’s Political Participation in India: A Position Paper
Read the full paper here.
Political participation of women is fundamental to democratic processes and promotes the building of more equitable and just futures. The rapid adoption of technology has created avenues for women to access the virtual public sphere, where they may have traditionally struggled to access physical public spaces due to patriarchal norms and violence. While technology has provided tools for political participation, information seeking, and mobilisation, it has also created unsafe online spaces for women, often limiting their ability to engage actively online.
This essay examines the emotional and technological underpinnings of gender-based violence faced by women in politics. It further explores how gender-based violence is weaponised to diminish the political participation and influence of women in the public eye. Through real-life examples of gendered disinformation and sexist hate speech targeting women in politics in India, we identify affective patterns in the strategies deployed to adversely impact public opinion and democratic processes. We highlight the emotional triggers that play a role in exacerbating online gendered harms, particularly for women in public life. We also examine the critical role of technology and online platforms in this ecosystem – both in perpetuating and amplifying this violence as well as attempting to combat it.
We argue that it is critical to investigate and understand the affective structures in place, and the operation of patriarchal hegemony, which continue to make public spheres, both online and offline, unsafe for women to access. We also advocate for understanding technology design and identifying tools that can genuinely aid in combating TFGBV. Further, we point to the continued need for greater accountability from platforms to mainstream gender-related harms and combat them through diversified approaches.