The Mother and Child Tracking System - understanding the data trail in the Indian healthcare system
This article was first published by Privacy International on October 17, 2019.
On October 17th 2019, the UN Special Rapporteur (UNSR) on Extreme Poverty and Human Rights, Philip Alston, released his thematic report on digital technology, social protection and human rights. Understanding the impact of technology on the provision of social protection – and, by extension, its impact on people in vulnerable situations – has been part of the work the Centre for Internet and Society (CIS) and Privacy International (PI) have been doing.
Earlier this year, PI responded to the UNSR's consultation on this topic. We highlighted what we perceived to be the most pressing issues we had observed around the world concerning the use of technology to deliver social protection, and its impact on the privacy and dignity of benefit claimants.
Among them, automation and the increasing reliance on AI are of particular concern - countries including Australia, India, the UK and the US have already started to adopt these technologies in digital welfare programmes. This adoption raises significant concerns about a quickly approaching future in which computers decide whether or not we get access to the services that allow us to survive. There is an even more pressing problem: more than a few stories have emerged revealing the extent of the bias in many AI systems - biases that create serious issues for people in vulnerable situations who are already exposed to discrimination, and that are made worse by the increasing reliance on automation.
Beyond the issue of AI, we think it is important to look at welfare and automation with a wider lens. In order for an AI to function, it needs to be trained on a dataset so that it can understand what it is looking for. That requires the collection of large quantities of data. That data would then be used to train an AI to recognise what fraudulent use of public benefits looks like. That means we need to think about every data point being collected as one that, in the long run, will likely be used for automation purposes.
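To make this concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn; every field name, value and label below is invented for the example, not drawn from any real welfare system) of how records collected to administer benefits could later be repurposed to train an automated "fraud" classifier.

```python
# Illustrative sketch only: hypothetical fields and data, not the schema of any
# real welfare system. It shows how records collected for service delivery can
# later become training data for an automated "fraud" classifier.
from sklearn.linear_model import LogisticRegression

# Each row: data points originally collected to administer a benefit
# [claims filed, months enrolled, household size, benefit amount]
claim_records = [
    [1, 24, 4, 1200.0],
    [3, 6, 2, 900.0],
    [2, 36, 5, 1500.0],
    [5, 3, 1, 700.0],
]
# Labels assigned after the fact (1 = flagged as "fraudulent")
labels = [0, 1, 0, 1]

# The same administrative data now trains a model that will score future claimants
model = LogisticRegression()
model.fit(claim_records, labels)

new_claimant = [[4, 5, 2, 800.0]]
print(model.predict(new_claimant))  # an automated decision based on repurposed data
```

The point is not the model itself but the pipeline: data gathered to deliver a service becomes, without any further consent from the people it describes, the raw material for automated decision-making about them.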
These systems incentivise the mass collection of people's data across a huge range of government services, from welfare to health - areas where women and gender-diverse people are uniquely impacted. CIS has been looking specifically at reproductive health programmes in India, work that offers a unique insight into the ways in which mass data collection in systems like these can enable abuse.
Reproductive health programmes in India have been digitising extensive data about pregnant women for over a decade, as part of multiple health information systems (HIS). These can be seen as precursors to current conceptions of big data systems within health informatics. India’s health programme instituted such an information system in 2009, the Mother and Child Tracking System (MCTS), which is aimed at collecting data on maternal and child health. The Centre for Internet and Society, India, undertook a case study of the MCTS as an example of public data-driven initiatives in reproductive health. The case study was supported by the Big Data for Development network, funded by the International Development Research Centre, Canada. The objective of the case study was to focus on the data flows and architecture of the system, and to identify areas of concern as newer systems of health informatics are introduced on top of existing ones. The case study is also relevant from the perspective of the Sustainable Development Goals, which aim to rectify the tendency of global development initiatives to ignore national HIS and create purpose-specific monitoring systems instead.
Between its launch in 2011 and 2018, 120 million (12 crore) pregnant women and 111 million (11 crore) children were registered on the MCTS. The central database collects data on each of a woman's visits from conception to 42 days postpartum, including details of direct benefit transfers under maternity benefit schemes. While data-driven monitoring is a critical exercise to improve healthcare provision, publicly available documents on the MCTS reflect the complete absence of robust data protection measures. The risks associated with data leaks are amplified by the stigma associated with abortion, especially for unmarried women or survivors of rape.
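For illustration only, a per-visit record in a tracking system of this kind might hold fields along the following lines. This is a hypothetical sketch based on the publicly described scope of the MCTS (visit details, clinical indicators, benefit transfers); the field names are invented and do not reflect the actual MCTS schema.

```python
# Hypothetical illustration of the kind of per-visit record a maternal and child
# tracking database might hold; field names are invented, not the actual MCTS schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AntenatalVisitRecord:
    beneficiary_id: str                        # unique ID assigned at registration
    visit_date: date
    weeks_of_pregnancy: int
    health_facility: str
    health_worker_id: str
    blood_pressure: str
    haemoglobin_g_dl: float
    benefit_transfer_amount: Optional[float]   # direct benefit transfer, if paid this visit

record = AntenatalVisitRecord(
    beneficiary_id="MCTS-0000000",
    visit_date=date(2018, 3, 14),
    weeks_of_pregnancy=20,
    health_facility="Primary Health Centre",
    health_worker_id="ANM-000",
    blood_pressure="110/70",
    haemoglobin_g_dl=11.2,
    benefit_transfer_amount=None,
)
print(record)
```

Even in this pared-down form, a single visit generates identifying, clinical and financial data points about one woman - and the system records every visit from conception to 42 days postpartum.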
The historical landscape of reproductive healthcare provision and family planning in India has been dominated by a target-based approach. Geared towards population control, this approach sought to maximise family planning targets without protecting the decisional autonomy and bodily privacy of women. At the policy level, this approach was abandoned in favour of a rights-based approach to family planning in 1994. However, targets continue to be set for women’s sterilisation on the ground. Surveillance practices in reproductive healthcare are then used to monitor under-performing regions and meet sterilisation targets for women; sterilisation continues to be the primary mode of contraception offered by public family planning initiatives.
More recently, this database - among others collecting data about reproductive health - has been adding biometric information through linkage with the Aadhaar infrastructure. This adds to the sensitive information being collected and stored without adherence to any publicly available data protection practices. Biometric linkage is intended to fulfil multiple functions - primarily the authentication of welfare beneficiaries of the national maternity benefit scheme. Making Aadhaar details mandatory could directly contribute to the denial of service to legitimate patients and beneficiaries - as has already been seen in some cases.
The added layer of biometric surveillance also has the potential to enable other forms of abuse of pregnant women's privacy. In 2016, the Union Minister for Women and Child Development under the previous government suggested the use of strict biometric-based monitoring to discourage gender-biased sex selection. Activists critiqued the proposal for taking a paternalistic approach to reducing the rampant practice of gender-biased sex selection rather than addressing the root causes of gender inequality in the country.
There is an urgent need to rethink the objectives and practices of data collection in public reproductive health provision in India. Rather than a continued focus on meeting high-level targets, monitoring systems should enable local usage and protect the decisional autonomy of patients. In addition, the data protection legislation in India - expected to be tabled in the next session of parliament - should place free and informed consent and informational privacy at the centre of data-driven practices in reproductive health provision.
This is why the systematic mass collection of data in health services is all the more worrying. When the collection of our data becomes a condition for accessing health services, it not only threatens our right to health - which should not be conditional on data sharing - but also raises questions about how this data will be used in the age of automation.
This is why understanding what data is collected and how it is collected in the context of health and social protection programmes is so important.