Should Aadhaar be mandatory?
The article was published in Deccan Herald on December 9, 2017.
Getting their day in court to hear interim matters is but a small victory in what has been a long and frustrating fight for the petitioners. In 2012, Justice K S Puttaswamy, a former Karnataka High Court judge, filed a petition before the Supreme Court questioning the validity of the Aadhaar project due to its lack of legislative basis (the Aadhaar Act was passed by Parliament in 2016) and its transgressions on our fundamental rights.
Over time, a number of other petitions also made their way to the apex court challenging different aspects of the Aadhaar project. Since then, five different interim orders of the Supreme Court have stated that no person should suffer because they do not have an Aadhaar number.
Aadhaar, according to the Supreme Court, could not be made mandatory to avail benefits and services from government schemes. Further, the court has limited the use of Aadhaar to only specific schemes, namely LPG, PDS, MNREGA, National Social Assistance Program, the Pradhan Mantri Jan Dhan Yojna and EPFO.
The then Attorney General, Mukul Rohatgi, in a hearing before the court in July 2015 stated that there is no constitutionally guaranteed right to privacy. But the judgement by the nine-judge bench earlier this year was an emphatic endorsement of the constitutional right to privacy.
In the course of a 547-page judgement, the bench affirmed the fundamental nature of the right to privacy, reading it into the values of dignity and liberty.
Yet months after the judgement, the Supreme Court has failed to hear arguments in the Aadhaar matter. The reference to a larger bench and subsequent deferrals have since delayed the entire matter, even as the government has moved to make Aadhaar mandatory for a number of government schemes.
At this point, up to 140 government services require linking with Aadhaar before they can be availed. Chief Justice of India Dipak Misra has promised a constitution bench this week, likely to look only into interim matters of a stay on the deadline for Aadhaar linking. The hearings on the final arguments are likely still some months away. The court's refusal to adjudicate on this issue has been extremely disappointing, and a grave disservice to its intended role as the champion of individual rights.
It is worth noting that the interim orders by the Supreme Court that no person should suffer because they do not have an Aadhaar number, and limiting its use only to specified schemes, still stand.
However, the Aadhaar Act allows the use of Aadhaar by both private and public parties, and permits making it mandatory for availing any benefits, subsidies and services funded by the Consolidated Fund of India. The spate of services for which Aadhaar has since been made mandatory suggests that, in the government's view, the Aadhaar Act has in effect nullified the orders of the Supreme Court.
This was stated in so many words by Union Law Minister Ravi Shankar Prasad in the Rajya Sabha in April. This view is erroneous. While acts of Parliament can supersede previous judicial orders, they must do so either through an express statement in the objects of the Act, or by implication when the two are mutually incompatible. In this case, the Aadhaar Act, while permitting government authorities to make Aadhaar mandatory, does not impose a clear duty to do so.
Therefore, reading the orders and the legislation together leads one to the conclusion that all instances of Aadhaar being made mandatory under the Aadhaar Act are void.
The question may be more complicated for cases where Aadhaar has been made mandatory through other legislation, such as the Prevention of Money Laundering Act, which clearly mandates the linking of Aadhaar numbers rather than merely allowing it. However, despite repeated appeals by the petitioners, the court has so far refused to engage with the question of the legality of such instances.
How may the issues finally be resolved? When the court deigns to hear final arguments, the Aadhaar case will be instructive in how the court defines the contours of the right to privacy. The right to privacy judgement, while instructive in its exposition of the different aspects of privacy, does not delve deeply into the question of what may be legitimate limitations on this right.
In one of the passages of the judgement, "ensuring that scarce public resources are not dissipated by the diversion of resources to persons who do not qualify as recipients" is mentioned as an example of a legitimate incursion into the right to privacy. However, it must be remembered that none of the opinions in the privacy judgement were majority judgements.
Therefore, in future cases, lawyers and judges must parse the various opinions to arrive at an understanding of the majority position, supported by five or more judges. While the privacy judgement was a landmark one, its actual impact on the rights discourse and on matters like Aadhaar will depend extensively on how the judges choose to interpret it.
It Hurts Them Too
Srinagar, J&K: For Mahender*, a member of the Central Reserve Police Force (CRPF) posted in Srinagar for the last two years, the internet has been a way to feel virtually close to his children and wife in Bihar, nearly 1,900 km away. After duty every day, he finds a quiet corner to start video-calling his wife. At the other end, she ensures their two children are beside her. “We discuss how our day went. Most of our conversations revolve around the kids, their schooling and food, and about my parents who live near our house,” says Mahender, who identified himself only with his first name.
However, Mahender and thousands of security personnel like him posted in the Kashmir Valley have not always found this connectivity reliable, thanks to the government's frequent internet shutdowns, mobile data cuts, and social media bans.
Jammu & Kashmir has faced 55 internet shutdowns between 2012 and 2017, as recorded by the Software Freedom Law Centre. The administration justifies this crackdown by citing "law-and-order situations" that arise during encounters between security forces and militants and, later, when civilians carry out protests and marches during militants' funerals.
Hizbul Mujahideen commander Burhan Wani was killed by security forces and police on 8 July 2016, triggering a six-month-long “uprising” among civilians in Kashmir. Immediately after the shootout, security agencies shut down the internet. It is now standard practice in Kashmir to block social media or the internet in a district, or across the entire Valley, each time there is an encounter. It is also a recurring precaution against protests on Independence Day and Republic Day every year.
Security forces and police are not untouched by these shutdowns though. There are 47 CRPF battalions posted in the Kashmir region. “Our jawans experience difficulties during internet bans as they are not able to communicate with their families and friends as frequently as they do when internet is working,” says Srinagar-based CRPF Public Relations Officer Rajesh Yadav.
The J&K police, who are at the forefront of quelling protests and maintaining law & order in the Valley with a strength of nearly 100,000, also suffer. There have been growing instances of clashes between the Kashmiri police and protesters who believe their home force is being brutal during crowd control. The policemen have had to hide or operate in plain clothes. A senior police officer in Srinagar, who does not want to be named, says, “Our families are worried about our well-being when we are dealing with frequent agitations. In such a situation, when there is a ban, we find it difficult to stay in touch with our families.”
More dangerously, internet bans also hit the official communication of police in action. Their offices are equipped with BSNL landline connections, which are rarely shut down, and they usually communicate through wireless; but for mobile internet, most of them depend on private internet service providers, owing to their better connectivity, as does the rest of the state. A senior police officer who deals with counter-insurgency in Kashmir speaks of the impact of cutting off mobile data connectivity. "We have our own WhatsApp groups for quick official communication. We use broadband in offices only and can’t take it to sites of counter-insurgency operations.”
Yadav of the CRPF says, “While we have several effective means of communication for official purposes, social media is one that has accentuated our communication network. During internet bans, our work is not entirely hampered, but there is a little bit of pinch, since that speed and ease of working is not there.” Nevertheless, he defends the ban, insisting that Facebook and WhatsApp are handy tools for people to "flare up" the situation and "mobilise youths" during protests. "So, it becomes a compulsion for the administration to impose the ban."
Counter-insurgency forces have in the last few years created social media monitoring and surveillance cells. They say this is to match the extremists, including those in Pakistan, who now use social media services like Telegram, Facebook and WhatsApp instead of phones that can be tapped. It is also to keep an eye on suspected rumour-mongers and propagandists. For instance, 21-year-old Burhan Wani had gained the attention of security forces precisely because of the way he used his huge following, amassed through Facebook posts and gun-toting pictures, to inspire young Kashmiris to militancy.
“There is always monitoring and surveillance. If militants are using it, then they are within the loop,” says Yadav.
There is widespread public outrage against the state government and agencies who impose frequent net bans in Kashmir, but the CRPF official says it hampers their attempts to build an image and do public relations in Kashmir too. “We promote and highlight programmes like Civic Action and Sadhbhavana online, and that's not possible when there's no social media.”
"The public's criticism of the ban is justified,” the counter-insurgency official says. But they are compelled to impose it in situations like the recent scare around braid chopping, which was caused by “rumour-mongering by persons with vested interests”. Kashmiri civil society had suggested that the police keep the internet up to issue online clarifications trashing the rumours, but it was not to be.
"The internet has made it possible to identify culprits while sitting in an office. But we have to shut it down in case of communal tensions which have the tendency to engulf the whole state,” says the senior cop. “When we have no option left, we go back to traditional human intelligence.”
*Name changed to protect identity.
Mir Farhat is a journalist from Jammu & Kashmir, with an experience of reporting politics, conflict, environment, development and governance issues. His primary interests lie in reporting environment and development. He is a member of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Digital Banking Dreams: Interrupted
Srinagar, J&K: Inside a buzzing branch of the Jammu & Kashmir Bank in Srinagar, 27-year-old Falak Akhtar is busy processing routine transactions. A member of the technical team, this young banker says that almost half of the branch's customers have registered their accounts with the M-Pay mobile app. However, the application built for convenience is not always dependable. As she attends to the rush of customers inside the branch, Falak reminds us that whenever there is an internet shutdown, the app is of no use. “The customers have to resort to traditional banking,” she says.
Every day, Falak’s branch executes 53% of its transactions online. “If the customers do online transactions, the cost per transaction for the bank is only Rs 7. But every time an internet ban is enforced in Kashmir, the cost of each transaction goes up to Rs 54,” she says.
Given that internet shutdowns in Kashmir are usually accompanied by an imposition of a physical curfew, simply going to the bank can be impossible. Ironically, it is during political tensions that Kashmiris, stuck indoors due to curfew or avoiding the streets to keep safe, need internet banking the most.
Zahid Maqbool, an information officer with the J&K government, uses the J&K Bank’s mobile app regularly to transfer money or do transactions. “But last year, when my brother studying outside the state needed money, I couldn’t use the app because of the internet ban,” he says. “During the tense situation and curfew, I took a huge risk to reach the branch in Tral, where only two employees were present." It took him around three hours to transfer Rs 12,000 ($185) to his brother’s account "because the bank’s internet line was also running very slow”.
Showkat (name changed), manager of an ICICI Bank branch in Srinagar, says they use internet facilities of BSNL and Airtel during normal days. “Our branch has 20,000 customers, and around 40% of them use digital banking through an app called I-Mobile,” he says. Last year, as Kashmir plunged into a six-month-long political unrest after the killing of Hizbul Mujahideen commander Burhan Wani on July 8, internet was snapped immediately and remained suspended for several months. The bank was not able to do online transactions throughout the summer. “And whenever there was a relaxation in curfew or strike, there used to be a huge rush of customers in the branch,” Showkat says.
“Whenever an internet ban is on in Kashmir, we suffer huge losses because we don’t manage to get new account holders,” says Showkat. “Since we run most of our operations online, the ban blocks the account holder from accessing the net and uploading scanned ID proofs.”
On average, his branch opens 100 accounts per month. “But last year, amid the internet ban, we managed to open only 40 accounts in six months,” he says. To process these account-opening applications, the bank had to courier the forms to Chandigarh, its nerve centre in North India. Account openings take 24 hours online, but the forms took six days to reach Chandigarh, after which it took another eight days to process them.
To overcome hurdles faced during last year’s internet gag, the bank used the Indian Army’s VSAT network on lease. Showkat says such a line can be used for commercial purposes after clearance from the Army and a payment of Rs 15,000 per month. "Our ATMs were connected through that leased line," he says. "But the problem was that the gag had slowed down the VSAT as well.”
The slow-speed internet hampered cash withdrawals from ATMs, which created quite a furore. “The already frustrated customers started shouting that the bank employees were cheats, that we were irresponsible. It is very difficult to make them understand the technical aspects of it,” he says.
Although banks suffer during frequent internet gags, their plight is often overshadowed by the bigger political crisis in Kashmir. What's clear is that disrupted banking, fee payments, purchases and withdrawals all severely cripple the everyday life of Kashmiris.
In 2016, angry customers, barred from e-banking due to the internet clampdown, thronged banks after months, demanding they be given some respite on EMIs (monthly loan repayments) and other banking schemes. An official from the branch of a nationalised bank outside Srinagar says that when they refused to entertain such requests on procedural grounds, the customers entered into heated exchanges.
Showkat says that customers who had taken loans were neither able to repay their instalments online, nor able to visit the branch because of the unrest. “These customers then ended up having to bear the high interest rate, and some of them had to face penalties.”
Mudasir Ahmad, the owner of a Kashmir Art Emporium in central Kashmir’s Budgam, says that he had taken a loan of Rs 40 lakh ($62,400) from J&K Bank as capital for his handicraft business, but missed seven loan instalments last summer due to the internet clampdown. “I usually pay my loan instalments through e-banking. Last year, when the internet was not working, I had to visit the bank to repay it. There are such long queues. It took me a whole day last year to pay one instalment, which I otherwise pay within minutes through e-banking.”
Digital banking was introduced in Kashmir a few years ago in an effort to reduce footfall in banks and increase online transactions. Online banking through cards and apps was hailed as a step towards a cashless economy. Abdul Rashid, a relationship executive at a State Bank of India branch in Srinagar, says, “But because of the internet gag at most times, we are not able to be a part of it."
Safeena Wani is an independent journalist from Kashmir. Her work has appeared in Al-Jazeera, Kashmir Reader and other regional publications. She is a member of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Amid Unrest in the Valley, Students See a Dark Wall
Srinagar, J&K: On November 18, Srinagar lost 3G and 4G connectivity after a militant and a sub-inspector of the Jammu & Kashmir police force were killed, and one militant caught alive in a brief encounter on the outskirts of the city, near Zakoora crossing. District authorities said data connectivity was snapped to “maintain law and order”.
Students in Srinagar’s SPS Library. Picture Courtesy: Aakash Hassan
But to Jasif Ayoub, an aspiring chartered accountant, it seemed like an obstruction to his exam preparations. Unable to access lectures and texts online, Ayoub was perturbed. He had moved from Anantnag in south Kashmir to Srinagar only to have easy access to the vast pool of information on the world wide web. “My hometown witnesses internet shutdowns very frequently. That is why I moved to live with relatives in Srinagar to prepare for my exams. But the internet speed here too is getting worse by the day,” says Ayoub.
The internet is usually the first administrative casualty when any law & order situation arises in the Kashmir Valley, which has been restive and agitated over the last two decades. Despite the frequency of shutdowns, the state still does not issue a prior warning, or offer emergency connectivity measures. Residents know the pattern now: the mobile internet and SMS are the first to go down, and then broadband and other lease-line service providers follow.
J&K tops the list of Indian states that have witnessed the most internet shutdowns, with 27 recorded between 2012 and 2017, according to internetshutdowns.in, run by the Software Freedom Law Centre. There has been a sharp rise in the curbs imposed this year, with over 30 shutdowns until November 22. Government authorities who issue and implement these bans say it is the only way to undercut the strength of social media in organising movements and resistance. The prime example is Burhan Wani, the 21-year-old Hizbul Mujahideen commander who had used his Facebook account to popularise and justify militant resistance. Wani’s death saw protests erupt across the Valley, which made the state snap internet services on prepaid mobile networks for about six months. For four months, there was no internet access on postpaid mobile networks either. These have been the longest spells of disconnection; day-long, hour-long and even week-long periods of non-connectivity, however, are alarmingly common.
The incessant disruption of internet services prevents students from accessing online education resources. Class IX student Haiba Jaan in Srinagar depends on lectures from Khan Academy, an online coaching platform, to clarify concepts. A resident of Hyderpora in Srinagar, Haiba points to the iPad in her hand. “This is the best way of learning," she says. "I was not satisfied with my teachers in school or tuition classes. I found studying on the internet quite useful. But the problem with that is the regular internet shutdowns." Her parents got a postpaid broadband connection last year to help Haiba. "But even that gives up many times during total internet shutdowns," says Haiba.
In May this year, the government suspended the use of 22 social media and messaging platforms in Kashmir for a month. Skype was one of the messaging services banned. This put Mehraj Din through great trouble. Shortlisted for a summer programme in Istanbul, Turkey, this scholar of Islamic Studies at Kashmir University had to appear for the final interview via Skype. "The ban could have ended all my chances of getting selected had the organisers not agreed to an audio interview considering the ground situation here," says Mehraj, who is currently compiling his dissertation for the university. "I have a deadline to meet, but repeated shutdowns have affected my work," he says. "This is a punishment from the State."
Full libraries, half studies
When home and mobile internet connections are snapped, the state government's e-learning initiative in public libraries provides some respite. Mehrosha Rasool wants to secure an MBBS seat through the NEET competitive exam. She visits the SPS library in Srinagar religiously to access the study material that has been downloaded and made available on computers. The 17-year-old resident of Nishat in Srinagar says libraries are useful since one never knows how long the internet services at home will stay stable. Irshad Ahmad, another student utilising the facilities at SPS library, says he moved to Srinagar from Pattan town of north Kashmir because "this facility of accessing education material is not available at the library in my tehsil."
Most prominent libraries in Srinagar have computers and tablets for students’ use. "But the rooms often become overcrowded as hundreds of students have registered at the libraries for internet facilities," says Mehrosha.
Schools in the Valley, meanwhile, rely on traditional means in the absence of the e-learning systems. Javaid Ahmad Wani, a political science teacher from south Kashmir’s Anantnag, believes that with little time in the year to even complete the basic syllabus thanks to frequent and sudden school closures during periods of unrest, supplementary e-learning is a distant possibility. Even when teachers and students do have access to these resources to stay updated, internet shutdowns make them unreliable. Therefore, teachers and schools stick to conventional means. Javaid admits that he has himself lost opportunities to an internet shutdown. “I could not submit the form for the main exam of the J&K public service last year because there was no Internet,” he says.
Curbs pinch civil service aspirants
Many civil service aspirants depend on the internet for their preparation. Anees Malik, a resident of Shopian, is preparing for the civil service exams. "I cannot afford coaching, so I rely on the internet," he says, especially for mock exams and previous years' question papers. "In such a situation, losing connectivity almost every other week is the worst thing to happen.”
Sakib Wani, a Kupwara resident who is currently studying chemistry in Uttarakhand, notices a marked indifference in Kashmir to using online resources. "Those applying for scholarships and pursuing higher education may be using it but not to the extent that students in other states of India do it,” Sakib says. He believes that the repeated internet bans could be one reason students do not opt for online educational resources. With colleges and schools shut for weeks during periods of conflict, the internet could have been a great way to continue education formally and personally, but the repeated shutdowns have closed that door of opportunity too.
Aakash Hassan is a Srinagar-based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters. He has reported on conflict, environment, health and other issues for different publications across India.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Online or Offline, Protest Goes On
Srinagar, J&K: Ahead of the Srinagar parliamentary by-polls held on 9 April 2017, the Jammu & Kashmir state government suspended mobile data services to prevent protests around the election. The constituency went to polls with strict restrictions on movement, and with no access to mobile internet. As soon as the electoral staff reached their respective polling booths, however, there were protests. People at dozens of locations in central Kashmir’s Budgam district began to gather to demonstrate against the central and state governments, which they believed had not safeguarded Kashmiri interests.
Faizan, a 12-year-old schoolboy, was killed in the Dalwan shooting
Abbas, 21, was one of the victims of the shooting in Dalwan
Abbas’ home in Dalwan
The school in Dalwan where the shooting occurred
Picture Courtesy: Junaid Nabi Bazaz
In Dalwan, a picture-postcard village atop a hill 35 km from Budgam town, no votes were cast: the officers fled the polling station, and the paramilitary forces and police shot at protesters. Two people – the 21-year-old son of a policeman and a 12-year-old schoolboy – died on the spot.
People of Dalwan have voted in droves in every parliamentary, legislative and local body election, even on occasions when much of Kashmir boycotted the polls. But in April, residents said they were fed up with legislators not working to ensure uninterrupted power, water supply, concrete roads, or even a permanent doctor at the village's only dispensary. So, a village that had never demonstrated or produced any militants in the last 30 years of uprisings in the Kashmir Valley erupted in protest that election day. Now, the cemetery in which the two killed civilians are buried has been renamed the Martyrs' Graveyard.
Bazil Ahmad, a resident of Dalwan, says that nothing could have prevented the protests that day. “We protested against state, it was a spontaneous response,” says 22-year-old Ahmad who threw his first stones that day. “If the government believes that an internet blockade could prevent protests, they’re living in a fool’s paradise.” He sees the internet only as a free platform to express his anger and disappointment. “The actual trigger for the anger comes from the denial of rights and state aggression, not because of the internet,” says Ahmad.
As news of the killings spread to neighbouring villages by word of mouth, residents there too protested. Journalists in these villages updated their newsrooms. Within a few days, all newspapers in Kashmir carried the news of eight deaths, scores of injuries, and the appalling 6.5% voter turnout in Budgam and Ganderbal districts.
After the ban was lifted, videos captured on polling day were posted on Facebook, Twitter and WhatsApp. One of them showed Farooq Dar, a voter returning from the polling booth, tied to the front bumper of a military vehicle as it patrolled villages. A paper with his name was tied to his chest, and a soldier announced on the loudspeaker, “Look at the fate of the stonepelter.” The video created an uproar internationally. The armed forces were accused of using a civilian as “a human shield”, which pushed the Army to hold an inquiry and the police to lodge an FIR.
After these videos emerged, the government on April 26 officially banned 22 social media sites and apps, including Facebook, WhatsApp and Twitter, for over a month. Once again, it seemed to have little effect on the protests – and protestors.
Sajad, who has been throwing stones at the armed forces for the past eight years, says, “The government is miscalculating the use of internet and the occurrence of protests.” The 28-year-old refers to the protests using the Kashmiri phrase kani jung, loosely translated as ‘stone battle’, which to him conveys a revolutionary zeal. Youths like Sajad who participate in the protests insist that they are provoked each time by an instance of human rights violation that exacerbates the long experience of militarisation, the aspiration for “azadi”, and conflict in Kashmir. Internet shutdowns do nothing to erase this trigger, he says, and sometimes heighten their anger.
In 2017 alone, there have been 27 internet or social media bans in J&K, according to internetshutdowns.in. In the absence of evidence or studies of their effects, it’s unclear whether these blockades curb the spread of misinformation at all, or prevent the mobilisation of people for protests. For instance, on 15 April 2017, students from Degree College in south Kashmir’s Pulwama district protested against the armed forces for firing teargas and beating them. Though an internet ban was in place, the incident went live on Facebook. It led to more student protests across the state. Schools, colleges and universities had to be closed for weeks.
Due to the frequency of blockades, several Kashmiris, including ministers, bureaucrats, civilians, protesters and police officers, have found a way out: they have turned to VPNs (Virtual Private Networks).
A VPN allows users to remain secure online and also enables them to access content or websites that are otherwise blocked. Sajad says, "A selective ban on the internet does not help, because we use VPNs. A person gains access to a network, and everyone in the area finds out how. Let the government block everything, it won’t stop protests.” To illustrate his point, Sajad gives the example of uprisings in the summer of 2016, during which internet, pre-paid and post-paid connections were shut for months. “Were there not protests?” he asks. “Kashmir was resisting Indian forces even before the internet existed, so why would it be difficult for us to use the same means now?”
Gulzar, a 30-year-old who has been joining protests since he was 15, says the internet is more often used to disseminate information about injustice than to organise protests. “A guy from Srinagar will only protest in Srinagar, and not go to other places. So, it is not too difficult to find out where protests are going on,” says Gulzar.
A DSP-rank police officer in the cyber crime cell of the J&K Police, on the condition of anonymity, says that the bans have not yielded absolute results, but have been useful in preventing small-scale protests. He cites the example of district-level territorial internet blockades, imposed during gunfights between militants and the armed forces, to prevent immediate information sharing that may lead to an operation being compromised. “Say some militants are caught during an encounter in a village in Pulwama district. We block the internet as a precautionary measure in that area,” he says. “In case the district is violence-free, we reduce the bandwidth. That has now become the standard operating procedure.”
The police officer adds that accustomed to the bans, people now record the protests and later post videos on social media once the ban is lifted. “So, in effect, what the internet ban achieved is neutralised as soon as the internet is back on,” he says.
Names changed to protect identity.
Junaid Nabi Bazaz is a Srinagar-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters. He has been working as a journalist in Kashmir since 2010. He has covered human rights, economy, administration, crime and health over these years. He has also written for contributoria.com, an independent division of The Guardian.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Business Woes from Saharanpur's Internet Ban
Saharanpur: The violence between groups of Thakurs and Dalits that engulfed Saharanpur district in Uttar Pradesh between April and June 2017 continues to haunt its residents. The UP administration had ordered an internet shutdown for 10 days, reportedly to prevent the spread of rumours that had erupted after another caste clash on May 23 in Shabbirpur.
Those running businesses in Saharanpur say they were affected in unexpected ways. They struggled to make regular transactions and incurred losses they haven't yet recovered from.
Forty-eight-year-old Rajkumar Jatav has been manufacturing ladies' shoes for 25 years in Saharanpur town. Helped by his sons Sushant and Rajkumar, he runs a small-scale factory which employs 15 workers who make flat slippers, sandals, heeled shoes and joothis for the local market. Jatav says he suffered a loss of about Rs 1.25 lakh ($2,000) during the 10-day internet shutdown.
"I did not get raw materials like paste solutions, synthetic leather, heels and sequins from my suppliers based in Kanpur and Agra when I failed to pay them the 50% advance through online transfer," says Jatav. "The situation outside the town was also tense. So there was no chance I could go or send someone to the banks either."
Jatav had started using the digital payment system only after demonetisation. "I started doing online payments after November 8, 2016, after I faced a lot of problems with cash availability during that time. Internet payments came as a boon for me and also for my suppliers," he says. But within six months of getting used to online transactions, Jatav faced this new hurdle: an internet shutdown. "To complete the shoe order, we have to invest from our pocket first, but when I couldn't, my suppliers refused to send me the material, which meant I could not complete a big order," he says. He calculates that the cancelled order cost him Rs 2 lakhs. In addition, a few of Jatav's reliable and talented shoe workers quit because he was unable to pay their wages on time.
Jatav's annual business turnover is around Rs 24 lakh (Rs 200,000 a month), and he gets his raw material from the markets of New Delhi, Bareilly and Agra. "I even tried to give my suppliers an account payee cheque but they declined it saying that it will take a lot of time to clear. I requested them again and again but to no avail. For a supplier there are thousands of Rajkumar Jatavs. I am no special client to get the raw materials on credit," he says. Jatav admits that he is not prepared for another shutdown, and that he would not be able to run his business if it happened again.
Many traders in Saharanpur city narrate similar experiences. In May, a family business trading in wholesale edible oil faced its most unfamiliar financial challenge yet. It had been only three years since Shailendra Bhushan Gupta took over his elder brother's 26-year-old store. Gupta had begun to expand and diversify too, launching an agency to trade the Fortune brand of oils. He employs five people, and his monthly turnover ranges between Rs 30-40 lakh ($46,600-62,200). The 40-year-old also modernised some of his business practices, shifting much of the payments to suppliers online for speed and ease of use.
During the internet shutdown in Saharanpur, Gupta did not expect to be affected, given the stability of his store and the large sale volume of his agency. But unexpectedly, his supplying company cancelled his order of 1,000 litres of oil when he could not make the payment. "As per the agreement, I have to deposit at least 50% of the order amount in advance, and the rest of the payment is made when the oil is delivered to us. But during those 10 days, I could not make payment through any means, and my order was declined by the supplying company," he says. Gupta also tried to make the payment through RTGS but could not. The oil trader says he ended up suffering a loss of Rs 18 lakh ($28,000).
Gupta is slowly trying to recover from the monetary loss and rebuild his credibility with his suppliers. "How can an internet shutdown be a solution for anything?" he asks. "I seriously don't know what to do if it happens again."
A property dealer in the same central market took a direct hit during the internet ban. Ashok Pundeer, who has been selling and renting commercial and residential properties for the past five years, estimates that he suffered a loss of Rs 22 lakh ($34,200) during the internet shutdown, as he could not get many properties registered in that period. "I had to return the token money to many buyers because there was no internet," he says. "All of us know that registry (property) and documentation is now done online in Uttar Pradesh. The clients were new and they refused to take the deal forward."
A property dealer is not easily trusted, admits Pundeer. This means he is paid only after a deal is done, and a lot of word-of-mouth business depends on his image and credibility. Every lost client potentially means many more lost. "It's not just me, but many dealers have incurred huge losses due to the shutdown," says Pundeer. "Koi ration ki dukaan to hai nahi property dealing. Jo kuch hona hai online hi hona hai. Ab kya batayein, dekha jayega jo hona hoga," he throws his hands up in frustration, saying the real estate business is no grocery shop, and if there's no access to online transactions, then very little can be done.
Trying to keep an optimistic outlook, Pundeer says, "Jitna kuan khodo, utna paani milega". For his business to recover, he will have to double down with more focus and effort.
(With inputs from Saurabh Sharma)
Mahesh Kumar Shiva is a Saharanpur-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters. He has been reporting for 23 years on crime, healthcare, society, politics, culture, sports, agriculture and tourism in his city. He has previously worked with publications like Dainik Janwani, Dainik Jagran, Amar Ujala, Ajit Samachar and more.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Video
Rajkumar Jatav talks about the challenges his shoe manufacturing business faced during the shutdown. Video Courtesy: Mahesh Kumar Shiva
Internet and the Police: Tool to Some, Trash to Others
Panchkula, Haryana: Suspending internet facilities to “prevent mishaps” has been a frequent exercise in Haryana during various agitations, but probing its effect on those responsible for maintaining law and order in the state reveals a gap in how the information tool is perceived. Some understand its importance in bridging human interaction; others consider it nothing but an easy way to watch porn.
The tricity of Chandigarh, Panchkula and Mohali witnessed chaos and violence when Dera Sacha Sauda (DSS) chief Gurmeet Ram Rahim Singh was convicted in two rape cases on August 25. Mobile internet services were shut down across Punjab, Haryana and Chandigarh for 72 hours as over one lakh followers of the much-revered “godman” started pouring into Panchkula, camping around the district court complex where the special CBI court was hearing the case. The ban was later extended for another 48 hours to last till August 29.
Reports claimed that 38 people died in the interim violence between August 25 and 29. The internet shutdown, evidently, didn’t serve the purpose. But it did affect the efficiency of the mechanism put in place to control the law and order situation.
Shutdowns obstruct us too: Cops
Panchkula police commissioner Arshinder Singh Chawla said they faced challenges in ascertaining the size of the crowds gathering at various locations after mobile internet communication was temporarily killed. “We were until then sharing information and photos on WhatsApp to figure out the number of people pouring into the city from various points, as it helped identify problem areas. DSS followers had started gathering August 22 onwards,” said Chawla, who was heading the operations when DSS followers went on a rampage in Panchkula.
Unavailability of internet had hindered police operations during the Jat agitation in 2016 as well. Jagdish Sharma, a retired DSP who was part of the team countering agitators at the Munak canal when they targeted the chief source of Delhi’s water supply, said his team faced challenges in gathering strength due to the absence of mobile communication. “The protesters had a much larger count than our personnel at the canal, but they weren’t aware of this. We were fearful that our wireless messages asking for reinforcements may be tapped into by them. We could have easily conveyed the message if WhatsApp was working then,” said Sharma. The cops retained control over Munak canal by remaining at their position for two days, until the reinforcements arrived, while posing as if they were prepared to take on the Jat agitators, Sharma added.
The Panchkula police commissioner said that the drone they were using to take photographs and videos during the DSS violence also fell out of use once mobile internet was curtailed. With drones in operation, their tasks would have been much easier, Chawla said.
Panchkula deputy commissioner Gauri Parashar Joshi faced the brunt when her security staff could not communicate with the security personnel at the district court complex. SP Krishan Murari, who was heading a commando squad on the day, said they had to help Joshi scale a wall to escape the court complex as they could not ascertain a safe escape route. The DSS supporters had surrounded the entrances to the complex and were ready to clash with police authorities, he said. Joshi said she could not reach out to her colleagues in the administration to share important messages and orders as the mobile internet services didn’t work.
‘Ban can’t always be boon’
Ram Singh Bishnoi, who was cyber security in-charge with the Haryana police until January 2017, believes a medium like the internet should not be shut down. “I agree that rumours spread like wildfire, but the government should devise other ways to counter the problem than imposing a ban on net services,” he said.
IG (Telecommunication) Paramjit Singh Ahlawat, however, said there is not much use of the internet when the situation turns volatile in the region. Things like internet don’t matter to people when their lives and property are in danger; these services are enjoyed when law and order is under control, he said.
The cops in Haryana, where the internet has been shut down over 11 times in the past two years, may find a lesson in the way former Mumbai police commissioner Rakesh Maria prevented a scuffle from turning into a communal riot. Maria was lauded for using WhatsApp and the SMS service to convince people not to believe rumours being circulated on their phones when clashes broke out between two communities in Lalbaug during Eid celebrations in early 2015.
Former Haryana DGP Mahender Singh Malik does not believe a ban on the internet prevents any untoward incident. Government authorities take such a step in the name of maintaining law and order, but the real reason behind clamping down on the internet is to prevent the masses from becoming aware of the blunders committed by the same authorities, alleged Malik, terming the decision to ban the internet “unwise” and “against the Digital India” initiative of the Centre.
Malik also suggested that people should get compensation when internet shutdown is forced on them.
‘Internet is for the jobless’
However, not all officials in the police department seem to agree with the benefits of internet.
SP (Telecommunication) Vinod Kumar of Haryana Police said: “How does it (internet) matter to a common man? Internet is for those who have no serious job. It is for those who have time to kill on mobile phones, laptops and at cyber cafes.”
In nearby Uttar Pradesh as well, some cops were of the view that internet shutdown did not have much of an impact on their job or general administration. Sub-inspector Vijay Singh was posted in Saharanpur when internet was banned from May 24 for 10 days following caste clashes. “Internet band hone se farak sirf un logon ko pada jinhe din bhar keval mobile hee chalana hota hai. Kaam karne wala aadmi mobile aur internet par samay nahi bitata (Only those who have no work suffer because of internet ban. Those who have work in hand do not spend time on mobile and internet),” said Singh, who is now posted in Lucknow.
“Internet matlab kya - video, Facebook, blue film... aur kya? Agar itne bade gyani hai jinhe internet band hone se farak pada to wo yaha kya kar rahe hai, kahe nahi jakar ke IIT me admission le liye? (What does internet mean - videos, Facebook, porn films… what else? If you are so affected by the internet being banned, why not go and study at the IITs?)” said Kaushlendra Pandey, another SI-rank policeman from Azamgarh district in UP.
The government of India, on the other hand, is campaigning to promote digital inclusion and accessibility across the country.
With additional inputs from Sat Singh and Saurabh Sharma, both members of 101Reporters.com
Manoj Kumar is a Chandigarh based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters. He has reported on a wide range of civic issues over the past 12 years. He has written for Dainik Jagran, Dainik Bhaskar, Amar Ujala, Outlook, etc.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
How Media beat the Shutdown in Darjeeling
Darjeeling, West Bengal: The West Bengal government banned internet in the hills of north Bengal on June 18. The ban was lifted on September 25, one hundred days later. The precautionary “law and order measure”, introduced in the wake of violence following the breakout of a fresh stir for separate Gorkhaland state, was used as a virtual tool by the administration to bargain for peace with protesters in subsequent weeks. Quite naturally, it caused severe hardships to over one million people. Journalists covering the agitation were among the most severely affected.
“It was a first for me — reporting breaking stories from the ground and having to dictate the development on the phone to my office back in Delhi,” says Amrita Madhukalya, a senior reporter with the DNA newspaper. “The first story I broke after reaching Darjeeling was how the agitation had caused losses in excess of Rs 100 crore ($15.6 million) for the tea industry. I sent that story via a string of five SMSes to office before reading it out to one of our subeditors to ensure no discrepancies crept in.”
Sometimes even phone networks were down. “I have a friend who owns a shop in a small market complex near Chowk Bazaar,” says another senior print journalist from New Delhi. “On this one occasion when even SMSes were not going through, this friend helped me access data from a location that only he knew of. There were at least five to ten journalists from national newspapers looking for internet in Darjeeling in mid-July. He clearly didn’t want to attract their or the district magistrate’s attention.”
The clampdown on internet connectivity began a day after three people died of bullet injuries following clashes between pro-Gorkhaland protesters and the police in the heart of Darjeeling town on June 17. One policeman was feared killed. It later came to light that, having braved a near fatal blow from a khukuri, a traditional Gorkha blade, he was severely injured but alive.
By the evening, several videos of an underprepared but infuriated police force thrashing protesters began to circulate on social media. The state intelligence informed Kolkata that the protesters were planning to march around town with the bodies of the three victims the next afternoon and that the social media outcry against the use of force by police was turning increasingly vitriolic. Internet services were clamped early next morning.
As the Gorkhaland movement lingered on and the intensity of violence waned, data services continued to remain a casualty. Chief Minister Mamata Banerjee said the service would be resumed once normality was restored. As the cycle of news shifted to more compelling narratives and senior journalists from big cities returned from Darjeeling, the vacuum was filled by Facebook news pages run by young social media activists, like With You Darjeeling, Chautari24, North Bengal Today, North Bengal Express, etc.
“A blanket ban on internet since June 17th, 2017 was the biggest challenge we faced,” says Rinchu D Dukpa, who edits the very popular Darjeeling Chronicle, a Facebook news page with over 140,000 subscribers. “Imagine over two months of no internet. Getting word out on important news events from the region was such a challenge those days. In addition, countering distorted, biased and unverified news and narratives spewed by mainstream media and even social media platforms paid for by the state was almost impossible due to lack of internet.”
On several occasions, especially after clashes between locals and the police, rumours quoting death tolls would surface. During one such clash in Sukna near Siliguri, one news channel claimed three people had died. It later turned out that there were no casualties. Another interesting rumour that did the rounds was the imposition of President's rule in Darjeeling. Much of this was fuelled by the lack of a healthy flow of information. That there was an internet ban did not help.
The administration of another popular Facebook page run from Darjeeling, which has over 35,000 likes, was taken over by the administrator’s friends in the US. Requesting that his and his page’s name be kept secret, the administrator says he requested his friends in the US to scour content from website reports and e-paper versions of the relevant newspapers.
The ban was eventually lifted on September 25, just five days after the Mamata Banerjee government succeeded in weaning away rebel leader Binay Tamang from the Gorkha Janmukti Morcha, the party leading the agitation. Binay went on to be appointed as the chairman of a new board of administrators for Darjeeling hills.
“The ban may have been very severe but Darjeeling’s geography did offer respite at certain locations,” says Biswa Yonzon, a freelance journalist. “Those areas that face the hills of neighbouring Sikkim would receive internet signals. The connectivity wasn’t always great but it did the job for most local journalists reporting for papers such as The Statesman, The Telegraph and The Times of India.”
In fact the area just behind Darjeeling’s town square Chowrasta, which faces the towns of Jorethang and Namchi in South Sikkim, is now known as the Jio hill, after the Reliance 4G network. In Kalimpong, the misty Carmichael hill too is called by the same name.
Manish Adhikary is a Siliguri-based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Was there an Unofficial Internet Shutdown in BHU & NTPC?
Varanasi/Rae Bareli, Uttar Pradesh: During the student-led protests at Banaras Hindu University in September, anger over how the university handled a sexual harassment complaint was exacerbated by the police brutality that rained down on the protesting female students. Amidst this chaos, many students inexplicably found that they were unable to communicate with their parents and peers because they couldn’t connect online.
Shraddha Singh, a second-year fine arts student at BHU, had to walk three kilometres to reserve her train ticket home and couldn’t call her mother to talk about the injuries she sustained during the lathi-charge on September 23. The 21-year-old student said, “First, the police came into the hostel to beat us up. Then the internet was blocked. Neither was the hostel WiFi working, nor the mobile internet. Forget about booking tickets, we weren’t even able to make calls.” She felt this was a deliberate attempt to disrupt the protest by those who were “afraid” of where it would lead.
Worse still, the hostel warden had asked the girls to vacate their dorms immediately, and the students were cast into the streets without access to the Internet. Tanjim Haroom, a Bangladeshi political science student at the university, found herself stranded in Varanasi like many of her classmates. "I go home only once in a year but this time, I was forced to vacate the hostel and I could not get in touch with any of my relatives or family due to this sudden shutdown of internet and phone services. I was helpless in this city and just had some Rs 700 ($11) with me. I finally got shelter at the Mumukshu Ashram and was able to contact my family from their landline phone.”
Predictably, officials from the university insisted that there wasn’t any clampdown on the internet. The then vice-chancellor, Professor Girish Chandra Tripathi, when asked about this unofficial shutdown, said that there was none. "There could have been a network issue because the internet was working fine in our office. I cannot say what the students have alleged. Making allegations is very easy," he said over the phone to 101reporters. Varanasi district magistrate Yogeshwar Ram Mishra also denied that internet or phone services were suspended during the protests.
But a worrying number of first-person accounts suggest otherwise. According to Avinash Ojha, a first-year post-graduate student at the university, internet and phone services were restricted on the varsity campus soon after the lathi-charge on the students. They weren’t able to get online from the night of September 23 to 25.
The students had to go to Assi Ghat or other far-flung places to talk to their families and make travel arrangements out of the city. Ojha also suspected the hand of the university’s vice-chancellor behind this move.
Another unofficial shutdown may have occurred on November 1, when a boiler explosion occurred at the National Thermal Power Corporation plant in Rae Bareli, which has since claimed 34 lives.
A senior officer of NTPC, on the condition of anonymity, told 101reporters that Reliance Jio was asked to cap their services in the area until things settled down. "I had heard my seniors discussing the need for this in order to avoid panic. There are a large number of Jio users here, so that specific service was asked to restrict its internet speed and calling facility for a while.”
Here too, there is evidence that the outage affected several people in the area. Amresh Singh, a property dealer hailing from Baiswara area of Rae Bareli was in Unchahar when the explosion occurred. He discovered that his phone network was not working. "There was no internet on my mobile phone after 4pm. I was able to access internet only after reaching Jagatpur, which is around 10 kilometres away from Unchahar," said Singh. “It felt like the phone lines were deliberately disrupted. I initially thought something was wrong with my phone, but the people with me were also not able to use their phones. Maybe the government quietly shut down the network to prevent panic.”
Mantu Baruah, a labourer from Jharkhand working at the NTPC, had a near-identical experience. His Jio network stopped working after 4pm that day, and he was unable to contact his family on WhatsApp to tell them that he was safe. "I tried many times, maybe over a hundred times, to send an image but it didn’t work. Jio network was down. Neither video calls nor phone calls were working. The authorities had made this happen so people outside wouldn’t know what was going on here.”
But Ruchi Ratna, AGM (HR) at NTPC’s North Zone office in Lucknow, tells us that there was network congestion that day, not a shutdown. "Even we were unable to talk to our officers and were getting our information through the media," she said. Sanjay Kumar Khatri, Rae Bareli's district magistrate, said over the phone, "There is no question of an 'unofficial shutdown'. I myself faced issues in sending messages on WhatsApp but my BSNL mobile was working fine and even journalists here were sending images and videos real time.”
However, a senior communication manager at Reliance Jio's Vibhuti Khand office in Lucknow revealed to this reporter that the internet was indeed restricted in both these instances for 12 hours each. "This was only done on the order of the government. I do not hold any written information, but it must be with the head office," the communication manager said. At the time of publishing, our requests for comments from the official spokespeople of Jio had not received a response.
Arvind Kumar, principal secretary (Home), Uttar Pradesh government, said that there were no restrictions or shutdowns during either incident. "There could have been network issues. The government did not ask any service provider to restrict its services. I will look into the matter, about where the orders to restrict Jio were issued from, but it did not come from the Uttar Pradesh government," he said.
While activists have roundly criticised the Temporary Suspension of Telecom Services (Public Emergency or Public Safety) Rules, notified in August without public consultation, there is now a better-defined (albeit still vague) protocol for the implementation of internet blackouts. For instance, only the central or state home secretary can issue the orders. Prior to this, internet restrictions were issued by various authorities, along with section 144 of the Criminal Procedure Code, aimed at preventing “obstruction, annoyance or injury”. This broad discretion has allowed the administration to quietly get away with short-term internet bans without proper explanation. In fact, those monitoring these shutdowns are only able to maintain such records by tracking media reports; no official records are available to the public. Without official transparency, often, if there is no news story, it is as if there was no internet ban.
Saurabh Sharma is a Lucknow-based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Internet and Banking: A Trust Broken
Darjeeling, West Bengal: As the Internet shutdown in Darjeeling touched the notorious landmark of 100 days in late September, its impact was felt by members of Gorkha Janmukti Morcha (GJM) — the party agitating for a separate state of Gorkhaland. The state government’s move had managed to impair the communication and coordination among the agitators.
However, for most residents, lack of access to the internet meant months of crippled bank transactions and mounting financial strain. The impact of the move was felt by all sections of society and most services experienced a slowdown or complete paralysis.
Students from the town were among the worst hit as the internet ban cut off a steady flow of money from home for academic purposes.
“I had to cut down my daily meals to once a day to save whatever little currency notes I had, especially since it was not clear when the ban would be lifted,” said Shradha Subba, a resident of Darjeeling who is pursuing her Bachelors degree in Kolkata.
Her parents were not able to send her money due to the ban and arranging cash from another state was also not an option. “I had no option but to borrow money and even that was difficult as all my friends were from the hills and faced the same problem,” said Subba.
The parents of many students also felt hard done by the shutdown and said they often found it difficult to communicate with their children. Transferring money for their monthly educational needs was also impossible. “We were able to make phone-calls to our children once in a while, but we could not see them as video-calling was out of the question. We also could not send the money for their semester fees on time and had to ask our relatives in Sikkim to arrange cash for them,” said a concerned mother whose daughter was studying in Delhi.
The ban on mobile internet was imposed on June 18, 2017. Two days later, broadband service was also restricted. The initial shutdown was meant to last only a week, but it saw several extensions as the agitation continued. Banks were left helpless, especially in the face of uncertainty over when the restrictions would be lifted.
“None of the banking services were functional and no transactions were done during the period of internet shutdown. Even the ATMs were closed and people could not be provided normal service,” said Jagabandhu Mondal, district branch manager, State Bank of India.
People routinely missed bill payments and no online transactions were done during the course of the ban. Reports emerged of people travelling over 80 kilometres, either to Siliguri or to Sikkim, just to withdraw some money.
Those who had purchased new vehicles found themselves struggling to pay their monthly instalments despite having cash in their accounts. Travelling to Siliguri to pay the instalment was also daunting as the road transportation was restricted by agitating political parties and supporters picketing on the streets.
Santosh Rai, a resident who had purchased a car just before the internet ban, said: “I could not go to Siliguri or even pay online. Now I’m facing claims for penalty. It was very hard for the vehicle owners to pay the EMI for three months along with a penalty. I asked for help from my friends but how long will they pay.”
He claimed that several people were forced to default on payments due to the blanket ban imposed by the government. “We could have deposited the EMI but the banks were closed, and that is not our fault,” said Rai.
Another victim, Mukesh Rai, also echoed Santosh’s sentiments while describing how he had to default on EMI payments towards his new car. “I used to walk towards Melli, Rangpo, or Singtam (all small towns in Sikkim) to withdraw money as my family and I were in need of liquid cash. Even that became difficult mid-monsoon,” he said.
Experts also pointed out that the ban was enforced even as the rest of the country discussed Digital India and a push towards cashless economy.
Another resident, Pema Namgyal, said he had lost a job because of the ban on internet services. He had opted to work from home for an advertising agency based out of Bangalore. “I had taken up an editing and copywriting job with an advertising agency. I had an issue with my spine and since long leaves are not possible in creative agencies, I opted to work from home. Five days after I reached here, an indefinite strike was called and the internet was shut down. I couldn’t work as per my client’s schedule and when I could not coordinate with him, he looked for another copywriter and asked me to refund an advance payment he had made,” said Namgyal.
The manager of an HDFC Bank branch, Paul Tshring Lepcha, said, “We usually use BSNL connections for banking work, and once the network was down we had a hard time updating our system… there are alternative portals like Airtel and Vodafone but even that was of no use at the time.”
The book size of private banks also saw a drop in these 100 days, and the rule requiring customers to maintain a monthly minimum of ₹5,000 in their accounts could not be enforced. Officials from IndusInd Bank said that people even started preferring government banks, which have a lower maintenance requirement. “During the ban period, no new account holders were registered and the mutual funds market also experienced a lull,” said an official from a private bank.
Roshan Gupta is a Siliguri-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
The Rising Stars in Music Loathe Losing their Only Platform
Srinagar, J&K: Amid the gaudy Old City area of Srinagar, where the air is heavy with the pungent smell of teargas shells, 25-year-old Ali Saifuddin has been busy working on compositions that he will perform at a prominent indie music festival in Pune in December 2017. Pune may be discovering Saifuddin’s music only now, but he has performed in Dubai and London too, owing to the fanbase he has garnered on social media.
Mehmeet Syed’s popularity on social media has taken her to countries like the US, the UK and Australia, and to Abu Dhabi (Picture courtesy: Mehmeet Syed's Facebook page)
Umar Majeed shot to fame with his rendition of Pakistan’s national anthem on the santoor. Yawar Abdal, a Kashmiri singer, says he doesn’t see the logic behind keeping the internet shut for months (Picture courtesy: Yawar Abdal's Facebook page)
It was in 2014 when the budding musician bought recording gear and created a Facebook page. Hours after uploading his first video, Saifuddin became an internet sensation. “I was stunned to see thousands of views on Facebook. People who I had never met with hailed my tunes and encouraged me to produce more,” Saifuddin says.
With 9,000 followers on Instagram and more than 6,000 ‘likes’ on his Facebook page, Saifuddin often gets offers to perform outside Kashmir.
“(As an artist) you need a platform, and in Kashmir, it is the internet that sides with you,” says Yawar Abdal, another popular YouTuber, whose song Tamanna has garnered over 400,000 views since June. “I uploaded a minute-long video on Facebook in April last year. It went viral and made me famous,” Abdal says.
The 23-year-old Pune University student has more than 13,000 followers on Instagram and more than 10,000 likes on Facebook. “There are no shows organised in Kashmir. The internet is the only platform where people can broadcast what they possess,” he says.
Frequent curfews, even online, are like a curse for Kashmiris. Internet services are clamped down in the Valley quite often, particularly since the killing of militant leader Burhan Wani on July 8, 2016. Wani’s killing sparked violent protests resulting in the deaths of 15 civilians the very next day. The clashes killed 383 people - including 145 civilians, 138 militants and 100 state and Central security personnel - and around 15,000 others were injured. Even as many were put under illegal detention following the outbreak of deadly violence, the government suspended the internet for more than six months in 2016.
In such a scenario, where shutdowns stretch from the streets to social media, it is not surprising to see Kashmiris voice their dissent through art whenever they find a window open. In 2017, internet services were blocked 27 times across various districts of the Valley, either on mobile alone or on both mobile and broadband, in the hope of preventing rumour-mongering and the instigation of violence.
“This is unnatural and tantamount to choking a person’s right to free speech,” says Saifuddin, who has been criticising the human rights violations in Kashmir through songs that carry a political undertone. The son of doctors based in the UK, Saifuddin was initiated into rock music through Jimi Hendrix and Led Zeppelin during his school days, before heading to Delhi University for a BA degree in 2011. “There I found the treasure of music. I finally had a computer and an internet connection. YouTube became my first, and so far, only teacher,” recalls Saifuddin. His songs on YouTube include Aye Raah-e-Haq Ke Shaheedon, Phir Se Hum Ubharaygay, and Manzoor Nahi - a song he posted to protest against Prime Minister Narendra Modi’s visit to Kashmir in November 2015.
For Mehmeet Syed, whose music was limited to CDs since 2004, the internet opened new avenues. Her popularity on social media has taken her to places like the US, the UK, Australia and Abu Dhabi. “Being on social media is very important as it lets people stay updated about my work. My popularity touched new heights after I took to the internet,” says Syed, who owns a verified Facebook page with more than 1.20 lakh followers. On Instagram, she is a novice. But an internet ban means “heartbreak” to her. “Internet is not shut down in other places witnessing violence and conflict… We are very unfortunate to face internet bans,” says Syed.
“As singers, we have to record songs, mail them for editing, or receive content from studio. Without internet, we are stuck, paralysed,” she says.
Explaining how internet is more than a means of free expression, Mehmeet says, “Times have changed. This is the era of iTunes and YouTube. The songs we release in Kashmir are watched online across the globe. And this is how you earn today.”
The freedom to share content has empowered even the marginalised lot who were only known locally for their talent. Abdul Rashid, a transgender wedding singer popular as ‘Reshma’ in Srinagar’s Old City, became an online sensation after one of her wedding songs was widely viewed on Facebook, and media followed up with stories around her.
“Nobody knew me outside my locality. But today, I get calls from across Kashmir to sing on weddings. This became possible through Facebook. It gave me wide publicity,” Reshma says.
Umar Majeed, a Class 12 student from Zainakoot in Srinagar, is keeping the folk tradition of Kashmir alive with the help of the internet. While the 19-year-old inherited his skills on the Santoor from his father, Abdul Majeed, it was social media that propelled him to fame. Umar played the national anthem of Pakistan on the Santoor, accompanied by two other musicians on the Rabaab. “The instrumental composition was viewed 450,000 times in two days,” says Umar, adding that they are working on a musical theme of the Indian national anthem as well.
With 5,000 friends on Facebook and 2,500 followers on Instagram, Umar has quite a wide network for a schoolkid. “We get a lot of encouragement and confidence when people comment on and appreciate our work online,” he says. But repeated internet bans keep the young musician away from this much-needed feedback.
“When I get an idea, I instantly compose it on Santoor and upload it on Facebook to get viewers’ response… But when there is internet ban, I have no mood to play even when I get an idea, and soon I forget it,” he says.
Mehmeet points out that the internet not only promises freedom of expression but also provides monetary support to indie artists through platforms like iTunes, Google Play, Pandora, Amazon and Saavn. She has been generating revenue to support her music through 21 of her tracks uploaded on these platforms, Mehmeet says.
The repeated shutdown of the internet around Republic Day and Independence Day also sends a wrong message to Kashmiris, says Mehmeet. “We realise that such attitude is step-motherly, which is unacceptable. And we as Kashmiris have not yet reached the stage where we think we have got independence.” Saifuddin seconds her sentiments. “If it is a democracy, then I have a right to speak my heart out. Why would the government choke my voice?” he asks.
When asked if the clamping down of internet service affects his music and earning, Saifuddin retorts poetically: “If not for the internet, I wouldn’t be around. So yes, it pains to see Kashmir being sealed on streets and on the cyberspace as well.
“It makes you angry at times to see things that happen nowhere but in Kashmir.”
Abdal, on the contrary, wants his music to be apolitical. “I sing the songs of Sufi saints and strive to rejuvenate the dying Kashmiri music,” he says.
But, the ban on internet services leaves him perturbed. “Without listeners, you begin losing interest. I hope one day the government understands that there is no logic in keeping the internet shut for weeks and months,” says Abdal, adding that he also observes a drop in demand for live gigs in the absence of internet.
“When you have a lot to share, but the medium through which you could take it to people is blocked, discomfort is what you’re left with.”
Umar Shah and Mir Farhat are Srinagar-based freelance writers and members of 101Reporters.com, a pan-India network of grassroots reporters.
Stock Brokers Don't Love an Internet Shutdown
Ahmedabad, Gujarat: An internet shutdown means breaking contact with the lifeline of the stock market: information about share price movement. “The entire momentum for trading and investing comes from the control the trader feels he has over information about share prices,” says Minesh Modi, a trader based in Ahmedabad. “The internet puts information at our fingertips, so the trader can play on the stock exchange. It gives you a sense of control over the data, and is also a mechanism to trade.”
So, when the Gujarat government shut the internet down for a week during the Patel agitation in September 2015, and for four hours to prevent cheating on phones during a Revenue Accountants Recruitment Exam in February 2016, Modi says, "That intense feeling of connect goes away, and the faith is shaken.”
An obvious fallout of a mobile internet shutdown is that terminals connected via phone internet stop working, and mobile trade is not possible. However, many Gujarati investors say that while they check price variations and movements online, they still trade through brokerage houses. Playing the stock market is usually a part-time business activity for most Gujaratis. “I don’t trade online directly. I place actual orders of purchase/sell through my broker,” says YK Gupta, an investor in the city. Still, he did struggle during the internet shutdown. “I couldn’t keep a tab on the price movement, and had to call up my broker for updates. How many times can I take updates on the phone? The television gives prices of only a few stocks, and those prices are delayed by three to five minutes. Stock prices being as volatile as they are, that time gap can be life-changing in the stock market.” Not willing to risk a huge mistake, Gupta chose to stay away from making any stock transactions during the internet shutdown.
The stock market rides on people's aspirations and individual deductions about trends and data, which in turn impacts business valuations. Since internet penetration has increased, traders say there is a premium on speedy reaction as well. Anil Shah, a former director with the Bombay Stock Exchange (2011-14) and a member of the National Stock Exchange, believes that an internet shutdown, however partial, will paralyse the ecosystem that sustains the share market. “Most of our work is on the terminals and when they stop, the smooth flow gets disrupted. The information that is the base in the stock market, the actual trading and fund flow work, all this will stop. When the internet stops, data stops, and the flow of work stops. It's as simple as that,” he says.
Recalling the impact of the internet and how it has evolved and woven itself into the stock market ecosystem, Shah adds, “Earlier, when the telephone number was the basis of trading, we could establish connectivity via phones. But since 2006-7, we have slowly moved to the internet to establish interconnectivity. The more reliable, faster and cheaper the internet services got, the more it integrated itself into our trading patterns. More people shifted to it as a connecting platform. About 95% connections are now established online."
He says that NSE/BSE members now have dedicated leased lines so that they don’t lose contact with the stock market. “Many brokerage firms are connected via VSAT linkages, so that we, as Gujarat state, don’t get disconnected fully from the rest of India. The loss due to an internet shutdown is not quantifiable. It will have to be measured as the cost of a missed opportunity.”
It is not just the stocks, but also banking transactions that stop or decrease drastically in volume when the internet stops, Shah says. “During the Patidar agitation, mobile internet services in most areas were shut down. However, broadband services were not stopped, so the brokers managed to keep the ball rolling. But brokers will lose in volumes. It is difficult to put a figure to it, but the movement and momentum of trade goes down.”
Echoing a similar sentiment, VK Sharma, head of Public Consulting Group and Capital Market Strategy at HDFC Securities, says that large companies have the facility to call their other branch offices and get the transactions through. So only customers and traders who don’t have a landline fallback option will be affected. However, those who wish to transact on the stock market with the help of mobiles will not be able to do so. “This way, the volume of transactions is not stopped completely, but definitely curtailed,” Sharma says. “The decrease can be roughly estimated to be around 3%, but the state-wise breakdown of transactions and impact is not available from the exchange. Moreover, an internet slowdown or shutdown results in a lot of disputes among traders and brokers - about the price entered for the transaction and the price at which the deal is finalised.”
Sarit Choksi, an investor who trades regularly, lamented the absence of a recovery mechanism for the losses that the people incurred. “When the net shuts off, we have to call the broker, who does not have dedicated phone lines to handle the huge hike in calls, so getting through to him is itself a challenge,” he says. “Then, as we don’t have the information at our fingertips, we cannot adjust the mutual funds choice, ‘stop loss’ and set ‘buy or sell’ limits in tune with the market movement. By the time I see it on tv, and get through to the broker to execute the deal, the price has changed. Who is going to compensate for this loss?”
It’s impractical to tell the Securities and Exchange Board of India or the traders that the transactions could not go through due to an internet shutdown, or ask them to forgive the price difference due to the long waiting time on the telephone. If a brokerage house makes a mistake, Choksi explains, arbitration is available, but there is no platform to claim or address the kind of losses one incurs due to external limitations like an internet ban.
“If internet connectivity is put on ransom due to political ambitions, it is very disruptive,” Choksi says. “In a society deliberately being pushed to go digital, the impact of such a shutdown is felt in financial and social sectors. When such political decisions are taken without considering the other impacts, our bread and butter is affected, and we are left high and dry, with no recourse or means to compensate the loss.”
Binita Parikh is an Ahmedabad-based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Sorry, Business Closed until Internet is Back On
Vadodara, Gujarat: A household name in Vadodara, Jagdish Farshan has been famous for Gujarati snacks like Leelo Chevdo and Bakarwadi since 1938. In 2000, it started exporting its snacks to the millions of Gujaratis settled across the globe, especially in Africa, the USA, Australia, Canada and New Zealand. It is one of the many indigenous businesses that help Gujarat contribute 25% of the total exports from India. But the outfit, synonymous with both tradition and modernity for 79 years, was also one of the many exporters to receive an unexpected jolt in August 2015, during the week-long internet shutdown amid the Patidar protests for reservations across the state.
Kalpesh Kandoi, the chairman of Jagdish Farshan Pvt Ltd says, “Gujaratis in various countries buy our snacks online through our website, or through email. During the internet ban, we suffered quite a lot due to the blockage of orders and failure of deliveries.” Since nearly 50% of their annual revenue comes from exports, the shutdown threw a significant spanner in the works. Although the government claims it banned only mobile data, many businesses admit to their broadband and WiFi also being hit, or seeing debilitating delays.
“Of course, if there is an emergency from the importers’ side, they can call us directly,” says Kandoi. “But then again, a kind of inconvenience is created for them from our side, which is very shameful. It destroys our trustworthiness and credibility.” Many of the company’s production centres in Gujarat, especially in Vadodara, fell behind on meeting orders when bank payments were stuck or orders weren’t accessible. Thankfully for the company, its manufacturing unit in Australia was able to meet at least some of the international orders when most districts of Gujarat couldn’t access the internet.
The ban seems to have had a domino effect outside India too. Preeti Shah, who imports snacks and sweets from Jagdish Farshan through her small home-based business in the USA, couldn't meet orders there during the internet ban in Gujarat. She told 101reporters on the phone from Philadelphia that when she started her business of selling Gujarati snacks 3 years ago, she marketed her service by calling her neighbours, friends and acquaintances personally. “I found that in return they emailed me their snack orders,” says Shah. “During the internet blockages in India, I had to apologise for not delivering the snacks to my clients because my orders were not fulfilled by the Gujarat-based exporters.” She lost 12 to 15 clients, most of them regulars. “The government has to realise the impact of the ban. What if I had lost all my clients just because of the internet ban?” she asks.
Gujarat is a major hub for several industries like dairy, automobile, gems, and pharmaceuticals, but its biggest exports are of cotton yarn, oilseeds, and seafood. With its highly advanced and well-equipped marine fish production techniques, it is able to export fish to UAE, Australia, USA, Japan, China, Canada, Brazil, Thailand, and Germany. Gems and jewellery too, though exported from Mumbai, are processed in Surat, Gujarat, one of the largest diamond hubs in the world. Already severely hit by demonetisation in November 2016, with large-scale closures, layoffs and losses, the diamond industry nearly buckled under the internet ban too.
Most of all, it is the unpredictable, ad hoc, and unannounced nature of the internet shutdowns that frustrates exporters, who liken it to annoying roadblocks traffic policemen install to allow VIP movement. For instance, in February 2016, the state suspended mobile internet services suddenly for four hours to prevent cheating during a revenue service exam.
Chandresh Shah, president of the Exporters and Importers (Exim) Club and the founder of Madhav Agro Foods, says that the entire export industry relies on the internet for over 95% of its business. “It is absurd on the part of the government to ban the internet for any reason, especially when they know that it will hamper exporters to a great extent. They have to provide alternatives, or announce beforehand. People who are importing our products consider us unprofessional and we look foolish in the international markets. So such policies need to be revamped and rationalised properly.” He adds that the rising economic cost of such shutdowns must be factored in. A 2016 study by the Brookings Institution that looked at 81 instances of internet shutdowns across 19 countries between July 2015 and June 2016 found that they had cost the world economy a total of $2.4 billion. India, with a conservative estimate of $968 million in losses due to 22 shutdowns (as many as Iraq), was one of the biggest losers.
As the digital economy grows, the cost of frequent internet shutdowns will only mount. When the central government pushed the ‘Make in India’ initiative, Surat-based Falguni Patel (name changed) was inspired to start an online boutique in late 2014. A textiles student and first-time entrepreneur, she invested nearly Rs 10 lakhs ($15,600) through loans and savings. Unfortunately, a few months into her business, an internet ban was put in place. “It was a sheer coincidence that I received an order from Madhya Pradesh, along with an advance payment, just two days before the week-long internet ban. After that they mailed me four times - first with some requirements, then two follow-up emails and a final one demanding a refund of the advance - but I didn’t receive any of these due to the ban. Meanwhile, I used the advance to purchase the raw materials needed.” After the ban was lifted, Patel realised what had happened. “When I called them personally and explained the situation, they called me unprofessional. When I said I would repay their money in 3-4 instalments, they filed a police complaint against me for theft.” Only a single order had turned bad, but it delivered a strong enough blow. Discouraged by the experience, and pressured by her parents who didn’t want her to invest in the business anymore, Patel shut her website and shelved her e-commerce dreams.
Some companies, like Dinesh Mills, one of Vadodara’s oldest textile companies, prevented losses by invoking their brand value and stepping up customer relations during the ban. Uday Shitole, General Manager – Sales, at Dinesh Mills, says the internet is a boon for the export industry due to its speed, web orders, low cost, and proper documentation. But he admits that in India, it's mandatory to have traditional back-up systems, even if this is much costlier, because political realities make even something as advanced as the internet unpredictable. Sudhir Purohit, Vice President (Exports), Dinesh Mills Ltd, says their decade-long relationships with suppliers and purchasers, initiated in the pre-internet days, stood the company in good stead. “We export the materials through digital orders too, but in our system, the negotiation of contracts has to be handled in person and non-negotiable ones can be done wholly through the internet. Without this, we will be vulnerable to any disruption, like internet ban, or accidents, that will definitely lead to delays and losses.”
Nalanda Tambe is a Vadodara-based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Internet Shutdowns: A Modern-day Siege
Bangalore, Karnataka: For thousands of years, military sieges have been an effective means of starving a population into submission. Attackers would surround the fort or city and simply wait for the food to run out. In today’s connected age, you can mount a successful siege remotely: a single signed order can shut down the internet and practically bring life to a standstill.
So, it’s not surprising that inter-governmental organisations and NGOs around the world are starting to promote the idea that access to the internet is a fundamental right, and watchdogs declare any deliberate interference with this access to be a violation of human rights. “In today’s modern digital world, shutting down mobile and internet networks is a drastic action that infringes on everyone’s rights and is inherently disproportionate. Internet shutdowns cut off everyone’s ability to speak and access information, regardless of whether they have done anything wrong. Considering the broad harm to rights that shutdowns can cause, government officials should certainly take them more seriously as a human rights violation,” says Cynthia Wong, senior internet researcher at Human Rights Watch.
But in India, there is no legal recourse yet against such decisions. In 2015, a Public Interest Litigation filed in the Gujarat High Court against a week-long internet shutdown was dismissed (as was a Special Leave Petition filed in the Supreme Court in 2016 challenging this decision). In fact, tech entrepreneur and Rajya Sabha MP Rajeev Chandrasekhar attributes the dramatic increase in the number of internet blocks in 2017, which has doubled since last year, to this ruling. “This dramatic increase in the number of internet blocks can be attributed to the Supreme Court ruling in February 2016, which upheld the right of districts and states to ban mobile internet services for maintaining law and order.”
Typically, mobile internet bans have been enforced under Section 144 of the Code of Criminal Procedure, which can prohibit the assembly of more than four people and is usually invoked by a district magistrate. “Indeed, mobs come together due to the spread of misinformation over internet services such as Facebook and WhatsApp,” says Chandrasekhar. “However, internet shutdowns also disable authentic news organisations that can dispel such misinformation. I have argued that governments and administrations do have the right to shut down the internet or take down content, consistent with the Constitution’s Article 19 guarantee of the fundamental right to free speech being subject to reasonable restrictions. So, the debate is not whether the government has a right to temporarily shut down the internet, but whether the government or administration uses this right reasonably and with clear guidelines,” he warns.
Enter the Temporary Suspension of Telecom Services (Public Emergency or Public Safety) Rules that were released in August. The primary concern of tech activists is that these ‘Suspension Rules’ set a dangerous precedent because they legalise internet shutdowns where ideally there should be none. But the rules have also received a wary welcome.
“Use of an archaic law like Section 144 of the CrPC for shutting down the internet is not justified. The new rules seem to have been hastily put together without much forethought,” says Prasanth Sugathan, legal director at the Software Freedom Law Center (SFLC). “There is no transparency on how these rules were drafted, as there was no consultation with the stakeholders. These rules are not conducive to ensuring citizens’ right to internet access, which is essential for the success of initiatives like Digital India. As regulations go, these aren’t particularly robust, giving central and state governments the power to shut down telecom services without having to cite further reasoning than ‘public safety’ and ‘national security’. In fact, the rules don’t even specify a maximum duration after which services must be restored.”
Calling the whole deal shoddy, Sugathan says it seems the rules were put out just to get around the illegality of internet shutdowns.
Chandrasekar also feels the process should have been more consultation-driven. “The rules can and must be improved to remove adhocism and arbitrary use. As I say repeatedly, these kinds of government policies run the real risk of straying from the reasonable restrictions acceptable to our Constitution to an infringement of the Right to Expression. Governments, especially political leadership, should be careful that bureaucratic lack of imagination or paranoia or simply laziness doesn’t cause that crossover from right to wrong.”
According to SFLC, which has been tracking internet shutdowns in the country over the past five years, authorities in India have shut down networks 60 times in 2017 alone, spelling a staggering cost to the economy beyond the incalculable harm to human rights. Brookings estimated that the 22 network shutdowns in India in 2015-16 cost the country’s economy $968 million. It’s baffling that while the government is pushing citizens to embrace ‘Digital India’ on one hand, it is concurrently pulling the rug from under these same users with total and partial internet shutdowns. “From the perspective of promoting India’s digital economy, if people learn they cannot rely on their mobile phone service because of arbitrary disruptions, they are less likely to adopt digital technologies. If the Indian government truly wants to be a global leader in the digital age, it should cease all arbitrary and overbroad restrictions on internet access,” says Wong.
Osama Manzar, founder of the Digital Empowerment Foundation (DEF), has an ever-expanding roster of people who were keenly affected by the shutdowns in their regions, irrespective of whether they lasted three days or three months. “One of the biggest impacts residents must live with is that their access to basic services becomes very limited. In Darjeeling, many state government employees were not paid their salaries because the banking system is online and centralised. The livelihoods of sim card sellers and recharge shop owners, internet cafes and mom-and-pop shops that offer printing, scanning and online form-filling services took a huge hit. It is especially detrimental to them since they rely on daily sales for their income,” he says.
While the economic impact of internet shutdowns has been documented, the social and psychological impact is just as crucial to investigate, says Manzar, especially in cases where these shutdowns are frequent and long-term. DEF is in the final stages of releasing a report based on this research. “We've found through our research that when shutdowns are ordered for a few days, residents can reason it out and some even find justifications for it. They may say the security and safety circumstances warranted it. But prolonged shutdowns have an acute negative impact on residents psychologically. Residents of Darjeeling, Kalimpong and J&K feel the impact of internet shutdowns acutely. They feel doubly isolated from the rest of the country and their faith in the government erodes. People we've interviewed have said they feel helpless and panicked. Some interviewees in Kashmir went so far as to question the democratic process and their right to it.”
Ayswarya Murthy is a Bangalore-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Days to Derail Work of Two Generations?
Saharanpur, Uttar Pradesh: It was reportedly Bahlul (Bahlol) Lodi, the founder of the Lodi dynasty, who in the 15th century first settled some Afghan craftsmen and their families on the outskirts of the old town in Saharanpur. Today, this area houses the Lakdi Market, home to world-famous wood art and handicrafts. From large fretwork screens and doors to trays, bowls and trinket boxes, these intricately carved wooden objects are sought after from as far away as Europe, the Middle East and Australia. The woodworking industry is the mainstay of thousands of artists, workers and entrepreneurs here, many of whom are part of small mom-and-pop operations.
Craftsmen at Furqan Handicrafts in Saharanpur
Mohammad Aarif, 28, heads one such business, which has been in the family for two generations. Founded by his father four decades ago, Furqan Handicrafts has survived several challenges, such as the rising prices of fast-depleting raw material and the squeeze of middlemen, but the losses caused by a 10-day internet shutdown jolted him. He lost around Rs 7 lakh ($10,900) during this time. Six months on, he is still dealing with the repercussions, uncertain if he will ever recover the money.
Dalits and Thakurs in Shabbirpur village of Saharanpur district have had their daggers drawn since violence first broke out in the village on May 5. The increasing friction led to a cycle of revenge violence, and subsequently to the indefinite suspension of internet services from May 24 to June 2, ordered by the district magistrate to curb the rumour-mongering and hate messages being circulated on social media and messaging apps. The suspension of services in this west Uttar Pradesh city brought life to a standstill, and Aarif’s business is just one of those that suffered dramatic losses during those 10 days.
Furqan Handicrafts is famous for its handicraft items and furniture, both in the country and abroad. Their products go as far as Malaysia, Finland and China. Aarif uses his mobile to make payments for the raw materials as he travels a lot, and this helps him conduct his business on the go.
“We have employed around 20 workers,” says Aarif. When the shutdown came into effect without warning on May 24, he had only around Rs 20,000-30,000 ($310-470) cash in hand. “Can you imagine running a business of this size, with a weekly turnover of Rs 10 lakhs, with so little cash in hand and having the liability of over 20 families on your head?” Aarif asks. “I ran out of cash on May 26 and then the real problems began. The banks were closed and the internet was shut down. We were left with no options. The situation was so tense outside that we could not even think of going to other districts to transact or to even our own banks when they eventually opened after two days,” the businessman says.
Moreover, Furqan Handicrafts had been accepting a good chunk of its orders online, either through its website or on WhatsApp, so the shutdown also hit the demand side of the business. The small consolations Aarif offered himself to steel against the mounting panic did not help for long as the shutdown stretched on indefinitely. “I told my workers that the media said the situation would return to normal soon, and that helped us keep calm initially. We were hopeful that we would be able to conduct transactions in the next two days, but the situation worsened when the shutdown continued for over a week,” Aarif says.
“Our suppliers refused to sell us the raw materials without being paid first. Sometimes we may get some materials on loan, but most times only money does the talking. The chemicals that we get from Delhi have to be paid for fully in advance. We had more difficulties when we weren’t able to move our finished product. They were just lying there, collecting dust, and we incurred further losses in re-polishing them. And we were not able to pay our workers for the hours they had put in,” Aarif recalls.
It was not just his business that suffered; his employees felt the sting of the shutdown as well. Najeer Ahmad, a woodworker at Furqan Handicrafts, says that everything was normal in the beginning but the situation started worsening after two days. “After the second day, work started slowing down and eventually stopped completely. Our boss told us that we couldn’t get any raw materials because we weren’t able to pay the suppliers. Whatever little material we had in the workshop, we used up, but when there was none left, there was no work… since there was no work, there was no money. The boss usually settles our wages at the end of every week and gives us walking-around money every day. Without either of these, it became quite difficult to manage.”
Another of his employees, Rashid, was able to weather the shutdown because he had some cash lying around at home. “Aise to jumme ke jumme hisaab ho jaata hai (Usually, we get paid every Friday).” So, even though he wasn’t paid that Friday like he usually is, he made do. But he still lost wages because of the lack of work during that week.
“We have lost money in lakhs already. If something like this were to happen again it would ruin us,” says Aarif. But he still manages to see the silver lining in this suffering, and is glad that he did not lose his clients. “Allah ka shukar tha ki hamara koi bhi client toota nahi. Nuksaan ki bharpaayi to ab tak nahi ho paayi hai, lekin Allah chahega to jald hi ho jayegi (Thank god that we didn’t lose any of our clients. We haven’t been able to recover the losses yet, but god willing, we will be able to make up).”
Mahesh Kumar Shiva is a Lucknow-based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters. With inputs from Saurabh Sharma, a Lucknow-based reporter.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Darjeeling’s E-commerce Crumbles after 100 Days sans Internet
Darjeeling, West Bengal: Chitra Dutta, 80, owner of a courier service in Darjeeling called Turant, says the 108 days of bandh (strike), including the 100-day ban on the internet, almost paralyzed her business. The shutdown on the ground and of the internet left courier packages undistributed for three months. Despite suffering a severe loss of revenue, Dutta says she had to pay her employees’ salaries during the bandh, and “it won’t be before March next year” that she will be able to make up for the losses.
When Darjeeling suffered 108 days of bandh called by the Gorkha Janmukti Morcha (GJM) to press its demand for a separate state of Gorkhaland, the worst hit were businesses in the hills. What made it even more difficult for traders to cope with the loss was the complete absence of internet services, as several of them depended on the medium to run their operations.
GJM’s movement for Gorkhaland picked up momentum when Mamata Banerjee’s Trinamool Congress (TMC) government tried to impose Bengali as a compulsory subject for all schools in West Bengal in early 2017. GJM party chief Bimal Gurung called for an indefinite bandh of all activities in the hills from June 15. It led to several incidents of arson, violence and deaths in retaliatory police action. From June 18, internet services were banned in Darjeeling and Kalimpong. The ban was lifted on September 25.
Dutta’s Turant, a third-party firm, has a tie-up with the major courier service providers Bluedart and Ecospeed to distribute their consignments in and around Darjeeling. Another major player in the delivery business, Amazon, had finalized Turant as its service provider in the hills just before the internet ban, but the deal was left in a dicey state after the situation worsened and Darjeeling was cut off from the rest of the state, she says. Her business depends largely on software to track goods and communicate with business providers and customers, and the prolonged breakdown of the internet brought it to a halt. Dutta says they used to deliver around 40 parcels per day before the shutdown, but no business materialized during the bandh.
Bitter days for tea trade
Girish Sarda, a third generation owner of Nathmulls Tea and Sunset Lounge, an online-cum-retail business outlet that exports Darjeeling tea, says he is disappointed with the state of affairs in the hills.
“Ninety per cent of my business is internet-based. In international trading if you stop supplies to your client for three months, they will source tea from elsewhere to run their business. Clients from Japan started asking me how I was surviving,” says Sarda.
Explaining the losses he faced due to the internet shutdown, he says, “Only 5% of my business is operational at present. I have six months of tea produce and I don’t know how I am going to sell that. It will take months for me to get back on my feet. I’m gone. Things are still hazy here and god only knows when the situation will return to normal.”
The harvest season’s second plucking (of tea leaves), called the second flush, is considered to provide high quality premium tea, and draws the best price. The shutdown in Darjeeling overlapped with the second and the third flush, which occur between the months of June and August, and October and November, respectively. Sarda says, “The bandh ensured there was no second flush and a poor third flush. The entire tea industry has seen the worst phase ever. It may take three years to get back to normalcy.”
Darjeeling produces around 8.9 million kg of tea per annum. Of this, around 20 lakh kg is premium tea sold at a high price, according to S K Saria, owner of the Rohini and Gopaldhara tea estates. While 80% of the produce is sold through auctions in Siliguri and Kolkata, the rest is sold directly by traders in Kolkata and Darjeeling, including the 45-60 kg of tea sold online per day.
The hotel business also took a hit in the Darjeeling hills. Vijay Khanna, secretary of the Gorkha Hotel Owners Association, says, “Most hotel bookings are done online, and we need the internet to check these. The sudden shutdown has left the hotel industry in bad shape. Clients from abroad could not be informed of the sudden closure of all establishments, and a few even failed to understand what a bandh is.”
“It was and still is a very difficult time for the industry. Neither the state nor the central government is interested in our plight. There are just a handful of tourists here. Darjeeling hills are out of business,” Khanna says.
Restraining GJM's 'message'
Bimal Gurung, the GJM chief who floated the party in 2007 to capitalize on the growing public disenchantment with Subhash Ghisingh’s way of leading Gorkha National Liberation Front (GNLF), realised the power of internet and social media early on, and utilized the medium to push the propaganda for Gorkhaland statehood through his party.
Several audio and video messages, in which Gurung accuses the present TMC government and the chief minister of dividing the hill people by creating separate bodies for each tribe and taking them for a ride, had been circulating on WhatsApp and other platforms before his call for an indefinite strike in Darjeeling. The West Bengal government responded to the GJM’s call for a strike with a heavy hand, initiating police action against protesters and raiding Gurung’s home and offices. However, the Gorkha community residing in the Dooars and Terai regions kept receiving his messages throughout the shutdown period, as the internet remained on there.
The movement has only kept the Gorkhas away from critical resources like the internet that sustain their market; it has not yet led to any productive dialogue towards statehood. The combined effect of the internet ban and the indefinite strike has hurt the economy of the hills so badly that it will take months to recover, and even then, people remain unsure about the recovery.
Avijit Sarkar is a Siliguri-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Education and Employment Opportunities Tossed out of the Window
Darjeeling, West Bengal: When internet services in Darjeeling were shut down on June 18, it was unclear how long the restrictions would last or what they would mean for schools, colleges and the academic community at large.
However, the longer the town spent cut off from the web, the clearer the picture that emerged: an education system that had increasingly taken most of its activities online, caught completely off-guard. Missed school payments and a lack of clarity on admissions and important dates became commonplace. Students were forced to find new ways to share notes and study without search engines.
The shutdown was first announced for a week but it eventually lasted 100 days, with several extensions in between. This meant that the restrictions came at a particularly bad time with many important academic dates falling within this period.
The online registrations for schools following the Indian Certificate of Secondary Education (ICSE) syllabus were set to start mid-July but did not take place as planned. The ICSE council heads had to later give an assurance to extend the dates for registration till late August.
The ban was lifted only in late September and this extension eventually proved inadequate. Representatives of many schools said they had to travel to Siliguri to complete the online registration of students who would be appearing for their board exams next year.
“Most of the schools had to go to Siliguri to access fast internet for the registrations. Schools like St. Augustine and St. Joseph’s Convent could also not post results of their term examinations online,” said a source.
Saptashri Gyanpeeth, a school in Kalimpong, had designed a new website to post their results and other activities, but they had to wait until the shutdown was lifted to get it up and running. “We could not update our website, we could not post about the school openings and activities for the alumni,” said a teacher at the school.
Schools in the area also use the web to make available notes and study materials, and authorities said they were hard pressed to work around the restrictions that had been enforced. Other routine activities like independent research by the students or a basic Google search for unclear concepts quickly became a thing of the past.
“Most students study the material provided in the textbooks and guide books. But there are a few who are creative and look for new information and ideas, and they found it very difficult during the internet shutdown,” said Milan Chettri, a teacher in St. Mary School.
Teachers from several schools often had to take classes without adequate preparation. “Sometimes teachers also need the internet to cover all the angles of the topics we teach in class, our homework so to speak,” said Chettri.
Many parents said that paying school fees on time became cumbersome and inconvenient. Many schools were also unable to give parents extra time to make the payments, as salaries for their staff were also due. “We used to pay fees online, but not having internet for three months meant we were put in a position where we had to pay a late fee,” said Dawa Tamang, whose daughter is set to take her board exam next year.
The clampdown on services also threw a spanner in the works of online admissions in several colleges. Late June to August-end is when these admissions take place and the new batch of students hit a major roadblock in securing entry to good colleges.
Many students also complained of not getting admissions in cities of their choice due to delayed applications. Some who didn't want to wait another year had no choice but to take admissions in local colleges.
Some colleges tried to ease the hassle by extending admission dates, but this had a limited effect as it was not clear when services would be restored. The heads of all 46 colleges affiliated to North Bengal University (NBU) based in the hills had negotiated with the varsity officials, seeking to extend the dates for the admission process. “We had received letters from the colleges, mostly from the Dooars, asking if the admission procedures could be extended,” confirmed Dr Nupur Das, Secretary of the Undergraduate Council, NBU.
Principal of Parimal Mitra Smriti College in Malbazar, Uma Maji Mukherjee, said, “The suspension of internet services had cut down the opportunities for the students to apply. They had to visit the campus and take admissions manually.”
Colleges also had little way of letting the students know if they had been admitted. Principal of St. Joseph’s College, Darjeeling, Fr Dr Donatus Kujur SJ, said, “Our admission procedures run from June 5-15. We could not publish the merit list as we had no network.”
However, in late July, a few pockets did get a data signal from Sikkim: Mall Road and the adjoining areas of Bhanu Bhakta in Darjeeling, and Carmichael Road, Delo, Durpin and Chiso-pani in Kalimpong. As word spread, internet connections at these places, however slow or unreliable, proved a great relief for people.
“My sister had just graduated from college and she had come home for a few days. We often climbed up to the hotspots where we could receive internet signals, but the speed was so slow that pages couldn’t be loaded. She had a lot of trouble applying for jobs. Eventually, she was somehow able to apply, only to later find that she could not check any call letters or responses to those applications,” said Manisha Tamang, who was at the time on the lookout for jobs herself.
Months after the restrictions were lifted in late September, the registrations have now been completed and most schools in the Hills have adjusted their winter breaks to compensate for the 100-day paralysis. The final exams have also been rescheduled for January.
Roshan Gupta is a Siliguri-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Every Town had its Jio Dara
Bangalore, Karnataka: Alvin Lama writes rock music in his downtime, and these days his songs are rather politically charged. The 100-day internet shutdown in Darjeeling during the Gorkhaland agitation in 2017 inspired his latest single, titled Jio Dara. In the song, Lama tells his listeners, “Come, let’s go to Jio Dara”, where they can break free from the prison of the internet shutdown to send and receive messages from the outside world. “I am using that window of access to tell people about our struggle. It has a bit of an anti-administration message,” he says.
View from Carmichael Ground, a Jio Dara spot (Picture courtesy: Nisha Chettri, Caffeine and Copies)
Jio Dara (‘dara’ meaning ‘hillock’), also called ‘Reliance gully’, was not always a specific place but a small window of opportunity during which a weak 2G signal could be accessed in the hills. Towns like Darjeeling and Kalimpong lie very close to the border of West Bengal, separated from their northern neighbour Sikkim by the river Rangeet, and in the hills along the river bank, phones often pick up faint signals from mobile phone towers in Sikkim. For a population completely shut off from the outside world, even this thin, fragile lifeline was precious. “I was not here during the agitation but would somehow get information about what was happening in the hills from my family and friends through the Jio Dara,” Alvin says.
Alvin, also founder director and CEO of the Good Shepard Institute of Hospitality Management, is not the only musician to immortalise Jio Dara in song. Young student Saif Ali Khan and his friends also wrote and composed their own ode to this happy accident. “It was really born out of boredom,” he says. “My brother, my friends and I were sitting around the campus and chatting. Classes were cancelled due to the strike and our education was on hold. And we overheard a couple talking about where they were going to go for their date. Of course, we should go to Jio Dara, the girl said, and that led to an argument.”
This sparked off their Jio Dara song which was written, composed and recorded by Khan and his friends under their Firfiray Productions. A satirical take on the internet shutdown and how it has affected the lives of the students in Darjeeling, the song plays out like a dialogue between two lovers and serves as a light-hearted look at a situation that was anything but.
For three months between June and September, the administration had shut down internet access in Darjeeling and in its surrounding hills. This prevented the outside world from hearing the voices of the Gorkhaland protesters but information still trickled out, as it is wont to do, through various sources, one of these being the Jio Dara.
How did this work? Reliance Jio had not long before made a big splash in India’s telecom market with cheap unlimited data packs and lifetime validity deals, and many had switched to Jio to take advantage of these. This was what eventually gave Jio users the edge, helping them tap into the signal from the towers across the border. It isn’t clear whether signals from other networks were also available in these spots (accounts vary from there being no other networks at all to there being some, but even weaker than Jio). What is certain is that, without the free data that Jio subscribers enjoyed, access through other networks ceased to be feasible after a point, because recharging a number at the local mobile shop was no longer an option.
These hotspots used to vary, according to Lama. “The signal would be strong today, but next day one might have to move a few hundred metres up or down till they connected with the network. So, you would go searching in the hills till you get a signal and then the word would spread,” he says. People in Darjeeling were lucky in that their Jio Dara was inside town near the mall in Chowrasta, but it was not as convenient in Kalimpong. One had to travel a couple of kilometres from the city centre to Carmichael grounds, sometimes go even further up the hill towards areas that were facing Sikkim. “People would get to know through word-of-mouth and the number of people there would snowball,” Lama tells us. People, young and old, would come to log in, even though the connection was patchy and slow, to talk about the events of the day, upload pictures, connect with family and friends and basically tell the world what really was happening in Darjeeling.
It became an unofficial symbol of resistance. Each town had its very own Jio Dara, and it transcended its physical location to become an idea. “Our habits changed after June 18, when the government undemocratically blocked the internet service in the hills,” writes Nisha Chettri, a journalist with the Statesman, on her blog ‘Caffeine and Copies’. Carmichael Ground in Kalimpong invariably became a meeting spot for all sorts of occasions – birthdays, dates, get-togethers. She says that some Jio users even shared their mobile hotspot with others so that everyone could use the internet.
Local journalists would file their stories and upload their pictures side by side with ordinary citizens updating their social media statuses. It helped journalists like the Telegraph’s Passan Yolmo to maintain a line of communication with his publishers. Most evenings he would connect to the Jio Dara to send across photographs from the day, as many as the feeble 2G connection would allow.
“I don’t know who first found this spot behind Chowrasta,” says Khan. Perched in the centre of the city and at a higher elevation than the rest, Chowrasta is a popular tourist destination in Darjeeling; so it couldn’t have been long before people stumbled onto this secret. “I accidentally discovered it one day when I walked past it and suddenly my phone started pinging and I received a bunch of texts on WhatsApp. I checked my phone and realised I was connected to Sikkim’s Jio network.”
Ayswarya Murthy is a Bangalore-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Taxes in the Time of Internet Shutdown
Darjeeling, West Bengal: In mid-June, SC Sharma, a tax lawyer in Darjeeling, was in a fix. Thanks to street protests, he had not left his house for a week. There was an internet shutdown across the district. As a third assault, the finance minister was announcing a new tax regime that confused him. A combination of these factors made Sharma anxious: many of his clients were going to miss the tax deadline and be saddled with a huge fine.
Spurred by the West Bengal government’s new language policy that sidelined minority interests, the Gorkha Janmukti Morcha, a political party that campaigns for a separate state for Nepali-speaking Gorkhas, had called for a bandh from June 12 across the northern hills. Schools and offices were closed. Public transport stopped. Banks would be closed for 104 days. GJM activists and the police clashed everywhere.
The state administration shut the internet down in the Darjeeling hills on June 18. A fortnight later, with the lockdown still in place, the central government rolled out the implementation of the Goods and Services Tax (GST), a pan-India single tax to replace several state-level indirect taxes.
“My clients were jittery because of the penalty issues,” Sharma says. “There was no way I could study the GST, as there was no internet. We were crippled from all sides.” He had also heard reports of the GST filing website crashing repeatedly even in regions with regular network services. “Everything was already a mess, and then GST is launched with all the fanfare.”
Since the GST was a new concept, it had to be studied before returns were filed. With no internet, most businessmen were in the dark. Even advisors like tax lawyers and chartered accountants were in a soup as they were unable to use the internet or go down to the plains in Siliguri to address the issue.
Girish Sarda, owner of Nathmulls Tea, an online-cum-retail business selling high-value tea, felt lost when the GST was introduced. “We tried to solve the GST issues but we could not go online and find a solution. So we just sat around, as all shops were shut too, and waited for the bandh to be called off. It has been a terrible time for all of us in business.”
The June-July season was one for second flush tea, the darker, stronger variety that constitutes 21% of Darjeeling tea exports, and 41% of its revenue. Losses of Rs 250 crores ($39 million) in the season from the triple attack trickled down to the 55,000 permanent and 15,000 temporary workers in the 87 tea gardens in the region.
Ranjeev Pradhan, who runs a construction company in Darjeeling, says those weeks were nightmarish, “The bandh, the internet shutdown, the voice call drops, the sudden introduction of the GST – all this has really taken a toll on me and several others who run small businesses in Darjeeling. Things are still not right. All we need is some peace of mind which is missing right now.”
Only small-scale businessmen like Jeevan Sharma, who had dual offices in Darjeeling and Siliguri, managed to file GST. “If I did not have my chartered accountant based in Siliguri, it would have been impossible to file returns. Siliguri was open and the net was available, so the CA didn’t have a problem. Although the process was very slow because of technical snags in the servers.”
Businessman Gyanendra, who runs Krishna Service Apartments, was not so lucky. “I was held up in Darjeeling because of the bandh. We had practically zero business for the 108 days of forceful bandh, and yet I had to think about filing GST first. This magnitude of shutdown was unthinkable for us.”
Anjan Kumar Kahali, a prominent lawyer who deals with income tax and GST, had a harrowing time during the initial launch. “The system was not stable at all and the GST site kept on hanging after a short duration of use. Entries were taking forever to upload and results were not shown on time and taking really long to verify. The delay was hampering all my other work. Even today, the servers are still far from fast. I have heard that it is not before the end of this financial year that matters will be sorted out.”
In September, the GST Council, headed by finance minister Arun Jaitley, provided some relief for GST defaulters by extending the July deadline first to October, and then again to November. “I am relieved that I will be getting some extra time to file the returns without paying heavy fines,” says Kahali.
The tea and tourism industries, on which Darjeeling depends most, were severely hit by the bandh. In a politically sensitive time, the double whammy of the internet ban and GST seems to have deepened anger against the state. “The people of the hills feel betrayed, both by the centre and the state,” says Sharma. “They feel they have been taken for a ride once again like they have been several times before.”
Avijit Sarkar is a Siliguri-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
ISPs in Kashmir Grappling with Mounting Losses Amid Recurrent Shutdowns
Srinagar, J&K: CNS Infotel Services, once a buzzing cybercafé in Srinagar’s Lal Chowk, is now a prominent internet service provider (ISP) for the town. It is known for providing fast, uninterrupted internet connections, but that reputation has been tough to maintain as the Kashmir Valley has witnessed 56 internet shutdowns since 2012, 38 of them over the last two years alone. The shutdowns have pushed the economy downhill and discouraged new enterprises from emerging.
Once the internet is blocked, executives at ISPs either skip calls to avoid public ire, or express their helplessness over the sudden disruption of internet services ordered by the authorities in the wake of a security situation.
Imran, an executive at CNS, describes how a sudden ‘police directive’ often forces them to flip the internet ‘kill switch’. “In May this year,” says Imran, “we received a circular stating that authorities want us to block 22 social media and messaging sites, including Facebook, WhatsApp, Twitter, Skype, Telegram and Viber, with immediate effect.” That day, CNS executives were only repeating a prohibition procedure that has become the norm in the Valley. In post-2008 Kashmir, as street protests became the popular mode of dissent, the state’s observation has been that resistance is being “fuelled by social media.”
“There’s a perpetual struggle for us to grapple between police orders and annoyed customers,” says Owais Mir, an executive of G Technologies, another ISP in Srinagar. “The frequent internet gags hamper our operations… annoyed customers often threaten to either switch over to another service provider or to deactivate their connections.”
Mobile data and broadband services in Kashmir were banned 10 times between April 8 and July 13 in 2017. “By then,” Imran says, “we were running into huge losses.”
While Imran does not have an exact figure for the losses his company faced, mobile ISPs were decrying daily losses to the tune of Rs 2 crore between April and July 2017.
According to Cellular Operators Association of India (COAI), mobile service providers in Kashmir suffered losses worth Rs 180 crore during that period. When such orders are passed, usually, except the state-run BSNL, other service providers — Airtel, Aircel, Vodafone and Reliance (Jio) — promptly shut down their operations. The postpaid BSNL numbers, which are mainly with police, army and government officials, continue running.
Alternative access
The repeated loss of communication in the Valley has prompted Kashmiri netizens to explore workarounds. Many of them have learnt to use Virtual Private Networks (VPNs), mostly over broadband and state-owned BSNL connections, in order to continue using messaging services and social media.
A VPN routes a user’s traffic through an encrypted tunnel to a remote server, which can make the user appear to be in a different location and allows data to be shared securely over public networks. Because the connection is encrypted, the local ISP can no longer see or selectively block the sites being visited, enabling access to content that is otherwise restricted.
“VPNs help us to overcome the irrational social media blockade,” says Shagufta Mir, a college student from Srinagar. “More than a political statement, using VPN sends out a positive message that Kashmiris have evolved to tackle repeated restrictions imposed on them.” Most users have learnt about VPNs from their tech-savvy peers.
“When the government banned social media earlier this year,” says Shafat Hamid, a trader, “my friend taught me how to access a VPN. I felt empowered to be able to overcome the frequent gag on online activities.”
‘India worse than Iraq’
Jammu & Kashmir has higher internet penetration than the all-India average with 28.62 internet subscribers per 100 people compared to the national figure of 25.37.
Although broadband was functioning, the suspension of mobile internet for over five months from July 9 to November 19, 2016 (data services on pre-paid mobiles remained suspended until January 27, 2017) saw many operators winding up. During that period, internetshutdowns.in, a website run by Delhi-based non-profit Software Freedom Law Centre (SFLC) to track incidents of internet shutdowns across India, recorded that Kashmir had no internet access for “over 2,920 hours”. This made India worse than Iraq and Pakistan in terms of the number of days without internet, according to a report by the Brookings Institution.
According to a report, out of the 14,000 local youth employed in the IT sector in the Valley, an estimated 7,000 people lost their jobs due to the frequent internet shutdowns imposed last year. Online businesses incurred losses worth Rs 40-50 lakh on a daily basis during that period.
During the internet shutdown last year, COAI had written to the department of telecommunications that such communication bans have an adverse impact on the subscribers and result in losses to telecom operators. “Kashmir lost around 4.5 lakh active subscribers during the 2016 unrest,” says Sameer Parray, an area manager for Vodafone.
But service providers say they have to comply with the orders, lest their licenses be cancelled.
Safeena Wani is a Srinagar-based freelance writer and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Silence on the Dera Front
Sirsa, Haryana: Raj Rani’s two expensive smartphones are her whole world. But the 32-year-old entrepreneur from Haryana’s Hisar district found them entirely useless when she needed them most – on August 25, during the violent protests by members of the spiritual group Dera Sacha Sauda (DSS) after their leader Gurmeet Ram Rahim was convicted of rape.
“My family follows DSS, and had gone to attend the monthly congregation on August 15 (which also happened to be Ram Rahim’s birthday), when we were told that ‘Pitaji’ had asked us to stay back in the premises in case of an adverse verdict by the court in the rape cases against him,” she says. This is understood to have been done as a show of support that could put pressure on the judiciary and the state for a favourable verdict.
Along with lakhs of other followers, Rani was present in Dera’s Sirsa headquarters with her two children. She stayed in constant touch with her husband Sunny Kumar, a businessman based in New Delhi. "Every day, I showed him the Dera premises and religious activities through WhatsApp video calls,” she says.
She recalls “the nightmarish moment” on the night of August 24 when the Haryana police and the Indian army surrounded the Dera. They imposed a curfew in the town, and restricted people from entering or leaving the premises, which are spread over 700 acres.
Rani says that the government blocked the internet on August 24 – a day before the self-styled godman appeared in the Panchkula court. Services of different companies, including mobile phone and landline services, were also barred at the Dera Sacha Sauda headquarters. As a result, Rani lost all contact with her husband. “I was confident as long as I was connected with my family over WhatsApp call and video chat, but as soon as this went away, I started losing faith, and felt afraid,” she says.
After the curfew was imposed and the internet was shut down, Rani says the devotees started to panic. They demanded that the DSS management permit them to go to their respective homes after Gurmeet’s arrest on August 25. After his conviction for rape, Rani says the politically influential and cash-rich DSS fell like a house of cards. “There was chaos all around,” she says.
Fearing that Dera followers would vandalise public property to protest their leader’s conviction, the police had restricted public transport. Private vehicles were being allowed to move only after multiple security checks. On the morning of August 27, hundreds of devotees started to leave the Dera premises on foot. Rani walked about 50 km along National Highway 10 (Hisar-Sirsa) up to Fatehabad district.
It was a coordination committee of police, legislators, and bureaucrats from Haryana, Punjab and Chandigarh, under the chairmanship of Punjab governor and union territory administrator VP Singh Badnore, that took the decision to ban the internet. After the order on August 24, all SMS, dongle, and mobile data services were suspended. The government allowed only phone calls during the internet shutdown in the affected districts of these states.
Disputing the police’s claims that Dera followers started the violence first, provoking the cops to fire, 32-year-old shopkeeper Gaurav Soni, an ardent DSS follower for seven years, insists that things went out of control because the internet connection was snapped. He says that senior members in the Dera’s internal WhatsApp groups couldn’t send messages to calm angry followers. “Whatever happened was a result of a communication gap,” says Soni, who joined the protests. “No one asked the followers to get violent, and followers never attempt such things without proper instructions. But since there was a leadership gap, thanks to the break in communication, all this occurred.”
Vikas Kumar, an IT expert of the Dera Sacha Sauda agrees, "As soon as we came to know about the conviction, we tried to send a message from Dera chairperson Vipasana Insan, requesting followers to maintain peace, and keep faith in the judicial process. But we couldn’t upload this message because mobile internet and broadband services were banned." They also tried to call key Dera leaders. “But it was too late by then, and followers clashed with law enforcement agencies," Vikas adds.
The Dera’s protests, and the related internet and transport shutdown seemed to have impacted the group’s own followers too.
Those outside Haryana received misleading or panic-inducing forwards and videos, which worried them and worsened their anger against the state administration. Rajat Singh, a 65-year-old Dera follower from Mansa district, Punjab, says his son Rishipal Singh had gone with several followers to the court in Panchkula, Haryana, where Gurmeet’s case was being heard. Rajat Singh says that since the internet was not banned in Punjab’s Mansa, he continuously received photographs of bullet-riddled bodies, charred cars, massive fires, and vandalism on WhatsApp. It’s unclear how Dera members from Haryana were able to send these pictures, overriding the blocked internet. “I was so disturbed,” he says. “As soon as we came to know that the Haryana police had opened fire on the followers, I started calling my son.” But phone networks were constantly busy or spotty. “My son’s phone was not reachable. I asked relatives to send him text messages, or messages on WhatsApp, but the internet was not working.” It was much later, when Rishipal made a rushed call, that they were assured of his well-being.
Unaware of the violence at the Dera, 37-year-old Rakesh Kumar, a DSS follower from Ghaziabad, Uttar Pradesh, was visiting Sirsa on August 24. “I booked a hotel in Sirsa district through an app, and chose to pay at the hotel. When I reached Sirsa, the internet was off.” Kumar went to the Dera taking lifts from a few vehicles plying on the sly, but soon returned to his hotel after followers went on a rampage. He wanted to leave Sirsa, but “got stuck” because the hotel didn’t allow him to leave without paying. ATMs were closed, vandalised, or not working, and it was generally unsafe to go out. “I had some balance on PayTM, but that was also not working as there was no internet connection,” he says.
Without Facebook or Twitter accounts, the Sirsa police had no way to counter rumours, discourage violence, or call for peace, says additional deputy commissioner (ADC) Sirsa, Munish Nagpal. A ban, he says, was the only way for them to nip crowd mobilisation in the bud, and curb rumours from spreading to Dera followers in other states of north India.
“The ban controlled the situation to a certain extent, but it handicapped us, and slowed the process of our communication with seniors in Chandigarh,” admitted Ashwin Shenvi, the superintendent of police (SP).
The Haryana police, chief minister and health minister are usually active on social media, and the government too prides itself on being digitally savvy, but during the ban, every account was inactive. This despite the state offices having broadband.
It is worth pointing out that DSS is credited with the Bharatiya Janata Party’s first ever win in Haryana in the 2014 state elections. Gurmeet Ram Rahim and CM Manohar Lal Khattar have even shared the stage multiple times for photo-ops. Many believe this to be the reason the state government was not very vocal, online or offline, in condemning the violence by Gurmeet’s followers: it could have ticked off DSS’s over 50 million followers, a large vote bank. These political dynamics, hence, were also responsible for the internet becoming a victim of the violence unleashed.
Sat Singh is a Rohtak-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
Will Darjeeling Regain the Trust of Tourists?
Darjeeling, West Bengal: The agitation for a separate state of Gorkhaland left the tourism industry in Darjeeling as crippled as most other businesses operating from the town. With the scenic beauty of the hills and the spectacular views it affords, Darjeeling has always been a major tourist attraction. A substantial part of the town’s employment comes from the tourism industry, which took a severe blow with the ban on internet services that eventually lasted a hundred days.
“The bookings for Darjeeling generally commence four months prior to the annual Hindu festival Durga Puja (usually in September or October), but this time most of the enquiries were for Sikkim. The Hills usually see huge footfall during Puja, but the unrest hit tourism badly and we incurred huge losses,” says Samrat Sanyal, a tour operator.
The tourist season generally starts around April and continues till late October. That the internet shutdown came right in the middle of this period — it was first announced on June 18 and lasted till late September — did not help matters. Sanyal says that in 2016 around 85% of the tourist footfall took place around the time of Durga Puja, but in 2017 it had fallen to around 5-10%. Though things have relatively calmed down, Sanyal believes the flow of international tourists will remain low for a while. Other tour operators this reporter spoke to also echoed Sanyal’s sentiments and said that the aftermath has left tourists with little confidence in the Hills.
Sources in the tourism department say that apart from the internet shutdown, the general response to the strikes and the violence attributed to the agitation played a major role in “marginalising tourist flow”. The tourists who came to the Hills around the time the agitation intensified could not even get in touch with their families, as mobile reception was poor for days on top of the absence of web connectivity. Many who had already arrived in Darjeeling had to cut short their vacation.
One of them was Kartik Lodha. A tourist from Rajasthan, Lodha was caught unawares by the strike that came just as he prepared to go paragliding in Delo. He had no choice but to return to his hotel midway. With no internet to assist him in looking for a way out, Lodha left Kalimpong the next morning in a state bus with police escort. "It’s the locals who suffer the most during such situations. They are the ones who will have to deal with these problems and difficulties in the long run. Barring a missed vacation, we will be fine," said Lodha.
Blaming the state for imposing the shutdown and creating “unwanted problems” in the Hills, Tapash Mitra, a tourist from Kolkata, said that "the West Bengal government is hindering its own tourism industry”. He had planned a three-day trip with his family, but had to return on the day of his arrival. "I just want the people to have peace in the Hills."
Homestays were also badly hit and saw a spate of booking cancellations in the wake of the agitation and the subsequent network shutdown. Nimlamhu, the owner of Green-Hills homestay at Sangsay, said that more than the owners of hotels or homestays, tourists suffered as they were left stranded, unsure of what they would have to do. “Nothing works when the internet is banned. Even refunds cannot be processed.”
When asked about the arrangements that were eventually made to refund the tourists’ money, he said, "The amount was refunded because we were left with no option, and for those guests who were our regular customers, we adjusted the balance with their future bookings."
He said, however, that it was difficult to contact those who booked stays in advance but were hit with the news of the strike before they arrived there. "There was no way we could contact the guests as the internet was banned. About 50-60% of our bookings are done online and we couldn’t even refund their money through netbanking. We had to personally call them up and apologise for the unforeseen circumstance, and request them to bear with us, not knowing that the strike would last as long as it did," said Nimlamhu.
Sweta Neriah, who is in charge of Palighar, a homestay in Ecchay, was preparing their promotions when the town was hit with the blanket ban on the internet. "For international guests we have a system where payment is done only during checkout. We did incur heavy losses this season and I’m sure we will feel the impact of this slump for some years. Incidentally, this happened just when the international tourist flow started to pick up in this part of the world."
Complaining that the internet ban cost them a year’s business, Kabir Pradhan, the owner of the homestay, said, "Internet is the only way to really promote a business these days. We need to keep updating our official pages on every social networking site to market it. Only then can we attract clients and agents."
He now looks forward to the spring season.
Meanwhile, many tour guides say they suffered huge losses with the internet ban and dip in the number of tourists. Manisha Sharma, who used to work as a tour guide, says she regrets being in the hills as the ban robbed her of three months’ income. “Had I not been here, I could have travelled to some other places with tourists, but the movement of vehicles was also restricted during the agitation, leaving me broke and with few options,” says Sharma.
Roshan Gupta is a Siliguri-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
E-administration Efforts are Lame Ducks without Internet
Fatehabad, Haryana: It took Mahender Kumar a week to brush up his DJ-ing skills and understand what songs to play for crowds at different events. It wasn’t done out of some special love for music.
When he had to stop operations of his Common Service Center in Fatehabad district’s Badopal village for the third time in 18 months because of an internet shutdown “caused” by violence in his state, Mahender had to revisit his teenage hobby. He was more cautious about running a centre that depended on the internet. After all, the 31-year-old had to do something to feed a family of five. “Kuch to kaam karna tha. Parivar ko bhukhe to rakh nahi sakte (I had to do some work; I couldn’t let my family go hungry).”
Launched in 2015 as part of the central government’s ambitious Digital India programme, Common Service Centers or Atal Sewa Kendras (ASKs) are the “access points for delivery of various e-governance and business services to citizens in rural and remote areas of the country”. Sikander Kumar, in-charge of the Fatehabad District Informatics Center, informs that there are 14 such centers in urban areas and a whopping 223 in rural areas in his district alone.
These Kendras deal with banking, insurance, pension, health, and even railway ticketing, Aadhaar services, and electricity bill payment. The Haryana government claims to have integrated “around 170+ state government services of varied departments” with this scheme. More are in the pipeline.
Mahender, who took over operations of the Kendra in December 2015, earns commissions ranging from Rs 10 to Rs 100 from his customers: Rs 10 for paying an electricity bill, another Rs 10 for correcting each mistake in an Aadhaar card. He even fills up job applications and pension forms using the internet. His daily earnings range from Rs 1,000 to Rs 1,200, and he provides food and pays Rs 1,500 each to the two persons who assist him occasionally.
“Things were running in perfect order until February 2016. I had to incur losses after losses due to multiple internet bans since then," says Mahender. The Jat reservation stir of February 2016 had led to an internet ban when protests turned violent in various parts of Haryana. Internet services were suspended as a preventive measure a year later, in March 2017, when the protests were brewing again. When Dera Sacha Sauda chief Gurmeet Ram Rahim was convicted of rape in August 2017, Fatehabad faced an internet shutdown for a week.
Mahender lost his bread and butter on these occasions, and being a part-time DJ was his way of minimising the risk. He continues to run the center though.
Rajesh Kumar too makes a living by running an ASK in Dhangar in Fatehabad. A graduate in arts from the National College in Sirsa, he started the Kendra in October 2015. Though he has reservations about the crawling pace of internet in his village, it doesn’t stop him from fulfilling the needs of customers who can be found “flooding the Kendra on any working day”.
He places great importance on the role of the center he runs. After the demonetisation of Rs 500 and Rs 1000 notes in November 2016, Rajesh says his Kendra “reduced the inconveniences caused to common people by the move”. When the cash lying around became of no use, the e-banking services his centre offered came to the rescue. This is why he doesn’t approve of the internet shutdowns. “Rural areas suffer the most. My friends in cities do not have to go through this.”
Even updating panchayat records on time is a hassle during shutdowns. Rajesh Koth, Fatehabad district development and panchayat officer, does not directly face the brunt of internet shutdowns since his office functions out of the mini-secretariat, which continues using the internet through a leased line meant for such situations. But shutdowns do affect his department’s work, as the 200-odd panchayats with which emails are to be exchanged do not have the same luxury. “Village panchayats have been equipped with a computer and an internet connection, which are used to update the department on development works passed by the panchayat,” Rajesh says.
With villages losing access to whatever internet they had, panchayats have to send physical records to the Fatehabad district headquarters, thereby increasing the office’s burden.
Internet lost, grains lost
The impact of internet shutdowns on the administration’s e-governance schemes was felt even by fair price shops. Subhash Singh has been running a ration depot in the same village as Mahender’s, Badopal, for a decade now. It wasn’t just Subhash’s loss when he couldn’t disburse ration because of the internet shutdown in August 2017.
He says he was bound by the authorities not to distribute ration without Aadhaar-enabled authentication using a thumbprint. “Several people came, but they had to return empty-handed due to failing biometric verification. I must’ve lost about Rs 2,500 in that time.”
Fatehabad district food and supply controller Ashok Bansal confirmed that his department had indeed “issued clear instructions as mandated by the government to distribute ration only after Aadhaar-enabled verification”. Strict action is taken on complaints of non-compliance with this order, he says.
The shop being his only source of income, Subhash eventually "spent a lot of time and energy to persuade people to return” to it. But he clearly remembers how he was accused of finding an excuse not to give people their lot of ration.
Amit Kumar and Sat Singh are Haryana-based members of 101Reporters.com, a pan-India network of grassroots reporters.
Shutdown stories are the output of a collaboration between 101 Reporters and CIS with support from Facebook.
India’s Data Protection Regime Must Be Built Through an Inclusive and Truly Co-Regulatory Approach
The article was published in the Wire on December 1, 2017.
Earlier this week, the Ministry of Electronics and Information Technology released a white paper on a data protection framework for India, prepared by a “committee of experts” appointed a few months ago and led by former Supreme Court judge Justice B.N. Srikrishna. The other members of the committee are Aruna Sundararajan, Ajay Bhushan Pandey, Ajay Kumar, Rajat Moona, Gulshan Rai, Rishikesha Krishnan, Arghya Sengupta and Rama Vedashree.
With the exception of Justice Srikrishna and Krishnan, the rest of the committee members are either part of the government or part of organisations that have worked closely with the government on separate issues relating to technology, with some of them also having taken positions against the fundamental right to privacy.
Refreshingly, the committee and the ministry have opted for a consultative process, outlining the issues they felt were relevant to a data protection law, espousing provisional views on each of them, and seeking public responses. The paper states that on the basis of the responses received, the committee will conduct public consultations with citizens and stakeholders. Legitimate concerns were raised earlier about the constitution of the committee and the lack of inclusion of different voices on it. However, if the committee follows an inclusive, transparent and consultative process in drafting the data protection legislation, it would go a long way in addressing these concerns.
The paper seeks responses to as many as 231 questions covering a broad spectrum of issues relating to data protection – including definitions of terms such as personal data, sensitive personal data, processing, data controller and data processor – the purposes for which exemptions should be available, cross-border flow of data, data localisation and the right to be forgotten.
While a thorough analysis of all the issues up for discussion would require a more detailed evaluation, at this point, the process of rule-making and the kind of governance model envisaged in this paper are extremely important issues to consider.
In part IV of the paper on ‘Regulation and Enforcement’, there is a discussion on a co-regulatory approach for the governance of data protection in India. The paper goes so far as to provisionally take a view that it may be appropriate to pursue a co-regulatory approach which involves “a spectrum of frameworks involving varying levels of government involvement and industry participation”.
However, the discussion on co-regulation in the white paper is limited to the section on regulation and enforcement. A truly inclusive and co-regulatory approach ought to involve active participation from non-governmental stakeholders in the rule-making process itself. In India, unfortunately, we lack a strong tradition of lawmakers engaging in public consultations and participation of other stakeholders in the process of drafting laws and regulation. One notable exception has been the Telecom Regulatory Authority of India (TRAI), which periodically seeks public responses on consultation papers it releases and also holds open houses occasionally. It is heartening to see the committee of experts and the ministry follow a similar process in this case.
However, these are essentially examples of ‘notice and comment’ rulemaking where the government actors stand as neutral arbiters who must decide on written briefs submitted to it in response to consultation papers or draft regulations that it notifies to the public.
This process is, by its very nature, adversarial, and often means that different stakeholders do not reveal their true priorities but take extreme one-sided positions, as parties tend to at the beginning of a negotiation. This also prevents stakeholders from sharing an honest assessment of the actual regulatory challenges they may face, lest it undermine their position.
This often pits industry and public interest proponents against each other, sometimes also placing different kinds of industry actors in adversarial positions. An excellent example of this kind of posturing, also relevant to this paper, is visible in the responses submitted to the TRAI on its recent consultation paper on ‘Privacy, Security and Ownership of Data in the Telecom Sector’. One of the more contentious issues raised by the TRAI was the adequacy of the existing data protection framework under the licence agreement with telecom companies, and whether there was a need to bring about greater parity in regulation between telecom companies and over-the-top (OTT) service providers. Rather than facilitating an actual discussion on what is a complex regulatory issue, and the real practical challenges it poses for stakeholders, this form of consultation simply led to telecom companies and OTT service providers submitting contrasting extreme positions, with little scope for engagement between the two polar arguments.
A truly co-regulatory approach which also extends to rulemaking would involve collaborative processes which are far less adversarial in their design and facilitate joint problem-solving through multiple face-to-face meetings. Such processes are also more likely to lead to better rulemaking by drawing on the more specialised knowledge of different stakeholders about technology, domain-specific issues, industry realities and low-cost solutions. Further, by bringing the regulated parties into the rulemaking process, ownership of the policy is shared, often leading to better compliance.
Within the domain of data protection law itself, we have a few existing models of robust co-regulation which entail the involvement of stakeholders not just at the level of enforcement but also at the level of drafting. The oldest and most developed form of this kind of privacy governance can be seen in the Dutch privacy statute. It involved a central privacy legislation with broad principles; sectoral, industry-drafted “codes of conduct”; government evaluation and certification of these codes; and a legal safe harbour for companies that follow the approved code for their sector. Over a period of 20 years, the Dutch experience saw the approval of 20 sectoral codes across a variety of sectors such as banking, insurance, pharmaceuticals, recruitment and medical research.
Other examples of policies espousing this approach include two documents from the US – first, a draft bill titled ‘Commercial Privacy Bill of Rights Act of 2011’ introduced before the Congress by John McCain and John Kerry, and second, a White House paper titled ‘Consumer Data Privacy In A Networked World: A Framework For Protecting Privacy And Promoting Innovation In The Global Digital Economy’ released by the Obama administration. Neither of these documents has so far led to a concrete policy. Both envisioned broadly worded privacy requirements to be passed by Congress, followed by detailed rules to be drafted. The Obama administration white paper is more inclusive in mandating that ‘multi-stakeholder groups’ draft the codes, including not only industry representatives but also privacy advocates, consumer groups, crime victims, academics, international partners, federal and state civil and criminal law enforcement representatives and other relevant groups.
The principles that emerge out of this consultative process are likely to guide data protection law in India for a long time to come. Among democratic regimes with a significant data-driven market, India is extremely late in arriving at a data protection law. The least it can do at this point is learn from the international experience and scholarship, which have shown the merits of a co-regulatory approach entailing active participation of the government, industry, civil society and academia in the drafting and enforcement of a robust data protection law.
New Recommendations to Regulate Online Hate Speech Could Pose More Problems Than Solutions
The article was published in the Wire on October 14, 2017.
It was reported last week that an expert committee headed by T.K. Viswanathan, former secretary general of Lok Sabha, recommended that the Indian Penal Code (IPC), the Code of Criminal Procedure and the Information Technology Act be amended to include stringent penal provisions regarding online hate speech. While this report has not been made public, the Indian Express reported that the committee’s recommendations include, among other things, insertion and expansion of penal provisions in the IPC on ‘incitement to hatred’ (Section 153C) and ‘causing fear, alarm or provocation of violence’ (Section 505A) to include online speech, and creation of the offices of state cyber crime coordinator and district cyber crime cell.
Online hate speech has been among the more complex issues with regard to the regulation of technology. The complexity of restricting hate speech has to do with a number of factors, including the ubiquity of strong opinions in online speech, often offensive to certain groups, the interplay between individual and group rights, and the tensions between the values of dignity, liberty and equality. Siddharth Narrain has pointed out in his thesis on hate speech law that the use of law to curb offensive or hurtful speech has been resorted to by religious groups, caste-based groups, occupation-based groups with strong caste associations, language groups and gender-based groups. The range of actions arising from such uses of the law includes the banning of books, criminal proceedings for political satire, or even for ‘liking’ political posts on social media.
The relationship between speech acts and acts of violence is a complicated issue with little consensus on appropriate ways to regulate it. Scholars such as Jonathan Maynard have advocated greater reliance on non-legal responses such as counter-speech, as the use of criminal law to tackle speech often has the effect of chilling forms of dissent. The formulation and application of legal tests in criminal law with respect to hate speech is also hard, as hate speech has as much to do with the content of speech as with its context, including factors such as power structures. Speech by a figure in a position of power is also more likely to result in a call for violence.
Before looking at the specific recommendations made by the T.K. Viswanathan committee, it would be worthwhile to also look at the background of this committee. The committee notes with approval the Law Commission of India’s 267th report on the issue of hate speech. The Law Commission, in turn, was acting on observations made by the Supreme Court in Pravasi Bhalai Sangathan v. Union of India in 2014. In this case, the Supreme Court exhibited judicial restraint and refused to frame guidelines prohibiting political hate speech, instead requesting the Law Commission to look into the issue. However, the court noted with approval international case law on the issue, particularly the observations in the Canadian case Saskatchewan v. Whatcott. Relying on Whatcott, the Supreme Court provided a definition of hate speech that includes the following statements:
“Hate speech is an effort to marginalise individuals based on their membership in a group. Using expression that exposes the group to hatred, hate speech seeks to delegitimise group members in the eyes of the majority, reducing their social standing and acceptance within society. Hate speech, therefore, rises beyond causing distress to individual group members… [and] lays the groundwork for later, broad attacks on vulnerable [groups] that can range from discrimination, to ostracism, segregation, deportation, violence and, in the most extreme cases, to genocide. Hate speech also impacts a protected group’s ability to respond to the substantive ideas under debate, thereby placing a serious barrier to their full participation in our democracy.”
Thus, it is evident that the Supreme Court itself clearly states that hate speech must be viewed through the lens of the right to equality, and relates to speech not merely offensive or hurtful to specific individuals, but also inciting discrimination or violence on the basis of inclusion of individuals within certain groups. It is important to note that it is the consequence of speech that is the determinative factor in interpreting hate speech, more so than even perhaps the content of the speech. This is also broadly reflected in the Law Commission’s report that identifies the status of the author of the speech, the status of victims of the speech, the potential impact of the speech and whether it amounts to incitement as key identifying criteria of hate speech.
However, in the commission’s recommendations, these principles are not fairly represented in the suggested new Sections 153C and 505A, as per a draft released by the Internet Freedom Foundation. Section 505A, for instance, refers to “highly disparaging, indecent, abusive, inflammatory, false or grossly offensive information” and “derogatory information.” These are extremely broad terms, with no guiding jurisprudence in Indian or international law that may be helpful in interpreting them restrictively. It is important to note the similarities between this provision and the repealed Section 66A of the Information Technology Act, which sought to criminalise speech that was “grossly offensive,” had a “menacing character,” or caused “annoyance… danger… insult… enmity, hatred or ill will.”
These terms in the recommended Section 505A also run foul of the observations of Justice Nariman in Shreya Singhal v. Union of India, where he took exception to the nature of the terms in Section 66A by stating that, “Information that may be grossly offensive or which causes annoyance or inconvenience are undefined terms which take into the net a very large amount of protected and innocent speech.” While these terms are somewhat tempered in this provision with a requirement to show intent to “cause fear of injury or alarm,” they remain exceedingly broad and contrary to the requirement that restrictions on speech must be couched in the narrowest possible terms.
The T.K. Viswanathan committee, in addition, seeks to bring electronic speech within the scope of the prospective Sections 153C and 505A. As per its recommendations, ‘means of communication’ would include “any words either spoken or written, signs, visible representations, information, audio, video or combination of both transmitted, retransmitted or sent through any telecommunication service, communication device or computer resource.” This could have the effect of introducing a provision with effects similar to those of the now defunct Section 66A of the Information Technology Act. The disregard for the Supreme Court’s observations on hate speech, for the need to view it through the lens of equality, and the over-broadness of the restrictions on speech make these recommendations, if acted upon, likely to be dangerous for free speech.
Fixing Aadhaar: Security developers' task is to trim chances of data breach
The article was published in Business Standard on January 10, 2017
I feel no joy when my prophecies about digital identity systems come true. This is because, from a Popperian perspective, these are low-risk prophecies. I had said that all centralised identity databases will be breached in the future. That may or may not happen within my lifetime, so I can go to my grave without worries about being proven wrong. Therefore, the task before a security developer is not only to reduce the probability of certain occurrences but, more importantly, to eliminate their possibility.
The blame for fragility in digital identity systems today can be partially laid on a World Bank document titled “Ten Principles on Identification for Sustainable Development” which has contributed to the harmonisation of approaches across jurisdictions. Principle three says, “Establishing a robust — unique, secure, and accurate — identity”. The keyword here is “a”. Like The Lord of the Rings, the World Bank wants “one digital ID to rule them all”. For Indians, this approach must be epistemologically repugnant as ours is a land which has recognised the multiplicity of truth since ancient times.
In the “Identities Research Project: Final Report” — funded by Omidyar Network and published by Caribou Digital — the number one finding is that “people have always had, and managed, multiple personal identities”. And the fourth finding is that “people select and combine identity elements for transactions during the course of everyday life”. As researchers, they have employed indirect language; for the layperson, the key takeaway is that a single national ID for all persons and all purposes is an ahistorical and unworkable solution.
Revoke all Aadhaar numbers that have been compromised, breached, leaked, illegally published or inadvertently disclosed and regenerate new global identifiers. Photo: Reuters
The paper, in its fourth key recommendation, says “cryptographically embed Aadhaar ID into Authentication User Agency (AUAs) and KYC User Agency (aka KUAs) — specific IDs making correlation impossible”. The paper considers several designs for such a local identifier where: 1) no linking is possible, 2) only unidirectional linking is possible, and 3) bidirectional linking is possible, referring to a similar scheme in the LSE identity report.
Though I had spoken about tokenisation as a fix for Aadhaar earlier, I wrote about it for the first time on the 31st of March, 2017, in The Hindu. The steps required are as follows. First, revoke all Aadhaar numbers that have been compromised, breached, leaked, illegally published or inadvertently disclosed and regenerate new global identifiers, aka Aadhaar numbers. Second, reduce the number of KYC transactions by eliminating all use cases that don’t result in corresponding transparency or security benefits. For example, most developed economies don’t have KYC for mobile phone connections. Third, the UIDAI should issue only tokens to those government entities and private sector service providers that absolutely must have KYC. When the NATGRID wants to combine subsets of 20 different databases for up to 12 different intelligence/law enforcement agencies, they will have to approach the UIDAI with the token or Aadhaar number of the suspect. The UIDAI will then be able to release corresponding tokens and/or the Aadhaar number to the NATGRID. Implementing tokenisation introduces both technical and institutional checks and balances in our surveillance systems.
On 25th of July 2017, UIDAI published the first document providing implementation details for tokenisation wherein KUAs and AUAs were asked to generate the tokens. But this approach assumed that KYC user agencies could be trusted. This is because the digital identity solution for the nation as conceived by Aadhaar architects is based on the problem statement of digital identity within a firm. Within a firm all internal entities can be trusted. But in a nation state you cannot make this assumption. Airtel, a KUA, diverted 190 crores of LPG subsidy to more than 30 lakh payment bank accounts that were opened without informed consent. Axis Bank Limited, Suvidha Infoserve (a business correspondent) and eMudhra (an e-sign provider or AUA) have been accused of using replay attacks to perform unauthorised transactions. In November last year, the UIDAI indicated to the media that they were working on the next version of tokenisation — this time called dummy numbers or virtual numbers. This work needs to be accelerated to mitigate some of the risks in the current system.
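The agency-specific token idea can be sketched with a keyed hash: if UIDAI alone holds a separate secret key per user agency, each agency receives a stable token that no other agency can correlate, while UIDAI retains the ability to re-derive, revoke or re-map any token. The agency names, key material and derivation below are illustrative assumptions for exposition, not UIDAI's actual design:

```python
import hmac
import hashlib

# Hypothetical: UIDAI holds one secret key per user agency (KUA/AUA).
# In practice these keys would live in an HSM, never with the agencies.
AGENCY_KEYS = {
    "BANK_KUA": b"secret-key-known-only-to-uidai-1",
    "TELCO_AUA": b"secret-key-known-only-to-uidai-2",
}

def issue_token(aadhaar_number: str, agency_id: str) -> str:
    """Derive an agency-specific token from an Aadhaar number.

    Linking is unidirectional: only the keyholder (UIDAI) can map a
    token back to the global identifier; agencies cannot correlate
    their tokens with each other because the keys differ.
    """
    key = AGENCY_KEYS[agency_id]
    return hmac.new(key, aadhaar_number.encode(), hashlib.sha256).hexdigest()

# The same person gets unlinkable tokens at two different agencies,
# but each agency's token is stable across repeated KYC requests.
t_bank = issue_token("999912345678", "BANK_KUA")
t_telco = issue_token("999912345678", "TELCO_AUA")
assert t_bank != t_telco
assert t_bank == issue_token("999912345678", "BANK_KUA")
```

Revocation in this sketch is simply rotating the leaked agency's key and reissuing tokens; the global Aadhaar number never leaves UIDAI, which is the institutional check the article argues for.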
Internet Governance Forum Report 2017
The Centre for Internet and Society was invited as one of the participating civil society organisations. The meeting was attended by Sunil Abraham (Executive Director), Elonnai Hickok (Director) - Internet Governance and Vidushi Marda (representing both CIS as Programme Manager and ARTICLE 19 as Policy Advisor).
CIS members participated as speaker / panelists in the following sessions:
- Human Rights based Cyber Security Strategy
- Body as Data: Dataveillance, the Informatisation of the Body and Citizenship
- What digital future for vulnerable people?
- Benchmarking ICT companies on digital rights: How-to and lessons learned
- CyberBRICS: Building the Next Generation Internet, STEP by Step
- State-led interference in encrypted systems: a public debate on different policy approaches
- Artificial Intelligence in Asia: What’s Similar? What’s Different? Findings from our AI workshops
- Datafication and Social Justice: What Challenges for Internet Governance?
- Fake news, Content Regulation and Platformization of the Web: A Global South Perspective
Full report here
Another Step towards Privacy Law
The column was published in the January 15, 2018 issue of Governance Now.
(Illustration: Ashish Asthana)
On July 31 the ministry of electronics and information technology (MeitY) constituted a committee of experts, headed by justice (retired) BN Srikrishna, to deliberate on a data protection framework for India. The committee is another step in India’s journey in formulating a national-level privacy legislation.
The formulation of a privacy law started as early as 2010 with an approach paper for a legislation on privacy towards envisioning a privacy framework for India. In 2011, a bill on right to privacy was drafted. In 2012 the planning commission constituted a group of experts, with justice (retired) AP Shah as its chief, which prepared a report recommending a privacy framework.
A month after the formation of the committee, in August, the sectoral regulator, Telecom Regulatory Authority of India (TRAI), released the consultation paper, ‘Privacy, Security and Ownership of the Data in the Telecom Sector’. In the same month, the supreme court in a landmark decision recognised privacy as a fundamental right.
In November 2017, the expert group released a ‘White Paper of the Committee of Experts on a Data Protection Framework for India’ to solicit public comments on the contours of a data protection law for India.
To understand the evolution of the thinking around a privacy framework for India, this article outlines and analyses common themes and differences between the 2012 group of experts’ report and the 2017 expert committee’s white paper.
The white paper seeks to gather inputs from the public on key issues towards the development of a data protection law for India. The paper places itself in the context of the NDA government’s Digital India initiative, the justice Shah committee report, and the judicial developments on the right to privacy in India. It is divided into three substantive parts: (1) scope and exemptions, (2) grounds of processing, obligations and entities, and individual rights, and (3) regulation and enforcement. Each part comprises deep dives into key issues, international practices, the preliminary views of the committee, and questions for public consultation.
Broadly, the 2012 report defined nine national-level privacy principles and recommended a co-regulatory framework that consisted of privacy commissioners, courts, self-regulating organisations, data controllers, and privacy officers at the organisational level. At the outset, the 2017 white paper is different from that report simply by the fact that it is a consultation paper soliciting views, as compared to a report that recommends a broad privacy framework for India. In doing so, the white paper explores a broader set of issues than those discussed in the justice Shah report – ranging from the implications of emerging technologies for the relevance of traditional privacy principles, data localisation, child’s consent, individual participation rights, the right to be forgotten, cross-border flow of data, breach notification, etc. Given that the white paper is a consultation paper, this article compares the provisional views shared in it with the recommendations of the 2012 report.
Key areas that both documents touch upon (though they do not necessarily agree) include:
Applicability
The 2012 report of experts recommended a privacy legislation that extends the right to privacy to all persons in India, all data that is processed by a company or equipment located in India, and to data that originate in India.
Provisional views in the white paper reflect this position, but also offer that applicability could be determined in part by the legitimate interest of the state, by the carrying on of a business or the offering of services or goods in India, and by whether, irrespective of location, the entity is processing the personal data of Indian citizens. The provisional views also touch upon retrospective application of a data protection law and agree with the 2012 report by recommending that a law apply to private and public bodies. They also go a step further by recommending specific exemptions in application for well-defined categories of public or private entities.
Exceptions
The experts’ report defined the following exceptions to the right to privacy: artistic and journalistic purposes, household purposes, historic and scientific research, and the Right to Information. Exceptions that must be weighed against the principles of proportionality, legality, and necessity in a democratic state included: national security, public order, disclosure in public interest, prevention, detection, investigation, and prosecution of criminal offences, and protection of the individual or of the rights and freedoms of others.
Provisional views in the 2017 white paper broadly mirror the exemptions defined in the experts’ report, but do not weigh exceptions related to national security, public interest, etc. against the principles of proportionality, legality, and necessity in a democratic state, and instead explore a review mechanism for these exceptions.
Consent
Provisional views in the white paper on consent note that aspects of consent should include that it is freely given, informed and specific and that standards for implied consent need to be evolved.
Though the 2012 experts’ report defined a principle of choice and consent, this principle did not define what would constitute valid consent, though it did incorporate an opt-out mechanism.
Notice
Provisional views in the white paper hold that notice is important in enabling consent and explore a number of mechanisms that can be implemented to effect meaningful notice, such as codes of practice for designing notice, multilayered notices, assessing notices in privacy impact assessments, assigning entities ‘data trust scores’ based on their data use policies, and having a ‘consent dashboard’ to help individuals manage their consent across entities.
These views build upon and complement the principle of notice defined in the 2012 report which defined components of a privacy policy as well as other forms of notice including data breach (also addressed in the white paper) and legal access to personal information.
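To make the ‘consent dashboard’ idea concrete, here is a minimal illustrative sketch of the underlying data structure: a per-individual list of (entity, purpose) consent records that can be listed and revoked. All class names, fields and entity names are assumptions for exposition, not anything proposed in the white paper:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    entity: str          # data controller the consent was given to
    purpose: str         # specific purpose, echoing purpose limitation
    granted_at: datetime
    revoked: bool = False

@dataclass
class ConsentDashboard:
    """One individual's view of all consents given across entities."""
    records: list = field(default_factory=list)

    def grant(self, entity: str, purpose: str) -> None:
        self.records.append(
            ConsentRecord(entity, purpose, datetime.now(timezone.utc)))

    def revoke(self, entity: str) -> None:
        # Withdraw every consent previously given to one entity.
        for r in self.records:
            if r.entity == entity:
                r.revoked = True

    def active(self) -> list:
        return [r for r in self.records if not r.revoked]

dash = ConsentDashboard()
dash.grant("TelecomCo", "billing")
dash.grant("AdNetwork", "targeted advertising")
dash.revoke("AdNetwork")
assert [r.entity for r in dash.active()] == ["TelecomCo"]
```

A real dashboard would additionally need a way to propagate revocations back to each data controller; the sketch only captures the individual-facing record-keeping that makes consent manageable at all.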
Purpose limitation/minimisation
Provisional views in the white paper recognise the challenges that evolving technology is posing to the principle of purpose limitation and recommend that layered privacy policies and the standard of reasonableness can be used to contextualise this principle to actual purposes and uses.
Though the 2012 report defined a purpose limitation principle, the principle does not incorporate a standard of reasonableness or explore methods of implementation.
Data Retention and Quality
Provisional views in the white paper suggest that the principles of data retention and data quality can be guided by the terms “reasonable and necessary” to ensure that they are not overly burdensome on industry.
The 2012 report of experts briefly touched on data retention in the principle of purpose limitation – holding that practices should be in compliance with the national privacy principles.
Right to Access
Provisional views in the white paper recognise the importance of the individual’s right to confirm, access, and rectify personal information, but note that this is becoming increasingly hard to enforce with respect to observed data, such as behavioural data derived from habits. A suggested solution is to impose a fee on individuals for exercising these rights, to deter frivolous requests.
Though the 2012 report defined a principle of access and correction, it did not propose a fee for exercising this right, and it included the caveat that access may be refused by the data controller if it would affect the privacy rights of others.
Enforcement Mechanisms
Provisional views in the 2017 white paper broadly agree with the appropriateness of the model of co-regulation and development of codes of practice as suggested in the 2012 report. Within the system envisioned in the 2012 report of experts, self-regulating organisations at the industry level will have the ability to develop industry specific norms and standards in compliance with the national privacy principles to be approved by the privacy commissioner.
Accountability
The provisional views of the white paper go beyond the principle of accountability defined in the 2012 report by suggesting that data controllers should not only be held accountable for implementation of defined data protection standards, but in defined circumstances, also for harm that is caused to an individual.
Additional Obligations and Data Controllers
Provisional views in the white paper suggest the following mechanisms as methods towards ensuring accountability of specific categories of data controllers: registration, data protection impact assessments, data audits, and data protection officers acting as centres of accountability.
The 2012 experts’ report also envisioned impact assessments and investigations carried out by the privacy commissioner and the role of a data controller, but did not explore registration of these entities.
Authorities and Adjudication
Both documents are in agreement on the need for a privacy commissioner/data protection authority and envision similar functions such as conducting privacy impact assessments, audits, investigation, and levying of fines. The white paper differs from the 2012 experts’ report in its view that the appellate tribunals under the IT Act and bodies like the National Consumer Disputes Redressal Commission could potentially be appropriate venues for adjudicating and resolving disputes.
The 2012 experts’ report recommended that complaints could be filed through an alternative dispute resolution mechanism, with central and regional level commissioners, or with the courts for remedies, while enforcement of penalties should involve district courts, high courts and the supreme court. The 2012 report specified that a distinct tribunal should not be created, nor should existing tribunals be relied upon, as there is the possibility that such an institution would not have the capacity to rule on a broad right of privacy. Entities that can be held liable by individuals include data controllers, organisation directors, agency directors, and heads of governmental departments.
Penalty and Remedy
The white paper goes much further in its thinking on penalties, remedies and compensation than the 2012 report of experts – discussing potential models for calculation of civil penalties including nature and extent of violation of the data protection obligation, nature of personal information involved, number of individuals affected, whether infringement was intentional or negligent, measures taken by the data controller to mitigate the damage, and previous track record of the data controller.
The white paper is a progressive and positive step towards formulating a data protection law for India that is effective and relevant nationally and internationally. It will be interesting to see the public response to it and the response of the committee to the inputs received from the consultation as well as how the final recommendations differ, build upon, and incorporate previous policy steps towards a comprehensive privacy framework for India.
‘Hurt sentiments’ cost Udaipur internet access for four days
Udaipur: In April 2017, a Facebook post led to 21-year-old Ibrahim* getting arrested and Rajasthan’s Udaipur city losing its mobile internet for four days (broadband was banned only on the first day). The authorities say the hateful content proliferating after Ibrahim’s social media post in praise of neighbouring Pakistan could be tackled only by curtailing internet service. Ibrahim’s family has since left the Fatehnagar locality where they were residing.
“On April 19, an FIR was filed by Fatehnagar resident Rahul Chawda” stating that Ibrahim “is a Muslim and has commented on Facebook ‘Pakistan zindabad tha, Pakistan zindabad hai aur Pakistan zindabad rahega’, which had hurt their religious sentiments. People from Vishwa Hindu Parishad and Shiv Sena had also come along with Rahul to press that a case of sedition be filed,” Subhash Chand, head constable of Fatehnagar police station, told 101reporters.
A case under section 153A (promoting enmity on grounds of religion, race, place of birth, etc.) of the Indian Penal Code (IPC) and section 67 of the Information Technology Act (punishment for publishing or transmitting obscene material in electronic form) was registered. “However, sedition charges were not registered as their report did not have sufficient basis for it,” Chand says.
Ibrahim, an undergraduate, lived in a slum in Fatehnagar and did odd jobs to earn money. His father works as a taxi driver to support a family of four children. “Ibrahim had no past criminal record. His family left the locality after the incident. Their house is locked since past few months. He was arrested the same day when FIR was registered, but is presently out on bail,” says Gopal Lal Sharma, station house officer, Fatehnagar police station.
In his locality though, Ibrahim’s reputation was that of a “notorious” boy. “His family was fed up with him. He used to post useless content on Facebook. The atmosphere in the city was tensed between the communities at that time. So, his post triggered the religious sentiments,” says Nadir Khan, 40, a neighbour.
Udaipur police say the content posted by Ibrahim on social media was hateful and could’ve led to clashes between communities. “Isn’t it enough to say the post was inflammatory?” replied Anand Shrivastava, inspector general of police (IG), Udaipur, when questioned about the content of Ibrahim’s post. “Such messages easily get viral on social media. Some people use Facebook and WhatsApp to spread hatred, but there is no particular site, or content that is blocked during internet shutdown. Accessibility to the internet is completely restricted,” he added.
“Messages that could outrage the religious sentiments of the Hindu community were circulated, and we had to shut down internet in the district for four days,” Shrivastava says. When asked what happens if such inflammatory content finds its way back on internet once it is restored, the IG says, “We review the situation. If it is still in circulation, we can continue with the shutdown.”
‘More than an FB post’
Then Udaipur district magistrate Rohit Gupta, however, doesn’t attribute the shutdown to the post by Ibrahim. “It was not because of a particular kid. There were other reasons. Some incidents had happened in the city which led to a lot of improper posts being circulated on social media,” says Rohit Gupta, who is now the district magistrate for Kota.
Explaining the administrative procedure behind an internet shutdown, Gupta says, “Based on a report from the police, many agencies, including intelligence and the affected party, are consulted about the decision to implement internet shutdown. Curtailing internet doesn’t allow the situation to aggravate further. Its fallout affects the general masses, too, but that happens even in the case of a curfew when we restrict people’s movement.”
Gupta says internet shutdown is a preventive action to keep the situation from escalating into a full blown law and order problem. “People will then question why the administration didn’t act in time to prevent it.”
While the administration ensured that banking and lease-line providers were not affected during the internet ban, several other businesses dependent on internet were affected.
“Why all of us?”
“If four people post hateful content on social media, why should 20 lakh others be punished? When police are unable to control a situation, the easiest way they have is to curtail the internet. I couldn’t work for four days. Many others, who depend on internet for work like me, were affected. They should ban only the social media,” says Chhatrapati Sarupria, an online graphic designer who petitioned the sessions and district court against the arbitrary suspension of internet services in Udaipur.
Cyber experts feel there can be other ways to keep social and business activities out of the purview of a ban during such law and order situations, but the competent authorities fail to make any attempts in this direction.
“Internet shutdown is not the only solution. Since there is no procedure to stop only the hateful content on social media, the only option left is to turn off the internet completely. Facebook has a ‘report abuse’ mechanism, which allows review and removal of any post that goes against the Facebook community standards. We need to work on better alternatives to control inflammatory content on social media. Only if such alternative ways are initiated now can they be regulated as we progress,” says Mukesh Choudhary, a cyber expert.
*Name changed to protect identity.
(Shruti Jain is a Jaipur-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.)
Mobile net ban during peaceful protest leaves farmers confused
In Sikar district, about 15,000 farmers had staged a protest at Krishi Upaj Mandi on 1 September 2017 under the banner of All India Kisan Sabha. Their major demands were farm loan waiver, pension for farmers and implementation of the recommendations of the Swaminathan Commission. The protest had the support of students, traders' associations, anganwadi workers, transport unions and a few other organisations. About 100,000 people joined farmers in a solidarity march during the next 13 days. The demonstrations continued and when talks with the government failed, thousands of farmers set out to lay siege to the district collector's office and block highways on September 11. Accordingly, the district administration clamped prohibitory orders under Section 144 of Criminal Procedure Code, restricting assembly of five or more people, and blocked mobile internet in the district.
Kishan Pareek, district secretary of the Communist Party of India (Marxist), which took part in the protest, contended that though the government says the ban was enforced to check the spread of violence, the actual motive was something else. He says the administration was trying to stifle their movement but couldn’t use force as the protesters were peaceful. "So, they resorted to spreading rumours to provoke us to commit any violent activity. If internet was working that time, we could have easily denied those [rumours],” he says. According to Pareek, the rumours that circulated that day included: the protest has turned violent at some location, police have fired bullets/charged baton at the protesters, additional force has been called in from Jaipur, etc. As broadband was operational, the organisers managed to counter falsehood with facts, and the misinformation didn't spread outside Sikar. Pareek says whichever protest spot the rumours portrayed as violence-ridden, their social media team shared videos from there on Facebook to counter them.
Pictures above: Thousands of farmers held a protest at Krishi Upaj Mandi, Sikar in September 2017.
Nevertheless, in the absence of mobile internet, farmers’ teams that had gathered at various highways to block roads had difficulty processing the false information that was trickling in. Though it created much confusion among them, it failed to instigate them.
Rajpal Singh, a Sikar-based member of CPI(M)'s social media wing, said the mainstream media didn't give much attention to the protest. He says it was local websites and newspapers that covered the event, which is why the administration banned the internet, hoping the restriction on the flow of information would throw a spanner in the works. Apart from local news websites, local Facebook pages -- Sikar Aapno, Sikar Sandesh, We love CPIM- Dhod and CPIM Sikar, etc. -- were giving minute-by-minute updates of the farmers' protest.
The internet services were resumed in Sikar a day later as the protest did not get violent and the protesters were not found circulating any provocative content.
A former CPI(M) MLA, president of All India Kisan Sabha and leader of the farmers' agitation, Amara Ram, told 101Reporters that one of the very reasons their movement enjoyed humongous public support was its peaceful nature. He says as their movement unfolded, people from Sikar and outside realised this protest would not turn violent and it’s a cause that needed support.
As cautious as the government might have been about the September 11 protest, police presence indicated that the law-enforcement agency did not perceive it as a threat. One of the protesters, Nemichand, says only 50-odd policemen had been deputed for the protest march of 15,000+ farmers to the district collectorate. He claimed that the number of men in khaki dwindled to 20 by the afternoon.
He alleged that the real reason for internet shutdown was stopping the dissemination of news about their protest as it exposes the Modi government's inconsiderate approach towards farmers.
“Everybody in Sikar was talking about the internet ban. Since there was no legitimate reason for the ban, the government couldn’t continue with it, fearing how they will justify,” he says.
The administration confirmed that the ban was imposed fearing threat to law and order in the district due to the gathering of thousands. “Though they were protesting peacefully the initial ten days at the mahapadav, they had planned to block the district collectorate on September 11 in thousands. To restrict their movement, internet was suspended in Sikar. During such situations, no one writes positive about the administration. We didn’t want to provide them a platform for spreading rumours that could have made the protestors violent. If there had been no internet ban that day, something big would have happened,” Jai Prakash Narayan, additional district collector and additional district magistrate, told 101reporters. “Broadband was made working during the internet ban so that the private and government offices were not affected. While giving order for internet ban, it is made sure that normal call and broadband facilities are not debarred. General masses are affected but internet shutdown is the only option we have,” he added. “While their blockade continued for three days, we restricted internet services only for first 24 hours as the protest had gained stability till then.”

Even three months after the high-powered ministerial committee was formed to look into the farmers’ demands, nothing has been done. Now, they plan to stage a protest in February 2018 when the state assembly will be in session. Former CPI(M) MLA Pema Ram says, “Preparation for February protest has already begun. Kisan Sansads are being organised in Sikar, where active farmers from each village participate to raise demands regarding implementation of the recommendations of the Swaminathan Commission report, a solution to the menace of stray cattle, complete farm loan waiver and pension for farmers. They then discuss it with other farmers in their villages.”
Pictures above: Apart from local news websites, local Facebook pages - Sikar Aapno, Sikar Sandesh, We love CPIM- Dhod and CPIM Sikar, etc. - were giving minute-by-minute updates of the farmers' protest. Pictures courtesy: Shruti Jain
(Shruti Jain is a Jaipur-based journalist and a member of 101Reporters.com, a pan-India network of grassroots reporters.)
Data Protection: We can innovate, leapfrog
The article was published in the Deccan Herald on January 20, 2018.
Even if we can read them, we may not have the necessary legal training to understand them. According to a tweet thread by Pat Walshe (@privacymatters), Tetris, a popular video game app, has a privacy policy that details the third-party advertising companies it shares data with. These third parties include "123 Ad Networks; 13 Online Analytics companies; 62 Mobile Advertising Networks; 14 Mobile Analytics companies. The linked privacy policies for Tetris run to 407,000 words, compared to 450,000 words for the entire 'Lord of the Rings trilogy'." A child aged four or above who plays the game, and her parents, need an intermediary to deal with the corporations hiding behind Tetris.
Unlike the European Union, which has more than 37 years of history when it comes to data protection law, India is starting with a near blank slate after the Supreme Court confirmed that privacy is a constitutionally-guaranteed fundamental right in the Puttaswamy case judgement. While we would want to maintain adequacy and compatibility with the EU General Data Protection Regulation (GDPR) because it has become the global standard, we must realise that there is an opportunity for leapfrogging. This article attempts to introduce the reader to three different visions for intermediaries that have emerged within the Indian data protection debate around the accountability principle. I will also provide a brief sketch of an idea that we are developing at the Centre for Internet and Society. This is an incomplete list as there must be more proposals for regulatory innovation around the accountability principle that I am currently unaware of.
- Account Aggregators: The 'India Stack' ecosystem that has been built around the Aadhaar programme first proposed intermediaries called Account Aggregators. Account Aggregators manage consent artifacts. India Stack has traditionally been described as having four layers -- presenceless, paperless, cashless and consent. The consent layer is supposed to feature Account Aggregators. If, for example, a data subject wanting an insurance policy visits an insurance portal, the portal would collect personal information and a consent artifact from her and pass it on to multiple insurance companies. These insurance companies would send personalised bids to the portal, which would be displayed on a comparative grid to enable empowered selection.
The data structure of the consent artifact is provided in the Master Direction from the RBI titled "Non-Banking Financial Company Account Aggregator Directions," published in September 2016. How does this work? The fields include (i) identity and optional contact information; (ii) nature of the financial information requested; (iii) purpose; (iv) the identity of the recipients, if any; (v) URL/address for notifications when the consent artifact is used; (vi) consent artifact creation date, expiry date, identity and signature/digital signature of the Account Aggregator; and (vii) any other attribute as may be prescribed by the RBI. While Account Aggregators make it frictionless to grant consent, and for data controllers to harvest it, they do not make it easy for you to manage and revoke your consent.
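Expressed as code, such an artifact might look like the sketch below. The field names and values are our own illustrative choices based on the field list above, not the actual schema prescribed in the RBI Master Direction.

```python
# Hypothetical sketch of a consent artifact, following the field list in the
# RBI Master Direction on NBFC Account Aggregators. All names are illustrative.
from datetime import date

consent_artifact = {
    "identity": "data-subject-123",              # (i) identity of the data subject
    "contact": "subject@example.com",            # (i) optional contact information
    "information_requested": "bank statements",  # (ii) nature of financial information
    "purpose": "insurance premium quote",        # (iii) purpose of the request
    "recipients": ["insurer-A", "insurer-B"],    # (iv) identity of the recipients
    "notification_url": "https://aggregator.example/notify",  # (v) use notifications
    "created": date(2016, 9, 2).isoformat(),     # (vi) creation date
    "expires": date(2017, 9, 2).isoformat(),     # (vi) expiry date
    "aggregator_signature": "<digital signature of the Account Aggregator>",  # (vi)
}
```

A portal could pass such a structure, alongside the personal information itself, to each recipient named in it.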
- Data Trusts: Most recently, Na.Vijayashankar, a Bengaluru-based cybersecurity and cyberlaw expert, has proposed intermediaries called 'Data Trusts', registered with the regulator, which (i) will work as escrow agents for personal data (which would be classified by type for different degrees of protection); (ii) will make privacy notices accessible by translating them into accessible language and formats; (iii) will disclose data minimally to different data controllers based on the purpose limitation; and (iv) will issue tokens or pseudonymous identifiers and monetise the data for the benefit of the data subject. To ensure that Data Trusts truly protect the interests of the data subject, Vijayashankar proposes three requirements: (a) public performance reviews; (b) audits by the regulator; and (c) "an arms-length relationship with the data collectors." In his proposal, Data Trusts are firms with "the ability to process a real-time request from the data subject to supply appropriate data to the data collector."
- Learned Intermediaries: The Takshashila Institution published a paper titled Beyond Consent: A New Paradigm for Data Protection, authored by Rahul Matthan, partner at the law firm Trilegal. Learned Intermediaries would perform mandatory audits on all data controllers above a particular threshold. Like Vijayashankar, Matthan also requires these intermediaries to be certified by an appropriate authority. The main harm that he focuses on is bias or discrimination. He proposes three stages of audit designed for the age of Big Data and Artificial Intelligence: "(i) Database Query Review; (ii) Black Box Audits; and (iii) Algorithm Review". Matthan also tentatively considers a rating system. Learned Intermediaries are a means to address information asymmetry in the market by making data subjects more aware. The impact of churn on their bottom lines, it is hoped, will force data controllers to behave in an accountable manner, protecting rights and mitigating harms.
- Consent Brokers: Finally, I have proposed the model of a 'Consent Broker' by modifying the concept of the Account Aggregator. Like the Account Aggregator proposal, we would want a competitive set of consent brokers who will manage consent artifacts for data subjects. However, I believe there should be a 1:1 relationship between data subjects and consent brokers, so that the latter compete for the business of data subjects. Like Vijayashankar, I believe that the consent broker must keep an "arms-length distance" from data controllers and must be prohibited from making any money from them. Consent brokers could also be trusted to take proactive actions for data subjects, such as access and correction.
The need of the hour is the production of regulatory innovations and robust discussions around them for all the nine privacy principles in the Justice AP Shah committee report -- notice, choice and consent, collection limitation, purpose limitation, access and correction, disclosure of information, security, openness and accountability.
Artificial Intelligence in India: A Compendium
Towards understanding the state of AI in India, the challenges to its development and adoption, and the ethical concerns that arise out of the use of AI, CIS is undertaking research to understand and document national developments, discourse, and impact (actual and potential), to propose ethical and regulatory solutions, and to compare these against global developments in the space. As part of this, CIS is creating a compendium of reports that dive into the use of AI across sectors including healthcare, manufacturing, governance, and finance.
Each report seeks to map the present state of AI in the respective sector. In doing so, it explores:
- Use: What is the present use of AI in the sector? What is the narrative and discourse around AI in the sector?
- Actors: Who are the key stakeholders involved in the development, implementation and regulation of AI in the sector?
- Impact: What is the potential and existing impact of AI in the sector?
- Regulation: What are the challenges faced in policy making around AI in the sector?
The reports are as follows:
- AI and the Banking and Finance Industry in India: (19th June 2018 Update: This case study has been modified to remove interview quotes, which are in the process of being confirmed. The link above is the latest draft of the report.)
- AI in the Governance Sector in India
The research is funded by Google India. Comments and feedback are welcome. The reports are drafts.
CIS Submission to the Committee of Experts on a Data Protection Framework for India
The submission is divided into four parts — I. Preliminary, II. Scope and Exemption, III. Grounds of Processing, Obligations of Entities and Individual Rights and IV. Regulation and Enforcement. The submission follows the same order as adopted by the White Paper.
Please access the full submission here.
AI and Manufacturing and Services in India: Looking Forward
Event Report: Download (PDF)
The Roundtable comprised participants from different sides of the AI and manufacturing and services spectrum, including practitioners, representatives from multinational companies, think tanks, academicians, and researchers. The Roundtable discussed various questions regarding AI in the manufacturing and services industry in India.
The round of discussions began with initial observations from the in-progress research that the Centre for Internet and Society (CIS) is undertaking on the use of AI in manufacturing and services. Some of the uses of AI that the research had thus far identified across various sectors included AI platforms in IT services for accurate forecasting for businesses, AI-driven automation of routine tasks in manufacturing and production, and AI-driven analytics for forecasting in the agriculture sector. The discussion then proceeded to the benefits of using AI, including efficient and effective results, precision, and automation of repetitive maintenance tasks. The draft research also acknowledges that although the use of AI is beneficial in many ways, there are some key concerns around job displacement, privacy, lack of awareness, and the capacity needed to fully understand and use new AI technologies. The draft research also identified a few key AI initiatives in India, such as Wipro Holmes, TCS Ignio, and G.E., that were providing solutions to help automate software maintenance tasks and support the smooth working of SAP (Systems, Applications & Products) operations. Innovative uses of AI in areas such as crop production (M.I.T.R.A.) and dairy optimization (StellApps) were also identified.
To understand the present state of AI and its impact, the session was opened to discussion. See the full report here.
Unpacking Data Protection Law: A Visual Representation
Cross-posted from Privacy International blog.
Credits: Flag illustrations, when not created by the authors, are from Ibrandify / Freepik.
The Fundamental Right to Privacy - A Visual Guide
A Series of Op-eds on Data Protection
The first article "User consent is the key to data protection in India" examines the debate around consent and the arguments made to discard it. I question the premise of big data exceptionalism, particularly in the absence of any mature governance models which address use regulation.
In the second article "Robust economic argument for a sound Indian data protection law", I examine the substance of the argument of 'innovation' as a legitimate competing interest with respect to privacy, and question the economic arguments made in support of innovation enabled by unregulated access to data.
In the third article "India’s data protection law needs graded enforcement mechanism", I look at the two competing arms of regulation - enforcement and compliance - and how a balance of the two is needed in India, with an empowered regulator drawing on principles from responsive regulation theory.
People Driven and Tech Enabled – How AI and ML are Changing the Future of Cyber Security in India
Introduction
In a study conducted by Cisco, it was found that in the past 12-18 months, cyber attacks have caused Indian companies to incur financial damages amounting to USD 500,000.
There is a need to strengthen the nodal agencies within an enterprise that can deal with these threats and prevent irreparable damage to enterprises and their customers. A Security Operations Centre (SOC) within any organization is the team responsible for detecting, monitoring, analyzing, communicating and remedying security threats. The SOC technicians employ a combination of technologies and processes to ensure that an enterprise’s security is not compromised. As instances of cyber attacks increase both in number and sophistication, SOCs need to use state-of-the-art technologies to stay one step ahead of the attackers. Presently, SOCs face a number of infrastructural problems, such as the low priority given to the cyber security budget, slow and passive responses to threats, a dearth of skilled technicians, and the absence of a global intelligence network for cyber threats. This is where technologies such as Artificial Intelligence (AI) and Machine Learning (ML) are helping: by monitoring systems to identify cyber attacks, analysing the severity of threats, and in some cases blocking them.
Evolution of Security Operations Centers
In the same study, Cisco looked at the evolution of cyber threats and how companies were using technologies such as AI and ML to ameliorate those threats. Another key insight the study brought out was that 53 and 51 percent of the subject companies were reliant on ML and AI respectively. One of the reasons behind AI and ML’s effectiveness in cyber security is their capacity not only to detect known threats but also to use their learnings from data to detect unknown threats. In his webinar, Peter Sparkes also stated that SOCs were evolving into a ‘people driven and tech enabled’ system.
People Driven and Tech Enabled
In the case of cyber security, which is itself a relatively new field, technologies such as AI and ML are helping companies not only to overcome infrastructural barriers but also to respond proactively to threats. A study conducted by the Enterprise Strategy Group revealed that one-third of the respondents believed that ML technology could detect new and unknown malware.
The study also stated that the use of machine learning to detect and prevent threats from unknown malware reduced the number of cases the cyber security team had to investigate.
Similarly, the tasks of monitoring and blocking, which were earlier conducted by entry-level analysts, are now done by systems using machine learning. Typically, the AI acts as the first monitoring system, after which the threat is examined by the company’s technicians, who possess the requisite skill set and experience. By delegating the time-consuming task of continuous monitoring to an ML system, the technicians now have time to look at serious threats. In this way, AI and humans are working together to build a stronger and more responsive security protocol.
Detecting the Unknown
Cyber criminals are becoming increasingly sophisticated, and in order to prevent attacks the monitoring systems (both human and automated) need to be able to detect them before security is compromised. The detection of threats through AI and ML works in much the same way as spam identification: the system is trained on a large amount of data, which teaches the algorithm to distinguish malicious activity from benign.
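The spam-style training loop described above can be sketched minimally: label known traffic samples, learn a per-class average of their features, then assign new events to the nearest class. The features, samples and class names below are invented for illustration and bear no relation to any production SOC model.

```python
# Toy nearest-centroid classifier: learn average feature vectors for known
# benign and malicious traffic, then label new events by proximity.

def centroid(samples):
    """Average each feature position across a list of feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

# Hypothetical feature vectors: [requests_per_second, failed_logins, unusual_ports]
benign = [[5, 0, 0], [8, 1, 0], [6, 0, 1]]
malicious = [[90, 12, 5], [120, 20, 7], [80, 9, 4]]

centroids = {"benign": centroid(benign), "malicious": centroid(malicious)}

def classify(event):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(event, centroids[label]))

print(classify([100, 15, 6]))  # lies near the malicious centroid
print(classify([7, 0, 0]))     # lies near the benign centroid
```

A real system would use far richer features and models, but the shape of the pipeline — labelled history in, a decision rule out — is the same.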
There have been numerous cases of stealthy cyber attacks, such as the WannaCry ransomware, that have evaded detection by conventional security firewalls and caused crippling damage. There is also a need for deception technology, which involves the automatic detection and analysis of attacks. This technology then deceives and defeats the attackers, restoring the system to normal.
The systems that can handle threats by themselves do so by following a predetermined procedure, or playbook where the AI detects activities that go against the procedure/playbook. This is more effective compared to the earlier system where the technicians would analyse the attacks on a case by case basis.
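The playbook approach above can be sketched as a set of allowed behaviours against which every event is checked, with anything outside the playbook flagged for a technician. The rule names and event fields here are hypothetical illustrations, not a real SOC playbook.

```python
# Minimal playbook-style detector: events that violate the predetermined
# procedure are flagged; compliant events pass silently.

PLAYBOOK = {
    "allowed_ports": {80, 443, 22},
    "max_failed_logins": 3,
    "allowed_countries": {"IN", "US"},
}

def check_event(event):
    """Return the list of playbook violations for one event (empty if compliant)."""
    violations = []
    if event["port"] not in PLAYBOOK["allowed_ports"]:
        violations.append("unexpected port")
    if event["failed_logins"] > PLAYBOOK["max_failed_logins"]:
        violations.append("too many failed logins")
    if event["country"] not in PLAYBOOK["allowed_countries"]:
        violations.append("unexpected origin country")
    return violations

event = {"port": 8081, "failed_logins": 7, "country": "IN"}
print(check_event(event))  # ['unexpected port', 'too many failed logins']
```

Because every event is tested against the same rules, this scales to continuous monitoring in a way that case-by-case human analysis does not.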
AI and ML can help reduce the time required to detect threats, enabling technicians to act proactively and prevent damage. As AI and ML systems are less prone to mistakes than human beings, each threat is dealt with promptly and accurately. AI systems also help by categorising attacks based on their propensity for damage. These systems can use the large volumes of data collected about previous attacks and adapt over time to give enterprises a strong line of defence against attacks.
Passive to Active Defense
Threat to cyber security can emerge even in seemingly safe departments, such as Human Resources. It is therefore important to proactively hunt for threats across all departments uniformly.
In order to detect an anomaly, the AI and ML system will require both large volumes of data as well as a significant amount of processing power, which is difficult for smaller companies to provide. A possible solution to improve defense is to have a system of sharing SOC data between companies, and thereby creating a global database of intelligence. A system of global intelligence and threat data sharing could help smaller companies combat cyber threats without having to compromise on core business development.
Use of AI in Cyber Security in India
In 2017, Indian enterprises were infected by two lethal cyber attacks: Nyetya, and an attack that crept in through the trusted software CCleaner and infected computers. These attacks may just be the tip of the iceberg, since there may be many other attacks that have gone unreported, or worse, undetected. Cisco reported that less than 55 per cent of Indian enterprises were reliant on AI or ML for combating cyber threats. Although the current numbers seem bleak, a number of Indian enterprises have recently begun using AI and ML in cyber security.
One such example is HDFC bank which is in the process of introducing an AI based Cyber Security Operations Centre (CSOC).
This CSOC is based on a four point approach to dealing with threats - prevent, detect, respond and recover. The government of India has also taken its first step towards the use of AI in cyber security through a project that aims to provide cyber forensic services to the various agencies of the government including law enforcement.
Indian intelligence agencies have also entered into an agreement with tech startup Innefu, which utilizes AI to process data and decipher threats by looking at the patterns of past threats.
As India becomes increasingly data-dense, both private and public organizations need to treat cyber security with utmost seriousness and protect their data from crippling attacks.
Conclusion
Enterprises have become storehouses of user data, and SOCs have a responsibility to protect this data. Companies’ SOCs have been plagued by several problems, such as a lack of skilled technicians, delays in response time and the inability to respond proactively to attacks. AI and ML can help in a system of continuous monitoring as well as take over the more repetitive and time-consuming tasks, leaving the technicians with more time to work on damage control. It must be kept in mind, though, that AI is not a silver bullet, since attackers will try their best to confuse AI systems through evasion techniques such as adversarial AI, where attackers craft inputs designed to trick the machine learning model into making a mistake.
Hence, human intervention and monitoring of AI and ML systems in cyber security is essential to maintain the defence and protection mechanisms of enterprises.
A few considerations for Indian SOCs while using AI and ML:
1. Companies need to understand that AI and ML need human expertise and supervision to be effective; substituting AI for people is not ideal.
2. The companies need to give equal if not more importance to data security.
3. The companies need to constantly upgrade their systems and re-skill their technicians to combat cyber security threats.
4. The AI and ML systems need to be regularly audited to ensure that they are not compromised by cyber attacks and also to ensure that they are not generating false positives.
1. Cisco. (2018, February). Annual Cybersecurity Report. Retrieved March 8, 2018, from https://www.cisco.com/c/dam/m/digital/elq-cmcglobal/witb/acr2018/acr2018final.pdf?dtid=odicdc000016&ccid=cc000160&oid=anrsc005679&ecid=8196&elqTrackId=686210143d34494fa27ff73da9690a5b&elqaid=9452&elqat=2
2. Enterprise Strategy Group. (2017, March). Top-of-mind Threats and Their Impact on Endpoint Security Decisions. Retrieved March 8, 2018, from https://www.cylance.com/content/dam/cylance/pdfs/reports/ESG-Research-Insights-Report-Summary-Cylance-Oct-2017.pdf
3. Vorobeychik, Y. (2016). Adversarial AI. Retrieved March 8, 2018, from https://www.ijcai.org/Proceedings/16/Papers/609.pdf
4. Quora. (2018, February 15). How Will Artificial Intelligence And Machine Learning Impact Cyber Security? Retrieved March 8, 2018, from https://www.forbes.com/sites/quora/2018/02/15/how-will-artificial-intelligence-and-machine-learning-impact-cyber-security/#569454786147
5. Sparkes, P. (2018, February 27). The 5 Essentials of Every Next-Gen SOC. Retrieved March 8, 2018, from https://www.brighttalk.com/webcast/13389/303251/the-5-essentials-of-every-next-gen-soc
6. PTI. (2018, February 21). Indian companies lost $500,000 to cyber attacks. Retrieved March 8, 2018, from https://economictimes.indiatimes.com/tech/internet/indian-companies-lost-500000-to-cyber-attacks-in-1-5-years-cisco/articleshow/63019927.cms
7. Cisco. (2018, February). Annual Cybersecurity Report. Retrieved March 8, 2018, from https://www.cisco.com/c/dam/m/digital/elq-cmcglobal/witb/acr2018/acr2018final.pdf?dtid=odicdc000016&ccid=cc000160&oid=anrsc005679&ecid=8196&elqTrackId=686210143d34494fa27ff73da9690a5b&elqaid=9452&elqat=2
8. Raval, A. (2018, January 30). AI takes cyber security to a new level for HDFC Bank. Retrieved March 8, 2018, from http://computer.expressbpd.com/magazine/ai-takes-cyber-security-to-a-new-level-for-hdfc-bank/23580/
9. “The Centre for Development of Advanced Computing (C-DAC) under the Ministry of Electronics and Information Technology (MeitY) is working on a project to provide cyber forensic services to law-enforcing and other government and non-government agencies.” Ohri, R. (2018, February 15). Government readies AI-muscled cyber security plan. Retrieved March 8, 2018, from https://economictimes.indiatimes.com/news/politics-and-nation/government-readies-ai-muscled-cyber-security-plan/articleshow/62922403.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst
10. Chowdhury, P.A. (2017, January 30). Cyber Warfare at large in Southeast Asia, India leverages AI for the same cause. Retrieved March 8, 2018, from https://analyticsindiamag.com/cyber-warfare-large-southeast-asia-india-leverages-ai-cause/
11. OpenAI. (2017, February 24). Attacking Machine Learning with Adversarial Examples. Retrieved March 8, 2018, from https://blog.openai.com/adversarial-example-research/
Analysis of ICANN revenue shows ambiguity in their records
Click to download a PDF of the Analysis
In 2014, CIS' Sunil Abraham demanded greater financial transparency of ICANN at both the Asia Pacific IGF and the ICANN Open Forum at the IGF. Later that year, CIS was provided with a list of ICANN's sources of revenue for the financial year 2014, including payments from registries, registrars and sponsors, among others, by ICANN India head Mr. Samiran Gupta. This was a big step for CIS and the Internet community, as before this, no details on granular income had ever been publicly divulged by ICANN on request.1 Our efforts have resulted in this information now being publicly available from the year 2012 onwards. We then decided to analyze all these years of financial data in collaboration with Ashoka fellow Arjun Venkatraman; the following are our observations:
ICANN's revenue has been growing steadily over the years: in 2016 it was 1.7 times the revenue it made in 2012.
A breakdown by country reveals that a significantly higher proportion of their revenue is from sources registered in the United States.
It is also interesting to note that revenue from China has seen a spike in the past two years, especially in the period of 2015-2016. Verisign CEO James Bidzos confirmed in an interview with analysts that the Chinese activity had surprised them as well, though they expected the activity to slow down in the second quarter of 2016.2
Verisign also happens to be ICANN's top-paying customer every year, running the .com/.net registries. Its payments are orders of magnitude greater than the payments made by any other single entity, or even several entities combined.
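The kind of aggregation behind these observations can be sketched as follows: group line-item payments by country, and compare year totals to get a growth multiple. All figures below are invented placeholders, not ICANN's actual numbers.

```python
# Sketch of the revenue analysis: aggregate hypothetical line-item payments
# by country for one year, and compute the growth multiple between two years.
from collections import defaultdict

payments = [  # (year, country, entity, amount) -- placeholder data only
    (2012, "US", "Verisign", 30.0),
    (2012, "US", "Registrar A", 5.0),
    (2016, "US", "Verisign", 45.0),
    (2016, "CN", "Registrar B", 12.0),
    (2016, "US", "Registrar A", 6.0),
]

by_country = defaultdict(float)  # 2016 revenue per country of origin
by_year = defaultdict(float)     # total revenue per year
for year, country, entity, amount in payments:
    if year == 2016:
        by_country[country] += amount
    by_year[year] += amount

print(dict(by_country))               # the US dominates in this toy data
print(by_year[2016] / by_year[2012])  # growth multiple, 2012 -> 2016
```

The same two passes — a per-country total and a year-over-year ratio — produce the breakdown and growth figures discussed above once run over the real ICANN disclosures.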
ICANN differentiates its sources of revenues by each class of entity which stand for the following:
- RYN - Registry
- OTH - Other
- RYG - Registry
- RIR - Regional Internet Registry
- RYC - ccTLD (country code Top Level Domain)
- IDN - Internationalized Domain Names
- RAR - Registrar
- SPN - Sponsor
It is evident that the Registries and Registrars contribute the most to revenue; however, the classification of these groups is itself ambiguous. RYG and RYN both stand for registry, but we find no explanation for the double entry for a single group. Secondly, Sponsors are included, yet it is unclear how they have sponsored ICANN - whether through travel and accommodation of personnel or some other mode of institutional sponsorship. The Regional Internet Registries are clubbed under one heading, and as a consequence it is not possible to determine individual RIR contributions, such as how much APNIC paid for the Asia-Pacific region. The total payment made by the RIRs is a small fraction of the payments made by many other entities, and they all pay through the Number Resource Organization (NRO), which is listed as paying from Uruguay, although the MOU creating the NRO does not specify that as its location. The NRO website states that "RIRs may be audited by external parties with regards to their financial activities or their operations. RIRs may also allow third parties to report security incidents with regards to their services."3 Their records show that financial disclosure is done in an inconsistent manner, with the last publication from AFRINIC being for the year 2013,4 while the RIPE NCC, which coordinates the area of Central Europe, the Middle East and Russia, last published an annual report for the year 2016, but with no financial information in it.5
Tallying the most frequently found words in the names of these revenue sources, which can give us an idea of the structure of the contributing entities, yields the following result.
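Such a tally is a straightforward word-frequency count over the entity names. The entity names below are invented stand-ins for the actual names in the disclosures.

```python
# Sketch of the word-frequency tally: split every entity name into words
# and count occurrences across the whole list.
from collections import Counter

entities = [  # placeholder names, not real ICANN revenue sources
    "Example Domains LLC",
    "Example Registry Inc",
    "Sample Registrar LLC",
    "Sample Names Inc",
]

words = Counter(
    word.lower() for name in entities for word in name.split()
)
print(words.most_common(3))
```

Frequent corporate suffixes ("LLC", "Inc", "Ltd") and recurring brand words surface immediately, hinting at the legal structure of the contributors.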
Several clients have registered multiple corporate entities to increase their payments to ICANN such as DropCatch, Everest and Camelot. 6 The first of them, DropCatch, is a domain drop catcher, essentially selling expired domain names to the highest bidder. By the end of 2016, about 43% of all ICANN-accredited registrars were controlled by them. 7
Many clients have reported themselves from different countries over the years as well, such as 'Verisign Sarl', which has been reported as originating from Switzerland in one year and from the United States in another. 8 Another curious case is the entity 'Afilias plc', which, when categorized as a sponsor (SPN), is reported from Ireland, but as a registry (both RYG and RYN) is reported from the United States. Some entities have originated in one place, such as the United Arab Emirates, and then moved to other countries, such as India.
To summarize, the key takeaways from the information we have dissected so far are:
- ICANN's revenue has been steadily increasing, with revenue in 2016 being 1.7 times that generated in 2012.
- The United States is the country from which most of the revenue originates.
- After the US, China is now the largest contributor to ICANN's revenue, having significantly increased its contributions since 2015.
- Verisign is the top contributing entity, its contribution being much greater than that of any other entity.
- Registries and Registrars are the main sources of revenue, though there is ambiguity in the classifications provided by ICANN, such as the difference between RYG and RYN. Nor is the exact mode of contribution of sponsors made clear.
- Several entities have been listed from different places in different years, sometimes depending on the role they have played such as whether they are a sponsor or registry. Registering multiple corporate entities to acquire more registrars has occurred as well.
1. Venkataraman, P. (2017). CIS' Efforts Towards Greater Financial Disclosure by ICANN. [online] The Centre for Internet and Society. [Accessed 14 Mar. 2018].
2. Murphy, K. (2016). Verisign has great quarter but sees China growth slowing. [online] DomainIncite. [Accessed 14 Mar. 2018].
3. Nro.net. (2018). RIR Accountability Questions and Answers. [online] The Number Resource Organization. [Accessed 11 Mar. 2018].
6. Murphy, K. (2016). DropCatch spends millions to buy FIVE HUNDRED more registrars. [online] DomainIncite. [Accessed 13 Mar. 2018].
Cambridge Analytica scandal: How India can save democracy from Facebook
The article was published in the Business Standard on March 28, 2018
The Cambridge Analytica scandal came to light when whistleblower Christopher Wylie accused Cambridge Analytica of gathering details of 50 million Facebook users. Cambridge Analytica used this data to psychologically profile these users and manipulate their opinions in favour of Donald Trump. The BJP and the Congress have accused each other of using the services of Cambridge Analytica in India as well. How can India safeguard the democratic process against such intervention? The author tries to answer this question in this Business Standard Special.
Those that celebrate the big data/artificial intelligence moment claim that traditional approaches to data protection are no longer relevant and therefore must be abandoned. The Cambridge Analytica episode, if anything, demonstrates how wrong they are. The principles of data protection need to be reinvented and weaponized, not discarded. In this article I shall discuss the reinvention of three such data protection principles. Apart from this I shall also briefly explore competition law solutions.
Collect data only if mandated by regulation
One, data minimization is the principle that requires the data controller to collect data only if mandated to do so by regulation, or because it is a prerequisite for providing a functionality. For example, Facebook’s Messenger app on Android harvests call records and metadata without any consumer-facing feature on the app that justifies such collection. This is therefore a clear violation of the data minimization principle. One way to reinvent this principle is by borrowing from the best practices around warnings and labels on packaging introduced by the global anti-tobacco campaign. A permanent bar could be required in all apps, stating ‘Facebook holds W number of records across X databases over the time period Y, which totals Z Gb’. Each of these letters could be a hyperlink, allowing the user to easily drill down to the individual data record.
Consent must be explicit, informed and voluntary
Two, the principle of consent requires that the data controller secure explicit, informed and voluntary consent from the data subject unless there are exceptional circumstances. Unfortunately, consent has been reduced to a mockery today through obfuscation by lawyers in verbose “privacy notices” and “terms of services”. To reinvent consent we need to bring ‘Do Not Dial’ registries into the era of big data. A website maintained by the future Indian data protection regulator could allow individuals to check against their unique identifiers (email, phone number, Aadhaar). The website would provide a list of all data controllers that are holding personal information against a particular unique identifier. The data subject should then be able to revoke consent with one-click. Once consent is revoked, the data controller would have to delete all personal information that they hold, unless retention of such information is required under law (for example, in banking law). One-click revocation of consent will make data controllers like Facebook treat data subjects with greater respect.
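A minimal sketch of how such a registry and one-click revocation could work follows. The controller names, identifier and retention flag are all hypothetical illustrations of the proposal, not an existing system.

```python
# Hypothetical consent registry: look up which data controllers hold data
# against an identifier, then revoke consent in one step. Controllers flagged
# as legally required to retain data (e.g. under banking law) keep it.

registry = {
    "user@example.com": {
        "SocialApp": {"retention_required": False},
        "ExampleBank": {"retention_required": True},  # e.g. banking law
    }
}

def revoke_all(identifier):
    """One-click revocation: delete holdings unless law requires retention."""
    holdings = registry.get(identifier, {})
    deleted, retained = [], []
    for controller, info in list(holdings.items()):
        if info["retention_required"]:
            retained.append(controller)   # kept only as required under law
        else:
            deleted.append(controller)
            del holdings[controller]
    return deleted, retained

deleted, retained = revoke_all("user@example.com")
print(deleted, retained)  # ['SocialApp'] ['ExampleBank']
```

The regulator's website would sit in front of such a registry, letting a data subject query by email, phone number or Aadhaar and trigger the revocation with a single click.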
There must be a right to explanation
Three, the right to explanation, most commonly associated with the General Data Protection Regulation (GDPR) from the EU, is a principle that requires the data controller to make transparent the automated decision-making process when personal information is implicated. So far it has been seen as a reactive measure for user empowerment. In other words, the explanation is provided only when there is a demand for it.
The Facebook feeds used for manipulation through micro-targeting of content are an example of such automated decision making. Regulation in India should require a user empowerment panel accessible through a prominent icon that appears repeatedly in the feed. On clicking the icon, the user will be able to modify the objectives that the algorithm is maximizing for. She can then choose to see content that targets a bisexual rather than a heterosexual, a Muslim rather than a Hindu, a conservative rather than a liberal, etc. At the moment, Facebook only allows the user to stop being targeted for advertisements based on certain categories. However, to be less susceptible to psychological manipulation, the user should be allowed to define these categories, for both content and advertisements.
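One way such an empowerment panel could work is to let the user supply the weights that the ranking algorithm maximizes. A minimal sketch, in which the objective names ("engagement", "accuracy", "diversity") and the per-item scores are illustrative assumptions:

```python
def rank_feed(items, weights):
    """Rank feed items by a user-chosen objective instead of a fixed
    engagement score. Each item carries per-objective scores; the user's
    panel supplies the weights."""
    def score(item):
        return sum(weights.get(k, 0.0) * v for k, v in item["scores"].items())
    return sorted(items, key=score, reverse=True)

items = [
    {"id": "a", "scores": {"engagement": 0.9, "accuracy": 0.2, "diversity": 0.1}},
    {"id": "b", "scores": {"engagement": 0.4, "accuracy": 0.8, "diversity": 0.6}},
]

# Platform default: maximize time-on-site.
default = rank_feed(items, {"engagement": 1.0})
# User override from the empowerment panel: prefer accurate, diverse content.
user_set = rank_feed(items, {"accuracy": 0.7, "diversity": 0.3})
print([i["id"] for i in default])
print([i["id"] for i in user_set])
```

The same mechanism extends to advertisement categories: the user's weight vector, not the platform's, determines what is surfaced.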
How to fix the business model?
From a competition perspective, Google and Facebook have destroyed the business model for real news, and replaced it with a business model for fake news, by monopolizing digital advertising revenues. Their algorithms are designed to maximize the amount of time that users spend on their platforms, and the companies therefore have no incentive to distinguish between truth and falsehood. This contemporary crisis requires three types of interventions: one, appropriate taxation and transparency to the public, so that the revenue streams for fake news factories can be ended; two, the construction of a common infrastructure that can be shared by all traditional and new media companies in order to recapture digital advertising revenues; and three, immediate action by the competition regulator to protect competition between advertising networks operating in India.
The Google challenge
With Google, the situation is even worse, since Google has dominance in both the ad network market and in the operating system market. During the birth of competition law, policy-makers and decision-makers acted to protect competition per se. This is because they saw competition as an essential component of democracy, open society, innovation, and a functioning market. When the economists from the Chicago school began to influence competition policy in the USA, they advocated for a singular focus on the maximization of consumer interest. The adoption of this ideology has resulted in competition regulators standing powerlessly by while internet giants wreck our economy and polity. We need to return to the foundational principles of competition law, which might even mean breaking Google into two companies. The operating system should be divorced from other services and products to prevent them from taking advantage of vertical integration. We as a nation need to start discussing the possible end stages of such a breakup.
In conclusion, all the fixes listed above require either the enactment of a data protection law or the amendment of our existing competition law. This, as we all know, can take many years. However, there is an opportunity for the government to act immediately if it wishes to. By utilizing procurement power, the central and state governments of India could support free and open source software alternatives to Google’s products, especially in the education sector. The government could also stop using Facebook, Google and Twitter for e-governance, and thereby stop providing free advertising for these companies in print and broadcast media. This will make it easier for emerging firms to dislodge hegemonic incumbents.
DIDP Request #29 - Revenue breakdown by source for FY 2017
ICANN’s published financial records for 2017 were missing a crucial document: the breakdown of its revenue by source, listing all the legal entities that contributed to it, including Regional Internet Registries and the various registrars, along with their regions of origin, among other details. We requested this document in order to get a better idea of how these entities contribute to ICANN.
In response to our DIDP, ICANN notified us that it is in the process of compiling this report for the year ending June 2017 and will publish it by 31 May 2018. Further, ICANN remarked that this procedure of making its revenue by source public was developed as part of its transparency enhancements, in response to an earlier DIDP that CIS submitted in 2015.
The said report will be published on their Financial page within the time frame mentioned.
Government gives free publicity worth 40k to Twitter and Facebook
We analyzed five English-language newspapers daily for two weeks, from March 12 to 26: one week of newspapers in Lucknow and the second week in Bangalore. Facebook, Twitter, Instagram and Alphabet-backed services such as YouTube and Google Plus were part of our survey. Of a total of 33 advertisements (14 in Lucknow and 19 in Bangalore), Twitter stands out as the most prominent advertising platform used by government agencies, featuring in 30 of the ads, but Facebook, featured in 29, accounted for the more expensive placements. To ascertain the cost of this publicity, we used current advertisement rates for the Times of India, as our purpose was solely to give a rough estimate of how much the government is spending.
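The rough arithmetic behind such an estimate can be sketched as follows; the per-ad rates below are hypothetical placeholders, not the Times of India's actual card rates, chosen only to illustrate how a platform featured in fewer ads can still account for higher spend:

```python
# Rough spend estimate: (number of ads featuring a platform) x (assumed
# per-ad display rate). Ad counts are from the survey; the rates are
# hypothetical placeholders, not actual newspaper card rates.
ads_featuring = {"Twitter": 30, "Facebook": 29}
assumed_rate_per_ad = {"Twitter": 400, "Facebook": 450}  # INR, illustrative

def estimated_spend(counts, rates):
    """Multiply each platform's ad count by its assumed per-ad rate."""
    return {platform: counts[platform] * rates[platform] for platform in counts}

spend = estimated_spend(ads_featuring, assumed_rate_per_ad)
print(spend)
print("total:", sum(spend.values()))
```

With these placeholder numbers, Facebook's 29 placements cost more than Twitter's 30, mirroring the pattern the survey found.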
Advertising of this nature is not merely a problem of favoring some social media companies over others; it is also symptomatic of a bigger problem: the lack of native e-governance mechanisms, which causes the Government to rely on and promote others’. Where we do have guidelines, they are not being followed. By outsourcing e-governance to platforms such as TwitterSeva, a feature created by the Twitter India team to help citizens connect better with government services, there is less of an impetus to build better government websites.
If this is so because we currently do not have the capacity to build such platforms ourselves, then it is imperative that this changes. Government functions should be executed either on digital infrastructure owned by the Government or on open and interoperable systems. If anything, the surveyed social media platforms can be used to enhance pre-existing facilities. Currently, however, the converse is true, with these platforms overshadowing the presence of e-governance websites. Officials have started responding to complaints on Twitter, diluting the significance of the complaint mechanisms on their respective departments’ portals; often enough, such features are not even available on the relevant government website. This sets a dangerous precedent for a citizen management system, as the records of such interactions are then in the hands of companies that may not exist in the future. As a result, they can control access to such records or, worse, tamper with them. The longevity and reliability of such data can be ensured only if it is stored within the Government’s reach, or if it is open and public with a first copy stored on Government records, which ensures transparency as well. Data portability is an important facet of this issue, as well as a right consumers should possess. It provides for support of many devices, enables transition to alternative technologies and, lastly, ensures that all the data, like other public records, will be available upon request through the Right to Information procedure. The last is vital to upholding the spirit of transparency envisioned in the RTI process, since interactions of the government with citizens then fall under its ambit and are available for disclosure to anyone concerned.
Secondly, such practices by the Government are enhancing the monopoly of these companies in the market, effectively discouraging competition and, eventually, innovation. While a certain elite stratum of the population might opt for Twitter or Facebook as their mode of conveying grievances, this may not hold true for the rest of India’s online population.
Picking players in a free market violates technology and vendor neutrality, a practice essential in e-governance to provide a level playing field for all competing technologies. Projecting only a few platforms as de facto mediums of communication with the government inhibits the freedom of citizens to air their grievances through a vendor or technology they are comfortable with. At the same time, it makes the Government a mouthpiece for companies that gain free publicity and consolidate their popularity. Government apps such as the Swachh Bharat app, itself an e-governance platform, do not offer much additional functionality: they either mirror the website or are a less mature version of it. This leads to the problem of fragmentation, with many avenues for complaining, such as the website, the app, Twitter and so on. Consequently, it is unclear which platform those handling the complaints will prioritize when responding. Will I be responded to sooner if I tweet a complaint as opposed to putting it up on the app? An interoperable system can solve this: the Government could maintain a single dashboard of complaints from all channels, with responses made out evenly. Under such a system, complaints arriving via Facebook, for example, would sit alongside those from Twitter Seva, making it one equal platform among many rather than the favored channel it is today.
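An interoperable complaints dashboard of the kind suggested above could, in minimal form, look something like this; the channel names and complaint fields are illustrative assumptions:

```python
from datetime import datetime

# A channel-neutral complaint store: whichever avenue a citizen uses
# (website, app, Twitter, etc.), entries land in one dashboard so that
# no single platform gets priority in the response queue.
class ComplaintDashboard:
    def __init__(self):
        self.complaints = []

    def ingest(self, channel, citizen, text, received_at):
        """Accept a complaint from any channel into the common queue."""
        self.complaints.append({
            "channel": channel, "citizen": citizen,
            "text": text, "received_at": received_at,
        })

    def next_to_handle(self):
        """First come, first served across all channels."""
        return min(self.complaints, key=lambda c: c["received_at"])

dash = ComplaintDashboard()
dash.ingest("twitter", "A", "Streetlight broken", datetime(2018, 3, 14, 10, 5))
dash.ingest("website", "B", "Garbage not collected", datetime(2018, 3, 14, 9, 30))
dash.ingest("app", "C", "Water supply issue", datetime(2018, 3, 14, 11, 0))
print(dash.next_to_handle()["channel"])  # the earliest complaint wins, not the channel
```

The design choice is that ordering depends only on arrival time, so a tweet earns no faster response than a filing on the department's own portal.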
Recent events have illustrated how detrimental the storage of data by these giants can be in terms of privacy; data security concerns are another consequence of such leaks. This is a long-overdue call not only for a better data protection law but also for the Government to realize that these platforms cannot be trusted. The hiring of Cambridge Analytica to influence voters in the US elections, based on their Facebook profiles and ancillary data, effectively put the governance of the country up for sale by exploiting these privacy and security issues. By building e-governance on their backbone, India is not far from inviting trouble as well. It is unnecessary and dangerous to have a go-between in matters between an individual and the state.
As this article was being written, the Election Commission confirmed that it is partnering with Facebook for the Karnataka Assembly elections to promote activities such as voter ID enrollment and voter participation. Initiatives like these tie the government even closer to these companies, which is concerning, and cement the latter’s stronghold.
Note: Our survey data and results are attached to this post. All research was collected by Shradha Nigam, a fifth-year student at NLSIU, Bangalore.
Survey Data and Results
This report is based on a survey of government advertisements in English language newspapers in relation to their use of social media platforms and dedicated websites (“Survey”). For the purpose of this report, the ambit of the social media platforms has been limited to the use of Facebook, Twitter, YouTube, Google Plus and Instagram. The report was prepared by Shradha Nigam, a student from National Law School of India University, Bangalore. Read the full report here.
Artificial Intelligence in Governance: A Report of the Roundtable held in New Delhi
Event Report: Download (PDF)
This report provides a summary of the proceedings of the Roundtable on Artificial Intelligence (AI) in Governance (hereinafter referred to as ‘the Roundtable’). The Roundtable took place at the India Islamic Cultural Centre in New Delhi on March 16, 2018 and included participation from academia, civil society, law, finance, and government. The main purpose of the Roundtable was to discuss the deployment and implementation of AI in various aspects of governance within the Indian context.
The Roundtable began with a presentation by Amber Sinha (Centre for Internet and Society - CIS) providing an overview of the CIS’s research objectives and findings thus far. During this presentation, he defined both AI and the scope of CIS’s research, outlining the areas of law enforcement, defense, education, judicial decision making, and the discharging of administrative functions as the main areas of concerns for the study. The presentation then outlined the key AI deployments and implementations that have been identified by the research in each of these areas. Lastly, the presentation raised some of the ethical and legal concerns related to this phenomenon.
The presentation was followed by the Roundtable discussion, in which various topics regarding the usages, challenges, ethical considerations and implications of AI in the sector were discussed. This report has identified a number of key themes evident throughout these discussions. These themes include: (1) the meaning and scope of AI, (2) AI’s sectoral applications, (3) human involvement with automated decision making, (4) social and power relations surrounding AI, (5) regulatory approaches to AI and, (6) challenges to adopting AI. These themes are explored further below.
Meaning and Scope of AI
One of the first tasks recommended by the group of participants was to define the meaning and scope of AI and the way those terms are used and adopted today. These concerns included the need to establish a distinction between the use of algorithms, machine learning, automation and artificial intelligence. Several participants believed that establishing consensus around these terms was essential before proceeding towards a stage of developing regulatory frameworks around them.
It was generally agreed that AI, as we understand it, does not necessarily extend to complete independence in automated decision making; it refers instead to the varying levels of machine learning (ML) and the automation of certain processes that have already been achieved. Several concerns that emerged during the course of the discussion centred on the question of autonomy and transparency in ML and algorithmic processing. Stakeholders recommended that, over and above the debates of humans in the loop,[1] on the loop[2] and out of the loop,[3] there were several other gaps with respect to AI and its usage in industry today which also need to be considered before building a roadmap for future usage. Key issues like information asymmetries, communication lags, a lack of transparency, the increasing mystification of the coding process and the centralization of power all need to be examined and analysed under the rubric of developing regulatory frameworks.
Takeaway Point: The group brought out the need for standardization of terminology, as well as the establishment of globally replicable standards surrounding the usage, control and proliferation of AI. The discussion also brought up the problems with universal applicability of norms. One participant raised the lack of normative frameworks around the usage and proliferation of AI; another responded by alluding to the Asilomar AI principles,[4] a set of 23 principles aimed at directing and shaping future AI research. The discussion brought out further issues regarding the enforceability, universal applicability and global relevance of these principles. Participants recommended the development of a shorter, more universally applicable regulatory framework that could address various contextual limitations as well.
AI Sectoral Applications
Participants mentioned a number of both current and potential applications of AI technologies, referencing the defence sector, the financial sector, and the agriculture sector. There are several developments taking place on the Indian military front with the Committee on AI and National Security being established by the Ministry of Defence. Through the course of the discussion it was also stated that the Indian Armed Forces were very interested in the possibilities of using AI for their own strategic and tactical purposes. From a technological standpoint, however, there has been limited progress in India in researching and developing AI.
While India does deploy some Unmanned Aerial Vehicles (UAVs), they are mostly bought from Israel, and often are not autonomous. It was also pointed out that contrary to reportage in the media, the defence establishment in India is extremely cautious about the adoption of autonomous weapons systems, and that the autonomous technology being rolled out by the CAIR is not yet considered trustworthy enough for deployment.
Discussions further revealed that the few technologies that have a relative degree of autonomy are primarily loitering munitions and are used to target radar installations for reconnaissance purposes. One participant mentioned that while most militaries are interested in deploying AI, it is primarily from an Intelligence, Surveillance and Reconnaissance (ISR) perspective. The only exception to this generalization is China, where the military ethos and command structure would work better with increased reliance on independent AI systems. One major AI system rolled out by the US is Project Maven, which is primarily an ISR system. The aim of using these systems is to improve decision making and enhance data analysis, particularly since battlefields generate a lot of data that currently goes unused.
Another sector discussed was the securities market, where algorithms were used from an analytical and data collection perspective. A participant referred to the fact that machine learning was being used for processes like credit and trade scoring -- all with humans on the loop. The participant further suggested that while trade scoring was increasingly automated, the overall predictive nature of such technologies remained self-limiting, wherein statistical models, collected data and pattern analysis were used to predict future trends. The participant questioned whether these algorithms could be considered AI in the truest sense of the term, since they primarily performed statistical functions and data analysis.
One participant also recommended the application of AI to sectors like agriculture, with the intention of gradually acclimatizing users to the technology itself. Respondents also stated that while AI technologies were being used in the agricultural space, it was primarily from the standpoint of data collection and analysis as opposed to predictive methods. It was mentioned that a challenge to the broad adoption of AI in this sector is that the core problems of AI as a methodology – information asymmetries, excessive data collection, centralization of control and the obfuscatory nature of code – would remain unaddressed. Lastly, participants also suggested that within the Indian framework not much was being done aside from addressing farmers’ queries and analysing the data from those concerns.
Takeaway Point: The discussion drew attention to the various sectors where AI was currently being used -- such as the military space, agricultural development and the securities market -- as well as potential spaces of application -- such as healthcare and manual scavenging. The key challenges that emerged were information asymmetries with respect to the usage of these technologies as well as limited capacity in terms of technological advancement.
Human Involvement with Automated Decision Making
Large parts of discussions throughout the Roundtable event were preoccupied with automated decision making and specifically, the involvement of humans (human on and in the loop) or lack thereof (human out of the loop) in this process. These discussions often took place with considerations of AI for prescriptive and descriptive uses.
Participants expressed that human involvement was not needed when AI was being used for descriptive purposes, such as determining relationships between variables in large data sets. Many agreed on the superior ability of ML and similar AI technologies to describe large and unorganized datasets. It was the prescriptive uses of AI where participants saw the need for human involvement, with many questioning whether the technology should make more important decisions by itself.
The need for human involvement in automated decision making was further justified by references to various instances of algorithmic bias in the American context. One participant, for example, brought up the use of algorithmic decision making by a school board in the United States for human resource practices (hirings, firing, etc.) based on the standardized test scores of students. In this instance, such practices resulted in the termination of teachers primarily from low income neighbourhoods.[5] The main challenge participants identified in regards to human on the loop automated decision making is the issue of capacity, as significant training would have to be achieved for sectors to have employees actively involved in the automated decision making workflow.
An example in the context of the healthcare field was brought up by one participant arguing for human in the loop in regards to prescriptive scenarios. The participant suggested that AI technology, when given x-ray or MRI data for example, should only be limited to pointing out the correlations of diseases with patients’ scans/x-rays. Analysis of such correlations should be reserved for the medical expertise of doctors who would then determine if any instances of causality can be identified from this data and if it’s appropriate for diagnosing patients.
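The division of labour the participant describes, with the machine restricted to surfacing correlations and the clinician retaining the diagnostic decision, could be sketched as follows; the condition names and scores are illustrative, not a real model:

```python
def machine_flag(scan_scores, threshold=0.8):
    """Descriptive step: the system only surfaces correlations above a
    threshold; it never issues a diagnosis on its own."""
    return [condition for condition, score in scan_scores.items() if score >= threshold]

def record_diagnosis(flagged, doctor_confirms):
    """Prescriptive step: nothing enters the patient record without the
    human in the loop. `doctor_confirms` stands in for the clinician's
    judgement on each flagged correlation."""
    return [condition for condition in flagged if doctor_confirms(condition)]

# Toy correlation scores for one scan.
scan_scores = {"pneumonia": 0.91, "fracture": 0.35, "effusion": 0.83}
flagged = machine_flag(scan_scores)
# The doctor confirms only one of the two flagged correlations.
confirmed = record_diagnosis(flagged, lambda condition: condition == "pneumonia")
print(flagged, confirmed)
```

The key property is that `record_diagnosis` cannot be bypassed: the descriptive output is inert until a human acts on it.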
It was emphasized that, despite a preference for human on/in the loop in automated decision making, there is a need to be cognisant of techno-solutionism, given the human tendency towards over-reliance on technology when making decisions. A need for command-and-control structures and protocols was emphasized for various governance sectors in order to avoid potentially disastrous results through a system of checks and balances. It was noted that the defense sector has already developed such protocols, having established a chain of command through its long history of algorithmic decision making (e.g. the Aegis Combat System, in use by the US Navy since the 1980s).
One key reason why militaries prefer human in and on the loop systems as opposed to out of the loop systems is because of the protocol associated with human action on the battlefield. International Humanitarian Law has clear indicators of what constitutes a war crime and who is to be held responsible in the scenario but developing such a framework with AI systems would be challenging as it would be difficult to determine which party ought to be held accountable in the case of a transgression or a mistake.
Takeaway Point: It was reiterated by many participants that neither AI technology nor India’s regulatory framework is at a point where AI can be trusted to make significant decisions alone -- especially when such decisions evaluate humans directly. It was recommended that human out of the loop decision making be reserved for descriptive practices, whereas human on and in the loop decision making be used for prescriptive practices. Lastly, it was also suggested that appropriate protocols be put in place to direct those involved in the automated decision making workflow, particularly when the process involves judgements and complex decision making in sectors such as jurisprudence and the military.
The Social and Power Relations Surrounding AI
Some participants emphasized the need to contextualize discussions of AI and governance within larger themes of poverty, global capital and power/social relations. Their concern was that the use of AI technologies would only create and reinforce existing power structures, when it should instead be utilized towards ameliorating such issues. Manual scavenging, for example, was identified as an area where AI could be used to good effect if coupled with larger socio-political policy changes. Several hierarchies could potentially be reinforced through this process, and all these failings need to be examined thoroughly before such a system is adopted and incorporated into the real world.
Furthermore, the discussion also revealed that the objectivity attributed to AI and ML tends to gloss over the implicit biases in the minds of the creators that might work themselves into the code. Fears of technology recreating a more exclusionary system were not entirely unfounded: participants pointed out that the knowledge base of the user would determine whether technology was used as a tool of centralization or democratization.
One participant also questioned the concept of governance itself, contrasting the Indian government’s usage of the term in the 1950s (as it appears in the Directive Principles) with that of the World Bank in the 1990s.
Takeaway Point: Discussions of the implementation and deployment of AI within the governance landscape should attempt to take into consideration larger power relations and concepts of equity.
Regulatory Approaches to AI
Many recognized the need for AI-specific regulations across Indian sectors, including governance. These regulations, participants stated, should draw from notions of accountability, algorithmic transparency and efficiency. Furthermore, it was also stated that such regulations should consider the variations across the different arms of the governance sector, especially in regards to defence. One participant, pointing to the larger trend towards automation, recommended the establishment of certain fundamental guidelines aimed at directing the applicability of AI in general. The participant drew attention to the need for a robust evaluation system for various sectors (the criminal justice system, the securities market, etc.) as a way of providing checks on algorithmic biases. Another emphasized the need for regulations mandating better quality data, so as to ensure machine readability and processability for various AI systems.
Another key point that emerged was the importance of examining how specific algorithms performed processes like identification or detection. A participant recommended the need to examine the ways in which machines identify humans and what categories/biases could infiltrate machine-judgement. They reiterated that if a new element was introduced in the system, the pre-existing variables would be impacted as well. The participant further recommended that it would be useful to look at these systems in terms of the couplings that get created in order to determine what kinds of relations are fostered within that system.
The roundtable saw some debate regarding the most appropriate approach to developing such regulations. Some participants argued for a harms-based approach, particularly in regards to determining if regulations are needed all together for specific sectors (as opposed to guidelines, best practices, etc.). The need to be cognisant of both individual and structural harms was emphasized, mindful of the possibility of algorithmic biases affecting traditionally marginalized groups.
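One concrete form the bias checks discussed above could take is a disparate-impact audit of the kind used in US employment-discrimination practice (the "four-fifths rule"): compare selection rates across groups and flag large disparities. A minimal sketch, with toy audit numbers and hypothetical group labels:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> selection rate per group."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate; a common
    rule of thumb flags values below 0.8 (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Toy audit of an automated screening decision, by demographic group:
# (number selected, number evaluated).
audit = {"group_a": (45, 100), "group_b": (27, 100)}
ratio = disparate_impact_ratio(audit)
print(f"{ratio:.2f}", "flag for review" if ratio < 0.8 else "ok")
```

A check this simple cannot settle whether a system is fair, but it illustrates how an evaluation system could surface candidate harms for human review.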
Others saw value in a harms-based approach only insofar as it could help outline the appropriate penalties in the event of regulations being violated, arguing instead for a rights-based approach, as it allowed greater room for technological change. An approach that kept emerging AI technologies in mind was reiterated by a number of participants as being crucial to any regulatory framework. The need for a regulatory space that allowed technological experimentation without the fear of constitutional violation was also communicated.
Takeaway Point: The need for an AI-specific regulatory framework cognisant of differentiations across sectors in India was emphasized. There was some debate about the most appropriate approach for such a framework, with a harms-based approach identified by many as providing the best perspective on regulatory need and penalties. Some identified the rights-based approach as providing the most flexibility for a rapidly evolving technological landscape.
Challenges to Adopting AI
Of all the concerns regarding the adoption of algorithms, ML and AI, the two key points of resistance that emerged centred on issues of accountability and transparency. Participants suggested that within an AI system predictability would be a key concern, and that in the absence of predictable outcomes, establishing redressal mechanisms would pose key challenges as well.
A discussion was also initiated regarding the problems involved in attributing responsibility within the AI chain as well as the need to demystify the process of using AI in daily life. While reiterating the current landscape, participants spoke about how the usage of AI is currently limited to the automation of certain tasks and processes in certain sectors where algorithmic processing is primarily used as a tool of data collection and analysis as opposed to an independent decision making tool.
One of the suggestions and thought points that emerged during the discussion was whether a gradual adoption of AI on a sectoral basis might be more beneficial as it would provide breathing room in the middle to test the system and establish trust between the developers, providers, and consumers. This prompted a debate about the controllers and the consumers of AI and how the gap between the two would need to be negotiated. The debate also brought up larger concerns regarding the mystification of AI as a process itself and the complications of translating the code into communicable points of intervention.
Another major issue that emerged was the question of attributing responsibility in the case of mistakes. In the legal process as it currently exists, human imperfections notwithstanding, it is possible to attribute blame for decisions taken to the actors undertaking the action. Similarly, in the defence sector it is possible to trace the chain of command and identify key points of failure, but in the case of AI-based judgements it would be difficult to place responsibility or blame. This observation led to a debate regarding accountability in the AI chain; it remained inconclusive whether an error should be attributed to the developer, the distributor or the consumer.
A suggestion offered to counter the information asymmetry and reduce the mystification of computational methods was to make the algorithm and its processes transparent. This sparked a debate, however, as participants stated that while such transparency ought to be sought after and aspired towards, it would be accompanied by certain threats to the system. A key challenge pointed out was that if the algorithm were made transparent and its details shared, there would be several ways to manipulate it, translate it and misuse it.
Another question that emerged was the distribution of AI technologies and the centralization of the proliferation process, particularly in terms of service provision. One participant suggested that given the limited nature of research being undertaken and the paucity of resources, a limited number of companies would end up holding the best tech, the best resources and the best people. They further suggested that these technologies might end up being rolled out as a service on a contractual basis, in which case it would be important to track how the service was controlled and delivered. Models of transference would become central points of negotiation, alternating among procurement-based, lease-based, and ownership-based models of service delivery. Participants suggested that this was going to be a key factor in determining how to approach these issues from a legal and policy standpoint.
Takeaway Point: The two key points of resistance that emerged during the course of discussion were accountability and transparency. Participants pointed out the various challenges involved in attributing blame within the AI chain and they also spoke about the complexities of opening up AI code, thereby leaving it vulnerable to manipulation. Certain other challenges that were briefly touched upon were the information asymmetry, excessive data collection, centralization of power in the hands of the controllers and complicated service distribution models.
Conclusion
The Roundtable provided some insight into larger debates regarding the deployment and applications of AI in the governance sector of India. The need for a regulatory framework as well as globally replicable standards surrounding AI was emphasized, particularly one mindful of the differing needs of the various fields of the governance sector (especially defence). A need for human-on/in-the-loop practices in automated decision making was also highlighted for prescriptive instances, particularly where such decisions directly evaluate humans. Contextualising AI within its sociopolitical parameters was another key recommendation, as it would help filter out the biases that might work themselves into the code and affect the performance of the algorithm. Further, it is necessary to examine the involvement and influence of the private sector in the deployment of AI for governance, which often translates into the delivery of technological services by private actors to public bodies in the discharge of public functions. This has clear implications for requirements of transparency and procedural fairness even in private-sector delivery of these services. Defining the meaning and scope of AI while working to demystify algorithms themselves would serve to strengthen regulatory frameworks as well as make AI more accessible for the user/consumer.
[1]. Automated decision making model where final decisions are made by a human operator
[2]. Automated decision making model where decisions can be made without human involvement but a human can override the system.
[3]. A completely autonomous decision making model requiring no human involvement
[4]. https://futureoflife.org/ai-principles/
[5]. The participant was drawing this example from Cathy O’Neil’s Weapons of Math Destruction, (Penguin, 2016), at 4-13.
A look at two problematic provisions of the draft Anti-trafficking bill
On 28 Feb 2018, the Union Cabinet approved ‘The Trafficking of Persons (Prevention, Protection and Rehabilitation) Bill, 2018’ (‘the bill’) for introduction in Parliament. This comes after a series of consultations on an earlier 2016 draft bill, which had faced its fair share of criticism. As per the Press Information Bureau announcement, the Ministry of Women and Child Development met with various stakeholders, including 60 NGOs, and has incorporated many of the suggestions put forth. They have also stated that ‘the new law will make India a leader among South Asian countries to combat trafficking.’
However, at first glance, there appear to be several issues with overbroad or vague language in the drafting of the bill that stretch it into potentially problematic areas. This post will focus on two such provisions that could have a deleterious effect on the freedom of expression. As the bill is currently not publicly available, a stakeholder’s copy of the draft is being used to source these provisions. The relevant sections are reproduced below for convenience. (Emphasis in bold is as provided by the author.)
Section 39: Buying or Selling of any person
39. (1) Whoever buys or sells any person for a consideration, shall be punished with rigorous imprisonment for a term which shall not be less than seven years but may extend to ten years, and shall also be liable to fine which shall not be less than one lakh rupees.
(2) Whoever solicits or publicises electronically, taking or distributing obscene photographs or videos or providing materials or soliciting or guiding tourists or using agents or any other form which may lead to the trafficking of a person shall be punished with rigorous imprisonment for a term which shall not be less than five years but may extend to ten years, and shall also be liable to fine which shall not be less than fifty thousand rupees but which may extend to one lakh rupees.
The grammatical acrobatics of section 39(2) aside, this anti-solicitation provision is severely problematic in that it mandates punishment even for a vaguely defined action or actions that may not actually be connected to the trafficking of a person. In other words, the provision doesn’t require any of the actions to be connected to trafficking in their intent or even outcome, but only in potential connection to the outcome. At the same time, it says these ‘shall’ be punished!
This vagueness, which ignores actual or even probabilistic causation, flies in the face of standard criminal law, which requires mens rea along with actus reus. The excessively wide scope of this badly drafted provision leaves it prone to abuse. For example, the provision currently allows the following interpretation: ‘Whoever publicizes electronically, by providing materials in any form, which may lead to trafficking of a person shall be punished…’. Even the electronic publicizing of an academic study on trafficking could fall under the provision as it currently reads, if it is argued that publishing studies showing the prevalence of trafficking ‘may lead to the trafficking of a person’! It is not hard to imagine an academic study that shows trafficking numbers at embarrassingly high rates being threatened with this provision. Similarly, any of our vast number of self-appointed moral guardians could pull within this provision any artistic work that they personally find offensive or ‘obscene’. Simply put, without any burden of showing a causal connection, it could be argued that anything ‘may lead’ to the trafficking of a person. Needless to say, this paves the way for a severe chilling effect on free speech, especially on critical speech around trafficking issues.
Section 41: Offences related to media
41. (1) Whoever commits trafficking of a person with the aid of media, including, but not limited to print, internet, digital or electronic media, shall be punished with rigorous imprisonment for a term which shall not be less than seven years but may extend to ten years and shall also be liable to fine which shall not be less than one lakh rupees.
(2) Whoever distributes, or sells or stores, in any form in any electronic or printed form showing incidence of sexual exploitation, sexual assault, or rape for the purpose of exploitation or for coercion of the victim or his family members, or for unlawful gain shall be punished with rigorous imprisonment for a term which shall not be less than three years but may extend to seven years and shall also be liable to fine which shall not be less than one lakh rupees.
The drafters of this bill have perhaps overlooked the fact that, unlike the physical world, the infrastructure of the electronic/digital world requires third-party intermediaries to handle information during most forms of electronic activity, whether transmission, storage or display. As it is not feasible, desirable or even practically possible for intermediaries to verify the legality of every bit of data they transfer or store, the law provides ‘safe harbours’ for intermediaries, protecting them from liability for the information being transmitted through them. These ensure that the entities that form this architecture and act as intermediary platforms are able to operate smoothly and without fear. Without this protection, intermediaries are put in the unenviable position of having to monitor un-monitorable amounts of data, and of facing legal action for the slip-ups that are bound to happen regularly. Furthermore, there are several layers of free speech and privacy issues associated with placing multiple gatekeepers on the expression of speech online. A charitable reading of a provision that does not recognise safe harbours for third-party intermediaries would be that the drafters of the bill have simply not realised that the users who upload and initiate transfers of information online are not the same parties who carry out the actual transmission of the information.
Distribution, selling or storing of information online would require the transmission of information over intermediaries, as well as the temporary storage of such information on intermediary platforms. In India, intermediaries engaging with transmission or temporary storage of information are provided safe harbour[1] by Section 79 of the Information Technology Act, 2000 (‘IT Act’), so long as they:
(i) act as a mere ‘conduit’ and do not initiate the transmission, select the receiver of the transmission, or select or modify the information contained in the transmission.
(ii) exercise due diligence while discharging duties under this Act, and observe other guidelines that the Central Government may prescribe.
The Information Technology (Intermediary Guidelines) Rules, 2011, list out the nature of the due diligence to be followed by intermediaries to claim exemption under Section 79 of the IT Act.
Intermediaries will not be granted safe harbour if they have conspired, abetted, aided or induced commission of the unlawful act, or if they do not remove or disable access to information upon receiving actual knowledge, or notice from the Government, of the information that is transmitted or stored by the intermediary being used for unlawful purposes.
Thus it can be seen that the IT Act already provides an in-depth regime for intermediary liability. Given its non-obstante clause, which states that Section 79 of the IT Act would apply “Notwithstanding anything contained in any law for the time being in force”, as well as the reiteration of the IT Act’s overriding effect via Section 81 — which states that the provisions of the Act ‘shall have effect notwithstanding anything inconsistent therewith contained in any other law for the time being in force’ (barring the exercise of copyright or patent rights) — it is generally considered the appropriate legal framework for this issue. However, it appears that the drafters of the 2018 Anti-trafficking bill have not considered this aspect at all: they have not referenced the IT Act in this context, and have additionally added their own non-obstante clause in Section 59 of the bill:
59. The provisions of this Act, shall be in addition to and not in derogation of the provisions of any other law for the time being in force and, in case of any inconsistency, the provisions of this Act shall have overriding effect on the provisions of any such law to the extent of the inconsistency.
So the regime prescribed by the IT Act allows for safe harbours, whereas the regime prescribed by the Anti-Trafficking bill does not, and both say that they would have overriding effect over any conflicting law. This legislative muddle could potentially be resolved by the settled principle that a special Act prevails over a general legislation. This is still a little tricky, as both are technically special Acts. It could be argued that, given that the Anti-trafficking bill focuses on trafficking while the IT Act focuses on the interface of law and technology, for the purposes of Section 41(2) of the Anti-trafficking bill the IT Act is the special legislation. Section 79 of the IT Act should thus render the relevant portion of Section 41(2) of the Anti-trafficking bill redundant. This reading would require the bill to be modified so as to remove the redundant and conflicting portion of Section 41(2).
[1] In 2016, a division bench of the Delhi High Court held in the case of Myspace Inc vs Super Cassettes Industries Ltd that safe harbour immunity for intermediaries was necessary, as it was not technically feasible to pre-screen content from third parties, and that tasking intermediaries with this responsibility could have a chilling effect on free speech. It held that their responsibility was limited to acting upon receiving ‘actual knowledge’. Earlier, in 2015, the Supreme Court of India in the landmark case of Shreya Singhal vs Union of India held that ‘actual knowledge’ must take the form of a notice via a court or government order. Thus, under our current law, intermediaries are granted a safe harbour from liability so long as they act upon court or government orders notifying them of content that is required to be taken down.
Clarification (18th August, 2018): A letter sent to the Ministry of Women and Child Development mentioned the Centre for Internet & Society as institutionally endorsing a critique of The Trafficking of Persons (Prevention, Protection and Rehabilitation) Bill, 2018. We seek to clarify that the Centre for Internet & Society did not endorse the letter to the Ministry.
What’s up with WhatsApp?
Silhouettes of mobile users next to a screen projection of the WhatsApp logo. Photo: REUTERS/Dado Ruvic/Illustration
The article by Aayush Rathi and Sunil Abraham was published in Asia Times on April 20, 2018.
Back in April 2016, when WhatsApp Inc announced it was rolling out end-to-end encryption (E2EE) for its billion-plus strong user base as a default setting, the messaging behemoth signaled to its users it was at the forefront of providing technological solutions to protect privacy.
Emphasized in the security white paper explaining the implementation is that both forms of communication – one-to-one and group – are encrypted, as are all types of messages shared within such communications – text as well as media.
Simply put, all communication taking place over WhatsApp would be decipherable only to the sender and recipient – it would be virtual gibberish even to WhatsApp.
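What “virtual gibberish” means can be made concrete with a toy sketch. The cipher below is a throwaway XOR construction invented purely for this illustration – it is not secure and bears no resemblance to the Signal protocol WhatsApp actually uses – but it shows the property at stake: a relay that never holds the key only ever sees ciphertext.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream made by hashing key||counter. Illustration only:
    # NOT a secure cipher, and nothing like a real E2EE protocol.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

# Sender and recipient share a key; the relay server does not.
shared_key = b"agreed-out-of-band"
message = b"meet at noon"

ciphertext = encrypt(shared_key, message)          # all the server ever relays
assert ciphertext != message                       # "gibberish" to the server
assert decrypt(shared_key, ciphertext) == message  # recipient recovers the text
```

The essential point is in the last three lines: the relay handles only `ciphertext`, and without `shared_key` it cannot invert the encryption.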
This announcement came in the backdrop of Apple locking horns with the FBI after being asked to provide a backdoor to unlock the San Bernardino mass shooter’s iPhone. This further reinforced WhatsApp Inc’s stand on the ensuing debate between the interplay of privacy and security in the digital age.
Kudos to WhatsApp, for there is growing discussion around how encryption and anonymity are central to enabling secure online communication, which in turn is integral to essential human rights such as freedom of opinion and expression.
WhatsApp may have taken encryption to the masses, but here we outline why WhatsApp’s provisioning of privacy and security measures needs a more granular analysis – is the company doing what it claims to be doing? Security issues with WhatsApp’s messaging protocol certainly are not new.
Man-in-the-middle attacks
A study published by a group of German researchers from Ruhr University highlighted issues with WhatsApp’s implementation of its E2EE protocol to group communications. Another paper points out how WhatsApp’s session establishment strategy itself could be problematic and potentially be targeted for what are called man-in-the-middle (MITM) attacks.
An MITM attack takes the form of a malicious actor, as the term suggests, placing itself between the communicating parties to eavesdrop or impersonate. The Electronic Frontier Foundation has also highlighted other security vulnerabilities – or trade-offs, depending upon one’s ideological inclinations – in WhatsApp’s allowing storage of unencrypted backups, in its web client, and in its approach to cryptographic key change notifications.
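The man-in-the-middle idea can be sketched with a toy unauthenticated Diffie-Hellman exchange. The parameters and names below are illustrative assumptions, not WhatsApp's actual session-establishment protocol; the point is that when public values are not authenticated, an interceptor can hold a separate shared secret with each endpoint.

```python
import secrets

# Toy parameters: a Mersenne prime and small generator, illustration only.
P = 2**127 - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Alice and Bob attempt an unauthenticated Diffie-Hellman exchange;
# Mallory intercepts and substitutes her own public value both ways.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
m_priv, m_pub = keypair()

# Alice receives m_pub believing it is Bob's, and vice versa, so each
# endpoint unknowingly shares a key with Mallory instead of the other.
alice_key = pow(m_pub, a_priv, P)
bob_key = pow(m_pub, b_priv, P)
mallory_with_alice = pow(a_pub, m_priv, P)
mallory_with_bob = pow(b_pub, m_priv, P)

assert alice_key == mallory_with_alice  # Mallory can decrypt Alice's traffic
assert bob_key == mallory_with_bob      # ...and re-encrypt it for Bob
```

Defences against this – key-change notifications and out-of-band “security code” verification – are exactly the surrounding machinery whose implementation the cited papers scrutinise.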
Much has been written questioning WhatsApp’s shifting approach to ensuring privacy too. Quoting straight from WhatsApp’s Privacy Policy: “We joined the Facebook family of companies in 2014. As part of the Facebook family of companies, WhatsApp receives information from, and shares information with, this family of companies.” Speaking of Facebook …
Culling out larger issues with WhatsApp’s privacy policies is not the intention here. What we specifically seek to explore sits right at the nexus of WhatsApp’s security and privacy provisioning and its marketing strategy: the storage of data on WhatsApp’s servers – Facebook’s, rather – or ‘blobs,’ as they are referred to in the technical paper. In WhatsApp’s words: “Once your messages (including your chats, photos, videos, voice messages, files and share location information) are delivered, they are deleted from our servers. Your messages are stored on your own device.”
In fact, this non-storage of data on their ‘blobs’ is emphasized at several other points on the official website. Let us call this the deletion-upon-delivery model.
A simple experiment
While drawing up a rigorous proof of concept is made near-impossible by WhatsApp’s closed-source messaging protocol, a simple experiment is enough to raise some very pertinent questions about WhatsApp’s outlined deletion-upon-delivery model. It should, however, be mentioned that the Signal Protocol developed by Open Whisper Systems, pivotal in WhatsApp’s rolling out of E2EE, is open source. Here is how the experiment proceeds:
Rick sends Morty an attachment.
Morty then switches off the data on her mobile device.
Rick downloads the attachment, an image.
Subsequently, Rick deletes the image from his mobile device’s internal storage.
Rick then logs into WhatsApp’s web client on his browser. (Prior to this experiment, both Rick and Morty had logged out from all instances of the web client.)
Upon a fresh log-in to the web client and opening the chat with Morty, the option to download the image is available to Rick.
The experiment concludes with bewilderment at WhatsApp’s claim of deletion-upon-delivery as outlined earlier. The only place from which Rick could have downloaded the image is Facebook’s ‘blobs.’ The attachment could not have been retrieved from Morty’s mobile device, as it had no way of sending data, nor from Rick’s mobile device, as the image no longer existed in the device’s storage.
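The reasoning behind the experiment can be restated as a minimal model. The `RelayServer` class below is a hypothetical sketch, not WhatsApp's actual architecture: it contrasts the advertised deletion-upon-delivery behaviour with a retaining server, and only the latter is consistent with a successful re-download after both handsets have lost the image.

```python
class RelayServer:
    """Toy model of a message relay with a 'blob' store (illustrative only)."""

    def __init__(self, delete_upon_delivery: bool):
        self.delete_upon_delivery = delete_upon_delivery
        self.blobs = {}

    def upload(self, msg_id: str, data: bytes) -> None:
        self.blobs[msg_id] = data

    def deliver(self, msg_id: str) -> bytes:
        data = self.blobs[msg_id]
        if self.delete_upon_delivery:
            del self.blobs[msg_id]  # the advertised model
        return data

    def redownload(self, msg_id: str):
        return self.blobs.get(msg_id)  # None once the blob is gone

# Under the advertised model, a fresh web-client login could not re-fetch
# the image once delivered and wiped from both handsets:
advertised = RelayServer(delete_upon_delivery=True)
advertised.upload("img1", b"...jpeg bytes...")
advertised.deliver("img1")
assert advertised.redownload("img1") is None

# The observed behaviour instead matches a server that retains the blob:
observed = RelayServer(delete_upon_delivery=False)
observed.upload("img1", b"...jpeg bytes...")
observed.deliver("img1")
assert observed.redownload("img1") == b"...jpeg bytes..."
```

The successful re-download in the experiment matches the second server, not the first.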
As per the Privacy Policy, the data is stored on the ‘blobs’ for a period of 30 days after transmission of a message only when it can’t be delivered to the recipient. Upon delivery, the deletion-upon-delivery model is supposed to kick in.
Another straightforward experiment that leads to a similar conclusion is observing the difference in time taken for a large attachment to be forwarded as opposed to uploaded afresh. Forwarding is palpably quicker: non-storage of attachments on the ‘blob’ would entail that the same amount of time be taken for both.
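That inference can be sketched in a few lines. The functions below are illustrative assumptions about a generic store-and-forward design, not WhatsApp's wire protocol: if the server retains the attachment, a forward need only transmit a short reference to the stored blob, while a fresh upload must transmit every byte – hence the time difference.

```python
# Hypothetical server-side blob store (names invented for this sketch).
BLOB_STORE = {}

def upload(data: bytes) -> str:
    """Store the attachment server-side and return a short reference."""
    ref = f"blob-{len(BLOB_STORE)}"
    BLOB_STORE[ref] = data
    return ref

def bytes_sent_fresh_upload(data: bytes) -> int:
    return len(data)          # every byte crosses the wire

def bytes_sent_forward(ref: str) -> int:
    return len(ref.encode())  # only the reference crosses the wire

attachment = b"x" * 5_000_000  # say, a 5 MB video
ref = upload(attachment)

# Forwarding moves orders of magnitude less data than a fresh upload --
# which is only possible if the server still holds the original blob.
assert bytes_sent_forward(ref) < bytes_sent_fresh_upload(attachment)
```

If the blob were truly deleted upon delivery, a forward would have to re-send the full attachment and the timings should be indistinguishable.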
The plot thickens. WhatsApp’s Privacy Policy goes on to state: “To improve performance and deliver media messages more efficiently, such as when many people are sharing a popular photo or video, we may retain that content on our servers for a longer period of time.” The technical paper offers no help in understanding how WhatsApp’s systems assess frequently shared encrypted media messages without decrypting them at its end.
A possible explanation could be the usage of metadata by WhatsApp, which it discloses in its Privacy Policy while simultaneously being sufficiently vague about the specifics of it. That WhatsApp may be capable of reading encrypted communication through the inclusion of a backdoor bodes well for law enforcement, but not so much for unsuspecting users.
The weakest link in the chain
Concerns about backdoors in WhatsApp’s product have led the French government to start developing its own encrypted messaging service. This will be built using Matrix – an open protocol designed for real-time communication. Indeed, the Privacy Policy lays out that the company “may collect, use, preserve, and share your information if we have a good-faith belief that it is reasonably necessary to respond pursuant to applicable law or regulations, to legal process, or to government requests.”
The Signal Protocol is the undisputed gold standard of E2EE implementations. It is the integration with the surrounding functionality that WhatsApp offers which leads to vulnerabilities. After all, a chain is only as strong as its weakest link. Assuming that the attachments stored on the ‘blobs’ are in encrypted form, indecipherable to all but the intended recipients, this does not pose a privacy risk for the users from a technological point of view.
However, it is easy to lose sight of the fact that the Privacy Policy is a legally binding document, and it specifically states that messages are not stored on the ‘blobs’ as a matter of routine. As a side note, WhatsApp’s Privacy Policy and Terms of Service are refreshing in their readability and lack of legalese.
As we were putting the final touches to this piece, news broke via WABetaInfo, a well-reputed source of information on WhatsApp features, that newer updates of WhatsApp for Android permit users to re-download media deleted up to three months earlier. WhatsApp cannot possibly achieve this without storing the media in the ‘blobs’ – in other words, in violation of its Privacy Policy.
As the aphorism goes: “When the service is free, you are the product.”
Revenge Porn Laws across the World
Country-wise legislation on “revenge porn” laws is available for download (PDF, 636 Kb)
- Alabama
- Alaska
- Arizona
- Arkansas
- California
- Colorado
- Connecticut
- Delaware
- District of Columbia
- Florida
- Georgia
- Hawaii
- Idaho
- Illinois
- Iowa
- Kansas
- Louisiana
- Maine
- Maryland
- Michigan
- Minnesota
- Nevada
- New Hampshire
- New Jersey
- New Mexico
- North Carolina
- North Dakota
- Oklahoma
- Oregon
- Pennsylvania
- South Dakota
- Tennessee
- Texas
- Utah
- Vermont
- Virginia
- Washington
- West Virginia
- Wisconsin
1. Europe
| Country | Statute | Year | Contents – definition, classification, punishment, standard of proof | Punishment | Remarks |
| --- | --- | --- | --- | --- | --- |
| United Kingdom | Section 33, Criminal Justice and Courts Act 2015 | 2015 | Makes it an offence in England and Wales to disclose private sexual photographs and films without the consent of the individual depicted and with the intent to cause distress. | Maximum sentence of two years’ imprisonment. | A call has been made to cover a wider range of offences through enactment of a new Act. The law is not applicable retroactively. |
| United Kingdom | Part 1, Section 2, Abusive Behaviour and Sexual Harm Act, 2016 | 2016 | A person (“A”) commits an offence if (a) A discloses, or threatens to disclose, a photograph or film which shows, or appears to show, another person (“B”) in an intimate situation; (b) by doing so, A intends to cause B fear, alarm or distress, or A is reckless as to whether B will be caused fear, alarm or distress; and (c) the photograph or film has not previously been disclosed to the public at large, or any section of the public, by B or with B’s consent. | Liable (a) on summary conviction, to imprisonment for a term not exceeding 12 months or a fine not exceeding the statutory maximum (or both); (b) on conviction on indictment, to imprisonment for a term not exceeding 5 years or a fine (or both). | |
| United Kingdom | Part 3, Section 51, Amendment to Justice Act | 2016 | It is an offence for a person to disclose a private sexual photograph or film if the disclosure is made (a) without the consent of an individual who appears in the photograph or film, and (b) with the intention of causing that individual distress. | Liable (a) on conviction on indictment, to imprisonment for a term not exceeding 2 years or a fine (or both); (b) on summary conviction, to imprisonment for a term not exceeding 6 months or a fine not exceeding the statutory maximum (or both). | |
| Malta | Article 208E, Maltese Criminal Code 2016 | 2016 | Punishes whoever, with an intent to cause distress, emotional harm or harm of any nature, discloses a private sexual photograph or film without the consent of the person or persons displayed or depicted in it. | On conviction, imprisonment for a term of up to two years or a fine of not less than €3,000 and not more than €5,000, or both. | |
| Germany | Regulation (EU) 2016/679 (General Data Protection Regulation); section 22, Art Copyright Law | | A person has the right to object to the unauthorised dissemination or public display of his/her photograph (section 22, Art Copyright Law). | If privacy rights are infringed, the individual affected can seek civil law remedies. | In 2014, the Bundesgerichtshof (BGH) upheld an earlier ruling from a regional court in Koblenz that a man did not have the right to keep intimate photos of his ex-lover just because she had consented to their being taken in the first place. |
| France | | 2016 | Under the new law, persons have a right to oppose the use of their personal data. | Revenge porn may be sanctioned by 2 years of imprisonment and a €60,000 fine. | |
2. United States of America
State |
Statute |
Year |
Constituents of the offence |
Punishment |
Remarks |
SB301. Code of Alabama 1975 Secs 15-20A-4 to 15-20A-43 amended. |
2017 |
Distribution of an intimate, private image, also known as "revenge porn" or "nonconsensual pornography." The law applies when the depicted person has not consented to the transmission and the sender intends to harass or intimidate the depicted person. |
A first offense is a Class A misdemeanor, punishable by up to a year in jail. Subsequent offenses are Class C felonies, punishable by up to 10 years in prison. |
|
|
|
Provides that whoever publishes or distributes electronic or printed photographs, pictures, or films that show the genitals, anus, or female breast of the other person or show that person engaged in a sexual act commits a crime of harassment in second degree.
|
Harassment in the second degree is a class B misdemeanor. Class B misdemeanors are less serious crimes, punishable by up to 90 days in jail and a fine of up to $2,000. |
|
||
Unlawful Distribution of Private Images, 2016 through amending Section 13‑1425 of the Arizona Revised Statutes |
2016 |
It provides that the distribution of images depicting states of nudity or specific sexual activities of another person is unlawful. If such disclosure is by electronic means, it is a Class 4 felony. If the person threatens to disclose but does not disclose, then it is a Class 1 Misdemeanor.
|
· Class 4 felonies are punishable up to 3.75 years in prison.
· A class 1 misdemeanor is the most serious misdemeanor offense and is punishable by up to 6 months in jail, 3 years of probation (5 years maximum probation for DUI offenses) and a $2,500 fine plus surcharges. |
· The earlier state revenge porn bill was scrapped following an ACLU lawsuit. |
|
July, 2015 |
It criminalizes the distribution of an image, picture, video, or voice or audio recording of a sexual nature to harass, frighten, intimidate, threaten, or abuse a family or household member or a person in a current or former dating relationship; and for other purposes. Such an offence is a Class A misdemeanour. |
· A Class A misdemeanor is the most serious type of misdemeanor in Arkansas and it is punishable by up to one year in jail and a fine of up to $2,500. |
Defines a “dating relationship” as a romantic or intimate relationship between two individuals and provides additional factors. |
||
2014 |
Under this provision, an act of revenge porn is committed by someone who “photographs or records by any means the image of the intimate body part or parts of another identifiable person, under circumstances where the parties agree or understand that the image shall remain private, and the person subsequently distributes the image taken, with the intent to cause serious emotional distress, and the depicted person suffers serious emotional distress.” |
It is disorderly conduct, a misdemeanor. |
|
||
2014 |
Posting a Private Image for Harassment and Posting a Private Image for Pecuniary Gain are each a Class 1 Misdemeanor. |
The defendant can be fined up to $10,000. |
|
||
October 1, 2015 |
It provides that whoever engages in unlawful dissemination of an intimate image is guilty of an offence. |
The offence is a class A misdemeanor. |
|
||
2014 |
When a person knowingly reproduces, distributes, exhibits, publishes, transmits, or otherwise disseminates a visual depiction of a person who is nude, or who is engaging in sexual conduct, when the person knows or should have known that the reproduction, distribution, exhibition, publication, transmission, or other dissemination was without the consent of the person depicted and that the visual depiction was created or provided to the person under circumstances in which the person depicted has a reasonable expectation of privacy, such person shall be guilty of violation of privacy. |
It is a class A misdemeanor; in aggravated cases, a class G felony. |
|
||
2014 |
It provides that a person knowingly discloses one or more sexual images of another identified or identifiable person when: (1) The person depicted did not consent to the disclosure of the sexual image; (2) There was an agreement or understanding between the person depicted and the person disclosing that the sexual image would not be disclosed; and (3) The person disclosed the sexual image with the intent to harm the person depicted or to receive financial gain. (b) A person who violates this subsection shall be guilty of a misdemeanor. |
Upon conviction, such person shall be fined not more than the amount set forth in section 101 of the Criminal Fine Proportionality Amendment Act of 2012, approved June 11, 2013 (D.C. Law 19-317; D.C. Official Code § 22-3571.01), imprisoned for not more than 180 days, or both. |
|
||
Florida Statute Section 784.049
|
2015 |
· “Sexually cyberharass” means to publish a sexually explicit image of a person that contains or conveys the personal identification information of the depicted person to an Internet website without the depicted person’s consent, for no legitimate purpose, with the intent of causing substantial emotional distress to the depicted person. |
A person who willfully and maliciously sexually cyberharasses another person commits a misdemeanor of the first degree, punishable as provided in s. 775.082 or s. 775.083. A person who has one prior conviction for sexual cyberharassment and who commits a second or subsequent sexual cyberharassment commits a felony of the third degree, punishable as provided in s. 775.082, s. 775.083, or s. 775.084. |
Aggrieved person can also initiate civil action to recover damages. |
|
Article 3 of Chapter 11 of Title 16 of the Official Code of Georgia
|
2014 |
· Whoever electronically transmits or posts, or causes such transmission or posting, in one or more transmissions or posts, a photograph or video which depicts nudity or sexually explicit conduct of an adult, when the transmission or post constitutes harassment or causes financial loss to the depicted person and serves no legitimate purpose to the depicted person.
|
Such person shall be guilty of a misdemeanor of a high and aggravated nature; provided, however, that upon a second or subsequent violation of this Code section, he or she shall be guilty of a felony and, upon conviction thereof, shall be punished by imprisonment of not less than one nor more than five years, a fine of not more than $100,000.00, or both. |
There is a rebuttable presumption that the Internet Service Provider was not aware of the content of such a post. |
|
Section 711-1110.9, Hawaii Revised Statutes
|
2014 |
A person commits the offense of violation of privacy in the first degree if the person knowingly discloses an image or video of another identifiable person either in the nude, as defined in section 712-1210, or engaging in sexual conduct, as defined in section 712-1210, without the consent of the depicted person, with intent to harm substantially the depicted person with respect to that person’s health, safety, business, calling, career, financial condition, reputation, or personal relationships. |
Violation of privacy in the first degree is a class C felony. In addition to any penalties the court may impose, the court may order the destruction of any recording made in violation of this section |
An exception is carved out for cases where the person was voluntarily nude in public or voluntarily engaging in sexual conduct in public. |
|
2017 |
Intentionally or with reckless disregard disseminating, publishing or selling (or conspiring to do so) any image or images of the intimate areas of another person or persons without the consent of such other person or persons, where the offender knows or reasonably should have known that one or both parties agreed or understood that the images should remain private. |
The punishments are decided on a case-by-case basis but, based on the cases that have emerged, appear to range up to state prison terms of three to five years and/or a fine of up to $5,000. |
|
||
2015 |
Criminalises the Non-Consensual Dissemination of Private Sexual Images. |
It is a Class 4 Felony. |
|
||
2017 |
Dissemination, publication or distribution (or causing the same) of a photograph or film showing another person in partial or full nudity or engaged in a sex act, without consent, constitutes harassment. |
Such an offence is harassment in the first degree and is an aggravated misdemeanour. |
|
||
2016 |
Breach of privacy is knowingly and without lawful authority: disseminating any videotape, photograph, film or image of another identifiable person 18 years of age or older who is nude or engaged in sexual activity and under circumstances in which such identifiable person had a reasonable expectation of privacy, with the intent to harass, threaten or intimidate such identifiable person, and such identifiable person did not consent to such dissemination |
Such an offence is a Severity level 8, person felony |
|
||
2015 |
A person commits the offense of non-consensual disclosure of a private image when all of the following occur: (1) The person intentionally discloses an image of another person who is seventeen years of age or older, who is identifiable from the image or information displayed in connection with the image, and whose intimate parts are exposed in whole or in part. (2) The person who discloses the image obtained it under circumstances in which a reasonable person would know or understand that the image was to remain private. (3) The person who discloses the image knew or should have known that the person in the image did not consent to the disclosure of the image. (4) The person who discloses the image has the intent to harass or cause emotional distress to the person in the image, and the person who commits the offense knew or should have known that the disclosure could harass or cause emotional distress to the person in the image. |
Whoever commits the offense of non-consensual disclosure of a private image shall be fined not more than ten thousand dollars, imprisoned with or without hard labour for not more than two years, or both |
No liability is imposed on the computer service used for posting such image |
||
|
2015 |
A person is guilty of unauthorized dissemination of certain private images if the person, with the intent to harass, torment or threaten the depicted person or another person, knowingly disseminates, displays or publishes a photograph, videotape, film or digital recording of another person in a state of nudity or engaged in a sexual act or engaged in sexual contact in a manner in which there is no public or newsworthy purpose when the person knows or should have known that the depicted person: (1) Is 18 years of age or older; (2) Is identifiable from the image itself or information displayed in connection with the image; and (3) Has not consented to the dissemination, display or publication of the private image. |
Unauthorized dissemination of certain private images is a Class D crime. |
|
|
|
2014 |
A person may not intentionally cause serious emotional distress to another by intentionally placing on the Internet a photograph, film, videotape, recording, or any other reproduction of the image of the other person that reveals the identity of the other person with his or her intimate parts exposed or while engaged in an act of sexual contact: (1) knowing that the other person did not consent to the placement of the image on the Internet; and (2) under circumstances in which the other person had a reasonable expectation that the image would be kept private. |
A person who violates this section is guilty of a misdemeanor and on conviction is subject to imprisonment not exceeding 2 years or a fine not exceeding $5,000 or both. |
|
|
|
2016 |
Intentionally, and with the intent to threaten, coerce, or intimidate, disseminating any sexually explicit visual material of another person is punishable under section 145f. |
Section 145f: a first offense is punishable by up to 93 days in jail or a fine of up to $500. |
|
|
|
2016 |
A cause of action against a person for the non-consensual dissemination of private sexual images exists when: (1) a person disseminated an image without the consent of the person depicted in the image; (2) the image is of an individual depicted in a sexual act or whose intimate parts are exposed in whole or in part; (3) the person is identifiable: (i) from the image itself, by the person depicted in the image or by another person; or (ii) from the personal information displayed in connection with the image; and (4) the image was obtained or created under circumstances in which the person depicted had a reasonable expectation of privacy. The fact that the individual depicted in the image consented to the creation of the image or to the voluntary private transmission of the image is not a defense to liability for a person who has disseminated the image without consent. |
Conviction for nonconsensual dissemination of private sexual images qualifies as a prior “qualified domestic violence-related offense” that enhances penalties for convictions for domestic assault, 4th & 5th degree assault, stalking, and violation of a harassment restraining order. |
Consent to such image being taken is no defense |
|
Sections 2-6 of Chapter 200 of NRS
|
2015 |
A person commits the crime of unlawful dissemination of an intimate image when, with the intent to harass, harm or terrorize another person, the person electronically disseminates or sells an intimate image which depicts the other person and the other person: (1) did not give prior consent to the electronic dissemination or sale; (2) had a reasonable expectation that the intimate image would be kept private and would not be made visible to the public; and (3) was at least 18 years of age when the intimate image was created |
Such person is guilty of a category D felony |
|
|
2016 |
Nonconsensual dissemination of private sexual images with the intent to harass, intimidate, threaten, or coerce the depicted person. |
It is a felony. |
|
||
2015 |
Making a recording that reveals another person’s "intimate parts" or shows the person engaged in a sexual act, without that person’s consent. |
It is a felony, punishable by three to five years in prison and a fine not to exceed $15,000. |
|
||
HB 142, new section added to the New Mexico Criminal Code
|
2015 |
Unauthorised distribution of sensitive images without that person’s consent, with the intent to harass, humiliate or intimidate that person or cause substantial emotional distress. |
It is a misdemeanour; upon a second or subsequent conviction, the offender is guilty of a fourth degree felony. |
|
|
§ 14-190.5A, Article 26 of Chapter 14 of the General Statutes |
2015 |
A person is guilty of disclosure of private images if all of the following apply: (1) The person knowingly discloses an image of another person with the intent to do either of the following: a. Coerce, harass, intimidate, demean, humiliate, or cause financial loss to the depicted person. b. Cause others to coerce, harass, intimidate, demean, humiliate, or cause financial loss to the depicted person. (2) The depicted person is identifiable from the disclosed image itself or information offered in connection with the image. (3) The depicted person's intimate parts are exposed or the depicted person is engaged in sexual conduct in the disclosed image. (4) The person discloses the image without the affirmative consent of the depicted person. (5) The person discloses the image under circumstances such that the person knew or should have known that the depicted person had a reasonable expectation of privacy. |
For an offense by a person who is 18 years of age or older at the time of the offense, the violation is a Class H felony.
For a first offense by a person who is under 18 years of age at the time of the offense, the violation is a Class 1 misdemeanor.
For a second or subsequent offense by a person who is under the age of 18 at the time of the offense, the violation is a Class H felony |
The Court may order destruction of such image.
This provision is in addition to civil and criminal remedies. |
|
2015 |
· A person commits the offense of distribution of intimate images if the person knowingly or intentionally distributes to any third party any intimate image of an individual eighteen years of age or older, if: (1) The person knows that the depicted individual has not given consent to the person to distribute the intimate image; (2) The intimate image was created by or provided to the person under circumstances in which the individual has a reasonable expectation of privacy; and (3) Actual emotional distress or harm is caused to the individual as a result of the distribution under this section. |
Distribution of an intimate image is a class A misdemeanor |
|
||
Section 1040.13b of Title 21, Oklahoma Statutes
|
2016 |
· A person commits nonconsensual dissemination of private sexual images when he or she: (1) Intentionally disseminates an image of another person: a. who is at least eighteen (18) years of age, b. who is identifiable from the image itself or information displayed in connection with the image, and c. who is engaged in a sexual act or whose intimate parts are exposed, in whole or in part; (2) Disseminates the image with the intent to harass, intimidate or coerce the person, or under circumstances in which a reasonable person would know or understand that dissemination of the image would harass, intimidate or coerce the person (3) Obtains the image under circumstances in which a reasonable person would know or understand that the image was to remain private; and (4) Knows or a reasonable person should have known that the person in the image has not consented to the dissemination. |
Any person who violates the provisions of this section shall be guilty of a misdemeanour punishable by imprisonment in a county jail for not more than one (1) year or by a fine of not more than One Thousand Dollars ($1,000.00), or both such fine and imprisonment. |
The court shall have the authority to order the defendant to remove the disseminated image should the court find it is in the power of the defendant to do so. |
|
|
2015 |
· (1) A person commits the crime of unlawful dissemination of an intimate image if: (a) The person, with the intent to harass, humiliate or injure another person, knowingly causes to be disclosed through an Internet website an identifiable image of the other person whose intimate parts are visible or who is engaged in sexual conduct; (b) The person knows or reasonably should have known that the other person does not consent to the disclosure; (c) The other person is harassed, humiliated or injured by the disclosure; and (d) A reasonable person would be harassed, humiliated or injured by the disclosure. |
Unlawful dissemination of an intimate image is a Class A misdemeanor.
Unlawful dissemination of an intimate image is a Class C felony if the person has a prior conviction under this section at the time of the offense. |
|
|
Title 18 Pennsylvania Consolidated Statutes § 3131
|
2014 |
A person commits the offense of unlawful dissemination of intimate image if, with intent to harass, annoy or alarm a current or former sexual or intimate partner, the person disseminates a visual depiction of the current or former sexual or intimate partner in a state of nudity or engaged in sexual conduct. |
· An offense shall be: (1) A misdemeanor of the first degree, when the person depicted is a minor. (2) A misdemeanor of the second degree, when the person depicted is not a minor. |
|
|
2015 |
No person may use or disseminate in any form any visual recording or photographic device to photograph or visually record any other person without clothing or under or through the clothing, or with another person depicted in a sexual manner, for the purpose of viewing the body of, or the undergarments worn by, that other person, without the consent or knowledge of that other person, with the intent to self-gratify, to harass, or embarrass and invade the privacy of that other person, under circumstances in which the other person has a reasonable expectation of privacy. |
A violation of this section is a Class 1 misdemeanor.
However, a violation of this section is a Class 6 felony if the victim is seventeen years of age or younger and the perpetrator is at least twenty-one years old. |
|
||
2017 |
(a) A person commits unlawful exposure who, with the intent to cause emotional distress, distributes an image of the intimate part or parts of another identifiable person if: (1) The image was photographed or recorded under circumstances where the parties agreed or understood that the image would remain private; and (2) The person depicted in the image suffers emotional distress. (b) As used in this section: (1) "Emotional distress" has the same meaning as defined in § 39-17-315; and (2) "Intimate part" means any portion of the primary genital area, buttock, or any portion of the female breast below the top of the areola that is either uncovered or visible through less than fully opaque clothing. |
A violation of subsection (a) is a Class A misdemeanor. However, nothing in this section precludes punishment under any other section of law providing for greater punishment. |
|
||
Chapter 98B, Title 4, Civil Practice and Remedies Code
|
2015 |
(a) A defendant is liable, as provided by this chapter, to a person depicted in intimate visual material for damages arising from the disclosure of the material if: (1) the defendant discloses the intimate visual material without the effective consent of the depicted person; (2) the intimate visual material was obtained by the defendant or created under circumstances in which the depicted person had a reasonable expectation that the material would remain private; (3) the disclosure of the intimate visual material causes harm to the depicted person; and (4) the disclosure of the intimate visual material reveals the identity of the depicted person in any manner, including through: (A) any accompanying or subsequent information or material related to the intimate visual material; or (B) information or material provided by a third party in response to the disclosure of the intimate visual material. (b) A defendant is liable, as provided by this chapter, to a person depicted in intimate visual material for damages arising from the promotion of the material if, knowing the character and content of the material, the defendant promotes intimate visual material described by Subsection (a) on an Internet website or other forum for publication that is owned or operated by the defendant. |
An offense under this section is a Class A misdemeanor.
If conduct that constitutes an offense under this section also constitutes an offense under another law, the actor may be prosecuted under this section, the other law, or both. |
Aggrieved person may recover actual and exemplary damages.
The provisions shall be liberally construed by the courts to promote its underlying purpose to protect Persons from, and provide adequate remedies to victims of, the disclosure or promotion of intimate visual material. |
|
|
2014 |
An actor commits the offense of distribution of intimate images if the actor, with the intent to cause emotional distress or harm, knowingly or intentionally distributes to any third party any intimate image of an individual who is 18 years of age or older, if: (a) the actor knows that the depicted individual has not given consent to the actor to distribute the intimate image; (b) the intimate image was created by or provided to the actor under circumstances in which the individual has a reasonable expectation of privacy; and (c) actual emotional distress or harm is caused to the person as a result of the distribution under this section. |
Distribution of an intimate image is a class A misdemeanour. |
|
|
2015 |
A person violates this section if he or she knowingly discloses a visual image of an identifiable person who is nude or who is engaged in sexual conduct, without his or her consent, with the intent to harm, harass, intimidate, threaten, or coerce the person depicted, and the disclosure would cause a reasonable person to suffer harm. A person may be identifiable from the image itself or information offered in connection with the image. Consent to recording of the visual image does not, by itself, constitute consent for disclosure of the image.
|
A person who violates this provision shall be imprisoned not more than two years or fined not more than $2,000.00, or both.
A person who violates this provision with the intent of disclosing the image for financial profit shall be imprisoned not more than five years or fined not more than $10,000.00, or both. |
In addition, the Court may order equitable relief, including a temporary restraining order, a preliminary injunction, or a permanent injunction ordering the defendant to cease display or disclosure of the image.
The Court may grant injunctive relief maintaining the confidentiality of a plaintiff using a pseudonym. |
||
§ 18.2-386.2, Code of Virginia
|
2014 |
Any person who, with the intent to coerce, harass, or intimidate, maliciously disseminates or sells any videographic or still image created by any means whatsoever that depicts another person who is totally nude, or in a state of undress so as to expose the genitals, pubic area, buttocks, or female breast, where such person knows or has reason to know that he is not licensed or authorized to disseminate or sell such videographic or still image is guilty. |
Such an offense is a Class 1 misdemeanor. |
|
|
2015 |
A person commits the crime of disclosing intimate images when the person knowingly discloses an intimate image of another person and the person disclosing the image: (a) Obtained it under circumstances in which a reasonable person would know or understand that the image was to remain private; (b) Knows or should have known that the depicted person has not consented to the disclosure; and (c) Knows or reasonably should know that disclosure would cause harm to the depicted person.
|
The crime of disclosing intimate images: (a) Is a gross misdemeanor on the first offense; or (b) Is a class C felony if the defendant has one or more prior convictions for disclosing intimate images. |
A person who is under the age of eighteen is not guilty of the crime of disclosing intimate images unless the person: (a) Intentionally and maliciously disclosed an intimate image of another person; (b) Obtained it under circumstances in which a reasonable person would know or understand that the image was to remain private; and (c) Knows or should have known that the depicted person has not consented to the disclosure |
||
§61-8-28a, Code of West Virginia
|
2017 |
No person may knowingly and intentionally disclose, cause to be disclosed or threaten to disclose, with the intent to harass, intimidate, threaten, humiliate, embarrass, or coerce, an image of another which shows the intimate parts of the depicted person or shows the depicted person engaged in sexually explicit conduct which was captured under circumstances where the person depicted had a reasonable expectation that the image would not be publicly disclosed. |
A person convicted is guilty of a misdemeanor and, upon conviction thereof, shall be confined in jail for not more than one year, fined not less than $1,000 nor more than $5,000, or both confined and fined. |
|
|
2014 |
It criminalizes posting or publishing a sexually explicit image without consent. Such an offence is a Class A misdemeanour. |
Class A misdemeanors can result in fines up to $10,000, imprisonment up to 9 months or a combination of the two. |
|
3. Australia
State |
Statute |
Year |
Contents – definition, classification, punishment, standard of proof |
Punishment |
Remarks |
2018 |
A person who intentionally distributes an intimate image of another person: (a) without the consent of the person, and (b) knowing the person did not consent to the distribution or being reckless as to whether the person consented to the distribution, is guilty of an offence. "intimate image" means: (a) an image of a person's private parts, or of a person engaged in a private act, in circumstances in which a reasonable person would reasonably expect to be afforded privacy, or (b) an image that has been altered to appear to show a person's private parts, or a person engaged in a private act, in circumstances in which a reasonable person would reasonably expect to be afforded privacy. |
Maximum penalty: 100 penalty units or imprisonment for 3 years, or both. |
|
||
2018 |
A person who distributes an invasive image of another person, knowing or having reason to believe that the other person— (a) does not consent to that particular distribution of the image; or (b) does not consent to that particular distribution of the image and does not consent to distribution of the image generally, is guilty of an offence. An image of a person will be taken to be an invasive image of the person if it depicts the person in a place other than a public place— (a) engaged in a private act; or (b) in a state of undress such that— (i) in the case of a female—the bare breasts are visible; or (ii) in any case—the bare genital or anal region is visible. (3) However, an image of a person that falls within the standards of morality, decency and propriety generally accepted by reasonable adults in the community will not be taken to be an invasive image of the person. |
Maximum penalty: (a) if the invasive image is of a person under the age of 17 years—$20,000 or imprisonment for 4 years; (b) in any other case—$10,000 or imprisonment for 2 years. |
|
||
2016 |
A court may restrain the respondent from doing all or any of the following in the case of a family violence restraining order: distributing or publishing, or threatening to distribute or publish, intimate personal images of the person seeking to be protected; |
2 years imprisonment. |
This check comes into play only in the case of a family violence restraining order and is not a general protection. |
||
2012 |
A person who visually captures or has visually captured an image of another person's genital or anal region must not intentionally distribute that image. |
2 years imprisonment. |
|
4. Asia and Rest of the World
Country |
Statute |
Year |
Contents – definition, classification, punishment, standard of proof |
Punishment |
Remarks |
Section 162.1, Criminal Code through Bill C-13 or Cyberbullying Act |
2015 |
Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty. In this section, “intimate image” means a visual recording of a person made by any means including a photographic, film or video recording, (a) in which the person is nude, is exposing his or her genital organs or anal region or her breasts or is engaged in explicit sexual activity; (b) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy; and (c) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed. |
The offender is: (a) guilty of an indictable offence and liable to imprisonment for a term of not more than five years; or (b) guilty of an offence punishable on summary conviction. |
|
|
|
It is hereby prohibited and declared unlawful for any person: (a) To take photo or video coverage of a person or group of persons performing sexual act or any similar activity or to capture an image of the private area of a person/s such as the naked or undergarment clad genitals, pubic area, buttocks or female breast without the consent of the person/s involved and under circumstances in which the person/s has/have a reasonable expectation of privacy; (b) To copy or reproduce, or to cause to be copied or reproduced, such photo or video or recording of sexual act or any similar activity with or without consideration; (c) To sell or distribute, or cause to be sold or distributed, such photo or video or recording of sexual act, whether it be the original copy or reproduction thereof; or (d) To publish or broadcast, or cause to be published or broadcast, whether in print or broadcast media, or show or exhibit the photo or video coverage or recordings of such sexual act or any similar activity through VCD/DVD, internet, cellular phones and other similar means or device. The prohibition under paragraphs (b), (c) and (d) shall apply notwithstanding that consent to record or take photo or video coverage of the same was given by such person/s. Any person who violates this provision shall be liable for photo or video voyeurism as defined herein. |
The penalty of imprisonment of not less than three (3) years but not more than seven (7) years and a fine of not less than One hundred thousand pesos (P100,000.00) but not more than Five hundred thousand pesos (P500,000.00), or both, at the discretion of the court shall be imposed upon any person found guilty of violating Section 4 of this Act. If the violator is a juridical person, its license or franchise shall automatically be deemed revoked, and the persons liable shall be the officers thereof, including the editor and reporter in the case of print media, and the station manager, editor and broadcaster in the case of a broadcast media. If the offender is a public officer or employee, or a professional, he/she shall be administratively liable. If the offender is an alien, he/she shall be subject to deportation proceedings after serving his/her sentence and payment of fines. |
|
Prevention of Sexual Harassment Law, 5758-1998, amended in 2014
2014
The distribution of still pictures or video recordings of a person’s image that focuses on his/her sexuality, including by editing or incorporation, is unlawful if made: 1. without the person’s consent; 2. in a way that facilitates identification of the person; and 3. under circumstances that may degrade or shame him/her.
The distribution of such an image constitutes sexual harassment under section 3(a) of the Prevention of Sexual Harassment Law and intentional harm to a person’s privacy under section 5 of the Protection of Privacy Law.
The crimes are punishable by five years of imprisonment, in addition to subjecting the perpetrator to civil liability and the duty to pay monetary compensation to the victim.
|
|
Act on Prevention of Damage by Provision of Private Sexual Image Records
2014 |
It criminalizes the provision of a private sexual image of another person, without the person’s approval, via a means of telecommunication to an unspecified number of people or to many people. It allows Internet service providers to delete suspected revenge porn images without the uploader’s consent, in cases where: 1. the victim had notified the provider of the existence of the image; 2. the provider had requested the consent of the uploader to delete the image; and 3. the uploader did not respond or delete the image.
A maximum sentence of a 500,000 yen fine or three years in prison.
The Act also obligates the national and local governments to ease victims’ embarrassment when they report the crime. For especially young potential victims, the Act further obligates the governments to educate people on how to avoid revenge porn. |
Comments on the Draft Digital Information Security in Healthcare Act
This submission presents comments by the Centre for Internet and Society, India (“CIS”) on the Draft Digital Information Security in Healthcare Act, released by Ministry of Health & Family Welfare, Government of India. CIS has conducted research on the issues of privacy, data protection and data security since 2010 and is thankful for the opportunity to put forth its views. This submission was made on April 21, 2018.
AI in the Banking and Finance Industry in India
This draft report was prepared by Saman Goudarzi, Elonnai Hickok and Amber Sinha. It was edited by Shyam Ponappa. Mapping was done by Shweta Mohandas. Pranav M Bidare, Sidharth Ray, and Aayush Rathi provided research assistance in preparing this report.
Executive Summary
In the last couple of years, the finance and banking sectors in India have increasingly deployed and implemented AI technologies. Such technologies are being implemented for front-end and back-end processes, offering solutions for both financial and business management operations. At the moment, the AI landscape appears to be overwhelmingly populated by natural language processing and natural language generation technologies, culminating in numerous chatbot initiatives by various banking and financial actors. Arguably more significant, but less documented, is the usage of these technologies for financial decision-making on a variety of issues including credit-scoring, transactions, wealth and risk management, and fraud detection. These trends are largely facilitated by technology service companies, both large-scale firms and startups, that either work with established banking and financial institutions to deploy AI technologies or develop and offer their own financial services directly to consumers.
This draft report seeks to map the present state of use of AI in the banking and financial sector in India. In doing so, it explores:
- Uses: What is the present use of AI in banking and finance? What is the narrative and discourse around AI and banking/finance in India?
- Actors: Who are the key stakeholders involved in the development, implementation and regulation of AI in the banking/finance sector?
- Impact: What is the potential and existing impact of AI in the banking and finance sectors?
- Regulation: What are the challenges faced in policy making around AI in the banking and finance sectors?
The draft report first offers an overview of the ways in which AI is being used in the sector. This is followed by an examination of existing challenges to the adoption of AI and the significant legal and ethical concerns that need to be considered in light of these trends. Lastly, the draft report draws attention to a number of key government actions and initiatives surrounding AI related to the banking and finance industry, discusses challenges to the adoption and implementation of AI and articulates recommendations towards addressing the same.
Download the draft report here
19th June Update: This case study has been modified to remove interview quotes, which are in the process of being confirmed. The link above is the latest draft of the report.
Internet Shutdown Stories
Read the report here: Download (PDF)
The report is shared under Creative Commons Attribution-NoDerivatives 4.0 International license.
Edited by Debasmita Haldar, Ambika Tandon, and Swaraj Barooah
Print Design by Saumyaa Naidu
Advisor: Nikhil Pahwa, Founder and Editor at MediaNama
Foreword
Aside from the waves of innovation that the digital revolution brought with it, the ever increasing pervasiveness of the internet has had a tremendous impact on empowerment and freedoms in society. We are seeing unprecedented levels of access to information, along with a democratization of the means of creation, production and dissemination of information to anyone with an internet connection. This in turn has greatly amplified, and in many cases even created, the ability, particularly for those traditionally left at the margins, to participate more meaningfully in their global as well as local societies. Recognising the significance of the internet to the freedom of expression, as well as to the development and exercise of human rights more broadly, the United Nations Human Rights Council unanimously passed a resolution affirming that internet access is a fundamental human right.
Simultaneously, however, we are seeing Indian states discover and experiment with their power to clamp down on these new modes of communication for a variety of reasons, ranging from the ill-intentioned to the ill-informed. An internet shutdown tracker maintained by the Software Freedom Law Centre shows that the number of shutdowns in India is increasing every year, with 70 shutdowns reported in 2017, and 45 shutdowns already reported from 1st Jan, 2018 to 4th May, 2018. These shutdowns also come at a significant economic cost. A 2016 Brookings report estimates that India faced a loss of about $968 million due to internet shutdowns. However, the democratic harms we have been accruing are more difficult to quantify and demonstrate.
This book seeks to give a glimpse into the lives of those directly affected by these internet shutdown experiments. From Jammu and Kashmir to Telangana, from Gujarat to Nagaland, we have collected 30 stories from across the country for an up-close look at how the everyday lives of common citizens have been impacted by internet shutdowns and website blocks. From CRPF members posted in Srinagar who use the internet to connect with their family, to students who have been cut off from education resources for competitive exams; from the disruptions in day to day life brought about by non-functional bank services in Darjeeling, to stock brokers in Ahmedabad who faced costly slowdowns; the idea of a Digital India is facing severe setbacks with these continuously increasing internet shutdowns.
When seen in a larger context, we hope that the stories in this book also demonstrate that access to the internet and freedom of speech is not just about an individual’s rights, but are also required for the collective good. The diversity of perspectives and activities that a healthy democracy demands is not met by the versioning of dominant narratives, but by allowing for, if not directly encouraging, the voices and activities of the unheard, oppressed and marginalised. We hope that in the telling of these personal stories of the day-to-day of people affected by such internet shutdowns, this book joins in the effort to position the dehumanized internet kill switches more aptly as dangers to democracy.
Sunil Abraham
Executive Director
The Centre for Internet and Society
India's Data Protection Framework Will Need to Treat Privacy as a Social and Not Just an Individual Good
Published in Economic & Political Weekly, Volume 53, Issue No. 18, 05 May, 2018. Article can be accessed online here.
In July 2017, the Ministry of Electronics and Information Technology (MeITy) in India set up a committee headed by a former judge, B N Srikrishna, to address the growing clamour for privacy protections at a time when both private collection of data and public projects like Aadhaar are reported to pose major privacy risks (Maheshwari 2017). The Srikrishna Committee is in the process of providing its input, which will go on to inform India’s data-protection law.
While the committee released a white paper with its provisional views and sought feedback a few months ago, it may be deliberating on a data protection framework without due consideration of how data practices have evolved.
In early 2018, a series of stories based on investigative journalism by the Guardian and the Observer revealed that the data of 87 million Facebook users was used for the Trump campaign by a political consulting firm, Cambridge Analytica, without their permission. Aleksandr Kogan, a psychology researcher at the University of Cambridge, created an application called “thisisyourdigitallife” and collected data from 270,000 participants through a personality test using Facebook’s application programming interface (API), which allows developers to integrate with various parts of the Facebook platform (Fruchter et al 2018). This data was collected purportedly for academic research purposes only. Kogan’s application also collected profile data from each of the participants’ friends, roughly 87 million people.
The kinds of practices concerning the sharing and processing of data exhibited in this case are not unique. These are, in fact, common to the data economy in India as well. It can be argued that the Facebook–Cambridge Analytica incident is representative of data practices in the data-driven digital economy. These new practices pose important questions for data protection laws globally, and how these may need to evolve to address data protection, particularly for India, which is in the process of drafting its own data protection law.
Privacy as Control
Most modern data protection laws focus on individual control. In this context, the definition by the late Alan Westin (2015) characterises privacy as:
The claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.
The idea of “privacy as control” is what finds articulation in data protection policies across jurisdictions, beginning with the Fair Information Practice Principles (FIPP) from the United States (US) (Dixon 2006). These FIPPs are the building blocks of modern information privacy law (Schwartz 1999) and not only play a significant role in the development of privacy laws in the US, but also inform data protection laws in most privacy regimes internationally (Rotenberg 2001), including the nine “National Privacy Principles” articulated by the Justice A P Shah Committee in India. Much of this approach is also reflected in the white paper released by the committee, led by Justice Srikrishna, towards the creation of data protection laws in India (Srikrishna 2017).
This approach essentially involves the following steps (Cate 2006):
(i) Data controllers are required to tell individuals what data they wish to collect and use and give them a choice to share the data.
(ii) Upon sharing, the individuals have rights such as being granted access, and data controllers have obligations such as securing the data with appropriate technologies and procedures, and only using it for the purposes identified.
The objective of this approach is to empower individuals and allow them to weigh their own interests in exercising their consent. The allure of this paradigm is that, in one elegant stroke, it seeks to “ensure that consent is informed and free and thereby also (seeks) to implement an acceptable tradeoff between privacy and competing concerns” (Sloan and Warner 2014). This approach is also easy to enforce for both regulators and businesses. Data collectors and processors only need to ensure that they comply with their privacy policies, and can thus reduce their liability while, theoretically, consumers have the information required to exercise choice. In recent years, however, the emergence of big data, the “Internet of Things,” and algorithmic decision-making has significantly compromised the notice and consent model (Solove 2013).
Limitations of Consent
Cognitive problems, such as long and difficult-to-understand privacy notices, have always beset informed consent, but these problems have lately been aggravated. Privacy notices often come in the form of long legal documents, much to the detriment of readers’ ability to understand them. These policies are “long, complicated, full of jargon and change frequently” (Cranor 2012).
Kent Walker (2001) lists five problems that privacy notices typically suffer from:
(i) Overkill: Long and repetitive text in small print.
(ii) Irrelevance: Describing situations of little concern to most consumers.
(iii) Opacity: Broad terms that reflect limited truth, and are unhelpful to track and control the information collected and stored.
(iv) Non-comparability: Simplification required to achieve comparability will lead to compromising of accuracy.
(v) Inflexibility: Failure to keep pace with new business models.
Today, data is collected continuously with every use of online services, making it humanly impossible to exercise meaningful consent.
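The scale of this burden can be sketched with a rough back-of-envelope calculation. All figures below are illustrative assumptions for the sake of the sketch, not measured data:

```python
# Rough, illustrative estimate of the annual burden of reading privacy
# policies. Every number here is an assumption, not a measured figure.
AVG_POLICY_WORDS = 2500      # assumed length of a typical privacy policy
READING_SPEED_WPM = 250      # assumed adult reading speed (words per minute)
POLICIES_PER_YEAR = 100      # assumed new or updated policies encountered yearly

minutes_per_policy = AVG_POLICY_WORDS / READING_SPEED_WPM
hours_per_year = minutes_per_policy * POLICIES_PER_YEAR / 60

print(f"{minutes_per_policy:.0f} minutes per policy")
print(f"~{hours_per_year:.0f} hours per year just reading notices")
```

Even under these conservative assumptions, conscientiously reading every notice would consume hours of a person's year, which is one way of seeing why meaningful consent at this scale is impracticable.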
The quantity of data being generated is expanding at an exponential rate. With connected devices, smartphones, appliances transmitting data about our usage, and even the smart cities themselves, data now streams constantly from almost every sector and function of daily life, “creating countless new digital puddles, lakes, tributaries and oceans of information” (Bollier 2010).
The infinitely complex nature of the data ecosystem renders consent of little value in cases where individuals may be able to read and comprehend privacy notices. As the uses of data are so diverse, and often not limited by a purpose identified at the beginning, individuals cannot conceptualise how their data will be aggregated and possibly used or reused.
Seemingly innocuous bits of data revealed at different stages could be combined to reveal sensitive information about the individual. While the regulatory framework is designed such that individuals are expected to engage in cost–benefit analysis of trading their data to avail services, this ecosystem makes such individual analysis impossible.
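How such aggregation works can be illustrated with a minimal linkage sketch. The records below are entirely fabricated: a public list carrying names alongside quasi-identifiers is joined with an "anonymised" dataset that retains the same quasi-identifiers, re-identifying the individuals:

```python
# Minimal linkage-attack sketch. Both datasets are made up for illustration.
# A public list carries names plus quasi-identifiers (zip code, birth year).
public_list = [
    {"name": "A. Kumar", "zip": "560001", "birth_year": 1985},
    {"name": "B. Rao",   "zip": "560002", "birth_year": 1990},
]
# An "anonymised" dataset: names stripped, but quasi-identifiers retained.
anonymised = [
    {"zip": "560001", "birth_year": 1985, "condition": "diabetes"},
    {"zip": "560002", "birth_year": 1990, "condition": "none"},
]

def reidentify(public, anon):
    """Join the two datasets on the (zip, birth_year) quasi-identifiers."""
    matches = []
    for p in public:
        for a in anon:
            if (p["zip"], p["birth_year"]) == (a["zip"], a["birth_year"]):
                matches.append({"name": p["name"], "condition": a["condition"]})
    return matches

# Neither dataset is sensitive on its own; combined, they reveal
# a named individual's health condition.
print(reidentify(public_list, anonymised))
```

Neither dataset, viewed in isolation, looks like a privacy risk; the harm arises only from the combination, which is precisely the analysis an individual consenting to each disclosure separately cannot perform.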
Conflicts Between Big Data and Individual Control
The thrust of big data technologies is that the value of data resides not in its primary purposes, but in its numerous secondary purposes, where data is reused many times over (Schoenberger and Cukier 2013).
On the other hand, the idea of privacy as control draws from the “data minimisation” principle, which requires organisations to limit the collection of personal data to the minimum extent necessary to achieve their legitimate purpose, and to delete data no longer required. Control is exercised and privacy is enhanced by ensuring data minimisation. These two concepts are in direct conflict. Modern data-driven businesses want to retain as much data as possible for secondary uses. Since these secondary uses are, by their nature, unanticipated, their practices run counter to the very principle of purpose limitation (Tene and Polonetsky 2012).
It is evident from such data-sharing practices, as demonstrated by the Cambridge Analytica–Facebook story, that platform architectures are designed with a clear view to collect as much data as possible. This is amply demonstrated by the provision of a “friends permission” feature by Facebook on its platform to allow individuals to share information not just about themselves, but also about their friends. For the principle of informed consent to be meaningfully implemented, it is necessary for users to have access to information about intended data practices, purposes and usage, so they consciously share data about themselves.
In reality, however, privacy policies are more likely to serve as liability disclaimers for companies than any kind of guarantee of privacy for consumers. A case in point is Mark Zuckerberg’s facile claim that there was no “data breach” in the Cambridge Analytica–Facebook incident. Instead of asking each of the 87 million users whether they wanted their data to be collected and shared further, Facebook designed a platform that required consent in any form only from 270,000 users. Not only were users denied the opportunity to give consent, their consent was assumed through a feature which was on by default. This is representative of how privacy trade-offs are conceived by current data-driven business models. Participation in a digital ecosystem is by itself deemed as users’ consent to relinquish control over how their data is collected, who may have access to it, and what purposes it may be used for.
Yet, Zuckerberg would have us believe that the primary privacy issue of concern is not how his platform enabled the collection of users’ data without their explicit consent, but the subsequent unauthorised sharing of the data by Kogan. Zuckerberg’s insistence that collection of people’s data without their consent is not a data breach is reminiscent of the UIDAI’s recent claims in India that publication of Aadhaar numbers and related information by several government websites is not a data breach, so long as its central biometric database is secure (Sharma 2018). In such cases also, the intended architecture ensured the seeding of other databases with Aadhaar numbers, thus creating multiple potential points of failure through disclosure. Similarly, the design flaws in direct benefit transfers enabled Airtel to create payments bank accounts without the customers’ knowledge (Hindu Business Line 2017). Such claims clearly suggest the very limited responsibility data controllers (both public and private) are willing to take for personal data that they collect, while wilfully facilitating and encouraging data practices which may lead to greater risk to data.
On this note, it is also relevant to point out that the Srikrishna committee white paper begins with identifying informational privacy and data innovation as its two key objectives. It states that “a firm legal framework for data protection is the foundation on which data-driven innovation and entrepreneurship can flourish in India.”
Conversations around privacy and data have become inevitably linked to the idea of technological innovation as a competing interest. Before engaging in such conversations, it is important to acknowledge that the value of innovation as a competing interest itself is questionable. It is not a competing right, nor a legitimate public interest endeavour, nor a proven social good.
The idea that in policymaking, technological innovations may compete with the privacy of individuals assumes that there is social and/or economic good in allowing unrestricted access to data. The social argument is premised on the promise that mathematical models and computational capacity are capable of identifying key insights from data. In turn, these insights may be useful in public and private decision-making. However, it must be remembered that data is potentially a toxic asset if it is not collected, processed, secured and shared in the appropriate way. Sufficient research suggests that indiscriminate data collection greatly increases the ratio of noise to signal, and can lead to erroneous insights. Further, the greater the amount of data collected, the greater the attack surface exposed to cybersecurity risks. Moreover, incidents such as Facebook–Cambridge Analytica demonstrate the toxicity of data in various ways and underscore the need for data regulation at every stage of the data lifecycle (Schneier 2016). These are important tempering factors that need to be kept in mind while evaluating data innovation as a key mover of policy or regulation.
Privacy as Social Good
As long as privacy is framed as arising primarily from individual control, data controllers will continue to engage in practices that compromise the ability to exercise choice. There is a need to view privacy as a social good, and policymaking should ensure its preservation and enhancement. Contractual protections and legal sanctions can themselves do little if platform architectures are designed to do the exact opposite.
More importantly, policymaking needs to recognise privacy not merely as an individual right, available for individuals to forego when engaging with data-driven business models, but also as a social good. The recognition of something as a social good deems it desirable by definition, and makes it a legitimate goal of law and policy, rather than something left entirely to market forces.
The Puttaswamy judgment (K Puttaswamy v Union of India 2017) lends sufficient weight to privacy’s social value by identifying it as fundamental to any individual development through its dependence on solitude, anonymity, and temporary releases from social duties.
Sociological scholarship demonstrates that different types of social relationships, be they Gesellschaft (interest groups and acquaintances) or Gemeinschaft (friendship, love, and marriage), depend in their nature on the ability to conceal certain things (Simmel 1906). Demonstrating this in the context of friendships, it has been stated that such relationships “present a very peculiar synthesis in regard to the question of discretion, of reciprocal revelation and concealment.” Friendships, much like most other social relationships, are very much dependent on our ability to selectively present ourselves to others. Contrast this with Zuckerberg’s stated aim of making the world more “open,” where information about people flows freely and effectively without any individual control. Contrast this also with government projects such as Aadhaar, which intends to act as one universal identity that can provide a 360-degree view of citizens.
Other scholars such as Julie Cohen (2012) and Anita Allen (2011) have demonstrated that data that a person produces or has control over concerns both herself and others. Individuals can be exposed not only because of their own actions and choices, but also made vulnerable merely because others have been careless with their data. This point is amply demonstrated in the Facebook–Cambridge Analytica incident. What this means is that protection of privacy requires not just individual action, but in a sense, requires group co-ordination. It is my argument that this group interest of privacy as a social good must be the basis of policymaking and regulation of data in the future, in addition to the idea of privacy as an individual right. In the absence of attention to the social good aspect of privacy, individual consumers are left to their own devices to negotiate their privacy trade-offs with large companies and governments and are significantly compromised.
What this translates into is that regulatory and data protection frameworks should not be value-neutral in their conception of privacy as a facet of individual control. The complete reliance of data regulation on the data subject to make an informed choice is, in my opinion, an idea that has run its course. If privacy is viewed as a social good, then the data protection framework, including its laws and architecture, must be designed with a view to protect it, rather than leaving it entirely to market forces.
The Way Forward
Data protection laws need to be re-evaluated, and policymakers must recognise Lawrence Lessig’s dictum that “code is law.” Like laws, architecture and norms can play a fundamental role in regulation. Regulatory intervention for technology need not mean regulation of technology only, but also how technology itself may be leveraged for regulation (Lessig 2006; Reidenberg 1998). It is key that the latter is not left only in the hands of private players.
Zuckerberg, in his testimony (Washington Post 2018) before the United States Senate's Commerce and Judiciary committees, asserted that "AI tools" are central to any strategy for addressing hate speech, fake news, and manipulations that use data ecosystems for targeting.
What is most concerning in his testimony is the complete lack of mention of standards, public scrutiny and peer-review processes, which “AI tools” and regulatory technologies need to be subject to. Further, it cannot be expected that data-driven businesses will view privacy as a social good or be publicly accountable.
As policymakers in India gear up for writing the country’s data protection law, they must acknowledge that their responsibility extends to creating norms and principles that will inform future data-driven platforms and regulatory technologies.
Since issues of privacy and data protection will have to be increasingly addressed at the level of how architectures enable data collection, and more importantly how data is used after collection, policymakers must recognise that being neutral about these practices is no longer enough. They must take normative positions on data collection, processing and sharing practices. These positions cannot be implemented through laws only, but need to be translated into technological solutions and norms. Unless a multipronged approach comprising laws, architecture and norms is adopted, India’s new data protection regime may end up with limited efficacy.
Indian Intermediary Liability Regime: Compliance with the Manila Principles on Intermediary Liability
The report was edited by Elonnai Hickok and Swaraj Barooah
The report is an examination of Indian laws based upon the background paper to the Manila Principles, used as the explanatory text on which these recommendations are based, and not an assessment of the principles themselves. To do this, the report considers the Indian regime in the context of each of the principles defined in the Manila Principles. The explanatory text to the Manila Principles recognizes that diverse national and political scenarios may require different intermediary liability legal regimes; however, this paper relies only on the best practices prescribed under the Manila Principles.
The report is divided into the following sections
- Principle I: Intermediaries should be shielded by law from liability for third-party content
- Principle II: Content must not be required to be restricted without an order by a judicial authority
- Principle III: Requests for restrictions of content must be clear, be unambiguous, and follow due process
- Principle IV: Laws and content restriction orders and practices must comply with the tests of necessity and proportionality
- Principle V: Laws and content restriction policies and practices must respect due process
- Principle VI: Transparency and accountability must be built into laws and content restriction policies and practices
- Conclusion
DIDP Request #30 - Employee remuneration structure at ICANN
We requested ICANN to disclose information pertaining to the income of each of its employees, on the grounds set out below. We had hoped this information would increase ICANN's transparency regarding its remuneration policies; however, this was not the case. ICANN either referred to earlier documents that do not contain concrete information, or stated that the relevant documents were not in its possession. Its responses to the respective questions were:
Average salary across designations
ICANN responded by referring to their FY18 Remuneration Practices document which states, “ICANN uses a global compensation expert consulting firm to provide comprehensive benchmarking market data (currently Willis Towers Watson, Mercer and Radford). The market study is conducted before the salary review process. Estimates of potential compensation adjustments typically are made during the budgeting process based on current market data. The budget is then approved as part of ICANN’s overall budget planning process.”
Average salary for female and male employees
ICANN responded by saying “ICANN org’s remuneration philosophy and practice is not based upon gender,” which is why they said that they have “no documentary information in ICANN org’s possession, custody or control that is responsive to this request.” However, the exact average salaries of female and male employees were not provided, nor any information that could give us an idea as to whether the remuneration of their employees was in accordance with the above claim.
Bonuses - frequency at which it is given and upon what basis
ICANN responded by referring to the “Discretionary At-Risk Component” section in their FY18 Remuneration Practices document, which states, “The amount of at-risk pay an individual can earn is based on a combination of both the achievement of goals as well as the behaviors exhibited in achieving those goals… The Board has approved a framework whereby those with ICANN Org are eligible to earn an at-risk payment of up to 20 percent of base compensation as at-risk payment based on role and level in the organization, with certain senior executives eligible for up to 30 percent.” The frequency at which employees are eligible to receive an “at-risk” payment was given to be “twice a year”.
Average salary across regions for the same role
ICANN responded by saying, “compensation may vary across the regions based on currency differences, the availability of positions in a given region, market conditions, as well as the type of positions that are available in a given region.” They also added that they have no documentary information in their possession, custody or control that is responsive to this request.
The request filed by Paul Kurian may be found here. ICANN's response can be read here.
Design Concerns in Creating Privacy Notices
This blog post was edited by Elonnai Hickok.
The Role of Design in Enabling Informed Consent
Currently, privacy notices and choice mechanisms are largely ineffective. Privacy and security researchers have concluded that privacy notices not only fail to help consumers make informed privacy decisions but are mostly ignored by them. [1] They have been reduced to being a mere necessity to ensure legal compliance for companies. The design of privacy systems has an essential role in determining whether users read the notices and understand them. While it is important to assess the data practices of a company, the communication of privacy policies to users is also a key factor in ensuring that users are protected from privacy threats. If they do not read or understand the privacy policy, they are not protected by it at all.
The visual communication of a privacy notice is determined by the User Interface (UI) and User Experience (UX) design of that online platform. User experience design is broadly about creating the logical flow from one step to the next in any digital system, and user interface design ensures that each screen or page that the user interacts with has a consistent visual language and styling. This complements the path created by the user experience designer. [2] UI/UX design still follows the basic principles of visual communication, where information is made understandable, usable and interesting with the use of elements such as colours, typography, scale, and spacing.
In order to facilitate informed consent, design principles must be applied to ensure that the privacy policy is presented clearly and in the most accessible form. A paper by Batya Friedman, Peyina Lin, and Jessica K. Miller, ‘Informed Consent By Design’, presents a model of informed consent for information systems. [3] The model has six components: disclosure, comprehension, voluntariness, competence, agreement, and minimal distraction. The design of a notice should achieve these components to enable informed consent. Disclosure and comprehension lead to the user being ‘informed’, while ‘consent’ encompasses voluntariness, competence, and agreement. Finally, the tasks of being informed and giving consent should happen with minimal distraction, without diverting users from their primary task or overwhelming them with unnecessary noise. [4]
UI/UX design builds upon user behaviour to anticipate how users will interact with the platform. This has led to practices where the UI/UX design is directed at influencing the user to respond in a way desired by the system. For instance, the design of default options prompts users to allow the system to collect their data when the ‘Allow’ button is checked by default. Such practices, where interface design is used to push users in a particular direction, are called “dark patterns”. [5] These are tricks used in websites and apps that make users buy or sign up for things they did not intend to. [6] Dark patterns are often followed as UI/UX trends without their consequences for users being questioned. This has had implications for the design of privacy systems as well. Privacy notices are currently being designed to be invisible instead of drawing attention towards them.
Moreover, most communication designers believe that privacy notices are beyond their scope of expertise. They do not consider themselves accountable for how a notice comes across to the user. Designers also believe they have limited agency when it comes to designing privacy notices, as most of the decisions have already been taken by the company or the service. They can play a major role in communicating privacy concerns at an interface level, but the issues of privacy run much deeper. Designers tend to find ways of informing the user without compromising the user experience, and in the process choose aesthetics over informed consent.
Issues with Visual Communication of Privacy Notices
The ineffectiveness of privacy notices can be attributed to several broad issues, such as complex language and length, timing, and location. In 2015, the Center for Plain Language [7] published a privacy-policy analysis report [8] for TIME.com [9], evaluating internet-based companies’ privacy policies to determine how well they followed plain language guidelines. The report concluded that among the most popular companies, Google and Facebook had the most accessible notices, while Apple, Uber, and Twitter were ranked as less accessible. The timing of a notice is also crucial in ensuring that it is read. The primary task for the user is to avail the service being offered; the goals of security and privacy are valued but only secondary in this process. [10] Notices are presented at a time when they are seen as a barrier between the user and the service. People thus choose to ignore the notices and move on to their primary task. Another concern is dissociated notices, i.e. notices presented on a separate website or in a manual. The added effort of visiting an external website also gets in the way, leading users not to read the notice. While most of these issues can be dealt with at the strategic level of designing the notice, there are also specific visual communication design issues that need to be addressed.
Invisible Structure and Organisation of Information
Long stretches of text with no visible structure or content organisation are the lowest form of privacy notice. These are blocks of text where the information is flattened, with no visual markers such as section separators or contrasting colour and typography to distinguish between types of content. In such notices, the headings and subheadings are also not easy to locate and comprehend. To a user, the large block of text appears pointless and irrelevant, and they begin to dismiss or ignore it. Further, the amount of time it would take to read the entire text and comprehend it successfully is simply impractical, considering the number of websites a user visits regularly.
The privacy policy notice by Apple [11], with no use of colours or visuals
The privacy policy notice by Twitter [12], with no visual separators
Visual Contrast Between Front Interface and Privacy Notices
The front-facing interface of an app or website is designed to be far more engaging than the privacy notice pages. There is a visible difference in the UI/UX design of the pages, almost as if the privacy notices were not designed at all. In the case of Uber’s mobile app, the process of adding a destination, selecting the type of cab and confirming a ride has been made simple for any user. This interface has been thought through keeping in mind users’ behaviour and needs. It allows for quick and efficient use of the service. As opposed to the process of buying into the service, the privacy notice on the app is complex and unclear.
Uber mobile app screenshots of the front interface (left) and the policy notice page (right)
Gaining Trust Through the Initial Pitch
A pattern in the privacy notices of most companies is that they attempt to establish credibility and gain confidence by stating that they respect the users’ privacy. This can be seen in the introductory text of the privacy notices of Apple and LinkedIn. The underlying intent seems to be that since the company understands that the users’ privacy is important, the users can rely on them and not read the full notice.
Introduction text to Apple’s privacy policy notice [13]
Introduction text to LinkedIn’s privacy policy notice [14]
Low Navigability
Text-heavy notices need clear content pockets that can be navigated easily using mechanisms such as a menu bar. Navigability of a document allows for quickly locating sections and moving between them. Several companies fail to provide this. The Apple and Twitter privacy notices (shown above) have low navigability, as the reader has no prior indication of how many sections the notice contains. The reader could have summarised the content based on section titles if they were available in a table of contents or a menu. Lack of a navigation system leads to endless scrolling to reach the end of the page.
The Facebook privacy notice, on the other hand, is an example of good navigability. It uses typography and colour to build a clear structure of information that can be navigated easily using the side menu. The menu doubles up as a table of contents for the reader. The side menu, however, does not remain visible while scrolling down the page. This means that while reading a section, the user cannot switch to a different section from the menu directly; they need to click the ‘Return to top’ button and then select the section from the menu.
Navigation menu in the Facebook Data Policy page [15]
Lack of Visual Support
Privacy notices can rely heavily on visuals to convey policies more efficiently. These could be visual summaries or supporting infographics. The flow of data on the platform, and how it would affect users, can be clearly visualised using infographics, yet most notices fail to adopt them. The LinkedIn privacy notice [16] page shows a video at the beginning of its privacy policy. Although this could have been an opportunity to explain the policy, the video only gives an introduction to the notice and follows it with a pitch to use the platform. The only visuals currently used in notices are icons. Facebook uses icons to identify the different sections so that they can be located easily. But apart from identifying sections, these icons do not contribute to communicating the policy, and they do not make reading the full policy any easier.
Icon Heavy ‘Visual’ Privacy Notices
The complexity of privacy notices has led to the advent of online tools and generators that create short notices or summaries for apps and websites to supplement the full-text versions of policies. Most of these short notices use icons to visually depict the categories of data being collected and shared. iubenda [17], an online tool, generates a policy summary and full text based on inputs given by the client. It asks for the services offered by the site or app and the type of data collection. Icons are used alongside the text headings to make the summary seem more ‘visual’ and hence more easily consumable. This makes the summary more inviting to read, but does not reduce the reading time.
Another icon-based policy summary generator was created by KnowPrivacy. [18] They developed a policy coding methodology by creating icon sets for types of data collected, general data practices, and data sharing. The use of icons in these short notices is more meaningful as they show which type of data is collected or not collected, shared or not shared at a glance without any text. This facilitates comparison between data practices of different apps.
Icon based short policy notice created for Google by KnowPrivacy [19]
Initiatives to Counter Issues with the Design of Privacy Notices
Several initiatives have called out the issues with privacy notices, and some have even countered them with tools and resources. The TIME.com ranking of internet-based companies’ privacy policies brought attention to the fact that some of the most popular platforms have ineffective policy notices. A user rights initiative called Terms of Service; Didn’t Read [20] rates and labels websites’ terms and privacy policies. There is also the Usable Privacy Policy Project, which develops techniques to semi-automatically analyse privacy policies using crowdsourcing, natural language processing, and machine learning. [21] It uses artificial intelligence to sift through the most popular sites on the internet, including Facebook, Reddit, and Twitter, and annotate their privacy policies. The project recognises that it is not practical for people to read privacy policies; its aim, therefore, is to use technology to extract statements from the notices and match them with things that people care about. However, even AI has not been fully successful in making sense of the dense documents, and has missed some important context. [22]
One of the more provocative initiatives is the Me and My Shadow ‘Lost in Small Print’ [23] project. It shows the text for the privacy notices of companies like LinkedIn, Facebook, WhatsApp, etc. and then ‘reveals’ the data collection and use information that would closely affect the users.
Issues with notices have also been addressed by standardising their format, so that people can interpret the information faster. The Platform for Privacy Preferences Project (P3P) [24] was one of the initial efforts to enable websites to share their privacy practices in a standard format. Similar to KnowPrivacy’s policy coding, more design initiatives are focusing on short privacy notice design. TrustArc, [25] an organisation offering privacy compliance and risk management solutions, is also in the process of designing an interactive icon-based privacy short notice.
TrustArc’s proposed design [26] for the short notice for a sample site
Most efforts have gone into simplifying notices so as to decode their complex terminology. But there have been very few evaluations of, and initiatives to improve, the design of these notices.
Recommendations
Multilayered Privacy Notices
One of the existing suggestions for increasing the usability of privacy notices is the multilayered privacy notice. [27] Multilayered privacy notices comprise a very short notice designed for use on portable digital devices where space is limited, a condensed notice that contains all the key factors in an easy-to-understand way, and a complete notice with all the legal requirements. [28] Some of the examples above use this approach in the form of short notices and summaries. The very short notice layer states who is collecting the information, the primary uses of the information, and the contact details of the organisation. [29] The condensed notice layer covers scope (who the notice applies to), the personal information collected, uses and sharing, choices, specific legal requirements if any, and contact information. [30] To maintain consistency, the sequence of topics in the condensed and full notices must be the same, and words and phrases should be consistent across both layers. Although an effective way of simplifying information, multilayered notices must be considered together with the timing of notices; for instance, it could be more suitable to show very short notices at the time of collection or sharing of user data.
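The three-layer structure described above can be held as a small data model. The following is a minimal sketch in Python, with hypothetical section names and placeholder text used purely for illustration; it also encodes the consistency requirement that the condensed and complete layers share the same topics in the same order.

```python
# Layer 1: very short notice -- who collects, primary uses, contact.
# All values here are hypothetical.
very_short = {
    "collector": "Example App Pvt. Ltd.",
    "primary_uses": ["providing the service", "improving the service"],
    "contact": "privacy@example.com",
}

# Shared topic sequence: the condensed and complete layers must
# present the same topics in the same order, with consistent wording.
TOPICS = [
    "Scope", "Information Collected", "Uses and Sharing",
    "Choices", "Legal Requirements", "Contact Information",
]

# Layer 2: condensed notice -- a short plain-language summary per topic.
condensed = {t: f"Plain-language summary of '{t}'." for t in TOPICS}

# Layer 3: complete notice -- the full legal text per topic.
complete = {t: f"Full legal text of '{t}'." for t in TOPICS}

def layers_consistent(condensed: dict, complete: dict) -> bool:
    """Check the consistency requirement: same topics, same sequence."""
    return list(condensed) == list(complete)
```

A system adopting this structure could then decide, per interaction, which layer to surface (for example, the very short layer at the moment of data collection).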
Supporting Infographics
Based on their visual design, currently available privacy notices can be broadly classified into four categories: (i) text-only notices with no clearly visible structure; (ii) text notices with a contents menu that helps in conveying the structure and in navigating; (iii) notices with basic use of visual elements such as icons, used only to identify sections or headings; and (iv) multilayered notices, or notices with a short summary preceding the full text. All these formats still lack visual aids. The use of visuals in the form of infographics depicting data flows could be more helpful for users, in both short summaries and the complete text of policy notices.
Integrating the Privacy Notices with the Rest of the System
The design of privacy notices usually seems disconnected from the rest of the app or website. The UI/UX design of privacy notices requires as much attention as the consumer-facing interface of a system. The contribution of the designer has to be more than creating a clean layout for the text of the notice. The integration of privacy notices with the rest of the system is also related to the early involvement of the designer in the project. The designer needs to understand the information flows and data practices of a system in order to determine whether privacy notices are needed, who should be notified, and about what. This means that decisions such as selecting the categories to be represented in the short or condensed notice, the datasets within these categories, and the ways of representing them would all be part of the design process. The design interventions cannot be purely visual or UI/UX based; they need to be worked out keeping in mind information architecture, content design, and research. By integrating the notices, strategic decisions on the timing and layering of content can be made alongside the aesthetic ones. Just as the front-facing interface of a system aims to make it easier for the user to avail the service, the policy notice should help the user understand the consequences, by giving them clear notice of unexpected collection or uses of their data.
Practice Based Frameworks on Designing Privacy Notices
There is little guidance available to communication designers for the actual design of privacy notices specific to the requirements and characteristics of a system. [31] UI/UX practice needs to be expanded to include ethical ways of designing privacy notices online. A 2015 paper by Florian Schaub, Rebecca Balebako, Adam L. Durity, and Lorrie Faith Cranor, ‘A Design Space for Effective Privacy Notice’, offers a comprehensive design framework and a standardised vocabulary for describing privacy notice options. [32] The objective of the paper is to allow designers to use this framework and vocabulary in creating effective privacy notices. The suggested design space has four key dimensions: ‘timing’, ‘channel’, ‘modality’ and ‘control’. [33] It also provides options for each of these dimensions. For example, the ‘timing’ options are ‘at setup’, ‘just in time’, ‘context-dependent’, ‘periodic’, ‘persistent’, and ‘on demand’. The dimensions and options in the design space can be expanded to accommodate new systems and interaction methods.
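The framework lends itself to a compact vocabulary that designers could check notice designs against. Below is a minimal sketch in Python: the ‘timing’ options are those listed above, while the options shown for the other three dimensions are recalled from the paper’s taxonomy and should be verified against it; the example notice at the end is hypothetical.

```python
# The four dimensions of the design space and their options.
# 'timing' options are as listed in the text; the other dimensions'
# options are recalled from the Schaub et al. paper.
DESIGN_SPACE = {
    "timing": ["at setup", "just in time", "context-dependent",
               "periodic", "persistent", "on demand"],
    "channel": ["primary", "secondary", "public"],
    "modality": ["visual", "auditory", "haptic", "machine-readable"],
    "control": ["blocking", "non-blocking", "decoupled"],
}

def valid_notice_design(choices: dict) -> bool:
    """A notice design picks one recognised option per dimension."""
    return (set(choices) == set(DESIGN_SPACE) and
            all(choices[d] in DESIGN_SPACE[d] for d in choices))

# E.g. a just-in-time, visual, non-blocking notice delivered in the
# app itself (the "primary" channel):
example = {"timing": "just in time", "channel": "primary",
           "modality": "visual", "control": "non-blocking"}
```

New interaction methods can be accommodated simply by appending options to the relevant dimension, mirroring the paper’s point that the design space is extensible.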
Considering the Diversity of Audiences
Any given app or service is used by multiple user groups, so privacy notices are not targeted at a single kind of audience. Diverse audiences have different privacy preferences for the same system, [34] and the preferences of these diverse groups of users must be accommodated. In a typical design process for any system, multiple user personas are identified, and the needs and behaviour of each persona are used to determine the design of the interface. Privacy preferences must also be observed as part of these persona considerations, especially while designing privacy notices. Different users may need different kinds of notices based on which data practices affect them. [35] Thus, rather than mandating a single mechanism for obtaining informed consent for all users in all situations, designers need to provide users with a range of mechanisms and levels of control. [36]
Ethical Framework for Design Practitioners
An ethical framework is required for design practitioners that can be followed both when deciding the information flow and in the experience design. With the prevalence of ‘dark patterns’, the visual design of notices is used to trick users into accepting them. Design ethics can play a huge role in countering such practices. Will Dayable, co-director at Squareweave, [37] a developer of web and mobile apps, suggests that UI/UX designers should “design like they’re [users are] drunk”. [38] He asks designers to imagine the user to be in a hurry, and still give them access to all the information necessary for making a decision. He concludes that good privacy UX and UI is about actually trying to communicate with users rather than trying to slip one past them. In principle, an ethical design practice would respect the rights of users and proactively design to facilitate informed consent.
Reconceptualising Privacy Notices
Based on the above recommendations, a guiding sample for multilayered privacy notices has been created. Each system would need its own structure and mechanisms for notices, which are integrated with its data practice, audiences, and medium, but this sample notice provides basic guidelines for creating effective and accessible privacy notices. The aesthetic decisions would also vary based on the interface design of a system.
Sample Fixed Icon for Privacy Notifications
A fixed icon can appear along with all privacy notifications on the system, so that the users can immediately know that the notification is about a privacy concern. This icon should capture attention instantly and suggest a sense of caution. Besides its use as a call to attention, the icon can also lead to a side panel for privacy implications from all actions that the user takes.
Sample Very Short Notice on Desktop and Mobile Platforms
The very short notices can be shown when an action from the user would lead to data collection or sharing. The notice mechanism should be designed to provide notices at different times tailored to a user’s needs in that context. The styling and placement of the ‘Allow’ and ‘Don’t Allow’ buttons should not be biased towards the ‘Allow’ option. The text used in very short and condensed notice layers should be engaging yet honest in its communication.
Sample Summary Notice
The summary or condensed notice layer should allow the user to gauge at a glance how the data policy is going to affect them. This can be combined with a menu listing the topics covered in the full notice. The menu would double up as a navigation mechanism for users, and should remain visible even as they scroll down to the full notice. The condensed notice can also be supported by an infographic depicting the flow of data in the system.
Sample Navigation Menu
All the images in this section use sample text for the purpose of illustrating the structure and layout.
The full notice can be made accessible by creating a clear information hierarchy in the text. The menu which is available on the side while scrolling down the text would facilitate navigation and familiarity with the structure of the notice.
Conclusion
The presentation of privacy notices directly influences the decisions of users online, and ineffective notices make users vulnerable to their data being misused. But currently, there is little conversation about privacy and data protection among designers. Design practice has to become sensitive to privacy and security requirements. Designers need to take accountability for creating accessible notices that benefit the users, rather than the companies issuing them. They must prioritise the well-being of users, even over aesthetics and user experience. The aesthetics of a platform must be directed at achieving transparency in the privacy notice by making it easily readable.
The design community in India has the more urgent task of building a design practice that is informed by privacy. Comparing the privacy notices of Indian and global companies, Indian companies have an even longer way to go in communicating their notices effectively. Most Indian companies, such as Swiggy, [39] 99acres, [40] and Paytm, [41] have completely textual privacy policy notices with no clear information hierarchy or navigation. Ola Cabs [42] provides an external link to its privacy notice, which opens as a PDF, making it even more inaccessible. There is thus a complete lack of design input in the layout of these notices.
Designers must engage in conversations with technologists and researchers, and include privacy and other user rights in design education in order to prepare practitioners for creating more valuable digital platforms.
- https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
- https://www.fastcodesign.com/3032719/ui-ux-who-does-what-a-designers-guide-to-the-tech-industry
- https://vsdesign.org/publications/pdf/Security_and_Usability_ch24.pdf
- https://vsdesign.org/publications/pdf/Security_and_Usability_ch24.pdf
- https://fieldguide.gizmodo.com/dark-patterns-how-websites-are-tricking-you-into-givin-1794734134
- https://darkpatterns.org/
- https://centerforplainlanguage.org/
- https://centerforplainlanguage.org/wp-content/uploads/2016/11/TIME-privacy-policy-analysis-report.pdf
- http://time.com/3986016/google-facebook-twitter-privacy-policies/
- https://www.safaribooksonline.com/library/view/security-and-usability/0596008279/ch04.html
- https://www.apple.com/legal/privacy/en-ww/?cid=wwa-us-kwg-features-com
- https://twitter.com/privacy?lang=en
- https://www.apple.com/legal/privacy/en-ww/?cid=wwa-us-kwg-features-com
- https://www.linkedin.com/legal/privacy-policy
- https://www.facebook.com/privacy/explanation
- https://www.linkedin.com/legal/privacy-policy
- http://www.iubenda.com/blog/2013/06/13/privacypolicyforandroidapp/
- http://knowprivacy.org/policies_methodology.html
- http://knowprivacy.org/profiles/google
- https://tosdr.org/
- https://explore.usableprivacy.org/
- https://motherboard.vice.com/en_us/article/a3yz4p/browser-plugin-to-read-privacy-policy-carnegie-mellon
- https://myshadow.org/lost-in-small-print
- https://www.w3.org/P3P/
- http://www.trustarc.com/blog/2011/02/17/privacy-short-notice-designpart-i-background/
- http://www.trustarc.com/blog/?p=1253
- https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
- https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf
- https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf
- https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf
- https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
- https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
- https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
- https://www.safaribooksonline.com/library/view/security-and-usability/0596008279/ch04.html
- https://www.ftc.gov/system/files/documents/public_comments/2015/10/00038-97832.pdf
- https://vsdesign.org/publications/pdf/Security_and_Usability_ch24.pdf
- https://www.squareweave.com.au/
- https://iapp.org/news/a/how-ui-and-ux-can-ko-privacy/
- https://www.swiggy.com/privacy-policy
- https://www.99acres.com/load/Company/privacy
- https://pages.paytm.com/privacy.html
- https://s3-ap-southeast-1.amazonaws.com/ola-prod-website/privacy_policy.pdf
CIS contributes to ABLI Compendium on Regulation of Cross-Border Transfers of Personal Data in Asia
The compendium contains 14 detailed reports written by legal practitioners, legal scholars and researchers in their respective jurisdictions, on the regulation of cross-border data transfers in the wider Asian region (Australia, China, Hong Kong SAR, India, Indonesia, Japan, South Korea, Macau SAR, Malaysia, New Zealand, Philippines, Singapore, Thailand, and Vietnam).
The compendium is intended to act as a springboard for the next phase of ABLI's project, which will be devoted to the in-depth study of the differences and commonalities between Asian legal systems on these issues and – where feasible – the drafting of recommendations and/or policy options to achieve convergence in this area of law in Asia.
The chapter titled Jurisdictional Report India was authored by Amber Sinha and Elonnai Hickok. The compendium can be accessed here.
Comments on the Draft National Policy on Official Statistics
Edited by Swaraj Barooah. Download a PDF of the submission here
Preliminary
CIS appreciates the Government’s efforts in realising the importance of the need for high quality statistical information enshrined in the Fundamental Principles of Official Statistics as adopted by the UN General Assembly in January 2014. CIS is grateful for the opportunity to put forth its views on the draft policy. This submission was made on 31st May, 2018.
First, this submission highlights some general defects in the draft policy: there is a lack of principles guiding data dissemination policies; there are virtually no positive mandates set for Government bodies for the secure storage and transmission of data; and while privacy is mentioned as a concern, it has been overlooked in designing the principles for the implementation of surveys. The submission then puts forward specific comments suggesting improvements to various sections of the draft policy.
CIS would also like to point out the short timeline between the publication of the draft policy (18th May, 2018) and the deadline set for stakeholders to submit their comments (31st May, 2018). Considering that the policy has widespread implications for all Ministries, citizens, and State legislative rights (proposed changes include a Constitutional amendment), it is necessary that such calls for comments be publicised widely and that enough time be given to the public, so that the Government can receive well-researched comments.
General Comments
Data dissemination
For data dissemination, the draft policy does not articulate a general principle or set of principles, and often disregards principles specified in the Fundamental Principles of Official Statistics, which are the very principles the Government intends to draw its policies on official statistics from. Rather, it relies on context-specific provisions that fail to summarise and articulate a general philosophy for the dissemination of official statistics, and fail to practically embody some stated goals. The first principle on Official Statistics, as adopted by the United Nations General Assembly, clearly states that: “[...] official statistics that meet the test of practical utility are to be compiled and made available on an impartial basis by official statistical agencies to honour citizens’ entitlement to public information.”
Compare this with Section 5.1.7 (9) of the draft policy, which refers to policies regarding core statistics: it mentions a data “warehouse” to be maintained by the NSO, which should be accessible to private and public bodies. While this does point towards an open data policy, such a vision has not been articulated anywhere in the document.
The draft policy should, at the outset, lay down general guiding principles of publishing data openly and freely (once the data meets the utility test, and it has been ensured that individual privacy will not be violated by publication). These principles would serve to inform further regulations and related policies governing the use and publishing of statistics, such as the Statistical Disclosure Control Report.
A general commitment to a well-articulated policy on data dissemination will ensure easy-to-follow principles for the various Ministries that will refer to the document. The policy document should also describe the additional principles that come with open data: a commitment to publishing data in a machine-readable format, making it available in multiple data formats (.txt, .csv, etc.), and including its metadata.
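These open data principles can be illustrated with a minimal sketch, assuming a hypothetical statistical table (the dataset, column names, and licence text below are all invented for illustration): the data is written as machine-readable CSV, and a metadata record describing the columns and terms of reuse is published alongside it.

```python
import csv
import io
import json

# A hypothetical statistical table to be published openly.
rows = [
    {"state": "Karnataka", "year": 2017, "value": 123},
    {"state": "Kerala", "year": 2017, "value": 456},
]

# Machine-readable CSV; the same data could also be offered in
# other formats (.txt, JSON, etc.) to ease reuse.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["state", "year", "value"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()

# Metadata published alongside the data: what each column means,
# the reference period, and the licence (values are hypothetical).
metadata = {
    "title": "Sample indicator by state",
    "columns": {
        "state": "Name of the state",
        "year": "Reference year of the survey",
        "value": "Indicator value (units to be stated)",
    },
    "licence": "open, free to reuse with attribution",
}
metadata_text = json.dumps(metadata, indent=2)
```

Publishing the metadata as a separate machine-readable record lets downstream users interpret the columns without consulting the issuing Ministry.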
Data storage and usage
In the absence of a regime for data protection, it is absolutely necessary that a national policy on statistics provide positive mandates for the encryption of all digitally-stored personal and sensitive information collected through surveys. Even though the current draft of the policy mentions the need to protect confidential information, it sets no mandatory requirements on the Government to ensure the security of such information, especially on digital platforms.
Additionally, all transmission of potentially sensitive information should carry the digital signature of the employee/Department/Ministry authorising the transmission. This will ensure the integrity and authenticity of the information, and provide an auditable trail of the information flowing between entities in the various bodies.
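The mechanism can be sketched as follows. This is only an illustration: it uses an HMAC from Python's standard library as a stand-in for a true digital signature (a real deployment would use public-key signatures issued under an official PKI, so that authorship can be verified without sharing a secret), and all keys and record contents are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical secret held by the authorising department. A real
# system would use an asymmetric key pair, not a shared secret.
DEPARTMENT_KEY = b"example-department-key"

def sign_transmission(payload: dict, key: bytes) -> str:
    """Produce a tag binding the payload to the authorising key."""
    # Canonical serialisation so both sides compute the same bytes.
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_transmission(payload: dict, tag: str, key: bytes) -> bool:
    """Check integrity and authenticity before accepting the data."""
    expected = sign_transmission(payload, key)
    return hmac.compare_digest(expected, tag)

# The sending department signs the record before transmission.
record = {"survey": "sample household survey", "respondents": 1000}
tag = sign_transmission(record, DEPARTMENT_KEY)
```

Any modification of the record in transit changes the tag, so verification fails; logging the tags alongside each transfer yields the auditable trail described above.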
Data privacy
It is appreciable that Section 5.7.9 of the draft policy notes, “[a]ll statistical surveys represent a degree of privacy invasion, which is justified by the need for an alternative public good, namely information.” However, all statistical surveys may not be proportionate in their invasiveness, even if they might serve a legitimate public goal in the future.
The draft policy does not address how privacy concerns can be taken into account while designing the survey itself. A necessary outcome of recognising the possible privacy violations that may arise from surveys is that all data collection be “minimally intrusive”, that the data be securely stored (see the previous section, ‘Data storage and usage’), and that surveyed individuals retain control over the data even after they have parted with their information.
Since the policy deals extensively with the implementation of surveys, the following details should be clearly laid out in the policy:
- The extent to which an individual has control over the data they have provided to the surveying agency.
- The means of redressal available to an individual who feels that his/her privacy has been violated through the publication of certain statistical information.
Specific Comments
Section 5.1: Dichotomising official statistics as core statistics and other official statistics
Comments
The reasons for dichotomising official statistics have not been appropriately substantiated with evidence, considering the wide implications of the policy proposals that arise from the definition of “core statistics.”
Firstly, the description of what constitutes “core statistics” casts too wide a net by relying on a single vague qualitative criterion, i.e. “national importance.” All the other characteristics of “core statistics” are either recommendations or requirements as to how the data will be handled, and thus pose no filter on what can constitute “core statistics.” The wide net is apparent in the fact that even the initially proposed list of “core statistics”, given in Annex-II of the policy, has 120 categories of statistics.
Secondly, the policy does not provide reasons for why the characteristics of “core statistics”, highlighted in Section 5.1.5, should not apply to all official statistics at the various levels of Government. Therefore, the utility of the proposed dichotomy has also not been appropriately substantiated with illustrative examples of how “core statistics” should be considered qualitatively different from all official statistics.
This definition may lead to widespread disagreement between the States and the Centre, because Section 5.2 proposes that “core statistics” be added to the Union List of the Seventh Schedule of the Constitution. How the proposal may affect Centre-State responsibilities and relations pertaining to the collection and dissemination of statistics is elaborated in the next section.
Recommendations
The policy should not make a forced dichotomy between “core” and (ipso facto) non-core statistics. If a distinction is to be made for any reason (such as delineating administrative roles), then that reason must be clearly stated, along with a clear explanation of why such a dichotomy would alleviate the described problem. The definitions should have tangible and unambiguous qualitative criteria.
Section 5.2: Constitutional amendment in respect of core statistics
Comments
The main proposal in the section is that the Seventh Schedule of the Constitution be amended to include “core statistics” in the Union List. This would give the Parliament the legislative competence to regulate the collection, storage, publication and sharing of such statistics, and the Central Government the power to enforce such legislation. Annex-II provides a tentative list of what would constitute “core statistics”; as is apparent, this list is wide-ranging and consists of over 120 items which span the gamut of administrative responsibilities.
The list includes items such as “Landholdings Number, area, tenancy, land utilisation [...]” (S. No. 21) and “Statistics on land records” (S. No. 111), while most responsibilities of land regulation currently lie with the States. Similarly, items in Annex-II venture into statistics related to petroleum, water, agriculture, electricity and industry, some of which are in the Concurrent or State List.
Statistics are metadata. There is no reason why the administration of a particular subject should lie with the State while the regulation of data about that subject lies solely with the Central Government. It is important to recognise that adding the vaguely defined “core statistics” to the Union List, while enabling the Central Government to plan and execute such statistical exercises, will also prevent the States from enacting any legislation that regulates the management of statistics regarding their own administrative responsibilities.
The regulation of State Government records in general has been a contentious issue, and its place in our federal structure has been debated several times in the Parliament: the enactment of Public Records Act, 1993; the Right to Information Act, 2005; and the Collection of Statistics Act, 2008 are predicated on an assumption of such competence lying with the Parliament. However, it is equally important to recognise the role States have played in advancing transparency of Government records. For example, State-level Acts analogous to the Right to Information Act existed in Tamil Nadu and Karnataka before the Central Government enactment.
Recommendations
We strongly recommend that “statistics” be included in the Concurrent List, so that States are free to enact progressive legislation which advances transparency and accountability, and is not in derogation of Parliamentary legislation.
The Ministry should view this statistical policy document as a venue to set the minimum standards for the collection, handling and publication of statistics regarding its various functions. If the item is added to the Concurrent List, the States, through local legislation, will only have the power to improve on the Central standards, since in a case of conflict, State-level laws will be superseded by Parliamentary ones.
Section 5.3: Mechanism for regulating core statistics including auditing
Comments
The draft policy in Section 5.3.2 says, “[...] The Committee will be assisted by a Search Committee headed by the Vice-Chairperson of the NITI Aayog, in which a few technical experts could be included as Members.” The non-committal nature of the word ‘could’ in this statement detracts from the importance of having technical experts on this committee by making their inclusion optional. The policy also does not specify who has the power to include technical experts as Members in the Search Committee. The statement should specify either a minimum or an exact number of such members, and not use the non-committal word “could”.
The National Statistical Development Council, as mentioned in Section 5.3.9, is supposed to “handle Centre-State relations in the areas of official statistics”, and “the Council should be represented by Chief Ministers of six States to be nominated by the Centre” (Section 5.3.10). The draft does not elaborate on the rationale behind including just six States in the Council, nor does it recommend any mechanism on the basis of which the Centre will nominate States to the Council.
Recommendations
The policy should recommend a minimum number of technical experts who must be included in the search committee, along with a clear process for how such members are to be appointed.
Additionally, the policy appropriately recognises the great diversity within India and the unique challenges faced by each State; each State therefore has its own unique requirements. Since the policy, in Section 5.3.11, recommends that the Council meet at a low frequency of at least once a year, all States should be represented in the Council.
Section 5.4: Official Machinery to implement directions on core statistics
Comments
The functions of the Statistics Wing in the MOSPI, laid out in Section 5.4.7, include advisory functions which overlap with the functions of the National Statistical Commission (NSC) mentioned in Section 5.3.5. Some regulatory functions of the Statistics Wing, like “conducting quality checks and auditing of statistical surveys/data sets”, overlap with the regulatory functions of the NSC mentioned in Section 5.3.7.
In Section 5.3.1, the draft policy explicitly states that “what is feasible and desirable is that production of official statistics should continue with the Government, whereas the related regulatory and advisory functions could be kept outside the Government”. But the Statistics Wing is a part of the Government, and it too has been given regulatory and advisory functions. This will adversely affect the authority of the NSC as an autonomous body.
There are inconsistencies in the draft policy regarding the importance of, and need for, a decentralised statistical system. Section 3 [Objectives] emphasises that the Indian Statistical System shall function within a decentralised structure. But in Section 5.4.15, the draft says that a decentralised statistical system poses a variety of problems, and advocates for a unified statistical system. Again, in Section 5.15, the draft emphasises the development of sub-national statistical systems. These views are inconsistent and create confusion regarding the nature of the statistical system that the policy wants to pursue.
Recommendations
The functions of the NSC should be kept in its exclusive domain. Any such overlapping functions should be allocated to one agency taking into consideration the Fundamental Principles on Official Statistics.
The inconsistencies regarding the decentralisation philosophy of the statistical system should be addressed.
Section 5.5: Identifying statistical products required through committees
Comments
While Section 5.5.2 recognises data confidentiality as a goal for statistical coordination, it does not take into account the violation of privacy that might occur due to the sharing of data. For example, a certain individual might agree to share personal information with a particular Ministry, but have apprehensions about it being shared with other Ministries or private parties.
Recommendations
We recommend that point 4 in Section 5.5.2 be read as, “enabling sharing of data without compromising the privacy of individuals and the confidentiality/security of data.” The value of individual privacy stems both from the recent Supreme Court judgment that affirmed privacy as a Fundamental Right, and from Principle 6 of the Fundamental Principles of Official Statistics. Realising privacy as a goal in this section will add a realm of individual control that is already articulated in Section 5.7.9.
Annex-VII: Guidelines on Outsourcing statistical activities
Comments
Section 6 defines “sensitive information” as a closed list and leaves no room for the further inclusion of information that may be interpreted as sensitive. For example, biometric data has not been listed as “sensitive information”.
In Section 9.1, the draft says, “[t]he identity of the Government agency and the Contractor may be made available to informants at the time of collection of data”. It is imperative that informants have the right to verify the identity of the Government agency and the Contractor before parting with their personal information.
Recommendations
The definition of “sensitive information” should be broad-based with scope for further inclusion of any kind of data that may be deemed “sensitive.”
Section 9.1 must mandate that the identity of the Government agency and the Contractor be made available to informants at the time of collection of data.
Section 9.6 can be redrafted to state that each informant must be informed of the manner in which the informant can access the data collected from them in a statistical project, as also of the measures taken to deny others access to that information, except in the cases specified by the policy.
Section 10.2 can be improved to state that if information exists in a physical form that makes the removal of the identity of informants impracticable (e.g. on paper), the information should be recorded in another medium and the original records must be destroyed.
Network Disruptions Report by Global Network Initiative
The report by Global Network Initiative can be read here.
However, S.144 of the Criminal Procedure Code as well as Section 5 of the Telegraph Act are still used as legal grounds. The former targets unlawful assembly, while the latter gives authorities the right to prevent the transmission of messages, applicable to messages sent over the Internet as well. A case in the Gujarat High Court challenging the validity of using S.144 of the CrPC was dismissed, the court essentially stating that the Government could use the section to enforce shutdowns to maintain law and order.
The right to the Internet has been accepted as a fundamental right by the United Nations, and one which cannot be disassociated from the exercise of freedom of expression and opinion and the right to peaceful assembly. These are rights guaranteed by the Constitution and affirmed in the Universal Declaration of Human Rights, and thus should be protected both online and offline. Online movements are unpredictable and dynamic, making Governments fearful of their lack of control over content-hosting websites. This fear becomes their de facto perception of online services, resulting in network shutdowns regardless of the reality on the ground.
Given the rising importance of this issue, the Global Network Initiative has published a report on such network disruptions by Jan Rydzak. A former Google Policy Fellow and now a PhD candidate at the University of Arizona, Rydzak conducts research on the nexus between technology and protest. The report, which uses India as a case study, calls for more attention to network disruptions, the 'new form of digital repression', and delves into their impact on human rights. Rydzak aims to widen the gamut of affected rights by discussing civil and political rights such as freedom of assembly, the right to equality, and freedom of religious belief. These ramifications have not been widely discussed so far, and the report helps shine a light on the collateral damage incurred due to these shutdowns. Through a multitude of interviews with various stakeholders, the author brings to the forefront the human rights implications of network disruptions for different groups of individuals such as women, immigrants and certain ethnic groups. These dangers are even greater for vulnerable populations, and the report does a comprehensive analysis of all of the above.
NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy
The 115-page discussion paper attempts to be an all-encompassing document looking at a host of AI-related issues including privacy, security, ethics, fairness, transparency and accountability. The paper identifies five focus areas where AI could have a positive impact in India. It also focuses on reskilling as a response to the potential problem of job loss due to the future large-scale adoption of AI in the job market. This blog is a follow-up to the comments made by CIS on Twitter on the paper and seeks to reflect on the National Strategy as a well-researched AI roadmap for India. In doing so, it identifies areas that can be strengthened and built upon.
Identified Focus Areas for AI Intervention
The paper identifies five focus areas (Healthcare, Agriculture, Education, Smart Cities and Infrastructure, and Smart Mobility and Transportation) which NITI Aayog believes will benefit most from the use of AI in bringing about social welfare for the people of India. Although these sectors are essential to the development of a nation, the failure to include the manufacturing and services sectors is an oversight. Focussing on manufacturing is fundamental not only in terms of economic development and user base, but also regarding questions of safety and the impact of AI on jobs and economic security. The same holds true for the services sector, particularly since AI products are being made for the use of consumers, not just businesses. Use of AI in the services sector also raises critical questions about user privacy and ethics. Another sector the paper fails to include is defence; this is worrying since India is chairing the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) in 2018. Across sectors, the report fails to look at how AI could be utilised to ensure accessibility and inclusion for the disabled. This is surprising, as aid for the differently abled and accessibility technology was one of the 10 domains identified in the Task Force Report on AI published earlier this year. This should have been a focus point in the paper, as it aims to identify applications with maximum social impact and inclusion.
In its vision for the use of AI in smart cities, the paper suggests the adoption of a sophisticated surveillance system as well as the use of social media intelligence platforms to check and monitor people’s movement both online and offline to maintain public safety. This is at variance with constitutional standards of due process and the criminal law principles of reasonable ground and reasonable suspicion. Further, the use of such methods will pose issues of judicial inscrutability. From a rights perspective, state surveillance can directly interfere with fundamental rights including privacy, freedom of expression, and freedom of assembly. Privacy organisations around the world have raised concerns regarding increased public surveillance through the use of AI. Though the paper recognises the impact such uses would have on privacy, it fails to set out a strong and forward-looking position on the issue, such as advocating that such surveillance must be lawful and in line with international human rights norms.
Harnessing the Power of AI and Accelerating Research
One of the ways suggested for the proliferation of AI in India is to increase research, both core and applied, to bring about innovation that can be commercialised. To attain this goal, the paper proposes a two-tier integrated approach: the establishment of COREs (Centres of Research Excellence in Artificial Intelligence) and the ICTAI (International Centre for Transformational Artificial Intelligence). However, the roadmap to increase research in AI fails to acknowledge the principles of publicly funded research such as free and open source software (FOSS), open standards and open data. The report also blames the current Indian intellectual property regime for being “unattractive” and averse to incentivising research and adoption of AI. Section 3(k) of the Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component. The paper provides no concrete answer to the question of whether it should be permissible to patent algorithms, and if so, to what extent. Furthermore, there needs to be a standard, either in the CRI Guidelines or the Patents Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedent showing that patent rights are required to incentivise the creation of AI, innovative investment protection mechanisms with fewer negative externalities, such as compensatory liability regimes, would be more desirable. The report further fails to look at the issue holistically and recognise that facilitating rampant patenting can act as a barrier preventing smaller companies from using or developing AI. This is important to be cognisant of, given the central role of startups in the AI ecosystem in India, and because it can work against the larger goal of inclusion articulated by the report.
Ethics, Privacy, Security and Safety
In a positive step forward, the paper addresses a broader range of ethical issues concerning AI including transparency, fairness, privacy and security and safety in more detail when compared to the earlier report of the Task Force. Yet despite a dedicated section covering these issues, a number of concerns still remain unanswered.
Transparency
The section on transparency and opening the black box has several lacunae. First, AI that is used by the government must, to an acceptable extent, be available in the public domain for audit, if not released as Free and Open Source Software (FOSS). This should hold true in particular for uses that impinge on fundamental rights. Second, if AI is utilised in the private sector, there currently exists a right to reverse engineer within the Indian Copyright Act, which is not accounted for in the paper. Furthermore, if AI is involved either in the commission of a crime or the violation of human rights, or in the investigation of such transgressions, questions with regard to judicial scrutability of the AI remain. In addition to explainability, the source code must be made circumstantially available, since explainable AI alone cannot solve all the problems of transparency. Beyond availability of source code and explainability, a greater discussion is needed about the trade-off between a complex and potentially more accurate AI system (with more layers and nodes) and an AI system which is potentially less accurate but able to provide a human-readable explanation. It is notable that transparency within human-AI interaction is absent from the paper: key questions on transparency, such as whether an AI should disclose its identity to a human, have not been answered.
Fairness
With regard to fairness, the paper mentions how AI can amplify bias in data and create unfair outcomes. However, the paper neither suggests detailed or satisfactory solutions, nor does it deal with biased historical data in the Indian context. More specifically, there is no mention of regulatory tools to tackle the problem of fairness, such as:
- Self-certification
- Certification by a self-regulatory body
- Discrimination impact assessments
- Investigations by the privacy regulator
Such tools will need to proactively ensure inclusion, diversity, and equity in composition and decisions.
Additionally, with reference to correcting bias in AI, it should be noted that the technocratic view that systems will self-correct as an AI solution continues to be trained on larger amounts of data does not fully recognise the importance of data quality and data curation, and is inconsistent with fundamental rights. Policy objectives of AI innovation must be technologically nuanced and cannot come at the cost of an interim denial of rights and services.
Further, the paper does not deal with the existence of multiple definitions and principles of fairness, and the fact that building definitions into AI systems may often involve choosing one definition over another. For instance, it can be argued that the set of AI ethical principles articulated by Google is more consequentialist in nature, involving a cost-benefit analysis, whereas a human rights approach may be more deontological. In this regard, there is a need for interdisciplinary research involving computer scientists, statisticians, ethicists and lawyers.
Privacy
Though the paper underscores the importance of privacy and the need for privacy legislation in India, it limits the potential privacy concerns arising from AI to collection, inappropriate use of data, personal discrimination, unfair gain from insights derived from consumer data (the solution being to explain to consumers the value they gain in return), and unfair competitive advantage from collecting mass amounts of data (which is not directly related to privacy). In this way the paper fails to discuss the full implications AI might have for privacy, and fails to address the data rights necessary to enable the right to privacy in a society where AI is pervasive. The paper does not engage with emerging data protection principles such as the right to explanation and the right to opt out of automated processing, which directly relate to AI. Further, there is no discussion of issues such as data minimisation and purpose limitation, which some big data and AI proponents argue against. To that extent, there is a lack of appreciation of the difficult policy questions concerning privacy and AI. The paper is also completely silent on redress and remedy. Further, the paper endorses the seven data protection principles postulated by the Justice Srikrishna Committee; however, CIS has pointed out that these principles are generic and not specific to data protection. Moreover, the law chapter of IEEE’s ‘Global Initiative on Ethics of Autonomous and Intelligent Systems’ has been ignored in favour of the chapter on ‘Personal Data and Individual Access Control in Ethically Aligned Design’ as the recommended international standard. Ideally, both chapters should be recommended for a holistic approach to the issue of ethics and privacy with respect to AI.
AI Regulation and Sectoral Standards
The discussion paper’s approach towards sectoral regulation advocates collaboration with industry to formulate regulatory frameworks for each sector. However, the paper is silent on the possibility of reviewing existing sectoral regulation to understand whether it requires amendment. We believe this is an important option to consider, since amending existing regulation and standards often takes less time than formulating and implementing new regulatory frameworks. Furthermore, although the emphasis on awareness in the paper is welcome, awareness must complement regulation and be driven by all stakeholders, especially given India’s limited regulatory budget. Over-reliance on industry self-regulation, by itself, is not advisable, as there is an absence of robust industry governance bodies in India, and self-regulation raises questions about the strength and enforceability of such practices. The privacy debate in India has recognised this, and reports like the Report of the Group of Experts on Privacy recommend a co-regulatory framework, with industry developing binding standards that are in line with the national privacy law and that are approved and enforced by the Privacy Commissioner. That said, the UN Guiding Principles on Business and Human Rights and its “protect, respect, and remedy” framework should guide any self-regulatory action.
Security and Safety of AI Systems
In terms of the security and safety of AI systems, the paper seeks to shift the discussion of accountability from being primarily about liability to being about the explainability of AI. Furthermore, there is no recommendation of immunities or incentives for whistleblowers or researchers to report privacy breaches and vulnerabilities. The report also does not recognise certain uses of AI as being more critical than others because of their potential harm to humans; this would include uses in healthcare and autonomous transportation. A key component of accountability in these sectors will be the evolution of appropriate testing and quality assurance standards. Only then should safe harbours be discussed as an extension of the negligence test for damage caused by AI software. Additionally, the paper fails to recommend kill switches, which should be mandatory for all kinetic AI systems. Finally, there is no mention of a mandatory human-in-the-loop in all systems where there are significant risks to safety and human rights. Autonomous AI is viewed only as an economic boost, but its potential risks have not been explored sufficiently. A welcome recommendation would be for all autonomous AI to go through human rights impact assessments.
Research and Education
Being a government think-tank, the NITI Aayog could have dealt in detail with the AI policies of the government and looked at how different arms of the government aim to leverage AI and tackle the problems arising out of its use. In addition to tabulating the government’s role in each area, especially research, the report could also have listed the various ways in which each department could play a role in the AI ecosystem through regulation, education, funding research and so on. In its recommendations for introducing AI curricula in schools and colleges, the government could also ensure that ethics and rights are part of the curriculum, especially in technical institutions. A possible course of action could include corporations paying for a pan-Indian AI education campaign. This would also require the government to formulate an academic curriculum updated to include rights and ethics.
Data Standards and Data Sharing
Based on the amount of data the Government of India collects through its numerous schemes, it has the potential to be the largest aggregator of data specific to India. However, the paper does not consider the use of this data with enough gravity. For example, the paper recommends Corporate Data Sharing for “social good” and making government datasets from the social sector available publicly, yet this section does not mention privacy-enhancing technologies and standards such as pseudonymisation, anonymisation standards, and differential privacy. Additionally, there should be provisions that allow the government to prevent the formation of monopolies by restricting companies from hoarding user data. The open data standards could also be made applicable to private companies, so that they too share their data in compliance with the privacy-enhancing technologies mentioned above. The paper also acknowledges that AI marketplaces require monitoring and maintenance of quality. It recognises the need for “continuous scrutiny of products, sellers and buyers”, and proposes that the government enable these regulations in a manner that allows private players to set up the marketplace. This is a welcome suggestion, but the legal and ethical framework of the AI marketplace requires further discussion and clarification.
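Two of the privacy-enhancing techniques named above can be sketched briefly. The following Python fragment is an illustrative sketch under assumed parameters, not a reference implementation: it pseudonymises a direct identifier with a salted one-way hash, and releases an aggregate count with Laplace noise calibrated for epsilon-differential privacy (a counting query has sensitivity 1, so the noise scale is 1/ε). The field names, salt and dataset are hypothetical.

```python
# Sketch: pseudonymisation of identifiers plus a differentially
# private count over a (hypothetical) government dataset.
import hashlib
import random

SALT = b"custodian-secret-salt"  # hypothetical salt held by the data custodian


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]


def dp_count(records, predicate, epsilon=0.5):
    """True count plus Laplace(1/epsilon) noise (count sensitivity = 1)."""
    true_count = sum(1 for r in records if predicate(r))
    # Laplace noise as the difference of two exponential draws.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise


records = [
    {"id": pseudonymise(f"citizen-{i}"), "employed": i % 3 == 0}
    for i in range(1000)
]
noisy = dp_count(records, lambda r: r["employed"], epsilon=0.5)
# noisy hovers near the true count while masking any single individual
```

Salted hashing prevents casual re-identification in a shared dataset, while the calibrated noise bounds how much any one informant's record can shift a published aggregate, which is exactly the kind of standard a data-sharing section could mandate.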
An AI Garage for Emerging Economies
The discussion paper also qualifies India as an “ideal test-bed” for trying out AI-related solutions. This is problematic, since questions of AI regulation in India have yet to be legally clarified and defined, and India does not have a comprehensive privacy law. Without a strong ethical and regulatory framework, the use of new and possibly untested technologies in India could lead to unintended and possibly harmful outcomes. The government's ambition to position India as a leader amongst developing countries on AI-related issues should not be achieved by using Indians as test subjects for technologies whose effects are unknown.
Conclusion
In conclusion, NITI Aayog’s discussion paper represents a welcome step towards a comprehensive AI strategy for India. However, the trend of inconspicuously releasing reports (this paper and the AI Task Force report before it), as well as the lack of a call for public comments, seems the wrong way to foster discussion on emerging technologies that will be as pervasive as AI.
Blanket recommendations are provided without examining their viability in each sector. Furthermore, the discussion paper does not sufficiently explore or, at times, completely omits key areas. It barely touches upon societal, cultural and sectoral challenges to the adoption of AI, research that CIS is currently in the process of undertaking. Future reports on Indian AI strategy should pay more attention to the country’s unique legal context and to possible defence applications, and take the opportunity to establish a forward-looking, human-rights-respecting, and holistic position in global discourse and developments. Reports should also consider infrastructure investment as an important prerequisite for AI development and deployment. Digitised data and connectivity, as well as more basic infrastructure such as rural electricity and well-maintained roads, require more funding to successfully leverage AI for inclusive economic growth. Although there are important concerns, the discussion paper is an aspirational step toward India’s AI strategy.
Why NPCI and Facebook need urgent regulatory attention
The article was published in the Economic Times on June 10, 2018.
As the network effects compound, disruptive acceleration hurtles us towards financial utopia or dystopia. Our fate depends on what we get right and what we get wrong with the law, code and architecture, and the market.
The Internet, unfortunately, has completely transformed from how it was first architected: from a federated, generative network based on free software and open standards into a centralised environment with an increasing dependency on proprietary technologies.
In countries like Myanmar, some citizens mistake a single social media website, Facebook, for the internet, according to LirneAsia research. India is another market where Facebook could still get its brand mistaken for access itself by some users coming online. This is why Facebook put so many resources into the battle over Free Basics in the run-up to India’s network neutrality regulation. Facebook is an odd corporation.
On one hand, its business model is what some term surveillance capitalism. On the other hand, by acquiring WhatsApp and by keeping end-to-end (E2E) encryption “on”, it has ensured that one and a half billion users can concretely exercise their right to privacy. At the time of the acquisition, WhatsApp’s founders believed Facebook’s promise that it would never compromise on their high standards of privacy and security. But 18 months later, Facebook started harvesting data and diluting E2E.
In April this year, my colleague Ayush Rathi and I wrote in Asia Times that WhatsApp no longer deletes multimedia on download but continues to store it on its servers. Theoretically, using the very same mechanism, Facebook could also be retaining encrypted text messages and comprehensive metadata from WhatsApp users indefinitely without making this obvious.
My friend, Srikanth Lakshmanan, founder of the CashlessConsumer collective, is a keen observer of this space. He says in India, “we are seeing an increasing push towards a bank-led model, thanks to National Payments Corporation of India (NPCI) and its control over Unified Payments Interface (UPI), which is also known as the cashless layer of the India Stack.”
NPCI is best understood as a shape shifter. Arundhati Ramanathan puts it best when she says “depending on the time and context, NPCI is a competitor. It is a platform. It is a regulator. It is an industry association. It is a profitable non-profit. It is a rule maker. It is a judge. It is a bystander.”
This results in UPI becoming, what Lakshmanan calls, a NPCI-club-good rather than a new generation digital public good. He also points out that NPCI has an additional challenge of opacity — “it doesn’t provide any metrics on transaction failures, and being a private body, is not subject to proactive or reactive disclosure requirements under the RTI.”
Technically, he says, UPI increases fragility in our financial ecosystem since it “is a centralised data maximisation network where NPCI will always have the superset of data.” Given that NPCI has opted for a bank-led model in India, it is very unlikely that Facebook will be able to leverage its monopoly in the social media market, or the duopoly it shares with Google in the digital advertising market, to become a digital payments monopoly.
However, NPCI and Facebook share the following traits: one, an insatiable appetite for personal information; two, a fetish for hyper-centralisation; three, a marginal commitment to transparency; and four, a poor track record as custodians of consumer trust. The marriage between these like-minded entities has already had a dubious beginning.
Previously, every financial technology company wanting direct access to the NPCI infrastructure had to have a tie-up with a bank. But for large players such as Facebook and Google, it was decided to introduce a multi-bank model. This was definitely the right thing to do from a competition perspective. But, unfortunately, the marriage between the banks and the internet giants was arranged by NPCI in an opaque process, and WhatsApp was exempted from the full NPCI certification process for its beta launch.
Both NPCI and Facebook need urgent regulatory attention. A modern data protection law and a more proactive competition regulator are required for Facebook. The NPCI will hopefully also be subjected to the upcoming data protection law. But it also requires a range of design, policy and governance fixes: greater privacy and security via data minimisation and decentralisation; greater accountability and transparency to the public; separation of powers for better governance; and open access policies to prevent anti-competitive behaviour.
Comments on the Draft Digital Communications Policy
Preliminary
On 1st May 2018, the Department of Telecommunications of the Ministry of Communications released the Draft Digital Communications Policy for comments and feedback. We laud the Government’s attempts to realise the socio-economic potential of India by increasing access to the Internet, and to draft a comprehensive policy while adequately keeping in mind the various security and privacy concerns that arise in online communication. On behalf of the Centre for Internet & Society (CIS), we thank the Department of Telecommunications for the opportunity to submit these comments on the draft policy.
We would like to point out two concerns with the consultation process: (i) a character limit imposed on the comments to each section, due to which this submission has to sacrifice providing comprehensive references to research; and (ii) issues with signing in on the MyGov platform where this consultation was hosted. We strongly recommend that the consultation process be liberal in accepting content, and allow for multiple types of submissions.
Comments
Connect India: Creating a Robust Digital Communication Infrastructure
Propel India: Enabling Next Generation Technologies and Services through Investments, Innovation, Indigenous Manufacturing and IPR Generation
On Strategies
2.2 (a) ii. Simplifying licensing and regulatory frameworks whilst ensuring appropriate security frameworks for IoT/ M2M / future services and network elements incorporating international best practices
The process of “simplifying” the licensing and regulatory regime is currently vague, and the intentions remain unclear. Simplifying licences without clear intentions can lead to losing the nuance in licence agreements required to maintain competitive markets. In recent months, the industry has already witnessed a dilution of provisions which were placed to ensure healthy competition in the sector. For example, on May 31st, the DoT announced new norms which now allow an operator to hold 35% of the total spectrum, as opposed to the earlier regulation, which only allowed a maximum holding of 25% of the total spectrum.
2.3 (d) (iii) Providing financial incentives for the development of Standard Essential Patents(SEPs) in the field of digital communications technologies
This is a welcome step by the government to incentivise the development of SEPs in the country. However, this appreciable step will only yield results in the long term, realistically not before a decade. It is equally necessary to improve the environment for licensing of SEPs in the short term. The government should take the initiative to create government-controlled patent pools for SEPs, which would solve licensing issues for SEP holders and improve the transparency of information relating to SEPs. Specifically, we recommend that the government initiate the formation of a patent pool of critical mobile technologies and apply a five percent compulsory license.
Secure India: Ensuring Digital Sovereignty, Safety and Security of Digital Communications
On Strategies
3.1 Harmonising communications law and policy with the evolving legal framework and jurisprudence relating to privacy and data protection in India
We welcome the Ministry’s intention to amend licence agreements to include data protection and privacy provisions. In the same vein, the Ministry should also consider removing provisions from licenses that prevent the operator from using certain encryption methods in its network. For example, Clause 2.2 (vii) of the License Agreement between DoT & ISP prohibits bulk encryption. Additionally, under the License Agreement, only encryption with key lengths of up to 40 bits in RSA (or an equivalent algorithm) is normally permitted. Similarly, Clause 37.1 of the Unified Service License Agreement prohibits bulk encryption. These provisions must be revised to ensure that ISPs and other service providers can employ more cryptographically secure methods.
When regulating encryption, we recommend that the government only set positive minimum mandates for the storage and transmission of data, and not set upper limits on key lengths or on the quality of cryptographic methods. In pursuance of the same goals, we also recommend adding a point ‘iii’ to 3.1 (b): “promoting the use of encryption in private communication by providing positive minimum mandates for strong encryption in (or along with) the data protection framework.”
3.2 (a) Recognising the need to uphold the core principles of net neutrality
Like other goals of the draft policy, the target for ensuring and enforcing net neutrality principles has been set as 2022. However, this goal is achievable by as early as December 2018. We suggest that the Government take the first step towards this goal by accepting the net neutrality principles proposed by the TRAI and its recommendations to the government which have been pending with the Ministry since November 2017. The government may additionally take into consideration CIS’ position on net neutrality.
The vaguely worded “appropriate exclusions and exceptions” carved out of the net-neutrality principles in the policy need urgent elaboration. Given the vague boundaries between the different control layers in digital communication, it is very easy to slip into content regulation, which the government needs to consciously avoid.
3.3 (f) ii. Facilitating lawful interception agencies with state of the art lawful intercept and analysis systems for implementation of law and order and national security
There is no clarity in the policy on how the government plans to meet the goal of “[f]acilitating lawful interception agencies with state of the art lawful intercept and analysis systems for implementation of law and order and national security.” It has recently been suggested that some legal provisions that enable targeted communication surveillance might be violative of the privacy guidelines laid out in the recent Supreme Court judgment that affirmed the Right to Privacy. Additionally, mass surveillance, prima facie, does not meet the “proportionality test.” Therefore, the policy document needs to detail how the Ministry will aid intelligence agencies, and whether these interception details will be made known to ISPs, TSPs and the public via reflection in the various License Agreements.
Comments on the Telecom Commercial Communications Customer Preference Regulations
Preliminary
This submission presents comments by the Centre for Internet & Society (“CIS”), India on ‘The Telecom Commercial Communications Customer Preference Regulations, 2018’ which were released on 29th May 2018 for comments and counter-comments.
CIS appreciates the intent and efforts of Telecom Regulatory Authority of India (TRAI) to curb the problem of Unsolicited Commercial Communication (UCC), or spam. Spam messages are constant irritants for telecom subscribers. Acknowledging the same, TRAI has proposed regulations which aim to empower subscribers in effectively dealing with UCC. CIS is grateful for the opportunity to put forth its views and comments on the regulations. This submission was made on 18th June 2018. This text has been slightly edited for readability.
The first part of the submission highlights some general issues with the regulations. While TRAI has offered a technological solution to the menace of UCC, the policy documents carry no accompanying technical details. TRAI has not made a compelling case for why Distributed Ledger Technologies (DLTs) should be used for storing data instead of a distributed database. There is no clarity on the technical aspects of the proposed DLTs: which nodes participate in the network, how these nodes arrive at a consensus, and whether they are independent of each other are questions that remain unanswered. The draft regulations also mention curbing robocalls, but the technical challenges associated with doing so have not been discussed. Spam which is non-commercial in nature remains out of the scope of the current regulations.
The second part of this submission puts forth specific comments related to various sections of the draft and suggests improvements therein. While CIS appreciates the extension of the deadline from 11th June to 18th June, we would like to highlight that the draft was released on 29th May, and despite the extension, the time to submit comments remained less than a month. Considering that the draft regulations hold significance for the entire telecom industry and nearly 1.5 billion subscribers, TRAI should have granted at least a month’s time for the stakeholders’ sound scrutiny.
General Comments
Distributed Ledger Technology (DLT)
The draft greatly emphasizes the fact that data regarding Consent, Complaints, Headers, Preferences, Content Template Register and Entities are stored on distributed ledgers. The intent is to keep data cryptographically secure with no centralized point of control. However, the regulations do not go into the technical details of the working of these distributed ledgers, leading to several potential pitfalls.
As per the draft, every access provider has to establish distributed ledgers for Complaints, Consent, Content, Preference, Header, Entities and so on. There are specific entities mentioned which will act as nodes in the network, and these nodes are preselected.
Whenever a sender seeks to send commercial communications across a list of subscribers, the list is ‘scrubbed’ against the DL-Consent and DL-Preference, to check whether the subscriber has given consent and registered their preference. The sender can only send the commercial communication to the numbers which are present in the scrubbed list.
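In effect, the scrubbing step described above is a filter over the sender’s target list. A minimal sketch in Python follows; all names (`scrub`, the register dictionaries, the category label) are our own illustration, since the regulations do not specify any API:

```python
# Illustrative sketch of "scrubbing" a sender's list against consent and
# preference records. All names here are assumptions for illustration.
def scrub(target_list, consent_register, preference_register, category):
    """Return only the numbers the sender may lawfully message: those with
    recorded consent AND a preference allowing this category."""
    cleaned = []
    for number in target_list:
        has_consent = consent_register.get(number, False)
        allowed_categories = preference_register.get(number, set())
        if has_consent and category in allowed_categories:
            cleaned.append(number)
    return cleaned

# Example: only the first subscriber has both consent and a matching preference.
consent = {"9800000001": True, "9800000002": True}
prefs = {"9800000001": {"banking"}, "9800000002": {"retail"}}
print(scrub(["9800000001", "9800000002", "9800000003"], consent, prefs, "banking"))
```

The point of the sketch is that scrubbing is a straightforward lookup; nothing in it inherently requires a distributed ledger, which is the concern raised below.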
The objective of these regulations is to protect consumers’ rights, but the consumer, i.e., the subscriber, is not a node in the distributed ledger. The primary benefits of decentralization are gained when trust is devolved to individual subscribers; since individual users are not specified as participating nodes in the ledger, the justification for a distributed ledger is unclear.
Additionally, the proposed regime requires the subscriber to place her trust in the access provider to register the complaint, and thus offers no tangible benefit over the current regulation. While there are penalties for non-compliant Access Providers (APs), there are no business incentives for APs to expend the extra resources required for the effective implementation of this technology, or to act in the users’ interest. This builds a system in which the APs’ interests clash with the subscribers’, but the APs are nonetheless required to be the guardians of the subscribers’ concerns.
Further, the nodes are entities constituted by the access providers (APs), and there is no mechanism to ensure that they behave independently of each other. In such case, it is wholly possible that all nodes on a distributed ledger are run by the same entity, thus defeating the purpose of establishing consensus. The proposed regulations do not address this scenario.
One solution would be to add subscribers as nodes to the DLT network. But this would be impractical, as the associated technical challenges, including generating public-private key pairs for each user and the computational complexity of the network, are immense. If this is indeed the intention of TRAI, it has not been spelled out clearly in the draft regulations. Additionally, in such a scenario, there would be no requirement for mandating every AP to maintain its own DLT for customer preference and consent artifacts.
Considering the points mentioned above, we request TRAI to publish the technical specifications of the DLTs, which address the following issues:
- Who can participate in the network other than the entities mentioned in the regulations? Are these participating entities independent of each other? If not, then how will the conflict of interest be resolved?
- What is the consensus algorithm used in the DLTs?
- Will the code to implement DLTs be open-source?
Our recommendations are three-fold in this regard:
- If a distributed ledger is used, mechanisms should be devised to ensure the integrity of the consensus. For this, the participating nodes in the network must be independent of each other. The aforementioned points regarding the consensus protocol should be taken into consideration as well.
- In place of DLTs, we recommend the use of a distributed database with signature-based authentication and encryption of the data to be stored. The immutability and non-repudiation of data can be achieved in this way. Distributed ledgers such as DL-Consent, DL-Preference and DL-Complaints are instances where authentication of data and subscriber can be done using simpler means such as OTP verification. Such ledgers therefore need not utilize DLTs.
- The regulations should mandate the open-source publication of the implementation of the DLTs. This will enable interoperability, add transparency to the functioning of the regulations, and enable security audits to ensure accountability of the APs.
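To make the recommended alternative concrete, a distributed database can achieve tamper-evidence by hash-chaining entries and authenticating each one with a key. The sketch below is illustrative only: a real deployment would use asymmetric signatures (e.g. Ed25519) so that entries are also non-repudiable; an HMAC stands in here purely to keep the example dependency-free, and all names are our own:

```python
import hashlib
import hmac
import json

# Illustrative append-only record: hash-chaining gives immutability
# (tampering with any entry breaks the chain), and a keyed tag gives
# authentication. HMAC is a stand-in for an asymmetric signature.
def append_entry(key: bytes, prev_digest: str, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.sha256((prev_digest + body).encode()).hexdigest()
    tag = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "digest": digest, "tag": tag}

def verify_entry(key: bytes, prev_digest: str, entry: dict) -> bool:
    body = json.dumps(entry["payload"], sort_keys=True)
    digest = hashlib.sha256((prev_digest + body).encode()).hexdigest()
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest == entry["digest"] and hmac.compare_digest(expected, entry["tag"])

key = b"access-provider-signing-key"
entry = append_entry(key, "", {"number": "9800000001", "consent": True})
print(verify_entry(key, "", entry))   # a valid entry verifies
tampered = dict(entry, payload={"number": "9800000001", "consent": False})
print(verify_entry(key, "", tampered))   # a modified payload is detected
```

The design choice worth noting is that such a database delivers the properties the draft wants from DLTs (integrity, auditability) without requiring a consensus protocol among nodes that may not be independent.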
Broadening the scope of the Regulations to non-commercial communication
The proposed regulations attempt specifically to curb unsolicited commercial communications as defined in Regulation 2(bt). But there are other forms of communication which are unsolicited and non-commercial, including political messages and market surveys.
We recommend that the scope of the regulations be broadened to include both commercial and non-commercial communications, and that both be grouped under the category of Institutional Communications. Wherever needed, changes should be made to the regulations dealing with UCC to suit the specific requirements of dealing with unsolicited non-commercial communications as well. At the same time, the regulations should ensure that individual communications are not brought within their ambit.
Technical challenges in combating Robocalls
Robocalls are defined in Regulation 2(ba), and in Schedule IV, provision 3, they have been clubbed with other kinds of spam. However, there are some specific technical challenges in regulating robocalls. Right now, ‘block listing’ is a prevalent model where one can identify a number and then block it so that it cannot be used further. But with robocalls, spoofing of other numbers is easily achievable, which makes blocking the real identity of the caller difficult. The proposed regulations do not adequately address this challenge.
The Alliance for Telecommunications Industry Solutions, with working groups of the Internet Engineering Task Force (IETF), has been working on a different approach to solve this problem. They are working on standards for all mobile and VoIP calling services which would enable them to do cryptographic digital call signing, “so calls can be validated as originating from a legitimate source, and not a spoofed robocall system. The protocols, known as ‘STIR’ and ‘SHAKEN,’ are in industry testing right now through ATIS's Robocalling Testbed, which has been used by companies like Sprint, AT&T, Google, Comcast, and Verizon so far”.
TRAI should take into account these developments and propose a specific regime accordingly. One possible way forward, for now, could be the banning of robocalls unless there is explicit opt-in by subscribers.
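The core idea behind these call-signing standards can be sketched simply: the originating provider signs the call’s origin, destination and timestamp, and the terminating provider verifies the token before trusting the caller ID. The sketch below is a loose illustration only, not the actual STIR/SHAKEN protocol: the real standards (RFCs 8224/8225) use ECDSA-signed JWTs with certificate chains, and an HMAC with a shared key stands in here to keep the example dependency-free:

```python
import base64
import hashlib
import hmac
import json
import time

# Loose illustration of cryptographic call signing. A shared-key HMAC is a
# stand-in for the certificate-based ECDSA signatures STIR/SHAKEN specifies.
def sign_call(key: bytes, orig: str, dest: str, ts: int) -> str:
    claims = json.dumps({"orig": orig, "dest": dest, "iat": ts}, sort_keys=True)
    tag = hmac.new(key, claims.encode(), hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claims.encode()).decode()
            + "." + base64.urlsafe_b64encode(tag).decode())

def verify_call(key: bytes, token: str) -> bool:
    claims_b64, tag_b64 = token.split(".")
    claims = base64.urlsafe_b64decode(claims_b64)
    expected = hmac.new(key, claims, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.urlsafe_b64decode(tag_b64))

token = sign_call(b"carrier-key", "+911234567890", "+919876543210", int(time.time()))
print(verify_call(b"carrier-key", token))   # a legitimately signed call verifies
print(verify_call(b"spoofer-key", token))   # a spoofer without the key cannot forge the tag
```

A spoofed number without a valid signature simply fails verification, which is why this approach addresses the spoofing problem that block lists cannot.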
Registration of content-template
The draft envisages a distributed ledger system for registration of content template which would have both a fixed part and a variable part. The content template needs to be registered by the content template registrar, which would be an authorized entity.
Problematically, the content template is defined to include the fixed part as well as the variable part. Further, Schedule I, provision 4(3)(e) mandates that content template registration functions should be utilized to extract fixed and the variable portion from actual messages offered for delivery or already delivered. The variable portion of the message contains information specific to a customer, as defined in regulation 2(q)(ii). In addition to privacy concerns with accessing the variable part, there is no functional reason for variable portions to be extracted from the actual message, as only the fixed portion needs to be verified.
The hash of the fixed portion of the message can be used to identify whether a user has received UCC or not. We therefore recommend that the variable portion of the message not be made accessible to these entities, as it is not required for the identification of a message as UCC.
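The point above can be sketched briefly: the fixed portion alone is enough both to fingerprint a registered template and to match delivered messages against it, so the customer-specific variable portion never needs to be read. The `{placeholder}` syntax below is assumed for illustration; the regulations do not prescribe one:

```python
import hashlib
import re

# Illustrative sketch: hash and match only the fixed portion of a
# content template. The {placeholder} syntax is an assumption.
def fixed_fingerprint(template: str) -> str:
    """Hash only the fixed portion of a registered content template."""
    fixed = "".join(re.split(r"\{[^}]*\}", template))
    return hashlib.sha256(fixed.encode()).hexdigest()

def matches_template(template: str, message: str) -> bool:
    """Check a delivered message against a template using only its fixed parts."""
    parts = re.split(r"\{[^}]*\}", template)
    pattern = ".*".join(re.escape(p) for p in parts)
    return re.fullmatch(pattern, message, re.DOTALL) is not None

template = "Dear {name}, your bill of Rs {amount} is due on {date}."
print(matches_template(template, "Dear Asha, your bill of Rs 450 is due on 30 June."))
```

Since identification never inspects what filled the placeholders, access to the variable portion adds privacy risk without any functional gain.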
‘Safe and Secure Manner’
Throughout the draft, reference is made to the data collected being stored and/or exchanged in a ‘safe and secure manner’, without any clarification as to what this term implies.
We recommend that the term be defined as ‘measures in accordance with reasonable security practices and procedures’ as given in section 43A of the Information Technology Act, 2000 read with rule 8 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011.
Bulk Registration
In India, evidence suggests that major victims of spam are the elderly and people with limited financial capacities. In such cases, consent and preference registration on behalf of these people by one person may help in the successful control of UCC.
Some telecom service providers have argued against this by emphasizing the individual choice of a subscriber. However, in cases where authorization is given by the customer, the primary user can register consent on his/her behalf. Similarly, since corporate connections are by definition owned and paid for by corporates, bulk registration in those situations can also be done.
We recommend that given the situation in India, the provision for bulk registration be incorporated in the regulations for specific scenarios, as mentioned above. An authorization template giving the nominee power to register on behalf of a class can be incorporated to this effect. Also, an opt-out option must be incorporated in case an individual choice differs from the choice registered in the bulk-registration.
Specific Comments
Inferred Consent [Regulation 2(k)(II)(A)]
Comments
Regulation 2(k)(ii)(a) of the Draft defines consent as “voluntary permission given by the customer to the sender to receive commercial communication”. However, the draft also includes “inferred consent”, which is defined as consent that can be “reasonably inferred from the customer’s conduct or the business and the relationship between the individual and the sender”.
When consent is derived from the customer’s conduct, rather than being given explicitly, it defeats its ‘voluntary nature’. The provision of consent being ‘reasonably inferred’ from the customer’s conduct is also vague, and there is no indication given in the draft as to what kind of conduct would lead to a reasonable inference of implied consent. The definition can also be interpreted to mean that customer’s conduct will be subject to monitoring, which raises privacy concerns.
Recommendations
Consent should not be derived from the customer’s conduct; it must be provided explicitly. We recommend that the definition of ‘inferred consent’ be amended accordingly.
Three years history to be stored in DL-Complaints [Regulations 24(3) and 24(4)]
Regulations 24(3) and (4) state that the distributed ledger for complaints (DL-Complaints) shall record a ‘three years history’ of both the complainant and the sender, with details of complaints made, their date and time, and the status of their resolution. It is not clear from the regulation whether this set of data is exhaustive or not.
Recommendations
We recognize that the legislative intent behind drafting Regulations 24(3) and (4) was to curb frivolous or false complaints, which has already been a concern of TRAI. Storing both the complainant’s and the sender’s history, in such cases, may aid in resolving these concerns.
The responsibility of the APs to ensure that the devices support the requisite permissions [Regulation 34]
Comments
Regulation 34 mandates that the APs are to ensure that the devices “registered in the network” support the requisite permissions for the Apps under these regulations.
In terms of jurisdiction, regulation of the functioning of electronic devices (which can be phones, tablets or smart watches) is outside the scope of the proposed regulations, and probably out of TRAI's regulatory competence.
Even if TRAI can impose the regulation on end devices, this regulation puts the burden on the APs to ensure that devices support the pertinent app permissions. Considering that TRAI itself has been weighing legal recourse against device manufacturers on similar grounds, it is unclear why TRAI assumes that APs have any legal or technical method to ensure control of a device which has neither been manufactured by them nor is it under their physical or remote control.
In modern smartphones, the end-user has full control over most app installations and permissions. This practice is consistent with a consumer’s autonomy over the device and its functioning. Considering that TRAI has not implemented basic security features in its own ‘Do Not Disturb’ app, TRAI is putting the privacy of millions of device owners at risk by legally mandating permissions for an app through the second proviso. The proviso further gives TRAI the power to order APs to derecognize devices from their networks. This regulation is draconian and inimical to the rights of consumers, who are at risk of losing network access and connectivity because of their choice of device, which belongs to a completely different business and market.
Recommendations
Reporting unsolicited messages or calls is a consumer right, and the regulations are in furtherance of the same goals. TRAI should enable this right by giving subscribers the option to report spam, but it has no reason to force users to report spam through possible legal overreach and privacy invasion. Accordingly, we recommend the removal of Regulation 34.
Additional Suggestions
Consumer and subscriber
The usage of the terms ‘customer’ and ‘subscriber’ in Regulation 3(1) implies that the terms have two different meanings. This interpretation, however, clashes with the definitions given in Regulations 2(u) and 2(bk), whereby a customer is a subscriber.
Either the definition of a ‘customer’ must be clarified or differentiated from that of a ‘subscriber’ in regulation 2, or regulation 3 must be amended to indicate what its actual object of regulation is - the customer or the subscriber.
Drafting misnumbering
There are a few instances of misnumbering of regulations, and of references to regulations which are non-existent.
Regulations 25(5)(b) and (c) make a reference to regulation 25(3)(a), which does not exist in the given draft. A bare reading of regulation 25, however, indicates that the intention was to refer to regulation 25(5)(a), and as such, this misnumbering should be rectified.
Regulation 34 makes a reference to regulation 7(2), which again, does not exist. In such case, either regulation 34 or regulation 7(2) must be amended to keep a consistent interpretation.
Ambiguous terms
‘Allocation and assignment principles and policies’ - Provision 4(1)(a) of Schedule I of the regulations indicates that header assignment should be done on the basis of ‘allocation and assignment principles and policies’, without any clarification of the meaning of this term. We recommend an amendment to this provision accordingly.
The AI Task Force Report - The first steps towards India’s AI framework
The blog post was edited by Swagam Dasgupta.
The Task Force’s Report, released on March 21st 2018, is a result of the combined expertise of members from different sectors and examines how AI will benefit India. It sheds light on the Task Force’s perception of AI, the sectors in which AI can be leveraged in India, the challenges endemic to India and certain ethical considerations. It concludes with a set of policy recommendations for the government to leverage AI for the next five years. While acknowledging AI as a social and economic problem solver, the Report attempts to answer three policy questions:
- What are the areas where government should play a role?
- How can AI improve quality of life and solve problems at scale for Indian citizens?
- What are the sectors that can generate employment and growth by the use of AI technology?
This blog will look at how the Task Force answered these three policy questions. In doing so, it gives an overview of salient aspects and reflects on the strengths and weaknesses of the Report.
Sectors of Relevance and Challenges
In order to navigate the outlined questions, the Report looks at ten sectors that it refers to as ‘domains of relevance to India’. Furthermore, it examines the use of AI along with its major challenges, and possible solutions for each sector. These sectors include: Manufacturing, FinTech, Agriculture, Healthcare, Technology for the Differently-abled, National Security, Environment, Public Utility Services, Retail and Customer Relationship, and Education. While these ten domains are part of the 16 domains of focus listed on the AITF’s web page, it would have been useful to know the basis on which these sectors were identified. A particular strength of the identified sectors is the consideration of technology for the differently abled, as well as the recognition given to the development of AI systems for spoken and sign languages in the Indian context.
Some of the problems endemic to India that were recognized include infrastructural barriers, managing scale and innovation, and the collection, validation and distribution of data. The Task Force also noted the lack of consumer awareness, and the inability of technology providers to explain benefits to end users, as further challenges. The Task Force, by putting the onus on the individual, seems to hint that the impediment to the uptake of technology is the inability of individuals to understand the benefits of the technology, rather than aspects such as poor design, opacity, or misuse of data and insights. Furthermore, although the Report recognizes the challenges associated with data in India and highlights the importance of the quality and quantity of data, it overlooks the importance of data curation in creating reliable AI systems.
Although the Report examines challenges to AI in each sector, it fails to include all the challenges that need to be addressed. For example, the Report fails to acknowledge challenges such as the lack of appropriate certification systems for AI-driven health systems and technologies. In the manufacturing sector, the Report fails to highlight contextual challenges associated with the use of AI, such as the deployment of autonomous vehicles as compared to the use of industrial robots.
On the use of AI in retail, the Report, while examining consumer data and its respective regulatory policies, identified the issues as relating to definition, discrimination, data breaches, digital products, and safety awareness and reporting standards. In this, the Report is limited in its understanding of which categories of data can lead to discrimination, and it restricts mechanisms for transparency and accountability to data breaches. The Report could also have been more forward-looking in its position on security, including security by design and security by default. Furthermore, these issues were noted only in the context of the retail sector and ideally should have been discussed across all sectors.
The challenges for utilizing AI for national security could have been examined beyond cost and capacity to include associated ethical and legal challenges such as the need for legal backing. The use of AI in national security demands clear accountability and oversight as it is a ground for legitimate state interference with fundamental rights such as privacy and freedom of expression. As such, there is a need for human rights impact assessments, as well as a need for such uses to be aligned with international human rights norms. Government initiatives that allow country wide surveillance and AI decisions based on such data should ideally be implemented only after a comprehensive privacy law is in place and India’s surveillance regime has been revisited.
Recognizing the potential of AI for the benefit of the differently abled is one of the key takeaways from this section of the Report. Furthermore, it also brings in the need for AI inclusivity. AI in natural language generation and translation systems has the potential to help the large number of youth who are disabled or deprived. Therefore, AI could have a large positive impact through inclusive growth and empowerment.
Although the Report examines each of the ten domains in an attempt to provide an insight into the role the government can play, there is a lack of clarity about the role that each department is playing, and will play, with respect to AI. Even the section which lays down the relevant ministries for each of the ten domains fails to include key ministries and departments. For example, the Report does not identify the Ministry of Education, nor does it list the Ministry of Law for national security. The Report could also have identified the government departments that would be responsible for regulation and standardization, such as the Medical Council of India (healthcare), CII (manufacturing and retail) and RBI (fintech). The Report also does not recognize other developments around AI emerging out of the government. For example, the Draft National Digital Communications Policy (published on May 1, 2018) seeks to empower the Department of Telecommunications to provide a roadmap for AI and robotics. Along similar lines, the Department of Defence Production created a task force earlier this year to study the use of AI to accelerate military technology and economic growth. The government should look at building a cohesive government body for AI, or clearly delineating the role of each ministry, to ensure harmonization going forward.
Areas in need of Government Intervention
The Report also lists the grand challenges where government intervention is required, including data collection and management and the need for widespread expertise contributing to research, innovation, and response. However, while highlighting the need for AI experts from diverse backgrounds, it fails to include experts from law and policy in the discussion. And while identifying manufacturing, agriculture, healthcare and public utilities as areas where government intervention is needed, the Report does not examine national security beyond noting it as an important domain for India, rather than also treating it as a sector where government intervention is needed.
Participation in International Forums
Another relevant concern that the Report underscores is India’s scarce participation in global discussions around AI, whether as researchers, as AI developers or through government engagement. The Report states that although Indian universities were making efforts to increase their presence at international AI conferences, they were lagging behind other nations. On participation by the government, it recommends a regular presence in international AI policy forums, emphasising the need for India’s active participation in global conversations around AI and in international rulemaking.
Key Enablers to AI
While analysing the key enablers for AI deployment in India, the Report states that positive societal attitudes will be the driving force behind the proliferation of AI. However, relying on positive societal attitudes alone will not increase trust in AI; beyond highlighting success stories, steps such as making public the algorithms used by public bodies and enacting a data protection law will be important in building that trust.
Data and Data Marketplaces
While the Report identifies data as a challenge where government intervention is needed, it also points to the Aadhaar ecosystem as an enabler. It states that Aadhaar will help the proliferation of AI in three ways: as a creator of jobs related to the collection and digitization of data, as a collector of reliable data, and as a repository of Indian data. However, since the very constitutionality of Aadhaar is yet to be determined by the Supreme Court, the Task Force should have used caution in identifying Aadhaar as a definitive solution, especially while stating that Aadhaar, along with the Supreme Court judgement, has created adequate frameworks to protect consumer data. Additionally, the Task Force should have recognized the various concerns that have been voiced about Aadhaar, particularly in the context of the case before the Supreme Court.
This section also proposes the creation of a Digital Data Marketplace. A data marketplace needs to be framed carefully so as to not create a situation where privacy becomes a right available to only those who can afford it. It is concerning that the discussion on data protection and privacy in the Report is limited to policies and guidelines for businesses and not centered around the individual.
Innovation and Patents
The Report states that Indian startups working in the field of AI must be encouraged, and that industry collaborations and funding must be taken up as a policy measure. One way to achieve this is by encouraging innovation, and one way to do that is by attaching a commercial incentive, such as IP rights. Although the Report calls for a stronger IP regime that protects and incentivises innovation, it remains ambiguous as to which aspects of IP rights (patents, trade secrets or copyrights) need significant changes. If the Report is specifically advocating stronger patent rights to match those of China and the US, this suggests that the Task Force fails to understand the finer aspects of Indian patent law and the history behind India’s stance on patenting, including the fact that Indian patent law excludes algorithms from being patented. By setting a higher threshold for patenting computer related inventions (CRIs), Indian patent law ensures that only truly innovative patents are granted. Given the controversies over CRIs that have dotted the Indian patent landscape, the Task Force would have done well to provide more clarity on the ‘how’ and ‘why’ of patenting in this sector, if that is the intent of this suggestion.
Ethical AI framework
Responsible AI
In terms of establishing an ethical AI framework, the Task Force suggests measures such as making AI explainable, transparent, and auditable for biases. The Report acknowledges that, with increasing interaction between humans and AI, new standards need to be set for the deployment of AI, as well as industrial standards for robots. However, the Report does not go into detail on how AI could cause further bias based on identifiers such as gender and caste, or on the myriad concerns around privacy and security. This is especially concerning given that the Report envisions widespread use of AI in all major sectors. The Report thus looks at data as both a challenge and an enabler, but fails to dedicate time to explaining the various ethical considerations behind the collection and use of data in the context of privacy, security and surveillance, or to account for unintended consequences. In laying out the ethical considerations associated with AI, the Report also does not distinguish between the use of AI by the public sector and by the private sector. As the government is responsible for ensuring the rights of citizens and holds more power than the citizenry, the public sector needs to be more accountable in its use of AI, especially where AI is proposed to be used for sovereign functions such as national security.
Privacy and Data
The Report also recognises the significance, for the development and use of AI in India, of the implementation of the Aadhaar Act, the privacy judgement and the proposed data protection laws. Yet it does not seem to recognize the importance of a robust and multi-faceted privacy framework, as it assumes that the Aadhaar Act, the Supreme Court judgement on privacy and a potential privacy law have already created a basis for the safe and secure use and sharing of customer data. Although the Report attempts an expansive examination of the various aspects of AI for India, it unfortunately does not look in depth at current issues and debates around AI, privacy and ethics, and it makes policy recommendations without appearing to fully reflect on their implementation and potential impact. Like the discussion paper by the Niti Aayog, this Report does not consider emerging principles of data protection that directly relate to AI, such as the right to explanation and the right to opt out of automated processing. Furthermore, there is a lack of discussion of issues such as data minimisation and purpose limitation, which some big data and AI proponents argue against.
Liability
On the question of liability, the Report only states that specific liability mechanisms need to be worked out for certain categories of machines. It does not address the questions of liability that should apply to all AI systems, or of where the duty of care lies, not only for robots but also for automated decision making. There is thus a need for further thinking on mechanisms for determining liability and how these could apply to different types of AI (deep learning models and other machine learning models) and AI systems.
AI and Employment
On the topic of jobs and employment, the Report states that AI will create more jobs than it displaces, as a result of the increase in the number of companies and avenues created by AI technologies. The Report provides examples of jobs where AI could replace humans (autonomous vehicles, industrial robots, etc.) but does not go as far as envisioning what jobs could be created directly from this replacement. Though the Report recognizes emerging forms of work such as crowdsourcing platforms like MTurk, it fails to examine the impact of such models of work on workers and on traditional labour market structures and processes. Going forward, it will be important that the government and the private sector take the necessary steps to ensure that fair, protected, and fulfilling jobs are created alongside the adoption of AI. This will include revisiting national and organizational skilling programmes, labour laws, social benefit schemes and relevant economic policies, and exploring best practices for the adoption and integration of AI in work.
Education and Re-skilling
The Task Force emphasised the need for a change in the education curriculum, as well as the need to reskill the labour force to ensure an AI-ready future. This level of reskilling will be a massive effort, and a thorough review and audit of existing skilling programmes in India is needed before new programmes are established and financed. The Report also clarifies that the statistics used were based on a study of the IT component of the industry, and that a similar study is required to analyse AI’s effect on the automation component. Going forward, there is a need for a comprehensive study of labour-intensive sectors and of the formal and informal sectors, to develop evidence-based policy responses.
Policy Recommendations
The Task Force, in its policy recommendations, notes that the successful adoption of AI in India will depend on three factors: people, process and technology. However, it does not explain these three factors any further.
National Artificial Intelligence Mission
The most significant suggestion made in the Report is the establishment of a National Artificial Intelligence Mission (N-AIM), a centralised nodal agency for coordinating and facilitating research and collaboration and for providing economic impetus to AI startups. The mission, with a budget allocation of Rs 1,200 crore over five years, aims, among other things, to look at various ways to encourage AI research and deployment. Suggestions include targeting and prototyping AI systems and setting up a generic AI test bed, and seem to draw inspiration from other countries, such as the US DARPA Challenge and Japan’s sandbox for self-driving trucks. The establishment of N-AIM is a welcome step to encourage both AI research and development on a national scale, and the availability of public funds will encourage more of both. Additionally, government engagement in AI projects has thus far been fragmented, and a centralised body will presumably bring about better coordination and harmonization. Some of the initiatives, such as the Capture the Flag competition that centres on providing real datasets to catalyse innovation, will need to be implemented with appropriate safeguards in place.
Other recommendations
There are other suggestions that are problematic, particularly that of funding “an inter-disciplinary large data integration center in pilot mode to develop an autonomous AI Machine that can work on multiple data streams in real time and provide relevant information and predictions to public across all domains.” Before such a project is developed and implemented, legal clarity is required on a number of factors, a few being data collection and use, and the accuracy and quality of the AI system. There is also a need to ensure that bias and discrimination have been accounted for, and that fairness, responsibility and liability have been defined, bearing in mind that this would be a government-driven AI system. Additionally, such systems should be transparent by design and should include redress mechanisms for potential harms that may arise, whether through the presence of a human in the loop or the existence of a kill switch. These should be addressed through ethical principles, standards, and regulatory frameworks.
The recommendations propose establishing operation standards for data storage and privacy, communication standards for autonomous systems, and standards to allow for interoperability between AI based systems. A significant lacuna in this list is the development of safety, accuracy, and quality standards for AI algorithms and systems.
Similarly, although the proposed public private partnership model for research and startups is a good idea, this initiative should be undertaken only after questions such as the implications of liability, ownership of IP and data, and the exclusion of critical sectors are thought through.
Furthermore, the suggestion to ‘fund a national level survey on identification of cluster of clean annotated data necessary for building effective AI systems’ needs to recognize the existing initiatives around open data, or use them as a starting point. The Report does not clarify whether this survey would involve identifying data.
Conclusion
The inconspicuous release of the Report, as well as the lack of a call for public comments, means that the Report does not incorporate or reflect the sentiments of the public, or draw upon the expertise that exists in India on the topic of, and policies around, emerging technologies, which will have a pervasive and wide effect on society. The need for multi-stakeholder engagement and input cannot be overstated. Nonetheless, the Report of the Task Force is a welcome step towards a definitive AI policy. The Task Force has attempted to answer the three policy questions keeping people, process and technology in mind, but it could have provided greater detail about these factors. The Report, which is meant for a wider audience, would have done well to provide greater detail while also providing clarity on technical terms. On a definitional plane, a list of the technologies that the Task Force perceived as AI for this Report could also have helped keep it grounded in possible and plausible five-year recommendations.
Compared to the recent Niti Aayog discussion paper, this Report misses out on a detailed explanation of AI and ethics; however, it does spend considerable time on education and on the use of AI for the differently abled. Additionally, the Report’s statements on the democratization of development and equal access, and on assigning ownership and framing transparent rules for the usage of infrastructure, are positive steps towards making AI inclusive. Overall, the Report is a progressive step towards laying down India’s path forward in the field of Artificial Intelligence. The emphasis on India’s involvement in international rulemaking gives India an opportunity to lead on best practice in international forums by adopting forward-looking and human-rights-respecting practices. Whether India will also become a strong contender in the AI race, with policies favouring the development of socio-economically beneficial and ethical-AI-backed industries and services, remains to be seen.
The Task Force consists of 18 members in total. Of these, 11 members are from the field of AI technology, in both research and industry, three are from the civil services, one is from healthcare research, one has an intellectual property law background, and two are from a finance background. The specializations of the members are not limited to one area, as the members have experience or education in various areas relevant to AI (https://www.aitf.org.in/). There is a notable lack of members from civil society. It may also be noted that only 2 of the 18 members are women.
The Report on the Artificial Intelligence Task Force, Pg. 1, http://dipp.nic.in/sites/default/files/Report_of_Task_Force_on_ArtificialIntelligence_20March2018_2.pdf
The Artificial Intelligence Task Force https://www.aitf.org.in/
The Report on the Artificial Intelligence Task Force, Pg. 8
The Report on the Artificial Intelligence Task Force, Pg. 9,10.
The Report on the Artificial Intelligence Task Force, Pg. 9
Artificial Intelligence in the Healthcare Industry in India https://cis-india.org/internet-governance/files/ai-and-healtchare-report
Artificial Intelligence in the Manufacturing and Services Sector https://cis-india.org/internet-governance/files/AIManufacturingandServices_Report_02.pdf
The Report on the Artificial Intelligence Task Force, Pg. 21.
Submission to the Committee of Experts on a Data Protection Framework for India, Centre for Internet and Society https://cis-india.org/internet-governance/files/data-protection-submission
The Report on the Artificial Intelligence Task Force, Pg. 22
Draft National Digital Communications Policy-2018, http://www.dot.gov.in/relatedlinks/draft-national-digital-communications-policy-2018
Task force set up to study AI application in military, https://indianexpress.com/article/technology/tech-news-technology/task-force-set-up-to-study-ai-application-in-military-5049568/
It is not just technical experts that are needed; ethical and legal experts, as well as domain experts, need to be part of the decision-making process.
The Report on the Artificial Intelligence Task Force, Pg. 31
Constitutional validity of Aadhaar: the arguments in Supreme Court so far, http://www.thehindu.com/news/national/constitutional-validity-of-aadhaar-the-arguments-in-supreme-court-so-far/article22752084.ece
CIS Submission to TRAI Consultation on Free Data http://trai.gov.in/Comments_FreeData/Companies_n_Organizations/Center_For_Internet_and_Society.pdf
The Report on the Artificial Intelligence Task Force, Pg. 30
Section 3(k) of the Patents Act provides that a mere mathematical or business method, or a computer programme or algorithm, cannot be patented.
Patent Office Reboots CRI Guidelines Yet Again: Removes “novel hardware” Requirement, https://spicyip.com/2017/07/patent-office-reboots-cri-guidelines-yet-again-removes-novel-hardware-requirement.html
The Report on the Artificial Intelligence Task Force, Pg. 37
The Report on the Artificial Intelligence Task Force, Pg. 7
The Report on the Artificial Intelligence Task Force, Pg. 8
National Strategy for Artificial Intelligence: http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf
Meaningful information and the right to explanation, Andrew D. Selbst and Julia Powles, International Data Privacy Law, Volume 7, Issue 4, 1 November 2017, Pages 233–242
The Principle of Purpose Limitation and Big Data, https://www.researchgate.net/publication/319467399_The_Principle_of_Purpose_Limitation_and_Big_Data
For example, a lower threshold of minimum wages, no job security, etc., https://blogs.scientificamerican.com/guilty-planet/httpblogsscientificamericancomguilty-planet20110707the-pros-cons-of-amazon-mechanical-turk-for-scientific-surveys/
The Report on the Artificial Intelligence Task Force, Pg. 41
The Report on the Artificial Intelligence Task Force, Pg. 46, 47
The DARPA Challenge, https://www.darpa.mil/program/darpa-robotics-challenge
Japan may set regulatory sandboxes to test drones and self driving vehicles http://techwireasia.com/2017/10/japan-may-set-regulatory-sandboxes-test-drones-self-driving-vehicles/
Mariana Mazzucato, in her 2013 book The Entrepreneurial State, argued that it is the government that drives technological innovation. In her book she stated that high-risk discovery and development were made possible by government spending, which private enterprises capitalised on once the difficult work was done.
https://tech.economictimes.indiatimes.com/news/technology/govt-of-karnataka-launches-centre-of-excellence-for-data-science-and-artificial-intelligence/61689977, https://analyticsindiamag.com/amaravati-world-centre-for-ai-data/
The Report on the Artificial Intelligence Task Force, Pg. 47
The Report on the Artificial Intelligence Task Force, Pg. 49
The Report on the Artificial Intelligence Task Force, Pg. 47
The AI task force website has a provision for public comments although it is only for the vision and mission and the domains mentioned in the website.
National Strategy for Artificial Intelligence: http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf
CIS contributes to the Research and Advisory Group of the Global Commission on the Stability of Cyberspace (GCSC)
Chaired by Marina Kaljurand, and co-chaired by Michael Chertoff and Latha Reddy, the Commission comprises 26 prominent Commissioners: experts hailing from a wide range of geographic regions and representing multiple communities, including academia, industry, government, the technical community and civil society.
As a part of their efforts, the GCSC sent out a call for proposals for papers that sought to analyze and advance various aspects of the cyber norms debate.
Elonnai Hickok and Arindrajit Basu’s paper ‘Conceptualizing an International Security Architecture for Cyberspace’ was selected by the Commissioners and published as a part of the Briefings of the Research and Advisory Group.
Arindrajit Basu represented CIS at the Cyberstability Hearings held by the GCSC on the sidelines of the GLOBSEC forum in Bratislava, a multilateral conference seeking to advance dialogue on various issues of international peace and security.
The published paper and the PowerPoint presentation may be accessed here.
The agenda for the hearings is reproduced below.
GCSC HEARINGS, 19 MAY 2018
HEARINGS: TOWARDS INTERNATIONAL CYBERSTABILITY
Venue: “Habsburg” room, Grand Hotel River Park

15:00-15:15: Welcome Remarks by Marina Kaljurand, Chair of the Global Commission on the Stability of Cyberspace (GCSC) and former Foreign Minister of Estonia

15:15-16:45: Hearing I: Expert Hearing
This session focuses on the topic Cyberstability and the International Peace and Security Architecture and includes scene settings, food-for-thought presentations on the new GCSC commissioned research, briefings and open statements by government and nongovernmental speakers.
“Scene setting: Cyber Diplomacy in Transition” by Carl Bildt, former Prime Minister of Sweden
“Commissioned Research I: Lessons learned from three historical case studies on establishing international norms” by Arindrajit Basu, Centre for Internet and Society, India
“Commissioned Research II: The ‘pre-normative’ framework and options for cyber diplomacy” by Elana Broitman, New America Foundation
“Some Remarks on current thinking within the United Nations” by Renata Dwan, Director, United Nations Institute for Disarmament Research (UNIDIR)
(Registered statements by Government Advisors)
(Statements by other experts)
(Open floor discussion)

16:45-17:15: Coffee Break
ICANN Diversity Analysis
The by-laws of the Internet Corporation for Assigned Names and Numbers (ICANN) state that it is a non-profit public-benefit corporation responsible, at the overall level, for the coordination of the “global internet's systems of unique identifiers, and in particular to ensure the stable and secure operation of the internet's unique identifier systems”.[1] Previously, this was overseen by the Internet Assigned Numbers Authority (IANA) under a US Government contract, but in 2016 oversight was handed over to ICANN as a global multi-stakeholder body.[2] Given the significance of the multistakeholder nature of ICANN, it is imperative that stakeholders continue to question and improve the inclusiveness of its processes. This blog post focuses on the diversity of participation in the ICANN process.
As stakeholders are spread across the world, much of the communication discussing the work of ICANN takes place over email. Various mailing lists inform members of ICANN activities and are used for discussions between them on matters ranging from policy advice to organizational building. Many of these lists are public, and hence can be subscribed to by anyone and viewed by non-members through the archives.
CIS analysed the five most active working group mailing lists from January 2016 to May 2018, namely:
- Outreach & Engagement,
- Technology,
- At-Large Review 2015 - 2019,
- IANA Transition & ICANN Accountability, and
- Finance & Budget mailing lists.
We looked at the diversity among these active participants by focusing on their gender, stakeholder grouping and region. To arrive at the data, we referred to public records such as the Statements of Interest which members have to give to the Generic Names Supporting Organization (GNSO) Council if they want to participate in its working groups. We also used, where available, ICANN Wiki and the LinkedIn profiles of these participants. Given below are some of the observations we made after surveying the data. We acknowledge that there might be some inadvertent errors in the categorization of these participants, but we are of the opinion that our inferences from the data would not be drastically affected by a few errors.
The following findings were observed:
- A total of 218 participants were present on the 5 mailing lists that were looked at.
- Of these, 92 were determined to be active participants (participants who had sent more than the median number of mails in their working group), of which 75 were non-staff members.
Among the active non-staff participants:
- Out of the 75 participants, 56 (74.7%) were male and 19 (25.3%) were female.
- 57.3% were identified to be members of the industry and technological community and 1.3% were identified as government representatives. 8.0% were representatives from Academia, 25.3% represented civil society and the remaining 8.0% were from fields that were uncategorizable with respect to the above, but were related to law and consultancy.
- Only 14.7% of the participants were from Asia, while the largest shares belonged to Africa and North America, with 24% and 22.7% participation respectively.
- Within Asia, we identified only one active participant from China.
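The filtering and breakdown steps behind these findings (an “active participant” is one who sent more than the median number of mails in their working group, and percentage shares are then computed over the active set) can be sketched in a few lines of Python. The records below are invented purely for illustration; they are not the actual survey data.

```python
from statistics import median

# Hypothetical sample records: (name, working_group, mails_sent, gender, region).
# These are illustrative only, NOT the actual survey data.
participants = [
    ("A", "Outreach & Engagement", 40, "male", "Africa"),
    ("B", "Outreach & Engagement", 5, "female", "Asia"),
    ("C", "Outreach & Engagement", 12, "male", "North America"),
    ("D", "Technology", 30, "female", "Europe"),
    ("E", "Technology", 2, "male", "Asia"),
]

# Median mail count per working group.
counts_by_group = {}
for _, group, mails, _, _ in participants:
    counts_by_group.setdefault(group, []).append(mails)
medians = {g: median(c) for g, c in counts_by_group.items()}

# Active participants: strictly more mails than their group's median.
active = [p for p in participants if p[2] > medians[p[1]]]

# Percentage share of a given attribute value among active participants.
def share(index, value):
    return round(100 * sum(1 for p in active if p[index] == value) / len(active), 1)

print(len(active), share(3, "female"), share(4, "Asia"))
```

On the real data, the same computation over the 75 active non-staff participants yields the gender and region percentages reported above.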
Concerns
- The vast majority of the people participating in, and by extension influencing, ICANN's work are male, constituting three-fourths of the participants.
- The mailing lists are dominated by individuals from industry. This, coupled with the relative minority presence of the other stakeholders, creates an environment where concerns emanating from other sections of society could be overshadowed.
- Only 14.7% of the participants were from Asia, which is concerning since 48.7% of internet users worldwide belong to Asia.[3]
- China, which has the world’s largest population of internet users (700 million people),[4] had only one active participant on these mailing lists.
ICANN, being a global multistakeholder organization, should ideally have the number of representatives from each region be proportionate to the number of internet users in that region. In addition, the participation of women on these mailing lists needs to increase to ensure inclusive contribution to the functioning of the organization. We did not come across any indication of participation by individuals of non-binary genders.
[1] https://cis-india.org/telecom/knowledge-repository-on-internet-access/icann
[2] https://www.icann.org/news/announcement-2016-10-01-en
[3] https://www.internetworldstats.com/stats.htm
[4] https://www.internetworldstats.com/stats3.htm
CIS submitted a response to a Notice of Enquiry by the US Government on International Internet Policy Priorities
The notice covered several areas, and we commented on the following three: the Free Flow of Information and Jurisdiction; the Multi-stakeholder Approach to Internet Governance; and Privacy and Security. The submission was made by Swagam Dasgupta and Akriti Bopanna. Read the submission here.
The submission broadly covered the following aspects:
The Free Flow of Information and Jurisdiction
- What are the challenges to the free flow of information online?
- Which foreign laws and policies restrict the free flow of information online? What is the impact on U.S. companies and users in general?
- Have courts in other countries issued internet-related judgments that apply national laws to the global internet? What have the effects been on users?
- What are the challenges to freedom of expression online?
- What should be the role of all stakeholders globally—governments, companies, technical experts, civil society and end users — in ensuring free expression online?
- What role can NTIA play in helping to reduce restrictions on the free flow of information over the internet and ensuring free expression online?
- In which international organizations or venues might NTIA most effectively advocate for the free flow of information and freedom of expression? What specific actions should NTIA and the U.S. Government take?
Multistakeholder Approach to Internet Governance
- Does the multistakeholder approach continue to support an environment for the internet to grow and thrive? If so, why? If not, why not?
- Are there public policy areas in which the multistakeholder approach works best? If yes, what are those areas and why? Are there areas in which the multistakeholder approach does not work effectively? If there are, what are those areas and why?
- Should the IANA Stewardship Transition be unwound? If yes, why and how? If not, why not?
- What should be NTIA’s priorities within ICANN and the GAC?
- Are there barriers to engagement at the IGF? If so, how can we lower these barriers?
- Are there improvements that can be made to the IGF’s structure?
Privacy and Security
- In what ways are cybersecurity threats harming international commerce? In what ways are the responses to those threats harming international commerce?
DIDP #31 Diversity of employees at ICANN
This data is being requested to verify ICANN’s claim of being an equal opportunity employer. ICANN’s employee handbook states that they “...provide equal opportunities and are committed to the principle of equality regardless of race, colour, ethnic or national origin, religious belief, political opinion or affiliation, sex, marital status, sexual orientation, gender reassignment, age or disability.” Data on the diversity of employees by race and nationality will show the extent to which ICANN has upheld its commitment to providing equal opportunities to its personnel and potential employees.
The request filed by CIS can be accessed here
The Centre for Internet and Society’s Comments and Recommendations to the: Indian Privacy Code, 2018
Click to download the file here
As India moves towards greater digitization and technology becomes even more pervasive, there is a need to ensure the privacy of the individual and to hold the private and public sectors accountable for the use of personal data. Towards enabling public discourse and furthering the development of a privacy framework for India, a group of lawyers and policy analysts backed by the Internet Freedom Foundation (IFF) have put together a draft citizens' bill encompassing a citizen-centric privacy code based on seven guiding principles.[1] This draft builds on the Citizens Privacy Bill, 2013, which had been drafted by CIS on the basis of a series of roundtables conducted in India.[2] Privacy is one of the key areas of research at CIS; we welcome this initiative and hope that our comments make the Act a stronger embodiment of the right to privacy.
Section by Section Recommendations
Preamble
Comment: The Preamble specifies that the need for privacy has increased in the digital age, with the emergence of big data analytics.
Recommendation: It could instead be worded as ‘with the emergence of technologies such as big data analytics’, so as to recognize the impact of multiple technologies and processes including big data analytics.
Comment: The Preamble states that it is necessary for good governance that all interceptions of communication and surveillance be conducted in a systematic and transparent manner subservient to the rule of law.
Recommendation: The word ‘systematic’ is out of place, and can be interpreted incorrectly. It could instead be replaced with words such as ‘necessary’, ‘proportionate’, ‘specific’, and ‘narrow’, which would be more appropriate in this context.
Chapter 1
Preliminary
Section 2: This Section defines the terms used in the Act.
Comment: Some of the terms are incomplete and a few of the terms used in the Act have not been included in the list of definitions.
Recommendations:
- The term “effective consent” needs to be defined. The term is first used in the Proviso to Section 7(2), which states “Provided that effective consent can only be said to have been obtained where...:” It is crucial that the Act defines effective consent, especially with respect to sensitive data.
- The term “open data” needs to be defined. The term is first used in Section 5, which states the exemptions to the right to privacy. Subsection 1 clause ii states as follows: “the collection, storage, processing or dissemination by a natural person of personal data for a strictly non-commercial purposes which may be classified as open data by the Privacy Commission”. Hence the term open data needs to be defined to ensure there is no ambiguity as to what open data means.
- The Act does not define “erasure”, although the term does come under the definition of destroy (Section 2(1)(p)). Some provisions use the word erasure; hence, if erasure and destruction denote different acts, the term erasure needs to be defined. Otherwise, in order to maintain uniformity, the sections where erasure is used could substitute the term “destroy” as defined under this Act.
- The definition of “sensitive personal data” does not include location data and identification numbers. The definition of sensitive data must include location data, as the Act also deals in depth with surveillance. With respect to identification numbers, the Act needs to consider identification numbers (e.g. the Aadhaar number, PAN number, etc.) as sensitive information, as these numbers are linked to a person's identity and can reveal sensitive personal data such as name, age, location, biometrics, etc. An example can be taken from Article 4(1) of the GDPR,[3] which identifies location data as well as identification numbers as personal data, along with other identifiers such as biometric data, gender, race, etc.
- The Act defines consent as the “unambiguous indication of a data subject’s agreement”; however, the definition does not indicate that consent must be informed. Hence the revised definition could read as follows: “the informed and unambiguous indication of a data subject’s agreement”. It is also unclear how this definition of consent relates to ‘effective consent’. This relationship needs to be clarified.
- The Act defines ‘data controller’ in Section 2(1)(l) as “any person including appropriate government...”. In order to remove any ambiguity over the definition of the term person, the definition could specify that the term person means any natural or legal person.
- The Act defines ‘data processor’ in Section 2(1)(m) as “any person including appropriate government”. In order to remove any ambiguity over the definition of the term ‘any person’, the definition could specify that the term person means any natural or legal person.
CHAPTER II
Right to Privacy
Section 5: This section provides the exemptions to the right to privacy.
Comment: Section 5(1)(ii) states that the collection, storage, processing or dissemination by a natural person of personal data for strictly non-commercial purposes is exempted from the provisions of the right to privacy. This clause also states that such data may be classified as open data by the Privacy Commission. The section thus grants individuals immunity for the collection, storage, processing and dissemination of another person's data. However, this provision fails to state what specific activities qualify as non-commercial use.
Recommendation: This provision could potentially be strengthened by specifying that the use must be in the public interest. The other issue with this subsection is that it fails to define open data. If open data were to be examined using its common definition, i.e., “data that can be freely used, modified, and shared by anyone for any purpose”,[4] then this section becomes highly problematic, as a simple interpretation would mean that any personal data collected, stored, processed or disseminated by a natural person can possibly become available to anyone. Beyond this, India has an existing framework governing open data. Ideally, the Privacy Commission could work closely with government departments to ensure that open data practices in India are in compliance with the privacy law.
CHAPTER III
Protection of Personal Data
PART A
Notice by data controller
Section 6: This section specifies the obligations data controllers must follow in their communications in order to maintain transparency, and lays down requirements with which all communications by Data Controllers must comply.
Comment: There seems to be an error in the Proviso to this section. The proviso states “Provided that all communications by the Data Controllers including but not limited to the rights of Data Subjects under this part shall may be refused when the Data Controller is, unable to identify or has a well founded basis for reasonable doubts as to the identity of the Data Subject or are manifestly unfounded, excessive and repetitive, with respect to the information sought by the Data Subject”.
Recommendation: The proviso could instead read: “Provided that all communications by the Data Controllers including but not limited to the rights of Data Subjects under this part may be refused when the Data Controller is…”. We suggest the use of ‘may’, as this makes the provision less limiting to the rights of the data controller.
Additionally, it is not completely clear what ‘including but not limited to...’ would entail. This could be clarified further.
PART B
CONSENT OF DATA SUBJECTS
Section 10: This section talks about the collection of personal data.
Comment: Section 10(3) lays down the information that a person must provide before collecting the personal data of an individual.
Comment: Section 10(3)(xi) states as follows: “the time and manner in which it will be destroyed, or the criteria used to Personal data collected in pursuance of a grant of consent by the data subject to whom it pertains shall, if that consent is subsequently withdrawn for any reason, be destroyed forthwith: determine that time period;”. The sentence construction appears to be faulty, and the resulting provision is difficult to understand.
Recommendation: This section could be reworked in such a way that two conditions are clear: one, the time and manner in which the data will be destroyed; and two, the status of the data once consent is withdrawn.
Comment: Section 10(3)(xiii) states that the identity and contact details of the data controller and data processor must be provided. However it fails to state that the data controller should provide more details with regard to the process for grievance redressal. It does not provide guidance on what type of information needs to go into this notice and the process of redressal. This could lead to very broad disclosures about the existence of redress mechanisms without providing individuals an effective avenue to pursue.
Recommendation: As part of the requirement for providing the procedure for redress, data controllers could specifically be required to provide the details of the Privacy Officers, privacy commissioner, as well as provide more information on the redressal mechanisms and the process necessary to follow.
Section 11: This section lays out the provisions where collection of personal data without prior consent is possible.
Comment: Section 11 states “Personal data may be collected or received from a third party by a Data Controller the prior consent of the data subject only if it is:..”. However, as the title of the section suggests, the sentence should indicate the situations where it is permissible to collect personal data without the prior consent of the data subject; the word “without” is missing from the sentence. Additionally, the sentence could state that personal data may be collected or received directly from an individual or from a third party, as it is possible to collect personal data directly from an individual without consent.
Recommendation: The sentence could read as “Personal data may be collected or received from an individual or a third party by a Data Controller without the prior consent of the data subject only if it is:..”.
Comment: Section 11(1)(i) permits the collection of personal data without prior consent when it is “necessary for the provision of an emergency medical service or essential services”. However, it does not specify the kind or severity of the medical emergency.
Recommendation: In addition to medical emergency another exception could be made for imminent threats to life.
Section 12: This section details the Special provisions in respect of data collected prior to the commencement of this Act.
Comment: This section states that all data collected, processed and stored by data controllers and data processors prior to the date on which this Act comes into force shall be destroyed within a period of two years from that date, unless consent is obtained afresh within two years or the personal data has been anonymised in such a manner as to make re-identification of the data subject absolutely impossible. However, this process can be highly difficult and impractical, being time-consuming and expensive, particularly for analog collections of data. It is especially problematic in cases where the controller cannot seek the consent of the data subject due to a change in address, unavailability or death. It will also be problematic in the case of digitized government records.
Recommendation: We suggest three ways in which the issue of data collected prior to the Act can be handled. One way is to make a distinction based on whether the data controller specified the purpose of the collection before collecting the data. If the purpose was not defined, then the data can be deleted or anonymised; hence there is no need to collect the data afresh in all cases. The purpose of the data can also be intimated to the data subject at a later stage, and the data subject can choose whether they would like the controller to store or process the data. The second way is to seek consent afresh only for sensitive data. Lastly, the data controller could be permitted to retain records of data, but must necessarily obtain fresh consent before using them. By not having a blanket provision for retrospective data deletion, the Act can address situations where deletion is complicated or might have a potential negative impact, by allowing storage, deletion, or anonymisation of data based on its purpose and kind.
Comment: Section 12(1)(i) of the Act states that the data will not be destroyed provided that effective consent is obtained afresh within two years. However, as stated earlier, the Act does not define effective consent.
Recommendation: The term effective consent needs to be defined in order to bring clarity to this provision.
PART C
FURTHER LIMITATIONS ON DATA CONTROLLERS
Section 16: This section deals with the security of personal data and duty of confidentiality.
Comment: Section 16(2) states “Any person who collects, receives, stores, processes or otherwise handles any personal data shall be subject to a duty of confidentiality and secrecy in respect of it.” Similarly, Section 16(3) states that “data controllers and data processors shall be subject to a duty of confidentiality and secrecy in respect of personal data in their possession or control”. However, apart from the duty of confidentiality and secrecy, data controllers and processors could also have a duty to maintain the security of the data. Though it is important for confidentiality and secrecy to be maintained, ensuring security requires adequate and effective technical controls to be in place.
Recommendation: This section could also emphasise the duty of data controllers to ensure the security of the data. The breach notification could include details about the data impacted by a breach or attack, as well as the technical details of the infrastructure compromised.
Section 17: This section details the conditions for the transfer of personal data outside the territory of India.
Comment: Section 17 allows a transfer of personal data outside the territory of India in three situations: if the Central Government issues a notification deciding that the country/international organization in question can ensure an adequate level of protection, compatible with the privacy principles contained in this Act; if the transfer is pursuant to an agreement which binds the recipient of the data to similar or stronger conditions in relation to handling the data; or if there are appropriate legal instruments and safeguards in place, to the satisfaction of the data controller. However, there is no clarification of what would constitute ‘adequate’ or ‘appropriate’ protection, and the section does not account for situations in which the Government has not yet notified a country/organisation as ensuring adequate protection. In comparison, the GDPR, in Chapter V,[5] contains factors that must be considered when determining adequacy of protection, including relevant legislation and data protection rules, the existence of independent supervisory authorities, and international commitments or obligations of the country/organization. Additionally, the GDPR allows data transfer even in the absence of a determination of such protection in certain instances, including the use of standard data protection clauses that have been adopted or approved by the Commission; legally binding instruments between public authorities; an approved code of conduct, etc. It also allows derogations from these measures in certain situations: when the data subject expressly agrees, despite being informed of the risks; or if the transfer is necessary for the conclusion of a contract between the data subject and controller, or the controller and a third party in the interest of the data subject; or if the transfer is necessary for reasons of public interest, etc. No such circumstances are accounted for in Section 17.
Recommendation: Data controllers and processors could be provided with a period to allow them to align their policies with the new legislation. Making these provisions operational as soon as the Act commences might leave controllers or processors involuntarily in breach of the provisions of the Act.
Section 19: This section states the special provisions for sensitive personal data.
Comment: Section 19(2) states that, in addition to the requirements set out under sub-clause (1), the Privacy Commission shall set out additional protections in respect of: (i) sensitive personal data relating to data subjects who are minors; (ii) biometric and deoxyribonucleic acid data; and (iii) financial and credit data. This, however, creates additional categories of sensitive data apart from the ones that have already been created.[6] These additional categories can result in confusion and errors.
Recommendation: Sensitive data must not be further categorised as this can lead to confusion and errors. Hence all sensitive data could be subject to the same level of protection.
Section 20: This section states the special provisions for data impact assessment.
Comment: This section states that all data impact assessment reports will be submitted periodically to the State Privacy Commission. It does not provide for circumstances in which such records may be made public. Additionally, the data impact assessment could also include a human rights impact assessment.
Recommendation: The section could also have provisions for making the records of the impact assessment, or relevant parts of the assessment, public. This will ensure that data controllers and processors are subjected to a standard of accountability and transparency. Additionally, as privacy is linked to human rights, the data impact assessment could also include a human rights impact assessment. The Act could further clarify the process for submission to State Privacy Commissions and potential access by the Central Privacy Commission, to provide clarity in process.
Section 20 requires controllers who use new technology to assess the risks to data protection rights that arise from processing. ‘New technology’ is defined to include pre-existing technology that is used anew. Additionally, the reports are required to be sent to the State Privacy Commission periodically. However, there is no clarification of the situations in which such an assessment becomes necessary, or whether all technology must undergo such an assessment before use. Additionally, the differentiation between data processing activities based on whether the processing is incidental or a part of the functioning needs to be clarified. This differentiation is necessary as some data processors and controllers need the data to function; for instance, an e-commerce site would require your name and address to deliver goods, although such sites do not process the data to make decisions. This can be compared to a credit rating agency that uses the data to decide who will be given a loan based on their creditworthiness. An example can be taken from the GDPR, which in Article 35 specifies instances in which a data impact assessment is necessary: where a new technology that is likely to result in a high risk to the rights of persons is used; where personal aspects related to natural persons are processed automatically, including profiling; where special categories of data (including data revealing ethnic/racial origin, sexual orientation, etc.) or biometric/genetic data are processed; where data relating to criminal convictions is processed; and with data concerning the monitoring of publicly accessible areas. Additionally, there is no requirement to publish the report, or send it to the supervising authority, but the controller is required to review the processor’s operations to ensure compliance with the assessment report.
Recommendation: The reports could be sent to a central authority, which according to this Act is the Privacy Commission, along with the State Privacy Commission. Additionally, there needs to be a differentiation between the incidental and express use of data. The data processors must be given a period of at least one year after the commencement of the Act to present their impact assessment reports. This period is required for processors to align themselves with the provisions of the Act as well as to conduct capacity-building initiatives.
PART C
RIGHTS OF A DATA SUBJECT
Section 21: This section explains the right of the data subject with regard to accessing her data. It states that the data subject has the right to obtain from the data controller information as to whether any personal data concerning her is being collected or processed. The data controller has to provide access not only to such information but also to the personal data that has been collected or processed.
Comment: This section does not provide the data subject the right to seek information about security breaches.
Recommendation: This section could state that the data subject has the right to seek information about any security breaches that might have compromised her data (through theft, loss, leaks etc.). This could also include steps taken by the data controller to address the immediate breach as well as steps to minimise the occurrence of such breaches in the future.[7]
CHAPTER IV
INTERCEPTION AND SURVEILLANCE
Section 28: This section lists out the special provisions for competent organizations.
Comment: Section 28(1) states “all provisions of Chapter III shall apply to personal data collected, processed, stored, transferred or disclosed by competent organizations unless when done as per the provisions under this chapter”. This does not make provision for other categories of data, such as sensitive data.
Recommendation: This section needs to include not just personal data but also sensitive data, in order to ensure that all types of data are protected under this Act.
Section 30: This section states the provisions for prior authorisation by the appropriate Surveillance and Interception Review Tribunal.
Comment: Section 30(5) states “any interception involving the infringement of the privacy of individuals who are not the subject of the intended interception, or where communications relate to medical, journalistic, parliamentary or legally privileged material may be involved, shall satisfy additional conditions including the provision of specific prior justification in writing to the Office for Surveillance Reform of the Privacy Commission as to the necessity for the interception and the safeguards providing for minimizing the material intercepted to the greatest extent possible and the destruction of all such material that is not strictly necessary to the purpose of the interception.” This section needs to state why these categories of communication are more sensitive than others. Additionally, interceptions typically target people and not topics of communication; thus, medical matters may come up in a conversation between two construction workers, and a doctor may communicate about finances.
Recommendation: The section could, instead of singling out “medical, journalistic, parliamentary or legally privileged material”, state that “any interception involving the infringement of the privacy of individuals who are not the subject of the intended interception shall satisfy additional conditions including the provision of specific prior justification in writing to the Office for Surveillance Reform of the Privacy Commission”.
Section 37: This section details the bar against surveillance.
Comment: Section 37(1) states that “no person shall order or carry out, or cause or assist the ordering or carrying out of, any surveillance of another person”. The section also prohibits indiscriminate monitoring, or mass surveillance, unless it is necessary and proportionate to the stated purpose. However, it is unclear whether this prohibits surveillance by a resident of their own residential property, which is allowed in Section 5, as the same could also fall within ‘indiscriminate monitoring/mass surveillance’. For instance, a camera installed in a residential property may be outward-facing and therefore capture footage of the road/public space.
Recommendation: The Act needs to bring more clarity with regard to surveillance especially with respect to CCTV cameras that are installed in private places, but record public spaces such as public roads. The Act could have provisions that clearly define the use of CCTV cameras in order to ensure that cameras installed in private spaces are not used for carrying out mass surveillance. Further, the Act could address the use of emerging techniques and technology such as facial recognition technologies, that often rely on publicly available data.
CHAPTER V
THE PRIVACY COMMISSION
Section 53: This section details the powers and functions of the Privacy Commission.
Comment: Section 53(2)(xiv) states that the Privacy Commission shall publish periodic reports “providing description of performance, findings, conclusions or recommendations of any or all of the functions assigned to the Privacy Commission”. However, this Section does not require such reports to be published annually and made publicly available, or to contain details, including the financial aspects, of matters contained within the Act.
Recommendation: The functions could include a duty to disclose information regarding the functioning and financial aspects of matters contained within the Act. Such reports could include: the number of data controllers, the number of data processors, the number of breaches detected and mitigated, etc.
CHAPTER IX
OFFENCES AND PENALTIES
Sections 73 to 80: These sections lay out the different punishments for controlling and processing data in contravention of the provisions of this Act.
Comment: These sections, while laying out different punishments for controlling and processing data in contravention of the provisions of this Act, prescribe a fine extending up to Rs. 10 crore. This is problematic as it does not base the penalties on proportionality, treating less serious offences on par with more serious ones.
Recommendation: There could be a graded approach to the penalties based on the degree of severity of the offence. This could take the form of naming and shaming, warnings, and penalties graded according to the degree of the offence.
----------------------------------------------------------------------
Additional thoughts: As India moves to a digital future, there is a need for laws to be in place to ensure that individuals' rights are not violated. By riding on the push towards digitization and emerging technologies such as AI, a strong, all-encompassing privacy legislation can allow India to leapfrog and use these emerging technologies for the benefit of citizens without violating their privacy. A robust legislation can also ensure a level playing field for data-driven enterprises within a framework of openness, fairness, accountability and transparency.
[1] These seven principles include: Right to Access, Right to Rectification, Right to Erasure and Destruction of Personal Data, Right to Restriction of Processing, Right to Object, Right to Portability of Personal Data, and Right to Seek Exemption from Automated Decision-Making.
[2] The Privacy (Protection) Bill 2013: A Citizen’s Draft, Bhairav Acharya, Centre for Internet & Society, https://cis-india.org/internet-governance/blog/privacy-protection-bill-2013-citizens-draft
[3] General Data Protection Regulation, available at https://gdpr-info.eu/art-4-gdpr/.
[4] Antonio Vetro, Open Data Quality Measurement Framework: Definition and Application to Open Government Data, available at https://www.sciencedirect.com/science/article/pii/S0740624X16300132
[5] General Data Protection Regulation, available at https://gdpr-info.eu/chapter-5/.
[6] Sensitive personal data under Section 2(bb) includes: biometric data; deoxyribonucleic acid data; sexual preferences and practices; medical history and health information; political affiliation; membership of a political, cultural, or social organisation, including but not limited to a trade union as defined under Section 2(h) of the Trade Union Act, 1926; ethnicity, religion, race or caste; and financial and credit information, including financial history and transactions.
[7] Submission to the Committee of Experts on a Data Protection Framework for India, Amber Sinha, Centre for Internet & Society, available at https://cis-india.org/internet-governance/files/data-protection-submission
The Potential for the Normative Regulation of Cyberspace: Implications for India
The standards of international law combined with strategic considerations drive a nation's approach to any norms formulation process. CIS has already produced work with the Research and Advisory Group (RAG) of the Global Commission on the Stability of Cyberspace (GCSC), which looks at the negotiation processes and strategies that various players may adopt as they drive the cyber norms agenda.
This report focuses more extensively on the substantive law and principles at play, and looks closely at what the global state of the debate means for India.
With the cyber norms formulation efforts in a state of flux, India needs to advocate a coherent position that is in sync with the standards of international law while also furthering India's strategic agenda as a key player in the international arena.
This report seeks to draw on the works of scholars and practitioners, both in the field of cybersecurity and International Law to articulate a set of coherent positions on the four issues identified in this report. It also attempts to incorporate, where possible, state practice on thorny issues of International Law. The amount of state practice that may be cited differs with each state in question.
The report provides a bird’s-eye view of the available literature and applicable International Law in each of the briefs, and identifies areas for further research which would be useful for the norms process and, in particular, for policy-makers in India. Historically, India had used the standards of International Law to inform its positions on various global regimes, such as UNCLOS, and to legitimize its position as a leader of alliances such as the Non-Aligned Movement and AALCO. However, of late, India has used international law far less in its approach to International Relations. This report therefore explores how various debates on international law may be utilised by policy-makers when framing their positions on various issues. Rather than creating original academic content, the aim of this report is to inform policy-makers and academics of the discourse on cyber norms. In order to make the report easier to follow, each Brief is followed by a short summary highlighting its key aspects, allowing the reader to access the portion of the brief that he/she feels would be of most relevance. The report does not advocate specific stances but highlights the considerations that should be borne in mind when framing a stance.
The report focuses on four issues which may be of specific relevance for Indian policy-makers. The first brief focuses on the inherent right of self-defense in cyberspace and its value for crafting a stable cyber deterrence regime. The second brief looks at the technical limits of attributability of cyber-attacks and hints at some of the legal and political solutions to these technical hurdles. The third brief looks at the non-proliferation of cyber weapons and the existing global governance framework which India could consider when framing its own strategy. The final brief looks at the legal regime on counter-measures and outlines the various grey zones in legal scholarship in this field. It also maps possible future areas of cooperation with the cyber sector on issues such as Active Cyber Defense, and the legal framework that might be required if such cooperation were to become a reality. Each brief covers a broad array of literature and jurisprudence and attempts to explore various debates that exist both among international legal academics and the strategic community.
The ongoing global stalemate over cyber norms casts a grim shadow over the future of cyber-security. However, as seen with the emergence of the nuclear non-proliferation regime, it is not impossible for consensus to emerge in times of global tension. For India, in particular, this stalemate presents an opportunity to pick up the pieces and carve a leadership position for itself as a key norm entrepreneur in cyberspace.
Lining up the data on the Srikrishna Privacy Draft Bill
The article was published in Economic Times on July 30, 2018
Non-consensual processing is permitted in the bill as long as it is “necessary for any function of the” Parliament or any state legislature. These functions need not be authorised by law.
Alternatively, it is permitted when “necessary for any function of the state authorised by law” for the provision of a service or benefit, or the issuance of any certification, licence or permit.
Fortunately, however, the state remains bound by the eight obligations in chapter two, i.e., fair and reasonable processing, purpose limitation, collection limitation, lawful processing, notice, data quality, data storage limitation and accountability. The equivalent ground in the GDPR has two sub-clauses: one requires that the task pass a public interest test, while the other is a loophole which, like the Indian bill's, possibly covers all interactions the state has with all persons.
The “necessary” test appears both on the grounds for non-consensual processing, and in the “collection limitation” obligation in chapter two of the bill. For sensitive personal data, the test is raised to “strictly necessary”. But the difference is not clarified and the word “necessary” is used in multiple senses.
Under the “collection limitation” obligation the bill says “necessary for the purposes of processing” which indicates a connection to the “purpose limitation” obligation. The “purpose limitation” obligation, however, only requires the state to have a purpose that is “clear, specific and lawful” and processing limited to the “specific purpose” and “any other incidental purpose that the data principal would reasonably expect the personal data to be used for”. It is perhaps important at this point to note that the phrase “data minimisation” does not appear anywhere in the bill.
Therefore “necessary” could be broadly understood to mean data that Parliament or a state legislature requires to perform some function not authorised by law, and data that the citizen might reasonably expect a state authority to consider incidental to the provision of a service or benefit, or the issuance of a certificate, licence or permit.
Alternatively, it could be more conservatively understood to mean data without which it would be impossible for Parliament and state legislatures to carry out functions mandated by law, and data without which it would be impossible for the state to provide the specific service or benefit or issue certificates, licences and permits. As with the GDPR, it is completely unclear why an additional test of “strictly necessary” is — if you will forgive the redundancy — necessary.
After 10 years of Aadhaar, the average citizen “reasonably expects” the state to ask for biometric data to provide subsidised grain. But it is not impossible to provide subsidised grain in a corruption-free manner without using surveillance technology that can remotely, covertly and non-consensually identify persons. Smart cards, for example, implement privacy by design. A “reasonable expectation” test is therefore inappropriate, since this is not a question of changing social mores.
When it comes to persons that are not law abiding the bill has two exceptions — “security of the state” and “prevention, detection, investigation and prosecution of contraventions of law”. Here the “necessary” test is combined with the “proportionate” test.
The “proportionate” test further constrains processing. For example, GPS data may be necessary for detecting that someone has jumped a traffic signal, but processing it might not be a proportionate response to a minor violation. Along with the requirement for “procedure established by law”, this is indeed a well carved out exception if the “necessary” test is interpreted conservatively. The only points of concern here are the infringement of a fundamental right for minor offences, and the “prevention” of offences, which implies processing the personal data of innocent persons.
Ideally, consent should be introduced for law-abiding citizens even if it is mere tokenism, because you cannot revoke consent you have not granted in the first place. Alternatively, a less protective option would be to admit that all e-governance in India will be based on surveillance, and therefore that “necessary” should be conservatively defined and the “proportionate” test introduced as an additional safeguard.
Spreading unhappiness equally around
The article was published in Business Standard on July 31, 2018.
There is a joke in policy-making circles — you know you have reached a good compromise if all the relevant stakeholders are equally unhappy. By that measure, the B N Srikrishna committee has done a commendable job since there are many with complaints.
Some in the private sector are unhappy because their demonisation of the European Union’s General Data Protection Regulation (GDPR) has failed. The committee’s draft data protection Bill is closely modelled upon the GDPR in terms of rights, principles, design of the regulator and the design of regulatory tools like impact assessments. With a maximum fine of 4 per cent of global turnover, there is a clear signal that privacy infringements by transnational corporations will be reined in by the regulator. Getting a law that has copied many elements of the European regulation is good news for us because the GDPR is recognised by leading human rights organisations as the global gold standard. But the bad news for us is that the Bill also has unnecessarily broad data localisation mandates for the private sector.
Some in the fintech sector are unhappy because the committee rejected the suggestion that privacy be regulated as a property right. This is a positive from the human rights perspective, especially because this approach has been rejected across the globe, including in the European Union. Property rights are inappropriate because a natural law framing of the enclosure of the commons into private property through labour does not translate to personal data. Also, in comparison to patents — or “intellectual property” — the scale of possible discrete property holdings in personal information is several orders higher, posing unimaginable complexity for regulation and possibly creating a gridlock economy.
The section of civil society opposed to Aadhaar is unhappy because the UIDAI, and any other state agency that wishes to, can process data non-consensually. A similar loophole exists in the GDPR. Remember the definition of processing includes “operations such as collection, recording, organisation, structuring, storage, adaptation, alteration, retrieval, use, alignment or combination, indexing, disclosure by transmission, dissemination or otherwise making available, restriction, erasure or destruction”. This means the UIDAI can collect data from you without your consent and does not have to establish consent for the data it has collected in the past. There is a “necessary” test which is supposed to constrain data collection. But for the last 10-odd years, the UIDAI has deemed it “necessary” to collect biometrics to give the poor subsidised grain. Will those forms of disproportionate non-consensual data collection continue? Most probably, because the report recommends that the UIDAI continue to play the role of the regulator with heightened powers, which is like trusting the fox with the henhouse.
Employees should be unhappy because the Bill has an expansive ground under which employers can non-consensually harvest their data. The Bill allows for non-consensual processing of any data “necessary” for “recruitment, termination, providing any benefit or service, verifying the attendance or any other activity related to the assessment of the performance”. This is permitted when consent is not an appropriate basis or would involve disproportionate effort on the part of the employer. This is basically a surveillance provision for employers. Either this ground should be removed, as in the GDPR, or a “proportionate” test should also be introduced; otherwise disproportionate mechanisms like spyware on work computers will be installed by employers without providing notice.
Some free speech activists are unhappy because the law contains a “right to be forgotten” provision. They are concerned that this will be used by the rich and powerful to censor mainstream and alternative media. On the face of it, the GDPR contains a much more expansive “right to erasure”, whilst the Bill only provides for a more limited “right to restrict or prevent continuing disclosure”. However, the GDPR has a clear exception for “archiving purposes in the public interest, scientific or historical research purposes or statistical purposes”. The Bill, like the GDPR, does identify the two competing human rights imperatives: freedom of expression and the right to information. However, by missing the “public interest” test it does not sufficiently account for social power asymmetries.
Privacy and security researchers are unhappy because re-identification has been made an offence without a public interest or research exception. It is indeed a positive that the committee has made re-identification a criminal offence, because the de-identification standards notified by the regulator will always be catching up with the latest mathematical developments. However, in order to protect the very research that the regulator needs to protect the rights of individuals, the Bill should have granted the formal and non-formal academic community immunity from liability and criminal prosecution.
Lastly, but most importantly, human rights activists are unhappy because the committee, like the GDPR, did not include sufficiently specific surveillance law fixes. The European Union has historically handled this separately in the ePrivacy Regulation. Maybe that is the approach we must also follow, or maybe this was a missed opportunity. Overall, the B N Srikrishna committee must be commended for producing a good data protection Bill. The task before us is to make it great and to have it enacted by Parliament at the earliest.
Anti-trafficking Bill may lead to censorship
The article was published in Livemint on July 24, 2018.
The legislative business of the monsoon session of Parliament kicked off on 18 July with the introduction of the Trafficking of Persons (Prevention, Protection and Rehabilitation) Bill, 2018, in the Lok Sabha. The intention of the Union government is to “make India a leader among South Asian countries to combat trafficking” through the passage of this Bill. Good intentions aside, there are a few problematic provisions in the proposed legislation, which may severely impact freedom of expression.
For instance, Section 36 of the Bill, which aims to prescribe punishment for the promotion or facilitation of trafficking, proposes a minimum three-year sentence for producing, publishing, broadcasting or distributing any type of material that promotes trafficking or exploitation. An attentive reading of the provision, however, reveals that it has been worded loosely enough to risk criminalizing many unrelated activities as well.
The phrase “any propaganda material that promotes trafficking of person or exploitation of a trafficked person in any manner” has wide amplitude, and many unconnected or even well-intentioned actions can be construed to come within its ambit as the Bill does not define what constitutes “promotion”. For example, in moralistic eyes, any sexual content online could be seen as promoting prurient interests, and thus also promoting trafficking.
Rather than imposing a rigorous standard of actual and direct nexus with the act of trafficking or exploitation, a vaguer standard which includes potentially unprovable causality, including by actors who may be completely unaware of such activity, is imposed. This opens the doors to using this provision for censorship and imposes a chilling effect on any literary or artistic work which may engage with sensitive topics, such as trafficking of women.
In the past, governments have been keen to restrict access to online escort services and pornography. In June 2016, the Union government banned 240 escort sites for obscenity even though it cannot do so under Section 69A or Section 79 of the Information Technology Act, or Section 8 of the Immoral Traffic (Prevention) Act. In July 2015, the government asked internet service providers (ISPs) to block 857 pornography websites on grounds of outraging “morality” and “decency”, but later rescinded the order after widespread criticism. If the historical record is any indication, Section 36 of the present Bill will legitimize such acts of censorship.
Section 39 sets an even weaker standard for criminality by proposing that any act of publishing or advertising “which may lead to the trafficking of a person shall be punished” (emphasis added) with imprisonment for 5-10 years. In effect, the provision mandates punishment for vaguely defined actions that may not actually be connected to the trafficking of a person at all. This is in stark contrast to most provisions in criminal law, which require mens rea (intention) along with actus reus (a guilty act). The excessive scope of this provision is prone to severe abuse, since without any burden of showing a causal connection, it could be argued that almost anything “may lead” to the trafficking of a person.
Another by-product of passing the proposed legislation would be a dramatic shift in India’s landscape of intermediary liability laws, i.e., rules which determine the liability of platforms such as Facebook and Twitter, and messaging services like Whatsapp and Signal for hosting or distributing unlawful content.
Provisions in the Bill that criminalize the “publication” and “distribution” of content ignore that, unlike in the physical world, modern electronic communication requires third-party intermediaries to store and distribute content. This wording can implicate neutral communication pipelines, such as ISPs, online platforms and mobile messengers, which currently cannot even know of the presence of such material unless they surveil all their users. Under the proposed legislation, the fact that human traffickers used Whatsapp to communicate about their activities could be used to hold the messaging service criminally liable.
In proposing this, the Bill is in direct conflict with the internationally recognized Manila Principles on Intermediary Liability, and in dissonance with existing principles of Indian law, flowing from the Information Technology Act, 2000, that treat online platforms as “safe harbours” as long as they act as mere conduits. From the perspective of intermediaries, monitoring content is unfeasible, and sometimes technologically impossible, as in the case of Whatsapp, which facilitates end-to-end encrypted messaging. And as a 2011 study by the Centre for Internet & Society showed, platforms are happy to over-comply in favour of censorship to escape liability rather than verify actual violations. The proposed changes will invariably lead to a chilling effect on speech on online platforms.
Considering these problematic provisions, it will be a wise move to send the Bill to a select committee in Parliament wherein the relevant stakeholders can engage with the lawmakers to arrive at a revised Bill, hopefully one which prevents human trafficking without threatening the Constitutional right of free speech.
The National Health Stack: An Expensive, Temporary Placebo
The article was published by Bloomberg Quint on August 6, 2018.
However, the next ten years would see the scheme, the UK's National Programme for IT (NPfIT), meet with constant criticism of its poor management and immense expenditure; and after a gruelling battle for survival, despite spending £20 billion and having top experts on board, the NPfIT finally met its end in 2011.
Fast forward eight years — the Indian government’s public policy think tank, NITI Aayog, is proposing an eerily similar idea for the much less developed, and much more populated Indian healthcare sector. On July 6, the NITI Aayog released a consultation paper to discuss “a digital infrastructure built with a deep understanding of the incentive structures prevalent in the Indian healthcare ecosystem”, called the National Health Stack. The paper identifies four challenges that previous government-run healthcare programs ran into and that the current system hopes to solve. These include:
- low enrollment of entitled beneficiaries of health insurance,
- low participation by service providers of health insurance,
- poor fraud detection,
- lack of reliable and timely data and analytics.
The current article takes a preliminary look at the goals of the NHS and where it falls behind. Subsequent articles will break down the proposed scheme with regard to safety, privacy and data security concerns, the feasibility of data analytics and fraud detection, and finally, the role of private players within the entire structure.
The primary aim of any digital health infrastructure should be to complement an existing, efficient healthcare delivery system.
As seen in the U.K., even a very well-functioning healthcare system doesn’t necessarily mean the digitisation efforts will bear fruit.
The NHS is meant to be designed for and beyond the Ayushman Bharat Yojana — the government’s two-pronged healthcare regime that was introduced on Feb. 1. Unfortunately, though, India’s healthcare regime has long been in need of serious repair, and even if the Ayushman Bharat Yojana works optimally, there are no indications that this will miraculously change by the stated target of 2022. Indeed, experts predict it would take at least ten years to successfully implement universal health coverage. A 2013 EY-FICCI report stated that we must consider a ten-year time frame, as well as allocate 3.5-4.7 percent of GDP to health expenditure, to achieve universal health coverage.
However, as per the current statistics, the centre’s allocation for health in the 2017-18 budget is Rs 47,353 crore, which is 1.15 percent of India’s GDP.
Patients wait for treatment in the corridor of the Acharya Tulsi Regional Cancer Treatment & Research Institute in Bikaner, Rajasthan, India. (Photographer: Prashanth Vishwanathan/Bloomberg)
Along with the state costs, India’s current expenditure in the health sector comes to a meagre 1.4 percent of the total GDP, far short of what the target should be. Yet, the government aims to attain universal health coverage by 2022.
In the first of its two-pronged strategy, the Ayushman Bharat Yojana aims to establish 1.5 lakh ‘Health and Wellness Centres’ across the country by 2022, which would provide primary healthcare services free of cost.
However, the total fund allocated for ‘setting up’ these centres is only Rs 1,200 crore, which comes down to a meagre Rs 80,000 per centre.
It is unclear whether the government plans to establish new sub-centres or improve the existing ones. Either way, a pittance of Rs 80,000 is grossly insufficient. As per reports, among the 1,56,231 current health centres, only 17,204 (11 percent) had met Indian Public Health Standards as of March 31, 2017. Shockingly, basic amenities like water and electricity are scarce, if not absent, in a substantial number of these centres.
At least 6,000 centres do not have a female health worker, and at least 1,00,000 centres do not have a male health worker.
A woman holds a child in the post-delivery ward of the district hospital in Jind, Haryana, India. (Photographer: Prashanth Vishwanathan/Bloomberg)
Even taking the generous assumption that the existing 17,204 centres are in top condition, the future of the rest of these health and wellness centres continues to be bleak.
In truth, both limbs of the Ayushman Bharat strategy remain oblivious to the reality of the situation. The goals do not take into account the existing problems within access to healthcare, nor the relevant economic and social indicators that depict a contrasting reality.
Therefore, the fundamental question remains: if there is no established, well-functioning healthcare delivery system to support, what will the NHS help achieve?
NHS: What Purpose Does It Serve?
The ambitious scope of the National Health Stack consultation paper aside, the central problem plaguing the Indian healthcare system, i.e., delivery of, and access to, healthcare, remains unaddressed. The first two problems that the NHS aims to solve focus solely on increasing health insurance coverage. Very problematically, however, the document does not explicitly mention how a digital infrastructure would lead to rising enrollment of both beneficiaries and service providers of insurance.
This goal of increasing enrollment without a functioning healthcare system could result in two highly problematic scenarios.
Either health and wellness centres will effectively act as enrollment agencies rather than providers of healthcare, or the government will fall back on its ‘Aadhaar approach’ and employ external enrollment agents.
The former approach runs a very real risk of the health and wellness centres losing focus on their primary purpose even while statistics show them as functioning centres – thus negatively impacting even the working centres. The latter approach is at a higher risk of running into problems akin to the case of Aadhaar enrollment, such as potential data leakages, identity thefts and a market for fake IDs. Even if we somehow overlook this and assume that the NHS would help increase insurance coverage without additional problems, the larger question still stands: should health insurance even be the primary goal of the government, over and above providing access to healthcare? And what effect will this have on the actual delivery of healthcare services to the common citizen?
A lone patient sleeps in the post operation recovery ward of the district hospital in Jind, Haryana, India. (Photographer: Prashanth Vishwanathan/Bloomberg)
Should Insurance Be A Primary Objective Of The Indian Government?
Simply put, the answer is no, because greater insurance coverage does not necessarily mean better access to healthcare. In recent years, health insurance in India has been expanding rapidly due to government-sponsored schemes. In the fiscal year 2016-17, the health insurance market was valued at Rs 30,392 crore. Even with such large investments in insurance premiums, the insurance market accounts for less than 5 percent of the total health expenditure.
Furthermore, previous experiences with government-sponsored health insurance schemes have proven that there is little merit to such an expensive task.
For instance, the government’s earlier health insurance scheme, Rashtriya Swasthya Bima Yojana, was predicted to be unable to completely provide ‘accessible, affordable, accountable and good quality health care’ if it focussed only on “increasing financial means and freedom of choice in a top-down manner”.
These traditional insurance-based models are characterised by problems of information asymmetry such as ‘moral hazard’ — patients and healthcare providers have no incentive to control their costs and tend to overuse services, resulting in an unsustainable insurance system and cost inflation. Any attempt to regulate providers is met with harsh, cost-cutting steps which end up harming patients.
On another note, some diseases which are responsible for the most number of deaths in the country — including ischaemic heart diseases, lower respiratory tract infections, chronic obstructive pulmonary disease, tuberculosis and diarrhoeal diseases — are usually chronic conditions that need outpatient consultation, resulting in out-of-pocket expenses.
Patients wait at the Head and Neck Cancer Out Patient department of Tata Memorial Hospital in Mumbai, India. (Photographer: Prashanth Vishwanathan/Bloomberg News)
Even though the government has added non-communicable diseases under the ambit of the health and wellness centres, there are still reports stating that the most impoverished have to cover their expenses out of pocket 80 percent of the time. This issue will in all probability persist, since the very likelihood of these centres succeeding is itself questionable.
It is clear that, in the current scheme of things, this traditional insurance model of healthcare cannot benefit those it is meant for.
If this is the case, why has the NHS built its main objectives around insurance coverage rather than access to healthcare? It is imperative that we question the legitimacy of these goals, especially if they indicate the government's intention to push health insurance via the NHS above its responsibility of delivering healthcare. The government's thrust for a digital infrastructure shows tremendous foresight, but at what cost?

Even the clear goal of healthcare data portability has very little benefit when one understands that it becomes an important goal only when one has given up on ensuring widespread accessible healthcare. Once the focus shifts from using technology needlessly to developing an efficient and universally accessible healthcare delivery system, the need for data portability dramatically reduces.

The temptation of digitisation and insurance coverage cannot and should not blind us to the main goal: access to healthcare. The one lesson that we must learn from the case of the U.K. is that even with a well-functioning healthcare delivery system, a digital infrastructure must be introduced very thoughtfully and carefully. In our eagerness to leapfrog with technology, we must not mistake a placebo for a panacea.
Murali Neelakantan is an expert in healthcare laws. Swaraj Barooah is Policy Director at The Centre for Internet and Society. Swagam Dasgupta and Torsha Sarkar are interns at The Centre for Internet and Society.
Future of Work: Report of the ‘Workshop on the IT/IT-eS Sector and the Future of Work in India’
This report was authored by Torsha Sarkar, Ambika Tandon and Aayush Rath. It was edited by Elonnai Hickok. Akash Sriram, Divya Kushwaha and Torsha Sarkar provided transcription and research assistance. A PDF of the report can be accessed here.
Introduction
The Workshop was attended by a diverse group of stakeholders, including industry representatives, academics and researchers, and civil society. The discussions covered various components of the transition to Industry 4.0, including the impact of Industry 4.0-related technological innovations on work broadly in India, and specifically in the IT/IT-eS sector (hereinafter referred to as the “Sector”). The discussion focused on the reciprocal impact on socio-political dimensions, the structure of employment, and forms of work within workspaces.
The Workshop was divided into three sessions. The first session was themed around the adoption and impact of Industry 4.0 technologies vis-a-vis the organisation of work. The key questions within this theme concerned the nature of the technologies being adopted, the causes driving their uptake, and the ‘tasks’ constituting jobs in the Sector.
The second session focussed on the role of skilling and re-skilling measures as mitigators to projected displacement of jobs. The issues dealt with included shifts in company, educational, and social competency profiles as a result of Industry 4.0, transformations in the predominant pedagogy of education, vocational, and skill development programmes in India, and their success in creating employable workers and filling skill gaps in the industry.
The third session looked at social welfare considerations and public policy interventions that may be necessitated in the wake of potential technological unemployment owing to Industry 4.0. The session was designed with a specific focus on the axes of gender and class, addressing questions of precarity, wages, and job security in the future of work for marginalized groups in the workforce.
Preliminary Comments
The Workshop opened with a brief introduction on the research the Centre for Internet and Society (CIS) is undertaking on the Future of Work (hereinafter referred to as “FoW”) vis-a-vis Industry 4.0. The conception of Industry 4.0 that CIS is looking at is the technical integration of cyber-physical systems in production and logistics on one hand, and the use of internet of things (IoT) and the connection between everyday objects and services in the industrial processes on the other. The scope of the project, including the impact of automation on the organisation of employment and shifts in the nature and forms of work, including through the gig economy, and microwork, was detailed. The historical lens taken by the project, and the specific focus on questions of inequality across gender, class, language, and skill were highlighted.
It was pointed out that CIS’ research, in this regard, stems from the necessity of localising and re-examining the global narratives around Industry 4.0. While new technologies will be developed and implemented globally, the impact of these technologies in the Indian context would be mediated through local political and socio-economic structures. For instance, the Third Industrial Revolution, largely associated with the massification of computing, telecommunications and electronics, is still unfolding in India, while attempts are already being made to adapt to Industry 4.0. These issues provided a starting point for the discussion on the impact of Industry 4.0 in India.
Qualifying Technological Change
Contextualising the narrative with historical perspectives
The panel for the first session commenced with a discussion of the historical perspective on job loss brought about by mechanisation. The distinction between Industry 3.0 and 4.0, it was suggested, is largely arbitrary, inasmuch as technological innovation has been a continuous process that has long been reshaping lives and the way work is perceived. It was argued that the only factor differentiating Industry 4.0 from previous industrial revolutions is ‘intelligent’ technology that automates routine cognitive tasks. The computer, programmable logic controllers (PLCs) and data (called the ‘new oil’) were also part of Industry 3.0, but intelligent technologies are able to provide greater analytical power under Industry 4.0.
The discussion also covered the distinction between the terms ‘job’, ‘task’ and ‘work’. It was argued that the term ‘job’ might be treated as a subset of the term ‘work’, with the latter moving beyond livelihood to encompass questions of dignity and a sense of fulfilment in the worker. In relation to this distinction, it was mentioned that the jobs at risk of automation would be those that fulfil only the basic level of Maslow’s hierarchy, implying largely routine manual tasks. Additionally, it was explained that although these jobs will continue to use labour through Industry 4.0, the nature of technological enablement would change so as to automate the more dangerous and hazardous tasks.
Technology as a long-term enabler of job creation
It was argued that technology has historically been associated with job creation. Historical instances cited included that of popular anxiety due to anticipated job loss through the uptake of the spinning machine and the steam engine, whereas the actual reduction in the cost of production led to greater job creation, increased mobility and improved quality of life in the long-term. Such instances were used to further argue that technology has historically not resulted in long-term job reductions.
The platform economy was posited as a model for creating jobs, through the efficient matching of supply and demand through digital platforms. It was indicated that rural to urban migration is aided by such platforms, as labourers voluntarily enrol in skilling initiatives given the certainty of employment through platformization. It was further argued that historically, Indian workers have been educated rather than skilled, and that platformization and automation, coupled with the elasticity of human needs, will provide greater incentives for technically skilled workers by creating desirable jobs.
Factors leading to differential adoption of automation
In relation to the adoption of the technologies of Industry 4.0, it was argued that the mere existence of a technology does not guarantee its scalability at an industrial level. Scalability would be possible only when the cost of labour is high relative to the costs entailed in technological adoption. This was supported by data from a McKinsey report[1] which indicated that countries like the US and Germany would be impacted by automation in the short term because their cost of labour is higher. Conversely, since the cost of labour in India is relatively cheap, technological displacement is still some way off and the impact would not be immediate.
Similarly, a distinction was made to account for the differential impact of automation across sectors. For instance, it was indicated that the IT/IT-eS sector in India is based on exporting services and business process outsourcing. Accordingly, if Germany automates its automobile industry, that would impact India less than if it automates its IT/IT-eS functions, as the Indian sector is more reliant on exporting services to developed economies. The IT/IT-eS sector was further broken down into sub-sectors to highlight the differential impact of automation and the future of work in each. It was agreed that the BPO sub-sector would be more adversely impacted than core IT services, given its higher concentration of routine tasks at greater risk of automation.
Disaggregating India’s Skilling Approach
The discussion around skilling measures was grounded in the Indian context by alluding to data from the National Sample Survey Organisation (NSSO) surveys. The data revealed that around 36% of India’s total population is under the age of seventeen and approximately 13% is between 18 and 24. Additional statistics suggested that only around a quarter of the workforce aged 18-24 had completed secondary or higher secondary education, and close to 13% of the workforce was illiterate. While these numbers included both male and female workers, it was pointed out that the dataset was incomplete as it excluded transgender workers. It was suggested that it is this segment of the Indian demographic that should be targeted for significant skilling pushes, which could be catalysed through specific vocational training centres. It was also suggested that there was a need to restructure the role of the National Skill Development Corporation (NSDC) in the Indian skilling framework.
A comprehensive picture was painted by conceptualising the skilling framework in India as five distinct pillars. This conceptualisation was used to debunk the narrative of the NSDC being the sole entity pushing for skill development in the country. The NSDC’s function in the skilling framework was posited as funding skilling initiatives with programmes lasting three months. These three-month programmes were critiqued as insufficient for effective training, especially given the low skill levels of workers entering them. The NSDC’s placement rate of 12%, as per its own records, was used to support this argument. Further suggestions on making the NSDC more effective were made in a later discussion[2].
Related to this, the second pillar of vocational skilling was said to be the Industrial Training Institutes (ITIs). The third pillar was the school system, which was critiqued for not offering vocational education at the secondary and senior secondary levels. The fourth pillar comprised the 16 ministries which govern labour laws in India - none of whose courses were National Skills Qualifications Framework (NSQF) compliant.
The fifth pillar was construed as industry itself and the enterprise-based training it conducts. However, it was stated that the share of registered Indian companies conducting enterprise-based training was dismal: 16% in 2009, rising to 36% in 2014. Further, most of this 36% comprised large registered firms as opposed to small and medium-sized enterprises. Unregistered companies, it was suggested, were simply conducting informal apprenticeships.
Joint public and private skilling initiatives
In addition to government-sponsored skilling initiatives, attention was directed to skill development partnerships that took the shape of public-private initiatives. As an example, it was said that a big player in the ride-hailing economy had worked with the NSDC and other skilling entities to ensure that soft skills were imparted to its driver partners before they were on-boarded onto the platform.
It was also brought forth that innovative forms of skilling and training were gaining traction in the education sector as well as in the private sector. This was instantiated through instances of the uptake of platforms that apply artificial intelligence, and machine learning techniques in particular, to generate and disseminate easier-to-consume video-based learning.
Driving Job Growth: Solving for Structural Eccentricities of the Indian Labour Market
Catalysing manufacturing-led job growth
The session began with a discussion of specific dynamics of the Indian labour market in the context of the Indian economy. It was pointed out that the productivity level of the services sector is not as high as that of manufacturing, which is problematic for job creation in a developing economy such as India that is witnessing capital-intensive growth in the manufacturing sector. The underlying argument was that the jobs of the future in the Indian context will have to be created in the manufacturing sector.
Several macroeconomic policy interventions were suggested to reverse the trend of capital-intensive growth and make manufacturing the frontier for enhanced job creation. A trade policy in consonance with industrial policy was stated as imperative. This was substantiated by highlighting that the absence of an inverted duty structure in the automobile sector has helped India become one of the biggest manufacturers of automobiles. An inverted duty structure entails finished products attracting lower import tariffs and customs duties than imported raw materials or intermediates. However, it was highlighted that a dissonant industrial policy failed to acknowledge that at least 50% of India’s manufacturing comes from Micro, Small & Medium Enterprises (MSMEs) and provided no assistance to MSMEs in obtaining credit, market access or technology upgradation. Large corporates, on the other hand, were asserted to receive 77% of total bank credit.
Another challenge highlighted was the Government of India’s severely underfunded manufacturing cluster development programmes under the aegis of the Ministry of Textiles and the Ministry of MSMEs. For sectors that contribute majorly to India’s manufacturing output, it was asserted that these programmes were astonishingly bereft of any governing policy and suffer from several foundational issues. Moreover, it was observed that these clusters are located in Tier 2, 3 and 4 cities around the country, where the quality of infrastructure is largely lacking. The Atal Mission for Rejuvenation and Urban Transformation (AMRUT) programme devised for the development of these cities is also myopic, as its target cities are not the ones where these manufacturing clusters are located. The rationale behind the suggested alternative was that building infrastructure at the geographical sites of job creation would increase productivity, which would in turn attract greater investment. This would necessarily have to be accompanied by hastening the setting up of industrial corridors - the lackadaisical approach to which was stated as a key reason for India being outpaced by other developing economies in the South East Asian region.
An additional policy intervention suggested was the setting up of skilling centres by the NSDC in proximity to these manufacturing clusters, where job creation is being evidenced, as opposed to larger metropolitan cities.
Carving out space for a vocational training paradigm
It was asserted that the focus of skilling needs to be on the manufacturing rather than the services sector, given the centrality of manufacturing to a developing economy undergoing an atypical structural transformation[3] - as outlined above. Further compounding the problem of jobless growth, it was stated that 50% of the manufacturing workforce has eight or fewer years of education, and only 5% of the workforce, including those with technical education, is vocationally trained, according to the NSS 62nd Round on Employment and Unemployment.
A gulf between primary and secondary education and vocational training was pointed to as one of the predominant causes of the much-touted ‘skills gap’ that the Indian workforce is said to be battling. Using data to flesh out the argument, it was said that by 2007 net enrolment in primary education in India had already reached 97%, and that between 2010 and 2015 the secondary education enrolment rate went from 58% to 85%.[4] It was hypothesised that the latter may have since risen to around 90%. Furthermore, the higher education enrolment rate also went up commensurately, from 11% in 2006 to 26-27% in 2017.[5] It was argued that this would have been impossible to achieve without gender parity in higher education. This gender parity in education was contrasted with the systematic decline in women’s labour force participation that India has witnessed over the last 30 years.
Consequently, the ‘massification’ of higher education in India over the past 10 years was critiqued as ineffectual in comparison to the Chinese model, as the latter focused on engaging students in vocational training, which the Indian education system had failed to do. The role of the gig economy in creating job opportunities despite this gap between educational and vocational training was regarded as important, especially given the lack of growth in the traditional job markets.
Accounting for the Margins
With relation to the profiles of workers within sectors, it was indicated that factors such as gender, class, skill, income, and race must be accounted for to determine the ‘winners’ and ‘losers’ of automation. Several points were discussed with relation to this disaggregation.
Technology as an equaliser? Gender and skill-biased technological change
First, the idea of technology and development as objective and neutral forces was questioned, with the assertion that human decision-makers, who more often than not tend to be male, allow inherent biases to creep into the outputs, processes, and objectives of automation. Data from the Belong Survey in IT services[6] indicated that women constituted 26% of the core engineering workforce but 33% of the software testing workforce. Coupled with the expectation that software testing would be automated first, it was argued that jobs held by female workers were at a higher immediate risk of automation than those held by male workers.
The ‘Leaky Pipe Problem’ in STEM industries - i.e., the observation that female workers tend to be concentrated in entry-level jobs while senior management is largely male-dominated - was also brought to the fore. This was used to bolster the argument that female workers in the Sector will lose out in the shorter term, when automation adversely impacts lower-level jobs.
A survey conducted by Aspiring Minds[7], which tracked the employability of engineering graduates, was used to further flesh out skill-biased technological change. As per the survey, 40% of graduating students are employable in the BPO sector, while only 3% are employable in software production. With the BPO sector likely to be impacted more adversely than core IT services, it was emphasised that policy considerations should be very specific in their ambit.
Social security and the platform economy
The discussion around the platform economy commenced with a focus on how it had created economic opportunities in the formal sector by matching demand and supply on one hand, and by reducing inefficiency in the system through technology on the other. It was pointed out that these newer forms of work were creating millions of entrepreneurship opportunities that did not exist previously. These opportunities, it was suggested, were in themselves flexible and contributing to the greater push towards enlarging the numbers of those who come within the ambit of India’s formal economy.
This was countered by suggesting that the shift of the workforce from the informal to the formal sector, which companies in the gig economy claimed to contribute to, had instead restricted the kinds of lives gig workers have historically lived. As an instance, it was pointed out that a farmer who had been working with a completely different set of skills was now being asked to shift to a new set of skills suited to a very specific role and not transposable across occupations. In other words, it would not be meaningful skilling. It was also pointed out that what distinguishes formal work from informal work is whether the worker has a social security net - mere access to banking services or the filing of tax returns is not sufficient to characterise a workforce as formal.
Relatedly, the possibility of social security was discussed for the unorganised sector and microworkers. One of the possibilities discussed was state-subsidised maternity, disability, and death benefits, and pensions for workers below the poverty line. The fiscal brunt borne by the government for such a scheme was anticipated not to exceed 0.4% of GDP. It was suggested that this would move forward the conversation on minimum wages and fair work, which would be of great importance in broader conversations around working conditions in the platform economy.
The interplay of gender and platformisation
It was highlighted that trends in automation are going to change the occupational structure in the digital economy - the effect of which will especially be felt in cognitive routine jobs, given their increased propensity to platformisation. A World Economic Forum report[8] was cited which indicated the disproportionate risk of unemployment faced by women, given their concentration in cognitive routine jobs.
The discussion then took a deeper look at the platformisation of work, with a specific focus on freelance microwork and its impact on the female labour force, and drew out certain positives arising from such newer forms of work. It was suggested that industries are more likely to employ female workers in microwork due to lower rates of attrition and flexible labour. It was reiterated that freelancing in India extends beyond data entry and other routine jobs to include complex work - thereby also catering to skilled workers desirous of flexibility. Platforms designing systems to meet the demand for flexible work were also discussed, such as platforms geared towards female workers undertaking reskilling measures and counselling for women returning from maternity leave or sabbaticals. Additionally, the difficulty of defining freelancing under existing frameworks of employment, compounded by the lack of legal structures for such work, was outlined.
Systemic challenges within the Indian labour law framework
Static design of legal processes
Labour law was, naturally, acknowledged as a key determinant in the conversation around both the uptake and impact of the automation technologies encapsulated within Industry 4.0.
The archaic nature of India’s labour law framework was highlighted as a major impediment to ensuring both worker rights and the ease of conducting commerce. It was pointed out that organised labour continues to be under the ambit of the Industrial Disputes Act, which came into effect in 1947 and has undergone minimal amendment since. This was critiqued on the basis that the framework of the law is embedded in its historical context: while the industrial landscape in the country has transformed drastically since the implementation of the Industrial Disputes Act, the legal framework has not evolved. Similarly, the Karnataka Shops and Establishments Act, 1961, which regulates the Sector today, was enacted well before the Sector even opened up in India in the 1990s.
Additionally, it was pointed out that the fragmented extant framework of labour laws in India was being consolidated into four labour codes without any wholesale modernisation push to reform the laws being consolidated. Consequently, it was argued that the government has to drive changes through policy alone as the legal framework remains static. Barriers to implementing adequate policies were also discussed, such as the political impact of labour policies and the lack of state initiative to deal with the impact of the future of work, apart from the historic inability of the law to keep up with the state of labour and the economy.
Labour law arbitrage
One of the reasons behind the increasing contractualisation of labour in India was attributed to over-regulation. There was consensus that the labour law regime is not conducive to industry in India, leading to greater opportunistic behaviour from industry participants. It was acknowledged that the political clout enjoyed by many labour contractors, along with the flexibility they afford primary employers to hire and fire at will, has led to the widespread use of contract labour entities.
It was further stated that industry has adopted several other tools of arbitrage so that labour law no longer acts as a key impediment to the ease of scaling business. Empirical evidence of labour law arbitrage was cited to drive home the point: according to national surveys, 80-85% of enterprises employ fewer than 100 workers, as the law mandates stricter compliance requirements for enterprises employing 100 or more workers[9]. This was acknowledged as a serious hurdle to scaling businesses.
The potential for apparently well-intentioned legislation to have counterproductive consequences, viewed through a public policy lens, was also highlighted. In the space of labour laws, the example of the recently enacted Maternity Benefit (Amendment) Act, 2017 was cited. By enhancing maternity benefits without accounting for complementary provisions such as paternity benefits, it was anticipated that companies may shy away from hiring women altogether.
Policy Paralysis
The discussion progressed to a high-level examination of the efficacy of law vis-a-vis state policy as a means to create a system of checks and balances in the context of Industry 4.0. It was highlighted that law, by design, would be outpaced by technological change. The common law system operating in India is premised on a time-tested emphasis on post-facto regulation; in other words, it is reactionary. While policy making in India suffers from a similar plague of playing catch-up, this is in large part due to a bureaucratic structure premised on generalism - a pressing need for domain expertise in policy making was emphasised. That said, it was stated that it is the institutional design of policy making institutions that needs rectification. What was acknowledged was the success, albeit scant, that individual states have had in policy making catering to specific yet diverse domains. A greater push towards clear, progressive, evidence-based policy was stressed, with the anticipation that it would lead to self-regulation by the industry itself - be it in terms of the future of employment or the economic direction the industry will embark on.
Concluding Remarks
The discussions during the course of the Workshop situated the discourse around Industry 4.0 within the contours of Indian labour realities, and of the IT sector within them.
As a useful starting point, various broader perspectives around the impact of technological change on the quantum of jobs were brought forth. While the industry perspective was that of technology as an enabler of job creation in the long-run, it was sufficiently tempered by concerns around those impacted adversely in the short to medium-term time frames. These concerns coalesced towards understanding the potential impact of Industry 4.0 on the nature of work, as well as mitigation tools to ease the impact of technological disruption on labour.
Important facets of technological adoption within the Sector were highlighted, such as the potential for scalability as well as the distinct eccentricities of the various sub-sectors the IT sector subsumes. The differential impact across sub-sectors was pegged to the differential composition of automatable (routine, rule-based) tasks within each. However, questions regarding the exact contours of task composition were left unanswered, signalling a potential area for further research. On the labour-supply side, the primary challenge to technological adoption was skilling, or the lack thereof. This was contextualised in the larger scheme of structural issues plaguing the skilling machinery in the country, which leads to inadequate dispensation of technical and vocational education and training (TVET). In terms of additional structural issues that would potentially shape how Industry 4.0 plays out in the Indian context, attention was directed towards overdue reform of the labour law framework, which has already struggled to incorporate the newer forms of working engagement, such as platform and gig work, that are being evidenced as part of Industry 4.0.
An underlying theme that found mention across sessions was the need to prevent the further marginalisation of the already marginalised as a consequence of technological disruption. Evidence from government datasets as well as from the literature around concepts such as skill-biased technological change, the leaky pipe problem, and the U-shaped curve of female labour force participation was cited to explicate these issues. The merits of different policy measures to address these concerns, such as social security, living wages, and maternity benefits, were also discussed.
While the Workshop touched upon several facets of the discourse around Industry 4.0 in the Sector, it also threw up areas requiring further inquiry. Questions around where in the value chain use-cases for Industry 4.0 technologies are emerging need a more comprehensive understanding. Moreover, the impact of the Sector Skill Councils (SSCs), a central aspect of the skilling ecosystem in India, was not touched upon. An additional path of inquiry pertained to evolving constructive reforms to legal and economic policy frameworks as top-down interventions within the Sector, which could be anticipated to play a significant role in the uptake and impact of Industry 4.0 technologies.
[1] McKinsey Global Institute, A future that works: Automation, employment, and productivity, https://www.mckinsey.com/~/media/mckinsey/featured%20insights/Digital%20Disruption/Harnessing%20automation%20for%20a%20future%20that%20works/MGI-A-future-that-works-Executive-summary.ashx, (accessed 10 August 2018).
[2] See discussion under ‘Catalysing manufacturing-led job growth‘.
[3] R. Verma, Structural Transformation and Jobless Growth in the Indian Economy, The Oxford Handbook of the Indian Economy, 2012.
[4] S. Mehrotra, ‘The Indian Labour Market: A Fallacy, Two Looming Crises and a Tragedy’, CSE Working Paper, April 2018.
[5] ibid.
[6] Mohita Nagpal, ‘Women in tech: There are 3 times more male engineers to females’, belong.co, http://blog.belong.co/gender-diversity-indian-tech-companies, (accessed 10 August 2018).
[7] Aspiring Minds, National Programming Skills Report - Engineers 2017, https://www.aspiringminds.com/sites/default/files/National%20Programming%20Skills%20Report%20-%20Engineers%202017%20-%20Report%20Brief.pdf, (accessed 11 August 2018).
[8] World Economic Forum, The Future of Jobs Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution: Global Challenge Insight Report, January 2016.
[9] Ministry of Statistics and Programme Implementation, All India Report of Sixth Economic Census, Government of India, 2014.
India's Contribution to Internet Governance Debates
Abstract
India has championed ‘access to knowledge’ and ‘access to medicine’ on the global stage. However, India holds seemingly conflicting views on the future of the Internet and how it will be governed. India’s stance is evolving and is distinct from that of authoritarian states, which do not care for equal footing and multi-stakeholderism.
Introduction
Despite John Perry Barlow’s defiant and idealistic Declaration of Independence of Cyberspace1 in 1996, debates about governing the Internet have been alive since the late 1990s. The tug-of-war over its governance continues to bubble among states, businesses, techies, civil society and users. These stakeholders have wondered who should govern the Internet or parts of it: Should it be the Internet Corporation for Assigned Names and Numbers (ICANN)? The International Telecommunication Union (ITU)? The offspring of the World Summit on the Information Society (WSIS) - the Internet Governance Forum (IGF) or Enhanced Cooperation (EC) under the UN? Underlying this debate has been the role and power of each stakeholder at the decision-making table. States in both the global North and South have taken various positions on this issue.
Should all stakeholders have an equal say in governing the unique structure of the Internet, or do states have sovereign public policy authority? India has, in the past, subscribed to the latter view. For instance, at WSIS in 2003, through Arun Shourie, then India’s Minister for Information Technology, India supported the move ‘requesting the Secretary General to set up a Working Group to think through issues concerning Internet Governance,’ offering him ‘considerable experience in this regard... [and] contribute in whatever way the Secretary General deems appropriate’. The United States (US), United Kingdom (UK) and New Zealand have expressed their support for ‘equal footing multi-stakeholderism’, and Australia subscribes to the status quo.
India’s position has been much followed, discussed and criticised. In this article, we trace and summarise India’s participation in the IGF, UN General Assembly (‘UNGA’), ITU and the NETmundial conference (April 2014) as a representative sample of Internet governance fora. In these fora, India has been represented by one of three arms of its government: the Department of Electronics and Information Technology (DeitY), the Department of Telecommunications (DoT) and the Ministry of External Affairs (MEA). The DeitY was converted to a full-fledged ministry in 2016 known as the Ministry of Electronics and Information Technology (MeitY). DeitY and DoT were part of the Ministry of Communications and Information Technology (MCIT) until 2016 when it was bifurcated into the Ministry of Communications and MeitY.
Though India has been acknowledged globally for championing ‘access to knowledge’ and ‘access to medicine’ at the World Intellectual Property Organization (WIPO) and the World Trade Organization (WTO), global civil society and other stakeholders have criticised India’s behaviour in Internet governance on grounds such as a lack of continuity and coherence, and for holding policy positions overlapping with those of authoritarian states.
We argue that even though confusion about the Indian position arises from a multiplicity of views held within the Indian government, India’s position, in totality, is distinct from those of authoritarian states. Since criticism of the Indian government became more strident in 2011, after India introduced a proposal at the UNGA for a UN Committee on Internet-related Policies (CIRP) comprising states as members, we will begin to trace India's position chronologically from that point onwards.
- Download the paper published in NLUD Student Law Journal here
- For a timeline of the events described in the article click here
- Read the paper published by NLUD Student Law Journal on their website
National Health Stack: Data For Data’s Sake, A Manmade Health Hazard
The op-ed was published in Bloomberg Quint on August 14, 2018.
Beyond the severity of their condition, patients afflicted with diseases such as HIV, tuberculosis, and mental illnesses are often subject to social stigma, sometimes even leading to the denial of medical treatment. Given this grim reality, would patients want their full medical history in a database?
The ‘National Health Stack’, as described by the NITI Aayog in its consultation paper, is an ambitious attempt to build a digital infrastructure with a “deep understanding of the incentive structures prevalent in the Indian healthcare ecosystem”. If the government is to create a database of individuals’ health records, then it should appreciate the differential impact that such a database could have on patients.
The collection of health data, without sensitisation and accountability, has the potential to deny healthcare to the vulnerable.
We have innumerable instances of denial of services due to Aadhaar and there is a real risk that another database will lead to more denial of access to the most vulnerable.
Earlier, we had outlined some key aspects of the NHS, the ‘world’s largest’ government-funded national healthcare scheme. Here we discuss some of the core technical issues surrounding the question of data collection, updating, quality, and utilisation.
Resting On A Flimsy Foundation: The Unique Health ID
The National Health Stack envisages the creation of a unique ID for registered beneficiaries in the system — a ‘Digital Health ID’. Upon the submission of a ‘national identifier’ and completion of the Know Your Customer process, the patient would be registered in the system, and a unique health ID generated.
This seemingly straightforward process rests on a very flimsy foundation. The base entry in the beneficiary registry would be linked to a ‘strong foundational ID’. Extreme care needs to be taken to ensure that this is not limited to an Aadhaar number. Currently, the unavailability of Aadhaar is not a ground for denying treatment only for the patient’s first visit; thereafter, the patient must provide an Aadhaar number or an Aadhaar enrolment slip to avail treatment. This suggests that the national healthcare infrastructure will be geared towards increasing Aadhaar enrolment, with the unstated implication that healthcare is a benefit or subsidy - a largesse of government - and not, as the courts have confirmed, a fundamental right.
Not only is this project using government-funded infrastructure to deny its citizens the fundamental right to healthcare, it is using the desperate need of the vulnerable for healthcare to push the ‘Aadhaar’ agenda.
Any pretence that Aadhaar is voluntary is slowly fading with the government mandating it at every step of our lives.
Is The Health ID An Effective And Unique Identifier?
Even if we choose to look past the fact that the validity of Aadhaar is still pending the test of legality before the apex court, a foundational ID would require that the data contained within it is unique, accurate, incorruptible, and cannot be misused. These principles, unfortunately, have been compromised by the UIDAI in the Aadhaar project, with its lack of uniqueness of identity (i.e., fake and duplicate IDs), failures to authenticate identity, numerous alleged data leaks (‘alleged’ because the UIDAI maintains that there have not been any), lack of connectivity needed to authenticate identity, and numerous instances of inaccurate information that cannot be corrected.
Linking something as crucial and basic as healthcare data with such a database is a potential disaster.
There is a real risk that incorrect linking could cause deaths or inappropriate medical care.
The High Risk Of Poor Quality Data
The NITI Aayog paper envisages several expansive databases capable of being updated by different entities. It outlines enrollment and updating processes, but seems to assume that all these extra steps will be taken by all the relevant stakeholders, without explaining what would motivate them to do so.
In a country where government doctors, hospitals, wellness centres, etc are overburdened and understaffed, this reliance is simply not credible. For instance, all attributes within the registries are to be digitally signed by an authorised updater, there must be an audit trail for all changes made to the registries, and surveyors will be tasked with visiting providers in person to validate the data. Identifying such precautions is a welcome step towards building an accurate national health database, but implementing them at this scale seems an impossible task.
Who are these actors and what will incentivise them to ensure the accuracy and integrity of data?
In other words, what incentive and accountability structures will ensure that data entry and updating are accurate, rather than approached with the ‘jugaad’, let’s-just-get-this-done attitude that permeates much of the country? How will patients have access to the database to be able to check its accuracy? Is it possible for a patient (who will presumably be ill) to gain easy access to an updater to change their data? If so, how? It is worth noting that the patient’s ‘right’ to check their data assumes access to an internet-connected computer as well as a good level of digital literacy, which is not the case for a significant section of India’s population. Even data portability loses its potential benefits if the quality of data on these registries is not reliable. In that case, healthcare providers will need to verify their patients’ health history using physical records instead, rendering the stack redundant.
Who will be liable to the patient for misdiagnosis based on the database?
Leaving the question of accountability vague exposes updaters to the possibility of dangerous and unnecessarily punitive measures in the future. The NITI Aayog paper fails to address this key issue, which arose recently. Despite tuberculosis being a notifiable disease, there are reports that numerous doctors in the private sector failed to notify or report TB cases to the Ministry of Health and Family Welfare, ostensibly on the grounds that they did not receive consent from their patients to share their information with the government. This was met with a harsh response from the government, which stated that clinical establishments that failed to notify tuberculosis patients would face jail time. According to some doctors, the government’s move would coerce patients into going to ‘underground clinics’ to receive treatment discreetly, and hence would not solve the problem of TB.
The document also offers no specific recommended procedures regarding how inaccurate entries will be corrected or deleted.
It is then perhaps not a stretch to imagine that these scenarios would affect the quality of the data stored, defeating NITI Aayog’s objective of researchers using the stack for high-quality medical data.
The reason the quality and integrity of data takes centre stage is that all the proposed applications of the NHS (analytics, fraud detection etc.) assume a high-quality, accurate dataset. Yet the enrolment process, the updating process and the disclosed measures to ensure data quality will, in effect, lead to poor-quality data. If that is the case, then applications derived from the NHS dataset should assume an imperfect dataset rather than an accurate one, which should make one wonder whether no data is better than data that is certainly inaccurate.
Lack Of Data Utilisation Guidelines
Issues with data quality are exacerbated depending on how and where the data is used, and by whom. The paper identifies some users as health-sector stakeholders such as healthcare providers (hospitals, clinics, labs etc), beneficiaries, doctors, insurers and accredited social health activists, but fails to lay down utilisation guidelines. The foresight to create a dataset that can be utilised by multiple actors for numerous applications is commendable, but potentially problematic - especially if guidelines on how this data is to be used by stakeholders (especially the private sector) are absent or ignored.
In order to bridge this gap, India has the opportunity to learn from legal precedents set by foreign institutions. As an example, one could examine the Health Information Technology for Economic and Clinical Health Act (HITECH) and the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., which set out strict guidelines for how businesses are to handle sensitive health data in order to maintain individuals’ privacy and security. They go one step further and also lay down incentive and accountability structures so that business associates necessarily report security breaches to their respective covered entities.
If we do not take necessary precautions now, we not only run the risk of poor security and breach of privacy but of inaccurate data that renders the national health data repository a health risk for the whole patient population.
There’s also the lack of clarity on who is meant to benefit from using such a database or whether the benefits are equal to all stakeholders, but more on that in a subsequent piece.
It’s Your Recipe, You Try It First!
If the NITI Aayog and the government are sure that there is a need for a national healthcare database, perhaps they can start using the Central Government Health Scheme (which includes all current and retired government employees and their families) as a pilot scheme for this. Once the software, database and the various apps built on it are found to be good value for money and patients benefit from excellent treatment all over the country, it could be expanded to those who use the Employees’ State Insurance system, and then perhaps to the armed forces. After all, these three groups already have a unique identifier and would benefit from the portability of healthcare records since they are likely to be transferred and posted all over the country. If, and only if, it works for these groups and the claimed benefits are observed, then perhaps it can be expanded to the rest of the country’s healthcare systems.
Murali Neelakantan is an expert in healthcare laws. Swaraj Barooah is Policy Director at The Centre for Internet and Society. Swagam Dasgupta and Torsha Sarkar are interns at The Centre for Internet and Society.
Use of Visuals and Nudges in Privacy Notices
Edited by Elonnai Hickok and Amber Sinha
Former Supreme Court judge Justice B.N. Srikrishna, who is currently involved in drafting India’s new data-privacy law, was quoted recently by Bloomberg[1]. Acknowledging the ineffectiveness of the consent forms through which tech companies collect, and sometimes misuse, users’ data, he asked whether we should have pictographic warnings for consent, much like the warnings on cigarette packets. His concern is that the average Indian does not realise how much data they are generating or how it is being used. He attributed this to access barriers: the consent forms presented by companies are in English. In the Indian context, Justice Srikrishna pointed out, considerations around literacy and languages should be addressed.
The new framework being worked on by Srikrishna and his committee, comprising academics and government officials, would make tech companies more accountable for data collection and use, and allow users more control over their own data. But in addition to this regulatory step towards privacy and data protection, how companies communicate their data practices through consent forms or privacy notices is also critical for users. Currently, cryptic notices are a barrier for users, as are services that do not provide incremental information about the use of the service - for example, what data is being shared with how many people, or what data is being collected at what point - relying instead on blanket consent forms taken at the beginning of a service. Visuals can go a long way in making these notices and services accessible to users.
Although Justice Srikrishna chose the extreme example of warnings on cigarette packets, which visually depict the health risks of smoking using repulsive imagery, the underlying intent seems to be to use visuals as a means of giving an immediate and clear warning about how people’s data is being used and by whom. It must be noted that the effectiveness of warnings on cigarette packets is debatable. These warnings are also a way in which manufacturers consider their accountability met, which is a possible danger with privacy notices as well. Most companies consider their accountability limited to giving all the information to users, without ensuring that the information is communicated in a way that helps the user understand the risks. Hence, one has to be cautious about the role of visuals in notices, so that they are used with the primary purpose of meaningful communication and accessibility, and can inform further action. A visual summary of a data practice, in terms of how it will affect the user, also serves as a warning.
The warning images on cigarette packets are an example of the user-influencing design approach called nudging[2]. While nudging techniques are meant to be aimed at users’ well-being, they raise the question of who decides what is beneficial for users. Moreover, the harm in cigarette smoking is more obvious, and thus the favourable choice for users is also clearer. In the context of data privacy, by contrast, the harms are less apparent. It is difficult to demonstrate the harms or benefits of data use, particularly when data is re-purposed or used indirectly. There is also no single choice that can be pushed when it comes to the use and collection of data: different users may have different preferences, or degrees to which they would like to allow the use of their data. This raises deeper questions about the extent to which privacy law and regulation should be paternalistic.
Nudges are considered to follow the soft, or libertarian, paternalism approach, where the user is not forbidden any options but only given a push to alter their behaviour in a predictable way[3]. It is crucial to differentiate between the strong paternalistic approach, which doesn’t allow a choice at all, the usability approach, and the soft paternalistic approach of nudging, as discussed by Alessandro Acquisti in his paper ‘The Behavioral Economics of Personal Information’[4]. In the usability approach, the design of the system makes it intuitive for users to change settings and secure their data. The soft paternalistic approach of nudging goes a step further and presents secure settings as the default. Usability is often prioritised by designers; however, soft paternalism techniques help to enhance choice for users and lead to greater welfare[5].
Nudging in privacy notices can be a privacy-enhancing tool. For example, informing users of how many people would have access to their data would help them make a decision[6]. However, nudges can also be used to influence users towards choices that compromise their privacy: for example, the visual design of default options on digital platforms currently nudges users to share their data. It is critical to ensure that nudges are used mindfully, and that they are directed at the well-being of users.
The design of privacy notices should be re-conceptualised to ensure that they inform users effectively, keeping in mind certain best practices. For instance, a multilayered privacy notice can be used: a very short notice designed for portable digital devices where space is limited, a condensed notice that contains all the key factors in an easy-to-understand way, and a complete notice with all the legal requirements[7]. Along with the layering of information, the timing of notices should also be designed - at setup, just in time for the user’s action, or at periodic intervals. In terms of visuals, infographics can be used to depict data flows in a system. Another best practice is to integrate privacy notices with the rest of the system. Designers need to be involved early in the process so that design decisions are not purely visual but also consider information architecture, content design, and research.
Practice-based frameworks should be developed for communication designers in order to have a standardised vocabulary around creating privacy notices. Additionally, multiple user groups and their varied privacy preferences must be taken into account. Finally, an ethical framework must be put in place for design practitioners to ensure that users’ well-being is prioritised, and that notices are designed to facilitate informed consent. Further recommendations and concerns regarding the design of privacy notices and the use of visuals can be read here.
Justice Srikrishna’s statement is an important step towards creating effective privacy notices with visuals. The conversation on the need to design privacy notices can lead to clearer and more comprehensible notices. Combined with the enforcement of fair collection and use of data by companies, well-designed notices will give users more control and a real choice to opt in or out of a service, and to make informed choices as they engage with it. Justice Srikrishna’s analogy seems to recommend using visuals to describe what type of data is being collected, and for what purposes, at the time of taking consent. Though cigarette warnings may not be the most appropriate analogy, this is a good start, and it is important to explore how visuals and design can be used throughout a service - from beginning to end - to convey and promote awareness and informed choices by users. It is also important to extend this conversation beyond privacy into the realm of security, and to understand how visuals and design can inform users’ awareness and personal choices around security when using a service.
[1] https://www.bloomberg.com/news/articles/2018-06-10/tech-giants-nervous-as-judge-drafts-first-data-rules-in-india
[2] http://www.ijdesign.org/index.php/IJDesign/article/viewFile/1512/584
[3] https://www.andrew.cmu.edu/user/pgl/psosm2013.pdf
[4] https://www.heinz.cmu.edu/~acquisti/papers/acquisti-privacy-nudging.pdf
[5] https://www.heinz.cmu.edu/~acquisti/papers/acquisti-privacy-nudging.pdf
[6] https://cis-india.org/internet-governance/files/rethinking-privacy-principles
[7] https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/ten_steps_to_develop_a_multilayered_privacy_notice__white_paper_march_2007_.pdf
ICANN response to DIDP #31 on diversity
The file can be found here
In our 31st DIDP request, we asked ICANN to disclose information pertaining to the diversity of its employees based on their race and citizenship. ICANN states that it is an equal opportunities employer, and we were hoping this information would help ascertain the extent to which people from different backgrounds are represented in its ranks.
However, the response provided to us did not shed any light on this, for two reasons. Firstly, ICANN holds this information only for two countries, the USA and Singapore, as legislation in these countries compels employers to record it. In the US, Title VII of the Civil Rights Act of 1964 requires any organization with 100 or more employees to file an Employer Information Report in which employment data is categorized by race/ethnicity, gender and job category. In Singapore, information on race is gathered from the employee to assess which Self-Help Group fund the employee should contribute to under Singaporean law.
Secondly, even for these two countries, ICANN refused to divulge the information on the basis of its conditions of nondisclosure. The conditions pertinent here were:
- Information provided by or to a government or international organization, or any form of recitation of such information, in the expectation that the information will be kept confidential and/or would or likely would materially prejudice ICANN's relationship with that party.
- Personnel, medical, contractual, remuneration, and similar records relating to an individual's personal information, when the disclosure of such information would or likely would constitute an invasion of personal privacy, as well as proceedings of internal appeal mechanisms and investigations.
- Drafts of all correspondence, reports, documents, agreements, contracts, emails, or any other forms of communication
We had only enquired about the percentage representation of employees at each level by race or citizenship, but even this was deemed too dangerous for ICANN to disclose. It did not volunteer any further information, such as an anonymized data set, and hence we will now file a DIDP asking for the same.
Given the global and multi-stakeholder nature of the processes at ICANN, it is important that its workforce reflects true diversity as well. Its bylaws mandate diversity among its Board of Directors and some of its constituent bodies, but there is no concrete proof of this being imbibed within its recruitment practices. ICANN also did not think the public interest in disclosing the requested information outweighed the harm that could be caused by the disclosure.
DNA ‘Evidence’: Only Opinion, Not Science, And Definitely Not Proof Of Crime!
The article was published in Bloomberg Quint on August 20, 2018.
Though the Bill takes some steps in the right direction, such as formalising the process for lab accreditation, it ignores many potential ‘harms’ that may arise out of the collection, databasing, and use of DNA evidence for criminal and civil purposes.
DNA evidence is widely touted as the most accurate forensic tool, but what is not widely publicised is that it is not infallible. From crime scene to database, it is extremely vulnerable to a number of unknown variables and outcomes. These variables are only increasing as the technology becomes more precise – profiles can now be developed from only a few cells, and technology exists that generates a profile in 90 minutes. Primary and secondary transfer, contamination, incomplete samples, mixed samples from too many contributors, and inaccurate or outdated methods of analysis and statistical methodologies are all serious reasons why DNA evidence may paint an innocent person as guilty.
Importantly, DNA itself is not static and predicting how it may have changed over time is virtually impossible.
Innocent, But Charged
In April 2018, WIRED carried the story of Lukis Anderson, who was charged with the first-degree murder of Raveesh Kumra, a Silicon Valley investor, after investigators found Anderson’s DNA on Kumra’s nails. Long story short – Anderson had been intoxicated in public earlier that day and had been attended to by paramedics. The same paramedics handled Kumra’s body and inadvertently transferred Anderson’s DNA to it. The story quotes some sobering research findings about DNA:
- Direct contact is not necessary for DNA to be transferred. In an experiment in which a group of individuals shared a bottle of juice, 50 percent had another person’s DNA on their hand, and a third of the glasses contained DNA from individuals who had not had direct contact with them.
- An average person sheds 50 million skin cells a day.
- Even when standing still, we shed DNA that can travel over a yard away, and it can easily be carried for miles on others’ clothing or hair, not unlike pollen.
- In an experiment that tested public items, it was found that items can contain DNA from a half-dozen people.
- A friendly or inadvertent contact can transfer DNA to private regions or clothing.
- Different people shed different amounts of DNA-containing detritus.
- One in five people has some other person’s DNA under their fingernails on a continuous basis.
In another case, the police in Idaho, USA, used a public DNA database to run a familial DNA search – a technique used to identify suspects whose DNA is not recorded in a law enforcement database, but whose close relatives have had their genetic profiles cataloged, just as India's DNA Bill seeks to do. The partial match that resulted implicated Michael Usry, the son of the man whose DNA was in the public database. It took 33 days for Michael to be cleared of the crime. That an innocent man only spent 33 days under suspicion could be considered a positive outcome when compared to the case of Josiah Sutton who spent four years convicted of rape in prison due to misinterpretation of DNA samples by the Houston Police Department Crime Laboratory, which is among the largest public forensic centers in Texas. The Atlantic called this out as “The False Promise of DNA Testing – the forensic technique is becoming ever more common and ever less reliable”.
Presently, there is little confidence that such safeguards exist – prosecutors do not share exculpatory evidence with the accused; India does not follow the ‘fruit of the poisonous tree’ doctrine with respect to the admissibility of evidence; and India has yet to develop a robust jurisprudence for evaluating scientific evidence.
The 2015 Law Commission Report cites four cases that speak to the role of, and reliance on, expert opinion as evidence. Though these cases point to the importance of expert opinion, they differ on the weight that should be given to it. International best practice requires the submission of corroborating evidence, training for law enforcement and court officers, and ensuring that prosecution and defence have equal access to forensic evidence.
Consider India: a population of 1.3 billion people, 70 percent of whom reside in rural areas with lower levels of education; a heavy migrant population in urban centres; an overwhelmed police force in the nascent stages of forensic training; an overburdened judiciary; and no concrete laws governing the admissibility of forensic techniques.
In such circumstances, the question is not only how many criminals can be convicted but also how many innocents could be convicted.
The DNA Bill seeks to establish DNA databanks at the regional and national level but how this will be operationalised is not quite clear. The Bill enables the DNA Regulatory Board to accredit DNA labs. Will databases be built from scratch? Will they begin by pulling in existing databases?
The question is not if the DNA samples match but how they came to match. The greater power that comes from the use of DNA databases requires greater responsibility in ensuring adequate information, process, training, and laws are in place for everyone – those who give DNA, collect DNA, store DNA, process DNA, present DNA, and eventually decide on the use of the DNA. As India matures in its use of DNA evidence for forensic purposes it is important that it keeps at the forefront what is necessary to ensure and protect the rights of the individual.
Elonnai Hickok is Chief Operating Officer at The Centre for Internet and Society. Murali Neelakantan is an expert in healthcare laws, and the author of ‘DNA Testing as Evidence - A Judge’s Nightmare’ in the Journal of Law and Medicine.
An Analysis of the CLOUD Act and Implications for India
Introduction
Networked technologies have changed the nature of crime and will continue to do so. Access to data generated by digital technologies and on digital platforms is important in solving online and offline crimes. Yet, a significant amount of such data is stored predominantly under the control of companies in the United States. For metadata (location data or subscriber information), Indian law enforcement can send a request directly to the company. For access to content data, however, law enforcement must follow the Mutual Legal Assistance Treaty (MLAT) process, as a result of requirements under the Electronic Communications Privacy Act (ECPA). ECPA allows service providers to share metadata on request of foreign governments, but requires a judicially issued warrant based on a finding of ‘probable cause’ before a service provider may share content data.
The challenges associated with accessing data across borders have been an area of concern for India for many years. From data localization requirements to legal decryption mandates and proposed backdoors, law enforcement and the government have consistently been trying to find efficient ways to access data across borders.
Towards finding solutions to the challenges in the MLAT process, Peter Swire and Deven Desai, in the article “A Qualified SPOC Approach for India and Mutual Legal Assistance”, note the importance of addressing the hurdles in the India-US MLAT. They suggest that MLAT reform in India should not start with law enforcement, proposing instead the establishment of a Single Point of Contact (SPOC) designated to handle and process government-to-government requests, with requests emerging from that office receiving special legal treatment.
Frustrations with cross-border sharing of data are not unique to India: the framework has been recognized by many stakeholders as outdated, slow, and inefficient, giving rise to calls from governments, law enforcement, and companies for solutions. As a note, some research has also highlighted that the identified issues with the MLAT system are broad, and that more evidence is needed to support each concern and inform policy responses.
Towards this, the US and EU have undertaken clear policy steps to address the tensions in the MLAT system by enabling direct government access to content data. On April 17, 2018, the European Union published the E-Evidence Directive and a Regulation that allow a law enforcement agency to request the preservation or production of electronic evidence, requiring service providers to produce data within 10 days of receiving a request, or within 6 hours for emergency requests. Production orders for content and transactional records can be issued only for certain serious crimes and must be issued by a judge. No judicial authorisation is required for production orders for subscriber information and access data, which can be sought to investigate any criminal offence, not just serious ones. Preservation orders can be issued without judicial authorisation for all four types of data and for the investigation of any crime. Further, requests originating from the European Union must be handled by a designated legal representative.
On the US side, in 2016, the Department of Justice (DoJ) put out draft legislation that would create a framework allowing the US to enter into executive agreements with countries that have been evaluated as meeting criteria defined in the law. Our response to the DoJ draft Bill can be found here. In February 2018, the Microsoft Ireland case was presented before the U.S. Supreme Court. The question central to the case was whether a US warrant issued against a company incorporated in the US was valid if the data was stored on servers outside the US. On March 23, 2018, the United States government enacted the “Clarifying Lawful Overseas Use of Data Act”, also known as the CLOUD Act. The passing of the Act resolves the dilemma at the heart of the Microsoft Ireland case. The CLOUD Act amends Title 18 of the United States Code and allows U.S. law enforcement agencies to access data stored abroad by increasing the reach of the U.S. Stored Communications Act, enabling access without requiring the specific cooperation of foreign governments. Under this law, U.S. law enforcement agencies can seek or issue orders that compel companies to provide data regardless of where the data is located, as long as the data is under their “possession, custody or control”. It further allows US communication service providers to intercept or provide the content of communications in response to orders from foreign governments, if the foreign government has entered into an executive agreement with the US approved by the Attorney General with the concurrence of the Secretary of State. The Act also absolves companies of criminal and civil liability when disclosing information in good faith pursuant to an executive agreement between the US and a foreign country. Such access would be reciprocal, with the US government having similar access rights to data stored in the foreign country.
Though the E-Evidence Directive is a significant development, in this article - we focus on the CLOUD Act and its implications for cross border sharing of data between India and the US.
To read more download the PDF
Consumer Care Society: Silver Jubilee Year Celebrations
CONSUMER CARE SOCIETY (CCS) is an active, volunteer-based, not-for-profit organization involved in consumer activities. Established as a registered society in 1994, CCS has for the past two and a half decades functioned as the voice of the consumer in many forums. Today CCS is widely recognized as a premier consumer voluntary organization (CVO) in Bangalore and Karnataka. CCS is registered with many governmental agencies and regulators such as TRAI, BIS, the Petroleum and Natural Gas Regulatory Board, DOT and ICMR at the Central Government level, and with almost all service providers at the State level, such as BWSSB, BESCOM, BDA and BBMP.
Shreenivas.S. Galgali, ITS, Adviser, TRAI Regional Office, Bangalore and Aradhana Biradar, User Education and Research Specialist, Google were the other speakers at the event held at CCS.
The Srikrishna Committee Data Protection Bill and Artificial Intelligence in India
Privacy Considerations in AI
Other related privacy concerns in the context of AI center around re-identification and de-anonymisation, discrimination, unfairness, inaccuracy, bias, opacity, profiling, misuse of data, and embedded power dynamics.[1]
The need for large amounts of data to improve accuracy, the ability to process vast amounts of granular data, and the present trade-off between the explainability and the results of AI systems[2] have raised many concerns on both sides of the fence. On one hand, there is concern that heavy-handed or inappropriate regulation will stifle innovation: if developers can only use data for pre-defined purposes, the prospects of AI are limited. On the other hand, individuals are concerned that privacy will be significantly undermined by AI systems that collect and process data in real time and at a personal level not previously possible. Chatbots, home assistants, wearable devices, robot caregivers, facial recognition technology, etc. can collect data from a person at an intimate level. At the same time, some have argued that AI can work towards protecting privacy by limiting the access that humans working at the respective companies have to personal data.[3]
India is embracing AI. Two national roadmaps for AI were released in 2018, by the Ministry of Commerce and Industry and by NITI Aayog respectively. Both roadmaps emphasized the importance of addressing privacy concerns in the context of AI and ensuring that a robust privacy legislation is enacted. In August 2018, the Srikrishna Committee released a draft Personal Data Protection Bill 2018 and an associated report that outlines and justifies a framework for privacy in India. As the development and use of AI in India continues to grow, it is important that India simultaneously moves forward with a privacy framework that addresses the privacy dimensions of AI.
In this article we attempt to analyse if and how the Srikrishna Committee's draft Bill and report have addressed AI, contrast this with developments in the EU and the passing of the GDPR, and identify solutions that are being explored towards finding a way to develop AI while upholding and safeguarding privacy.
The GDPR and Artificial Intelligence
The General Data Protection Regulation became enforceable in May 2018 and establishes a framework for the processing of personal data of individuals within the European Union. The GDPR has been described by the IAPP as taking a 'risk based' approach to data protection that pushes data controllers to engage in risk analysis and adopt 'risk measured responses'.[4] Though the GDPR does not explicitly address artificial intelligence, it does have a number of provisions that address automated decision making and profiling, and a number of provisions that will impact companies using artificial intelligence in their business activities. These have been outlined below:
- Data rights: The GDPR grants individuals a number of data rights: the right to be informed, right of access, right to rectification, right to erasure, right to restrict processing, right to data portability, right to object, and rights related to automated decision making including profiling. The last of these seeks to address concerns arising out of automated decision making by giving the individual the right to not be subject to a decision based solely on automated decision making, including profiling, if the decision would produce legal effects or similarly significantly affect them. There are three exceptions to this right, where the automated decision making is: (a) necessary for the performance of a contract, (b) authorised by Union or Member State law, or (c) based on explicit consent.[5]
- Transparency: Under Article 14, data controllers must enable the right to opt out of automated decision making by notifying individuals of the existence of automated decision making including profiling, and providing meaningful information about the logic involved as well as the potential consequences of such processing.[6] Importantly, this requirement has the potential to ensure that companies do not operate complete 'black box' algorithms within their business processes.
- Fairness: The principle of fairness found under Article 5(1) will also apply to the processing of personal data by AI. The principle requires that personal data must be processed lawfully, fairly, and in a transparent manner in relation to the data subject. Recital 71 further clarifies that this will include implementing appropriate mathematical and statistical measures for profiling, ensuring that inaccuracies are corrected, and ensuring that processing does not result in discriminatory effects.[7]
- Purpose Limitation: The principle of purpose limitation (Article 5(1)(b)) requires that personal data must be collected for specified, explicit, and legitimate purposes and not be further processed in a manner incompatible with those purposes. Processing for archiving purposes in the public interest, scientific or historical research purposes, or statistical purposes is not considered to be incompatible with the initial purposes. It has been noted that it is unclear if research carried out through artificial intelligence would fall under this exception, as the GDPR does not define 'scientific purposes'.[8]
- Privacy by Design and Default: Article 25 requires all data controllers to implement technical and organizational measures to meet the requirements of the regulation. This could include techniques like pseudonymisation. Data controllers also are required to implement appropriate technical and organizational measures for ensuring that by default only personal data which are necessary for a specific purpose are processed.[9]
- Data Protection Impact Assessments: Article 35 requires data controllers to undertake impact assessments if they are undertaking processing that is likely to result in a high risk to individuals. This includes where the data controller undertakes systematic and extensive profiling, processes special categories of data or criminal offence data on a large scale, or systematically monitors publicly accessible places on a large scale. In implementation, some jurisdictions like the UK require impact assessments on additional conditions, including where the data controller: uses new technologies; uses profiling or special category data to decide on access to services; profiles individuals on a large scale; processes biometric data; processes genetic data; matches data or combines datasets from different sources; collects personal data from a source other than the individual without providing them with a privacy notice; tracks individuals' location or behaviour; profiles children or targets marketing or online services at them; or processes data that might endanger the individual's physical health or safety in the event of a security breach.[10]
- Security: Article 32 requires data controllers to ensure a level of security appropriate to the risk, including employing methods like encryption and pseudonymisation.
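To make these techniques concrete, here is a minimal sketch of pseudonymisation via a keyed hash. The function name, key handling, and record are illustrative assumptions, not anything prescribed by the GDPR:

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash. The mapping is
    repeatable (same input, same pseudonym) but cannot be reversed
    without the secret key, which must be stored separately from
    the pseudonymised data."""
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record; in practice the key would live in a separate key store.
key = b"kept-separately-under-access-control"
record = {"name": "Asha Rao", "purchase": "books"}
record["name"] = pseudonymise(record["name"], key)
```

Because the hash is keyed, records can still be linked and analysed under a stable pseudonym, while re-identification requires access to the separately stored key.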
Srikrishna Committee Bill and AI
The Draft Data Protection Bill and associated report by the Srikrishna Committee was published in August 2018 and recommends a privacy framework for India. The Bill contains a number of provisions that will directly impact data fiduciaries using AI and that try and account for the unintended consequences of emerging technologies like AI. These include:
- Definition of Harm: The Bill defines harm as including bodily or mental injury; loss, distortion or theft of identity; financial loss or loss of property; loss of reputation or humiliation; loss of employment; any discriminatory treatment; any subjection to blackmail or extortion; any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal; any restriction placed or suffered directly or indirectly on speech, movement or any other action arising out of a fear of being observed or surveilled; and any observation or surveillance that is not reasonably expected by the data principal. The Bill also allows for categories of significant harm to be further defined by the data protection authority.
Many of the above are harms that have been associated with artificial intelligence, specifically loss of employment, discriminatory treatment, and denial of service. Enabling the data protection authority to further define categories of significant harm could allow unexpected harms arising from the use of AI to come under the ambit of the Bill.
- Data Rights: Like the GDPR, the Bill creates a set of data rights for the individual, including the right to confirmation and access, correction, data portability, and the right to be forgotten. At the same time, the Bill is intentionally silent on the rights and obligations that have been incorporated into the GDPR to address automated decision making: the right to object to processing,[11] the right to opt out of automated decision making,[12] and the obligation on the data controller to inform the individual about the use of automated decision making and basic information regarding its logic and impact.[13] As justification, the Committee noted the following in its report: the right to restrict processing may be unnecessary in India, as it provides only interim remedies around issues such as inaccuracy of data, and the same can be achieved by a data principal approaching the DPA or the courts for a stay on processing, or by simply withdrawing consent. The objective of protecting against discrimination, bias, and opaque decisions, which the right to object to automated processing and to receive information about the processing of data seeks to fulfill, would in the Indian context be better achieved through an accountability framework requiring specific data fiduciaries that make evaluative decisions through automated means to set up processes that 'weed out' discrimination. At the same time, if discrimination has taken place, individuals can seek remedy through the courts.
By taking this approach, the Bill creates a framework to address harms arising out of AI, but does not empower the individual to decide how their data is processed and remains silent on the issue of ‘black box’ algorithms.
- Data Quality: Requires data fiduciaries to ensure that personal data that is processed is complete, accurate, not misleading, and updated with respect to the purposes for which it is processed. When taking steps to comply with this, data fiduciaries must take into consideration whether the personal data is likely to be used to make a decision about the data principal, whether it is likely to be disclosed to other individuals, and whether it is kept in a form that distinguishes personal data based on facts from personal data based on opinions or personal assessments.[14]
This principle, while not mandating that data fiduciaries take into account considerations such as biases in datasets, could potentially be interpreted by the data protection authority to include within its scope means towards ensuring that data does not contain or result in bias.
- Principle of Privacy by Design: Requires significant data fiduciaries to have in place a number of policies and measures addressing several aspects of privacy. These include: (a) measures to ensure managerial, organizational, business practices and technical systems are designed in a manner to anticipate, identify, and avoid harm to the data principal (b) the obligations mentioned in Chapter II are embedded in organisational and business practices (c) technology used in the processing of personal data is in accordance with commercially accepted or certified standards (d) legitimate interests of business including any innovation is achieved without compromising privacy interests (e) privacy is protected throughout processing from the point of collection to deletion of personal data (f) processing of personal data is carried out in a transparent manner (g) the interest of the data principal is accounted for at every stage of processing of personal data.
A number of these (a, d, e, and g) require that the interest of the data principal is accounted for throughout the processing of personal data. This will be significant for systems driven by artificial intelligence, as a number of the harms that have arisen from the use of AI, including discrimination, denial of service, and loss of employment, have been brought under the definition of harm within the Bill. Placing the interest of the data principal first is also important in protecting against unintended consequences or harms that may arise from AI.[15] If enacted, it will be important to see what policies and measures emerge in the context of AI to comply with this principle, and what commercially accepted or certified standards companies rely on to comply with (c).
- Data Protection Impact Assessment: Requires data fiduciaries to undertake a data protection impact assessment when implementing new technologies, undertaking large-scale profiling, or using sensitive personal data. Such assessments need to include a detailed description of the proposed processing operation, the purpose of the processing and the nature of the personal data being processed, an assessment of the potential harm that may be caused to the data principals whose personal data is proposed to be processed, and measures for managing, minimising, mitigating or removing such risk of harm. If the Authority finds that the processing is likely to cause harm to data principals, it may direct the data fiduciary to cease such processing or to carry it out subject to conditions. This requirement applies to all significant data fiduciaries, and to other data fiduciaries as required by the DPA.[16]
This principle will apply to companies implementing AI systems. For AI systems, it will be important to see how much information the DPA will require under the requirement of data fiduciaries providing detailed descriptions of the proposed processing operation and purpose of processing.
- Classification of data fiduciaries as significant data fiduciaries: The Authority has the ability to notify certain categories of data fiduciaries as significant data fiduciaries based on (1) the volume of personal data processed, (2) the sensitivity of personal data processed, (3) the turnover of the data fiduciary, (4) the risk of harm resulting from any processing being undertaken by the fiduciary, (5) the use of new technologies for processing, and (6) any other factor relevant for causing harm to any data principal. If a data fiduciary falls under the ambit of any of these conditions, it is required to register with the Authority. All significant data fiduciaries must undertake data protection impact assessments, maintain records as per the Bill, undergo data audits, and have in place a data protection officer.
Under this provision, companies deploying artificial intelligence could be classified as significant data fiduciaries and be subject to the principles of privacy by design etc. articulated in the chapter. The exception to this will be if the data fiduciary comes under the definition of 'small entity' found in section 48.[17]
- Restrictions on cross border transfer of personal data: Requires that all data fiduciaries must store a copy of personal data on a server or data centre located in India and notified categories of critical personal data must be processed in servers located in India.
It is interesting to note that in the context of cross border sharing of data, the Bill is creating a new category of data that can be further defined beyond personal and sensitive personal data. For companies implementing artificial intelligence, this provision may prove cumbersome to comply with as many utilize cloud storage and facilities located outside of India for the processing of larger amounts of data.[18]
- Powers and functions of the Authority: The Bill lays down a number of functions of the Authority one being to monitor technological developments and commercial practices that may affect protection of personal data.
Presumably, this will include monitoring technological developments in the field of artificial intelligence.[19]
- Fair and reasonable processing: Requires that any person processing personal data owes a duty to the data principal to process such personal data in a fair and reasonable manner that respects the privacy of the data principal. In its report, the Srikrishna Committee explains that the principle of fair and reasonable processing is meant to address: (1) power asymmetries between data principals and data fiduciaries, recognising that data fiduciaries have a responsibility to act in the best interest of the data principal; (2) situations where processing may be legal but not necessarily fair or in the best interest of the data principal; and (3) developing trust between the data principal and the data fiduciary.[20]
This is in contrast to the GDPR which requires processing to simultaneously meet the three conditions of fairness, lawfulness, and transparency.
- Purpose Limitation: Personal data can only be processed for the purposes specified or any other purpose that the data principal would reasonably expect.
As a note, the Srikrishna Committee Bill does not include ‘scientific purposes’ as an exception to the principle of purpose limitation as found in the GDPR,[21] and instead creates an exception for research, archiving, or statistical purposes.[22] The DPA has the responsibility of developing codes defining research purposes under the act.[23]
- Security Safeguards: Every data fiduciary must implement appropriate security safeguards including the use of methods such as de-identification and encryption, steps to protect the integrity of personal data, and steps necessary to prevent misuse, unauthorised access to, modification, and disclosure or destruction of personal data.[24]
Unlike the GDPR, which explicitly refers to the technique of pseudonymisation, the Srikrishna Bill uses the term de-identification. The Srikrishna report clarifies that this includes techniques like pseudonymisation and masking, and further clarifies that, because of the risk of re-identification, de-identified personal data should still receive the same level of protection as personal data. The Bill also gives the DPA the authority to define appropriate levels of anonymisation.[25]
Technical perspectives of Privacy and AI
There is an emerging body of work looking at solutions to the dilemma of maintaining privacy while employing artificial intelligence, and at ways in which artificial intelligence can support and strengthen privacy. For example, there are AI driven platforms that leverage the technology to help a business meet regulatory compliance with data protection laws,[26] as well as research into AI privacy enhancing technologies.[27] Standards setting bodies like the IEEE have undertaken work on the ethical considerations in the collection and use of personal data when designing, developing, and/or deploying AI through the standard 'Ethically Aligned Design'.[28] In the article Artificial Intelligence and Privacy,[29] Datatilsynet, the Norwegian Data Protection Authority, breaks such methods into three categories:
- Techniques for reducing the need for large amounts of training data: Such techniques can include
- Generative adversarial networks (GANs): GANs are used to create synthetic data and can address the need for large volumes of labelled data without relying on real data containing personal data. GANs could potentially be useful from a research and development perspective in sectors like healthcare, where most data would qualify as sensitive personal data.
- Federated Learning: Federated learning allows models to be trained and improved on data from a large pool of users without directly collecting user data. This is achieved by distributing a centralised model to client units, where it is improved on local data. The changes from these improvements are shared back with the central server, and an average of the changes from multiple client units becomes the basis for improving the centralised model.
- Matrix Capsules: Proposed by Google researcher Geoff Hinton, Matrix Capsules improve the accuracy of existing neural networks while requiring less data.[30]
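The federated learning loop described above can be sketched as follows; the linear model, single gradient step, and randomly simulated clients are illustrative simplifications rather than any particular production system:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One least-squares gradient step on a client's private data.
    Only the updated weights leave the device, never X or y."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# The central server holds the global model.
global_weights = np.zeros(3)

# Each client's data stays local (simulated here with random draws).
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for _ in range(10):  # communication rounds
    # Clients refine a copy of the current global model on local data...
    updates = [local_update(global_weights.copy(), X, y) for X, y in clients]
    # ...and the server averages the returned weights (federated averaging).
    global_weights = np.mean(updates, axis=0)
```

Only model weights cross the network; the raw `X` and `y` of each client never leave the device, which is the privacy gain federated learning offers.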
- Techniques that uphold data protection without reducing the basic data set
- Differential Privacy: Differential privacy intentionally adds 'noise' to data when it is accessed. This allows for personal data to be accessed without revealing identifying information.
- Homomorphic Encryption: Homomorphic encryption allows for the processing of data while it is still encrypted. This addresses the need to access and use large amounts of personal data for multiple purposes.
- Transfer Learning: Instead of building a new model, transfer learning builds upon existing models that are applied to new, related purposes or tasks. This has the potential to reduce the amount of training data needed.
- RAIRD: Developed by Statistics Norway and the Norwegian Centre for Research Data, RAIRD is a national research infrastructure that allows for access to large amounts of statistical data for research while managing statistical confidentiality. This is achieved by allowing researchers access to metadata. The metadata is used to build analyses which are then run against detailed data without giving access to actual data.[31]
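The 'noise' that differential privacy adds is typically calibrated to the sensitivity of the query. A minimal sketch of the Laplace mechanism for a counting query follows; the function name, dataset, and epsilon value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def private_count(values, predicate, epsilon=0.5):
    """Answer 'how many records satisfy predicate?' with Laplace
    noise scaled to the query's sensitivity (1 for a count),
    giving epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 37, 41, 29, 52, 61, 34]
noisy = private_count(ages, lambda a: a > 40)  # true answer is 3
```

Because any single record shifts the true count by at most one, the noisy answer reveals almost nothing about any individual, while remaining useful in aggregate; a smaller epsilon means more noise and stronger privacy.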
- Techniques to move beyond opaque algorithms
- Explainable AI (XAI): DARPA, in collaboration with Oregon State University, is researching how to create explainable models and explanation interfaces, while ensuring a high level of learning performance, in order to enable individuals to interact with, trust, and manage artificial intelligence.[32] DARPA identifies a number of entities working on different models and interfaces for analytics and autonomy AI.[33]
- Local Interpretable Model-Agnostic Explanations (LIME): Developed to enable trust between AI models and humans by generating explanations that highlight the key aspects that were important to the model and its decision, thus providing insight into the rationale behind a model.[34]
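The perturb-and-fit idea behind LIME can be sketched as follows; the black-box model, weighting kernel, and function names here are simplified stand-ins for illustration, not the LIME library's API:

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """Stand-in for an opaque model: the score depends strongly on
    feature 0, weakly on feature 1, and not at all on feature 2."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def explain_locally(instance, n_samples=500, width=0.5):
    """LIME-style sketch: perturb the instance, query the black box,
    and fit a distance-weighted linear surrogate whose coefficients
    rank each feature's local importance."""
    X = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    y = black_box(X)
    # Weight perturbed samples by proximity to the explained instance.
    w = np.exp(-np.sum((X - instance) ** 2, axis=1))
    # Solve the weighted least-squares normal equations for the surrogate.
    Xw = X * w[:, None]
    coef, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)
    return coef

importance = explain_locally(np.array([1.0, 2.0, 3.0]))
```

Because the stand-in model is itself linear, the surrogate recovers its coefficients almost exactly; for a real model the coefficients only describe behaviour in the neighbourhood of the explained instance, which is precisely the 'local' in LIME.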
Public Sector use of AI and Privacy
The role of AI in public sector decision making has been gradually growing globally across sectors such as law enforcement, education, transportation, judicial decision making and healthcare. In India too, the use of automated processing is being discussed and gradually implemented in electronic governance under the Digital India mission, in the monitoring of social media content by domestic law enforcement agencies, and in educational schemes. Much like the potential applications of AI across sub-sectors, the nature of the regulatory issues is also diverse.
Aside from the accountability framework discussed in the Srikrishna Committee report, the Puttaswamy judgment also provides a basis for governance of AI with respect to its concerns for privacy, in limited contexts. The sources of the right to privacy as articulated in the Puttaswamy judgments included 'personal liberty' under Article 21 of the Constitution. In order to fully appreciate how constitutional principles could apply to automated processing in India, we need to look closely at the origins of privacy under liberty. In the famous case of AK Gopalan there is a protracted discussion on the contents of the rights under Article 21, and the majority opinions were themselves divided. While Sastri J. and Mukherjea J. took the restrictive view, limiting the protections to bodily restraint and detention, Kania J. and Das J. took a broader view that included the right to sleep, play etc. Through RC Cooper[35] and Maneka[36], the Supreme Court took steps to reverse the majority opinion in Gopalan, and it was established that the freedoms and rights in Part III could be addressed by more than one provision. The expansion of 'personal liberty' began in Kharak Singh, where unjustified interference with a person's right to live in his house was held to be violative of Article 21. The reasoning in Kharak Singh draws heavily from Munn v. Illinois,[37] which held life to be "more than mere animal existence." Curiously, after taking this position, Kharak Singh fails to recognise a fundamental right to privacy (analogous to the Fourth Amendment protection in the US) under Article 21. The position taken in Kharak Singh was to extrapolate the same method of wide interpretation of 'personal liberty' as was accorded to 'life'. Maneka, which evolved the test for unenumerated rights within Part III, says that the claimed right must be an integral part of, or of the same nature as, the named right.
It says that the claimed right must be 'in reality and substance nothing but an instance of the exercise of the named fundamental right'. The clear reading of privacy into 'personal liberty' in this judgment is effectively a correction of the inherent inconsistencies in the positions taken by the majority in Kharak Singh.
The other significant change in constitutional interpretation that occurred in Maneka was with respect to the phrase ‘procedure established by law’ in Article 21. In Gopalan, the majority held that the phrase ‘procedure established by law’ does not mean procedural due process or natural justice. What this meant was that, once a ‘procedure’ was ‘established by law’, Article 21 could not be said to have been infringed. This position was entirely reversed in Maneka. The ratio in Maneka said that ‘procedure established by law’ must be fair, just and reasonable, and cannot be arbitrary and fanciful. Therefore, any infringement of the right to privacy must be through a law which follows the principles of natural justice, and is not arbitrary or unfair. It follows that any instances of automated processing for public functioning by state actors or others, must meet this standard of ‘fair, just and reasonable’.
While there is a lot of focus internationally on what ethical AI must be, it is important that when we consider use of AI by the state, we pay heed to the existing constitutional principles which determine how AI must be evaluated against these standards. These principles, however, extend only to limited circumstances, for protections under Article 21 are not horizontal in nature but only applicable against the state. Whether a party is the state or not is a question that has been considered several times by the Supreme Court and must be determined by functional tests. In our submission to the Justice Srikrishna Committee, we clearly recommended that where automated decision making is used for the discharge of public functions, the data protection law must state that such actions are subject to the constitutional standards, are 'just, fair and reasonable', and satisfy the tests for both procedural and substantive due process. To a limited extent, the committee seems to have picked up the standards of 'fair' and 'reasonable' and made them applicable to all forms of processing, whether public or private. It is as yet unclear whether fairness and reasonableness as inserted in the Bill would draw from the constitutional standard under Article 21. The report makes a reference to the twin principles of acting in a manner that upholds the best interest of the privacy of the individual, and processing within the reasonable expectations of the individual, which do not seem to cover the fullest essence of the legal standard under Article 21.
Conclusion
The Srikrishna Committee Bill attempts to create an accountability framework for the use of emerging technologies, including AI, that is focused on placing the responsibility on companies to prevent harm. Though not as robust as those found in the GDPR, protections have been enabled through requirements such as fair and reasonable processing, ensuring data quality, and implementing the principles of privacy by design. At the same time, the Srikrishna Bill does not include provisions that can begin to address the consumer-facing 'black box' of AI by ensuring that individuals have information about the potential impact of decisions taken by automated means. In contrast, the GDPR has already taken important steps to tackle this by requiring companies to explain the logic and potential impact of decisions taken by automated means.
Most importantly, the Bill gives the Data Protection Authority the necessary tools to hold companies accountable for the use of AI through the requirement of data protection audits. If enacted, it remains to be seen how these audits and the principle of privacy by design will be implemented and enforced in the context of companies using AI. Though the Bill creates a Data Protection Authority consisting of members who have significant experience in data protection, information technology, data management, data science, cyber and internet laws, and related subjects, these requirements could be further strengthened by including a member with a background in ethics and human rights.
One of the responsibilities of the DPA under the Srikrishna Bill will be to monitor technological developments and commercial practices that may affect the protection of personal data, and to promote measures and undertake research for innovation in the field of protection of personal data. If enacted, we hope that AI, and solutions towards enhancing privacy in the context of AI as described above, will be one of the focus areas of the DPA. It will also be important to see how the DPA develops impact assessments related to AI and what tools associated with the principle of privacy by design emerge to address AI.
[1] https://privacyinternational.org/topics/artificial-intelligence
[2] https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/
[3] https://iapp.org/news/a/ai-offers-opportunity-to-increase-privacy-for-users/
[4] https://iapp.org/media/pdf/resource_center/GDPR_Study_Maldoff.pdf
[5] https://gdpr-info.eu/art-22-gdpr/
[6] https://gdpr-info.eu/art-14-gdpr/
[7] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf
[8] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf
[9] https://gdpr-info.eu/art-25-gdpr/
[10] https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/
[11] https://gdpr-info.eu/art-21-gdpr/
[12] https://gdpr-info.eu/art-22-gdpr/
[13] https://gdpr-info.eu/art-14-gdpr/
[14]Draft Data Protection Bill 2018 - Chapter II section 9
[15] Draft Data Protection Bill 2018 - Chapter VII section 29
[16] Draft Data Protection Bill 2018 - Chapter VII section 33
[17] Draft Data Protection Bill 2018 - Chapter VII section 38
[18] Draft Data Protection Bill 2018 - Chapter VIII section 40
[19] Draft Data Protection Bill 2018 - Chapter X section 60
[20] Draft Data Protection Bill 2018 - Chapter II section 4
[21] Draft Data Protection Bill 2018 - Chapter II section 5
[22] Draft Data Protection Bill 2018 - Chapter IX Section 45
[23] Draft Data Protection Bill 2018 - Chapter XIV section 97
[24] Draft Data Protection Bill 2018 - Chapter VII section 31
[25] Srikrishna Committee Report on Data Protection pg. 36 and 37. Available at: http://www.prsindia.org/uploads/media/Data%20Protection/Committee%20Report%20on%20Draft%20Personal%20Data%20Protection%20Bill,%202018.pdf
[26] https://www.ciosummits.com/Online_Assets_DocAuthority_Whitepaper_-_Guide_to_Intelligent_GDPR_Compliance.pdf
[27] https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech217.pdf
[28] https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_personal_data_v2.pdf
[29] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf
[30] https://www.artificial-intelligence.blog/news/capsule-networks
[31] http://raird.no/about/factsheet.html
[32] https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
[33] https://www.darpa.mil/attachments/XAIProgramUpdate.pdf
[34] https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
[35] R C Cooper v. Union of India, 1970 SCR (3) 530.
[36] Maneka Gandhi v. Union of India, 1978 SCR (2) 621.
[37] 94 US 113 (1877).
AI in India: A Policy Agenda
Background
Over the last few months, the Centre for Internet and Society has been engaged in mapping the use and impact of artificial intelligence in the health, banking, manufacturing, and governance sectors in India through the development of a case study compendium.[1] Alongside this research, we are examining the impact of Industry 4.0 on jobs and employment and questions related to the future of work in India. We have also been a part of several global conversations on artificial intelligence and autonomous systems. The Centre for Internet and Society is part of the Partnership on Artificial Intelligence, a consortium which has representation from some of the most important companies and civil society organisations involved in developments and research on artificial intelligence. We have contributed to The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and are also a part of the Big Data for Development Global Network, where we are undertaking research towards evolving ethical principles for the use of computational techniques. The following are a set of recommendations we have arrived at through our research into artificial intelligence, particularly the sectoral case studies focussed on the development and use of artificial intelligence in India.
National AI Strategies: A Brief Global Overview
Artificial Intelligence is emerging as a central policy issue in several countries. In October 2016, the Obama White House released a report titled, "Preparing for the Future of Artificial Intelligence"[2] delving into a range of issues including application for public goods, regulation, economic impact, global security and fairness issues. The White House also released a companion document called the "National Artificial Intelligence Research and Development Strategic Plan"[3] which laid out a strategic plan for federally funded research and development in AI. These were the first of a series of policy documents released by the US on the role of AI. The United Kingdom announced its 2020 national development strategy and issued a government report to accelerate the application of AI by government agencies, while in 2018 the Department for Business, Energy, and Industrial Strategy released the policy paper "AI Sector Deal".[4] The Japanese government released its paper on Artificial Intelligence Technology Strategy in 2017.[5] The European Union launched "SPARC," the world's largest civilian robotics R&D programme, back in 2014.[6]
Over the last year and a half, Canada,[7] China,[8] the UAE,[9] Singapore,[10] South Korea,[11] and France[12] have announced national AI strategy documents, while 24 member states in the EU have committed to developing national AI policies that reflect a “European” approach to AI.[13] Other countries, such as Mexico and Malaysia, are in the process of evolving their national AI strategies. What this suggests is that AI is quickly emerging as central to national plans around the development of science and technology, as well as economic and national security and development. There is also a focus on investments enabling AI innovation in critical national domains as a means of addressing key challenges facing nations. India has followed this trend, and in 2018 the government published two AI roadmaps: the Report of the Task Force on Artificial Intelligence by the AI Task Force constituted by the Ministry of Commerce and Industry,[14] and the National Strategy for Artificial Intelligence by Niti Aayog.[15] Some of the key themes running across the national AI strategies globally are spelt out below.
Economic Impact of AI
A common thread that runs across the different national approaches to AI is the belief in its significant economic impact: that it will likely increase productivity and create wealth. The British government estimated that AI could add $814 billion to the UK economy by 2035. The UAE report states that by 2031, AI will help boost the country’s GDP by 35 per cent and reduce government costs by 50 per cent. Similarly, China estimates that its core AI market will be worth 150 billion RMB ($25bn) by 2020, 400 billion RMB ($65bn) by 2025, and one trillion RMB ($160bn) by 2030. The impact of the adoption of AI and automation on labour and employment is also a key theme touched upon across the strategies. For instance, the White House report of October 2016 states that the US workforce is unprepared, and that a serious education programme, through online courses and in-house schemes, will be required.[16]
State Funding
Another key trend exhibited in all national strategies towards AI has been a commitment by the respective governments towards supporting research and development in AI. The French government has stated that it intends to invest €1.5 billion ($1.85 billion) in AI research in the period through to 2022. The British government’s recommendations, in late 2017, were followed swiftly by a promise in the autumn budget of new funds, including at least £75 million for AI. Similarly, the Canadian government put together a $125-million ‘pan-Canadian AI strategy’ last year.
AI for Public Good
The use of AI for public good is a significant focus of most AI policies. The biggest justification for AI innovation as a legitimate objective of public policy is its promised impact on improving people’s lives, by helping to solve some of the world’s greatest challenges and inefficiencies and emerging as a transformative technology, much like mobile computing. These public good uses of AI are emerging across sectors such as transportation, migration, law enforcement and the justice system, education, and agriculture.
National Institutions leading AI research
Another important trend, key to the implementation of national AI strategies, is the creation or development of well-funded centres of excellence which serve as drivers of research and development and leverage synergies with the private sector. The French Institute for Research in Computer Science and Automation (INRIA) plans to create a national AI research programme with five industrial partners. In the UK, The Alan Turing Institute is likely to emerge as the national institute for data science, and an AI Council would be set up to manage inter-sector initiatives and training. In Canada, the Canadian Institute for Advanced Research (CIFAR) has been tasked with implementing the AI strategy. Japan has a less centralised structure, with the creation of a ‘Strategic Council for AI Technology’ to promote research and development in the field and to manage a number of key institutions, including NEDO and its national ICT (NICT) and science and tech (JST) agencies. These institutions are key to the successful implementation of national agendas and policies around AI.
AI, Ethics and Regulation
Across the AI strategies, ethical dimensions and the regulation of AI were highlighted as concerns that needed to be addressed. Algorithmic transparency and explainability, clarity on liability, accountability and oversight, bias and discrimination, and privacy are the ethical and regulatory questions that have been raised. Employment and the future of work is another area of focus that has been identified by countries. For example, the US 2016 report reflected on whether existing regulation is adequate to address risk or whether adaptation is needed, by examining the use of AI in automated vehicles. In the AI Sector Deal policy paper, the UK proposes four grand challenges: AI and Data Economy, Future Mobility, Clean Growth, and Ageing Society. The Pan-Canadian Artificial Intelligence Strategy focuses on developing global thought leadership on the economic, ethical, policy, and legal implications of advances in artificial intelligence.[17]
The above are important factors and trends to take into account, and to different extents they have been reflected in the two national roadmaps for AI. Without adequate institutional planning, there is a risk of national strategies being too monolithic in nature. It becomes difficult to implement a national strategy and actualise the potential of AI without sufficient supporting mechanisms: national institutions to drive AI research and innovation, capacity building and re-skilling of the workforce to adapt to changing technological trends, regulatory capacity to address new and emerging issues which may disrupt traditional forms of regulation, and an environment of monetary support from both the public and private sectors. As stated above, there is also a need for the identification of key national policy problems which can be addressed by the use of AI, and the creation of a framework with institutional actors to articulate the appropriate plan of action to address those problems using AI. There are several ongoing global initiatives which are in the process of trying to articulate key principles for ethical AI. These discussions also feature in some of the national strategy documents.
Key considerations for AI policymaking in India
As mentioned above, India has published two national AI strategies. We have responded to both of these here[18] and here.[19] Beyond these two roadmaps, this policy brief reflects on a number of factors that need to come together for India to successfully leverage and adopt AI across sectors, communities, and technologies.
Resources, Infrastructure, Markets, and Funding
Ensure adequate government funding and investment in R&D
As mentioned above, a survey of all major national strategies on AI reveals a significant financial commitment from governments towards research and development surrounding AI. Most strategy documents speak of the need to safeguard national ambitions in the race for AI development. In order to do so it is imperative to have a national strategy for AI research and development, identification of nodal agencies to enable the process, and creation of institutional capacity to carry out cutting edge research.
Most jurisdictions, such as Japan, the UK, and China, have discussed collaborations between industry and government to ensure greater investment in AI research and development. The European Union has spoken of using existing public-private partnerships, particularly in robotics and big data, to boost investment by over one and a half times.[20] To some extent, this step has been initiated by the Niti Aayog strategy paper, which lists out enabling factors for the widespread adoption of AI and maps out specific government agencies and ministries that could promote such growth. In February 2018, the Ministry of Electronics and IT also set up four committees to prepare a roadmap for a national AI programme. The four committees are presently studying AI in the context of citizen-centric services; data platforms; skilling, reskilling and R&D; and legal, regulatory and cybersecurity perspectives.[21]
Democratize AI technologies and data
Clean, accurate, and appropriately curated data is essential for training algorithms. Importantly, large quantities of data alone do not translate into better results: accuracy and curation of data should be prerequisites before quantity. Frameworks to generate and access larger quantities of data should not hinge on models of centralised data stores. The government and the private sector are generally gatekeepers to vast amounts of data and technologies. Ryan Calo has called this an issue of data parity,[22] where only a few well-established leaders in the field have the ability to acquire data and build datasets. Gaining access to data comes with its own questions of ownership, privacy, security, accuracy, and completeness. There are a number of different approaches and techniques that can be adopted to enable access to data.
Open Government Data
Robust open data sets are one way in which access can be enabled. Open data is particularly important for small start-ups as they build prototypes. Even though India is a data-dense country and has in place a National Data Sharing and Accessibility Policy, it does not yet have robust and comprehensive open data sets across sectors and fields. Our research found that this stands as an obstacle to innovation in the Indian context, as startups often turn to open datasets in the US and Europe for developing prototypes. This is problematic because the demography represented in those data sets is significantly different, resulting in solutions that are trained on a specific demographic and thus need to be re-trained on Indian data. Although AI techniques are themselves domain agnostic, for many data analysis use cases demographically different training data is not ideal. This is particularly true for categories such as health, employment, and financial data.
The government can play a key role in providing access to datasets that will help the functioning and performance of AI technologies. The Indian government has already made a move towards accessible datasets through the Open Government Data Platform, which provides access to a range of data collected by various ministries. Telangana has developed its own Open Data Policy, which has stood out for its transparency and the quality of the data collected, and which helps in building AI-based solutions.
In order to encourage and facilitate innovation, the central and state governments need to actively pursue and implement the National Data Sharing and Accessibility Policy.
Access to Private Sector Data
The private sector is the gatekeeper to large amounts of data, which is often considered a company asset and not shared with other stakeholders. Yet this data is essential to enabling innovation in AI. There is therefore a need to explore different models of enabling access to private sector data, while ensuring and protecting users’ rights and company IP.
Amanda Levendowski states that ML practitioners have essentially three options for securing sufficient data: build the databases themselves, buy the data, or use data in the public domain. The first two alternatives are largely available only to big firms or institutions. Smaller firms often end up resorting to the third option, but it carries greater risks of bias.
A solution could be federated access, with companies allowing access to researchers and developers to encrypted data without sharing the actual data. Another solution that has been proposed is ‘watermarking’ data sets.
Data sandboxes have been promoted as tools for enabling innovation while protecting privacy and security. Data sandboxes allow companies access to large anonymised data sets under controlled circumstances. A regulatory sandbox is a controlled environment with relaxed regulations that allows a product to be tested thoroughly before it is launched to the public. By providing certification and safe spaces for testing, the government can encourage innovation in this sphere. This system has already been adopted in Japan, where there are AI-specific regulatory sandboxes to drive Society 5.0. Data sandboxes are tools that can be considered within specific sectors to enable innovation; a sector-wide data sandbox was also contemplated by TRAI.[23] A sector-specific governance structure can establish a system of ethical reviews of the underlying data used to feed AI technology, along with the data collected, in order to ensure that this data is complete, accurate, and has integrity. A similar system has been developed by Statistics Norway and the Norwegian Centre for Research Data.[24]
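To make the sandbox idea more concrete, the snippet below is a minimal, hypothetical sketch of one check a sandbox operator might run before releasing a data set: verifying k-anonymity, i.e. that every combination of quasi-identifying attributes appears at least k times. The field names and threshold are illustrative assumptions, not part of any existing sandbox framework.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers, k=5):
    """Return True if every combination of quasi-identifier values
    appears at least k times in the dataset (k-anonymity)."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values()) >= k

# Hypothetical records: age band and postal prefix are quasi-identifiers.
records = [
    {"age_band": "30-39", "pin_prefix": "560", "diagnosis": "A"},
    {"age_band": "30-39", "pin_prefix": "560", "diagnosis": "B"},
    {"age_band": "40-49", "pin_prefix": "110", "diagnosis": "A"},
]
# The third record is unique on its quasi-identifiers, so 2-anonymity fails.
print(k_anonymity(records, ["age_band", "pin_prefix"], k=2))  # False
```

A real sandbox would of course need richer checks (l-diversity, re-identification risk, access logging), but this illustrates the kind of automated gatekeeping such an environment could enforce.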
AI Marketplaces
The National Roadmap for Artificial Intelligence by NITI Aayog proposes the creation of a National AI Marketplace comprising a data marketplace, a data annotation marketplace, and a deployable model/solutions marketplace.[25] In particular, it is envisioned that the data marketplace would be based on blockchain technology and would feature traceability, access controls, compliance with local and international regulations, and a robust price discovery mechanism for data. Other questions that will need to be answered centre around pricing and ensuring equal access. It will also be interesting to see how the government incentivises the provision of data by private sector companies, as most data marketplaces that are emerging are initiated by the private sector.[26] A government-initiated marketplace has the potential to bring parity to some of the questions raised above, but it should be strictly limited to private sector data so as not to replace open government data.
Open Source Technology
A number of companies now offer open source AI technologies; examples include TensorFlow, Keras, Scikit-learn, Microsoft Cognitive Toolkit, Theano, Caffe, Torch, and Accord.NET.[27] The government should incentivise and promote open source AI technologies towards harnessing and accelerating research in AI.
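As an illustration of how low the barrier to entry with these open source tools can be, the short sketch below uses scikit-learn (one of the libraries named above) to train and evaluate a simple classifier on a bundled sample data set; the choice of model and train/test split is arbitrary.

```python
# Train and evaluate a classifier using only open source tooling.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A small sample dataset that ships with scikit-learn.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit a random forest and report held-out accuracy.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

A start-up can go from installation to a working prototype in minutes with such libraries, which is precisely why open data sets to pair them with matter so much in the Indian context.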
Re-thinking Intellectual Property Regimes
Going forward, it will be important for the government to develop an intellectual property framework that encourages innovation. AI systems are trained by reading, viewing, and listening to copies of human-created works. These resources, such as books, articles, photographs, films, videos, and audio recordings, are all key subjects of copyright protection. Copyright law grants exclusive rights to copyright owners, including the right to reproduce their works in copies, and one who violates one of those exclusive rights “is an infringer of copyright”.[28]
The enterprise of AI is, to this extent, designed to conflict with the tenets of copyright law, and after the attempted ‘democratization’ of copyrighted content by the advent of the Internet, AI poses the latest challenge to copyright law. At the centre of this challenge is the fact that it remains an open question whether a copy made to train AI is a “copy” under copyright law, and consequently whether such a copy is an infringement.[29] The fractured jurisprudence on copyright law is likely to pose interesting legal questions with newer use cases of AI. For instance, Google has developed a technique called federated learning, popularly referred to as on-device ML, in which training data is localised to the originating mobile device rather than copied to a centralised server.[30] The key copyright question here is whether decentralised training data stored in random access memory (RAM) would be considered “copies”.[31] There are also suggestions that copies made for the purpose of training machine learning systems may be so trivial or de minimis that they may not qualify as infringement.[32] For any industry to flourish, there needs to be legal and regulatory clarity, and it is imperative that these copyright questions emerging out of the use of AI be addressed soon.
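To illustrate why federated learning complicates the notion of a “copy”, the toy sketch below shows the federated averaging idea: each simulated device fits a model on its own local data and only the resulting weights are shared with the server, so raw training examples never leave the device. This is an illustrative simplification with invented data, not Google's actual implementation.

```python
def local_step(weights, data, lr=0.1):
    """One local pass of gradient descent on a 1-D linear model y = w*x.
    The raw (x, y) pairs stay on the device."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(global_w, device_datasets):
    """Each device updates the model locally; only the updated
    weights (not the data) are transmitted and averaged."""
    local_weights = [local_step(global_w, d) for d in device_datasets]
    return sum(local_weights) / len(local_weights)

# Two devices hold disjoint data drawn from the relationship y = 2x.
devices = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, devices)
print(round(w, 2))  # converges to 2.0
```

The copyright question in the text is whether the transient in-memory copies each device makes during `local_step` count as “copies”, given that nothing but the scalar weight ever reaches the server.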
As noted in our response to the Niti Aayog national AI strategy: “The report also blames the current Indian Intellectual Property regime for being “unattractive” and averse to incentivising research and adoption of AI. Section 3(k) of the Patents Act exempts algorithms from being patented, and the Computer Related Inventions (CRI) Guidelines have faced much controversy over the patentability of mere software without a novel hardware component. The paper provides no concrete answers to the question of whether it should be permissible to patent algorithms, and if yes, to what extent. Furthermore, there needs to be a standard, either in the CRI Guidelines or the Patents Act, that distinguishes between AI algorithms and non-AI algorithms. Additionally, given that there is no historical precedent on the requirement of patent rights to incentivise the creation of AI, innovative investment protection mechanisms that have fewer negative externalities, such as compensatory liability regimes, would be more desirable. The report further failed to look at the issue holistically and recognise that facilitating rampant patenting can form a barrier that prevents smaller companies from using or developing AI. This is important to be cognizant of, given the central role of startups in the AI ecosystem in India, and because it can work against the larger goal of inclusion articulated by the report.”[33]
National infrastructure to support domestic development
Building a robust national Artificial Intelligence solution requires establishing adequate indigenous infrastructural capacity for data storage and processing. While this should not necessarily extend to mandating data localisation as the draft privacy bill has done, capacity should be developed to store data sets generated by indigenous nodal points.
AI Data Storage
Capacity needs to increase as the volume of data to be processed in India increases. This includes ensuring effective storage capacity, IOPS (input/output operations per second), and the ability to process massive amounts of data.
AI Networking Infrastructure
Organizations will need to upgrade their networks in order to optimize efficiencies of scale. Scalability must be treated as a high priority; this will require high-bandwidth, low-latency, and creative network architectures, along with appropriate last-mile data curation.
Conceptualization and Implementation
Awareness, Education, and Reskilling
Encouraging AI research
This can be achieved by collaborations between the government and large companies to promote accessibility and encourage innovation through greater R&D spending. The Government of Karnataka, for instance, is collaborating with NASSCOM to set up a Centre of Excellence for Data Science and Artificial Intelligence (CoE-DS&AI) on a public-private partnership model to “accelerate the ecosystem in Karnataka by providing the impetus for the development of data science and artificial intelligence across the country.” Similar centres could be incubated in hospitals and medical colleges in India. Principles of publicly funded research, such as FOSS, open standards, and open data, should be core to government initiatives to encourage research. The Niti Aayog report proposes a two-tier integrated approach towards accelerating research, but is currently silent on these principles.[34]
Therefore, as suggested by the NITI Aayog report, the government needs to set up ‘centres of excellence’. Building upon the stakeholders identified in the report, the centres of excellence should involve a wide range of experts, including lawyers, political philosophers, software developers, sociologists, and gender studies scholars, drawn from diverse organisations including government, civil society, the private sector, and research institutions, to ensure the fair and efficient roll-out of the technology.[35] Examples are the Leverhulme Centre for the Future of Intelligence set up by the Leverhulme Foundation at the University of Cambridge[36] and the AI Now Institute at New York University (NYU).[37] These research centres bring together a wide range of experts from all over the globe.[38]
Skill sets to successfully adopt AI
Educational institutions should provide opportunities for students to skill themselves to adapt to the adoption of AI, and should also push for academic programmes around AI. It is also important to introduce computing technologies such as AI in medical schools, in order to equip doctors with the technical skill sets and ethics required to integrate AI into their practices. Similarly, IT institutes could include courses on ethics, privacy, and accountability to equip engineers and developers with an understanding of the questions surrounding the technology and services they are developing.
Societal Awareness Building
Much of the discussion around skilling for AI is in the context of the workplace, but there is a need for awareness to be developed across society for a broader adaptation to AI. The Niti Aayog report takes the first steps towards this, noting the importance of highlighting the benefits of AI to the public. The conversation needs to go beyond this, towards enabling individuals to recognise and adapt to changes that might be brought about by AI, directly and indirectly, inside and outside of the workplace. This could include catalysing a shift in mindset towards lifelong learning, and discussion around the potential implications of human-machine interactions.
Early Childhood Awareness and Education
It is important that awareness around AI begins in early childhood. Children already interact with AI and will increasingly do so; awareness is therefore needed of how AI works and how it can be safely and ethically used. It is also important to start building the skills that will be necessary in an AI-driven society from a young age.
Focus on marginalised groups
Awareness, skills, and education should be targeted at national minorities, including rural communities, the disabled, and women. Further, there should be a concerted focus on communities that are under-represented in the tech sector, such as women and sexual minorities, to ensure that the algorithms themselves and the community working on AI-driven solutions are holistic and cohesive. For example, Iridescent focuses on girls, children, and families, enabling them to adapt to changes like artificial intelligence by promoting the curiosity, creativity, and perseverance needed to become lifelong learners.[39] This will be important towards ensuring that AI does not deepen societal and global inequalities, including digital divides. Widespread use of AI will undoubtedly require re-skilling various stakeholders in order to make them aware of the prospects of AI.[40] Artificial intelligence can itself be used as a resource in the re-skilling process, as it would be used in the education sector, to gauge people’s comfort with the technology and plug necessary gaps.
Improved access to and awareness of Internet of Things
The development of smart content or intelligent tutoring systems in education can only be done on a large scale if both teachers and students have access to, and feel comfortable with using, basic IoT devices. A UK government report has suggested that any skilled workforce using AI should be a mix of those with a basic understanding responsible for implementation at the grassroots level, more informed users, and specialists with advanced development and implementation skills.[41] The same logic applies to the agriculture sector, where the government is looking to develop smart weather-pattern tracking applications. A potential short-term solution may lie in ensuring that key actors have access to an IoT device, so that they may access digital services and then impart the benefits of access to proximate individuals. In the education sector, this would involve ensuring that all teachers have access to, and are competent in using, an IoT device. In the agricultural sector, this may involve equipping each village with a set of IoT devices so that information can be shared among concerned individuals. Such an approach recognises that AI is not the only technology catalysing change; industry 4.0, for example, is understood as comprising a suite of technologies including but not limited to AI.
Public Discourse
AI solutions bring together and process vast amounts of granular data from a variety of public and private sources, whether obtained from third parties or generated by the AI through its interaction with its environment. This means that very granular and non-traditional data points are now going into decision-making processes. Public discussion is needed to understand social and cultural norms and standards, and how these might translate into acceptable-use norms for data in various sectors.
Coordination and collaboration across stakeholders
Development of Contextually Nuanced and Appropriate AI Solutions
Towards ensuring effectiveness and accuracy it is important that solutions used in India are developed to account for cultural nuances and diversity. From our research this could be done in a number of ways ranging from: training AI solutions used in health on data from Indian patients to account for differences in demographics[42], focussing on natural language voice recognition to account for the diversity in languages and digital skills in the Indian context,[43] and developing and applying AI to reflect societal norms and understandings.[44]
Continuing, deepening, and expanding partnerships for innovation
Continued innovation, while holistically accounting for the challenges that AI poses, will be key for actors in the different sectors to remain competitive. As noted across the case study reports, partnerships are key to facilitating this innovation and filling capacity gaps. These partnerships can be across sectors, institutions, domains, geographies, and stakeholder groups: for example, finance/telecom, public/private, national/international, ethics/software development/law, and academia/civil society/industry/government. We would emphasise collaboration between actors across different domains and stakeholder groups, as developing holistic AI solutions demands multiple understandings and perspectives.
Coordinated Implementation
Key sectors in India need to begin to take steps to consider sector wide coordination in implementing AI. Potential stress and system wide vulnerabilities would need to be considered when undertaking this. Sectoral regulators such as RBI, TRAI, and the Medical Council of India are ideally placed to lead this coordination.
Develop contextual standard benchmarks to assess quality of algorithms
Given the nascent state of AI development and implementation, standard benchmarks can help in assessing the quality and appropriateness of algorithms, enabling effective assessment of their impact and informing selection by institutions adopting solutions. It may be most effective to define such benchmarks at a sectoral level (finance, etc.) or by technology and solution (facial recognition, etc.). Ideally, these efforts would be led by the government in collaboration with multiple stakeholders.
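As a concrete, hypothetical illustration of what such a benchmark might check, the sketch below scores candidate models on overall accuracy and on the accuracy gap between demographic subgroups; the model names, data, and thresholds are invented for the example.

```python
def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def subgroup_gap(preds, labels, groups):
    """Largest accuracy difference between any two subgroups."""
    scores = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        scores[g] = accuracy([preds[i] for i in idx], [labels[i] for i in idx])
    return max(scores.values()) - min(scores.values())

def benchmark(models, labels, groups, max_gap=0.1, min_acc=0.8):
    """Return the models that pass both accuracy and parity thresholds."""
    return [
        name for name, preds in models.items()
        if accuracy(preds, labels) >= min_acc
        and subgroup_gap(preds, labels, groups) <= max_gap
    ]

# Hypothetical labels and two demographic subgroups, "a" and "b".
labels = [1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
models = {
    "model_x": [1, 0, 1, 1, 0, 1],  # accurate across both groups
    "model_y": [1, 0, 1, 0, 1, 0],  # accurate only on group "a"
}
print(benchmark(models, labels, groups))  # ['model_x']
```

A sectoral regulator could publish the held-out data, the metrics, and the thresholds, so that institutions comparing vendors evaluate them on the same terms.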
Developing a framework for working with the private sector for use-cases by the government
There are various potential use cases the government could adopt in order to use AI as a tool for augmenting public service delivery in India. However, a lack of capacity, both in human resources and technology, means that entering into partnerships with the private sector may enable a more fruitful harnessing of AI, as has been seen with existing MoUs in the agricultural[45] and healthcare sectors.[46] However, such partnerships must be used as a means to build capacity within the various nodes of the set-up, rather than relying only on the private sector partner to continue delivering sustainable solutions.
In particular, in the case of the use of AI for governance, there is a need to evolve clear parameters for impact assessment prior to the deployment of the technology, mapping the estimated impact of the technology against clearly defined objectives, including due process, procedural fairness, and human rights considerations. As per Article 12 of the Indian Constitution, whenever the government is exercising a public function, it is bound by the entire gamut of fundamental rights articulated in Part III of the Constitution. This is a crucial consideration the government will have to bear in mind whenever it uses AI, regardless of the sector. In all cases of public service delivery, primary accountability for the use of AI should lie with the government itself, which means that a cohesive and uniform framework regulating these partnerships must be conceptualised. This framework should incorporate: (a) uniformity in the wording and content of the contracts that the government signs; (b) the imposition of obligations of transparency and accountability on the developer, to ensure that the solutions developed are in conformity with constitutional standards; and (c) continuous evaluation of private sector developers by the government and experts, to ensure that they are complying with their obligations.
Defining Safety Critical AI
The implications of AI differ according to use. Some jurisdictions, such as the EU, are beginning to define sectors where AI should play the role of augmenting jobs as opposed to functioning autonomously. The Global Partnership on AI has termed sectors where AI tools supplement or replace human decision-making, such as health and transportation, ‘safety-critical AI’, and is researching best practices for the application of AI in these areas. India will need to think through whether a threshold needs to be set and more stringent regulation applied. In addition to uses in health and transportation, defence and law enforcement are other sectors where certain uses would require more stringent regulation.
Appropriate certification mechanisms
Appropriate certification mechanisms will be important in ensuring the quality of AI solutions. A significant barrier to the adoption of AI in some sectors in India is the acceptability of results, which include direct results arrived at using AI technologies as well as opinions provided by practitioners that are influenced or aided by AI technologies. For instance, start-ups in the healthcare sector often find that they are asked to show proof of a clinical trial when presenting their products to doctors and hospitals, yet clinical trials are expensive, time consuming, and inappropriate forms of certification for medical devices and digital health platforms. Startups also face difficulty in conducting clinical trials, as there is no clear regulation to adhere to. They believe that while clinical trials are a necessity with respect to drugs, in the context of AI the process often results in the obsolescence of the technology by the time it is approved. Yet medical practitioners are less trusting of startups that do not have approval from a national or international authority. A possible and partial solution suggested by these startups is to enable doctors to partner with them to conduct clinical trials together. However, such partnerships cannot come at the expense of rigour, and adequate protections need to be built into the enabling regulation.
Serving as a voice for emerging economies in the global debate on AI
While India should utilise artificial intelligence in the economy as a means of occupying a driving role in the global debate around AI, it must be cautious before allowing the use of Indian territory and infrastructure as a test bed for other emerging economies, without considering the ramifications that the utilisation of AI may have for Indian citizens. The NITI Aayog report envisions India leveraging AI as a ‘garage’ for emerging economies.[47] There are certain positive connotations to this suggestion, insofar as it propels India to occupy a leadership position, both technically and normatively, in determining future use cases for AI. However, in order to ensure that Indian citizens are not used as test subjects in this process, guiding principles could be developed, such as requiring that projects have clear benefits for India.
Frameworks for Regulation
National legislation
Data Protection Law
India is a data-dense country, and the lack of a robust privacy regime allows the public and private sector easier access to large amounts of data than might be found in other contexts with stringent privacy laws. India also lacks a formal regulatory regime around anonymisation. In our research we found that this gap does not always translate into a gap in practice, as some start-up companies have adopted self-regulatory practices towards protecting privacy, such as anonymising data they receive before using it further; but it does result in unclear and unharmonised practice.
In order to secure rights and address the emerging challenges that artificial intelligence poses to them, India needs to enact a comprehensive privacy legislation, applicable to both the private and public sector, to regulate the use of data, including its use in artificial intelligence. A privacy legislation will also have to address more complicated questions, such as the use of publicly available data for training algorithms, how traditional data categories (PI vs. SPDI, metadata vs. content data, etc.) need to be revisited in light of AI, and how a privacy legislation can be applied to autonomous decision making. Similarly, surveillance laws may need to be revisited in light of AI-driven technologies such as facial recognition, UAS, and self-driving cars, as they provide new means of surveillance to the state and have potential implications for other rights, such as the right to freedom of expression and the right to assembly. Sectoral protections can complement and build upon the baseline protections articulated in a national privacy legislation.[48] In August 2018, the Srikrishna Committee released a draft data protection bill for India. We have reflected on how the Bill addresses AI. Though the Bill brings under its scope companies deploying emerging technologies and subjects them to the principles of privacy by design and data impact assessments, it is silent on key rights and responsibilities, namely the responsibility of the data controller to explain the logic and impact of automated decision making, including profiling, to data subjects, and the right to opt out of automated decision making in defined circumstances.[49] Further, emphasis should be placed on developing technological solutions to the tension between AI's need for access to larger quantities of data for multiple purposes and the protection of privacy.
Discrimination Law
A growing area of research globally is the social consequences of AI, with a particular focus on its tendency to replicate or amplify existing and structural inequalities. Problems such as the data invisibility of certain excluded groups,[50] the myth of data objectivity and neutrality,[51] and data monopolization[52] contribute to the disparate impacts of big data and AI. So far, much of the research on this subject has not moved beyond the exploratory phase, as is reflected in the reports released by the White House[53] and the Federal Trade Commission[54] in the United States. The biggest challenge in addressing the discriminatory and disparate impacts of AI is ascertaining “where value-added personalization and segmentation ends and where harmful discrimination begins.”[55]
Some prominent cases where AI can have discriminatory impact include the denial of loans based on attributes such as neighbourhood of residence, proxies which can be used to circumvent anti-discrimination laws that prevent adverse determinations on the grounds of race, religion, caste or gender, and adverse findings by predictive policing against persons who are unfavourably represented in the structurally biased datasets used by law enforcement agencies. There is a dire need for disparate impact regulation in sectors which see the emerging use of AI.
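The threshold question of where disparate impact begins is sometimes screened for with a simple metric. Below is a minimal, hedged sketch in Python of the disparate impact ratio (the "four-fifths rule" used as a heuristic in US employment-discrimination practice, not a rule found in Indian law); the loan data is invented for illustration:

```python
def selection_rate(outcomes):
    """Fraction of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected, reference):
    """Ratio of selection rates; values below 0.8 are a common red flag."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical loan approvals (1 = approved) for two neighbourhoods used
# as proxies for protected attributes.
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # protected group: 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # reference group: 70% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.43, well below the 0.8 screening threshold
```

A regulator adopting such a metric would still have to decide the threshold, the protected groups, and the sectors to which it applies; the metric itself only flags outcomes for further scrutiny.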
Similar to disparate impact regulation, developments in AI and its utilisation, especially in credit rating or risk assessment processes, could create complex problems that cannot be solved by principle-based regulation alone. Instead, regulation intended specifically to avoid outcomes that regulators consider clearly against the consumer's interest could be an additional tool that increases the fairness and effectiveness of the system.
Competition Law
The conversation on the use of competition or antitrust law to govern AI is still at an early stage. However, the emergence of numerous data-driven mergers and acquisitions, such as Yahoo-Verizon, Microsoft-LinkedIn and Facebook-WhatsApp, has made it difficult to ignore the potential role of competition law in the governance of data collection and processing practices. It is important to note that the impact of Big Data goes far beyond digital markets: the mergers of companies such as Bayer, Climate Corp and Monsanto show that data-driven business models can lead to the convergence of companies from completely different sectors as well. So far, courts in Europe have looked at questions such as the impact of combining databases on competition[56] and have held that, in the context of merger control, data can be a relevant consideration if an undertaking achieves a dominant position through a merger, making it capable of gaining further market power through increased amounts of customer data. The market advantages of specific datasets have been evaluated in the past; factors deemed relevant have included whether the dataset could be replicated under reasonable conditions by competitors and whether its use was likely to result in a significant competitive advantage.[57] However, there are limited circumstances in which big data meets the four traditional criteria for being a barrier to entry or a source of sustainable competitive advantage: inimitability, rarity, value, and non-substitutability.[58]
Any use of competition law to curb data-exclusionary or data-exploitative practices will first have to meet the threshold of establishing a firm's capacity to derive market power from its ability to sustain datasets unavailable to its competitors. In this context, network effects, multi-homing practices and the dynamism of digital markets are all relevant factors which could have both positive and negative impacts on competition. There is a need for greater discussion on data as a source of market power in both digital and non-digital markets, and on how this legal position can be used to curb data monopolies, especially in light of government-backed monopolies for identity verification and payments in India.
Consumer Protection Law
The Consumer Protection Bill, 2015, tabled in Parliament towards the end of the monsoon session, introduced an expansive definition of the term “unfair trade practices.” The definition in the Bill includes the disclosure “to any other person any personal information given in confidence by the consumer.” This clause excludes from the scope of unfair trade practices disclosures made under provisions of any law in force or in the public interest. This provision could have a significant impact on personal data protection law in India. Alongside, there is also a need to ensure that principles such as safeguarding consumers' personal information so that it is not used to their detriment are included within the definition of unfair trade practices. This would provide consumers an efficient and relatively speedy forum to contest adverse impacts of data-driven decision-making.
Sectoral Regulation
Our research into sectoral case studies revealed a number of existing sectoral laws and policies that are applicable to aspects of AI. For example, in the health sector there are the Medical Council Professional Conduct, Etiquette, and Ethics Regulations 2002, the Electronic Health Records Standards 2016, the draft Medical Devices Rules 2017, and the draft Digital Information Security in Healthcare Act. In the finance sector there are the Credit Information Companies (Regulation) Act 2005 and 2006, the Securities and Exchange Board of India (Investment Advisers) Regulations, 2013, the Payment and Settlement Systems Act, 2007, the Banking Regulations Act 1949, the SEBI guidelines on robo-advisers, etc. Before new regulations or guidelines are developed, a comprehensive exercise needs to be undertaken at a sectoral level to understand (1) whether sectoral policy adequately addresses the changes being brought about by AI, and (2) if it does not, whether an amendment is possible and, if not, what form of policy would fill the gap.
Principled approach
Transparency
Audits
Internal and external audits can be mechanisms for creating transparency about the processes and results of AI solutions as they are implemented in a specific context. Audits can take place while a solution is still in ‘pilot’ mode and on a regular basis during implementation. For example, in the Payment Card Industry (PCI) tool, transparency is achieved through frequent audits, the results of which are simultaneously and instantly transmitted to the regulator and the developer. Ideally, at least part of the audit results would also be made available to the public, even if the full results are not shared.
Tiered Levels of Transparency
There are different levels and forms of transparency, as well as different ways of achieving them. The type and form of transparency can be tiered, depending on factors such as the criticality of the function, potential direct and indirect harm, the sensitivity of the data involved, and the actor using the solution. The audience can also be tiered, ranging from an individual user to senior-level positions to oversight bodies.
Human Facing Transparency
It will be important for India to define standards around human-machine interaction, including the level of transparency that will be required. Will chatbots need to disclose that they are chatbots? Will a notice need to be posted when facial recognition technology is used in a CCTV camera? Will a company need to disclose in its terms of service and privacy policies that data is processed via an AI-driven solution? Will there be a distinction between cases where the AI takes a decision autonomously and cases where it plays an augmenting role? At present, the NITI Aayog paper is silent on these questions.
Explainability
An explanation is not equivalent to complete transparency. The obligation to provide an explanation does not mean that the developer must know the flow of every bit through the AI system. Instead, the legal requirement of providing an explanation requires an ability to explain how certain parameters may be utilised to arrive at an outcome in a given situation.
Doshi-Velez and Kortz have highlighted two technical ideas that may enhance a developer's ability to explain the functioning of AI systems:[59]
1) Differentiation and processing: AI systems are designed to have their inputs differentiated and processed through various forms of computation, in a reproducible and robust manner. Therefore, developers should be able to explain a particular decision by examining the inputs to determine which of them have the greatest impact on the outcome.
2) Counterfactual faithfulness: The property of counterfactual faithfulness enables the developer to consider which factors caused a difference in outcomes. Both these approaches can be deployed without necessarily knowing the contents of the black box. As Pasquale puts it, “Explainability matters because the process of reason-giving is intrinsic to juridical determinations – not simply one modular characteristic jettisoned as anachronistic once automated prediction is sufficiently advanced.”[60]
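The kind of probing these two ideas describe can be sketched in a few lines of Python. The model, feature names and alternative values below are illustrative assumptions, not any specific deployed system; the point is that a developer can identify which inputs drove a decision without inspecting the model's internals:

```python
def explain_by_counterfactual(predict, instance, alternatives):
    """Probe a black-box predict() by substituting alternative feature
    values and recording which substitutions flip the decision."""
    base = predict(instance)
    flips = {}
    for feature, values in alternatives.items():
        for value in values:
            # Copy the instance with a single feature changed.
            variant = dict(instance, **{feature: value})
            if predict(variant) != base:
                flips.setdefault(feature, []).append(value)
    return base, flips

# Hypothetical black-box loan model: approves if income >= 50 and age >= 21.
predict = lambda x: x["income"] >= 50 and x["age"] >= 21

applicant = {"income": 40, "age": 30}
decision, flips = explain_by_counterfactual(
    predict, applicant, {"income": [50, 60], "age": [20, 25]})
print(decision, flips)  # False {'income': [50, 60]}: income drove the denial
```

This is counterfactual faithfulness in miniature: only changes to income flip the outcome, so income, not age, explains the adverse decision, and no internal weights were ever examined.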
Rules-based system applied contextually
Oswald et al. have suggested two proposals that might mitigate algorithmic opacity by designing a broad rules-based system whose implementation is applied in a context-specific manner that thoroughly evaluates the key enablers and challenges in each specific use case.[61]
- Experimental proportionality was designed to enable the courts to make proportionality determinations about an algorithm at the experimental stage, even before its impacts are fully realised, in a manner that would enable them to ensure that appropriate metrics for performance evaluation and cohesive principles of design have been adopted. In such cases they recommend that the courts give the benefit of the doubt to the public sector body, subject to another hearing within a stipulated period once data on the impacts of the algorithm becomes more readily available.
- ‘ALGO-CARE’ calls for the design of a rules-based system which ensures that algorithms[62] are:
(1) Advisory: Algorithms must retain an advisory capacity that augments existing human capability rather than replacing human discretion outright;
(2) Lawful: The algorithm's proposed function, application, individual effect and use of datasets should be considered alongside the principles of necessity, proportionality and data minimisation;
(3) Granularity: Data analysis issues, such as the meaning of data, challenges stemming from disparate tracts of data, omitted data and inferences, should be key considerations in the implementation process;
(4) Ownership: Due regard should be given to intellectual property ownership, but in the case of algorithms used for governance it may be better to have open-source algorithms as the default. Regardless of the sector, the developer must ensure that the algorithm works in a manner that enables a third party to investigate its workings in an adversarial judicial context;
(5) Challengeable: The results of algorithmic analysis should be applied with regard to professional codes and regulations and be challengeable. In a report evaluating the NITI Aayog discussion paper, CIS has argued that AI used for governance must be made auditable in the public domain, if not released as Free and Open Source Software (FOSS), particularly in the case of AI that has implications for fundamental rights;[63]
(6) Accuracy: The design of the algorithm should check for accuracy;
(7) Responsible: The algorithm should consider a wider set of ethical and moral principles and the foundations of human rights as a guarantor of human dignity at all levels; and
(8) Explainable: Machine learning should be interpretable and accountable.
A rules-based system like ALGO-CARE can enable predictability in use frameworks for AI. Predictability complements and strengthens transparency.
Accountability
Conduct Impact Assessment
There is a need to evolve algorithmic impact assessment frameworks for the different sectors in India, which should address issues of bias, unfairness and other harmful impacts of automated decision making. AI is a nascent field, and the impact of the technology on the economy, society, etc. is yet to be fully understood. Impact assessment standards will be important in identifying and addressing potential or existing harms, and could be especially important in sectors or uses where there is direct human interaction with AI or a power dimension, such as healthcare or use by the government. A 2018 report by the AI Now Institute lists methods that governments should adopt for conducting this holistic assessment:[64] (1) self-assessment by the government department in charge of implementing the technology; (2) development of meaningful interdisciplinary external researcher review mechanisms; (3) notice to the public regarding the self-assessment and external review; (4) soliciting public comments for clarification or concerns; and (5) special regard for vulnerable communities who may not be able to exercise their voice in public proceedings. An adequate review mechanism which holistically evaluates the impact of AI would ideally include all five of these components in conjunction with each other.
Regulation of Algorithms
Experts have voiced concerns about AI mimicking human prejudices due to biases present in machine learning algorithms. Researchers have shown that machine learning algorithms can imbibe gender and racial prejudices which are ingrained in language patterns or data collection processes. Since AI and machine learning algorithms are data driven, they arrive at results and solutions based on available and historical data. When this data is itself biased, the solutions presented by the AI will also be biased. While this is inherently discriminatory, researchers have proposed ways to rectify these biases, which can occur at various stages, by introducing a counter-bias at another stage. It has also been suggested that data samples should be shaped so as to minimise the chances of algorithmic bias. Ideally, regulation of algorithms would be tailored around explainability, traceability and scrutability. We recommend that the national strategy on AI policy take these factors into account, with a combination of a central agency driving the agenda and sectoral actors framing regulations around specific problematic uses of AI and overseeing their implementation.
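The idea of introducing a counter-bias at another stage can be illustrated with a simple data reweighing scheme, in the spirit of pre-processing approaches from the fairness literature (the report itself does not prescribe this technique, and the groups and labels below are hypothetical): each training example is weighted so that, in aggregate, group membership and outcome appear statistically independent.

```python
from collections import Counter

def reweigh(groups, labels):
    """Weight each example so that, in aggregate, group membership and
    outcome look statistically independent -- a counter-bias on the data."""
    n = len(labels)
    g_count = Counter(groups)                # marginal frequency of each group
    y_count = Counter(labels)                # marginal frequency of each outcome
    gy_count = Counter(zip(groups, labels))  # observed joint frequencies
    # weight = expected frequency under independence / observed frequency
    return [
        (g_count[g] * y_count[y]) / (n * gy_count[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Hypothetical historical data: group "a" rarely received a positive label.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 0, 0, 1, 1, 0]
weights = reweigh(groups, labels)
print(weights)  # [1.5, 0.75, 0.75, 0.75, 0.75, 1.5]
```

The under-represented combinations (a positive outcome in group "a") are upweighted to 1.5 and the over-represented ones downweighted to 0.75, so a learner trained on the weighted data no longer simply reproduces the historical skew.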
As the government begins to adopt AI in governance, the extent to which, and the circumstances under which, autonomous decision-making capabilities can be delegated to AI need to be questioned. Questions of whether the AI should be autonomous, whether it should always have a human in the loop, and whether it should have a ‘kill switch’ when used in such contexts also need to be answered. A framework or high-level principles can help guide these determinations. For example:
- Modeling Human Behaviour: An AI solution trying to model human behaviour, as in the case of judicial decision-making or predictive policing, may need to be more heavily regulated, adhere to stricter standards, and be subject to more oversight than an algorithm trying to predict ‘natural’ phenomena such as traffic congestion or weather patterns.
- Human Impact: An AI solution which could cause greater harm if applied erroneously, such as a robot soldier that mistakenly targets a civilian, requires a different level and framework of regulation than an AI solution designed to create a learning path for a student in the education sector that errs in making an appropriate assessment.
- Primary User: AI solutions whose primary users are state agents discharging duties in the public interest, such as police officers, should be approached with more caution than those used by individuals, such as farmers receiving weather alerts.
Fairness
It is possible to incorporate broad definitions of fairness into a wide range of data analysis and classification systems.[65] While there can be no bright-line rules that will necessarily enable the operator or designer of a machine learning system to arrive at an ex ante determination of fairness, from a public policy perspective there must be a set of rules or best practices that explain how notions of fairness should be utilised in real-world applications of AI-driven solutions.[66] While broad parameters should be encoded by the developer to ensure compliance with constitutional standards, it is also crucial that the functioning of the algorithm allows for an ex post determination of fairness by an independent oversight body if the impact of the AI-driven solution is challenged.
Further, while there is no precedent on this anywhere in the world, India could consider establishing a Committee entrusted with the specific task of continuously evaluating the operation of AI-driven algorithms. Questions that the government would need to answer with regard to this body include:
- What should the composition of the body be?
- What should be the procedural mechanisms that govern the operation of the body?
- When should the review committee step in? This is crucial because excessive review may re-entrench the bureaucracy that the AI driven solution was looking to eliminate.
- What information will be necessary for the review committee to carry out its determination? Will there be conflicts with IP, and if so how will these be resolved?
- To what degree will the findings of the committee be made public?
- What powers will the committee have? Beyond making determinations, how will these be enforced?
Market incentives
Standards as a means to address data issues
With the digitisation of legacy records and the ability to capture more granular data digitally, one of the biggest challenges facing Big Data is the lack of standardised data and interoperability frameworks. This is particularly true in the healthcare and medicine sector, where medical records do not follow a clear standard, which poses a challenge to their datafication and analysis. Developed standards for data management and exchange, interoperable distributed application platforms and services, semantic standards for markup, structure, query and semantics, and information access and exchange have been described as essential to addressing the lack of standards in Big Data.[67]
Towards enabling the usability of data, it is important that clear data standards are established; this has been recognised by NITI Aayog in its National Strategy for AI. On the one hand, there can be operational issues with allowing each organisation to choose the specific standards it operates under; on the other, non-uniform digitisation of data will cause several practical problems, primarily to do with the interoperability of individual services as well as their usability. For instance, in the healthcare sector, though India has adopted an EHR policy, its implementation is not yet harmonised, leading to differing interpretations of ‘digitising records’ (e.g., taking snapshots of doctors' notes), of retention methods and periods, and of comprehensive implementation across all hospital data. Similarly, while independent banks and other financial organisations are already following, or are in the process of developing, internal practices, there exist no uniform standards for the digitisation of financial data. As the development and application of AI becomes more mainstream in the financial sector, the lack of a fixed standard could create significant problems.
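As a minimal illustration of why shared standards matter, consider a hypothetical record schema and a validator that any two organisations could agree on. The field names below are invented for this sketch and do not come from the EHR policy or any published standard:

```python
# Hypothetical minimal schema two organisations could share: field -> type.
EHR_SCHEMA = {"patient_id": str, "visit_date": str, "diagnosis_code": str}

def validate(record, schema):
    """Return a list of problems; an empty list means the record conforms."""
    problems = [f"missing field: {f}" for f in schema if f not in record]
    problems += [
        f"wrong type for {f}: expected {t.__name__}"
        for f, t in schema.items()
        if f in record and not isinstance(record[f], t)
    ]
    return problems

good = {"patient_id": "P001", "visit_date": "2018-09-01", "diagnosis_code": "J45"}
bad = {"patient_id": 1, "visit_date": "2018-09-01"}
print(validate(good, EHR_SCHEMA))  # []
print(validate(bad, EHR_SCHEMA))   # lists the missing field and the type error
```

Once both parties validate against the same schema, records become machine-checkable at the point of exchange, which is precisely what non-uniform digitisation (e.g., snapshots of handwritten notes) prevents.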
Better Design Principles in Data Collection
An enduring criticism of the existing notice and consent framework has been that long, verbose and unintelligible privacy notices are not effective in informing individuals and helping them make rational choices. While this problem predates Big Data, it has become more pronounced in recent times, given the ubiquity of data collection and the implicit ways in which data is collected and harvested. Further, the constrained interfaces of mobile devices, wearables and smart home devices connected in an Internet of Things amplify the usability issues of privacy notices. These issues include notice complexity, a lack of real choices, and notices decoupled from the system collecting the data. An industry standard for a design approach to privacy notices, looking at factors such as the timing of the notice, the channels used to communicate it, its modality (written, audio, machine readable, visual) and whether it only provides information or also includes choices, would be of great help. Further, privacy by design principles can be applied not just to privacy notices but at each step of the information flow, and the architecture of the system can be geared towards more privacy-enhancing choices.
[1] https://cis-india.org/internet-governance/blog/artificial-intelligence-in-india-a-compendium
[2] https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
[3] https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf
[4] https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal
[5] http://www.nedo.go.jp/content/100865202.pdf
[6] https://www.eu-robotics.net/sparc/10-success-stories/european-robotics-creating-new-markets.html?changelang=2
[7] https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy
[8] https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/
[10] https://www.aisingapore.org/
[11] https://news.joins.com/article/22625271
[12] https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf
[13] https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe https://www.euractiv.com/section/digital/news/twenty-four-eu-countries-sign-artificial-intelligence-pact-in-bid-to-compete-with-us-china/
[14] https://www.aitf.org.in/
[15] http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf
[16] https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf
[17] https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy
[18] https://cis-india.org/internet-governance/blog/the-ai-task-force-report-the-first-steps-towards-indias-ai-framework
[19] https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy
[20] https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe
[21] http://pib.nic.in/newsite/PrintRelease.aspx?relid=181007
[22] Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap (2017). U.C. Davis L. Review, Vol. 51, pp. 398-435.
[23] https://trai.gov.in/sites/default/files/CIS_07_11_2017.pdf
[24] https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf
[25] http://www.niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf
[26] https://martechtoday.com/bottos-launches-a-marketplace-for-data-to-train-ai-models-214265
[27] https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning
[28] Amanda Levendowski, How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem, 93 WASH. L. REV. (forthcoming 2018) (manuscript at 23, 27-32), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938.
[29] Id.
[30] H. Brendan McMahan, et al., Communication-Efficient Learning of Deep Networks from Decentralized Data, arXiv:1602.05629 (Feb. 17, 2016), https://arxiv.org/abs/1602.05629.
[31] Id.
[32] Pierre N. Leval, Nimmer Lecture: Fair Use Rescued, 44 UCLA L. REV. 1449, 1457 (1997).
[33] https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy
[34] https://cis-india.org/internet-governance/blog/niti-aayog-discussion-paper-an-aspirational-step-towards-india2019s-ai-policy
[35] Discussion Paper on National Strategy for Artificial Intelligence | NITI Aayog | National Institution for Transforming India. (n.d.) p. 54. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper.
[36] Leverhulme Centre for the Future of Intelligence, http://lcfi.ac.uk/.
[37] AI Now, https://ainowinstitute.org/.
[38] https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf
[39] http://iridescentlearning.org/
[40] https://cis-india.org/internet-governance/ai-and-governance-case-study-pdf
[41] Points, L., & Potton, E. (2017). Artificial intelligence and automation in the UK.
[42] Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at https://cis-india.org/internet-governance/files/ai-and-healtchare-report.
[43] Goudarzi, S., Hickok, E., and Sinha, A., AI in the Banking and Finance Industry in India, Centre for Internet and Society. Available at https://cis-india.org/internet-governance/blog/ai-in-banking-and-finance.
[44] Paul, Y., Hickok, E., Sinha, A. and Tiwari, U., Artificial Intelligence in the Healthcare Industry in India, Centre for Internet and Society. Available at https://cis-india.org/internet-governance/files/ai-and-healtchare-report.
[45] https://news.microsoft.com/en-in/government-karnataka-inks-mou-microsoft-use-ai-digital-agriculture/
[46] https://news.microsoft.com/en-in/government-telangana-adopts-microsoft-cloud-becomes-first-state-use-artificial-intelligence-eye-care-screening-children/
[47] NITI Aayog. (2018). Discussion Paper on National Strategy for Artificial Intelligence. Retrieved from http://niti.gov.in/content/national-strategy-ai-discussion-paper. 18
[48] https://edps.europa.eu/sites/edp/files/publication/16-10-19_marrakesh_ai_paper_en.pdf
[49] https://cis-india.org/internet-governance/blog/the-srikrishna-committee-data-protection-bill-and-artificial-intelligence-in-india
[50] J. Schradie, The Digital Production Gap: The Digital Divide and Web 2.0 Collide. Elsevier Poetics, 39 (1).
[51] D Lazer, et al., The Parable of Google Flu: Traps in Big Data Analysis. Science. 343 (1).
[52] Danah Boyd and Kate Crawford, Critical Questions for Big Data. Information, Communication & Society. 15 (5).
[53] John Podesta (2014), Big Data: Seizing Opportunities, Preserving Values, available at http://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf
[54] E. Ramirez, (2014) FTC to Examine Effects of Big Data on Low Income and Underserved Consumers at September Workshop, available at http://www.ftc.gov/news-events/press-releases/2014/04/ftc-examine-effects-big-data-lowincome-underserved-consumers
[55] M. Schrage, Big Data’s Dangerous New Era of Discrimination, available at http://blogs.hbr.org/2014/01/bigdatas-dangerous-new-era-of-discrimination/.
[56] Google/DoubleClick Merger case
[57] French Competition Authority, Opinion n°10-A-13 of 14.06.2010, http://www.autoritedelaconcurrence.fr/pdf/avis/10a13.pdf. The opinion aimed at giving general guidance on the subject; it did not focus on any particular market or industry, although it described a possible application of its analysis to the telecom industry.
[58] http://www.analysisgroup.com/is-big-data-a-true-source-of-market-power/#sthash.5ZHmrD1m.dpuf
[59] Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., ... & Wood, A. (2017). Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134.
[60] Frank A. Pasquale ‘Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society’ (July 14, 2017). Ohio State Law Journal, Vol. 78, 2017; U of Maryland Legal Studies Research Paper No. 2017-21, 7.
[61] Oswald, M., Grace, J., Urwin, S., & Barnes, G. C. (2018). Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality. Information & Communications Technology Law, 27(2), 223-250.
[62] Ibid.
[63] Abraham S., Hickok E., Sinha A., Barooah S., Mohandas S., Bidare P. M., Dasgupta S., Ramachandran V., and Kumar S., NITI Aayog Discussion Paper: An aspirational step towards India’s AI policy. Retrieved from https://cis-india.org/internet-governance/files/niti-aayog-discussion-paper.
[64] Reisman D., Schultz J., Crawford K., Whittaker M., (2018, April) Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability. Retrieved from https://ainowinstitute.org/aiareport2018.pdf.
[65] Sample I., (2017, November 5) Computer says no: why making AIs fair, accountable and transparent is crucial. Retrieved from https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial.
[66] Kroll, J. A., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016). Accountable algorithms. U. Pa. L. Rev., 165, 633.
India’s post-truth society
The op-ed was published in Hindu Businessline on September 7, 2018.
After a set of rumours spread over WhatsApp triggered a series of lynchings across the country, the government recently took the interesting step of placing the responsibility for this violence on WhatsApp. This is especially noteworthy because the party in power, as well as many other political parties, have taken to campaigning over social media, including using WhatsApp groups in a major way to spread their agenda and propaganda.
After all, a simple tweet or message could be shared thousands of times and make its way across the country several times, before the next day’s newspaper is out. Nonetheless, while the use of social media has led to a lot of misinformation and deliberately polarising ‘news’, it has also helped contribute to remarkable acts of altruism and community, as seen during the recent Kerala floods.
While the government has taken a seemingly techno-determinist view by placing responsibility on WhatsApp, the duality of very visible uses of social media has led to others viewing WhatsApp and other internet platforms more as a tool, at the mercy of the user. However, as historian Melvin Kranzberg noted, “technology is neither good nor bad; nor is it neutral”. And while the role of political and private parties in spreading polarising views should be rigorously investigated, it is also true that these internet platforms are creating new and sometimes damaging structural changes to how our society functions. A few prominent issues are listed below:
Fragmentation of public sphere
Jürgen Habermas, the noted sociologist, conceptualised the Public Sphere as “a network for communicating information and points of view, where the streams of communication are, in the process, filtered and synthesised in such a way that they coalesce into bundles of topically specified public opinions”.
To a large extent, the traditional gatekeepers of information flow, such as radio, TV and mainstream newspapers, performed functions enabling a public sphere. For example, if a truth-claim about an issue of national relevance was to be made, it would need to get an editor’s approval.
In case there was a counter-claim, that too would have to pass an editorial check. Today, however, nearly anybody can become a publisher of information online, and if it catches the right influencer's attention, it could spread far wider and far quicker than it would have in traditional media. While this does have the huge positive of giving space to more diverse viewpoints, it also comes with two significant downsides.
First, that it gives a sense of ‘personal space’ to public speech. An ordinary person would think a few times, do some research, and perhaps practice a speech before giving it before 10,000 people. An ordinary person would also think for perhaps five seconds before putting out a tweet on the very same topic, despite now having a potentially global audience.
Second, by having messages sent directly to your hand-held device, rather than open for anyone to fact-check and counter, there is less transparency and accountability for those who send polarising material and misinformation. How can a mistaken and polarising view be countered, if one doesn't even know it is being made? And if it can't be countered, how can its spread be contained?
The attention market
Not only is that earlier conception of public sphere being fragmented, these new networked public spheres are also owned by giant corporations. This means that these public spheres where critical discourse is being shaped and spread, are actually governed by advertisement-financed global conglomerates. In a world of information overflow, and privately owned, ad-financed public spheres, the new unit of currency is attention.
It is in the direct interest of the Facebooks and Googles of the world, to capture user attention as long as possible, regardless of what type of activity that encourages. It goes without saying that neither the ‘mundane and ordinary’, nor the ‘nuanced and detailed’ capture people’s attention nearly as well as the sensational and exciting.
Nearly as addictive, studies show, are the headlines and viewpoints which confirm people's biases. Fed by algorithms that understand the human desire to ‘fit in’, people are drawn into echo chambers where like-minded people find each other and continually validate each other. When people with extremist views are guided to each other by these algorithms, they not only gather validation, but also now use these platforms to confidently air their views, thus normalising what was earlier considered extreme. Needless to say, internet platforms are becoming richer in the process.
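The dynamic described above can be sketched in a few lines of code. This is an illustrative toy, not any platform's actual algorithm: a ranker that orders posts by similarity to what a user already engaged with will, by construction, push confirming content up and opposing content down.

```python
# Toy engagement-maximising ranker (illustrative assumption, not a real
# platform's algorithm). Posts most similar to the user's past engagement
# are ranked first, so the feed narrows toward existing views over time.

def similarity(a, b):
    """Jaccard overlap between two sets of topic tags, in [0, 1]."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank_feed(user_history, candidates):
    """Order candidate posts by similarity to the user's past engagement."""
    engaged_tags = set().union(*user_history) if user_history else set()
    return sorted(candidates,
                  key=lambda post: similarity(engaged_tags, post["tags"]),
                  reverse=True)

history = [{"politics", "partyX"}, {"partyX", "rally"}]
posts = [
    {"id": 1, "tags": {"partyX", "politics"}},   # confirms existing views
    {"id": 2, "tags": {"cricket"}},              # unrelated
    {"id": 3, "tags": {"partyY", "politics"}},   # opposing viewpoint
]
feed = rank_feed(history, posts)  # confirming post ranks first
```

Even this crude sketch demonstrates the structural point: no one "decides" to build an echo chamber; optimising for engagement produces one.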
Censorship by obfuscation
Censorship in the attention economy no longer requires blocking of views or interrupting the transmission of information. Rather, it is sufficient to drown out relevant information in an ocean of other information. Fact-checking news sites face this problem. Regardless of how often they fact-check speeches by politicians, only a minuscule percentage of the original audience comes to know about, much less care about, the corrections.
Additionally, repeated baseless attacks on the credibility of news sources cause confusion about which sources are trustworthy. In her extremely insightful book “Twitter and Tear Gas”, Prof Zeynep Tufekci rightly points out that rather than traditional censorship, powerful entities today (often states) focus on overwhelming people with information, producing distractions, and deliberately causing confusion, fear and doubt. Facts often don't matter, since the goal is not to be right, but to cause enough confusion and doubt to displace narratives that are problematic to these powers.
Viewpoints from members of groups that have been historically oppressed, are especially harangued. And those who are oppressed tend to have less time, energy and emotional resources to continuously deal with online harassment, especially when their identities are known and this harassment can very easily spill over to the physical world.
Conclusion
Habermas saw the ideal public sphere as one that is free of lies, distortions, manipulations and misinformation. Needless to say, this is a far cry from our reality today, with all of the above available in unhealthy doses. It will take tremendous effort to fix these issues, and it is certainly no longer sufficient for internet platforms to claim they are neutral messengers. Further, whether the systemic changes are understood or not, if they are not addressed, they will continue to create and expand fissures in society, giving the state valid cause for intervening through backdoors, surveillance, and censorship, all actions that states have historically been happy to do!
Artificial Intelligence in the Governance Sector in India
Ecosystem Mapping: Shweta Mohandas and Anamika Kundu
Edited by: Amber Sinha, Pranav MB and Vishnu Ramachandran
Much of the technological capacity and funding for AI in governance in India is coming from the private sector - a trend we expect will continue as the government engages in an increasing number of partnerships with both start-ups and large corporations alike. While there is considerable enthusiasm and desire by the government to develop AI-driven solutions in governance, including the release of two reports identifying the broad contours of India's AI strategy, this enthusiasm is yet to be underscored by adequate financial, infrastructural, and technological capacity. This gap provides India with a unique opportunity to understand some of the ethical, legal and technological hurdles faced by the West both during and after the implementation of similar technology, and to avoid these challenges when devising its own AI strategy and regulatory policy.
The case study identified five sub-sectors, including law enforcement, education, defense and the discharge of governmental functions, and also considered the implications of AI in judicial decision-making processes as used in the United States. After mapping the uses of AI in various sub-sectors, this report identifies several challenges to the deployment of this technology. These include infrastructural and technological capacity, particularly among key actors at the grassroots level, lack of trust in AI-driven solutions, and inadequate funding. We also identified several ethical and legal concerns that policy-makers must grapple with. These include over-dependence on AI systems, privacy and security, assignment of liability, bias and discrimination in both process and outcome, transparency and due process. The report can thus serve as a roadmap for the future of AI in India, tracking corresponding and emerging developments in other parts of the world. In the final section of the report, we propose several recommendations for policy-makers and developers that might address some of the challenges and ethical concerns identified. These include benchmarks for the use of AI in the public sector, development of standards of explanation, a standard framework for engagement with the private sector, leveraging AI as a field to further India's international strategy, developing adequate standards of data curation, ensuring that the benefits of the technology reach the lowest common denominator, adopting interdisciplinary approaches to the study of Artificial Intelligence, and developing fairness, transparency and due process through the contextual application of a rules-based system.
It is crucial that policy-makers do not adopt a ‘one-size-fits-all’ approach to AI regulation but consider all options within a regulatory spectrum that considers the specific impacts of the deployment of this technology for each sub-sector within governance - with the distinction of public sector use. Given that the governance sector has potential implications for the fundamental rights of all citizens, it is also imperative that the government does not shy away from its obligation to ensure the fair and ethical deployment of this technology while also ensuring the existence of robust redress mechanisms. To do so, it must chart out a standard rules-based system that creates guidelines and standards for private sector development of AI solutions for the public sector. As with other emerging technology, the success of Artificial intelligence depends on whether it is deployed with the intention of placing greater regulatory scrutiny on the daily lives of individuals or for harnessing individual potential that augment rather than counter the core tenets of constitutionalism and human dignity.
Read the full report here
Cross-Border Data Sharing and India: A study in Processes, Content and Capacity
The crux of the issue lies in the age-old international law tenet of territorial sovereignty. Investigating crimes is a sovereign act, and it cannot be exercised in the territory of another country without that country's consent or through a permissive principle of extra-territorial jurisdiction. Certain countries have explicit statutory provisions which disallow companies incorporated in their territory from disclosing data to foreign jurisdictions. The United States of America, which houses most of the leading technology firms like Google, Apple, Microsoft, Facebook, and WhatsApp, has this requirement.
This necessitates a consent based international model for cross border data sharing as a completely ad-hoc system of requests for each investigation would be ineffective. Towards this, Mutual Legal Assistance Treaties (MLATs) are the most widely used method for cross border data sharing, with letters rogatory, emergency requests and informal requests being other methods available to most investigators. While recent gambits towards ring-fencing the data within Indian shores might alter the contours of the debate, a sustainable long-term strategy requires a coherent negotiation strategy that enables co-operation with a range of international partners.
This negotiation strategy needs to be underscored by domestic safeguards that ensure human rights guarantees in compliance with international standards, robust identification and augmentation of capacity and clear articulation of how India’s strategy lines up with the existing tenets of International law. This report studies the workings of the Mutual Legal Assistance Treaty (MLAT) between the USA and India and identifies hurdles in its existing form, culls out suggestions for improvement and explores how recent legislative developments, such as the CLOUD Act might alter the landscape.
The path forward lies in undertaking process-based reforms within India, with an eye on leveraging these developments to articulate a strategically beneficial position when negotiating with external partners. As the nature of policing changes to a model that increasingly relies on electronic evidence, India needs to ensure that the technical strides it has made in accessing this evidence are not held back by the lack of an enabling policy environment. While the data localisation provisions introduced in the draft Personal Data Protection Bill may alter the landscape once it becomes law, this paper retains its relevance in guiding the processes, content and capacity needed to adequately manoeuvre the present conflict-of-laws situation and to access data not belonging to Indians that may be needed for criminal investigations. As a disclaimer, the report and graphics contained within it have been drafted using publicly available information and may not reflect real-world practices.
Click here to download the report.
With research assistance from Sarath Mathew and Navya Alam, and visualisation by Saumyaa Naidu.
A trust deficit between advertisers and publishers is leading to fake news
The article was published in Hindustan Times on September 24, 2018.
Traditionally, we have depended on the private censorship that intermediaries conduct on their platforms. They enforce, with some degree of success, their own community guidelines and terms of services (TOS). Traditionally, these guidelines and TOS have been drafted keeping in mind US laws since historically most intermediaries, including non-profits like Wikimedia Foundation were founded in the US.
Across the world, this private censorship regime was accepted by governments when they enacted intermediary liability laws (in India we have Section 79 of the IT Act). These laws gave intermediaries immunity from liability emerging from third-party content about which they have no “actual knowledge” unless they were informed using takedown notices. Intermediaries set up offices in countries like India, complied with some lawful interception requests, and also conducted geo-blocking to comply with local speech regulation.
For years, the Indian government has been frustrated since policy reforms that it has pursued with the US have yielded little fruit. American policy makers keep citing shortcomings in the Indian justice system to avoid expediting the MLAT (Mutual Legal Assistance Treaty) process and the signing of an executive agreement under the US CLOUD Act. This agreement would compel intermediaries to comply with lawful interception and data requests from Indian law enforcement agencies no matter where the data was located.
The data localisation requirement in the draft national data protection law is a result of that frustration. As with the US, a quickly enacted data localisation policy is absolutely non-negotiable when it comes to Indian military, intelligence, law enforcement and e-governance data. For India, it also makes sense in the cases of health and financial data with exceptions under certain circumstances. However, it does not make sense for social media platforms since they, by definition, host international networks of people. Recently an inter-ministerial committee recommended that “criminal proceedings against Indian heads of social media giants” also be considered. However, raiding Google’s local servers when a lawful interception request is turned down or arresting Facebook executives will result in retaliatory trade actions from the US.
While the consequences of online recruitment, disinformation in elections and fake news that undermines public order are indeed serious, are there alternatives to such extreme measures for Indian policy makers? Updating intermediary liability law is one place to begin. These social media companies increasingly exercise editorial control, albeit indirectly, via algorithms, even as they claim to have no “actual knowledge”.
But they are no longer mere conduits or dumb pipes as they are now publishers who collect payments to promote content. Germany passed a law called NetzDG in 2017 which requires expedited compliance with government takedown orders. Unfortunately, this law does not have sufficient safeguards to prevent overzealous private censorship. India should not repeat this mistake, especially given what the Supreme Court said in the Shreya Singhal judgment.
Transparency regulations are imperative. And they are needed urgently for election and political advertising. What do the ads look like? Who paid for them? Who was the target? How many people saw these advertisements? How many times? Transparency around viral content is also required. Anyone should be able to see all public content that has been shared with more than a certain percentage of the population over a historical timeline for any geographic area. This will prevent algorithmic filter bubbles and echo chambers, and also help public and civil society monitor unconstitutional and hate speech that violates terms of service of these platforms. So far the intermediaries have benefitted from surveillance — watching from above. It is time to subject them to sousveillance — watched by the citizens from below.
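The transparency questions listed above can be made concrete as a data structure. The sketch below is a hypothetical disclosure record for a political-ad archive; the field names are illustrative assumptions, not any existing platform's API, but each field answers one of the questions the paragraph raises.

```python
# A minimal sketch of a public political-ad transparency record,
# mirroring the questions in the text. All field names are
# hypothetical assumptions, not an existing platform's schema.
from dataclasses import dataclass

@dataclass
class AdDisclosure:
    creative_url: str   # what did the ad look like?
    paid_by: str        # who paid for it?
    targeting: dict     # who was the target?
    impressions: int    # how many people saw it?
    frequency: float    # how many times, on average, per person?
    region: str = "IN"

# Example entry (all values invented for illustration)
record = AdDisclosure(
    creative_url="https://example.org/ad/123.png",
    paid_by="Example Party Trust",
    targeting={"age": "18-25", "state": "Karnataka"},
    impressions=1_200_000,
    frequency=3.4,
)
```

A regulator could mandate that every political ad produce one such record in a publicly queryable archive, which is what would let citizens and civil society perform the "sousveillance" the article calls for.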
Data portability mandates and interoperability mandates will allow competition to enter these monopoly markets. Artificial intelligence regulations for algorithms that significantly impact the global networked public sphere could require two things: a right to an explanation, and a right to influence the automated decision-making that shapes the consumer's experience on the platform.
The real solution lies elsewhere. Google and Facebook are primarily advertising networks. They have successfully managed to destroy the business model for real news and replace it with a business model for fake news by taking away most of the advertising revenues from traditional and new news media companies. They were able to do this because there was a trust deficit between advertisers and publishers. Perhaps this trust deficit could be solved by a commons-based solutions based on free software, open standards and collective action by all Indian new media companies.
Why Data Localisation Might Lead To Unchecked Surveillance
The article was published in Bloomberg Quint on October 15, 2018 and also mirrored in the Quint.
In April 2018, the Reserve Bank of India put out a circular requiring that all “data relating to payment systems operated by them are stored in a system only in India” within six months. Lesser requirements already exist: since 2014, the back-up of books of account and other books that are stored electronically by Indian companies must be kept in India; the broadcasting sector, under the Foreign Direct Investment policy, must locally store subscriber information; and the telecom sector, under the Unified Access licence, may not transfer subscriber data outside India.
The draft e-commerce policy has a wide-ranging requirement of exclusive local storage for “community data collected by Internet of Things devices in public space” and “data generated by users in India from various sources including e-commerce platforms, social media, search engines, etc.”, as does the draft e-pharmacy regulations, which stipulate that “the data generated” by e-pharmacy portals be stored only locally.
While companies such as Airtel, Reliance, PhonePe (majority-owned by Walmart) and Alibaba have spoken up in support of the government’s data localisation efforts, others like Facebook, Amazon, Microsoft, and Mastercard have led the way in opposing it.
Just this week, two U.S. Senators wrote to the Prime Minister’s office arguing that the RBI’s data localisation regulations along with the proposals in the draft e-commerce and cloud computing policies are “key trade barriers”. In her dissenting note to the Srikrishna Committee's report, Rama Vedashree of the Data Security Council of India notes that, “mandating localisation may potentially become a trade barrier and the key markets for the industry could mandate similar barriers on data flow to India, which could disrupt the IT-BPM (information technology-business process management) industry.”
Justification For Data Localisation
What are the reasons for these moves towards data localisation?
Given the opacity of policymaking in India, many of the policies and regulations provide no justification at all. Even the ones that do, don’t provide cogent reasoning.
The RBI says it needs “unfettered supervisory access” and hence needs data to be stored in India. However, it fails to state why such unfettered access is not possible for data stored outside of India.
As long as an entity can be compelled by Indian laws to engage in local data storage, that same entity can also be compelled by that same law to provide access to their non-local data, which would be just as effective.
What if they don’t provide such access? Would they be blacklisted from operating in India, just as they would if they didn’t engage in local data storage? Is there any investigatory benefit to storing data in India? As any data forensics expert would note, chain of custody and data integrity are the most important components of data handling in a fraud investigation, not physical access to hard drives. It would be difficult for the government to say that it will block all Google services if the company doesn’t provide all the data that Indian law enforcement agencies request from it. However, it would be facile for the RBI to bar Google Pay from operating in India if Google doesn’t provide it “unfettered supervisory access” to data.
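The forensic point above can be illustrated with a few lines of code. Data integrity is established cryptographically, not by physical possession of a hard drive: a hash taken when evidence is collected can be re-checked at any later point, wherever the bytes are stored. This is a minimal sketch using standard SHA-256 hashing, not a description of any agency's actual procedure.

```python
# Minimal sketch: evidence integrity via cryptographic hashing.
# A digest recorded at seizure lets an investigator later prove the
# data is unaltered, regardless of which country's server holds it.
import hashlib

def evidence_digest(data: bytes) -> str:
    """SHA-256 fingerprint of an evidence file's contents."""
    return hashlib.sha256(data).hexdigest()

original = b"transaction log 2018-04-06"      # stand-in for seized data
digest_at_seizure = evidence_digest(original)  # recorded in custody log

# Later verification: any change to the bytes changes the digest.
assert evidence_digest(original) == digest_at_seizure
assert evidence_digest(original + b"!") != digest_at_seizure
```

The same check works identically whether the underlying file sits in Mumbai or in Virginia, which is why localisation adds little to forensic integrity.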
The most exhaustive justification of data localisation in any official Indian policy document is that contained in the Srikrishna Committee’s report on data protection. The report argues that there are several benefits to data localisation:
- Effective enforcement;
- Avoiding reliance on undersea cables;
- Avoiding foreign surveillance on data stored outside India; and
- Building an “Artificial Intelligence ecosystem”.
Of these, the last three reasons are risible.
Not A Barrier To Surveillance
Requiring mirroring of personal data on Indian servers will not magically give rise to experts skilled in statistics, machine learning, or artificial intelligence, nor will it somehow lead to the development of the infrastructure needed for AI.
The United States and China are both global leaders in AI, yet no one would argue that China’s data localisation policies have helped it or that America’s lack of data localisation policies has hampered it.
On the question of foreign surveillance, data mirroring will not have any impact, since the Srikrishna Committee’s recommendation would not prevent companies from storing most personal data outside of India.
Even for “sensitive personal data” and for “critical personal data”, which may be required to be stored in India alone, such measures are unlikely to prevent agencies like the U.S. National Security Agency or the United Kingdom’s Government Communications Headquarters from being able to indulge in extraterritorial surveillance.
In 2013, slides from an NSA presentation that were leaked by Edward Snowden showed that the NSA’s “BOUNDLESSINFORMANT” programme collected 12.6 billion instances of telephony and Internet metadata (for instance, which websites you visited and who all you called) from India in just one month, making India one of the top 5 targets.
This shows that technically, surveillance in India is not a challenge for the NSA.
So, forcing data mirroring enhances Indian domestic intelligence agencies’ abilities to engage in surveillance, without doing much to diminish the abilities of skilled foreign intelligence agencies.
As I have noted in the past, the technological solution to reducing mass surveillance is to use decentralised and federated services with built-in encryption, using open standards and open source software.
Reducing reliance on undersea cables is, just like reducing foreign surveillance on Indians’ data, a laudable goal. However, a mandate of mirroring personal data in India, which is what the draft Data Protection Bill proposes for all non-sensitive personal data, will not help. Data will stay within India if the processing happens within India. However, if the processing happens outside of India, as is often the case, then undersea cables will still need to be relied upon.
The better way to keep data within India is to incentivise the creation of data centres and to work towards reducing the cost of internet interconnection by encouraging more peering among Internet connectivity providers.
While data mirroring will not help in improving the enforcement of any data protection or privacy law, it will aid Indian law enforcement agencies in gaining easier access to personal data.
The MLAT Route
Currently, many forms of law enforcement agency requests for data have to go through onerous channels called ‘mutual legal assistance treaties’. These MLAT requests take time and are ill-suited to the needs of modern criminal investigations. Recognising this, the U.S. passed a law called the CLOUD Act in March 2018. While the CLOUD Act compels companies like Google and Amazon, which have data stored in Indian data centres, to provide that data upon receiving legal requests from U.S. law enforcement agencies, it also enables easier access for foreign law enforcement agencies to data stored in the U.S., as long as they fulfil certain procedural and rule-of-law checks.
While the Srikrishna Committee does acknowledge the CLOUD Act in a footnote, it doesn’t analyse its impact, doesn’t provide suggestions on how India can do this, and only outlines the negative consequences of MLATs.
Further, it is inconceivable that the millions of foreign services that Indians access and provide their personal data to will suddenly find a data centre in India and will start keeping such personal data in India. Instead, a much likelier outcome, one which the Srikrishna Committee doesn’t even examine, is that many smaller web services may find such requirements too onerous and opt to block users from India, similar to the way that Indiatimes and the Los Angeles Times opted to block all readers from the European Union due to the coming into force of the new data protection law.
The government could be spending its political will on finding solutions to the law enforcement agency data access question, and negotiating solutions at the international level, especially with the U.S. government. However it is not doing so.
Given this, the recent spate of data localisation policies and regulation can only be seen as part of an attempt to increase the scope and ease of the Indian government’s surveillance activities, while India’s privacy laws still remain very weak and offer inadequate legal protection against privacy-violating surveillance. Because of this, we should be wary of such requirements, as well as of the companies that are vocal in embracing data localisation.
377 Bites the Dust: Unpacking the long and winding road to the judicial decriminalization of homosexuality in India
The article was published in Socio-Legal Review, a magazine published by National Law School of India University on October 11, 2018.
Introduction
After a prolonged illness due to AIDS-related complications, the gregarious Queen front-man Farrokh Bulsara (known to the world as Freddie Mercury) breathed his last in his home in Kensington, London in 1991. Despite being the symbol of gay masculinity for over a decade, Mercury never explicitly confirmed his sexual orientation, for reasons that remain unknown but could stem from prevailing social stigma. Occluded from public discourse and shrouded in irrational fears, the legitimate problems of the LGBT+ community, including the serial killer that was HIV/AIDS, were relegated to avoidable debauchery as opposed to genuine illness. Concerted activism throughout the ’90s, depicted on the big screen through masterpieces such as Philadelphia, alerted the Western public to this debacle, which led to a hard-fought array of rights and a reduction of social ostracization at the turn of the century for the LGBT+ community across western countries. This includes over two dozen countries that have allowed same-sex marriages and a host of others that recognize civil union between same-sex partners in some form.[1]
On 6th September, 2018, Section 377 of the Indian Penal Code – a colonial era law that criminalized “carnal intercourse against the order of nature” – bit the dust in New Delhi, at the hands of five judges of the Supreme Court of India (Navtej Johar v Union of India).[2] Large parts of the country celebrated the restoration of the ideals of the Indian Constitution. It was freedom, not just for a community long suppressed, but for the ethos of our foundation that for a century suffered this incessant incongruity. The celebrations were tempered, perhaps by a recognition of how long this fight had taken, the unnecessary hurdles – both judicial and otherwise – that were erected along the way, and a realization of the continued suffering this community might have to tolerate till they truly earn the acceptance they deserve. While the judgment will serve as a document that signifies the sanctity of our constitutional ethos, in the grander scheme of things it is still but a small step, with the potential to catalyze a giant leap forward. For our common future, it is imperative that the LGBT+ community does not undertake this leap alone but is accompanied by the rest of the nation, a nation that recognizes the travails of this long march to freedom.
Long March to Freedom
Modelled on the 1533 Buggery Act in the UK, Section 377 was introduced into the Indian Penal Code by Thomas Macaulay, a representative of the British Raj. While our colonial masters progressed in 1967, when the UK decriminalised homosexual acts, the hangover enmeshed in our penal laws lingered on. Public discourse on this legal incongruity emerged initially with the publication of a report titled Less than Gay: A Citizens’ Report on the Status of Homosexuality in India, spearheaded by activist Siddhartha Gautam on behalf of the AIDS Bhedbhav Virodhi Andolan (ABVA), which sought to decriminalise homosexuality and thereby move towards removing its associated stigma.[3] The ABVA went on to file a petition for this decriminalisation in 1994. The judicial skirmish continued in 2001, with the Naz Foundation, a Delhi-based NGO that works on HIV/AIDS and sexual health, filing a petition by way of Public Interest Litigation asking for a reading down of the Section. The Delhi High Court initially dismissed this petition, stating that the foundation had no locus standi.[4] Naz Foundation appealed against this before the Supreme Court, which overturned the dismissal on technical grounds and ordered the High Court to decide the case on merits.
The two-judge bench of the Delhi High Court held that Section 377 violated privacy, autonomy and liberty, ideals which were grafted into the ecosystem of fundamental rights guaranteed by Part-III of the Indian Constitution.[5] It stated that the Constitution was built around the core tenet of inclusiveness, which was denigrated by the sustained suppression of the LGBT+ community. It was an impressive judgment, not only because of the bold and progressive claim it made in a bid to reverse a century and a half of oppression, but also because of the quality of the judgment itself. It tied in principles of international law, along with both Indian and Foreign judgments in addition to citing literature on sexuality as a form of identity. For a brief while, faith in the ‘system’ seemed justified.
Hope, however, is a fickle friend. Four years from that day, an astrologer by the name of Suresh Kumar Koushal challenged the Delhi High Court’s verdict.[6] Some of the reasons behind this challenge would defy any standard sense of rationality. These included national security concerns – as soldiers who stay away from their families[7] may enter into consensual relationships with each other – leading to distractions that might end up in military defeats. Confoundingly, the Supreme Court’s verdict lent judicial legitimacy to Koushal’s thought process, as it overturned the Naz Foundation judgment and affirmed the constitutional validity of Section 377 on some truly bizarre grounds.[8] Indian constitutional tradition permits discrimination by the state only if the classification is based on an intelligible differentia separating the group being discriminated against from the rest of the populace, and bears a rational nexus with a constitutionally valid objective. To satisfy this threshold, the Supreme Court stated, without any evidence, that there are two classes of people: those who engage in sexual intercourse in the ‘ordinary course’ and those who do not, thereby satisfying the intelligible differentia threshold.[9] As pointed out by constitutional law scholar Gautam Bhatia, this differentia makes little sense – an extrapolation of this idea could indicate that intercourse with a blue-eyed person was potentially not ‘ordinary’, since the probability of this occurring is rare.[10] The second justification was based on numbers.
The Court argued that statistics showed only about 200 people had ever been arrested under this law, suggesting it was largely dormant and that discrimination was therefore not established per se.[11] In other words, a plain reading of the judgement might lead one to conclude that the random arrests of a small number of citizens would be constitutionally protected, so long as their number does not overshoot an arbitrarily determined de minimis threshold! The judgment seemed to drag Indian society ceaselessly into the past. This backward shift at home was accompanied by posturing abroad, as India opposed the recent wave of UN resolutions advocating LGBT+ rights.[12]
Thankfully, there remained a way to correct such Supreme Court-induced travesties: the curative petition, a concept introduced by the Court itself in one of its earlier judgements.[13] Needless to say, such a petition was duly filed before the Court.[14] While it was under consideration, last August a nine-judge bench of the Court spun some magic through a landmark judgment in Just. (Retd.) K S Puttaswamy v Union of India,[15] which held that the ‘right to privacy’ is a fundamental right recognised by the Indian Constitution. The judgment in Koushal was singled out for criticism by Justice Chandrachud, who asserted that an entire community could not be deprived of the dignity of privacy in their sexual relations.
Strategically, this was a master-class. While the right to privacy cannot by itself justify allowing individuals to choose their sexual orientation, in several common law nations, including the UK[16] and the USA,[17] privacy has served as the initial spark for legitimizing same-sex relations. A year before the privacy judgment was delivered, a group of individuals had filed a separate petition arguing that Section 377 violated their constitutional rights. This petition was intrinsically different[18] from the Naz Foundation’s: the Foundation had filed a ‘public interest litigation’ in a representative capacity, whereas this petition was brought by individuals affected in their personal capacity, so the nature of the claim in each case was different.
The cold case file of this petition, which crystallised into the iconic judgment delivered last week, was brought to the fore and listed for hearing in January 2018.[19] Justice Chandrachud’s opinion in Puttaswamy, which tore apart the Koushal verdict, had no small role to play in the unfolding of this saga.[20]
And so the hearings began. The government chose not to oppose the petition and left the court to decide the fate of Section 377.[21] This was a convenient manoeuvre by the government, effectively shifting the ball into the judiciary’s court and shielding itself from potential pushback from its conservative voter-base. However, as public support for decriminalisation started pouring in from various quarters, leaders of religious groups were quick to make their opposition known, leaving the five judges on the bench to decide the fate of a community long suppressed through the clutches of an illegitimate law.
“I am what I am”: The judgement, redemption and beyond
“The mis-application of this provision denied them the Fundamental Right to equality guaranteed by Article 14. It infringed the Fundamental Right to non-discrimination under Article 15, and the Fundamental Right to live a life of dignity and privacy guaranteed by Article 21. The LGBT persons deserve to live a life unshackled from the shadow of being ‘unapprehended felons’.”[22]
Justice Indu Malhotra summed up her short judgement with this momentous pronouncement, adding that ‘history owes an apology’[23] to the members of the LGBT+ community for the injustices they faced during these centuries of hatred and apathy. It seems fair to suggest that this idea of ‘righting the wrongs of the past’ became the underlying theme of the Supreme Court’s landmark verdict on the constitutionality of Section 377. Five judges, through four concurring but separate opinions, extracted the essence of the claim against this law – protecting the virtue of personal liberty and dignity. In doing so, the Court exculpated itself from the travesty of Suresh Koushal, emancipated the ‘miniscule minority’ from their bondage before the law, and took yet another step towards restoring faith in the ‘system’, of which the judiciary currently positions itself as the sole conscientious wing. Perhaps the only people shamed by this verdict were our parliamentarians, who on two separate occasions in the recent past had thwarted any chance of change when they opposed, insulted and ridiculed Dr. Shashi Tharoor as he attempted to introduce a Bill decriminalizing homosexuality on the floor of the House.[24]
Earlier in the day, the Chief Justice, authoring the lead opinion for himself and Justice Khanwilkar, began with the ominous pronouncement that ‘denying self-expression (to the individual) was an invitation to death’,[25] emphasizing through his long judgement the importance of promoting individuality in all its varied facets – in matters of choice, privacy, speech and expression.[26] Arguing strongly in support of the ‘progressive realization of rights’,[27] which he identified as the soul of constitutional morality, the Chief Justice outlawed the ‘artificial distinction’ drawn between heterosexuals and homosexuals through the application of the ‘equality’ doctrine embedded in Articles 14 and 15.[28] Noting that the recent criminal law amendment recognizes the absence of consent as the basis for sexual offences, he pointed out the lack of a similar consent-based framework for non-peno-vaginal sex, effectively de-criminalizing ‘voluntary sexual acts by consenting adults’ as envisaged within the impugned law.[29] The Chief Justice went on to elaborate that the rights to equality, liberty and privacy are inherent in all individuals, and that no discrimination on grounds of sex would survive the scrutiny of the law.[30]
Justice Nariman in his separate opinion charted the legislative history behind the adoption of the Indian Penal Code. In his inimitable manner, he travelled effortlessly across time and space, sourcing historical material and legislation, judicial decisions and literary critique from various jurisdictions to bolster the claim that the discrimination faced by homosexuals had no basis in law or fact.[31] For instance, referring to the UK’s Wolfenden Committee Report on the decriminalisation of homosexuality, which urged legislators to distinguish between ‘sin and crime’, the judge lamented the lives lost to mere social perception, including those of Oscar Wilde and Alan Turing.[32] Repelling the popular myth of homosexuality being a ‘disease’, he quoted from the Mental Healthcare Act, 2017, the US Supreme Court’s seminal judgment in Lawrence v Texas[33] and several other studies on the intersection of homosexuality and public health, dismissing this contention entirely. Justice Nariman also invoked the doctrine of ‘manifest arbitrariness’[34] to dispel the notion that homosexuals could lawfully be treated as ‘different’: since the law was based on sexual identity and orientation, it was a gross abuse of the equal protection guaranteed by the Constitution.
Justice Chandrachud, having already built a formidable reputation as the foremost liberal voice on the bench, launched a scathing, almost visceral attack on the idea of an ‘unnatural sexual offence’ insofar as it applied to homosexuality.[35] Mirroring the concern first espoused by Justice Nariman about the chilling effect of majoritarianism, he wondered aloud what societal harm a provision like Section 377 sought to prevent. Indeed, his separate opinion is categorical in its negation of any ‘intelligible differentia’ between ‘natural’ and ‘non-natural’ sex, sardonically stating that the perpetuation of heteronormativity cannot be the object of a law.[36]
As an interesting aside, his judgement in Puttaswamy famously introduced a section called ‘discordant notes’,[37] which led an introspective Court to disown and overturn disturbing precedent from the past, most notably its opinion in ADM Jabalpur,[38] which had decided that the right to seek redressal for violation of Fundamental Rights remained suspended as a consequence of the National Emergency.
In a similar act of constitutional course-correction, he delved into a critique of the Apex Court’s judgement in the Nergesh Meerza[39] case, a decision which upheld Air India’s discriminatory practice of treating men and women as different classes of employees, denying women employees certain benefits ordinarily available to men. The Court in Nergesh Meerza read the non-discrimination guarantee in Article 15 narrowly, holding that only discrimination based on ‘sex alone’ would be struck down. Since the sexes differed in their modes of recruitment, promotion and conditions of service, the Court reasoned, the practice did not amount to ‘merely sex-based’ categorization and was an acceptable form of classification. In his missionary zeal to exorcise the Court of past blemishes, Dr. Chandrachud observed that interpreting constitutional provisions through such narrow tests as ‘sex alone’ would denude the freedoms guaranteed within the text. Though not the operative part of the judgement, one hopes his exposition of the facets of the equality doctrine and the fallacies in the reasoning of Nergesh Meerza will pave the way for just jurisprudence in sex discrimination cases in the future.[40]
Reverting to the original issue, the judge addressed several key concerns voiced by the LGBT+ community through their years of struggle. He spoke of bridging the public-private divide by ensuring the protection of sexual minorities in the public sphere as well, where they are most vulnerable. Alluding to his opinion in Puttaswamy, he declared that all people have an inalienable right to privacy, which is a fundamental aspect of their liberty and the ‘soulmate of dignity’ – ascribing the right to a dignified life as a constitutional guarantee for one and all. Denouncing the facial neutrality[41] of Section 377, insofar as it targets certain ‘acts and not classes of people’, his broad and liberal reading of non-discrimination goes beyond the semantics of neutrality and braves the original challenge – fashioning a justice system with real equality at its core.
Shall History Absolve Us?
Where to from here then? Can the 500 pages of this iconic judgment magically change the social norms that define the existence of LGBT+ communities in modern Indian society? If the reception of this judgement by the conservative factions within society is anything to go by, the answer is clear enough. Yet, the role of this judgment – in an ecosystem of other enablers – might just be a crucial first step. As noted by Harvard Law School professor Lawrence Lessig, law can create, displace or change the collective expectations of society by channelling societal behaviour in a manner that conforms with its contents.[42] An assessment of the impact of Brown v Board of Education on African-Americans offers an interesting theoretical analogy.[43]
The unanimous decision of the US Supreme Court in Brown marked a watershed moment in American history, striking down the ‘separate but equal’ doctrine which had served as the basis for segregation between communities of colour and the dominant White majority in American public schools. While the ruling initially faced massive resistance, it laid the edifice for progressive legislation such as the Civil Rights Act and the Voting Rights Act a decade later.[44] While its true impact on evolving standards of acceptable social behaviour remains disputed, with valid arguments on all sides, Brown kick-started a counter-culture that sought to wipe out the toxic norms the Jim Crow era had birthed. Along with subsequent decisions by the US Supreme Court, it acted as the catalyst that morphed the boundaries between ‘us’ and ‘them’. Republican Senator Barry Goldwater attempted to stifle this counterculture in 1964 through a sustained campaign that framed opposition to the dictum in Brown not as hostility towards African-Americans but as resistance to an overly intrusive federal government encroaching on cultural traditions and values, particularly of the South.[45] In the past few years, cultural apathy seems to have taken a more sinister turn, as recent incidents of police violence and the rebirth of white supremacist movements indicate.
Lessons from a different context in another society can never be transposed without substantial alteration. Discrimination is intersectional, and a celebration of identity is a recognition of intersectionality. Therefore, the path ahead for the LGBT+ community lies in crafting a strategy that works for them – one that can draw from lessons learned in other contexts. Last week’s judgment could become a point of reference for a counter-cultural movement that works to remove the stains of oppression. The key challenge is carrying this message to swathes of the populace who, goaded by leading public figures, continue to treat homosexuality as an unnatural phenomenon.[46]
In a majority-Hindu nation, one possible medium of communication could be reference to ancient Hindu scriptures that do not ostracize individuals based on their sexual orientation but treat them as fellow sojourners on the path to Nirvana – the idea of spiritual emancipation, a central tenet of Hindu belief.[47] Strategically, dangling this framework as a carrot for religious conservatives may be a potential conversation starter, but it comes riddled with potholes, as the same scriptures could be interpreted to justify, for example, the subjugation of women. A more holistic approach might be to read these scriptures into the overarching foundation stone of society – the Indian Constitution, which is not a rigid, static document stuck in the time of its inception, but a dynamic one that responds to and triggers the Indian social and political journey. The burden of a constitution, as reiterated by Chief Justice Misra and Dr. Chandrachud, is to ‘draw a curtain’ on the past of social injustice and prejudice and embrace constitutional morality, a cornerstone of which is the principle of inclusiveness. Inclusiveness driven by rhetoric in political speeches and storylines on the big screen. Inclusiveness that fosters symbiosis between the teachings of religious scriptures and those of constitutional law professors – an inclusiveness that begets the idea of India, which is a fair deal for all Indians.
…And Justice for all?
In the aftermath of this decision come further legal challenges. While the ‘right to love’ has been vindicated, the right to formalise this union through societal recognition remains to be established. This judgement paves the way for the acceptance of homosexual relationships, but not necessarily for a homosexual couple’s right to marry. There are passages within Justice Chandrachud’s visionary analysis which directly address this concern and advocate extending the ‘full protection’ of the law to the LGBT+ populace. These will certainly be instructive for future courts, and one remains hopeful that the long march to freedom for the LGBT+ community and its supporters will not come to a screeching halt through judicial intervention or State action. If anything, in view of this verdict, the wings of government should bolster these efforts.
That said, social acceptance seldom waits on the sanction of the law.
The outpouring of public support which was witnessed through public demonstrations, social media advocacy and concerted efforts from so many quarters to bring down this draconian law needs to continue and consolidate. There are evils yet, and the path to genuine inclusiveness in this country (as in most others) is littered with thorns. And even greater resistance is likely to emerge when tackling some of these issues, which tend to hit closer home than others.
While this judgement entered into detailed discussions on the issue of consent, it remained disquietingly silent on a most contentious subject, perhaps because it was perceived to be beyond the terms of reference. The marital rape exception carved out in the Indian Penal Code, which keeps married relationships outside the purview of rape law, remains a curse – a reminder that gender equality in this nation will come only at tremendous human cost. The institution of family, that sacrosanct space which even the most liberal courtrooms in India have sought to protect, stands threatened. Malignant patriarchy will raise its head and claim its pound of flesh before the dust settles, and in the interest of freedom, it shall be up to the Apex Court to ensure that the dust settles on the right side of history. Else, all our progress, howsoever incremental, may be undone by this one stain on our collective conscience.
*Agnidipto Tarafder is an Assistant Professor of Law at the National University of Juridical Sciences, Kolkata, where he teaches courses in Constitutional Law, Labour Law and Privacy.
*Arindrajit Basu recently finished his LLM (Public International Law) at the University of Cambridge and is a Policy Officer at the Centre for Internet & Society, Bangalore.
_________________________________________________________________________________________
[1] Gay Marriage Around the World, Pew Research Centre (Aug 8, 2017) available at http://www.pewforum.org/2017/08/08/gay-marriage-around-the-world-2013/.
[2] W. P. (Crl.) No. 76 of 2016 (Supreme Court of India).
[3] Aids Bhedbav Virodhi Andolan, Less than Gay: A Citizen’s Report on the Status of Homosexuality in India (Nov-Dec, 1991) available at https://s3.amazonaws.com/s3.documentcloud.org/documents/1585664/less-than-gay-a-citizens-report-on-the-status-of.pdf.
[4] P.P Singh, 377 battle at journey’s end (September 6, 2018) available at https://indianexpress.com/article/explained/section-377-verdict-supreme-court-decriminalisation-gay-sex-lgbtq-5342008/.
[5] (2009) 160 DLT 277; W.P. (C) No.7455/2001 of 2009 (Delhi HC).
[6] Sangeeta Barooah Pisharoty, It is like reversing the motion of the earth, The Hindu (December 20, 2013) available at https://www.thehindu.com/features/metroplus/society/it-is-like-reversing-the-motion-of-the-earth/article5483306.ece.
[7] Id.
[8] (2014) 1 SCC 1 (Supreme Court of India).
[9] Ibid, at para 42.
[10] Gautam Bhatia, The unbearable wrongness of Koushal v Naz Foundation, Ind Con Law Phil (December 11, 2013)
[11] supra note 8, at para 43.
[12] Manjunath, India’s UN Vote: A Reflection of Our Deep Seated Anti-Gay Sentiments, Amnesty International (Apr 20, 2015) available at https://amnesty.org.in/indias-un-vote-reflection-societys-deep-seated-anti-gay-prejudice/.
[13] The concept of curative petitions was laid down in Rupa Ashok Hurra v. Ashok Hurra, (2002) 4 SCC 388 (Supreme Court of India).
[14] Ajay Kumar, All you need to know about the SC’s decision to reopen the Section 377 debate, FIRSTPOST (February 3, 2016) available at https://www.firstpost.com/india/all-you-need-to-know-about-the-scs-decision-to-reopen-the-section-377-debate-2610680.html.
[15] 2017 (10) SCC 1 (Supreme Court of India).
[16] The Wolfenden Report, Brit. J; Vener. Dis. (1957) 33, 205 available at https://sti.bmj.com/content/sextrans/33/4/205.full.pdf.
[17] Griswold v Connecticut, 381 US 479.
[18] Gautam Bhatia, Indian Supreme Court reserves judgment on the de-criminalisation of Homosexuality, OHRH Blog (August 15, 2018) available at http://ohrh.law.ox.ac.uk/the-indian-supreme-court-reserves-judgment-on-the-de-criminalisation-of-homosexuality/.
[19] Krishnadas Rajagopal, Supreme Court refers plea to decriminalize homosexuality under Section 377 to larger bench, The Hindu (January 8, 2018) available at https://www.thehindu.com/news/national/supreme-court-refers-377-plea-to-larger-bench/article22396250.ece.
[20] Puttaswamy, paras 124-28.
[21] Aditi Singh, Government leaves decision on Section 377 to the wisdom of Supreme Court, LIVEMINT (July 11, 2018) available at https://www.livemint.com/Politics/fMReaXRcldOWyY20ELJ0GK/Centre-leaves-it-to-Supreme-Court-to-decide-on-Section-377.html.
[22] supra note 2, at para 20.
[23] Ibid.
[24] Express News Service, Lok Sabha votes against Shashi Tharoor’s bill to decriminalize homosexuality again, Indian Express (March 12, 2016) available at https://indianexpress.com/article/india/india-news-india/decriminalising-homosexuality-lok-sabha-votes-against-shashi-tharoors-bill-again/.
[25] Navtej Johar v. Union of India, W. P. (Crl.) No. 76 of 2016 (Supreme Court of India) at para 1.
[26] Ibid, at para 2.
[27] Ibid, at para 82.
[28] Ibid, at para 224.
[29] Ibid, at para 253.
[30] Ibid.
[31] Separate Opinion, RF Nariman, paras 1-20.
[32] Ibid, at paras 28-9.
[33] Ibid. Lawrence v Texas, 539 US 558 (2003), discussed in paras 108-09.
[34] Ibid, at para 82.
[35] Separate Opinion, DY Chandrachud, at para 28.
[36] Ibid, at para 56-7, 61.
[37] Supra note 20, at para 118-9.
[38] ADM Jabalpur v Shiv Kant Shukla (1976) 2 SCC 521. (Supreme Court of India)
[39] Air India v Nergesh Meerza (1981) 4 SCC 335. (Supreme Court of India)
[40] Supra note 25, at paras 36-41.
[41] Ibid, at paras 42-43, 56.
[42] Lawrence Lessig, The Regulation of Social Meaning, 62 University of Chicago Law Review 943, 947 (1995).
[43] Brown v. Board of Education of Topeka, 347 U.S. 483.
[44] David Smith, Little Rock Nine: The day young students shattered racial segregation, The Guardian (September 24, 2017) available at https://www.theguardian.com/world/2017/sep/24/little-rock-arkansas-school-segregation-racism.
[45] Michael Combs and Gwendolyn Combs, Revisiting Brown v. Board of Education: A Cultural, Historical-Legal, and Political Perspective (2005).
[46] Poulomi Saha, RSS on 377: Gay sex not a crime but is unnatural, India Today (September 6, 2018) available at https://www.indiatoday.in/india/story/rss-on-section-377-verdict-gay-sex-not-a-crime-but-is-unnatural-1333414-2018-09-06.
[47] S Venkataraman and H Varuganti, A Hindu approach to LGBT Rights, Swarajya (July 4, 2015) available at https://swarajyamag.com/culture/a-hindu-approach-to-lgbt-rights.
Discrimination in the Age of Artificial Intelligence
This was originally published by Oxford Human Rights Hub on October 23, 2018
Image Credit: Sarla Catt via Flickr, used under a Creative Commons license available at https://creativecommons.org/licenses/by/2.0/
In the international human rights law context, AI solutions pose a threat to norms which prohibit discrimination. International human rights law recognizes that discrimination may take place in two ways: directly or indirectly. Direct discrimination occurs when an individual is treated less favourably than someone similarly situated, on one of the grounds prohibited in international law, which, as per the Human Rights Committee, include race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Indirect discrimination occurs when a policy, rule or requirement is ‘outwardly neutral’ but has a disproportionate impact on groups defined by one of the prohibited grounds of discrimination. A clear example of indirect discrimination recognized by the European Court of Human Rights arose in DH & Ors v Czech Republic. The ECtHR struck down an apparently neutral set of statutory rules implementing tests designed to evaluate the intellectual capability of children, because an excessively high proportion of minority Roma children scored poorly and were consequently sent to special schools, possibly because the tests were blind to cultural and linguistic differences. This case acts as a useful analogy for the potential disparate impacts of AI and should serve as useful precedent in future litigation against AI-driven solutions.
Indirect discrimination by AI may occur at two stages. The first is the use of incomplete or inaccurate training data, which results in the algorithm processing data that does not accurately reflect reality. Cathy O’Neil explains this with a simple example. There are two types of crimes: those that are ‘reported’, and those that are only ‘found’ if a policeman is patrolling the area. The first category includes serious crimes such as murder or rape, while the second includes petty crimes such as vandalism or possession of small quantities of illicit drugs. Increased police surveillance in areas of US cities where Black or Hispanic people reside leads to more crimes being ‘found’ there. The data is thus likely to suggest that these communities commit a higher proportion of crimes than they actually do – indirect discrimination that has been shown empirically in research published by ProPublica.
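O’Neil’s point can be illustrated with a toy calculation. This is a deliberately simplified sketch: the offence rate and patrol figures are invented for illustration and are not drawn from any real dataset.

```python
def found_offences(true_rate: float, patrol_hours: int) -> float:
    """Expected number of offences 'found' on patrol.

    'Found' crimes surface only when an officer is present, so the
    expected count scales with patrol hours, not with behaviour.
    """
    return true_rate * patrol_hours

TRUE_RATE = 0.05  # identical underlying offence rate in both areas
patrol_hours = {"area_a": 100, "area_b": 500}  # area_b is watched 5x as much

found = {area: found_offences(TRUE_RATE, hours)
         for area, hours in patrol_hours.items()}
print(found)  # {'area_a': 5.0, 'area_b': 25.0}

# A model trained on these raw counts would 'learn' that area_b is five
# times more crime-prone, and could then be used to justify still more
# patrols there: a self-reinforcing feedback loop.
```

The underlying rate is identical by construction; the disparity in the recorded data comes entirely from where enforcement attention is directed, which mirrors the mechanism described above.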
Discrimination may also occur at the stage of data processing, which happens inside a metaphorical ‘black box’ that accepts inputs and generates outputs without revealing to the human developer how the data was processed. This conundrum is compounded by the fact that such algorithms are often used to solve amorphous problems, compressing a complex question into a simple answer. An example is the development of ‘risk profiles’ of individuals for determining insurance premiums. Data might show that accidents are more likely in inner cities because of their densely packed populations. Racial and ethnic minorities tend to reside disproportionately in these areas, which means that algorithms could learn that minorities are more likely to get into accidents, thereby generating an outcome (a ‘risk profile’) that indirectly discriminates on grounds of race or ethnicity.
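The insurance example can be sketched the same way. In this hypothetical (all groups, weights and numbers are invented), the pricing rule never sees race or ethnicity, yet produces skewed premiums because location acts as a proxy:

```python
# Hypothetical policyholders with identical driving records; only their
# neighbourhood and (unused by the model) group membership differ.
drivers = [
    {"group": "minority", "inner_city": True},
    {"group": "minority", "inner_city": True},
    {"group": "minority", "inner_city": False},
    {"group": "majority", "inner_city": True},
    {"group": "majority", "inner_city": False},
    {"group": "majority", "inner_city": False},
]

def premium(driver: dict) -> float:
    # The model prices purely on location, where accident frequency is
    # genuinely higher; it never looks at 'group'.
    base = 100.0
    return base * (1.5 if driver["inner_city"] else 1.0)

def average_premium(group: str) -> float:
    quotes = [premium(d) for d in drivers if d["group"] == group]
    return sum(quotes) / len(quotes)

print(round(average_premium("minority"), 2))  # 133.33
print(round(average_premium("majority"), 2))  # 116.67
```

Because the minority group is over-represented in inner-city postcodes, a facially neutral rule yields systematically higher average premiums for that group: the characteristic shape of indirect discrimination.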
It would be wrong to ignore discrimination, both direct and indirect, that occurs as a result of human prejudice. The key difference between that and discrimination by AI lies in the ability of other individuals to compel a human decision-maker to explain the factors that led to the outcome in question and to test its validity against principles of human rights. The increasing amounts of discretion and, consequently, power being delegated to autonomous systems mean that principles of accountability which audit and check indirect discrimination need to be built into the design of these systems. In their absence, we risk surrendering core tenets of human rights law to the whims of an algorithmically crafted reality.
Conceptualizing an International Security Regime for Cyberspace
Policy-makers often use past analogous situations to reshape questions and resolve dilemmas in current issues. However, without sufficient analysis of both the present situation and the historical precedent being considered, the effectiveness of the analogy is limited. This applies across contexts, including cyberspace. For example, there exists a body of literature, including the Tallinn Manual, which applies key aspects (structure, process, and techniques) of various international legal regimes regulating the global commons (air, sea, space and the environment) towards developing global norms for the governance of cyberspace.
Given the recent deadlock at the Group of Governmental Experts (GGE), owing to a clear ideological split among participating states, it is clear that consensus on the applicability of traditional international law norms drawn from other regimes will not emerge if talks continue without a major overhaul of the present format of negotiations. The Achilles’ heel of the GGE thus far has been a deracinated approach to the norms-formulation process: excessive focus on the content and language of the applicable norm rather than the procedure underscoring its evolution, limited state and non-state participation, and a lack of consideration for the social, cultural, economic and strategic contexts through which norms emerge at the global level. Even if the GGE process became more inclusive and admitted all United Nations members, the strategies preceding the negotiation process must be designed to facilitate consensus.
There exists, to date, no scholarship tracing the negotiation processes that led to the forging of successful analogous universal regimes, or investigating the nature of the normative contestation that enabled the evolution of the core norms shaping those regimes. To develop an effective global regime governing cyberspace, we must consider if and how existing international law or norms for other global commons might apply to cyberspace, but we must also transcend that frame into more nuanced thinking about techniques and frameworks that have succeeded in consensus-building. This paper focuses on the latter, embarking on an assessment of how such regimes maximized functional utility through global interactions and shaped the legal and normative frameworks that resulted, for a time at least, in broad consensus.
Lessons from US response to cyber attacks
The article was published in Hindu Businessline on October 30, 2018. The article was edited by Elonnai Hickok.
In September, amidst the brewing of a new-found cross-continental romance between Kim Jong-un and Donald Trump, the US Department of Justice filed a criminal complaint indicting North Korean hacker Park Jin Hyok for playing a role in at least three massive cyber operations against the US: the Sony data breach of 2014, the Bangladesh bank heist of 2016 and the WannaCry ransomware attack of 2017. This indictment was followed on October 4 by one against seven officers of the GRU, Russia’s military intelligence agency, for “persistent and sophisticated computer intrusions.” Evidence adduced in support included forensic cyber evidence, such as similarities in lines of code and analysis of malware, along with factual details regarding the relationship between the employers of the indicted individuals and the state in question.
While it is unlikely that prosecutions will ensue, indicting individuals responsible for cyber attacks offers an attractive option for states looking to develop a credible cyber deterrence strategy.
Attributing cyber attacks
Technical uncertainty in attributing attacks to a specific actor has long deterred states from adopting defensive or offensive measures in response to an attack and from garnering support in multilateral fora. Cyber attacks are multi-stage, multi-step and multi-jurisdictional, which complicates the attribution process and distances the attacker from the infected networks.
Experts at the RAND Corporation have argued that technical challenges to attribution should not detract from international efforts to adopt a robust, integrated and multi-disciplinary approach to attribution, which should be seen as a political process operating in symbiosis with technical efforts. A victim state must communicate its findings and supporting evidence to the attacking state in a bid to apply political pressure.
Clear publication of the attribution process is crucial: it bolsters public confidence in the investigating authorities, enables information exchange among security researchers, and fosters deterrence of the adversary and potential adversaries.
Although public attributions need not take the form of a formal indictment and are often conducted through statements by foreign ministries, a criminal indictment is more legitimate as it needs to comply with the rigorous legal and evidentiary standards required by the country’s legal system. Further, an indictment allows for the attack to be conceptualised as a violation of the rule of law in addition to being a geopolitical threat vector.
Lessons for India
India is yet to publicly attribute a cyber attack to any state or non-state actor. This is surprising, given that an overwhelming percentage of attacks on Indian websites are perpetrated by foreign state or non-state actors, with 35 per cent of attacks emanating from China, as per a report by the Indian Computer Emergency Response Team (CERT-In), the national nodal agency under the Ministry of Electronics and Information Technology (MeitY) which deals with cyber threats.
Along with other bodies, such as the National Critical Information Infrastructure Protection Centre (NCIIPC), the nodal central agency for the protection of critical information infrastructure, CERT-IN forms part of an ecosystem of nodal agencies designed to guarantee national cyber security.
There are three key lessons that policy makers involved in this ecosystem can take away from the WannaCry attribution process and the Park indictment. First, there is a need for multi-stakeholder collaboration through sharing of research, joint investigations and combined vulnerability identification among the various actors employed by the government, law enforcement authorities and private cyber security firms.
The affidavit suggested that the FBI had used information from various law enforcement personnel and computer scientists at the FBI, from Mandiant, a cyber security firm retained by the US Attorney’s Office, and from publicly available materials produced by cyber security companies. Second, the standards of attribution need to demonstrate compliance both with the evidentiary requirements of Indian criminal law and the requirements in the International Law on State Responsibility. The latter requires an attribution to demonstrate that a state had ‘effective control’ over the non-state actor.
Finally, the attribution must be communicated to the adversary in a manner that does not risk military escalation. Despite the delicate timing of the indictment, Park’s prosecution by the FBI did not dampen the temporary thaw in relations between the US and North Korea.
While building capacity to improve resilience, detect attacks and strengthen attribution capabilities should be a priority, we need to remember that regardless of breakthroughs in both human and infrastructural capacities, attributing cyber attacks will never be an exercise in certainty.
India will need to marry its improved capacity with strategic geopolitical posturing. Lengthy indictments may not deter all potential adversaries but may be a tool in fostering a culture of accountability in cyberspace.
Clarification on the Information Security Practices of Aadhaar Report
The report concerned can be accessed here, and the first clarificatory statement (dated May 16, 2017) can be accessed here.
This clarificatory statement is being issued in response to reports that misrepresent our research. In light of repeated questions we have received, which seem to emanate from a misunderstanding of our report, we would like to make the following clarifications.
- Our research involved documentation and taking illustrative screenshots (included in our report) of public webpages on the four government websites listed in our report. These screenshots were taken to demonstrate that the vulnerability existed.
- The figure of 130-135 million Aadhaar numbers quoted in our report is, as clearly stated, derived directly by adding the aggregate numbers (of beneficiaries/individuals whose data were listed on the three government websites concerned) published by the portals themselves in the MIS reports publicly available on those portals. The numbers are as follows:
- 10,97,60,343 from NREGA,
- 63,95,317 from NSAP, and
- 2,05,60,896 from Chandranna Bima (screenshots included in the report).
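Since the aggregate figure is arrived at by simple addition of the three published portal totals, the arithmetic can be reproduced in a few lines (a minimal illustrative check; the figures are exactly those listed above, converted from Indian digit grouping, where 10,97,60,343 equals 109,760,343):

```python
# Beneficiary counts as published in the portals' MIS reports,
# converted from Indian digit grouping to plain integers.
portal_counts = {
    "NREGA": 109_760_343,          # 10,97,60,343
    "NSAP": 6_395_317,             # 63,95,317
    "Chandranna Bima": 20_560_896, # 2,05,60,896
}

total = sum(portal_counts.values())
print(f"Total Aadhaar numbers listed across the three portals: {total:,}")
# → Total Aadhaar numbers listed across the three portals: 136,716,556
```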
We sincerely hope that this clarification aids a clearer understanding of the argument and implications of the report. We urge those using our report in their research to reach out to us to prevent future misinterpretation of the report.
— Amber Sinha and Srinivas Kodali
DIDP #32 On ICANN's Fellowship Program
These fellows are assigned a mentor and receive training on ICANN's various areas of engagement. They are also given travel assistance to attend the meeting. While the process and selection criteria are detailed on their website, CIS had some questions about how these are executed.
Our DIDP questioned the following aspects:
- Has any individual received the ICANN Fellowship more than the stated maximum limit of 3 times?
- If so, whose decision was it, and what justification was given for awarding the fellowship a 4th time and any subsequent times?
- What countries did any such individuals belong to?
- How many times has the limit of 3 been breached while giving fellowships?
- What recording mechanisms are used to ensure that the awarding of these fellowships is tracked, stored and updated? Are these records made available publicly or privately anywhere?
Budapest Convention and the Information Technology Act
Introduction
The Convention on Cybercrime (also known as the Budapest Convention) was drafted by the Council of Europe along with Canada, Japan, South Africa and the United States of America.[1] The importance of the Convention is also indicated by the fact that adherence to it (whether by outright adoption or by otherwise making domestic laws in compliance with it) is one of the conditions mentioned in the Clarifying Lawful Overseas Use of Data Act (CLOUD Act) passed in the USA, whereby a process has been established to enable security agencies of India and the United States to directly access data stored in each other’s territories. Our analysis of the CLOUD Act vis-à-vis India can be found here. It is in continuation of that analysis that we have undertaken here a detailed comparison of the Information Technology Act, 2000 (“IT Act”) and how it stacks up against the provisions of Chapter I and Chapter II of the Convention.[2]
Before we get into a comparison of the Convention with the IT Act, we must point out the distinction between the two legal instruments, for the benefit of readers from a non-legal background. An international instrument such as the Convention on Cybercrime is (generally speaking) essentially a promise made by the States which are party to that instrument that they will change or modify their local laws to bring them in line with the requirements or principles laid out in the instrument. If a signatory State does not make such amendments to its local laws, the citizens of that State (usually) cannot enforce any rights that they may have been granted under the international instrument. The situation is the same with the Convention on Cybercrime: unless a signatory State amends its local laws to bring them in line with the provisions of the Convention, there cannot be any enforcement of the provisions of the Convention within that State.[3] This, however, is not the case for India and the IT Act, since India is not a signatory to the Convention on Cybercrime and is therefore not obligated to amend its local laws to bring them in line with the Convention.
Although India and the Council of Europe cooperated to amend the IT Act through major amendments brought about vide the Information Technology (Amendment) Act, 2008, India still has not become a signatory to the Convention on Cybercrime. The reasons for this are unclear; it has been suggested that they may range from the fact that India was not involved in the original drafting to issues of sovereignty regarding the provisions for international cooperation and extradition.[4]
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 2 – Illegal access Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the access to the whole or any part of a computer system without right. A Party may require that the offence be committed by infringing security measures, with the intent of obtaining computer data or other dishonest intent, or in relation to a computer system that is connected to another computer system. |
Section 43 If any person without permission of the owner or any other person who is in charge of a computer, computer system or computer network - (a) accesses or secures access to such computer, computer system or computer network or computer resource
Section 66 If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to five lakh rupees or with both. |
The Convention gives States the right to further qualify the offence of “illegal access” or “hacking” by adding elements such as infringing security measures, special intent to obtain computer data, other dishonest intent that justifies criminal culpability, or the requirement that the offence is committed in relation to a computer system that is connected remotely to another computer system.[5] However, Indian law deals with the distinction by making the act of unauthorised access without dishonest or fraudulent intent a civil offence, where the offender is liable to pay compensation. If the same act is done with dishonest or fraudulent intent, it is treated as a criminal offence punishable with fine and imprisonment which may extend to 3 years.
It must be noted that this provision was included in the Act only through the Amendment of 2008 and was not present in the Information Technology Act, 2000 in its original iteration.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 3 – Illegal Interception Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the interception without right, made by technical means, of non-public transmissions of computer data to, from or within a computer system, including electromagnetic emissions from a computer system carrying such computer data. A Party may require that the offence be committed with dishonest intent, or in relation to a computer system that is connected to another computer system. |
NA |
The Information Technology Act, 2000 does not specifically criminalise the interception of communications by a private person. It is possible, however, that under the provisions of Section 43(a) the act of accessing a “computer network” could be interpreted as including unauthorised interception within its ambit.
The other way in which such interception may be considered illegal is through a combined reading of Sections 69 (Interception) and 45 (Residuary Penalty) with Rule 3 of the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, which prohibits interception, monitoring and decryption of information under section 69(2) of the IT Act except in a manner provided by the Rules. However, it must be noted that section 69(2) only deals with interception by the government, and Rule 3 only provides procedural safeguards for such interception. It could therefore be argued that the prohibition under Rule 3 is only applicable to the government and not to private individuals, since section 69, the provision under which Rule 3 has been issued, is itself not applicable to private individuals.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 4 – Data interference 1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the damaging, deletion, deterioration, alteration or suppression of computer data without right. 2 A Party may reserve the right to require that the conduct described in paragraph 1 result in serious harm. |
Section 43 If any person without permission of the owner or any other person who is in charge of a computer, computer system or computer network - (d) damages or causes to be damaged any computer, computer system or computer network, data, computer data base or any other programmes residing in such computer, computer system or computer network; (i) destroys, deletes or alters any information residing in a computer resource or diminishes its value or utility or affects it injuriously by any means; (j) Steals, conceals, destroys or alters or causes any person to steal, conceal, destroy or alter any computer source code used for a computer resource with an intention to cause damage, he shall be liable to pay damages by way of compensation not exceeding one crore rupees to the person so affected. (change vide ITAA 2008) Section 66 If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to five lakh rupees or with both. |
Damage, deletion, diminishing in value and alteration of data is considered a crime as per Section 66 read with section 43 of the IT Act if done with fraudulent or dishonest intention. While the Convention only requires such acts to be committed intentionally for them to constitute crimes, the Information Technology Act additionally requires that the intention be dishonest or fraudulent; only then is the act a criminal offence, otherwise it incurs only civil consequences, requiring the perpetrator to pay damages by way of compensation.
It must be noted that the optional requirement of such an act causing serious harm has not been adopted by Indian law, i.e. the act of such damage, deletion, etc. by itself is enough to constitute the offence, and there is no requirement of such an act causing serious harm.
As per the Explanatory Report to the Convention on Cybercrime, “Suppressing of computer data means any action that prevents or terminates the availability of the data to the person who has access to the computer or the data carrier on which it was stored.” Strictly speaking the act of suppression of data in another system is not covered by the language of section 43, but looking at the tenor of the section it is likely that if a court is faced with a situation of intentional/malicious denial of access to data, the court could expand the scope of the term “damage” as contained in sub-section (d) to include such malicious acts.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 5 – System interference Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, the serious hindering without right of the functioning of a computer system by inputting, transmitting, damaging, deleting, deteriorating, altering or suppressing computer data. |
Section 43 If any person without permission of the owner or any other person who is in charge of a computer, computer system or computer network - (e) disrupts or causes disruption of any computer, computer system or computer network; Explanation - for the purposes of this section - (i) "Computer Contaminant" means any set of computer instructions that are designed - (a) to modify, destroy, record, transmit data or programme residing within a computer, computer system or computer network; or (b) by any means to usurp the normal operation of the computer, computer system, or computer network; (iii) "Computer Virus" means any computer instruction, information, data or programme that destroys, damages, degrades or adversely affects the performance of a computer resource or attaches itself to another computer resource and operates when a programme, data or instruction is executed or some other event takes place in that computer resource;
Section 66 If any person, dishonestly or fraudulently, does any act referred to in section 43, he shall be punishable with imprisonment for a term which may extend to three years or with fine which may extend to five lakh rupees or with both. |
Causing hindrance to the functioning of a computer system with fraudulent or dishonest intention is an offence under the IT Act. While the Convention only requires such acts to be committed intentionally for them to constitute crimes, the IT Act additionally requires that the intention be dishonest or fraudulent; only then is the act a criminal offence, otherwise it incurs only civil consequences, requiring the perpetrator to pay damages by way of compensation.
The IT Act does not require such disruption to be caused in any particular manner as is required under the Convention, although the acts of introducing computer viruses as well as damaging or deleting data themselves have been classified as offences under the IT Act.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 6 – Misuse of devices 1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right: a the production, sale, procurement for use, import, distribution or otherwise making available of: i a device, including a computer program, designed or adapted primarily for the purpose of committing any of the offences established in accordance with Articles 2 through 5; ii a computer password, access code, or similar data by which the whole or any part of a computer system is capable of being accessed, with intent that it be used for the purpose of committing any of the offences established in Articles 2 through 5; and b the possession of an item referred to in paragraphs a.i or ii above, with intent that it be used for the purpose of committing any of the offences established in Articles 2 through 5. A Party may require by law that a number of such items be possessed before criminal liability attaches. 2 This article shall not be interpreted as imposing criminal liability where the production, sale, procurement for use, import, distribution or otherwise making available or possession referred to in paragraph 1 of this article is not for the purpose of committing an offence established in accordance with Articles 2 through 5 of this Convention, such as for the authorised testing or protection of a computer system. 3 Each Party may reserve the right not to apply paragraph 1 of this article, provided that the reservation does not concern the sale, distribution or otherwise making available of the items referred to in paragraph 1 a.ii of this article. |
NA |
This provision establishes as a separate and independent criminal offence the intentional commission of specific illegal acts regarding certain devices or access data to be misused for the purpose of committing offences against the confidentiality, integrity and availability of computer systems or data. While the IT Act does not itself criminalise the production, sale, procurement for use, import or distribution of devices designed to be used for such purposes, sub-section (g) of section 43 along with section 120A of the Indian Penal Code, 1860, which deals with “conspiracy”, could perhaps be used to bring such acts within the scope of the penal statutes.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 7 – Computer related forgery Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the input, alteration, deletion, or suppression of computer data, resulting in inauthentic data with the intent that it be considered or acted upon for legal purposes as if it were authentic, regardless whether or not the data is directly readable and intelligible. A Party may require an intent to defraud, or similar dishonest intent, before criminal liability attaches. |
NA |
While the acts of deletion, alteration and suppression of data are by themselves crimes as discussed above, there is no specific offence for committing such acts for the purpose of forgery. However, this does not mean that the crime of online forgery is not punishable in India at all; such crimes would be dealt with under the relevant provisions of the Indian Penal Code, 1860 (Chapter 18) read with section 4 of the IT Act.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 8 – Computer-related fraud Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the causing of a loss of property to another person by: a any input, alteration, deletion or suppression of computer data, b any interference with the functioning of a computer system, with fraudulent or dishonest intent of procuring, without right, an economic benefit for oneself or for another person. |
NA |
Just as in the case of forgery, there is no specific provision in the IT Act whereby online fraud would be considered a crime. However, specific acts such as charging services availed of by one person to another (section 43(h)), identity theft (section 66C) and cheating by impersonation (section 66D) have been listed as criminal offences. Further, as with forgery, fraudulent acts to procure economic benefits would also be covered by the provisions of the Indian Penal Code that deal with cheating.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 9 – Offences related to child pornography 1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally and without right, the following conduct: a producing child pornography for the purpose of its distribution through a computer system; b offering or making available child pornography through a computer system; c distributing or transmitting child pornography through a computer system; d procuring child pornography through a computer system for oneself or for another person; e possessing child pornography in a computer system or on a computer-data storage medium. 2 For the purpose of paragraph 1 above, the term "child pornography" shall include pornographic material that visually depicts: a a minor engaged in sexually explicit conduct; b a person appearing to be a minor engaged in sexually explicit conduct; c realistic images representing a minor engaged in sexually explicit conduct. 3 For the purpose of paragraph 2 above, the term "minor" shall include all persons under 18 years of age. A Party may, however, require a lower age-limit, which shall be not less than 16 years. 4 Each Party may reserve the right not to apply, in whole or in part, paragraphs 1, subparagraphs d and e, and 2, sub-paragraphs b and c. |
67 B Punishment for publishing or transmitting of material depicting children in sexually explicit act, etc. in electronic form. Whoever,- (a) publishes or transmits or causes to be published or transmitted material in any electronic form which depicts children engaged in sexually explicit act or conduct or (b) creates text or digital images, collects, seeks, browses, downloads, advertises, promotes, exchanges or distributes material in any electronic form depicting children in obscene or indecent or sexually explicit manner or (c) cultivates, entices or induces children to online relationship with one or more children for and on sexually explicit act or in a manner that may offend a reasonable adult on the computer resource or (d) facilitates abusing children online or (e) records in any electronic form own abuse or that of others pertaining to sexually explicit act with children, shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with a fine which may extend to ten lakh rupees and in the event of second or subsequent conviction with imprisonment of either description for a term which may extend to seven years and also with fine which may extend to ten lakh rupees: Provided that the provisions of section 67, section 67A and this section does not extend to any book, pamphlet, paper, writing, drawing, painting, representation or figure in electronic form- (i) The publication of which is proved to be justified as being for the public good on the ground that such book, pamphlet, paper writing, drawing, painting, representation or figure is in the interest of science, literature, art or learning or other objects of general concern; or (ii) which is kept or used for bonafide heritage or religious purposes Explanation: For the purposes of this section, "children" means a person who has not completed the age of 18 years. |
The publishing, transmission, creation, collection, seeking, browsing, etc. of child pornography is an offence under Indian law punishable with imprisonment for up to 5 years for a first offence and up to 7 years for a subsequent offence, along with fine.
It is important to note that bona fide depictions for the public good, such as for publication in pamphlets, reading or educational material, are specifically excluded from the rigours of the section. Similarly, material kept for bona fide heritage or religious purposes is also exempted under this section. Such exceptions are in line with the intent of the Convention, since the Explanatory Report itself states that “The term "pornographic material" in paragraph 2 is governed by national standards pertaining to the classification of materials as obscene, inconsistent with public morals or similarly corrupt. Therefore, material having an artistic, medical, scientific or similar merit may be considered not to be pornographic.”
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 10 – Offences related to infringements of copyright and related rights 1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law the infringement of copyright, as defined under the law of that Party, pursuant to the obligations it has undertaken under the Paris Act of 24 July 1971 revising the Berne Convention for the Protection of Literary and Artistic Works, the Agreement on Trade-Related Aspects of Intellectual Property Rights and the WIPO Copyright Treaty, with the exception of any moral rights conferred by such conventions, where such acts are committed wilfully, on a commercial scale and by means of a computer system. 2 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law the infringement of related rights, as defined under the law of that Party, pursuant to the obligations it has undertaken under the International Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organisations (Rome Convention), the Agreement on Trade-Related Aspects of Intellectual Property Rights and the WIPO Performances and Phonograms Treaty, with the exception of any moral rights conferred by such conventions, where such acts are committed wilfully, on a commercial scale and by means of a computer system. 3 A Party may reserve the right not to impose criminal liability under paragraphs 1 and 2 of this article in limited circumstances, provided that other effective remedies are available and that such reservation does not derogate from the Party’s international obligations set forth in the international instruments referred to in paragraphs 1 and 2 of this article. |
81 Act to have Overriding effect The provisions of this Act shall have effect notwithstanding anything inconsistent therewith contained in any other law for the time being in force. Provided that nothing contained in this Act shall restrict any person from exercising any right conferred under the Copyright Act, 1957 or the Patents Act, 1970 |
The use of the term "pursuant to the obligations it has undertaken" in both paragraphs makes it clear that a Contracting Party to the Convention is not bound to apply agreements cited (TRIPS, WIPO, etc.) to which it is not a Party; moreover, if a Party has made a reservation or declaration permitted under one of the agreements, that reservation may limit the extent of its obligation under the present Convention.
The IT Act does not try to intervene in the existing copyright regime of India and creates a special exemption for the Copyright Act and the Patents Act in the clause which gives the Act overriding effect. India’s obligations under the various treaties and conventions on intellectual property rights are enshrined in these legislations.[6]
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 11 – Attempt and aiding or abetting 1 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, aiding or abetting the commission of any of the offences established in accordance with Articles 2 through 10 of the present Convention with intent that such offence be committed. 2 Each Party shall adopt such legislative and other measures as may be necessary to establish as criminal offences under its domestic law, when committed intentionally, an attempt to commit any of the offences established in accordance with Articles 3 through 5, 7, 8, and 9.1.a and c of this Convention. 3 Each Party may reserve the right not to apply, in whole or in part, paragraph 2 of this article. |
84 B Punishment for abetment of offences Whoever abets any offence shall, if the act abetted is committed in consequence of the abetment, and no express provision is made by this Act for the punishment of such abetment, be punished with the punishment provided for the offence under this Act. Explanation: An Act or offence is said to be committed in consequence of abetment, when it is committed in consequence of the instigation, or in pursuance of the conspiracy, or with the aid which constitutes the abetment.
84 C Punishment for attempt to commit offences Whoever attempts to commit an offence punishable by this Act or causes such an offence to be committed, and in such an attempt does any act towards the commission of the offence, shall, where no express provision is made for the punishment of such attempt, be punished with imprisonment of any description provided for the offence, for a term which may extend to one-half of the longest term of imprisonment provided for that offence, or with such fine as is provided for the offence or with both. |
As can be seen, both attempts as well as abetment of criminal offences under the IT Act have also been criminalised.
Convention on Cybercrime |
Information Technology Act, 2000 |
Article 12 – Corporate liability 1 Each Party shall adopt such legislative and other measures as may be necessary to ensure that legal persons can be held liable for a criminal offence established in accordance with this Convention, committed for their benefit by any natural person, acting either individually or as part of an organ of the legal person, who has a leading position within it, based on: a a power of representation of the legal person; b an authority to take decisions on behalf of the legal person; c an authority to exercise control within the legal person. 2 In addition to the cases already provided for in paragraph 1 of this article, each Party shall take the measures necessary to ensure that a legal person can be held liable where the lack of supervision or control by a natural person referred to in paragraph 1 has made possible the commission of a criminal offence established in accordance with this Convention for the benefit of that legal person by a natural person acting under its authority. 3 Subject to the legal principles of the Party, the liability of a legal person may be criminal, civil or administrative. 4 Such liability shall be without prejudice to the criminal liability of the natural persons who have committed the offence. |
85 Offences by Companies. (1) Where a person committing a contravention of any of the provisions of this Act or of any rule, direction or order made there under is a Company, every person who, at the time the contravention was committed, was in charge of, and was responsible to, the company for the conduct of business of the company as well as the company, shall be guilty of the contravention and shall be liable to be proceeded against and punished accordingly: Provided that nothing contained in this sub-section shall render any such person liable to punishment if he proves that the contravention took place without his knowledge or that he exercised all due diligence to prevent such contravention. (2) Notwithstanding anything contained in sub-section (1), where a contravention of any of the provisions of this Act or of any rule, direction or order made there under has been committed by a company and it is proved that the contravention has taken place with the consent or connivance of, or is attributable to any neglect on the part of, any director, manager, secretary or other officer of the company, such director, manager, secretary or other officer shall also be deemed to be guilty of the contravention and shall be liable to be proceeded against and punished accordingly. Explanation- For the purposes of this section (i) "Company" means any Body Corporate and includes a Firm or other Association of individuals; and (ii) "Director", in relation to a firm, means a partner in the firm. |
The liability of a company or other body corporate is laid out in the IT Act in a manner similar to the Budapest Convention. While the test to determine the relationship between the legal entity and the natural person who committed the act on its behalf is somewhat more detailed[7] in the Convention, the substance of the test is captured in the IT Act's phrase "a person who is in charge of, and was responsible to, the company".
Convention on Cybercrime | Information Technology Act, 2000
Article 14 1 Each Party shall adopt such legislative and other measures as may be necessary to establish the powers and procedures provided for in this section for the purpose of specific criminal investigations or proceedings. 2 Except as specifically provided otherwise in Article 21, each Party shall apply the powers and procedures referred to in paragraph 1 of this article to: a the criminal offences established in accordance with Articles 2 through 11 of this Convention; b other criminal offences committed by means of a computer system; and c the collection of evidence in electronic form of a criminal offence. 3 a Each Party may reserve the right to apply the measures referred to in Article 20 only to offences or categories of offences specified in the reservation, provided that the range of such offences or categories of offences is not more restricted than the range of offences to which it applies the measures referred to in Article 21. Each Party shall consider restricting such a reservation to enable the broadest application of the measure referred to in Article 20. b Where a Party, due to limitations in its legislation in force at the time of the adoption of the present Convention, is not able to apply the measures referred to in Articles 20 and 21 to communications being transmitted within a computer system of a service provider, which system: i is being operated for the benefit of a closed group of users, and ii does not employ public communications networks and is not connected with another computer system, whether public or private, that Party may reserve the right not to apply these measures to such communications. Each Party shall consider restricting such a reservation to enable the broadest application of the measures referred to in Articles 20 and 21. |
NA |
This is a provision of a general nature that need not have any equivalence in domestic law. The provision clarifies that all the powers and procedures provided for in this section (Articles 14 to 21) are for the purpose of “specific criminal investigations or proceedings”.
Convention on Cybercrime | Information Technology Act, 2000
Article 15 – Conditions and safeguards 1 Each Party shall ensure that the establishment, implementation and application of the powers and procedures provided for in this Section are subject to conditions and safeguards provided for under its domestic law, which shall provide for the adequate protection of human rights and liberties, including rights arising pursuant to obligations it has undertaken under the 1950 Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms, the 1966 United Nations International Covenant on Civil and Political Rights, and other applicable international human rights instruments, and which shall incorporate the principle of proportionality. 2 Such conditions and safeguards shall, as appropriate in view of the nature of the procedure or power concerned, inter alia, include judicial or other independent supervision, grounds justifying application, and limitation of the scope and the duration of such power or procedure. 3 To the extent that it is consistent with the public interest, in particular the sound administration of justice, each Party shall consider the impact of the powers and procedures in this section upon the rights, responsibilities and legitimate interests of third parties. |
NA |
This again is a provision of a general nature which need not have a corresponding clause in the domestic law. India is a signatory to a number of international human rights conventions and treaties: it has acceded to the International Covenant on Civil and Political Rights (ICCPR), 1966, and the International Covenant on Economic, Social and Cultural Rights (ICESCR), 1966; ratified the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD), 1965, with certain reservations; signed the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), 1979, with certain reservations, as well as the Convention on the Rights of the Child (CRC), 1989; and signed the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment (CAT), 1984. Further, the right to life guaranteed under Article 21 of the Constitution takes within its fold a number of human rights, such as the right to privacy. Freedom of expression, the right to a fair trial, freedom of assembly and the right against arbitrary arrest and detention are all fundamental rights guaranteed under the Constitution of India, 1950.[8]
In addition, India has enacted the Protection of Human Rights Act, 1993, providing for the constitution of a National Human Rights Commission, State Human Rights Commissions in the States and Human Rights Courts for the better protection of "human rights" and for matters connected therewith or incidental thereto. Thus, there does exist a statutory mechanism for the enforcement of human rights[9] under Indian law. It must be noted that the definition of human rights also incorporates rights embodied in international covenants that are enforceable by courts in India.
Convention on Cybercrime | Information Technology Act, 2000
Article 16 – Expedited preservation of stored computer data 1 Each Party shall adopt such legislative and other measures as may be necessary to enable its competent authorities to order or similarly obtain the expeditious preservation of specified computer data, including traffic data, that has been stored by means of a computer system, in particular where there are grounds to believe that the computer data is particularly vulnerable to loss or modification. 2 Where a Party gives effect to paragraph 1 above by means of an order to a person to preserve specified stored computer data in the person’s possession or control, the Party shall adopt such legislative and other measures as may be necessary to oblige that person to preserve and maintain the integrity of that computer data for a period of time as long as necessary, up to a maximum of ninety days, to enable the competent authorities to seek its disclosure. A Party may provide for such an order to be subsequently renewed. 3 Each Party shall adopt such legislative and other measures as may be necessary to oblige the custodian or other person who is to preserve the computer data to keep confidential the undertaking of such procedures for the period of time provided for by its domestic law. 4 The powers and procedures referred to in this article shall be subject to Articles 14 and 15. 
Article 17 – Expedited preservation and partial disclosure of traffic data 1 Each Party shall adopt, in respect of traffic data that is to be preserved under Article 16, such legislative and other measures as may be necessary to: a ensure that such expeditious preservation of traffic data is available regardless of whether one or more service providers were involved in the transmission of that communication; and b ensure the expeditious disclosure to the Party’s competent authority, or a person designated by that authority, of a sufficient amount of traffic data to enable the Party to identify the service providers and the path through which the communication was transmitted. 2 The powers and procedures referred to in this article shall be subject to Articles 14 and 15. |
29 Access to computers and data. (1) Without prejudice to the provisions of sub-section (1) of section 69, the Controller or any person authorized by him shall, if he has reasonable cause to suspect that any contravention of the provisions of this chapter made there under has been committed, have access to any computer system, any apparatus, data or any other material connected with such system, for the purpose of searching or causing a search to be made for obtaining any information or data contained in or available to such computer system. (Amended vide ITAA 2008)
(2) For the purposes of sub-section (1), the Controller or any person authorized by him may, by order, direct any person in charge of, or otherwise concerned with the operation of the computer system, data apparatus or material, to provide him with such reasonable technical and other assistance as he may consider necessary.
67 C Preservation and Retention of information by intermediaries (1) Intermediary shall preserve and retain such information as may be specified for such duration and in such manner and format as the Central Government may prescribe.
Rule 3(7) of the Information Technology (Intermediary Guidelines) Rules, 2011 3(7) - When required by lawful order, the intermediary shall provide information or any such assistance to Government Agencies who are lawfully authorised for investigative, protective, cyber security activity. The information or any such assistance shall be provided for the purpose of verification of identity, or for prevention, detection, investigation, prosecution, cyber security incidents and punishment of offences under any law for the time being in force, on a request in writing stating clearly the purpose of seeking such information or any such assistance.
|
It must be noted that Article 16 and Article 17 refer only to data preservation and not data retention. “Data preservation” means to keep data, which already exists in a stored form, protected from anything that would cause its current quality or condition to change or deteriorate. Data retention means to keep data, which is currently being generated, in one’s possession into the future.[10] In short, the article provides only for preservation of existing stored data, pending subsequent disclosure of the data, in relation to specific criminal investigations or proceedings.
The Convention uses the term "order or similarly obtain", which is intended to allow the use of legal methods of achieving preservation other than merely a judicial or administrative order or directive (e.g. from police or prosecutor). In some States, preservation orders do not exist in the procedural law, and data can only be preserved and obtained through search and seizure or a production order. Flexibility was therefore intended by the use of the phrase "or similarly obtain", to permit the implementation of this article by such means.
While Indian law does not have a specific provision for issuing an order for the preservation of data, the provisions of section 29 of the IT Act, as well as sections 99 to 101 of the Code of Criminal Procedure, 1973, may be utilized to achieve the result intended by Articles 16 and 17. Although section 67C of the IT Act uses the phrase "preserve and retain such information", that provision is intended primarily for data retention, not data preservation.
Another provision which may conceivably be used for issuing preservation orders is Rule 3(7) of the Information Technology (Intermediary Guidelines) Rules, 2011, which requires intermediaries to provide "any such assistance" to government agencies lawfully authorised for investigative, protective or cyber security activity. However, in the absence of a power of preservation in the parent statute (the IT Act), it remains to be seen whether such an order would survive a challenge in a court of law.
Convention on Cybercrime | Information Technology Act, 2000
Article 18 – Production order 1 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to order: a. a person in its territory to submit specified computer data in that person’s possession or control, which is stored in a computer system or a computer-data storage medium; and b. a service provider offering its services in the territory of the Party to submit subscriber information relating to such services in that service provider’s possession or control. 2 The powers and procedures referred to in this article shall be subject to Articles 14 and 15. 3 For the purpose of this article, the term “subscriber information” means any information contained in the form of computer data or any other form that is held by a service provider, relating to subscribers of its services other than traffic or content data and by which can be established: a the type of communication service used, the technical provisions taken thereto and the period of service; b the subscriber’s identity, postal or geographic address, telephone and other access number, billing and payment information, available on the basis of the service agreement or arrangement; c any other information on the site of the installation of communication equipment, available on the basis of the service agreement or arrangement.
|
Section 28(2) (2) The Controller or any officer authorized by him in this behalf shall exercise the like powers which are conferred on Income-tax authorities under Chapter XIII of the Income-Tax Act, 1961 and shall exercise such powers, subject to such limitations laid down under that Act. Section 58(2) (2) The Cyber Appellate Tribunal shall have, for the purposes of discharging their functions under this Act, the same powers as are vested in a civil court under the Code of Civil Procedure, 1908, while trying a suit, in respect of the following matters, namely - (b) requiring the discovery and production of documents or other electronic records;
|
While the Cyber Appellate Tribunal and the Controller of Certifying Authorities both have the power to call for information under the IT Act, these powers can be exercised only for limited purposes, since the jurisdiction of both authorities is confined to the procedural provisions of the IT Act and neither has the jurisdiction to investigate the penal provisions. In practice, the penal provisions of the IT Act are investigated by the regular law enforcement apparatus of India, which applies the statutory provisions for production orders used in the offline world to computer systems as well. It is very common practice amongst law enforcement authorities to issue orders under section 91 of the Code of Criminal Procedure, 1973 or the relevant provisions of the Income Tax Act, 1961 to compel the production of information contained in a computer system. The power to order production of a "document or other thing" under section 91 of the Code is wide enough to cover all types of information residing in a computer system, and can even extend to the computer system itself.
Convention on Cybercrime | Information Technology Act, 2000
Article 19 – Search and seizure of stored computer data 1 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to search or similarly access: a a computer system or part of it and computer data stored therein; and b a computer-data storage medium in which computer data may be stored in its territory. 2 Each Party shall adopt such legislative and other measures as may be necessary to ensure that where its authorities search or similarly access a specific computer system or part of it, pursuant to paragraph 1.a, and have grounds to believe that the data sought is stored in another computer system or part of it in its territory, and such data is lawfully accessible from or available to the initial system, the authorities shall be able to expeditiously extend the search or similar accessing to the other system. 3 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to seize or similarly secure computer data accessed according to paragraphs 1 or 2. These measures shall include the power to: a seize or similarly secure a computer system or part of it or a computer-data storage medium; b make and retain a copy of those computer data; c maintain the integrity of the relevant stored computer data; d render inaccessible or remove those computer data in the accessed computer system. 4 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to order any person who has knowledge about the functioning of the computer system or measures applied to protect the computer data therein to provide, as is reasonable, the necessary information, to enable the undertaking of the measures referred to in paragraphs 1 and 2. 5 The powers and procedures referred to in this article shall be subject to Articles 14 and15. |
76 Confiscation Any computer, computer system, floppies, compact disks, tape drives or any other accessories related thereto, in respect of which any provision of this Act, rules, orders or regulations made thereunder has been or is being contravened, shall be liable to confiscation: Provided that where it is established to the satisfaction of the court adjudicating the confiscation that the person in whose possession, power or control of any such computer, computer system, floppies, compact disks, tape drives or any other accessories relating thereto is found is not responsible for the contravention of the provisions of this Act, rules, orders or regulations made there under, the court may, instead of making an order for confiscation of such computer, computer system, floppies, compact disks, tape drives or any other accessories related thereto, make such other order authorized by this Act against the person contravening of the provisions of this Act, rules, orders or regulations made there under as it may think fit.
|
While Article 19 provides for the power to search and seize computer systems in the investigation of criminal offences of any kind, section 76 of the IT Act is limited to contraventions of the provisions of the Act and the rules, orders or regulations made thereunder. This does not, however, mean that Indian law enforcement authorities lack the power to search and seize a computer system for crimes other than those contained in the IT Act; just as in the case of Article 18, the authorities in India are free to use the provisions of the Code of Criminal Procedure and other sectoral legislation which allow for the seizure of property to seize computer systems when investigating criminal offences.
Convention on Cybercrime | Information Technology Act, 2000
Article 20 – Real-time collection of traffic data 1 Each Party shall adopt such legislative and other measures as may be necessary to empower its competent authorities to: a collect or record through the application of technical means on the territory of that Party, and b compel a service provider, within its existing technical capability: i to collect or record through the application of technical means on the territory of that Party; or ii to co-operate and assist the competent authorities in the collection or recording of,
traffic data, in real-time, associated with specified communications in its territory transmitted by means of a computer system. 2 Where a Party, due to the established principles of its domestic legal system, cannot adopt the measures referred to in paragraph 1.a, it may instead adopt legislative and other measures as may be necessary to ensure the real-time collection or recording of traffic data associated with specified communications transmitted in its territory, through the application of technical means on that territory. 3 Each Party shall adopt such legislative and other measures as may be necessary to oblige a service provider to keep confidential the fact of the execution of any power provided for in this article and any information relating to it. 4 The powers and procedures referred to in this article shall be subject to Articles 14 and 15. |
69B Power to authorize to monitor and collect traffic data or information through any computer resource for Cyber Security (1) The Central Government may, to enhance Cyber Security and for identification, analysis and prevention of any intrusion or spread of computer contaminant in the country, by notification in the official Gazette, authorize any agency of the Government to monitor and collect traffic data or information generated, transmitted, received or stored in any computer resource. (2) The Intermediary or any person in-charge of the Computer resource shall when called upon by the agency which has been authorized under sub-section (1), provide technical assistance and extend all facilities to such agency to enable online access or to secure and provide online access to the computer resource generating , transmitting, receiving or storing such traffic data or information. (3) The procedure and safeguards for monitoring and collecting traffic data or information, shall be such as may be prescribed. (4) Any intermediary who intentionally or knowingly contravenes the provisions of sub-section (2) shall be punished with an imprisonment for a term which may extend to three years and shall also be liable to fine. Explanation: For the purposes of this section, (i) "Computer Contaminant" shall have the meaning assigned to it in section 43. (ii) "traffic data" means any data identifying or purporting to identify any person, computer system or computer network or location to or from which the communication is or may be transmitted and includes communications origin, destination, route, time, date, size, duration or type of underlying service or any other information.
|
Section 69B of the IT Act enables the government to authorise the monitoring and collection of traffic data through any computer resource. Under the Convention, orders for the collection and recording of traffic data can be given only subject to Articles 14 and 15, i.e. for specific criminal investigations or proceedings. By contrast, as per the Information Technology (Procedure and Safeguard for Monitoring and Collecting Traffic Data or Information) Rules, 2009, an order for monitoring may be issued for any of the following purposes relating to cyber security:
(a) forecasting of imminent cyber incidents;
(b) monitoring network application with traffic data or information on computer resource;
(c) identification and determination of viruses or computer contaminant;
(d) tracking cyber security breaches or cyber security incidents;
(e) tracking computer resource breaching cyber security or spreading virus or computer contaminants;
(f) identifying or tracking of any person who has breached, or is suspected of having breached or being likely to breach cyber security;
(g) undertaking forensic of the concerned computer resource as a part of investigation or internal audit of information security practices in the computer resources;
(h) accessing a stored information for enforcement of any provisions of the laws relating to cyber security for the time being in force;
(i) any other matter relating to cyber security.
As can be seen from the above, the purposes for which an order for monitoring traffic data may be issued are extremely wide; this is in stark contrast to the limited grounds on which an order for interception of content data may be issued under section 69. The Rules also provide that the intermediary shall not disclose the existence of a monitoring order to any third party and shall take all steps necessary to maintain extreme secrecy in the matter of monitoring of traffic data.
Convention on Cybercrime | Information Technology Act, 2000
Article 21 – Interception of content data 1 Each Party shall adopt such legislative and other measures as may be necessary, in relation to a range of serious offences to be determined by domestic law, to empower its competent authorities to: a collect or record through the application of technical means on the territory of that Party, and b compel a service provider, within its existing technical capability: i to collect or record through the application of technical means on the territory of that Party, or ii to co-operate and assist the competent authorities in the collection or recording of, content data, in real-time, of specified communications in its territory transmitted by means of a computer system. 2 Where a Party, due to the established principles of its domestic legal system, cannot adopt the measures referred to in paragraph 1.a, it may instead adopt legislative and other measures as may be necessary to ensure the real-time collection or recording of content data on specified communications in its territory through the application of technical means on that territory. 3 Each Party shall adopt such legislative and other measures as may be necessary to oblige a service provider to keep confidential the fact of the execution of any power provided for in this article and any information relating to it. 4 The powers and procedures referred to in this article shall be subject to Articles 14 and 15. |
69 Powers to issue directions for interception or monitoring or decryption of any information through any computer resource (1) Where the central Government or a State Government or any of its officer specially authorized by the Central Government or the State Government, as the case may be, in this behalf may, if is satisfied that it is necessary or expedient to do in the interest of the sovereignty or integrity of India, defense of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above or for investigation of any offence, it may, subject to the provisions of sub-section (2), for reasons to be recorded in writing, by order, direct any agency of the appropriate Government to intercept, monitor or decrypt or cause to be intercepted or monitored or decrypted any information transmitted received or stored through any computer resource. (2) The Procedure and safeguards subject to which such interception or monitoring or decryption may be carried out, shall be such as may be prescribed (3) The subscriber or intermediary or any person in charge of the computer resource shall, when called upon by any agency which has been directed under sub section (1), extend all facilities and technical assistance to - (a) provide access to or secure access to the computer resource containing such information; generating, transmitting, receiving or storing such information; or (b) intercept or monitor or decrypt the information, as the case may be; or (c) provide information stored in computer resource. (4) The subscriber or intermediary or any person who fails to assist the agency referred to in sub-section (3) shall be punished with an imprisonment for a term which may extend to seven years and shall also be liable to fine. |
There has been a great deal of academic research and debate around the exercise of powers under section 69 of the IT Act, but the present piece is not the place for a standalone critique of that provision.[11] The analysis here is limited to a comparison of the provisions of Article 21 vis-à-vis section 69 of the IT Act.
Against that background, it needs to be pointed out that two important requirements mentioned in Article 21 of the Convention are not specifically mentioned in section 69, viz. (i) that the order should relate only to specified communications, and (ii) that the intermediary should keep such an order confidential; these requirements are, however, covered by Rules 9 and 20 of the Information Technology (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules, 2009, respectively.
Convention on Cybercrime | Information Technology Act, 2000
Article 22 – Jurisdiction 1 Each Party shall adopt such legislative and other measures as may be necessary to establish jurisdiction over any offence established in accordance with Articles 2 through 11 of this Convention, when the offence is committed: a in its territory; or b on board a ship flying the flag of that Party; or c on board an aircraft registered under the laws of that Party; or d by one of its nationals, if the offence is punishable under criminal law where it was committed or if the offence is committed outside the territorial jurisdiction of any State. 2 Each Party may reserve the right not to apply or to apply only in specific cases or conditions the jurisdiction rules laid down in paragraphs 1.b through 1.d of this article or any part thereof. 3 Each Party shall adopt such measures as may be necessary to establish jurisdiction over the offences referred to in Article 24, paragraph 1, of this Convention, in cases where an alleged offender is present in its territory and it does not extradite him or her to another Party, solely on the basis of his or her nationality, after a request for extradition. 4 This Convention does not exclude any criminal jurisdiction exercised by a Party in accordance with its domestic law. 5 When more than one Party claims jurisdiction over an alleged offence established in accordance with this Convention, the Parties involved shall, where appropriate, consult with a view to determining the most appropriate jurisdiction for prosecution. |
1. Short Title, Extent, Commencement and Application (2) It shall extend to the whole of India and, save as otherwise provided in this Act, it applies also to any offence or contravention hereunder committed outside India by any person. 75 Act to apply for offence or contraventions committed outside India (1) Subject to the provisions of sub-section (2), the provisions of this Act shall apply also to any offence or contravention committed outside India by any person irrespective of his nationality. (2) For the purposes of sub-section (1), this Act shall apply to an offence or contravention committed outside India by any person if the act or conduct constituting the offence or contravention involves a computer, computer system or computer network located in India. |
The Convention provides for extraterritorial jurisdiction only over crimes committed outside a State by nationals of that State. The IT Act, however, applies even to offences under the Act committed by foreign nationals outside India, as long as the act involves a computer, computer system or computer network located in India.
Unlike paragraph 3 of Article 22 of the Convention, the IT Act does not touch upon the issue of extradition. Cases involving extradition would therefore be dealt with under the general law of the land governing extradition requests, namely the Extradition Act, 1962. The Convention requires that where a State refuses to extradite an alleged offender solely on the basis of nationality, it should establish jurisdiction over the offences referred to in Article 24(1) so that it can proceed against that offender itself. In this regard, it must be pointed out that section 34A of the Extradition Act, 1962 provides that "Where the Central Government is of the opinion that a fugitive criminal cannot be surrendered or returned pursuant to a request for extradition from a foreign State, it may, as it thinks fit, take steps to prosecute such fugitive criminal in India." The Extradition Act thus gives the Indian government the power to prosecute an individual in the event that such individual cannot be extradited.
International Cooperation
Chapter III of the Convention deals specifically with international cooperation between the signatory parties. Such cooperation is to be carried out both "in accordance with the provisions of this Chapter" and "through application of relevant international agreements on international cooperation in criminal matters, arrangements agreed to on the basis of uniform or reciprocal legislation, and domestic laws." The latter clause establishes the general principle that the provisions of Chapter III do not supersede the provisions of international agreements on mutual legal assistance and extradition or the relevant provisions of domestic law pertaining to international cooperation.[12] Although the Convention grants primacy to mutual treaties and agreements between member States, in certain specific circumstances it also provides for an alternative where such treaties do not exist between the member States (Articles 27 and 28). The Convention further provides for international cooperation on certain issues which may not have been specifically provided for in mutual assistance treaties entered into between the parties and which need to be spelt out owing to the unique challenges posed by cyber crimes, such as expedited preservation of stored computer data (Article 29) and expedited disclosure of preserved traffic data (Article 30). Contentious issues such as access to stored computer data, real-time collection of traffic data and interception of content data have been specifically left by the Convention to be dealt with as per existing international instruments or arrangements between the parties.
Conclusion
The broad language and wide terminology used in the IT Act seem to cover a number of the cyber crimes mentioned in the Budapest Convention, even though India has not signed or ratified the same. Penal provisions such as illegal access (Article 2), data interference (Article 4), system interference (Article 5), offences related to child pornography (Article 9), attempt and aiding or abetting (Article 11), and corporate liability (Article 12) are substantially covered and reflected in the IT Act in a manner very similar to the requirements of the Convention. Similarly, procedural provisions such as search and seizure of stored computer data (Article 19), real-time collection of traffic data (Article 20), interception of content data (Article 21) and jurisdiction (Article 22) are also substantially reflected in the IT Act.
However, certain penal provisions mentioned in the Convention, such as computer-related forgery (Article 7) and computer-related fraud (Article 8), are not provided for specifically in the IT Act, but such offences are covered when provisions of the Indian Penal Code, 1860 are read in conjunction with provisions of the IT Act. Similarly, procedural provisions such as expedited preservation of stored computer data (Article 16) and production orders (Article 18) are not specifically provided for in the IT Act but are covered under Indian law through the provisions of the Code of Criminal Procedure, 1973.
Apart from the above two categories, there are certain provisions, such as misuse of devices (Article 6) and illegal interception (Article 3), which may not be specifically covered at all under Indian law, but may conceivably be said to be covered through an expansive reading of the provisions of the Indian Penal Code and the IT Act. It may therefore be said that even though India has not signed or ratified the Budapest Convention, the legal regime in India is substantially in compliance with the provisions and requirements contained therein.
Thus, the Convention on Cybercrime is perhaps the most important international multi-state instrument that may be used to combat cybercrime, not merely because the provisions thereunder may be used as a model to bolster national/local laws by any State, be it a signatory or not (as in the case of India), but also because of the mechanism it lays down for international cooperation in the field of cyber terrorism. In an increasingly interconnected world, where more and more information of individuals is finding its way to the cloud or other networked infrastructure, the international community is making great efforts to generate norms for increased international cooperation to combat cybercrime and cyber terrorism. While the Convention is one such multilateral effort, States are also proposing to use bilateral treaties to enable them to better fight cybercrime, the United States' CLOUD Act being one such effort. In the backdrop of these novel efforts, the role to be played by older instruments such as the Convention on Cybercrime, as well as by important States such as India, is extremely crucial.
[1] Explanatory Report to the Convention on Cybercrime, Para 304, https://rm.coe.int/16800cce5b.
[2] The analysis here has been limited to only Chapter I and Chapter II of the Convention, as it is only adherence to these two chapters that is required under the CLOUD Act.
[3] The only possible enforcement that may be done with regard to the Convention on Cybercrime is that the Council of Europe may put pressure on a signatory State to amend its local laws (if it is refusing to do so), failing which the State would be in violation of its obligations as a signatory to the Convention.
[4] Alexander Seger, “India and the Budapest Convention: Why Not?”, https://www.orfonline.org/expert-speak/india-and-the-budapest-convention-why-not/
[5] Explanatory Report to the Convention on Cybercrime, Para 50, https://rm.coe.int/16800cce5b.
[6] India is a party to the Berne Convention on Literary and Artistic Works, the Agreement on Trade Related Intellectual Property Rights and the Rome Convention. India has also recently (July 4, 2018) announced that it will accede to the WIPO Copyright Treaty as well as the WIPO Performances and Phonograms Treaty.
[7] The test under the Convention is that the relevant person would be the one who has a leading position within the company, based on:
- a power of representation of the legal person;
- an authority to take decisions on behalf of the legal person;
- an authority to exercise control within the legal person.
[8] Vipul Kharbanda and Elonnai Hickock, “MLATs and the proposed Amendments to the US Electronic Communications Privacy Act”, https://cis-india.org/internet-governance/blog/mlats-and-the-proposed-amendments-to-the-us-electronic-communications-privacy-act
[9] The term “human rights” has been defined in the Act as “rights relating to life, liberty, equality and dignity of the individual guaranteed by the Constitution or embodied in the International Covenants and enforceable by courts in India”.
[10] Explanatory Report to the Convention on Cybercrime, Para 151, https://rm.coe.int/16800cce5b.
[11] A similar power of interception is available under section 5 of the Telegraph Act, 1885, but that extends only to interception of telegraphic communication and does not extend to communications exchanged through computer networks.
[12] Explanatory Report to the Convention on Cybercrime, Para 244, https://rm.coe.int/16800cce5b.
ICANN Workstream 2 Recommendations on Accountability
At the ICANN meeting in March of 2017 in Finland, the second Work Stream (WS2) was launched. The Cross Community Working Group submitted its final report at the end of June 2018, and the purpose of this blog is to look at the main recommendations given and the steps ahead for their implementation.
The new Workstream was structured into the following 8 independent sub groups as per the topics laid down in the WS1 final report, each headed by a Rapporteur:
1. Diversity
2. Guidelines for Standards of Conduct Presumed to be in Good Faith Associated with Exercising Removal of Individual ICANN Board Directors. (Guidelines for Good Faith)
3. Human Rights Framework of Interpretation (HR-FOI)
4. Jurisdiction
5. Office of the Ombuds
6. Supporting Organization/ Advisory Committee Accountability
7. Staff Accountability
8. ICANN Transparency
1. DIVERSITY Recommendations
The sub-group on Diversity suggested ways by which ICANN can define, measure, report, support and promote diversity. It proposed 7 key factors to guide all diversity considerations: language, gender, age, physical disability, diverse skills, geographical representation and stakeholder group. Each chartering organization within ICANN is asked to undertake an exercise whereby it publishes its diversity obligations, for each level of employment including leadership, on its website, whether these arise under its own charter or the ICANN Bylaws. This should be followed by a diversity assessment of its existing structures, which should then be used to formulate its diversity objectives/criteria, the steps to achieve them, and a timeline for doing so. These diversity assessments should ideally be conducted annually and, at the very least, every 3 years. ICANN staff has been tasked with developing a mechanism for dealing with complaints arising out of diversity and related issues. Eventually, it is envisioned that ICANN will create a Diversity section on its website where an Annual Diversity Report will be published. All information regarding diversity should also be published in its Annual Report.
The recommendations leave much up to the organization without establishing specific recruitment policies for equal opportunities. In the 7 parameters, race was left out as a criterion for diversity. The criterion of ‘diverse skills’ is also ambiguous; and within the stakeholder-group parameter, it would have been more useful to highlight the priority for diversity of opinions within the same stakeholder group: for example, having two civil society organizations (CSOs) advocate contrasting stances, as opposed to having many CSOs support one stance. However, these steps should be a good starting point to improve the diversity of an organization which, in our earlier research, we have found to be neither global nor multistakeholder. In fact, our recent diversity analysis has raised concerns such as the fact that the vast majority of the end users participating in, and by extension influencing, ICANN's work are male. The mailing lists where the majority of discussions take place are dominated by individuals from industry bodies. This, coupled with the relative minority presence of the other stakeholders, especially geographically (14.7% participation from Asian countries), creates an environment where concerns emanating from other sections of society could be overshadowed. Moreover, when we questioned ICANN's existing diversity of employees based on their race and citizenship, they did not give us the figures, citing either lack of information or confidentiality.
2. HUMAN RIGHTS FRAMEWORK OF INTERPRETATION (HR-FOI)
A Framework of Interpretation (FOI) was developed by the WS2 for the ICANN Bylaws relating to Human Rights, which clarified that Human Rights are not a Commitment for the organization but a Core Value. The former is an obligation, while the latter is “not necessarily intended to apply consistently and comprehensively to ICANN’s activities”.
To summarize the FOI: if the applicable law, i.e. the law in force in the jurisdiction where ICANN is operating, does not mandate certain human rights, then those rights do not raise issues under the core value. As such, there can be no enforcement of human rights obligations by ICANN or any other party against any other party. Thus, contingent on the seat of operations, the applicable law can vary, though by and large ICANN recognizes and can be guided by significant internationally respected human rights instruments such as the Universal Declaration of Human Rights. The United Nations Guiding Principles on Business and Human Rights were recognized as useful in the process of applying the core value in operations, since they discuss the corporate responsibility to respect human rights. Building on this, Human Rights Impact Assessments (HRIAs) with respect to ICANN policy development processes are currently being formulated by the Cross Community Working Group on Human Rights. Complementing this, ICANN is also undertaking an internal HRIA of the organization's operations. It is important to remember that the international human rights instruments that are relevant here are those required by the applicable law.
Apart from ICANN's legal responsibility to uphold the human rights laws of the jurisdiction it operates in, the framework is worded negatively, in that it says ICANN should in general avoid violating human rights. It also says that ICANN should take human rights into account when making policies, but this falls short of saying that human rights considerations should be given prominent weightage; and since there are many core values, at any point one of the others can be used to sidestep human rights. One core value in particular says that ICANN should duly consider the public policy advice of governments and other authorities when arriving at a decision. Thus, if governments want to promote a decision to further national interests at the expense of citizens' human rights, that would be very much possible within this FOI.
3. JURISDICTION
A highly contentious issue in WS2 was that of Jurisdiction, and the recommendations formed to tackle it were quite disappointing. Despite initial discussion by the group on ICANN's location, the report did not address the elephant in the room. Even after the transition, ICANN's new bylaws state that it is subject to California law, since it was incorporated there. This is partly the fault of the first Workstream, because when enumerating the issues for WS2 with respect to jurisdiction, it left the matter ambiguous by stating:
“At this point in the CCWG Accountability’s work, the main issues that need to be addressed within Work Stream 2 relate to the influence that ICANN’s existing jurisdiction may have on the actual operation of policies and accountability mechanisms. This refers primarily to the process for the settlement of disputes within ICANN, involving the choice of jurisdiction and of the applicable laws, but not necessarily the location where ICANN is incorporated.”
Jurisdiction can often play a significant role in the laws that ICANN will have to abide by in terms of financial reporting, consumer protection, competition and labour laws, legal challenges to ICANN's actions and, finally, the resolution of contractual disputes. In its present state, the operations of ICANN could, if such a situation arises, see interference from US authorities by way of its legislature, tribunals, enforcement agencies and regulatory bodies.
CIS has, in the past, discussed the concept of “jurisdictional resilience”, which calls for:
- Legal immunity for core technical operators of Internet functions (as opposed to policymaking venues) from legal sanctions or orders from the state in which they are legally situated.
- Division of core Internet operators among multiple jurisdictions
- Jurisdictional division of policymaking functions from technical implementation functions
Proposing to change ICANN's seat of headquarters, or at the very least suggesting ways for ICANN to gain partial immunity for its policy development processes under US law, would have gone a long way towards making ICANN a truly global body. It would have also ensured that, as an organization, ICANN was equally accountable to all its stakeholders, as opposed to now, where by virtue of its incorporation it has higher legal, and possibly political, obligations to the United States. This was expressed by Brazil, which dissented from the majority conclusions of the sub-group and drafted its own minority report, supported by countries like Russia. They were unhappy that all countries are still not on an equal footing in the management of Internet resources, which goes against the fundamentals of the multi-stakeholder approach.
Recommendations:
The recommendations passed were in two categories:
1. Office of Foreign Asset Control (OFAC)
OFAC is an office of the US Treasury that administers and enforces economic and trade sanctions based on American foreign policy and national security objectives. It is pertinent because, for ICANN to enter into a Registration Accreditation Agreement (RAA) with an applicant from a sanctioned country, it needs an OFAC license. At present, ICANN is under no obligation to request this license, and in either case OFAC can refuse to provide it. The sub-group recommended that the terms of the RAA be modified so that ICANN is required to apply for, and use its best efforts to secure, the license if the applicant is qualified to be a registrar and is not individually subject to sanctions. While the licensing process is underway, ICANN should also be helpful and transparent, and maintain ongoing communication with the applicant. The same recommendation was made for applicants to the new gTLD program from sanctioned countries. Other general licenses are needed from OFAC for certain ICANN transactions, and hence it was proposed that ICANN pursue these as well.
2. Choice of law and Choice of Venue Provisions in ICANN Agreements
In ICANN's Registry Agreements (RAs) and Registration Accreditation Agreement (RAA), the absence of a choice of law provision means that the governing law of these contracts is undetermined until later decided by a judge or arbitrator, or by an agreement between the parties. It was collectively seen that increased freedom of choice for the parties could help in customizing the agreements and make it easier for registries and registrars to contractually engage with ICANN. Out of various options, the group decided that a menu approach would be best, whereby a host of options (decided by ICANN) is provided and the party in question chooses the most appropriate among them, such as the jurisdiction of its incorporation. In RAs, the choice of venue was predetermined as Los Angeles, California, but the group recommended that instead of imposing this choice on the party, it would be better to offer a list of possible venues for arbitration. The registry can then choose among these options when entering into the contract. Other issues discussed, such as immunity of ICANN from US jurisdiction, did not reach fruition due to lack of unanimity.
4. OFFICE OF THE OMBUDS
Subsequent to the external evaluation of the ICANN Office of the Ombuds (IOO), a number of recommendations were made to strengthen the office. These included procedural improvements to its complaint mechanism, such as differentiating between categories of complaints and explaining how each type would be handled. The issues that would not invoke action from the IOO should also be established clearly, along with whether and where these could be transferred to another channel. The response from all the relevant parties of ICANN to a formal request or report from the IOO should take place within 90 days, or 120 days at the maximum if an explanation for the delay can be provided. An internal timeline will be defined by the office for the handling of complaints, and a report on these should be documented quarterly or annually. A further recommendation was for the IOO to be formally trained in mediation and to have such experience within its ranks. Reiterating the importance of diversity, even this sub-group emphasized that the IOO should be a diverse group in terms of gender and other parameters; this ensures that a complainant has a choice in whom to approach in the office, making them more comfortable. To enhance the independence of the Ombuds, their employment contract should have a 5-year fixed term which allows for only one extension of a maximum of 3 years. An Ombuds Advisory Panel is to be constituted by ICANN, comprising five members to act as advisers, supporters and counsel for the IOO, with at least 2 members having Ombudsman experience and the remaining possessing extensive ICANN experience. They would be responsible, among other things, for selecting the new Ombuds and conducting the IOO's evaluation every 5 years. Lastly, the IOO should proactively document its work by publishing activity reports, and by collecting and publicizing statistics, user satisfaction information, as well as any improvements to the process.
These proposals still do not address the opacity of how the Office of the Ombuds resolves cases, since they do not call for: a) a compilation of all the cases that have been decided by the office in the history of the organization; b) the details of the parties involved, if the parties have allowed that to be revealed, and if not, at the very least non-sensitive data such as their nationality and stakeholder affiliation; and c) a description of the proceedings of each case and who won in each of them. When CIS asked for the above in 2015, the information was denied on grounds of confidentiality. Yet it is vital to know these details, since the Ombuds hears complaints against the Board, staff and other constituent bodies, and by not reporting on this, ICANN is rendering the process much less accountable and transparent. This conflict resolution process and its efficacy are even more essential in a multi-stakeholder environment, so as to give parties the faith to engage in the process, knowing that the redressal mechanisms are strong. It is also problematic that sexual harassment complaints are dealt with by the Ombuds and that ICANN does not have a specific Anti-Sexual Harassment Committee. Such a committee should be neutral and approachable, and while it is useful for the Office of the Ombuds to be trained in sexual harassment cases, this is by no means a comprehensive and ideal approach to dealing with complaints of this nature. Despite ICANN facing a sexual harassment claim in 2016, the recommendations do not specifically address the approach the Ombuds should take in tackling sexual harassment.
5. SUPPORTING ORGANIZATION/ ADVISORY COMMITTEE ACCOUNTABILITY
The sub-group presented its outcomes under the main heads of Accountability, Transparency, Participation, Outreach, and Updates to Policies and Procedures. It suggested these as good practices that the organizations can follow, and did not recommend that their implementation be required. The accountability aspect had suggestions for better documentation of procedures and decision-making. Proposals for listing members of such organizations publicly, making their meetings open to public observation (including minutes and transcripts), and disclosing their correspondence with ICANN were aimed at making these entities more transparent. In the same vein, rules of membership, eligibility criteria, the process of application and a process of appeal should be well defined. Newsletters should be published by the SOs/ACs to help non-members understand the benefits and the process of becoming a member. Policies should be reviewed at regular intervals, and these internal reviews should not extend beyond a year.
6. STAFF ACCOUNTABILITY
Improving the ICANN staff's accountability was the job of a different group, which assessed it at the service delivery, departmental or organizational level, not at an individual or personnel level. It did this by analysing the roles and responsibilities of the Board, staff and community members and the nexus between them. Its observations culminated in the understanding that ICANN needs to take steps such as making visible its performance management system and process, its vision for departmental goals, and how these tie in to the organization's strategic goals and objectives. The group notes that several new mechanisms have already been established but have not been used enough to ascertain their efficacy, and it thus proposes a regular information acquisition mechanism. Most importantly, it has asked ICANN to standardize and publish guidelines on suitable timeframes for acknowledging and responding to requests from the community.
7. ICANN TRANSPARENCY
The last group of the WS2 was one specifically looking at the transparency of the organization.
a. The Documentary Information Disclosure Policy (DIDP)
Currently the DIDP process applies only to ICANN's “operational activities”; it was recommended that this caveat be deleted to cover a wider breadth of the organization's activities. As CIS has experienced, requests for information are often met with the answer that such information is not documented. To remedy this, a documentation policy was proposed: if significant elements of a decision-making process take place orally, the participants will be required to document the substance of the conversation. Many times, DIDP requests are refused because one aspect of the information sought is subject to confidentiality. Thus, one of the changes is to introduce a severability clause so that in such cases information can still be disclosed with the sensitive aspect redacted or severed. In scenarios of redaction, the rationale should be provided, citing one of the given DIDP exceptions, along with the process for appeal. ICANN's contracts should be under the purview of the DIDP except when subject to a non-disclosure agreement, and further, the burden is on the other party to convince ICANN that it has a legitimate commercial reason for requesting the NDA. Information pertaining to the security and stability of the Internet would no longer be categorically outside the ambit of the DIDP, but could be withheld only if its disclosure would be harmful to that security and stability. Finally, ICANN should review the DIDP every five years to see how it can be improved.
b. Documenting and Reporting on ICANN’s Interactions with the Government
In a prominent step towards being more transparent about its expenditure and lobbying, the group recommended that ICANN begin disclosing publicly, on at least an annual basis, sums of $20,000 per year devoted to “political activities”, both in the US and abroad. All such expenditures should be reported on an itemized basis by ICANN for both outside contractors and internal personnel, along with, among other things, the identities of the persons engaging in such activities and the type of engagement used for them.
c. Transparency of Board Deliberations
The bylaws were recommended to be revised so that material may be removed from the minutes of the Board if subject to a DIDP exception. The exception for deliberative processes should not apply to any factual information, technical report or reports on the performance or effectiveness of a particular body or strategy. When any information is removed from the minutes of the Board meeting, they should be disclosed after a particular period of time as and when the window of harm has passed.
d. ICANN’s Anonymous Hotline (Whistle-blower Protection)
To begin with, it was recommended that ICANN ensure that anyone searching its website for the term “whistle-blower” is redirected to its Hotline policy, since people are unlikely to be aware that in ICANN parlance it is referred to as the Hotline policy. Instead of only the “serious crimes” that can currently be reported, all issues and concerns that violate local laws should be reportable. Complaints should not be classified as ‘urgent’ and ‘non-urgent’; all reports should be a priority and receive a formal acknowledgment within 48 hours at the maximum. ICANN should make it clear that any retaliation against the reporter will be taken and investigated as seriously as the original alleged wrongdoing. Employees should be provided with data about the use of the Hotline, including the types of incidents reported. A few members of this group issued a Minority Statement expressing their disapproval of one particular aspect of the recommendations that they felt was not developed enough: the one pertaining to ICANN's attorney-client privilege. The recommendation did not delve into specifics but merely stated that ICANN should expand transparency in its legal processes, including clarifying how attorney-client privilege is invoked. The dissenters thought ICANN should go further and enumerate principles under which the privilege would be waived in the interests of transparency, and account for voluntary disclosure as well.
The transparency recommendations did not focus on the financial reporting aspects of ICANN, in which we have found ambiguities before. Some examples: the Registries and Registrars are the main sources of revenue, though there is ambiguity in the classifications provided by ICANN, such as the difference between RYG and RYN. The mode of contribution of sponsors isn't clear either, so we do not know whether this was done through travel, money, media partnerships, etc. Several entities have been listed under different places in different years, sometimes depending on the role they have played, such as whether they are a sponsor or a registry. Moreover, the Regional Internet Registries are clubbed under one heading, and as a consequence it is not possible to determine individual RIR contributions, such as how much APNIC paid for the Asia-Pacific region. Thus, there is a lot more scope for ICANN to be transparent, going beyond the proposals in the report.
It is worth noting that whereas the mandate of WS1 included the implementation of its recommendations, this is not the case for WS2, and thus the mission of the group concluded with the creation of the report itself. This difference can be attributed to the fact that during the first Workstream, there was a need to see it through, since the IANA transition would not have happened otherwise. The change in circumstances and the corresponding lack of urgency render the process less powerful the second time round. The final recommendations are now being discussed in the relevant chartering organizations within ICANN, such as the Governmental Advisory Committee (GAC), and subsequent to their approval, they will be sent to the Board, which will decide whether or not to adopt them. If adopted, ICANN and its sub-organizations will have to see how they can implement these recommendations. The co-chairs of the group will be the point of reference for the chartering organizations, and an implementation oversight team has been formed, consisting of the Rapporteurs of the sub-teams and the co-chairs. A Feasibility Assessment Report will be made public in due time, describing the resources it would take to implement the recommendations. Since it would be a huge undertaking for ICANN to implement the above, the compliance process is expected to take a few years.
The link to the report can be found here.
Regulating the Internet: The Government of India & Standards Development at the IETF
This brief was authored by Aayush Rathi, Gurshabad Grover and Sunil Abraham. Click here to download the policy brief.
Executive Summary
The institution of open standards has been described as a formidable regulatory regime governing the Internet. As the Internet has moved to facilitate commerce and communication, governments and corporations find greater incentives to participate and influence the decisions of independent standards development organisations.
While most such bodies have attempted to systematise fair and transparent processes, this brief highlights how they may still be susceptible to compromise. Documented instances of large private companies like Microsoft, and governmental instrumentalities like the US National Security Agency (NSA) exerting disproportionate influence over certain technical standards further the case for increased Indian participation.
The debate around Transport Layer Security (TLS) 1.3 at the Internet Engineering Task Force (IETF) forms an important case for studying how a standards body responded to political developments, and how the Government of India participated in the ensuing discussions. Lasting four years, the debate ended in favour of greater communications security. One of the security improvements in TLS 1.3 over its predecessor is that it makes less information available to networking middleboxes. Considering that Indian intelligence agencies and government departments have expressed fears of foreign-manufactured networking equipment being used by foreign intelligence to eavesdrop on Indian networks, the development is potentially favourable for the security of Indian communication in general, and the security of military and intelligence systems in particular. India has historically procured most networking equipment from foreign manufacturers. While there have been calls for indigenised production of such equipment, achieving these objectives will necessarily be a gradual process. Participating in technical standards development can, then, be an effective interim method for intelligence agencies, defence wings and law enforcement to establish trust in critical networking infrastructure sourced from foreign enterprises.
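The practical effect of the standard described above can be made concrete with a minimal sketch (not taken from the brief) using Python's standard-library ssl module, which supports TLS 1.3 in Python 3.7+ when built against OpenSSL 1.1.1 or later:

```python
import ssl

# A client context that refuses to negotiate anything below TLS 1.3.
# ssl.TLSVersion and SSLContext.minimum_version are standard-library
# names; any server this context connects to must speak TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Under TLS 1.3 the server's certificate is delivered encrypted during
# the handshake, so a passive middlebox on the path can no longer read
# it; under TLS 1.2 and earlier the certificate crosses the wire in
# cleartext. This is one concrete sense in which the new version "makes
# less information available to networking middleboxes".
print(ctx.minimum_version)
```

A deployment that pins `minimum_version` this way trades compatibility with legacy middleboxes and older servers for the reduced handshake metadata that the brief identifies as favourable to communications security.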
Outlining some of the existing measures the Indian government has put in place to build capacity for and participate in standard setting, this brief highlights that while these are useful starting points, they need to be harmonised and strengthened to be more fruitful. Given the regulatory and domestic policy implications that technical standards can have, there is a need for Indian governmental agencies to focus adequate resources geared towards achieving favourable outcomes at standards development fora.
Click here to download the policy brief.
Note: The recommendations in the brief were updated on 17 December 2018 to reflect the relevance of technical standard-setting in the recent discussions around Indian intelligence concerns about foreign-manufactured networking equipment.
Cyberspace and External Affairs: A Memorandum for India (Summary)
It limits itself to advocating certain procedural steps that the Ministry of External Affairs should take towards propelling India forward as a leading voice in the global cyber norms space and explains why occupying this leadership position should be a vital foreign policy priority. It does not delve into content-based recommendations at this stage. Further, this memorandum is not meant to serve as exhaustive academic research on the subject but builds on previous research by the Centre for Internet & Society in this area to highlight key policy windows that can be driven by India.
This memorandum provides a background to global norms formation, focussing on key global developments over the past month; traces the opportunities for India to play a lead role in the global norms formulation debate; and then charts out process-related recommendations on next steps towards India taking this forward.
Click here to read more
A Critical Look at the Visual Representation of Cybersecurity
- Edited by Karan Saini / Illustrations by Paul Anthony George and Roshan Shakeel
- Download the file here
The existing imagery consists largely of stereotypical images: silhouettes of men in hoodies, binary code, locks and shields, all in dark tones of blue and green. The workshop aimed to identify the concerns with these existing images and to ideate on visuals that capture the nuanced concepts within cybersecurity and contextualise them for the Global South. It began with a discussion on various concepts within cybersecurity, including disinformation, surveillance in the name of security, security researchers, regulation of big technology companies, and gender and cybersecurity. This was followed by a mapping of different visual elements in the existing cybersecurity imagery to infer the biases in them. Further, an ideation session was conducted to create alternate visualisations that counter these biases. A detailed report of the workshop can be read here.
The participants began by discussing the concerning impacts of present visualisations – there is a lack of representation and context of the global south. Misrepresentation of cybersecurity leads people to be susceptible to disinformation, treats cybercrime as an abstract concept that does not have a direct impact, and oversimplifies the problem and its solutions. The ecosystem in which this imagery exists also presented a larger issue. A majority of the images are created as clickbait alongside media articles. Media houses thus benefit from the oversimplification and mystification of cybersecurity in such images.
Through the mapping of existing images found online, several concerns were identified. Vague elements and unclear representation add to the mystification of cybersecurity as a concept. In present depictions, the focus on technological devices and objects leads to the lack of a human element, distancing the threat from any real impact on the people using these devices. The metaphor of a physical threat is often used to depict cybersecurity, through elements such as a lock and key. The recurring use of these elements gives a false idea of what is being secured or breached and how. Representations rely on tropes regarding the identity of hackers, and fail to capture the vulnerability of the system: the imagery gives the impression that breached systems were immensely secure to begin with and are compromised only through sophisticated attacks by malicious actors. The identity of hackers is commonly associated with cyber attacks and breaches, and the existing imagery reinforces this: a masked man, or the silhouette of a man against a dark background, are the usual markers of a malicious hacker in conventional cybersecurity imagery. While women are under-represented in stock cybersecurity images, another trope found was that of the cheerful woman coder; there were also images of faceless women with laptops[1]. The reductive nature of these images points to deeper concerns around gender representation in cybersecurity.
The participants examined the implications of such visual representation, and why there is a need to change the imagery. How can visual depictions be more representative? Can they be specific to an Indian context without subscribing to a homogenised idea of it, and without being reductive? Can better depiction broaden the understanding of cybercrime and emphasize the proximity of those threats? With technology, concepts are often understood through metaphors: how data is explained shapes how people perceive it. Visual imagery, done well, can play a critical role in demystifying concepts; illustrations can change the discourse. They must begin to incorporate intersecting aspects of gender, privacy, the susceptibility of vulnerable populations, generational and cultural gaps, as well as manifestations of the described crimes, to make technological laypersons more aware of the threat.
Potential new imagery would need to address aspects such as disinformation and the importance of privacy and who has a right to it; change the representation of hackers; depict the cybersecurity community; explain specific concepts both to the general user and to the people who are part of cybersecurity efforts in the country; convey the implications of cybercrime for vulnerable populations; and more, in an attempt to deconstruct and disseminate what cybersecurity looks like today.
The ideation session involved rethinking specific concepts such as disinformation, and ethical hacking to create alternate imagery. For instance, disinformation was visually imagined as a distortion of an already distorted message being perceived by the viewer. In order to bring attention to the impact of devices, a phone was thought of as a central object to which different concepts of cybersecurity can be connected.
‘Fake News Cascade’ by Paul Anthony George
‘Fake News’ by Paul Anthony George
‘Disinformation/ Fake News’ by Roshan Shakeel; The sketch is about questioning the validity of what we see online, and that every message we see is constructed in some form or the other by someone else.
‘Disinformation/ Fake News’ by Roshan Shakeel; The sketch visualizes how the source of information ('the original') gets distorted after a certain point.
For ethical hacking, a visualisation depicting a day in the life of an ethical hacker was conceived, to normalize hacking and to focus on hackers' contribution to security research.
‘A Day in the Life of an Indian Hacker’ by Paul Anthony George
'Surveillance in the Name of Security' by Roshan Shakeel
Resources on ethical hacking (HackerOne)[2] and hacker culture (2600.com)[3] were also consulted as part of the exercise to gather references on the work done by hackers. This allowed a deeper understanding of how the hacker community depicts itself. Check Point Research[4] and Kerala Police Cyberdome[5] were also examined for further insight into cybersecurity. With regard to gender representation, sources that use visual techniques to communicate concerns and advocacy campaigns were also referred to. The Gendering Surveillance[6] initiative by the Internet Democracy project[7], which looks at how surveillance harms and restricts women, also offered insights on the use of illustrations supporting the case studies. Another reference was the "Visualising Women's Rights in the Arab World"[8] project by the Tactical Technology Collective[9]. The project aims to “strengthen the use of visual techniques by women's rights advocates in the Arab world, and to build a network of women with these skills”.[10]
More visual explainers and animations[11] from the Tactical Technology Collective were noted for their broader engagement with digital security and privacy. A video by the Internet Democracy Project that explains the Internet through rangoli[12], was observed specifically for setting the concept in Indian context through the use of aesthetics.
The workshop concluded with a discussion of potential visual iterations: imagery of cybersecurity that is not technology-oriented but focussed on the behavioural implications of access to such technology, and illustrated public service announcements raising the profile of cybersecurity researchers or the everyday hacker. The impact of the discussion itself indicates the relevance of such an effort. Artists and designers can be encouraged to create a body of imagery that shifts discourse and perception; to begin visualising for advocacy; to demystify cybercrime and stop the abstraction that can lead to a false sense of security; to incorporate aspects of the debate unique to the Indian context; and to generate new dialogue and understanding of cybersecurity. A potential step forward from this workshop would be to engage with the design community at large, along with domain experts, to create more effective imagery for cybersecurity.
[1] https://www.independent.co.uk/life-style/gadgets-and-tech/features/women-in-tech-its-time-to-drop-the-old-stereotypes-7608794.html
[2] https://www.hackerone.com/
[3] https://2600.com/
[4] https://research.checkpoint.com/about-us/
[5] http://www.cyberdome.kerala.gov.in/
[6] https://genderingsurveillance.internetdemocracy.in/
[7] https://internetdemocracy.in/
[8] https://visualrights.tacticaltech.org/index.html
[9] https://tacticaltech.org/
[10] https://visualrights.tacticaltech.org/content/about-website.html
[11] https://tacticaltech.org/projects/survival-in-the-digital-age-ono-robot-2012/
[12] https://internetdemocracy.in/2018/08/dots-and-connections/
Event Report on Intermediary Liability and Gender Based Violence
With inputs from, and edited by, Ambika Tandon. Click here to download the PDF
Introduction
Background
The topic of discussion was intermediary liability and Gender Based Violence (GBV). The debate on GBV, globally and in India, has evolved in the past few years to include myriad forms of violence in online spaces. These range from violence native to the digital, such as identity theft, to extensions of traditional forms of violence, such as online harassment, cyberbullying, and cyberstalking[1]. Given the extent of personal data available online, cyber attacks have led to a variety of financial and personal harms.[2] Studies have explored the extent of psychological and even physical harm to victims, which has been found to be similar in effect to violence in the physical world[3]. Despite this, technologically-facilitated violence is often ignored or trivialised. Where present, redressal mechanisms are often inadequate, further exacerbating the effects of violence on victims.
The roundtable explored how intermediaries can help tackle gender based violence and discussed attempts at making the Internet a safer place for women, which can ultimately help make it a gender-equal environment. It also analyzed the key concerns of privacy and security, leading the conversation to how we can demand more from platforms for our protection and how best to regulate them.
The roundtable had four female participants and one male participant from various civil society organisations working on rights in the digital space.
Roundtable Discussion
Online Abuse
The discussion commenced with the acknowledgement that it is well documented that women and sexual minorities face a disproportionate level of violence in the digital space, as an extension and reproduction of the physical space. GBV exists on a continuum from the physical to the verbal to the technologically enabled, either partially or fully, with overflowing boundaries and deep interconnections between different kinds of violence. Some forms of traditional violence, such as harassment, stalking, bullying and sex trafficking, extend into the digital realm, while other forms, like doxxing and the morphing of imagery, are uniquely tech-enabled. Because of this, considerations of anonymity, privacy, and consent need to be re-thought in the context of tech-enabled GBV. These come into play in a situation where the technological realm has largely been corporatised and functions under the imperative of treating the user and their data as the final product.
It was noted early on that 'GBV online' can be a misnomer because such violence occurs across a number of spaces, and the participants concentrated on laying down the specific contours of tech-mediated or tech-enabled violence. One of the discussants stated that the term GBV is not a useful one, since it does not encompass everything that is talked about when referring to online abuse. The phenomenon that gets the most traction is trolling or abuse on social media. This is partly because it is the most visible people who are affected by it, and also because it is often the most difficult to treat under law. In a 2012 study by the Internet Democracy Project focusing on online verbal abuse on social media, every woman interviewed started by asserting that she is not a victim. The challenge with using the GBV framework is that it positions the woman as a victim. Other incidents on social media, such as verbal abuse involving rape threats or death threats, especially when there is an indication that the perpetrator is aware of the victim's physical location, need to be treated differently from, say, online trolling.
Further, certain forms of violence, such as occurrences of 'revenge porn' or the non-consensual sharing of intimate images, including rape videos, are easier to fit within the description of GBV. It is important to make these distinctions because the remedies should then be commensurate with the perceived harm. It is not appropriate to club all of these together, since the criminal threshold for each act is different. Whereas being called a “slut” or a “bitch” would not be enough for someone to be arrested, if a woman is called that repetitively by a large number of people, the commensurate harm could be quite significant. Thus, using GBV as a broad term for all forms of violence ends up invisibilising certain forms of violence and prevents a more nuanced treatment of the discussion.
In response to this, a participant highlighted the normalisation of gendered hate speech, to the extent that it is not recognised as a form of hate speech at all. This lacuna in our law stems from the fact that we inherited our hate speech laws from the colonial era, when they were grounded in the incitement of violence, more specifically physical violence. As a result, we do not adopt the International Covenant on Civil and Political Rights (ICCPR) standard of incitement to discrimination. If the law were based on an incitement-to-discriminate point of view, then acts of trolling could come under hate speech. Even in the United Kingdom, where there is higher sentencing for gender-based crime than for crimes based on other markers of identity such as race, gender does not fall under the parameters of hate speech. This can also be attributed to the threshold at which criminalization kicks in for such acts.
A significant aspect of online verbal abuse pointed out by a participant was that it does not affect all women equally. In a study, the Twitter accounts of 12 publicly visible women across the political spectrum were looked at for 2 weeks in early December, 2017. They were filtered against keywords and analyzed for abusive content. One Muslim woman in the study had extremely high levels of abuse, being consistently addressed as “Jihad man, Jihad didi or Jihad biwi”. According to the participant, she is also the least likely to get justice through the criminal system for such vitriol and as such, this disparity in the likelihood of facing online abuse and accessing official redressal mechanisms should be recognized. Another discussant reaffirmed the importance of making a distinction between online abuse against someone as opposed to gender based violence online where the threat itself is gendered.
In a small ethnographic study with the Bangalore police undertaken by one of the participants, the police were asked for their opinion on the following situation: a woman voluntarily provides photos of herself during a relationship and, once the relationship is over, the man distributes them. Is there a cause for redressal?
The policemen responded that since she gave the photos voluntarily in the first instance, the burden of the consequences now falls on her. So even within a feminist framework of consent and agency, where we have laws against voyeurism and the publishing of photos of private parts, this is not being recognized by institutional response mechanisms.
Intermediary Liability
Private communications-based intermediaries can be understood to be of two types: those that enable the carriage or transmission of communications and provide access to the internet, and those that host third-party content. The latter have emerged as platforms central to the exercise of voice, the exchange of information and knowledge, and even the mobilisation of social movements. The norms and regulations around what constitutes gender based violence in this realm are then shaped not only by state regulation but also by the content moderation standards of these intermediaries. Further, the kinds of preventive and redressal tools available are controlled by these platforms. More than ever before, we are looking closely at the role of these companies, which function as intermediaries and control access to third-party content without performing editorial functions.
The intermediary liability framework formulated in the United States in the 1990s did not envision the intermediaries we have now. The intermediary today is able to access and possess your data while nudging a certain kind of behaviour from you. There is then an intermediary design duty which is not currently accounted for by the law. Moreover, the law practices a one-size-fits-all regime, whereas approaches tailored to the offence could be more suitable: for child pornography, removal at the point of upload using artificial intelligence or machine learning is appropriate, but a notice-and-takedown approach is better for other kinds of content.
Globally, another facet is that of safe harbour provisions for platforms. When intermediaries such as Google and Facebook were established, they were thought of as neutral pipes, since they were not creating the content but only facilitating access. However, as they have scaled and as their role in the ecosystem has increased, they are now one of the intervention points for governments, as gatekeepers of free speech. One needs to be careful in asking for an expansion of the role and responsibilities of platforms, because the frameworks regulating them would then also need to be revisited. Additionally, would a similar standard be applicable to larger and smaller intermediaries, or do we need layers of distinction between their responsibilities? Internet platforms such as GAFA (Google, Apple, Facebook and Amazon) wield exceptional power to dictate what discourse takes place, and this translates into the online and offline divide disappearing. Do we then hold these four intermediaries to a separate and higher standard? If not, then all small players will be held to stringent rules, disadvantaging their functioning and ultimately stifling innovation. Thus, regulation is definitely needed, but instead of a uniform regime, one that is layered and tailored to different situations and platform visibility levels could be more useful.
Some participants shared the opinion that because these intermediaries are based in foreign countries and have their primary legal obligations there, this insulation works to the citizen's benefit. It lends a layer of protection for freedom of speech and expression that is not present in the substantive law, the rule-of-law framework or the institutional culture in India.
Child pornography is an area where platforms are taking a lot of responsibility. Google has spoken about using machine learning algorithms to block 40% of such content, and Microsoft is working on a similar process. If we argue for more intervention from platforms, we simultaneously also need to look at their machine learning algorithms. Concerns about how these algorithms are being deployed, and further incorporated into the framework of controlling child pornography, are relevant, since there is not much accountability or transparency regarding them.
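The roundtable did not specify how such blocking works, and the sketch below is not Google's or Microsoft's actual system. One widely used building block is matching uploads against a database of fingerprints of known abusive files. Plain cryptographic hashing, shown here, only catches byte-identical copies; production systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, which is precisely why their accuracy and governance deserve scrutiny:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact (SHA-256) fingerprint of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database entry, for illustration only.
KNOWN_BAD = {fingerprint(b"bytes-of-a-known-abusive-file")}

def should_block(upload: bytes) -> bool:
    """Block an upload if its fingerprint matches a known-bad entry."""
    return fingerprint(upload) in KNOWN_BAD

assert should_block(b"bytes-of-a-known-abusive-file")      # exact copy is caught
assert not should_block(b"slightly-altered-file-bytes")    # any variant slips through
```

The gap between the two assertions is the design question the participants raise: closing it requires fuzzier matching, which in turn raises the accountability and transparency concerns noted above.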
Another divide that has emerged from recent events is that between traditional media and new media. Taking the example of rape victims and sexual harassment claims, there are strict rules regarding the kinds of details that can be disclosed and the manner in which this is to be done. In the Kathua rape case, for instance, the Delhi High Court sent a notice to Twitter and Facebook for revealing details, because norms around this exist even though they have not been applied to platforms. Hence, certain regulations that apply to old media have escaped the frameworks applicable to new media, and at some level that gap needs to be bridged.
Role of Law
One of the participants raised the question: what is the proper role of the law, and does it come first or last? If the latter, the burden falls upon the kind of standard-setting that we do as a society. The role of platforms as entities mediating the online environment was discussed, given the concerns that have been highlighted about this environment, especially for women. The third thing to be considered is whether we run the risk of enforcing patriarchal behaviour by doubling down on either of the two aforementioned factors. If legal standards are made too harsh, they may end up reinforcing a power structure dominated by upper-caste men, who comprise a majority of staff within law enforcement and the judiciary. Even though the subordinate judiciary does have mahila courts now, the application of the law seems to reify the position of the woman as the victim. This also brings up the question of who can become a victim within such frameworks, where selective biases, such as notions of chastity, come into play as court functions are undertaken.
An assessment of the way criminal law in India is used to stifle free speech was carried out in 2013 and repeated in 2018, illustrating how censorship law is used to stifle the voices of minorities and of people critical of the political establishment. Even though it is perhaps time to revisit the earlier conceptualization of intermediaries as neutral pipes, the court cases regarding safe harbour in India are concerning. Many of them are pursued with the ostensible objective of protecting women's rights. In Kamlesh Vaswani v. Union of India, the petition claims that porn is a threat to Indian women and culture, ignoring the reality that many women watch porn as well: Pornhub releases viewership figures every year, and a third of its Indian users are women. This is not taken into account in such petitions. In Prajwala v. Union of India, an NGO sent the Supreme Court a letter raising concerns about videos of sexual violence being distributed on the internet. The letter sought to bring attention to the existence of such videos, as well as their rampant circulation on online platforms. At some point in the proceedings, the Court wanted the intermediaries to use keywords to take down content; keeping aside poor implementation, the rationale behind such a move is problematic in itself. For instance, if "sex" is chosen as one of those keywords, then all sexual education will disappear from the Internet. There are many problems with court-encouraged filtering systems, such as one that automatically flags a rape video when it is uploaded: how would it distinguish between a consensually made video depicting sexual activity and a rape video? The narrow-minded responses in the Sabu Mathew and Prajwala cases originate in the conservative culture regarding sexual activity prevalent in India.
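The over-blocking problem described above can be made concrete with a toy filter. This is illustrative only; the blocklist term is an assumption chosen to mirror the participants' example, not any court-ordered list:

```python
# A naive keyword filter of the kind discussed in the Prajwala proceedings.
BLOCKLIST = {"sex"}

def is_blocked(title: str) -> bool:
    """Block any title containing a blocklisted word."""
    return any(word in BLOCKLIST for word in title.lower().split())

# Legitimate sexual-education material is swept up along with the target content.
assert is_blocked("sex education resources for teenagers")
assert not is_blocked("cooking tutorial")
```

A keyword cannot see context or consent, which is exactly why the participants found the rationale problematic independent of implementation quality.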
In a research project undertaken by one of the participants in the course of their work, they suggested including gender, sexuality and disability as grounds for hate speech while working with women's rights activists and civil society organisations. The suggestion was not well received, as the activists vehemently opposed more regulation: in their opinion, the laws that India has in place are not being upheld, and creating new laws will not change anything if the implementation of legislation is flawed. For instance, even though the Supreme Court struck down S.66A, the Internet Freedom Foundation has provided instances of its continued use by police officers to file complaints.[4] Hate speech laws can be used to both ends, even though, unlike in the US, they do not determine whose speech they want to protect. Consequently, in the US a white supremacist gets as much protection as a Black Lives Matter activist, but in India that is not the case. The latest Law Commission report on hate speech in India tries to make progress by incorporating the ICCPR view of incitement to discriminate and by including dignity among the harms. It specifically speaks about hate speech against women, saying that it does not always end in violence but does harm their dignity and standing in society. Protectionist forms of speech regulation, such as hate speech laws, often end up hurting the very people they aim to protect by reinforcing stereotypes.
Point of View undertook a study of the use of S.67 of the Information Technology (IT) Act, which criminalizes obscene speech transmitted through a medium covered by the Act, and found that the section was used to criminalize political speech. In many censorship cases, the people those provisions benefit are the ones in power.[5] Obscenity provisions such as S.67 do not protect women's rights; they protect the morality of society. Even though these provisions are enacted in the name of protecting women, when a woman herself decides that she wants to publish a revealing picture of herself online, the law disallows it. That kind of control of sexuality is part of a larger patriarchal framework which does not support women's rights or recognise their sexuality. However, under Indian law there are quite a few robust provisions for image-based abuse, and there is some recognition of women in particular being vulnerable to it. S.66E of the IT Act specifically recognizes that it is a criminal offence to share images of someone's private parts without their consent, which also encompasses instances of 'revenge porn'. That provision has been in place in India since 2008, in contrast to the US, where half the states still do not have such a provision. Certain kinds of vulnerability thus have adequate recognition in the law, and one should be wary of calls for censorship and for lowering the standards for criminalizing speech.
Non-legal interventions
This section centres on discussions of redressal mechanisms, beyond those emanating from the law, that can be used to address some of these forms of violence. All of the participants emphasized the importance of creating safe spaces through non-legal interventions. It was debated whether there is a need to always approach the law, or whether it is possible to categorize forms of online violence according to the gravity of the violation committed. Responses can take the form of community solutions, with the law treated as a last resort. For instance, there was support for community tools such as 'feminist trollback', where humor is used to troll the trolls. Trolls feed on the fear of being trolled, so the harm can be mitigated through community initiatives wherein the target responds to the trolls with the help of other people in the community. It was reiterated that interventions beyond the technical and the legal are needed, not only from the perspective of power relations within these spaces but also of access to the spaces in the first place. Accordingly, the government should work on initiatives that get more women online and focus on policies that make smartphones and data services more accessible. This would also increase the safety of women, who would benefit from strength in numbers.
In cases of the non-consensual sharing of intimate images, the law can be the primary forum, but in cases of trolling and other social media abuse the question was raised: should we enhance the role of the intermediary platforms? As the first point of intervention, their responsibility should be greater than it currently is. However, this would require them to act in the nature of the police or judiciary, and would necessitate an examination of their algorithms. A large proportion of the designers of such algorithms are white men, which increases the possibility of their biases, against women of colour for instance, feeding into the algorithms and reinforcing a power structure that lacks accountability.
Participants questioned the lack of privacy in design, the example in mind being that registrars do not make domain owner details private by default. Users have to pay an additional fee to keep their details from being exposed to the public, and the notion of having to pay for privacy is unsettling. No information about the privacy feature is provided during the purchase of the domain name either. It was acknowledged that for audit and law enforcement purposes it is imperative to have information about the owner of a domain name, as in cases of websites selling fake medicines or arms, or hosting child pornography. Thus, it boils down to the kind of information necessary for law enforcement. Global domain name rules also impact privacy at the national level. The process of ascertaining the suitability and necessity of different kinds of information excludes ordinary citizens, since all the consultations take place between the regulatory authority and the state. This makes it difficult for citizens to participate and contribute to this space without government approval.
Issues were also flagged with community standards, in that the violence women face is unequal: the harms are not the same for all. Some users are targeted specifically because of the community they come from or the views they hold, and often because they represent a 'type' of woman that does not adhere to the perpetrator's 'ideal' of a woman. Unfortunately, community standards do not recognise differential harms towards certain communities, in India or globally. Twitter, for example, regularly engages in shadow banning and targets people who do not conform to the moral views prevalent in the society where the platform is engaging in censorship. We know these instances occur only when community members notice and notify us of them. There is a certain amount of labor that the community has already put into flagging such violations to the intermediary, which also needs recognition. In this situation, Twitter handles its engagement with the two entities in question disproportionately. Community standards could thus become a double-edged sword without additional protections for certain disadvantaged communities.
Conclusion
Currently, intermediaries are considered neutral pipes through which content flows, and hence have no liability as long as they do not perform editorial functions. This has also been useful in ensuring that freedom of speech is not harmed. However, given their potential ability to remedy this problem, as well as the fact that intermediaries sometimes benefit financially from such activities, it is important to look at the intermediaries' responsibility in addressing these instances of violence. Governments across the world have taken different approaches to this question[6]. Models such as that in the US, where intermediaries have been solely responsible for instituting redressal mechanisms, have proven to be ineffectual. On the other hand, in Thailand, where intermediaries are held primarily liable for content, the monitoring of content has led to several free speech harms.
People are increasingly looking at other forms of social intervention to combat online abuse, since technological and legal ones do not completely address and resolve the myriad issues emanating from this umbrella term. There is also a need to make the law gender-sensitive and to improve the execution of laws at the ground level, possibly through sensitisation of law enforcement authorities. Gender-based violence as a catch-all phrase does not do justice to the full spectrum of experiences that victims face, especially women and sexual minorities. Often these do not attract criminal punishment given the restricted framework of the current law, and need to be seen through the prism of hate speech to strengthen these provisions.
Some actions within GBV receive more attention than others and, as a consequence, these are the ones platforms and governments are most concerned with regulating. Considerations of free speech and censorship, and the role of intermediaries as flag-bearers of either, have translated into growing calls for greater responsibility to be taken by these players. The roundtable raised some key concerns regarding revisiting intermediary liability in the context of the scale of the platforms, their content moderation policies and their machine learning algorithms.
[1] See Khalil Goga, “How to tackle gender-based violence online”, World Economic Forum, 18 February 2015, <https://www.weforum.org/agenda/2015/02/how-to-tackle-gender-based-violence-online/>. See also Shiromi Pinto, “What is online violence and abuse against women?”, 20 November 2017, Amnesty International, <https://www.amnesty.org/en/latest/campaigns/2017/11/what-is-online-violence-and-abuse-against-women/>.
[2] Nidhi Tandon, et. al., “Cyber Violence Against Women and Girls: A worldwide wake up call”, UN Broadband Commission for Digital Development Working Group on Broadband and Gender, <http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/images/wsis/GenderReport2015FINAL.pdf>
[3] See Azmina Dhrodia, “Unsocial Media: The Real Toll of Online Abuse against Women”, Amnesty Global Insights Blog, <https://medium.com/amnesty-insights/unsocial-media-the-real-toll-of-online-abuse-against-women-37134ddab3f4>
[4] See Abhinav Sekhri and Apar Gupta, “Section 66A and other legal zombies”, Internet Freedom Foundation Blog, <https://internetfreedom.in/66a-zombie/>
[5] See Bishakha Datta “Guavas and Genitals”, Point of View <https://itforchange.net/e-vaw/wp-content/uploads/2018/01/Smita_Vanniyar.pdf>
[6] ‘Examining Technology-Mediated Violence Against Women Through a Feminist Framework: Towards appropriate legal-institutional responses in India’, Gurumurthy et al., January 2018.
Feminist Methodology in Technology Research: A Literature Review
Abstract
Feminist research methodology is a vast body of knowledge, spanning multiple disciplines including sociology, media studies, and critical legal studies. This literature review aims to understand key aspects of feminist methodology across these disciplines, with a particular focus on research on technology and its interaction with society. Stemming from the argument that the ontological notion of objectivity effaces power relations in the process of knowledge production, feminist research is critical of the subjects, producers, and nature of knowledge. Section I of the literature review explores this argument along with a range of theoretical concepts, such as standpoint theory and historical materialism, as well as principles of feminist research derived from these, such as intersectionality and reflexivity.
Given its critique of the "god's eye view" (Madhok and Evans, 2014) of objectivist research, feminist scholars have largely developed qualitative methods that are more conducive to acknowledgement of power hierarchies. Additionally, some scholars have recognised the political value in quantification of inequalities such as the wage gap, and have developed intersectional quantitative methods that aim at narrowing down measurable inequalities. Both sets of methods are explored in Section II of the literature review, interspersed with examples from research focused on technology.
Introduction
According to authoritative accounts on the subject, while research focused on gender or women predates the field’s emergence, ‘feminist methodology’ explores questions of the epistemology and ontology of research and knowledge. Initiated in scholarship arising out of the second wave of North American feminism, it theoretically anchors itself in the post-modernist and post-structuralist traditions. It additionally critiques positivism as a project furthering patriarchal oppression. North American feminist scholars critique traditional methods within the social sciences from an epistemological perspective for producing acontextual and ahistorical knowledge, replicating the tendency of positivist science to enumerate and measure subjective social phenomena. This, according to them, leads to the invisibilising of the web of power relations within which the ‘known’ and the ‘knower’ in knowledge production are placed. This critique is then used to devise methods, and underlying principles and ethics, for conducting more egalitarian research aimed at achieving goals of social justice.
The second wave feminist movement was itself critiqued by Black feminists and other feminists from the global South for being exclusionary of non-white and non-heterosexual identities. Given its origins in the global North, scholars from the South have interrogated the meaning of feminism and feminist research in their context. Some African scholars even detail difficulty in disclosing a project as feminist publicly due to popular resistance to the term feminism, which stems from it being rejected by certain social groups as an alien social movement that is antithetical to their “African cultural values.” Their own critique of “White feminism” comes from its essentialisation of womanhood and the resultant negation of the (neo)colonial and racialised histories of African women. This has led scholars from the global South to critically interrogate feminism and feminist methods. They acknowledge the multiplicity of feminisms, and initiate creative inquiries into different forms of feminist methodology. Feminist researchers who work in contexts of political violence, instability, repression, scarcity of resources, poor infrastructure, and/or lack of social security have pointed out that traditional research methods assume conditions that are largely absent in their realities, leading them to experiment with feminist research.
Feminist research across these variety of contexts raises ontological and epistemological concerns about traditional research methods and underlying assumptions about what can be known, who can know, and the nature of knowledge itself. It argues that knowledge production has historically led to the creation of epistemic hierarchies, wherein certain actors are designated as ‘knowers’ and others as the ‘known’. Such hierarchies wreak epistemic violence upon marginalised subjects by denying them the agency to produce knowledge, and delegitimize forms of knowledge that aren’t normative. Acknowledging the role of power in knowledge production has the radical implication that the subjectivities of the researchers and the researched inherently find their way into research and more broadly, knowledge production. This challenges the objectivity and “god’s eye view” of traditional humanistic knowledge and its processes of production. Feminist research eschews scientifically orthodox notions of how “valid knowledge will look”, and creates novel resources for understanding epistemic marginalization of various kinds. It then provides a myriad of tools to disrupt structural hierarchies through and within knowledge production and dissemination.
Feminist research, given its evolution from living movements and theoretical debates, remains a contested domain. It has reformulated a range of qualitative and quantitative research methods, and has also surfaced its own, such as experimental and action-based methods. What these have in common are theoretical dispositions to identify, critique, and ultimately dismantle power relations within and through research projects. It is thus “critical, political, and praxis oriented”. Several disciplines within the social sciences, such as feminist technology studies, cyberfeminism, and cultural anthropology, have built feminist approaches to the study of technology and technologically mediated social relations. However, this continues to remain a minor strand of research on technology.
This literature review aims to address that gap through scoping of such methods and their application in technological research. Feminist methodology provides a critical lens that allows us to explore questions and areas in technology-based research that are inaccessible by traditional methods. This paper draws on examples from technology-focused research, covering key interdisciplinary feminist methods across fields such as gender studies, sociology, development, and ICT for development. In doing so, it actively constructs a history of feminist methodology through authoritative sources of knowledge.
Read the full paper here
European E-Evidence Proposal and Indian Law
The E-evidence Proposal has two main features: (i) the establishment of a legal regime under which competent authorities can issue European Production Orders (EPOs) and European Preservation Orders (EPROs) to entities in any other EU member country (together, the “Data Orders”); and (ii) an obligation on service providers offering services in any EU member country to designate legal representatives who will be responsible for receiving the Data Orders, irrespective of whether such an entity has an actual physical establishment in any EU member country.
In this article we will briefly discuss the framework that has been proposed under the two instruments and then discuss how service providers based in India whose services are also available in Europe would be affected by these proposals. The authors would like to make it clear that this article is not intended to be an analysis of the E-evidence Proposal and therefore shall not attempt to bring out the shortcomings of the proposed European regime, except insofar as such shortcomings may affect the service providers located in India being discussed in the second part of the article.
Part I - E-evidence Directive and Regulation
The E-evidence Proposal introduces the concept of binding EPOs and EPROs. Both Data Orders need to be issued or validated by a judicial authority in the issuing EU member country. A Data Order can be issued to seek preservation or production of data that is stored by a service provider located in another jurisdiction and that is necessary as evidence in criminal investigations or a criminal proceeding. Such Data Orders may only be issued if a similar measure is available for the same criminal offence in a comparable domestic situation in the issuing country. Both Data Orders can be served on entities offering services such as electronic communication services, social networks, online marketplaces, other hosting service providers and providers of internet infrastructure such as IP address and domain name registries. Thus companies such as Big Rock (domain name registry), Ferns n Petals (online marketplace providing services in Europe), Hike (social networking and chatting), etc. or any website which has a subscription based model and allows access to subscribers in Europe would potentially be covered by the E-evidence Proposal. The EPRO, similarly to the EPO, is addressed to the legal representative outside of the issuing country’s jurisdiction to preserve the data in view of a subsequent request to produce such data, which request may be issued through MLA channels in case of third countries or via a European Investigation Order (EIO) between EU member countries. Unlike surveillance measures or data retention obligations set out by law, which are not provided for by this proposal, the EPRO is an order issued or validated by a judicial authority in a concrete criminal proceeding after an individual evaluation of the proportionality and necessity in every single case.[1] Like the EPO, it refers to the specific known or unknown perpetrators of a criminal offence that has already taken place. 
The EPRO only allows preserving data that is already stored at the time of receipt of the order, not the access to data at a future point in time after the receipt of the EPRO.
While EPOs to produce subscriber data[2] and access data[3] can be issued for any criminal offence, an EPO for content data[4] and transactional data[5] may only be issued by a judge, a court or an investigating judge competent in the case. In case the EPO is issued by any other authority (which is competent to issue such an order in the issuing country), it has to be validated by a judge, a court or an investigating judge. In the case of an EPO for subscriber data and access data, the EPO may also be validated by a prosecutor in the issuing country.
To reduce obstacles to the enforcement of the EPOs, the Directive makes it mandatory for service providers to designate a legal representative in the European Union to receive, comply with and enforce Data Orders. The obligation of designating a legal representative for all service providers that are operating in the European Union would ensure that there is always a clear addressee of orders aiming at gathering evidence in criminal proceedings. This would in turn make it easier for service providers to comply with those orders, as the legal representative would be responsible for receiving, complying with and enforcing those orders on behalf of the service provider.
Grounds on which EPOs can be issued
The grounds on which Data Orders may be issued are contained in Articles 5 and 6 of the Regulation, which make it very clear that a Data Order may only be issued where it is necessary and proportionate for the purposes of a criminal proceeding. The Regulation further specifies that an EPO may only be issued by a member country if a similar domestic order could be issued by the issuing state in a comparable situation. By linking the grounds to domestic law, the Regulation tries to skirt around the thorny issue of when and on what basis an EPO may be issued. The Regulation also assigns greater weight (in terms of privacy) to transactional and content data as opposed to subscriber and access data, and subjects the production and preservation of the former to stricter requirements. Therefore, while Data Orders for access and subscriber data may be issued for any criminal offence, orders for transactional and content data can only be issued for criminal offences carrying a maximum punishment of at least three years. In addition, EPOs for producing transactional or content data can also be issued for offences specifically listed in Article 5(4) of the Regulation. These offences have been specifically provided for since evidence in such cases would typically be available mostly, or only, in electronic form. This is the justification for applying the Regulation even in cases where the maximum custodial sentence is less than three years; otherwise it would become extremely difficult to secure convictions for those offences.[6]
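The grounds described above amount to a simple decision rule over data categories. A minimal, purely illustrative sketch (not legal advice): the category names and the three-year threshold are taken from the discussion above, and the `listed_offence` flag stands in for the list of offences in Article 5(4).

```python
# Illustrative simplification of the Regulation's grounds for issuing an EPO,
# as described in the text above. Real determinations also require the order
# to be necessary and proportionate, which this sketch does not model.

def epo_permitted(data_category, max_sentence_years=0, listed_offence=False):
    """Return True if an EPO for this data category could be issued
    under the simplified rules sketched here."""
    if data_category in ("subscriber", "access"):
        # Subscriber and access data: any criminal offence suffices.
        return True
    if data_category in ("transactional", "content"):
        # Transactional and content data: offences carrying a maximum
        # sentence of at least three years, or an Article 5(4) offence.
        return max_sentence_years >= 3 or listed_offence
    raise ValueError("unknown data category: %s" % data_category)
```

For example, under this sketch an order for content data in a case with a two-year maximum sentence would fail, unless the offence appears on the Article 5(4) list.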
The Regulation also requires the issuing authority to take into account potential immunities and privileges under the law of the member country in which the service provider is being served the EPO, as well as any impact the EPO may have on fundamental interests of that member country such as national security and defence. The aim of this provision is to ensure that such immunities and privileges which protect the data sought are respected, in particular where they provide for a higher protection than the law of the issuing member country. In such situations the issuing authority “has to seek clarification before issuing the European Production Order, including by consulting the competent authorities of the Member State concerned, either directly or via Eurojust or the European Judicial Network.”
Grounds to Challenge EPOs
Service Providers have been given the option to object to Data Orders on certain limited grounds specified in the Regulation such as, if it was not issued by a proper issuing authority, if the provider cannot comply because of a de facto impossibility or force majeure, if the data requested is not stored with the service provider or pertains to a person who is not the customer of the service provider.[7] In all such cases the service provider has to inform the issuing authority of the reasons for the inability to provide the information in the specified form. Further, in the event that the service provider refuses to provide the information on the grounds that it is apparent that the EPO “manifestly violates” the Charter of Fundamental Rights of the European Union or is “manifestly abusive”, the service provider shall send the information in specified Form to the competent authority in the member state in which the Order has been received. The competent authority shall then seek clarification from the issuing authority through Eurojust or via the European Judicial Network.[8]
If the issuing authority is not satisfied by the reasons given and the service provider still refuses to provide the information requested, the issuing authority may transfer the EPO Certificate along with the reasons given by the service provider for non compliance, to the enforcing authority in the addressee country. The enforcing authority shall then proceed to enforce the Order, unless it considers that the data concerned is protected by an immunity or privilege under its national law or its disclosure may impact its fundamental interests such as national security and defence; or the data cannot be provided due to one of the following reasons:
(a) the European Production Order has not been issued or validated by an issuing authority as provided for in Article 4;
(b) the European Production Order has not been issued for an offence provided for by Article 5(4);
(c) the addressee could not comply with the EPOC because of de facto impossibility or force majeure, or because the EPOC contains manifest errors;
(d) the European Production Order does not concern data stored by or on behalf of the service provider at the time of receipt of EPOC;
(e) the service is not covered by this Regulation;
(f) based on the sole information contained in the EPOC, it is apparent that it manifestly violates the Charter or that it is manifestly abusive.
In addition to the above mechanism the service provider may refuse to comply with an EPO on the ground that disclosure would force it to violate a third-country law that either protects “the fundamental rights of the individuals concerned” or “the fundamental interests of the third country related to national security or defence.” Where a provider raises such a challenge, issuing authorities can request a review of the order by a court in the member country. If the court concludes that a conflict as claimed by the service provider exists, the court shall notify authorities in the third-party country and if that third-party country objects to execution of the EPO, the court must set it aside.[9]
A service provider may also refuse to comply with an order because it would force the service provider to violate a third-country law that protects interests other than fundamental rights or national security and defense. In such cases, the Regulation provides that the same procedure be followed as in case of law protecting fundamental rights or national security and defense, except that in this case the court, rather than notifying the foreign authorities, shall itself conduct a detailed analysis of the facts and circumstances to decide whether to enforce the order.[10]
Service Provider “Offering Services in the Union”
As is clear from the discussion above, the proposed regime puts an obligation on service providers offering services in the Union to designate a legal representative in the European Union, whether the service provider is physically located in the European Union or not. This appears to be a fairly onerous obligation for small technology companies which may involve a significant cost to appoint and maintain a legal representative in the European Union, especially if the service provider is not located in the EU. Therefore the question arises as to which service providers would be covered by this obligation and the answer to that question lies in the definitions of the terms “service provider” and “offering services in the Union”.
The term service provider has been defined in Article 2(2) of the Directive as follows:
“‘service provider’ means any natural or legal person that provides one or more of the following categories of services:
(a) electronic communications service as defined in Article 2(4) of [Directive establishing the European Electronic Communications Code];[11]
(b) information society services as defined in point (b) of Article 1(1) of Directive (EU) 2015/1535 of the European Parliament and of the Council[12] for which the storage of data is a defining component of the service provided to the user, including social networks, online marketplaces facilitating transactions between their users, and other hosting service providers;
(c) internet domain name and IP numbering services such as IP address providers, domain name registries, domain name registrars and related privacy and proxy services;”
Thus, broadly speaking, the service providers covered by the Regulation would include providers of electronic communication services, social networks, online marketplaces, other hosting service providers and providers of internet infrastructure such as IP address and domain name registries. An important qualification added in the definition is that it covers only those services where “storage of data is a defining component of the service”. Therefore, services for which the storage of data is not a defining component are not covered by the proposal. The Regulation also recognizes that most services delivered by providers involve some kind of storage of data, especially where they are delivered online at a distance; it therefore specifically provides that services for which the storage of data is not a main characteristic, and is thus only of an ancillary nature, would not be covered, including legal, architectural, engineering and accounting services provided online at a distance.[13]
This does not mean that every provider offering, in the EU, the type of services in which data storage is the main characteristic would be covered by the Directive. The term “offering services in the Union” has been defined in Article 2(3) of the Directive as follows:
“‘offering services in the Union’ means:
(a) enabling legal or natural persons in one or more Member State(s) to use the services listed under (3) above; and
(b) having a substantial connection to the Member State(s) referred to in point (a);”
Clause (b) of the definition is the main qualifying factor, which would ensure that only those entities whose offering of services has a “substantial connection” with the member countries of the EU would be covered by the Directive. The Regulation recognizes that mere accessibility of the service (which could also be achieved through mere accessibility of the service provider’s or an intermediary’s website in the EU) should not be a sufficient condition for the application of such an onerous obligation, and therefore the concept of a “substantial connection” was inserted to ascertain a sufficient relationship between the provider and the territory where it is offering its services. In the absence of a permanent establishment in an EU member country, such a “substantial connection” may be said to exist if there are a significant number of users in one or more EU member countries, or if there is a “targeting of activities” towards one or more EU member countries. The “targeting of activities” may be determined based on various circumstances, such as the use of a language or a currency generally used in an EU member country, the availability of an app in the relevant national app store, providing local advertising or advertising in the language used in an EU member country, making use of any information originating from persons in EU member countries in the course of its activities, or the handling of customer relations, such as by providing customer service in the language generally used in EU member countries. A substantial connection can also be assumed where a service provider directs its activities towards one or more EU member countries as set out in Article 17(1)(c) of Regulation 1215/2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters.[14]
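As a rough illustration, the “substantial connection” test described above can be sketched as a heuristic check. This is a hypothetical simplification: the Directive gives guiding factors, not a mechanical formula, and all names below are illustrative rather than drawn from the legal text.

```python
# Hypothetical sketch of the "substantial connection" analysis described
# above. A real determination is case-by-case; this only mirrors the
# structure of the test: establishment, then user numbers, then targeting.

def substantial_connection(establishment_in_eu, significant_users_in_eu,
                           targeting_factors):
    """targeting_factors: iterable of observed facts suggesting "targeting
    of activities", e.g. 'eu_language', 'eu_currency', 'national_app_store',
    'local_advertising', 'eu_customer_service' (all illustrative labels)."""
    if establishment_in_eu:
        return True   # a permanent establishment settles the question
    if significant_users_in_eu:
        return True   # a significant number of users in a member state
    # Otherwise, look for any evidence of targeting of activities.
    return len(set(targeting_factors)) > 0
```

Under this sketch, a provider with no EU establishment, no significant EU user base and no targeting activity would fall outside the regime, while the presence of even one targeting factor would warrant closer analysis.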
Part II - EU Directive and Service Providers located in India
In this part of the article we will discuss how companies based in India that run websites providing a “service” such as social networking or subscription-based video streaming (e.g. Hike, AltBalaji or Hotstar) would be affected by the E-evidence Proposal. At first glance a website providing a video streaming service may not appear to be covered by the E-evidence Proposal, since one would assume that there may not be any storage of data. But if it is a service which allows users to open personal accounts (with personal and possibly financial details, as in the case of TVF, AltBalaji or Hotstar) and uses their online behaviour to push relevant material and advertisements to their accounts, whether that makes the storage of data a defining component of the website’s services as contemplated under the proposal is a question that may not be easy to answer.
Even if it is assumed that the services of an Indian company can be classified as information society services for which the storage of data is a defining component, that by itself would not be sufficient to make the E-evidence Proposal applicable to it. The services of an Indian company would still need to have a “substantial connection” with an EU member country. As discussed above, this substantial connection may be said to exist based on the existence of (i) a significant number of users in one or more EU member countries, or (ii) the “targeting of activities” towards one or more EU member countries. The determination of whether a service provider is targeting its services towards an EU member country is to be made based on a number of factors listed above and is a subjective determination with certain guiding factors.
There does not, however, seem to be clarity on what would constitute a significant number of users: is this determination to be based on the total number of users in an EU member country as a proportion of the population of that country, or as a proportion of the total number of customers the service provider has worldwide? To explain this further, let us assume that an Indian company such as Hotstar has a total user base of 100 million customers.[15] Suppose 10 million of these 100 million subscribers are located in countries other than India, of which about 40 thousand customers are in France and another 40 thousand in Malta; this leads to some interesting analysis. Now, 40 thousand customers in a customer base of 100 million is 0.04% of the total customer base of the service provider, which generally speaking would not constitute a “significant number”. However, if we reckon the 40 thousand customers against the total population of Malta, which is approximately 4.75 lakh,[16] it would come to approximately 8.4% of the population. It is unlikely that a service reaching almost a tenth of the population of an entire country could be said not to have a significant number of users in Malta. If the same math is done for a country such as France, which has a population of approximately 67.3 million,[17] the figure would be about 0.06% of the total population; would that constitute a significant number as per the E-evidence Proposal?
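The share calculations above can be made explicit. A minimal sketch, using the article’s hypothetical subscriber figures (not real Hotstar data):

```python
# Worked arithmetic for the "significant number of users" question above.
# All subscriber counts are the article's hypothetical numbers.

def pct(part, whole):
    """Percentage of `part` relative to `whole`."""
    return 100.0 * part / whole

total_users = 100_000_000   # hypothetical worldwide user base
users_in_country = 40_000   # hypothetical users in Malta (or France)

print(round(pct(users_in_country, total_users), 2))    # vs own user base: 0.04
print(round(pct(users_in_country, 475_000), 1))        # vs Malta's population: 8.4
print(round(pct(users_in_country, 67_300_000), 2))     # vs France's population: 0.06
```

The same absolute number of users thus looks negligible against the provider’s worldwide base but large against a small country’s population, which is precisely the ambiguity the text identifies.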
The issues discussed above are very important for any service provider, especially a small or medium sized company, since the determination of whether the E-evidence Proposal applies to them, apart from any potential legal implications, imposes a direct economic cost for designating a legal representative in an EU member country. Keeping in mind this economic burden and how it might affect the budget of smaller companies, the Explanatory Memorandum to the Regulation clarifies that this legal representative could be a third party, which could be shared between several service providers, and further that the legal representative may accumulate different functions (e.g. the General Data Protection Regulation or e-Privacy representatives in addition to the legal representative provided for by the E-evidence Directive).[18]
In case all the above issues are determined in favour of the E-evidence Directive being applicable to an Indian company, and the company designates a legal representative in an EU member country, it remains to be seen how Indian laws relating to data protection would interact with the obligations of the Indian company under the E-evidence Directive. As per Rule 6 of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (“SPDI Rules”), service providers are not allowed to disclose sensitive personal data or information except with the prior permission of the provider of the information, or except for disclosure to government agencies mandated under law. The Rule provides that “the information shall be shared, without obtaining prior consent from provider of information, with Government agencies mandated under the law to obtain information including sensitive personal data or information for the purpose of verification of identity, or for prevention, detection, investigation including cyber incidents, prosecution, and punishment of offences….”. Although the term “government agency mandated under law” has not been defined in the SPDI Rules, the term “law” has been defined in the Information Technology Act, 2000 (“IT Act”) as under:
“‘law’ includes any Act of Parliament or of a State Legislature, Ordinances promulgated by the President or a Governor, as the case may be, Regulations made by the President under article 240, Bills enacted as President's Act under sub-clause (a) of clause (1) of article 357 of the Constitution and includes rules, regulations, byelaws and orders issued or made thereunder;”[19]
Since the SPDI Rules are issued under the IT Act, the term “law” as used in the Rules would have to be read as defined in the IT Act (unless a court holds to the contrary). This would mean that Rule 6 of the SPDI Rules only recognises government agencies mandated under Indian law, and therefore information cannot be disclosed to agencies not recognised by Indian law. In such a scenario an Indian company may have no option except to raise an objection and challenge an EPO issued to it on the grounds provided in Article 16 of the Regulation, a process which could itself mean significant expenditure on the part of such a company.
Conclusion
The E-evidence Proposal seeks to establish a regime for streamlining access to digital data that is different both from the approach favoured by countries such as the United States, which prefers mutual agreements with (presumably) key nations, and from the push for data localisation favoured by countries such as India. Since the regime put forth by the EU is still only at the proposal stage, there may yet be changes which could clarify it significantly. However, as things stand, Indian companies may be affected by the E-evidence Proposal in the following ways:
- Companies offering services outside India may inadvertently trigger obligations under the E-evidence Proposal if their services have a substantial connection with any of the member states of the European Union;
- Indian companies offering services overseas will have to make an internal determination as to whether the E-evidence Proposal applies to them or not;
- Indian companies which come under the E-evidence Proposal would be obligated to designate a legal representative in an EU member state for receiving and executing Data Orders as per the E-evidence Proposal;
- If a legal representative is designated, the Indian company may have to incur significant costs in maintaining that representative, especially in a situation where it has to object to the implementation of an EPO. The company would also have to coordinate with the legal representative to adequately put forth its concerns under Indian law before the competent authority, so that it is not forced to fall foul of its legal obligations in either jurisdiction. It is also unclear to what extent legal representatives appointed by Indian companies could challenge or push back against requests received.
Disclaimer: The author of this article is an Indian-trained lawyer and not an expert on European law. The author apologises for any incorrect analysis of European law that may have crept into this article despite best efforts.
[1] Explanatory Memorandum to the Proposal for Regulation of the European Parliament and of the Council on European Production and Preservation Orders for Electronic Evidence in Criminal Matters, Pg. 4, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018PC0225&from=EN.
[2] Subscriber data means data which is used to identify the user and has been defined in Article 2 (7) as follows:
“‘subscriber data’ means any data pertaining to:
(a) the identity of a subscriber or customer such as the provided name, date of birth, postal or geographic address, billing and payment data, telephone, or email;
(b) the type of service and its duration including technical data and data identifying related technical measures or interfaces used by or provided to the subscriber or customer, and data related to the validation of the use of service, excluding passwords or other authentication means used in lieu of a password that are provided by a user, or created at the request of a user;”
[3] The term access data has been defined in Article 2(8) as follows:
“‘access data’ means data related to the commencement and termination of a user access session to a service, which is strictly necessary for the sole purpose of identifying the user of the service, such as the date and time of use, or the log-in to and log-off from the service, together with the IP address allocated by the internet access service provider to the user of a service, data identifying the interface used and the user ID. This includes electronic communications metadata as defined in point (g) of Article 4(3) of Regulation concerning the respect for private life and the protection of personal data in electronic communications;”
[4] The term content data has been defined in Article 2 (10) as follows:
“‘content data’ means any stored data in a digital format such as text, voice, videos, images, and sound other than subscriber, access or transactional data;”
[5] The term transactional data has been defined in Article 2(9) as follows:
“‘transactional data’ means data related to the provision of a service offered by a service provider that serves to provide context or additional information about such service and is generated or processed by an information system of the service provider, such as the source and destination of a message or another type of interaction, data on the location of the device, date, time, duration, size, route, format, the protocol used and the type of compression, unless such data constitues access data. This includes electronic communications metadata as defined in point (g) of Article 4(3) of [Regulation concerning the respect for private life and the protection of personal data in electronic communications];”
[6] Explanatory Memorandum to the Proposal for Regulation of the European Parliament and of the Council on European Production and Preservation Orders for Electronic Evidence in Criminal Matters, Pg. 17, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018PC0225&from=EN.
[7] Articles 9(4) and 10(5) of the Regulation.
[8] Article 10(5) of the Regulation.
[9] Article 15 of the Regulation.
[10] Article 16 of the Regulation. Also see https://www.insideprivacy.com/uncategorized/eu-releases-e-evidence-proposal-for-cross-border-data-access/.
[11] Article 2(4) of the Directive establishing European Electronic Communications Code provides as under:
“‘electronic communications service’ means a service normally provided for remuneration via electronic communications networks, which encompasses 'internet access service' as defined in Article 2(2) of Regulation (EU) 2015/2120; and/or 'interpersonal communications service'; and/or services consisting wholly or mainly in the conveyance of signals such as transmission services used for the provision of machine-to-machine services and for broadcasting, but excludes services providing, or exercising editorial control over, content transmitted using electronic communications networks and services;”
[12] Information Society Services have been defined in the Directive as “any Information Society service, that is to say, any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services.”
[13] Proposal for a Directive of the European Parliament and of the Council Laying Down Harmonised Rules on the Appointment of Legal Representatives for the Purpose of Gathering Evidence in Criminal Proceedings, Pg 8, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018PC0226&from=EN.
[14] Proposal for a Directive of the European Parliament and of the Council Laying Down Harmonised Rules on the Appointment of Legal Representatives for the Purpose of Gathering Evidence in Criminal Proceedings, Pg 9, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018PC0226&from=EN.
[15] Hotstar already had an active customer base of 75 million as of December 2017; https://telecom.economictimes.indiatimes.com/news/netflix-restricted-to-premium-subscribers-hotstar-leads-indian-ott-content-market/62351500
[16] https://en.wikipedia.org/wiki/Malta
[17] https://en.wikipedia.org/wiki/France
[18] Proposal for a Directive of the European Parliament and of the Council Laying Down Harmonised Rules on the Appointment of Legal Representatives for the Purpose of Gathering Evidence in Criminal Proceedings, Pg 5, available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018PC0226&from=EN.
[19] Section 2(y) of the Information Technology Act, 2000.