
What the government's draft IT intermediary guidelines say

by Admin — last modified Feb 13, 2019 12:31 AM
Intermediaries will have to hand over to government agencies any information within 72 hours. Intermediaries will have to use automated tools to trace the person posting unlawful content.

The article by Abhijit Ahaskar was published in Livemint on February 12, 2019. CIS research was quoted.


With calls to regulate tech companies growing louder in the wake of the rising incidence of fake news circulated through social media platforms, the Ministry of Electronics and Information Technology (MEITY) of India has decided to re-examine the Information Technology (IT) Intermediary Guidelines, 2011, framed under the IT Act, 2000.

Setting the wheels in motion, the ministry proposed a draft called the Information Technology Intermediaries Guidelines (Amendment), 2018, and released the recommendations on its website for public comments in December 2018. The first round of comments ended on 31 January 2019, and the comments were made public last week. The second round of comments and counter-comments will close on 14 February 2019.

What the draft proposes

The term intermediary refers to all tech companies that host user data or provide users with a platform for communication. This brings all internet, social media and telecom companies within its ambit.

The draft amendment proposes that intermediaries will have to hand over to government agencies, within 72 hours, any information that might be related to cyber security, national security, or the investigation, prosecution or prevention of an offence.

They will have to take down or disable content considered defamatory or against national security under Article 19(2) of the Constitution within 24 hours of being notified by the appropriate government or its agency, in addition to using automated tools to identify, remove and trace the origin of such content. Intermediaries with over 50 lakh users will be required to have a permanent registered office with a physical address, as well as a senior official available for coordination with law enforcement agencies.

Concerns over the draft guidelines

Microsoft notes that the problem MEITY is trying to address is that of fake news. “Existing regulations provide enough powers to work with social media platforms. There may be a case to bring out additional guidelines for certain types of intermediaries like social media platforms. There may also be a case to strengthen other laws which make the punishment of fake news and misuse of social media stringent. The focus should be on the perpetrators of the crime rather than the intermediaries," it said in its response to the guidelines. Regarding the deployment of tools to proactively identify and remove unlawful content, Microsoft cautions that intermediaries would have to monitor all content passing through their systems, which would violate users’ privacy and right to freedom of expression. It would also be technically impractical owing to the high cost of deploying such technology.

According to Broadband India Forum, one of the grounds for the Supreme Court striking down Section 66A of the IT Act, 2000, in Shreya Singhal vs Union of India was the vagueness of the terms used in the provision, such as offensive, menacing and dangerous, which invaded the right of free speech. However, words with a similar level of vagueness, such as grossly harmful, harassing and hateful exist in the proposed draft.

The Centre for Internet and Society (CIS) pointed out that existing laws give Indian agencies enough teeth to act. For instance, Section 505 of the IPC has provisions to penalise disinformation, while Sections 290 and 153A of the IPC have provisions for cases where disinformation is used to create communal strife. CIS also flagged the scope of the term unlawful, which is not clearly defined and leaves room for broad interpretation. On the traceability clause, CIS draws attention to the lack of clarity on whether it applies only to social media platforms and messaging services or to all intermediaries.

This can be a problem for ISPs, which may have no access to the contents of encrypted communications sent and received on their networks.

Threat to privacy

The traceability clause, which requires intermediaries to use automated tools to trace the person posting unlawful content, came in for a lot of criticism. While the ministry, in an official tweet in January 2019, clarified that it only requires intermediaries to trace the origin of messages that lead to unlawful activities, without breaking encryption, experts believe this isn’t possible without lowering encryption standards or building a backdoor to access encrypted communications.

Amnesty International slammed the clause, arguing, “While governments can legitimately use electronic surveillance to protect people from crime, forcing companies to weaken encryption will affect all users’ online privacy. Such measures would be inherently disproportionate, and therefore impermissible under international human rights law."

Wipro, in its response, warns that such a traceability requirement could lead to the breaking of encryption on apps such as WhatsApp and Signal, which would be a major threat to the privacy rights of citizens as enshrined in the Puttaswamy judgment of the Supreme Court.

Undue burden on small companies

Commenting on the 72-hour timeline for furnishing user data, the Internet Freedom Foundation says that such a short deadline can be met only by large social media platforms. This might make smaller companies over-compliant with government demands in order to retain immunity, resulting in a total disregard for user privacy.

Regarding the taking down of unlawful content, technology policy researchers from the National Institute of Public Finance & Policy (NIPFP) caution that overzealous implementation, along with over-reliance on technological tools for the detection of unlawful content, would lead to the curtailment of online speech. They pointed to the instance where Facebook removed posts documenting the ethnic cleansing of the Rohingya because it had classified Rohingya organisations as dangerous militant groups.
