Dadri reopens debate on online hate speech
The article by Amulya Gopalakrishnan was published in the Times of India on October 9, 2015, with inputs from Pranesh Prakash.
"Jo bhi gau ka mans khaye, use aur uske parivar ko turant maar do (those who eat beef should be killed along with their families)" is just one example of the kind of tweets that got an FIR filed against the handle. The UP police also booked a person for spreading inflammatory rumours about cow-smugglers killing a police officer.
Their comrades immediately alleged censorship, and various profiles with pictures of weapon-brandishing deities rallied under hashtags of support. Taslima Nasreen summed up their grievance, claiming that "free speech allows hate tweets".
There are, of course, reasonable restrictions on free speech when it looks likely to spiral into violence, what a 1989 Supreme Court judgment called a "spark in a powder keg" situation. The IPC has Sections 153A, 153B, 295, 505 and others, which curb speech that promotes enmity between groups on the basis of religion, race, place of birth or language, defiles places of worship, insults religious sentiments, creates public mischief and so on. But social media presents an almost daily dilemma, and makes it clear that it is time for more discriminating decisions on what kinds of extreme speech can be gagged. As the SC judgment striking down the over-broad Section 66A of the IT Act noted, discussion and advocacy, however hateful or prejudiced, are not incitement.
All hate speech seeks to sharpen tensions, but not all such speech is equally damaging. As Pranesh Prakash, policy director of the Centre for Internet and Society, Bangalore, puts it, "freedom of speech operates within fields of power". Hate speech aims either to taunt and diminish a minority, or to tell others in an in-group that their feelings are shared. Different countries make their own judgment calls as they balance two values, both fundamental to a democracy: free expression and the defence of human dignity and inclusion.
Internet intermediaries, whether ISPs or powerful private corporations like Twitter and Facebook, have to comply with court orders and official government requests, but they are not always on the same page about unacceptable content. For a company like Twitter, the need to preserve individual voices, however discordant, outweighs the need to create a more perfect public sphere. It has advised offended users to simply block controversial content, though recently it has begun to consider "direct, repeated attacks on an individual" a potential violation too.
Susan Benesch, of Harvard University's Berkman Center, has suggested a framework to identify a dangerous speech act, which factors in the profile of the speaker, the emotional state of the audience, the content of the speech itself as a call to action, the social context in which it occurs, and the means used to spread it.
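Benesch's five factors can be laid out as a simple checklist. The sketch below is only an illustration: the field names and the idea of counting factors are assumptions made here for clarity, since her framework is qualitative rather than a numeric score.

```python
from dataclasses import dataclass

@dataclass
class SpeechAct:
    """One speech act assessed against Benesch's five factors.

    Field names are illustrative shorthand, not Benesch's terminology."""
    influential_speaker: bool  # does the speaker hold sway over the audience?
    primed_audience: bool      # is the audience already fearful or resentful?
    call_to_action: bool       # does the content urge or condone violence?
    volatile_context: bool     # is the social/historical context tense?
    influential_medium: bool   # does the means of spreading amplify reach?

def risk_factors(act: SpeechAct) -> int:
    """Count how many of the five danger factors are present."""
    return sum([act.influential_speaker, act.primed_audience,
                act.call_to_action, act.volatile_context,
                act.influential_medium])

rumour = SpeechAct(True, True, True, True, True)
print(risk_factors(rumour))  # 5: all factors present, highest concern
```

A real assessment would weigh these factors against each other rather than merely count them; the point is that dangerousness is judged from the whole situation, not the words alone.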
The UP police have a social media lab to track and scotch rumours. "That's how we recently busted a false story about a khap panchayat ordering gangrapes," says a UP police official who did not wish to be named. Rather than appealing to the social media company for takedowns (an onerous process, and one in which provocations are often difficult to explain), it is easier to find and deal with the source of the content, he says. One can identify problematic material either by location or by keywords, says Ponnurangam K, assistant professor at IIIT, Delhi, who developed the social network analytics tool used by the UP police. Given the speed and scale of the internet and the volume of user-generated content, legal curbs cannot be invoked for every instance of hate speech. "It is far more feasible to monitor these rumours and take preventive action on the ground, where the harm is likely to be felt, and to use the same medium to counter the rumours with truth," says Prakash. Social media was assumed to have been responsible for spreading the 2011 riots in the UK, but it turned out to be even more effective in stemming the contagion, correcting rumours and helping law enforcers.
During the 2013 general election in Kenya, the Umati project trawled social media for trending hate content and tried to counter its effects by exposing and shunning those advocating violence. A repository called Hatebase tries to identify local words and phrases that indicate brewing trouble, making it easier to pick out the active signals of threat from the low-level hum: repeated references to cow meat in India, or "sakkiliya", a Sinhala word used to disparage Tamils in Sri Lanka.
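The kind of lexicon lookup Hatebase enables can be sketched in a few lines. This is a minimal illustration, not Hatebase's actual data or API: the word list, regions and sample messages below are invented for the example, and a real repository pairs each term with curated context notes.

```python
import re

# Illustrative lexicon only; a real repository like Hatebase holds
# curated, localised terms with usage context.
LEXICON = {"cow meat": "India", "sakkiliya": "Sri Lanka"}

def flag_messages(messages):
    """Return (message, matched term, region) for every lexicon hit."""
    hits = []
    for msg in messages:
        for term, region in LEXICON.items():
            # whole-word match, case-insensitive via lowercasing
            if re.search(r"\b" + re.escape(term) + r"\b", msg.lower()):
                hits.append((msg, term, region))
    return hits

sample = ["rumours about cow meat found near the temple",
          "the weather is fine today"]
print(flag_messages(sample))
```

Keyword matching like this only surfaces candidates; as the Benesch framework suggests, whether a flagged message is actually dangerous depends on the speaker, audience and context around it.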
"The government should work with platforms to find the nodes of dangerous speech, to counter them, and support campaigns for those victimised," says Chinmayi Arun, research director of the Centre for Communication Governance at the National Law University, Delhi, who is leading a three-year project on online hate speech, in collaboration with the Berkman Center.
It is far more effective to boost media literacy: to help people sniff out bias and propaganda, and to understand how photos can be morphed and fake videos passed off as real. "Law enforcers need the imagination and patience to develop these strategies, rather than try to censor controversial speech wherever possible," she says.
Of course, when the IT cells of political parties are the fount of most of these excitable handles, that is easier said than done.