Cry, you nasty trolls
This article by Prasun Chaudhuri was published in The Telegraph on April 26, 2015. Rohini Lakshane was quoted in it.
When India's star batsman Virat Kohli failed to perform in the India vs Australia World Cup semi-final, a section of Indian fans started venting their fury on his girlfriend Anushka Sharma on Twitter. The actress, who had flown to Sydney to watch the match, was blamed for India's loss and her Twitter account was flooded with abusive posts. One Atul Khatri tweeted: "Hey Anushka, can you please distract the Aussie fielders on the boundary by showing them your lip job? Plleeeaasee." One anonymous tweet requested the "public to boycott Anushka Sharma's films (sic)" while another by Bollywood producer Kamal R. Khan incited his followers to "stone Anushka's house".
The star couple are not alone. Media persons, scholars and celebrities - especially if they are women - often face such vicious attacks on Twitter. Ask Chinmayi Sripada, the Chennai-based singer, or Sagarika Ghose, a prime time TV anchor, or scholar and columnist Ramachandra Guha, who have endured worse forms of assault, including threats of gang rape, torture and murder. Many Twitter users across the world have gone silent and even deactivated their accounts after being harassed on the platform.
With more and more people around the world facing such vitriolic attacks, Twitter - the San Francisco-based online social networking service - recently decided to protect its users from abusive tweets. It switched on an anti-abuse tool that automatically identifies abusive tweets and hides them from their intended target. According to Twitter, the tool will search for patterns of misuse and identify repeat offenders so that the platform can suspend their accounts. "Users must feel safe on Twitter in order to fully express themselves and we need to ensure that voices are not silenced because people are afraid to speak up," wrote Shreyas Doshi, director of product management at Twitter, in a blog post.
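Twitter has not published the internals of this tool, but the idea can be sketched in a few lines of code. The toy screen below, in Python, matches tweets against a list of abusive patterns, hides matches from their target and counts repeat offences towards suspension; the patterns, threshold and function names are illustrative assumptions, not Twitter's actual system.

```python
import re
from collections import defaultdict

ABUSIVE_PATTERNS = [r"\bstone\b", r"\bboycott\b"]  # placeholder patterns, not Twitter's list
SUSPENSION_THRESHOLD = 3                           # assumed repeat-offence limit

offence_counts = defaultdict(int)                  # offences recorded per author

def screen_tweet(author: str, text: str) -> str:
    """Return 'allow', 'hide' or 'suspend' for a tweet under the toy rules above."""
    if any(re.search(p, text, re.IGNORECASE) for p in ABUSIVE_PATTERNS):
        offence_counts[author] += 1
        if offence_counts[author] >= SUSPENSION_THRESHOLD:
            return "suspend"   # repeat offender: flag the account for suspension
        return "hide"          # hide the abusive tweet from its intended target
    return "allow"

print(screen_tweet("troll1", "stone her house"))   # -> hide (first offence)
```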
Dick Costolo, Twitter's CEO, admitted two months ago at an internal forum that his company "sucked" at dealing with bullies and abusers. He said he would "start kicking these [abusive] people off... and making sure that when they issue their ridiculous attacks, nobody hears them."
Hemanshu Nigam, former chief security officer of social media platform MySpace and software giant Microsoft in the US, hails Twitter's new move. "The new tools are meant to honour human dignity and safety. Now that the online and offline personas of many social media users have converged, it's become essential for tech companies to take steps to protect people from assaults in the cyber world." Nigam, founder of SSP Blue, a leading online security firm, sifted through thousands of offensive comments and abusive images in his earlier stints at those companies. "People with such evil intentions are a minuscule minority, but their twisted expressions can have a profound impact not only on the victims but also on thousands of impressionable young users," he says.
Abuse on social media platforms can be extremely brutal and traumatising. According to Debarati Halder, a lawyer and cyber victim counsellor based in Tirunelveli, Tamil Nadu, a large proportion of these attacks - especially those where explicit pictures and videos of sexual acts are sent - are perpetrated on women by former boyfriends or husbands seeking revenge.
She feels that social media giants have failed to protect their users and that these so-called "new tools" and automated systems fail to screen most cases of abuse. "They (social media platforms) also don't react to reports of abusive behaviour unless they are lodged by celebrities or other influential people," she adds.
While announcing the new policy, Twitter's general counsel Vijaya Gadde wrote in The Washington Post, "At times, this (tweet) takes the form of hateful speech directed at women or minority groups; at others, it takes the form of threats aimed to intimidate those who take a stand on issues. These users often hide behind the veil of anonymity on Twitter and create multiple accounts expressly for the purpose of intimidating and silencing people." She also wrote that Twitter is building a "better framework to protect vulnerable users", such as banning the posting of non-consensual intimate images.
Rohini Lakshane, programme officer at the Bangalore-based Centre for Internet and Society, says that Twitter had simplified and enhanced its system of reporting abuse in December last year. "Measures such as muting and blocking users and manual review of reports were already in place. The changes included mechanisms for Twitter's review teams to expedite responses to dire forms of abuse," she says.
Evidently, these measures have not been too effective. Says Lakshane, "Women are still disproportionately targeted on Twitter and several users simply choose to leave rather than face the strain of dealing with abuse, rape and death threats, and insults."
Singer Sripada, however, is one of the few Twitter users who have stood up to their abusers. When she tweeted in support of Tamil fishermen who were attacked by the Sri Lankan Navy, she was flooded with abusive tweets that amounted to sexual harassment. She says, "I took on the abusers - one of them a professor at a top fashion institute. I filed a case under Section 66A of the IT Act (which is now defunct) and they were jailed for two weeks. That was when I saw the worst face of online abuse."
Advocate Halder rues the recent scrapping of Section 66A of the IT Act, which was struck down to protect freedom of speech. "The act could have been modified to protect victims of abuse." She believes the new Twitter policy may not be able to stop the spread of a post's metadata as it is replicated across thousands of sites.
"If the visuals or texts depict explicit sex, these spread like wildfire in voyeuristic websites, mirror sites and caches before any law enforcer anywhere in the world can react," says Siddhartha Chakraborty, a cyber expert based in Calcutta. A single tweet, a Facebook comment or a YouTube video "gone viral" often causes significant damage to an individual or a company before they can even report the abuse, says Rajiv Pratap, a data analyst based in Calcutta and California.
The problem also lies with over 20 million robot users - automated accounts not actively operated by humans but remotely controlled by groups of anonymous people - which are difficult to track. "These bots generate a lot of spam and even abusive comments," says Harsh Ajmera, a social media expert based in New Delhi. "Twitter is not striking out all the nasty content, but putting in checks - like limiting a tweet's reach or asking users to delete offending tweets - which can protect genuine users."
On the other hand, stresses Lakshane, using parameters such as the number of flags (reports of abuse) a tweet receives can have implications for free speech - an unpopular but non-abusive view could also be targeted. Moreover, it's essential for reviewers to understand cultural and linguistic connotations to be able to effectively address abuse.
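Lakshane's concern can be illustrated with a toy rule that hides any tweet crossing a flag threshold; the threshold and field names below are assumptions for illustration, not any platform's real policy. A non-abusive but unpopular opinion that attracts mass reporting gets hidden just like genuine abuse.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    flags: int          # number of abuse reports the tweet has received
    is_abusive: bool    # ground truth, which the rule below never sees

FLAG_THRESHOLD = 50     # assumed cut-off, not a real platform setting

def is_hidden(tweet: Tweet) -> bool:
    # The rule looks only at the flag count, not at the content itself.
    return tweet.flags >= FLAG_THRESHOLD

tweets = [
    Tweet("stone her house", flags=120, is_abusive=True),
    Tweet("the fishermen deserve support", flags=80, is_abusive=False),  # mass-reported opinion
]
for t in tweets:
    print(t.text, "->", "hidden" if is_hidden(t) else "visible")
# Both tweets end up hidden: a pure flag count silences the non-abusive view too.
```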
Still, Nigam is hopeful. He says, "Social media companies are going through a learning curve. As they evolve they will learn how to rein in abusers."
Twitter's 288 million users worldwide are waiting for that to happen.