
WhatsApp races against time to fix fake news mess ahead of 2019 general elections

On Friday, when WhatsApp announced that it would pilot a ‘five media-based forwards limit’ in India, the government came up with an unequivocal reminder.

The article by Venkat Ananth was published in Economic Times on July 24, 2018. Sunil Abraham was quoted.

“When rumours and fake news get propagated by mischief mongers, the medium used for such propagation cannot evade responsibility and accountability. If they remain mute spectators, they are liable to be treated as abettors and thereafter face consequent legal action,” noted a ministry of electronics and information technology (MeitY) statement.

The statement also said there was a need for bringing in traceability and accountability, “when a provocative/inflammatory message is detected and a request is made by law enforcement agencies.”

Significantly, MeitY took aim at WhatsApp’s core end-to-end encryption-based product feature and its oft-quoted and reiterated commitment to privacy. It was specific, going beyond the usual “do more” requests.

The stand also poses an interesting dilemma for the messenger service. How can it act while protecting its privacy commitment?

“It is practically impossible for WhatsApp to regulate content in the peer-to-peer encrypted environment it is set up in,” says Rahul Matthan, partner, Trilegal. “An encrypted platform is what we want. The government is trying to maintain a strict and difficult balance. The government tends to err on the side of violating civil liberties over offering privacy to innocent users. The WhatsApp case is going in that direction.”

No Longer Low-Key

In India, its largest market, WhatsApp has benefitted from quietly operating in the shadows of its more popular parent, Facebook, growing to a currently active user base of 200 million.

However, in the last six months, while it continues to be perceived as an asset by politicos for outreach and propaganda, WhatsApp is now increasingly being tapped by the bad guys to disseminate deliberate misinformation, rumour mongering and fake news. And not the Donald Trump kind either.

It is leading to loss of lives on the ground, through lynchings, kidnappings and related crimes.

WhatsApp spokesperson Carl Woog says, “The recent acts of violence in India have been heartbreaking and reinforce the need for government, civil society and technology companies to work together to keep people safe.”

“By focusing on solutions to fake news inside our smartphones, we are ignoring a tougher problem that requires several complementary solutions,” says Apar Gupta, a Delhi-based lawyer and cofounder of the Internet Freedom Foundation.

“Let us not forget that a platform is not responsible for policing.”

But the general public and government perception — and, to some extent, concern — remains that WhatsApp has been slow to react to these situations.

To Police or Not to Police

Interestingly, the government and ruling party realise WhatsApp could be pivotal to their fortunes in the next electoral cycle, in the run-up to the 2019 general elections.

“The government is coming under increased pressure to act on these lynchings, which is why it is taking a shoot-the-messenger kind of an approach,” says Matthan. “An unsophisticated government would have advocated a blanket ban on the source. But here, the government, it appears, wants to regulate tech by having access to your device, through an app, in the case of the (telecom regulator) Trai DND app to battle spam.”

This is also why WhatsApp has intensified its outreach efforts. Over the past 10 days, a team of its US and India-based executives has been meeting key stakeholders in Delhi and Mumbai, including the Election Commission, political parties, the Reserve Bank of India, banks and civil society, as ET reported last week.

The team includes public policy manager Ben Supple, senior director, customer operations, Komal Lahiri and WhatsApp India communication manager Pragya Misra Mehrishi. They are now expected to meet key government officials from MeitY from Monday, sources say.

“The intense outreach effort is essentially linked to WhatsApp wanting to protect its payments play in India,” says a Delhi-based public policy professional, who did not want to be named as he is not authorised to speak to the media.

“It (WhatsApp) is really worried about Google’s efforts with Tez and the gap that will only widen if the government delays grant of permission.”

WhatsApp is stressing some key points while reinforcing the steps it is taking to counter challenges. One, the best practices of using the platform. Two, the need to work together to prevent abuse of WhatsApp, and three, most importantly, to educate people about the best ways of using the platform. WhatsApp was primarily designed for private, one-on-one messaging or group chats among acquaintances, not for mass broadcast, which parties resort to during elections.

WhatsApp says it is working on a war footing to tackle the problems. It has introduced product changes to counter user behaviour. There’s more control, where a group ‘admin’ can restrict which users can send messages to the group, modify the group icon or edit its description, a feature for which it has taken a leaf out of rival Telegram’s book. To counter fake news, it added a ‘forwarded’ label. And now, it has limited forwarding to five chats in India and 20 in the rest of its markets, a significant reduction from the earlier limit of 250.
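The forward limit described above amounts to a simple per-market cap checked before a message fans out. As a rough sketch (the region codes, limits and function names here are illustrative assumptions, not WhatsApp's actual implementation):

```python
# Hypothetical sketch of a per-market forward limit: five chats in India,
# twenty elsewhere. Values and names are assumptions for illustration.

FORWARD_LIMITS = {"IN": 5}      # India pilot: forward to at most five chats
DEFAULT_FORWARD_LIMIT = 20      # limit in the rest of the markets

def can_forward(region: str, target_chats: int) -> bool:
    """Return True if forwarding a message to `target_chats` chats is allowed."""
    limit = FORWARD_LIMITS.get(region, DEFAULT_FORWARD_LIMIT)
    return target_chats <= limit
```

On this sketch, a user in India attempting a sixth simultaneous forward would be blocked, while a user elsewhere could forward to up to 20 chats.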

While the impact of these product tweaks is yet to be seen at an individual user level, the larger concern for WhatsApp today is the potential misuse of its platform to manipulate elections, a very real possibility next year.

Tipping Point

The company’s noticeable change of tack comes after it noticed certain trends during the recent Karnataka elections, during which one of its executives spent a week in Bengaluru.

One of the political parties, which a person aware of the developments in WhatsApp declined to name, was using “dozens of accounts to create thousands of groups,” as part of its campaign.

The party, the source says, was adding random numbers (approximately 100) to the group during creation. By random numbers, he meant people who did not know each other, something WhatsApp can identify using the metadata it collects when a user gives it access to their phone book. WhatsApp deems this behaviour ‘organised spamming.’

“These were real people not necessarily known to each other,” says the person quoted above. “A specific account would be added to that group to be made the admin.”

In most cases, this admin was the same number used to create the multiple groups or, in WhatsApp terms, an account that was not behaving the way private or group communication normally does.

Also, the users would be a mix of fake accounts, which is a major red flag for WhatsApp. “The group starts with some bulk added users and then the real ones get bulk-added,” says the source. WhatsApp deems this practice a violation of its terms of service.

Company sources add that WhatsApp was able to detect these trends and proactively banned these users before they were able to add people. “In some cases, our systems didn’t catch this in time, but we were able to proactively prevent users from receiving such spam. That detection is now internalised and if someone tries to replicate that behaviour anywhere in the world, we will be able to detect them,” says another person familiar with developments at WhatsApp.

According to several media reports, the BJP and the Congress too created over 30,000 groups for campaigning and organising efforts. To counter organised political spamming, WhatsApp has now begun using machine learning tools. WhatsApp can trace the last few messages in a group and block it entirely from the platform. At the detection level, WhatsApp checks for familiarity, using the metadata it possesses through phone numbers: do the persons know each other, or have they interacted before?
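The familiarity check described here can be thought of as scoring how many pairs of group members appear in each other's contact books, and flagging large groups of mutual strangers. The sketch below is an illustration under assumed names and thresholds, not WhatsApp's actual detection code:

```python
# Illustrative "familiarity" heuristic: flag a newly created group whose
# members have little or no prior connection to one another. The contact
# graph, threshold and group-size cutoff are assumptions for illustration.

from itertools import combinations

def familiarity_score(members, contact_graph):
    """Fraction of member pairs where at least one has the other in contacts."""
    pairs = list(combinations(members, 2))
    if not pairs:
        return 1.0
    known = sum(
        1 for a, b in pairs
        if b in contact_graph.get(a, set()) or a in contact_graph.get(b, set())
    )
    return known / len(pairs)

def looks_like_organised_spam(members, contact_graph, threshold=0.05):
    """Large groups of mutual strangers score near zero on familiarity."""
    return len(members) >= 50 and familiarity_score(members, contact_graph) < threshold
```

A bulk-created group of a hundred strangers scores near zero and is flagged; a small group of acquaintances scores high and passes.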

The second person quoted in the story says the company now focuses its detection “upstream,” that is, catching the user at the registration stage. “When you register on WhatsApp and immediately create a group, questions asked are, ‘Does this behaviour look like what a regular user does? Or does it look like users who have misused it in the past?’” he says.

WhatsApp, sources tell ET, is also using machine learning to detect sequential numbers that could be used to create these groups. “If they go and buy a phone number, they go to one carrier and it’s mostly sequential. If we notice 100 numbers with the same prefix have signed up, nearly 80 get automatically banned. What we do is feed these sequences, permutations and combinations to detect good/bad users,” the person quoted above says. “It learns millions of these combination signals on behaviour and helps us make a decision.”

Civil Society as a Key Layer

WhatsApp also sees an enabling role for civil society, especially for digital literacy. Its team has so far met seven non-governmental organisations, including digital literacy groups and others involved in the area of financial inclusion. This is part of its public policy efforts while also solidifying its payments play.

“The level of responsibility for a platform is to not consciously cause — and, in fact, to take active measures to prevent — social harm,” says Gupta of IFF. “It has to be done without injury to end-to-end encryption, which offers safety and privacy to users. Many products and product strategies can be adopted — from increasing media diversity on the platform to promoting auditing features that rely on partnerships with fact-checking organisations. We must demand accountability but resist the rhetorical attraction of technophobia.”

As ET has reported, WhatsApp will adapt a fact-checking model, Verificado 2018, deployed during the recent Mexican presidential elections. Verificado proactively debunked fake news and misinformation on the platform. “The rumours were found to be very similar to India. Verificado was specifically focused on misinformation from candidates,” says the first person quoted in the story. “Plus, it helped effectively tackle misinformation during an earthquake in Mexico.”

For WhatsApp, one of the key learnings from the Mexico elections was that it could look at the spam reports and categorise them as politics-related. The company, unsurprisingly, saw an increase in political spam in the buildup to election day.

“They realised Verificado assists users to get help within the app. But it also aids news organisations, political parties, the government and users,” adds the person. The company is undertaking a similar exercise in Brazil, where 24 media outlets have come together under the Comprova initiative to fact-check viral content and rumours on WhatsApp.

Sunil Abraham, executive director of the Bengaluru-based Centre for Internet and Society, believes WhatsApp can further tweak its product to enable real-time checks. “They can enable a ‘fact check this’ button for users to upload content to a fact-checking database. If the content has already been fact-checked, the score can be displayed immediately. Alternatively, the fact-checking service can return the score at a later date,” he explains.
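Abraham's proposal amounts to a content lookup: hash the forwarded message, query a fact-check database, and either return a known score immediately or queue the item for later review. A minimal sketch, assuming a simple in-memory store (the function names, normalisation and storage are illustrative, not any real WhatsApp or fact-checker API):

```python
# Minimal sketch of the proposed "fact check this" flow: hash the content,
# look it up, return an existing score or queue the item for review.
# All names and the storage scheme are assumptions for illustration.

import hashlib

FACT_CHECK_DB = {}  # content hash -> score, e.g. "false", "misleading", "true"

def content_key(text: str) -> str:
    """Normalise and hash the message so near-identical forwards match."""
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def fact_check(text: str):
    """Return a known score immediately, or None (queued for later review)."""
    key = content_key(text)
    if key in FACT_CHECK_DB:
        return FACT_CHECK_DB[key]
    FACT_CHECK_DB[key] = None  # submitted, pending a score from fact-checkers
    return None
```

The "score at a later date" path in Abraham's description would then be the fact-checking service filling in the pending entry once a human review completes.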