WhatsApp fails to curb sharing of child sex abuse videos
Videos and pictures of children being subjected to sexual abuse are being openly shared on Facebook’s WhatsApp on a vast scale, with the encrypted messaging service failing to curb the problem despite banning thousands of accounts every day.
The revelations emerged after Israeli researchers warned Facebook, the owner of WhatsApp, in September that it was easy to find and join group chats where — in some instances — up to 256 people were sharing sexual images and videos of children.
These groups were monitored and documented for months by two charities in Israel dedicated to online safety, Netivei Reshet and Screensaverz. Their purpose was often obvious from names such as “cp” and from the explicit photographs used as their profile images.
Such identifiers were not encrypted and were publicly viewable, in effect advertising the illegal content, yet the systems WhatsApp said it had in place failed to detect them.
A review of the groups by the Financial Times quickly found several that were still extremely active this week, long after WhatsApp was warned about the problem by the researchers.
“It is a disaster: this sort of material was once mostly found on the darknet, but now it’s on WhatsApp,” said Netivei Reshet’s Yona Pressburger, referring to the parts of the internet that are purposefully hidden from normal search engines and that criminals use to cloak their activities.
A spokesman for WhatsApp said it “has a zero-tolerance policy around child sexual abuse” and “actively bans accounts suspected of sharing this vile content”.
The messaging app also said it actively scanned WhatsApp group names and profile photos in an attempt to identify people sharing such illegal material. These techniques led WhatsApp to ban approximately 130,000 accounts over the previous 10 days, out of its user base of about 1.5bn.
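WhatsApp has not disclosed how that scanning works. As a rough illustration of the general technique, the sketch below screens group names against a blocklist of abuse-related terms after normalising common obfuscations; the blocklist, the normalisation rules and the function names are illustrative assumptions, not WhatsApp’s actual system.

```python
# Hypothetical sketch of keyword screening on unencrypted group names.
# WhatsApp's real classifiers are not public; the blocklist and the
# normalisation here are illustrative assumptions only.
import re

BLOCKLIST = {"cp"}  # "cp" appeared in group names documented by the NGOs

def normalise(name: str) -> str:
    """Lowercase and strip separators used to obfuscate terms (e.g. 'c.p', 'c p')."""
    return re.sub(r"[^a-z]+", "", name.lower())

def is_suspicious(group_name: str) -> bool:
    """Flag a group if its normalised name contains a blocklisted term.

    A production system would need many more signals: substring matching
    on a two-letter term alone would generate false positives.
    """
    cleaned = normalise(group_name)
    return any(term in cleaned for term in BLOCKLIST)

print(is_suspicious("c.p videos"))    # True
print(is_suspicious("family photos")) # False
```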
But the NGOs’ findings illustrate a bigger problem: WhatsApp’s end-to-end encryption, designed to protect privacy, means that the company cannot see the content of the messages users send, making it harder to detect when child abuse imagery is shared. It can also hinder law enforcement efforts to uncover illegal activity.
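In an end-to-end encrypted design, only the sender and the recipient hold the keys; the server merely relays ciphertext it cannot read. WhatsApp actually uses the Signal protocol, which adds key agreement and forward secrecy, so the PyNaCl sketch below is only a minimal illustration of the underlying principle, not WhatsApp’s implementation.

```python
# Minimal illustration of end-to-end encryption using PyNaCl (pip install pynacl).
# WhatsApp uses the Signal protocol (X3DH key agreement plus a double ratchet);
# this sketch shows only the core property: the relay server never holds a key
# that can decrypt the message.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()  # sender's keypair, held on her device
bob_key = PrivateKey.generate()    # recipient's keypair, held on his device

# Alice encrypts to Bob's public key; the server sees only this ciphertext.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 6")

# The server can store and forward the ciphertext, but without either private
# key it cannot recover the plaintext -- which is why WhatsApp cannot scan
# message content the way Facebook scans posts on its main site.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at 6"
```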
With users of encrypted messaging services such as WhatsApp, Apple’s iMessage, Telegram and Signal now numbering in the billions, political pressure has mounted in the US and UK for companies to grant access to criminal investigators.
WhatsApp, which Facebook bought in 2014 for $22bn, finished rolling out end-to-end encryption for messages in 2016.
As a result, even if Facebook wanted to, it could not apply the same tools it uses to remove illegal images and text from its main social networking site and the photo-sharing site Instagram, which it also owns. On those services, software automatically searches for keywords and images of nudity, pornography and violence. Facebook also employs 20,000 content moderators, often low-paid contractors, who review posts manually.
By contrast, WhatsApp has only 300 employees in total, and far fewer resources dedicated to monitoring for illegal activity.
Even so, Hany Farid, a professor of computer science at Berkeley who developed the PhotoDNA system used by more than 150 companies to detect child abuse imagery online, said Facebook could do more to get rid of illegal content on WhatsApp.
“Crimes against children are getting worse and worse, the kids are getting younger and younger and the acts are getting more violent. It’s all being fuelled by these platforms,” he said.
“The problem is deep-rooted in these companies. It’s the ‘move fast and break things’ model.”
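PhotoDNA itself is proprietary, but the general approach it popularised is perceptual hashing: reduce an image to a compact fingerprint that survives resizing and re-encoding, then match it against a database of fingerprints of known abuse imagery. The sketch below uses a simple “average hash” with Pillow purely to illustrate the idea; it is not PhotoDNA’s algorithm, and the filenames and threshold are hypothetical.

```python
# Illustrative perceptual "average hash" -- not PhotoDNA, which is proprietary
# and far more robust to transformations. Requires Pillow (pip install Pillow).
from PIL import Image

def average_hash(path: str) -> int:
    """Shrink to 8x8 greyscale; each bit records whether a pixel beats the mean."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (pixel > mean)
    return bits  # a 64-bit fingerprint

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; small distances suggest visually similar images."""
    return bin(a ^ b).count("1")

# A service would compare uploads against a database of fingerprints of known
# illegal imagery and flag anything within a small distance threshold.
# known_hashes and the filenames are hypothetical stand-ins.
known_hashes = {average_hash("known_image.jpg")}
upload = average_hash("upload.jpg")
if any(hamming_distance(upload, h) <= 5 for h in known_hashes):
    print("match: flag for human review")
```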
Law enforcement officials have noted a change in how paedophiles are using technology to mask their activities.
“We are seeing an uptick in the use of encrypted messaging apps on the offender side, and it poses significant issues for law enforcement in terms of traceability and visibility,” said Cathal Delaney, who leads the team combating online child sexual abuse at Europol.
WhatsApp was at the centre of Operation Tantalio, a 2017 child abuse investigation led by Spanish police that resulted in the arrest of 38 people across 15 countries. The investigation began in 2016 when Spanish investigators identified dozens of WhatsApp groups circulating child sexual exploitation material. They then traced the mobile phone numbers used in the groups to identify the individuals involved, as well as those suspected of producing the material.
As successful as Operation Tantalio appeared, law enforcement globally has struggled to stem the tide of child sexual abuse materials online.
The Israeli NGOs stumbled on WhatsApp’s problem in August after a young man called their hotline to report being exposed to pornography on the messaging service.
On the Google Play store for Android smartphones, there are dozens of free apps that collect links to WhatsApp groups. Via such apps, the NGOs found extensive child abuse material and began to record what they saw.
“We went to great lengths to document as broadly as possible to prove this is not some minor activity,” wrote the NGOs in a report. “The total time spent on checking this was approximately 20 days, from several different devices. During this period we monitored continuously 10 active groups, and dozens more for short intervals.”
The NGOs emailed Jordana Cutler, Facebook’s head of policy in Israel, in early September to warn the company about their findings. They asked to meet Facebook four times, according to emails seen by the FT, but Ms Cutler did not respond to the requests.
Instead, she sent a reply asking for the evidence — which totalled several gigabytes of data and more than 800 videos and images — so it could be sent to teams outside the country for investigation. She also asked the NGOs if they had gone to the police, and suggested that working with the police would be “the most effective way”. Ms Cutler did not pass on the NGOs’ warnings to WhatsApp.
The NGOs declined to send over links because they wanted to meet Facebook directly to discuss what they saw as a broad and persistent problem. They also took their evidence to the Israeli police and filed a complaint, and contacted a member of the Israeli parliament who oversees a committee on children’s safety.
Frustrated by what they saw as inaction on Facebook’s part, the NGOs eventually compiled a 14-page report of their findings and brought it to the FT via an intermediary, Zohar Levkovitz, a well-known Israeli technology executive.
Mr Levkovitz recently founded a start-up called AntiToxin Technologies, which is developing tools to protect children online. There is no financial relationship between his company and the NGOs, although AntiToxin does have an interest in drawing attention to safety issues raised by children’s use of technology.