New EU rules force Facebook to turn off some child abuse detection tools


The European Union (EU) has implemented new privacy rules that prompted Facebook to deactivate some of its child abuse detection tools.

According to the social media giant, it had no choice but to turn off these detection tools because the new rules ban the automatic scanning of private messages.


Turning off detection tools

Under the new privacy rules, known as the ePrivacy Directive, automated systems are effectively prohibited from scanning for child sexual abuse images and other illegal content.

The changes apply only to messaging services, not to all content uploaded to Facebook. There will be no changes in the UK, where the firm said its measures are "consistent with applicable laws".

The company has turned off several interactive features and will offer only a core messaging service until it is allowed to bring the others back.


Among the features removed were group chat polls and the setting of nicknames for friends on Messenger, as well as the sharing of augmented-reality face filters via direct message on Instagram.

In a statement, Facebook said: "We're still determining the best way to bring these features back. It takes time to rebuild products in a way that work seamlessly for people and also comply with new regulation."

Impact on child protection

Child protection advocates have warned of the problems that may emerge from the new rules. Microsoft, by contrast, has opted to make no changes to its systems, arguing that the most responsible approach is to keep the technology running.


John Carr of the Children's Charities' Coalition on Internet Safety argued: "This train crash has been approaching since the summer. Neither the EU nor one of the world's largest and most powerful tech companies could find a way of avoiding it. It is a very sad day for the children of Europe."

"We are heading for a very strange world if privacy laws can be used to make it easier for pedophiles to contact children, or for people to circulate or store pictures of children being raped," Carr claimed.

The updated rules inadvertently ban the use of advanced tools designed to spot newly created violent and exploitative images not yet logged by investigators, as well as online conversations that show signs of abusers grooming potential victims.

According to Anna Edmundson, head of policy at the National Society for the Prevention of Cruelty to Children (NSPCC), the ability of tech companies to scan for such content was "fundamental" to protection initiatives.

In November 2019, Facebook said it had removed 11.6 million pieces of content related to child nudity and child sexual abuse between July and September of that year.

These initiatives were launched following criticism over the death of 14-year-old Molly Russell, who killed herself in 2017. Her father said he had found large amounts of graphic material about self-harm and suicide on her Instagram account.

Facebook vice-president Guy Rosen wrote in a blog post: "We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior."

"We place a sensitivity screen over content that doesn’t violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery," Rosen added.