NSPCC criticizes Instagram for decline in harmful content removal

The children's charity the National Society for the Prevention of Cruelty to Children (NSPCC) has called out Instagram over a decline in the removal of harmful content on the platform.

The NSPCC called the decrease in the removal of harmful suicide and self-harm content on the Facebook-owned Instagram app a "significant failure in corporate responsibility".

Data from Facebook revealed that removals fell by almost 80% between April and June this year compared with last year.

Decline in harmful content removal

Restrictions associated with the coronavirus pandemic forced Facebook to send most of its content moderators home. According to Facebook, it prioritized the removal of the most harmful content during that period.

Recent figures show that the lifting of restrictions and the return of moderators to work pushed the number of removals back up to pre-pandemic levels.

Tara Hopkins, Instagram's head of public policy, released a statement saying: "We want to do everything we can to keep people safe on Instagram and we can report that from July to September we took action on 1.3m pieces of suicide and self-harm content, over 95% of which we found proactively."

"We've been clear about the impact of Covid-19 on our content-review capacity, so we're encouraged that these latest numbers show we're now taking action on even more content, thanks to improvements in our technology," she added.

Hopkins assured: "We're continuing to work with experts to improve our policies and we are in discussions with regulators and governments about how we can bring full use of our technology to the UK and EU so we can proactively find and remove more harmful suicide and self-harm posts."

In recent years, Facebook has ramped up its efforts to take down harmful content following the death of 14-year-old Molly Russell, who took her own life in 2017. Her father said he later found large amounts of graphic material about self-harm and suicide on her Instagram account.

Between July and September 2019, Facebook said it removed 11.6 million pieces of content related to child nudity and child sexual abuse.

Technology not enough to control harmful content

The NSPCC pointed out that the decline in content removal had "exposed young users to even greater risk of avoidable harm during the pandemic". Facebook responded that "despite this decrease we prioritized and took action on the most harmful content within this category".

However, Chris Gray, an ex-Facebook moderator now involved in a legal dispute with the company, said: "I'm not surprised at all. You take everybody out of the office and send them home, well who's going to do the work?"

Without content moderators, filtering is left to automated systems, which can overlook posts even when they carry trigger warnings flagging that the images contain blood, scars and other forms of self-harm.

Gray argued: "It's chaos, as soon as the humans are out, we can see... there's just way, way more self-harm, child exploitation, this kind of stuff on the platforms because there's nobody there to deal with it."

In early February, the NSPCC, along with other child protection organizations, asked Facebook to halt its plans to implement message encryption. They argued that message encryption on Messenger and Instagram Direct could undermine initiatives to catch offenders.