Instagram under fire over sexualised child images

Instagram is failing to remove accounts that attract hundreds of sexualised comments for posting pictures of children in swimwear or partial clothing, even after they are flagged to it through the in-app reporting tool.

Instagram’s parent company, Meta, claims it takes a zero-tolerance approach to child exploitation. But accounts that have been flagged as suspicious through the in-app reporting tool have been ruled acceptable by its automated moderation technology and remain live.

In one case, an account posting photos of children in sexualised poses was reported, using the in-app reporting tool, by a researcher. Instagram provided a same-day response saying that “due to high volume”, it had not been able to view the report, but that its “technology has found that this account probably doesn’t go against our community guidelines”. The user was advised to block or unfollow the account, or report it again. It remained live on Saturday, with more than 33,000 followers.

Similar accounts – known as “tribute pages” – were also found to be running on Twitter.

One account, which posted pictures of a man performing sexual acts to images of a 14-year-old TikTok influencer, was deemed not to break Twitter’s rules after being reported using the in-app tools – despite him suggesting in posts that he was seeking to connect with people to share illegal material. “Looking to trade some younger stuff,” one of his tweets said. It was removed after the campaign group Collective Shout posted about it publicly.

The findings raise concerns about the platforms’ in-app reporting tools, with critics saying the content appeared to be allowed to remain live because it did not meet a criminal threshold – despite being linked to suspected illegal activity.

Often, the accounts are used for “breadcrumbing” – where offenders post technically legal images but arrange to meet up online in private messaging groups to share other material.

Andy Burrows, head of online safety policy at the NSPCC, described the accounts as a “shop window” for paedophiles. “Companies should be proactively identifying this content and then removing it themselves,” he said. “But even when it is reported to them, they are judging that it’s not a threat to children and should remain on the site.”

He called for MPs to tackle “loopholes” in the proposed online safety bill – which is intended to regulate social media firms and will be debated in parliament on 19 April. They should, he said, force companies to tackle not only illegal content but also content that is clearly harmful yet may not meet the criminal threshold.

Lyn Swanson Kennedy of Collective Shout, an Australia-based charity that monitors exploitative content globally, said the platforms were relying on external organisations to do their content moderation for them. “We are calling on platforms to address some of these very concerning activities, which put underage girls particularly at serious risk of harassment, exploitation and sexualisation,” she said.

Meta said it had strict rules against content that sexually exploits or endangers children, and that it removed such content when it became aware of it. “We’re also focused on preventing harm by banning suspicious profiles, restricting adults from messaging children they’re not connected with and defaulting under-18s to private accounts,” a spokesperson said.

Twitter said the accounts reported to it had now been permanently suspended for violating its rules. A spokesperson said: “Twitter has zero tolerance for any material that features or promotes child sexual exploitation. We aggressively fight online CSE and have heavily invested in technology … to enforce our policy.”

Imran Ahmed, chief executive of the Center for Countering Digital Hate, a non-profit thinktank, said: “Relying on automated detection, which we know cannot keep up with simple hate speech, let alone cunning, determined child sex exploitation rings, is an abrogation of the fundamental duty to protect children.”
