Tech firms, with the support of various governments, have signed a pledge for a collaborative effort to end the spread of child sexual abuse content created by AI.
Firms including TikTok, Snapchat, and Stability AI have signed a joint pledge to stop the proliferation of AI-generated child sexual abuse imagery, which has surged following the boom in generative AI.
They made the pledge at an event hosted by Home Secretary Suella Braverman in partnership with the Internet Watch Foundation (IWF) on Monday.
Governments weigh in
According to a Reuters report, Britain announced the pledge, which lists the US, German, and Australian governments among the 27 signatories.
“We resolve to sustain the dialogue and technical innovation around tackling child sexual abuse in the age of AI,” reads part of the statement.
“We resolve to work together to utilize responsible AI for tackling the threat of child sexual abuse and commit to continue to work collaboratively to ensure the risks posed by AI in tackling child sexual abuse do not become insurmountable.”
The pledge comes as the UK government is also looking at further investing in the use of AI to tackle child sexual abuse online following an increase in AI-generated images depicting child sexual violence.
“We cannot let this go unchecked,” said Braverman, adding that “the pace at which these images have spread online is shocking, and that’s why we have convened such a wide group of organizations to tackle this issue head-on.”
Braverman said the proliferation of child sexual abuse images generated by AI is now “an online scourge.”
“This is why tech giants must work alongside law enforcement to clamp down on their spread. The pictures are computer-generated but they often show real people; it depraves and damages lives,” she added.
Today, alongside the @IWFhotline, I hosted an event to bring tech companies, charities and academics together to clamp down on depraved AI generated child sexual abuse material. We must protect our children online and ensure that AI is a force for good. pic.twitter.com/AfQwQCM0tF
— Suella Braverman MP (@SuellaBraverman) October 30, 2023
Shocking AI images of child sexual abuse
The IWF has released shocking statistics on AI-generated images of child sexual abuse, raising fears that such imagery could normalize violence against children.
In a single month, the IWF investigated more than 11,000 AI-generated images shared on a dark web child abuse forum.
Of these, almost 3,000 were confirmed to breach UK law by depicting child sexual abuse, according to the IWF.
“We first raised the alarm about this in July. In a few short months, we have seen all our worst fears realized,” said IWF CEO Susie Hargreaves.
“The realism of these images is astounding and improving all the time. The majority of what we’re seeing is now so real and so serious that it would need to be treated exactly as though it were real imagery under UK law,” added Hargreaves.
More signatories involved
Other signatories to the pledge included the National Society for the Prevention of Cruelty to Children (NSPCC), the National Police Chiefs’ Council, and the National Crime Agency.
NSPCC CEO Peter Wanless said: “AI is being developed at such speed that it’s vital that the safety of children be considered explicitly and not as an afterthought in the wake of avoidable tragedy.”
“Already we are seeing AI child abuse imagery having a horrific impact on children, traumatizing and retraumatizing victims who see images of their likeness being created and shared,” added Wanless.
The NCA estimates that 680,000 to 830,000 adults in the UK, about 1.3% to 1.6% of the adult population, pose "some degree of sexual risk to children," which is why tackling child sexual abuse is a priority for the agency.
“We will investigate and prosecute individuals who create, share, possess, access, or view AI-generated child sexual abuse material in the same way as if the image were of a real child,” said Chris Farrimond, Director of Threat Leadership at NCA.
The announcement comes as the UK prepares to host the AI Safety Summit, which is expected to deliberate on responsible AI development.
In 2023, we found the first reports of child sexual abuse material (CSAM) generated by artificial intelligence (AI). Read our Research Report with the latest findings on how #AI is increasingly being used to create child sexual abuse imagery online: https://t.co/kj7eEbhIp2 pic.twitter.com/51UsipTlfm
— Internet Watch Foundation (IWF) (@IWFhotline) October 30, 2023