X is head-hunting content moderators to tackle child sexual exploitation material and other explicit content after AI-generated X-rated images of Taylor Swift went viral on the platform.
The social media company will hire 100 full-time employees to tackle the proliferation of deepfake content that is now prevalent not only on X but across social media platforms.
Trust and Safety Center
With the additional staff, the Elon Musk-owned business is also now looking at developing a “trust and safety center” in Austin, Texas. Here, agents will be responsible for reviewing content in line with the company’s safety rules.
The move also comes as bad actors increasingly use social media platforms to perpetrate violence, racism, fraud, and child sexual exploitation.
“X does not have a line of business focused on children, but it’s important that we make these investments to keep stopping offenders from using our platform for any distribution or engagement with CSE content,” said Joe Benarroch, who is the head of business operations at X.
The platform has been criticized following the proliferation of antisemitic and neo-Nazi content, which has prompted many advertisers to desert the platform.
In 2022, Musk acquired the social media platform, then known as Twitter, for $44 billion with the stated aim of promoting free speech. Since then, the tech billionaire has come under fire for cutting "headcount from the company's trust and safety operations."
Temporary measures
In recent weeks, the platform has been flooded with AI-generated explicit images of Taylor Swift, prompting her legion of fans to counter with a campaign of their own, flooding the site with positive images of the singer under the hashtag #ProtectTaylorSwift.
In response, X blocked searches for the musician. According to the New York Post, users who attempted to search for her name on Monday received an error message asking them to retry, adding, "Don't fret—it's not your fault."
Posts with her name still appeared, however, when users added quotes around it, leading some experts to argue that X's handling of the issue left a lot to be desired.
“This is a temporary action and done with an abundance of caution as we prioritize safety on this issue,” Benarroch said in a statement.
Meanwhile, some of Taylor Swift's fans took to reporting accounts that shared the deepfake images.
Tracing the problems
As generative AI continues to gain traction across the globe, sophisticated tools for spreading malicious content have become widely available. In the Taylor Swift incident, Reality Defender, a deepfake-detection group, tracked numerous pornographic images depicting the pop star, most of them on X, with others appearing on Facebook and other social media platforms.
According to Ben Decker of the threat intelligence group Memetica, the Taylor Swift images originated with a campaign called Lewd DALL-E, which specializes in sexually explicit AI-generated images of celebrity women.
“It’s part of a longstanding, adversarial relationship between trolls and platforms,” Decker said.
“As long as platforms exist, trolls are going to try to disrupt them. And as long as trolls exist, platforms are going to be disrupted. So the question really becomes, How many more times is this going to happen before there is any serious change?”
The White House weighed in on the matter on Friday, calling the fake images "alarming" and urging social media platforms to take responsibility for preventing the spread of misinformation.