YouTube will soon require content creators to label videos made using AI or other digital tools as part of efforts to combat deepfakes that could mislead viewers, the company said in a new policy update published Nov. 14.
The video-hosting site warned that creators who fail to do so risk having their content removed or being suspended from earning advertising revenue on YouTube.
It will also allow users to request that AI-created videos simulating an identifiable person be taken down. The new policy will go into effect in the coming months.
Identifying ‘realistic’ AI content
In a blog post, YouTube vice presidents of product management Jennifer Flannery O’Connor and Emily Moxley said the platform will require creators to disclose and label videos that include “manipulated or synthetic content that is realistic, including using AI tools.”
For example, the executives said videos that “realistically depict an event that never happened” or deepfakes “showing someone saying or doing something they didn’t actually do” will need clear labels in their descriptions showing they were made with AI.
“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts, public health crises, or public officials,” Flannery O’Connor and Moxley said.
YouTube already prohibits “technically manipulated content that misleads viewers and may pose a serious risk of egregious harm,” the company wrote in its blog post.
“However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created,” it said.
The new policy is designed to make it easy for viewers to separate synthetic videos from real ones and expands on measures YouTube announced in September requiring disclosures for AI-generated political ads on the site.
It comes amid debate over the pitfalls of increasingly advanced AI, which is making it harder to distinguish real footage from lifelike fabrications, known as “deepfakes,” on social media and elsewhere on the Internet.
YouTube prioritizes ‘sensitive topics’ like elections
Deepfakes are realistic but fake images or videos created with artificial intelligence that are used to impersonate someone else, including their voice, often for malicious reasons. Concern over their misuse has prompted several online platforms to change their rules to tackle the problem.
As MetaNews recently reported, Facebook and Instagram-owner Meta Platforms will require advertisers to disclose the use of AI in ads about elections, politics, and social issues starting next year. It also barred political advertisers from using Meta’s own generative AI tools to create ads.
Short video-sharing platform TikTok introduced a new label for AI-generated content earlier this year and demands that users disclose when content depicting “realistic scenes” is created with AI. It also prohibits AI-generated deepfakes of young people and private figures.
YouTube’s labels will appear in videos’ description panels. However, for videos that discuss sensitive topics like elections and conflicts, the labels will be placed more prominently within the video player to ensure viewers are aware of the content’s origin.
In addition, content made with YouTube’s own generative AI features, such as the Dream Screen text-to-video background generator, will also be clearly labeled. AI-generated material that violates YouTube’s community guidelines will be removed from the site completely.
“For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers,” YouTube explained.
YouTube is holding creators accountable for complying with the new AI content labeling requirements. Creators who fail to label synthetic content that should be disclosed under the new policy may face penalties such as content removal or demonetization of their videos.
Users can request that AI content be removed
Apart from the content labels, YouTube is also developing a new feature that allows users to request the removal of AI-created or synthetic depictions of real people. The change follows a surge in deepfakes featuring celebrity women in non-consensual pornographic content.
The company said under its privacy rules, users will be allowed to flag videos “that simulate an identifiable individual, including their face or voice.” YouTube will evaluate each takedown request, considering factors such as whether the video is intended as parody or satire.
It will also consider whether the people portrayed in the video can be identified, as well as the prominence of the individuals involved. For well-known figures or public officials, a higher threshold for removal may apply.