
YouTube now lets you request removal of AI-generated content that simulates your face or voice

Meta is not the only company grappling with the rise of AI-generated content and how it affects its platform. YouTube also quietly rolled out a policy change in June that allows people to request the takedown of AI-generated or other synthetic content that simulates their face or voice. The change lets people request the removal of this type of AI content under YouTube's privacy request process. It's an expansion of the company's responsible AI agenda, first announced in November.

Instead of requesting the content be taken down for being misleading, like a deepfake, YouTube wants the affected parties to request the content's removal directly as a privacy violation. According to YouTube's recently updated Help documentation on the topic, the company requires first-party claims, with a handful of exceptions, such as when the affected individual is a minor, lacks access to a computer, or is deceased.

Simply submitting the request for a takedown doesnโ€™t necessarily mean the content will be removed, however. YouTube cautions that it will make its own judgment about the complaint based on a variety of factors.

For instance, it may consider whether the content is disclosed as being synthetic or made with AI, whether it uniquely identifies a person and whether the content could be considered parody, satire or something else of value and in the public's interest. The company additionally notes that it may consider whether the AI content features a public figure or other well-known individual, and whether or not it shows them engaging in "sensitive behavior" like criminal activity, violence or endorsing a product or political candidate. The latter is particularly concerning in an election year, when AI-generated endorsements could potentially swing votes.

YouTube says it will also give the content's uploader 48 hours to act on the complaint. If the content is removed before that time has passed, the complaint is closed. Otherwise, YouTube will initiate a review. The company also warns users that removal means fully removing the video from the site and, if applicable, removing the individual's name and personal information from the title, description and tags of the video, as well. Uploaders can also blur out the faces of people in their videos, but they can't simply make the video private to comply with the removal request, as the video could be set back to public status at any time.

The company didn't broadly advertise the change in policy, though in March it introduced a tool in Creator Studio that allows creators to disclose when realistic-looking content was made with altered or synthetic media, including generative AI. It also more recently began testing a feature that would let users add crowdsourced notes providing additional context on videos, like whether a video is meant to be a parody or is misleading in some way.

YouTube is not against the use of AI, having already experimented with generative AI itself, including with a comments summarizer and a conversational tool for asking questions about a video or getting recommendations. However, the company has previously warned that simply labeling AI content as such won't necessarily protect it from removal, as it will still have to comply with YouTube's Community Guidelines.

In the case of privacy complaints over AI material, YouTube won't jump to penalize the original content creator.

"For creators, if you receive notice of a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines strikes and receiving a privacy complaint will not automatically result in a strike," a company representative shared last month on the YouTube Community site, where the company updates creators directly on new policies and features.

In other words, YouTube's Privacy Guidelines are different from its Community Guidelines, and some content may be removed from YouTube as the result of a privacy request even if it does not violate the Community Guidelines. While the company won't apply a penalty, like an upload restriction, when a creator's video is removed following a privacy complaint, YouTube tells us it may take action against accounts with repeated violations.
