YouTube To Flag Up Misleading AI Clips

By John Lister

YouTube says video creators must reveal when they have used artificial intelligence tools. However, the rules only apply in specific circumstances.

According to YouTube, creators' use of AI is not a problem in itself. Instead, it wants viewers to be better informed about "whether the content they're seeing is altered or synthetic."

The new requirement only applies when people use such tools to create "realistic content", which YouTube defines as "content a viewer could easily mistake for a real person, place, scene or event."

Animation OK

There's no need to label AI-based content that is "clearly unrealistic", which covers animation and special effects. Examples include background blurring, lighting filters, and effects that make a video look like vintage footage.

The rule also doesn't apply to cases where creators have used AI for associated tasks such as generating captions or writing scripts.

While the distinction between realistic and unrealistic can be blurry, YouTube says that context is key. For example, an AI-generated video of a tornado wouldn't necessarily need labeling. However, apparent footage of a tornado moving towards a real town would.

Warning Label

When creators do disclose they've used AI tools, a label will appear in the description of the video. However, in some cases the label will also appear more prominently in the video itself. This will include videos about sensitive topics such as elections, finance, health and news.

It doesn't appear YouTube intends to enforce the policy especially strictly: it says it will only penalize creators who consistently fail to disclose AI use.

Rather oddly, YouTube also says it may add a label itself when the creator should have done so but failed. It's not clear if that will happen only when somebody reports a video or if YouTube has tools that can automatically spot videos requiring the label.

Tighter enforcement is coming for cases of AI content that falsely appears to show a real person's face or voice. Such videos aren't automatically banned, but the person portrayed will be able to request their removal.

What's Your Opinion?

Is this a sensible policy by YouTube? Does it matter if videos on the site are "genuine"? When, if ever, should AI-generated videos be labeled as such?



Comment from Chief:

Since we know YouTube has its own agenda and has demonetized and blocked videos later proven correct, while at the same time allowing obvious propaganda and misinformation to propagate unhindered;

Since we know AI will make things up and double-down when caught;

YouTube would best serve us by being less arrogant about their meddling, possibly by putting their label(s) underneath the video title where the user can notice and then move on.

The last thing we need is some third-party butting in with their opinion. We can read the comments for that.