Youtube: Humans + AI to Combat Terror, Hate Videos

By John Lister

Google says it will use a combination of artificial intelligence and human expertise to combat online videos promoting hatred and terrorism. It's also going to tackle videos that follow the letter but not the spirit of its rules and will intentionally promote videos that can counter terrorist recruitment.

The four steps are part of what Google and YouTube call an ongoing program to enforce the principle that "there should be no place for terrorist content on our services."

Step one is to improve the way Google's systems automatically spot videos that promote terrorism and extremism. Google says automated tools are already responsible for finding half of the terrorism-related content that it has removed in the past six months.

AI To Judge Context

The key now will be improving the artificial intelligence so it does a better job of detecting the context of a video. Google notes that the exact same footage could be perfectly legitimate in a mainstream news report, but could glorify violence when presented in a different context - for example, with accompanying spoken or text commentary.

The second step is to increase the number of people in its "Trusted Flagger" program, made up of independent experts on terrorism. When people in the program report a video as breaking YouTube rules, it goes to the top of the priority list for Google staff to check and consider removing. Google says its staff remove 90 percent of videos reported by Trusted Flaggers, compared with just 30 percent of reports from ordinary users. As part of the expansion, Google will move from working with and funding 63 specialist organizations to 113. (Source: blog.google)

Inflammatory Videos Downplayed

The third measure is a crackdown on videos that don't specifically breach Google rules, but still contain inflammatory content. Google has previously said it's reluctant to remove such videos because of its support of free speech. It now says that although such videos won't be deleted, they'll be placed behind a warning message, will not carry comments or user ratings, and won't be eligible for advertising. (Source: techcrunch.com)

Finally, Google says it will expand its work to promote videos that are specifically aimed at deterring potential terrorist recruits. Such videos will appear as targeted ads shown to people searching for terror-related material.

What's Your Opinion?

Do the measures go far enough? Is it right to restrict videos even if they don't breach the wording of the rules? Is combining artificial intelligence and human expertise a smart way to deal with the sheer number of videos on YouTube?


Comments

ecash writes:

This just added years to videos..
Scanning and viewing EVERY frame of video??

I've seen enough videos that I know how they're most often hidden on the net..
Even on YouTube there are hidden videos.. AND tons of FAKE videos that are adverts for sites to SHOW videos..

This is harder than sorting pennies by date.. and then looking up values in books..

I'm sorry for any person that has this job..