Report: Facebook Moderation May Never Be Enough

By John Lister

Internal Facebook documents suggest automated moderation will never pick up more than a small portion of the hate speech posted on the site. The figures suggest Facebook has used some creative wording when publicly arguing how well its systems work.

The company has a natural interest in using automated moderation, particularly artificial intelligence that doesn't simply follow fixed rules to spot content that breaks site policies, but develops its own methods for detecting it.

That's because it's highly unlikely Facebook could ever employ enough humans to manually moderate posts (before or after they are published), given the site gets an estimated five billion pieces of content a day.

100 Hate Speech Comments Could Slide By

Internal documents made public by a former employee show that AI tools catch posts accounting for only around three to five percent of total views of hate speech material, and just 0.6 percent of views of material that depicts or incites violence.

Another document noted that, given the overall "detection" rate, a Facebook group could on average host 100 posts or comments that broke the rules on hate speech before facing a 30-day suspension.

AI Moderation May Never Be Enough

Meanwhile, an engineer said that even combining human and automated moderation, it's possible the company will never have a system that detects the majority of rule-breaking posts. The engineer added that the current approach might max out at a 10 to 20 percent detection rate.

This depressing data is somewhat at odds with Facebook's public spin, which suggests automated moderation does a great job. The company has even claimed that its AI detects 98 percent of hate speech before users report it.

While this may be true, the catch appears to be that it's largely down to users simply not bothering to report material that gets through the moderation. That's partly because they may think a report is unlikely to have any effect, and partly because Facebook intentionally made it more difficult to file one. Either way, the 98 percent claim doesn't reflect the material that breaks the rules but never gets reported.

What's Your Opinion?

Have you ever reported content on Facebook that broke the rules? Do you think Facebook is serious about the problem? Can any combination of human and automated moderation ever cope with the sheer scale of content on the site?



pctyson:

I will be slammed for this, but I am going to use my so-called "right to free speech," supported by my country's constitution (for now anyway), to post this.
Unless direct death threats are made, why are posts being removed? Just because you do not like what someone says does not mean they should not have the right to say it. Posts by these imbecilic racists force them out of their cowardly hiding and into the light. Counter their idiocy with your point of view, facts, logic, and integrity, or just ignore it. Idiots usually go away if you just ignore them.
America, in the past, has always allowed Zuckerberg and others to speak freely. This has changed DRAMATICALLY. Who gets to decide what is hate speech? For example, is it hate to take or not take the "jab" and then post about it? Many are arguing that it is. They are also removing posts, using their tweaked algorithms to determine what is true and what is not. Political, medical, scientific, and other points of view are being removed under those same algorithms. People must learn to do their own research and then decide for themselves what is true and what is not. Unless a direct physical death threat is made, which has never been allowed, let the people have free speech!!!

Slick:

Totally agree with pctyson. If you don't like the show, change the channel. Freedom of speech means FREEDOM OF SPEECH!!

bigton:

The right to free speech is important, and this should not be compromised. But the problem with Facebook is how this right has degenerated into hatred of others in society, and this can occasionally lead to some believing they have the right and the support to use violence. It then encroaches on people's rights to peace and safety. With a powerful platform like Facebook, we should be concerned about just how much it can manipulate people into certain courses of action, or can be used to direct people to action. Is this perhaps how the next revolution will start? After all, Mr. Zuckerberg has pretty much admitted he is not able to control it anymore.

pctyson:

Just a note... You are right that Zuckerburg (sorry, got that wrong, should have checked :( ) does have a right to control HIS platform. The only issue that seems to arise is that his company and platform rose rapidly in this new technological age. Facebook through Zuckerberg grew so fast, and he used that growth and power to actively suppress competing free speech platforms. In my post, I attempted to make it clear that harming others is NOT right. What I was attempting to show is that points of view are VERY subjective, and we must be VERY careful. If I say "I hate a certain type of ethnic food and it should be eliminated from society," does that mean I hate the food and the food should be eliminated, or does it mean that I hate the ethnicity of the people who make that type of food, and therefore those who make that food should be eliminated? There IS a difference. Who gets to decide what I meant? IT DOES MATTER when we are talking about free speech!

kitekrazy:

They work pretty hard at moderating conservative comments or anything resembling common sense.
Unfortunately, the web in general creates an atmosphere of incivility. Look at the comments on product developers' pages.

When things grow so big, moderation is not really possible. All one has to do is look at the rudeness that goes on in the Steam Community forums.

pctyson:

kitekrazy, they have been around since before the internet (BBSs, FidoNet, etc.). They used to be called flamers. Ignore them and get on with it! Just saying :)