Fake News: Many Twitter 'Debates' are Actually Bots

By John Lister

Human tiredness could be the key to spotting automated fake posts online, according to researchers at the University of Southern California. They are developing systems to distinguish posts written by humans from those produced by automated programs known as 'bots' (short for robots).

With most people now getting their news online, often on smartphones, fake news is a serious problem. It's an especially pressing issue for social media sites such as Facebook and Twitter, where news stories are discussed, debated, and shared with others.

Bots Designed To Cause Friction

The use of bots has evolved over the years. At one stage, they were primarily used to spread promotional spam, or as a way to artificially inflate the apparent interest in, or support for, particular topics.

More recently, however, automated posts have been part of deliberate campaigns by some governments to provoke and infuriate people in other nations, spark divisions, and even undermine confidence in the democratic process. The Cambridge Analytica scandal showed how potent such manipulation can be: the company used social media data to try to sway voters ahead of the 2016 US presidential election.

8.4 Million Tweets Analyzed

A team led by Emilio Ferrara has now analyzed 8.4 million tweets from 3,500 Twitter accounts known to belong to humans, and 3.4 million tweets from 5,000 accounts known to be bots. The classifications came from a combination of existing algorithms and manual verification. (Source: newscientist.com)

Perhaps surprisingly, the average human user engaged in an online debate posted four to five times as often as a bot. However, it was the pattern, rather than the quantity, of posts that most accurately indicated whether a user was genuine.

Human Responses Tail Off

The researchers found that humans started a given debate posting relatively infrequently, then increased their rate of replies. However, the length of each subsequent post diminished over time, suggesting human users suffer from 'cognitive depletion': in other words, they get worn down by an online argument and put less effort into each reply.
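For the technically curious, that "shrinking replies" signal can be sketched in a few lines of Python. This is not the researchers' actual method, and the data and threshold are invented purely for illustration; the idea is simply to check whether the lengths of successive posts in a thread trend downward:

```python
# A toy illustration, not the study's real classifier: flag a thread
# whose reply lengths steadily shrink, matching the "cognitive depletion"
# pattern described above. All numbers here are hypothetical.

def length_trend(post_lengths):
    """Least-squares slope of post length versus post index."""
    n = len(post_lengths)
    if n < 2:
        return 0.0
    mean_x = (n - 1) / 2
    mean_y = sum(post_lengths) / n
    num = sum((i - mean_x) * (y - mean_y)
              for i, y in enumerate(post_lengths))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den

# Replies that start long and get shorter produce a negative slope.
lengths = [280, 220, 150, 90, 40]   # characters per reply (hypothetical)
print(length_trend(lengths))        # about -61: effort is tailing off
```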

In contrast, the rate of posts from the bots was much more consistent, with clear spikes at 30-minute and 60-minute intervals. This suggests posting on an automated schedule: the bot isn't actually taking any notice of what people are writing, but is simply seeking to prolong the supposed argument.
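A crude check for that kind of scheduling fingerprint, again a hypothetical sketch rather than anything from the study itself, could measure how many of an account's gaps between posts fall close to a multiple of 30 minutes:

```python
# Another toy sketch: check whether the gaps between an account's posts
# cluster at 30- or 60-minute marks, the spikes mentioned above.
# Timestamps, tolerance, and the interpretation are illustrative only.

def periodic_fraction(timestamps, period=1800, tolerance=60):
    """Fraction of inter-post gaps within `tolerance` seconds of a
    multiple of `period` (1800 seconds = 30 minutes)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return 0.0
    near = sum(1 for g in gaps
               if min(g % period, period - g % period) <= tolerance)
    return near / len(gaps)

# Posts almost exactly 30 or 60 minutes apart score near 1.0;
# human posting times are far more irregular.
bot_like = [0, 1805, 3600, 5390, 7200]    # seconds (hypothetical)
human_like = [0, 240, 1100, 4000, 4300]
print(periodic_fraction(bot_like))        # 1.0
print(periodic_fraction(human_like))      # 0.0
```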

The big limitation of the research is that the archive of posts analyzed dates from 2017. It's possible that bots have become more sophisticated and even more human-like since then. It's also possible that human behavior has changed, with users less willing, for example, to engage in arguments with "people" they don't know.

What's Your Opinion?

Do these findings surprise you? Would this be a useful way to filter out posts from suspected bots? Have you ever suspected that somebody replying to you online was not actually human?
