New FB, Messenger, WhatsApp Tools Prevent Scams, Fake News

By John Lister

Facebook is testing new tools to stop people being fooled by bogus messages. Meanwhile, its subsidiary WhatsApp is working on ways to stop hoaxes and other false information.

The Facebook trial is for a series of tools relating to the direct messaging elements of the service, which includes messages through the website itself and via the dedicated "Messenger" app for mobile devices.

The tools are aimed at messages that come from people or organizations the user hasn't previously communicated with, which carry a higher risk of being bogus than messages from genuine contacts. The idea is to alert the user to suspicious elements so they can assess a message's credibility.

Country Of Origin Revealed

While the full details are under wraps, at least three elements of the test have been made public. The first is that the user will get details about the location of the sender, taken from the phone number associated with their account. In the example Facebook made public, the user gets a warning that the message has come from Russia. (Source: vice.com)

The second warning is if the account has only recently been created. That's often a signal that the account is bogus and has been created specifically to spread malware links, pass on bogus information or try to trick a user into handing over personal or security information.

The final measure is to warn the user if the message has come from a different account that has the same name as an existing Facebook friend. That could help overcome scams where somebody creates a bogus account using the name and photo of an existing user, then tries to trick their friends.

Forwarded Messages Highlighted

Another anti-scam tool is being tested at WhatsApp, a messaging service that's owned by Facebook but run separately. It will now mark any message that has been forwarded by the sender rather than actually written by them.

The idea is to make people more skeptical about such messages. It's designed to slow down the phenomenon by which a misleading message can spread extremely quickly because people who receive it from a friend don't stop to think whether the message is accurate. Such a spread has even been linked to the killing of several men in India after bogus messages claimed they had kidnapped children. (Source: techcrunch.com)

What's Your Opinion?

Are these tools likely to be effective? Is there anything else messaging companies should do to tackle misleading or bogus messages? Is there a limit to what can be done for naive or gullible users?


Comments

Dennis Faas:

I can't believe it took this long for Facebook to: (a) figure out which messages were fake and (b) better inform users of potential fake news, scams, hoaxes, etc. While the IP addresses are definitely revealing (especially if the content comes from Russia, China or India), IPs can be masked if the scammers use a VPN - though I'm sure Facebook has thought of that already. They would only need to know the IP addresses of all VPNs and blacklist them from using the service. This would be difficult but not impossible to maintain, since VPN IPs get added and removed constantly.
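The blacklisting approach described above can be sketched in a few lines. This is a minimal illustration, not how Facebook actually implements anything: the CIDR ranges below are placeholder documentation addresses (RFC 5737), and the function name `is_vpn_ip` is hypothetical. A real deployment would pull thousands of provider ranges from a regularly refreshed feed.

```python
import ipaddress

# Hypothetical list of CIDR blocks attributed to VPN providers.
# These are RFC 5737 documentation ranges used as stand-ins; a real
# blocklist would be loaded from a frequently updated data feed.
VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_vpn_ip(addr: str) -> bool:
    """Return True if addr falls inside any listed VPN CIDR block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in VPN_RANGES)

print(is_vpn_ip("203.0.113.42"))  # True: inside a listed range
print(is_vpn_ip("192.0.2.1"))     # False: not in any listed range
```

As the comment notes, the hard part isn't the lookup - it's keeping the range list current as providers rotate addresses.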