AI Struggles to Write Malware

by John Lister

Artificial Intelligence tools aren't proving as useful for writing malware as first thought. However, they may be useful for phishing scams and other social engineering.

Two recent security company reports, covered by The Register, explored how malware scammers are particularly interested in AI tools that generate text and code. The theory goes that such tools could write code designed to exploit vulnerabilities in software and websites. (Source: theregister.com)

It's not a completely outlandish theory, as some users have found such tools can efficiently write code for a particular task. It can take multiple attempts to tell the tool exactly what the user wants to create but, once that's done, the tools can write code quickly and with less risk of error or redundancy than some human coders.

Human Expertise Still Needed

The researchers suggest that, for now at least, that's not working out for malware. The amount of work needed to "train" the tool and to check that the generated code is correct and actually works means only expert coders can use the technique. Even then, it saves little time over writing the code manually.

Part of the limitation with malware is that many publicly available AI tools have safeguards designed to stop people using them for malicious purposes. While these can be bypassed in some cases, doing so adds extra work and removes even more of the benefit of automating malware creation. (Source: trendmicro.com)

One area where AI can help malware creators is so-called "fuzz testing". This simply means feeding data, often randomly generated, into software to see if it uncovers any bugs. Doing this manually is extremely slow and has a tiny chance of success. If AI can automate and even improve this process, it may become efficient enough to be worthwhile for scammers.
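To illustrate the basic idea, here is a minimal fuzzing sketch in Python. The parse_record function is a hypothetical stand-in for whatever software is under test (it is not from the article or the reports); the loop simply throws random byte strings at it and records any input that makes it crash.

```python
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical target: parses a 'name:value' record.
    A real fuzzing run would target actual software under test."""
    name, value = data.split(b":", 1)   # fails if no ':' is present
    return {"name": name.decode(), "value": int(value)}  # fails on non-numeric value

def fuzz(target, iterations: int = 10_000):
    """Feed randomly generated inputs to the target and collect
    any that cause it to raise an unexpected exception."""
    crashes = []
    for _ in range(iterations):
        length = random.randint(0, 32)
        data = bytes(random.randint(0, 255) for _ in range(length))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

if __name__ == "__main__":
    found = fuzz(parse_record)
    print(f"{len(found)} crashing inputs found")
    for data, exc in found[:5]:
        print(repr(data), "->", type(exc).__name__)
```

Purely random inputs like these rarely reach interesting code paths, which is why manual fuzzing has such a low hit rate; the hope (or fear) described in the reports is that AI could generate smarter, more targeted test inputs.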

Scam Email Writing Automated

However, it appears AI's main benefit for scammers doesn't involve malware at all. Instead, it can be useful for phishing and other scams in which bogus emails and messages try to trick users into clicking on links and handing over login details and other personal information.

In some cases AI can help generate thousands of variants of the same basic message, letting scammers test which are most likely to fool victims. In other cases, it lets scammers create more believable messages in languages they don't speak themselves, expanding their base of potential victims.

What's Your Opinion?

Do you fear AI will help malware scammers in the long run? Have you noticed an increase in suspicious emails recently? Has the content become more plausible?


Comments

doulosg:

99% of phishing emails are so obviously fake that it's elementary to avoid them, especially when there are dozens of the exact same message in my inbox. The one I might slip on, though, is the simple "The security code for your <xyz> account is 123654." Fill in something obvious, like "Microsoft," for xyz, and it could be a very believable message.

Since most phishing emails appear to be written by non-English speakers, which gives them the obvious marks of a scam, AI could potentially be the game changer that allows these folks to upgrade their language skills.