Software Updates May Bring Back Zero-day Bugs


At least half of the zero-day bugs Google has discovered this year were preventable, according to one of the company's security experts, who points to sloppiness by software developers.

The claims came in a talk and subsequent blog post by Maddie Stone. She's part of Google's Project Zero security program.

While precise definitions vary, the general principle of a zero-day bug is that attackers are exploiting the vulnerability before the software developers have had a chance to develop a fix - in most cases because the developers aren't even aware of the bug.

The name comes from the way the developers have a "zero day" head start in the race to update the software and patch computers before the attackers can take advantage.

Old Bugs Return

Google normally issues an annual report on zero-day bugs, most recently noting a major increase in the number discovered in 2021. However, Stone produced a "bonus report" on the 18 zero-day bugs found so far in 2022 because they showed some unusual patterns.

She said nine of the 18 bugs were simply variants of previously discovered vulnerabilities that were patched but where attackers found a new way to exploit them. Of those, four were discovered last year, meaning the fix didn't last long.

Stone also said nine of the 18 bugs "could have been prevented with more comprehensive patching and regression tests."

Updates May Cause Problems

Developers normally carry out regression tests when they update their software, for example to add new features or fix performance bugs. A regression test means checking whether previously fixed bugs have become a problem again after the update.
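As a simple sketch of the idea (the function and bug here are hypothetical, not from the article), a regression test pins down a previously fixed bug so that any later update which reintroduces it fails immediately:

```python
# Hypothetical regression-test sketch. Suppose an earlier version of this
# function crashed on empty input, and that bug was later fixed to raise a
# clean ValueError. The regression test locks in the fixed behavior so a
# future update cannot silently undo the patch.

def parse_port(text):
    """Parse a TCP port number, rejecting bad input explicitly."""
    if not text or not text.isdigit():
        raise ValueError(f"invalid port: {text!r}")
    port = int(text)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_empty_input_regression():
    # Guards the old (hypothetical) bug: empty input must raise ValueError,
    # never crash with an unhandled error or return a bogus value.
    try:
        parse_port("")
    except ValueError:
        return True
    return False

assert test_empty_input_regression()
assert parse_port("8080") == 8080
```

Running a suite of such tests after every update is what catches the pattern Stone describes: a patch that quietly stops working.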

In other words, there's a good chance that in several of the cases this year, the problem wasn't that attackers found something new. Instead, updates to software unintentionally stopped previous patches from doing their job.

Stone criticized both Microsoft and Google itself for failing to do enough to fix the root cause of zero-day bugs. Often this involves problems with the way operating systems or browsers handle memory; handled correctly, no application (including malware) should be able to access data the computer is processing for another application.

What's Your Opinion?

Are you surprised by these findings? Do you trust software developers to find permanent solutions to security flaws? Is it worth reducing the frequency of updates to software if it means less risk of accidentally undermining previous security patches?




I got out of programming back before the MSDOS days.
All I did was fix or modify others' work.
Even then, end users were reluctant to pay to fix things unless they were really broken, and creators of said programs were even less likely to revisit their work.

As operating systems have evolved into the monstrous dinosaurs they are now, trying to be everything to everyone, they have become so complicated that the programmer really has no idea how their module will affect the entire system. Of course, each programmer will solemnly swear their bit will have no effect on anything other than what it is supposed to.

Naturally, Zero Day exploits prove them wrong every time.

I see this issue as an unending conundrum as those in charge are under too much pressure to move quickly without adequate understanding or testing, and those who create do so without knowing or understanding the parameters of the entire project or system.

What is the answer?

Until each subsystem plays in its own sandbox, this conundrum will never be solved.

Some will make money creating. Some will make money exploiting. Some will make money fixing.

Meanwhile, the end user will continue to exploit or be exploited and complain about it.

Basically, if you can't stand the heat, get out of the kitchen!