As a general rule, IT departments are focused on the next threat: the zero-day vulnerabilities lurking in the system, the trapdoors hidden from view. This is understandable. We fear the unknown, and zero-day vulnerabilities are unknown by definition. The mind inevitably leaps ahead to the untold damage they might cause if and when attackers finally identify them.
But this focus on the next threat, the unknown risk, might be harming the organization. Because as it turns out, most of the vulnerabilities businesses should be worrying about have already been identified.
According to a recent report from Securin, the vast majority (76%) of vulnerabilities exploited by ransomware in 2022 were old, discovered between 2010 and 2019. Of the 56 vulnerabilities tied to ransomware in 2022, 20 were discovered between 2015 and 2019.
In other words: At a time when ransomware attacks are perhaps the biggest threat facing organizations, the vulnerabilities most often exploited by ransomware attackers are already known to us. And yet countless companies have left themselves open to them.
IT departments can’t entirely be blamed for this persistent problem — most are overworked, overstretched, and engaged in triage with a never-ending cascade of threats from every direction. Still, proper cybersecurity hygiene mandates that IT teams take these old vulnerabilities seriously and factor them into their everyday security processes.
Why Old Vulnerabilities Are Neglected
Before examining how exactly companies can get more vigilant about old vulnerabilities, let’s drill deeper into the problem as it exists today.
To begin with, it’s worth noting that this isn’t an abstract concern. Earlier this year, it was revealed that multiple threat actors had exploited a three-year-old vulnerability in Progress Telerik to breach a US federal agency. “Exploitation of this vulnerability allowed malicious actors to successfully execute remote code on a federal civilian executive branch (FCEB) agency’s Microsoft Internet Information Services (IIS) web server,” the affected agencies said.
Part of the problem boils down to the life cycle of a given vulnerability. When a vulnerability is first identified, when a zero-day is born, everyone pays attention. The vendor issues a patch, and some percentage of affected IT teams test and install it. But not every affected team gets around to it: patching might not seem like a priority, or it might simply slip through the cracks of their process.
Months or years pass, and the zero-day vulnerability becomes just another one of hundreds of old vulnerabilities. High turnover in IT departments means new arrivals might not even be aware of the old vulnerability. If they are aware of it, they might assume it’s already been taken care of. In any case, they have other things to worry about — including but not remotely limited to all the new zero-day vulnerabilities being identified on a regular basis.
And so the old vulnerability lives on in the network, just waiting to be rediscovered by a savvy attacker.
Working Proactively to Patch Old Vulnerabilities
Given all of that, there’s no question that businesses need to be more vigilant about old vulnerabilities. Granted, keeping one eye on the past and one eye on the future isn’t easy, especially not when IT departments have so much else to worry about. And it’s true that IT departments can’t expect to patch everything. But there are fairly simple approaches that can minimize the risk of an old vulnerability coming back to haunt an unprepared organization.
The simplest and most effective approach is to put an optimized patch management process in place. That means achieving a comprehensive view of your attack surface, including old vulnerabilities, and making conscious judgments about the best way to allocate your IT team’s resources.
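To make that prioritization judgment concrete, here is a minimal sketch of a scoring function that ranks an inventory of known vulnerabilities by severity, age, and whether they are known to be actively exploited (as in CISA’s KEV catalog). Everything here is illustrative: the CVE IDs are placeholders, and the weights are arbitrary starting points a real team would tune to its own risk model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Vuln:
    cve_id: str            # placeholder IDs below, not real CVEs
    cvss: float            # base severity score, 0-10
    published: date        # when the vulnerability was disclosed
    known_exploited: bool  # e.g., listed in a known-exploited catalog

def patch_priority(v: Vuln, today: date = date(2023, 6, 1)) -> float:
    """Higher score = patch sooner. Old, severe, actively exploited
    vulnerabilities float to the top instead of being forgotten."""
    age_years = (today - v.published).days / 365.25
    score = v.cvss
    score += min(age_years, 5.0)   # age adds exposure risk, capped
    if v.known_exploited:
        score += 10.0              # active exploitation dominates
    return score

inventory = [
    Vuln("CVE-2019-XXXX", cvss=9.8, published=date(2019, 12, 13),
         known_exploited=True),
    Vuln("CVE-2023-YYYY", cvss=7.5, published=date(2023, 3, 1),
         known_exploited=False),
]

# The four-year-old, actively exploited flaw outranks the newer one.
for v in sorted(inventory, key=patch_priority, reverse=True):
    print(v.cve_id, round(patch_priority(v), 1))
```

The point of the sketch is simply that age and active exploitation should raise a vulnerability’s priority rather than letting it fade from view.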
These judgments should be informed by standard vulnerability repositories like the National Vulnerability Database (NVD) and MITRE’s CVE list, but they should also go beyond them. The repositories most often consulted by IT departments contain glaring holes, and those omissions play a definite role in the continued exploitation of old vulnerabilities by bad actors. Moreover, many standard risk calculators tend to underestimate risk.
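As a minimal sketch of consulting such a repository programmatically, the snippet below builds a lookup URL for the NVD’s public CVE API (version 2.0); the actual HTTP request and response parsing are left out. The endpoint and `cveId` parameter reflect the API as publicly documented, but confirm against NVD’s current documentation before relying on them. CVE-2019-18935 is widely reported as the Progress Telerik vulnerability mentioned above.

```python
from urllib.parse import urlencode

# Base endpoint of the NVD CVE API, version 2.0 (assumed current).
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(cve_id: str) -> str:
    """Build the request URL for looking up a single CVE record."""
    return f"{NVD_API}?{urlencode({'cveId': cve_id})}"

url = nvd_query_url("CVE-2019-18935")
print(url)
```

A real workflow would fetch this URL, parse the JSON response, and cross-reference the result against other sources, since no single repository is complete.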
Organizations cannot properly evaluate the threats they face if they are working from partial or improperly weighted information. They need to know the precise risks they face, and they need to be able to prioritize those risks accordingly.
At the end of the day, a vulnerability is a vulnerability, whether it was identified five years ago or five hours ago. The age of a vulnerability is irrelevant if and when it’s exploited — it’s capable of leading to just as much damage. But for IT teams, old vulnerabilities do possess one distinct advantage: we already know about them. Putting that knowledge to use — working proactively to identify and patch those vulnerabilities — is essential to keeping today’s organizations secure.