Enterprises are making tremendous investments in their digital transformations, and no wonder: Increasingly, those who can move away from old, manual ways of managing technology and adopt new ways of thinking fastest will come out on top.
That’s especially true when it comes to rules-based policy enforcement. Put simply, continuing to rely on this outdated approach hampers efforts to build more efficient and agile businesses.
For more than two decades, rules that enforce data-security policy have been the norm. Getting security rulesets to work properly has always been part science, part art and part budget, because it takes teams of super-smart professionals to keep them coordinated and optimized. The underlying problem is that rules simply don’t scale.
For instance, security teams would create rules that attempted to predict how an employee would use a particular system, how the data they handled should move, how systems and applications should behave, how network traffic should flow, and so forth. Since their inceptions, intrusion-detection systems, firewalls, endpoint anti-malware, web application security, data-leak protection and more have all been managed by building such rulesets. As a result, rules are everywhere.
Many of the old, rules-based security strategies attempted to predict how users were going to use their systems. As systems became more complex and the number of applications and the amount of data grew, that became an exercise in futility. Today, the combinations of users, applications, data and contexts that must be accounted for are effectively endless. It’s simply not possible to predict every combination, so the rules can’t function effectively.
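To get a sense of the scale, consider a back-of-the-envelope calculation. The numbers below are purely hypothetical, chosen only to show how quickly the combinations multiply:

```python
# Purely illustrative figures for a mid-sized enterprise, not measurements
# from any real environment.
users = 5_000          # employees and contractors
applications = 300     # SaaS and internal apps in use
actions = 20           # read, edit, share, download, print, ...
contexts = 10          # office network, VPN, personal device, travel, ...

# Every combination is a scenario a predictive ruleset would have to anticipate.
scenarios = users * applications * actions * contexts
print(f"{scenarios:,} user/app/action/context combinations")  # 300,000,000
```

Even with generous simplifications, that is hundreds of millions of scenarios, and no team can enumerate those in hand-written rules.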
As an example, I’ll share a personal story. I recently ran into conflicting rules when I tried to access a certain system. One rules-enforced policy allowed me to access it, but another denied my access. Depending on the order in which the rules were triggered, I was granted and then denied access, or denied access altogether. It’s a good illustration of how we’ve created a world where rules and their resulting dependencies are too complex to be useful.
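Here’s a minimal sketch of how that kind of conflict plays out. The policy engine, the rules and the request attributes are all hypothetical; the point is only that with first-match evaluation, the outcome hinges entirely on rule order:

```python
# Hypothetical first-match policy engine: the first rule whose condition
# matches the request decides the outcome.
def evaluate(rules, request):
    for condition, decision in rules:
        if condition(request):
            return decision
    return "deny"  # default-deny if nothing matches

request = {"group": "engineering", "system": "finance-portal"}

allow_engineering = (lambda r: r["group"] == "engineering", "allow")
deny_finance      = (lambda r: r["system"] == "finance-portal", "deny")

# The same two rules, evaluated in different orders, give opposite answers.
print(evaluate([allow_engineering, deny_finance], request))  # allow
print(evaluate([deny_finance, allow_engineering], request))  # deny
```

Multiply that by thousands of rules written by different teams over many years, and order-dependent surprises like this become routine.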
The challenges associated with rules are more than a minor nuisance. The sheer volume of alerts they generate and the errors they produce overwhelm security analysts and IT managers. Wave after wave of false-positive alerts floods their screens daily.
Analysts try to make sense of them all. They’ll collect the alerts being generated and feed them into their security information and event management systems to try to determine where data is flowing and where it shouldn’t be, to identify indications of compromise, and to spot patterns and anomalies that should be investigated. But the reality is that there is so much going on that it’s impossible to make heads or tails of it. It’s a total time-suck.
The failure of rules can also pose a serious risk to security efforts. Whether it’s network monitoring, data-loss protection tools or other security technologies, analysts will loosen alerting thresholds to ridiculously permissive levels in an attempt to stop the flood of policy-based alerts. Or they will shut alerts off altogether and put the system in monitor-only mode, which leaves systems and data unnecessarily open to risk. As for enterprises that try to maintain suitable policy thresholds, their analysts waste time chasing ghosts (situations that look risky but aren’t), and attackers slip in through the noise.
Bottom line: Enterprise security, infrastructure, applications and data flows, along with the possible actions of adversaries and malicious or careless insiders, are just too complex to model and express in rules. There are too many variables involved, too many changing conditions, and it’s impossible to keep up, no matter how talented and smart analysts are.
So, there needs to be a change. And that doesn’t mean rules need to be stricter or smarter. It means we must shift our mindset and use a new approach altogether, one that focuses on the data itself rather than the rules that govern users. The mantra is simple: Protect all data, and trust no one.
This new approach fundamentally shifts the emphasis of a data-security program from prevention to data protection. It starts from the assumption that all data is important. Sales pipelines, forecasts, competitive campaigns, customer contact information, product road maps, prototype drawings: It’s all critical IP, and it’s all worth protecting.
This next-gen approach to data protection also assumes that you trust no one. In other words, the system doesn’t care whether an employee is considered a trusted user. It works at the data level, tracking and monitoring all data activity and flagging anomalies, while keeping copies of all files for fast retrieval and analysis. Only when an anomaly is spotted do analysts investigate further.
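As a rough illustration of what flagging anomalies at the data level can look like, here is a minimal sketch. The event fields, the baseline and the threshold are hypothetical, and a real product would use far richer signals:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical file-activity events: (user, day, files moved off the endpoint)
events = [
    ("alice", 1, 4), ("alice", 2, 6), ("alice", 3, 5),
    ("alice", 4, 120),   # sudden spike in files leaving the endpoint
    ("bob", 1, 30), ("bob", 2, 28), ("bob", 3, 33), ("bob", 4, 31),
]

history = defaultdict(list)
for user, day, count in events:
    baseline = history[user]
    # Flag activity far above the user's own recent baseline,
    # regardless of how "trusted" that user is.
    if baseline and count > 3 * mean(baseline):
        print(f"anomaly: {user} moved {count} files on day {day} "
              f"(baseline ~{mean(baseline):.0f})")
    baseline.append(count)
```

Nothing here tries to predict what any one employee is allowed to do; the system simply watches how data actually moves and surfaces the outliers for a human to review.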
It should be clear that enterprises don’t have time to create and manage predictive rules manually anymore. And, perhaps more importantly, enterprises that cling to policy-based security defined by manual rulemaking will not only fall behind their competitors, but they’ll also be far less secure than those that focus on safeguarding their data.
(Rob Juncker is senior vice president of research and development and operations at Code42. His background is in security, cloud, mobile and IT management. Before joining Code42, he was vice president of research and development at Ivanti, a leader in the security and IT management space.)