Recent advancements in artificial intelligence (AI) have rekindled the dream of fully automated vulnerability remediation. The industry is booming with attempts to provide tailored remediation that works in your code base, taking your unique environment and circumstances into account, powered by generative AI. The tech is incredible and is already showing signs of success. The big question remains: Are we ready to embrace it?
Ask any developer using GitHub Copilot or one of its alternatives, and you will find wonderful examples of how AI can generate context-aware code completion suggestions that save a ton of time. You'll also find examples of irrelevant, overly complex, or flat-out-wrong suggestions generated in bulk.
There's no doubt we're witnessing a breakthrough in technology that will produce automated code generation far better than anything we've seen before. However, tech is only one piece of the remediation puzzle, with significant weight falling on process and people.
New Tech, Old Challenges
Every change to an application is a balancing act between introducing improvements and protecting existing functionality. Urgent changes, including security fixes, take this problem to the extreme by introducing tight schedule constraints and a strong pressure to get things right. Applying patches can have unexpected consequences, which in the worst case can mean an outage.
Ask any IT manager who handles patching, and you will hear a never-ending list of horror stories where users were unable to go about their day-to-day work because of a seemingly benign patch. But failing to apply a patch, only to have the vulnerability exploited as part of a breach, can also have devastating consequences for the entire organization, as readers of this column are acutely aware.
Good software engineering is focused on finding a balance that maintains the ability to apply changes to the application at a fast pace while protecting the application and its maintainers from bad changes. There are plenty of challenges in achieving this goal, including legacy software that cannot be easily changed and ever-changing system requirements, to name just a couple.
In reality, maintaining the ability to change software is a difficult goal that cannot always be attained, and teams have to accept the risk of some changes resulting in unexpected consequences that require further remediation. The main challenge for engineers lies in ensuring that a proposed change will produce the expected results, not in writing the code itself, which generative AI can now do for us. Security fixes are no different.
Overlapping Responsibility for Application Security
Another major challenge that becomes acute in large enterprises is the fragmentation of responsibility. A central AppSec team in charge of reducing risk across the organization cannot be expected to understand the potential consequences of applying a specific fix to a particular application. Some solutions, such as virtual patching and network controls, allow security teams to fix problems without relying on development teams, which could simplify mitigation, reduce required engineering resources, or eliminate the need for buy-in altogether.
Politics aside, solutions like these are blunt tools that are bound to cause friction. Network controls, such as firewalls and Web application firewalls (WAFs), are an area where IT and security traditionally have much autonomy, and developers just have to deal with it. They represent a clear choice to put control before productivity and to accept the added friction for developers.
For application vulnerabilities, fixes require changing either the application's code or the application's environment. While changing the application's code falls squarely within the development team's responsibility, changing the environment has long been a way for security teams to intervene, and it might present a better path for applying AI-generated remediations.
In the on-premises world, this usually meant security agents managing workloads and infrastructure. In managed environments, such as a public cloud provider or a low-code/no-code platform, security teams can fully understand and examine changes to the environment, which allows deeper intervention in application behavior.
Configuration, for example, can change the behavior of an application without changing its code, applying a security mitigation while limiting unintended consequences. Good examples include enabling built-in encryption at rest for a database, blocking public data access, or masking sensitive data handled by a low-code app.
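To make this concrete, here is a minimal sketch of what such an environment-level fix might look like, assuming an AWS-hosted application and the boto3 SDK; the bucket name is hypothetical, and your platform will have its own equivalent controls:

```python
# Minimal sketch of an environment-level mitigation: blocking all public
# access to a storage bucket without touching application code.
# Assumes AWS credentials are already configured for boto3.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-app-data",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict public cross-account access
    },
)
```

A security team could apply a change like this without modifying a single line of application code. The trade-off remains, though: if the application legitimately serves public objects from that bucket, this mitigation will break it.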
Striking the Right Balance
It is important to note that environment changes can have adverse effects on the application. Encryption comes at a performance cost, and masking makes debugging more difficult. However, these are risks more and more organizations are willing to take for the benefit of gaining security mitigations at a lower engineering cost.
At the end of the day, even once a mitigation is available, organizations have to balance the risk of security vulnerabilities against the risk of applying mitigations. AI-generated mitigations clearly reduce the cost of remediation, but the risk of applying them will always exist. However, failing to remediate for fear of consequences puts us at one end of the spectrum between those two risks, far from a healthy balance. Automatically applying every auto-generated remediation would be the other end of the spectrum.
Instead of choosing either extreme, we should acknowledge both the vulnerability risk and the mitigation risk and find a balance between the two. Mitigations will sometimes break applications. But refusing to accept that risk means, by default, accepting the risk of a security breach due to a missing mitigation.