Most security teams can benefit from integrating artificial intelligence (AI) and machine learning (ML) into their daily workflow. These teams are often understaffed and overwhelmed by false positives and noisy alerts, which can drown out the signal of genuine threats.
The problem is that too many ML-based detections miss the mark on quality. And perhaps more concerning, the incident responders tasked with triaging those alerts can’t always interpret their meaning and significance correctly.
It’s fair to ask why, despite all the breathless hype about AI/ML’s potential, so many security users feel underwhelmed. And what needs to happen in the next few years for AI/ML to fully deliver on its cybersecurity promises?
Disrupting the AI/ML Hype Cycle
AI and ML are often confused, but cybersecurity leaders and practitioners need to understand the difference. AI is a broader term that refers to machines mimicking human intelligence. ML is a subset of AI that uses algorithms to analyze data, learn from it, and make informed decisions without explicit programming.
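To make that distinction concrete, here is a toy, pure-Python sketch of the ML idea: instead of an analyst hand-coding a rule such as "flag any host with more than N failed logins," a simple nearest-centroid model learns a decision boundary from labeled examples. All feature values, thresholds, and labels below are invented for illustration; real deployments would use a proper ML library and far richer data.

```python
# Toy illustration of ML vs. explicit programming: a nearest-centroid
# classifier learns to label alerts from examples rather than from
# hand-written rules. All data below is invented for illustration.

def centroid(points):
    """Average each feature across a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(examples):
    """Learn one centroid per label from (features, label) pairs."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical features: [failed logins/hour, MB exfiltrated, alerts in past day]
training_data = [
    ([2, 0.1, 1], "benign"),
    ([1, 0.0, 0], "benign"),
    ([40, 50.0, 12], "malicious"),
    ([55, 80.0, 20], "malicious"),
]

model = train(training_data)
print(predict(model, [3, 0.2, 2]))     # near the benign centroid
print(predict(model, [48, 60.0, 15]))  # near the malicious centroid
```

The point of the sketch is the workflow, not the algorithm: the program never contains an explicit "if failed logins > X" rule; the boundary between benign and malicious is inferred from the labeled examples.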
When faced with bold promises from new technologies like AI/ML, it can be challenging to determine what is commercially viable, what is just hype, and when, if ever, these claims will deliver results. The Gartner Hype Cycle offers a visual representation of the maturity and adoption of technologies and applications. It helps reveal how innovative technologies become relevant to solving real business problems and opening new opportunities.
But there’s a problem when people begin to talk about AI and ML. “AI suffers from an unrelenting, incurable case of vagueness — it is a catch-all term of art that does not consistently refer to any particular method or value proposition,” writes UVA Professor Eric Siegel in the Harvard Business Review. “Calling ML tools ‘AI’ oversells what most ML business deployments actually do,” Siegel says. “As a result, most ML projects fail to deliver value. In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.”
While AI and ML have undoubtedly made significant strides in enhancing cybersecurity systems, they remain nascent technologies. When their capabilities are overhyped, users will eventually grow disillusioned and begin to question ML’s value in cybersecurity altogether.
Another key issue hindering the broad deployment of AI/ML in cybersecurity is the lack of transparency between vendors and users. As these algorithms grow more complex, it becomes increasingly difficult for users to deconstruct how a particular decision was rendered. Because vendors often fail to provide clear explanations of their products’ functionality, citing the confidentiality of their intellectual property, trust erodes and users are likely to fall back on older, familiar technologies.
How to Fulfill the Cybersecurity Promise of AI and ML
Bridging the gulf between unrealistic user expectations and the promise of AI/ML will require cooperation between stakeholders with different incentives and motivations. Consider the following suggestions to help close this gap.
The ultimate goal of cybersecurity is to prevent attacks from happening rather than simply reacting to them after the fact. By delivering ML capabilities that security teams can put into practice, we can break the hype cycle and begin fulfilling AI/ML’s lofty promise.