Generative AI is taking the IT security industry by storm. Every vendor, including Microsoft and Google, has a story to tell about new use cases and how it is incorporating generative AI and large language models (LLMs) into its security offerings.
Generative AI and LLM training are multimillion-dollar endeavors. Although ChatGPT is frequently discussed, its usefulness in the security space is limited today by the constraints placed on it, including safety tuning and no access to the live Internet. Security practitioners, however, live very much in the now, with zero-day threats and an endless flow of new threats, tactics, and techniques. Connecting generative AI to the local enterprise data store and allowing access to the Internet are necessary to realize the full potential of this revolutionary technology.
Leading security providers are doing just this by allowing Internet access, providing APIs to their security-specific generative AI solutions, and training the LLMs against their vast troves of security intelligence. Therefore, it is appropriate for forward-leaning security services providers and enterprise security leaders to think about the role of generative AI in a security operations center (SOC), including infusing tools and processes with this powerful capability. Here are some of the ways security-focused generative AI can benefit different members of the SOC team.
Level 1: Cybersecurity Specialists
Specialists are the entry-level staff in the SOC who triage the stream of alerts generated when the technology identifies an unusual behavior or a defined alert condition. They are charged with confirming true positives and filtering out false positives. Generative AI can help them understand what an alert means and make better decisions about whether to escalate the issue, especially when an alert fires in the middle of the night. AI is a resource that never sleeps.
Generative AI can explain not only an atomic event but also a sequence of events, and it can shed light on a vulnerability that might be affecting a specific device. Ultimately, we'll see generative AI used to automate some of the work at this level, including triaging and prioritizing alerts, in much the same way we use AI and machine learning in the SOC today. What generative AI adds is the ability for humans to ask questions and get deeper responses than search engines provide today.
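To make that concrete, here is a minimal triage sketch in Python. The call_llm helper, the prompt wording, and the alert fields are all assumptions standing in for whatever security-focused generative AI endpoint and SIEM schema an organization actually has; the point is the shape of the workflow, not a production implementation.

```python
import json

# Hypothetical helper standing in for whichever security-focused
# generative AI endpoint your provider exposes; swap in the real SDK call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your provider's generative AI API")

TRIAGE_PROMPT = """You are a SOC Level 1 triage assistant.
Explain in plain language what the alert below means, then answer with a
JSON object: {{"verdict": "escalate" | "false_positive", "reason": "<one sentence>"}}.

Alert:
{alert}
"""

def triage_alert(alert: dict) -> dict:
    """Ask the model to explain an alert and recommend a disposition."""
    prompt = TRIAGE_PROMPT.format(alert=json.dumps(alert, indent=2))
    # A real implementation should validate the response before trusting it.
    return json.loads(call_llm(prompt))

# Example alert as it might arrive from a SIEM (field names are illustrative).
alert = {
    "rule": "Multiple failed logins followed by success",
    "user": "jdoe",
    "source_ip": "203.0.113.42",
    "count": 57,
    "window_minutes": 10,
}
# verdict = triage_alert(alert)  # the human analyst reviews before acting
```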
Level 2: Cybersecurity Analysts
These folks take the handoff from Level 1, validate the true positive, compile all the relevant data, and investigate the incidents. In the managed security services space, this can be particularly challenging because analysts deal with multiple, diverse customer environments. That means managed SOC operators need to develop specialists who deeply understand specific environments — but there are practical limits to the number of specialists a single provider can have available 24×7.
Generative AI can be a terrific resource for parsing a sequence of events. It can quickly and efficiently provide an explanation of what took place, the nature of the threat, and the vulnerability of the attacked resource. Instead of being specialists in one tech stack that is liable to change, Level 2 analysts must develop deep expertise in using generative AI. The buzzword is “prompt engineering,” or knowing how to structure a prompt to get an optimized response from the AI. After all, the answers are in the data; knowing how to ask the right question is the art form.
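As a rough illustration of what prompt engineering means in practice, the sketch below contrasts a vague prompt with a structured one for the same event sequence. The event format, the role framing, and the ATT&CK-mapping task are assumptions; the technique of spelling out role, context, task, and output format is the part that carries over.

```python
# Illustrative events as a Level 2 analyst might paste them from a SIEM;
# the format and field content are assumptions.
events = [
    "09:01 powershell.exe spawned by winword.exe on HOST-14",
    "09:02 outbound TLS connection from HOST-14 to 198.51.100.7:443",
    "09:05 new scheduled task 'Updater' created on HOST-14",
]
event_text = "\n".join(events)

# Weak prompt: no role, no context, no output contract.
vague_prompt = "What happened here? " + "; ".join(events)

# Structured prompt: role, context, explicit tasks, and a fixed output
# format, which makes the answer easier to verify and to paste into a ticket.
structured_prompt = f"""Role: senior incident responder assisting a Level 2 analyst.
Context: the events below all come from one Windows host on a corporate network.
Tasks:
1. Reconstruct the likely attack chain, step by step.
2. Map each step to a MITRE ATT&CK technique ID.
3. List the evidence that would confirm or refute your hypothesis.

Events:
{event_text}

Answer in numbered sections matching the tasks."""
```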
Level 3: Senior Analysts
These people are the most sophisticated generative AI users, employing it to speed up their work in threat response, forensics, and threat hunting. They can leverage the AI’s ability to write scripts or search queries to further investigate a threat.
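Here is a sketch of what that might look like, again assuming a generic call_llm helper in place of a real vendor SDK and using Microsoft Sentinel's KQL as the target query language; any model-drafted query should be reviewed and tested before it touches production data.

```python
# Hypothetical helper standing in for whichever generative AI endpoint
# your provider exposes.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your provider's generative AI API")

def draft_hunt_query(ioc: str, lookback_days: int = 7) -> str:
    """Ask the model to draft a KQL hunting query for a network indicator."""
    prompt = (
        f"Write a Microsoft Sentinel KQL query over the DeviceNetworkEvents "
        f"table that finds connections to {ioc} in the last {lookback_days} "
        f"days, returning timestamp, device name, and initiating process. "
        f"Output only the query."
    )
    return call_llm(prompt)

# query = draft_hunt_query("198.51.100.7")
# print(query)  # the analyst reviews and tests the query before running it
```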
Other AI Applications in the SOC
Generative AI can also help people in a range of other roles in the SOC.
Generative AI is powerful technology that, once we figure out how to apply it effectively, will reduce the mean time to detect and respond to threats. That is the primary goal for every security team, followed by improving accuracy and reducing cost.
Cue the caveats. Of course, cybercriminals will (ab)use this technology to cook up new and more sophisticated threats and examine code to find vulnerabilities. We also have to stay aware of the inherent shortcomings of generative AI. It’s only as good and as current as the data it’s trained on. It can produce incorrect or biased results. And the answers you get are only as good as the questions you ask.
That said, it can help alleviate many pain points in the cybersecurity industry, including the shortage of skilled people and the increasing complexity of the infrastructures we protect. Will it replace people? No. Will it help them be more effective and productive? Yes, when used correctly.