Feds say AI favors defenders over attackers in cyberspace — so far

As large language models and other artificial intelligence tools have proliferated, researchers remain divided on whether highly capable AI tools will provide an advantage to attackers or defenders in cyberspace. According to two U.S. officials on the front lines of securing American computer systems, so far AI is giving the defender an edge — for now.

“Right now, there are probably more cybersecurity benefits from using AI than there are threats from our adversaries using it. But that’s a precarious balance, and something we at the FBI are not taking for granted,” Cynthia Kaiser, deputy assistant director for the FBI’s cyber division, said Tuesday during a speech at the Trellix Cybersecurity Summit.

Researchers have warned that generative AI could help malicious hackers discover vulnerabilities and automatically write code to exploit them. But Kaiser highlighted a number of ways defenders are using the technology to become more efficient, including detecting malicious activity on victim networks, threat hunting, incident response and software development.

Rob Silvers, the undersecretary for strategy, policy and plans at the Department of Homeland Security, echoed that sentiment Tuesday, saying that thus far he has seen cybersecurity practitioners make better use of generative AI than attackers. Silvers cautioned that “the jury’s still out” on whether AI will be “a net benefit to attackers or defenders,” but for now, defenders retain the advantage in his view.

“At this moment in time, I have seen deployed in the wild more defensive promising uses for AI than I have offensive actual uses,” Silvers said.

Highly capable hacking groups appear to be experimenting with AI, but researchers have seen little evidence that it is delivering major benefits. A report this month from Microsoft and OpenAI found that while advanced hacking groups from China, Russia, Iran and North Korea are all experimenting with large language models in their operations, they have so far derived only modest value from them.

Make attribution hard again

Kaiser and Silvers both cautioned that this status quo might not hold over the long term, and that defenders in the federal government and industry can’t afford to rest on their laurels.

For years, “attribution is hard” was a running theme in cybersecurity, reflecting the complexity of tying a specific cyber operation or piece of malware back to its source. But in recent years, government and industry have made significant strides in this area, as intelligence agencies learned to pair their own non-public intelligence with superior threat intelligence from a bustling private sector to unmask and expose hacks carried out by foreign nations and other malicious actors.

According to Kaiser, the pendulum is starting to swing back toward an environment in which it is getting easier for foreign hacking groups to hide their presence in victim networks and obfuscate their origins. As an example, she cited the activities of Volt Typhoon, a hacking group linked to China that has extensively targeted U.S. critical infrastructure.
Kaiser noted that Chinese actors have used “living off the land” techniques and obfuscation to “remain undetected, and continue to lurk in our systems, waiting for the right moment to cause devastating impacts.” Generative AI may not be completely upending the cybersecurity landscape right now, but Kaiser indicated that it is making it easier and more efficient for hackers working on behalf of foreign governments like China, Russia, Iran and North Korea to target and compromise victims.