First Wave of Vulnerability-Fixing AIs Available for Developers

GitHub has joined a growing list of companies offering AI-powered bug-fixing tools for software developers.

Developers who sign up for the beta program as part of GitHub’s Advanced Security can scan their code with CodeQL, the company’s static-analysis scanner, and receive suggested fixes for the most critical vulnerabilities. The feature automatically finds issues and offers “precise, actionable suggestions” on any pull request, and it should reduce the time developers spend remediating vulnerabilities, says Justin Hutchings, senior director of product management at GitHub.

“We have optimized the set of queries that we provide to developers by default with code scanning to those alerts that we think are the highest precision and the highest severity,” Hutchings says. “So we’re only interrupting developers, in those cases, when we think we have very high confidence reasons to believe that this is a problem that they should deal with.”
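The kind of suggestion such a tool surfaces can be illustrated with a classic case (a hypothetical sketch, not GitHub's actual output): a SQL injection flaw of the sort CodeQL's default query suite flags, alongside the parameterized-query rewrite an autofix feature would typically propose.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flawed: string formatting builds the SQL, so a crafted username
    # (e.g. "x' OR '1'='1") rewrites the query -- the classic SQL
    # injection pattern a static-analysis scanner flags.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % username
    ).fetchall()

def find_user_fixed(conn, username):
    # The style of fix a tool would suggest: a parameterized query,
    # which keeps the username as data rather than executable SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection matches every row: [(1,)]
print(find_user_fixed(conn, payload))   # parameterized query matches none: []
```

The fix is small and mechanical, which is exactly why it is a good target for automated suggestions on a pull request.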

With code scanning autofix, GitHub joins other application-security firms in turning to artificial intelligence (AI) platforms to fix vulnerabilities. Established player Veracode launched its platform, Veracode Fix, in June as a way of helping developers address the massive delay in fixing vulnerabilities. About 75% of vulnerabilities are typically left unfixed for more than a month, the company says.

Startup companies have also taken advantage of the excitement around generative AI and ChatGPT to launch their own bug-fixing services. In August, Mobb’s AI-powered solution for triaging vulnerability reports and providing fixes won the Black Hat Startup Spotlight competition. That same month, startup Vicarius announced vuln_GPT, a generative AI service that will find and fix vulnerabilities and misconfigurations using data from a remediation database run by the firm.

The tools aim to fix the vast security debt that developers and application-security professionals face every day, says Michael Assraf, CEO and co-founder of Vicarius.

“Vulnerability remediation is broken, for many reasons. Consolidation, personalization, and scalable remediation are definitely some of the top challenges,” he says. “We’ve taken many steps forward, but there’s still a long way to go as organizations can’t or don’t have the capacity to deploy required changes even when they know they need to.”

More Security in the Workflow

Automation through various generative AI capabilities will quickly become part of how developers work because the techniques make workers more efficient. Developers can turn the work of triaging and fixing vulnerabilities, which can take an average of five hours in enterprises, into minutes through the use of AI, says Eitan Worcel, CEO and co-founder at Mobb.

“Automated fixes are coming, whether it’s AI or not,” he says. “The good part of that is the No. 1 thing that developers should do is increase their testing coverage, and this allows them to do that.”

Overall, developers using generative AI tools are 15% to 30% more productive in writing and fixing code, according to an initial survey by Forrester Research.

“Certainly, I think the productivity gains are there,” says Janet Worthington, a senior security analyst with Forrester. “I think those all help you … save time, but you still need to make sure that you’re checking. So there still needs to be a developer in the loop.”

Developers should expect to see more AI capabilities integrated into how they work, including embedding security in the integrated development environment (IDE), adding AI checks of pull requests, and generally reducing the friction that developers encounter when they triage and fix vulnerabilities, says GitHub’s Hutchings.

“We’ve tried to take kind of a unique approach in terms of bringing security capabilities to developers where they work,” he says.

Don’t Trust, Certainly Verify

While the promise of AI improving cybersecurity functions is readily apparent, whether current AI systems are up to the task remains to be seen.

On the positive side, researchers presented evidence during last year’s Black Hat conference that GPT-3-based models could help incident responders sift through massive amounts of data to find security-specific information, allowing natural-language threat hunting and better classification of websites. And in August, the Defense Advanced Research Projects Agency (DARPA) launched a two-year competition aimed at using AI to improve software.

GitHub has certainly seen its efforts take off. In 2022, 35% of the code checked in by developers using its service was suggested by the company’s AI assistant, Copilot. This year, developers are on track to increase that share to 60%, and the company expects it to grow to 80% in five years, GitHub’s Hutchings says.

“Not only are developers completing tasks faster — nearly 90% report [that they do] — but what’s even more powerful is it helps them stay in the flow, focus on more satisfying work, and conserve mental energy,” he says.

Yet hallucinations, the tendency of generative AI systems to fabricate information or draw connections between unrelated facts, remain a danger and could result in bad suggestions for code fixes. Nearly one-third of developers (32%) have concerns about AI used in development, and 59% of corporate boards worry about AI’s use in their businesses, according to separate surveys.
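A concrete illustration of why a developer needs to stay in the loop (a hypothetical example, not drawn from any reported incident): an assistant can propose a plausible-sounding call such as `html.sanitize()` that simply does not exist in Python's standard library, and only review or a failing test catches it.

```python
import html

# A hallucinated fix might call html.sanitize(user_input), which sounds
# plausible but is not a real function; the stdlib helper for escaping
# HTML is html.escape(). Checking that a suggested API actually exists
# is the "developer in the loop" step.
assert not hasattr(html, "sanitize")  # the hallucinated helper does not exist

safe = html.escape("<script>alert(1)</script>")
print(safe)  # &lt;script&gt;alert(1)&lt;/script&gt;
```

A suggestion that fails this quickly is the easy case; subtler hallucinations, such as a real function called with the wrong semantics, are why reviewers still need to check AI-generated fixes.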

AI, Everywhere

There is a sense that AI will eventually become part of every developer’s experience; it’s not a matter of if, but when.

“AI will eat the world, and more particular and relevant to us, AI will eat the security world,” says Vicarius’ Assraf, channeling his inner Marc Andreessen.

The founder’s vision goes beyond suggesting coding patterns that eliminate vulnerabilities: he wants to build AI agents that can fix software autonomously.

“The ultimate goal is to build a worm-like crawler that will jump around the infrastructure and remediate threats completely independently, with no human intervention or minimal validation,” Assraf says. “That will increase cyber hygiene in a scalable and efficient way, which doesn’t necessarily require an expensive set of products or strong security personnel.”