Security teams are confronting a new nightmare this Halloween season: the rise of generative artificial intelligence (AI). Generative AI tools have unleashed a new era of terror for chief information security officers (CISOs), from powering deepfakes that are nearly indistinguishable from reality to creating startlingly authentic phishing emails designed to harvest login credentials and steal identities. The generative AI horror show goes beyond identity and access management, with attack vectors that range from smarter ways to infiltrate code to the exposure of sensitive proprietary data.
According to a survey from The Conference Board, 56% of employees are using generative AI at work, but just 26% say their organization has a generative AI policy in place. While many companies are trying to implement limits on generative AI use at work, the age-old search for productivity means that an alarming percentage of employees are using AI without IT's blessing or thinking about potential repercussions. For example, after some employees entered sensitive company information into ChatGPT, Samsung banned its use along with that of similar AI tools.
Shadow IT — in which employees use unauthorized IT tools — has been common in the workplace for decades. Now, as generative AI evolves so quickly that CISOs can’t fully understand what they’re fighting against, a frightening new phenomenon is emerging: shadow AI.
From Shadow IT to Shadow AI
There is a fundamental tension between IT teams, which want control over apps and access to sensitive data in order to protect the company, and employees, who will always seek out tools that help them get more work done faster. Despite countless solutions on the market taking aim at shadow IT by making it more difficult for workers to access unapproved tools and platforms, more than three in 10 employees reported using unauthorized communications and collaboration tools last year.
While most employees’ intentions are in the right place — getting more done — the costs can be horrifying. An estimated one-third of successful cyberattacks come from shadow IT and can cost millions. Moreover, 91% of IT professionals feel pressure to compromise security to speed up business operations, and 83% of IT teams feel it’s impossible to enforce cybersecurity policies.
Generative AI can add another scary dimension to this predicament when tools accumulate sensitive company data that, when exposed, could damage corporate reputation.
Mindful of these threats, many employers beyond Samsung are limiting access to powerful generative AI tools. At the same time, employees are hearing time and time again that they'll fall behind without using AI. Without solutions to help them stay ahead, workers are doing what they'll always do — taking matters into their own hands and using the tools they need to deliver, with or without IT's permission. So it's no wonder that The Conference Board found that more than half of employees are already using generative AI at work, permitted or not.
Performing a Shadow AI Exorcism
For organizations confronting widespread shadow AI, managing this endless parade of threats may feel like trying to survive an episode of The Walking Dead. And with new AI platforms continually emerging, it can be hard for IT departments to know where to start.
Fortunately, there are time-tested strategies that IT leaders and CISOs can implement to root out unauthorized generative AI tools and scare them off before they begin to possess their companies.
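One practical starting point is visibility: mining existing proxy or DNS logs for traffic to known generative AI services. As a rough illustration, the sketch below scans log lines for domains on a watchlist — the log format (timestamp, user, domain) and the domain list are illustrative assumptions, not a vetted blocklist, and a real deployment would draw on the organization's own logging pipeline.

```python
# Minimal sketch: flag requests to known generative AI domains in a proxy log.
# Both the log format and AI_DOMAINS are assumptions for illustration only.

AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "bard.google.com",
    "claude.ai",
}

def find_shadow_ai(log_lines):
    """Return (user, domain) pairs where the domain matches a known AI tool."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        _timestamp, user, domain = parts[:3]
        if domain.lower() in AI_DOMAINS:
            hits.append((user, domain))
    return hits

sample_log = [
    "2023-10-31T09:14:02 alice chat.openai.com",
    "2023-10-31T09:15:40 bob intranet.example.com",
    "2023-10-31T09:16:11 carol claude.ai",
]

print(find_shadow_ai(sample_log))  # → [('alice', 'chat.openai.com'), ('carol', 'claude.ai')]
```

A report like this is a conversation starter, not a verdict: the goal is to understand which tools employees are reaching for and why, so IT can offer sanctioned alternatives rather than simply block access.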
Shadow AI is haunting businesses, and it’s essential to ward it off. Savvy planning, diligent oversight, proactive communications, and updated security tools can help organizations stay ahead of potential threats. These will help them seize the transformative business value of generative AI without falling victim to the security breaches it will continue to introduce.