With the marketplace awash in new AI tools and existing tools rolling out sparkling new AI features, organizations lack visibility into which AI tools are in use, how they are used, who has access, and what data is being shared. Data from Nudge Security shows organizations have an average of six AI tools in use, with ChatGPT and Jasper.ai leading the way in adoption.
As businesses try, adopt, and abandon new generative AI tools, enterprise IT, risk, and security leaders are left trying to govern and secure their use without hindering innovation. Developing security policies to govern AI use is important, but it is not possible without visibility into which tools are being used in the first place.
The chart shows how widespread ChatGPT (OpenAI.com) adoption is among enterprises, and plenty of other contenders are scrambling for mindshare. Some AI tools, such as rytr.me and wordtune.com, are not as well known as ChatGPT, but security teams still have to know about them and be able to create policies governing their use. Huggingface.co, another fairly well-known AI tool, sits solidly in the middle of the pack.
Enterprise security teams have to consider how to handle discovery – identifying which generative AI tools have been introduced into the environment and by whom – as well as risk assessment, reviewing each AI vendor's security and risk profile. Just as important: as business users set up experimental accounts to try out these services and then abandon them, the organization has to make sure those accounts are deactivated properly.
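As a rough illustration of the discovery step, the sketch below checks proxy or DNS log entries against a small watchlist of generative AI domains mentioned in this article. The log format (`user,host` per line) and the domain list are illustrative assumptions, not an authoritative inventory or any vendor's actual tooling.

```python
# Minimal sketch of discovering generative AI tool usage from network logs.
# KNOWN_AI_DOMAINS and the 'user,host' log format are assumptions for
# illustration only.

KNOWN_AI_DOMAINS = {
    "openai.com",
    "jasper.ai",
    "rytr.me",
    "wordtune.com",
    "huggingface.co",
}

def matches_ai_domain(host: str) -> bool:
    """Return True if host is (or is a subdomain of) a known AI tool domain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in KNOWN_AI_DOMAINS)

def discover_ai_usage(log_lines):
    """Yield (user, host) pairs for entries that hit known AI domains.

    Assumes each line is 'user,host'; adapt the parsing to your real logs.
    """
    for line in log_lines:
        user, _, host = line.strip().partition(",")
        if host and matches_ai_domain(host):
            yield user, host

# Example usage with fabricated log lines:
sample = ["alice,chat.openai.com", "bob,example.org", "carol,huggingface.co"]
hits = list(discover_ai_usage(sample))
# hits pairs each user with the AI domain they contacted
```

A real deployment would pull the watchlist from a maintained source and feed results into the risk-assessment and account-deactivation workflows described above; the matching logic itself stays this simple.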