While the push-pull between defenders and attackers using artificial intelligence continues, there’s another security dimension to machine intelligence that should be of concern. Just as the rise of IoT devices has inadvertently created a new threat surface ripe for introducing vulnerabilities, some say that AI developers are rushing their wares to market without building in appropriate security controls.
While we are not talking about AI doomsday predictions for humanity from the likes of Elon Musk, a number of experts are urging promoters of AI to pump the brakes when it comes to cybersecurity.
“In traditional engineering, safety is built in upfront – but in software applications, security is all too often brought in from the rear,” said Mark Testoni, CEO of SAP’s NS2 national security division. “Developers are instead thinking about consumer convenience or running an enterprise. Most businesses will try to create more convenience for customers and employees, which means more connections and IoT devices, and using tools like AI.”
Even so, because of AI’s capacity to help organizations become more competitive or achieve better outcomes, there are often opposing forces inside an organization when it comes to locking down its risk.
For instance, in healthcare, an abundance of image data for many medical ailments, like tumors, is being used to train AI models to detect these conditions earlier and more accurately. And according to stats from market researchers at IDC, AI investments in the healthcare sector reached $12.5 billion in 2017 alone.
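To make that pipeline concrete, the sketch below shows, in rough strokes, the kind of image-classification training loop such projects rely on. It is an illustrative example only – the framework (TensorFlow/Keras), the layer sizes and the random stand-in data are assumptions, not details from any of the deployments discussed here.

```python
# Minimal sketch of an image-classification model of the kind used for
# tumor detection; layer sizes, input shape and the random placeholder
# data are illustrative assumptions, not details from any cited project.
import numpy as np
import tensorflow as tf

# Stand-in for a labeled medical-imaging dataset (scan -> tumor / no tumor).
images = np.random.rand(64, 128, 128, 1).astype("float32")
labels = np.random.randint(0, 2, size=(64,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "tumor"
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=16)
```

Notably, nothing in that loop enforces security controls; any vetting of the training data or the resulting model has to be bolted on around it.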
The American Medical Association recently weighed in on the issue with a full bulletin covering policy recommendations for using AI, noting the potential upside for the technology when it comes to patient outcomes.
“As technology continues to advance and evolve, we have a unique opportunity to ensure that augmented intelligence is used to benefit patients, physicians, and the broad health care community,” said AMA board member Jesse Ehrenfeld. “Combining AI methods and systems with an irreplaceable human clinician can advance the delivery of care in a way that outperforms what either can do alone.”
However, all too often, the investments being made in AI – in healthcare and across vertical industries – aren’t properly vetted. IBM learned this lesson last summer, when doctors at Memorial Sloan Kettering Cancer Center halted a trial with AI poster-child Watson. According to reports, the hospital became disenchanted with Watson’s AI-inspired treatment recommendations, which could have put patients’ health at further risk.
“Software-engineering teams are adding cool AI features to their wares – usually just buying them off the shelf and populating them without security testing,” said Fred Kneip, CEO of CyberGRX. “They basically just pull a bunch of code that they bought online with a credit card into a tool and call it a day. People are excited to work with AI, but right now there are no rules in place for security testing.”
And that can cause big issues if a cybercriminal decides to target these weaknesses.
“The industry is currently in the early stages of re-evaluating operations with regards to new cyber-threats and the integration of AI and IoT systems with life-supporting technologies, making it imperative to ensure new medical devices are well-deployed and operated properly,” BDO pointed out.
Locking down the risk in any implementation, be it healthcare or beyond, could be easier said than done – after all, complexity has always been the enemy of security.
“AI technology is pretty complex, built around processing large amounts of data and learning from it,” explained Oliver Tavakoli, CTO at Vectra, in an interview. “This makes AI a potent and hard-to-protect attack vector. Ultimately, there are hundreds of thousands of lines of code behind AI interfaces and entities – and some are even neural networks that are not totally understandable by the people that created them. These represent matrices of 1,000 by 1,000 by 1,000 data points, and there are that many values in this 3D cube that the neural network has learned. Finding every vulnerability in that footprint is not a reasonable goal.”
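Tavakoli’s point about scale is easy to sanity-check. The back-of-the-envelope sketch below, using arbitrary layer widths chosen purely for illustration, shows how quickly the count of learned values climbs even for a small network – which is why auditing each one is not a reasonable goal.

```python
# Back-of-the-envelope illustration of the scale Tavakoli describes.
# The layer widths below are arbitrary assumptions for illustration only.
layer_widths = [1000, 1000, 1000, 1000]  # a small fully connected stack

learned_values = 0
for fan_in, fan_out in zip(layer_widths, layer_widths[1:]):
    learned_values += fan_in * fan_out  # weight matrix entries
    learned_values += fan_out           # bias terms

print(f"{learned_values:,} learned values")  # 3,003,000 for this toy stack
```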
The other issue that is all too often swept under the rug is the sheer amount of data that AI requires to be successful – that means that a compromise can open the gates to a firehose of valuable, sensitive data, depending on where the application is deployed.
“Too many companies will say, just send my data to AIs without thinking about how it will be protected,” Kneip added. “What makes me uncomfortable is that for those tools to be successful they need as much data as possible as quickly as possible, to identify patterns a human eye can’t. But if you’re just going to throw everything at them, you have to understand that you’re putting out very sensitive information, even information that’s unnecessary to share. The immediacy, real-time aspect of these AI engines doesn’t allow much time to think through security implications of what exactly you’re sharing.”
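One practical counter to the “send everything” reflex Kneip describes is data minimization: strip records down to the fields the model actually needs before they leave the organization. The sketch below is a generic illustration – the field names and the downstream submit_to_ai_service() call are hypothetical placeholders, not a reference to any particular product.

```python
# Minimal data-minimization sketch: drop sensitive fields before handing
# records to an external AI service. The field names and the
# submit_to_ai_service() function are hypothetical placeholders.

ALLOWED_FIELDS = {"age", "diagnosis_code", "image_id"}      # what the model actually needs
SENSITIVE_FIELDS = {"patient_name", "ssn", "street_address"}  # never leaves the org

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only model-relevant fields."""
    cleaned = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    withheld = set(record) - set(cleaned)
    if withheld & SENSITIVE_FIELDS:
        # Log internally that sensitive fields were withheld, not sent.
        print(f"withheld sensitive fields: {sorted(withheld & SENSITIVE_FIELDS)}")
    return cleaned

record = {
    "patient_name": "Jane Doe",
    "ssn": "000-00-0000",
    "age": 54,
    "diagnosis_code": "C50.9",
    "image_id": "scan-0042",
    "street_address": "123 Main St",
}

payload = minimize(record)
# submit_to_ai_service(payload)   # hypothetical downstream call
print(payload)
```

Even a simple allow-list like this forces a team to decide, field by field, what the AI engine genuinely needs – exactly the pause for thought Kneip argues is missing.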
Thus, companies should take a considered approach to their AI implementations, and understand where the potential risk lies.