MONTREAL – As businesses increasingly turn to the cloud and to software-as-a-service applications, they are finding themselves with new attack surfaces and new types of threats – specifically, hard-to-thwart supply-chain attacks that have the potential for large amounts of collateral damage. In a keynote presentation at Virus Bulletin 2018 in Montreal on Wednesday, John Lambert, distinguished engineer at the Microsoft Threat Intelligence Center, laid out what the new normal looks like.
Put simply, the move to the cloud doesn’t mean that old adversaries focused on compromising the network perimeter go away; they just change up their tactics. Software supply-chain attacks in particular are on the rise, he added.
“Adversaries study our networks and our technology and our businesses, and it’s important that we study back,” Lambert explained. “If you’re a customer and you’re choosing a cloud or a SaaS service, expect your attackers to follow you. They’ll infiltrate by changing up their techniques to attack you and get at the same data they were trying to get to when they were attacking you on-prem.”
Supply-Chain Attacks Surge
The most notable emerging tactic is the targeting of the application-development supply chain, which Lambert said is becoming “pervasive across segments.”
Typo-squatting is a technique in which a piece of software or a URL is given a name almost identical to that of a legitimate property. Lambert pointed to malicious packages discovered on the npm registry as an example: one of them was named “mongose,” and it contained the source code of the legitimate Mongoose project along with extra malicious code.
Lambert brought up a similar case involving Python. Last year, 10 malicious libraries with names similar to those of legitimate code modules were found on the Python Package Index, often abbreviated as PyPI. These packages contained exactly the same code as the upstream libraries, except that their installation scripts contained malicious code.
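Registries and defenders can catch many of these lookalike names with a simple edit-distance check against popular package names. The sketch below is an illustration of that idea, not any registry’s actual implementation; the `POPULAR` allow-list and the distance threshold are hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of well-known package names.
POPULAR = {"mongoose", "requests", "urllib3"}

def typosquat_suspects(name: str, max_distance: int = 2) -> list:
    """Return popular packages this name is suspiciously close to."""
    return [p for p in POPULAR
            if p != name and levenshtein(name, p) <= max_distance]
```

With this check, “mongose” sits one edit away from “mongoose,” which is exactly the kind of near-miss attackers count on developers not noticing.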
Microsoft itself was even compromised this way, Lambert added. Its PowerShell Gallery service allows developers to create and upload various scripts. After Spectre and Meltdown were discovered, Microsoft published a script to help people determine whether they were vulnerable to the two issues; it was dubbed “Speculation Control.”
“Someone uploaded ‘Speculation Controls,’ plural, which mimicked all of the functions and was almost identical to the real code,” Lambert said. “But it contained a backdoor and a decoder.”
The attackers were savvy in this case: the only sign that something was amiss was the code for the decoder, because the backdoor itself was cleverly hidden in white space.
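One way code hides “in white space” is by sitting after a run of spaces long enough to push it off-screen in most editors. A scanner can flag that pattern cheaply. The sketch below is a minimal illustration of such a check, with an assumed gap threshold of 40 characters; it is not how Microsoft detected the backdoor.

```python
import re

def find_offscreen_code(source: str, min_gap: int = 40) -> list:
    """Flag line numbers where visible code is followed by a long run of
    spaces/tabs and then more code -- a layout that pushes the trailing
    payload off-screen in most editors."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if re.search(rf"\S[ \t]{{{min_gap},}}\S", line):
            findings.append(lineno)
    return findings
```

A line that reads normally on screen but carries a payload sixty spaces to the right would be flagged; ordinary indented code would not.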
The bad news is that because the names are so close to the real thing, developers may have been none the wiser when building applications with the malicious packages – thus unknowingly turning out malware-laced applications that could be adopted by thousands of users. In the Python case, for instance, the libraries were subsequently incorporated into software “multiple times,” according to Slovakia’s National Security Authority.
“Attackers are preying upon developers, and counting on them not picking up on the typos,” said Lambert. “This has the potential to create very broad attacks where the malicious functions cause lots of collateral damage.”
In terms of how defenders can start to cope with these new trends, Lambert pointed out that machine learning can provide the edge the security industry needs to keep up with the threats; he noted that, in general, defense is moving from “using security to protect data, to using data to provide security.”
“Now, the data and the telemetry and customer traffic information are all giving us a picture of the attacks, and this is informing how security works,” Lambert said. “To analyze that, machine learning is becoming an important tool.”
He used the concrete example of Windows error reporting to show how this could be put into action. When something crashes on Windows, a report is sent to Microsoft; as a result, Lambert said that the software giant receives billions of crash reports per month.
“The first thing we do is to ‘bucket’ what’s coming in so engineers can work on this efficiently,” Lambert said. “When a line of code crashes, we receive the app name, version and timestamp; the module’s version and timestamp; the exception code; and the offset into the module at fault. So, it fingerprints the event.”
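The bucketing Lambert describes amounts to grouping reports by a tuple of those fields. Here is a minimal sketch of that idea; the field names are illustrative, not the actual Windows Error Reporting schema.

```python
from collections import defaultdict

def bucket_key(report: dict) -> tuple:
    """Fingerprint a crash from the fields Lambert lists: app name,
    version and timestamp; module version and timestamp; exception
    code; and the offset into the faulting module.
    (Field names here are assumptions for illustration.)"""
    return (report["app_name"], report["app_version"], report["app_timestamp"],
            report["module_version"], report["module_timestamp"],
            report["exception_code"], report["module_offset"])

def bucket_reports(reports) -> dict:
    """Group incoming crash reports so each bucket holds one distinct crash."""
    buckets = defaultdict(list)
    for r in reports:
        buckets[bucket_key(r)].append(r)
    return buckets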
He said that when these reports are viewed through the lens of security, it’s possible to identify those that need further examination – for instance, when the exception code corresponds to a stack-buffer overrun.
“In that case, we know it’s security-relevant,” Lambert explained. “We’re not sure if it’s an exploit, but it’s worth further investigation.”
Similarly, the fingerprint will show if, say, a process named csrss.exe with no file version was involved (likely malware); whether the crash involved a known exploit for the code in question; and sometimes whether a zero-day may be headed into the wild.
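These checks can be expressed as simple rules over the crash fingerprint. The sketch below is a hypothetical rule set in that spirit, not Microsoft’s triage logic; the field names are assumptions, though `0xC0000409` is the documented Windows NTSTATUS value for a stack-buffer overrun.

```python
# Documented NTSTATUS code raised on a detected stack-buffer overrun.
STATUS_STACK_BUFFER_OVERRUN = 0xC0000409

def triage(report: dict) -> list:
    """Apply illustrative security-triage rules to a crash fingerprint.
    The rule set and field names are assumptions, not Microsoft's."""
    flags = []
    if report.get("exception_code") == STATUS_STACK_BUFFER_OVERRUN:
        flags.append("security-relevant: possible exploit attempt")
    if (report.get("process", "").lower() == "csrss.exe"
            and not report.get("file_version")):
        flags.append("suspicious: system process name with no file version")
    if report.get("known_exploit"):
        flags.append("matches known exploit for this code")
    return flags
```

Reports that trip a rule graduate to deeper, human-driven analysis; the rest stay in the ordinary reliability pipeline.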
After this initial examination, researchers can use the crash’s full data dump to better evaluate what’s happening.
“The idea is to analyze things in minutes, not hours, so systems are evolving towards targeted, tailored, deeper analysis, with telemetry at its heart,” Lambert said. “The good news is that many security systems connect to the cloud to collect telemetry and send information back. We are very much moving to a world where security’s value is the product plus the data coming from your customers.”