In August on a stage at Black Hat USA, I described in detail how Microsoft guest accounts could gain access to view and manipulate sensitive corporate data, including SQL servers and Azure resources. On top of that, I showed how a hacker could leverage Power Platform to create internal phishing applications that automatically authenticate their victims and to plant a backdoor that persists even if the hacked user is deleted. These are still open issues today, as mitigation falls on the customer's side of the shared responsibility model, meaning every Microsoft customer has to monitor and harden their own environment to close these security holes.
Preparing for the talk, I thought long and hard about what information to share, being well aware of the double-edged sword that security research can be. How can I share enough to raise awareness and drive people to action while not making the problem worse by putting it on hackers’ radar? After considering that we’ve already observed all of these issues being exploited in the wild, I decided to share the information. Hackers were already aware of the issues and were actively exploiting them; it was important that we leveled the playing field and gave security teams the knowledge and tools they need to keep their organizations secure.
This security researcher’s dilemma is not new, and I’m definitely not the first or only one to have to deal with it. I could point to a few other researchers who were in a similar position, where they could either remain silent or educate everyone about an unsolved security issue.
The Bad Old Days
Gone are the days when security researchers used to drop zero-day vulnerabilities on the Black Hat or DEF CON stages. That is, of course, a very good thing, although we did lose something as a security community; more on that later. At the same time, most vendors now realize that security researchers act to keep them honest and to improve the security of the entire community. As Kymberlee Price put it in a recent interview with Ryan Naraine, the fact that security researchers publish vulnerabilities doesn't make them the enemy; if they were the bad guys, they would be using the vulnerabilities, not telling you about them at all.
Admittedly, we do still get zero-day drops now and again, with the pain of Log4Shell still a fresh memory. But it feels like the average researcher, especially one who works for a respectable security vendor or consultancy, goes the vulnerability disclosure route first.
It is important to remember why people share this information publicly: they don't feel they can get the vendor to fix the problem within a reasonable timeframe. In the bad old days, security researchers essentially lit fires that forced vendors to fix things right away.
Where We Are Today
We're mostly in a whole different ballpark today. Most security researchers I know engage with the vendor, wait around for a reply, and then wait some more before they go out and expose things publicly.
It is important to note the balance of power here. As a researcher, you typically find yourself facing a giant enterprise with endless resources, a strong media presence, and a whole lot of lawyers. In many cases, you get the feeling that those endless resources are being used to avoid a PR crisis and minimize the issue rather than face it and actually make customers more secure. While some organizations do help researchers through those challenges, it always feels like David vs. Goliath.
The main issue with responsible disclosure, coordinated disclosure, and today's popular vulnerability disclosure platforms is that they leave every decision at the sole discretion of the organization whose vulnerability is being reported, without any transparency. Sure, we have the CVE system, but issuing CVEs is mostly at the vendor's discretion. For the cloud services we all rely on today, the situation is even worse: many vendors refuse to issue CVEs at all and offer no transparency into security issues discovered and fixed in their services.
Keeping Ourselves Honest
We've long known that discussing problems out in the open is the best way to push ourselves to do the right thing. We seem to rediscover this fact again and again in different contexts, be it developing open source software, challenging security by obscurity, or launching initiatives for open government. In today's state of vulnerability disclosure, many feel the pendulum has swung too far to one side, pushing vendors to make choices that minimize short-term visibility concerns at the cost of long-term customer trust and the security of the ecosystem.
Vendor security teams that receive vulnerability reports are doing an incredible job trying to get their organizations to fix the issues and build strong relationships with researchers. But they, too, need help. Creating urgency to fix an issue is difficult when the organization feels it controls the situation, even while its customers might be at risk.
Security conferences are where security researchers can help vendors make the right choices. They provide a tiny stick a security researcher can poke the vendor with, in hopes of spurring them into action. Information is put out there for the entire community to see and decide whether they accept the current state of things. In public.