News Wrap: Steam Zero Day Disclosure Drama, Unix Utility Backdoor | Threatpost

Why did Steam owner Valve say it made a “mistake” turning a researcher away from its bug bounty program? Who was behind a backdoor that was purposefully introduced into a utility used by Unix and Linux servers? And why is Facebook coming under fire for its “Clear History” feature? Threatpost editors Lindsey O’Donnell and Tom Spring break down the top stories of the week that have the infosec space buzzing.

For the full podcast, listen below or download here.

Below is a lightly-edited transcript of the news wrap podcast.

Lindsey O’Donnell: I’m Lindsey O’Donnell with Threatpost, and I’m here today with Tom Spring to break down for you the top news of this week, ending August 22. Tom, thanks for joining the Threatpost podcast. How are you doing today?

Tom Spring: I’m doing great. Thanks for asking.

LO: Good. Well, we’re just ending a big week. But we should probably talk about one of the biggest stories that you wrote about that garnered the most interest for a lot of Threatpost readers, which was the backdoor that was discovered on the Webmin utility for Unix servers. That was a really interesting story.

TS: Yeah, it just goes to show you how susceptible some of these libraries are to manipulation, and the clever ways people are now abusing them – whether it be a repository or a Git library. It’s really spooky, this backdoor that was found just recently, a couple weeks ago – or I should say, even just earlier this week. It’s been an evolving story. It touches on DEF CON and it touches on zero days. But in a nutshell, you’re right: There was a backdoor found in this utility for Linux and Unix servers, called Webmin, that could basically give attackers control over those servers. It was sort of a worst-case scenario.

But what’s interesting is that the seeds of this attack, of this vulnerability, were planted in – I believe it was April 2018; I’m not too sure of the month. But it was last year that a library was put into the code behind this Webmin tool, and it was backdated, and if I understand things correctly, it sat there unexploited and unnoticed for almost a year. And then during DEF CON, researchers looking at the code for Webmin discovered a way to exploit the utility, using a vulnerability found in a password-change CGI script. That was the first red flag, and it led to more attention on what was going on with this script. What they discovered was that it was not a mistake – it was intentionally inserted and backdated, as several versions of Webmin went out to users. And then everything unraveled, they patched it, and it has a happy ending. The patches went out, the community is aware, and the Webmin utility has been fixed. Obviously, there’s probably a percentage of installations that hasn’t been patched yet, but I think there’s a lot of awareness around this problem.
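[Editor’s note: The password-change flaw Tom describes was a shell command injection – user input from the CGI script reached a shell. As a rough illustration of that general class of bug (a hypothetical Python sketch, not Webmin’s actual Perl code), compare an unsafe and a safe way of handing a request parameter to a subprocess:]

```python
import subprocess

def vulnerable_check(old_password: str) -> str:
    # Unsafe: the user-supplied value is interpolated into a shell command
    # string, so a "|" in the input chains an attacker-controlled command.
    result = subprocess.run(
        f"echo {old_password}", shell=True, capture_output=True, text=True
    )
    return result.stdout

def safe_check(old_password: str) -> str:
    # Safe: the value is passed as a single argument in a list and is
    # never parsed by a shell, so metacharacters stay literal text.
    result = subprocess.run(
        ["echo", old_password], capture_output=True, text=True
    )
    return result.stdout

print(vulnerable_check("hunter2 | echo INJECTED"))  # injected command runs
print(safe_check("hunter2 | echo INJECTED"))        # treated as literal text
```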

But we’ve gotten some comments on the story asking who’s behind this. It’s one thing to say, “Okay, we understand what happened,” but who was behind inserting this malicious code? The other feedback we’ve been getting on this story is just how difficult it is to rely on one set of eyes, or even a couple sets of eyes, to look at code and make sure it’s secure – to not simply trust these commits, because it’s not a foregone conclusion that the code is secure. I think it’s really kicked up quite a bit of discussion within the open source community about how to handle problems like this. We see a lot of this within repositories, in terms of malicious code or just bad code reuse in libraries, and for software developers it really is a tremendous challenge. I think this was a really interesting example of how it can be abused by a malicious actor.

LO: Did Webmin give any indication about what it might change in the future to stop something like this from happening again? Or is that up to speculation at this point about what can be done?

TS: Well, they’re going to be updating their build process to use checked-in code from GitHub, rather than a local directory that is kept in sync – that had a lot to do with how this was overlooked. They also suggest users rotate all passwords and keys accessible from the old build systems, and audit all GitHub check-ins over the past year to look for commits that may have introduced similar vulnerabilities.
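[Editor’s note: Since the malicious commit was backdated, one audit technique is to compare each commit’s author date against its committer date – git records both, and a mismatch can be a sign of tampering. A rough sketch, assuming git and a local clone of the repository (note that legitimate rebases and cherry-picks also produce mismatches, so this flags candidates rather than proving anything):]

```shell
# List commits whose author date and committer date disagree -- a possible
# (not conclusive) sign of backdating. Run inside the repository clone.
git log --format='%h|%ad|%cd' --date=iso-strict |
  awk -F'|' '$2 != $3 { print "check commit:", $1, $2, "vs", $3 }'
```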

So, like I said, you hear a lot about the preventative measures being taken to stop these things from happening in the future. We’re seeing more code dependency, more code-reuse dependency. And I think we’ll probably hear about more tools and more solutions, and about repositories making more noise about how what they’re doing to keep code safe is better than what other repositories are doing.

LO: Right. Yeah, for sure. Well, I wrote an interesting story this week too. I covered an ongoing story that Tara actually reported first last week, and that we chatted about on last week’s news wrap – Tom, if you remember the zero day that was discovered in Steam by a researcher last week.

TS: Yeah.

LO: So that story has continued into an entire whirlwind of drama this week. The researcher said that he was barred from the bug bounty program of Steam’s owner, Valve, after disclosing that initial zero-day vulnerability in the Steam gaming client. And then, on the heels of that, he disclosed another zero-day privilege-escalation vulnerability. So it was a little crazy. And then just last night, Thursday evening, according to reports, Valve patched the recent Steam zero day, essentially called turning away the researcher who had found the zero days a big mistake, and updated its bug bounty program to address the issue.

TS: And Lindsey, was this HackerOne? I’m just trying to figure out who’s actually apologizing for, essentially, getting this researcher angry.

LO: Yeah. So this was Valve. But let me take a step back. If you remember, last week the researcher had some back and forth with Valve about the initial flaw that he had disclosed via its HackerOne bug bounty platform. Essentially, it came down to the fact that Valve didn’t consider local escalation-of-privilege bugs to be in scope for its bug bounty program.

TS: Yeah, and we got a lot of comments that were in support of Valve’s position on that, a lot of fanboys. But go ahead.

LO: So anyways, what happened was that Valve eventually told the researcher he was not allowed to publicly release the bug details, but he did anyway, 45 days after the initial disclosure. After that, the researcher said things escalated and he was banned from the platform. That led to a big discussion around disclosure in the hacker community. But I guess the story has a happy ending at this point, in that Valve has admitted it made a mistake in banning the researcher. And the other part is that it also updated its bug bounty program to start accepting local privilege-escalation-class vulnerabilities.

TS: That really warms my heart, because there’s so much animosity between these bug bounty programs and the researchers – or at least there can be. If you’d asked me yesterday how this was going to play out, I would have said another bug bounty researcher standoff gone awry. I had little to no hope that there was going to be any resolution on this. And it has been a big soap opera – really interesting stuff.

LO: It’s led to discussion around, as you said, Tom, disclosure issues like these in the hacker community. Katie Moussouris and a couple of others have weighed in, and the reactions I’ve seen online have been kind of split.

TS: What’s Katie’s take on that? I’m just curious.

Good for Valve for apologizing for the mistake in their dismissal of the vulnerabilities.
Their bounty triage provider chalked it up to disclosure being a “murky process”.
*Basic* triage is “murky”?
Isn’t the outsourced service supposed to navigate that?

— Katie Moussouris (@k8em0) August 22, 2019

LO: Yeah. I mean, she was basically pointing out how this is yet another kind of issue that we’re seeing when it comes to bug bounty programs. Because, as you know, Katie has talked a lot about some of the hurdles that bug bounty programs need to clear.

TS: Yes, she’s a very strong advocate for bug bounty programs and getting them right, that’s for sure.

LO: Yeah. So on Twitter, she did say that vendors have labeled full disclosure as irresponsible and planted the onus on researchers, while completely skirting their own liability and negligence. And she basically said, if the vendor failed to address it, suddenly it’s the researcher’s fault for speaking up – and how is that fair?

TS: Yeah, I’m sure you’ve spoken to researchers who are very unhappy about bug bounty programs. They get involved in them, and they’ve got handcuffs on: They find the vulnerabilities, and they’ve signed non-disclosure agreements, and the vendor sits on the vulnerability and doesn’t fix it. The researcher either wants to get paid and get notoriety, or just wants the internet to be a safer place and things to get fixed, and they basically have a gag order. If they want to go public with the vulnerability, they risk the backlash. I’m not too sure if that’s what happened in this case. But Valve and Steam – it doesn’t get much bigger in terms of an online gaming community. And I don’t know why anybody would sit on a dangerous bug impacting potentially as many users as use Steam – if it’s not hundreds of millions, at least 100 million users.

LO: I feel like there needs to be some sort of mediator, almost, between these companies and the researchers who are participating in their programs – and there’s a question of what role platforms like HackerOne or Bugcrowd play in that. I do feel as though, for something like this, there needs to be someone who can say to the vendor, you can’t just kick someone off because they reported something; and then, on the other hand, someone who can go to researchers who may be having their own issues. And this story has definitely split people: Some are arguing that the incident points to an issue in bug bounty platforms, as I said – that you can’t just ban someone from the platform after they find something you don’t like – but others are arguing that the researcher shouldn’t have disclosed the second bug in this manner, by essentially going around Valve and saying, well, you banned me, so now I’m going to disclose this zero-day vulnerability. But yeah, in terms of other big news this week, did you see the news about Facebook rolling out its new Clear History feature? That was kind of interesting.

TS: Yeah. Well, I guess it doesn’t have much of an impact for folks here in the US, at least not yet. And it’s not anything to get too excited about either. If you know more about it, please do share. But I don’t think we can all breathe a sigh of relief quite yet in terms of Facebook and the data it collects.

LO: As you mentioned, it sounds like this is just being rolled out in Ireland, South Korea and Spain – fairly random countries, so it doesn’t affect us here in the US. But I was reading reports and articles saying that while Facebook has this Clear History button that’s supposed to clear all your data – and consumers really wanted the ability to wipe out all the data that Facebook has on us – it’s not really what people had hoped for and expected, and it doesn’t truly clear all of our history. Essentially, Facebook still takes your data, but it will anonymize it so that, I guess, your data isn’t attached to you. It’s still collecting your data, though, and I think that’s what has people riled up at this point.

TS: Yeah. And I think that by virtue of the fact that you actually push the button, it sort of red-flags you as well – it takes extra effort to anonymize you, but also to scrutinize you at the same time. And I’ve got to figure this is one of those feel-good things that doesn’t serve anybody but Facebook: It says, oh, we’ve got a button for that now, and you don’t have to worry about it. We’re going to see a lot more of these types of privacy pushes – I know you just wrote about something that Google’s working on as well. You’ve got the big tech giants feeling the heat from government, whether it be the US government or foreign governments, which are very concerned about data privacy and the amount of information that’s being collected. And they’re coming up with a lot of new solutions to try to address that situation. They obviously don’t want to hurt their bottom line, so they’re coming as close as they can to offering a genuine solution without actually hurting the billions in profit that they make every year. It’s a fine line they’re walking. I think they’re really trying to head off a lot of the possible regulations coming down the pike by saying, look, we’ve got a button for that; look, we’ve got a browser extension for that.

LO: Yeah, I mean, it is interesting, because the alternative is that we’re essentially consuming free content online in return for our data. So the other option that companies like Facebook and Google are giving us is that we would have to pay instead.

TS: I don’t know, Lindsey, it’s not a black-and-white issue – and I don’t think you’re suggesting it is. But if they’re going to say you can use Google Chrome and surf the internet for free because we can take every tiny piece of data about you and monetize it, there has to be a middle ground. I’ve heard micropayments are becoming a bigger reality. I’m not familiar with all the success stories that newspapers and other websites are having in terms of giving access to these walled gardens that are going up left and right. Maybe that’s where we’re headed. I don’t see Facebook ever charging for access to their world, or Google for Chrome. I mean, it kind of might be nice if, for $10 a year, you didn’t have to worry about being tracked as much. But I still feel like even if you paid $10 a year to Facebook, and they said they weren’t tracking you, they’d probably be like, oops, sorry, we’re still tracking you.

LO: So we’re basically in too deep at this point.

TS: Yeah, I don’t know. I just don’t buy the argument that because you’re getting it for free, they should be able to collect every website you go to, browser fingerprints, your IP address, where you do your banking, your health care provider – I mean, these guys are making billions. And if they weren’t so hungry to keep their bottom line, they might be able to figure out a little more of a middle ground where they don’t have to completely suck up every little detail of your life to monetize it.

LO: Yeah, that’s fair. I don’t know – I think at this point, like you mentioned, they’re really trying to stave off regulation, and who knows if that’s going to work, because the issue is getting so much traction. But all right. Well, it’s been a very busy week. Tom, thanks for coming on to talk a little more about the biggest stories Threatpost wrote about this week. Hopefully we’ll have a quieter weekend.

TS: Yeah. Yeah, for sure. Thanks, Lindsey.

LO: All right. Thanks. Catch us next week on the Threatpost podcast.