Threatpost editors break down the top news stories for the week ended Nov. 8, including big data-breach fines, rogue insiders at Trend Micro and Twitter, and a laser-based attack on smart voice assistants.
Below is a lightly edited transcript of the news-wrap podcast.
Lindsey O’Donnell-Welch: Hi, everyone, this is Lindsey O’Donnell-Welch and Tara Seals with Threatpost, here today to break down the top news of the week ended November 8. Tara, how’s it going today?
Tara Seals: Pretty good, Lindsey, how are you?
LO: I’m good, good. I was just looking over the Threatpost homepage at the news of the week and we actually – just diving into it – had a bunch of data-breach news this week, relating to both external breaches and some that came from internal threats as well. I don’t know if you had been keeping count of them. But there was the news of the Utah eye clinic breach that impacted 20,000 patients — and that’s the run-of-the-mill data-breach news that we see every week. But then we also had other breach news. Tom Spring, our editor-in-chief, wrote a really interesting feature about how we’re seeing more and more breach-fine payouts. And how, despite that, we’re still seeing data breaches, which raises the question: Why aren’t big breach fines translating to fewer breaches?
TS: Yeah, definitely. And I thought Tom’s feature was pretty interesting because he had a lot of feedback from various researchers, in terms of whether or not more regulation and more penalties from various agencies is moving the dial at all when it comes to how enterprises approach data privacy. So I think the jury’s still out, but so far we haven’t really seen that make a dent in the number of breaches — if anything, they continue to accelerate. So, not a very heartwarming conclusion to that article. But certainly, very informative. And it’s interesting to see how that’s going to play out over time, as calls for regulation, both on the state level as well as on the national level and worldwide, continue to accelerate. So we’ll see.
LO: Well, we’re seeing more and more, at least it feels like it to me, fines, and more of some of the big guys like Equifax, Google and Facebook being hit with large ones, but I don’t really know how much that’s actually going to help. And there was a law proposed earlier this month, actually targeting Facebook, that in addition to fines would impose penalties such as jail time for executives, so maybe things like that might help a little bit.
TS: It’s interesting that you brought up jail time, because one of the people that Tom talked to for that article, I was reading it this morning, basically said that short of measures that wouldn’t be politically acceptable here in the U.S. – things like jail time and more personal punishments – not a lot is going to make a dent [in corporate behavior]. Largely because a lot of the fines, even though they’re huge when you think of the many millions of dollars [involved], when you put that up against the actual size of the company, like Facebook for example, a lot of them translate into slaps on the wrist, right? So they have to decide whether or not this is just going to be a cost of doing business. Is it worth paying that out, in order to not have to completely overhaul all of their policies and storage infrastructure and all that kind of stuff? Or are they actually going to try to be better stewards going forward? So that’s going to be interesting.
LO: It’s worth thinking about large enterprises versus smaller companies, and how they’re impacted. I mean, if you look at a lot of the small- and medium-sized businesses [SMBs], they don’t really have the resources, or as much money, I guess, to allocate toward security measures. And when they get hit by data breaches, they not only can’t afford some of the fines that are proposed, but their business might be completely over, depending on the breach. For large enterprises, like you said, some of these fines are just a drop in the bucket.
You actually have a webinar, is it next Wednesday? You’re going to be discussing some of the large-enterprise themes around data breaches.
TS: Yeah, actually, shameless plug here for my webinar coming up next Wednesday, the 13th at 2 p.m. EST. I’m going to be joined by Chip Witt who is with SpyCloud, and they’ve done a lot of research around trends and data-breach fallout when it comes to the Fortune 1000, just by slicing and dicing a lot of information that they’re getting from their telemetry. And it’s pretty interesting in terms of what constitutes risk for a large enterprise versus what constitutes risk for an SMB, as you pointed out. It’s not necessarily the fact that the cost of mitigation is going to ruin the business, which is, as you also pointed out, what SMBs worry about. But it’s really more about the follow-on attacks and account takeover and things like that. Much, much more damaging attacks can follow from that initial data breach when you’re talking about a large enterprise.
Also, with greater risk should, in theory, come greater responsibility. But we’ll also discuss some of the challenges that large enterprises face in trying to lock all of that down, when you have such a large, complex footprint and so many different types of stakeholders.
LO: And the sheer number of employees within the organization, too: juggling the insider-threat risk that could happen, intentionally or unintentionally, the password issues that you may see there, and third-party risk such as partners and the supply chain; there’s just so much to worry about across the board now.
TS: Yeah, the supply-chain aspect obviously is huge, because they have so many different partners and suppliers. I mean, if you’re a Fortune 1000 company you’ve got tens of thousands of people working with you and for you across the globe. And then also, as you point out, the insider-threat issue, I mean, yeah, that’s a lot to keep track of individually, and some of the policy enforcement that’s required there becomes more difficult when you have locations scattered all over the world. And as a matter of fact, Lindsey, it reminds me, you covered a story about a rogue insider at Trend Micro this week that I thought was interesting.
LO: When we talk about data breaches, we usually look at external threats. But yes, we did have a couple of news stories this week having to do with, as you say, rogue employees. The first one was at Trend Micro: The company said that a rogue employee sold the data of 68,000 customers, and it discovered the employee sold this data to a malicious third party, which it did not name, and that third party then used the data to start targeting customers with scam calls. They said that the employee has since been terminated. Trend Micro first became aware of the incident when customers started reporting that they had been receiving scam calls purporting to come from Trend Micro support staff, but which clearly were not; the company tracked this back to the employee and disclosed the incident this week. So that brings up the insider-threat issue.
And then also, there was some big news that came out on Wednesday night about the Department of Justice charging two former Twitter employees who were working with the Saudi Arabian government to actually snoop on political dissidents’ Twitter accounts. And the court document basically says that the two Twitter employees had access to as many as 6,000 Twitter accounts without authorization. And then they passed these back to the Saudi Arabian government…they would collect the email addresses and phone numbers that were associated with these accounts, as well as other information. So I thought that was interesting, just the difference there between these two situations. One was a database that was allegedly sold to a scam operation. And then the other was allegedly Twitter employees who were passing information to an actual government.
TS: Yeah, two different motivations, for sure. I’m curious on the Trend Micro front: That rogue employee, did he or she sell this with the understanding that this third party would then go on to mount these scam attacks? Were they in cahoots with this third party, I wonder, or did they just see an opportunity to make a buck? I also wonder if they were groomed for this, if they were approached and then just sort of tempted into it, or if they saw an opportunity themselves and went looking for a buyer. I’m curious about the dynamics there. I don’t know if we have any answers on that front.
LO: I think, reading between the lines, the employee probably did know who they were selling to and what their motivations were, but Trend Micro didn’t offer any more information about the timeline there. I mean [about] whether this was an employee who came into the company with the intent to do this, or someone who maybe was approached while they were at the company and convinced or urged to do this. But that’s the big question, right? When it comes to insider threats, what is happening on the rogue employee’s side that motivates them to do these things? And how can companies prevent that from happening, either from the get-go or over time? It’s a difficult topic to bring up because, in some cases, it has you looking at your own employees [suspiciously] if you’re a company.
TS: Right, absolutely. And then on the Twitter front, how do you vet people for — well, you can’t really vet people for their political views. So there’s no way to really tell: Is this person basically a Saudi government supporter who is looking to use their access to sniff out dissidents and take a look at private account information? The other thing that I was thinking about was, now that a lot of these encrypted messaging services like WhatsApp and Telegram have been compromised by nation-state actors looking to snoop on what civil society, dissidents and journalists are doing, there’s not really a safe repository for communications, right? So this is just one more example that, increasingly, there’s nowhere really to hide. If you’re someone who is of political interest to an authoritarian government, what are you supposed to do? So it’s kind of disturbing on that front as well.
LO: The Twitter story was definitely of interest to me because it did have that human-interest/political aspect to it, and it raises the question of what the implications are for other social-media companies, right? Because if you look at Twitter and Facebook, they’re constantly grappling with this issue of free speech and different opinions on their platforms, and then you have this case where a government is coming in and secretly recruiting employees who actually work at those firms to try to track some of these accounts. It just brings a really dangerous aspect into how these platforms work and how they operate.
TS: Yeah, I mean, absolutely. And Twitter has been used as a platform for dissidents and people trying to organize protests and things like that, and trying to fly under the radar, because you don’t have to have your actual identity associated with your Twitter handle. So I presume — I did not read the court document — but I presume that’s part of the information they were trying to uncover: associating the actual, real email addresses and persons with the Twitter handles.
LO: Well, in other news, there is more Alexa and Siri and smart-speaker drama this week. I don’t know, Tara, if you saw the article about the new attack on smart voice assistants that essentially lets someone hijack them with a laser beam? But that was a fun one to write up earlier this week.
TS: Yeah, I definitely saw that – you have lasers, you have Alexa, you have Siri, I mean, what’s not to love about the story? That was certainly something. So tell us a little bit about what that’s all about?
LO: Yeah, researchers came out with a new attack called “light commands,” which allows an attacker to use just a laser beam and a couple of other [pieces of] cheap equipment to point a laser at the microphones on various voice assistants like Amazon Alexa. They also tried it on Siri, on Facebook Portal, on Google Assistant…Microphones convert sounds, so voice commands if you talk to your Alexa, into electrical signals. But in addition to sound, the researchers found that the microphones also react to light being aimed directly at them: A high-intensity light like a laser makes the microphone respond as if it were picking up a voice command. And so what they did is encode inaudible commands into the intensity of the laser light, point it at the microphones, and trigger various commands.
[For instance], they said that they were able to open a garage door if the voice assistant was set up for that type of command. So if you’re an attacker, what you could do is stand outside a window, point this laser beam through the window at the microphone, and trigger these commands. The researchers said that by doing that, an attacker could essentially rob a house, make online purchases or even remotely start vehicles. So there were a bunch of malicious commands that they could issue.
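To make that intensity-modulation idea concrete, here is a minimal, purely illustrative Python sketch of how a recorded voice command might be mapped onto a laser’s drive current. This is not the researchers’ tooling; the function name, bias current and modulation depth are hypothetical values chosen only for illustration.

```python
# Illustrative toy sketch of the "light commands" idea: amplitude-modulating
# a voice command onto a laser's intensity so the microphone responds to it
# as if it were sound. NOT the researchers' code; bias_ma and depth are
# hypothetical values.
import numpy as np
from scipy.io import wavfile

def audio_to_laser_current(wav_path, bias_ma=200.0, depth=0.5):
    """Return a laser-diode current waveform that tracks a recorded command."""
    rate, samples = wavfile.read(wav_path)
    samples = samples.astype(np.float64)
    if samples.ndim > 1:                    # mix stereo down to mono
        samples = samples.mean(axis=1)
    samples /= np.max(np.abs(samples))      # normalize to [-1, 1]
    # A DC bias keeps the diode lasing; the audio rides on top as amplitude
    # modulation, so the light intensity follows the voice-command waveform.
    current_ma = bias_ma * (1.0 + depth * samples)
    return rate, current_ma

# Usage (hypothetical file name):
# rate, i_t = audio_to_laser_current("open_the_garage.wav")
# i_t would then feed a laser-diode current driver sampled at `rate` Hz.
```

The key point the researchers demonstrated is that the microphone’s diaphragm reacts to rapid changes in light intensity much as it does to sound pressure, which is why simple amplitude modulation of the laser is enough to inject a command.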
TS: Yeah, that’s just physics-tastic really, it’s just brilliant. But the other [interesting] thing is the distance at which this will work: You had said 360 feet in the article, which, let’s just do some level-setting here for the audience, is longer than a football field. So you don’t have to be right up against somebody’s device. I mean, you just have to have accurate aim, I guess. But conceivably, you could be undetected at some distance away.
LO: Actually, they gave a little video overview of the attack, and in it they joked that another way to maliciously trigger voice assistants is to stand outside the house and, instead of using the light, yell commands — and they were like, this will not work, you’ll probably get caught that way.
TS: Yeah, well, I know that, on weekends when I am over at my friend’s house, she has Alexa, we always shout over each other to try to get Alexa to play the music that we want her to play. This is just sort of a malicious version of that.
LO: I did reach out to Amazon and Google, and they said that they’re reviewing the research paper and looking at ways that they can improve the security of the devices. But that said, because this is in the design of the microphone itself, it’s going to be a difficult issue to tackle from the perspective of Amazon, Google, Apple and Facebook.
TS: Well, lots of good stories this week.
LO: Yeah, definitely. And we should probably wrap this up. But Tara, again, I’m looking forward to your webinar next week. And for anyone listening, be sure to sign up. I’ll include a registration link in the podcast article. And I am excited to hear about the data-breach themes that you guys will be discussing.
TS: Yeah, it should be really interesting. And it’s pretty specific to large enterprises, and so we should have some really granular, cool information coming out. And I’m looking forward to it.
LO: Tara, thanks again for coming on and talking about the biggest stories of the week.
TS: Thanks so much, Lindsey. Have a great weekend.
LO: Thanks, you too. And for all of our listeners, catch us next week on the Threatpost podcast.