Fake news peddlers have devised a cunning new way to stump Facebook, Twitter and other platforms cracking down on lies and half-truths spreading on social media. Instead of linking to fake news, bad actors are now linking to posts promoting older news articles that may no longer be accurate – but won’t be flagged as fake, because they were once legitimate news.
Threatpost editor Tara Seals sat down with Staffan Truvé, the co-founder and CTO of Recorded Future, at the Security Analyst Summit in Singapore this week to discuss the new report’s findings.
Below is a lightly-edited transcript of the podcast.
Tara Seals: Hi, everybody. This is Tara Seals, senior editor at Threatpost, and I am joined here with Staffan Truvé, the co-founder and CTO at Recorded Future. We are here in Singapore at the Security Analyst Summit. And Staffan is going to be speaking on influence campaigns later today. So Staffan, welcome.
Staffan Truvé: Thank you.
TS: And so tell us a little bit about what you plan on addressing in your talk.
ST: So what I’ll be talking about today has two parts, really. One part is some tools and methods we’ve developed to identify and analyze influence campaigns. And the other is going to be a concrete example of a campaign we’ve seen recently on the European side.
TS: Interesting. Okay, so tell me a little bit about the influence campaign that you guys have recently spotted.
ST: So this is a bit of a new kind of thing. You know, traditionally, when people think about influence campaigns, you tend to associate it with fake news, sort of extreme groups and so on. This is slightly different. I think the major difference is that these guys are not using fake news. They’re using old news… what they’re doing is they’re publishing news about terror events, you know, which are a couple of years old, as if they were new.
TS: Oh, wow.
ST: Yes, it’s kind of an interesting twist. And you know, you could wonder why, and our conclusion is that one reason for doing this could actually be to avoid being taken down by the different social media platforms, because they are not actually producing fake news, they are relaying old news in a clever way. And most people who happen to see this news will probably not notice that it’s old.
They were including links, but most people won’t follow the links. And you know, if they do, they probably won’t see the publishing date. So we think it’s, in that sense, a normal influence campaign, which is trying to instill uncertainty and fear in the audience. But they’ve shifted the way they do it slightly.
TS: Yeah, that’s really interesting. Now, in terms of uncovering these influence campaigns, I think you guys have a specific methodology for tracking these down and trying to identify what is trolling and what isn’t.
ST: Right, one of the key things is how you can automatically detect these kinds of campaigns. Normally, when people do this, it’s manual: people discover something, they find it’s fake, they report it somewhere, there is some kind of investigation, and then maybe some accounts get suspended, things like that.
So the way we do it is we’re taking advantage of the very broad collection which Recorded Future does, and essentially, we’re looking for anomalies. An anomaly could, for example, be some major event which is only being reported in one language, but not in others. We had an example last summer when there were alleged protests in Sweden, but they were only reported in Russian for some reason, you know, a good sign of something strange happening. Another example could be, say, a major terror event which is only reported in social media, but not in mainstream news.
So essentially, we’re doing anomaly detection on the volumes, the languages, and the source categories where things are being talked about.
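[Editor’s note: to make the anomaly-detection idea concrete, below is a minimal sketch in Python of what a language-distribution check like the one Truvé describes could look like. The baseline shares, language codes and threshold are invented for illustration; Recorded Future has not published the details of its pipeline. The same ratio test could in principle be run over the other two dimensions he mentions, source categories and volumes.]

```python
# Minimal, hypothetical sketch of language-based anomaly detection:
# compare the language mix of an event's coverage to a historical baseline.
from collections import Counter

# Baseline: share of coverage per language for comparable past events.
# These numbers are made up for illustration.
BASELINE = {"en": 0.55, "ru": 0.10, "sv": 0.05, "de": 0.08, "other": 0.22}

def language_anomaly_scores(mentions):
    """Return each language's ratio of observed share to baseline share.

    `mentions` is a list of language codes, one per collected reference
    to the event. A large ratio flags over-representation.
    """
    counts = Counter(lang if lang in BASELINE else "other" for lang in mentions)
    total = sum(counts.values())
    return {lang: (counts.get(lang, 0) / total) / BASELINE[lang]
            for lang in BASELINE}

# Example: an "event" reported almost exclusively in Russian, like the
# alleged Swedish protests mentioned above.
mentions = ["ru"] * 90 + ["en"] * 5 + ["sv"] * 5
flagged = {lang: round(score, 1)
           for lang, score in language_anomaly_scores(mentions).items()
           if score > 3.0}
print(flagged)  # {'ru': 9.0} -- Russian coverage is ~9x its baseline share
```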
TS: So once you identify one of these influence campaigns, what are the next steps to take, and does that vary by region in terms of how you address them?
ST: I mean, for most of these, we don’t typically take action. It’s not our main focus. You know, if there is something which is purely criminal, we report it. But otherwise, if it’s a new way of doing it, or if we think it’s worth highlighting, then we typically write a report on it ourselves.
TS: There were some high-profile influence campaigns in the United States leading up to the midterm elections, and some fairly high-profile takedowns of those on the part of Facebook and Twitter. They actually faced some blowback for that, in terms of people looking at it as a kind of censorship. How would you address those concerns?
ST: I think it’s a very valid concern, actually. I mean, it’s a fine line, you know: what do we actually want these platforms to do? Do we want them to act as publishers or not? I think they are a bit schizophrenic themselves here about whether they are publishers or not. It’s also interesting to note that these platforms, which are proponents of free speech, are not always proponents of free listening: they are not making it easy for people who want to investigate whether these campaigns are going on.
So if the goal here is openness, free speech is very important, of course, but I think it should also be accompanied by the right to detect these kinds of things and, of course, to talk about it and publish it. And then the takedown, I think, is a separate issue. You know, when people are not clearly in violation of, let’s say, human rights or the rules you have in different countries about free speech, then it’s a tricky thing, really.
TS: And it’s interesting, too, because, you know, Facebook, in particular, has been raked over the coals in terms of what role they actually play. Are they a publisher, or are they simply a platform?
ST: Depending on which week you listen to Mark Zuckerberg, he has a different view, right?
TS: Exactly, depending on who’s questioning him and what congressional hearing he’s been hauled into.
ST: Yeah. I think there’s actually one distinction that people often don’t make: between influence campaigns like the Cambridge Analytica one, which was using ads on Facebook, where there’s also an economic incentive for the platform, as compared to other campaigns that just communicate messages. So I think those two are distinct as well.
TS: And so, I guess, what level of awareness do you feel the population has gotten to, in terms of being able to detect what is a bot farm somewhere trying to influence something, versus maybe someone who just feels very strongly about their politics and is out there on social media?
ST: It’s interesting. Some friends of mine at the Swedish Defence Research Agency actually did work on this for the Swedish elections last year, and I think they did one good thing there: they talked about accounts which were exhibiting bot-like behavior. But when I looked into that, some of the most bot-like accounts were actually Members of Parliament, with evidently too much time to spend on social media. So I think the general audience’s awareness has gone up. But I still think it’s extremely hard to unveil some of these campaigns and to understand if it’s a paid campaign with some clear goal, or if it’s just some very enthusiastic person. And you could argue that there is a sort of continuum, that there is no clear distinction, right?
TS: Absolutely. So, at least in the U.S. anyway, there has been a hyper-awareness of, or fear of, “fake news” in quotes. People throw that term out all the time; it’s become almost a cultural trope at this point. And so there’s a certain level of distrust in terms of the information being disseminated via social media in particular, and news stories and so on. So is that a virtue? Is it a bad thing? I personally think it’s terrible that you have people who don’t trust mainstream news sources anymore. But how is this shaping the culture at large?
ST: I think you’re pointing to something very important here, and that is that it’s good that people have a more critical attitude towards media. You know, maybe for decades we’ve sort of believed that big media don’t lie, especially, you know, public-service media, which you probably think are better than other media, and so forth. But as you said, maybe it’s going overboard now, in that people have become so cautious that they start distrusting even trustworthy sources. But of course, everyone reporting anything has some kind of agenda behind it.
TS: Absolutely, there’s always an agenda. The question is whether it’s a concerning agenda, and whether it’s fact-based or not.
ST: Right, it’s interesting also to look at the various fact-checking sites which pop up; you get the same question there. Even if you do try to do rigorous fact-checking, you could be reporting on what’s actually true, but of course deciding not to report on things which contradict it. So then it will not be fake or false news; it will just be what some of my colleagues like to call “hyper-partisan news.”
TS: Have you had any interesting discussions here so far around this with your colleagues or conference-goers?
ST: You know, I would say that this is not the core focus of a conference like this. If you look at it, a lot of the cybersecurity research going on here is more technically oriented.
I think that’s a shame, really. We should be viewing these kinds of campaigns and these kinds of attacks as equally important, and as something this community should be addressing as well.
TS: Yeah, which is why it’s great that you’re here to give your talk. Well, thank you so much for joining me. Again, this is Tara Seals with Threatpost, and I’m here with Staffan Truvé, the co-founder and CTO at Recorded Future. Thanks so much for your time.