Google Search Results Spoofed to Create Fake News

A researcher has uncovered a spoofing technique that creates fake Google search results, one that could be used in political influence campaigns or for other nefarious purposes.

In this age of fake news, people are warier than ever of efforts to sway public opinion with disinformation, which has led to wholesale questioning of the news sources shared on social media. It has also prompted Facebook, Twitter and others to crack down on influence campaigns.

Less top of mind, however, are the systems we’ve come to trust and rely on – namely, Google searches.

Despite accusations that Google has gamed its search algorithms to return left-skewing news results (a charge it has categorically denied), most people trust the search engine to return relevant and accurate information. The spoofing technique takes advantage of Google’s perceived legitimacy to make false information more believable, simply by adding two parameters to any Google Search URL.

According to independent Dutch researcher Wietze Beukema, the approach makes use of Knowledge Cards, the boxes on the right-hand side of the screen that contain information relevant to whatever query a user types into Google Search. For instance, a search for “MSNBC” offers regular search results along with a Knowledge Card of key facts about the news outlet.

Knowledge Cards are far from authoritative sources of information – according to the researcher, most of the content comes straight from Wikipedia, mixed with other sources such as corporate boilerplate.

“Unfortunately, Knowledge Graph doesn’t tell you where it got the information from,” Beukema wrote in a blog post this week. “In addition, the algorithm sometimes mixes up information when there are multiple matches (e.g. people with the same name). This has led to a small number of incidents regarding the feature’s accuracy.”

Nonetheless, most people take the information at face value, opening the door to social engineering.

“People have effectively been trained to take information from these boxes that appear when googling,” said Beukema. “It’s convenient and quick – I have caught myself relying on the information presented by Google rather than studying the search results.”

It turns out that anyone can attach a Knowledge Graph card to a Google search, which can be handy for sharing a card’s information with someone else. Each card has a unique identifier, which can be appended to the URL of the original search query via the &kgmid parameter. An attacker can thus make any Knowledge Card they choose appear alongside the search results for any given query:

“For instance, you can add the Knowledge Graph card of Paul McCartney (kgmid=/m/03j24kf) to a search for the Beatles, even though that card would normally not appear for that query,” explained Beukema.
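In practice, that means a crafted link along these lines, where the base search URL is Google’s standard one and the kgmid value is the McCartney identifier Beukema cites:

https://www.google.com/search?q=the+beatles&kgmid=/m/03j24kf

Anyone following the link sees genuine results for “the Beatles,” with McCartney’s card displayed beside them.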

It’s also possible to create a URL that shows only the Knowledge Graph card and omits the search results, by adding the &kponly parameter to the URL. The search bar will still display the original query, though, even if it has nothing to do with the Knowledge Card being shown.
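Appending that second parameter to the same example would, per the researcher, suppress the organic results and leave only the mismatched card:

https://www.google.com/search?q=the+beatles&kgmid=/m/03j24kf&kponly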

Combined, the two parameters can be leveraged for propaganda efforts. For instance, a malicious actor could post a custom query URL on social media, supposedly showing real Google search results for a hot-button topic. The researcher used the example of the query, “Who is responsible for 9/11?” By tampering with the URL, an attacker can make the results suggest that George W. Bush was responsible for the 9/11 terrorist attacks.
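An illustrative link for such a campaign follows the same pattern; the kgmid value here is a placeholder rather than the identifier of any real card:

https://www.google.com/search?q=who+is+responsible+for+9/11&kgmid=<identifier-of-chosen-card>&kponly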

While anyone who went to replicate the search would get a different answer (Osama bin Laden, of course), the technique can still be a powerful tool for spreading disinformation, particularly among those with a confirmation bias who may simply be scrolling through their news feed.

“This allows you to trick others into believing something is true,” Beukema said. “After all, it is a legitimate Google Search link and since we have been trained to trust the answers provided by Google, there must be some truth in it, right?”

The researcher filed a bug report a year ago with Google, advocating the disabling of the &kponly parameter in particular; he said that the internet giant doesn’t consider the issue to be an addressable vulnerability.

“I disagree: in this day and age of fake news and alternative facts, it is irresponsible to have a ‘feature’ that allows people to fabricate false information on a platform trusted by many,” Beukema said.

Google hasn’t responded to a request for comment from Threatpost.