It’s one of the most well-worn clichés in cybersecurity — humans are the weakest link when it comes to defending computer systems. And it’s also true.
Every day, we click links we shouldn’t, download attachments we should avoid and fall for scams that all too often are obvious in hindsight. Overwhelmed by information, apps and devices — along with our increasingly short attention spans — we are our own worst enemies in cyberspace.
The natural human weaknesses that make defending the open internet so difficult are well understood, and plenty of companies and organizations work to make the average person behind the keyboard better at digital self-defense. But until now, cybersecurity researchers have paid little attention to the psychological weaknesses of attackers. What deficiencies, habits or other patterns of behavior can be used against them? What mistakes do they typically make? And how can those traits be used to stop them?
A new project at the Intelligence Advanced Research Projects Activity — the U.S. intelligence community’s moonshot research division — is trying to better understand hackers’ psychology, discover their blind spots and build software that exploits these deficiencies to improve computer security.
“When you look at how attackers gain access, they often take advantage of human limitations and errors, but our defenses don’t do that,” Kimberly Ferguson-Walter, the IARPA program manager overseeing the initiative, told CyberScoop. By finding attackers’ psychological weaknesses, the program is “flipping the table to make the human factor the weakest link in cyberattacks,” she said.
Dubbed Reimagining Security with Cyberpsychology-Informed Network Defenses, or “ReSCIND,” the IARPA initiative is an open competition inviting expert teams to submit proposals for how they would study hackers’ psychological weaknesses and then build software exploiting them. By funding the most promising proposals, IARPA hopes to push the envelope on how computers are defended.
The project asks participants to carry out human-subject research and recruit computer security experts to determine what types of “cognitive vulnerabilities” might be exploited by defenders. By recruiting expert hackers and studying how they behave when attacking computer systems, the project aims to discover — and potentially weaponize — their weaknesses.
Ferguson-Walter describes “cognitive vulnerabilities” as an umbrella term for any sort of human limitation. The vulnerabilities a cyber psychological defense system might exploit include the sunk cost fallacy, which is the tendency of a person to continue investing resources in an effort when the more rational choice would be to abandon it and pursue another. In a network defense context, this might involve tricking an attacker into breaking into a network via a frustrating, time-consuming technique.
Another example Ferguson-Walter cites is the Peltzman Effect, the tendency of people to engage in riskier behavior when they feel safe. The canonical example is drivers taking more risks after mandatory seatbelt laws took effect, reasoning that the seat belt kept them safe. Against attackers in cyberspace, the effect might be exploited by creating the perception that a network is poorly defended, inducing a false sense of safety and resulting in a less carefully concealed attack.
Just as the tools of behavioral science have been used to revolutionize economics, advertising and political campaigning, ReSCIND and the broader field of cyber psychology aim to apply insights about human behavior to improve security outcomes. By placing human behavior at the center of defensive design, cyber psychology seeks to create systems that account for human frailties.
“Tactics and techniques used in advertising or political campaigning or e-commerce or online gaming or social media take advantage of human psychological vulnerability,” says Mary Aiken, a cyber psychologist and a strategic adviser to Paladin Capital Group, a cybersecurity-focused venture capital firm. Initiatives such as ReSCIND “apply traditional cognitive behavioral science research — now mediated by cyber psychological findings and learnings — and apply that to cybersecurity to improve defensive capabilities,” Aiken said.
Cybersecurity companies are using some tools of cyber psychology in designing defenses but have not done enough to integrate the study of human behavior, said Ferguson-Walter. Tools such as honeypots or decoy systems on networks might be thought of as turning the psychological weaknesses of attackers against them, but defenders could do more to exploit these weaknesses.
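Honeypots give a concrete sense of how defenders already try to turn an attacker’s assumptions against them. Purely as a rough illustration (the port, banner and names below are hypothetical and not drawn from any ReSCIND material), a minimal decoy service might do nothing more than listen on an unused port, log whoever connects and present a fake banner to keep an intruder engaged:

```python
# Minimal illustrative decoy service: listens on an unused port and logs
# any connection attempt. Hypothetical sketch only; not part of ReSCIND.
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222  # assumed decoy port, masquerading as an SSH service


def run_decoy(port: int = LISTEN_PORT) -> None:
    """Accept connections on a decoy port and record who knocked."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", port))
        server.listen()
        while True:
            conn, (addr, src_port) = server.accept()
            with conn:
                timestamp = datetime.now(timezone.utc).isoformat()
                print(f"{timestamp} decoy hit from {addr}:{src_port}")
                # Send a fake banner to keep the attacker engaged a little longer.
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")


if __name__ == "__main__":
    run_decoy()
```

Any real deployment would feed those logs into broader monitoring; the point of the sketch is simply that the decoy wastes the attacker’s time while revealing their presence.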
Among the central challenges facing ReSCIND participants is figuring out which weaknesses a given attacker might be susceptible to, all while operating in a dynamic environment. To address this, the project proposal asks participants to devise what it calls “bias sensors” and “bias triggers,” which together identify a cognitive vulnerability and then induce a situation in which it can be exploited.
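IARPA has not published how such a pairing would be implemented. Purely as a conceptual sketch, with every class name, threshold and action invented for illustration, a sensor and trigger for the sunk cost fallacy described earlier might be expressed like this:

```python
# Conceptual sketch of a "bias sensor" paired with a "bias trigger".
# All names, thresholds and actions are hypothetical illustrations,
# not an actual ReSCIND design.
from dataclasses import dataclass, field


@dataclass
class AttackerSession:
    """Observed behavior for one suspected intruder."""
    failed_attempts_on_decoy: int = 0
    minutes_spent_on_decoy: float = 0.0
    actions: list = field(default_factory=list)


def sunk_cost_sensor(session: AttackerSession) -> bool:
    """Bias sensor: flag attackers who keep investing in a dead-end decoy."""
    return (session.failed_attempts_on_decoy > 10
            and session.minutes_spent_on_decoy > 30)


def sunk_cost_trigger(session: AttackerSession) -> str:
    """Bias trigger: deepen the trap with another layer of fake progress."""
    session.actions.append("serve_fake_partial_credentials")
    return "serve_fake_partial_credentials"


# Pair sensors with triggers; a real system would evaluate these
# continuously against live network telemetry.
DEFENSES = [(sunk_cost_sensor, sunk_cost_trigger)]

if __name__ == "__main__":
    observed = AttackerSession(failed_attempts_on_decoy=14,
                               minutes_spent_on_decoy=45.0)
    for sensor, trigger in DEFENSES:
        if sensor(observed):
            print("triggered:", trigger(observed))
```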
Exactly how that system will function and whether it can be integrated into a software solution is far from clear, but Ferguson-Walter says it’s important for IARPA to pursue these types of high-risk, high-reward projects that in the absence of government funding are unlikely to receive support.
And amid widespread computer vulnerabilities and only halting progress in securing online life, a new approach might yield unexpected breakthroughs. “We’ve had 50 or 60 years of cybersecurity and look where we are now: Everything is getting worse,” Aiken says. “Cybersecurity protects your data, your systems, and your networks. It does not protect what it is to be human online.”