Email has always been an attractive target for cybercriminals in search of a money grab. Over the years, we’ve seen email attacks of all flavors — from basic spam and virus attacks, to mass phishing emails containing malware, to today’s attack du jour: business email compromise (BEC).
These attacks have evolved far beyond the Nigerian prince or CEO gift card scams typically associated with BEC. Now, cybercriminals are implementing increasingly sophisticated tactics — including vendor impersonation, spoofed domains, and local translations — to secure their payday.
And the rise of generative AI tools like ChatGPT has added even more power to attackers’ arsenals, upleveling their ability to write convincing emails at even greater scale. By feeding in specific information about their targets, or snippets of previous conversation history, threat actors can use generative AI to carry on highly realistic conversations with their victims.
Whether their endgame is convincing a target to pay a fake invoice, reroute payments to a different bank account, or share access to sensitive information, BEC attacks are confronting organizations of all sizes and across all industries, and costing billions of dollars. And even though employees are more aware of these scams, losses from BEC are continuing to increase year over year — costing $2.7 billion in 2022 alone.
Why? Because cybercriminals are getting smarter, and the tools put in place to stop them simply aren’t working like they should.
The SEG Challenge
Despite the rate at which BEC tactics are evolving, traditional secure email gateways (SEGs) have been slow to keep up. That’s because they’re designed to block attacks based on detection of known threat signatures, like malicious attachments or links and bad sender domains. This worked well when high-volume malware campaigns were common, but as savvy cybercriminals discovered how the SEG worked, they quickly learned how to outwit it.
Today, even inexperienced, non-technical hackers can bypass SEG detection, simply by sending text-based, socially engineered emails that omit traditional indicators of compromise — blending right in with ordinary inbox content. And not only does the SEG miss sophisticated attacks, it also requires manual management that can drain the productivity of security teams that need to be focused on the most critical cybersecurity incidents.
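To make that limitation concrete, here is a deliberately simplified sketch of signature-style filtering. It is illustrative only, not any particular vendor's logic; the blocklists and helper function are hypothetical. A plain-text request to change payment details carries no attachment, no link, and an unremarkable sender, so it trips none of the checks:

```python
from urllib.parse import urlparse

# Hypothetical threat intelligence: known-bad sender domains and attachment hashes.
KNOWN_BAD_DOMAINS = {"evil-payments.example", "phish-login.example"}
KNOWN_BAD_ATTACHMENT_HASHES = {"sha256-of-known-malware-sample"}

def seg_style_verdict(sender_domain: str, attachment_hashes: list[str], urls: list[str]) -> str:
    """Block only when a known-bad indicator of compromise is present."""
    if sender_domain in KNOWN_BAD_DOMAINS:
        return "block"
    if any(h in KNOWN_BAD_ATTACHMENT_HASHES for h in attachment_hashes):
        return "block"
    if any(urlparse(u).hostname in KNOWN_BAD_DOMAINS for u in urls):
        return "block"
    # "Hi, can you update our bank details before today's payment run?"
    # has no attachment, no URL, and a clean-looking sender, so it lands in the inbox.
    return "deliver"
```

The point of the sketch is simply that a filter built around known-bad artifacts has nothing to match against when the attack is nothing but persuasive text.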
So, how do you stop these BEC attacks, without requiring more time and resources from your team? One strategy to consider is replacing the SEG entirely.
Behavioral AI to Better Secure Inboxes
But if you’re going to sunset your SEG, what do you replace it with?
The problem with the SEG model is that it looks for known-bad indicators of compromise, which are constantly changing. What if you flipped this approach on its head — instead, learning what known-normal activity looks like to spot deviations that might indicate an attack?
This is how behavioral AI-based email security works. By ingesting behavioral signals across the email environment — like each user’s typical sign-in times and locations, the colleagues and vendors they ordinarily interact with, and the tone and language they tend to use in their emails, among thousands of other signals — behavioral AI creates a system that learns and dynamically monitors baseline behaviors.
Any variation from the norm, no matter how subtle or novel, may indicate a social engineering attack — and can be remediated automatically before it reaches the target’s inbox.
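As a rough illustration of the baseline-and-deviation idea, here is a toy sketch, not Abnormal's production model; the signals, weights, and threshold below are invented for the example:

```python
from dataclasses import dataclass, field

# Toy baseline-and-deviation scoring. A real system models thousands of signals
# with learned weights; this hand-set version only illustrates the principle.

@dataclass
class UserBaseline:
    usual_sign_in_countries: set = field(default_factory=set)
    known_correspondents: set = field(default_factory=set)   # addresses the user normally emails
    known_vendor_domains: set = field(default_factory=set)   # vendors the org normally pays

def anomaly_score(baseline: UserBaseline, message: dict) -> float:
    """Sum simple deviation signals; higher means less like the user's normal traffic."""
    score = 0.0
    if message["sender"] not in baseline.known_correspondents:
        score += 1.0   # first-time or unknown sender
    if message["mentions_payment"] and message["sender_domain"] not in baseline.known_vendor_domains:
        score += 2.0   # payment request from an unfamiliar vendor domain
    if message["sender_sign_in_country"] not in baseline.usual_sign_in_countries:
        score += 1.5   # sending account active from an unusual location
    if message["urgency_language"]:
        score += 0.5   # "today", "urgent", "keep this confidential"
    return score

def verdict(baseline: UserBaseline, message: dict, threshold: float = 2.5) -> str:
    # Auto-remediate before the message ever reaches the target's inbox.
    return "remediate" if anomaly_score(baseline, message) >= threshold else "deliver"
```

However the scoring is implemented, the underlying principle is the same: the system judges a message by how far it departs from established behavior, not by whether it matches a previously seen threat.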
SEG Displacement in Action
The benefits of swapping a SEG for a behavioral AI approach have been proven, too. Let’s take a look at a couple of examples.
Healthcare providers are appealing targets for cybercriminals who are after patient data, and the resulting data breaches are the costliest of any industry, thanks in part to hefty regulatory penalties.
Elara Caring, one of the largest home healthcare providers in the US, experienced this threat firsthand, when advanced phishing emails were bypassing its SEG, leaving employees struggling to decipher whether emails were authentic or attacks. Upon transitioning to a behavioral AI-based model, Elara Caring’s security team stopped hundreds of attacks in the first 90 days alone, including credential phishing attacks, attempts to trick the payroll team into directing paychecks to fraudulent accounts, and executive impersonation attacks.
Behavioral AI also helps organizations recoup inefficiencies caused by the SEG. At Saskatoon Public Schools, the largest school division in Saskatchewan, Canada, the security team was spending half a day or more manually remediating attacks. In the three months after implementing behavioral AI-based security, the team detected and auto-remediated more than 25,000 attacks, saving hundreds of hours each month.
SEGs have served their purpose well, but as email attacks become more sophisticated by the day, it’s time for a new approach. The organizations that can modernize their email security accordingly will be in the best position to protect against the threats of today and what’s to come.
Sixty-five percent of Abnormal customers no longer use a SEG. Visit this page for more information about companies that have displaced their SEGs.
About the Author
Mike Britton is the CISO of Abnormal Security, where he leads information security and privacy programs. Prior to Abnormal Security, Mike spent six years as the CSO and Chief Privacy Officer for Alliance Data. He brings 25 years of information security, privacy, compliance, and IT experience from a variety of Fortune 500 global companies. He holds an MBA with a concentration in Information Assurance from the University of Dallas.