Seven percent. That means 93% of organizations are, by their own admission, underprepared for threats they already know are coming.
And these aren’t hypothetical threats. The same report identifies four fraud categories that have increased the most over the past two years: deepfake social engineering, consumer fraud and scams, generative AI document fraud and forgery, and deepfake digital injection. Every one of these attacks exploits the same vulnerability. They impersonate real people.
The preparation gap is getting worse, not better
Here’s what makes that 7% number so troubling. Organizations aren’t ignoring AI. A quarter of them (25%) now use AI or machine learning in their anti-fraud data analysis, up from 18% in 2024. Another 28% plan to adopt it within two years. And 58% say they plan to use generative AI as part of their anti-fraud programs at some point.
So the investment is growing. But the confidence isn’t. Organizations are buying tools to analyze fraud patterns after the fact while attackers are using AI to impersonate people in real time.
Think about what a deepfake social engineering attack actually looks like in practice. An attacker calls your IT help desk. They sound like a real employee. Maybe they’ve cloned that employee’s voice from a conference recording on YouTube. They have the employee’s name, department, and manager. They’re requesting a password reset or MFA bypass. Your help desk agent has no reliable way to verify the caller is who they claim to be.
That’s identity impersonation, and it is the attack that most organizations are not equipped to stop.
The tools organizations are reaching for don’t solve this problem
Among organizations currently using generative AI in their anti-fraud programs, the most common use cases are phishing and scam detection (49%), risk identification and assessment (46%), and report writing (45%). Those are worthwhile applications. But none of them address the moment of impersonation itself.
Detecting a phishing email after it lands is not the same as verifying the identity of the person calling your help desk right now. Risk scoring is not the same as proving the caller is holding their own government-issued ID. Report writing is certainly not stopping anyone from walking through your front door with a stolen credential.
The 93% of organizations that feel underprepared aren’t lacking analytics tools. They’re lacking Identity Impersonation Detection at the points where attackers actually impersonate people.
Where identity impersonation happens
The ACFE report focuses broadly on anti-fraud technology across industries. But when you look at the specific attack types that are growing fastest, they concentrate in a few predictable places.
IT help desks and service desks. These are the single largest attack surface for identity impersonation. An attacker who can convince a help desk agent they are a legitimate employee can trigger a password reset, disable MFA, or gain access to sensitive systems. The help desk agent is trained to be helpful. The attacker knows this.
Call centers and customer support lines. The same dynamic plays out on the customer side. An attacker impersonates a customer to access their account, change their contact information, or initiate a transaction. Voice deepfakes make this easier every month.
Any process that relies on knowledge-based verification. If your identity verification method involves asking questions the person should know the answer to, an attacker with access to breached data can pass that test. Mothers' maiden names, the last four digits of a Social Security number, recent transaction amounts. All of it is available for purchase.
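The structural weakness is easy to see once you reduce knowledge-based verification to what it actually is: a comparison against stored facts. A minimal sketch (the field names and values below are illustrative, not from any real system) shows why a legitimate caller and an attacker holding the same breached record are indistinguishable:

```python
# Hypothetical sketch: knowledge-based authentication (KBA) reduces to
# comparing supplied answers against facts on file. Anyone who holds
# those facts -- owner or attacker -- produces the same result.
STORED_FACTS = {"mother_maiden": "Alvarez", "ssn_last4": "4821"}

def kba_check(answers: dict) -> bool:
    # Pass if every stored fact is answered correctly.
    return all(answers.get(k) == v for k, v in STORED_FACTS.items())

legitimate_caller = {"mother_maiden": "Alvarez", "ssn_last4": "4821"}
attacker = dict(legitimate_caller)  # same record, bought from a breach dump

# Both pass: the check verifies knowledge of data, not identity.
assert kba_check(legitimate_caller)
assert kba_check(attacker)
```

Nothing in the check binds the answers to the person supplying them, which is the gap possession-based verification (a government-issued ID, a device in hand) is meant to close.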
These are the scenarios where Identity Impersonation Detection changes the outcome. Not by analyzing patterns after a breach, but by verifying the person’s identity at the moment they make the request.
What actual readiness looks like
If your organization wants to be in that 7% instead of the 93%, here’s what readiness requires.
You need a way to verify identity that does not depend on information the attacker can steal, guess, or fake. That means government-issued ID verification, not knowledge-based questions. It means SIM swap detection, because attackers regularly port phone numbers to devices they control. And it means protection against man-in-the-middle and replay attacks, because sophisticated attackers will try to intercept and reuse verification sessions.
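One common way to defeat replay attacks is to bind every verification to a server-issued, single-use, time-limited challenge, so a captured session is worthless a second time. The sketch below is a generic illustration of that pattern, not a description of any particular product's protocol; the names and the HMAC-based signing step are assumptions for the example:

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical sketch of replay protection: each verification must sign a
# fresh, single-use nonce. A replayed (or intercepted and reused) response
# fails because its nonce has already been consumed or has expired.
ISSUED = {}        # nonce -> expiry timestamp
NONCE_TTL = 120    # seconds a challenge stays valid

def issue_challenge() -> str:
    nonce = secrets.token_hex(16)
    ISSUED[nonce] = time.time() + NONCE_TTL
    return nonce

def sign_response(device_key: bytes, nonce: str) -> str:
    # The user's device signs exactly the nonce it was given.
    return hmac.new(device_key, nonce.encode(), hashlib.sha256).hexdigest()

def verify_response(device_key: bytes, nonce: str, signature: str) -> bool:
    expiry = ISSUED.pop(nonce, None)   # single use: consume the nonce
    if expiry is None or time.time() > expiry:
        return False                   # unknown, expired, or already used
    expected = hmac.new(device_key, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

key = secrets.token_bytes(32)
nonce = issue_challenge()
sig = sign_response(key, nonce)
assert verify_response(key, nonce, sig)        # first use passes
assert not verify_response(key, nonce, sig)    # replaying the same session fails
```

The design choice worth noting is that the nonce is consumed on first use and expires on its own, so even a man-in-the-middle who captures a valid response cannot present it again later.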
None of these capabilities require employees to pre-register or install software. That matters because the attack often happens when someone is locked out, onboarding, or calling from a personal device. If your verification process only works for people who already set it up, it fails at the exact moment it matters most.
The ACFE report makes something clear that the industry has been slow to accept: AI-powered fraud is not a future problem. It is a current problem, and the organizations that recognize the gap between “we use AI for analytics” and “we can stop an impersonator in real time” are the ones that will actually be prepared.
The other 93% are still hoping their existing tools will be enough. The data says otherwise.
The 2026 Anti-Fraud Technology Benchmarking Report is published by the ACFE in partnership with SAS. You can request the full report on the ACFE website.
Trusona’s ATO Protect provides Identity Impersonation Detection for IT help desks and call centers. Learn more about how it works.