On the surface, this looks like progress. Organizations are investing in identity verification. They’re trying to make it harder for attackers to impersonate legitimate users. Good instincts.
But the instinct is pointing them toward a technology that generative AI is actively dismantling.
What biometrics assume, and why that assumption is breaking
Physical biometrics, whether fingerprint, facial recognition, or voice, work on a simple premise: the biological feature is unique to the person, and it is hard to fake.
The first part is still true. The second part is not.
Generative AI can now produce synthetic faces that pass commercial liveness detection systems. Voice cloning tools can generate convincing replicas from a few seconds of sample audio. And deepfake injection attacks bypass camera-based biometric checks entirely, feeding synthetic video directly into the verification pipeline so that no real camera is ever involved.
The ACFE report itself identifies deepfake digital injection and deepfake social engineering as two of the four fastest-growing fraud categories. These are the attack types that biometric systems are most vulnerable to.
So when the report says biometrics adoption jumped 11 percentage points in four years, the question isn’t whether organizations are investing. It’s whether they’re investing in the right direction.
The liveness detection problem
Most biometric systems rely on liveness detection to confirm that a real person is present. The system asks you to blink, turn your head, or hold up your ID next to your face. It compares what it sees to a stored template.
This was a reasonable defense when the main threat was a printed photo held up to a camera. It is not a reasonable defense against generative AI.
Modern deepfake tools produce output that satisfies standard liveness checks. The synthetic face blinks. It turns. It matches the geometry of the ID photo because the attacker generated it from that photo. Injection attacks go a step further: they replace the camera feed entirely, so the liveness system never sees a real face and has no way to detect the substitution.
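To make the blind spot concrete, here is a minimal sketch of a frame-based liveness pipeline. The names are hypothetical, not any vendor's actual API, and the detectors are stubs. The point is structural: the verifier only ever receives pixels, with no attestation of where those pixels came from.

```python
# Hypothetical frame-based liveness pipeline (illustrative names only).

from dataclasses import dataclass

@dataclass
class Frame:
    image: bytes       # raw pixels; origin is unknown to the verifier
    timestamp_ms: int

def detected_blink(frames: list[Frame]) -> bool:
    # Stub for a blink classifier. A modern deepfake blinks on cue,
    # so a real classifier passes it just as it passes a live face.
    return len(frames) > 1

def detected_head_turn(frames: list[Frame]) -> bool:
    # Stub for head-pose estimation across frames. Same problem.
    return len(frames) > 1

def face_matches_id(frame: Frame, id_photo: bytes) -> bool:
    # Stub for an embedding comparison. If the attacker generated the
    # deepfake from the stolen ID photo, the geometry matches.
    return True

def verify_session(frames: list[Frame], id_photo: bytes) -> bool:
    # Nothing in this pipeline can distinguish a physical camera from
    # a virtual camera driver or a generator injecting frames upstream.
    return (detected_blink(frames)
            and detected_head_turn(frames)
            and face_matches_id(frames[-1], id_photo))
```

An injection attack never has to fool a camera. It only has to produce frames that keep every function above returning True.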
Organizations adopting biometrics today are building defenses against last year’s attackers. The attackers have already moved on.
What the ACFE report tells us about the gap
Two other numbers from the report help frame how wide this gap is.
Only 29% of organizations automate routine fraud investigation tasks. That means the majority of fraud cases still require manual review. If your biometric system gets beaten by a deepfake, the attack probably won’t get flagged until someone reviews it manually, if it gets flagged at all.
And 82% of organizations say explainability and auditability are important when adopting AI for anti-fraud. But only 6% feel completely confident they understand how their AI and ML models make decisions. Biometric matching systems, especially those using deep learning models, are often among the least explainable. When a biometric check passes a synthetic face, the system can’t tell you why it was fooled, because the decision happened inside layers of neural network weights that no human reviewed.
The alternative: verify the document, not the face
Identity Impersonation Detection takes a different approach. Instead of asking whether the face in front of the camera matches a stored template, it asks a more fundamental question: is this person actually who they claim to be, and can they prove it right now?
That means verifying the government-issued ID itself. It means checking whether the phone number associated with the request has been recently ported through a SIM swap. It means catching man-in-the-middle attacks where an attacker is relaying a legitimate person’s verification session through their own device. And it means detecting replay attacks where someone records a valid session and tries to play it back later.
None of these checks depend on biometric matching. None of them can be defeated by generating a synthetic face. And they work without requiring the person to have pre-registered with any system, which matters because impersonation attacks frequently target the moments when someone is locked out of their account or calling in for the first time.
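To make that concrete, here is a rough sketch of how such checks might compose. Every name is hypothetical, and this is not Trusona's actual implementation; it only shows that replay, SIM-swap, and relay detection operate on the channel, not the face.

```python
# Sketch of channel-centric verification checks (hypothetical names;
# not Trusona's actual implementation).

import secrets
import time

ISSUED_NONCES: dict[str, float] = {}   # nonce -> issue time (demo store)
NONCE_TTL_SECONDS = 120

def issue_challenge() -> str:
    # A fresh, single-use nonce is bound to each verification session.
    nonce = secrets.token_urlsafe(16)
    ISSUED_NONCES[nonce] = time.time()
    return nonce

def nonce_is_fresh(nonce: str) -> bool:
    # pop() makes the nonce single-use: a recorded session replayed
    # later presents a nonce that is stale, already spent, or both.
    issued = ISSUED_NONCES.pop(nonce, None)
    return issued is not None and time.time() - issued < NONCE_TTL_SECONDS

def sim_swap_is_recent(last_port_timestamp: float,
                       window_days: int = 7) -> bool:
    # A phone number ported in the last few days is a strong
    # account-takeover signal, whatever the face on camera looks like.
    return time.time() - last_port_timestamp < window_days * 86400

def session_is_relayed(client_ip: str, carrier_ip: str) -> bool:
    # Crude stand-in for man-in-the-middle detection: the device the
    # carrier network sees should be the device completing the session.
    return client_ip != carrier_ip

def verify_request(nonce: str, last_port_timestamp: float,
                   client_ip: str, carrier_ip: str) -> bool:
    return (nonce_is_fresh(nonce)
            and not sim_swap_is_recent(last_port_timestamp)
            and not session_is_relayed(client_ip, carrier_ip))
```

A synthetic face does nothing against any line of this. To get through, the attacker would have to steal a live nonce, forge carrier records, and hide the relay, all at once.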
Biometrics aren’t useless. But they aren’t enough.
This isn’t an argument to rip out every fingerprint reader and facial recognition camera. Physical biometrics still serve a purpose in multi-factor environments where they are one signal among several.
But the ACFE data shows organizations leaning into biometrics as their primary emerging anti-fraud technology, and that’s the concern. If biometrics are your first and strongest line of defense against identity impersonation, you are building on a foundation that generative AI is eroding in real time.
The 45% adoption number feels reassuring until you put it next to the 7% preparedness number. Almost half of organizations have added biometrics. Barely any of them feel ready for AI-powered fraud. Those two facts are not unrelated.
Identity Impersonation Detection doesn’t compete with biometrics. It fills the gap that biometrics can’t cover, which is the gap attackers are walking through right now.
The 2026 Anti-Fraud Technology Benchmarking Report is published by the ACFE in partnership with SAS. You can request the full report on the ACFE website.
Trusona’s ATO Protect stops identity impersonation at IT help desks and call centers without relying on liveness checks or biometric matching. See how it works.