The breach didn’t start with a vulnerability. It started with a phone call.
Someone called the help desk, said they were the VP of Finance, explained they were locked out, and walked the tier-1 agent through a standard verification process. Name, employee ID, last four of their social. All correct. All pulled from a leaked database, a LinkedIn profile, and maybe thirty seconds of basic OSINT. The agent reset the account. The attacker was in.
This isn’t hypothetical. According to Palo Alto Networks, social engineering was the leading initial access vector in over a third of incident response cases investigated between May 2024 and May 2025. In two thirds of those cases, attackers specifically went after privileged or executive accounts. The help desk is now one of the most targeted entry points in enterprise security, precisely because it’s staffed by people trying to be helpful.
MFA didn’t stop it. Liveness checks weren’t involved. The attacker never touched an authentication flow. They just asked.
That’s the gap Identity Impersonation Detection is built to close.
What Is Identity Impersonation Detection?
Identity Impersonation Detection (IID) is the practice of verifying that the person requesting access or an account action is actually who they claim to be, not someone using stolen information to impersonate them. Unlike authentication, which confirms credentials, IID checks the real-world identity behind the request, in real time, without requiring pre-registration or liveness checks.
Authentication tells you that someone has the right password or token. IID tells you whether the person holding that token is actually the account holder. The security industry has been treating those as the same question for a long time. They aren’t.
Why This Category Exists Now
Impersonation isn’t a new attack. What changed is the cost.
According to IBM, the average cost to create a deepfake is now $1.33. A convincing voice clone that can answer basic verification questions costs close to nothing, as long as the attacker has a phone recording and access to a consumer AI tool. Cheap synthetic media on top of breach data means the attacker’s job is now closer to social performance than technical exploitation.
SoSafe’s 2025 report found 87% of security leaders observed an increase in AI-based social engineering attacks over the previous two years. Veriff’s 2026 Fraud Report found that over 85% of fraudulent verification attempts in 2025 involved impersonation. That’s not a niche attack vector. That’s the dominant fraud pattern.
The organizational numbers are worse than most people realize. Proofpoint found that 99% of the organizations they monitored in 2024 were targeted for account takeover, and 62% experienced at least one successful breach. The Federal Reserve reported ATO fraud losses in the U.S. hit $15.6 billion in 2024, up from $12.7 billion the year before.
Most of those attacks didn’t succeed because authentication was weak. They succeeded because the attacker skipped authentication entirely. They called. They emailed. They social-engineered the human on the other end.
What MFA Misses
Nobody’s arguing that multi-factor authentication is a bad idea. It isn’t. But MFA was built to verify credentials, not identities. The question it answers is “do you have the right token?” which is not the same as “are you actually this person?”
That gap shows up in two ways.
The first is the help desk. An attacker who can convincingly impersonate an employee doesn’t need to crack MFA. They call support, explain they’re locked out, and ask an agent to reset the account and re-enroll MFA on their behalf. The authentication challenge never fires. There’s nothing for MFA to catch because the reset happens before authentication is even involved.
The second is technical. SMS-based MFA is vulnerable to SIM swapping, and SIM swap fraud jumped 1,055% in 2024. OTP bots intercept one-time codes in real time. Man-in-the-middle (MITM) session hijacking can replay valid authenticated sessions at the protocol level, in ways the authentication layer never sees.
In both cases, the attacker isn’t breaking MFA. They’re routing around it.
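One concrete defensive response to the SIM swap path above is to stop trusting SMS codes for a quarantine window after any SIM change on the registered number. The sketch below illustrates that idea; `SimStatus` and the seven-day window are illustrative assumptions, and a real deployment would pull the last-change timestamp from a carrier or mobile-intelligence lookup rather than a local object.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical lookup result; in practice this would come from a carrier
# or mobile-intelligence API reporting the number's last SIM change.
@dataclass
class SimStatus:
    last_sim_change: datetime  # when the SIM on this number last changed

def sms_otp_is_trustworthy(status: SimStatus,
                           quarantine: timedelta = timedelta(days=7)) -> bool:
    """Refuse to trust an SMS one-time code if the SIM changed recently.

    A fresh SIM swap means codes may be arriving on an attacker's device,
    so the check fails closed and should force a stronger identity check.
    """
    return datetime.now(timezone.utc) - status.last_sim_change >= quarantine

# A number whose SIM changed yesterday vs. one stable for 90 days:
recent = SimStatus(last_sim_change=datetime.now(timezone.utc) - timedelta(days=1))
stable = SimStatus(last_sim_change=datetime.now(timezone.utc) - timedelta(days=90))
```

The quarantine length is a policy choice, not a standard; the point is that a recent SIM change downgrades SMS as a factor rather than letting the OTP flow proceed as usual.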
What Liveness Checks Miss
Liveness detection was designed to confirm that a selfie or video feed is from a real, present person rather than a static image or pre-recorded clip. It worked reasonably well for a while. It works less well now.
The structural issue is that liveness checks are point-in-time tests. Generative AI has gotten very good at producing synthetic video that passes them. Sumsub’s 2025 Identity Fraud Report found that sophisticated fraud attempts, those built to defeat controls like liveness detection, jumped 180% year over year. In 2024, roughly 10% of fraud attempts were classified as sophisticated. By 2025 that number was 28%.
There’s also a more basic issue: liveness checks only apply at specific moments in a user’s journey. They don’t help when an attacker calls your support line using an AI-cloned voice and talks an agent into resetting an account. That conversation never enters a liveness flow. There’s no camera, no selfie prompt, no moment where synthetic video detection has anything to work with.
Trusona doesn’t use liveness checks, and that’s intentional. A control that sufficiently convincing video can defeat isn’t a reliable identity layer. And right now, convincing video is cheap.
How Identity Impersonation Detection Works Differently
Instead of “can you prove you have the right credentials?”, IID asks “does the evidence match what we know about this account holder’s real-world identity?”
In practice, that means verifying identity through government-issued documents, checking real-time SIM swap status on the account holder’s registered number, and running patented anti-replay and man-in-the-middle detection at the session level. No pre-registered biometrics. No liveness check for GenAI to defeat. The check runs against real-world identity signals, not stored credentials.
Here’s how that compares to the tools that came before it:
| | MFA | Liveness Checks | Identity Impersonation Detection |
| --- | --- | --- | --- |
| What it verifies | Credentials / tokens | That a video is “live” | That the requesting person is the account holder |
| Stops help desk social engineering | No | No | Yes |
| Stops SIM swap | No (SMS MFA) | N/A | Yes |
| Stops MITM / replay attacks | Partially | No | Yes (patented detection) |
| Defeats GenAI deepfakes | No | No | Yes |
| Requires pre-registration | Yes | Yes | No |
| Requires liveness check | No | Yes | No |
The middle rows of that table, help desk social engineering, SIM swaps, and MITM replay, are where most modern ATO attacks land. MFA wasn't designed to cover them. Liveness detection is increasingly being defeated on them. IID is built for them directly.
The Attack This Is Designed to Stop
An attacker spends twenty minutes on LinkedIn and a breach database. They pull the target’s full name, employer, job title, direct phone number, manager’s name, and the last four of their employee ID. They call the help desk with a voice cloner trained on a publicly available recording. They answer every verification question correctly. The agent resets the account.
No password. No authentication challenge. The breach was already done before any of that was relevant.
Deepfake-as-a-service platforms were widely available by 2025, putting voice cloning and persona simulation tools within reach of attackers with no particular technical skill. The barrier to executing this kind of attack is low enough now that treating it as a “sophisticated threat actor” problem is a mistake.
IID interrupts that chain before the reset completes. By checking identity through government-issued documents and real-time SIM status, and by detecting MITM and replay patterns at the session level, Trusona’s ATO Protect catches the impersonation at the point of the request. The agent doesn’t have to make a judgment call under pressure while someone on the phone is insisting they’re the VP of Finance and they need access now.
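The key operational change is that the identity check is wired into the reset workflow itself, so the agent is never the last line of defense. The sketch below shows that gating; `verify_identity` stands in for an out-of-band IID service call (document check, SIM status, session analysis) and is a hypothetical interface, not a real Trusona function.

```python
def handle_reset_request(account_id: str, verify_identity) -> str:
    """Gate a help-desk account reset behind an identity check.

    `verify_identity` is a hypothetical callable representing an IID
    service; the reset proceeds only if it passes. A failed check is
    escalated rather than silently denied, so a legitimate locked-out
    user still has a path forward.
    """
    if verify_identity(account_id):
        return "reset-approved"
    return "escalated-to-security"
```

Because the workflow, not the agent, enforces the check, a convincing voice and correct answers to verification questions no longer complete the attack chain.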
Frequently Asked Questions
What is the difference between authentication and Identity Impersonation Detection?
Authentication confirms that someone has the right credentials: a password, a token, a biometric match. IID asks a different question: is the person making this request actually the account holder, or are they impersonating someone using data they shouldn't have? The gap matters most before authentication fires, when a social engineer is trying to convince your team to reset an account and re-enroll MFA on their behalf.
Does Identity Impersonation Detection require users to enroll in advance?
No. Trusona uses government-issued ID verification and real-time SIM swap detection without requiring users to pre-register biometric data or set up a device ahead of time. Pre-registration is a deliberate non-requirement. It adds friction for legitimate users and creates a window of vulnerability for anyone who hasn’t completed enrollment yet.
Why don’t liveness checks solve the identity impersonation problem?
Two reasons. First, generative AI tools can now produce synthetic video that passes many liveness detectors, and the quality keeps improving. Second, liveness checks only cover specific moments in a user’s journey. They do nothing when an attacker is on the phone with your support team using a cloned voice, asking for a password reset. That interaction never enters a liveness flow.
Is Identity Impersonation Detection the same as identity verification?
They’re related but not the same thing. Identity verification usually happens once, at onboarding, to confirm someone is who they claim to be when they first create an account. IID runs as a detection layer across the account lifecycle, during high-risk moments like password resets, access changes, and privilege escalations. It’s less about onboarding and more about what happens after.
What makes account takeover attacks so hard to stop?
The credentials are usually real. The person being impersonated is real. The data the attacker uses came from somewhere legitimate, just not from them. From the outside, a well-executed ATO looks like a confused user who locked themselves out. Stopping it requires a layer that checks whether the person asking for help is actually the account holder, not just whether they know enough about that account holder to sound convincing.