The Early Days of Email Phishing

When the term "phishing" first emerged in the mid‑1990s, attackers sent emails impersonating banks or online services to trick users into handing over passwords and credit card numbers. Early scams were easy to spot: poor spelling, generic greetings and suspicious URLs. Over time, attackers refined their techniques, adopting convincing branding, personalized messages and domain spoofing. Organizations responded with spam filters, link scanners and employee training, and multifactor authentication and hardware security keys further reduced the efficacy of email phishing. Google reported that after it required security keys for all employees, successful phishing of employee accounts dropped to zero.

Despite these advances, phishing remained prevalent because it scales: sending thousands of emails costs almost nothing. But as technical defenses improved and users grew more cautious, attackers adapted.

The Rise of Phone‑Based Vishing

Vishing (voice phishing) is social engineering conducted over the phone. Attackers call victims while impersonating IT support, banks or executives, exploiting trust and urgency to extract information or to convince help‑desk agents to reset MFA. Vishing has grown for several reasons:

  • Multi‑factor authentication adoption – As organizations widely implemented MFA, attackers pivoted to social‑engineering help‑desk agents to reset the second factor.
  • Abundance of personal data – Massive data breaches and oversharing on social media provide attackers with personal details. They use this information to answer security questions and craft convincing stories.
  • Spoofing and VoIP – Attackers use VoIP services to spoof caller IDs, making it appear that calls originate from trusted numbers. The Canadian Centre for Cyber Security notes that vishers use fraudulent phone numbers, voice alteration software and social engineering to trick victims.
  • Higher success rate – Phone calls create a sense of urgency. People are more likely to comply when talking to a “boss” or “IT support” than when reading an email. Attackers invest time to research their targets and rehearse scripts.

The Impact of Generative AI on Social Engineering

The latest evolution of social engineering leverages generative AI. Criminals use AI to produce natural‑language emails and call scripts, and even to clone voices. The FBI warns that AI enables more sophisticated phishing, including voice and video cloning. This technology empowers attackers to:

  • Clone executive voices – From a short audio sample, AI can mimic a CEO's speech patterns; attackers then use the cloned voice to instruct employees to transfer money or reset credentials.
  • Produce deepfake videos – Deepfake videos can show a colleague requesting access during a video call. This undermines trust in visual verification.
  • Craft personalized messages – AI can ingest data about the victim’s role, recent projects and contacts to craft persuasive narratives.

These capabilities make vishing more convincing and scalable than ever before.

Next Steps for Defenders

As social engineering evolves, defenders must adapt. Strategies include:

  • Strong identity proofing – Replace knowledge‑based questions with stronger identity verification, such as government‑ID scans, selfie matching and device checks, which raise the bar far beyond what a cloned voice alone can defeat.
  • Phishing‑resistant MFA – Use FIDO2 passkeys or security keys. Even if attackers trick help‑desk agents, they cannot complete authentication without the hardware key.
  • Zero‑trust policies – Apply zero‑trust principles to help‑desk interactions. Verify both the requester and the device before granting access.
  • Scripted and automated workflows – Create scripts that help‑desk agents must follow: require call‑backs to numbers on file and multi‑party approval for high‑privilege accounts, and remove agent discretion from the process (see the sketch after this list).
  • Training and simulations – Educate employees and help‑desk staff about vishing, AI impersonation and MFA fatigue. Conduct regular simulations to reinforce procedures.
  • Monitoring and analytics – Log and analyze reset requests, call patterns and unusual behaviors; early detection can prevent compromise. A minimal example also follows the list.
  • Public awareness and regulation – Encourage legislation to curb caller ID spoofing and data broker practices. Promote public awareness about vishing and AI scams.
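
The "remove discretion" point is easiest to see in code. The sketch below is a minimal illustration, not a real system: the `Employee` record, field names and thresholds are all assumptions, and a production workflow would pull this data from the identity directory and ticketing system. The point is that the call‑back and multi‑party checks are enforced by the workflow rather than left to an agent's judgment.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    user_id: str
    phone_on_file: str   # number from the HR/identity directory, never from the caller
    privileged: bool

def approve_mfa_reset(employee: Employee,
                      callback_number: str,
                      callback_verified: bool,
                      approver_ids: list[str]) -> bool:
    """Approve an MFA reset only if every step of the scripted workflow passed."""
    # Step 1: the agent must call back the number already on file,
    # never a number supplied during the inbound call.
    if callback_number != employee.phone_on_file or not callback_verified:
        return False
    # Step 2: high-privilege accounts need sign-off from two distinct approvers.
    if employee.privileged and len(set(approver_ids)) < 2:
        return False
    return True

# A caller claiming to be an admin fails: only one approver signed off.
admin = Employee("jdoe", "+1-555-0100", privileged=True)
print(approve_mfa_reset(admin, "+1-555-0100", callback_verified=True,
                        approver_ids=["approver_a"]))  # False
```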

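Similarly, the monitoring item can start small. The snippet below scans a hypothetical help‑desk reset log and flags any agent who approves an unusual burst of MFA resets in a short window; the log format, window and threshold are assumptions chosen for illustration, and a real deployment would feed these events into a SIEM instead.

```python
from datetime import datetime, timedelta

# Hypothetical log entries: (timestamp, help-desk agent, target account).
reset_log = [
    (datetime(2024, 5, 1, 9, 5),  "agent_7", "cfo_assistant"),
    (datetime(2024, 5, 1, 9, 40), "agent_7", "it_admin"),
    (datetime(2024, 5, 1, 10, 2), "agent_7", "payroll_mgr"),
]

def flag_busy_agents(log, window=timedelta(hours=2), threshold=3):
    """Flag agents who approve `threshold` or more resets within `window`."""
    alerts = []
    for agent in {a for _, a, _ in log}:
        times = sorted(t for t, a, _ in log if a == agent)
        for start in times:
            hits = [t for t in times if start <= t < start + window]
            if len(hits) >= threshold:
                alerts.append((agent, start, len(hits)))
                break
    return alerts

print(flag_busy_agents(reset_log))
# [('agent_7', datetime.datetime(2024, 5, 1, 9, 5), 3)]
```
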
Conclusion

Social engineering techniques have evolved from simple phishing emails to sophisticated vishing and AI‑powered scams. As technical defenses improve, attackers target the human layer, exploiting trust, urgency and social bonds. Defenders must respond by implementing strong identity proofing, phishing‑resistant MFA, zero‑trust principles, scripted workflows and continuous education. By understanding the evolution of social engineering and anticipating the next wave of threats, organizations can stay one step ahead and protect their help desks from ever‑changing attacks.