Blog/Case Study

LastPass Employee Stops Deepfake Attack in Its Tracks

When scammers used AI to clone the CEO's voice over WhatsApp, an alert employee recognized the hallmarks of social engineering and shut it down.

Nevrial Security Team

Threat Intelligence

January 12, 2026

5 min read


A Password Manager Under Attack

On April 10, 2024, LastPass—one of the world's most popular password managers—disclosed that it had been targeted by a deepfake audio attack. In a transparent move that helped raise awareness across the security community, the company shared details of how the attack unfolded and how it was stopped.

According to the LastPass blog post, an employee received a series of calls, texts, and voicemails that appeared to come from CEO Karim Toubba—except they were delivered via WhatsApp, outside normal business channels.

The Attack Method

The scammers employed a straightforward but increasingly common approach:

  1. WhatsApp Contact: The attacker created a WhatsApp account posing as CEO Karim Toubba
  2. Audio Deepfake: Voice messages and calls used AI-generated audio that mimicked Toubba's voice
  3. Urgency and Authority: The messages created pressure for immediate action
  4. Multiple Touchpoints: Calls, texts, and voicemails were used to increase legitimacy

The attack demonstrated how accessible deepfake technology has become. Creating a convincing voice clone no longer requires nation-state resources—it can be accomplished with publicly available software and a few minutes of sample audio.

What Saved LastPass

The targeted employee exhibited exactly the kind of security awareness that organizations need:

1. Recognized Unusual Channels

The use of WhatsApp for CEO communications immediately raised suspicion. Legitimate business requests should come through official channels.

2. Identified Social Engineering Hallmarks

The employee recognized classic manipulation tactics:

  • Forced urgency: Pressure to act immediately
  • Authority impersonation: Using the CEO's identity to compel compliance
  • Unusual requests: Deviating from normal business processes

3. Reported Instead of Acted

Rather than engaging with the suspicious communication, the employee immediately reported it to the internal security team.

LastPass's Transparency

In disclosing the attack, LastPass's threat intelligence team emphasized the broader implications:

"Impressing the importance of verifying potentially suspicious contacts by individuals claiming to be with your company through established and approved internal communications channels is an important lesson to take away from this attempt."

The company also noted the historical progression of deepfake threats:

  • 2019: First reported corporate deepfake attack—a UK company transferred funds after an AI voice impersonated its CEO
  • 2024: Deepfake technology has become sophisticated, accessible, and increasingly used against private sector targets

The Security Paradox

There's a certain irony in a password security company being targeted by AI impersonation. LastPass exists to solve the authentication problem for passwords—but deepfakes represent an entirely different authentication challenge.

Even the strongest password can't verify that the face on your video call or the voice on your phone actually belongs to who they claim to be.

Building a Security-Aware Culture

The LastPass incident demonstrates that technical solutions alone aren't sufficient. The employee who stopped this attack didn't use any special detection software—they used:

Critical Thinking

Questioning why the CEO would use WhatsApp for urgent business

Training Retention

Recognizing the hallmarks of social engineering attacks

Proper Reporting

Following incident response procedures instead of trying to handle it alone

Healthy Skepticism

Treating unexpected communications with appropriate suspicion

Lessons for Every Organization

LastPass's experience offers actionable guidance:

Establish Clear Communication Policies

Define which channels are appropriate for different types of business communications. Make it clear that urgent financial requests should never come through consumer messaging apps.

Train for Deepfake Awareness

Security awareness training must evolve beyond phishing emails to include:

  • Voice cloning and audio deepfakes
  • Video deepfakes in real-time calls
  • Social engineering via new communication channels

Create Safe Reporting Channels

Employees should feel empowered to report suspicious communications without fear of being wrong. False positives are far better than successful attacks.

Verify Through Independent Means

When receiving unusual requests—especially involving money or sensitive information—verify through a different channel. Call the supposed sender on a known phone number, not one provided in the suspicious message.

The Growing Accessibility of Deepfakes

LastPass's threat intelligence team highlighted a crucial point: deepfake technology that once required significant resources is now freely available:

"There are now numerous sites and apps openly available that allow just about anyone to easily create a deepfake."

This democratization of AI-powered fraud means every organization is a potential target, regardless of size or industry.


Nevrial provides real-time deepfake detection that works alongside employee training to catch AI-generated fraud. See how our technology can protect your team.

Protect your organization from deepfake fraud.

See how Nevrial's real-time detection and hardware-backed verification keep your video calls secure.