
Deepfake: A Horrifying Tale of a $25 Million Cybercrime

“The devil never sleeps,” said St. Catherine of Siena. Neither do cybercriminals. With the arrival of generative AI and deepfakes, the newest cyber threat caught a finance worker in Hong Kong off guard, leading to a $25 million loss.

The kicker? The employee was initially skeptical of an email from the company’s UK-based CFO calling for a confidential transaction. But the email also included instructions for a video call about the matter. The CFO led the video call, other colleagues whom the staffer recognized joined, and all agreed on the payments. That call removed the employee’s doubts—seeing and hearing is believing—and he made payments as instructed, totaling US$25 million.

But the video conference was a fake—a deepfake. The people on the call looked and sounded like the CFO and other business colleagues, but every one of them was fabricated. Only later, when the poor staffer checked with the corporate head office, did he realize criminals had carried out an elaborate scam.

The criminals created AI-generated video using material purloined from past online conferences. The perpetrators also used email, which nearly betrayed them with its insistence on confidentiality.

Deepfakes Are Here

The event demonstrates scamming at a new, rather terrifying level. (Coincidentally, the news broke at about the same time that authorities in the U.S. were looking into a deepfake audio of the president used in robocalls ahead of the New Hampshire primary.)

Deepfake refers to fabricated video and audio representations of individuals—typically government or business leaders or celebrities—that deliver a false message to influence people into thinking, acting or reacting according to the perpetrators’ aims. The FBI calls it business identity compromise (BIC).

The FBI, NSA and the Cybersecurity and Infrastructure Security Agency (CISA) have released a Cybersecurity Information Sheet entitled “Contextualizing Deepfake Threats to Organizations,” defining the problem of synthetic media and the nature of the threats.

The document’s summary section explains:

“Deepfakes are AI-generated, highly realistic synthetic media that can be abused to:

  • Threaten an organization’s brand
  • Impersonate leaders and financial officers
  • Enable access to networks, communications and sensitive information.”

According to the NSA, “Deepfakes are a particularly concerning type of synthetic media that utilizes artificial intelligence/machine learning (AI/ML) to create believable and highly realistic media.”

As Technology Advances, So Does Its Criminal Use

Several motives drive malicious parties. The one that matters most to financial operations is impersonation for monetary gain: cybercriminals use manipulated media and social engineering to steal money from businesses. Examples include impersonating a company’s CEO, CFO or other financial officers and manipulating audio and video to authorize fraudulent payments. Business email compromise (BEC) may be part of the social engineering scheme, directing an employee to attend a video call.

In a fraud scheme a couple of years ago, an employee was directed to a call, then, due to a “poor connection,” urged to switch to a messaging system, where instructions to make payments followed. Despite its poor quality, the initial call persuaded the victim that she was dealing with the actual CFO—but of course, she wasn’t. The latest scam in Hong Kong takes the idea to a new and more persuasive level, thanks to the higher quality of fakes now possible with generative AI.


The FBI recommends that organizations take steps to identify and defend against deepfakes, including detection technologies, real-time verification, and passive detection methods. The paper by the FBI, NSA and CISA contains several recommendations for “resisting deepfakes.”

Many of the recommendations are technical and the responsibility of IT. But some are “target-level” steps for intended victims such as accounts payable staff. As with BEC and VEC (vendor email compromise), the recommendations center on creating awareness of deepfakes and social engineering, cultivating skepticism, and mandating extra confirmation of requests through an independent channel.

Access to communication channels—video calls and conferences—should require multi-factor authentication along with known personal details or biometrics. Verification of bank account numbers and ownership has never been more critical. (See the special report “How to Reduce the Risk of Payment Fraud through Automatic Verification of Bank Account Ownership.”)
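To make the target-level controls concrete, here is a minimal sketch—purely illustrative, not code from the FBI/NSA/CISA paper—of a payment-release check that treats independent-channel confirmation and bank account ownership verification as hard gates. All names and the dollar threshold are assumptions for the example.

```python
# Hypothetical sketch of a payment-release policy. The key idea: never trust
# the channel a request arrived on (email, even a live video call); require
# confirmation through an independent channel plus account ownership checks.

from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_usd: float
    vendor_account: str
    requested_via: str                # e.g. "email" or "video_call"
    callback_confirmed: bool          # confirmed via a phone number on file,
                                      # NOT a number supplied in the request
    account_ownership_verified: bool  # bank account ownership check passed

def may_release(req: PaymentRequest, callback_threshold_usd: float = 10_000) -> bool:
    """Return True only if the independent-channel controls are satisfied."""
    # Large payments require out-of-band confirmation, regardless of how
    # convincing the original request (or video call) was.
    if req.amount_usd >= callback_threshold_usd and not req.callback_confirmed:
        return False
    # No payment without verified bank account ownership.
    if not req.account_ownership_verified:
        return False
    return True
```

Note that in this sketch a $25 million request arriving via a video call is blocked until someone confirms it by calling back on a number already on file—exactly the step that would have stopped the Hong Kong scam.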

Deepfakes are here, and they’re not just celebrity fakes or political stunts. Criminals are after your organization’s money and reputation. Raise your awareness and your controls. 

Contact us for help with automatic bank account verification and secure vendor data transfers.

Let’s Talk!
