That’s Not Your CEO on MS Teams Directing a Payment
Business email compromise (BEC) and vendor email compromise (VEC) are challenging enough. Now, criminals and malicious state actors are adding AI-generated deepfakes to their toolbox to create convincing imitations of organization leaders, making detection by the unwary even harder. The latest warning comes from a US government Cybersecurity Information Sheet issued jointly by the NSA, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA).
The joint report says that while indications of significant use of “synthetic media techniques” remain limited, malicious actors will likely increase their use of these techniques.
Deepfakes use machine learning to create audio and video communications that look and sound genuine. Tom Hanks and other celebrities are not the only victims of such imitation. Other marks include CEOs and CFOs, whose voices and likenesses are convincingly faked online. Phony voices and images show officers instructing staff to grant systems access, make illegitimate account changes or issue payments.
Fake images and voice imitations have been around for a while, but creating a realistic fake once took considerable time and skill. With AI, it is now fast and simple. The wide availability of AI generators has “democratized” the ability to create sophisticated fakes.
The government is warning federal and local governments and agencies, including the military, government employees, first responders, users of national security systems, defense industrial base firms and critical infrastructure owners and operators (utilities and similar organizations), that they are prime targets of certain state actors. But the notice also warns of the dangers to corporations and other organizations, highlighting incidents in May 2023 in which cybercriminals used deepfakes to attempt to defraud organizations for financial gain.
In one case, the perpetrators used audio and imagery deepfakes to imitate a CEO, drawing an operations manager into an “interactive call” with the fake CEO. In the second case, the attackers used synthetic audio, video and text messages, perhaps too clever by half. They first impersonated an executive and set up an audio call on WhatsApp. Then, citing a poor connection, the fake executive suggested an MS Teams meeting, where the executive appeared to be in his office. Again the connection was poor, so the impersonator instructed the target to switch to text and, in the text exchange, ordered the target to wire money to an account. Fortunately, by then the targeted staffer had grown suspicious and ended the communication.
What can organizations do as the criminals get more sophisticated and persuasive? And how can an accounts payable department protect an organization’s assets?
An accounts payable department must foster a vigilant culture, with regular training on BEC, VEC and deepfakes to build awareness. Staff must never circumvent vital internal controls: always verify the source and confirm that a message actually came from the person or organization it claims to. With the stated backing of the C-suite, every bank account change or payment request must be independently verified, with no exceptions for “CEO requests.”
More broadly, organizations must develop, implement and rehearse plans to respond to and limit damages caused by deepfakes. And IT departments must continue to keep up with necessary security software and protocols.
Bank account verification is a critical part of protecting your organization’s assets. Contact us to learn how VendorInfo can help.