Deep Deep Fakes

Tim MalcomVetter
4 min read · Feb 20, 2024

You’ve seen the examples.

A Tom Cruise impressionist who uses AI to go the last mile and completely consume the persona of the famous actor. A video of a political leader stating the opposite of what he would really say, in a modern high-tech propaganda pitch. Back in September 2019, a threat actor impersonated a CEO’s voice via AI, resulting in a six-figure fraud loss. Now, in the past month, we have a case where a multinational corporation fraudulently transferred $25M USD because a finance employee in Hong Kong believed he was talking to the CFO and several key executives in a video conference: “it turns out that everyone [he saw] was fake.”

Today, these attacks are still high-profile and highly targeted: they impersonate individuals with plenty of audio/video source material to clone, and they feel more like Ocean’s Eleven than the run-of-the-mill exploit-something-or-someone, get-king-of-the-hill, drop-ransomware scenario. But the tide will turn as the tech improves, needs less source material to impersonate someone, and becomes easier to use.

But while we wait for that, what if something even easier is available to attackers right now?

What if they weren’t fake at all? What if the “victims” are corporate employees who can blame fraud on a “deepfake” audio/video conversation, and the corporate entity has no way to prove or disprove it? Let’s call these “Deep Deep Fakes”: deepfakes so thoroughly faked that they’re never even attempted, yet victim corporations won’t be able to tell the difference.

Today’s conferencing tech is not capable of detecting the presence of deepfake AI, and by default most platforms do not record the audio or video of conversations. Some organizations turn on audio transcription, but is a transcript good enough to even tell whether impersonated audio was present? What if a co-conspirator on the other end of the call simply reads a script, the lines a finance executive with authorization to transfer money might say, without even attempting a deepfake? In today’s world, there isn’t enough forensic evidence to know.

Yes, a co-conspirator would likely obfuscate their source IP address when connecting to the video service, but so would an actual malicious deepfake threat actor on the other end. So forensic evidence about which device and location were used won’t indicate the presence of a deepfake.

And no, an actual deepfake threat actor won’t leave watermarks or any other telltale signs of faked content in their video stream. Watermarks won’t work because bad guys don’t follow rules and leave them in, and tools that force them in will be replaced by modified or open-source tools that don’t have or require that feature. So the fact that a co-conspirator’s stream lacks an AI audio/video “watermark” won’t be an indicator, either.

The way to catch these Deep Deep Fakes will be good old-fashioned police work. When the “victim” employee in the treasury department suddenly retires six months later or departs for other reasons, perhaps alongside behavioral changes such as no longer caring about the quality of their work or living above their means, there’s a classic “follow the money” opportunity for detection. For large theft amounts, police resources will be available to investigate; at smaller scales, much less so.

We have already seen cases where an outside threat actor will recruit an employee to run malicious software or take an insecure step in exchange for money, giving the outside actor a foothold for a breach. What’s to stop them from simply asking an insider to hop on Zoom, read a script, transfer the money, and claim “I saw the CFO on the call” when internal investigators and law enforcement come knocking?

Will the phrase “I saw a deepfake” become an instant alibi for any would-be white-collar criminal?

Advice for enterprise defenders:

Continue capturing as much cyber telemetry about official video/audio conversations as possible, to the extent your legal and platform-provider limitations allow. But this game won’t be won in the cyber realm. It will be won by controls that have existed for decades.
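To make the telemetry advice concrete, here’s a minimal sketch of an append-only log for participant join/leave events, assuming a generic webhook-style payload. The field names (meeting_id, user, ip, client) are hypothetical; map them to whatever participant events your conferencing platform actually emits.

```python
import json
import time
from pathlib import Path

# Append-only store; in practice, ship these events to your SIEM.
LOG = Path("meeting_telemetry.jsonl")

def record_participant_event(event: dict) -> None:
    """Append one participant join/leave event with a receipt timestamp."""
    entry = {
        "received_at": time.time(),
        "meeting_id": event.get("meeting_id"),
        "user": event.get("user"),        # display name as presented on the call
        "ip": event.get("ip"),            # source IP, if the platform exposes it
        "client": event.get("client"),    # device/client version string
        "event_type": event.get("type"),  # e.g. "joined" / "left"
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a "CFO" joining from an address you may later want to trace.
record_participant_event({
    "meeting_id": "m-1234", "user": "CFO Jane Doe",
    "ip": "203.0.113.7", "client": "web-client/5.0", "type": "joined",
})
```

None of this proves or disproves a deepfake, but it preserves the who/when/where that later investigations will need.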

Treasury teams should follow separation-of-duties principles with multi-step approvals for large wire transfers, ideally with as much real-time transparency as possible. For mothership companies with subsidiaries, this should include transparency up to the mothership, not just contained within the subsidiary, which may not have as many resources or as much maturity and monitoring in place. Corporate bank accounts should have controls around wire transfers to countries that won’t allow grace periods to reclaim fraudulently transferred funds; smart corporate treasury departments already know this and talk to their banks to set it up. Lastly: have a plan to act quickly if a fraudulent transaction pops up, including contacting your bank and asking it to initiate a SWIFT transaction freeze, which may require you to contact each bank involved along the way to do the same.
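As a sketch of what separation of duties with multi-step approvals might look like in code: the thresholds, role names, and approver counts below are illustrative assumptions, not a real treasury policy.

```python
from dataclasses import dataclass, field

# Assumed policy: USD threshold -> number of distinct approvers required.
APPROVALS_REQUIRED = {50_000: 2, 1_000_000: 3}

@dataclass
class WireRequest:
    initiator: str
    amount_usd: int
    destination: str
    approvers: set[str] = field(default_factory=set)

    def required_approvals(self) -> int:
        """Return the approver count for the highest threshold this amount crosses."""
        needed = 1
        for threshold, count in sorted(APPROVALS_REQUIRED.items()):
            if self.amount_usd >= threshold:
                needed = count
        return needed

    def approve(self, approver: str) -> None:
        # Separation of duties: the initiator can never approve their own wire.
        if approver == self.initiator:
            raise PermissionError("initiator cannot approve their own transfer")
        self.approvers.add(approver)

    def releasable(self) -> bool:
        return len(self.approvers) >= self.required_approvals()

wire = WireRequest(initiator="hk_finance_clerk", amount_usd=25_000_000,
                   destination="ACME-SUPPLIER-LTD")
wire.approve("regional_controller")
wire.approve("hq_treasurer")     # transparency up to the mothership
wire.approve("hq_cfo_delegate")
assert wire.releasable()         # three distinct, non-initiating approvers
```

The point of the control isn’t the code; it’s that no single employee, deepfaked call or not, can move $25M on their own say-so.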

If fraud slips through faster than you can respond: have your insider threat program watch the internal “victim” for signs of behavioral change, such as living above the means of the job’s pay, declining job performance, or a sudden resignation. There may still be an opportunity to get a criminal conviction and recoup some of the funds (albeit likely years later, in criminal proceedings).
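A hypothetical sketch of that “follow the money” correlation: flag anyone tied to a disputed wire who departs within the watch window. The six-month window comes from the scenario above; the data sources (fraud case records, HR departure dates) are assumptions.

```python
from datetime import date, timedelta

WATCH_WINDOW = timedelta(days=180)  # roughly the six months mentioned above

# Assumed inputs: disputed wires from your fraud cases, departures from HR.
disputed_wires = [("hk_finance_clerk", date(2024, 1, 15))]  # (employee, wire date)
departures = {"hk_finance_clerk": date(2024, 6, 1)}         # employee -> last day

for employee, wire_date in disputed_wires:
    left = departures.get(employee)
    if left and left - wire_date <= WATCH_WINDOW:
        days = (left - wire_date).days
        print(f"flag for insider-threat review: {employee} "
              f"departed {days} days after a disputed wire")
```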
