The Era of Ambitious AI-Powered Cons Arrives
In a thwarted Marco Rubio impersonation, a new twist on a French-Israeli conman's legendary ploy.
(We’ve been keeping quiet around here as we’re working on Season 2 of the podcast, and funneling all the fun stuff we find into it. Can’t reveal much about the new season yet other than 1. It’s currently on track to launch this fall. 2. It’s bigger and more ambitious than the first season. And 3. We’re increasingly confident that it’ll be full of the same kind of weird, jarring, farcical moments that brought most of you here. Thanks for hanging in with us while we figure it out. In the meantime, this recent news was too enticing not to write a little something about.)
On Tuesday, The Washington Post landed a scoop on events that were predictable yet still extraordinary. Someone posing as U.S. Secretary of State Marco Rubio, and using an AI clone of his voice, “contacted at least five non-Department individuals, including three foreign ministers, a U.S. governor, and a U.S. member of Congress,” according to a diplomatic cable obtained by the Post. This “unknown actor” used a Signal account with the display name “Marco.Rubio@state.gov” to send officials “voice and text messages that mimic Rubio’s voice and writing style.” The impersonation didn’t appear to have succeeded at its intended purpose, the nature of which the cable described vaguely as “gaining access to information or accounts.”
Many of the follow-on reports about the hoax focused on the fact that it was conducted through Signal, an embarrassing reminder of the administration’s previous operational security catastrophe using the service. But for me, the events represented a convergence of two developments I’ve been expecting to collide for a couple years now: the rapid advancement of AI cloning tools, fused with a notorious impersonation scam pioneered by the legendary conman Gilbert Chikli. (As it happens, these two developments also represent the convergence of my own peculiar interests, being the subjects of two podcasts I spent years reporting: Shell Game and Persona: The French Deception.)
The first half of this fusion, the AI portion, was easy enough to see coming. Once anyone could clone a voice for free or cheap, using a few minutes or even seconds of audio, large-scale AI impersonation was functionally inevitable. The bigger voice cloning companies like ElevenLabs have gestured at preventing this kind of voice theft, walling off their professional cloning tools by requiring the cloner to supply in-the-moment consent recordings. (We went through this process with my father, in Episode 6 of Shell Game. A phrase pops up on the screen, you speak it into the computer, and it gets instantly compared to the recordings you’ve uploaded.) But more often, consent requirements are barely a fig leaf. There are now many platforms that offer voice cloning as an add-on to other services, like building AI phone agents for telemarketing. These services—as well as ElevenLabs’ own instant clone tool—typically just require the cloner to check a box asserting they have the voice owner’s consent.
This self-policing landscape of voice cloning has already launched a new era of personalized cons, in the form of “grandparent scams.” The scammers grab snippets of someone’s voice off of social media, clone it, then contact that person’s relative, using the cloned voice to plead that they are in trouble or danger that only a quick money transfer can solve. The target, primed into a panic by a voice that sounds plausibly like their loved one, forks over the money in a kind of fugue state, never pausing to consider the holes in the story.
Clearly, these same AI impersonation scam techniques could and would be turned on more prominent figures. But it’s the second half of the fusion that I’ve been surprised has taken so long to emerge, at least publicly. Namely: an AI-driven iteration of the “fake CEO scam” pioneered by the French-Israeli con man Gilbert Chikli, who once impersonated France’s defense minister in a scam that netted tens of millions of dollars.
If you want the full saga of Chikli and his demented genius, I’ve got an eight-part podcast for you. But in brief, Chikli first made his name with a wave of shockingly successful scams in the early 2000s, in which he called up mid-level employees at French banks, impersonated the bank’s CEO, and convinced the employees to engage in secret “missions” to ferret out terrorist accounts. These missions culminated in the employees transferring millions of euros to Chikli and his associates—including once, famously, in the form of a suitcase full of cash handed over in the restroom of a Parisian café.
After years running the con from Israel, Chikli was finally arrested and extradited to France. But freed on bail in 2009, he hopped a plane back, holed up in a mansion south of Tel Aviv, and declared himself retired. The fake CEO scam he’d made famous, meanwhile, was quickly taken up by legions of copycats, eventually morphing into the most lucrative scam on the planet: business email compromise. (BEC, which I’ve written about in the past, often involves the scammer breaking into corporate email systems, impersonating companies that are expecting payments, then diverting those payments into the scammer’s own bank accounts.)
Chikli, for his part, couldn’t resist the lure of the game, and by 2015 he was back with an even more brazen scheme. Posing as the French minister of defense Jean-Yves Le Drian, Chikli and his associates contacted prominent, wealthy figures across Europe, requesting millions of euros to help free French hostages captured by ISIS. (Since France had an official policy of never paying ransom for hostages, the fake Le Drian’s argument went, the government was doing it all on the down-low with private money, which would later be repaid by the Bank of France.) The Le Drian impersonation included both phone and video calls, the latter conducted wearing a custom silicone mask of Le Drian’s face, sitting at an official-looking desk flanked by flags. It worked, most likely beyond even Chikli’s wildest imagination. The Aga Khan, the billionaire spiritual leader of the Ismaili Muslim sect, transferred more than $20 million to the fake Le Drian for nonexistent hostage rescues. One of the richest businessmen in Turkey, İnan Kıraç, handed over nearly $50 million in a period of three weeks.
Chikli’s Le Drian scam succeeded for the same reasons the fake CEO scam had before it, and the same reasons the grandparent scam succeeds daily: by suddenly dropping the mark into a confusing, isolating, and urgent scenario, the scammer impairs their logical reasoning. But as we discovered reporting Persona, it also benefited from the fact that humans, despite what we think of ourselves, are actually quite bad at identifying voices—even ones we know. As the speech processing expert Jean-Francois Bonastre told me, “if you take someone randomly in the street and ask if she or he is able to recognize people by voice, the answer would be, ‘Yes, I’m able to recognize someone.’” But take the same person into a lab and ask them to recognize voices with audio snippets of short sentences, he said, and “the answer would be close to random.”
Which brings us back to AI voice cloning. Cloned voices don’t need to have perfect fidelity to pull off a scam. With the right context, even the shabbiest clone can engineer targets into the belief that they are talking to their grandchild, their CEO, or the Secretary of State.
Chikli was finally arrested for the Le Drian scam in 2017, in Ukraine, and extradited back to France. He denied having carried it out, suggesting that he himself was the victim of imposters. At his 2020 trial, his lawyers argued that the voice on the scam calls was in fact not Chikli’s, but a clone of his voice that someone had made in order to frame him for the crime. Chikli was convicted and sentenced to a decade in prison.
There have been warning signs, over the last few years, of other scammers starting to adapt Chikli’s ideas for the AI age. In 2024, someone ran the CEO scam nearly to the letter, replacing “CEO” with “CFO” and using an AI clone to convince a Hong Kong-based employee of a British engineering firm to transfer out $25 million. Not long after, fraudsters reportedly attempted to use voice cloning to swindle the ad giant WPP and the password management company LastPass in separate instances, but failed in both cases.
For the most part, though, big-ticket cloning scams haven’t yet surfaced. And the details around the Rubio impersonation are currently too vague to know whether it was a Chikli-inspired money play, a form of espionage, or an ill-advised prank. But I would venture that somewhere in the world, at this very moment, a person or company is realizing they’ve transferred millions of dollars into the accounts of a scammer, having been convinced by an AI clone that they were dealing with a statesperson or CEO.
If you hear about anyone who has, please drop me a line.
À l'époque des arnaqueurs,
Evan