Earlier this year, The New Yorker’s Charles Bethea chronicled an unsettling development in the world of scams: scammers using AI voice clones of people to trick those people’s loved ones into sending money. The scammers covered in the article used AI technology to create what sounded like the voice of a friend or relative of their target; they would then make it seem as though that person was in danger before commencing with the extortion.
Now, this phenomenon has become prominent enough that the FBI has issued a warning about it. Specifically, the warning covers a host of generative AI-fueled scams, including voice cloning and the creation of fraudulent photos, videos and social media profiles.
“Criminals use AI-generated text to appear believable to a reader in furtherance of social engineering, spear phishing, and financial fraud schemes such as romance, investment, and other confidence schemes or to overcome common indicators of fraud schemes,” the warning declares, before getting into specifics about precisely how advances in AI technology make fraud easier.
The FBI goes on to advise readers to establish a code word or phrase with loved ones, making it easy to determine whether a caller claiming to be one of them actually is. Many of the bureau’s other tips for identifying AI-generated images and videos are worth keeping in mind, whether or not you suspect you’re the target of a scam.
In an essay for The New York Times published this fall, Evan Ratliff explained the appeal of this technology. “Voice agents aren’t just a tool to fend off scammers, they’re also a scammer’s dream: never sleeping, cheap to deploy and human-sounding enough to fool some segment of their targets,” he wrote. It’s a cautionary tale, and may lead you to wonder where scammers will go from here.