By Parmy Olson
Where new technology goes, the unscrupulous follow. The generative artificial intelligence hype train has attracted fraudsters who aren’t only using the technology in their schemes, but touting it in their ads on social media sites. What’s worse is that older and financially vulnerable people are sometimes the ones being targeted. The US Federal Trade Commission (FTC), which polices deceptive advertising practices, saw a jump in complaints over the past year about ads that either used AI or claimed to use it to lure people into scams, according to a document of complaints obtained via a Freedom of Information Act request by Bloomberg Opinion. At least a third of the complaints were about ads spotted on social media sites like Twitter, Facebook and YouTube, which now face a new kind of misinformation adversary among their own advertisers.
There were just two ad-related complaints that mentioned AI to the FTC in the year to February 2023, but that number rose to 14 in the year to February 2024, coinciding with the explosion of generative AI tools that businesses are using to conjure humanlike text and photorealistic images and deepfakes.
These numbers might not suggest an epidemic at first glance. But most social media users are encouraged to complain to platforms like Facebook or YouTube first, not to America’s top online advertising regulator, so the complaints likely reflect a broader increase in what people are seeing, and may be the tip of an iceberg. (All complaints listed here were made directly to the FTC.)
One person in their 30s in Los Angeles, for instance, told the FTC that they’d been lured into transferring $7,000 to a fake Tesla website after watching a video on YouTube that featured a deepfake of Elon Musk. In the video, the fake Musk said Tesla would “double your money for a short period of time” by working with another crypto company. But the LA complainant never got their money back, and said they’d lost their life savings. “That was all I had,” they said in the complaint.
Another person from Florida said they kept seeing deepfake ads on YouTube purporting to show Brad Garlinghouse, the chief executive officer of blockchain-based payment network Ripple, and promising to “double your money,” according to the complaint. YouTube “completely ignores our concerns and these ads are still showing,” the person added.
“We are aware of an emerging trend of deep fake advertisements implying a false celebrity endorsement or relationship,” a spokesperson for YouTube owner Alphabet Inc. tells me by email. “We also know that these ads can sometimes be used to scam users. We are investing heavily in our detection and enforcement against these deep fake ads and the bad actors behind them.” In January, the company said it was “aware” of ads using deepfakes of celebrities like Musk to propagate scams, and took down 1,000 videos promoting them, according to a report in 404 Media.
The most recent complaints show that in much the same way businesses are falling over themselves to mention AI in their services or products, swindlers are using it as a kind of bait too. And several of the past year’s grievances to the FTC also pointed to ads on Facebook platforms. One person in England, for instance, found their way to a Facebook page for a company claiming to host an AI trading platform. Once the person opened an account, they made a deposit of $200 and got an appointment with a financial adviser. “He would then educate me further on AI trading,” their complaint said. The “adviser” ended up pushing them to pay a fee to withdraw their deposit, which they couldn’t afford.
Another person in the Philippines reported a video ad on Reels — Facebook’s version of TikTok — in which scammers claimed to use AI to help people earn up to $1,500 a day in a part-time job. And on a different Facebook platform — Instagram — someone in Australia said they’d seen an announcement for an AI trading platform developed by Musk that could “help ordinary people make thousands of dollars via crypto trading.” After investing $250, this person found themselves unable to withdraw any of their money. “They kept saying ‘Finance has to approve it,’” the complainant said. “That’s the last I heard from them.”
A spokeswoman for Meta Platforms Inc., which owns Facebook, said the company works “closely with law enforcement to support investigations and keep scammers off our platforms,” and that its various apps “have systems to block scams.”
Fraudsters are using AI in other ways too. One veteran in the US state of Georgia complained that she had visited a dating site for seniors and realized that many of the potential partners she was chatting with were bots.
Bots have infiltrated dating apps and sites for years, but generative AI has recently helped them sound far more fluent. On some sites, they’ll encourage users to spend money to buy virtual gifts for others on the platform, or on more chatting credits. “I was stupid for falling for it,” the veteran said in her complaint.
Social media companies have spent years battling exploiters of their platforms, including those trying to spread propaganda and conspiracy theories. This new breed of bad actors presents a unique challenge. AI tools make it possible for scammers to launch campaigns at scale, which means the sheer volume of their content could overwhelm detection methods. The platforms may have a Herculean effort on their hands. In the meantime, businesses and consumers alike should remember the adage when they encounter newfangled tech: If it seems too good to be true, it probably is.