Elon is playing a dangerous game with Kamala Harris deepfake

X prohibits users from posting media that has been 'deceptively altered' and 'likely to result in widespread confusion on public issues.' But such rules apparently do not apply to Musk himself.


Last Updated: 30 July 2024, 04:43 IST

By Parmy Olson

A video that Elon Musk posted on X over the weekend has the voice of Kamala Harris speaking over her first big campaign ad, describing herself damningly as “the ultimate diversity hire” who does not “know the first thing about running the country.” It’s a powerful, devastating twist on the original ad because the voice is unmistakably Harris’. But it’s been digitally manipulated, most likely by an AI tool.

X prohibits users from posting media that has been “deceptively altered” and “likely to result in widespread confusion on public issues.” But such rules apparently do not apply to Musk himself. The original poster of the video marked it as a parody and got 22,000 views. Musk made no such disclosure when he reposted the video, which has been watched more than 128 million times. That may make him the site’s worst spreader of misinformation.

Musk’s defense on Monday that “parody is legal in America” is itself a joke. Yes, parody is perfectly legal, but as the owner of a social media site, he should know that when influential figures share content without proper context, the original intent (parody or not) gets lost as people share and reshare it. The Harris video in particular plays on existing criticisms of her as a “deep state puppet” and on her border security record, muddying the line between satire and misinformation even further.

To say the post was a parody after the fact doesn’t help when tens of millions of people have already watched the video. But this is a regular cop-out for Musk. Remember his 2018 “funding secured” tweet about taking Tesla private for $420 a share, which he later claimed was a weed joke? Wall Street and the SEC didn’t find it very funny. And when he humiliated a cave rescuer that same year by calling him a “pedo guy” on the platform (then still Twitter), Musk likewise claimed in court that he didn’t mean it literally.

For the Musk faithful, all this juvenile irreverence is part of what makes him so compelling. But the US is likely headed for a closely fought election this November, and the stakes are too high for recklessly posted half-truths. Experts in online misinformation tell me that, anecdotally, Harris has already become a greater target of deepfakes than Trump. With close to 200 million followers and the ability to tweak X’s recommendations or boot people off the platform, Musk can do more than boost Tesla’s shares or humiliate a critic: He can sway thousands of voters in swing states. And if Musk can break the rules on posting AI-generated voices, there’s a good chance others will do the same. He has shown not only how much traction well-designed AI fakery can get on his site, but also how little pushback it draws.

Audio deepfakes can be insidious. They are increasingly difficult to distinguish from real voices, which is why they have quickly become a favored tool for scammers. One in 10 people has reported being targeted by an AI voice-cloning scam, and 77 per cent of those targets lost money to the fraud, according to a 2023 survey by the cybersecurity firm McAfee. Another study found that listeners in a lab setting could detect AI-generated speech only about 70 per cent of the time, suggesting that in the wild, voice clones are getting harder to discern as they grow more sophisticated.
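Taken together, those two McAfee figures imply that roughly 8 per cent of everyone surveyed, not merely of those targeted, lost money to such a scam. A back-of-the-envelope sketch in Python (the percentages are the survey’s; the variable names are mine):

    # Rough arithmetic on the 2023 McAfee survey figures cited above.
    targeted_rate = 0.10          # one in 10 people reported being targeted
    loss_rate_if_targeted = 0.77  # 77 per cent of those targets lost money

    # Implied share of all respondents who lost money to voice cloning:
    overall_loss_rate = targeted_rate * loss_rate_if_targeted
    print(f"{overall_loss_rate:.1%}")  # -> 7.7%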

Cloning a voice is also relatively easy thanks to online tools from AI companies such as ElevenLabs and HeyGen, whose products are designed for marketers and podcasters. These companies have rules against, for instance, generating voice clones of public figures without permission, pornographic imagery or content that infringes copyright, but they tend not to police what their customers create. That is why the best hope for stifling AI-generated misinformation still lies with social media giants such as Alphabet Inc.’s YouTube, Meta Platforms Inc.’s Facebook, TikTok and, unfortunately, Musk’s X.
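To illustrate how low the barrier is, here is a minimal sketch of a cloned-voice text-to-speech request, modeled loosely on ElevenLabs’ published REST API. The voice ID, key and model name below are placeholders, and the exact endpoint and fields may differ from the vendor’s current documentation; treat it as an assumption-laden illustration rather than a working recipe.

    import requests  # third-party HTTP client: pip install requests

    # Hypothetical credentials. Once a clone has been created from a short
    # audio sample, the cloned voice is addressed by a simple ID.
    API_KEY = "YOUR_API_KEY"      # placeholder
    VOICE_ID = "cloned-voice-id"  # placeholder

    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={
            "text": "Any sentence you like, spoken in the cloned voice.",
            "model_id": "eleven_multilingual_v2",  # assumed model name
        },
    )
    resp.raise_for_status()

    # The response body is synthesized audio (MP3 by default).
    with open("cloned_voice.mp3", "wb") as f:
        f.write(resp.content)

The point is not the particular vendor but the shape of the workflow: upload a short sample, get back a voice ID, then synthesize arbitrary text, all in about a dozen lines of code.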

Musk decimated the platform’s trust and safety team after he bought the company in late 2022, starting with a 30 per cent cut in its global safety staff, according to a report from Australia's eSafety Commissioner. Whoever is left to enforce its deepfake policies probably has the toughest job in the tech industry.

Musk seems unable to grasp his responsibility as one of the world’s most powerful media owners as the US heads into a fraught election. He should spend more time mending X’s election integrity efforts and less time playing games and sowing lies, or else let his CEO run the business.
