Images of Taylor Swift generated by artificial intelligence that spread widely across social media in late January probably originated as part of a recurring challenge on one of the internet’s most notorious message boards, according to a new report.
Graphika, a research firm that studies disinformation, traced the images back to a community on 4chan, a message board known for sharing hate speech, conspiracy theories, and, increasingly, racist and offensive content created using AI.
The people on 4chan who created the images of the singer did so in a sort of game, the researchers said — a test to see whether they could create lewd (and sometimes violent) images of famous female figures.
The synthetic Swift images spilled out onto other platforms and were viewed millions of times. Fans rallied to Swift’s defense, and lawmakers demanded stronger protections against AI-created images.
Graphika found a thread of messages on 4chan that encouraged people to try to evade safeguards set up by image generator tools, including OpenAI’s DALL-E, Microsoft Designer and Bing Image Creator. Users were instructed to share “tips and tricks to find new ways to bypass filters” and were told, “Good luck, be creative.”
Sharing unsavory content via games allows people to feel connected to a wider community, and they are motivated by the cachet they receive for participating, experts said. Before the midterm elections in 2022, groups on platforms such as Telegram, WhatsApp and Truth Social engaged in a hunt for election fraud, winning points or honorary titles for producing supposed evidence of voter malfeasance. (True proof of ballot fraud is exceptionally rare.)
In the 4chan thread that led to the fake images of Swift, several users received compliments — “beautiful gen anon,” one wrote — and were asked to share the prompt language used to create the images. One user lamented that a prompt produced an image of a celebrity who was clad in a swimsuit rather than nude.
Rules posted by 4chan that apply sitewide do not specifically prohibit sexually explicit AI-generated images of real adults.
“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” Cristina López G., a senior analyst at Graphika, said in a statement. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”
Swift is “far from the only victim,” López G. said. In the 4chan community that manipulated her likeness, many actresses, singers and politicians were featured more frequently than Swift.
OpenAI said in a statement that the explicit images of Swift were not generated using its tools, noting that it filters out the most explicit content when training its DALL-E model. The company also said it uses other safety guardrails, such as denying requests that ask for a public figure by name or seek explicit content.
Microsoft said that it was “continuing to investigate these images” and that it had “strengthened our existing safety systems to further prevent our services from being misused to help generate images like them.” The company prohibits users from using its tools to create adult or intimate content without consent and warns repeat offenders that they may be blocked.
Fake pornography generated with software has been a blight since at least 2017, affecting unwilling celebrities, government figures, Twitch streamers, students and others. Patchy regulation leaves few victims with legal recourse; even fewer have a devoted fan base to drown out fake images with coordinated “Protect Taylor Swift” posts.
After the fake images of Swift went viral, Karine Jean-Pierre, the White House press secretary, called the situation “alarming” and said lax enforcement by social media companies of their own rules disproportionately affected women and girls. She said the Justice Department had recently funded the first national helpline for people targeted by image-based sexual abuse, which the department described as meeting a “rising need for services” related to the distribution of intimate images without consent. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), the union representing tens of thousands of actors, called the fake images of Swift and others a “theft of their privacy and right to autonomy.”
Artificially generated versions of Swift have also been used to promote scams involving Le Creuset cookware. AI was used to impersonate President Joe Biden’s voice in robocalls dissuading voters from participating in the New Hampshire primary election. Tech experts say that as AI tools become more accessible and easier to use, audio spoofs and videos with realistic avatars could be created in mere minutes.
Researchers said the first sexually explicit AI image of Swift on the 4chan thread appeared Jan. 6, 11 days before such images were said to have appeared on Telegram and 12 days before they emerged on X, formerly known as Twitter. 404 Media reported on Jan. 25 that the viral Swift images had jumped to mainstream social media platforms from 4chan and a Telegram group dedicated to abusive images of women. The British news organization Daily Mail reported that week that a website known for sharing sexualized images of celebrities had posted the Swift images on Jan. 15.
For several days, X blocked searches for Taylor Swift “with an abundance of caution so we can make sure that we were cleaning up and removing all imagery,” said Joe Benarroch, the company’s head of business operations.