Seven years on, the explicit deepfake business is still thriving on Telegram
DH Web Desk

Image with the word 'Deepfake'. For representational purposes.

Credit: iStock Photo

In 2020, as artificial intelligence was snaking its way into our lives, deepfake expert Henry Ajder uncovered one of the first Telegram bots that used AI to "undress" women in photos. While that revelation marked a watershed moment at the time, highlighting the risks of deepfakes, the menace has only grown over the past four years.


Bots are smaller apps that run within Telegram and sit alongside channels on the platform. They can broadcast messages, hold quizzes, and translate messages, among many other capabilities, which have continued to expand as developers experiment with AI.

According to a recent investigation by WIRED, which reviewed Telegram communities that circulated explicit, non-consensual content, there are at least 50 such bots still available for use on the platform. Some of these claim the ability to "remove clothes" from photos, while others claim to generate images of people in various sexual acts.

Despite so much being written about deepfakes, the risks associated with them, and their negative impact on the social fabric at large, the explicit deepfake business appears to be thriving — and the numbers bear this out.

To start with, the 50 bots identified by WIRED boast more than four million "monthly users" combined, with two of them listing more than 400,000 monthly users each. Fourteen other bots identified by the publication boasted more than 100,000 users each.

Some of these bots also had telling descriptions: "I can do anything you want about the face or clothes of the photo you give me," one bot declared. "Experience the shock brought by AI," read the description of another.

WIRED noted that almost all of these bots required users to buy "tokens" to create explicit images, suggesting the existence of a marketplace for the same.

Further, these bots were found to be supported by at least 25 Telegram channels, which offer users the option to subscribe to a newsfeed-esque feature for fresh updates. These channels, WIRED noted, had more than three million combined members.

If that sounds alarming, it barely scratches the surface.

The investigation by WIRED captured only a snapshot of what can perhaps be described as the explicit deepfake 'market' (for lack of a better word). The publication analysed only English-language bots, and noted that the 50 it identified could represent merely a fraction of the overall number of deepfake bots on Telegram, many of which could operate in other languages.

Explicit nonconsensual deepfake content, often referred to as nonconsensual intimate image abuse (NCII), first emerged around the end of 2017 with the advent of generative AI, and has exploded since then as AI itself has become better at such tasks.

(Published 17 October 2024, 21:00 IST)