Earlier this week, a viral video surfaced on X (formerly Twitter) featuring what appeared to be popular actress Rashmika Mandanna in sportswear entering an elevator. It generated a lot of debate on social media: one side criticised the outfit as outrageous, while the other defended her clothing choice.
Soon, it came to light that the video was a deepfake. The original clip was of Instagram influencer Zara Patel, a British-Indian woman with 415K followers, and miscreants had used an Artificial Intelligence (AI)-powered application to morph Mandanna's face onto hers.
Mandanna expressed deep shock and hurt over how people are misusing technology to defame women, and urged the police to take action at the earliest.
"I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused. Today, as a woman and as an actor, I am thankful for my family, friends and well wishers who are my protection and support system. But if this happened to me when I was in school or college, I genuinely can’t imagine how could I ever tackle this. We need to address this as a community and with urgency before more of us are affected by such identity theft," Mandanna said.
This episode raised a lot of concerns about user privacy online. Bollywood legend Amitabh Bachchan sought legal action against the culprit.
Rajeev Chandrasekhar, Union Minister of State for Skill Development & Entrepreneurship and Electronics & IT, said: "Deepfakes are the latest and even more dangerous and damaging form of misinformation and need to be dealt with by platforms."
He added that, under Rule 3(2)(b) of the Information Technology Rules, it is the duty of social media platforms to take down such mischievous and defamatory content within 36 hours (or earlier) of a complaint being filed.
Further, under Section 66D of the IT Act, those responsible for creating such fake videos by impersonating someone using a computer resource face imprisonment of up to three years and a fine of up to Rs 1 lakh.
However, the deepfake video is still in circulation on social media platforms.
What is a deepfake?
A deepfake is multimedia content (an image or a video) in which a person's face or body is digitally altered to make them appear as someone else.
Such content was initially termed synthetic media around 2014. Later, as it grew in popularity, an anonymous Reddit user in 2017 began collecting such videos under the title 'Deepfake', and the label has stuck ever since.
Initially, deepfakes were taken lightly and mostly used for comedic content. A recent example superimposed the faces of top Malayalam actors Mohanlal, Mammootty, and Fahadh Faasil on Al Pacino, Alex Rocco, and John Cazale, respectively, in a scene from the iconic movie 'The Godfather'. Though the video is recognisably fake, the audio/video sync quality was good enough that it went viral on Instagram in no time, garnering millions of views.
However, deepfake technology is also being misused to malign high-profile people such as politicians and film actors, often to dent the victim's popularity just before an election or a movie release and to deprive them of future opportunities.
Here's how to identify deepfake content
Though the latest advancements in generative Artificial Intelligence (AI) have improved the quality of deepfakes, we can still find telltale signs to differentiate a fake video from an original.
-- Keep a close eye on the start of the video. For instance, many people failed to notice that at the start of the viral Mandanna video, the face was still Zara Patel's; the deepfake effect kicked in only after she entered the lift.
-- Observe the person's facial expressions from the start to the end of the video. Deepfakes often show irregular changes in expression during a conversation or an action.
-- Look for lip-sync issues. Deepfake videos tend to have minor audio/video sync problems. Watch a viral video a few more times before concluding whether it is a deepfake or not.
-- Deepfakes may show minor variations in body posture that are inconsistent with how a real person moves.
-- Always check the source of the video. Search for the same content on search engines to confirm it, and avoid jumping the gun (a rough programmatic sketch of this comparison idea follows after this list).
[Note: Make it a practice to read news from authentic and reliable publishers.]
-- You can also try online detection tools from Sentinel, WeVerify, Reality Defender, and NewsGuard Misinformation Fingerprints, though these are subscription services.
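For the 'check the source' tip above, one very rough way a technically inclined reader could compare a suspicious clip with a clip claimed to be the original (the way the viral video turned out to be built on Zara Patel's clip) is to perceptually hash sampled frames from both and measure how far apart they are. The sketch below is only an illustration of that idea, not a real deepfake detector; it assumes the opencv-python, Pillow and imagehash packages, and the file names are hypothetical placeholders.

```python
# Rough sketch only: compares sampled frames of two clips by perceptual hash.
# Requires: pip install opencv-python pillow imagehash
import cv2
import imagehash
from PIL import Image

def frame_hashes(video_path: str, sample_every: int = 15):
    """Return perceptual hashes of every Nth frame of a video."""
    cap = cv2.VideoCapture(video_path)
    hashes, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        idx += 1
    cap.release()
    return hashes

def mean_hash_distance(suspect: str, original: str) -> float:
    """Average Hamming distance between corresponding sampled frames.
    Near zero means the clips are essentially identical; a consistent gap
    can hint that one clip is an altered copy of the other."""
    a, b = frame_hashes(suspect), frame_hashes(original)
    pairs = list(zip(a, b))  # compares frames up to the shorter clip
    if not pairs:
        raise ValueError("Could not read frames from one of the clips")
    return sum(x - y for x, y in pairs) / len(pairs)

if __name__ == "__main__":
    # Hypothetical file names for illustration
    print(mean_hash_distance("viral_clip.mp4", "claimed_original.mp4"))
```

Even so, such checks only help once you have a candidate original in hand, which is exactly why tracing the source of a viral clip matters.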
Besides tools, there is an urgent need for regulation, and not just from government agencies; technology companies should also form alliances to build cross-platform detection tools that curb deepfake videos.
Most deepfake detection services are expensive and within reach only of news agencies and big corporations, leaving ordinary people with limited resources to verify such misinformation.
Recently, Google announced a new watermarking technology (SynthID) to identify images created by generative Artificial Intelligence, so that users can easily tell fakes from originals. However, it is still in beta, and there is no word on when it will be available to everyone.
Similarly, we should have free online tools to detect deepfakes and curb the spread of fake news. It should also be made mandatory for deepfake creation apps to embed a watermark in their output for easy identification.
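To give a sense of what an invisible, machine-readable watermark means in practice, here is a deliberately naive least-significant-bit (LSB) sketch in Python. Google's watermarking, and any production system, works very differently and is designed to survive edits and compression; this toy example, which assumes numpy and Pillow and uses hypothetical file names, only shows the general idea of hiding a short 'AI-generated' tag inside an image's pixels.

```python
# Toy illustration of an invisible watermark; NOT how SynthID or any
# production watermarking scheme works.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical label for illustration

def embed_tag(in_path: str, out_path: str, tag: str = TAG) -> None:
    """Hide `tag` in the lowest bit of the first pixel values."""
    img = np.array(Image.open(in_path).convert("RGB"))
    bits = "".join(f"{b:08b}" for b in tag.encode()) + "00000000"  # NUL terminator
    flat = img.reshape(-1)
    if len(bits) > flat.size:
        raise ValueError("Image too small to hold the tag")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite the least significant bit
    Image.fromarray(flat.reshape(img.shape)).save(out_path, format="PNG")  # lossless

def read_tag(path: str) -> str:
    """Recover the hidden tag by reading the same low-order bits back."""
    flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
    chars = bytearray()
    for i in range(0, flat.size - 7, 8):
        byte = 0
        for v in flat[i:i + 8]:
            byte = (byte << 1) | (int(v) & 1)
        if byte == 0:  # hit the NUL terminator
            break
        chars.append(byte)
    return chars.decode(errors="replace")

# Example usage (file names are placeholders):
# embed_tag("generated.png", "tagged.png")
# print(read_tag("tagged.png"))  # -> "AI-GENERATED"
```

A tag like this is trivially destroyed by resizing or re-encoding, which is precisely why robust, standardised watermarking built into generation tools, rather than ad hoc schemes, is what regulators and platforms should push for.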