The staggering rise in the number of deepfake crimes has raised concerns over the online safety of citizens. While some experts feel the use of the technology for financial fraud is a bigger concern, others are worried about its use to malign individuals.
“Video KYC fraud has been going on for a while. Lots of people have lost money owing to such crimes. I do not think it is a major concern when it is done with the intent of hurting someone’s reputation, as in the case of the celebrity deepfake videos,” says Srinivas Kodali, an independent data and privacy researcher.
However, Pallavi Bedi, senior researcher at Centre for Internet and Society (CIS), finds the trend of celebrity deepfake videos disturbing.
“It’s a violation of privacy. If someone is using my image or likeness without my consent, it’s nothing to shrug off. It must be dealt with immediately,” she says. She points out that it is not a simple face-to-face crime: since it is carried out on phones and computers, the culprit is harder to pin down.
Following a spate of deepfake videos of actors such as Rashmika Mandanna and Katrina Kaif, the government asked social media intermediaries to take down deepfake content within 24 hours of a complaint being filed, in keeping with the IT Rules of 2021. The government has also been mulling new laws to protect citizens from such crimes. A bigger concern is the misuse of the technology to spread false information in the run-up to the 2024 general elections.
New law?
Calling the government’s decision to bring out new laws to curtail the menace a “knee-jerk reaction”, Pranesh Prakash, co-founder, CIS, says the government should re-examine existing laws instead. “Without making it clear why the existing laws won’t suffice, they are drawing up new ones. They have not clarified under what provision the laws are being made,” he says. Pallavi cites the personal data protection law, obscenity laws and the intermediary guidelines as examples of current laws governing such issues.
Highlighting another of his concerns, Pranesh says the proposed law on the right to privacy and personality rights should not burden freedom of expression.
How to tell it’s a fake?
According to researchers, such crimes have risen by 230% since 2017. Sahil Taksh, a celebrity and fashion photographer, attributes this to the easy accessibility of generative AI tools.
While Sahil has not encountered deepfake-related snags in his profession, he says one can tell a morphed video apart from an original by taking a closer look at aspects such as the sound, the facial movements and the hairline. “The voice of the person being targeted may be the same, but the tone, the way of pronouncing certain words and the language used will be different. Sometimes it could even sound robotic,” he shares.
Facial expressions and wrinkles can also serve as clues. “If there are no wrinkles or the wrinkles are exaggerated, it’s probably fake. Sometimes if there’s some movement in front of the face, like a hand being lifted, there will be a slight blur. There’s also blurring around the hair area. These are obvious tells,” he explains.
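For readers who want to experiment, these blur tells can be approximated in a short Python sketch. The snippet below is a minimal illustration, assuming OpenCV (cv2) is installed; the hairline heuristic and the 0.5 threshold are arbitrary assumptions chosen for demonstration, not a forensic tool used by the experts quoted here.

import cv2

def laplacian_sharpness(gray_region):
    # Variance of the Laplacian: low values indicate blur.
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def hairline_blur_tell(frame_bgr, ratio_threshold=0.5):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face_sharp = laplacian_sharpness(gray[y:y + h, x:x + w])
        top = max(0, y - h // 3)  # band just above the face box
        if y > top:
            hair_sharp = laplacian_sharpness(gray[top:y, x:x + w])
            # A hairline band far blurrier than the face is one weak signal.
            if hair_sharp < ratio_threshold * face_sharp:
                return True
    return False

A heuristic like this proves nothing on its own; at best it flags frames that merit a closer look by a human or a trained model.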
However, more sophisticated deepfakes, made with equally advanced tools, can be difficult for even an expert to spot.
“It needs the backing of machine learning, which uses the information it is fed to generate an image or video. As information about public figures is readily available on the Internet, they are easier targets,” he says.
This is where forensics comes in. “Forensic experts look at parameters, like audio signals and patterns, to determine if a video is a deepfake,” says Srinivas.
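As a rough illustration of the kind of signal-level comparison Srinivas describes, the hypothetical Python sketch below compares the spectral profile of a suspect audio clip with a genuine recording of the same speaker. It assumes the librosa and numpy libraries are available, and the tolerance value is an illustrative guess rather than an established forensic threshold.

import librosa
import numpy as np

def spectral_profile(path):
    # Mel-frequency cepstral coefficients summarise the voice's timbre.
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1), mfcc.std(axis=1)

def audio_looks_suspicious(reference_path, suspect_path, tol=3.0):
    ref_mean, ref_std = spectral_profile(reference_path)
    sus_mean, _ = spectral_profile(suspect_path)
    # Flag the clip if its average timbre drifts far from the known
    # speaker's profile; a crude first-pass check, not a verdict.
    z = np.abs(sus_mean - ref_mean) / (ref_std + 1e-8)
    return bool(np.any(z > tol))

Real forensic pipelines go much further, examining compression artefacts, phase inconsistencies and lip-sync errors, but the principle is the same: measure, compare against a trusted reference and escalate anomalies.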