Navigating the uncharted: AI tide sweeps India

The evolving use of artificial intelligence to produce deepfakes and misinformation campaigns presents a complex challenge to the government.
R Krishnakumar

Nearly 20% of Indians were victims of AI voice scams, according to a survey conducted in May this year.

Credit: AI-generated image by Pushkar V using DreamStudio

Bengaluru: Popular Indian narratives around artificial intelligence (AI) have largely drawn on generic, alarmist predictions. That India’s lawmakers are increasingly vocal about the need to regulate AI through legislation signals a new urgency.


There appears to be an emerging realisation that AI-enabled crimes are no longer part of an imagined dystopia; they are now real and warrant an informed response. This response, experts say, cannot be limited to regulatory controls and has to be built on the ethical use of these technologies. 

A recent series of AI-assisted crimes, ranging from voice scams and financial frauds to celebrity deepfakes, has set off fresh concerns over technologies that run on AI and how they can abet criminal activity in new forms and at a new scale.

In November, a 59-year-old woman in Hyderabad was duped into transferring Rs 1.4 crore to a fraudulent caller who “mimicked” the woman’s Canada-resident nephew and claimed that he was in urgent need of the money. Deepfake videos featuring at least four popular actresses went viral in the past couple of months. 

In another case, P S Radhakrishnan, a retired central government employee in Kozhikode, lost Rs 40,000 in July, in what has been reported as a deepfake fraud. Radhakrishnan, 68, received a video call from a person who “looked like” a former colleague and who requested the money for a relative’s surgery.

The inevitability of these new crimes makes a case for law enforcement to adapt sooner, staying ahead of the perpetrators by using the same tools better.

Experts maintain that strategies to handle these crimes will have to involve a comprehensive policy, greater awareness among the public on the abuse of AI applications, and policing that makes the best use of technology.

M A Saleem, DGP, Criminal Investigation Department (CID), Economic Offences and Special Units, Karnataka, underlines the acceptance of this inevitability. “The modus operandi in online financial frauds is becoming largely predictable. Criminals are switching to AI applications; there will be more crimes involving the tampering of identity through photographs and videos, like how they are being used to create deepfakes,” he says.

AI and cybersecurity experts argue that it is important to approach these emerging trends in crime as inescapable in a country that is engaging with the first wave of generative AI. 

They note that criminals have, traditionally, been early adopters of technology. In developing strategies to counter them, it is critical to acknowledge that in its present forms, AI, like any technological innovation, is only a tool with applications across domains of human activity, including crime.

Nandakishore Harikumar, CEO of cybersecurity firm Technisanct, points to the “misinformed” apprehensions about a still-maturing industry. “This issue is further exacerbated by deepfakes which lend credibility to false narratives. Currently, there are no fully automated methods for detecting these deepfakes. Given this situation, the best countermeasure is to remain vigilant and verify facts independently,” he says. Technisanct, which has operations in Bengaluru and Kochi, is developing an AI-aided platform that can predict cyber threats. 

Saleem notes that criminals will keep upgrading themselves with emerging technologies; law enforcement has to constantly unlearn and relearn to stay relevant.

The Karnataka CID has been coordinating capacity-building programmes for officers, training them in multiple areas of specialisation, including cryptocurrency fraud, counter-forensic techniques, and dark web crimes. 

‘2023 State of Deepfakes’, a report compiled by US-based cybersecurity firm Home Security Heroes, said it now takes less than 25 minutes and costs $0 to create a 60-second deepfake pornographic video using “just one clear face image”. The report placed India sixth in the list of countries susceptible to being targeted by deepfake pornography (2%), behind South Korea (53%), the US (20%), Japan (10%), England (6%) and China (3%).

Deepfakes and dark designs

While these are crimes that primarily target individuals, there is also the threat of AI being used to manipulate the voices and visuals of influential men and women, for mass consumption. The just-concluded Assembly elections in Madhya Pradesh and Telangana saw deepfakes being used to launch targeted misinformation campaigns against political adversaries. 

Political deepfakes have found a large, new audience on messaging platforms like WhatsApp; their effectiveness in shaping social media narratives could soon make them integral to electoral campaigns in India. The end-to-end encryption feature in platforms like WhatsApp makes tracking the source of the malicious content difficult. Fake news, more convincingly peddled to whip up hate, is only one of the more disturbing possibilities.

Tobby Simon, founder and president of Bengaluru-based strategic think tank Synergia Foundation, sees data and the dangers of it being “poisoned” at the centre of the debate. At a fundamental level, manipulation of personal data impacts individuals in everyday transactions – the denial of a loan application, for instance – but the larger risks surface when countries manipulate data with strategic motives or threat actors tamper with it to aggravate conflicts.

“The proliferation of data, without the required checks on its veracity, is the biggest challenge at hand. When data becomes toxic, who is going to control it? It is a tough ask; the right approach will be one of self-regulation. The focus has to be on a judicious, ethical use of AI while we ensure that the institutions – governments, police, banks and the others – are ahead of the game,” Simon says.

A Nagarathna, Associate Professor of Law at the National Law School of India University (NLSIU), says the ethical challenges associated with AI usage need closer examination because legal frameworks in any domain have usually been designed around principles of ethics. “When any act goes against ethical rules, like the misuse of AI against the interest of the public, or against other ethical principles like neutrality (when AI processes a specific set of data or when AI’s usage is discriminatory), transparency (of AI’s decision-making processes and priorities) and accuracy of the results of AI usage, it is important to design standards of AI usage,” she says. Nagarathna, coordinator of NLSIU’s Advanced Centre on Research, Development and Training in Cyber Law and Forensics, cites UNESCO’s Recommendation on the Ethics of AI as an example.

A question of liability

Union Minister for Electronics and Information Technology Ashwini Vaishnaw called deepfakes – AI-generated synthetic media that replicate facial features of people – “a threat to democracy” as he announced the government’s plans to bring in fresh regulations on them. Minister of State Rajeev Chandrasekhar has also underscored the need for legislative guardrails to ensure the responsible use of AI.

Firoz Bharucha, an advocate at the Bombay High Court, contends that preparedness for this evolving challenge is inadequate because the existing laws are generic and do not address accountability for AI’s role in criminal activity. The key, Bharucha says, lies in developing ways to establish proof of liability in crimes involving AI.

“If we accept that AI may create problems, we will have to focus on that and a policy has to be framed. It is impossible to be sitting here today, where AI is still in its infancy, and trying to make it future-proof. The policy has to evolve as the problems present themselves. We will have to be a bit ahead of the game but at the same time, new problems will continue to emerge. Then, we will have to modify the policy accordingly,” he says.

The surge in social media engagement has users making themselves seen and heard more on public platforms, often unmindful of the threat of privacy breaches. A McAfee report titled ‘Beware the Artificial Impostor’, released in May this year, said 47 per cent of the respondents in India had either been a victim of an AI voice scam (20%) or knew somebody else who had (27%). A survey of adults who share their voice online revealed that the practice is most common in India, with 86% of the respondents making their voices available online at least once a week, followed by the UK (56%) and the US (52%). The report also mentioned how a voice-cloning tool was used to replicate the voice of a researcher – “a convincing clone” – at an estimated 85% match.

Harikumar says the core issue lies in the digital footprint. “We leave extensive digital traces that facilitate both responsible and irresponsible use in training AI on various datasets. In countries like India, data regulation is particularly problematic due to the lack of a comprehensive framework to distinguish between responsible and irresponsible use of this data,” he says.

In a country where legal frameworks have not kept pace with the growth in technology, the digital push needs to be backed by an efficient support system. Harikumar maintains that these gaps have left India grappling with privacy breach challenges in areas including loan applications, online gaming, and cryptocurrencies.

The ethical challenge

Legal experts highlight the dangers of underestimating ethical concerns in the unrestricted adoption of AI. These concerns are largely about the inherent biases in AI and are not limited to crimes for gain. Arul George Scaria, Associate Professor of Law at NLSIU, notes how AI could amplify biases when it is used in diverse kinds of decision-making by the state or private entities, like credit rating agencies.

“Studies from other jurisdictions have shown that algorithmic bias is a reality and we need more studies to address such issues. Similar is the issue of lack of transparency. Do we still have enough information about the models (particularly, why they produce a particular output) or do we have enough information about the kind and types of training data used by different models? They remain a black box to date in most instances. So, more mandatory disclosures are warranted,” he says.

It is important to initiate more studies and dialogues on these challenges through which regulatory measures could be formulated, he says. Scaria served as co-chair of the Thematic Group on Access to Knowledge and Resources, constituted by the Principal Scientific Advisor to the Government of India and the Department of Science and Technology, for drafting the Science, Technology, and Innovation Policy 2020.

India has stated its intent to build its sovereign AI infrastructure. Its implementation could bring extensive changes to the functioning of India’s digital economy; it could also help overhaul existing laws relevant to AI-enabled crimes and create new ones. Until these preventive systems stabilise, India is likely to stumble and learn with AI.

Simon dismisses the doomsaying and points to the tremendous advances AI is bringing to areas like medical research. Its use, or misuse, still depends on the choices humans make: if a country decides to use autonomous weapons, or robots in warfare, the issue is not with AI; it is about that country and its priorities. The prospects carry hope, with some caution. “AI is already here. Now, we can only wish that this goes well, but that has been the case with all science,” he says.

(Published 03 December 2023, 05:16 IST)