‘Disinformation, deepfakes are socio-political, not technological, problems’

Deepak P tells DH’s Anirban Bhaumik that deepfake technology will keep getting better, continuously overcoming attempts to detect it, and that, in the long run, a robust and independent media ecosystem, along with political literacy, is the best vaccine against disinformation and deepfakes.
Anirban Bhaumik

Deepak P is an associate professor of AI ethics at Queen’s University, Belfast, UK

Credit: Special arrangement

Deepak P is an associate professor of AI ethics at Queen’s University, Belfast, UK. He tells DH’s Anirban Bhaumik that misuse of generative AI can indeed disturb the level playing field in elections, in India or in any other democracy. He says deepfake technology is bound to get better and will continuously overcome attempts to detect it. In the long run, according to him, a robust and independent media ecosystem, along with political literacy and an aware citizenry, is the best vaccine against disinformation and deepfakes. He says that neither disinformation nor deepfakes are a technological problem; they are socio-political problems and ought to be treated that way.

At least 64 countries, including India, the US, and the UK, will go to the polls in 2024, which is being called the year of elections. The elections are taking place amid rapid advances in Artificial Intelligence (AI), particularly generative AI. Can you briefly outline the opportunities and the specific risks these advances present for the conduct of elections in democratic nations?

We are indeed in the ‘year of elections.’ A free press and an informed citizenry are often considered the lifeblood of democracy. Thus, it is natural to consider the role of AI within that space as most important. Let’s perhaps start there.

Press or media freedom involves the ability to investigate and report freely, without any form of coercion or censorship. It is typically characterized by strong and independent media that act in the public interest to steer public discourse and debate towards the important issues in society. Today, in the age of news consumption through social media, the ability of the media to inform the public is mediated by AI algorithms that decide what news we see. These algorithms are designed and run by big tech companies, which put their economic interests before the public interest. What they would like us to watch on their platforms is whatever amuses us, worries us, and arouses our curiosity. Keeping us ‘engaged’ is their intent, and that can be very different from keeping us ‘informed’. Fake news, which incites hatred and division in society, unsurprisingly turns out to be much more engaging than quality news that treats the reader as a thinker and an interested citizen. The AI algorithms within social media could thus be seen as providing a rich playing field for the colonial-era logic of ‘divide and rule’. Those who benefit from societal divisions stand to gain the most from it.

Social media and AI are not without their upsides, though. It is very hard for traditional media houses to report from conflict zones like Gaza or Ukraine, whether because of shallow connectivity owing to fragile infrastructure or because of internet shutdowns. The same engagement-focused AI algorithms can give high visibility to rare citizen reports from such regions and thus, though inadvertently, plug a gap left by traditional media. We did see such positive uses of social media during the Arab Spring in the early 2010s, but they are rare in today’s world.

All of these issues have become further complicated with the arrival of generative AI over the last one to two years. Now, not just tech-savvy professionals but ordinary citizens can create fake and engaging news (text, images, and videos included) in minutes using web services. Will AI-powered social media, infested with generative AI imagery, decimate the free press and, in the process, undermine democracy irreparably? This is a burning question of our times.

The international community has been waking up to the need to regulate AI, particularly to prevent its intentional misuse. The UK Government held an AI Safety Summit at Bletchley Park, which concluded with a summit declaration. More recently, the Tech Accord to Combat Deceptive Use of AI in 2024 Elections was announced by the world’s largest tech companies on the sidelines of the Munich Security Conference. Do you think these initiatives can protect the integrity of democratic processes from the misuse of AI?

Since November 2023, ‘AI safety’ has become a big global buzzword. Of course, nobody wants ‘unsafe AI’ or ‘rogue AI’. Yet, it is important to look beyond the surface. Is AI safety sufficient? Consider a specific case: EagleAI, a tool deployed in the US to find faults in electoral administration. It uses intelligent matching to identify small, casual mistakes in voter records, such as name typos, and uses those to challenge citizens’ right to vote. By lodging thousands of challenges, especially targeted at voters aligned with a particular party (in this case, the Democrats), it creates a perception that the voter list is not credible and causes a lot of extra work for the election authorities. It works perfectly as intended, sifting through voter lists and identifying minor issues; thus, it is ‘safe’ under the traditional interpretation of safety. Yet, this AI is undesirable if our goal is to maintain a healthy democracy. My point is that a single-minded focus on ‘AI safety’ is dangerous in that it may deflect attention from such forms of harm from AI.

The tech accord and other big-tech strategies put forward to combat AI harm during elections essentially propose more of the same. They look to put more effort into fact-checking and into building AI tools to detect fake news. These have probably worked well so far, but they are deficient in at least two dimensions. First, electoral disinformation is often structurally biased. In India, a lot of election-time disinformation targets minorities, and in general, disinformation targets women more than men. The tech strategies do not indicate any understanding of such well-known structural biases in disinformation. If we don’t care to understand the nuances of the challenge, how can we address it? Second, AI-generated images, a relatively new arrival on the scene, cannot be debunked using traditional fact-checking methods. When a piece of disinformation claims that a leader said a particular thing, it is easy to fact-check by seeing whether other media have reported something similar, or even by directly checking with the leader in question. But how can an AI-generated image showing a communal street fight be debunked? When the location and the people involved are not mentioned, how can traditional fact-checking verify whether any such incident took place? Doing more of the same is not enough, and tech companies do not seem to show any urge to diversify their strategies for fighting AI-enabled electoral deception. This makes their ‘strategies’ woefully inadequate.

Nearly 960 million voters are expected to cast their votes in the parliamentary elections in India. It is the biggest democratic exercise in the world, but one always fraught with the risk of being manipulated by unscrupulous politicians who may seek to divide and polarise the electorate along religious and caste lines. Do you think AI could come in handy for such politicians to disturb the level playing field in the elections?

The question relates to a very real threat that India faces today. Election after election, we see so much news intended to polarize the electorate, incite hate, and deepen social divisions. It is not just news; even movies get released with such intent! It is well understood that hate appeals to our primal instincts, to emotions such as anger and fear, and often also to our insecurities, and it attracts our attention quickly. Love and social solidarity, on the other hand, require patience and understanding. This makes it generally harder to spread positive news. AI and social media, which amplify attention-capturing news, make it even harder to build social solidarity. The story doesn’t stop there. Social media platforms allow clients to target their ads at specific demographics; for example, a political ad on IT policy can be targeted specifically at young users in Bengaluru. The increased possibilities offered by generative AI, social media, and personalized targeting also come with a caveat: you need to understand them well to use them to your advantage. This leads to another asymmetry in the playing field, one that relates to wealth. Wealthier political parties will be able to navigate the technological complexity and use modern AI technologies to their advantage better than others. Smaller regional parties may find it harder to thrive in this technological electoral ecosystem. It does indeed make the playing field more asymmetric, in a variety of ways.

An AI-generated robocall imitating US President Joe Biden’s voice recently asked Democrats not to vote in the New Hampshire primaries. In Pakistan’s elections, an AI-generated video of the incarcerated Imran Khan appealing to voters to back candidates supported by his party was circulated. In India, an AI-generated video of the late Tamil Nadu Chief Minister M Karunanidhi congratulating T R Baalu, a leader of his own party, the Dravida Munnetra Kazhagam, on the launch of his autobiography was circulated recently. It is anticipated that the forthcoming elections in India may see AI-generated videos of mythological or historical characters, dead leaders, and icons of the nation’s struggle for Independence being used to seek votes either for or against candidates. How can one draw the line between what is acceptable and what is fake, deepfake, disruptive, or harmful to democratic processes?

AI-generated images are not all of one kind, nor should they be treated as such. It is often said that ‘a picture conveys a thousand words.’ It is much more forceful to illustrate the plight of war-torn Gazans through a picture than through a detailed narration. What do we do if we can’t get pictures because of shallow connectivity or internet shutdowns? We may have cartoons, which evoke strong empathy in the onlooker. Generative AI could be made to work in the same spirit as cartoons: it could legitimately be used to illustrate the plight of a struggling group and create much-required social solidarity or empathy. These are values that positively influence democracy. In contrast, generative AI could also be used to show a particular minority group in a bad light and incite hatred. If we treat images as a generated/real binary, we won’t be able to distinguish between these forms of messaging. The social relations embedded within the generative AI imagery matter much more than whether the image is generated or not.

The bad news is that this means there is no simple way to draw the line; the good news is that there may be ways, however narrow, to use generative AI positively. We have nothing but our political literacy to rely on when deciding what to take in and what not to. If we take the care to dissect incoming messaging, to identify the social relations embodied within it and the intents that may have led to its creation, we will do much better than with any objective criteria. The critically minded and informed citizen is key to democracy, more so in these times of generative AI. Neither disinformation nor deepfakes are a technological problem; they are socio-political problems and ought to be treated that way.

How can the Election Commission of India, or similar election management bodies around the world, equip themselves adequately to curb the misuse of generative AI in polls and protect democracy from disinformation and deepfakes?

It is indeed a very tough situation for the ECI to be in. Yet, it is important that election management bodies look at long-term solutions rather than short-term ones.

For example, we need to understand that technological solutions to detect deepfakes are intrinsically limited, since the very same technology can be used to make deepfakes better. Fact-checkers typically look for ‘tell-tale signs’ such as unnaturally smooth skin or extra fingers in images, or characters not blinking in videos. Newer generative AI images and videos seldom have such issues. As a technology expert, I can safely say that it is easy to retrain these generative AI systems to get past such tricks. In other words, deepfake technology is bound to get better and will continuously overcome attempts to detect it. If we continue to rely on these tricks, we will only end up debunking fake news generated by poor AI, or by the AI that smaller political parties without the financial muscle to invest heavily can afford. In India, most electoral disinformation propagates through WhatsApp and other non-public channels, making it even more complicated for electoral bodies to estimate the extent of the challenge.

In the long run, a robust and independent media ecosystem is our best vaccine against disinformation and deepfakes. Yet, we could think of potential mitigating measures in the meantime, such as creating a ‘disinformation/deepfake observatory’ that allows citizens to report malicious deepfakes they come across. This could be facilitated through an anonymous reporting mechanism, like the one WikiLeaks used, so that citizens can report deepfakes without fear of their activity being exposed and of possible retaliation. Data gathered by the observatory could help identify the groups or sub-populations most targeted by deepfakes; such insights could be used to develop awareness campaigns specifically aimed at mitigating the most serious harm. Running a general media literacy campaign to educate the public on the changing media landscape and on the need to critically evaluate information before internalizing it could also help build a robust citizenry.

(Published 08 March 2024, 23:08 IST)