Caution: Creative disruption ahead




Last Updated: 02 July 2024, 21:58 IST

We are in the midst of an unprecedented and disruptive transformation with artificial intelligence (AI) and machine learning (ML) driving automated decision-making and data-derived predictions. Ubiquitous digital services like interactive data sets, maps, and voice-activated bots are just the beginning. 

AI evangelists will tell you that its impact is no less than that of the discovery of fire or the invention of the wheel, and that it will revolutionise every field of human action. There is great expectation that emerging AI applications will transform healthcare, education, agriculture, climate action, and even democracy. Early evidence suggests that AI can help achieve the Sustainable Development Goals (SDGs) more effectively than conventional approaches alone. Some optimistic reports even suggest that advances in AI could double economic growth rates and increase human productivity by 40% by 2035.

Yet, data scientists are circumspect about the creative disruption ahead. The Oppenheimer-like sense of trepidation in some quarters comes from the aphorism that evil begins in the minds of men. In the case of AI, this is exacerbated by the nature of its powerful tools and their ability to codify and reproduce the patterns they decipher. In more advanced economies, ML tools have at times been found to automate racial profiling for surveillance purposes and to perpetuate racial stereotypes. An inbuilt bias in an algorithm creates unfair outcomes, such as privileging one group of people over another in ways that depart from the algorithm's intended function.

Bias can emerge by design or by default, including through unintended decisions about how data is collected, selected, coded, or used to train a model. This is evident in search engine results and on social media platforms. Such bias can have impacts ranging from inadvertent privacy violations to the reinforcement of social prejudices of race, religion, gender, sexuality, and ethnicity. Algorithms can thus be used, intentionally or unintentionally, in ways that produce disparate or unfair outcomes between those who hold power and control resources and those who are disadvantaged and vulnerable.

The more complex the AI model, the harder it becomes to fix accountability and the more difficult it is to seek redress. These shortcomings are universal, but they are likely to manifest most sharply in settings marked by sharp power asymmetries or a history of discrimination and inequality, as in much of the developing world. Even as we increasingly deploy AI tools in human development interventions, we need clear sight of how to ensure that their applications are effective, inclusive, and just.

This will require an understanding of when, and for what kinds of problems, AI and ML offer a suitable solution. It also means appreciating that AI can, under certain circumstances, do harm, and therefore remaining committed to mitigating that harm. Those engaged in development praxis will have an important role to play even as our dependence on AI-based solutions to seemingly intractable problems grows: recognising how machine-based, algorithm-driven decisions affect people.

AI is an extremely powerful instrument that has already altered many aspects of our lives, and its potential appears boundless. Like any instrument, however, it can become a serious threat if used maliciously. In a duplicitous and self-aggrandising world, the likelihood of AI being misused is of frightening proportions. Among the many possible misapplications, some that will affect ordinary citizens are: surveillance, by tracking public movements, analysing social media activity, and using bots to monitor online conversations; communication disruption, by flooding inboxes with spam and using bots to send disruptive messages; and reputation damage, by creating deepfakes, fabricating allegations of financial misconduct, and linking a person to illicit groups. The list is endless.

There is a need to develop a comprehensive set of rules and regulations and to enforce them in an exemplary fashion. This proactive approach must aim to safeguard citizens, protect privacy, and ensure that AI is developed and used ethically. We can all learn to ask the hard questions that will ensure solutions work for, rather than against, the development concerns we care about. Development practitioners already have extensive knowledge of their specific sectors or locations. They bring valuable experience in engaging local stakeholders, navigating complicated social systems, and uncovering the structural disparities that impede inclusive progress. Unless this expert perspective informs the deployment of ML/AI technologies, they are unlikely to realise their transformative promise.

Our efforts in human development should be directed at helping people with fewer technical skills navigate the evolving ML/AI landscape, and at educating and empowering them. Donors, implementers, and other development partners should gain a basic understanding of typical ML techniques and the challenges they are uniquely suited to solving. Public discourse must take up the issues that arise when ML/AI is deployed to address problems affecting disadvantaged and vulnerable populations and those in resource-deficit geographies. Recognising and addressing the risks associated with AI and ML can lead to collaborative efforts to prevent harm and promote a more equitable and humane future. A good starting point would be to teach students about the transformative power of AI as well as the importance of a strong ethical foundation for its use.

While the risks from AI are real, we can take steps to protect ourselves better against its misuse. From stringent data protection laws to AI ethics committees, we must construct defences against such abuse. As AI developers, users, and observers alike, we all share responsibility for upholding rigorous ethical standards. As AI continues to evolve and permeate every facet of our lives, we must remain vigilant, aware not only of its vast potential but also of its potential for misuse.

Let us not lose sight of the incredibly positive impact of AI. The creative disruption ahead must make our world safer, more efficient, and more equitable, rather than lead us towards a dystopian future.

(The writer is Director, School of Social Sciences, Ramaiah University of Applied Sciences)
