The Artificial Intelligence (AI) landscape has undergone a seismic shift over the past few decades, but it's the recent explosion of generative AI that has thrust it to the forefront of mainstream discourse and caught the attention of governments worldwide. Today, discussions around AI brim with possibilities as well as cautionary tales. Experts in emerging technology have sounded the alarm on the safety and societal implications of AI, sparking a necessary dialogue about its potential risks and rewards.
It's impossible to ignore the parallels between the reservations voiced about AI today and the scepticism that accompanied previous technological advances like Web 2.0 and the widespread adoption of computers. Historically, technology has repeatedly disrupted traditional forms of labour, yet it has also brought convenience, efficiency, and progress to various sectors. For instance, technology revolutionised healthcare through telemedicine, giving patients in remote areas access to medical consultations via video call, and transformed finance through mobile banking, letting underserved communities far from physical banks manage their money digitally. However, what sets AI apart is its encroachment into white-collar domains, where intellectual prowess and analytical ability are paramount. This shift raises concerns because it affects individuals with significant influence within the system, altering power dynamics in unforeseen ways.
Among the myriad critiques levelled against AI, one of the most glaring is its gendered impact on society. The pervasive biases ingrained within AI systems regarding gender roles are alarming. Yet these biases are not entirely new; they have merely been amplified and perpetuated by emerging technologies like AI. Throughout history, technology has served as a double-edged sword, reflecting and often exacerbating societal inequalities. From revenge porn and cyberstalking to deepfakes in the AI era, technology's capacity to perpetuate harm echoes the broader issues etched into our social fabric.
Intriguingly, this bias isn't confined to AI alone; it's deeply ingrained in our technology and even our textbooks. When we conjure images of engineers or firefighters, it's often the stereotypical image of a man that springs to mind, a bias perpetuated through generations. I recently attended a panel discussion on AI that was, ironically, a "manel", composed entirely of male panellists. One panellist astutely noted that this was a result of human selection, prompting a discussion about the rationale behind the organisers' choices. Had that selection been delegated to AI, however, the biases inherent in the technology would have taken centre stage, drawing criticism of the technology itself rather than of the root cause: the societal biases reflected in the datasets on which it was trained. While we shouldn't anthropomorphise technology, it's imperative to recognise that all existing technology is trained on datasets shaped by human inputs and behaviours, underscoring the need for introspection and accountability in our technological advancements.
The recent working paper by the PM's Economic Advisory Council categorises AI as a complex adaptive system: one in which the relationships between inputs and outputs are far from linear. Picture it as a web of interactions among various actors, much like how humans navigate society. The input-to-output trajectory isn't a straightforward path; it's the emergent behaviour that results from interactions within the ecosystem. Understanding and ensuring the safety of such emergent behaviour is crucial for AI.
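To make "emergent behaviour" concrete, consider the toy simulation below. It is a minimal, hypothetical sketch (not from the working paper): agents on a ring repeatedly adopt the majority opinion of their neighbours, and small changes in the starting configuration can flip the global outcome, so the input-to-output relationship is non-linear rather than a straight path.

```python
import random

# Toy complex adaptive system (hypothetical illustration): each agent on a
# ring repeatedly adopts the majority opinion among itself and its two
# neighbours. The global outcome emerges from local interactions.
def simulate(seed: int, n: int = 50, rounds: int = 30) -> float:
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(rounds):
        nxt = []
        for i in range(n):
            votes = opinions[(i - 1) % n] + opinions[i] + opinions[(i + 1) % n]
            nxt.append(1 if votes >= 2 else 0)
        opinions = nxt
    return sum(opinions) / n  # final share of agents holding opinion "1"

# Nearly identical starting conditions (different seeds) settle into very
# different collective outcomes -- the hallmark of emergence.
for seed in range(5):
    print(f"seed={seed}: final share = {simulate(seed):.2f}")
```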
The gender bias evident in AI stems from training on biased datasets. Transparency can unveil these biases, but the focus should also be on tackling the underlying data. Addressing gender bias in AI therefore requires a multifaceted approach: one that promotes transparency and insists on expertise and accountability in development and deployment.
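What such transparency can look like in practice is an audit of the training data itself, before any model is built. The sketch below is purely illustrative; the sentences and the pronoun lists are invented examples of how skewed gender-occupation associations might be surfaced.

```python
from collections import Counter

# Hypothetical audit of a toy training corpus: count how often each
# occupation co-occurs with gendered pronouns. A real audit would run over
# the full dataset; these sentences are invented for illustration.
corpus = [
    "he is an engineer", "he is a firefighter", "she is a nurse",
    "he is a doctor", "she is a teacher", "he is an engineer",
]

MALE, FEMALE = {"he", "him", "his"}, {"she", "her", "hers"}
counts: Counter = Counter()
for sentence in corpus:
    words = set(sentence.split())
    occupation = sentence.split()[-1]
    if words & MALE:
        counts[(occupation, "male")] += 1
    if words & FEMALE:
        counts[(occupation, "female")] += 1

# Printing the tallies makes the skew visible: "engineer" appears only
# with male pronouns in this toy corpus.
for (occupation, gender), n in sorted(counts.items()):
    print(f"{occupation:>11} ~ {gender}: {n}")
```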
While we've heard ample warnings that synthetic data generated by AI can be deceptive, and that such issues can be mitigated through transparency and robust governance frameworks, there's another side to this coin. Synthetic data can be deliberately crafted to be unbiased. Research suggests that training AI systems on such data can significantly improve efficiency. Hence, there's a case for a deliberate effort to train models on unbiased synthetic data, tackling the root cause of bias in AI systems. This calls for exploring mechanisms and fostering industry-wide understanding and collaboration to pave the way for fairer and more reliable AI technologies.
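One simple way to picture "crafted to be unbiased" synthetic data is counterfactual rebalancing: for every sentence in a skewed corpus, also generate a gender-swapped counterpart, so each occupation co-occurs with each pronoun equally often. The sketch below is a minimal, hypothetical example of that idea, continuing the toy corpus above; it is one technique among many, not a complete debiasing pipeline.

```python
from collections import Counter

# Hypothetical bias-aware synthetic data: emit a gender-swapped
# counterpart for every sentence, balancing pronoun-occupation pairs.
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his"}

def counterfactual(sentence: str) -> str:
    return " ".join(SWAP.get(word, word) for word in sentence.split())

corpus = ["he is an engineer", "she is a nurse", "he is a doctor"]
balanced = corpus + [counterfactual(s) for s in corpus]

# The balanced set now pairs every occupation with both pronouns.
print(Counter(s.split()[0] for s in balanced))  # Counter({'he': 3, 'she': 3})
```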
This is where regulation becomes pivotal. Ideally, regulations foster a climate of freedom, safety, and healthy competition, nurturing innovation rather than stifling it. There's broad agreement on the transformative potential of AI in healthcare, education, and financial inclusion. However, regulators must avoid the missteps of the Web 2.0 era, when they remained passive until tech companies abused their dominance, leading to market monopolisation and anti-competitive practices. As the working paper on AI highlights, the regulatory framework must treat the system as a rapidly evolving, living entity, favouring proactive regulation that fosters innovation over reactive measures.
(The writer is a New Delhi-based public policy consultant)