AI: The good and the bad

Although AI is now past the stage of definition, ask around and people still find it hard to wrap their heads around what it really entails.
Last Updated : 20 January 2024, 22:53 IST

Way back in 1976, Drew McDermott, a professor of Computer Science at Yale University, US, was already exploring the enormous possibilities of Artificial Intelligence, which was then in its nascent stages. He summed up the field thus: “Artificial Intelligence has always been on the border of respectability, and therefore on the border of crackpottery.” In 2024, we are realising how true this assessment is.

It is almost impossible to spend a day without hearing about another advance AI has made, a generative AI artwork that has won a prize or a celebrity who has been deepfaked. Business, travel, finance, warfare, healthcare, entertainment and environment — AI is enabling every field you can think of, and at an alarming pace, which is where the misinformation and the hype come in. That pace has left most people not just dizzy but also confused and wary. In his entertaining book, ‘Faking It’, Toby Walsh, professor of AI at the University of New South Wales, who is often called the ‘rockstar’ of Australia’s digital revolution, writes that many of the claims made about AI are simply “overinflated”, and in several cases, plain wrong.

Although AI is now past the stage of definition, ask around and people still find it hard to wrap their heads around what it really entails. Essentially, Artificial Intelligence mimics the behaviour of the human brain to complete certain tasks faster. Walsh, while ruing that it is depressing to be asked to “define what I do”, says most researchers agree that Artificial Intelligence is about “trying to get computers to do tasks that, when humans do them, we say require intelligence.” The key thing about it is that it is “artificial”; that it is about “imitating human intelligence”. 

This is our cue to put aside, at least for the time being, our fantastical notions about AI. Humanoid robots will not be helping you in the kitchen this year, and artists won’t be giving up their art anytime soon because of Midjourney and other generative-AI apps. And there is no imminent machine takeover to deal with either. As Walsh says, we don’t need robots to destroy humanity; we do that well enough ourselves!

In reality, AI can be both good and great, and it will certainly play an increasingly crucial role in our everyday lives. But what will it do? “It will take on the dirty, the dull, the difficult and the dangerous, which is a good thing. Indeed, it is hard to imagine a part of our lives that AI won’t touch,” writes Walsh. This may mean anything from cars with better safety features to cancer cures to easier travel logistics, and a blessed escape from the drudgery of repetitive tasks.

That said, deepfakes are today the biggest AI-generated headache of our times, while ChatGPT is still in oddball territory (see accompanying stories) — considered by many a freaky plaything rather than an application threatening their existence or livelihood. But perhaps that is just a matter of time. As with all things AI, that may change in a blink or two — the same amount of time it will take for this article to become dated, one presumes!

Deepfake for dummies!

Last week, Sachin Tendulkar became the latest in a series of celebrities who have found themselves deepfaked. The cricketing great was seen, and heard, endorsing a mobile application in what was later revealed to be a doctored video. ‘Doctored’ is one way to break deepfakes down for the uninitiated, but as AI technologies evolve at a staggering pace and enable criminal intent in unprecedented ways, it could prove a simplistic and ineffectual description of what is emerging as one of the biggest technology fallouts of our time.

1. What is a deepfake?

In 2017, members of a Reddit community used face-swapping technologies to create pornographic images and called them deepfakes. The name derives from Deep Learning, a form of Machine Learning inspired by the human brain, and its use to manipulate, or fake, audiovisual content.

A generative adversarial network, or GAN, is integral to the creation of most of these manipulated versions of photographs, voices and videos. There are two components in the network — the generator algorithm is trained to produce content that resembles, or mimics, the source material, while the discriminator algorithm acts as an adversarial check, trying to tell the generated content apart from the real thing.

The repeated exchanges between the two algorithms refine the generator’s output and sharpen the discriminator’s judgement; conflicting and complementing at once, they synthesise content that gets progressively closer to the original.
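
For readers who want to see that back-and-forth in concrete terms, the adversarial loop can be sketched in a few lines of code. This is a deliberately tiny, illustrative toy, not any real deepfake system: the “real” data here are just numbers clustered around 4, the generator is a linear map of random noise, and the discriminator is a one-variable logistic classifier. Every name and number in the sketch is invented for illustration.

```python
# Toy sketch of a GAN's adversarial training loop (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x_fake = a*z + b with noise z ~ N(0, 1); starts far from the data.
a, b = 0.1, 0.0
# Discriminator: D(x) = sigmoid(w*x + c); starts undecided (outputs 0.5).
w, c = 0.0, 0.0

lr, batch = 0.05, 32
for step in range(2000):
    z = rng.normal(0.0, 1.0, batch)
    x_real = rng.normal(4.0, 0.5, batch)   # the "source material"
    x_fake = a * z + b                     # the generator's current fakes

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: adjust (a, b) so the discriminator rates fakes as real.
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w                # gradient of log D at each fake
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

fakes = a * rng.normal(0.0, 1.0, 10000) + b
# After training, the fakes should have drifted towards the real mean of 4.
```

Each pass through the loop is one “exchange”: the discriminator learns to score real samples above fakes, and the generator then shifts its output to earn a higher score, so the fake distribution drifts ever closer to the real one — the same dynamic, at vastly greater scale, that makes deepfake video and audio convincing.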

2. How easy can it get?

Free deepfake applications are bringing the technology ever closer to unskilled users, leaving them with unregulated power to create. US-based cybersecurity firm Home Security Heroes says in its report, ‘2023 State of Deepfakes’, that with existing technologies, a user can put together a deepfake pornographic video from just one clear image of a face, in less than 25 minutes and without spending any money.

3. Why is this a problem with serious social implications?

Manipulated content that targets celebrities has dominated the public discourse on deepfakes. Emerging, and arguably more alarming, trends point to wider fields of application. The lines between mischief and malice are blurring rapidly. Deepfakes could be used to aid crime across domains — as a tool to extort, impersonate, plagiarise, fabricate legal evidence, manufacture political consensus and swing elections, or even set off geopolitical tension. The technology is also being weaponised against women in the form of revenge pornography.

The virality these deepfakes find on social media platforms makes them a powerful tool for polarisation. Research indicates that public understanding of deepfakes is growing, but with technology upgrades making such content ever more sophisticated, this could be a long game of catch-up.

4. Can deepfakes be detected?

Inconsistencies in facial expressions, mouth movements, ambient sound and lighting patterns are giveaways, but the detection of these glitches also depends on the level of sophistication at which the content was created. Advanced manipulation warrants advanced tools for detection.

Sentinel, which counters deepfakes with an AI-based protection platform, is being adopted by governments and large organisations. Intel launched FakeCatcher in late 2022, with the promise of detecting fake videos with a 96% accuracy rate. The detector studies subtle blood-flow signals from the face in the video and, using Deep Learning techniques, ascertains whether the face is real or fake, “in milliseconds”. The democratisation of such technologies can help build a stronger resistance against deepfakes.

5. What does the law say?

Section 66E of the Information Technology Act, 2000 is one of the provisions that could be invoked to handle crimes that involve deepfakes in India. It stipulates imprisonment of up to three years or a fine not exceeding Rs 2 lakh, or both, for capturing, publishing or transmitting the image of a person without his or her consent.

The Tendulkar deepfake has surfaced at a time when calls for laws that specifically address AI-enabled crimes are growing shriller. Minister of State for Electronics and Technology Rajeev Chandrasekhar has said that tighter rules under the IT Act would be notified soon, to ensure compliance by the platforms.

In December last year, the Ministry of Electronics and Information Technology issued an advisory asking digital platforms to comply with the existing IT Rules. The platforms have been directed to “clearly and precisely” communicate to users the categories of content notified as impermissible under the IT Rules.

6. How does the future look?

The big challenge will be in devising effective regulatory systems that do not conflict with the idea of free expression. Building awareness through research and educational content can help users prepare better for future upgrades in deepfake technologies. Some AI analysts predict that deepfakes will create a culture of mistrust. The more optimistic among them, however, see detection tools getting smarter, eventually limiting deepfakes to their positive utilities, such as visual effects for games and films.
