AI’s brain fog won’t stop a reckoning for the arts

Even as firms rein in spending and investors dampen their expectations for generative AI leaders, the technology’s creative strengths are taking hold in industries where hallucinations can be an advantage and where there’s less risk in getting things wrong. Think marketing, gaming and entertainment, or any job that involves nonlinear thinking.


Last Updated : 02 July 2024, 05:13 IST

By Parmy Olson

Ever notice how science fiction gets things wrong about future technology? Instead of flying cars, we got viral tweets that fueled culture wars. Instead of a fax machine on your wrist, we got memes. We’re having a similar reality check with artificial intelligence. Sci-fi painted a future with computers that delivered reliable information in robotic parlance. Yet businesses that have tried plugging generative AI tools into their infrastructure have found, with some dismay, that the tools “hallucinate” and make mistakes. They are hardly reliable. And the tools themselves aren’t stiff and mechanistic either. They’re almost whimsical.

“We thought AI would be ‘The Terminator’ but it turned out to be Picasso,” says Neil Katz, founder of EyeLevel.ai, a startup that helps companies get generative AI models to work with 95 per cent accuracy when plugged into their data. It will take another three to five years of tinkering before that level of reliability becomes widespread, Katz predicts, and before the technology can be substantially useful to the core operations of finance or health-care companies. That doesn’t mean generative AI isn’t already having an industry-transforming impact, though. It’s just not happening in the way that its creators envisioned.

Even as firms rein in spending and investors dampen their expectations for generative AI leaders like Nvidia Corp., the technology’s creative strengths are taking hold in industries where hallucinations can be an advantage and where there’s less risk in getting things wrong. Think marketing, gaming and entertainment, or any job that involves nonlinear thinking.

The impact is already clear on jobs. Contractors dominate in creative industries and, since the launch of ChatGPT, there has been a 21 per cent drop in demand for digital freelancers, according to a November 2023 study by researchers at Harvard Business School and two other academic institutions. The jobs most affected have been in writing. Yael Biran, an experienced animator who had enjoyed a steady flow of about 12 projects annually, recently told me that her workload had dwindled to just three projects in the past year.

“Marketing is furthest in exploiting AI,” according to a recent note by Enders Analysis Ltd., an industry research firm specializing in technology, media and telecoms; even smaller advertisers can use gen-AI tools to write marketing copy or generate posters and images. So far, one of the biggest obstacles has been getting AI to produce accurate depictions of company logos, one machine-learning engineer at a marketing firm tells me, but such technical issues are easy to rectify with tools like Adobe Photoshop.

Last year Coca-Cola Co. published an advertisement on YouTube partially created with generative AI tools, while Toys ‘R’ Us has developed an entire ad with OpenAI’s video-generation tool Sora, to arguably bizarre effect. The saccharine commercial features a young boy whose face makes subtle, alien-like contortions throughout, a reminder of the flaws that image generators are still ironing out.

Twitter users piled on the ad, labeling it creepy, which was true, but they also called it highly ineffective, which it wasn’t. AI-generated ads are only just getting going and will likely find a willing audience. Consider that AI-generated images have already been flooding Facebook with bizarre renderings of “Shrimp Jesus” and similar detritus, suggesting that large swaths of the public are forgiving of (or perhaps not seeing) the misshapen human hands of generative AI and are instead taking a shine to its vivid and slightly unreal aesthetic.

In the meantime, businesses are grudgingly accepting that AI has truth issues, something that should have been obvious from the start: The generative AI boom was underpinned by language models that predict the next most likely word in a sentence. Companies that are trying to plug these models into their data sets are struggling because the AI systems were trained on text; the models struggle to make sense of numbers, financial tables, charts or handwriting. That is a solvable technical problem, but it will take time, making hallucinations a bump in the road on generative AI’s path to wider adoption. For a while, AI will continue to be better at writing poetry than at solving math problems.

Perhaps that’s why OpenAI’s chief technology officer, Mira Murati, was so brutal in her assessment of how her company’s tools would upend creative work. “Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place,” she said in a recent interview.

AI’s hallucination problem could take a few years to solve for the likes of banks, telecoms and health-care companies. In the short term, though, creative industries will be the ones that face a reckoning.
