<p><em>By Parmy Olson for Bloomberg</em></p><p>For years, to “google” meant to tap the web’s prodigious data. Today it means wading through ads, spam and, most recently, wildly inaccurate AI answers. </p><p>The new AI Overview feature rolled out by Google over the last week has led to a flurry of errors that must have Alphabet Inc. Chief Executive Officer Sundar Pichai cringing. When asked about cheese sliding off pizza, it recommended a user apply glue. It suggested to another that a python was a mammal. Worse: Google’s AI told one user who was “feeling depressed” that they could jump off the Golden Gate Bridge. </p><p>Pichai has tinkered with one of the most successful and profitable technology products of all time and made it completely unreliable, even dangerous. The countdown has begun for when he takes it offline. The sooner he does, the better. </p><p>For now, Google has said it’s refining its AI search service with every new report of a hallucination (which, according to my Twitter feed, is turning into a flood). A Google spokesperson told <em>The Verge</em> that errors were happening for “generally very uncommon queries and aren’t representative of most people’s experiences.” </p><p>That’s a poor excuse for a company that prides itself on organising the world’s information. And infrequent search queries should get reliable results, since the vast majority of Google searches are made up of a long tail of uncommon queries.</p><p>This is a stunning turnaround for a company that was once so cautious that it refused to release generative AI technology it had built that was at least two years ahead of OpenAI Inc.’s ChatGPT. It has since succumbed to the race set off by Microsoft Corp. and OpenAI, which are stirring one controversy after another. </p><p>Last week OpenAI released a new version of ChatGPT deliberately timed to preempt Google’s AI launches the following day. 
But in all the rush, Sam Altman botched the rollout and got into a beef with Scarlett Johansson. </p><p>Steve Jobs’ 2011 catchphrase “It just works” epitomized an era when the bar for technology products was reliability. But the more tech companies showcase how much generative AI doesn’t work, the harder it will be for them to prove its usefulness to enterprise customers and consumers alike.</p><p>Even Elon Musk, on the verge of raising $6 billion for his xAI startup, isn’t using generative AI tools at his own SpaceX and Starlink businesses because they keep making mistakes. “I'll ask it questions about the Fermi Paradox, about rocket engine design, about electrochemistry,” he told the Milken Institute conference earlier this month. “And so far, the AI has been terrible at all those questions.”</p><p>Should Google stick it out with AI Overview and keep the feature in place, one outcome will obviously be more misinformation. Another is that, in much the same way we got used to scrolling past SEO spam and sponsored ads, we’ll acclimate to the zany mistakes its AI makes too. We’ll get used to an even more mediocre service because there are so few other options. (Google’s global market share for search has slipped to 82 per cent from 87 per cent about a decade ago.) In this new era, we resign ourselves to subpar software once billed as transformative for the world, that requires constant fact-checking.</p><p>Hallucinations aren’t a new problem, but they seem to be one that, to our detriment, we’re getting used to. When mistakes cropped up in Google’s very first demo of Bard in February 2023, shares of Alphabet dropped 7 per cent, wiping $100 billion off the company’s value. On Friday, as more social posts of its latest gaffes went viral, they opened up almost 1 per cent. Wall Street doesn’t seem to care. Does Google?</p><p>We’ll find out if and when Pichai pauses his new AI feature for further tinkering, as he did with the Gemini image-generator in February. 
It’d be yet another humiliating retreat, but to put tech back on the path to “just working,” he should just do it.</p>