<p>San Francisco: As Apple and Google transform their voice assistants into chatbots, OpenAI is transforming its chatbot into a voice assistant.</p><p>On Monday, the San Francisco artificial intelligence startup unveiled a new version of its ChatGPT chatbot that can receive and respond to voice commands, images and videos.</p><p>The company said the new app — based on an AI system called GPT-4o — juggles audio, images and video significantly faster than previous versions of the technology. The app is available free of charge for both smartphones and desktop computers.</p><p>“We are looking at the future of the interaction between ourselves and machines,” said Mira Murati, the company’s chief technology officer.</p><p>The new app is part of a wider effort to combine conversational chatbots such as ChatGPT with voice assistants like the Google Assistant and Apple’s Siri. As Google merges its Gemini chatbot with the Google Assistant, Apple is preparing a new version of Siri that is more conversational.</p><p>OpenAI said it would gradually share the technology with users “over the coming weeks.” This is the first time it has offered ChatGPT as a desktop application.</p><p>The company previously offered similar technologies inside various free and paid products. Now, it has rolled them into a single system that is available across all its products.</p><p>During an event streamed on the internet, Murati and her colleagues showed off the new app as it responded to conversational voice commands, used a live video feed to analyze math problems written on a sheet of paper and read aloud playful stories that it had written on the fly.</p><p>The new app cannot generate video. But it can generate still images that represent frames of a video.</p><p>With the debut of ChatGPT in late 2022, OpenAI showed that machines can handle requests more like people. In response to conversational text prompts, it could answer questions, write term papers and even generate computer code.</p><p>ChatGPT was not driven by a set of rules. It learned its skills by analyzing enormous amounts of text culled from across the internet, including Wikipedia articles, books and chat logs. Experts hailed the technology as a possible alternative to search engines like Google and voice assistants like Siri.</p><p>Newer versions of the technology have also learned from sounds, images and video. Researchers call this “multimodal AI.” Essentially, companies like OpenAI began to combine chatbots with AI image, audio and video generators.</p><p>(The New York Times sued OpenAI and its partner, Microsoft, in December, claiming copyright infringement of news content related to AI systems.)</p><p>As companies combine chatbots with voice assistants, many hurdles remain. Because chatbots learn their skills from internet data, they are prone to mistakes. Sometimes, they make up information entirely — a phenomenon that AI researchers call “hallucination.” Those flaws are migrating into voice assistants.</p><p>While chatbots can generate convincing language, they are less adept at taking actions like scheduling a meeting or booking a plane flight. But companies like OpenAI are working to transform them into “AI agents” that can reliably handle such tasks.</p><p>OpenAI previously offered a version of ChatGPT that could accept voice commands and respond with voice.
But it was a patchwork of three different AI technologies: one that converted voice to text, one that generated a text response and one that converted this text into a synthetic voice.</p><p>The new app is based on a single AI technology — GPT-4o — that can accept and generate text, sounds and images. This means that the technology is more efficient, and the company can afford to offer it to users for free, Murati said.</p><p>“Before, you had all this latency that was the result of three models working together,” Murati said in an interview with the Times. “You want to have the experience we’re having — where we can have this very natural dialogue.”</p>
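<p>For readers curious how such a patchwork fits together, the sketch below shows a three-stage voice pipeline of the kind described above: speech-to-text, then a text-only chat model, then text-to-speech. It is a minimal illustration using the OpenAI Python SDK; the model names, voice and file paths are assumptions chosen for the example, not a description of how OpenAI’s own ChatGPT voice feature was built.</p>
<pre><code># A minimal sketch of the three-model voice pipeline: speech-to-text, chat, text-to-speech.
# Illustrative only: model names, voice and file paths are assumptions, not OpenAI's
# actual ChatGPT voice implementation.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# 1. Convert the user's spoken question into text.
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2. Generate a text reply to the transcribed question with a text-only chat model.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = chat.choices[0].message.content

# 3. Convert the text reply into synthetic speech and save it as an audio file.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
speech.write_to_file("reply.mp3")
</code></pre>
<p>Chaining the three calls makes the latency Murati describes easy to see: each stage waits for the previous one to finish before the user hears anything, whereas a single multimodal model such as GPT-4o takes in and produces audio directly.</p>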