<p>A computer scientist often dubbed "the godfather of artificial intelligence" has quit his job at Google to speak out about the dangers of the technology, US media reported Monday.</p>
<p>Geoffrey Hinton, who created foundational technology for AI systems, told <em>The New York Times</em> that advancements made in the field posed "profound risks to society and humanity".</p>
<p>"Look at how it was five years ago and how it is now," he was quoted as saying in the piece, which was published on Monday. "Take the difference and propagate it forwards. That's scary."</p>
<p>Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation.</p>
<p>"It is hard to see how you can prevent the bad actors from using it for bad things," he told the <em>Times</em>.</p>
<p>In 2022, Google and OpenAI -- the start-up behind the popular AI chatbot ChatGPT -- started building systems using much larger amounts of data than before.</p>
<p>Hinton told the <em>Times</em> he believed that these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.</p>
<p>"Maybe what is going on in these systems is actually a lot better than what is going on in the brain," he told the paper.</p>
<p>While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.</p>
<p>AI "takes away the drudge work" but "might take away more than that", he told the <em>Times</em>.</p>
<p>The scientist also warned about the potential spread of misinformation created by AI, telling the <em>Times</em> that the average person will "not be able to know what is true anymore."</p>
<p>Hinton notified Google of his resignation last month, the <em>Times</em> reported.</p>
<p>Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to US media.</p>
<p>"As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI," the statement added.</p>
<p>"We're continually learning to understand emerging risks while also innovating boldly."</p>
<p>In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.</p>
<p>An open letter, signed by more than 1,000 people including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.</p>
<p>Hinton did not sign that letter at the time, but told <em>The New York Times</em> that scientists should not "scale this up more until they have understood whether they can control it."</p>