<p>When computer scientists at Microsoft started to experiment with a new artificial intelligence system last year, they asked it to solve a puzzle that should have required an intuitive understanding of the physical world.</p>.<p>“Here we have a book, nine eggs, a laptop, a bottle and a nail,” they asked. “Please tell me how to stack them onto each other in a stable manner.”</p>.<p>The researchers were startled by the ingenuity of the AI system’s answer. Put the eggs on the book, it said. Arrange the eggs in three rows with space between them. Make sure you don’t crack them.</p>.<p>“Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up,” it wrote. “The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”</p>.<p>The clever suggestion made the researchers wonder whether they were witnessing a new kind of intelligence. In March, they published a 155-page research paper arguing that the system was a step toward artificial general intelligence, or AGI, which is shorthand for a machine that can do anything the human brain can do. The paper was published on an internet research repository.</p>.<p>Microsoft, the first major tech company to release a paper making such a bold claim, stirred one of the tech world’s testiest debates: Is the industry building something akin to human intelligence? Or are some of the industry’s brightest minds letting their imaginations get the best of them?</p>.<p>“I started off being very sceptical — and that evolved into a sense of frustration, annoyance, maybe even fear,” Peter Lee, who leads research at Microsoft, said. “You think: Where the heck is this coming from?”</p>.<p>Microsoft’s research paper, provocatively called <span class="italic">Sparks of Artificial General Intelligence</span>, goes to the heart of what technologists have been working toward—and fearing—for decades. If they build a machine that works like the human brain or even better, it could change the world. But it could also be dangerous.</p>.<p>And it could also be nonsense. Making AGI claims can be a reputation killer for computer scientists. What one researcher believes is a sign of intelligence can easily be explained away by another, and the debate often sounds more appropriate to a philosophy club than a computer lab. Last year, Google fired a researcher who claimed that a similar AI system was sentient, a step beyond what Microsoft has claimed. A sentient system would not just be intelligent. It would be able to sense or feel what is happening in the world around it.</p>.<p>But some believe the industry has in the past year or so inched toward something that can’t be explained away: A new AI system that is coming up with humanlike answers and ideas that weren’t programmed into it. Microsoft has reorganised parts of its research labs to include multiple groups dedicated to exploring the idea. One will be run by Sébastien Bubeck, who was the lead author on the Microsoft AGI paper.</p>.<p>About five years ago, companies like Google, Microsoft and OpenAI began building large language models, or LLMs. Those systems often spend months analysing vast amounts of digital text, including books, Wikipedia articles and chat logs. By pinpointing patterns in that text, they learn to generate text of their own, including term papers, poetry and computer code. They can even carry on a conversation.</p>
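.<p>The pattern-finding at the heart of that training can be sketched in a few lines of code. The tiny corpus and word-lookup table below are illustrative stand-ins, not how GPT-4 is built; real systems learn the patterns with neural networks trained on vastly more text:</p>.<pre><code>import random
from collections import defaultdict

# Toy sketch of the core idea: learn which words tend to follow
# which in a corpus, then sample from those patterns to generate
# new text. (Illustrative only; GPT-4 itself uses neural networks
# trained on vastly more data, not a lookup table.)
corpus = (
    "the researchers asked the system to stack the eggs on the book "
    "and the system told the researchers to place the laptop on the eggs"
).split()

# For every word, record the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate text by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
</code></pre>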
.<p>The technology the Microsoft researchers were working with, OpenAI’s GPT-4, is considered the most powerful of those systems.</p>.<p>The researchers included Bubeck, a 38-year-old French expatriate and former Princeton University professor. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof showing that there were infinitely many prime numbers and do it in a way that rhymed.</p>
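.<p>The mathematics behind that request is Euclid’s classic argument, given here as a plain, non-rhyming LaTeX sketch for reference (this rendering is ours, not the model’s output):</p>.<pre><code>% Euclid's proof that there are infinitely many primes.
% (Assumes \usepackage{amsthm} and a defined theorem environment.)
\begin{theorem}
There are infinitely many prime numbers.
\end{theorem}
\begin{proof}
Suppose, for contradiction, that $p_1, p_2, \ldots, p_n$ were all of
the primes. Let $N = p_1 p_2 \cdots p_n + 1$. Dividing $N$ by any
$p_i$ leaves remainder $1$, so no $p_i$ divides $N$. But every
integer greater than $1$ has a prime divisor, so $N$ has a prime
divisor outside the list, a contradiction.
\end{proof}
</code></pre>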
.<p>The technology’s poetic proof was so impressive—both mathematically and linguistically—that he found it hard to understand what he was chatting with. “At that point, I was like: What is going on?” he said in March during a seminar at the Massachusetts Institute of Technology.</p>.<p>For several months, he and his colleagues documented complex behaviour exhibited by the system and believed it demonstrated a “deep and flexible understanding” of human concepts and skills.</p>.<p>When people use GPT-4, they are “amazed at its ability to generate text,” Lee said. “But it turns out to be way better at analysing and synthesising and evaluating and judging text than generating it.”</p>.<p>When they asked the system to draw a unicorn using a programming language called TikZ, it instantly generated a program that could draw a unicorn. When they removed the stretch of code that drew the unicorn’s horn and asked the system to modify the program so that it once again drew a unicorn, it did exactly that.</p>.<p>They asked it to write a program that took in a person’s age, sex, weight, height and blood test results and judged whether they were at risk of diabetes. They asked it to write a letter of support for an electron as a US presidential candidate, in the voice of Mahatma Gandhi, addressed to his wife. And they asked it to write a Socratic dialogue that explored the misuses and dangers of LLMs. It did it all in a way that seemed to show an understanding of fields as disparate as politics, physics, history, computer science, medicine and philosophy while combining its knowledge.</p>.<p>“All of the things I thought it wouldn’t be able to do? It was certainly able to do many of them—if not most of them,” Bubeck said.</p>.<p>Some AI experts saw the Microsoft paper as an opportunistic effort to make big claims about a technology that no one quite understood. Researchers also argue that general intelligence requires a familiarity with the physical world, which GPT-4 in theory does not have.</p>.<p>“The <span class="italic">Sparks of AGI</span> is an example of some of these big companies co-opting the research paper format into PR pitches,” said Maarten Sap, a researcher and professor at Carnegie Mellon University. “They literally acknowledge in their paper’s introduction that their approach is subjective and informal and may not satisfy the rigorous standards of scientific evaluation.”</p>.<p>Bubeck and Lee said they were unsure how to describe the system’s behaviour and ultimately settled on <span class="italic">Sparks of AGI</span> because they thought it would capture the imagination of other researchers.</p>.<p>Because Microsoft researchers were testing an early version of GPT-4 that had not been fine-tuned to avoid hate speech, misinformation and other unwanted content, the claims made in the paper cannot be verified by outside experts. Microsoft says that the system available to the public is not as powerful as the version they tested.</p>.<p><em>The New York Times</em></p>