OpenAI has effectively dissolved a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems, following the departure of the group’s two leaders, including OpenAI co-founder and chief scientist, Ilya Sutskever.
Rather than maintain the so-called superalignment team as a standalone entity, OpenAI is now integrating the group more deeply across its research efforts to help the company achieve its safety goals, the company told Bloomberg News. The team was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran.
The decision to rethink the team comes as a string of recent departures from OpenAI revives questions about the company’s approach to balancing speed versus safety in developing its AI products. Sutskever, a widely respected researcher, announced Tuesday that he was leaving OpenAI after having previously clashed with Chief Executive Officer Sam Altman over how rapidly to develop artificial intelligence.
Leike revealed his departure shortly after with a terse post on social media. “I resigned,” he said. For Leike, Sutskever’s exit was the last straw following disagreements with the company, according to a person familiar with the situation who asked not to be identified in order to discuss private conversations.
In a statement on Friday, Leike said the superalignment team had been fighting for resources. “Over the past few months my team has been sailing against the wind,” Leike wrote on X. “Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”
Hours later, Altman responded to Leike’s post. “He’s right we have a lot more to do,” Altman wrote on X. “We are committed to doing it.”
Other members of the superalignment team have also left the company in recent months. Leopold Aschenbrenner and Pavel Izmailov were let go by OpenAI. The Information earlier reported their departures. Izmailov had been moved off the team prior to his exit, according to a person familiar with the matter. Aschenbrenner and Izmailov did not respond to requests for comment.
John Schulman, a co-founder at the startup whose research centers on large language models, will be the scientific lead for OpenAI’s alignment work going forward, the company said. Separately, OpenAI said in a blog post that it named Research Director Jakub Pachocki to take over Sutskever’s role as chief scientist.
“I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone,” Altman said in a statement Tuesday about Pachocki’s appointment. AGI, or artificial general intelligence, refers to AI that can perform as well or better than humans on most tasks. AGI doesn’t yet exist, but creating it is part of the company’s mission.
OpenAI also has employees involved in AI-safety-related work on teams across the company, as well as individual teams focused on safety. One, a preparedness team, launched last October and focuses on analyzing and trying to ward off potential “catastrophic risks” of AI systems.
The superalignment team was meant to head off the longest-term threats. OpenAI announced the team’s formation last July, saying it would focus on how to control and ensure the safety of future artificial intelligence software that is smarter than humans — something the company has long stated as a technological goal. In the announcement, OpenAI said it would put 20% of its computing power at that time toward the team’s work.
In November, Sutskever was one of several OpenAI board members who moved to fire Altman, a decision that touched off a whirlwind five days at the company. OpenAI President Greg Brockman quit in protest, investors revolted and within days, nearly all of the startup’s roughly 770 employees signed a letter threatening to quit unless Altman was brought back. In a remarkable reversal, Sutskever also signed the letter and said he regretted his participation in Altman’s ouster. Soon after, Altman was reinstated.
In the months following Altman’s exit and return, Sutskever largely disappeared from public view, sparking speculation about his continued role at the company. Sutskever also stopped working from OpenAI’s San Francisco office, according to a person familiar with the matter.
In his statement, Leike said that his departure came after a series of disagreements with OpenAI about the company’s “core priorities,” which he felt were not focused enough on safety measures related to the creation of AI that may be more capable than people.
In a post earlier this week announcing his departure, Sutskever said he’s “confident” OpenAI will develop AGI “that is both safe and beneficial” under its current leadership, including Altman.