San Francisco: Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s artificial intelligence technologies.
The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its AI.
OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed its board of directors, according to the two people, who discussed sensitive information about the company on the condition of anonymity.
But the executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government. The company did not inform the FBI or anyone else in law enforcement.
For some OpenAI employees, the news raised fears that foreign adversaries such as China could steal AI technology that — while now mostly a work and research tool — could eventually endanger US national security. It also led to questions about how seriously OpenAI was treating security, and exposed fractures inside the company about the risks of AI.
After the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board of directors, arguing that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.
Aschenbrenner said OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security wasn’t strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” said an OpenAI spokesperson, Liz Bourgeois. Referring to the company’s efforts to build artificial general intelligence, a machine that can do anything the human brain can do, she added, “While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”
Fears that a hack of a US technology company might have links to China are not unreasonable. Last month, Brad Smith, Microsoft’s president, testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a wide-ranging attack on federal government networks.
However, under federal and California law, OpenAI cannot prevent people from working at the company because of their nationality, and policy researchers have said that barring foreign talent from US projects could significantly impede the progress of AI in the United States.
“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s head of security, told The New York Times in an interview. “It comes with some risks, and we need to figure those out.”
(The Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems.)
OpenAI is not the only company building increasingly powerful systems using rapidly improving AI technology. Some of them — most notably, Meta, the owner of Facebook and Instagram — are freely sharing their designs with the rest of the world as open source software. They believe that the dangers posed by today’s AI technologies are slim and that sharing code allows engineers and researchers across the industry to identify and fix problems.
Today’s AI systems can help spread disinformation online in the form of text, still images and, increasingly, video. They are also beginning to take away some jobs.
Companies like OpenAI and its competitors Anthropic and Google add guardrails to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread disinformation or cause other problems.
But there is not much evidence that today’s AI technologies are a significant national security risk. Studies by OpenAI, Anthropic and others over the past year showed that AI was not significantly more dangerous than search engines. Daniela Amodei, an Anthropic co-founder and the company’s president, said its latest AI technology would not be a major risk if its designs were stolen or freely shared with others.
“If it were owned by someone else, could that be hugely harmful to a lot of society? Our answer is, ‘No, probably not,’” she told the Times last month. “Could it accelerate something for a bad actor down the road? Maybe. It is really speculative.”
Still, researchers and tech executives have long worried that AI could one day fuel the creation of new bioweapons or help break into government computer systems. Some even believe it could destroy humanity.
A number of companies, including OpenAI and Anthropic, are already locking down their technical operations. OpenAI recently created a Safety and Security Committee to explore how it should handle the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to the OpenAI board of directors.
“We started investing in security years before ChatGPT,” Knight said. “We’re on a journey not only to understand the risks and stay ahead of them but also to deepen our resilience.”
Federal officials and state lawmakers are also pushing for government regulations that would bar companies from releasing certain AI technologies and fine them millions of dollars if their technologies caused harm. But experts say these dangers are still years or even decades away.
Chinese companies are building systems of their own that are nearly as powerful as the leading US systems. By some metrics, China has eclipsed the United States as the biggest producer of AI talent, generating almost half the world’s top AI researchers.
“It is not crazy to think that China will soon be ahead of the US,” said Clément Delangue, CEO of Hugging Face, a company that hosts many of the world’s open source AI projects.
Some researchers and national security leaders argue that the mathematical algorithms at the heart of current AI systems, while not dangerous today, could become so, and they are calling for tighter controls on AI labs.
“Even if the worst-case scenarios are relatively low-probability, if they are high-impact, then it is our responsibility to take them seriously,” Susan Rice, former domestic policy adviser to President Joe Biden and former national security adviser for President Barack Obama, said during an event in Silicon Valley last month. “I do not think it is science fiction, as many like to claim.”