
Artificial intelligence may save us, or it may help create viruses that kill us

Some years ago, a research team created horsepox, a cousin of the smallpox virus, in six months for $100,000, and with AI it could become easier and cheaper to refine such a virus.
Last Updated: 27 July 2024, 17:34 IST

Here's a bargain of the most horrifying kind: For less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.

That's the conclusion of Jason Matheny, president of the Rand Corp., a think tank that studies security matters and other issues.

"It wouldn't cost more to create a pathogen that's capable of killing hundreds of millions of people versus a pathogen that's only capable of killing hundreds of thousands of people," Matheny said.

In contrast, he noted, it could cost billions of dollars to produce a new vaccine or antiviral in response.

I told Matheny that I'd been The New York Times' Tokyo bureau chief when a religious cult called Aum Shinrikyo had used chemical and biological weapons in terror attacks, including one in 1995 that killed 13 people in the Tokyo subway. "They would be capable of orders of magnitude more damage" today, Matheny said.

I'm a longtime member of the Aspen Strategy Group, a bipartisan organization that explores global security issues, and our annual meeting this month focused on artificial intelligence. That's why Matheny and other experts joined us -- and then scared us.

In the early 2000s, some of us worried that smallpox could be reintroduced as a bioweapon if the virus were stolen from the labs in Atlanta and in Russia's Novosibirsk region that have retained it since the disease was eradicated. But with synthetic biology, it wouldn't have to be stolen.

Some years ago, a research team created horsepox, a cousin of the smallpox virus, in six months for $100,000, and with AI it could become easier and cheaper to refine such a virus.

One reason biological weapons haven't been much used is that they can boomerang. If Russia released a virus in Ukraine, it could spread to Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make bioweapons much more useful. Alternatively, it might be possible to develop a virus that would kill or incapacitate a particular person, such as a troublesome president or ambassador, if one had obtained that person's DNA at a dinner or reception.

Assessments of ethnic-targeting research by China are classified, but they may be why the U.S. Defense Department has said that the most important long-term threat of biowarfare comes from China.

AI has a more hopeful side as well, of course. It holds the promise of improving education, reducing auto accidents, curing cancers and developing miraculous new pharmaceuticals.

One of the best-known benefits is in protein folding, which can lead to revolutionary advances in medical care. Scientists used to spend years or decades working out the shape of a single protein; then Google DeepMind introduced AlphaFold, which can predict those shapes within minutes. "It's Google Maps for biology," said Kent Walker, president of global affairs at Google.

Scientists have since used updated versions of AlphaFold to work on pharmaceuticals including a vaccine against malaria, one of the greatest killers of humans throughout history.

So it's unclear whether AI will save us or kill us first.

Scientists for years have explored how AI may dominate warfare, with autonomous drones or robots programmed to find and eliminate targets instantaneously. Warfare may come to involve robots fighting robots.

Robotic killers will be heartless in a literal sense, but they won't necessarily be especially brutal. They won't rape, and they may be less prone than human soldiers to the rage that leads to massacres and torture.

One great uncertainty is the extent and timing of job losses -- for truck drivers, lawyers and perhaps even coders -- that could amplify social unrest. A generation ago, American officials were oblivious to the way trade with China would cost factory jobs and apparently lead to an explosion of deaths of despair and to the rise of right-wing populism. May we do better at managing the economic disruption of AI.

One reason for my wariness of AI is that while I see its promise, the past 20 years have been a reminder of technology's capacity to oppress. Smartphones were dazzling -- and apologies if you're reading this on your phone -- but there is evidence linking them to deteriorating mental health among young people. A randomized controlled trial published just this month found that children who gave up their smartphones enjoyed improved well-being.

Dictators have benefited from new technologies. Liu Xiaobo, the Chinese dissident who received the Nobel Peace Prize, thought that "the internet is God's gift to the Chinese people." It did not work out that way: Liu died in Chinese custody, and China has used AI to ramp up surveillance and tighten the screws on citizens.

AI may also make it easier to manipulate people, in ways that recall Orwell. A study released this year found that when GPT-4 had access to basic information about the people it engaged with, it was about 80% more likely to persuade someone than a human with the same data. Congress was right to worry about manipulation of public opinion by the TikTok algorithm.

All this underscores why it is essential that the United States maintain its lead in artificial intelligence. As much as we may be leery of putting our foot on the gas, this is not a competition in which it is OK to be the runner-up to China.

President Joe Biden is on top of this, and limits he placed on China's access to the most advanced computer chips will help preserve our lead. The Biden administration has recruited first-rate people from the private sector to think through these matters and issued an important executive order last year on AI safety, but we will also need to develop new systems in the coming years for improved governance.

I've written about AI-generated deepfake nude images and videos, and the irresponsibility of both the deepfake companies and major search engines that drive traffic to deepfake sites. And tech companies have periodically used immunities to avoid accountability for promoting the sexual exploitation of children. None of that inspires confidence in these companies' abilities to self-govern responsibly.

"We've never had a circumstance in which the most dangerous, and most impactful, technology resides entirely in the private sector," said Susan Rice, who was President Barack Obama's national security adviser. "It can't be that technology companies in Silicon Valley decide the fate of our national security and maybe the fate of the world without constraint."

I think that's right. Managing AI without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire.

Published 27 July 2024, 17:34 IST