“Exascale” sounds like a science-fiction term, but it has a simple and very nonfictional definition: while a human brain can perform about one simple mathematical operation per second, an exascale computer can do at least one quintillion calculations in the time it takes to say, “One Mississippi.”
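For a rough sense of what that prefix means, here is a back-of-envelope comparison, sketched in a few lines of Python; the world-population figure is a round assumption for illustration, not a number from the labs.

```python
# Back-of-envelope scale of "exa": how long would all of humanity need,
# at one calculation per second each, to match one second of exascale work?
exa_ops_per_second = 10**18            # one quintillion operations per second
people_on_earth = 8_000_000_000        # assumed round figure
seconds_per_year = 60 * 60 * 24 * 365

human_ops_per_year = people_on_earth * seconds_per_year
years_needed = exa_ops_per_second / human_ops_per_year
print(f"About {years_needed:.1f} years of everyone counting, "
      "for one exascale second.")      # roughly 4 years under these assumptions
```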
In 2022 the world’s first declared exascale computer, Frontier, came online at Oak Ridge National Laboratory—and it’s 2.5 times faster than the second-fastest-ranked computer in the world. It will soon have better competition (or peers), though, from incoming examachines such as El Capitan, housed at Lawrence Livermore National Laboratory, and Aurora, which will reside at Argonne National Laboratory.
It’s no coincidence that all of these machines find themselves at facilities whose names end with the words “national laboratory.” The new computers are projects of the Department of Energy and its National Nuclear Security Administration (NNSA). The DOE oversees these labs and a network of others across the country. NNSA is tasked with keeping watch over the nuclear weapons stockpile, and some of exascale computing’s raison d’être is to run calculations that help maintain that arsenal. But supercomputers also exist to solve intractable problems in pure science.
When scientists are finished commissioning Frontier, which will be dedicated to such fundamental research, they hope to illuminate core truths in various fields—such as learning about how energy is produced, how elements are made and how the dark parts of the universe spur its evolution—all through almost-true-to-life simulations in ways that wouldn’t have been possible even with the nothing-to-sniff-at supercomputers of a few years ago.
“In principle, the community could have developed and deployed an exascale supercomputer much sooner, but it would not have been usable, useful and affordable by our standards,” says Douglas Kothe, associate laboratory director of computing and computational sciences at Oak Ridge. Obstacles such as huge-scale parallel processing, exaenergy consumption, reliability, memory and storage—along with a lack of software to start running on such supercomputers—stood in the way of those standards. Years of focused work with the high-performance computing industry lowered those barriers to finally satisfy scientists.
Frontier can process seven times faster and hold four times more information in memory than its predecessors. It is made up of nearly 10,000 CPUs, or central processing units—which perform instructions for the computer and are generally made of integrated circuits—and almost 38,000 GPUs, or graphics processing units. GPUs were created to quickly and smoothly display visual content in gaming. But they have been reappropriated for scientific computing, in part because they’re good at processing information in parallel.
Inside Frontier, the two kinds of processors are linked. The GPUs do repetitive algebraic math in parallel. “That frees the CPUs to direct tasks faster and more efficiently,” Kothe says. “You could say it’s a match made in supercomputing heaven.” By breaking scientific problems into a billion or more tiny pieces, Frontier allows its processors to each eat their own small bite of the problem. Then, Kothe says, “it reassembles the results into the final answer. You could compare each CPU to a crew chief in a factory and the GPUs to workers on the front line.”
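To make that crew-chief-and-workers picture concrete, here is a minimal sketch of the decompose, compute-in-parallel, reassemble pattern, using an ordinary Python process pool as a stand-in for Frontier’s CPU-and-GPU teams; the chunked sum is purely illustrative and is not code from the lab.

```python
# Toy illustration of the divide / compute-in-parallel / reassemble pattern.
# A pool of worker processes stands in for GPUs; the main process is the "crew chief."
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker chews on its own small bite of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 8
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)   # the workers run in parallel

    total = sum(partials)                          # reassemble the final answer
    print(total)
```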
The 9,472 different nodes in the supercomputer—each essentially its own not-so-super computer—are also all connected in such a way that they can pass information quickly from one place to another. Importantly, though, Frontier doesn’t just run faster than machines of yore: it also has more memory and so can run bigger simulations and hold tons of information in the same place it’s processing those data. That’s like keeping all the acrylics with you while you’re trying to do a paint-by-numbers project rather than having to go retrieve each color as needed from the other side of the table.
With that kind of power, Frontier—and the beasts that will follow—can teach humans things about the world that might have remained opaque before. In meteorology, it could make hurricane forecasts less fuzzy and frustrating. In chemistry, it could experiment with different molecular configurations to see which might make great superconductors or pharmaceutical compounds. And in medicine, it has already analyzed all of the genetic mutations of SARS-CoV-2, the virus that causes COVID—cutting the time that calculation takes from a week to a day—to understand how those tweaks affect the virus’s contagiousness. That saved time allows scientists to perform ultrafast iterations, altering their ideas and conducting new digital experiments in quick succession.
With this level of computing power, scientists don’t have to make the same approximations they did before, Kothe says. With older computers, he would often have to say, “I’m going to assume this term is inconsequential, that term is inconsequential. Maybe I don’t need that equation.” In physics terms, that’s called making a “spherical cow”: taking a complex phenomenon, like a bovine, and turning it into something highly simplified, like a ball. With exascale computers, scientists hope to avoid cutting those kinds of corners and simulate a cow as, well, essentially a cow: something that more closely approaches a representation of reality.
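A classic textbook instance of that corner-cutting (a standard physics example, not one drawn from the article) is the pendulum. The honest equation of motion is nonlinear, so physicists traditionally assume the swing angle is small and replace $\sin\theta$ with $\theta$:

$$\ddot{\theta} + \frac{g}{\ell}\sin\theta = 0 \quad\longrightarrow\quad \ddot{\theta} + \frac{g}{\ell}\theta = 0.$$

The simplified version can be solved by hand, but it quietly throws away the physics of wide swings; exascale machines let researchers keep more of those inconvenient $\sin\theta$-style terms instead of rounding every cow into a sphere.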
Frontier’s upgraded hardware is the main factor behind that improvement. But hardware alone doesn’t do scientists that much good if they don’t have software that can harness the machine’s new oomph. That’s why an initiative called the Exascale Computing Project (ECP)—which brings together the Department of Energy and its National Nuclear Security Administration, along with industry partners—has sponsored 24 initial science-coding projects alongside the supercomputers’ development.
Those software initiatives can’t just take old code—meant to simulate, say, the emergence of sudden severe weather—plop it onto Frontier and say, “It made an okay forecast at lightning speed instead of almost lightning speed!” To get a more accurate result, they need an amped-up and optimized set of codes. “We’re not going to cheat here and get the same not-so-great answers faster,” says Kothe, who is also ECP’s director.
But getting greater answers isn’t easy, says Salman Habib, who’s in charge of an early science project called ExaSky. “Supercomputers are essentially brute-force tools,” he says. “So you have to use them in intelligent ways. And that’s where the fun comes in, where you scratch your head and say, ‘How can I actually use this possibly blunt instrument to do what I really want to do?’” Habib, director of the computational science division at Argonne, wants to probe the mysterious makeup of the universe and the formation and evolution of its structures. The simulations model dark matter and dark energy’s effects and start from initial conditions that capture how the universe expanded right after the big bang.
Large-scale astronomical surveys—for instance, the Dark Energy Spectroscopic Instrument in Arizona—have helped illuminate those shady corners of the cosmos, showing how galaxies formed, took shape and spread themselves out as the universe expands. But the data from those telescopes can’t, on their own, explain the why behind what astronomers see.
Theory and modeling approaches like ExaSky might be able to do so, though. If a theorist suspects that dark energy exhibits a certain behavior or that our conception of gravity is off, they can tweak the simulation to include those concepts. It will then spit out a digital cosmos, and astronomers can see the ways it matches, or doesn’t match, what their telescopes’ sensors pick up. “The role of a computer is to be a virtual universe for theorists and modelers,” Habib says.
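As a drastically scaled-down illustration of that tweak-and-compare loop, the toy below varies the dark-energy “equation of state” parameter w in the standard textbook formula for the cosmic expansion rate and prints the resulting histories. It is a sketch built on stock cosmology, not ExaSky’s actual code, and the parameter values are purely illustrative.

```python
# Toy version of "tweak the dark-energy assumption, regenerate the expansion history."
# Standard flat-universe Friedmann relation with a constant equation-of-state parameter w;
# numbers are illustrative, and this is not ExaSky's code.
import math

def hubble_rate(z, h0=70.0, omega_m=0.3, w=-1.0):
    """Expansion rate H(z), in km/s/Mpc, for a flat universe with constant w."""
    omega_de = 1.0 - omega_m
    return h0 * math.sqrt(omega_m * (1 + z)**3
                          + omega_de * (1 + z)**(3 * (1 + w)))

for w in (-1.0, -0.9):                      # w = -1 corresponds to a cosmological constant
    rates = [hubble_rate(z, w=w) for z in (0.0, 0.5, 1.0, 2.0)]
    print(f"w = {w}: " + ", ".join(f"{r:.1f}" for r in rates))
# Comparing curves like these with survey data is, in miniature,
# what the real simulations do at vastly greater scale and fidelity.
```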
ExaSky extends algorithms and software written for lesser supercomputers, but simulations haven’t yet led to giant breakthroughs about the nature of the universe’s dark components. The work scientists have done so far offers “an interesting combination of being able to model it but not really understand it,” Habib says. With exascale computers, though, astronomers such as Habib can simulate a larger volume of space, using more cowlike physics, in higher definition. Understanding, perhaps, is on the way.
Another early Frontier project, called ExaStar and led by Daniel Kasen of Lawrence Berkeley National Laboratory, will investigate a different cosmic mystery. This endeavor will simulate supernovae—the end-of-life explosions of massive stars that, in their extremity, produce heavy elements. Scientists have a rough idea of how supernovae play out, but no one actually knows the whole-cow version of these explosions or how heavy elements get made within them.
In the past, most supernova simulations simplified the situation by assuming stars were spherically symmetric or by using pared-down physics. With exascale computers, scientists can make more detailed three-dimensional models. And rather than just running the code for one explosion, they can do whole suites, including different kinds of stars and different physics ideas, exploring which parameters produce what astronomers actually see in the sky.
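In practice, “whole suites” means something like a parameter sweep: loop over a grid of stellar properties and launch one simulation per combination, then see which runs best match observations. The sketch below assumes a hypothetical run_supernova_model stand-in and an illustrative grid; it is not ExaStar’s code.

```python
# Sketch of a parameter-sweep "suite" of supernova runs.
# run_supernova_model is a hypothetical stand-in for launching one full 3-D simulation.
from itertools import product

def run_supernova_model(mass, metallicity, explosion_energy):
    """Placeholder that would hand one parameter combination to a real simulation code."""
    return {"params": (mass, metallicity, explosion_energy)}

progenitor_masses = [10, 15, 25, 40]            # in solar masses (illustrative grid)
metallicities = [0.1, 1.0]                      # relative to the Sun's
explosion_energies = [0.5e51, 1.0e51, 2.0e51]   # in ergs

suite = [run_supernova_model(m, z, e)
         for m, z, e in product(progenitor_masses, metallicities, explosion_energies)]
print(f"Suite contains {len(suite)} runs to compare against observed supernovae.")
```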
“Supernovae and stellar explosions are fascinating events in their own right,” Kasen says. “But they’re also key players in the story of the universe.” They provided the elements that make up Earth and us—and the telescopes that look beyond us. Although their extreme reactions can’t quite be replicated in physical experiments, digital trials are both possible and less destructive.
A third early project is examining phenomena that are closer to home: nuclear reactors and their reactions. The ExaSMR project will use exascale computing to figure out what’s going on beneath the shielding of “small modular reactors,” a type of facility that nuclear-power proponents hope will become more common. In earlier days supercomputers could only model one component of a reactor at a time. Later they could model the whole machine but only at one point in time—getting, say, an accurate picture of when it first turns on. “Now we’re modeling the evolution of a reactor from the time that it starts up over the course of an entire fuel cycle,” says Steven Hamilton of Oak Ridge, who’s co-leading the effort.
Hamilton’s team will investigate how neutrons move around and affect the chain reaction of nuclear fission, as well as how heat from fission moves through the system. Figuring out how the heat flows with both spatial and chronological detail wouldn’t have been possible at all before because the computer didn’t have enough memory to do the math for the whole simulation at once. “The next focus for us is looking at a wider class of reactor designs” to improve their efficiency and safety, Hamilton says.
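For a flavor of how “neutrons moving around” gets simulated, here is a textbook-style Monte Carlo toy: neutrons random-walk through a one-dimensional slab, taking exponentially distributed steps until they are absorbed or escape. The material numbers are invented, and the real ExaSMR codes are enormously more detailed.

```python
# Textbook-style Monte Carlo neutron transport in a 1-D slab (not ExaSMR's code).
# Each neutron takes exponentially distributed free flights until it is absorbed or escapes.
import random

SLAB_THICKNESS = 10.0   # cm, illustrative
MEAN_FREE_PATH = 2.0    # cm between collisions, illustrative
ABSORPTION_PROB = 0.3   # chance that a collision absorbs the neutron, illustrative

def track_one_neutron(rng):
    x, direction = 0.0, 1.0                  # start at the left face, heading right
    while True:
        x += direction * rng.expovariate(1.0 / MEAN_FREE_PATH)
        if x < 0.0 or x > SLAB_THICKNESS:
            return "escaped"
        if rng.random() < ABSORPTION_PROB:
            return "absorbed"
        direction = rng.choice((-1.0, 1.0))  # crude isotropic scatter

rng = random.Random(42)
tallies = {"escaped": 0, "absorbed": 0}
for _ in range(100_000):
    tallies[track_one_neutron(rng)] += 1
print(tallies)   # the kind of tally real reactor codes track, at vastly larger scale
```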
Of course, nuclear power has always been the flip side of that other use of nuclear reactions: weapons. At Lawrence Livermore, Teresa Bailey leads a team of 150 people, many of whom are busy preparing the codes that simulate weapons to run on El Capitan. Bailey is the associate program director for computational physics at Lawrence Livermore, and she oversees parts of the Advanced Simulation and Computing project—the national security side of things. Teams from the NNSA labs—supported by ECP and the Advanced Technology Development and Mitigation program, a more weapons-oriented effort—worked on R&D that helps modernize the weapons codes.
Ask any scientist whether computers like Frontier, El Capitan and Aurora are finally good enough, and you’ll never get a yes. Researchers would always take more and better analytical power. And there’s extrinsic pressure to keep pushing computing forward: not just for bragging rights, although those are cool, but because better simulations could lead to new drug discoveries, new advanced materials or new Nobel Prizes that keep the country on top.
All those factors have scientists already talking about the “post-exascale” future—what comes after they can do one quintillion math problems in one second. That future might involve quantum computers or augmenting exascale systems with more artificial intelligence. Or maybe it’s something else entirely. Maybe, in fact, someone should run a simulation to predict the most likely outcome or the most efficient path forward.