Climate change threatens supercomputers

In 2018, during a savage drought, the California wildfire known as the Camp Fire burned 620 square kilometers of land, reducing several towns nearly to ashes and killing at least 85 people. The disaster also had a ripple effect far from the flames, at a supercomputer facility operated by Lawrence Berkeley National Laboratory (LBNL) 230 kilometers away. The National Energy Research Scientific Computing Center (NERSC) typically relies on outside air to help cool its hot electronics. But smoke and soot from the fire forced engineers to switch to cooling recirculated indoor air, which drove up humidity levels.

“That’s when we discovered, ‘Wow, this is a real event,’” says Norm Bourassa, an energy performance engineer at NERSC, which serves about 3000 users a year in fields from cosmology to advanced materials. Hot and dry weather took a toll again a year later. California utilities cut NERSC’s power for fear that winds near LBNL might blow trees into power lines, sparking new fires. Although NERSC has backup generators, many machines were shut down for days, Bourassa says.

Managers at high-performance computing (HPC) facilities are waking up to the costly effects of climate change and the wildfires and storms it is intensifying. With their heavy demands for cooling and massive appetite for energy, HPC centers—which include both supercomputers and data centers—are vulnerable, says Natalie Bates, chair of an HPC energy efficiency working group set up by Lawrence Livermore National Laboratory (LLNL). “Weather extremes are making the design and location of supercomputers far more difficult.”

Climate change can bring not only heat, but also increased humidity, reducing the efficiency of the evaporative coolers many HPC centers rely on. Humidity can also threaten the computers themselves, as NERSC discovered during a second fire. As interior air was recirculated, condensation inside server racks led to a blowout in one cabinet, Bourassa says. For its next supercomputer, set to open in 2026, NERSC is planning to install power-hungry chiller units, similar to air conditioners, that would both cool and dehumidify outside air.
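
The reason humidity bites is the wet-bulb limit: an evaporative cooler can chill air only down toward the ambient wet-bulb temperature, and that floor rises with humidity. As a rough illustration (a self-contained sketch using Stull's 2011 empirical approximation, not a model of NERSC's actual plant), a few lines of Python show how much cooling headroom a humid day takes away:

```python
import math

def wet_bulb_c(temp_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from dry-bulb temperature
    and relative humidity, using Stull's 2011 empirical fit (valid for
    roughly 5-99% RH and -20 to 50 deg C)."""
    T, RH = temp_c, rh_pct
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# An evaporative cooler can only approach the wet-bulb temperature, so the
# same 30 deg C day leaves far less cooling headroom when the air is humid.
for rh in (20, 50, 80):
    print(f"30 C at {rh}% RH -> cooling floor ~{wet_bulb_c(30, rh):.1f} C")
```

At 30°C, the floor sits near 16°C at 20% relative humidity but climbs past 27°C at 80%, leaving chillers to make up the difference.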

The cost of such adaptations is motivating some HPC centers to migrate to cooler and drier climates, places like Canada and Finland, says Nicolas Dubé, chief technologist for Hewlett Packard Enterprise’s HPC division. “We can’t build in some locations going forward, it just doesn’t make sense,” he says. “We need to move north.”

But some HPC facilities find themselves stuck. The supercomputers at LLNL are used to simulate the explosions of nuclear weapons. The cost of relocating specialized personnel could be prohibitive, and LLNL’s California site is a highly secure facility, says Chief Engineer Anna-Maria Bailey. Instead, LLNL is studying the possibility of moving its computers underground. “Humidity and temperature control would be a lot easier,” she says, “like a wine cave.”

Running from climate change can be futile, however. In 2012, the National Center for Atmospheric Research opened a supercomputer site in Cheyenne, Wyoming, to take advantage of its cool, dry air. But climate change has brought longer and wetter thunderstorm seasons there, hampering evaporative cooling. In response, the Wyoming center added a backup chiller. “Now you have to build your infrastructure to meet the worst possible conditions, and that’s expensive,” Bates says.

Climate change is also threatening the lifeblood of these HPC facilities: electricity. HPC centers consume up to 100 megawatts of power, as much as a medium-size town, and hotter temperatures can drive up demand from other electricity users. During California’s heat wave this summer, when air-conditioning use surged, LLNL’s utility told the facility to prepare for power cuts of 2 to 8 megawatts. The cuts never came, but it was the first time the laboratory had been asked to prepare for mandatory reductions, Bailey says.

Many HPC facilities are heavy users of water, too, which is piped around components to carry away heat—and which will grow scarcer in the western United States as droughts persist or worsen. A decade ago, Los Alamos National Laboratory in New Mexico invested in water treatment facilities so its supercomputers could use reclaimed wastewater rather than more precious municipal water, says Jason Hick, an LANL program manager.

Although droughts and rising temperatures may be the biggest threats, the RIKEN HPC facility in Kobe, Japan, must contend with power outages caused by storms, which are expected to grow more intense with global warming. Flooding at a high-voltage substation cut RIKEN’s power for more than 45 hours in 2018, and a lightning strike on a power line knocked the facility out for about 15 hours this year. The center’s 200 projects span fields such as materials science and nuclear fusion, says Fumiyoshi Shoji, who directs operations and computer technologies. “If our system were unavailable, these research projects would stall,” he says.

Bates says future supercomputers will need to be constructed in ways that will allow them to cut performance—and the need for cooling and power—during bouts of bad weather. “We’re still building race cars, but we’re building them with a throttle.”
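
In software terms, such a throttle is a simple control loop. The sketch below is hypothetical: grid_alert and set_power_cap_mw are stand-ins for a utility demand-response feed and a facility power-capping interface (in practice supplied by vendor tooling or the job scheduler), but it captures the idea of trading performance for power during bad weather:

```python
import time

NORMAL_CAP_MW = 30.0    # hypothetical full-performance facility power budget
REDUCED_CAP_MW = 20.0   # hypothetical budget during a weather or grid alert

def grid_alert() -> bool:
    """Placeholder: poll a utility demand-response feed or weather service."""
    return False  # assume no alert in this sketch

def set_power_cap_mw(cap_mw: float) -> None:
    """Placeholder: apply a facility-wide cap, e.g., by asking the job
    scheduler to lower per-node CPU/GPU power or frequency limits."""
    print(f"Facility power cap set to {cap_mw} MW")

if __name__ == "__main__":
    while True:
        set_power_cap_mw(REDUCED_CAP_MW if grid_alert() else NORMAL_CAP_MW)
        time.sleep(300)  # re-evaluate every 5 minutes
```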
