Why Sierra the Supercomputer Had to Die

Wired
by Rebecca Heilweil
February 26, 2026
AI-Generated Deep Dive Summary
For seven years, Sierra, a groundbreaking supercomputer at Lawrence Livermore National Laboratory, ranked among the fastest machines in the world, running critical high-security nuclear simulations for the U.S. government. Now, after a distinguished career, Sierra is being decommissioned—a decision driven by her advancing age, hardware obsolescence, and rising maintenance costs. Once the second-fastest supercomputer globally, Sierra was built with cutting-edge IBM Power9 CPUs and Nvidia Volta V100 GPUs, but even state-of-the-art systems eventually reach their limits. Her retirement highlights the challenges of maintaining aging technology infrastructure.

Sierra's life spanned a remarkable era in high-performance computing. She was assembled from thousands of compute nodes spread across 240 racks, occupying nearly 7,000 square feet. During her operation, she faced the natural wear and tear of hardware components, following what IT experts call the "bathtub curve": a period of early failures, a stable middle phase, and then a sharp rise in failures as parts degrade. Sierra's managers noted that while she hadn't reached the final phase of this cycle, the cost of replacing failing components and sourcing outdated parts had become unsustainable.

Obsolescence also played a significant role in her decommissioning. The technology industry evolves rapidly, and maintaining compatibility with outdated systems becomes increasingly difficult. Sierra's hardware and software grew harder to support as newer product lines phased out compatible components.
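The bathtub curve mentioned above is a standard reliability-engineering idea: the failure (hazard) rate is high early on, flat through mid-life, and rising again as components wear out. A minimal sketch of that shape, with entirely made-up illustrative parameters (none come from the article), might look like:

```python
import math

def bathtub_hazard(t, infant=0.5, decay=2.0, base=0.05, wear=0.001, growth=1.0):
    """Illustrative hazard rate h(t) for the 'bathtub curve'.

    Three additive terms: early ("infant mortality") failures that decay
    exponentially, a constant base rate that dominates mid-life, and
    wear-out failures that grow exponentially late in life.
    All parameter values are hypothetical, chosen only to show the shape.
    """
    return infant * math.exp(-decay * t) + base + wear * math.exp(growth * t)

# Sample the curve across a notional multi-year service life.
rates = [bathtub_hazard(t) for t in range(8)]

# The rate is high at t=0, bottoms out mid-life, then climbs again.
assert rates[0] > rates[3]  # early failures exceed the mid-life rate
assert rates[7] > rates[3]  # wear-out failures exceed the mid-life rate
```

The article's point is that Sierra was retired before the steep right-hand side of this curve, while replacement-part costs were already rising.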