17 March 2026
Thermodynamic Computing: Harnessing Noise for Energy-Efficient AI

What if the thermal noise that hinders the efficiency of both classical and quantum computers could, instead, be used as a power source? This is the ambitious goal of thermodynamic computing, a cutting-edge field that seeks to transform how we think about computation. Researchers at the Molecular Foundry and the National Energy Research Scientific Computing Center (NERSC), both U.S. Department of Energy (DOE) facilities at Lawrence Berkeley National Laboratory, have made significant strides in this area. Their recent paper in Nature Communications proposes a design and training framework for thermodynamic computers that mimic neural networks, potentially reducing the energy demands of machine learning.

Modern computing is energy-intensive. A single Google search, for instance, consumes enough energy to power a six-watt LED for three minutes. Much of this cost stems from the need to manage thermal noise, the random thermal motion of electrons within conductive materials. Classical computers operate at energy scales far above this noise to ensure reliable computation, at a substantial energy cost. Thermodynamic computing turns this paradigm on its head: by using thermal fluctuations as a power source, it can significantly reduce the external energy required for computation.
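For scale, that comparison corresponds to roughly 0.3 watt-hours per search, a commonly cited figure; the arithmetic is a one-liner:

```python
# 6 W LED running for 3 minutes: energy in joules, then watt-hours
energy_j = 6 * 3 * 60                          # power (W) x time (s) = 1080 J
print(energy_j, "J =", energy_j / 3600, "Wh")  # 1080 J = 0.3 Wh
```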

Revolutionizing Computation with Noise

Both classical and quantum computing typically aim to suppress thermal noise. In contrast, thermodynamic computing embraces these fluctuations and can operate at room temperature, unlike many quantum computers, which require cryogenic cooling. This approach is part of a broader movement towards Beyond-Moore’s-Law microelectronics and low-power, energy-aware computing.

“Thermodynamic computing is noise-powered,” said Stephen Whitelam, a staff scientist at the Molecular Foundry and co-author of the study.

“The premise of thermodynamic computing is that if you take a physical device with an energy scale comparable to that of thermal energy and leave it alone, it will change state over time, driven by thermal fluctuations. The goal is to program it so that this time evolution does something useful.”
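To make that premise concrete, here is a minimal sketch of the physics Whitelam describes (our illustration, not the authors' code): an overdamped Langevin particle in a double-well potential whose energy barrier is comparable to the thermal energy, so noise alone drives it between its two states.

```python
import numpy as np

# Minimal sketch (our illustration, not the authors' code): a degree of
# freedom in a double-well potential U(x) = (x^2 - 1)^2, whose barrier
# height (1 here) is comparable to the thermal energy kT. Left alone,
# thermal fluctuations carry it back and forth between the two wells.

rng = np.random.default_rng(0)
kT, dt = 0.5, 1e-3            # thermal energy and integration time step
x, well, hops = -1.0, -1, 0   # start in the left well

for _ in range(500_000):      # Euler-Maruyama integration of Langevin dynamics
    force = -4.0 * x * (x**2 - 1.0)            # F = -dU/dx
    x += force * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    if x > 0.8 and well < 0:                   # arrived in the right well
        hops, well = hops + 1, 1
    elif x < -0.8 and well > 0:                # arrived in the left well
        hops, well = hops + 1, -1

print("well-to-well transitions driven purely by thermal noise:", hops)
```

Shaping that potential so the fluctuation-driven evolution maps inputs to useful outputs is what "programming" such a device means.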

Overcoming Challenges in Thermodynamic Computing

Despite its promise, thermodynamic computing faces significant challenges. Traditionally, these systems have been limited to computations at thermodynamic equilibrium, requiring long wait times for the system to relax to its equilibrium distribution. Moreover, the range of computations has been restricted to linear-algebra problems, limiting their utility for general-purpose computation.
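To see why the equilibrium setting maps so naturally onto linear algebra, and why it is slow, consider a scheme well known in the thermodynamic-computing literature (sketched here with our own toy parameters, not the Berkeley framework): for a quadratic energy U(x) = 0.5 x^T A x - b^T x, the equilibrium average of the fluctuating state is A^{-1} b, so time-averaging after a long relaxation solves the linear system A x = b.

```python
import numpy as np

# Equilibrium-style linear algebra (the traditional approach this work
# moves beyond): for energy U(x) = 0.5 x^T A x - b^T x, the equilibrium
# (Boltzmann) distribution has mean A^{-1} b, so time-averaging the
# fluctuating state after a long relaxation solves A x = b.

rng = np.random.default_rng(1)
A = np.array([[3.0, 1.0], [1.0, 2.0]])        # positive definite
b = np.array([1.0, -1.0])

kT, dt = 0.1, 1e-3
x = np.zeros(2)
running_sum, n = np.zeros(2), 0
for step in range(300_000):
    x = x - (A @ x - b) * dt + np.sqrt(2 * kT * dt) * rng.standard_normal(2)
    if step > 100_000:                        # the long wait for equilibrium
        running_sum += x
        n += 1

print("equilibrium average:", running_sum / n)
print("numpy solve        :", np.linalg.solve(A, b))
```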

Whitelam and his colleague Corneel Casert of NERSC have addressed these challenges, demonstrating through digital simulations that thermodynamic computers operating away from equilibrium can perform nonlinear computations akin to those of neural networks. This removes the restriction to linear algebra and significantly expands the range of problems these devices can tackle.

“A nonlinear thermodynamic circuit can behave like a neuron in a neural network,” Whitelam explained.

“Nonlinearity is what gives a neural network its expressive power. By building thermodynamic neurons into a connected structure, we can mimic a neural network and enable machine learning.”
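As a toy illustration of both points (our own construction, with an assumed double-well potential and random weights, not the circuit from the paper), the sketch below gives a noisy unit an input-tilted double-well energy; its time-averaged state is a smooth, sigmoid-like function of the input, and such units can be wired into layers.

```python
import numpy as np

# Toy illustration (our construction, not the paper's circuit): a noisy
# unit in a double-well potential tilted by its input h. Its time-averaged
# state is a smooth, nonlinear (sigmoid-like) function of h, which is the
# ingredient a neuron needs, and units can be composed into layers.

rng = np.random.default_rng(2)

def thermo_neuron(h, kT=0.5, dt=1e-3, steps=200_000):
    x, total = 0.0, 0.0
    for _ in range(steps):
        drift = -4.0 * x * (x**2 - 1.0) + h   # -dU/dx for the tilted double well
        x += drift * dt + np.sqrt(2 * kT * dt) * rng.standard_normal()
        total += x
    return total / steps                      # time-averaged output

for h in (-1.0, -0.3, 0.0, 0.3, 1.0):
    print(f"input {h:+.1f} -> mean output {thermo_neuron(h):+.2f}")

# Composing units into a tiny two-layer network: each unit's averaged
# output, weighted, becomes the tilt felt by the units downstream.
def layer(inputs, W):
    return np.array([thermo_neuron(h) for h in W @ inputs])

W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))  # assumed weights
print("network output:", layer(layer(np.array([0.5, -0.2]), W1), W2))
```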

Innovative Training Techniques

The next hurdle is training these systems. Unlike digital neural networks, thermodynamic computers are stochastic: no two runs are identical, so the deterministic training methods used for digital networks do not directly apply. Whitelam and Casert have developed a novel solution.

Casert used a large-scale computational framework on the Perlmutter supercomputer at NERSC, running extensive evolutionary simulations on 96 GPUs in parallel. The approach, based on genetic algorithms, evaluated billions of noisy dynamical trajectories to identify the most effective network parameters. Although this training process is more resource-intensive than training a digital network, it yields a computer that operates with minimal energy once trained.

“It’s a very different way of optimizing a neural network,” Casert noted.

“Training a thermodynamic neural network by simulating it digitally is expensive, but once trained and built as physical hardware, we can perform inference on that hardware for a very low energy cost.”
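The production runs are far beyond a laptop, but the logic of the training loop can be caricatured in a few lines (a sketch under our own assumptions: a single noisy linear unit standing in for the device, a small population, and an arbitrary mutation scale). Each candidate parameter set is scored by its average loss over many noisy trajectories, and the best candidates are copied and mutated:

```python
import numpy as np

# Caricature of the genetic-algorithm training described above (the toy
# model, population size, and mutation scale are our assumptions). Because
# every run of a stochastic device is different, each parameter set is
# scored by averaging its loss over many noisy trajectories.

rng = np.random.default_rng(3)
TARGET = 0.7                       # toy task: steer the mean final state to 0.7

def avg_trajectory_loss(theta, n_traj=32, steps=200, kT=0.2, dt=0.01):
    bias, k = theta                # parameters of one noisy linear unit
    x = np.zeros(n_traj)           # run n_traj noisy trajectories in parallel
    for _ in range(steps):
        x += (bias - k * x) * dt + np.sqrt(2 * kT * dt) * rng.standard_normal(n_traj)
    return np.mean((x - TARGET) ** 2)

pop = rng.normal(size=(32, 2))     # population of candidate parameter sets
for gen in range(30):
    fitness = np.array([avg_trajectory_loss(p) for p in pop])
    parents = pop[np.argsort(fitness)[:8]]                  # keep the best 8
    pop = np.repeat(parents, 4, axis=0) + 0.1 * rng.normal(size=(32, 2))
    if gen % 10 == 0:
        print(f"generation {gen:2d}: best loss {fitness.min():.4f}")
```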

The Future of Thermodynamic Computing

As a relatively young field, thermodynamic computing is poised for further development. The next steps are to realize these designs in hardware and to develop new algorithms suited to systems that need not reach equilibrium, including algorithms for the kinds of nonlinear computations performed by digital neural networks.

“It’s an exciting field,” Whitelam concluded.

“We’re looking for more efficient ways of computing, and thermodynamic computing is definitely one of them.”

The team is actively seeking experimental partners to bring both hardware and software to fruition, marking another step forward in exploring the possibilities of thermodynamic computing.