Researchers Develop Nonlinear Thermodynamic Computing Framework at Lawrence Berkeley National Laboratory
A collaboration between researchers at the Molecular Foundry and NERSC has proposed a design framework for thermodynamic computing, which aims to use thermal noise as a power source. Their paper in Nature Communications demonstrates that nonlinear computations, similar to those performed by neural networks, can be executed without reaching thermodynamic equilibrium. Using 96 GPUs on the Perlmutter supercomputer, the team developed a genetic algorithm for training thermodynamic neural networks, an approach that could significantly reduce the energy consumption of machine learning applications.

Researchers at the Molecular Foundry and NERSC have advanced thermodynamic computing, which uses thermal noise as a power source. Their framework, published in Nature Communications, enables nonlinear computations, the kind central to neural networks, without requiring the system to reach thermodynamic equilibrium. To train such a thermodynamic computer, the team ran a genetic algorithm on 96 GPUs of the Perlmutter supercomputer, evaluating billions of candidate configurations to optimize performance.
Although training is resource-intensive, the approach promises low-energy operation once training is complete. Future work will focus on developing hardware implementations and new algorithms for nonlinear calculations.
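To illustrate the kind of training described above, here is a minimal genetic-algorithm sketch. It is illustrative only: the paper's actual fitness function, genome encoding, and parallel GPU evaluation are not shown. Each "genome" stands in for a parameter vector of a hypothetical thermodynamic network, and fitness is a toy objective (distance to a target vector).

```python
import random

# Hypothetical target parameters standing in for a trained network's weights.
TARGET = [0.5, -1.2, 3.0, 0.0]

def fitness(genome):
    # Lower is better: squared error against the target parameters.
    return sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3, scale=0.5):
    # Perturb each parameter with probability `rate` by Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=50, generations=200, elite=10):
    random.seed(0)  # reproducible run for this sketch
    population = [[random.uniform(-5, 5) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[:elite]  # keep the best candidates
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - elite)]
        population = survivors + children
    return min(population, key=fitness)

best = evolve()
print(fitness(best))
```

In the actual work, each fitness evaluation would correspond to simulating a candidate thermodynamic network, which is why the search over billions of variations required a supercomputer.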