Thermodynamic Computing

Jeff Brown | Nov 25, 2025 | The Bleeding Edge | 5 min read


It looks like some kind of ancient and powerful artifact…

Seen, perhaps, in a science fiction epic – like Dune…

XTR-0 | Source: Extropic

In the shape of a heptagon – with odd engravings – and an elevated top containing an asymmetrical, cell-like encasement…

It makes the imagination run wild wondering what happens when we activate it.

If we showed this image to a thousand people, I doubt that anyone would correctly guess what it is…

Which is, quite simply, the most radical innovation in computing technology of the last two decades.

10,000-Times Less Energy

The company behind this exotic technology is Extropic, and the image above is its XTR-0 thermodynamic computer.

Just announced at the end of October, the XTR-0 is a research and development prototype – a precursor to the forthcoming production model, the Z1 – designed to introduce the world to a novel form of physics-based, probabilistic computing.

The announcement related to the XTR-0 and its corresponding published research got a lot of attention in the industry.

The reason was simple: Extropic’s radical computing architecture was able to perform on par with GPUs. And it could do so using 10,000 times less energy.

It certainly made for a great headline.

Some in the media, unable to understand the research or the technology, proclaimed it an existential threat to NVIDIA.

After all, if you can perform on par with NVIDIA GPUs – and with 10,000 times less energy – the industry should immediately shift to this kind of thermodynamic computing, right?

Wrong.

But that doesn’t mean that this new computing technology isn’t worth understanding.

Quite the opposite.

The implications have incredible potential… and Extropic is definitely a company to watch.

A Radically Different Model

What Extropic has built is a probabilistic computing platform designed with pbits (probabilistic bits) constructed from standard transistors used in semiconductors.

Where this technology gets tricky is that the probability distribution over the outputs of any computational run needs to be expressed as an energy-based model (EBM).

An energy value is assigned to each possible outcome. High-energy states represent the least likely outcomes, and the lowest-energy state is both the most likely outcome and the optimal result.

It is a completely different computational paradigm compared to the deterministic structure of working with CPUs or GPUs. It’s mind-bending in many ways precisely because it is so radically different.
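To make the idea concrete, here’s a toy sketch in Python of how an energy-based model works. The energy function and couplings below are illustrative assumptions on my part – not Extropic’s actual design:

```python
import math
from itertools import product

# Hypothetical toy energy-based model over 3 probabilistic bits (pbits).
# The couplings and bias below are made-up numbers for illustration.
def energy(bits):
    x0, x1, x2 = bits
    # Couplings reward agreement between neighboring bits;
    # a small bias pulls x0 toward 1.
    return -1.0 * (x0 * x1) - 1.0 * (x1 * x2) - 0.5 * x0

states = list(product([0, 1], repeat=3))

# Boltzmann distribution: p(x) is proportional to exp(-E(x)).
# Low-energy states get exponentially more probability mass.
weights = [math.exp(-energy(s)) for s in states]
Z = sum(weights)  # partition function (normalizer)
probs = {s: w / Z for s, w in zip(states, weights)}

best = max(probs, key=probs.get)
# The most likely state is also the minimum-energy state.
assert best == min(states, key=energy)
```

Scoring every state like this only works for a handful of pbits. The whole point of doing it in hardware is to sample low-energy states without enumerating them all.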

It’s not designed to just plug in and run a software program…

Extropic’s semiconductors are what it calls Thermodynamic Sampling Units (TSUs).

The company has developed its own open-source software library called THRML, which is designed to help programmers run software on Extropic’s TSUs.

To draw a parallel, THRML is to Extropic’s TSUs as CUDA is to NVIDIA’s GPUs.

The CUDA computing platform has become one of NVIDIA’s strongest assets because so many software developers know how to work with CUDA to run artificial intelligence and machine learning applications.

New forms of computing technology require new programming libraries to run applications, hence THRML.
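THRML itself is open source, but rather than guess at its API, here’s a plain-Python sketch of the kind of Gibbs-style sampling a TSU performs natively. The four-pbit chain and the coupling strength are made-up values for illustration:

```python
import math
import random

# A plain-Python sketch of Gibbs sampling over a chain of pbits.
# This is NOT THRML code – just an illustration of the style of
# probabilistic update such a library orchestrates.
random.seed(0)

J = 1.0  # coupling strength between neighboring pbits (assumption)
n = 4
x = [random.choice([0, 1]) for _ in range(n)]

def local_field(i, x):
    # Sum of couplings from pbit i's neighbors in the chain.
    h = 0.0
    if i > 0:
        h += J * x[i - 1]
    if i < n - 1:
        h += J * x[i + 1]
    return h

def gibbs_step(x):
    # Resample each pbit from its conditional Boltzmann distribution:
    # p(x_i = 1 | neighbors) = sigmoid(local field).
    for i in range(n):
        p1 = 1.0 / (1.0 + math.exp(-local_field(i, x)))
        x[i] = 1 if random.random() < p1 else 0
    return x

for _ in range(1000):
    gibbs_step(x)
# After burn-in, the chain favors aligned, low-energy configurations.
```

Each pbit is resampled from its conditional distribution given its neighbors. In Extropic’s hardware, that randomness comes from physical thermal noise in the transistors rather than a software random number generator.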


The Need for Energy Efficiency

Extropic’s mission was born out of an energy-driven reality that we’ve been researching and writing about for years at Brownstone Research: The single biggest bottleneck to achieving AGI and ultimately ASI is energy production.

Extropic can’t control energy production, but it came at the problem from the opposite side.

By using physics-based probabilistic thermodynamic computing, it could radically reduce the amount of energy required to run generative AI applications.

At a high level, the concept is pretty simple.

If we think of generative AI as software that samples from probability distributions – and of training as the process of inferring that distribution from a given set of training data – we can draw a parallel to what Extropic is doing.

When working with neural networks, this probabilistic structure shows up as the weights an AI model assigns to possible outputs.

The higher the weight, the higher the likelihood that the output is correct. In that way, the “answer” of an AI model is probabilistically the correct answer.

Energy-based models (EBMs) are similar in that the lowest-energy state is also probabilistically the correct answer.

The framework is similar. It’s just that the actual approach is different.
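The parallel can be made exact in a few lines of Python: a softmax over a model’s scores and a Boltzmann distribution over energies are the same formula, with the score playing the role of negative energy. The numbers below are made up for illustration:

```python
import math

# A neural net's softmax over scores ("logits") and an EBM's Boltzmann
# distribution are the same formula with logit = -energy.
logits = [2.0, 0.5, -1.0]        # illustrative per-output scores
energies = [-l for l in logits]  # the equivalent EBM energies

def softmax(zs):
    # Numerically stable softmax: p_i ∝ exp(z_i).
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

p_softmax = softmax(logits)
p_boltzmann = softmax([-e for e in energies])  # p ∝ exp(-E)

# Same distribution either way; the highest-weight output
# is exactly the lowest-energy state.
assert all(abs(a - b) < 1e-12 for a, b in zip(p_softmax, p_boltzmann))
assert p_softmax.index(max(p_softmax)) == energies.index(min(energies))
```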

Knowing this, does it mean that NVIDIA (NVDA) and AMD (AMD) are in trouble? No longer a need for their GPUs?

Is this the beginning of the end?

One Chip to Rule Them All?

Again, on the surface, it sounds like that might be true.

After all, hundreds of thousands of Extropic’s TSUs could run on a single iPhone-sized battery. That’s how significant the energy savings are for this kind of thermodynamic computing architecture.

The reality, however, is that only certain software tasks are particularly well-suited to Extropic’s probabilistic thermodynamic computing.

For those who took the time to read and understand Extropic’s research, it’s clear the company envisions what it refers to as a hybrid thermodynamic-deterministic machine learning system.

The point being made is that not all machine learning and artificial intelligence tasks should be handled by a probabilistic computer. In fact, most are best suited to deterministic computing systems like GPUs.

I really like this framing, because it highlights the future of computing and something critically important for investors to understand: There isn’t “one chip to rule them all.”

There are – and will continue to be – different semiconductors and computing architectures that are designed to perform specific kinds of tasks extremely well.

Extropic for probabilistic computing… Cerebras, Groq, AMD, and others for inference… NVIDIA and AMD for training large AI models… quantum computers for the most complex workloads out of reach of classical computing architectures, etc.

Data center companies and hyperscalers will need to employ all of these computational systems and make them available to their customers.

And software companies will provide development environments that abstract away the complexity of so many different computing platforms. A sophisticated software layer will be designed to parse out tasks to the most efficient computing platform.

The answer to the energy shortage is being attacked from both sides.

The energy industry is racing to build more natural gas turbines, fourth-generation nuclear fission plants, and nuclear fusion reactors – and to bridge the gap with whatever increased energy production it can squeeze from existing infrastructure.

And the semiconductor industry is doing the same: by making application-specific semiconductors, optimized for energy efficiency and performance for specific tasks like inference.

The energy gap is closing from both sides… and the incentives, regulations, and capital support have never been more aligned behind a technology at any point in history.

Jeff


Want more stories like this one?

The Bleeding Edge is the only free newsletter that delivers daily insights from the high-tech world, along with the topics and trends most relevant to investors.