The Superintelligence Buildout Is Beginning

Jeff Brown | Nov 4, 2025 | The Bleeding Edge | 6 min read

Managing Editor’s Note: While most traders are stuck staring at charts all day trying to “time the market”… Folks who are following our colleague Larry Benedict’s trade alerts are poised to profit again and again.

Folks have posted huge wins like:

“$11,495 within 3 hours”…

“Over 800% in 24 hours”…

Or “$20k within 3 hours”…

And now, they’re ready to pounce when Trump’s “24-Hour Profit Window” opens right on schedule…

If you’d like a chance to profit too, join Larry for an exclusive event on November 6 at 8 p.m. ET SHARP.

Click here to register with a single click.


In the October issue of The Near Future Report, I told subscribers that September changed everything… and it was time to raise our buy-up-to prices on seven of our core AI infrastructure stocks.

In October, those seven positions soared an average of 19.9%. That included a 58% rise in Advanced Micro Devices (AMD), which is once again trading above its buy-up-to price. (Out of respect for paying subscribers, I won’t list all seven here.)

Stocks we bought more than three months ago are now up an average of 60.6%. We gave the thesis a chance to play out, and the results are clear. The average holding period is 491 days, which works out to an annualized return of roughly 45%.
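For readers who want to check the math, here is a minimal sketch of that annualization. It assumes the simple pro-rata method implied by the figures above (a 60.6% average gain held for 491 days); a compounding calculation would land a few points lower. This is an illustrative check, not the exact methodology used for the portfolio.

```python
# Back-of-the-envelope annualization of the portfolio figures cited above.
# Illustrative only, not the exact methodology used for the portfolio.

total_return = 0.606   # +60.6% average gain
holding_days = 491     # average holding period in days

simple_annualized = total_return * (365 / holding_days)
compound_annualized = (1 + total_return) ** (365 / holding_days) - 1

print(f"Simple (pro-rata) annualized return: {simple_annualized:.1%}")    # ~45.0%
print(f"Compound annualized return:          {compound_annualized:.1%}")  # ~42.2%
```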

That’s a return very few investors can match. To put that in perspective, Bloomberg published a ranking of the top-performing large hedge funds through September 2025. As we can see, only Melqart’s event-driven fund comes close.

Top-Performing Large Hedge Funds Through September 2025 | Source: Bloomberg

This performance didn’t happen by luck. It’s the result of methodically building a portfolio of companies whose technologies are enabling the advance of artificial intelligence.

This is a trend we’ve been tracking for years – well before I recommended Nvidia (NVDA) for the first time in early 2016.

The superintelligence buildout has begun.

It’s the beginning of a capital-spending supercycle. The hyperscalers – Microsoft, Amazon, Meta, Google, xAI, and Oracle – are racing to build out the physical infrastructure for artificial superintelligence (ASI): the data centers, GPUs, power grids, and semiconductor supply chains that will define the next decade.

The pace of change is staggering. Deals that used to take months to negotiate are getting done in weeks. Data center projects that once took years to approve are being fast-tracked across the U.S.

And that’s why we’ve spent the last year building a portfolio of companies leading this infrastructure revolution. What’s coming next will make the early AI boom of 2023 look small by comparison.

Here’s what we shared last month.

September Changed the Trajectory of AI

Plans became realities. Budgets jumped from the billions… to tens of billions… to hundreds of billions.

While this may sound unbelievable, it is true. These are calculated, long-term investments designed to put companies on the fastest path to artificial general intelligence (AGI) and then artificial superintelligence (ASI).

Let’s take a hard look at where this infrastructure cycle is headed. Some of the numbers may seem large. Others may feel almost unthinkable. But they are grounded in real-world build costs, power constraints, and capital commitments already underway.

Winning the race to ASI will be the most valuable and disruptive technological breakthrough of our lifetimes – far bigger than the personal computer revolution, the internet, and smartphone innovations combined. The implications for investors will be massive.

We’ve said all along the AI buildout would be large. It just got bigger. Industry roadmaps like Marvell’s already called for more than $1 trillion of hyperscaler spend by 2028. Now the announcements are arriving faster… and at a scale that resets expectations.

Projected Hyperscaler Infrastructure Spend | Source: Marvell

In September alone, nearly a trillion dollars of new projects, compute agreements, and investments were announced.

OpenAI, Oracle, and SoftBank announced five new U.S. AI data center sites as part of the Stargate plan. These data centers will cost $300 billion, and that’s in addition to the prior $100 billion commitment. The announcement pushes total planned capacity to 7 gigawatts (GW), putting the program ahead of schedule toward its $500 billion, 10 GW goal announced in January.

This expansion coincides with OpenAI’s $300 billion multiyear compute contract with Oracle… and NVIDIA’s own $100 billion investment in OpenAI to fuel its growth. When the world’s largest AI model builder, most valuable GPU supplier, and a major hyperscale cloud service provider all align their balance sheets, the message is clear: It will be built.

It’s more than just hyperscale data centers. Governments are moving, too. The U.K.–U.S. $42 billion tech pact aims to accelerate AI capacity in Britain. And dedicated AI data-center players like CoreWeave and Nebius announced $45 billion in compute purchase agreements.

These stack on top of the trillions already deployed or budgeted across chips, memory, networking, power, and cooling.

And we know this acceleration is just the beginning. Sam Altman, CEO of OpenAI, has been very busy. He is at the center of the biggest deal announcements. And he announced via X that OpenAI will have well over 1 million GPUs online by the end of the year.

Now, 1 million GPUs is an incredible accomplishment. But Altman wants his team to 100x that number.

And Elon Musk, his former founding partner at OpenAI and now arch-rival, echoed a similar sentiment.

And xAI isn’t wasting time. Its Colossus 2 project – launched in March 2025 – has already assembled enough infrastructure to support over 110,000 NVIDIA GB200 GPUs, according to SemiAnalysis. For comparison, it took Oracle, Crusoe, and OpenAI more than 15 months to build similar capacity.

We’re now seeing how quickly the landscape can shift… and how drastically current projections understate the capital that’s about to flood into this buildout.

Using Stargate’s costs as a benchmark, we can see that 1 GW of data center capacity currently costs about $50 billion to build. Now, I’ve said that roughly 100 GW of compute will be needed to achieve artificial superintelligence. Nothing, and I mean nothing, will slow down until that happens.

So with today’s technology and prices, that would mean a 100 GW AI superintelligence data center would cost roughly $5 trillion.
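As a quick back-of-the-envelope check of that figure, here is a sketch using only the numbers cited above: roughly $50 billion per GW (derived from Stargate’s $500 billion, 10 GW plan) and the 100 GW estimate for ASI-scale compute.

```python
# Rough cost estimate for a 100 GW ASI buildout at today's prices,
# using the Stargate-derived figure of ~$50 billion per gigawatt.

cost_per_gw = 50e9        # ~$50 billion per GW of data center capacity
asi_capacity_gw = 100     # rough estimate of compute capacity needed for ASI

total_cost = cost_per_gw * asi_capacity_gw
print(f"Estimated ASI buildout cost: ${total_cost / 1e12:.1f} trillion")  # ~$5.0 trillion
```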

And with each year that passes, the technology continues to improve. Each GPU generation from AMD or NVIDIA becomes more power-efficient, meaning the amount of electricity used per unit of compute declines.

Each new GPU generation is also more power-dense, delivering more performance per watt. As the chart below shows, AI server rack power density climbs every year, and by 2027, AI racks will have more than 3x the power density they do today.

Server density will continue to increase. And this will only fuel more investment.

At the multi-gigawatt scale required to train frontier AI models, AGI, and ultimately ASI, even small improvements in power efficiency translate into hundreds of millions of dollars in electricity savings.
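To put a rough number on that sensitivity, here is an illustrative sketch. The 1 GW cluster size, 95% utilization, and $0.08 per kWh power price are assumptions chosen for illustration, not figures from this issue.

```python
# Illustrative electricity-cost sensitivity for a large AI training cluster.
# Assumptions (not from the article): 1 GW average draw, 95% utilization,
# $0.08/kWh industrial power price, and a 10% efficiency improvement.

power_gw = 1.0
utilization = 0.95
hours_per_year = 24 * 365
price_per_kwh = 0.08     # assumed industrial rate in dollars

annual_kwh = power_gw * 1e6 * utilization * hours_per_year   # 1 GW = 1e6 kW
annual_cost = annual_kwh * price_per_kwh

efficiency_gain = 0.10   # 10% less energy per unit of compute
annual_savings = annual_cost * efficiency_gain

print(f"Annual electricity cost at 1 GW:    ${annual_cost / 1e6:.0f} million")    # ~$666M
print(f"Savings from a 10% efficiency gain: ${annual_savings / 1e6:.0f} million") # ~$67M
```

At the multi-gigawatt scale of the clusters discussed above, that same 10% gain works out to several hundred million dollars per year.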

The same is true for running AI applications. As the industry scales further, it will be just as sensitive to inference costs as it is to training costs. This is what continues to drive the incessant need to purchase the most advanced generation of GPUs.

No matter how we look at it, even with improved efficiency and falling cost-per-watt, the next-generation ASI factories could still carry price tags of $500 billion each. And there won’t be just one. Multiple companies, as well as multiple nation-states, will race to build and operate these clusters to gain strategic advantages. No one, and no country, will be beholden to just one AGI model or ASI model.

This kind of demand would imply 10x growth potential for leading semiconductor and infrastructure names. We believe NVIDIA (NVDA), AMD (AMD), and others are among the best-positioned to capture this upside.

We’re not here to speculate; we’re here to make solid investments grounded in facts: hard data based on power requirements, computing requirements, available investment capital, and competitive market forces.

We are still early. The spending is accelerating. The buildout is shifting from multi-megawatt AI factories to multi-gigawatt AI factory clusters. And most of these projects are already underpinned by long-dated power, supply, and equipment contracts.

This is the window to own the companies supplying the equipment to enable AGI and ultimately ASI as we begin the next leg higher.

Tomorrow, we’ll share the next opportunity where we see big gains continuing. We’re so confident in this trend that this past month, we made two new investments in leaders in the space. Stay tuned for the second installment.

Jeff


Want more stories like this one?

The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.