“We’ll take all the capacity you have… wherever it is.”
That’s what a customer told Oracle founder Larry Ellison just a few weeks ago.
It’s a staggering statement. Especially from a company that’s used to big enterprise contracts. Ellison admitted on the company’s most recent earnings call that Oracle has never received a request like that before.
The reason? Demand for AI infrastructure, as Ellison put it, is “astronomical.”
Last week, Oracle reported earnings for the fourth quarter of its fiscal year 2025. The company not only beat expectations, but it also revised forward guidance higher for fiscal 2026. Management even hinted that 2027 projections were likely too conservative… even though those estimates were just issued.
Even more impressive is that these forecasts don’t include any potential revenue from Project Stargate – the ambitious $500 billion AI initiative announced by President Trump, spearheaded by OpenAI and SoftBank, with Oracle, Microsoft, and NVIDIA among its major tech partners.
Investors took notice. Oracle (ORCL) shares shot higher after the call. Wall Street is waking up to the fact that this company has been wildly mispriced.
But this story isn’t just about Oracle. It’s about the entire AI infrastructure buildout.
Because what Oracle is seeing on the ground confirms something we’ve been saying for months: Demand for AI compute isn’t cooling off. It’s accelerating.
And that’s great news for the companies we follow closely in The Bleeding Edge and across our paid research.
When it comes to renting out AI compute – GPUs for training and inference – there are a few big players: Microsoft Azure, Amazon Web Services, and Google Cloud.
These are the “hyperscalers.” And they have one major flaw in common…
They’re also building their own AI models.
Google has Gemini. Microsoft is married to OpenAI. Amazon is fine-tuning Alexa and investing in Anthropic.
That means every AI startup using hyperscaler infrastructure is, at some level, working with a competitor.
Yes, these platforms have strict security protocols to protect data. But that’s not always enough when you’re protecting your crown jewels: proprietary model weights, datasets, or architecture designs.
And that’s exactly why Oracle is emerging as the preferred partner for neutral, enterprise-grade AI compute.
Unlike the others, Oracle isn’t trying to build a rival foundation model. It’s focused solely on providing best-in-class infrastructure.
And that neutrality is paying off.
Ellison recently revealed that Oracle just signed a major deal with Chinese e-commerce giant Temu. The likely reason? Temu didn’t want its shopper data anywhere near Amazon, Microsoft, or Google.
And Temu isn’t alone. Oracle now counts TikTok, Uber, Lloyds Banking Group, and even Meta and OpenAI among its infrastructure clients.
For these firms, the message is clear: When the stakes are high, they trust Oracle.
There are younger players in the market like CoreWeave (CRWV) and Nebius (NBIS). And they’ve built impressive platforms tailored for AI workloads. But they’re smaller. Oracle is more than 7x larger than CoreWeave… and nearly 50x the size of Nebius.
That size matters when you’re a Fortune 500 company about to invest millions into model training. Reliability, security, and global scale win out.
This is exactly why Oracle’s seeing a surge of demand unlike anything in its history.
And for us, that’s the real takeaway… Oracle’s earnings are more than just a success story. They’re a signal. A signal that AI adoption is gaining speed across every sector.
This wave of infrastructure spending is just getting started.
If we want a glimpse of what’s coming, we can look at a key financial metric: Remaining Performance Obligations, or RPO.
This is how much revenue a company has already locked in – signed contracts it still has to fulfill before the revenue shows up on the income statement. It’s one of the best leading indicators of future growth.
At the end of its fiscal year, Oracle reported a stunning $138 billion in RPO. That’s up 41% from the year before.
And management expects that number to double in the next 12 months.
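To see how those figures hang together, here’s a quick back-of-the-envelope sketch in Python. The only derived number is the implied prior-year RPO, which is simply backed out of the 41% growth rate:

```python
# Back-of-the-envelope math on Oracle's reported RPO.
rpo_now = 138e9    # $138 billion at fiscal year-end
yoy_growth = 0.41  # up 41% year over year

rpo_prior = rpo_now / (1 + yoy_growth)  # implied prior-year RPO
rpo_next = rpo_now * 2                  # management expects it to double

print(f"Implied prior-year RPO: ${rpo_prior / 1e9:.0f}B")        # ~$98B
print(f"RPO if it doubles as expected: ${rpo_next / 1e9:.0f}B")  # $276B
```

That would put Oracle’s contracted backlog at roughly $276 billion within a year.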
That kind of growth gives Oracle the confidence to raise its full-year revenue outlook. CFO Safra Catz said the company now expects to grow revenue by 16% this year. And it will likely beat its already ambitious target for fiscal 2027.
For context, most companies don’t guide more than 12 months out. Oracle is confidently projecting multiple years ahead. And it’s still saying those numbers might be too low.
Last October, the company hosted an analyst day where it laid out a roadmap through 2029. And now, it’s saying it will overshoot that, too.
But even those long-term forecasts may understate what’s really happening.
Because Oracle’s cloud infrastructure business, the backbone of its AI data center offering, is growing far faster.
Catz said cloud revenue will rise 40% this year. And cloud infrastructure specifically is expected to surge by more than 70%.
Of course, this kind of growth takes serious investment.
Oracle spent $21 billion last year building new data centers. This year, it expects to spend over $25 billion. That’s a 19% increase year over year.
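A quick sanity check on that growth rate, using the two figures above:

```python
# Sanity check on the stated year-over-year capex increase.
capex_last_year = 21e9  # last year's data center spend
capex_this_year = 25e9  # this year's expected spend ("over $25 billion")

growth = capex_this_year / capex_last_year - 1
print(f"YoY capex growth: {growth:.0%}")  # ~19%
```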
And it’s not alone. Every major hyperscaler is upping its infrastructure spend. Here’s a look at the total capex of the top four:
And that’s not even counting newer players like CoreWeave and Nebius – the smaller, faster-growing data center firms focused exclusively on AI compute.
All this tells us that the AI infrastructure buildout remains strong. And it is accelerating.
We’ve long measured progress in semiconductors by Moore’s Law. This is the observation that the number of transistors on a chip – and, roughly, its performance – doubles about every two years.
But today’s reality has outgrown that rule.
As the chart below shows, the demand for compute to train AI models is growing far faster than Moore’s Law would predict.
For example, OpenAI’s GPT-4 was estimated to use over 1.7 trillion parameters – roughly 10x the 175 billion of GPT-3, released less than three years earlier. Training compute has been doubling every 6–9 months, not every 24 months as Moore’s Law would suggest.
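To see just how wide that gap is, here’s a minimal sketch comparing the two doubling rates. It assumes the fast end (6 months) of the 6–9 month range cited above:

```python
# Compound growth over a 24-month horizon under two doubling periods.
moore_doubling = 24  # months per doubling under Moore's Law
demand_doubling = 6  # months per doubling of training compute
                     # (fast end of the 6-9 month range)
horizon = 24         # months

moore_factor = 2 ** (horizon / moore_doubling)    # 2x
demand_factor = 2 ** (horizon / demand_doubling)  # 16x

print(f"Chip performance over {horizon} months: {moore_factor:.0f}x")
print(f"Training compute demand over {horizon} months: {demand_factor:.0f}x")
```

In other words, over a single Moore’s Law cycle, demand for training compute grows roughly 16x while chip performance merely doubles.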
This tells us that there is an explosion of demand for GPUs and CPUs. Yes, chips are getting faster and more power-efficient each year. But not fast enough to meet the exploding demand for model training and inference. The only way to keep up is to manufacture more chips and build more data centers to deploy them.
The critical bottleneck in the AI race is the speed at which we can build data centers, power them, and fill them with compute infrastructure. And that’s why we believe we’re still early in the AI infrastructure boom.
Jeff has predicted we’ll see artificial general intelligence (AGI) within the next 12 months. And whether it’s 12 months or 24, one thing is certain: The compute intensity required to reach that milestone is enormous.
As more companies adopt AI not just for one-off queries but for full-scale decision-making, automation, and optimization… the infrastructure needs will only grow.
We’ll continue tracking the key players building this digital backbone. Because those who recognize what’s coming and invest accordingly stand to capture the upside of a once-in-a-generation technology boom.
Nick Rokke
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.