
It’s hard to believe that three years have already passed since the original release of ChatGPT from OpenAI…
As we now know, it was the catalyst that kicked off the largest infrastructure buildout of any kind in history.
After three years of frenzied construction, programming, and training, many have suggested that it’s a bubble ready to burst…
But they’re wrong.
The data, investment, monetization, and path towards more powerful frontier models all tell a very different story…
A breakthrough announced on November 18 is yet another example of forward momentum.
Google (GOOGL) boldly announced “A New Era of Intelligence” with the latest release of its frontier AI model – Gemini 3.
It is an impressive piece of work, and it’s clear that the leadership of Google DeepMind CEO Demis Hassabis has been a catalyst for this remarkable advancement in artificial intelligence.
Google had fallen significantly behind OpenAI over the last couple of years. And the competitive threat of Elon Musk and his team at xAI made matters even worse, forcing Alphabet CEO Sundar Pichai to shake up his U.S.-based AI team.
So, in April 2024, Pichai placed Hassabis at the top of Google’s AI research and development and tasked him with closing the gap between Google’s Gemini and competing state-of-the-art reasoning models.
As painful as the shuffle was for some, the move worked.
Gemini 3 is impressive, and it is already incorporated into Google’s search results, which produce an “AI Overview” like the example shown below. I’m sure you’ll recognize this feature if you’re still using Google.

Google’s AI Overviews now reach more than 2 billion users every month. The Gemini app is now seeing 650 million users a month. And 13 million software developers have built on top of Google’s AI models.
The uptake is stunning. We’ve never before seen the kind of adoption rates that we’ve seen with OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok.
Google’s pitch on Gemini 3 is that it is a state-of-the-art multi-modal reasoning model that incorporates advanced agentic capabilities. The company positions Gemini 3 at the top of the charts for nearly every major AI benchmark test, as seen below.
The percentages in bold indicate where Gemini 3 is leading.

Gemini 3 Benchmarks for Most Common AI Tests | Source: Google
Those with a sharp eye will quickly realize some important data points are missing. In the chart above, Gemini 3 is compared to Gemini 2.5 Pro, Anthropic’s Claude Sonnet 4.5, and OpenAI’s GPT-5.1.
Notably, xAI’s Grok is nowhere to be seen.
How odd…
Even when presenting the results of the Gemini 3 Deep Think model – a model that allocates more computational resources and time to improve performance – xAI’s Grok is still not represented.

Source: Google
Google’s results are spectacular, but why the charade?
The industry and the media continue to pretend, and perhaps hope, that if they just ignore xAI and Grok, they will somehow just go away…
That will never happen.
Let’s take, for example, Humanity’s Last Exam (HLE), whose results are shown on the left-hand side of the chart above.
HLE is designed to be a measure of how close an AI is to artificial general intelligence (AGI). It is a multi-modal benchmark composed of 2,500 complex, expert-level questions that cannot be answered via simple internet retrieval.
They require contextual understanding and reasoning skills on par with PhD-level experts in every field represented.
Based on the benchmark scores above, Google would have us believe that it’s the best. Gemini 3 Deep Think scored 41% – an amazing score.
Yet, xAI’s Grok 4 Heavy – the equivalent of Google’s “Deep Think” version – scored 58.3%…
And that was in July… more than four months ago! Just imagine where Grok is today…
Now, to Google’s credit, its largest performance jump came on the ARC-AGI-2 benchmark, the other AGI-centric benchmark widely watched by the industry. Gemini 3 Deep Think jumped to 45.1%, compared to just 4.9% for Gemini 2.5 Pro.

Source: Arc Prize
Last July, xAI’s Grok 4 (Thinking) model roughly doubled the state-of-the-art score at the time with a result of 15.9%. It wasn’t even close. Grok’s performance was so impressive that it was a wake-up call to the entire industry.
Today, the above chart looks very different. If we believed it was up to date, we would have to conclude GPT-5 Pro, Claude Opus 4.5, and Gemini 3 Deep Think are well ahead of xAI’s Grok.
But here’s the thing…
The chart isn’t up to date. Grok hasn’t been tested since July. And unlike other frontier AI models, xAI’s approach to its training and improvement of Grok isn’t defined by annual or biannual release cycles. Grok’s performance is updated every week.
And a major release, a step-function improvement in the form of Grok 5, can be expected in just weeks. While Grok 5 was originally scheduled to arrive around the end of the year, xAI chose to swing even harder on this massive upgrade, which we can now expect early next year.
I haven’t seen anyone admit it, but the entire industry knows that the biggest competitive threat is xAI and its release of Grok 5. Its performance will shatter the current state-of-the-art benchmark scores.
It’s Google’s release of Gemini 3, combined with the nearly unspoken competitive threat from xAI, that prompted OpenAI CEO Sam Altman yesterday to declare a “code red” to OpenAI employees.
Altman acknowledged competitors’ progress and tasked the OpenAI team with a “surge” in the development and improvement of GPT-5, which has largely been seen as a disappointment across the industry.
He specified that the goal was to improve ChatGPT’s speed and stability, as well as its performance on the benchmarks all players are measured against. This will come at the cost of delaying projects like adding agentic AI capabilities to ChatGPT – a major mistake, from my perspective.
The reality is that Altman has already committed to $1.4 trillion of AI infrastructure spend, roughly 30 gigawatts of new data center capacity coming online.
We should remember that he’s spending $1.4 trillion of other people’s money…
He can’t afford to have his frontier AI model perceived as a laggard in the industry. Losing the confidence of his investors could leave OpenAI the self-inflicted victim of one of the ugliest tech implosions in history.
Google and xAI are now leading the industry in frontier AI models, significantly more advanced than those from OpenAI and Anthropic. Meta is trailing even farther behind.
But none of these five players is slowing down one bit. Every time one of them releases a new model with leading benchmark scores, it only motivates the others to do even better.
And with the computational power inherent in NVIDIA’s latest Blackwell-based GPU racks, which are coming online now, it’s about to get so much more interesting.
These U.S.-based companies, with access to NVIDIA’s bleeding-edge semiconductors, are about to significantly widen the gap over anything coming out of China, given the trade restrictions on access to the most advanced chips.
What happens next year will have your head spinning.
Regardless of OpenAI’s current “code red” emergency, what it has accomplished in the last three years is arguably the greatest product rollout in history.

Source: Reddit r/ChatGPT
Just imagine flying from product launch to ~800 million weekly active users in the span of 36 months. To most, this seems like an insurmountable advantage.
I disagree…
The company that first enables personalized agentic AI capabilities for every internet user on Earth will see an adoption ramp even faster than ChatGPT.
The entire competitive landscape will change seemingly overnight.
This battle is far from done… and I’m not betting on OpenAI.
Jeff
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.