
Perhaps it’s the distraction of the holidays…
Or a few too many drinks at the holiday party.
There has been widespread, nearly ubiquitous journalistic ineptitude regarding recent developments in artificial intelligence semiconductors.
The story, as it was presented, is that Meta (META) is in talks to team up with Alphabet (GOOGL).
The reporting says Meta intends to spend billions of dollars to use Google’s tensor processing units (TPUs) through Google’s cloud-based services, and ultimately to procure the TPUs for Meta’s own use.
One media outlet proclaimed that Google’s pending deal with Meta is designed to “compete directly with NVIDIA in the AI chip business”… and that this move would “cast Google as a serious rival to semiconductor giant NVIDIA.”
Another trotted out this beauty: “Has Google burst the NVIDIA bubble?”
Or how about, “Google may be looking to get in on NVIDIA’s act.”
All I could do was shake my head…
While “the markets” and Wall Street fell for it – hook, line, and sinker.
Have a look at what has transpired over the last few days with Alphabet’s and NVIDIA’s (NVDA) share prices:
1-Month Charts of NVIDIA (NVDA) and Alphabet (GOOGL)

GOOGL spiked about 10% higher in the last few days, while NVDA tumbled as much as 4.6%. (Note: NVDA’s share price is in BLUE and GOOGL’s share price is in WHITE.)
If we were to just look at the chart above, we’d think the media was right.
Good news for GOOGL, bad news for NVIDIA, right?
That chart, and the coverage that drove it, show how little the media knows about the basics of semiconductors, the applications they are designed for, and how the industry works.
We’re exactly three years into the AI infrastructure boom… and they still haven’t figured out the basics.
And it’s not hard to figure out. It’s written everywhere.
Take, for example, Ironwood, Google’s seventh-generation TPU.
In Google’s own words, “Ironwood: Google Cloud’s 7th-Generation TPU Engineered for Inference.”

Google’s Ironwood TPUs (GOLD squares) | Source: Google Cloud
The key word, of course, is inference.
Google’s TPUs are primarily designed and optimized for inference – the running of AI applications.
Does the media even understand this basic distinction – beyond the copy/pasted definition?
Because this is a critically important distinction: AI inference versus AI training. The training of massive foundational AI models runs entirely on GPUs, what I call the general-purpose workhorses of artificial intelligence.
Regular Bleeding Edge readers will know by now that NVIDIA owns about 90% of the GPU market for AI training, and Advanced Micro Devices (AMD) owns the remaining 10%.
Just two companies control 100% of the AI training market for frontier AI models. That’s it. They own it. Own shares in these two companies, and you own it all.
Google’s TPUs do not compete with NVIDIA’s or AMD’s GPUs for training frontier AI models.
It’s plain and simple, which is why all the reporting on this Google and Meta mega-deal has been so dead wrong. Even the implications of the deal are misunderstood.
Where Google’s TPUs do compete is in the inference market.
Its TPUs compete against:
The entire semiconductor industry – outside of NVIDIA and AMD – is focused on inference for two key reasons:
Even Tesla (TSLA) is part of this race, although it doesn’t presently make its inference chips available to third parties.
Tesla historically has used NVIDIA chips to train its frontier models, just as xAI has done to train Grok.
However, for inference and video-specific AI training, Tesla developed its own application-specific semiconductors, manufactured by either Taiwan Semiconductor Manufacturing (TSM) or Samsung Electronics.
For Tesla, designing its own custom semiconductors for inference is a competitive advantage.
The real purpose of Google and Meta teaming up for more TPUs is simple…
Both need more purchasing power with TSM.
Increased purchasing power isn’t just about better pricing. It’s a negotiating position to gain a larger allocation to TSM’s total manufacturing capacity.
Outside of the need for increased energy production to fuel AI factories, TSM is the single largest bottleneck in AI.
All the companies that I listed above pay TSM to manufacture their semiconductors. All of them.
And the key implication of Google and Meta partnering for more TPUs has nothing to do with being a competitive threat to NVIDIA.
It is telling us something entirely different.
Demand for inference semiconductors for artificial intelligence is skyrocketing.
And that means that the utilization of AI applications is experiencing exponential growth at a scale that is nearly impossible to understand.
Skyrocketing demand for inference is the proof that we’re not just chasing a bubble to achieve AGI.
Consumer, enterprise, and public sector adoption of AI is torrid, material, and tied to concrete, measurable revenues and free cash flow.
If you’d like an inside track on what is really happening in high tech, stick with us at The Bleeding Edge in 2026 and beyond, and ignore the media. Not only will it save you a lot of time, it will be both intellectually and financially profitable.
Happy Thanksgiving to all.
We have so much to be grateful for…
And we have so much to look forward to.
Jeff
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.