Jack Kilby suffered under the “tyranny of numbers,” as he called it.

In 1958, the 35-year-old engineer was a new hire at the electronics company Texas Instruments.

He was so new, he didn’t get vacation time.

So while the rest of the office was on holiday, he was left working.

Jack’s job was to connect circuits for the company’s mainframe computer.

Back then, computers filled an office floor. And they ran on a maze of circuits connected by wires and vacuum tubes.

Replacing a bad tube in a computer meant checking one of 19,000 possibilities.

Source: U.S. Army

Broken circuits were common. Jack’s job was to find them and fix them. It was tedious work. And Jack set out to put an end to it.

In a near-empty office, he created the world’s first integrated circuit (IC).

Made out of germanium – a semiconducting element – and gold wire, this primitive IC was a forerunner of the modern semiconductor chip.

Jack Kilby’s first integrated circuit

Source: National Museum of American History

With just one transistor and a couple of wires, Jack’s circuit doesn’t look very impressive. But it earned him the 2000 Nobel Prize in Physics.

Instead of engineers toiling over thousands of separate connections, the circuits could finally be integrated onto a single compact chip. And they no longer required the vacuum tubes that were so prone to failure.

Jack’s invention gave birth to the modern era of computing and the breakthroughs in artificial intelligence (AI) we are seeing today. Understanding how the tech has evolved will help us profit from it in the months ahead. 

80 Billion Transistors

Today’s bleeding-edge chips aren’t much bigger than Jack Kilby’s prototype. But they’re way more powerful.

Take Nvidia’s H100.

It has 80 billion transistors spread across thousands of cores. Each transistor is only a bit wider than a strand of DNA. And the entire card measures roughly 10 inches by 4 inches.

Nvidia’s H100 is the world’s most sought-after chip for AI systems. Without the power it offers, ChatGPT, Google’s Bard, and Elon Musk’s new AI chatbot Grok couldn’t run at scale.

That’s why we’ve seen Nvidia rocket 218% higher this year.

But the H100 isn’t the only AI-ready chip on the market.

Nvidia’s competitor Advanced Micro Devices (AMD) has developed the MI300 chip. It’s about twice as powerful as the H100. And it’s expected to be cheaper than the H100 when it hits the market this year.

That’s why buyers are already lining up.

Lining Up

On October 31, AMD announced earnings for the July-to-September quarter.

Shares rallied about 10% following the news.

Here’s what got investors excited…

CEO Lisa Su finally announced that “multiple large hyperscale customers are committed to deploying Instinct MI300 accelerators.”

“Hyperscaler” is industry jargon for any company that operates large data centers. Amazon and Microsoft are hyperscalers because of their AWS and Azure cloud services. Most of the big tech companies are hyperscalers.

Until this announcement, Su had been reluctant to speak about large tech companies committing to the MI300.

Many Wall Street analysts took this to mean that AMD was struggling to win customers over from Nvidia.

I didn’t believe that for a moment. Here’s what I said back in June…

Nvidia may be the king of AI chips, but there’s still plenty of money to be made if AMD can solidify its position in second place. 

Now, it looks as though Wall Street is finally waking up to the fact that AMD’s MI300 will see mass adoption in AI data centers.

And that won’t come as a surprise to regular readers of The Bleeding Edge.

In the July 4 dispatch, I told you it was one of my top three must-own AI stocks.

It’s up 54% since then.

Ultimately, AMD’s chip will speed up AI development and adoption. And investors will continue to profit as Wall Street recognizes AMD’s edge over Nvidia.

Regards,

Colin Tedards
Editor, The Bleeding Edge