Jeff’s Note: Tomorrow at 2 p.m. ET, I’ll be sharing one of the most important broadcasts of my career alongside one of the greatest financial forecasters of our time – Porter Stansberry.
Together, we’re going to expose a national emergency that Trump has declared behind the scenes. This initiative is already pushing trillions of dollars into a specific set of stocks – triggering surges like 597%, 256%, and 772%.
This is a once-in-history reordering of the American economy… the largest mobilization of capital since World War II. In this broadcast, we’ll walk you through exactly what’s happening – and most important of all, how you could profit from it.
Porter and I will name two companies we believe could see massive growth as this hidden emergency ramps up. And just for attending, you’ll also receive an important new briefing that reveals five stocks that could be decimated as these events unfold.
Don’t drag your feet on this one. RSVP now to add your name to the guest list with one click.
Last week, Morgan Stanley quietly released a report that caught almost no one’s attention outside of the AI industry.
The takeaway shouldn’t be surprising to long-term Bleeding Edge readers: AI usage is exploding. Here’s a quote from the report:
Every hyperscaler has reported unanticipated strong token growth […] literally everyone we talk to in the space is telling us that they have been surprised by inference demand, and there is a scramble to add GPUs.
For months, skeptics have claimed that AI is a bubble. That ChatGPT is a gimmick. That adoption will slow.
But the exact opposite is happening.
Let’s break down the above quote. AI inference is when large language models (LLMs) “think” and process real-world workloads. To do that, LLMs must both process and produce tokens. In simple terms, a token is a tiny chunk of text, just a few characters. Every question and answer requires both the processing and the creation of tokens. And the more tokens used, the more computational power required.
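To make tokens concrete, here’s a minimal sketch using OpenAI’s open-source tiktoken library. The library choice and prompt are ours – any tokenizer would illustrate the same point:

```python
# A minimal token-counting sketch using OpenAI's open-source tiktoken
# library (pip install tiktoken). Exact counts vary by model and tokenizer.
import tiktoken

# cl100k_base is the encoding used by GPT-4-era OpenAI models
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize this quarterly report in three bullet points."
token_ids = enc.encode(prompt)

print(f"{len(token_ids)} tokens in the prompt")
print(enc.decode(token_ids))  # decoding the IDs returns the original text

# Every one of these tokens, plus every token the model generates in
# response, passes through GPUs. Heavier usage means more compute.
```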
That’s why token demand is so important. It’s not just a measure of adoption… It’s a measure of how intensely AI is being used.
Just this week, OpenAI’s annualized revenue hit $10 billion – an extraordinary number considering ChatGPT only launched in late November 2022.
And yet, the heaviest users of AI right now are businesses, not consumers. After all, they have a goal of earning a profit, and LLMs and agentic AI help them operate more efficiently.
As we can see in the following chart, over 40% of U.S. businesses are now paying for access to AI models, platforms, or tools. Does anyone still seriously believe that AI isn’t being widely used yet?
AI has moved from curiosity to core infrastructure in record time. It’s being deployed across marketing departments, legal teams, software engineering, customer support, and finance operations. From Fortune 500 companies to small businesses, AI is becoming embedded in daily business operations.
Many of us have read stories about how software developers are using AI. Microsoft reported that the number of GitHub Copilot users soared 4x over the past year to 15 million.
And Meta Platforms (META) is using AI to help advertisers target customers more effectively, so they pay more to place ads on its platforms. Those are well-known examples. But here are a few other ways companies are using AI.
In the 12 years before last year, Duolingo (DUOL) produced 100 courses teaching people foreign languages. But in the past 12 months, with the help of AI, Duolingo added 148 language courses to its catalog.
There’s no doubt that AI will transform how people learn. Everyone now has the most knowledgeable tutor ever at their fingertips. And we can even create our own customizable learning plans. I can picture our children using AI in school to self-direct their learning through a teacher-approved curriculum, much like a young Spock at the Vulcan school in the 2009 Star Trek movie.
But Duolingo CEO Luis von Ahn is taking AI a step further. In a recent memo to employees, he said, “Duolingo is going to be AI-first.” And he laid out a few constraints to push employees to use AI.
When the memo leaked, many people were upset. Some said they’d stop using the app because they don’t want to support a company that prioritizes AI over people. But that’s not what this is at all.
AI will enhance workers and make them more efficient and productive… capable of more. Will that reduce the number of people needed to do the jobs of today? Probably, and it should. Just like how tractors reduced the number of people needed to produce food.
But AI will also open many new jobs we haven’t seen yet. Just look at how social media led to the whole influencer economy… Luis von Ahn knows that he has two choices: adopt AI and thrive, or go bankrupt.
Shopify (SHOP) CEO Tobi Lutke wrote a similar memo with bullet points for AI integration. He said that everyone needs to take control of their learning and implement AI strategies to make themselves more efficient.
Cloud data storage company Box’s CEO Aaron Levie also told employees to start looking for ways to incorporate AI. Here at Brownstone Research, all of our teams are expected to use AI to drive efficiency.
Now, all these companies faced pushback from the outside. But these CEOs know that companies need to use AI. Otherwise, they will be left behind by the companies that do. Being a Luddite is rarely a good idea.
But it’s not just companies adopting AI. Even the U.S. government is getting on board. And I’m not talking about our defense or intelligence agencies. They’ve been using earlier forms of AI for decades.
One recent example is the Food and Drug Administration (FDA) incorporating a generative AI tool named Elsa to help speed up its scientific reviews. This will hopefully move drugs through clinical trials more quickly, meaning doctors will be able to prescribe life-saving drugs sooner. This will make life better for everyone.
This is just a smattering of the ways businesses and governments are adopting AI.
It’s not just the number of users that’s growing; it’s the intensity of usage.
When ChatGPT first launched, most tasks were simple. A quick summary. A brainstorming session. A rewritten paragraph. These jobs took about 15 seconds to run.
Today, OpenAI’s latest models are routinely running multistep, high-complexity tasks that take 30 minutes to over an hour to complete.
A year ago, we typed prompts. Today, we deploy autonomous agents that make decisions, compose emails, generate code, interpret PDFs, or scrape data from live websites. These agents can call other models, wait for external input, and return results minutes or hours later.
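As a rough illustration of that pattern, here’s a sketch of a minimal agent loop. This isn’t any particular vendor’s API – call_model() and run_tool() are hypothetical stand-ins:

```python
# A sketch of the agent pattern described above. call_model() and
# run_tool() are hypothetical stand-ins, not a real vendor API.
import json

def call_model(messages):
    # Stand-in for an LLM call. Here it "asks" for one tool, then
    # finishes, so the loop below runs end to end.
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final_answer", "content": "Summary: revenue grew."}
    return {"type": "tool_call", "tool": "fetch_page",
            "args": {"url": "https://example.com/filing"}}

def run_tool(name, args):
    # Stand-in for a real tool: a web scraper, PDF reader, code runner, etc.
    return {"tool": name, "fetched": args["url"]}

def run_agent(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(messages)  # every call consumes and produces tokens
        if action["type"] == "final_answer":
            return action["content"]
        # The model asked for a tool; run it and feed the result back in.
        result = run_tool(action["tool"], action["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step limit reached"

print(run_agent("Summarize the latest filing."))
```

In a real system, each pass through that loop is another round of token processing – which is exactly why agentic workloads multiply inference demand.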
We’re seeing model task durations stretch from seconds to hours… and those hours are filled with nonstop token churn, which translates directly into computational demand. Each of those tokens passes through GPUs, data center interconnects, and high-bandwidth memory stacks.
This is why hyperscalers are racing to expand capacity.
We’ve seen this firsthand. Many of our own Deep Research queries now take 30 to 45 minutes to complete. They’re synthesizing vast data sets, reasoning across multiple layers of context, and returning insight that would have taken a human analyst hours or even days.
And as the models get smarter, the tasks get longer… and the compute demand intensifies.
What we’re witnessing is the ignition of a self-reinforcing feedback loop. A loop that accelerates both technological progress and infrastructure demand.
We call it the AI flywheel:
Better models → more complex tasks → greater utility → more users → more compute → better models.
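Here’s a toy way to see why a loop like this compounds. The 10% coupling between stages is an invented illustrative assumption, not a forecast – the point is simply that each stage feeds the next, so growth accelerates:

```python
# Toy model of the AI flywheel. The 10% coupling per stage is an
# illustrative assumption, not a forecast; the point is the compounding.
quality = users = compute = 1.0

for year in range(1, 6):
    quality *= 1 + 0.10 * compute   # more compute enables better models
    users   *= 1 + 0.10 * quality   # better models attract more users
    compute *= 1 + 0.10 * users     # more users demand more compute
    print(f"Year {year}: compute demand x{compute:.2f}")
```

Each year’s multiplier is larger than the last. The loop doesn’t just grow – it accelerates.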
This mechanism is pulling us toward artificial general intelligence (AGI) – not because of wishful thinking but because the math supports it.
These models aren’t static. They build on themselves. They learn from each other.
And soon, they’ll mimic how we think, plan, reason… and innovate.
That’s the inflection point.
Because when we reach AGI, likely in the next 12 months, we’ll start collapsing decades of progress into just years.
Historically, each wave of technological transformation has taken about 15–20 years to materialize.
AGI will collapse that curve. From there, we project less than five years until artificial superintelligence (ASI) comes online. That’s when AI starts to work and innovate on its own, without the need for human prompting. This will change how we all live.
The only thing slowing us down? Compute and electricity.
AI progress is now bottlenecked by how many GPUs we can build and how much electricity we can feed into them.
Nearly half the cost of a modern AI data center goes to the chips – the GPUs and ASICs that train and run the models. And that share keeps growing.
That’s why semiconductor stocks have roared back to life.
The VanEck Semiconductor ETF (SMH), which holds a basket of semiconductor stocks, has surged 40% since bottoming in April. And it’s getting close to its January high.
This leadership is important to the market. Historically, semiconductors have led the market out of economic troughs, because they go into every electronic device we use – from toasters to computers to cars to air conditioners.
But the biggest winners won’t come from generic chipmakers. They’ll be the companies building the silicon brains for AGI.
SMH is one way to gain broad exposure to this megatrend. But for targeted positions, we continue to track the most promising names in our Near Future Report and Exponential Tech Investor advisories. (If you’re interested in joining us, you can learn how right here.)
The next 5–10 years will bring huge economic gains as AI increases the productivity of all workers. As we can see from the adoption stats shown earlier, this trend is accelerating.
And we’re still in the early innings.
– Nick Rokke
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.