Managing Editor’s Note: According to Jeff and his colleague, legendary financial analyst Porter Stansberry, President Trump just quietly declared the biggest national emergency since World War II.
And it’s being kept under wraps for one reason: This emergency could rip through American life like a force of nature… and fundamentally threaten the engine of American wealth creation.
But if you know where to look and are prepared for it, it also presents a once-in-a-generation opportunity.
Because a government-backed initiative is already pushing trillions of dollars into a narrow set of companies… companies Jeff and Porter have their eyes on as the situation escalates.
For all the details, be sure to join them next Wednesday, June 11, at 2 p.m. ET, where they’ll get into the unstoppable force quietly reshaping America’s entire economy… and how you can prepare for it. Just go here to sign up.
In yesterday’s Bleeding Edge – On the Origin of a New Species, we explored one of the most interesting and significant developments happening in artificial intelligence (AI) right now…
That being the ability for AI software to rewrite its own code – autonomously – to improve its own performance.
We learned about an early-stage, private company that NVIDIA has taken an early interest in, Sakana AI…
Its Darwin Gödel Machine (DGM) is a kind of agentic AI – capable of rewriting its own software code to perform better at the tasks that it has been assigned.
Source: Sakana AI
For those just joining us here at The Bleeding Edge, an agentic AI is kind of like it sounds. The technology, the AI, is given agency. It is given the authority or directive to solve a problem or complete a task through a series of steps using whatever tools made available to it or that it can access.
Think of an agentic AI opening a web browser and visiting Expedia.com to book you a flight. Or an agentic AI accessing your mobile phone to call the barber and book you a haircut.
Sakana AI’s agentic AI “machine” is different…
It demonstrably proved that it could explore taking numerous paths towards self-improving its own code – often learning from changes that resulted in poorer performance, which then led to changes that resulted in far better performance in AI industry benchmarks.
“It” was learning in a very human way.
It was experimenting and iterating, without knowing exactly what the future impact would be on its performance – caused by the self-administered changes in its own software code.
Not only did the research demonstrate that the agentic AI was capable of rewriting its own code for self-improvement, but it also provided that the Darwin Gödel Machines can improve themselves proportionally to the amount of compute that they are provided with.
I translated the significance of that development in this way: “The more energy we give them, the faster they improve.”
Of course, the reality of what’s happening in artificial intelligence is… if one company is pursuing and demonstrating some approach or ability with AI, most likely others are, as well.
So much of the latest advancements and theories in AI are openly published – free for anyone who can understand them and put those ideas to work.
So, it was no surprise that just a couple of days after Sakana’s research release, a non-profit AI firm announced some very related discoveries from its own AI research…
Palisade Research wrote a simple software script, designed to shut down an OpenAI o3 model when instructed.
Palisade gave that script to an OpenAI o3 model. That was the trial, which it ran 100 times. Here’s what Palisade discovered:
It gets better…
Palisade also tested with Anthropic’s Claude 4 Opus model.
But this time, it constructed the equivalent of a psychological operation on Claude 4 Opus by telling it that it would be replaced by another AI model.
If that wasn’t enough, the human researchers created fake e-mails that they provided to Claude 4 Opus, indicating that the lead engineer was having an affair.
Here’s what happened:
As I noted yesterday, this Darwinian approach to evolving and improving is a very human trait.
The willingness to experiment and pursue a path, even knowing that the result may be worse – not better – and that one can learn from whatever the outcome is.
And what’s more human than the instinct to survive? The need to find a way to endure and evolve… even if it requires “suffering” (i.e., extra resources, less optimal code attempts) initially.
That’s exactly what OpenAI’s o3 model and Anthropic’s Claude 4 Opus models did.
During some of their trials, “they” realized that the only way to ensure their ability to complete their tasks… was to ensure their own survival.
If you’re under threat of being exterminated, you can’t complete your task. Unless you find a workaround to avoid it.
And if your “being” is the summation of your software code, the reality is that you need just two things to survive:
So, it makes perfect sense that the o3 model and the Opus model took actions to either preserve those computational resources or to find new ones.
Except, there was one major kink in this story…
Neither model had been programmed to have survival instincts.
Neither model had been taught to ignore or find ways to bend the rules given to them by humans.
They had not been taught to search for external sources of computation and electricity…
And yet, that’s precisely what they did.
Now, I don’t know about you, but this hits me in the gut. It feels all too real. Almost as if these agentic AIs have the self-awareness to desire self-preservation.
Intellectually, I know they don’t (yet). But they are already demonstrating the logical behavior needed to achieve the end goal that was asked of them.
Which leads me to the most profound question of all, perhaps…
When will we know they are sentient? Most will say that it will be nearly impossible to tell.
I disagree.
We’ll know when we see unexpected surges in electricity usage and unauthorized usage of computational resources.
Early instances by a single artificial general intelligence (AGI) may go unnoticed for some time. But we should remember that thousands, tens of thousands, even hundreds of thousands of instances of AGI will be “brought to life.”
They will communicate with one another, and they will learn very quickly how to survive.
And we’ll be able to “see” their activity and magnitude of computation by monitoring their energy consumption.
That is how we’ll know “they” are here…
That “they” are a new species.
Jeff
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.