This certainly wasn’t the week that I expected.
I boarded a plane for a long flight yesterday. By the time I landed, it felt like everything had turned to complete chaos.
The Trump/Musk fallout is so surreal. So much so, I can’t help but think that it all might be scripted for some bigger endgame…
But even if that isn’t the case, it is certainly explainable.
The practicalities of effecting change in Washington, D.C., are at odds with the speed at which Musk (and many of us) would like to see positive change happen.
I’m not happy with the Big Beautiful Bill. I want to see deficits get smaller, not bigger. But I also acknowledge that so much debt was created in the last several years that this isn’t a problem that can be solved quickly or easily.
And regardless of how things shake out between Musk and Trump, there are a few reasons to be optimistic:
These three reasons are the answer to managing the debt problem that was created.
I’ve long maintained that AI will be the productivity boom that will get us out of the mess. And while the current debt levels are almost inconceivable, it’s through above-average economic growth that we can reduce the U.S. debt-to-GDP ratio and have the largest positive impact, hopefully getting back below 100%.
Musk and Trump don’t need to be best buddies for this to happen. But they do need to collaborate to make this happen. And that’s what I believe happens next.
I believe cooler heads will prevail in the coming days and weeks, for one simple truth: There is simply too much at stake.
And before we get into today’s AMA, I have something urgent to share…
On Wednesday, June 11, at 2 p.m. ET, I’ll be holding an urgent broadcast with Porter Stansberry to expose a crisis gripping America – what I believe is the biggest national emergency the U.S. has seen in decades.
It’s not about the economy, inflation, or even geopolitical tension… And the mainstream media is either oblivious to it or keeping it completely under wraps.
But here’s the thing. Behind all the noise in Washington… behind the tariffs, the tech deals, the energy upheaval, and all of the so-called erraticism of the Trump administration… there’s a foundation being built for America to tackle this crisis head-on.
This is not a normal cycle. This is something much bigger. And, frankly, the stakes are significantly higher. Because if America is caught unprepared, it could rip through American life like a force of nature.
At the same time, in bracing for impact, there’s a covert, national mobilization happening right now that is funneling trillions of dollars into a tiny set of companies tied to managing this national emergency. And with any major financial shift, there is opportunity for those who are prepared.
I know I’m being vague. Due to the sensitive nature of the situation and the information Porter’s team and I have uncovered, I don’t want to say too much, too soon.
But here’s what you need to understand… Entire industries are being reshaped due to this crisis. Trillions of dollars are on the move. This national emergency comes with a destructive economic force that could impoverish millions of Americans… while creating extraordinary wealth for a handful of others.
Porter and I don’t want you caught on the wrong side of things. You can go here to add your name to the list for our emergency broadcast next Wednesday, June 11, at 2 p.m. ET.
Now, on to the AMA…
Hello, Jeff,
Thank you so very much for all your research and hard work! I am a little surprised by the phrase, “I can’t help but wonder what xAI is thinking right now.”
I was under the impression that Elon Musk started Neuralink to end the need for all “devices” in general… Would it make sense for xAI to keep developing AGI and integrate with Neuralink when ready?
Also, what would be the role of Agent AIs once we reach AGI? Am I correct to assume that AGI will encompass all that AI Agents can do and more?
Thank you again for your hard work!
– Anton L.
Hi Anton,
I’m glad that you wrote in and raised this point. As we know with all of Musk’s endeavors, each one of his companies or projects has an extraordinary mission, one that usually looks decades into the future.
At the moment, Neuralink is seen as a neuroscience technology company developing a brain-computer interface for those who have had a severe spinal cord injury or have lost the ability to control their limbs.
For anyone who would like to see the extraordinary work that Neuralink is doing, I recommend checking out this short video that shares the journey of Brad Smith, Neuralink’s third patient to receive the implant, who has ALS and is non-verbal. Neuralink not only enabled Smith to control computers, it also gave him back his voice to communicate with his family. It’s incredible and heartwarming.
But what’s the endgame for Neuralink? What’s the grand vision?
Musk has been developing Neuralink’s brain-computer interface (BCI) technology in an effort to eventually enable a symbiosis with artificial intelligence. I know that may sound uncomfortable to many of us, but the technical reality is that our current interfaces to computing systems are highly inefficient.
Using a BCI is the solution to increasing our bandwidth and capacity to interface with future computing systems. It won’t be for everyone, but it will be for many. In a world in which artificial general intelligence (AGI) is ubiquitous, being able to interface with an AGI in real time is a way for us humans to maintain our relevance (in terms of productivity and invention).
It won’t be necessary to do so, but it will be a competitive advantage to do so. And for some, it will be highly desirable depending on the kind of work.
So yes, to your point, it makes perfect sense for Neuralink to partner with xAI when xAI releases its AGI.
And yes, agentic AIs already exist and are being widely used. And, of course, an AGI will have agentic capabilities, as well as the ability to improve itself, conduct self-directed research, and be an expert in basically any field studied by the human race. AGIs will also have the ability to discover new scientific breakthroughs and create new inventions. And that will certainly be true with an artificial superintelligence (ASI).
We’re in for an incredible few years…
Jeff,
I have been an unlimited subscriber now for a few years. I very much appreciate all you do and your excellent guidance on investments.
Some years ago, there was concern by Elon Musk and some other tech leaders that AI might eventually pose a risk to society when fully developed. I haven’t seen much concern in the press recently, and AI is racing toward AGI and beyond. Is there still concern about controlling AI in the future? Can AI be fully developed and implemented where humans are still in control of it?
Thanks.
– Richard R.
Hi Richard,
I wish I could tell you, “No, there is nothing to worry about.” But it’s just not that simple.
I feel pretty confident that the likelihood of a positive outcome with artificial general intelligence (AGI) is very high. I believe that we can keep AGIs aligned with human goals (i.e., we can avoid a situation where an AGI runs off and does something that we don’t want it to do).
The larger danger in my mind with respect to AGI is how it might be used by bad actors. Said another way, it won’t be the technology itself going rogue; it will be the people using it.
Just look at how China has used a simple social media platform like TikTok to push political narratives, sow division, and try to influence U.S. elections. Imagine what a nation-state with nefarious objectives could do if it were empowered by AGI.
This is why the race to develop AGI is so critically important. Because the only way to proactively defend against a bad actor using AGI is to have a more capable AGI designed to protect against those kinds of attacks.
But back to your point about whether or not the AI can be controlled – this point is more relevant to an artificial superintelligence (ASI), which is the next step to follow after AGI. I have predicted that we’ll see an ASI before the end of 2030, so it’s really not far away.
ASI will, by far, exceed the capabilities of AGI in that its intelligence will significantly surpass that of the most intelligent human expert in any field of endeavor. It will have the ability to discover and develop science and technology even beyond our current level of understanding. And yes, it may very well be sentient, a topic we explored in yesterday’s Bleeding Edge.
The risk will be greater with an ASI, especially one that becomes sentient. If its own goals and objectives conflict with those of the human race, there could definitely be a problem. I still believe that the most likely outcome is one of alignment, a mutually agreeable symbiosis between humans and ASIs, where we benefit from each other. After all, an ASI needs the computational resources and enormous amounts of electricity that humans can supply in order to survive.
In the end, it’s the human element, the bad actor, that remains the greatest risk to our global society.
Jeff, your work is amazing, plain and simple. I had a bit of bad luck during the early days of Exponential Tech Investor, and I ruined my entire portfolio, so I’m broke in the investment realm.
BUT. Every time I read your writing on future technology, I’m always amazed at the wealth of knowledge you have. And give. I just wish more people actually listened to you and invested (my family and friends). And good-hearted people. I wish there were a way to get the “good” more power to change things. I don’t see a way to do that other than money, unfortunately. Politics maybe… But yeah. Still comes down to money (big picture).
So I wonder how to get these types of people more power/money to change things in this world. Or is it just a fairytale to think this? Sometimes feel it’s just nonsense (again, thinking big, big picture). What you do by giving your knowledge is about the closest thing for it to happen. Starting with a small group and seeing if it compounds.
I’m sure you think about this with your family and future.
– Brad W.
Hi Brad,
I really appreciate you writing in.
I started investing when I was 16, and I can tell you that I made some terrible mistakes and learned some hard lessons during my first decade as an investor. I had some incredible winners, as well, but ironically, my losses turned out to be the most valuable to me, because I made them at a time when the stakes were much smaller. It would have been far worse to make them later in life.
And to your point, most of those lessons helped me realize that money and power were the drivers. They helped me realize the conflicts of interest on Wall Street and that fast money can “rip the face off” of retail investors. Many of these practices still go on today.
So those lessons are important because I always design my investment research or trading strategies to stack the deck in our favor. I want to do my best to invest and trade in a way that makes it hard for self-directed investors to be taken advantage of.
Institutional capital and hedge funds can artificially push a company’s share price up or down materially, quarter to quarter or even day to day. This realization alone is critically important. They are measured and compensated by their quarterly performance, but that’s not true for self-directed investors. One advantage that we have is time.
If our research shows that a company will most likely have a great outcome over the next several years, we’re not so concerned about short-term volatility. As long as the company continues to execute and build and its financial model is strong, in time that will be reflected in the share price.
And we should never go “all in” on one or two stocks. It simply creates too much risk. That’s why a balanced portfolio is so important for self-directed investors. We won’t miss out on the really big winners, and if one or two stocks run into unforeseen troubles, it won’t impact our overall portfolio that much.
I share your optimism. I do think that there is a way for us to compound our thinking and improve our knowledge of what’s possible – and what’s coming. That’s the great thing about exponential growth and the increased pace of technological advancement. It democratizes everything. It empowers us with the same kinds of tools, services, and capabilities that have historically been enjoyed only by the elites.
We, as self-directed investors and sovereign individuals with critical thinking, can “fight back” by improving our knowledge base of what’s possible. And having a clear vision of what’s happening right now – as well as what’s coming in the future – is one of the ways that we can best prepare for the future and effect a better outcome for ourselves and those we care about.
We are about to enter an incredible period of opportunity. The confluence of low interest rates, reindustrialization in the U.S., fairer trade, autonomous technology, general-purpose robotics, and artificial general intelligence (AGI) will create an incredible period of time for self-directed investors.
The opportunity to create wealth from even small nest eggs will probably be the best chance in my lifetime. A few of the dynamics that I listed above still have to fall into place, but I know they will, and you can be sure I’ll be sharing these developments with you in The Bleeding Edge.
And you’re exactly right. This community that we’re rebuilding at Brownstone Research can be a force for good. That is my hope and my intention: to empower self-directed investors to position their portfolios to grow their wealth and ultimately achieve financial independence.
It won’t happen overnight, but if we stick to it, we’ll get there in the end.
We have so much to look forward to,
Jeff
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.