
Thinking Machines lab has not been mentioned much. Why? Seems to be a leader in AI.
– Brian C.
Hi, Brian,
That’s an easy one. The company is very new, having only been founded this year, and there is almost nothing on its website. Awareness is mostly limited to people in the industry and the venture capital community, but I’m glad you brought it up.
What gave Thinking Machines the initial buzz earlier this year is that OpenAI’s Chief Technology Officer, Mira Murati, left OpenAI to found the company. She took quite a few OpenAI researchers with her to the new venture. That was enough to raise $2 billion in its first venture round at a $10 billion post-money valuation. Think about that. They established a new company, raised $2 billion, became worth $10 billion (on paper), and only had an idea.
That’s a reference point for how valuable top AI research teams are right now in this frenetic race toward general intelligence.
Earlier this month, Thinking Machines launched its first product, which is useful, but isn’t anything spectacular in the context of what is happening in AI right now. It’s called Tinker, and it’s an application programming interface (API) for fine-tuning language models. Basically, this is training infrastructure as a service. Thinking Machines takes care of all the backend compute for training, and all you have to do is plug into the API.
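To make "training infrastructure as a service" concrete, here is a minimal mock sketch of what that kind of workflow looks like from the customer's side. All of the names below (FineTuningService, submit_job, and so on) are illustrative inventions for this example, not Tinker's actual API:

```python
# Hypothetical sketch of a fine-tuning-as-a-service workflow.
# These class and method names are illustrative only -- NOT Tinker's real API.
import uuid

class FineTuningService:
    """Mock of a hosted fine-tuning backend: you send data, it handles compute."""
    def __init__(self):
        self.jobs = {}

    def submit_job(self, base_model: str, training_pairs: list) -> str:
        job_id = str(uuid.uuid4())
        # In a real service, GPU clusters are provisioned here;
        # this mock just records the job and marks it done.
        self.jobs[job_id] = {
            "model": base_model,
            "examples": len(training_pairs),
            "status": "completed",
        }
        return job_id

    def get_status(self, job_id: str) -> dict:
        return self.jobs[job_id]

# Client side: all you do is plug your data into the API.
service = FineTuningService()
job = service.submit_job(
    "small-llm-v1",
    [("What is 2+2?", "4"), ("Capital of France?", "Paris")],
)
print(service.get_status(job)["status"])
```

The point of the abstraction is that the customer never touches the training cluster; they hand over examples and get back a tuned model endpoint.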
I don’t mean to sound critical, but just to be clear, this is not a product worth $10 billion. This is just a tool to demonstrate that they can get some kind of product out the door and show the investors that they are making progress while they are working on their general intelligence models.
But no, to answer your question, they are not a leader in AI yet. They have some top researchers at the company, they have $2 billion, and they still need to earn their stripes.
I apologize if this is a dumb question, since I must confess ignorance about data center construction in general and GPU liquid cooling technology in particular.
My question is whether any of this waste heat transmitted into the dielectric liquid can be or is recovered to generate a secondary source of electric power for the data center.
I used to work in the citrus industry, where we used waste-heat evaporators in our juice concentration plants to produce citrus molasses from peel-expressed liquid, with the residual heat from that operation then recovered for drum-drying the spent peel. The dried peel was then mixed with the molasses to produce cattle feed.
“Apples and oranges,” I know (pardon the pun), but I assume that in a similar manner, data centers would be engineered to maximize process efficiency.
Thank you for Bleeding Edge and for the always cutting-edge information and analysis you provide.
– Wynn R.
Hello, Wynn,
This is a really interesting topic. We’ve done quite a bit of research in this area around data center infrastructure and cooling systems at Brownstone Research.
It’s a logical thought. The computational systems put off a tremendous amount of heat. That heat is captured by the non-conductive dielectric fluid and carried away. To your point, we should be able to recapture that heat for some useful purpose.
And the answer is yes, it can be done, but this is more of an economic problem.
Here’s the issue. After shuttling off the heat from the semiconductors, the dielectric fluid is usually around 50–70 degrees Celsius. That might sound hot, but it’s really not. For comparison, industrial steam turbines will operate at temperatures of 300–500 degrees Celsius.
So what the dielectric fluid carries is considered low-grade heat, and converting it back into electricity is very inefficient.
The most common method for dealing with low-grade heat is an organic Rankine cycle (ORC), which uses an organic working fluid with a much lower boiling point than water. Using this method, you’d be able to recover electricity at an efficiency of only a few percent, which at scale could still mean a lot in terms of electricity utilization. But the problem is that the upfront costs required to build out such a system are very high.
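To put rough numbers on why low-grade heat is so unattractive, the Carnot limit (the theoretical ceiling on converting heat into work) depends only on the hot and cold temperatures. Assuming a 25 °C ambient heat sink, which is my illustrative figure:

```python
# Carnot limit: the theoretical maximum fraction of heat convertible to work.
# eta = 1 - T_cold / T_hot, with temperatures in kelvin.

def carnot_efficiency(t_hot_c: float, t_cold_c: float = 25.0) -> float:
    """Maximum fraction of heat convertible to work (temperatures in Celsius)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# Dielectric coolant at ~70 C vs. an industrial steam turbine at ~500 C:
print(f"70 C coolant: {carnot_efficiency(70):.1%} ceiling")   # ~13%
print(f"500 C steam:  {carnot_efficiency(500):.1%} ceiling")  # ~61%
```

Real ORC systems capture well under these theoretical ceilings, which is why a 70 °C source yields only a few percent of usable electricity in practice.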
The return on investment is not attractive enough for data centers to implement this kind of technology.
So it’s a good idea that is technically feasible, but the economics would need to improve a lot before we would see widespread adoption.
Good afternoon,
I have some confusion when it comes to stablecoins. I understand that they are “stable” by being pegged to external assets, most commonly the U.S. dollar.
However, with the inflation and devaluation of the dollar, how does this play out as a great investment?
I have invested some in Jeff’s favorite, USDC, but I am still unsure how this ends up being beneficial. Thanks for the hard work and investment guidance.
– Nicole
Hi, Nicole,
U.S. dollar stablecoins come in various constructs that directly impact how tightly a stablecoin maintains its peg to its underlying asset (i.e., typically the U.S. dollar).
Circle (CRCL), the company that manages the USDC stablecoin, backs its stablecoin with U.S. Treasuries, overnight U.S. Treasury repurchase agreements, and cash (in USD). Circle, to its credit, has its reserves audited monthly, which you can see here, to provide the transparency required for users of USDC to know that the stablecoin is, in fact, stable and every coin issued is backed by a USD asset.
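As a simplified sketch (a toy model of my own, not Circle's actual systems), full-reserve backing just means the issuer's reserve assets always equal or exceed the coins outstanding, with minting and redemption both happening 1:1 against dollars:

```python
# Toy model of a fully reserved stablecoin issuer.
# Illustrative only -- not Circle's actual implementation.

class StablecoinIssuer:
    def __init__(self):
        self.reserves_usd = 0.0       # Treasuries, repos, and cash, in USD
        self.coins_outstanding = 0.0

    def mint(self, usd_deposited: float) -> None:
        # Coins are only created against an equal USD deposit.
        self.reserves_usd += usd_deposited
        self.coins_outstanding += usd_deposited

    def redeem(self, coins: float) -> float:
        # Each coin redeems for exactly one dollar of reserves.
        if coins > self.coins_outstanding:
            raise ValueError("cannot redeem more coins than exist")
        self.coins_outstanding -= coins
        self.reserves_usd -= coins
        return coins

    def reserve_ratio(self) -> float:
        # A periodic audit verifies this stays at (or above) 1.0.
        if self.coins_outstanding == 0:
            return 1.0
        return self.reserves_usd / self.coins_outstanding

issuer = StablecoinIssuer()
issuer.mint(1_000_000)         # $1M in, 1M coins out
issuer.redeem(250_000)         # coins redeemed 1:1 for dollars
print(issuer.reserve_ratio())  # stays at 1.0 -- every coin remains backed
```

This is what the monthly audit is verifying: that the reserve ratio never dips below 1.0, so the peg holds.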
To your point, a well-managed stablecoin will maintain the same value as the U.S. dollar. Therefore, it will be impacted in the same way by inflation and devaluation. To that end, holding USDC, or any other fiat currency-backed stablecoin, is not really an investment. It is simply a way to hold a stable digital asset and potentially earn better yields on that asset compared to what we typically receive from a traditional bank, which pays us almost nothing in interest.
Holding USDC is valuable for yield. It is also an asset that makes it easy to invest or trade in other cryptocurrencies (and back).
From an investment perspective, as opposed to just yield, my team and I have researched the stablecoin industry extensively, identifying blockchain projects that will benefit directly from the explosion in stablecoin growth. That’s where the best investment opportunities lie with regard to stablecoin technology. We cover the most important projects linked to stablecoins in Permissionless Investor.
The success of stablecoins has happened as we predicted. Since the passage of the GENIUS Act, adoption has been nothing short of spectacular: stablecoin use for payments has already jumped 70% in just a few months.
The initial increase in adoption has primarily been driven by business-to-business (B2B) payments. There has also been some consistent but smaller growth in stablecoin-based card payments.

Source: Artemis Analytics
Stablecoins are particularly attractive to businesses, as large payments settle instantly and transaction costs are much lower than traditional banking systems. And stablecoin reserves held at businesses earn far greater yields in a treasury than U.S. dollars would in a bank.
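A quick back-of-the-envelope comparison shows why that matters for a corporate treasury. The rates below are my illustrative assumptions for the example, not quoted market rates:

```python
# Illustrative comparison of annual yield on idle corporate treasury cash.
# Both rates are assumptions for this example, not current quotes.

treasury_balance = 10_000_000  # $10M of working capital

bank_rate = 0.005    # a typical near-zero bank deposit rate
tbill_rate = 0.045   # an approximate short-term Treasury yield

bank_income = treasury_balance * bank_rate
stablecoin_income = treasury_balance * tbill_rate  # reserves held in T-bills

print(f"Bank deposit:        ${bank_income:,.0f}/yr")
print(f"T-bill-backed yield: ${stablecoin_income:,.0f}/yr")
print(f"Difference:          ${stablecoin_income - bank_income:,.0f}/yr")
```

Under these assumptions, the same $10 million balance earns $400,000 more per year when the backing assets are short-term Treasuries rather than a bank deposit.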
Anywhere we see this kind of exponential growth in payments/transactions, there will be great investment opportunities, specifically in the companies and blockchain projects that are enabling stablecoin financial transactions.
I hope that helps.
Jim Rickards does not think that AGI is achievable. His argument is that AI systems are good at inductive and deductive processes but are unable to perform abductive reasoning.
Further, AI systems are able to manipulate data but are not able to generate new information.
What is your take on this argument?
– Nicholas S.
Hi, Nicholas,
This is not an uncommon position of the pessimists and the decels. So that we’re all on the same page, let’s use some examples to better understand the argument.
It is very easy for AI to master deductive reasoning, which is largely rule following, and inductive reasoning, which is more about pattern recognition and which neural networks excel at.
But the pessimists and decels are absolutely wrong about the point on abductive reasoning. They simply don’t understand the trajectory of the development of frontier AI models and the techniques used to improve abductive reasoning (or some proxy for it).
Largely, their position on this point is simply outdated. To be fair, it was true in 2022 and 2023 in the earlier days of development. But it is no longer the case.
“Chain of thought” methodology, probabilistic methods, and reinforcement learning have already made huge strides towards inferring the correct answers or outputs. This is literally getting better by the week.
The other major shift just in the last few months has been a tacit realization that vision is critical to achieving AGI. When I say vision, I really mean either real-time video of the physical world or large data sets of video from the real world for learning. This gives an AI model, whether it resides on a server or is manifested in a humanoid robot, the ability to “learn” from trial and error and increase its performance with regard to general intelligence.
This is precisely why Tesla’s latest version of full self-driving, 14.1.4, is so stunning. It benefited from billions of miles of real-world data collected by millions of Teslas. This has given the AI the ability to infer the best possible and safest course of action in nearly every real-world situation.
Grok 5, which is due out by the end of the year, will be so good that some will already be ringing the bell for artificial general intelligence. I won’t be one of them, though. Grok 5 will be incredible, but it will take one more generation to get there. I stand by my prediction that xAI will achieve AGI around March/April next year with Grok 6 – or whatever it is called.
The good news, Nicholas, is that we won’t have to wait long to see who is right. And if you haven’t done so already, I strongly encourage you to do some work with xAI’s Grok. See for yourself. Play around. Ask it some tough questions. See how it “thinks” and what it is capable of doing.
I believe you’ll be surprised.
Have a great weekend,
Jeff
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.