Superhuman Robot?
This is the biotech industry's week: the annual J.P. Morgan Healthcare Conference is underway in San Francisco. It always sets the tone for the year in the industry and is a hub for deal-making and announcements… and it's almost impossible to get an invite if you aren't a high-net-worth client, a VC/PE, or in the industry.
I’ve had a heavy travel week, but my senior biotech analyst was there conducting some boots-on-the-ground research. There were three key high-level themes of the conference with respect to the biotech industry:
It’s no surprise that biotech executives are using the word “cautiously.” After a four-year, pandemic-policy-induced biotech bear market, we can’t blame them for being a bit cautious.
High interest rates and the aftermath of the pandemic policies were the headwinds that they spoke of. The tailwinds are ones that we know well, which I covered this week in The Bleeding Edge – Patent Cliff.
Naturally, the application of AI to the biotech sector is everywhere. Neglecting the technology has become a competitive disadvantage. And the most ironic part is that many of the biggest breakthroughs in life sciences have come from advertising companies like Google and Meta, not the industry itself. For more background, you can catch up in The Bleeding Edge – Proton Concentration.
Regardless of where the discoveries came from, the advancements are available to all to leverage in the drug discovery and development process.
And the biotech IPO pipeline is the best that I’ve seen in five years. The industry didn’t stop working during the pandemic. Those with the capital resources kept building and have become the most likely companies to access the public markets in 2026.
I can see what’s coming this year, and I’m leaning in…
Only optimistic,
Jeff
Hi and thanks, Jeff. What can you tell us about Thin Film Lithium Niobate – a man-made crystal material essential in quantum computing… and about getting into the companies making it?
I sure don’t want to have to sign up to someone else’s research service to learn about this, since I bet you know about it too. Sounds like it could be an incredible investment opportunity. Waiting to hear!! Blessings.
– Brooke M.
Hi Brooke,
I’m glad you wrote in to ask. That’s exactly what I’m here for. And as a reminder, if there is ever a topic, a company, or a technology that you are curious about, we have a great search feature on the Brownstone Research website. Just go to the search bar in the upper right corner (shown below), type in what you’re looking for, and there you are.

I did a brief dive into thin-film lithium niobate (TFLN) technology back in October last year in one of the AMA issues – 1984 Was Supposed to Be Fiction – for our special Quantum Week event here at The Bleeding Edge.
For those who are unfamiliar with TFLN, it's an incredible material with notably low propagation losses, high efficiency, and a wide optical bandwidth, which make it technologically superior for photonics-based quantum computers versus more traditional – and cheaper to manufacture – materials like silicon.
I also named a couple of companies I have been watching in the space…
There are two companies that I really like in this space as it pertains to quantum computing. Both are still private.
PsiQuantum has developed a silicon photonics platform that takes advantage of existing semiconductor manufacturing technology. It uses GlobalFoundries to manufacture the silicon photonics chips for its quantum computing systems.
The other company is Toronto, Canada-based Xanadu, which uses TFLN semiconductors for its photonics-based quantum computing system. Xanadu partners with another private company, HyperLight, for the TFLN technology.
I encourage you to give that AMA a read if you’re interested specifically in TFLN… or if you’d like to learn more about quantum computing technology as a whole… the impact it will have on society… and the future of the industry… we have several great Bleeding Edge issues focused on quantum:
This is a sector my team and I are particularly excited about. I have been researching it for many years. And we are tracking every major company, public and private, in this space. For anyone interested in learning more about some of the companies we have our eye on, you can go here to hear more about a few small companies we believe are particularly well-positioned to benefit from the quantum computing boom.
Hope this helps, Brooke.
Hi Jeff,
Do you have an opinion on using the Forge Global site for accessing investments in private companies, such as xAI or Cerebras?
Thanks and Happy Prosperous New Year!
– Gary D.
Hi Gary,
This is an area reserved for sophisticated investors who understand “the game.” There are a lot of pitfalls in private investing to be aware of. I’ll provide some general comments that I hope will be helpful.
Forge Global actually started as Equidate back around 2014. I knew the founders and had several meetings with them over the years as they worked to expand their secondary market offerings for private companies. Equidate rebranded to Forge Global in 2019, and then it acquired the other major player in secondary markets – SharesPost.
Forge Global effectively creates a marketplace that allows holders of private shares to sell to interested buyers. This is for accredited investors only, and transactions tend to be six figures.
Forge Global charges a transaction fee of 2–4% of the total amount invested. It facilitates the legal exchange of private shares between the buyer and the seller. But the big problem to watch out for is the price/valuation of the sellers' shares.
Most sellers ask for a price well above the last known valuation of the private company (i.e., you are paying more than the company is worth).
The ask price can be way off from what the company is actually worth. Buyers have to be extremely careful here. It is critically important to have accurate information and be well-informed on valuations before ever considering purchasing shares in the secondary market.
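To make the math concrete, here's a quick back-of-the-envelope sketch in Python. All of the numbers are hypothetical – they're just meant to show how the premium and the transaction fee stack up on a typical six-figure purchase.

```python
# Hypothetical secondary-market purchase (illustrative numbers only)
last_round_price = 20.00  # price per share at the company's last known funding round
ask_price = 26.00         # seller's asking price per share on the secondary market
shares = 5_000            # number of shares purchased
fee_rate = 0.03           # assumed 3% transaction fee (within the 2-4% range above)

premium = ask_price / last_round_price - 1
cost_before_fees = ask_price * shares
fee = cost_before_fees * fee_rate
total_cost = cost_before_fees + fee

print(f"Premium over last-round valuation: {premium:.0%}")  # 30%
print(f"Transaction fee: ${fee:,.2f}")                      # $3,900.00
print(f"All-in cost: ${total_cost:,.2f}")                   # $133,900.00
```

In this hypothetical, the buyer is paying a 30% premium to the last known valuation before fees even enter the picture – which is exactly why verifying the valuation comes first.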
EquityZen is another company active in the private secondary markets. It obtains large blocks of shares in private companies and then pools investor capital into a special purpose vehicle (an LLC that holds the shares until a liquidity event). So rather than facilitating a direct transfer of shares, the buyer owns a portion of the assets held within the SPV.
The benefit of this is that the buyer can invest smaller amounts – as low as $5,000–10,000 – to gain access to companies that they are interested in. EquityZen usually charges 5% to the seller for transactions.
EquityZen also structures funds that investors can gain access to. These funds can have a management fee as well as carried interest for EquityZen on profits. EquityZen is only available to accredited investors.
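Here's the same kind of rough sketch for the SPV structure – again, every number is hypothetical and purely for illustration:

```python
# Hypothetical SPV pooling example (illustrative numbers only)
spv_block_value = 1_000_000  # value of the private-share block held by the SPV
my_investment = 10_000       # one investor's buy-in at the low end of the range

ownership = my_investment / spv_block_value
print(f"Share of the SPV's assets: {ownership:.1%}")  # 1.0%

# At a liquidity event (IPO or acquisition), proceeds flow pro rata
# to the SPV's investors (before any fees or carried interest):
exit_value = 1_500_000  # hypothetical value of the share block at exit
print(f"Pro-rata proceeds: ${ownership * exit_value:,.0f}")  # $15,000
```

The key point is that the buyer never holds the shares directly – the SPV does – so the economics are pro rata, less whatever fees and carry apply.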
Republic is another major player in this space. It crosses both worlds in that it has Reg D offerings for accredited investors and a wide range of crowdfunding deals for all investors.
Wefunder specializes in Reg CF raises for unaccredited investors, and it also supports Reg D offerings, though not to the same extent as Republic.
As with all private investments, it is critical to understand the fees, the structure of the actual investment, the liquidity, the risk, and, most importantly, whether the valuation is reasonable.
These are the types of issues/opportunities that I plan to do a lot more work on this year at Brownridge Research, which focuses exclusively on private investment opportunities.
Hi Jeff,
Happy 2026!
I am a long-time subscriber and enjoy your many articles that keep my brain function alive. I enjoyed your discussions about AI and the future of productivity.
It conjured up in me a vision of a slew of human-like robots, such as Elon Musk’s, picking up large objects, running across a warehouse, placing the load on a shelf, running back across the warehouse, and continually completing manual labor tasks such as that without stopping for lunch or a coffee break, other than a quick battery charge!
The thought occurred to me that if they developed human-like thinking, planning, etc., what would prevent them from developing emotions such as love and anger?
What would control the level of anger they could develop, which could result in attacking humans with physical force?
One slap upside the head would result in instant death for a mere human! How does that grab Ya, Jeff??
Perhaps, they could be programmed to think like hippies. “Make love, not war!”
Now that conjures up a vision for you! LOL! Kindest Regards.
– Roger D.
Hi Roger,
Happy New Year. And no, I’m not interested in a slap upside the head from a robot… that would not suit me well…
Your question feeds into a broader topic that tends to cycle through The Bleeding Edge feedback file once every few months or so. At the heart of it is a concern that artificial intelligence will advance to a point beyond what a human can control or rein in, and then will, in some way, cause harm to humans.
It’s something we’ve spoken about at length, and with good reason. These concerns are natural, and these conversations are necessary as we build out any technology.
The specific scenario you present would be most relevant to artificial superintelligence (ASI), rather than artificial general intelligence (AGI).
AGI is going to far exceed the capabilities of the AI we’ve grown used to over the past few years, but in certain critical measures – like sentience – it will still be limited.
ASI’s intelligence will significantly surpass that of all human experts in any field or endeavor. It will be able to discover and develop science and technology even beyond our current level of understanding, all on its own. And it will be sentient. My prediction is we’re months away from AGI at this point and just years away from ASI – we’ll likely achieve it before the end of 2030.
In the case specifically of robots empowered with ASI – a hyper-advanced form of what I call manifested AI – a safeguard might look like coding in something like Asimov's "Three Laws of Robotics," which I discussed a while back in a Bleeding Edge AMA, A Robot Kill Switch…
For readers who may be unfamiliar with science fiction writer Isaac Asimov’s work, the “Three Laws of Robotics” refers to the list of rules in the fictional Handbook of Robotics that first appeared in his short story, “Runaround,” and was later popularized in his famous I, Robot collection.
They state, in order of importance, that…
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Then there was the "Zeroth Law," added later, which took precedence over all the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
A necessary distinction that placed the security and interests of the whole of humanity above those of the individual.
And as a failsafe… there was the "roblock." Shorthand for robot block or deadlock, the roblock acted as a factory reset that would engage should a robot be presented with circumstances in which it might break one of the laws, immediately wiping the robot's mind. To use a crude term, it's a kill switch.
Something like this can be programmed into a robot's software to ensure that, even if it does develop emotions such as anger, there's a failsafe in place to prevent it from causing physical harm.
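To give a flavor of the pattern – and this is purely a toy sketch, with every name invented for the example, nothing like an actual robotics safety system – the idea is a guard that checks each proposed action against the laws, in priority order, before anything executes:

```python
# Toy sketch of a "roblock"-style failsafe. All names are invented for
# illustration; real robotics safety systems look nothing like this.

class Roblock(Exception):
    """Raised when a proposed action would violate one of the laws."""

# Each law is a predicate that must hold, checked in priority order:
# Zeroth Law first, Third Law last.
LAWS = [
    ("Zeroth", lambda a: not a.get("harms_humanity", False)),
    ("First",  lambda a: not a.get("harms_human", False)),
    ("Second", lambda a: a.get("obeys_orders", True)),
    ("Third",  lambda a: not a.get("self_destructive", False)),
]

def safe_execute(action: dict) -> str:
    """Run the action only if every law holds; otherwise engage the failsafe."""
    for name, law_holds in LAWS:
        if not law_holds(action):
            # The "roblock": halt (and, in Asimov's fiction, wipe the
            # robot's mind) before the action can ever run.
            raise Roblock(f"{name} Law violated – robot halted.")
    return "action executed"

print(safe_execute({"obeys_orders": True}))  # a harmless task passes
try:
    safe_execute({"harms_human": True})      # a harmful one trips the failsafe
except Roblock as e:
    print(e)                                 # First Law violated – robot halted.
```

The real engineering challenge, of course, is the part Asimov glossed over: writing the predicates themselves – deciding, in software, what counts as "harm."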
There’s also the perhaps more philosophical argument that a robot empowered with humanlike intelligence to the point of developing emotions like love and anger could also develop the emotional intelligence to know how to regulate those emotions in a way that doesn’t cause them to lash out, but that’s getting a little deep in the weeds.
If we’re talking more generally about scenarios in which ASI systems wreak havoc on humanity, I’ve discussed this in quite a few AMAs:
Each of these centers on the same core concern: an AI that qualifies as artificial superintelligence advances beyond our control, and the result is apocalyptic.
I wish I could say these concerns are baseless, but it’s not that simple. Technology always has the potential to go off the rails when in the hands of unchecked bad actors.
I maintain my stance that a positive outcome with AGI is still the most likely one as long as we stay on the current trajectory. The word "alignment" is often used to describe the development and employment of artificial intelligence in ways that are aligned with human interests and the benefit of humanity so that we avoid a situation where an AI "goes rogue," as so many fear.
It’s worth noting that we see variations of this conversation crop up throughout history whenever humanity finds itself in the early stages of a truly transformational technological shift.
Whether it’s related to productivity and job displacement, or perceived physical or cultural harm to humanity, there are always pockets of people vehemently opposed to allowing – and pushing for – a technology to progress, usually in the name of self-preservation.
It’s not an especially productive stance in the sense that technology will advance whether these groups oppose it or not.
It’s also important to keep in mind that when it comes to artificial general intelligence – or superintelligence… robotics and other forms of manifested AI… quantum computing… or any of the emergent transformational technologies that are rapidly advancing today, the U.S. and the West are not the only ones racing to build these technologies.
And every minute we spend digging our heels in, covering our eyes and ears, and fixating on the worst-case scenario instead of working towards building out the safest advanced versions of these technologies is another minute that adversaries with very different ethical frameworks, with bad intentions or nefarious motivations, move closer to achieving their own goals.
Bad actors getting their hands on AGI and ASI first – and the U.S. being defenseless because we opted not to pursue this technology, or we over-regulated the technology, in the name of self-preservation – is the far greater threat in my mind.
This is why the race to develop AGI is so critically important. Because the only way to proactively defend against a bad actor using AGI is to have a more capable AGI designed to protect against those kinds of attacks.
Fortunately, Elon Musk and his team at xAI seem right on track to be the first to achieve AGI sometime this spring, as I predicted in The Everything App last year.
And beyond defense, AGI is going to usher in a new era of productivity and advancement that most folks won’t be able to fully grasp until they’ve witnessed it themselves.
It will open gates to nuclear fusion for cheap, limitless clean energy… healthier humans with longer lifespans thanks to AGI-powered medical advancements from drug design and discovery to advanced diagnostics… better national security built on advanced AGI defense systems… and so much more.
We’re on the path to a cleaner, healthier, safer, better future thanks to artificial intelligence. We just need to keep calm and not let fear stop us from building that better future.
We have so much to look forward to,
Jeff