The Bleeding Edge

The Force Multiplier for the Biopharmaceutical Industry

It’s been a bumpy week in the markets. Before today’s AMA, let’s look at what’s going on…

Written by Jeff Brown
Published on Feb 13, 2026

Well, it was quite a bumpy week in the markets with renewed uncertainty over the AI infrastructure boom.

It’s completely irrational, of course, given that the hyperscalers keep raising their capital expenditure forecasts so they can build at an even faster pace.

These capital expenditure increases are coming from companies that can clearly afford the investment. And the technological advancements now happening every week are so incredible and so tangible, they are driving the need for even more investment.

The largest development of the week was the release of Google Gemini’s latest frontier model, which posted a jaw-dropping score on the ARC-AGI-2 benchmark.

As a reminder, ARC Prize’s ARC-AGI-2 is one of the two prevailing benchmarks to determine how close an AI frontier model is to achieving AGI, the other being Humanity’s Last Exam.

I’ll likely write about Gemini’s latest model next week. The other notable product release was Isomorphic Labs’ AI-powered drug design engine, IsoDDE, which we explored in The Bleeding Edge – The Successor to AlphaFold 3.

On top of that, frontier AI model developer Anthropic closed a $30 billion raise at a $380 billion post-money valuation in the second-largest private tech fundraising round on record. Stunning. There’s no question that Anthropic’s latest releases of AI coding software and agentic AI solutions have driven its valuation to such staggering heights.

The week will be capped by yet another successful launch from SpaceX for NASA’s Crew-12 mission to the International Space Station. Docking at the ISS will happen tomorrow.

The economy looks strong despite what the journalists tell us. GDP growth in Q1 is expected to be above 5%. We should be prepared for record levels of economic growth in the U.S. economy in the quarters ahead.

Have a wonderful weekend,

Jeff

Should students embrace AI or build traditional skills?

Hey Jeff!

It’s me again, Brigham!

Thanks so much for your advice [in your reply] to my last email about working in the blockchain industry, especially your point about getting specific about what we want to do in the industry. I did just that and am now looking to pursue tokenomics and incentives design for blockchain-based ecosystems.

As an economics major, I think that’s one of the most exciting, powerful, and revolutionary areas of this new industry to be in.

Most of blockchain is, at its core, economically revolutionary in some way. In the meantime, I’ve also become a teacher/curriculum assistant for our cryptocurrencies class and am the president of our Cryptocurrency and Fintech Club.

I’m also starting to join and network on the community forums of several blockchain projects I like (including some in our Permissionless Investor model portfolio). In about a year, I’m hoping to attend the University of Utah for their Master of Science in Financial Technology.

I just finished reading your most recent AMA – How Will Stocks React to AI Jobs Displacement? – and I wanted to ask my own question about this issue.

As a student right now, I’m stuck between embracing AI as I see it revolutionize the world, and wondering if I’m relying on it too much for things like writing resumes, doing research, and building code projects.

I’ve actually used AI (a combination of Grok and Gemini) to build a whole blockchain voting system on Solana. It works great, but I’m worried that if I put the project on a resume and tell employers I built it using AI, they will find it less valuable.

Our professors are somewhat split, too. Some of them say to use AI for basically everything except the exams, while others don’t want us to use AI at all. One teacher even took some Solidity code and gave us a test where we asked an AI to explain what the code did.

So I want your take on this. How much should the next working generation rely on AI? What skills are essential to delegate to AI, and what skills are critical for us to learn ourselves?

Brigham M.

Hi Brigham,

Great to hear from you again, and thanks for writing in with such an important question.

Your question speaks to an even bigger thematic trend regarding how artificial intelligence will transform education around the world.

This is one of my favorite applications of AI, because high-quality education has largely been reserved for the wealthy and elite who can afford years of private schooling, which typically gains them access to the best universities around the world.

With AGI, none of that matters anymore. It will enable the complete democratization of education. Any child with access to a computing system and the internet will gain access to an AI tutor capable of providing instruction on any subject matter known to mankind. And it will be able to provide that instruction on a personalized level, optimized for each individual student’s learning style.

The entire planet will become more productive as a result, and in time, technology will lift the world’s remaining poverty-stricken populations into a modern, safe standard of living.

As for your specific question, I suggest that you not think about it as an either/or problem.

The correct path is to embrace AI head-on.

Lean into the use of agentic AI and – very soon – artificial general intelligence. Use it as a force multiplier to become more productive and capable. At the same time, we should develop our fundamental skills and knowledge base, understand software architecture, and build the innate ability to understand and explain what an AI has done.

Let’s think about this from a work perspective. If a future employer sat down with you five years from now in an interview and you were a talented programmer with great grades, but you had no experience working with AI, do you think you’d get hired?

No way. The employer would know that a talented programmer who leverages AI will be 10X more productive. A recent survey from Stack Overflow highlighted that 84% of developers are already using AI tools for their work.

With that said, for us to build a knowledge base of understanding and competency, we also need to do the work ourselves.

Imagine a professor or an employer asking us to verbally explain or summarize the code, paper, or presentation that we just emailed… and not being able to do it because the AI did the work and we didn’t. That’s a simple example of not being competent, functional, or productive.

Individuals who can speak to the work, understand it, and explain it to others will stand out in their fields compared to those who can’t. This is why we also need to put the work in.
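To make that concrete with something close to your own project: below is a small, purely hypothetical vote-tallying routine in Rust (the language Solana programs are built on). It isn’t from your voting system – it’s just a sketch of the kind of code an AI assistant might generate. The competency test is being able to read a block like this and explain, in plain language, what it does, why the duplicate-ballot check matters, and what would break without it.

    use std::collections::{HashMap, HashSet};

    // A single ballot cast by a voter for a named candidate.
    struct Ballot {
        voter_id: u64,
        candidate: String,
    }

    // Tally ballots, counting at most one vote per voter.
    fn tally(ballots: &[Ballot]) -> HashMap<String, u32> {
        let mut seen_voters = HashSet::new();
        let mut counts: HashMap<String, u32> = HashMap::new();

        for ballot in ballots {
            // insert() returns false if this voter was already counted,
            // so duplicate ballots are skipped.
            if !seen_voters.insert(ballot.voter_id) {
                continue;
            }
            *counts.entry(ballot.candidate.clone()).or_insert(0) += 1;
        }
        counts
    }

    fn main() {
        let ballots = vec![
            Ballot { voter_id: 1, candidate: "Alice".to_string() },
            Ballot { voter_id: 2, candidate: "Bob".to_string() },
            Ballot { voter_id: 1, candidate: "Bob".to_string() }, // duplicate, ignored
        ];
        println!("{:?}", tally(&ballots)); // e.g. {"Alice": 1, "Bob": 1}
    }

If you can walk an interviewer through every line of your own project with that level of clarity, the fact that AI helped you build it becomes a strength, not a weakness.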

It’s fine to use AI to gather information and sources, which can be a great time saver in terms of data collection and curation. But we get smarter by reading and understanding that information, thinking about it, developing a framework for explaining it, and ultimately being able to write it down and/or present it – to communicate the findings effectively.

This process helps us retain information, get smarter, and, in time, draw interesting connections that deepen our knowledge base.

I hope that is helpful. I’m really excited for you.

Biotech Regulation and Progress

Good day to the Brownstone Team,

Thank you for all your efforts to keep us informed! You’re the best!

You have told us to expect a lot from AI in the biotech industry. A friend, a retired pharmacist, says that unless things change at the FDA, that is where things will bog down.

He says that unless the requirements for clinical trials (Phases 1, 2, and 3) are changed, nothing will change no matter how quickly AI development progresses. Your thoughts and/or comments? Aloha.

– Gordian B.

Hello Gordian,

I certainly share your friend’s frustrations with, and criticism of, the Food and Drug Administration (FDA). We learned during the pandemic how deeply corrupted the organization was and how dangerous its policies were to human life.

After all, it approved the mRNA-based COVID-19 “vaccines” based on very limited testing, no long-term safety analysis, and trials that showed more people died from the shots than from COVID-19.

They also changed the definition of what a vaccine is, as well as the definition of “immunization.” They even went so far as to falsely state that Ivermectin was only for horses, not humans, even though more than 1 billion doses of Ivermectin have been given to humans – with a safety profile better than Tylenol’s – and a Nobel Prize was awarded for the drug’s discovery.

I am so glad those days are behind us. It still makes me sick thinking about it.

The situation at the FDA radically changed once Secretary of Health and Human Services (HHS) Robert F. Kennedy Jr. appointed FDA Commissioner Martin Makary.

Both of these gentlemen still have a lot of work to do, but I am optimistic. Here is where progress has been made:

  • The FDA issued new guidance on streamlining biosimilar approvals last October. Biosimilars are highly similar versions of a class of medicines known as biologics, which are derived from living sources like animal or plant cells. Having more biosimilars will increase competition and reduce the cost of therapies.
  • In September, the FDA cracked down on misleading direct-to-consumer advertising, which was widespread in the pharmaceutical industry and largely went unchecked. Some 100 cease-and-desist orders were issued to pharmaceutical companies. For perspective, during the last administration, only one warning letter (not even a cease and desist) was issued in 2023, and zero warning letters were issued in 2024. The pharmaceutical industry took advantage of this extremely lax regulatory environment, and false advertising proliferated.
  • Also in October, the FDA announced a pilot prioritization program for abbreviated new drug applications (ANDAs) designed to accelerate the approval of generic drugs being manufactured in the U.S.
  • Even more exciting is that in May, Makary announced that the FDA would immediately start using AI internally across all FDA centers for scientific reviews. Tasks that used to take days to perform are now done in just minutes.
  • At the end of 2025, in December, the FDA announced an accelerated approval program to enable earlier approval for drugs that treat serious conditions or fill an unmet medical need.

This all happened in just one year. Incredible. And RFK Jr. has three more years to make even more improvements at the FDA. I’m confident this will happen.

But aside from all these amazing developments, we have an even larger reason to be optimistic.

One of the superpowers of employing advanced AI models for drug discovery and development is the ability to discover and develop novel drugs with dramatically higher levels of efficacy and lower levels of toxicity.

The net effect will be a dramatic reduction in the number of ineffective drugs, and drugs with toxic profiles, that ever make it to clinical trials. It will also increase the percentage of drug candidates that make it through Phase 3 clinical trials and, ultimately, to FDA approval.

We can’t quantify the impact yet, as these developments have just happened in the last two years, but they will radically change and improve the clinical trial process. “Cleaner,” more effective drugs will move through the clinical trial process faster and more efficiently. It’s that simple.

AI is a force multiplier for the biopharmaceutical industry.

And if you haven’t had a chance to read The Bleeding Edge – The Successor to AlphaFold 3, which I published on Wednesday, I strongly recommend doing so.

Thanks for being an Unlimited member, and tell your friend there is great reason to be optimistic.

Licensing FSD?

Hi Jeff and Bleeding Edge staff,

First, I just want to say that I enjoy your articles immensely. I love reading about new technologies and also understanding how new tech could inform our investment decisions.

Secondly, after reading the article below about Tesla ramping down consumer car production, I’m wondering: Do you think Tesla will ever license/sell its FSD to other car companies as another income stream?

I’d love to have the option someday to buy a new hybrid SUV or even a gas-guzzling SUV that is equipped with Tesla’s FSD technology – so that the SUV can drive me around. LOL.

Thank you,

Jeffrey H.

Hello Jeffrey,

Very well-timed question, as I just touched on this topic in yesterday’s Bleeding Edge – Way Mo Money. Definitely a must-read.

To avoid any misunderstanding, Tesla is not ramping down consumer car production. It is shutting down the production lines for its Model S and Model X cars, but it is increasing production capacity for the Model 3 and Model Y, as well as for the Cybercab, whose production will eventually grow to 2 million units a year.

As for your specific question about Tesla licensing its full self-driving (FSD) technology to other automotive companies, you might be surprised to know that Elon Musk has spoken about doing this for years.

He has also lamented that despite his outreach, no automotive company has agreed to do so. But Tesla is clearly willing to make it happen.

The automotive industry has held such animosity towards Tesla for so many years. The incumbents hated Musk and Tesla as a new entrant, and even more so because Tesla built such a superior consumer product despite having no background in the business.

The feelings that Tesla evokes in the automotive industry are similar to what happened when Apple launched the iPhone and destroyed Nokia, Motorola, and many other incumbents in mobile phones.

The automotive industry also hates Tesla because it circumvented the traditional dealer network and sold directly to consumers. And the industry is jealous of Tesla because it trades at a much higher valuation multiple for a “car company.” That frustrates the hell out of them because, to them, Tesla is just a “car company” like they are.

Except it’s not. It is one of the world’s leading artificial intelligence companies.

I can understand the automotive industry’s hesitancy about licensing FSD, especially back in 2024 or earlier, when the technology was still under development. But since last year, anyone who has experienced FSD can immediately understand the significance of the technology.

There was this radical ideological battle in the industry over how self-driving tech would be implemented. The entire automotive industry, its suppliers, and most AI “experts” insisted that self-driving could only happen using LiDAR.

Musk, his team, and I insisted that the path forward could only be vision-based to achieve scale in all driving conditions on all roads.

Today, it is clear who was right. This is no longer a question. Tesla achieved it with a vision-based system and cameras; no LiDAR necessary.

The real question now is this… Who will go first? Which automotive company will strike a deal with Tesla for FSD? This is the single smartest thing that any automotive company can do today. I have long maintained that the killer app for the automotive industry is autonomous driving. If they implement Tesla’s technology, they can remain relevant by offering a modern, competitive product that offers the incredible experience of a fully autonomous vehicle.

You might find this disappointing, but the deployment of FSD will likely happen only for electric vehicles (EVs), not internal combustion engine (ICE) vehicles.

This isn’t a technology problem. FSD would work on ICE vehicles. The problem is that only EVs allow the car’s operations to be controlled with the level of precision required, which isn’t possible right now with an ICE car.

It could happen, but I think it is less likely.

In the meantime, you can enjoy the Cybertruck if you’d like more space. And as lithium-ion battery energy density improves, we’ll definitely see larger SUV-like EVs hit the road, hopefully using FSD for autonomy.

It’s time for the automotive industry to wake up, or risk irrelevance as consumers begin to flock towards the convenience and pleasure of unsupervised FSD.

Jeff

Jeff Brown
Founder and CEO
