The Greenfield Opportunity for Humanoids

Jeff Brown | Feb 24, 2025 | The Bleeding Edge | 5 min read


It happened late on a Wednesday night…

A robotics company employee was called in to check on a malfunctioning robotic arm at a pepper-sorting plant in South Korea.

The plant was two days away from a test run with a fleet of robots – in preparation for automating parts of its vegetable-processing operations – when a sensor malfunctioned on one of the arms.

As the man was inspecting the malfunctioning robot, the robotic arm mistook him for a box of vegetables. It grabbed him and pressed his body against a conveyor belt.

Sadly, this crushed his chest and face, resulting in his death.

It sounds like a story we might have read about in the 1980s or 1990s when the tech was still evolving. But this happened in November 2023.

It’s surprising, I know. It wasn’t that long ago at all, and yet it encapsulates everything we fear about the age of robots. We’d think that the tech would be advanced enough to discern between a human and a box – but it wasn’t.

Something has happened, though, between November 2023 and today. What’s important to understand is that the pepper-sorting robot was lacking intelligence. It was “dumb,” in the sense that it had been preprogrammed and fine-tuned to do a very specific task. It had basic computer vision, but not enough programming to distinguish between two large objects (i.e. a human and a box).

The robot was also incapable of learning a new task on the fly, or of reasoning through an unexpected scenario like the one it encountered.

Fortunately, those days are over now.

The advancements in neural networks and reinforcement learning have radically changed the industry, opening the door for intelligent machines – manifested AI.

February has been an inflection point.

Last Thursday in The Bleeding Edge – A Scarecrow No Longer, I wrote about the latest developments at Apptronik and its partnership with Google’s DeepMind robotics team. The partnership is designed to give Apptronik’s Apollo a brain.

And on February 4, a company we’ve followed closely here at The Bleeding Edge, Figure AI, announced that it was ending its collaboration with OpenAI, saying it had made a major breakthrough with its own AI. We explored those developments in The Bleeding Edge – Can Figure Catch Optimus?

In that issue, I suggested that Figure AI would announce that it had developed its own neural network with advanced reinforcement learning – an approach that mirrors what Tesla has done with Optimus.

And on Thursday, Figure AI announced exactly that.

Figure Gets a “Brain”

Figure calls its new model Helix – a “generalist vision-language-action (VLA) model.”

Figure describes Helix as a single neural network capable of understanding natural language prompts… with the ability to perform work without any task-specific fine-tuning.
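Figure hasn’t published Helix’s internals, but the interface it describes – one frozen network that takes camera input plus a natural-language prompt and emits actions, with no per-task fine-tuning – can be sketched in a few lines. The sketch below is purely illustrative: the function names and the keyword rules standing in for the neural network are my own invention, not Figure’s code.

```python
def toy_vla_policy(image_features, instruction):
    """Map a (perception, language) pair to a discrete action.

    A real VLA model is a single neural network emitting continuous motor
    commands. Here, hypothetical keyword rules stand in for the network,
    just to show the interface: the policy never changes -- only the
    natural-language prompt does.
    """
    text = instruction.lower()
    if "refrigerat" in text or "cold" in text:
        return "place_in_fridge"
    if "drawer" in text:
        return "place_in_drawer"
    return "place_on_counter"


def run_episode(frames, instruction):
    # The same frozen policy handles every task and every camera frame;
    # no task-specific fine-tuning step exists anywhere in the loop.
    return [toy_vla_policy(frame, instruction) for frame in frames]
```

The point of the sketch is the shape of the system, not the logic: one model, one prompt, any task – which is what distinguishes this approach from the preprogrammed, single-purpose arm in the story above.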

In other words, Figure 02, with its new “brain” – Helix – has now become an intelligent general-purpose humanoid robot, capable of safely interacting in human environments.

And one of the more interesting tasks demonstrated by Figure was a demo of two Figure 02s receiving instructions verbally from a human and collaborating to complete the task.

Verbal Prompting of Two Figure 02s | Source: Figure AI

Above, we see a man unpacking a bag of groceries, objects that Figure 02 had not been trained on before. The task given to the robots is to store all the items in appropriate places.

What makes this interesting is that some of the items require refrigeration, while others – like fruit – would normally be stored in a bowl on a counter, or some dry goods would be stored in drawers.

Helix, Figure 02’s “brain,” has to draw on its knowledge and use reasoning to complete the task in the most logical way.

Figure 02 Storing Groceries | Source: Figure AI

And that’s exactly what “they” did.

Helix was able to figure out which items belonged in the refrigerator, versus those in a drawer, versus those meant for the fruit bowl.

And the two humanoid robots were able to collaborate to complete the task together.

Figure 02 Storing Groceries | Source: Figure AI

Figure claims that this multi-robot collaboration is a first, but I highly doubt it. I’d be shocked if Tesla isn’t already doing this with Optimus in its own factories.

Nonetheless, it’s still impressive, and the implications are pretty remarkable.

Imagination is No Longer Required

Fleets of Figure 02s could work in complete synchrony towards a common goal – a task given to them from a single verbal prompt.

Or if we extrapolate from the demonstration shown above, it’s easy to see how profound the technology will be for home use. Consumers will use an agentic AI to order and pay for groceries and have them delivered. Figure 02 will collect the groceries at the front door after delivery, unpack them, and store all items in the appropriate place.

I spent an hour on Sunday performing all these same tasks, including a trip to the supermarket. With this technology, I could recapture all that time. I could have used that hour for much-needed rest.

Lindsey, my editor, tells me she gladly would have accepted a fleet of Figures to help host her child’s birthday party this weekend.

Between meal pickup and prep, cleaning the home, and setting up decorations, it’s at least half a day saved. And she would much rather have taken her kids to a movie after the event than spend hours taking it all down and re-cleaning the home.

A general-purpose humanoid robot would handle all these tasks… following a simple verbal cue.

In fact, Helix can even reason and understand the context of a verbal prompt.

“Pick up the desert item” | Source: Figure AI

What’s incredible about this latest development is that Helix was trained on only 500 hours of high-quality data on object generalization. And yet it was able to both understand and manipulate a very wide range of objects.

The developments this month are a signal that we should be prepared for humanoid robotics – manifested AI – to begin moving extremely fast.

They are also an explicit acknowledgment that Tesla’s vision-based neural network architecture for autonomous robotics is the optimal approach.

Apptronik and Figure AI are now on board. Others will follow quickly.

And it’s an easy prediction now that billions will be invested in humanoid robotic companies this year alone. Imagination is no longer required by the “money” (institutional investors) to understand the business opportunity and return on investment.

Apptronik is commercializing its technology with Mercedes-Benz. Figure is doing the same with BMW. And Tesla is already using Optimus in its factories and offices.

It’s a greenfield opportunity. An entirely new multitrillion-dollar industry. And it’s a race to deploy as quickly as possible to earn dominant market share.

Jeff


Want more stories like this one?

The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world, along with the topics and trends relevant to investors.