Even the very best and most successful tech companies can disappoint.
And that was certainly the case for Meta (META) yesterday, the day of its very first AI developers conference – LlamaCon.
It was a complete flop. And the disappointment among developers was justified.
Earlier this month, Meta announced its latest suite of large language models (LLMs), the “Llama 4 herd.” Llama stands for Large Language Model Meta AI.
Released and immediately available are Meta’s Llama 4 Maverick and Scout multimodal LLMs. Scout is notable because, despite its power, it can run on a single NVIDIA H100 GPU. And Meta claims that Maverick is best-in-class, beating OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across a variety of benchmarks.
Both models were optimized for cost performance, a clear indication of Meta’s strategy to push for the widest distribution possible.
Meta has been trailing far behind in the race to artificial general intelligence (AGI), but its distribution strategy has been successful. Meta has had 1.2 billion downloads of Llama models to date.
Again, if anyone suggests to you that artificial intelligence (AI) isn’t already widespread, you can politely set them straight.
Unfortunately, the model in the “herd” that developers were most excited to see hasn’t been made available beyond a preview – Llama 4 Behemoth.
It’s called “Behemoth” because the model has roughly 2 trillion parameters and is said to be Meta’s most powerful mixture of experts (MoE) AI model. MoE models are prominent right now because they improve efficiency by routing each request to specialized sub-models (“experts”) suited to the task, rather than activating the entire network every time.
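For readers who want to see the mechanics, here is a minimal sketch of how an MoE layer routes work. It is purely illustrative, not Meta’s implementation; the layer sizes, the softmax gate, and the top-k routing rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NUM_EXPERTS = 8, 4

# Each "expert" here is just a tiny linear layer; in a real LLM each would be
# a large feed-forward block with its own parameters.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate = rng.normal(size=(DIM, NUM_EXPERTS))  # gating network weights

def moe_forward(x, top_k=1):
    """Route the input to the top_k experts chosen by the gating network."""
    scores = x @ gate                                # one score per expert
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over experts
    chosen = np.argsort(weights)[-top_k:]            # indices of the best experts
    # Only the chosen experts do any work; the rest are skipped entirely,
    # which is where the efficiency of MoE comes from.
    return sum(weights[i] * (x @ experts[i]) for i in chosen)

x = rng.normal(size=DIM)
print(moe_forward(x, top_k=2).shape)  # (8,): same shape as the input
```

The point of the sketch is the routing step: an MoE model only activates a fraction of its parameters on any given request, which is how a model as large as Behemoth can still be practical to serve.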
Naturally, developers were disappointed not to have access to Behemoth when Meta announced it on April 5. But the working assumption was that Meta would release the model at LlamaCon.
In addition to that, there was an expectation that Meta would announce something really big at its first AI developers conference. Otherwise, why bother with the event? And that “something” was expected to be a reasoning model on par with those from OpenAI, Google, and xAI.
And yet, yesterday’s conference came and went. None of it happened. All we know is that Behemoth is still training, is not yet publicly available, and has no firm release timeline. Hence the flop.
Meta clearly didn’t have the goods.
In the absence of its highly anticipated Behemoth reveal, Meta instead leaned into the reputation it’s been trying to build in the AI industry as being developer-friendly and open source.
To that end, it spent much of the event on its Llama API (application programming interface), which makes it easy for software developers to plug their applications into Meta’s multimodal language models.
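To give a sense of what that looks like in practice, here is a rough sketch of the kind of HTTP call an application would make to a hosted Llama model. The endpoint URL, model name, and payload fields are placeholders assumed for illustration, not Meta’s documented interface.

```python
import requests

# Placeholder endpoint and credential; Meta's actual API details will differ.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "llama-4-maverick",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Summarize LlamaCon in one sentence."},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

The appeal of an official API is that developers get hosted models without managing GPUs themselves, which fits the distribution-first strategy described above.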
Related was the collaboration with Cerebras and Groq (not to be confused with xAI’s Grok), which lets developers choose either company’s chips to run Llama 4 on for faster inference.
Cerebras and Groq have emerged as the two leading semiconductor companies specializing in AI inference – the running of a trained model to produce outputs – so the partnerships were a smart move.
Meta also announced a slew of cybersecurity tools related to Llama 4, specifically Llama Guard 4, LlamaFirewall, and Llama Prompt Guard. Who knew you needed LLM prompt protection? Joking, of course…
But APIs, smart partnerships, and software security are all table stakes in tech. They’re a given.
And the “open source” moniker as Meta uses it just isn’t taken seriously.
Meta’s software is not open source. It does provide the weights for its models and allows others to fine-tune and modify them, but its license restricts who can use the models and how, and Meta does not release the training data or the code used to build them.
The Open Source Initiative (OSI) – the widely recognized authority on the open source software ecosystem – eviscerated Meta, stating clearly that “the license for the Llama LLM is very plainly not an ‘Open Source’ license.” OSI went even further to ask that Meta “correct their misstatement” on this matter.
Meta does a disservice to itself and its developer community by pretending to be what it’s not.
It would be far better served by leaning in and developing a powerful reasoning model, as well as an agentic model.
Though it will need to do so quickly; otherwise, it risks losing market share to OpenAI, Google, and, of course, xAI.
Meta’s distribution channels, mainly Facebook, Instagram, and WhatsApp, are its biggest assets. It would be a shame for one of the best businesses in the world to fail to leverage them.
Meta is not known for innovation, though…
So it’s no surprise that Meta has been acquiring and investing in a wide range of AI-related businesses over the last few years. It’s a strategy that has clearly worked out just fine and resulted in a company now worth $1.3 trillion.
Despite its size, Meta (META) is still growing at a healthy double-digit rate, enjoying 80% gross margins and generating almost $42 billion in free cash flow this year.
So while Meta is not going to win the race to AGI – it’s not even a contender in my book – it will continue to benefit from its massive distribution channels and increase further in value over any reasonable period.
The real question is, what the heck will it do with all that money?
Jeff
The Bleeding Edge is the only free newsletter that delivers daily insights and information from the high-tech world as well as topics and trends relevant to investments.