• Twitter, X Corp., and X.ai…
  • A new open source entrant into large language models
  • The AI arms race is just getting started

Dear Reader,

“Everything after clearing the launch pad was icing on the cake.”

Those were the words of the SpaceX announcer during last Thursday’s historic launch of the SpaceX Starship from the Boca Chica spaceport in Texas.

Long awaited, it was a remarkable launch to watch.

Prior to launch, Elon Musk set expectations that he was just hoping to get the integrated booster and Starship off the launch pad. After all, this was the very first test of the integrated system with 33 Raptor engines and an attempt to do something never done before.

Which is what made the test launch such a stunning success. Not only did Starship get off the launch pad, it reached Max Q – the point of maximum aerodynamic stress on the rocket. It managed to reach an apogee of 39 kilometers, and it did so even with six of its 33 Raptor engines not functioning.

Just under four minutes into the flight, around the time that the first stage should have separated, Starship and the booster began to loop in what was clearly a sign of trouble. What happened next was referred to as a “rapid unscheduled disassembly.”

What a great engineering term for – BOOM!

Which is exactly what is supposed to happen when a rocket starts to malfunction. As usual, the press coverage was mostly negative and inaccurate. The launch was an incredible success by any measure, and yet even Bloomberg ran an article titled:

“Starship Explosion Shows Just How Far SpaceX is From the Moon”

The article should have said the opposite.

If SpaceX were able to reach an altitude of 39 kilometers with six fewer operational engines on a test launch, just imagine what will happen the next time around. And that’s the point. The reason SpaceX was able to transform the aerospace industry is that it was willing to test and fail early. SpaceX did the same thing with the Falcon 9: it had its series of failures, then went on to demonstrate a run of successful launches unparalleled in the industry… and all for a fraction of the cost of any rocket that came before it.

Musk has already said that SpaceX plans to launch a second test Starship within a month or two. They’re ready to go. Each launch brings more valuable lessons.

And that means that SpaceX is one step closer to the moon.

Get ready for the everything app…

Elon Musk is making some dramatic moves with Twitter right now.

In the past few days, the corporate press has been fixated on the removal of “legacy blue checks” from accounts that were verified under the old model. As a reminder, Twitter’s previous system for handing out blue checks was deeply corrupt. As we learned, many people paid Twitter employees to receive their blue check, or it was given to those who fit some “preferred” political narrative.

But while much of the world is distracted by this silly vanity, something far more interesting is going on just underneath the surface…

For starters, Musk announced his plans to create a generative artificial intelligence (AI) to compete with OpenAI’s ChatGPT and Google’s Bard.

Musk’s goal is to train his AI to be objective, rather than biased. He specifically pointed to the proven bias demonstrated by OpenAI and Google and how it is reflected in the outputs of their generative AIs.

We can view this as a form of censorship. The AIs don’t openly censor information. But they will tend to present information that’s biased toward a certain political agenda or narrative, depending on the data the AIs are trained on, as well as any guardrails programmed into the system.

So Musk wants to provide an alternative that’s completely data-driven.

The question is – where does this new generative AI fit into Musk’s umbrella of businesses? This is where Twitter comes back into the picture…

When Musk bought Twitter and took it private, he dissolved Twitter as a corporate entity. Now Twitter exists only as a branded platform. Twitter is now held by a company Musk set up years ago called X Corp.

At one time he suggested that this company could build the “everything app.” He envisioned the app doing a wide range of things – similar to WeChat in mainland China.

WeChat is a messaging platform that also enables users to make video calls, make payments, play games, order food and groceries, and even book doctor’s appointments. It basically combines the functionality of 10 or 12 different apps here in the U.S.

That said, Musk did something telling just a few days ago. He registered the domain X.ai. No doubt this is where his generative AI will live. And it suggests that his new AI will be housed within X Corp. as well… just like Twitter.

We can compare this to the setup with Facebook/Meta. Meta is the company, and beneath it are a variety of different products: Facebook, Instagram, and WhatsApp, primarily.

So it appears that Musk’s plans for an “everything app” are now in motion. And Twitter is going to be at the heart of it. This isn’t speculation either. X Corp. and X.ai exist, and X Corp. has already ordered 10,000 GPUs, the computing hardware necessary to train a new generative AI.

And given that much horsepower, it won’t be long before we’ll learn more about how Musk plans to infuse Twitter with artificial intelligence. This will likely be one of the biggest stories in tech this year.

Stay tuned…

Stability AI steps into large language models…

Speaking of competition in generative AI, a company we know well just entered the ring.

Regular readers will remember Stability AI. This is the company behind Stable Diffusion – the text-to-image generator that helped kick off the current wave of generative AI, alongside large language models like ChatGPT.

Well, Stability AI just released a suite of its own large language models. It’s called StableLM. And right now it consists of two separate generative AIs.

StableLM’s focus is on transparency. As such, Stability AI open-sourced the code. Anyone who understands coding can view it to see exactly how StableLM works and what it was trained on.

This is in stark contrast to ChatGPT (OpenAI) and Bard (Google). Both OpenAI and Google have kept their code proprietary… so nobody else knows what’s in it.

I see this as a very positive development for the industry. We want to have competing AIs to choose from. That way this powerful technology isn’t controlled by just one or two massive corporations.

I’m also excited to see that Stability AI is taking a very similar approach to Cerebras.

If we remember, Cerebras is the company that designed the world’s largest AI-specific semiconductor. And earlier this month the company released seven new GPT-based language models.

What I love about this is that Cerebras trained each of them with progressively larger data sets. This allows each model to be optimized for specific applications. Not everything needs to be trained on the entire open internet as ChatGPT was.

Stability AI is taking the same approach. Its first model has 3 billion parameters, and its second has 7 billion. From there, Stability AI plans to release versions with 15 billion and 65 billion parameters.

For comparison, the GPT-3 model underlying the original ChatGPT has 175 billion parameters. So Stability AI’s models are much smaller. And smaller language models don’t require as much computing horsepower to run.

They don’t need to be hosted at a big data center. Instead, the smaller models can run on the edge of networks – such as our phones and laptops.
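To see why model size matters for running on the edge, here’s a rough back-of-the-envelope sketch. This is my own arithmetic, not from any vendor’s documentation, and it assumes model weights stored as 16-bit floats (2 bytes per parameter) – a common inference format. Actual memory use is higher once you account for activations and runtime overhead.

```python
# Rough memory footprint of a language model's weights, assuming
# 16-bit (2-byte) storage per parameter. Real-world usage will be
# somewhat higher due to activations and runtime overhead.

BYTES_PER_PARAM = 2  # assumption: fp16 weights

def weight_memory_gb(num_params: int) -> float:
    """Approximate gigabytes needed just to hold the model weights."""
    return num_params * BYTES_PER_PARAM / 1e9

models = {
    "StableLM 3B": 3_000_000_000,
    "StableLM 7B": 7_000_000_000,
    "GPT-3 class (175B)": 175_000_000_000,
}

for name, params in models.items():
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights")
```

Under these assumptions, the 3 billion and 7 billion parameter models need roughly 6 GB and 14 GB for their weights – within reach of a laptop or high-end phone – while a 175 billion parameter model needs around 350 GB, which is data-center territory.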

So this is a big move by Stability AI. And because its StableLM suite is lightweight and open source, I suspect we’ll see a lot of companies adopt its technology as a backend for their own software applications.

China’s jumping on the generative AI bandwagon…

Not to be left behind, a host of Chinese companies announced over the last few weeks that they are working on their own generative AI models. They will each function much like ChatGPT.

Perhaps the most notable Chinese company getting in on the act is Alibaba. This is China’s version of Amazon.

Alibaba has already launched its own ChatGPT-style chatbot. The company has made it available to select businesses on a trial basis. That will allow Alibaba to work out the kinks and release an optimized version to the general public in the months to come.

And it’s not just Alibaba.

Baidu, China’s equivalent to Google, launched its own generative AI, also on a trial basis. It’s called Ernie Bot.

And both Huawei and SenseTime announced that they are working on a generative AI. Huawei is a major player in the smartphone and wireless technology space. And SenseTime is a Chinese company that specializes in facial recognition technology.

So four major Chinese companies are stepping into the ring, determined not to be left behind. And I’m sure more will follow. It’s very clear that the race is on. This competition isn’t just a corporate rivalry. It’s deeply tied to China’s national ambition to be the world’s leader in artificial intelligence by 2030.

But here’s the thing – all of these generative AIs will need to be trained on datasets approved by the Chinese government. That means the Chinese Communist Party will control what inputs go into the AI.

Obviously this will impact the AIs’ outputs. They won’t be trained on all available knowledge. Instead, they will be fed a curated dataset containing only information the Chinese Communist Party deems appropriate.

And that means few parties outside of mainland China will adopt these models. They simply won’t be competitive with the U.S.-based alternatives.

So I’m curious – will these Chinese companies make two versions of their AI models?

They could make one highly restrictive model for use in mainland China and a separate model trained more broadly for use outside the country. Is that the plan here?

It’s going to be interesting to see how this all plays out.

What we should take away from this is that the rise of artificial intelligence clearly won’t be a U.S.-centric phenomenon. We’re heading into a world where several nations will build out their own versions of this technology in a bid for dominance.

And that doesn’t mean just the largest countries. As a reminder, many large language models have already been open sourced. And much of the research in this area has been openly published. Training a generative AI is within the budget of just about any country.

We can think of what’s happening right now as an “AI arms race.” And it’s just getting started.


Jeff Brown
Editor, The Bleeding Edge