• A major disruption in generative AI from Cerebras
  • A small disappointment from Apple’s plans for mixed reality
  • A company that is a clear and present danger to our privacy

Dear Reader,

I hope all the incredible developments around generative AI (artificial intelligence) that I’ve been sharing in The Bleeding Edge have excited and motivated many readers to seek out and experiment with the technology themselves.

You can do this by creating an account at OpenAI to access ChatGPT. Another way is to use Microsoft’s Bing search engine, which now incorporates OpenAI’s technology. And as I wrote earlier this week, Google has started to make its own generative AI available to a limited number of users in the U.S. and U.K.

The more we experiment with this technology, the more incredible it proves to be. Yet it occurred to me that to access it, we typically use a desktop/laptop software environment. The reality is that the computing device nearly everyone uses the most is the smartphone. And it was just a matter of time before a company cracked the code and figured out how to bring GPT-4 to the smartphone in a simple-to-use interface.

Which is why Oasis AI caught my eye when I saw it on Product Hunt. Product Hunt is THE place in the industry where new software applications are launched and discovered. There’s so much activity on the site, it can be overwhelming for the uninitiated. But it’s a great resource for discovery.

The Oasis AI app jumped out at me because of its elegant simplicity. I can’t imagine an easier way to access and leverage an immensely powerful technology like GPT-4.

Here are some screenshots…

The app has one primary screen, shown on the left above, with a microphone “button” that we tap to start recording whatever we want to say. There’s no need to overthink it. If there’s a topic we’d like to put into a variety of formats, all we have to do is start rambling.

The screen on the right is the “Select Outputs” screen. This allows users to choose which outputs they would like the AI to produce. For example, I might only need a blog post, a LinkedIn Post, and a Twitter thread. Just select what you might use, and you’re done.

I recorded a minute-and-a-half stream of consciousness about the upcoming SpaceX Starship orbital test launch scheduled for April 17 just to experiment with the app. Here’s what Oasis AI produced in seconds…

Above, I’ve provided three different outputs from the AI, all produced from the same audio recording. I thought these were interesting examples because it’s easy to see the stylistic differences between a blog post, a TikTok script, and a Twitter thread. The communication style is quite different, but the content is still the same.

The best part is that the entire process took less than two minutes. And that’s including the time it took for me to record my minute-and-a-half speech.

Naturally, this technology can be used to draft e-mails for work, compose a text message, or even create a to-do list if the subject is task-oriented.
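Under the hood, an app like this is most likely just chaining a speech-to-text model with a GPT-4 prompt for each output format the user selects. Below is a minimal sketch of that pattern using OpenAI’s public API (the pre-1.0 interface of its Python library). To be clear, this is my own guess at the general approach, not Oasis AI’s actual implementation, and the file name, prompt wording, and model choices are all assumptions:

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supply your own OpenAI key

# Step 1: transcribe the voice memo with OpenAI's Whisper model
with open("voice_memo.m4a", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)["text"]

# Step 2: ask GPT-4 to restyle the same transcript into each selected format
formats = ["blog post", "LinkedIn post", "Twitter thread"]
outputs = {}
for fmt in formats:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the user's rambling transcript as a polished {fmt}. "
                    "Keep the content the same; only change the style."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    outputs[fmt] = response["choices"][0]["message"]["content"]

# Same source material, three different communication styles
for fmt, text in outputs.items():
    print(f"--- {fmt} ---\n{text}\n")

The whole trick is that the content comes from one recording, and only the system prompt changes per format.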

Aside from the unbelievable utility of a software app like this, it’s so easy to use… no tutorial or training required. I hope you enjoy it and I encourage you to experiment with this incredible technology. 

It’s an amazing productivity hack that finds utility in both our personal and professional lives.

Cerebras democratizes access to generative AIs…

A private company we know well, Cerebras, just released seven new GPT-based language models, a family it calls Cerebras-GPT. And Cerebras made them all available for free.

This is an incredible development. Until now, large language models have been dominated by proprietary systems that only major companies like Alphabet (Google), Microsoft, and OpenAI control.

Somewhat surprisingly, Cerebras isn’t a software company. It’s an AI-focused semiconductor start-up. Its claim to fame is that it made the world’s largest AI chip. Here it is:

Source: Cerebras

As we can see, Cerebras’ AI-focused semiconductor is huge. This kind of semiconductor design is referred to as a wafer-scale engine (WSE). This is because the semiconductor itself is the size of a single silicon wafer. And this is what its seven GPT-based language models were trained on.

The nuance here is that Cerebras’ chip was designed specifically for AI. So its language models are among the first to be trained on an AI-application-specific chip rather than on general-purpose processors.

For comparison, OpenAI’s ChatGPT and its early competitors were each trained on Nvidia’s graphics processing units (GPUs). These are powerful general-purpose semiconductors originally designed for graphics processing. They are now considered the workhorses of artificial intelligence and machine learning.

This gives Cerebras a competitive edge. It trained all seven of its AI models in just a few weeks, something that takes months using general-purpose systems. That was the point of the exercise for Cerebras… to demonstrate how its application-specific technology can outperform general-purpose semiconductors.

And what’s interesting here is that each of Cerebras’ models is a different size and was trained on a different amount of data. This graph tells the story:

Source: Cerebras

As we can see, each successive language model Cerebras trained is larger and was fed a correspondingly larger data set.

The smallest model has just 111 million parameters, and the largest has 13 billion. The other five models fall at sizes in between. For comparison, GPT-3, the model ChatGPT was built on, has 175 billion parameters.

Training large language models (LLMs) of different sizes is relevant because certain applications don’t need massive language models to perform the desired tasks. In reality, the industry won’t have just one LLM performing every task. There will be many, each designed to perform a category of tasks extremely well and efficiently. Even individual companies will likely have their own generative AIs that specialize in that company’s area of expertise.

Cerebras has demonstrated the direct relationship between the number of parameters and the amount of processing required to train a specific model. What’s more, Cerebras made its data sets and training instructions open and available for anyone to see. This is huge.
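To put a rough number on that relationship: a widely used rule of thumb is that training compute is approximately 6 × parameters × training tokens, and a compute-optimal model (the “Chinchilla” recipe Cerebras reports following) uses roughly 20 training tokens per parameter. The quick back-of-the-envelope sketch below uses those assumptions; the figures are illustrative, not Cerebras’ published numbers:

# Back-of-the-envelope training compute using two common rules of thumb:
#   FLOPs ~ 6 * parameters * training_tokens
#   compute-optimal ("Chinchilla") training uses ~20 tokens per parameter

def training_flops(params: float, tokens_per_param: float = 20) -> float:
    """Approximate total training FLOPs for a model of a given size."""
    tokens = params * tokens_per_param
    return 6 * params * tokens

for name, params in [("111M-parameter model", 111e6), ("13B-parameter model", 13e9)]:
    print(f"{name}: ~{training_flops(params):.2e} training FLOPs")

# Because data scales in proportion to model size, compute grows with the
# square of the parameter count: the 13B model needs roughly
# (13e9 / 111e6)**2, or about 13,700x, the compute of the 111M model.

That quadratic growth in compute is exactly why efficient training hardware matters so much at the larger model sizes.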

By making its training instructions openly available, Cerebras is blowing the industry wide open. Now any organization can take what Cerebras has done and customize it to create its own AI model suited to its specific purposes.
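To make that concrete, here’s a minimal sketch of what picking up one of Cerebras’ released checkpoints might look like. It assumes the weights are published on Hugging Face under names like cerebras/Cerebras-GPT-111M (the smallest of the seven) and that the transformers library is installed:

# Minimal sketch: load a released Cerebras-GPT checkpoint and generate text.
# Assumes `pip install transformers torch` and the checkpoint name below.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "cerebras/Cerebras-GPT-111M"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Generative AI will change everyday work by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

From a starting point like this, fine-tuning on a company’s own documents is the customization step, and it requires only a tiny fraction of the compute that the original training run did.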

This is a thrilling development. And it’s a genius move by Cerebras. It reminds me of what Nvidia has done with autonomous driving technology, developing prototypes for autonomous software that automotive manufacturers can build upon.

Ultimately, with this contribution, Cerebras is further democratizing AI. Making this kind of software available to all removes the historically high financial barriers to working with artificial intelligence. It makes the technology accessible even to students, hobbyists, and cash-strapped entrepreneurs. This development will speed up the adoption of generative AI.

As for Cerebras, suddenly it’s going to be a big player when it comes to training new AI-based language models. If organizations want to build a model that’s hyper-specific to their own purposes, using Cerebras’ chips to do the training makes the most sense. It will be more economical to train on an array of Cerebras semiconductors than on an Nvidia platform.

So Cerebras remains one of the most interesting early-stage semiconductor companies out there. As I write, Cerebras is already valued at $4.3 billion as a private company. This isn’t surprising given how explosive the growth has been in this industry over the last year.

We may have to wait a little longer for Apple’s Reality One launch…

Some news just leaked around Apple’s Reality One headset. It seems mass production has been delayed until the third quarter.

Concept Design of Apple’s Reality One Mixed-Reality Glasses

Source: macrumors.com

This is something we’ve been tracking closely for the last eighteen months.

As a reminder, the Reality One product is Apple’s upcoming augmented reality (AR)/virtual reality (VR) headset. And up to this point we’ve been expecting Apple to launch the headset over the summer… but it seems that plan has slipped.

Mass production for the headset was slated to start earlier this spring. But it appears Apple has had second thoughts on the timing. Rumor has it production has been pushed to Q3, and the product launch along with it.

That means Apple may have pushed the product announcement back to its major event this fall. That’s when Apple typically reveals its latest iPhone model. This year, the event may very well center on the augmented reality headset.

So this is a bit of a disappointment. I’m itching to see what both the hardware and software are capable of doing.

I doubt this is a problem with the hardware. The technology to produce a product like this already exists and is well protected by Apple’s patents. If I had to guess, Apple is working hard to improve its software applications so that there are a handful of spectacular apps with a “wow factor” at the time of the product launch.

Either way, we’ll have something to look forward to in the fall.

This controversial company is loved by law enforcement…

We’ll wrap up today with an update on Clearview AI. Regular readers will remember this as the company that threatens to kill privacy forever.

That’s because Clearview AI built its entire business around “scraping” pictures from publicly available resources on the internet like social media. It used bots to collect images of people from all over the world – without their consent.

As of last March, Clearview had scraped more than 10 billion pictures in total. Today that number is more than 30 billion.

And Clearview AI applied its facial recognition technology to all of these pictures… then it associated a name with each face.

The result? The largest facial recognition database the world has ever seen.

Naturally this raised all kinds of privacy concerns. Clearview was even hit with multimillion-dollar fines in Europe and Australia for its violation of privacy laws. But none of it mattered…

In a recent interview with the BBC, Clearview’s CEO said that the company has run about a million searches on behalf of various police departments across the U.S. We can think of these searches as massive virtual police line-ups.

Among those using Clearview’s tech is the Miami Police Department. Miami PD reported that it has used Clearview’s process to investigate a wide range of crimes. Everything from murder to something as simple as shoplifting.

Naturally, law enforcement loves this kind of technology. It makes their jobs easier.

But there’s a major risk here. The AI isn’t perfect. And Clearview admits as much. The company doesn’t guarantee accuracy.

So this raises the possibility of somebody getting falsely accused and convicted of crimes they didn’t commit. Clearview’s tech could put innocent people in jail… all due to the AI making a mistake.

This should make all of us very uncomfortable. The challenge is that law enforcement is unlikely to stop using Clearview’s services. It’s just too convenient. And in most cases, the software actually does get things right.

The bottom line is that we need to be very wary about this kind of technology. Clearview has been tremendously successful as a business so far… but at what cost? And what kinds of safeguards can be put in place to protect those who are innocent and have been falsely identified by the software as the perpetrator?

Regards,

Jeff Brown
Editor, The Bleeding Edge