• Autonomous deliveries in London are the future
  • As predicted, Magic Leap becomes an IP play…
  • How generative AI “killed” this search engine

Dear Reader,

It was a long time coming…

Yesterday, Apple finally announced its long-awaited new product for augmented reality – Vision Pro.

The product was previously referred to in Apple’s developer software as Reality Pro – as usual, Apple used a placeholder name in the run-up to the announcement. But the final product looks quite similar to the mockup photos I’ve shared previously in The Bleeding Edge.

Source: Apple

This new product announcement is significant. It’s the first new product category from Apple since the release of the Apple Watch in the spring of 2015. The Apple Watch was yet another remarkable success story as Apple quickly became the largest watchmaker in the world.

Such a long gap between new product categories is unusual for a company as massive, well-resourced, and innovative as Apple. But Apple wanted to redefine the product category of augmented reality…

And that’s exactly what it did, even down to its language. Apple dropped “augmented,” “mixed,” “virtual,” and even “extended” reality. What we’ll be hearing more of from Apple is the word “spatial.” Spatial computing, spatial computers, spatial photos, spatial videos, spatial audio, and yes… spatial reality.

The Vision Pro is a beautiful piece of hardware. The battery is not built into the headset, which keeps the weight down for users. Power comes from a portable battery pack that connects via a cable to the side of the strap, right behind the speakers that provide “spatial” audio.

Source: Apple

But it’s not the cool design that makes the Vision Pro so interesting. It’s the hardware inside the device. It’s powered by Apple’s M2 chip, supplemented by a new chip – the R1 – dedicated to real-time processing of video from the real world.

And aside from the powerful chips that Apple uses, the camera and sensor technology is pretty incredible.

Source: Apple

There are 12 cameras embedded in the Vision Pro. Twelve. There are cameras that look out across the user’s field of vision, two that point down to “see” and track our hands, two IR cameras that track our eyes, and cameras that scan our face as we put on the headset. All of this is supplemented with LiDAR and TrueDepth cameras that enable depth sensing and 3D imaging. This is useful for “seeing” and measuring the real world, as well as for augmenting it with animation, images, and information.
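
To make the depth-sensing piece concrete, here’s a minimal, self-contained sketch – my own illustration, not Apple’s API – of the standard pinhole-camera math that turns a single depth sample (from a LiDAR-style sensor) plus the camera’s intrinsics into a 3D point the device can measure or anchor content to. All of the names and numbers are illustrative.

```swift
import Foundation

// Illustrative only -- this is not Apple's API. It shows the standard
// pinhole-camera math that turns a depth sample into a 3D point.

/// Simplified camera intrinsics: focal lengths and principal point, in pixels.
struct CameraIntrinsics {
    let fx: Double, fy: Double   // focal lengths
    let cx: Double, cy: Double   // principal point (optical center)
}

/// A 3D point in the camera's coordinate frame, in meters.
struct Point3D {
    let x: Double, y: Double, z: Double
}

/// Unproject a pixel (u, v) with measured depth d (meters) into 3D space.
/// This is how a depth map lets a headset "measure" the real world.
func unproject(u: Double, v: Double, depth d: Double,
               intrinsics k: CameraIntrinsics) -> Point3D {
    let x = (u - k.cx) * d / k.fx
    let y = (v - k.cy) * d / k.fy
    return Point3D(x: x, y: y, z: d)
}

// Example: a depth sample 1.8 m away, slightly off-center in a 1920x1080 frame.
let k = CameraIntrinsics(fx: 1400, fy: 1400, cx: 960, cy: 540)
let p = unproject(u: 1010, v: 560, depth: 1.8, intrinsics: k)
print(String(format: "Point at (%.3f, %.3f, %.3f) m", p.x, p.y, p.z))
```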

The purpose of scanning our faces is application-driven. Apple envisions users taking FaceTime calls with the Vision Pro. Rather than holding a phone in front of their face at some odd angle, a user can simply wear the headset. Using software and the IR cameras that track our eyes, the tech will display an image of us (as though we weren’t wearing the headset) to whomever we’re speaking with. And with all of those sensors, it will be able to mimic our head and eye movements – and even our smiles – using the cameras that point down toward our hands.

The key point is that this is an entirely new computing platform with a new operating system – visionOS. This powerful new software has the potential to reinvent how we interface with computing systems. Those cameras on the bottom of the headset are also there to enable us to control the software with hand and finger gestures, just as the IR cameras allow us to control it with our eyes.
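
For developers, the gaze-and-pinch model should be straightforward to target. Based on what Apple has described – and this is a minimal sketch of my own, not Apple sample code – the system fuses the eye-tracking and hand-tracking data privately and delivers the result to an app as an ordinary tap, so standard SwiftUI code works as-is:

```swift
import SwiftUI

// A minimal sketch (my own example, not Apple sample code) of how
// "look and pinch" input reaches an app: the system combines the
// eye-tracking and hand-tracking data and hands the result to SwiftUI
// as an ordinary tap.
struct SpatialCounterView: View {
    @State private var taps = 0

    var body: some View {
        VStack(spacing: 24) {
            Text("Pinches registered: \(taps)")
                .font(.title)

            // The user looks at this button and pinches their fingers.
            // No pointer or touchscreen is involved; the gesture arrives
            // as a normal button action.
            Button("Tap by looking and pinching") {
                taps += 1
            }
        }
        .padding()
    }
}
```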

It was a long time coming, but I believe it will prove worth the wait.

Apple developers will soon get the latest software development kits so they can get to work on new “spatial” applications. The Vision Pro will initially be available in the U.S. in the first quarter of next year at $3,499. Obviously, the “Pro” version is not meant for the mass market. It’s just the starting point – a catalyst to launch the hardware and software applications that will redefine computing.

Apple is already working on a mass market follow-on product. That’s the device that will be smaller, lighter, and dramatically cheaper… I suspect on par with what consumers pay for high-end smartphones today.

And that’s the point. We should think about that future product release as the next generation of mass-market computing devices – spatial computing. A device we can wear while still seeing the real world, the same way we still see the world through tinted sunglasses. And one that seamlessly empowers us to interact with software and the internet without ever having to touch a smartphone.

I can’t wait to get my hands on the future… We have so much to look forward to.

The disintermediation of shopping…

In the world of artificial intelligence (AI), there have been two major geographic hot spots – the United States and the U.K.

A handful of top-tier academic institutions and companies in the United States have been the genesis for incredible AI progress.

And so have the Universities of Cambridge and Oxford in the United Kingdom. DeepMind is a perfect example. AI-focused semiconductor company Graphcore is another. Both are based in the U.K.

So, it’s not surprising that the most promising self-driving company outside of the U.S. resides in the U.K. After all, AI technology is what powers self-driving cars.

The company is called Wayve. I’ve been tracking it for years now. And I’m excited to say that Wayve just made a big announcement on a project operating at scale. The company is teaming up with supermarket chain Asda to pioneer autonomous grocery deliveries in the London area.

And what really caught my eye here is just how ambitious this pilot program is. It’s going to cover about 72,000 households in London. That makes this a large-scale test.

Wayve and Asda estimate that there are 170,000 people living in their coverage area. These residents are already able to order groceries online and have Asda deliver them at a pre-determined time. Asda is going to route some of these orders to Wayve for fulfillment and build from there.

To me, this is the kind of real-world pilot that emerging companies need to really advance their technology. Obviously, London is a dense metropolitan area. Its roads aren’t the easiest for a self-driving car to navigate. I’ve driven them myself.

But Wayve’s self-driving technology looks to be up to the challenge. Below, we can see one of the company’s self-driving cars as it navigates busy streets. It can even steer around idling vehicles and construction equipment.

And the data that Wayve collects from its self-driving cars as they fulfill these delivery orders will be invaluable. The company will be able to upgrade its technology quickly based on what the self-driving AI “learns.”

So Wayve is set to make a lot of progress as this pilot program ramps up. That makes it a company we’ll want to keep a close eye on going forward.

And this also speaks to how the self-driving revolution will be a worldwide phenomenon. We spend a lot of time discussing the progress in the United States – and some in China, which has been aggressively deploying the technology – but it’s developments like this that show the autonomous future will be global.

This application is as much a cultural shift as it is a technological one. Post-pandemic, many consumers have retained the habit of buying groceries online. For many, it has become preferable. It’s convenient, saves a ton of time, and frees that time up for other activities. Many consumers strongly prefer disintermediated transactions, and that’s a trend that will continue to grow.

Self-driving delivery vehicles – whether cars or purpose-built vehicles like Nuro’s – are the solution to that market demand.

My prediction for Magic Leap just came true…

A few weeks ago, we discussed Magic Leap’s epic fall from grace over the last six months. At the time, the company had just signaled that it was throwing in the towel on its dream of becoming a full-stack solution in the augmented reality (AR) space.

As a reminder, Magic Leap was primed to become the Apple of the AR space. That’s because Magic Leap positioned itself as a “full-stack” solution. It was set up to produce the headset, the AR operating system, proprietary lens technology, and the software applications that would run on it. This is what Apple does with its consumer electronics devices. That’s why Apple’s products are so good.

And as we learned yesterday, Apple itself just became the Apple of the AR space. Or, according to Apple, the “spatial computing” space.

Magic Leap just couldn’t put all the pieces together and make it all work. And with its announcement a few weeks ago, I suggested that Magic Leap had become an intellectual property (IP) play.

My prediction was that the company would sell its IP for a fraction of what it was once worth. And that’s exactly what appears to be happening.

News just broke that Meta is in negotiations with Magic Leap to work out a patent licensing deal. This is exactly what I was expecting. There just wasn’t a path forward for Magic Leap as a product company anymore. Its strategic shift from a consumer product to an enterprise AR play was a terrible failure, leaving its patents as its remaining assets.

Now, I did suggest that Apple might be the top suitor for Magic Leap’s IP. After all, Apple has been very acquisitive over the last decade, picking up both companies and IP in this space. But any potential competitor to Apple should also be in the mix. So it’s no surprise to see Meta aggressively entering the fray.

That’s because Meta currently controls over 80% of the virtual reality (VR) headset market. This is thanks to its strategic acquisition of Oculus back in 2014. And it is looking to differentiate itself from whatever Apple does.

However, Meta knows that many of its VR customers will migrate over to mixed reality (AR/VR) headsets as soon as they are ready. And as of yesterday, Apple just leaped to the top with its Vision Pro. While it won’t go on sale until the first quarter of next year, the product will soon be in the hands of developers, taking mindshare away from Meta and all the other aspirational players.

So it makes perfect sense that Meta is looking to beef up its technology for future product offerings. And I believe Magic Leap’s lens technology is some of the most interesting tech in the industry. It’s the one thing that Magic Leap did right, and ironically, it is built on semiconductor technology.

Magic Leap kept its lens tech close to its chest. The company didn’t even allow contract manufacturers to produce its lenses. Magic Leap made them in-house – specifically to keep the design a secret.

With the latest discussions, it appears that Meta is looking to employ that technology in a future AR/MR headset. If it’s superior to the lens tech Apple has, licensing it could give Meta a strong point of differentiation – Apple is using micro-OLED display technology for the Vision Pro’s “screens.”

Generative AI just killed an upstart search engine…

We’ll wrap up today with an update on Neeva, a once-promising search engine start-up. The name may ring a bell for long-time readers. We first profiled Neeva back in July 2021.

Neeva was interesting because it was building a new kind of search engine. The company was founded by a team of ex-Google engineers. Their goal wasn’t to replicate Google’s approach to search. It was to index the web and build a completely different business model.

As we know, Google generates nearly all of its revenue from advertising with no regard to consumers’ privacy. Neeva’s founders wanted to provide an alternative. Their goal was to reindex the internet and provide a search engine with no ads whatsoever.

So instead of a search engine geared to optimize advertising, Neeva’s would be optimized for objectivity, accurate results, and privacy. The business model they designed was an ad-free, paid subscription.
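
To make “reindexing the internet” concrete, here’s a toy sketch – my own illustration, not Neeva’s code – of the core data structure every search engine is built on: an inverted index mapping each term to the pages that contain it. A subscription search engine runs queries against an index like this directly, with no ad auction or user profiling layered on top. The URLs and text below are hypothetical.

```swift
import Foundation

// A toy illustration (not Neeva's code) of an inverted index:
// term -> the set of pages containing that term.
struct InvertedIndex {
    private var postings: [String: Set<String>] = [:]

    /// Tokenize a page's text and record which terms it contains.
    mutating func add(url: String, text: String) {
        let terms = text.lowercased()
            .components(separatedBy: CharacterSet.alphanumerics.inverted)
            .filter { !$0.isEmpty }
        for term in terms {
            postings[term, default: []].insert(url)
        }
    }

    /// Return pages matching every query term -- just the index and
    /// the query, with no advertising layer in between.
    func search(_ query: String) -> Set<String> {
        let terms = query.lowercased().split(separator: " ").map(String.init)
        guard let first = terms.first, var results = postings[first] else { return [] }
        for term in terms.dropFirst() {
            results.formIntersection(postings[term] ?? [])
        }
        return results
    }
}

// Hypothetical pages, purely for illustration.
var index = InvertedIndex()
index.add(url: "example.com/ev-batteries", text: "Solid-state EV batteries explained")
index.add(url: "example.com/ev-charging", text: "Fast charging for EV owners")
print(index.search("ev batteries"))   // ["example.com/ev-batteries"]
```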

I found this to be an attractive proposition. I would certainly pay a reasonable monthly subscription for a search engine that wouldn’t collect my data and steer me towards advertisements.

That’s why I was disappointed to see that Neeva decided to shut down development of its search engine. It struggled to gain scale, as it was hard to convert consumers to a paid search model. And the arrival of generative AI was the final nail in the coffin. Neeva tried to pivot by developing its own generative AI, but it was too far behind Google, OpenAI, and Microsoft to catch up, and it simply didn’t have the funding to train a massive large language model of its own.

As we’ve discussed before, it’s become clear just in the last three months that generative AI is the future of search. Neeva knew that a legacy-style search engine would no longer be competitive.
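
For a sense of why generative AI changes search, here’s a rough sketch – hypothetical, not any company’s actual system – of the retrieve-then-generate pattern behind generative search: pull relevant passages from a conventional index, then have a language model compose a direct answer from them, rather than returning a page of links.

```swift
import Foundation

// Hypothetical sketch of "generative search": retrieve passages from an
// index, then ask a language model to compose a cited answer. The
// LanguageModel protocol stands in for whatever model a company runs.
protocol LanguageModel {
    func complete(prompt: String) -> String
}

struct GenerativeSearch {
    let model: LanguageModel
    // Returns (url, passage) pairs pulled from a conventional index,
    // e.g. the inverted-index sketch above.
    let retrieve: (String) -> [(url: String, passage: String)]

    func answer(_ query: String) -> String {
        let sources = retrieve(query)
        let context = sources
            .map { "[\($0.url)] \($0.passage)" }
            .joined(separator: "\n")
        let prompt = """
        Answer the question using only the sources below, citing URLs.

        Sources:
        \(context)

        Question: \(query)
        """
        return model.complete(prompt: prompt)
    }
}
```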

But all wasn’t lost. Neeva’s recent work on generative AI bore fruit. Cloud software powerhouse Snowflake just stepped in to acquire the search engine start-up. No doubt Snowflake is interested in Neeva’s search and generative AI tech.

I think this is a smart move for Snowflake. While the terms haven’t been disclosed, I believe Snowflake was able to pick up Neeva at an inexpensive price, which will enable it to deploy Neeva’s generative AI across its own cloud services platform. This will be an asset for Snowflake’s broad customer base.

That said, I’m disappointed that the world lost a potential competitor in the search industry. As we learned through the Twitter Files, neither Microsoft (powered by OpenAI) nor Google is trustworthy – both have extensively manipulated the data and information we are served when we search for the truth.

I only hope that another start-up rises up to challenge them.

Regards,

Jeff Brown
Editor, The Bleeding Edge