• Boeing’s big bet on eVTOLs…
  • Magic Leap’s big launch…
  • Will this technology transform Hollywood as we know it?

Dear Reader,

Success!

A couple of weeks ago, we enjoyed news from NASA concerning the Double Asteroid Redirection Test (DART). DART was designed as a test to determine if the trajectory of an asteroid could be affected through a collision.

The DART spacecraft traveled 7 million miles to slam into Dimorphos, the smaller of two asteroids, which orbits its larger counterpart, Didymos.

Below, we can see the immediate aftermath of the collision (Dimorphos is at the bottom of the image with all the ejecta):

DART’s Collision Test

Source: NASA

But that single image wasn’t enough to determine if the mission was a success. A collision is one thing, but whether or not it’s possible to materially change the orbit of Dimorphos is another.

While it took a couple weeks for the data to come in from this extraterrestrial scientific experiment, the results are now in. And they’re spectacular.

Prior to the collision, Dimorphos took 11 hours and 55 minutes to orbit Didymos. After the collision, Dimorphos’ orbit now takes only 11 hours and 23 minutes – a shortening of 32 minutes.
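For anyone who wants to check the math, here’s a quick back-of-envelope calculation in Python (a minimal sketch using only the orbital periods reported above):

```python
# Quick check of the orbital-period change reported by NASA
before = 11 * 60 + 55   # Dimorphos' pre-impact period, in minutes (11h 55m)
after = 11 * 60 + 23    # post-impact period, in minutes (11h 23m)

change = before - after
print(f"Orbital period shortened by {change} minutes")          # 32 minutes
print(f"That's roughly {change / before:.1%} of the original")  # about 4.5%
```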

A 10-minute orbital shortening would have been considered a great success. So even though 32 minutes may not seem like much, it has epic implications for the protection of planet Earth. 

The implication is that if NASA can identify a threatening asteroid early enough, it can intercept it and alter its trajectory so that it won’t collide with our planet.

Far enough out, even the smallest adjustments in trajectory can result in us avoiding a planetary disaster.

For years, organizations like the B612 Foundation have worked to map and track asteroids in our solar system so that we can identify any threats. Unfortunately, we’ve only discovered about 40% of what’s believed to be out there.

Fortunately, NASA will launch its Near-Earth Object (NEO) Surveyor with the explicit purpose of improving our understanding of potential threats.

The NEO Surveyor is an infrared telescope – like the James Webb Space Telescope – designed specifically to find at least two-thirds of all near-Earth objects larger than 140 meters (460 feet).

This is exactly the kind of information that we’ll need to avert a planetary disaster.

DART’s Impact on Dimorphos

Source: DART

Let’s just hope politicians and unelected officials of the World Economic Forum don’t create one of their own doomsday scenarios before that time comes. 

If this global “cabal” of elites would simply get out of the way, we’d have so much to look forward to.

Wisk just revealed its sixth-generation aircraft…

Last week, we talked about Kittyhawk – one of the most promising electric vertical takeoff and landing (eVTOL) startups – shutting down.

The news came as somewhat of a surprise considering the almost unlimited capital Google co-founder Larry Page had available to fund the project.

But the good news is that Wisk Aero – the joint venture between Kittyhawk and Boeing – survived. And Wisk just revealed its sixth-generation aircraft. Here it is:

Wisk’s Four-Passenger eVTOL

Source: Wisk

This is the aircraft Wisk Aero will take into commercial operations. It’s a four-seater capable of flying about 90 miles at a time. The eVTOL’s operating altitude is between 2,500 and 4,000 feet.

And get this: It only takes about 15 minutes to charge. That means it will have minimal downtime between its roughly 90-mile flights.

Even more impressive, Wisk Aero is going fully autonomous right from the beginning. This eVTOL has four seats for passengers… and that’s it. There’s no seat for a pilot because there won’t be any pilots on board.

Instead, Wisk will employ remote “surveillance” pilots. These are professionals in a control room who will monitor flights. And if something goes wrong, they’ll remotely take control of the aircraft.

This is the same model we’re seeing deployed in the autonomous trucking industry. And it makes perfect sense.

Wisk will ensure safety by having “remote” pilots on standby. And this enables the company to maximize revenue by having up to four passengers per flight. As a result, Wisk’s services will be cost-competitive.

In fact, we just got our first look at what eVTOL travel will cost. Wisk announced that it will charge $3 per passenger, per mile.

To quantify this, let’s suppose we live 20 miles outside a major metro area. An eVTOL flight into the city would cost us $60 one-way, or $120 round-trip. Not bad considering the amount of time saved that would otherwise be lost in a commute.
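That fare math is simple enough to sketch out in a few lines of Python (the $3-per-passenger-mile rate is Wisk’s announced figure; the 20-mile distance is just our hypothetical commute):

```python
# Back-of-envelope eVTOL fare using Wisk's announced pricing
RATE_PER_PASSENGER_MILE = 3   # dollars per passenger, per mile (Wisk's stated rate)
distance_miles = 20           # hypothetical commute from the example above

one_way = RATE_PER_PASSENGER_MILE * distance_miles
round_trip = 2 * one_way
print(f"One-way: ${one_way}, round-trip: ${round_trip}")  # One-way: $60, round-trip: $120
```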

The next step for Wisk is to secure what’s called the “Type Certification” from the Federal Aviation Administration (FAA). This is the first certification an aircraft manufacturer must receive. It states that the craft meets all safety standards required for a specific “type” of aircraft.

Once that’s in place, Wisk will need to get the “Production Certification.” This will allow Wisk to start manufacturing its eVTOLs.

Then the final step in the regulatory process is to get the “Air Carrier Certification.” That’s the green light to begin commercial operations, assuming Wisk itself operates as the air carrier. That role could, of course, also go to a third party.

As we discussed last week, the FAA’s process is rigorous. It takes a while to get all the required certifications.

So, Wisk Aero is targeting 2027 as the year it will begin commercial air transportation operations. But I think that’s a conservative projection.

Remember, aerospace giant Boeing is behind this company. And Boeing has decades’ worth of experience dealing with the FAA. I expect it will be able to navigate the process much faster than other startups can.

And we already know that there are a handful of other eVTOL companies on track to begin their air transportation operations by 2024-2025. By then, the industry will be in an explosive growth phase. So, I’m optimistic we’ll see Wisk’s eVTOLs in the air much sooner than five years from now.

In the not-so-distant future, we’ll have to decide: Do we hop in our car, call an Uber, or fly with Wisk?

I know which one I’d pick…

Magic Leap’s big enterprise launch…

Magic Leap finally released its second-generation augmented reality (AR) eyewear to the world. This has been years in the making. And the hardware is even more impressive than we expected…

If we remember, Magic Leap was the hottest name in the AR space a few years ago. I tested out the Magic Leap 1 headset in AT&T’s Manhattan location back in December 2019… and it was fantastic. Here’s a look:

The Magic Leap 1 Headset

As we can see, the glasses were big and bulky at that time. But the lens technology was phenomenal, as were the depth perception and the controls. I even had the chance to play an AR game.

I put on the headset and watched a portal open on a wall in front of me. Animated robots crawled out of the portal. I had to blast them with my ray gun. It was an incredibly immersive experience. And I walked away impressed.

However, Magic Leap ran into problems. The company was too early to the consumer space. And it got itself into financial trouble.

This forced the company to restructure and pivot to enterprise applications. And that’s what the Magic Leap 2 is designed for.

Here’s a look at the new product:

The Magic Leap 2 Headset

Source: Magic Leap

The first thing that jumps out is the reduction in the size and bulkiness of the glasses. The glasses are 50% smaller and 20% lighter than the first version. That’s critical for adoption – even in the enterprise space.

These glasses are designed for employees in the health care, retail, and industrial fields. They will “augment” the wearer’s field of vision with pertinent information relative to the task at hand.

For example, retail workers will be able to look at inventory and quickly identify critical information, including how much of a given item is in stock. That data will show up right in their field of vision.

Another great example is machine repair. As repair professionals work on an item, these glasses could label specific parts and even provide step-by-step instructions.

The Magic Leap 2 base model is priced at $3,300. And the high-end model costs about $5,000. These price points are certainly reasonable for the enterprise market.

As excited as I am to see a commercial launch, I’m disappointed that Magic Leap abandoned (for now, at least) its consumer-facing ambitions. Prior to the company’s pivot, Magic Leap was in talks with some very interesting partners like Lucasfilm. How fun would it be to have R2-D2 right there in our living room?

I’ve long believed that augmented reality will be the next consumer electronics craze. A company just needs to have the technology – and the ambition – to crack it. I wouldn’t be surprised if Magic Leap becomes an acquisition target precisely for that reason.

A larger player with more capital to invest would be able to evolve the technology into a consumer-grade product at a price point appropriate for mass adoption.

Movie producers on every street corner…

We’ve been talking a lot about text-to-image generative artificial intelligence (AI) over the last few weeks. These are AIs capable of producing remarkable images based on text input. The number of breakthroughs we’ve seen in such a short period of time is astounding.

In fact, somebody just won the art competition at the Colorado State Fair with a stunning image he created using generative AI.

Well, the industry has quickly refocused on applying AI to the next logical step. And that’s text-to-video generation.

Leading the way, Meta just announced its “Make-A-Video” feature. The name is pretty boring. But the technology is already promising.

It’s just like text-to-image in that users tell the AI what to produce. The only difference is that the AI outputs a three-second video rather than a static image.

Check it out:

Source: TechCrunch

Here we can see two distinctly different videos. These illustrate just how powerful Meta’s text-to-video tech is.

The first depicts a dog licking an ice cream cone at the beach – as the sun sets, no less.

Obviously, this is incredibly creative. There’s no real-world training example for the AI to model after. It had to imagine what such a scene would look like and extrapolate that into video.

The second video depicts two people walking under an umbrella in the rain. This is entirely AI-generated, but it looks like it could be real footage. While not perfect, it’s still quite lifelike.

So we can clearly see the potential for this kind of technology. And it’s only going to get better…

Soon, three-second videos will become three-minute videos. We’re just months away from that. And after that, three minutes will become 30 minutes. AI is developing at a pace that’s almost impossible to comprehend, far faster than the advancements in other fields.

It seems obvious to me that Meta is going to make this technology available to users of its social media and messenger properties – including Facebook, Instagram, and WhatsApp.

I can imagine that users will be able to create their own three-second GIFs to incorporate into their posts and messages. This is a perfect example of the kind of instant gratification that social media users have become accustomed to. And everyone’s GIFs will be personalized and unique, unlike what we’re used to today.

But let’s think bigger…

It doesn’t take much imagination to realize that the AI will eventually be able to produce feature films and television shows – all from just text input. Imagine providing an AI with a 30-page short story, or an entire novel, and having it produce a movie out of that input. It has remarkable implications. And that’s where this is going.

We could very well be just two or three years away from that kind of capability. Hollywood is in for a disruption, and it has no idea what’s coming. Movie producers will be on every street corner, capable of producing content in the blink of an eye.

Regards,

Jeff Brown
Editor, The Bleeding Edge