Google and OpenAI are racing to outdo each other’s AI systems.

On September 19, Google released the latest version of its AI chatbot, Bard.

New features elevate Bard from a regular chatbot into something much closer to a personal assistant.

And this week, OpenAI began a limited release of its newest version of ChatGPT.

It can now see images and make sense of them.

So today, let’s take a closer look at the features Google and OpenAI built into their latest AI systems.

As you’ll see, the innovations are coming fast and furious.

ChatGPT Can Now See

OpenAI is still limiting who can use the latest version of its popular chatbot. So I haven’t gotten my hands on it yet.

But based on reviews from early testers, its biggest upgrade is the ability to make sense of photos.

One example online showed a tester troubleshooting how to adjust his bike seat.

He took a photo of the bike and asked how to lower the seat height.

ChatGPT was able to correctly tell him what tool he needed… and walk him through how to do it.

Here’s a clip of the exchange…

AI developers call this “multimodality.” It’s tech jargon for the ability to interact with AI through text, images, and even videos.
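
To make that concrete, here’s a minimal sketch of what a text-plus-image request could look like through OpenAI’s Python SDK. The model name and image URL are placeholders for illustration, not the exact setup OpenAI uses inside ChatGPT.

```python
# A minimal sketch of a multimodal (text + image) request, assuming OpenAI's
# Python SDK with image-input support. Model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder for a vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "How do I lower the seat on this bike?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/bike.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```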

Earlier this year, developers were hotly debating when popular AIs like GPT-4 would become multimodal.

And Bard’s developers are also pushing the boundaries of what these systems can do.

Meet Bard, Your New Personal Assistant

Last week, Google released its newest version of Bard.

The update makes Bard more like a personal assistant.

For example, Bard can search your Gmail and Google Drive for information, even if it is buried deep within your inbox or documents.

That allows you to ask it, for example, to find all the emails you got from a client about a specific topic. Or to find and summarize all the documents you created for a certain project.

You can also ask it for information about photos.

Say a friend sends you a photo from their vacation. You can ask Bard where the photo was taken. You can then ask it to give you flight and hotel options to go there.

That’s going to make our online lives a lot easier.

Instead of having to go to multiple sources… or dig through our inboxes or folders to find something… we can now just ask Bard to fetch the information.

So what does that mean for investors?

AI Is Going Everywhere

As I’ve been showing you, the AI boom will play out in three phases – hardware, software, everywhere.

It starts with hardware. AI systems don’t use the same kind of chips that go into your laptop or smartphone.

They use a special kind of chip called a Graphics Processing Unit (“GPU”).

Like our brains, these chips perform many computations at the same time. This makes GPUs ideal for accelerating AI systems, which involve large amounts of data and complex calculations.
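
To see what that parallelism buys, here’s a minimal sketch in PyTorch, assuming a CUDA-capable Nvidia GPU is available. The matrix sizes are arbitrary; the point is that the same multiplication gets spread across thousands of GPU cores instead of a handful of CPU cores.

```python
# A minimal sketch of GPU parallelism, assuming PyTorch and a CUDA-capable GPU.
import torch

# Large matrix multiplication -- the kind of math at the heart of AI models.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# On the CPU, this multiplication runs on a handful of cores.
cpu_result = a @ b

# On a GPU, thousands of simple cores chew through the same math in parallel.
if torch.cuda.is_available():
    gpu_result = a.cuda() @ b.cuda()
```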

Nvidia is the leading maker of GPUs. And its A100 and H100 data-center GPUs are the go-to chips for AI developers.

It’s why Nvidia’s revenue has doubled this year and its share price shot up 190%. Its hardware is allowing these tech giants to build increasingly better AIs.

Now, with GPT-4 and Bard, the software phase is taking off…

Software companies like these are the kind of AI stocks that will boom next.

That’s why my upcoming recommendation at my Near Future Report advisory is an AI software play.

The final step is AI everywhere.

That’s when companies outside the tech sector will use AI to solve some of the world’s biggest challenges.

We’ve seen glimpses of this. Already, Bank of America and JPMorgan Chase are using AI to help prevent financial fraud.

AI systems can analyze vast amounts of data to identify unusual patterns of activity that usually mean a fraudster is at work.

For example, an AI system can identify a customer who is suddenly making many transactions in unusual locations.
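
Here’s a minimal sketch of that idea, using scikit-learn’s IsolationForest as a stand-in for the proprietary systems the banks actually run. The transaction features (amount, hour of day, distance from home) are made up for illustration.

```python
# A minimal sketch of transaction anomaly detection with scikit-learn's
# IsolationForest. Feature columns (amount, hour of day, distance from home
# in km) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 1,000 "normal" transactions: modest amounts, daytime, close to home.
normal = rng.normal(loc=[60, 14, 5], scale=[25, 3, 2], size=(1000, 3))
# Two suspicious ones: large amounts, middle of the night, far from home.
suspicious = np.array([[4000, 3, 900], [2500, 4, 1200]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags a likely anomaly, 1 looks normal
```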

AIs can also detect forged or fraudulent documents such as checks, credit cards, and passports.

We still need more AI hardware and software before we enter the everywhere stage.

But judging by the rapid advancements in GPT-4 and Bard, we’re heading toward that destination at lightning speed.

Until then, I’d encourage you to try out the latest versions of these AIs.

The multimodal feature for GPT-4 is still in limited release. If you’re already a paid user of ChatGPT Plus, you can find out how to sign up for the waitlist here. And you can try out Bard here.

Once you’ve tried it out, let me know what you think by writing to me at [email protected].

Regards,

Colin Tedards
Editor, The Bleeding Edge