Colin’s Note: The tech world is buzzing about OpenAI’s mysterious Q* algorithm…

This artificial intelligence (AI) darling is no stranger to secrecy… But while the company remains tight-lipped about the nature of this new AI system, there’s plenty of speculation about what exactly Q* is.

The leading theory is that Q* is the next generation of OpenAI’s famous chatbot, ChatGPT. And if the rumors are true… It could be a major breakthrough that AI developers have been waiting for.

Of course, as with any major tech breakthrough, people aren’t only chattering about the incredible potential behind the development… There’s also a loud faction of folks who believe Q* could be something much more sinister…

It could pose a threat to all humanity.

I get into all the details – and our take on the “Q* threat” – in today’s video. Just click below to watch… Or read the edited transcript below.


Hey everyone, Colin Tedards here.

Today, we’re diving deep into the latest developments in artificial intelligence at OpenAI… And the huge impact they could have on your finances.

After a whirlwind week with Sam Altman’s brief exit and return as CEO, the tech world is now focused on OpenAI’s mysterious new venture: Q* (pronounced “Q Star”).

Taking a step back, recall that early statements from OpenAI’s board members hinted at a lack of communication leading to Sam Altman’s firing. Many are now speculating that Altman’s lack of candor – and his eventual firing – were related to Q*.

You might be asking: What is Q*?

Well, Q* appears to be the next generation of OpenAI’s famous large language model AI, ChatGPT. And while these are still just rumors, Q* would represent a massive leap in capability.

First, it’s reportedly capable of self-improvement. This isn’t just about AI getting smarter by feeding it more human-created data. It’s about AI evolving independently… getting smarter and more capable on its own.

Possibly more impressively, Q* is allegedly making strides in math, one of the weaknesses in current language models.

Current AI models like ChatGPT are very good at writing anything from poems to computer code… But they all struggle with solving even fairly basic math problems.

I’ve tried to get ChatGPT to find the square root of a number. After much thought, the model simply returns an error message.
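To put that weakness in perspective, here’s a minimal Python sketch (the number is just an illustration) of how trivially ordinary software handles the same request. A language model, by contrast, predicts its answer token by token, which is why it can stumble on even simple arithmetic.

```python
import math

# A one-line calculation that any calculator, or any programming
# language, handles instantly and exactly.
print(math.sqrt(60516))  # 246.0
```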

But if the rumors about Q* are true, OpenAI is making strides in developing a model that can solve not only basic math operations but complex ones as well.

This level of ability is turning heads and sounding alarm bells within the AI community.

A language model that’s capable of solving complex math equations would be a big deal in the world of AI … and potentially be a danger to humanity.

One primary danger of AI models excelling in math involves online encryption. All online security relies on math.

Login passwords, for example, are typically stored in scrambled form – hashed or encrypted – and recovering them means defeating a complex math algorithm.

Hackers gaining access to online banking or other personal data is fairly common. But they usually get in through brute-force attempts at guessing your passwords or through phishing techniques that trick you into handing over your login credentials.
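To make that concrete, here’s a minimal Python sketch (the password, salt, and guess list are made up for the example) of why attackers guess or phish rather than try to reverse the math behind a stored password:

```python
import hashlib
import os

# Sites typically store a salted, slow hash of your password,
# a one-way function, rather than the password itself.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)

# Reversing that hash directly means defeating the underlying math.
# So attackers guess instead, hashing common passwords until one matches.
for guess in (b"password", b"123456", b"hunter2"):
    if hashlib.pbkdf2_hmac("sha256", guess, salt, 600_000) == stored:
        print("Cracked by guessing:", guess.decode())
        break
```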

Breaking through encryption using an AI tool would basically be like leaving your front door open for burglars.

In theory, a sophisticated AI model could potentially break through the encryption that protects our banking, health care, and even our national security secrets.

In fact, the rumors circulating online suggest OpenAI’s Q* has cracked AES-192 encryption – a widely used variant of the Advanced Encryption Standard that protects data around the globe.

That’s something current supercomputers can’t even achieve.
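For reference, AES-192 simply means AES run with a 192-bit (24-byte) secret key. Here’s a minimal sketch, assuming the third-party Python cryptography package, of what encrypting data with it looks like. Recovering the plaintext without that key is the part no known computer can do.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = os.urandom(24)    # 24 bytes = 192 bits, hence "AES-192"
nonce = os.urandom(12)  # must be unique per message

ciphertext = AESGCM(key).encrypt(nonce, b"online banking session data", None)

# Decrypting requires the exact 192-bit key. Without it, the only known
# approach is trying keys one by one, roughly 2**192 possibilities.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"online banking session data"
```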

Apparently, the advanced capabilities of Q* had OpenAI researchers so concerned, they penned a letter to the board of directors warning Q* is a “threat to humanity.”

And this is likely one of the reasons Sam Altman was abruptly fired.

The question many are asking today is this: Should we pump the brakes on AI development to adequately regulate the industry… and protect us from these future risks?

The answer is… no. Absolutely not.

As it relates to encryption… it’s never going to be perfect. Some historians suggest that WWII ended, in part, because of the Allied forces’ ability to decrypt messages sent by the Germans.

A more recent example came after the San Bernardino terrorist attack in 2015. Apple declined an FBI request to help unlock an encrypted iPhone tied to the case. After public uproar – primarily aimed at Apple – the FBI was able to gain access to the phone without Apple’s assistance.

Online encryption has come a long way over the past 20 years. But we can’t stop innovation just because encryption isn’t perfect.

I promise you that if hackers want access to your online banking or health records, they already have access or can find a way in.

The better news is that the threat of quantum computing has been known for some time, and new encryption methods are being rolled out in 2024 in response.

Just recently, Google released quantum-resistant security key technology designed to withstand attacks from quantum computers that don’t even exist yet.

All the major tech companies have filed thousands of patents on AI-based cryptography, meaning AI will likely aid in creating new encryption methods.

Could Q* pose a threat to humanity that goes beyond breaking online encryption? Possibly.

But is it more of a threat than the nuclear bombs already pointed at humans right this second? Is a breakthrough in AI more dangerous than gain-of-function research?

Correct me if I’m wrong, but the government is responsible for some of the most terrifying threats to humanity.

And we want them to regulate it?

To echo the words of former President Ronald Reagan… “The nine most terrifying words in the English language are, ‘I’m from the government, and I’m here to help.’”

I’m Colin Tedards, and that was The Bleeding Edge for today. I’d love to hear what you think about AI and Q*. Is it an actual threat to humanity… or are our imaginations just running wild?

You can let us know at [email protected].