The “White Whale” of Materials Science

Jeff Brown | Mar 29, 2023 | The Bleeding Edge | 11 min read
  • Are we ready for GPT-5?
  • The “white whale” of material science
  • Microsoft’s Bing is back from the dead

Dear Reader,

With all of the exciting and radical advancements in generative artificial intelligence (AI) that we’ve been learning about in The Bleeding Edge over the last few months, I’m sure many of us have been wondering, or even worrying, about the impact on human jobs.

That’s perfectly reasonable, especially considering how quickly these AI products are becoming capable of performing human-level tasks in a matter of seconds.

A few days ago, a team that includes researchers from OpenAI and the University of Pennsylvania published new research on this very topic. And the results will probably come as a surprise.

The approach was straightforward: analyze occupations and measure how well the tasks in those jobs align with the capabilities of large language models (LLMs) like OpenAI’s GPT-4 or Anthropic’s Claude.

The research found that about 80% of the U.S. workforce could see at least 10% of their tasks affected by this latest generation of AI. The share of the workforce is quite large, but at just 10% of tasks, the implied impact on any individual job is limited.

But the research also found that about 19% of the labor force could see at least 50% of their tasks affected by these LLMs. That’s a big number. And it would radically change the labor landscape, not just in the U.S. but globally.

At an even higher level, the research suggested that about 15% of all tasks could be completed much faster using this new technology. And when coupled with additional software designed to help humans better leverage the AI, anywhere between 47% and 56% of all tasks could be accomplished faster.

No matter how we look at it, this technology is going to radically change how work gets done and how quickly it happens.

Some of the jobs that were specifically identified as having the highest exposure to being replaced by AI are:

  • Mathematicians

  • Interpreters and translators

  • Tax preparers

  • Quantitative analysts

  • Writers and authors

  • Web and digital interface designers

  • Court reporters

  • Proofreaders and copy markers

  • Accountants and auditors

  • Journalists

  • Legal secretaries and administrative assistants

  • Clinical data managers

It’s not a comprehensive list, but it certainly gives some perspective on the types of jobs that are most at risk from this revolutionary technology.

Clearly, the world is in for disruption unlike anything it has ever seen. But it’s not all bad news. This technology will deliver some incredible productivity improvements, just like revolutionary technologies have done in the past.

If workers spend 50% less time on the tasks that generative AI improves, the U.S. economy could experience an increase in GDP of 4% by one estimate. That would be equivalent to about an additional $1 trillion in economic activity.

At a corporate tax rate of 21%, that would amount to an additional $210 billion in tax revenue for the U.S. government. That’s a big number, equivalent to about 15% of the $1.4 trillion fiscal deficit being run by the current administration in 2023.
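For anyone who wants to check those figures, here’s a quick back-of-envelope calculation. The roughly $25 trillion U.S. GDP figure is my own assumption (approximately the 2022 level); everything else follows the estimates above:

```python
# Back-of-envelope check of the figures above.
us_gdp = 25e12                # ~2022 U.S. GDP in dollars (assumption)
gdp_boost = 0.04 * us_gdp     # a 4% boost -> ~$1 trillion
extra_tax = 0.21 * gdp_boost  # taxed at the 21% corporate rate
deficit = 1.4e12              # ~2023 fiscal deficit cited above

print(f"GDP boost: ${gdp_boost / 1e12:.1f} trillion")        # ~$1.0 trillion
print(f"Extra tax revenue: ${extra_tax / 1e9:.0f} billion")  # ~$210 billion
print(f"Share of the deficit: {extra_tax / deficit:.0%}")    # ~15%
```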

Historically, it hasn’t mattered whether it was electricity, textile machines, telecommunications systems, automobiles, or semiconductors… They all led to an explosion in economic growth and improved quality of life.

The challenge this time around will be the speed at which the disruption takes place. In the past, it took years if not decades for widespread adoption of a new revolutionary technology. In the case of artificial intelligence, the change will be measured in years if not months.

What can we all do about it? There is no easy solution to managing a disruption that happens this quickly. But if you’re reading The Bleeding Edge, you’re already doing something about it. We’re educating ourselves, learning about the major players in the industry, and learning about the tools and applications for this incredible technology. And hopefully this is stimulating ideas about how we can use this technology in our current roles.

And for anyone looking to take the next step, there are all sorts of free and inexpensive short courses on “prompt engineering” that are now popping up on online education sites like Udemy.

I really encourage readers to sign up for a few of these courses and spend a few hours gaining an understanding of how this technology can be used, and for what applications. Just knowing how to use these tools will give us all a leg up in our personal businesses, at school, or in our workplaces.

On any online education site, just search for courses on “prompt engineering” and you’ll find a list of options. And for those of you who take the next step and dig deeper with study, I’d love to hear about your experience here.
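To give a taste of what these courses cover, here’s a minimal sketch of prompt engineering using OpenAI’s Python library as it worked in early 2023. The prompts, model choice, and placeholder API key are my own illustrative assumptions, not course material:

```python
# A minimal sketch of prompt engineering: the same question asked vaguely,
# then with a role, audience, length, and format spelled out.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: replace with your own key

vague = "Tell me about superconductors."
engineered = (
    "You are a patient physics tutor. In 150 words or fewer, explain what "
    "a superconductor is to a high-school student, then give one real-world "
    "application. Use plain language and no equations."
)

for prompt in (vague, engineered):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response["choices"][0]["message"]["content"])
    print("---")
```

The engineered prompt reliably produces a focused, usable answer where the vague one rambles. That difference is essentially what these courses teach.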

GPT-4 foreshadows what’s to come…

About two weeks ago, OpenAI released the next generation of its generative AI, GPT-4… just as we predicted it would earlier this year. GPT-4 is the large language model that follows GPT-3.5, the model behind ChatGPT.

Most didn’t think that would happen so quickly. After all, GPT-3.5 and ChatGPT, which is built on GPT-3.5, were only released in December.

We’ve already been covering much of what GPT-4 is capable of in recent weeks. But it’s worth taking a step back and looking at just how good GPT-4 really is. Here are just a few things that GPT-4 has been used for in the last two weeks:

  • It has been used to create arcade games that can be played immediately after they’re created.

  • It can write software code for just about any application.

  • It can create music in any genre we like. We just feed it the words we want.

  • It has been used to create new browser extensions.

  • It can write and send e-mails for its users.

  • It can generate new business ideas.

  • It has been used to automatically configure and optimize software programs.

  • GPT-4 can now integrate with hundreds of software applications and use that software to perform tasks for the user.

  • It was used to hire a human worker to perform specific tasks.

  • GPT-4 can invent a new language.

  • It can now be used to hack computers.

  • It can design and code new smartphone applications.

  • It can design and create a website from a hand-drawn image.

And the thing to keep in mind is that these are just a few examples, and they were all done in a matter of seconds. And if we remember, the old version of ChatGPT was even able to pass both the U.S. Medical Licensing Exam (USMLE) and a university-level law exam.

What’s more, this has all happened since ChatGPT was released in December. It’s only been around for about three and a half months now.

And this month, OpenAI released GPT-4, and ChatGPT received an upgrade. The CEO of OpenAI had previously downplayed how good GPT-4 would be compared to the previous version, but the opposite turned out to be true. GPT-4 is a major, very material improvement.

As a simple example, GPT-4 passed a bar exam with a score in the top 10% of human test takers. Of course, GPT-3.5 (the model behind ChatGPT) also passed the bar exam… but its score was in the bottom 10% of test takers.

So the AI went from the bottom to the top 10% in just over three months. That’s an incredible improvement.

And it doesn’t stop there.

OpenAI created a great chart that demonstrates the power of GPT-4 compared to GPT-3.5. Here it is:

[Chart comparing GPT-4 and GPT-3.5 performance on academic tests. Source: OpenAI]

This chart compares GPT-4’s performance to that of GPT-3.5 across a wide range of academic tests. And we can clearly see that GPT-4 performed dramatically better on most tests.

In fact, GPT-4 more than doubled the performance of GPT-3.5 in many cases. That’s evidenced by the green bars being so much taller than the blue bars.

OpenAI also put together a table showing how the two AIs stacked up on several key machine-learning benchmarks, which are designed to test the performance of different AIs.

[Table of machine-learning benchmark results for GPT-4 and GPT-3.5. Source: OpenAI]

Again, we can see that GPT-4 dramatically outperformed GPT-3.5 on standard machine-learning benchmarks. GPT-4 is significantly better across the board.

Two things jump out at me here. The most obvious is simply the magnitude of improvement in just three and a half months.

But perhaps more important is what this foreshadows…

GPT-4 is what’s called a multi-modal AI. That means it’s not just a generative AI that takes in text and produces text as output. GPT-4 can also process images.

For example, somebody drew a picture of a website and fed it to GPT-4. The AI was able to create a functional website in line with the hand-drawn image in a matter of seconds. That’s multi-modal capability.

On Monday, I shared how generative artificial intelligence is progressing faster than “Moore’s Law,” the gold standard for exponential growth up until this point. This is a perfect proof point of that.

So this raises the question: What does GPT-5 look like?

If I had to guess, GPT-5 will be enabled with computer vision. This is a technology that allows an AI to “see” the real world in real time. And that would enable something absolutely transformational…

Equipped with computer vision, GPT-5 could be incorporated into hardware. That means devices like drones and various robots.

The moment that happens, the drones and robots will be able to process what they are seeing in the real world very much like we humans do. And they will be able to make decisions based on what they are seeing in order to accomplish the tasks that they are given.

At that point, we’ll have highly intelligent robots that can “think” and act much like we do. And then everything about our world will change.

To me, this is the natural progression for generative AI.

When we see OpenAI ink a partnership with a prominent robotics company, we’ll know that “thinking” machines are right around the corner.

Of course, this is also going to introduce some important questions for society. ChatGPT was only launched last December. And we’re already looking ahead to GPT-5, an AI that can basically “think” as well as any human.

The changes we’ll see in the months ahead will be dizzying. And we’ll need to stay on top of these developments as we adjust to this new world.

A room-temperature superconductor?!?

We may have just seen one of the biggest breakthroughs in materials science history, one that could be revolutionary for a wide range of applications. The research was recently published by a team at the University of Rochester.

It appears to have demonstrated a room-temperature superconductor. Here it is:

[Image of the superconducting material. Source: University of Rochester]

Superconductors operating at room temperature have been a “white whale” of materials science. The industry has been chasing this for decades.

A superconductor is a material that allows electricity to flow through it with no resistance whatsoever. That enables perfect efficiency: none of the electricity produced is wasted as heat.

As simple as that sounds, this is a major challenge for us today.

I doubt many realize this, but our power lines, through which we receive our electricity, lose anywhere from 7% to 15% of the electricity en route from a distribution station to the end user. That’s due to resistance in the lines.

This is a massive inefficiency. And it means that it takes more carbon-based resources to produce baseload power for the electrical grid than it would if we had superconducting material that worked at normal temperatures.
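To see why resistance matters so much, here’s a simplified, single-conductor sketch of resistive line loss (P_loss = I²R). Every line parameter below is an illustrative assumption of mine; real grids are three-phase, and the 7% to 15% figure above covers the whole transmission and distribution system:

```python
# Resistive loss on a transmission line: P_loss = I^2 * R.
power = 500e6     # 500 MW delivered (assumption)
voltage = 345e3   # 345 kV transmission voltage (assumption)
resistance = 6.0  # ~200 km of line at ~0.03 ohm/km (assumption)

current = power / voltage       # I = P / V, simplified single-phase view
loss = current**2 * resistance  # ~12.6 MW lost as heat, ~2.5% of the power

print(f"Loss: {loss / 1e6:.1f} MW ({loss / power:.1%} of delivered power)")

# With a room-temperature superconductor, R -> 0, so I^2 * R -> 0 as well.
print(f"Superconducting loss: {current**2 * 0.0:.1f} W")
```

Raising the voltage or lowering the resistance cuts the loss. A superconductor removes it entirely.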

Scientists have demonstrated other superconducting materials before. But these materials typically required incredibly cold temperatures (around -70 degrees Celsius) and enormous pressure (155 gigapascals, roughly the pressure halfway to the center of the Earth) to work as superconductors. These are conditions that simply couldn’t exist in real-world applications.

And that’s where this new research stands out. The researchers demonstrated a new material that is a superconductor at room temperature. And it can function at a comparatively “reasonable” pressure of 1 gigapascal, roughly ten times the pressure at the lowest point of the Mariana Trench.

The material consists of hydrogen, nitrogen, and a rare earth metal called lutetium. Lutetium is incredibly expensive and hard to produce, as its concentration in minerals is about 0.0001%. It costs about $10,000 per kilogram because it’s so scarce and difficult to mine.

So this is something that could radically transform everything with regard to power production and distribution. This material could also power new forms of transportation based on magnetic levitation… and all kinds of other applications.

And here’s the best part: a University of Rochester researcher has already spun up a company to develop this technology. It’s called Unearthly Materials.

The company has already raised about $20 million to get off the ground. And one of the backers is none other than OpenAI CEO Sam Altman. That isn’t a surprise, as superconducting materials have applications in both semiconductors and computing systems. A radical improvement using superconducting materials could not only deliver massive savings through reduced electricity consumption, but also improve the performance of the computing systems required to run complex artificial intelligence.

Microsoft’s Bing hits a first-ever milestone…

Speaking of radical transformation: Microsoft’s Bing has made a remarkable comeback.

As we discussed last month, Microsoft incorporated OpenAI’s ChatGPT into its Bing search engine. This was an effort to revitalize a search engine that had dramatically lagged behind Google Search. As of December 2022, Google Search claimed roughly 87% of the U.S. search market.

Well, it appears to be working.

A few days ago, Bing surpassed 100 million daily active users (DAUs) for the very first time. It has never been more popular. This is impressive, but it’s still a long way from Google’s more than 1 billion DAUs.

Microsoft reports that about one-third of these daily users are completely new to Bing. They weren’t using Bing prior to the ChatGPT integration. Clearly, the AI is the big draw.

Remember, this was all made possible by Microsoft’s series of venture investments in OpenAI over the last few years. As I’ve reflected on these moves over the last few weeks, I have to say that this was a brilliant move by Microsoft.

Had Microsoft attempted to acquire OpenAI outright, that deal no doubt would have come under regulatory scrutiny. Microsoft is just too big a company, with tentacles in too many places, to acquire the world’s most prominent generative AI company. There would have been antitrust concerns, no question about it.

But by simply making a series of investments in OpenAI, without any clarity around its ownership level, Microsoft has avoided antitrust scrutiny entirely, especially because it’s now licensing the technology from OpenAI… which is normal activity in the tech industry.

So Microsoft managed to fly under the radar completely.

Yet, the company gets all the same economic benefits that a full-blown acquisition would have provided. We can see that very clearly with Bing’s success this year and OpenAI’s dramatically rising valuation.

I have to tip my hat to Microsoft on this one. It’s not very often that a stodgy old incumbent pulls something like this off. Well played.

And suddenly Microsoft is worth tracking as a potential investment target.

I wouldn’t recommend the company at current levels. It is trading at a high valuation right now. But another strong pullback in the markets could provide a fantastic entry point.

Regards,

Jeff Brown
Editor, The Bleeding Edge

