• How to speak without making a sound…
  • A private tutor for every child on Earth
  • Bloomberg applies generative AI to finance

Dear Reader,

How would you like a high-tech job that pays as much as $335,000 a year plus benefits, one that doesn’t even require a high-tech degree?

Considering that the work is in the field of artificial intelligence (AI), this doesn’t seem possible. It would be natural to assume that we’d need a computer science degree at a minimum. But we don’t.

The hot jobs that have emerged in the last few weeks are for prompt engineers. And the work is just like the job title: “engineering” prompts for artificial intelligence.

These jobs wouldn’t have existed in the absence of large language models (LLMs) like ChatGPT. But now that LLMs are here and widely available, it turns out that the quality of the prompts we feed an AI like ChatGPT has a material impact on the quality of the AI’s output.

Doing the work well doesn’t require any software programming skills. I suspect that the best prompt engineers will combine analytical and creative skills when it comes to crafting prompts.

Specificity and context are as important as how the words are strung together in a prompt. And subject matter expertise is also very useful. If we imagine company-specific LLMs that are trained on a repository of knowledge or industry-specific language, a deep understanding of that space will be critical to extracting the desired outputs from the AI.
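To make that concrete, here’s a minimal sketch of what “specificity and context” mean in practice. It assumes the openai Python package and an API key set in the environment; the model name and both prompts are purely illustrative.

```python
# A minimal sketch of how prompt specificity changes the output.
# Assumes: `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# A vague prompt leaves the model to guess at audience, scope, and format.
vague_prompt = "Write about electric vehicles."

# A specific prompt supplies context, constraints, and the desired structure.
specific_prompt = (
    "You are writing for retail investors with no engineering background. "
    "In 150 words, explain how lighter structural components could extend "
    "an electric vehicle's range, and end with one practical takeaway."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

Run side by side, the second prompt reliably produces output that is closer to what the requester actually wanted. That gap is exactly what prompt engineers are being paid to close.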

Prompt engineering job opportunities have popped up on all the major job boards. Time magazine even wrote an article a few days ago on how to get a job in prompt engineering.

As I’ve shared before, Midjourney is a text-to-image generator. The product does exactly what its name says. We simply enter a prompt, and the AI produces an image.

Source: Midjourney

I also wrote previously about a marketplace called PromptBase that sprang up months ago, enabling anyone to buy or sell a prompt that produces desired outputs.

PromptBase now specializes in text-to-image prompts and, because the output is visual, refers to prompt engineers as “AI artists” instead. Its latest announcement is a watermark feature that allows any AI artist to distort the images they display in the marketplace.

This is necessary because there are already image-to-text prompt generators. We can think of these as another AI designed to reverse engineer an image into a text prompt. Without a watermark, the image could be easily taken and used, or reverse engineered without any payment.

Watermarks have been used for years to protect creators’ work, and they’ve become even more important given the power of this technology.
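For anyone curious what this looks like in code, here’s a minimal sketch of overlaying a semi-transparent, tiled text watermark on an image with the Pillow library. The file names and watermark text are placeholders, and PromptBase’s actual watermarking is surely more sophisticated.

```python
# A minimal watermarking sketch using Pillow (pip install pillow).
# File names and watermark text are placeholders, not PromptBase's method.
from PIL import Image, ImageDraw

base = Image.open("preview.png").convert("RGBA")
overlay = Image.new("RGBA", base.size, (255, 255, 255, 0))
draw = ImageDraw.Draw(overlay)

# Tile semi-transparent text across the image so it can't simply be cropped out.
for y in range(0, base.size[1], 80):
    for x in range(0, base.size[0], 200):
        draw.text((x, y), "PREVIEW ONLY", fill=(255, 255, 255, 96))

watermarked = Image.alpha_composite(base, overlay)
watermarked.convert("RGB").save("preview_watermarked.jpg")
```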

There’s a fantastic window of opportunity right now for those who are interested in this kind of work. I encourage anyone who finds the idea attractive to get educated on the technology and start applying for job opportunities.

There are already many online courses in generative AI, ChatGPT, and prompt engineering that are inexpensive and easy to understand. I know that some of my subscribers have already taken some of these courses as they were kind enough to write in and tell me about it. Awesome!

And a ChatGPT Plus subscription is just $20 a month for guaranteed access to the AI and priority access to new features as they become available.

And for those who don’t mind occasionally waiting for access, ChatGPT is free to use when demand is low.

For those who are most interested in text-to-image generation, a generative AI like Stable Diffusion is also free to use. And I expect that Midjourney won’t be far behind as it has already made its product available for beta testing.
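For anyone who wants to try text-to-image generation on their own hardware, here’s a minimal sketch of running Stable Diffusion locally with Hugging Face’s diffusers library. The checkpoint name is one commonly used version, the prompt is illustrative, and a GPU is assumed.

```python
# A minimal sketch of running Stable Diffusion locally.
# Assumes: pip install diffusers transformers torch, and an NVIDIA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one commonly used checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # float16 inference assumes a GPU is available

# The prompt is the whole "user interface" -- this is prompt engineering in action.
image = pipe("a lighthouse on a rocky coast at sunset, oil painting").images[0]
image.save("lighthouse.png")
```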

Irrespective of whether you might want to pursue a career as a prompt engineer or AI artist, I really encourage everyone to experiment and play around with the technology. I guarantee that it will be an interesting experience, and who knows… It just might provide a spark that leads to an idea about how we can use AI in our lives.

The Tesla strategy that almost no one sees…

We haven’t had much occasion to check in on Tesla’s full self-driving (FSD) technology in a while. It’s still out there, “in the wild,” but there haven’t been any major announcements in the last few months.

Regular readers may remember that Tesla tore apart its FSD artificial intelligence and rebuilt it using neural networks ahead of the Version 11 release last year. We checked in on this development last December.

Well, CEO Elon Musk just announced that Teslas are now driving their owners about a million miles a day in Full Self-Driving mode using the latest FSD software. This is big news.

It’s important to note that this 1-million-miles-a-day statistic is different from the AutoPilot stats we’ve looked at before.

If we remember, Tesla crossed a major milestone with the AutoPilot program in 2021. That’s when it eclipsed 5 billion miles driven on AutoPilot.

That was certainly impressive at the time. But those were selective miles. Here’s what I mean…

At the time, Tesla owners were most likely to use AutoPilot on the highway and on roads that are easy to navigate. The software wasn’t quite ready to handle full self-driving from Point A to Point B yet. Tesla owners usually had to intervene at certain points to take control of the car when it found itself in a sticky situation.

FSD is different. The FSD software can now handle far more complex driving situations.

That’s what makes this such a big development. And to me, it raises an exciting question – is Tesla now gearing up for its master stroke?

What I’m referring to is what I believe is Tesla’s ultimate goal of creating a shared autonomous vehicle (SAV) network. This would be a fleet of robo-taxis that directly compete with companies like Uber.

We can imagine how this would work…

Whenever we aren’t using our Tesla, we could instruct it to “join the fleet.” The Tesla would then go out and provide ride-hailing services for consumers on its own, getting paid for each ride.

As the owner of the car, we would be entitled to a portion of the revenue our Tesla generated in this way. And then we could summon the car back to us at any time.

In other words, Tesla owners would be able to put their cars to work on their behalf whenever they didn’t need them for personal use.

Historically, cars have been depreciating assets. But with a SAV network? Our car literally becomes an income generator. It makes money for us when we aren’t using it.

And get this – as Tesla’s SAV network gains adoption, there’s a good chance that Teslas could go out and make enough money to pay for themselves.

It wouldn’t take much – $600 a month or so should do it. That’s enough to cover a monthly lease payment on a Model 3.

At that point, Teslas would essentially be free. Once people figured that out, orders for Teslas would go through the roof. Imagine being able to sign up for a lease and never having to make any out-of-pocket lease payments. The car pays for itself.
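Here’s a rough back-of-envelope version of that math. Every input below is a hypothetical assumption chosen for illustration, not a figure Tesla has published.

```python
# Back-of-envelope SAV economics. Every input is a hypothetical assumption
# for illustration, not a published Tesla figure.
revenue_per_mile = 1.00       # $ charged to the rider per mile (assumed)
tesla_platform_cut = 0.30     # share of revenue kept by the network (assumed)
fleet_miles_per_day = 40      # paid miles driven while "in the fleet" (assumed)
days_in_fleet_per_month = 20  # days per month the owner lends out the car (assumed)

owner_share = revenue_per_mile * (1 - tesla_platform_cut)
monthly_income = owner_share * fleet_miles_per_day * days_in_fleet_per_month

print(f"Owner income per month: ${monthly_income:,.0f}")
# With these assumptions: $0.70 * 40 * 20 = $560 a month,
# roughly the ~$600 needed to cover a Model 3 lease payment.
```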

This is something that no other car maker on the planet can do. Because no other company has the advanced FSD AI that Tesla has.

Needless to say, this would be absolutely transformational for Tesla… and for consumers. And I have to think Tesla’s stock would explode higher once the market figures this out.

There is still a lot of skepticism around Musk and his team’s ability to get FSD over the finish line. I don’t share that skepticism at all. The improvements with each release of FSD software over the last six months have been outstanding. 

And when Tesla rolls out the advanced FSD tech to its entire network, we’ll know the master stroke isn’t far behind.

It just happened – AI agents are now simulating human behavior…

Some fascinating research was just published out of the Stanford research laboratories. The work is being done in conjunction with Google.

They are experimenting with what they call “AI agents.” And this work has really stirred up the AI community.

The reason this research is causing such a buzz is because it’s centered around AI agents simulating human behavior. This is happening in a metaverse-like virtual environment.

In other words, the researchers created a digital world that mirrors a walled garden human community. And they deployed 25 different AI agents that can interact in this world.

Each agent is free to act in its own best interests. And of course, every agent’s actions impact the environment and the other AI agents around it.

Here’s a visual representation of what this virtual world looks like and how the AIs interact with one another:

Source: Stanford University

Here we can see a virtual world that represents a human community. There are homes, offices, schools, cafes, and parks. And we can see that the AI agents go about their lives and interact with each other as they see fit.
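Conceptually, the researchers describe each agent as running a loop: observe the environment, store memories, plan, and act, with a language model doing the reasoning. Here’s a heavily simplified sketch of that loop. The class, method, and variable names are mine, not the Stanford implementation, and the language model call is stubbed out.

```python
# A heavily simplified sketch of a generative agent's decision loop.
# Names are illustrative; llm() stands in for a real language model call.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Placeholder for a large language model call (e.g., an API request)."""
    return "go to the cafe and chat with a neighbor"

@dataclass
class Agent:
    name: str
    memories: list[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Store what the agent just saw or heard in its memory stream.
        self.memories.append(event)

    def plan(self) -> str:
        # Ask the language model what to do next, given recent memories.
        recent = "; ".join(self.memories[-5:])
        return llm(f"{self.name} remembers: {recent}. What should {self.name} do next?")

# One tick of the simulation: each agent observes, then decides how to act.
agents = [Agent("Ava"), Agent("Ben")]
for agent in agents:
    agent.observe("It is 9 a.m. and the cafe has just opened.")
    print(f"{agent.name}: {agent.plan()}")
```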

There are all kinds of implications here…

The first thing that comes to mind is gaming. Deploying these kinds of AI agents into games and metaverses will make them far more interactive, immersive, and realistic. These AI agents can play with and against human players in all kinds of games.

There are also some real-world applications.

Organizations could use this technology to simulate closed systems and analyze how people are likely to behave within them. This could inform what kind of incentive structures would produce the optimal results for the organization.

Then there’s the big-picture implication…

In Silicon Valley, there’s a popular theory that’s floated around for a few decades now – is the life we experience just a computer simulation? This is sometimes referred to as the simulation theory or simulation hypothesis.

Perhaps that sounds crazy to some of us. But there are people out there who deeply suspect it’s true. Musk really ruffled some feathers years ago at a conference when he said that there is only a one in a billion chance that the conference itself was actually reality. Said another way, statistically, the likelihood that we’re all in a simulation is dramatically higher than the likelihood that we’re living in a real world.

And well, here we are, using AI technology to simulate human life. We are now at the early stages of being able to do exactly what the simulation hypothesis predicts: that advanced, intelligent lifeforms would use artificial intelligence to simulate highly complex worlds and learn from those simulations.

This is ironic and comical on a certain level. But it also demonstrates that it is possible to run life-like simulations. After all, if it can be done with 25 AI agents, it can be done with 25 million AI agents or 25 billion for that matter. It’s just a matter of scaling up the computational power to run larger and larger simulations, just like the industry is now doing with large language models.

I suspect we’ll see a renewed interest in the old Silicon Valley simulation theory thanks to this research. Regardless, this is groundbreaking technology that will find utility in our “real” world.

The first big nanotechnology breakthrough in a decade…

If there’s one area of tech that we haven’t had much occasion to explore in The Bleeding Edge, it’s nanotechnology (nanotech). And there’s a reason for that.

Nanotech is the only area of high technology that hasn’t made any big advancements for decades now.

What’s surprising is that it was envisioned 40 or 50 years ago. And 30 years ago, some were already writing that practical applications of nanotech were just around the corner.

But it wasn’t. Building and fabricating nanotechnology turned out to be an extremely difficult problem to solve.

So I was thrilled to see some new research on the subject recently. It’s a new methodology for 3D-printing nanoscale objects. And this latest research could be transformational for the industry.

The big breakthrough came from a new technique of making nanoscale objects using what’s called two-photon lithography.

This approach relies on a liquid resin that only solidifies where it absorbs two photons of light at the same time. That’s the key to 3D-printing objects just a few dozen nanometers in size.

With this approach, we can 3D-print as much as 54 cubic millimeters per hour of nanoscale material. That may not sound like much, but to put it in context, there are 1 million nanometers in a millimeter.

So this new fabrication technique can produce millions upon millions of nanoscale objects every hour. That’s game-changing.
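Here’s a quick sanity check on that claim. The object size below is a hypothetical assumption (actual printed parts vary), but it shows why a 54 mm³-per-hour print rate translates into enormous part counts.

```python
# Sanity-checking the throughput claim. The object size is an assumption
# chosen for illustration; actual printed parts vary in size.
print_rate_mm3_per_hour = 54.0  # reported volumetric print rate
object_side_um = 10.0           # assume a 10-micrometer cube per object

object_side_mm = object_side_um / 1000.0   # 1 mm = 1,000 micrometers
object_volume_mm3 = object_side_mm ** 3    # 1e-6 mm^3 per object

objects_per_hour = print_rate_mm3_per_hour / object_volume_mm3
print(f"~{objects_per_hour:,.0f} objects per hour")  # ~54,000,000
```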

And there are some practical implications with this. To illustrate, let’s look at some of the nanoscale objects produced using this new approach:

Source: IEEE Spectrum

This graphic depicts some of the nanoscale objects we can create using two-photon lithography. As a reminder, these objects are unbelievably small, so small that they are invisible to the naked eye.

In the top row here we can see square mesh blocks manufactured at nanoscale. Naturally these are very light objects due to their size. But they’re also incredibly strong by design. That’s often a trade-off – but not so with this approach.

These objects could serve as the building blocks for new designs in those industries where both weight and structural integrity are critical. Electric vehicles (EVs) come to mind immediately.

These 3D-printed parts could be used to produce EVs that are just as strong as current models but weigh materially less. That, in turn, would allow the EVs to travel longer distances on a single charge.

Now let’s look at the lower right-hand corner of the above graphic.

There we can see tiny gears created at nanoscale. These could drive little nanorobots capable of movement.

Science fiction has long envisioned nanorobots being able to build, repair, and clean structures. Well, this is the tech that could make it all possible.

Simply put, nanorobots could make the maintenance of physical infrastructure far easier and more efficient. That alone would be transformational for society.

And there’s one more fascinating application with this…

Because these objects are 3D-printed at a scale smaller than the wavelengths of visible light, it’s theoretically possible for structures built from them to both manipulate radiation and bend light.

Why’s that important? Well, this is the secret to invisibility.

Invisibility isn’t about making objects disappear. It’s about bending light.

I’m sure many readers are familiar with the invisibility cloak in the popular Harry Potter series.

This kind of technology could make such a cloak possible. These nanoscale objects could bend light in such a way that they mask whatever is behind them.

So this is by far the most exciting nanotech development I’ve seen in many years.

It’s just scientific research in a lab right now… There aren’t any commercial applications yet. But this is exactly how all of the major scientific breakthroughs get started.

Regards,

Jeff Brown
Editor, The Bleeding Edge