Last Wednesday was Tesla’s Investor Day, which was billed as “Master Plan 3.” I was excited to hear what Musk and his team had to say under a title like that. The subtitle was equally interesting: “sustainable energy for all of Earth.”
Who wouldn’t want that? And if anyone might have a detailed vision of how that might happen, Musk would be near the top of my list. He has proven to be very pragmatic about technology related to clean energy.
He even pointed out that more than 80% of global energy production comes from fossil fuels (around 84%). Said another way, the world is currently fueling electric vehicles with electricity produced by burning fossil fuels. That’s the kind of honest discussion that we need to have if we’re going to migrate to clean energy.
I was excited because I expected that Musk would reveal some new kind of revolutionary battery technology, or perhaps reveal the importance of a policy decision on nuclear fusion in order to ensure 100% clean, sustainable electricity production that’s abundant in every corner of the Earth.
Unfortunately, the presentation was a huge disappointment.
“The Plan,” as outlined, is simply to build 240 terawatt-hours (TWh) of electricity storage and install solar panels on 0.2% of the world’s land area, and thus we’ll be fine. For perspective, 1 terawatt-hour is the energy delivered by one trillion watts sustained for a full hour, an amount large enough to supply many countries.
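To make that unit concrete, here's a quick back-of-envelope sketch. The household consumption figure is my own assumption (roughly 10,000 kWh per year, in line with typical U.S. usage), not a number from the presentation:

```python
# Back-of-envelope: what 1 terawatt-hour (TWh) of storage represents.
TWH_IN_KWH = 1e9                 # 1 TWh = 1e12 Wh = 1e9 kWh
HOUSEHOLD_KWH_PER_YEAR = 10_000  # assumed annual consumption of one household

households_per_twh = TWH_IN_KWH / HOUSEHOLD_KWH_PER_YEAR
print(f"1 TWh ~ a year of electricity for {households_per_twh:,.0f} households")
print(f"240 TWh ~ {240 * households_per_twh:,.0f} household-years of electricity")
```

Under that assumption, a single TWh covers about 100,000 households for a year, which is why the unit only shows up when discussing national or global energy systems.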
I’m afraid it’s not that simple, and “the plan” is remarkably short on details.
Tesla’s “Master Plan 3”
There are of course certain parts of the world that are great for solar energy, particularly areas that have consistent and reliable sunlight and limited precipitation and cloud cover. Desert regions are a perfect example. They’re a great location for solar panels, but usually not located anywhere near the world’s 7 billion-plus population.
Electricity distribution is just as important as production, and the reality is that the majority of the world’s population doesn’t live in areas that are optimal for electricity production via solar panels. This problem was obviously skipped over for a reason…
And an even bigger issue with the presentation was a comment that “a sustainable energy economy involves less mineral extraction.” This one surprised me considering how obviously false the statement was.
Let’s dig into why this statement is so wrong.
25,000 pounds of lithium brine are required to produce the 25 pounds of pure lithium needed for a single electric vehicle (EV) battery
30,000 pounds of cobalt ore are required for a single EV battery
6,000 pounds of nickel ore are needed to produce enough nickel for the battery
1,000 pounds of graphite ore per battery
25,000 pounds of copper ore
In total, about 90,000 pounds of ore are required to produce an EV battery for a single car. But it gets worse.
In order to get at all that ore, somewhere between 3 and 20 tons of earth need to be removed for every ton of ore extracted. Here’s a picture of what that looks like:
Mining for Battery Metals
Source: Wikimedia Commons
This material is called overburden: the trees, grass, rocks, and natural habitat that need to be removed from the surface of the Earth before the mineral extraction process can begin.
And in order to extract 90,000 pounds of ore, somewhere between 200,000 and 1,500,000 pounds of earth needs to be removed… for a single electric vehicle. Just imagine the environmental impact for 100 million EVs, or 500 million EVs!
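We can sanity-check this arithmetic. The sketch below reads the article's "3 to 20 tons of earth" as a per-ton-of-ore ratio (my interpretation), which lands in the same ballpark as the 200,000 to 1,500,000 pound range quoted above:

```python
# Reproduce the per-battery ore and overburden arithmetic from the article.
ore_lbs = {
    "lithium brine": 25_000,
    "cobalt ore": 30_000,
    "nickel ore": 6_000,
    "graphite ore": 1_000,
    "copper ore": 25_000,
}
total_ore_lbs = sum(ore_lbs.values())   # sums to 87,000, i.e. "about 90,000"
total_ore_tons = total_ore_lbs / 2_000  # short tons

# Assumed reading: 3 to 20 tons of earth moved per ton of ore extracted
earth_lbs_low = total_ore_tons * 3 * 2_000
earth_lbs_high = total_ore_tons * 20 * 2_000

print(f"Total ore per battery: {total_ore_lbs:,} lbs")
print(f"Earth moved: {earth_lbs_low:,.0f} to {earth_lbs_high:,.0f} lbs")
```

The exact sum is 87,000 pounds of ore, and the implied earth-moved range runs from roughly a quarter million to nearly two million pounds per vehicle.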
It’s also worth mentioning that the massive mining equipment, haulers, drills, and so on all run on petroleum-based fuels. And the electricity used to manufacture nearly all EV batteries is itself generated from fossil fuels, primarily coal.
In fact, I saw one estimate that producing a single EV battery capable of storing the energy equivalent of one barrel of oil consumes the equivalent of 100 barrels of oil.
Does that sound green and clean? I’ll let each reader decide; but from my perspective that’s not green at all.
That’s why I’m so disappointed with the Tesla presentation. The “solution” to just add 240 TWh of electricity storage isn’t a solution at all, and it’s not a green process. Tesla may very well profit from it by selling battery storage and EVs, but it will come at unbelievable environmental cost.
To be very clear, a realistic transition to a world free of fossil fuels cannot happen without nuclear fission, or preferably nuclear fusion, technology. Failing to address these issues openly sets the world back on solving the real problems.
But perhaps this view is missing the point of the presentation.
I couldn’t help thinking that the presentation was created for political purposes. Musk has been somewhat antagonistic to the current U.S. administration which has largely overlooked Tesla and its contributions in the EV and solar industries.
The Master Plan 3 felt more like an appeasement, a way to get into the good graces of government in hopes of benefiting from the Inflation Reduction Act (i.e. Green New Deal) rather than being an honest and practical plan for clean energy.
ChatGPT Turbo is here…
Last week we had a look at OpenAI’s latest development.
For the sake of new readers, ChatGPT is an incredibly powerful generative artificial intelligence (AI). It’s able to produce content and write software code upon demand. And it’s capable of having intelligent conversations with humans.
And last week OpenAI made ChatGPT available to the world through its application programming interface (API). This enables any company or even government agency to integrate ChatGPT into its own system.
What’s more, we just learned that ChatGPT’s computational costs have come down by more than 95% in the last twelve months.
Now it costs just $0.002 per 1,000 tokens, roughly 750 words of output. OpenAI calls this faster, cheaper model GPT-3.5 Turbo.
To put this dramatic drop in computational costs in perspective, it would now cost less than $3 for ChatGPT to process the entire series of Harry Potter books, which spans more than one million words, and become an absolute expert on everything in them.
And if we think about running the AI over something like the Encyclopedia Britannica or even Wikipedia, that would cost on the order of $10,000.
We’re talking about having the AI comb through 4.2 billion words that largely represent the accumulated store of human knowledge. All for a few thousand dollars.
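The cost figures above follow directly from the per-token price. Here's a minimal sketch of that arithmetic, taking the article's word counts at face value:

```python
# Rough cost math behind the GPT-3.5 Turbo figures quoted above.
# Published price at launch: $0.002 per 1,000 tokens, roughly 750 words.
PRICE_PER_750_WORDS = 0.002

def processing_cost(words: int) -> float:
    """Approximate API cost to run `words` of text through the model."""
    return words / 750 * PRICE_PER_750_WORDS

harry_potter_words = 1_000_000   # the series spans just over a million words
wikipedia_words = 4_200_000_000  # ~4.2 billion words, per the article

print(f"Harry Potter series: ${processing_cost(harry_potter_words):,.2f}")
print(f"Wikipedia:           ${processing_cost(wikipedia_words):,.2f}")
```

The Harry Potter corpus comes out to under $3, and all of Wikipedia to a bit over $10,000, which is the point: the entire accumulated store of human knowledge for the price of a used car.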
In other words, the barrier to entry has basically been removed. Access to this bleeding edge technology is completely democratized. With the new API and the dramatic cost reductions, we’re going to see an explosion of enterprises rushing to adopt this technology and apply it for their own purposes.
For example, a university could train the AI on its entire body of knowledge in the field of biology… or any other subject. Suddenly that AI would become the world’s foremost tutor in its particular area of expertise, with perfect recall.
That’s how powerful this development is. And it’s going to radically reshape our world in a very short period of time.
A new competitor for Tesla’s Optimus…
Big news on the robotics front – a new AI-powered robot just came out of stealth. And it’s quite promising…
The start-up behind this one is called Figure. And the company is loaded with talent. The team is composed of people who previously worked at Boston Dynamics, Tesla, Apple, Cruise, and even X, Alphabet’s moonshot research division at Google.
Figure has been in stealth mode. Very few knew what they were up to. But now we do… here’s the alpha version of the robot they designed:
Figure’s Bipedal Robot
The robot is five feet, six inches tall. It weighs about sixty kilograms (kg), equivalent to about 132 pounds. And it can carry up to twenty kilograms, which is about forty-four pounds.
I like Figure’s approach. It is coming out of stealth with a prototype “alpha” product to show. This was possible because the founder, Brett Adcock, provided the company with the first $100 million in funding. Adcock is also the founder and co-CEO of Archer Aviation.
Having a prototype like this makes it a lot easier to go out and raise additional capital to grow and produce a “beta” product, which would be a pre-commercial version. I expect we’ll be hearing about a major funding round within the next few months.
While Figure’s humanoid robot is being designed to be general purpose, the robot will be initially geared towards enterprise applications. Figure hopes that it can help companies shore up their supply chain and logistics issues by offsetting the labor shortages we’ve seen in recent years.
Once again, we’re seeing more evidence that intelligent robotic workers are closer than many realize. Boston Dynamics is in the field with Atlas. Tesla has Optimus. Agility Robotics has its bipedal robot. Now, Figure is throwing its hat in the ring.
We’re witnessing the beginning of a massive, multi-decade trend. One of Musk’s more interesting comments at the Investor Day was that, in time, there will be more robots on Earth than humans.
That’s hard to imagine, but likely true.
And the implications to the global economy will be immense. Needless to say, there will be incredible investment opportunities along the way.
This AI can now read our minds…
We’ll wrap up with some absolutely mind-blowing research out of Japan.

A team of researchers has been applying a form of AI called diffusion models to functional MRI (fMRI) imaging technology. And I kid you not, they have developed an AI that can read a human mind.
Here’s how it works…
Diffusion models are what empower generative AIs like Stability AI’s Stable Diffusion. And functional MRI technology is used to monitor human brain activity in real time.
Here’s how the research experiment worked. The research team presented a person with an image while they recorded their brain activity with an fMRI scan. The fMRI captured the changes in neural activity while the person viewed the image… and the team fed that data into their diffusion model. And guess what happened?
The AI analyzed the changes in neural activity and reconstructed the image that the person had seen entirely from the imaging of neural activity. The AI had never actually “seen” the image.
Here are the results…
Source: Osaka University, Japan
The top row of images is what was shown to the test subject, not to the AI. And the bottom images are what the AI produced based on the neural activity measurements from the fMRI scans. It’s remarkable.
The diffusion model proved capable of reading the test subject’s mind just from analyzing their brain activity. And the reconstructed images are clear enough for us to understand what the images represent.
And think about the implications here…
Obviously, law enforcement would love technology like this. This is far superior to the antiquated technology underpinning the old lie detector tests. Subjects could simply be questioned about a crime, an fMRI would capture the neural activity, and the AI would reconstruct what the subject saw.
While it’s easy to simply say nothing when asked a question, it is nearly impossible not to think about something, or to have images and memories flash through our minds, when prompted with related questions.

It raises an interesting question: will warrants be issued to scan a brain?
This tech could also be game-changing for people who have lost their ability to communicate due to strokes or spinal cord injuries. Theoretically, they could communicate their thoughts through an AI just by envisioning images in their mind.
Of course, there are other implications we could consider. Sometimes, it seems like our own minds are the last truly private space we have. But this research suggests that might not be the case for much longer.
If any readers are worried about an AI that can read their thoughts, I certainly don’t blame them. As I often say, responsibly deploying advanced technologies like this will be one of the biggest challenges humanity will face in the years ahead.
We’ll be following this closely.
Editor, The Bleeding Edge