- This robotic hand can “feel”
- Welcome to Wendy’s, the AI will take your order…
- The first industry-wide AI boycott arrives
Yesterday we had a look at the status of 5G wireless technology and the continuing rollout and development of 5G around the world. We’ve now reached the mid-point of 5G wireless standards with the recent focus on what’s known as “5G Advanced.”
The standards for 5G Advanced, 3GPP Release 18, are expected to be finalized by next June. Well-defined standards open the way for hardware manufacturers to build products incorporating the technology, which enable services for both consumers and enterprises.
It may still feel early to raise the subject of what comes next – 6G. That’s natural considering that 5G is still a work in progress. But as we saw with 5G, early deployments of the technology happened in 2018 in the U.S. and 2019 in Europe.
New wireless technology has consistently been rolled out every decade, which means we can expect early deployments of 6G in the 2028/2029 time frame. That’s not very far away. And that’s why industry groups are already researching 6G wireless technology.
The current plan from standards body 3GPP is to begin work on Release 21 in early 2027, which will provide the foundational standards for 6G. This is what early 6G deployments will be built upon.
As with all generations of wireless technology, the critical improvements will center on higher speeds, lower latency, greater bandwidth, and new spectrum.
Here’s a rough overview of the differences between 5G and 6G:

|                   | 5G             | 6G          |
| ----------------- | -------------- | ----------- |
| Peak Data Rate    |                |             |
| Average Data Rate | 100 – 500 Mbps | 1 – 10 Gbps |
What does the above mean? Average data rates will increase by 10x over 5G at the low end – and by as much as 20x at the high end. And latency (delay) will drop by about 90%. Not that there would be much need for it, but with 6G, one could even take a phone call while flying (presuming the aircraft was close enough to the ground).
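Those multiples fall straight out of the average data rate ranges in the table. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope comparison of 5G vs. 6G average data rates,
# using only the figures from the table above.

MBPS_PER_GBPS = 1000

def improvement_factor(old_mbps: float, new_mbps: float) -> float:
    """How many times faster the new rate is versus the old rate."""
    return new_mbps / old_mbps

# Average data rates: 5G at 100-500 Mbps, 6G at 1-10 Gbps
low_end = improvement_factor(100, 1 * MBPS_PER_GBPS)    # 10x
high_end = improvement_factor(500, 10 * MBPS_PER_GBPS)  # 20x

print(f"Average data rate improvement: {low_end:.0f}x to {high_end:.0f}x")
```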
In addition to the above metrics, each wireless generation tends to use a new radio frequency spectrum. This is largely driven by the need for more bandwidth. The reality is that wireless data traffic has been growing exponentially since the birth of wireless technology. That’s a trend that won’t change. And the way to deal with that exponential growth is to access higher and higher frequencies that are not currently in use.
The big jump with 5G opened up frequency bands in the 24 GHz to 40 GHz range. These are referred to as the mmWave bands. For anyone who has experienced 5G on a Verizon or AT&T network deployed in this range, the experience is remarkable: one Gbps speeds and nearly instant response times due to the super low latency (delay). I personally researched and tested the earliest deployments of 5G mmWave around the U.S. years ago. It’s incredible and needs to be experienced to understand what 5G is capable of.
6G is going to make an even larger jump into the terahertz (THz) range, which is between 300 GHz and 10 THz. It’s in this range where the highest performing 6G networks will operate. But that will come with incredible expense.
What we’ve already seen with 5G is that mmWave deployments at the higher frequencies require mmWave base stations with useful coverage measured in hundreds of feet. In practice, that architecture means a base station every two or three city blocks in order to provide wide coverage.
With 6G technology operating in the terahertz range, a base station might provide a range of just 30–50 feet (10–15 meters). Transmission distances will be much shorter, so delivering such high speeds will require an extremely dense network. Not only is this extremely expensive, it comes with incredible infrastructure challenges.
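To get a feel for the density problem, here’s a rough sketch. The coverage radii are my own assumptions (about 150 meters to represent mmWave’s “hundreds of feet,” and 12 meters for 6G’s 30–50 feet), and the model ignores the cell overlap real networks need, so actual counts would be higher still:

```python
import math

def cells_needed(area_sq_m: float, cell_radius_m: float) -> int:
    """Rough count of base stations to blanket an area, assuming each
    station covers a simple circle of the given radius (ignores the
    overlap real deployments require)."""
    per_cell = math.pi * cell_radius_m ** 2
    return math.ceil(area_sq_m / per_cell)

square_km = 1_000_000  # one square kilometer, in square meters

# Assumed radii: ~150 m for 5G mmWave, ~12 m for 6G terahertz
print(cells_needed(square_km, 150))  # on the order of a dozen stations
print(cells_needed(square_km, 12))   # on the order of thousands
```

Even with these generous assumptions, the terahertz case needs over a hundred times as many stations to cover the same ground – which is the infrastructure challenge in a nutshell.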
In that way, 5G’s small cell infrastructure is a warmup for what’s to come with 6G.
But 6G isn’t just a technological or economic challenge. It has become both a competitive and a nation-state level issue over control of the technology. This is why early development of the standards and the intellectual property is so important.
There is a large battle already brewing between China and the U.S./Europe over 6G. Wireless technology has largely been controlled by Western companies in the past, which has been disadvantageous for a China-based company like Huawei, which has very close ties to its government.
The three giants in the wireless network infrastructure business are Nokia Networks, Ericsson, and Huawei. And naturally, they all want to have a strong influence on the standards as they build base stations, antennas, and a wide range of equipment necessary to operate wireless networks.
It may seem odd that there are no U.S.-based companies in the mix, but that is by design. Network infrastructure tends to be a lower margin business. Over the years, U.S.-based companies have been focused on building wireless technology and licensing that technology to all companies around the world. Licensing models are very high gross margin businesses.
The other area that U.S. companies have focused on is semiconductors for wireless technology. These are companies like Qualcomm, Skyworks, MACOM, Qorvo, Broadcom, and many others. They also support higher gross margins than infrastructure.
And that’s where the battle will be… over intellectual property and protecting market share for these higher gross margin businesses. The wildcard, of course, will be Taiwan. All of the above U.S.-based semiconductor companies produce chips through Taiwan Semiconductor (TSMC).
And if China controls Taiwan, it can control TSMC’s output… which leaves us with one simple question…
When will China make its move?
Fine motor skills for a robot…
One of the most challenging aspects of designing humanoid robots is their fine motor skills. Manipulating small objects of odd sizes and shapes is particularly difficult. That’s why we normally see robots lifting and carrying heavy boxes in a warehouse setting… It’s a much easier task to solve for.
But if we think about activities like catching a ball or carrying an egg or some other oddly shaped object – these are things humans can do easily. Yet it’s incredibly difficult to impart these same skills to robots. That’s because it’s been hard for robotic arms to “feel” an object.
And if we want to have humanoid robots assisting us in our homes, offices, and healthcare settings, they have to be able to perform these kinds of tasks.
Which is why I was excited to see that a group of researchers at Columbia University presented an interesting solution. They designed robotic fingers with a sensory system that allows robots to “feel” the shape of objects, enabling them to handle a wide range of objects correctly.
Here’s a look at the new technology:
Here we can see a robotic hand manipulating an odd-shaped rectangular object. Notice how all the fingers work together to handle this object correctly.
And in the top right-hand corner of the screen, we can see the “Tactile Reading.” It shows a red dot on the fingers. That’s a light-emitting diode (LED).
This system pulses the LEDs from inside the robotic fingers. That light reflects off the membrane covering the fingers and returns to sensors within them. This is what enables the robot to “feel” an object – it senses the object’s shape from the impressions the object makes in the fingers’ membrane.
What’s interesting here is that this system doesn’t use any computer vision. There are no “eyes” to see the shape of the object. The robot doesn’t “see” anything – its perception comes entirely from the LED-based tactile sensors.
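The sensing principle can be sketched in a few lines. Everything here – the sensor layout, baseline intensities, and calibration gain – is hypothetical on my part; it’s only meant to illustrate the idea of turning reflected-light readings into a rough contact map:

```python
# Hypothetical sketch of tactile shape sensing: light pulsed from internal
# LEDs reflects off the finger's membrane, and the deeper the membrane is
# pressed in at a point, the stronger the reflection back to the sensor
# behind it. All constants below are illustrative, not from the research.

def contact_depth_map(readings, baseline, gain=0.01):
    """Convert raw reflected-light intensities into estimated indentation
    depths (mm) at each sensor site. Positive depth = object pressing in."""
    return [max(0.0, (r - b) * gain) for r, b in zip(readings, baseline)]

def contact_detected(depths, threshold_mm=0.05):
    """Report which sensor sites register meaningful contact."""
    return [i for i, d in enumerate(depths) if d > threshold_mm]

# Hypothetical readings from a 5-sensor strip along one finger:
baseline = [100, 100, 100, 100, 100]  # no-contact reference intensities
readings = [102, 130, 155, 128, 101]  # an object pressing mid-finger

depths = contact_depth_map(readings, baseline)
print(contact_detected(depths))  # [1, 2, 3] -> contact in the middle
```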
This is impressive already. And that means that incorporating additional inputs in the form of video and computer vision will only improve performance. One of the hottest areas of development right now in artificial intelligence (AI) is multi-modal applications. This is when an AI can incorporate inputs from different data sets (different kinds of sensors), synthesize that data, and optimize for any desired goal.
The end result in this case will ultimately be advanced fine motor skills for robotic application. This will be great not just for factory or logistics applications, but also for home and healthcare settings.
The timing is great as well. As we’ve been tracking in The Bleeding Edge, companies like Boston Dynamics, Agility, 1X Technologies, and of course Tesla are all making great progress developing humanoid/bipedal robots.
Google’s new announcement is very uncharacteristic…
Surprisingly, Google just teamed up with fast food chain Wendy’s to customize an AI capable of taking orders at the drive-through window.
To do this, Google took its own generative AI model and trained it on all of Wendy’s food and drink products. This includes the official menu as well as customary lingo, abbreviations, and nicknames that customers sometimes use.
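One small piece of that training problem can be sketched as a lookup from customer lingo to canonical menu items. A real system would rely on the language model itself rather than a table, and the aliases below are illustrative examples, not Wendy’s actual data:

```python
# Toy sketch of normalizing casual drive-through lingo to official menu
# items before an order is logged. Aliases are invented for illustration.

ALIASES = {
    "jbc": "Jr. Bacon Cheeseburger",
    "frosty": "Chocolate Frosty",
    "number one": "Dave's Single Combo",
}

def normalize_item(spoken: str) -> str:
    """Resolve a spoken phrase to a canonical menu item, falling back to
    the phrase itself when no alias matches."""
    key = spoken.strip().lower()
    return ALIASES.get(key, spoken)

print(normalize_item("JBC"))          # Jr. Bacon Cheeseburger
print(normalize_item("large fries"))  # unchanged: large fries
```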
The two are now piloting this technology at a Wendy’s location in Columbus, OH. Here’s what it looks like:
The big screen on the left is where the AI takes customer orders. We can see it looks like a giant smartphone – both in shape and in the user interface.
The AI also speaks with customers in a very human-like voice. Chances are we wouldn’t know we were speaking with an AI. But if we see this kind of screen, we can be sure it’s an AI operating behind the scenes.
This could be a huge productivity boost for Wendy’s. The AI will be faster, more consistent, and more accurate than human workers.
Plus, the AI can be trained to consistently upsell customers. It can recommend additional items that tend to complement what somebody has already ordered. This is something that human workers find uncomfortable… so they don’t often do it.
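A simple version of that upsell logic can be sketched as co-occurrence counting over past orders. The order history below is made up, and a production system would be far more sophisticated, but it shows the basic mechanic:

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of complement-based upselling: count which items appear
# together in historical orders, then suggest the most frequent companion
# of whatever the customer just ordered. Order data is invented.

def build_cooccurrence(orders):
    """Count how often each unordered pair of items appears together."""
    pairs = Counter()
    for order in orders:
        for a, b in combinations(sorted(set(order)), 2):
            pairs[(a, b)] += 1
    return pairs

def suggest(item, pairs):
    """Most common companion item, or None if the item never co-occurs."""
    best, best_count = None, 0
    for (a, b), count in pairs.items():
        if item == a and count > best_count:
            best, best_count = b, count
        elif item == b and count > best_count:
            best, best_count = a, count
    return best

orders = [
    ["burger", "fries", "soda"],
    ["burger", "fries"],
    ["burger", "soda"],
    ["nuggets", "soda"],
]
pairs = build_cooccurrence(orders)
print(suggest("burger", pairs))  # fries
```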
I’m very interested to see how the pilot in Columbus goes. If any readers are able to try it out, please let me know about your experience. You can write to me right here.
If all goes well, Wendy’s plans to roll out this technology at all of its restaurants. I expect this will be deployed quickly.
That said, this type of thing is very uncharacteristic for Google.
The tech giant hasn’t done any enterprise licensing deals like this before. Instead, Google has kept its AI technology in-house up to this point.
That’s because Google’s motivation has always been to disseminate its technology far and wide within the consumer market. Its business model is to collect data on consumers, package that data, and then sell access to that data to advertisers. That’s where the vast majority of Google’s revenue comes from.
Even Google’s software solutions for enterprise applications (G Suite, now Google Workspace) comb through all of that information to collect useful data on users for the same purpose.
So while licensing AI technology to Wendy’s might generate a new revenue stream, it will still be tiny in comparison to Google’s advertising business. That raises the question – has Google turned over a new leaf? Or is this another attempt to somehow extract more data from people?
It’s certainly possible that enterprise licensing deals like this could be a move to diversify Google’s business. But I’m skeptical…
If we remember, Google owns the Waze navigational app. It’s the most popular app out there for getting driving directions. It’s a great app… and it’s free.
But for anyone with Waze installed on their phone, Google knows their location at all times. Using GPS technology, Google tracks the location of our phones 24 hours a day. And that means Google will know when we go to Wendy’s.
That being the case, it’s possible Google plans to capture data on our fast food orders and consumption patterns as well. This is data Google never had access to before… In that way, it is valuable for advertising purposes.
Either way, this technology is going to continue to be adopted.
McDonald’s is already working to automate its drive-through window through its acquisition of Apprente years ago. And fast food giant Checkers is doing the same with its partnership with Presto. And now Wendy’s is getting in on the act too.
The technology is ready for prime time. And now, the industry has a catalyst in the form of labor shortages in the fast food industry. That’s why we can expect to see rapid adoption of this technology in fast food and quick service restaurants.
The first organization to officially boycott generative AI…
The Writers Guild of America (WGA) is officially boycotting generative AI. In fact, its members are striking in protest. This is the first work stoppage in Hollywood in 15 years.
We had to know that some organizations would react this way to generative AI. But I’m still surprised at how quickly this is all happening.
Remember, OpenAI released the groundbreaking generative AI ChatGPT back in late November. This tech has only been around for about six months now.
So it’s all coming to a head incredibly fast. And I have to chuckle a little bit here – the WGA is acting in just the same manner as a modern-day Luddite.
The WGA is trying to force Hollywood studios to restrict the use of generative AI when it comes to writing scripts for movies and TV shows. And they are demanding that the studios not train generative AI on any existing scripts.
So the WGA wants Hollywood to completely reject using generative AI technology. This is exactly what the old Weavers Guild tried to do with the automated weaving loom back in the late 16th/early 17th century.
Of course, the WGA is making plenty of excuses as to why the industry should reject the technology. They are calling generative AI “plagiarism machines.” And they are claiming that they would be on the hook to clean up sloppy rough drafts produced by AI.
I don’t buy it. And I don’t think Hollywood will either.
The fact is, these AIs are great at enhancing human labor and productivity. They aren’t going to replace human writers. They will simply increase the amount of content human writers can produce.
Interestingly, the computer graphics part of the production industry has been quick to embrace the technology rather than try to ban it. This cohort has been excited to leverage the technology to improve productivity with especially time-consuming tasks. This is the way.
The reality is that Hollywood has a massive catalogue of scripts that go back decades. They could train a generative AI on this entire catalogue… and the AI would be capable of writing scripts in any style or time period. The technology could bring old content and franchises back to life and create even more opportunity in doing so.
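Here’s a sketch of what preparing that catalogue might look like – tagging each script excerpt with its era and genre so a model could later be prompted for a given style. The scripts, fields, and prompt format are invented for illustration, not any studio’s actual pipeline:

```python
# Hypothetical sketch of building style-conditioned fine-tuning data from
# a studio's script catalogue. All content here is made up.

def to_training_example(script_text: str, era: str, genre: str) -> dict:
    """Pair a script excerpt with a style-conditioned prompt so the model
    learns to write in that era's and genre's voice on request."""
    prompt = f"Write a scene in the style of a {era} {genre} film:"
    return {"prompt": prompt, "completion": script_text}

catalogue = [
    ("INT. DINER - NIGHT ...", "1970s", "crime drama"),
    ("EXT. SPACE STATION - DAY ...", "1990s", "sci-fi"),
]
dataset = [to_training_example(text, era, genre)
           for text, era, genre in catalogue]

print(dataset[0]["prompt"])
```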
And I should mention, the studios own the rights to these scripts – the WGA does not. That’s just how the industry works. The WGA can protest all they want, but they can’t legally stop Hollywood from training generative AI on past content.
So this is an interesting development. I’m very curious to see how the studios respond to the WGA’s strike. I’m also interested to see if any other organizations follow the WGA’s lead.
That said, the genie is out of the bottle here. It’s going to be impossible to keep AI out of Hollywood… and any other industry for that matter.
Editor, The Bleeding Edge