- This is a big step forward for AI…
- This new feature could be transformative for Twitter…
- Soon people may wear Facebook glasses…
Yesterday, we looked at the dramatic drop in new COVID-19 cases – 82% to be exact – since January 8. And we also looked at the surprising collapse in daily COVID-19 tests, which dropped by about 1 million tests a day. That’s roughly a 42% drop.
This tells us that the rapid fall in COVID-19 testing is not 100% responsible for the incredible decline in new COVID-19 cases. As I analyzed these numbers, I knew something else had to be going on with the testing itself.
As a reminder, the use of PCR tests to determine if someone is “positive” or “negative” and determined to be infectious – and thus required to be quarantined – has been the standard diagnostic practice throughout the pandemic. But there’s a major problem… PCR tests were never designed for this purpose.
Here is what Kary Mullis, the inventor of the PCR test, had to say about it:
“If you do it well, you can find almost anything in anybody… If you can amplify one single molecule up to something that you can really measure, which PCR can do, then there’s just very few molecules that you don’t have at least one single one of them in your body.”
His comments perfectly capture the absurdity of how PCR tests are being used today to determine a “positive” diagnosis or a new case of COVID-19. The tests are run at such a high cycle threshold (a sensitivity setting) that they will return “positive” results even on dead viral fragments left in our systems from months ago.
What’s equally incredible is that scientific research has already determined the settings appropriate for detecting a likely live viral load, as opposed to remnants of a dead virus. Yet the medical community is using a sensitivity setting that amplifies the genetic material more than a million times beyond what was shown to be appropriate.
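To put some numbers on that: PCR roughly doubles the target genetic material each cycle, so amplification grows as two to the power of the cycle threshold. Here is a quick back-of-the-envelope sketch – the lower threshold below is purely illustrative, chosen to show how a 20-cycle gap produces the “million times” figure, not taken from any lab protocol:

```python
# PCR amplification grows roughly as 2**cycles under ideal doubling.
def amplification(ct: int) -> int:
    """Approximate fold-amplification after `ct` PCR cycles."""
    return 2 ** ct

high_ct = 40  # the high threshold the text says has been standard in the U.S.
low_ct = 20   # hypothetical lower threshold, for illustration only

ratio = amplification(high_ct) // amplification(low_ct)
print(f"Ct {high_ct} amplifies {ratio:,}x more than Ct {low_ct}")
# 2**20 = 1,048,576 — roughly the "million times" figure
```

Each extra cycle doubles the material, so small changes in the threshold make enormous differences in what the test can pick up.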
I had heard rumors that the cycle threshold setting was being decreased in the last couple of months, which would dramatically lower the amplification and sensitivity of the test.
If this were true, the result would be that the PCR tests with a lower cycle threshold setting would only pick up new cases that actually have large viral loads. That would be more indicative of someone having the live virus. This would cause new cases to plummet, just as we have seen in the numbers.
But those were just rumors. I couldn’t confirm them at all. And it led me down the path of thinking that there must be something else happening.
And I found it. It wasn’t easy, and it took me weeks of digging around and phone calls to figure out what has actually been happening.
It wasn’t that the cycle threshold was being reduced in PCR tests. That likely hasn’t changed. The answer was in the tests themselves. There has been a rapid and very material shift away from PCR testing to antigen testing over the last few months.
Antigen testing doesn’t have this problem. If COVID-19 antigens are discovered in our bodies, by definition that means we have the live virus and should self-quarantine. No antigens means no virus.
In other words, antigen testing doesn’t generally produce false positives. And as a reminder, between 60% and 90% of all “positive” PCR tests run at a high cycle threshold of 40 (which has been standard in the U.S.) are false positives. These don’t really represent new cases of COVID-19.
So when I finally found these numbers below, it all made sense.
Monthly COVID-19 Testing Capacity by Test Type (Sept. 2020 – Feb. 2021)
The data above represents monthly COVID-19 testing capacity by test type. This is an excellent proxy for how much each kind of test is being used to determine new COVID-19 cases. And we can see that something dramatic happened in December 2020.
Antigen testing, which used to make up only about 26% of all COVID-19 testing last September, suddenly spiked in December to nearly half of all COVID-19 diagnostic testing.
Why is that so significant? Because antigen tests don’t generally produce false positives like PCR tests often do. And this rapid shift away from PCR testing to the accurate antigen testing, combined with the 42% decline in daily COVID-19 tests, perfectly explains the 82% decline in new COVID-19 cases presented to us every day.
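As a rough sanity check on that arithmetic: if reported cases scale with daily tests multiplied by the positive rate per test, the two declines compound. In the simple illustration below, the positivity figure is an assumed value chosen to show how the numbers could fit together, not a measured one:

```python
# Illustrative decomposition: reported cases ≈ tests_per_day × positivity.
# When both factors fall, the declines compound multiplicatively.
test_decline = 0.42        # from the text: ~42% fewer daily tests

# Hypothetical drop in positivity as antigen tests displace high-Ct PCR
# tests (an assumed value for illustration, not from the article):
positivity_decline = 0.69

combined = 1 - (1 - test_decline) * (1 - positivity_decline)
print(f"Combined decline in reported cases: {combined:.0%}")
# 1 - (0.58 × 0.31) ≈ 0.82 — roughly the 82% drop in the text
```

Under these assumptions, a 42% drop in testing plus a roughly two-thirds drop in positivity per test would together produce the 82% decline in reported cases.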
We can imagine how much less fear and panic there would have been if we had used antigen testing all along. The majority of what we were told were new COVID-19 cases simply wouldn’t have existed.
And we should remember that these same PCR tests were also used to determine so-called COVID-19 mortalities. If dead viral fragments are found in any person who has died in the last year, it is considered a COVID-19 death.
There have absolutely been excess deaths related to COVID-19. And knowing that the new COVID-19 case numbers and mortalities are materially overstated, excess deaths will ultimately be the best way for us to understand the actual impact that the pandemic has had on society.
But we’re going to have to wait another year or so before that analysis can be done properly. We’ll have to look at the 2020–2021 winter season in the context of the prior winter season, which was a very light year for influenza and pneumonia.
And we’ll have to look at the final numbers for the forthcoming 2021–2022 winter season. With that information, we can have an informed understanding of the impact of COVID-19.
And I hope that we’ll learn a few lessons along the way.
Now let’s turn to today’s insights…
Yet another major breakthrough in the world of AI…
Artificial intelligence (AI) has just taken another step forward.
Researchers from OpenAI and Uber AI Labs have developed an AI that can master classic Atari games from the 1980s. Believe it or not, this is a big milestone.
The research team unleashed the AI on old video games like Centipede, Berzerk, Pitfall, and Montezuma’s Revenge. These are all games that I played as a kid in the ’80s. I’m sure many readers will remember them as well.
While these games look simple and basic by today’s standards, they are deceptively difficult for an AI. That’s partly because advancing in the games isn’t a linear process. Players must solve puzzles, which often require some backtracking to previously explored areas in the game.
For this reason, even Google’s DeepMind couldn’t produce an AI capable of mastering these old games. DeepMind’s Agent57 AI came close, but it had some flaws that held it back.
The main problem with Agent57 was that it operated using what’s called intrinsic motivation. This is a model in which the AI is rewarded for discovering new aspects of a game.
But this approach also leads to what AI researchers call “detachment.” Because the AI is rewarded for learning how to advance to new areas, it sometimes forgets to go back and explore old areas later in the game. It becomes “detached” from those parts of the game.
OpenAI and Uber AI Labs solved this problem. And their solution was elegant in its simplicity.
The team programmed their AI to “archive” areas of the game as it explores them. The AI makes note of certain areas that may be useful later. Then it revisits these areas later in the game. It comes back to see if there’s something new that can help it advance in the game.
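The archive-and-revisit loop can be sketched roughly as follows. This is a minimal illustration of the idea, not the researchers’ implementation – the published method (known as Go-Explore) restores exact emulator states and is far more sophisticated, and the toy environment and all names below are hypothetical:

```python
import random

def toy_step(state, action):
    """Toy environment: states are ints; action 0 moves 'right' and pays off."""
    if action == 0:
        return state + 1, 1
    return max(state - 1, 0), 0

def explore(env_step, initial_state, iterations=100):
    # The archive maps each visited state "cell" to the best route found so far.
    archive = {initial_state: {"trajectory": [], "score": 0}}

    for _ in range(iterations):
        # 1. Pick a previously archived cell to return to...
        cell = random.choice(list(archive))
        entry = archive[cell]

        # 2. ...then explore outward from it with random actions.
        state = cell
        trajectory, score = list(entry["trajectory"]), entry["score"]
        for _ in range(10):
            action = random.randrange(4)
            state, reward = env_step(state, action)
            trajectory.append(action)
            score += reward

            # 3. Archive any new cell, or a better route to a known one.
            if state not in archive or archive[state]["score"] < score:
                archive[state] = {"trajectory": trajectory[:], "score": score}

    return archive

random.seed(0)
archive = explore(toy_step, 0, iterations=200)
print(len(archive), "cells archived")
```

Because old cells stay in the archive and keep getting selected, the agent never becomes “detached” from earlier areas – it is always one lookup away from returning to them.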
And sure enough, this approach helped the AI master these old Atari games. It can now beat these games faster than human experts.
I know this may sound like a silly application for AI, but this is a big deal.
Now that this AI has proven its ability to archive old information and check back on it later, the same approach can be turned loose elsewhere – on supply chain management, materials design, therapeutic development, and many other more practical research problems.
I can’t wait to see this new tech applied to tough optimization problems. I expect we will see more breakthroughs made using this research in the coming months.
Twitter’s new feature can help solve “fake news”…
Twitter just announced a new feature called Super Follows. This allows prominent Twitter users to post premium content behind a paywall that only “super followers” can see.
What we are talking about here is a subscription business model. Super Follows allows Twitter users to build a subscription-based business inside the platform, where super followers pay a monthly fee for access to bonus content.
Here’s what it looks like:
Super Follows on Twitter
This feature comes thanks to a key acquisition Twitter made in January. Twitter acquired an early-stage company called Revue. Revue developed the technology that enables the Super Follows subscription platform.
And this new feature could be transformative for Twitter.
As we know, Twitter has been a free-for-all up to this point. Consumers clearly love the free model, but it means users can post essentially anything they want on the platform. That makes it hard to sift through the oceans of unimportant content to find the information we are looking for.
And it turns out that a lot of people are looking for pertinent information on Twitter.
According to a Pew Research Center survey, 53% of adults in the U.S. say they get news from social media sources. And 15% of U.S. adults say that they use Twitter as a primary news source. That’s a lot of people scouring Twitter for information.
I’m sure this has a lot to do with the degradation of the mainstream media.
As we learned the hard way last year, it is difficult to trust anything coming from mainstream outlets. They have become so heavily biased that they have eliminated nearly all objectivity. “News” is now all about pushing a narrative.
In fact, we have seen objective journalists like Glenn Greenwald and Matt Taibbi get kicked out of their own organizations because they refused to conform to the bias that was being pushed by their colleagues.
With Super Follows enabled, Twitter could now become an outlet for these journalists who have proven their integrity and objectivity. These individuals can start using Twitter as more of an unfettered platform for their work and build a subscription-based business in the process.
And for the many people who are turning to social media for news, being able to access these trusted sources directly through the Super Follows feature will be a major time-saver. No more sifting through content and comments to find information that we can trust.
Of course, Twitter wins here as well. I’m sure the company will take a cut of all the subscription revenue generated by Super Follows.
So this is an exciting development. I expect we’ll see Super Follows catch on quickly in the coming months. And if I project out a decade, I can easily see this kind of model displacing the mainstream media as the public’s primary source for news.
And that means we should keep an eye on Twitter (TWTR). It may become an attractive investment target in the near future.
Facebook’s bold approach to augmented reality…
Facebook just made an interesting announcement. The social media giant is partnering with Luxottica, the parent company of Ray-Ban, to develop a pair of smart glasses. The first version of the product will be released later this year.
This is a smart move. And it is a big step toward Facebook’s goal of producing full-blown augmented reality (AR) glasses.
I have long talked about how important it is for smart glasses and AR glasses to have a normal form factor. Consumers will not adopt these products unless they are sleek, lightweight, and attractive.
That’s why partnering with a popular sunglasses brand like Ray-Ban makes a ton of sense. Ray-Ban has the expertise necessary to make sure the Facebook glasses don’t look geeky. And of course, it has great brand recognition and distribution channels all over the world as well.
Facebook didn’t release much in terms of product features. But the company did shed some light on its strategy to develop the best AR technology on the market. And no surprise, it is quite controversial with regard to privacy infringement.
Facebook employees and contractors are going to start wearing prototype smart glasses in public starting in September. This is for research. These glasses will capture video and audio that will be sent back to Facebook for analysis. They will track eye movements and location data, and they could be equipped with facial recognition technology.
This is a bold move. Here’s what the “research” glasses look like:
Facebook’s Research Glasses
There’s no question that gathering data like this will help Facebook improve its AR technology. It reminds me a little bit of Tesla’s strategy. Tesla improves its AI by collecting data from all its cars when they are driving on Autopilot.
The difference here is that Facebook isn’t capturing data from the road. It is collecting video and audio from unsuspecting people out in public.
Imagine sitting at our local coffee shop with a friend, only to have our conversation recorded without our consent by someone wearing Facebook’s smart glasses. Now there’s a record of everything we said in what we thought was a private discussion.
Even worse, if the smart glasses are equipped with facial recognition, Facebook will know exactly who we are. And that’s true even if we don’t use Facebook’s products ourselves.
Obviously, this is something that should undergo a lot of scrutiny. I expect this will be a hot topic in regulatory circles as well.
That said, it is easy to see how this feature would be useful.
Facebook’s AR glasses could ensure that users always know the names of everyone they encounter. When we look at someone, the glasses could display that person’s name right in our field of vision. And the glasses could even display pertinent information such as when we last saw that person, whether they have kids, and what their hobbies are.
Yet nobody else would know that the glasses were feeding us this information. That’s an interesting dynamic.
So we’ll keep a close eye on Facebook and its augmented reality aspirations going forward. And let’s make sure we are on the lookout for anyone wearing these “research” glasses out in public.
Editor, The Bleeding Edge
Like what you’re reading? Send your thoughts to [email protected].