• Will AIs Be Heavily Biased? Are they already?
  • When will commercial aircraft be “pilot free”?
  • “Set it and forget it” strategies for income…

Dear Reader,

Welcome to our weekly mailbag edition of The Bleeding Edge. All week, you submitted your questions about the biggest trends in tech and biotech. Today, I’ll do my best to answer them.

If you have a question you’d like answered next week, be sure you submit it right here. I always enjoy hearing from you.

Can AI be used to shape narratives?

Hi Jeff,

It’s not a question of whether students should be using AI to write essays, but how and when. The process of writing teaches students to think, to create new ideas, and to make logical arguments.

Perhaps one of the greatest benefits of writing is that it forces us to slow our thinking. Daniel Kahneman won the Nobel prize for his work in behavioral economics and described the trade-offs between careful thinking and reactive thinking in his book, Thinking, Fast and Slow. I highly recommend it.

Students who only know how to use a calculator don’t understand arithmetic as well as students who can do arithmetic without a calculator. But, it’s more complicated for writing essays.

On what kind of data has the AI been trained? Biased news articles, pundit opinions, and troll comments, which are heavily laced with logical fallacies? How much of that training data involves a detailed analysis of the procedures and statistics in a proper scientific study? Or a literary critique that’s not simply click-bait?

Neural networks are generally more associative than logical. That’s why the epistemic challenges for AI are very large. Yes, the technology will get better but that doesn’t matter if people who use it are not well educated.

I think that in this case New York’s public schools are doing the right thing by banning ChatGPT until they can teach students how to use AI properly. And, it’s better for them to first learn to write without it.

BTW, is The Bleeding Edge ever written by an AI?

Best Regards,

Joe B.

Hi, Joe. I really liked your questions and observations. Very well said, and you’ve hit on a hot topic for me that I haven’t yet discussed.

And before we dig in, I can confirm that The Bleeding Edge has never been written by an AI. It has always been me, a bit of a labor of love. 

But I have asked myself that question. Could I train an AI on every issue of The Bleeding Edge that I have written? And could it learn my writing and communication style? It could be a powerful tool to help me do even more.

My gut tells me that this year I’ll be able to experiment with just that kind of technology. And of course, if I do find something interesting, I’ll share it in The Bleeding Edge.

Back to the points that you raise…

One of my deepest concerns about the use of AI, and for that matter other forms of technology like social media applications, is the “dumbing down” of society.

Many of us are literally losing the ability, or the patience, to read and comprehend long-form content and information. To the point that you made, society has largely become reactionary: unable to slow its thinking, absorb and question information, and view it in a calm, rational way.

Too many of us simply react to something that we see without asking ourselves the source of the information. Is it true? Is the information biased? What research or data set was the conclusion based on?

And at times, answering these questions can be very difficult. I can’t tell you how many times during the pandemic I was reading scientific research and noticed a remarkable discrepancy between the title of the research and/or the conclusion, versus what the research actually discovered.

One was superficial to fit a political narrative, and the other was the truth that often was intentionally buried in the research. This made my work and analysis that much harder, but it really honed my analytical skills and was critical in maintaining objectivity.

AIs like ChatGPT, and the many that will follow this year, are the “easy button.” Just like ordering food from Uber Eats, or buying something on Instagram using Shopify, it’s just a click of a button and it’s done.

For that reason alone, the technology will be used by the masses and will make many of us lazy. And intellectual laziness is not a recipe for a peaceful and decent society. 

The other thing that has me so worried is the other point that you raised. AIs can have deep biases. And yes, it completely depends on the data set that an AI is trained on. 

This problem is already being talked about a lot in technology circles. We’ve already seen a very left-leaning bias to AIs in use because they are largely being trained on what is freely available on the internet.

And as we have recently learned from the Twitter files, we now know that U.S. government agencies were censoring, controlling, and even paying internet companies to manipulate the information that is available on the internet to fit a certain political narrative. 

Google, Facebook, Microsoft (LinkedIn), even Wikipedia were all part of the “great manipulation.” And while Twitter has been “freed” for now, all the rest are still part of this charade.

For instance, one thing that caught my eye recently was a tweet from Alex Epstein, an energy researcher and writer. He demonstrated how ChatGPT is now expressly forbidden from making arguments in favor of fossil fuels.

Source: Twitter

ChatGPT was clearly programmed around the above topic. Who knows how many other guardrails the team at OpenAI put on ChatGPT?

Training an AI on a body of biased information, one that contains false information and censors scientific research and data, will not produce an objective AI.

Worse, these large language models ascribe confidence to answers based on how often they see the same thing on the internet. The problem with this is fairly obvious…

If a certain group of people control most media outlets and thus can literally suppress truth and amplify false information, this can only lead to bad outcomes.

One thing that I’m sure of is that we’re going to have to keep a critical eye on AI implementations and not just trust any output that it gives us. AI companies will continue to train on large, free datasets from the internet like Wikipedia, which is heavily biased. 

It was painful for me to watch history being rewritten on Wikipedia over the last few years. But AI companies will keep doing it because it is the “easy button.”

In an ideal world, a company would carefully curate the data set for training to ensure objectivity of the large language model; but this is a lot of work. And it would cost a lot of money to perform. Time + money = difficulty. Free + fast = easy.

I wish there was a way to teach all students to read, think, comprehend, synthesize, and communicate their thoughts on topics. I can imagine how hard that task is today. 

I try to do that with my own children, and I have to say it isn’t received well. Students today largely feel that kind of “work” is unnecessary. I wish that weren’t the case.

One thing that I’m sure of is that the cat is out of the bag. It simply can’t be stopped. 

No matter what you and I do, or think, these tools will be available on the internet, or through smartphone apps this year. This technology will spread like crazy. I see no point in banning it. 

A far better use of our time is to spend time educating students about it, identifying pitfalls, and demonstrating the importance of doing our own work and research to maintain objectivity.

I fully acknowledge it is almost certainly a losing cause, but I’m sure that it’s worth trying.

Thanks again for such an interesting topic.

Will we need airline pilots in the future?

Hi Jeff,

I really enjoy reading your daily newsletter and I’d like to get your thoughts on something. One of my sons wants to be a pilot but is worried that eventually AI will completely take over that occupation and his career might come to an end long before he is ready to retire. What say you?

– K&J

Thanks for writing in with your question. Some readers may know that one of my first jobs was working in the aerospace industry. I worked as a contractor up at Boeing in Everett, WA on the 777 program after earning my B.S. in Aeronautical and Astronautical Engineering from Purdue University.

Your son is smart to be asking that kind of question. And it’s a very tough one to answer with specificity.

The answer is “yes”, but within what timeframe? 

I believe that for the next decade or so, there will still be humans in the cockpit. They won’t be doing a whole lot though, unless you’re a bush pilot.

It may surprise readers to know that airline pilots already do very little hands-on-the-yoke flying. On a typical commercial flight, the pilot only flies in a traditional sense for a few minutes on take-off and landing. The rest of the time, the plane is on autopilot.

As commercial airplanes expand the use of autonomous technology, pilots will be largely in the cockpit as “safety pilots.” How long will the need for safety pilots last? Another decade? By then, we’ll already have artificial general intelligence, which changes everything.

Fully autonomous eVTOLs are already being developed for use by 2025/2026. This is an easier technological problem to solve than a commercial aircraft, which is why I believe we’ll see pilot-free eVTOLs adopted sooner than commercial aircraft.

If your son has a desire to fly, I would absolutely encourage it. There should be a solid 10 to 20 years of good work ahead of him, but I would encourage some additional study.

Computer science and artificial intelligence would be ideal. There will always be a need for air transportation, so having some additional technological training as applied to aircraft would certainly go a long way and provide a path towards future employment when human pilots become less necessary for flights.

The combination of a skilled pilot with an understanding of aerospace engineering and artificial intelligence would be a great combination for a successful career.

My preferred “set it and forget it” income strategy…

Hello,

I am a 73-year-old with cash sitting in a money market account. Are municipal bonds a good way for me to supplement retirement income?

Lorraine T.

Hi, Lorraine. As I’m sure readers know, I can’t offer personalized advice. But I can speak generally about the topic of municipal bonds.

The reality is that there will be a time (or times) in our investing career when growth will become less important. Our priorities will shift to preservation of capital and income. And when that time comes, municipal bonds (munis) are one of my preferred strategies for “set it and forget it” income.

As a reminder, municipal bonds are issued by state and local governments to finance projects like building new schools, sewer systems, or water utilities.

I did a deep dive into this area last year, leveraging some of my contacts in the industry. What I found is that municipal bonds have, historically, been incredibly safe. Going all the way back to the Great Depression, municipal bonds have a general default rate of less than ½ of 1% (a default means a missed principal or interest payment). And the default rate of high-quality muni bonds is near zero.

And with this low risk comes safe, consistent, and perhaps, a bit “boring” returns. And at a time like this when markets are being turned upside down, words like “safe” and “consistent” have considerably more appeal.

Last year, I recommended a municipal bond to my Brownstone Unlimited members. We had the chance to invest in a municipal bond offered by a municipal utility district (MUD) in Harris County, TX.

Officially, our yield was around 4.9%, depending on what maturity readers picked up. But that’s not the full story…

One of the great benefits is that municipal bonds—almost always—are exempt from federal income tax. For an investor in the highest tax bracket, that 4.9% yield was closer to 7.79% when we account for the tax-advantaged nature of the bond.

A nearly 8% yield on an asset that is essentially risk-free is hard to beat. And with interest rates on the rise, the yield on munis is becoming even more attractive. I’ve seen muni bonds that are yielding close to 10% when we consider the tax implications.
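The math above follows the standard tax-equivalent yield formula: divide the tax-exempt yield by one minus the investor’s marginal tax rate. As a minimal sketch, here it is applied to the 4.9% figure, assuming the top 37% federal bracket (the exact rate assumed for the 7.79% figure above may differ slightly):

```python
def tax_equivalent_yield(muni_yield: float, marginal_tax_rate: float) -> float:
    """What a fully taxable bond would need to yield to match
    a tax-exempt municipal bond yield, after federal income tax."""
    return muni_yield / (1.0 - marginal_tax_rate)

# A 4.9% tax-exempt yield for an investor in the 37% bracket:
print(f"{tax_equivalent_yield(0.049, 0.37):.2%}")  # prints 7.78%
```

The same formula shows how a muni yielding around 6.3% approaches a 10% taxable-equivalent yield for a top-bracket investor.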

The only “problem” with munis is that they have historically been difficult to access. Primary offerings—we can think of them as a “bond IPO”—are typically snapped up by high-net-worth investors or family offices.

Finding access to our municipal bond offering in Brownstone Unlimited took a tremendous amount of work. I plan on recommending some more muni bond primary offerings in the future.

I will make one final point. 

Investing in municipal bonds for their tax-free benefits makes the most sense for those who are in the 30%+ tax brackets. These investors really gain the benefit of the tax-free nature of municipal bonds.

For those investors who aren’t in those tax brackets, my favorite “set it and forget it” strategy in these markets is investing in convertible bonds – I call them X-bonds – in high-quality growth companies.

This has been a keen area of research for me over the last year, and this year, in Exponential Tech Investor.

The right kind of convertible bonds have very strong downside protection, a coupon, a great yield to maturity, AND the added bonus of potential for significant capital gains.

I hope that information is helpful to all. Thanks for writing in with that question, Lorraine. I’m sure there are a lot of subscribers who have the same question.

Best wishes for a wonderful weekend,

Jeff Brown
Editor, The Bleeding Edge