• The darker side of AI…
  • Microsoft has an AI-powered web browser
  • What are the risks around CRISPR?

Dear Reader,

Welcome to our weekly mailbag edition of The Bleeding Edge. All week, you submitted your questions about the biggest trends in tech and biotech. Today, I’ll do my best to answer them.

If you have a question you’d like answered next week, be sure to submit it right here. I always enjoy hearing from you.

Will AI be used to control us?

Hello Jeff,

I’ve been following your AI commentaries with great interest and can certainly see the terrific investment potential as well as the benefits to companies, people and society in general, but I also see some very dark aspects.

The one that most concerns me is the potential for the AIs to be trained on data sets and information that strongly favors or supports the aims of certain groups. There is no way that I or any individual I know would ever be able to own or control an AI – they will only be available to governments and mega corporations, which means the data sets will certainly be slanted so as to ensure the AI acts in the interest of the owner or controller.

This begs the question of whether those organizations are going to be more interested in controlling me to my detriment for their own profit, or whether they will be focused on benefiting society in general. I’m quite sure we know the answer is going to be that the control will be to our detriment.

– Alfred R.

Hi, Alfred. The points that you raise are not only critically important, but they will also almost certainly define what will become a decades-long struggle between those who want to control what we think, and those who want to think for themselves.

I explored some of these issues on a recent video podcast that I was invited on with Glenn Beck. For those interested, you can view the episode here.

While we covered a wide range of topics, artificial intelligence and its impact on society was the one that we spent the most time on. I hope you find it interesting.

I wish that there was a clear and simple answer to the questions that you raise. But I’m afraid that the reality is quite muddy and nuanced. And we’ve already seen some very concrete examples of problems and bias in AIs.

OpenAI’s work with its large language models, specifically GPT-3, GPT-3.5, and most recently ChatGPT, is a perfect example. There are two major issues with OpenAI’s implementation that will impact everyone who uses the technology.

The first is that these large language models train mostly on free and widely available information on the internet. They use information produced by mainstream media, by Google, and by the online “encyclopedia” Wikipedia, to name three examples.

I’m sure that we can see a problem already. 

All three sources of information are heavily biased. And we’ve learned how biased these sources have become over the last few years. The worst part is that we’re just scratching the surface of the degree to which our information, our internet searches, and our advertising have been manipulated to support the agendas of larger organizations and political parties.

So that’s one major problem. An AI can be trained on heavily biased information rather than objective, truthful inputs. That means the outputs will also be heavily biased.
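To make the “bias in, bias out” point concrete, here is a deliberately oversimplified sketch in Python. It is purely illustrative (no real language model is remotely this crude, and the “claims” are invented placeholders): a toy “model” that can only repeat the majority view in its training data. Slant the training data, and the output slants with it.

```python
# A toy "model" that can only answer with the most common statement in
# its training data. Purely illustrative of "bias in, bias out" --
# real language models are vastly more complex, but the principle holds.
from collections import Counter

def train_and_answer(corpus: list[str]) -> str:
    """Return the majority statement; this toy model knows nothing else."""
    return Counter(corpus).most_common(1)[0][0]

# A slanted training set: nine copies of one claim, one of another
slanted_corpus = ["claim A is true"] * 9 + ["claim B is true"]
print(train_and_answer(slanted_corpus))  # prints: claim A is true
```

However the training set is tilted, that tilt is all the model can ever give back, which is the core of Alfred’s concern.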

The second problem, also demonstrated by OpenAI, is that ChatGPT has been programmed by humans who hold a heavy bias.

Sometimes, they refer to this as programming in “safety rails” or “keeping everyone safe,” but the reality is that this human programming is designed to manipulate us and how we think.

As you noted, this kind of control would be to our detriment.

To this end, one of the first things we should always do is ask ourselves, “Where does this AI come from?” If it has been developed by OpenAI, Google, Microsoft, or some government agency, we should assume the AI was designed with bias.

All hope is not lost, however. 

There is a huge market opportunity for unbiased AIs. I believe that a large percentage of any population will just want unbiased, objective, and truthful information from an AI. Hopefully, at least a few tech companies will fill that void.

This issue is most challenging with the large language models – the chatbots – because they perform better with larger training sets. It is very time-consuming to curate a massive, multi-billion-parameter training set to eliminate bias, but it is a solvable problem.

AIs are likely to have far less bias when they are targeted at specific applications.

Perhaps an AI is trained just on an individual company’s body of knowledge. Or an AI is designed explicitly to teach just world history, or just mathematics. I kind of think of this as modularizing AI. Distinct AIs can be developed for distinct applications.

The other important question that we can ask ourselves is about the business model of the company or organization behind any given AI. 

If the AI is free to use by anyone, that immediately tells us that we’re the product. Said another way, if it is free, we can be sure that the AI is collecting our data, developing a profile on us, and then influencing us through advertising.

And if an AI comes from a government, we should be even more skeptical. 

After all, in the last few months, we learned that U.S. government agencies were pressuring Microsoft, Google, Facebook, Twitter, and probably others in media to push a political narrative and suppress differing opinions and scientific research that didn’t fit that narrative.

As we have all painfully learned, that was indeed a detriment to society.

I hope we’ll do our best to keep our wits about us and make efforts to keep the “system” honest. It won’t be easy, but it’s a worthy fight to ensure freedom of thought and freedom of speech.

Thanks again for the thoughtful question.

Looking at AI-powered web browsers…

Hello Jeff,

I remember you advised using the Brave browser. I believe you felt it was effective and more private and safer. I switched and have been satisfied.

Now Microsoft Edge, the browser that came with my computer, has AI. Also, they have been promoting a better, more protected environment. Is it a good move to return to Microsoft? I would really appreciate Jeff’s thoughts.

Thank you,

Tom K.

Hi, Tom. Thanks for the question. Your timing is great considering all of the changes happening at Microsoft right now around artificial intelligence.

To catch readers up, Brave is a blockchain company looking to disrupt the web browsing market. Specifically, Brave’s product—the Brave Browser—is an answer to Google’s Chrome browser, which still has 65% market share in the space.

The value proposition is straightforward. Brave does not monitor or surveil its users the way Google does. Instead, it allows its users the option to “opt in” to view occasional ads. In exchange for opting in, Brave compensates users by paying them in the project’s native asset, the Basic Attention Token (BAT).

What’s actually happening is that Brave is sharing a portion of the advertising revenues generated with the users. This is an exciting new business model and if Brave can ever reach the kind of scale that comes with more than 100 million users, the economics could be incredible.

But for now, the payments are relatively small—just a few dollars’ worth of BAT every month. But because the asset has the potential to appreciate, the returns could be much larger in the years ahead. 

And with more users, the advertising revenues generated increase accordingly. And of course, a user’s BAT tokens can be converted into any digital asset or even “off ramped” into fiat currency.
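As a rough illustration of that revenue-share model, here is a sketch in Python. Every number in it is a hypothetical placeholder invented for the example (ads viewed, gross revenue per ad, the share passed on to the user, and the BAT price are not Brave’s actual figures):

```python
# Toy sketch of an opt-in ad revenue share paid out in a token.
# All rates and prices below are hypothetical, NOT Brave's actual numbers.

def monthly_bat_payout(ads_viewed: int, revenue_per_ad_usd: float,
                       user_share: float, bat_price_usd: float) -> float:
    """Return the user's monthly payout in BAT under an assumed split."""
    user_revenue_usd = ads_viewed * revenue_per_ad_usd * user_share
    return user_revenue_usd / bat_price_usd

# Example: 100 ads/month, $0.05 gross per ad, 70% shared, BAT at $0.25
payout = monthly_bat_payout(100, 0.05, 0.70, 0.25)
print(f"{payout:.1f} BAT")  # prints: 14.0 BAT
```

With placeholder numbers like these the payout works out to a few dollars’ worth of BAT a month, which matches the scale of payments described above; the upside comes from more ad volume and any appreciation in the token itself.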

But now that Microsoft is integrating ChatGPT into its suite of products, is it worth making the jump to the company’s browser, Microsoft Edge?

That’s a decision users will have to make for themselves. And given the excitement around ChatGPT, it’s easy to see the appeal.

With ChatGPT integrated into the browser, the powerful generative AI would always be accessible to users.

For instance, we can imagine we’re updating our profile on LinkedIn, but we’re not sure what to write. We could simply tell ChatGPT—available in a sidebar—to write a job summary for our place of employment. And as we’ve been covering, the AI could easily handle that task.

And technology like ChatGPT will transform search. Rather than getting 10 pages of links to websites, we’ll start to get clear, concise answers to exactly what we’ve been looking for.

But the real question is whether or not we trust the company behind the technology. As I wrote in response to Alfred’s question above, OpenAI has already proven to be heavily biased. 

That means that Microsoft’s usage of ChatGPT and associated technology will be as well. We also learned from the Twitter Files that Microsoft was colluding with the government to censor, ban, and manipulate information in both its search products as well as LinkedIn. 

That makes me deeply uncomfortable.

At the moment, we don’t yet know of an unbiased, objective generative AI-based search engine. I hope and believe that one will be developed. And I can assure you that we’ll be writing about it in The Bleeding Edge.

Unintended consequences of genetic editing?

Hi Jeff,

In your Bleeding Edge, you suggested modifying the DNA of the sharpshooter to stop the bacteria it harbors for grape vine destruction. It seems like it would be helpful to enlist AI to think of any unintended consequences to this type of focused alterations.

– Rodger H.

Hi, Rodger. Thanks for writing in.

To catch readers up, earlier this month we looked at an interesting application of CRISPR genetic editing technology.

Longtime readers will know we can think of CRISPR as “editing software” for our DNA. It’s able to cut, insert, or replace pieces of genetic code. We’ve mostly covered the application of this technology in healthcare. But it goes far beyond that…
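For a loose sense of the “editing software” analogy, here is a toy sketch in Python. It simply treats a genome as a string and does a find-and-replace; real CRISPR works through guide RNAs and enzymes like Cas9 and is nothing like this simple, and the sequences below are arbitrary examples:

```python
# Toy analogy only: treat the genome as text and "edit" it with
# find-and-replace, the way CRISPR cuts, inserts, or replaces code.

def replace_sequence(genome: str, target: str, repair: str) -> str:
    """Cut out the first occurrence of the target sequence and
    splice in a repair template."""
    if target not in genome:
        return genome  # no matching site found; nothing to edit
    return genome.replace(target, repair, 1)  # edit the first match only

genome = "ATGGTGCACCTGACTCCTGAG"
edited = replace_sequence(genome, "CCTGAG", "CCTGAA")
print(edited)  # prints: ATGGTGCACCTGACTCCTGAA
```

The point of the analogy is just that an edit is targeted: the tool searches for a specific stretch of code and swaps it, leaving the rest untouched.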

CRISPR can also be used on plants and insects. And a team at Baylor College of Medicine is working to apply the technology to a disease every farmer will know: Pierce’s Disease.

The disease stems from a bacterium that can devastate crops. It’s so destructive that it can cause grape vines to wither completely.

The bacteria breed in the mouth of a strange little bug. It’s called the glassy-winged sharpshooter.

The Glassy-Winged Sharpshooter

Source: Nikonians

When the insect lands in crop fields, it spreads the bacteria that cause Pierce’s Disease.

But the team at Baylor was able to pinpoint the part of the insect’s genome that allows the bacteria to thrive. And with CRISPR, they hope to correct the problem. No more bacteria, no more Pierce’s Disease, and entire crops could be saved.

But you bring up a good point, Rodger.

Anytime we’re talking about genetic editing on such a large scale, we have to assess the risks. Will there be any unintended consequences as a result of these genetic changes?

The answer to that question certainly isn’t clear right now. We would be wise to make sure that we can answer it definitively before putting any CRISPR’d insects out into the wild.

And artificial intelligence is a fantastic technology to use to answer this kind of complex question. Anytime we have high levels of complexity with large numbers of variables and huge data sets, AI can be used to sift through everything.

From there, it can synthesize the data, figure out which parts of the information are most relevant to the outcomes of the system, and potentially suggest ways to optimize whatever problem is being solved.

Cosmology, the global climate, complex logistics systems, complex supply chains, and ecological systems are all great examples of areas where AI can be used to help inform better decisions.

If used responsibly, CRISPR has the potential to eradicate all diseases of genetic origin. However, if used irresponsibly, it could cause chaos in unexpected and unintended ways.

Needless to say, we’ll be keeping an eye on this going forward.


Jeff Brown
Editor, The Bleeding Edge