• Getty Images sues Stability AI over copyright infringement…
  • OpenAI is now available on Microsoft’s Azure…
  • Is social media ready for generative AI influencers?

Dear Reader,

“What?! You want me to write my homework, in class, with – like – pen and paper? NO. WAY.”

The above is the kind of retort that has already started to surface in universities across the country. The catalyst, of course, is a topic that we know very well in The Bleeding Edge:

OpenAI’s ChatGPT artificial intelligence (AI) chatbot.

The use of ChatGPT by students has spread so rapidly, and the quality of the AI’s “writing” and analysis is so good, that many teachers have quickly concluded that the homework being turned in was simply too good to be the students’ own work.

As a reminder, OpenAI released ChatGPT just a few weeks ago, on November 30. The speed of adoption has been nothing short of spectacular. But the speed at which universities around the country are jumping into action has been quite a surprise.

These storied and heavily bureaucratic institutions tend to be very set in their ways – and slow to adapt to change (with, of course, the exception of raising tuition).

A few days ago, we had a look at some work from a Princeton University student who created GPTZero. It’s a tool that assesses the likelihood that any text it analyzes was crafted by an AI like ChatGPT, Google’s LaMDA, Anthropic’s Claude, and others.

More than 6,000 teachers at some of the most well-known universities in the world, including Yale and Harvard, have already signed up to use GPTZero. And since its January 3 release, GPTZero has gone viral on Twitter, racking up more than 7.5 million views.
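Under the hood, detectors of this kind typically lean on statistical signals such as perplexity, a measure of how predictable a passage is to a language model, since AI-generated text tends to be more predictable than human writing. Here is a minimal, self-contained sketch of that idea in Python. The toy unigram model and the sample sentences are purely illustrative assumptions on my part, not GPTZero’s actual method:

```python
import math
from collections import Counter

def unigram_model(corpus_tokens):
    """Build a smoothed unigram probability function from a reference corpus."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts)
    # Laplace smoothing: unseen tokens still get a small nonzero probability
    return lambda tok: (counts[tok] + 1) / (total + vocab + 1)

def perplexity(tokens, prob):
    """Perplexity = exp of the average negative log-probability per token."""
    return math.exp(-sum(math.log(prob(t)) for t in tokens) / len(tokens))

# Toy "reference corpus" standing in for a language model's training data
reference = "the cat sat on the mat the dog sat on the rug".split()
model = unigram_model(reference)

predictable = "the cat sat on the mat".split()           # looks like the training data
surprising = "quantum marmalade debates gravity".split()  # out-of-distribution

print(perplexity(predictable, model))  # low: "machine-like" predictability
print(perplexity(surprising, model))   # high: more "human-like" surprise
```

A real detector swaps the toy unigram model for a large neural language model and adds signals like “burstiness” (how much perplexity varies sentence to sentence), but the scoring principle is the same.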

But the reality is that it’s just a stop-gap measure. A tool like this is akin to the story of the little Dutch boy who put his finger in the dike to stop a leak in the hopes that the dike wouldn’t break.

Universities have already recognized this. With the tool widely available, it’s a dam they can’t stop from breaking. The flood has already started. 

Every student can use ChatGPT – and the others that will follow it – as a real-time version of CliffsNotes for just about any homework assignment imaginable. It’s an intelligent search engine that gives instant answers to complex questions, which can then be adapted to homework assignments and take-home open-book tests.

Which brings us back to pen and paper.

Universities are quickly adjusting to this new reality. One teacher requires students to handwrite their first draft of an essay in class. Students are further required to explain changes that they make to future drafts.

Other professors are changing the way they issue homework entirely. They’re shifting toward oral exams, handwritten assignments in class, and group work that’s far less likely to be plagiarized with the help of an AI.

This is a far more productive direction. Any work that can be completed with the help of an AI outside the classroom will likely be de-emphasized in grading.

And one of the most obvious ways to check whether a student has done the work and understood the subject matter is to test comprehension in class in the absence of an AI.

Hence – pen and paper.

So these generative AIs will still be widely used, but as a tool for study – a tool for accelerated learning. They’ll help us find exactly what we’re looking for, and provide interesting insights, in a matter of minutes.

I have to say: I’m both excited and jealous. 

I wish I had a tool like that at my fingertips back when I was in school. It would have spared me so much time wasted searching for information, and enabled me to spend the majority of my time actually studying and learning even more.

Do generative AIs break the law…?

Unsurprisingly, there’s suddenly a lot of discussion around the legalities of generative AI. Especially with regard to copyright law.

We’ve talked a lot about generative AI recently. As a reminder, generative AI refers to AI that can produce content, images, and even software code on command. GPT-3, ChatGPT, Stable Diffusion, Midjourney, and OpenAI’s DALL-E are all great examples, along with offerings from Google and Meta.

However, there’s a big question around the data sets used to train these AIs. What’s in them? Do they contain copyrighted material?

Most of the companies behind the generative AIs we’ve discussed have kept their data sets close to the vest. They don’t share them publicly. We don’t know exactly what they contain.

Stability AI, the creator of Stable Diffusion, took a different approach. Its data set is open and available for all to see and use.

And as it turns out, Stable Diffusion was trained on stock images from Getty Images that Stability AI scraped from the internet. The giveaway was a mangled Getty watermark that appeared on some of the images Stable Diffusion produced.

Here’s an example:

Stability AI Scraped Publicly Available Images From Getty

Source: The Verge

This prompted Getty Images to investigate Stability AI’s data set. And Getty found that the data set used to train Stable Diffusion contains many of its stock photos.

Unsurprisingly, Getty just filed a lawsuit against Stability AI claiming copyright violation. And on one hand, the company certainly has a good case.

But the situation is far more complex than it appears on the surface.

Getty makes its stock photos publicly available on the internet. Those images aren’t behind a paywall. That’s the business model. Creators can view its entire library of stock photos with a watermark on them. Then, if somebody wants to use a particular photo, they have to purchase it to get the watermark off.

In building its data set, Stability AI simply scraped millions of publicly available photos from the internet. This is largely an automated process. The team didn’t pick and choose between photos.

So, I’m sure that Stability AI’s legal team will argue that this isn’t a case where Stability AI knowingly violated Getty’s copyright protection.

But here’s the thing: For those companies that didn’t make their generative AI training data open and available, there’s simply no way to know what’s in there. Stability AI is being targeted simply because it’s the only one that let the public see behind the curtain.

The point is, there’s no way Getty or anybody else can stop generative AI or control the data sets. If companies keep their data sets private, there’s no way for anyone to know what’s in them.

Clearly, Getty Images is just trying to set a legal precedent here. Its primary goal is to get a licensing model in place. That way, if an AI company wants to use its photos to train its AI, it will have to pay royalties to Getty for the privilege.

I see this as an act of defense and self-preservation.

In a world where AIs can create high-quality, creative, photo-realistic images in seconds, where does that leave the stock photo industry? I can easily make the case that stock photos are about to become obsolete.

So this is an interesting case to watch this year. How it shakes out could set a strong legal precedent for the rest of the industry.

Microsoft’s brilliant product strategy…

Microsoft just made another big announcement: It’s making all of OpenAI’s technologies available on its Azure cloud service. This is huge.

What this means is that Microsoft Azure customers will suddenly have access to everything OpenAI has put out: GPT-3, ChatGPT, DALL-E 2, and even Codex.

We’ve talked about each of these tools recently. They’re powerful AIs.

Microsoft had already integrated OpenAI’s tools into Azure. That happened last November. But until now, everything was on an invitation-only basis. Most Azure customers couldn’t get access.

Now the floodgates are open. Microsoft is making OpenAI’s tech broadly available to anyone willing to pay for its Azure cloud services. And I have no doubt that we’ll see an avalanche of adoption as a result.

OpenAI’s tools have already been used for things like customer support, research and analysis, data classification, and much more. These are complex problems that are expensive and time-consuming to solve using human labor. But with AI, these projects can be completed in minutes – and sometimes even seconds.

What’s more, GitHub Copilot, Microsoft’s service that helps developers write code, is powered by OpenAI’s Codex, a descendant of GPT-3. No doubt it will spur a productivity increase for software developers.

So all of a sudden, Microsoft has become one of the more interesting companies in tech. I never thought I’d say that.

But it’s not an exaggeration to say that this is the single most significant thing Microsoft has done in at least two decades. By infusing Azure with OpenAI’s tools, Microsoft now has a competitive differentiation in cloud services that none of its rivals can match.

Prior to this development, Azure had often been thought of as the least attractive cloud service provider from a technology services perspective. It struggled to win major business without “buying” it through investment vehicles, just as it did with OpenAI.

At Brownstone Research, we’ll be keeping a close eye on Microsoft (MSFT), and its deepening partnership with OpenAI, this year.

I wouldn’t recommend the stock at these levels, as it’s still richly valued. But if we get one more material pullback in the markets, MSFT could become a great investment opportunity.

Get ready for an onslaught of AI influencers…

Generative AI has been the hottest topic in The Bleeding Edge recently. As we’ve discussed, we’ve reached the point where it’s nearly impossible to know if you are dealing with an AI or a human online.

That’s of course true when it comes to virtual chats. But it’s also true on social media.

Regular readers may remember Lil Miquela. She’s an influencer with over 3 million followers on Instagram.

And as we noted last October, Lil Miquela has actively promoted various consumer brands to her followers on Instagram. No doubt this has generated a ton of advertising revenue for “her.”

Here she is:

Popular Social Media Influencer Lil Miquela

Source: YouTube

Here, we can see Miquela standing in front of a restaurant with a branded cup.

But here’s the thing: Lil Miquela isn’t a real person. She’s a computer-generated image. She only exists online. And she’s developed and managed by a team of humans.

However, I would wager that many of her followers don’t realize this. Without a trained eye, many wouldn’t even recognize that she’s not real…

Well, the inevitable just happened.

Inspired by Lil Miquela’s success, creators are now using generative AI to create their own Instagram influencers. Meet Alice:

Alice Is a Generative AI Influencer

Source: thismodeldoesnotexist.co

These images of Alice were all created by generative AI. A few of them are a little wonky. But most of these are absolutely stunning. They make Alice look like a real person engaged in real activities.

And as we can see at the top, the creator is asking their community to vote on which images should go up on Instagram. They’re leveraging the power of the crowd to pick out the best ones.

To me, this is a brilliant model. The AI can create an enormous number of images for the creator. And then the community can help determine which images are best. The creator is catering to the “likes” of the artificial influencer’s fans.

And think about this: Generative AI could also produce an unlimited amount of content in Alice’s “voice.” That way, Alice’s Instagram could make posts all day, every day. It could produce as much content, in a nearly unlimited number of settings, as the market will bear.

Human influencers simply can’t do that.

And if Alice were enabled with ChatGPT-style technology, the AI could hold a million conversations with followers in real time.

It would be able to develop meaningful relationships with every individual… at least, from their perspective. And that would increase Alice’s influence all the more.

For those who might find this topic interesting, I highly recommend the movie Her with Joaquin Phoenix, which explores what life will be like as this technology evolves.

This technological development will become absolutely transformational both for influencers and social media as a whole. Pretty soon, social media platforms will be inundated with AIs. There won’t be a way to know who’s real and who’s not. And interacting with these personalities will feel deeply personal.

There’s no doubt it’s a massive business opportunity. Now that generative AI is widely available, anybody savvy enough could do this. And anyone who can get up to a million followers will be able to make all kinds of money from advertising revenues and by promoting products.

I’m sure we’ll see this catch on in a big way in the months to come. I’m interested to see how the social media companies handle it.


Jeff Brown
Editor, The Bleeding Edge