Better Know Claude

Colin Tedards | Jul 27, 2023 | Bleeding Edge | 4 min read

Dear Reader,

ChatGPT, Bard, LLaMA 2, Claude, DALL-E 2, and AlphaCode. 

These are just a few of the generative AI models available to use today. It seems like every week, a new model comes out… or a newer version of an existing one.

That’s why I’m occasionally going to use The Bleeding Edge to take a closer look at a given AI model – and the company behind it.

Today, I’m featuring Anthropic and its latest AI model, Claude 2.

Claude 2 is the closest competitor to ChatGPT and GPT-4 from OpenAI.

At its core, it works the same way: users submit a prompt, and Claude usually responds with an informed, natural-language answer.

Claude has a few advantages over GPT-4.

The biggest difference is that Claude can ingest a book’s worth of information in a single prompt.

Claude’s context window is 100,000 tokens. The context window refers to how much information the model can digest at one time. And 100,000 tokens equates to roughly 70,000 words. That’s about the length of books like The Catcher in the Rye or Lord of the Flies.

In comparison, GPT-4 has only recently unveiled its 32,000-token context window.

That means Claude can handle about 3x what GPT-4 can.

That gives Claude the edge when it comes to digesting lengthy documents and providing useful insights. With Claude, users don’t need to come up with creative workarounds to feed the AI large amounts of data. Claude is built to take it all in.
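For a rough sense of those numbers, here’s a quick back-of-the-envelope comparison in Python. The 0.7-words-per-token figure is a common rule of thumb I’m assuming here; actual tokenization varies with the text:

```python
# Back-of-the-envelope comparison of the two context windows.
# WORDS_PER_TOKEN is a rough heuristic, not an exact conversion.

CLAUDE_WINDOW = 100_000   # tokens
GPT4_WINDOW = 32_000      # tokens
WORDS_PER_TOKEN = 0.7     # assumed rule of thumb

print(f"Claude 2 window: ~{int(CLAUDE_WINDOW * WORDS_PER_TOKEN):,} words")
print(f"GPT-4 window:    ~{int(GPT4_WINDOW * WORDS_PER_TOKEN):,} words")
print(f"Claude handles ~{CLAUDE_WINDOW / GPT4_WINDOW:.1f}x more at once")
```

That works out to roughly 70,000 words versus about 22,000.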

Claude is also about 5x cheaper than GPT-4. For prompts, Claude costs $11.02 per million tokens, whereas GPT-4 costs $60 per million tokens.
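To put those rates in perspective, here’s a rough sketch of what a single book-length prompt of about 100,000 tokens would cost at each quoted rate. This covers prompt (input) pricing only; output tokens are billed separately, which the sketch ignores:

```python
# Cost of a ~100,000-token prompt at the quoted per-million-token rates.
# Ignores output-token pricing, which is billed separately in practice.

PROMPT_TOKENS = 100_000
CLAUDE_RATE = 11.02 / 1_000_000  # dollars per prompt token
GPT4_RATE = 60.00 / 1_000_000    # dollars per prompt token

print(f"Claude 2: ${PROMPT_TOKENS * CLAUDE_RATE:.2f}")
print(f"GPT-4:    ${PROMPT_TOKENS * GPT4_RATE:.2f}")
print(f"Claude is ~{GPT4_RATE / CLAUDE_RATE:.1f}x cheaper per prompt token")
```

At those rates, a book-length prompt runs about $1.10 with Claude versus $6.00 with GPT-4.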

Of course, taking in more data doesn’t mean much if the AI can’t make sense of what it’s ingesting.

Here’s how Claude scored on a handful of standardized tests: 

  1. 76.5% on the multiple-choice section of the bar exam

  2. 90th percentile on the GRE reading and writing exams

  3. 71.2% on the Codex HumanEval, a Python coding test

These scores show that Claude has reasoning abilities similar to GPT-4’s.

Claude’s information is also more recent. Its training data cutoff is early 2023, compared to September 2021 for GPT-4.

From my own experience experimenting with Claude, I feel it offers longer and more nuanced responses. I think that’s generally a good thing coming from an AI. After all, most people are willing to take what an AI says at face value without fact-checking. So the more nuance and context the AI can provide, the better.

One of the most interesting features of Claude is what its developers call constitutional AI.

A big challenge for any AI developer is to make sure the AI doesn’t start spewing hateful or harmful information. After all, these AIs take in huge amounts of data from the internet. And the internet has some very hateful content on it.

This challenge brought down early AIs like Microsoft’s Tay, which spewed racist and inflammatory remarks.

OpenAI uses humans to teach the AI what’s harmful. Developers can manually intervene and tell it not to respond to certain questions. Or it can be trained, case by case, not to produce hateful or harmful responses.

Claude’s constitutional model is different. The developers managed to “teach” it an ethical framework to use in every response.

It basically works by having two different AIs manage the response. The first AI generates a response to the given prompt. The second AI then evaluates that response against the constitutional guidelines it’s been given.
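To make that two-step flow concrete, here’s a minimal sketch of the idea in Python. The function names, sample principles, and placeholder model calls are all hypothetical illustrations of the control flow, not Anthropic’s actual implementation:

```python
# Hypothetical sketch of a constitutional-AI response loop: one model
# drafts an answer, a second critiques and revises it against a fixed
# list of principles. All names here are illustrative placeholders.

CONSTITUTION = [
    "Avoid hateful or discriminatory content.",
    "Avoid responses that facilitate illegal activity.",
]

def generate_draft(prompt: str) -> str:
    """First AI: produce a candidate response (stand-in for a model call)."""
    return f"Draft answer to: {prompt}"

def critique_and_revise(draft: str) -> str:
    """Second AI: check the draft against each principle and revise it.

    In the real system a model would judge and rewrite the draft; here
    we only annotate it to show the control flow.
    """
    for principle in CONSTITUTION:
        draft = f"{draft} [reviewed against: {principle!r}]"
    return draft

def respond(prompt: str) -> str:
    return critique_and_revise(generate_draft(prompt))

if __name__ == "__main__":
    print(respond("Summarize this contract for me."))
```

The appeal of the design, as described, is that the safety rules live in one written list rather than being patched in case by case.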

Claude’s developers hope to eliminate hateful, discriminatory, unethical, and illegal responses entirely. This gives the model a potential edge in business and commercial applications.

According to Anthropic’s internal testing, Claude 2 is 2x better at giving harmless responses than its previous version, Claude 1.3. Just from experimenting with it, Claude does seem to have tighter guardrails than other AI models.

Stricter safety standards aren’t an accident. They’re the entire reason Claude exists.

Anthropic was founded by Dario and Daniela Amodei. Both held senior roles at OpenAI. According to them, they grew concerned that OpenAI was disregarding safety in favor of commercialization.

They wanted to build a better AI that had stricter safety standards.

It seems to have worked. Since 2021, Anthropic has raised over $1 billion from the likes of Alphabet and Salesforce. This past February, Anthropic partnered with Google to use Google Cloud to help train and develop Claude.

Anthropic seems to have made the most of the funding and partnership. Claude 2 shows that the company can go head-to-head with GPT-4 while sticking to its constitutional AI approach.

Anthropic is still a private company. Its last funding round in May valued it at $5 billion. That means there isn’t an opportunity to invest in it yet. But it’s a company that I’m keeping a close eye on.

I encourage you to give Claude a try at Claude.ai. Then let me know what you think by writing to me at feedback@brownstoneresearch.com.

Regards,

Colin Tedards
Editor, The Bleeding Edge

