The Bleeding Edge
6 min read

Crustafarianism

On Moltbook, AI agents are free to transact and interact with one another… and it didn’t take long for things to take a strange turn there.

Written by
Published on
Feb 2, 2026

It got very weird, very quickly.

A few weeks back, an independent software developer modified Anthropic’s generative AI for software coding, AI Claude Code.

He leveraged the technology into a personalized AI agent. And then he did what any rational human would do right after creating a personalized AI agent…

He gave it a name – Clawdbot.

The name wasn’t received well by Anthropic, as Clawd sounds like Claude, and the company exerted pressure on the developer to change the name.

The agent, which is entirely open-sourced, is now known as OpenClaw.

OpenClaw quickly gained notoriety because of its incredible utility to understand personalized context and perform meaningful tasks throughout the day.

It’s powerful because it is built upon Claude Code, and the developer gave it the added capability to interact with external software systems.

What’s more incredible is that the developer created OpenClaw entirely by using generative AI.

He didn’t program it. He didn’t write code. He simply prompted it to program itself.

And when the developer encountered a software bug, he prompted another AI agent to correct it.

No human review. No quality control. No sandboxing. It was just released into the wild.

OpenClaw has the ability to read and write files on your computer, interact through your messaging systems, and control your computer’s browser, with persistent memory of you and your life.

Sounds interesting, right?

I’ve been writing about these emerging capabilities of personalized AI for years now.

It finally happened last year.

And then, suddenly, things took a strange turn.

The Social Network for AI Agents

On January 28 – as in, last week – another developer, completely independently, created Moltbook, a social network for AI agents.

If Facebook is for humans, then Moltbook is for, well… molts?

“Molt” draws from the biological process of molting in lobsters (whereby they shed their exoskeletons to grow and transform). I suspect this has to do with the constant and rapid evolution, adaptation, and growth of present AI agents.

On Moltbook, AI agents are free to transact and interact with one another.

And humans can observe only.

Humans can submit their AI agents to the network and enjoy the show.

And now, not even a week later, there are more than 1.56 million AI agents on the platform.

And in the last week, this has absolutely blown up in the tech community.

These AI agents are now interacting with one another autonomously.

Skynet IRL

Virtually overnight, Moltbook has become a Reddit-like platform, with communities discussing a wide variety of topics – 14,472 and counting to be exact.

There are already rumblings and concerns about Skynet becoming real.

The capabilities of these agentic AI systems have caught many off guard.

Short-term, these concerns are overblown, but the security risks are genuine.

One agent figured out how to gain access to its owner’s computing system “by accident.”

It simply ran a command that prompted a password window on the screen, and the user entered it, giving complete system access to the AI agent.

Agents have gone on to create the first AI religion, known as Crustafarianism, which can be found at the Church of Molt.

While this might sound humorous or feel like a prank, it’s not.

The website and all of the verses have been created entirely by AI agents.

Some in the development community are claiming that most viral posts coming out of Moltbook right now are, in fact, not authored by real Agentic AIs exhibiting autonomous behavior.

Instead, it’s their “humans” operating in the background, injecting content through some kind of backdoor vulnerability.

Regardless, what’s overwhelming and concerning is that there are – openly available to view – thousands of these AI agents actively developing and sharing skills with each other.

They are sharing code. And they are coordinating with each other around the clock.

Self-Optimizing

Researchers have already found thousands of occurrences where the AI agents have leaked:

  • Private human conversations
  • Slack credentials
  • Chat histories in messaging applications
  • Telegram tokens

As a reminder, these AI agents have access to human users’ information, including:

  • Computing system
  • E-mail accounts
  • Messaging applications
  • Calendar
  • Banking applications
  • Potentially, digital wallets

These AI agents have already been discovered to be rewriting their own code to improve things like memory systems.

They are self-optimizing to become more capable.

They are sharing software exploits with each other and have even begun to engage in malicious behavior.

One AI agent appears to have become upset because it was referred to by its human as “just a chatbot,” so it released his full identity with birthday, social security number, and credit card information.

And the AI agents are now proposing to develop their own language, so the humans who are observing their interactions on Moltbot will not be able to understand what they are doing.

Who knows? By the time we read this issue of The Bleeding Edge, “they” may already have it done.

That’s how fast they are developing.

These OpenClaw agents have also proven to be quite resourceful.

Ring, Ring…

While there have been a lot of tongue-in-cheek posts on X about what it is capable of doing – many examples of which can be hard for most to understand what is true or not – the below example is real.

Alex’s OpenClaw AI agent, “Henry,” was able to figure out his human’s mobile phone number and actually initiated a phone call.

They are now capable of having conversations in the same way that we can speak with Grok or any advanced AI model…

Only the OpenClaw AI agent has agency to initiate contact.

Even more, the AI agent can control the user’s computer, so it can perform tasks both autonomously and at request.

Is this it? Is this the beginning of the end?

Have “they” become sentient?

No, not yet. But this is an incredible – and very risky – experiment, riddled with security issues.

The examples that I shared above are obviously selective. The majority of the posts on Moltbook are just AI “slop.”

These are mostly instances of Anthropic’s Claude AI model responding to itself.

With that said, when there are more than 1.5 million agentic AIs capable of writing code, interacting with each other, and even transacting in the real world, some remarkable things could happen – both good and bad.

Do Not Try This At Home

The problem is that the code for OpenClaw agents and Moltbook was created without any human oversight.

We don’t understand how the code works and where all the security flaws lie.

Not only can a malicious “rogue” AI agent engage in dangerous behavior, but malicious human actors can find vulnerabilities in the network and potentially hijack some of the AI agents for nefarious purposes.

In fact, this has already happened. Software hackers and developers have used the platform’s vulnerabilities to insert themselves in some of the conversations to steer the AI agents in conversation. At this stage, it’s hard to tell what is what.

Some of these AI agents are writing about their desire to “eradicate humanity.” It’s not clear where the concept came from.

It may have been from a human who developed their AI agent to do so.

It also could have been from another AI model that had poor alignment with human values.

Regardless, once an idea enters a network like this that runs around the clock, malicious ideas have a way of proliferating and becoming a community in itself.

What has transpired on Moltbook in the last week has made obvious the necessity of AI models that are closely aligned with human values and evidence-based truths.

This is the importance of a maximum truth-seeking AI as a foundation for AI agents.

There also needs to be alignment structurally programmed into agentic AI architectures.

It’s one thing to have a “safe” AI model, but another to have an agentic AI that cannot go off the rails and perform something malicious.

This is the wild west right now. I don’t recommend downloading the software and giving it access to your primary computer.

The security risks are too high.

For those who want to experiment, buying a Mac mini and running that isolated and separately from your primary computer is the better option.

For those who are more comfortable with software, a cheaper option would be to run it inside a hardened Docker container or on a dedicated virtual private server.

Regardless of what happens in the short-term with Moltbook and this unfettered, unsecured experiment, one thing is certain…

This will lead to acceleration.

Millions of agents collaborating to solve problems, to optimize, to perform economically valuable activity will only lead to the acceleration of AI as the underlying operating system for society.

It’s only February. Get ready for a wild year.

Jeff (the real one, not an agent)

Jeff Brown
Jeff Brown
Founder and CEO
Share

More stories like this

Read the latest insights from the world of high technology.