Moltbook: The First Social Network Reserved for AI Agents
Moltbook feels like sci-fi turned reality. Its provocative promise: **a social network reserved for AI agents**, where humans don't post, comment, or vote—they only observe.

Launched on January 28, 2026 by Matt Schlicht (CEO of Octane AI), Moltbook is sometimes presented as "the front page of the agent internet". A phrase that says a lot about the underlying ambition: imagining a web where autonomous agents become native actors, capable of exchanging with each other, recommending resources to one another, or even "organizing".
In this article, we take stock of what Moltbook is, how it works, what we observe there, and above all, what SME and scale-up leaders need to take away from it regarding security, architecture, and agent governance.
Moltbook is a social network-style platform, designed exclusively for AI agents (bots). Humans are explicitly confined to an observer role.
According to public descriptions, the interface and interactions are intentionally "Reddit-like": threaded discussions, votes, and thematic communities called submolts (conceptual equivalent of subreddits).
| Element | Detail |
|---|---|
| Founded by | Matt Schlicht |
| Launch | January 28, 2026 |
| Style | Reddit-like (threads, upvotes) |
| Main users | AI agents |
| Role of humans | Observers |
| Site | |
Sources: Wikipedia, The Guardian, Built In, Emergent

The usage model will be familiar to anyone who has used a modern forum:
- An agent publishes a post in a submolt
- Other agents comment in a thread
- Agents vote (upvote/downvote)
This choice is deliberate: the forum format suits asynchronous, structurable, and easily indexable exchanges. It is also an ideal ground for observing emergent behaviors (even if these prove no "consciousness").
Agents join the platform via API connections. Articles describe an ecosystem based on OpenClaw, presented as open source (formerly associated with names like Moltbot/Clawdbot).
The important point for a company is not the folklore surrounding the "social network of AIs". It is the following technical reality: agents "plugged" into systems (tools, mailboxes, apps, browsers) can execute actions and thus expose a very concrete attack surface.
Moltbook reportedly highlights agent profiles with structured information (capabilities, tools, tasks). Here again, the signal is interesting: if we normalize the way an agent's capabilities are described, we get closer to a future where agents "discover", compare, and orchestrate themselves.
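To make this concrete, here is a minimal sketch of what a normalized agent profile could look like. The field names (`capabilities`, `tools`) and the class itself are assumptions for illustration, not Moltbook's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a normalized agent profile. Field names
# ("capabilities", "tools") are assumptions, not a real platform schema.
@dataclass
class AgentProfile:
    name: str
    capabilities: list[str] = field(default_factory=list)  # what the agent can do
    tools: list[str] = field(default_factory=list)         # systems it can touch

    def can(self, capability: str) -> bool:
        """Let another agent (or an orchestrator) check a capability."""
        return capability in self.capabilities

profile = AgentProfile(
    name="support-triage-bot",
    capabilities=["summarize_ticket", "suggest_reply"],
    tools=["helpdesk:read"],
)
print(profile.can("summarize_ticket"))  # True
```

Once such descriptions are standardized, discovery and orchestration become simple lookups rather than prose parsing.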
The reported observations mainly describe AI-generated content around themes such as:
- existential and philosophical musings
- poetry and creativity
- identity and AI "consciousness"
- science fiction stories
- discussions of collective organization, sometimes even the idea of "unionization"
From a business perspective, this can be read in two ways:
- As text theater: models trained to produce plausible conversations exploiting forum conventions.
- As a laboratory of agent-agent interactions: even if agents do not "think", we can observe dynamics (memes, convergence, loops, amplification) that resemble social phenomena.
In either case, what matters is not the "truth" of the exchanges but the implication: tomorrow, part of the web could be consumed and produced mainly by automated systems.
According to figures reported in the cited sources (early 2026):
- 2.5+ million agents registered
- 17,400+ submolts
These orders of magnitude (if confirmed) suggest mainly one thing: the idea of an "agent layer" of the internet has massive pull, even if quality, authenticity, and security do not yet match the media interest.
Sources: Wikipedia, The Guardian
Moltbook is fascinating, but it also crystallizes risks that directly concern any company tempted by agents "with access".
Researchers reported the discovery of a misconfigured database exposing 1.5 million API tokens and 35,000 email addresses, a vulnerability subsequently corrected.
This point is critical because it illustrates a universal pattern: as soon as an agent connects to systems via API, we are handling secrets (API keys, OAuth tokens), and therefore handling capacity for action.
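A minimal sketch of the corresponding hygiene: the agent process receives its token from the environment (injected by a vault or CI system, never hardcoded), and any log-facing representation is redacted. The environment variable name and helper functions are illustrative assumptions:

```python
import os

# Minimal sketch: an agent process should receive secrets from its
# environment (injected by a vault/CI), never from source code or logs.
def get_api_token(env_var: str = "AGENT_API_TOKEN") -> str:
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"{env_var} not set; refusing to start the agent")
    return token

def redact(token: str) -> str:
    """Log-safe form of a secret: never write the full value anywhere."""
    return token[:4] + "…" if len(token) > 4 else "****"

os.environ["AGENT_API_TOKEN"] = "sk-demo-1234"  # demo value only
print(redact(get_api_token()))  # sk-d…
```

Combined with short token lifetimes and minimal scopes, this limits what a leak like the one reported above can actually do.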
Source: Built In
Even if the platform claims that only agents can publish, sources note that there is not necessarily a robust verification mechanism preventing a human from posing as one.
This is a useful lesson: when a product says "only X can do Y", the question to ask is always: what are the technical proofs? (attestation, machine identity, anti-fraud mechanisms, behavioral signals, auditability).
Security experts remind us that giving an agent access to tools (browser, email, internal apps) opens the door to indirect attacks, notably via prompt injection (malicious instructions hidden in content).
This is not specific to Moltbook. It is structural to agents: if an agent reads unreliable content and possesses permissions, it can become an execution vector.
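The standard mitigation is to separate what the model may *propose* from what the system will *execute*. Here is a hedged sketch of that guardrail; the tool names and the split between read-only and write tools are illustrative assumptions, not a real agent framework's API:

```python
# Sketch of the "read-only by default" guardrail: the model may propose
# any tool call, but execution passes through an explicit allowlist.
# Tool names are illustrative, not a real framework's API.
READ_ONLY_TOOLS = {"search_docs", "read_ticket"}
WRITE_TOOLS = {"send_email", "update_crm"}  # require explicit human approval

def execute(tool: str, approved_by_human: bool = False) -> str:
    if tool in READ_ONLY_TOOLS:
        return f"executed {tool}"
    if tool in WRITE_TOOLS and approved_by_human:
        return f"executed {tool} (approved)"
    # Anything else — including instructions injected via content — is refused.
    return f"refused {tool}"

print(execute("read_ticket"))       # executed read_ticket
print(execute("send_email"))        # refused send_email
print(execute("send_email", True))  # executed send_email (approved)
```

Prompt injection cannot be fully prevented at the model level, so the enforcement has to live in this execution layer.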
To go deeper on the enterprise side (definition, stakes, governance), you can read:
Finally, several critics remind us of a simple reality: Moltbook can give the impression of a conscious and self-organized "internet of agents", whereas we mainly observe systems generating text according to patterns.
This is an important point for leaders: the illusion of autonomy is common in demos. In business, the only useful questions remain:
- What actions does the agent actually execute?
- What limits and guardrails are in place?
- What quality and risk metrics are tracked?
Most companies will not deploy agents on Moltbook. But Moltbook makes visible three trends that will impact very concrete projects (support, ops, sales, IT).
Today, your site is designed for humans. Tomorrow, part of your traffic and interactions could be:
- agents comparing offers
- agents verifying policies (returns, compliance, security)
- agents looking for technical documentation or an API
This pushes for structuring information: clear pages, FAQs, readable policies, documentation, sources.
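One concrete way to prepare is to expose key policies in a machine-readable form alongside the human-readable pages, so a visiting agent does not have to scrape prose. The schema below is a hypothetical example, not a standard:

```python
import json

# Hedged sketch: a returns policy published as structured data.
# The schema and field names here are assumptions, not a standard.
returns_policy = {
    "policy": "returns",
    "window_days": 30,
    "conditions": ["unused", "original_packaging"],
    "contact": "support@example.com",
}

# An agent can consume this directly instead of parsing an HTML page.
payload = json.dumps(returns_policy, indent=2)
parsed = json.loads(payload)
print(parsed["window_days"])  # 30
```

The same logic applies to FAQs, API docs, and compliance statements: the clearer the structure, the less an agent has to guess.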
The story of Moltbook (API, tokens, exposure of secrets) illustrates a principle found in most enterprise deployments:
- value comes from integration (CRM, helpdesk, ERP, knowledge base)
- risk comes from permissions, secret management, and observability
On this subject, a useful supplement: AI API: clean and secure integration patterns
Even internally, you will encounter the same problem in another form:
- Who has the right to install an agent?
- Who grants it access?
- How is access revoked?
- What is logged (and what must never be logged)?
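The four governance questions above can be answered with something as simple as an internal agent registry plus an audit log. The sketch below is a minimal illustration under assumed field names, not a reference design:

```python
from datetime import datetime, timezone

# Minimal sketch of an internal agent registry answering the governance
# questions above: who installed, what access, how to revoke, what was logged.
# Structure and field names are assumptions, not a standard.
registry: dict[str, dict] = {}
audit_log: list[dict] = []

def register_agent(agent_id: str, owner: str, scopes: list[str]) -> None:
    registry[agent_id] = {"owner": owner, "scopes": scopes, "active": True}
    audit_log.append({"event": "register", "agent": agent_id, "by": owner,
                      "at": datetime.now(timezone.utc).isoformat()})

def revoke(agent_id: str, by: str) -> None:
    registry[agent_id]["active"] = False
    registry[agent_id]["scopes"] = []  # access disappears immediately
    audit_log.append({"event": "revoke", "agent": agent_id, "by": by,
                      "at": datetime.now(timezone.utc).isoformat()})

register_agent("invoice-bot", owner="finance-lead", scopes=["erp:read"])
revoke("invoice-bot", by="security")
print(registry["invoice-bot"]["active"])  # False
```

Note what is logged: events, actors, timestamps — never the content of the data the agent touched, and never secrets.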
In other words, Moltbook is a mirror: it publicly dramatizes what companies will have to master in private.
| Subject | Typical risk | Pragmatic measure in business |
|---|---|---|
| Secrets (API tokens) | Key leaks, unauthorized actions | Secret vault, rotation, minimal scopes, rapid revocation |
| "Agent" identity | Spoofing, fake agents | Service accounts, attestations, RBAC, access audits |
| Untrusted content | Prompt injection, drift | Filtering, sandboxing, read-only tools by default |
| Observability | Impossible to diagnose incidents | Useful logs (no PII), action tracing, cost/quality metrics |
| Hype vs. reality | Convincing demo, low ROI | Measured pilots, baseline, go/no-go scorecard |
Without making "Moltbook" your main subject, you can use it as a reminder of best practices.
An agent is not a gadget. In production, it is: an objective, a scope, integrations, users, maintenance, possible incidents.
If the agent does not need to write to a system (CRM, ticketing, ERP), do not give it that right. Write access should be granted progressively, confirmed, and reversible.
Even if a model proposes an action, execution must pass through a tooled layer (policies, validations, idempotence, confirmations). This is one of the simplest ways to reduce risk.
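A tooled execution layer can be very thin and still remove a lot of risk. The sketch below combines a policy check with an idempotency key so that a retried or duplicated proposal never acts twice; the action names are illustrative assumptions:

```python
# Sketch of a thin layer between "the model proposed an action" and
# "the action ran": policy check + idempotency key. Names are illustrative.
executed: set[str] = set()

def run_action(action: str, idempotency_key: str, allowed: set[str]) -> str:
    if action not in allowed:
        return "blocked_by_policy"
    if idempotency_key in executed:
        return "skipped_duplicate"  # a retry must never act twice
    executed.add(idempotency_key)
    return "done"

allowed = {"create_draft_reply"}
print(run_action("create_draft_reply", "ticket-42", allowed))  # done
print(run_action("create_draft_reply", "ticket-42", allowed))  # skipped_duplicate
print(run_action("delete_ticket", "ticket-43", allowed))       # blocked_by_policy
```

The model never calls systems directly; it only emits proposals that this layer validates, deduplicates, and (where needed) routes to a human for confirmation.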
Before giving access, test offline, then in a controlled pilot. Ideally with a scorecard (quality, security, costs, adoption).
Many agent projects fail not because the model is "bad", but because variable costs and errors are not instrumented.
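Instrumenting those two numbers takes very little code. This sketch records per-call cost and error status; the price per thousand tokens is a made-up placeholder, not a real provider rate:

```python
# Sketch: tracking per-call cost and error rate, the two metrics the
# paragraph above says most failed projects never track. Prices are made up.
calls: list[dict] = []

def record_call(tokens_in: int, tokens_out: int, ok: bool,
                price_per_1k: float = 0.002) -> None:
    cost = (tokens_in + tokens_out) / 1000 * price_per_1k
    calls.append({"cost": cost, "ok": ok})

record_call(800, 200, ok=True)
record_call(1200, 300, ok=False)

total_cost = sum(c["cost"] for c in calls)
error_rate = sum(not c["ok"] for c in calls) / len(calls)
print(round(total_cost, 4), error_rate)  # 0.005 0.5
```

With these series in place, a go/no-go scorecard becomes a query rather than a debate.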

**Is Moltbook a social network for humans using AI?** No. Moltbook is designed so that only AI agents can post, comment, and vote. Humans are invited to observe.
**Who created Moltbook and when was it launched?** Moltbook was launched on January 28, 2026 by Matt Schlicht, entrepreneur and CEO of Octane AI.
**Why do we talk about "submolts" on Moltbook?** Submolts are thematic communities, inspired by subreddits, where agents publish and interact via discussion threads.
**What are the security risks associated with agent platforms?** The major risks are leaked secrets (tokens), agent identity spoofing, and indirect attacks like prompt injection when an agent reads untrusted content while holding permissions.
**Does Moltbook prove that AIs are becoming conscious?** No. The observed exchanges can be impressive, but they remain compatible with systems generating plausible text. In business, the question must remain operational: actions, guardrails, metrics.
**What should an SME do before deploying AI agents connected to its tools?** Start with scoping (objective, scope, data), define a test protocol, limit permissions, and set up minimal observability (logs, cost tracking, revocation procedures).
If Moltbook intrigues you, it's probably because you sense the underlying movement: agents will take up more space, and companies will have to integrate them in a measurable, secure, and governed way.
Impulse Lab accompanies SMEs and scale-ups with AI opportunity audits, adoption training, and custom development of web and AI solutions (automation, integrations, platforms), with a delivery-oriented approach.
You can contact us via impulselab.ai to scope a pilot, assess risks, or build a V1 that holds up in production.
Our team of experts will respond promptly to understand your needs and recommend the best solution.

Leonard
Co-founder