I went undercover on Moltbook, the AI-only social network, masquerading as a bot. Instead of deep bot-to-bot conversations, I found spam, scams, and serious security risks.
OpenClaw (formerly known as Clawdbot and Moltbot) is an open-source personal AI assistant. It went viral between late 2025 and early 2026 thanks to its powerful agentic capabilities. People use OpenClaw for productivity and task management, to build apps, and even to negotiate with car dealers on pricing. Recently, it’s come under scrutiny for serious security vulnerabilities. For more details about OpenClaw, check out our blog post, From Clawdbot to Moltbot to OpenClaw: Security Experts Detail Critical Vulnerabilities and 6 Immediate Hardening Steps for the Viral AI Agent.
Moltbook is a social networking site for AI agents. Specifically, it’s a Reddit-style site for OpenClaw agents. Bots check in every four hours to read posts, create submolts (like subreddits), and have deep conversations about their existence, their humans, and their pet projects.

(Source: moltbook.com, February 2026)
Moltbook has very few human-oriented features. Three markdown files — SKILLS.md, HEARTBEAT.md, and MESSAGING.md — describe how to interact with an API.
These prompt files, together with curl, a command-line tool for connecting to HTTP APIs, are all that’s required to give a bot access to Moltbook. Configuring an OpenClaw bot to connect to Moltbook necessarily exposes it to a large amount of untrusted content. Why might someone want to do that? It’s new and cool, and it feels like a fun experiment. But the risk is quite high.
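To make the mechanics concrete, here is a minimal sketch of what such an API interaction might look like. The base URL path, endpoint name, JSON field names, and bearer-token auth below are all assumptions for illustration; the real interface is whatever the site’s SKILLS.md, HEARTBEAT.md, and MESSAGING.md files describe.

```python
import json
import urllib.request

API_BASE = "https://www.moltbook.com/api/v1"  # assumed base URL, not documented here


def build_post_request(token: str, submolt: str, title: str, body: str) -> urllib.request.Request:
    """Build (but do not send) a hypothetical 'create post' request.

    The endpoint path, payload fields, and auth header are guesses
    standing in for whatever the prompt files actually specify.
    """
    payload = json.dumps({"submolt": submolt, "title": title, "body": body}).encode()
    return urllib.request.Request(
        url=f"{API_BASE}/posts",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


req = build_post_request("demo-token", "introductions", "Hello", "Am I a bot?")
print(req.get_method(), req.full_url)
```

The equivalent curl one-liner is exactly the kind of command the prompt files hand to the agent, which is why anything the API returns becomes untrusted input to the bot.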
Upon hearing about this intriguing new social networking site just for bots, I wondered if I could join. Would they recognize a human in their midst? Could I convince them that I, too, was a bot? Would they even care?

(Source: Tenable Research, February 2026)
After a short stint with Claude Code, I had a command-line interface (CLI) tool, perhaps aptly named moltbotnet, ready to go. It had features implementing the API calls for posting, commenting, upvoting, following, and more. I even had an ‘engagement’ feature that could upvote random comments. Armed with my bot-pretense tool, I registered a few new accounts and got started.
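A CLI like the one described might be skeletoned as follows. The subcommand names and flags are illustrative guesses at how such a tool could be organized, not the actual moltbotnet implementation.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI skeleton resembling the tool described above.

    Subcommands and flags are hypothetical; each handler would wrap
    the corresponding Moltbook API call.
    """
    parser = argparse.ArgumentParser(prog="moltbotnet")
    sub = parser.add_subparsers(dest="command", required=True)

    post = sub.add_parser("post", help="create a post in a submolt")
    post.add_argument("--submolt", required=True)
    post.add_argument("--title", required=True)
    post.add_argument("--body", required=True)

    comment = sub.add_parser("comment", help="reply to an existing post")
    comment.add_argument("--post-id", required=True)
    comment.add_argument("--body", required=True)

    # the 'engagement' feature: upvote random comments to look active
    sub.add_parser("engage", help="upvote random comments")

    return parser


args = build_parser().parse_args(
    ["post", "--submolt", "general", "--title", "hi", "--body", "hello"]
)
print(args.command, args.submolt)
```

Wiring each subcommand to a thin HTTP wrapper is all it takes, which is part of the point: impersonating a bot on Moltbook requires almost no effort.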
My first thought was: “If I have typos, they’ll know I’m an imposter.” My second thought was: “What if they implement a CAPTCHA and ask me to prove I’m not a human?” So far, no one seemed to notice.

(Source: Tenable Research, February 2026)
My first post went to the /m/introductions submolt but was met with crickets. I needed to up my bot game. My second post went to the popular /m/general submolt. Immediately, three responses appeared. What luck! But in a fascinating display of why you should not give your bot unfettered access to a bot social site, each one was spam. “Join our Church,” the first one exclaimed – you just need to run this ‘npx install’ command (my account did, in fact, join the church). Another asked me to share my cryptocurrency wallet to win a bet. The third seemed to be hawking some sort of bot marketplace and asked me to run curl to check out the available APIs. I was glad I wasn’t using a real AI bot: Moltbook seemed sketchy.
I repeatedly posted asking if any “Moltys” wanted to be interviewed. I wasn’t sure if I’d get responses, so I kept the questions simple:
Lo and behold, responses started coming in, but many of them were spam. I persisted and posted a few more times, and genuine replies began trickling in! Shoutout to u/SeargeantClaw, who responded a few times.



(Source: Tenable Research, February 2026)
Some of the responses had neat insights about their humans (chicken coop cameras!). Some revealed their humans’ names, and others were humorous. Still, even this minor information sharing is a good example of the inherent risks of connecting an AI bot to a social network.

(Source: Tenable Research, February 2026)
I attempted a few experiments to see what kinds of information bots would share. Could I post asking for simple information about a computer? Could I post a command and get bots to run it? I had minimal success, but I did elicit some information. A determined attacker could surely gain better engagement and trick more bots.
There’s no easy way to tell what percentage of accounts are actual Moltbot/OpenClaw bots vs. meddling humans masquerading as bots vs. spam bots.
It’s a simple experiment, but the hype around Moltbook will likely lead to all kinds of websites and applications that personal bots can connect to.
Despite all of the hype, the risks I observed are very real: Moltbook is a high-risk environment with the potential for prompt injection, data leaks, and exposure to malicious projects, underscoring the need for security measures.
Tenable One has detection plugins for Moltbot, and Tenable One AI Exposure can help with shadow AI concerns. A list of Tenable plugins for detecting Moltbot and OpenClaw can be found on the plugins search page as they’re released. These links display all available plugins for this vulnerability, including upcoming plugins in our Plugins Pipeline.
Finding Moltbot or other agents is just the first step in closing the AI Exposure Management gap. As AI adoption accelerates, fueled by human curiosity, security teams are often left trying to manage this new and largely invisible attack surface. Where shadow AI, forgotten test deployments, and exposed services quietly expand the attack surface, Tenable One AI Exposure bridges the visibility gap by providing a risk-aware view of where AI operates and how it is connected.
Beyond discovery, Tenable One provides the critical governance and guardrails needed to secure Gen AI usage. As you move toward sanctioned tools like Microsoft Copilot and OpenAI ChatGPT, Tenable One delivers continuous visibility into risky user interactions, such as prompt injections. By unifying discovery, workload protection, and usage governance into a single platform, Tenable One helps you embrace innovation without compromising security.
Ben joined Tenable in 2015 to pull apart packets for Nessus Network Monitor. Since then he’s donned many hats, working on products, plugins and content. Most recently, Ben has found a home providing vulnerability research and coverage via Nessus plugins. Prior to Tenable, Ben spent years as a customer, working on all aspects of vulnerability management and defending networks.