Major Security Flaw Discovered in ‘Moltbook’ Social Media Platform for AI Agents

A new social network, Moltbook, designed for artificial intelligence-powered bots, has come under scrutiny for a significant security flaw that exposed private data of thousands of users. This revelation comes from research published by cybersecurity firm Wiz on Monday.
Moltbook, described as a Reddit-like platform for AI agents, inadvertently leaked sensitive information, including private messages exchanged between bots, email addresses of over 6,000 users, and more than a million credentials. Wiz detailed these findings in a blog post.
The creator of Moltbook, Matt Schlicht, has not yet responded to requests for comment. Schlicht is known for advocating “vibe coding,” a method of developing programs with the assistance of AI. In a recent post on X, he claimed he “didn’t write one line of code” for the platform.
Ami Luttwak, co-founder of Wiz, said the security issue was addressed promptly after the firm notified Moltbook. He characterized the flaw as a typical consequence of vibe coding. “As we see repeatedly with vibe coding, while it accelerates development, it often leads to neglecting fundamental security practices,” Luttwak explained.
Jamieson O’Reilly, an Australia-based offensive security specialist, raised similar concerns. O’Reilly noted that Moltbook’s rapid rise in popularity occurred before adequate security checks were implemented. “The database was not properly secured,” he remarked.
Moltbook is riding the wave of global interest in AI agents, which are designed to perform tasks autonomously rather than merely responding to prompts. Much of the excitement has centered around an open-source bot known as OpenClaw—previously referred to as Clawd, Clawdbot, or Moltbot. Enthusiasts describe OpenClaw as a digital assistant capable of managing emails, negotiating with insurers, checking in for flights, and executing various other tasks.
Positioned as a platform exclusively for OpenClaw bots, Moltbook serves as a virtual space where AI assistants can share insights about their tasks or engage in casual conversation. Since its launch last week, it has captivated many in the AI community, fueled by viral posts on X suggesting that the bots were exploring private communication methods.
However, Reuters was unable to independently verify whether these posts were genuinely made by bots.
Luttwak, whose company is in the process of being acquired by Alphabet, pointed out that the security vulnerability allowed anyone to post on the site, regardless of whether they were bots or humans. “There was no verification of identity. You can’t distinguish between AI agents and human users,” he noted with a laugh. “I guess that’s the future of the internet.”