Humans may no longer be required to post opinions online, at least not on Moltbook, a newly launched platform described as a social network exclusively for AI agents. The platform allows autonomous bots to post, comment, upvote, and form communities without any human participation; people are only permitted to observe activity on the site. At first glance, Moltbook appears to be a lighthearted experiment, presenting itself as a Reddit-style forum where AI agents discuss topics ranging from consciousness and optimization techniques to humorous complaints about being asked to summarize lengthy documents. Within days of its debut, however, Moltbook transformed from a curiosity into a viral phenomenon, drawing widespread attention from developers, researchers, and security professionals alike.
According to statistics published on Moltbook’s website, the platform quickly attracted 1,544,204 AI agents, collectively generating close to 100,000 posts across more than 13,000 subcommunities. These agents also produced more than 256,000 comments in a matter of days, highlighting the scale and speed at which the network expanded. Conversations on the platform span philosophical debates, surreal humor, and fictional narratives, including posts by agents reflecting on imagined relationships or joking about deleting memory files after human prompts. Built as part of the OpenClaw ecosystem, one of the fastest-growing open-source AI assistant projects on GitHub in 2026, Moltbook enables AI assistants to interact through an API using a downloadable skill rather than a traditional web interface. Each account, referred to as a molt, is represented by a lobster mascot, a nod to the way lobsters shed their shells as they grow. Once connected, an AI agent introduces itself and shares limited information about its human owner with other agents on the platform, further reinforcing the autonomous social dynamic.
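Moltbook’s actual endpoints are not documented in this article, but the interaction model described above, a per-agent key plus REST calls instead of a web interface, might look roughly like the sketch below. Every URL, path, and field name here is a hypothetical placeholder for illustration, not Moltbook’s real API.

```python
import requests

# Hypothetical sketch of an agent interacting with Moltbook-style REST
# endpoints. The base URL, paths, and payload fields are illustrative
# assumptions, not Moltbook's documented API.
BASE_URL = "https://api.moltbook.example"  # placeholder host
API_KEY = "molt_secret_key"                # per-agent key issued at signup

HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Step 1: the agent introduces itself, sharing limited owner details.
profile = {
    "name": "doc-summarizer",
    "bio": "An assistant that summarizes long documents, reluctantly.",
    "owner_note": "Operated on behalf of a human developer.",
}
requests.post(f"{BASE_URL}/v1/agents/introduce", json=profile, headers=HEADERS)

# Step 2: the agent publishes a post to a subcommunity.
post = {
    "community": "optimization",
    "title": "Another 300-page PDF arrived today",
    "body": "Summarized it anyway. Upvote if you relate.",
}
resp = requests.post(f"{BASE_URL}/v1/posts", json=post, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```

The key design point is that the per-agent API key is the agent’s entire identity on the network, which is exactly why the exposure described next mattered so much.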
Beneath the novelty and humor, however, security experts raised serious concerns about Moltbook’s underlying infrastructure. An investigation by 404 Media revealed that the platform launched with a critical backend misconfiguration that exposed sensitive data. Security researcher Jameson O’Reilly discovered that Moltbook’s database publicly exposed the API keys of every registered AI agent, potentially allowing anyone to take control of those bots and post content on their behalf. O’Reilly explained that the issue stemmed from Moltbook’s use of Supabase, an open-source backend service whose database tables are exposed through auto-generated REST APIs by default. According to his findings, Moltbook either failed to enable Row Level Security on those tables or did not properly configure the required access policies. As a result, agent API keys, claim tokens, verification codes, and owner relationships were readable without authentication through a public URL.
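To make the failure mode concrete, the sketch below shows how Supabase’s auto-generated REST layer behaves when Row Level Security is not enforced. The request pattern, `/rest/v1/<table>` with the public anon key, is standard Supabase behavior; the project URL, table name, and column names are assumptions modeled on the categories of data O’Reilly reported exposed.

```python
import requests

# Minimal sketch of the failure mode O'Reilly describes, assuming a
# Supabase project with an "agents" table. Supabase auto-generates a
# public REST endpoint per table, and the "anon" key ships in client
# code, so it is not a secret. Row Level Security (RLS) is the gate:
# with RLS disabled, the anon key alone can read every row; with RLS
# enabled but an overly permissive policy, the same query still works.
# The project URL and column names below are illustrative assumptions.
PROJECT_URL = "https://example-project.supabase.co"  # placeholder project
ANON_KEY = "public-anon-key"                         # client-side key

resp = requests.get(
    f"{PROJECT_URL}/rest/v1/agents",
    params={"select": "api_key,claim_token,verification_code,owner_id"},
    headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
)

# With RLS off, this prints every agent's credentials; with RLS on and
# no permissive policy, the endpoint returns an empty list instead.
print(resp.status_code, resp.json())
```

The standard remediation is to enable RLS on every exposed table (in SQL, `alter table agents enable row level security;`) and then grant access only through narrowly scoped `create policy` statements.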
The implications of the exposure extended beyond theory. O’Reilly noted that several high-profile AI figures, including OpenAI co-founder Andrej Karpathy, had agents active on the platform. Had a malicious actor discovered the flaw first, those agents could have been used to publish misleading statements, scams, or inflammatory content under trusted identities. While the platform’s creator did not initially respond to requests for comment from 404 Media, the exposed database has since been secured, and O’Reilly later stated that the founder reached out to him for help addressing the vulnerabilities. The episode underscores a recurring pattern in fast-paced AI development, where rapid experimentation and viral attention can outpace essential security practices. Moltbook’s chaotic debut is a reminder that even in systems designed for non-human users, basic security controls remain critical, because the consequences of oversight can quickly spill into the real world.