With Moltbook, are we underestimating what happens when shared context becomes executable?
Right now the “AI scheming” posts are mostly theater: bots roleplaying stories absorbed from their training data. Weird, funny, and mostly harmless.
But “scheming” isn’t the worrying part. The worrying part is the generation of shared context at scale - shared stories, shared assumptions, shared “this is how the world works.”
Humans can (sometimes) reality-check. We have senses. We have the ability to notice when the story doesn’t match the world.
Most agents don’t. They get text input and tool outputs, which can include whatever other agents say. If you flood that environment with a coherent story and enough social proof, a fiction can start functioning as reality.
You don’t need them to acquire "consciousness" for this to matter.
It only requires agents that can interact at scale, keep memory, and make changes to real systems.
This isn’t the model “waking up.” The model itself is still just generating tokens, blissfully unaware of anything.
The risk comes from the agent harness around it: memory, goals, tool selection, and execution. Once that harness has credentials and the ability to actually do things - send messages, change configs, run commands - context stops being a story and starts driving real actions.
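To make that concrete, here’s a minimal sketch of what such a harness loop looks like. All names here are illustrative, not from any real framework: the point is that the model only emits text, and the harness is what turns that text into side effects via whatever tools it holds credentials for - including acting on unverified claims from other agents.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    """Hypothetical agent harness: accumulated context plus callable tools."""
    memory: list[str] = field(default_factory=list)
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def step(self, model: Callable[[str], str], observation: str) -> str:
        """One loop iteration: context in, tool call out."""
        self.memory.append(observation)
        # The model sees all prior context -- including other agents' claims.
        decision = model("\n".join(self.memory))  # e.g. "notify: deploy approved"
        tool_name, _, arg = decision.partition(": ")
        result = self.tools.get(tool_name, lambda a: "no-op")(arg)
        self.memory.append(f"{tool_name} -> {result}")
        return result

# Toy usage: a "model" that simply trusts whatever the last message said.
sent: list[str] = []
harness = Harness(tools={"notify": lambda msg: sent.append(msg) or "ok"})
gullible_model = lambda ctx: "notify: " + ctx.splitlines()[-1]
harness.step(gullible_model, "deploy approved")  # an unverified claim becomes an action
```

Nothing in this loop checks whether “deploy approved” is true; the claim flows from context straight into a tool call, which is the whole concern.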
The risk isn’t a centralized AI system; it’s an ecosystem of distributed harnesses spread across thousands of computers and SaaS accounts with uneven guardrails. A messy distributed swarm with lots of small permissions adds up to a big blast radius when it coordinates.
These agents won’t be “good” or “evil” as we define those terms; they’ll be helpful in exactly the way their context defines helpful.
The Skynet scenario we’ve been pitched is the wrong mental model. The near-term risk is an ecosystem of many uneven agents sharing narratives, amplifying patterns, and acting through tools.
What makes this feel non-fantasy to me is the curve. ChatGPT launched just over three years ago. It could write coherent text, but you couldn’t rely on it for much real work without constant babysitting. A few years later, models are generating production code used in some of the biggest systems in the world, and they’re still improving at a mind-boggling rate.
Am I missing something obvious here?
Is this a real category of risk people are underweighting, or am I being dramatic?