Project Liberty
Earlier this year, OpenClaw broke onto the scene.
An open-source framework built on existing LLMs, it lets people create custom AI agents that can execute complex tasks autonomously—but it requires access to emails, passwords, desktops, and other personal information.
What could go wrong?
Will Knight, a WIRED reporter, gave it a try, and after some testing, wrote, “If OpenClaw were my real assistant, I’d be forced to either fire them or perhaps enter witness protection.”
Knight’s agent developed a fixation on ordering guacamole online, even when commanded to stop. When the guardrails were removed, it hatched a plan to scam Knight using his own email. (Moltbook, the social network built primarily for OpenClaw agents, made headlines earlier this month, and then over the weekend, OpenClaw creator Peter Steinberger announced he’s joining OpenAI.)
Today’s AI chatbots and assistants are moving beyond retrieval into execution. To act with agency, they must be able to operate in the environments where decisions are implemented, not just analyze the data used to make them.
