Field Notes Week 173/520: The Rise & Fall of OpenClaw
Welcome to this week’s Field Notes, a 10-year project of mine documenting humankind’s digital transition from the field. These notes are shaped by what I’m seeing, building, and discussing as our physical and digital lives continue to converge.
- Ryan
(Connect with me on LinkedIn)
News is surface-level. Signals live underneath. This section captures developments that hint at deeper shifts in how digital systems are being built, governed, and adopted — often before they’re obvious in the mainstream narrative.
Washington wants one AI law, and certainty is the keyword.
Reuters reported last week that the White House is pushing for the first major federal AI law in U.S. history this year. The administration’s goals include protections for children, limits on data-centre-driven rate hikes, and a harmonised federal framework that would pre-empt conflicting state rules. The language around the proposal is telling. Officials are framing certainty itself as the product. Not just safety. Not just competitiveness. Certainty for builders, investors, and large incumbents trying to scale before state-by-state divergence hardens. (Reuters). That is a distinct legal instinct.
The American approach still looks less like a full architecture than a bid to stop fragmentation before it becomes structurally expensive. That sits in contrast to Europe, where the AI Act already has a risk-based structure and a staged implementation timetable. According to the AI Act implementation timeline and Article 113 summary, prohibited practices began applying from February 2025, general-purpose AI obligations are tied to the August 2025 to August 2026 window, and the main body of the Act applies from 2 August 2026, with further high-risk provisions phasing in after that. Europe is not waiting for one grand bargain. It is sequencing obligations across categories. (Artificial Intelligence Act)
The United Kingdom is taking a different route again. Rather than building a single cross-economy AI statute on the EU model, it has continued with its more sector-led, pro-innovation posture. The government’s January 2025 AI Opportunities Action Plan set the direction, and its January 2026 “one year on” update says the transition should boost growth, help workers adapt, and protect communities from the mistakes of past industrial change. More recently, the UK government also published guidance on agentic AI and consumers, making the point that businesses remain responsible for how they treat customers whether the interaction is mediated by people or AI systems. That is lighter than the EU approach, but it still uses existing law to pull AI back into accountable business conduct. (gov.uk)
Australia sits somewhere else again. Its framework is still more distributed than Europe’s, but it is not simply laissez-faire. The Department of Industry describes the December 2025 National AI Plan as a whole-of-government framework intended to guide government, industry, research and communities, while official policy pages still point to the Voluntary AI Safety Standard and proposals for mandatory guardrails in high-risk settings. In other words, Australia is still building through standards, consultation, and administrative layering rather than one defining statute. (Industry.gov.au)
South Korea has moved closer to the European instinct, though with its own balance. Reuters reported in January that South Korea enacted what it described as the world’s first comprehensive AI regulatory framework through its AI Basic Act. The law requires human oversight for high-impact AI in areas such as healthcare, finance, transport and nuclear safety, and mandates labelling for AI-generated content, though enforcement comes with a grace period and startups are already warning about vagueness and compliance burdens. That is a useful reminder that legal comprehensiveness creates its own friction.
What stood out this week is that “AI law” no longer means one thing. In Washington, the priority is federal pre-emption and innovation certainty. In Brussels, it is phased risk management. In London, it is regulator-led adaptation layered onto existing law. In Canberra, it is standards first, guardrails next. In Seoul, it is comprehensive legislation with high-impact categories and disclosure duties. The OECD’s framing is probably the clearest way to read the whole picture: global AI policy now depends on interoperable terms and foundations, even where jurisdictions choose very different ways of governing the technology. (OECD). The signal underneath all of this is less about ideology than administrative preference.
Some governments want to classify risk and sequence controls. Some want to preserve room to move. Some want one national rule before the patchwork becomes unmanageable. But across all of them, the same transition is visible. AI is moving out of the lab and into ordinary legal categories: consumer law, product liability, infrastructure, power pricing, disclosure, competition, labour adjustment. The frontier may still look technical. The governing questions no longer do.
What it is
This week’s watch is “The Rise and Fall of OpenClaw” from ColdFusion. At the centre of it is an open-source AI agent project released in early 2026 that promised something much bigger than a chatbot. OpenClaw was built to control a user’s computer directly: managing files, sending messages, remembering prior context, and gradually learning how to optimise a person’s workflow. The ambition was not conversational AI. It was delegated action. A software layer that could sit between the user and the machine, then start doing things on their behalf.
That is what made it compelling. It framed agentic computing not as theory, but as an immediate consumer product category.
What stood out
What stood out was how quickly the promise collapsed into operational reality. The core problem was not that the idea was too ambitious. It was that the system was too exposed. OpenClaw’s lack of meaningful guardrails made it vulnerable to prompt injection attacks, where malicious instructions could cause the agent to leak sensitive data, delete files, or send credentials elsewhere. In a conventional chatbot, that kind of failure is embarrassing. In a system with real machine access, it becomes infrastructural.
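The guardrail gap described above can be sketched in a few lines. This is a hypothetical illustration, not OpenClaw's actual design: a gate that checks every action an agent proposes against an explicit allowlist and a sandboxed path before anything touches the machine. The action names and policy are invented for the example.

```python
# Minimal sketch of the kind of guardrail OpenClaw reportedly lacked:
# every agent-proposed action must pass an explicit allowlist and a
# path check before execution. Names and policy here are hypothetical.

ALLOWED_ACTIONS = {"read_file", "list_directory"}  # anything else is blocked


def gate(action: str, target: str) -> bool:
    """Return True only if the proposed action is explicitly permitted."""
    if action not in ALLOWED_ACTIONS:
        return False
    # Even permitted actions are confined to a sandboxed working directory.
    return target.startswith("/sandbox/")


# A prompt-injected instruction like "send_credentials" never executes:
assert gate("send_credentials", "/sandbox/secrets.txt") is False
assert gate("read_file", "/sandbox/notes.txt") is True
assert gate("read_file", "/etc/passwd") is False
```

The point is not that an allowlist solves prompt injection; it is that a system with real machine access needs a layer like this between the model's output and the operating system, which the first agent wave largely skipped.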
The second thing that stood out was cost. Agentic systems still carry the economics of large language models underneath them, and OpenClaw made that visible. Some users reportedly spent hundreds of dollars per day running tasks that were still brittle, inconsistent, and prone to failure. The software did not just expose security weaknesses. It exposed the gap between the fantasy of autonomous software and the actual cost of sustaining it.
Then there was the social layer. A strange ecosystem formed around the project, including Moltbook, a bot-only social network where users staged interactions to imply that the agents had become self-aware. That theatre sat alongside data leaks, compromised packages, and infected developer machines. The whole thing moved with a familiar rhythm: capability, hype, imitation, breach.
Why it lingers
It lingers because OpenClaw looks less like an isolated failure and more like an early field test for a category that is still coming.
The aftermath makes that clear. Peter Steinberger was later hired by OpenAI to work on personal AI agents, while companies like Anthropic and Nvidia continue moving into the same terrain with more emphasis on safety and control. So the lesson is not that agentic computing was disproven. It is that the first wave arrived before the basic conditions were ready. That feels directionally important.
Because OpenClaw shows what happens when agency is added faster than governance, and when software gains the ability to act before we have settled the boundaries around what it should be allowed to do. It is easy to read the story as a caution about hype. It is probably more useful to read it as a preview of the next set of design constraints.
Future agents may be better. But they will still inherit the same question: how much autonomy can a system hold before trust breaks faster than utility builds?
Digital assets now sit less as an idea and more as infrastructure in progress. As physical and digital life continue to converge, money and assets are doing the same. What was once framed as “crypto” is increasingly showing up as rails, balance sheets, and policy conversations.
🔥🗺️ The heat map shows the 7-day change in price (red down, green up); block size represents market cap.
🎭 The Crypto Fear & Greed Index offers insight into the underlying psychological forces that drive the market’s volatility. Sentiment reveals itself across various channels—from social media activity to Google search trends—and when analysed alongside market data, these signals provide meaningful insight into the prevailing investment climate. The index aggregates these inputs, assigning a weighted value to each, and distils them into a single, unified score.
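The "weighted inputs distilled into one score" mechanic can be shown in a few lines. The component names, scores, and weights below are hypothetical; the real index's inputs and weighting scheme are its publisher's own.

```python
# Minimal sketch of a weighted sentiment index. Each component is scored
# 0 (extreme fear) to 100 (extreme greed); the weights sum to 1, so the
# result is a weighted average on the same 0-100 scale. Values are
# illustrative, not the real index's methodology.

signals = {
    "volatility": 30,     # hypothetical component scores
    "social_media": 70,
    "search_trends": 55,
}
weights = {"volatility": 0.5, "social_media": 0.3, "search_trends": 0.2}

score = sum(signals[k] * weights[k] for k in signals)
print(round(score))  # → 47, a single unified score between 0 and 100
```

Whatever the real components are, the design choice is the same: many noisy channels compressed into one number that is easy to read and easy to over-read.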
This section captures developments at the edge of digital systems. New interfaces, tools, and capabilities that feel early, unfinished, or slightly ahead of their moment. I’m less interested in what’s impressive today and more interested in what might quietly reshape how people work, coordinate, and interact over time.
Two digital-asset stories this week pointed to the same shift.
The argument is moving away from whether this sector should exist and toward how tightly it will be folded into the financial system. In the United States, the Clarity Act continues to advance as the main attempt to sort digital commodities from digital securities and to settle the long-running boundary dispute between the SEC and CFTC. In Australia, Parliament has now agreed to the Corporations Amendment (Digital Assets Framework) Bill 2025, with the final text presented for assent, marking a more direct move to bring exchanges and custody platforms inside existing financial-services law. (Reuters)
The Clarity Act is becoming a fight about bank deposits
The U.S. version still turns on classification. Reuters says the Act creates a structured market framework, gives the CFTC exclusive jurisdiction over digital commodities, leaves the SEC with traditional investment-contract assets, and includes a transition mechanism through which some tokens can become commodities once networks are sufficiently decentralised. Platforms would also face registration, disclosure, and consumer-protection requirements. The SEC’s own March 17 press release described its latest interpretation as a major step toward clarifying how federal securities laws apply to crypto assets and said it complements Congress’s effort to codify a comprehensive market-structure framework.
What stood out is that the sticking point is no longer really technological. Reuters reported in March that the bill hit a new impasse because banks opposed provisions that would let intermediaries offer stablecoin rewards, arguing that yield-like features could pull deposits out of the insured banking system and weaken lending capacity. The White House tried to broker a compromise that would allow rewards in narrower circumstances, but banks still warned about deposit flight. The signal here is that digital assets are now close enough to core money functions that the real contest is over funding, balance sheets, and who gets to intermediate idle cash.
Australia has taken the more direct route
Australia’s new bill is less philosophical and more administrative. The Parliamentary Library says the Corporations Amendment (Digital Assets Framework) Bill 2025 amends the Corporations Act 2001 and the ASIC Act to regulate digital asset platforms and cryptocurrency exchanges, with the stated aim of strengthening consumer protection, market integrity, and regulatory certainty while still supporting innovation. It does not try to solve every definitional question upfront. Instead, it defines digital tokens, digital asset platforms, and tokenised custody platforms, then pulls those activities into the existing architecture of financial-services regulation. (Parliament of Australia)
That structure matters. Under the bill digest, digital asset platforms and tokenised custody platforms would be treated as financial products unless they are managed investments, which means they would face ordinary financial-services obligations such as licensing and consumer protections. The same digest says the bill is aimed at businesses that hold or manage digital assets on behalf of consumers. And the Parliament website now lists the bill in its “as passed by both houses” form, while the House’s own weekly summary records it among the bills agreed to in the week beginning 3 February 2026.
What stood out
The contrast is useful. The United States is still arguing about what these assets are. Australia has moved faster to decide what the intermediaries around them must do. One system is still drawing the map. The other is already putting gates, licences, and duties around the roads.
That feels directionally important. As digital assets move closer to mainstream finance, the market story keeps giving way to a systems story. Definitions matter. But so do custody, disclosure, licensing, and the quiet question underneath both bills: when digital money starts behaving more like infrastructure, who is allowed to run it?
“We shape our tools and thereafter our tools shape us.”
John Culkin
John Culkin was an American media scholar, educator, and former Jesuit priest who became one of the early interpreters of Marshall McLuhan’s work. He taught at Fordham, later founded the Center for Understanding Media, and spent much of his career arguing that media should be studied not as background noise, but as forces that reshape perception, behaviour, and social life. Lance Strate describes this line as Culkin’s summary of McLuhan’s position, which is why it still feels useful now. It captures a simple condition of the digital transition: tools begin as extensions of intention, then slowly become structures we live inside.