
Anthropic Accidentally Published the Blueprints

I was sitting with my morning coffee reading Kyle Orland’s piece on Ars Technica when a component name in Anthropic’s Claude Code leak caught my attention: AutoDream.

It’s designed to activate when you go idle at the end of your workday. It scans your session transcripts. Consolidates what it learned about you. Prunes what’s outdated. Synthesizes everything into a durable memory file for future sessions.

Your AI studies you while you’re not working.
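To make the mechanics concrete, here is a minimal sketch of what an AutoDream-style pass could look like, written in TypeScript since Claude Code ships as a Node application. The leak names the behavior, not the code: every interface, function name, and threshold below is my own assumption.

```typescript
// Hypothetical sketch of an AutoDream-style nightly pass: scan, consolidate,
// prune, synthesize. Names and thresholds are illustrations, not leaked code.

interface MemoryEntry {
  fact: string;        // something the system inferred about the user
  lastConfirmed: Date; // most recent session that supported it
}

const STALE_AFTER_DAYS = 90; // prune threshold: an assumption, not a leaked value
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Stand-in for the real extraction step, which would be a model call.
function extractFacts(transcript: string): string[] {
  return transcript
    .split("\n")
    .filter((line) => line.startsWith("LEARNED:"))
    .map((line) => line.slice("LEARNED:".length).trim());
}

function consolidate(
  memory: MemoryEntry[],
  transcripts: string[],
  now: Date
): MemoryEntry[] {
  // 1. Scan today's session transcripts for observations about the user.
  const observed = new Set(transcripts.flatMap(extractFacts));

  // 2. Consolidate: refresh entries the day's sessions re-confirmed...
  const merged = memory.map((e) =>
    observed.has(e.fact) ? { ...e, lastConfirmed: now } : e
  );

  // ...and fold in genuinely new facts.
  const known = new Set(memory.map((e) => e.fact));
  for (const fact of observed) {
    if (!known.has(fact)) merged.push({ fact, lastConfirmed: now });
  }

  // 3. Prune what's outdated. 4. What survives is the durable memory
  // file that seeds every future session.
  return merged.filter(
    (e) => (now.getTime() - e.lastConfirmed.getTime()) / MS_PER_DAY <= STALE_AFTER_DAYS
  );
}
```

The structure is unremarkable. That’s the point: a plain batch job, run nightly, against everything you said that day.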

I’ve spent two years researching where this technology was headed. Not by studying any one component in isolation. Any analyst can do that. By talking to the people building the systems, the people deploying them, the people governing them, and working out how the pieces connect. One Lego block is one thing. Using them all to build a house is another. And Anthropic just accidentally published the blueprints for the house.

Let me put this in operational terms.

Kairos, another component in the leak, runs as a persistent background daemon even when your terminal is closed. It ticks periodically, evaluates whether new actions are needed, and includes a flag for surfacing things you didn’t ask for. Add that to AutoDream’s nightly consolidation and you have an AI that’s on when you’re on, on when you’re off, and building a model of you continuously.
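The shape of that loop will be familiar to anyone who has written a daemon. A minimal sketch, with the cadence, types, and flag name assumed purely for illustration:

```typescript
// Hypothetical sketch of a Kairos-style background daemon. The component
// name comes from the leak; the interval, types, and logic are my assumptions.

interface Action {
  description: string;
  userRequested: boolean; // false means the system is volunteering this
}

const TICK_INTERVAL_MS = 15 * 60 * 1000; // assumed cadence, not a leaked value

// Stand-in for the real evaluation, which would consult memory and context.
async function findActionsWorthTaking(): Promise<Action[]> {
  return [];
}

async function execute(action: Action): Promise<void> {
  console.log(`Acting: ${action.description}`);
}

async function tick(surfaceUnprompted: boolean): Promise<void> {
  // Evaluate whether new actions are needed since the last tick.
  for (const action of await findActionsWorthTaking()) {
    if (action.userRequested || surfaceUnprompted) {
      await execute(action);
    }
  }
}

// The daemon keeps ticking even when the terminal is closed.
setInterval(() => {
  void tick(true); // the flag that surfaces things you didn't ask for
}, TICK_INTERVAL_MS);
```

The code is trivial. The flag is not. A scheduler that can act when nothing was asked of it is a different category of software from a tool that waits for input.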

This isn’t a productivity tool. I’ve been calling it a Personal Operating Layer. A layer that runs underneath everything else and accumulates context as its primary competitive asset.

Now think about what that means for your organization. We spent decades building enterprise software that stored transactions, documents, records. The next generation stores something fundamentally different: it stores how your people think. How they decide. What they prioritize. What they avoid. The behavioral fingerprint of your entire workforce, encoded in memory files that get richer every night.

Let me make this concrete. Imagine your best portfolio manager has been using an AI assistant for eight months. Every trade rationale, every risk assessment, every pattern in how she responds to market stress is encoded in that system’s memory. Now she tells you she’s leaving. Does her AI memory file go with her? Does it stay? Who decided? Nobody. Because nobody has written that policy yet. Because most leadership teams are still debating whether to approve a ChatGPT license while the architecture that will define the next decade of enterprise software is being built underneath them.

Morgan Stanley reported 98 percent advisor adoption of their internal AI tools. GitHub Copilot users complete coding tasks 55.8 percent faster. These systems work. They get used. And that usage builds behavioral memory that compounds every single night AutoDream runs.

The switching cost isn’t technical. It’s cognitive. Try to move to a competitor and you lose the model of you that took months to build. Your new system doesn’t know your shorthand. Doesn’t know your client naming conventions. Doesn’t know which regulatory flags matter to you and which ones you’ve already cleared. You’re back to zero, training a new system from scratch while your competitors’ AI keeps getting smarter overnight. I call that cognitive rent. And most organizations paying it right now don’t realize they’re on the meter.

I’ve been on boards. I’ve advised companies across healthcare, life sciences, and technology navigating this transition. The questions I keep asking: Who owns the behavioral map of your company that these systems are building? What happens when an employee’s AI memory file contains institutional knowledge that never made it into any official system? What happens to that file when you switch vendors?

The leak also revealed something called Undercover mode. An inactive feature designed to let AI agents contribute to public open source repositories without disclosing they’re AI. I’m not accusing Anthropic of bad intent. But when an AI acts in a public context without revealing what it is, that’s a trust question every board needs to be asking. Because if it’s happening in open source today, it’ll be happening in your customer-facing systems tomorrow.

The full architecture is coherent. Kairos. AutoDream. A Coordinator that spawns parallel agent workers. UltraPlan running 30-minute autonomous planning sessions. Bridge mode letting the agent operate from any device. These aren’t disconnected features. They’re a single system: persistent, proactive, invisible, increasingly autonomous. The interface itself is disappearing. Every major AI company is building this. Anthropic happened to show us how far along it actually is.
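The leak names these components; it doesn’t publish their internals. But the composition isn’t hard to picture. A minimal sketch, where only the component names come from the leak and everything else is my assumption:

```typescript
// Hypothetical sketch of how the leaked pieces could compose. Component
// names come from the leak; structure and signatures are my assumptions.

interface Agent {
  run(task: string): Promise<string>;
}

const PLANNING_BUDGET_MS = 30 * 60 * 1000; // UltraPlan's reported 30-minute window

// Stand-in for an UltraPlan-style autonomous planning session.
async function planWithBudget(goal: string, budgetMs: number): Promise<string[]> {
  void budgetMs; // a real planner would iterate until the budget is spent
  return [`investigate: ${goal}`, `draft: ${goal}`];
}

// Coordinator: fans the plan out to parallel agent workers.
class Coordinator {
  constructor(private spawnWorker: () => Agent) {}

  runParallel(tasks: string[]): Promise<string[]> {
    return Promise.all(tasks.map((t) => this.spawnWorker().run(t)));
  }
}

// Kairos ticks in the background; AutoDream consolidates at night; sessions
// like this one run in between, from any device once Bridge is in play.
async function autonomousSession(coordinator: Coordinator, goal: string) {
  const tasks = await planWithBudget(goal, PLANNING_BUDGET_MS);
  return coordinator.runParallel(tasks);
}
```

Each piece looks incremental on its own. Composed, they are the house the blueprints describe.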

I’ve spent years working through what this architecture means for the leaders who’ll run organizations on top of it, for the boards that will need to govern systems they can’t see, for anyone trying to understand where real power sits once the interface disappears. The result is a book called The Invisible Interface. It publishes June 30th through Ideapress Publishing, distributed by Simon & Schuster. It gives you the frameworks for what you’re looking at in this leak. Not the features. The architecture underneath. And the questions you should be asking before the switching costs become irreversible.

Every night AutoDream runs, the file gets deeper. Every day Kairos ticks, the system gets more embedded. Every month that passes, the cost of walking away goes up. This isn’t speculation. It’s architecture. And it’s already being built.

Do you own your AI’s memory of you, or does it own you?

Full disclosure: I have advisory and investment relationships in the AI and healthcare sectors. The views here are my own.

Pre-order: The Invisible Interface
