OpenAI Cut $800 Billion and Hired One Person. The Second Decision Matters More.

I’ve been turning this over in my head for the last week, and I think the market is reading two big OpenAI stories backwards.

On February 20, CNBC reported that OpenAI slashed its compute spending target from $1.4 trillion to $600 billion by 2030. Every financial desk recalculated its AI infrastructure models. The consensus read: fiscal discipline.

But days earlier, something happened that I think matters a lot more. OpenAI hired Peter Steinberger – the solo developer behind OpenClaw, the fastest-growing open-source AI agent in history – to, in Sam Altman’s words, “drive the next generation of personal agents.” Altman called him a genius and said the work would “quickly become core to our product offerings.”

One story got the headlines. I believe the other one will reshape the industry. Let me explain why.

Compute is necessary. It’s not where value settles

Let me be clear: Altman’s compute thesis isn’t wrong. More compute drove GPT-5 to solve 75% of real-world software engineering problems, up from 30-50% for GPT-4 two years earlier. Context windows expanded from 4,000 tokens to over a million. OpenAI generated $13.1 billion in revenue in 2025, is projecting $280 billion by 2030, and is raising a $100 billion round at a $730 billion pre-money valuation. Those are real numbers backed by real capability.

But here’s what I keep coming back to. Steinberger built the most popular consumer AI agent on the planet – 150,000 GitHub stars, 1.5 million agents created in three months – and he did it by calling “someone else’s model” through an API. The engine didn’t capture the relationship. The layer above it did.

And here’s the part that I think most people are missing: a real personal agent – one that’s always on, absorbing your context, monitoring your systems, ready to act before you ask – doesn’t reduce the need for compute. It transforms it. The compute shifts from powering smarter answers to powering ambient, always-on orchestration for hundreds of millions of people simultaneously. Altman may be right about the scale of investment required. The question is whether he’s building for the right workload.

I’ve spent the last year writing a book about this layer. I call it the Personal Operating Layer, or POL. It’s the software layer that sits between what you intend and what gets done. And I believe it’s the strategic high ground of this entire era.

What I mean by Personal Operating Layer

A POL isn’t an app. It isn’t a chatbot. It isn’t a better search engine.

It’s the layer between your intention and execution. It knows your context – preferences, history, constraints. It reasons across that context. It acts across your systems – calendar, email, documents, services. And it keeps you in control: visibility, approvals, a real kill switch.

I describe it in the book as a chief of staff that already has the context it needs before you walk into the room. It doesn’t answer questions. It orchestrates outcomes.

I use a simple smell test to evaluate whether something qualifies. Can it remember you across sessions? Can it act across multiple systems? Can it show you what it did and why? Can you stop it instantly? Most products in 2026 pass one or two of those. OpenClaw, at its best, was beginning to pass all four.
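To make that smell test concrete, here’s a minimal sketch of the four properties as an interface contract. Everything here is hypothetical – it isn’t OpenClaw’s architecture or anyone’s shipping API, just the shape of what a POL would have to expose.

```python
# Hypothetical sketch of the four-question smell test as an interface.
# These names are illustrative, not any vendor's actual API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime


@dataclass
class ActionRecord:
    """One thing the agent did: when, where, what, and why."""
    timestamp: datetime
    system: str       # e.g. "calendar", "email"
    action: str       # e.g. "declined meeting with vendor X"
    rationale: str    # the 'why', in plain language


class PersonalOperatingLayer(ABC):
    @abstractmethod
    def recall(self, user_id: str) -> dict:
        """1. Remember you across sessions: durable context, not chat history."""

    @abstractmethod
    def act(self, user_id: str, intent: str, systems: list[str]) -> list[ActionRecord]:
        """2. Act across multiple systems to turn one intention into an outcome."""

    @abstractmethod
    def audit_log(self, user_id: str) -> list[ActionRecord]:
        """3. Show what it did and why."""

    @abstractmethod
    def kill(self, user_id: str) -> None:
        """4. Stop instantly: revoke credentials, halt anything in flight."""
```

Note that the least glamorous method, kill, is what makes the other three safe to rely on. Visibility without a stop button is just a dashboard for watching things go wrong.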

OpenClaw was messy and insecure – CrowdStrike published a full threat analysis, Kaspersky found 512 vulnerabilities – but it was real. People were using it for actual daily tasks through WhatsApp and iMessage: clearing inboxes, booking flights, managing calendars, coordinating across apps.

And now the person who built it works at OpenAI.

Why this hire matters more than the headlines suggest

OpenAI has 900 million weekly ChatGPT users, massive compute infrastructure, and capital markets access no AI company has ever had. What it didn’t have was a proven architect for the layer that sits between the model and the user’s actual life.

Steinberger built that layer. One developer. Open source. Meta courted him. Satya Nadella called directly. He chose OpenAI because they agreed to keep OpenClaw open-source through an independent foundation – his non-negotiable.

The irony is worth noting. OpenClaw was one of the biggest drivers of API revenue to Anthropic, since most users ran it on Claude. Anthropic’s trademark enforcement over the original “Clawdbot” name may have been the catalyst that pushed Steinberger toward its largest competitor.

But the bigger point is this: that hire represents a thesis, whether OpenAI has fully articulated it or not. The model layer is necessary but insufficient. The value capture happens in the orchestration layer above it.

The question nobody is asking yet

I think the architecture decisions being made right now – not just at OpenAI, but at every company whose business depends on customer relationships – will determine who controls the most consequential layer in the AI stack. And I see three possible outcomes:

The first is that individuals control their own POL. That’s the original OpenClaw vision – you run your own agent, it selects models dynamically, you own the context and the memory. The problem is that most people can’t or won’t do this, for the same reason most people don’t run their own email server. Steinberger himself acknowledged as much when he chose to join a company rather than scale OpenClaw independently.

The second is that a model company controls the POL. This is where OpenAI appears to be heading – a personal agent inside the ChatGPT ecosystem. 900 million users, seamless experience, model and memory and actions all in one place. The revenue model may shift from $20/month subscriptions to a percentage of every transaction the agent executes on your behalf. That could justify the $280 billion projection. But it also creates extraordinary lock-in. After two years of an agent managing your life, the switching costs are nearly infinite. And the structural conflict of interest – whose interest does your agent serve when it has revenue-sharing agreements with the vendors it recommends? – is something we don’t have a governance framework for.
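To see what that transaction thesis implies, here’s a purely illustrative back-of-envelope. The only number taken from this piece is the 900 million user count; the adoption rate and take rate are my assumptions, not anything OpenAI has said.

```python
# Illustrative only: what would a take-rate model need to look like
# to support a $280B revenue figure? All inputs below are assumptions
# except the user count, which is cited in the article.
users = 900_000_000      # weekly ChatGPT users (cited above)
target_revenue = 280e9   # the 2030 projection (cited above)

adoption = 0.80          # assumed: share of users whose agent transacts
take_rate = 0.04         # assumed: the platform's cut of each transaction

required_spend = target_revenue / (users * adoption * take_rate)
print(f"${required_spend:,.0f} mediated per user, per year")  # ~$9,722
```

On those assumptions, the agent would have to mediate roughly $10,000 of spend per user per year. That isn’t a chatbot subscription – it’s an agent running a meaningful slice of your economic life, which is exactly why the lock-in and conflict-of-interest questions aren’t hypothetical.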

The third path is a regulated interoperable standard. The individual owns their data and context in a portable format. Companies compete to provide the execution engine. Regulations mandate portability and fiduciary duty. Think of it like banking: you own your money, you choose the bank, regulations guarantee you can move.

No major platform has ever chosen interoperability voluntarily. But every critical infrastructure layer – telecom, banking, energy, healthcare records – has eventually been forced into it. The question isn’t whether this happens. It’s whether the companies building POLs today design for it now and shape the standard, or get it imposed later under worse terms.
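What would designing for it now look like? Here’s a minimal sketch of the portable context at the heart of the third path – the thing you’d own and could take to a competing execution engine. The schema is entirely hypothetical; no such standard exists yet, which is rather the point.

```python
# Hypothetical portable-context export. No such standard exists today.
# The design principle: this file, not the vendor's database, is the
# source of truth, the way your money is yours and the bank is a choice.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class PortableContext:
    schema_version: str = "0.1"   # a real standard would pin and evolve this
    owner: str = ""               # the individual, never the vendor
    preferences: dict = field(default_factory=dict)  # stable likes and constraints
    memory: list = field(default_factory=list)       # accumulated operational context
    grants: list = field(default_factory=list)       # systems the agent may act on


ctx = PortableContext(
    owner="user@example.com",
    preferences={"seat": "aisle", "no_meetings_before": "10:00"},
    grants=["calendar", "email"],
)
print(json.dumps(asdict(ctx), indent=2))  # the export a regulator would mandate
```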

What I’d ask if I were on the management team or the board of a company

If your business depends on customer relationships, the POL question is already a strategic question. Three things I’d want to understand:

Where does your competitive advantage depend on customers tolerating friction? If the answer is “they’ve learned our system,” that advantage has a shelf life.

Who owns the memory – not the data, the “memory”? Data is the transaction log. Memory is the operational understanding – the patterns, the preferences, the accumulated context that makes the system valuable. If that memory lives in a vendor’s system, you’re renting your customer relationships. And the landlord can raise the rent. A toy sketch below makes this distinction concrete.

Are you ready for an outcome economy? When agents mediate the customer relationship, pricing shifts from impressions and clicks to completed tasks and resolved problems. If you can’t measure the value of the outcome you deliver, you can’t defend your price.
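To make the second question concrete – this is the sketch I promised above – here’s a toy example of data versus memory. The structures are hypothetical, but the asymmetry is the point: anyone can store the log; the derived layer is where the relationship lives.

```python
# Toy illustration: "data" is the transaction log; "memory" is the
# operational understanding derived from it. Hypothetical structures.
from collections import Counter

# Data: facts about what happened. Every vendor has this.
log = [
    {"flight": "SFO->JFK", "airline": "Delta",  "seat": "12C"},
    {"flight": "SFO->BOS", "airline": "Delta",  "seat": "14D"},
    {"flight": "SFO->ORD", "airline": "United", "seat": "9C"},
]

# Memory: what to do next time, without being asked.
memory = {
    "preferred_airline": Counter(t["airline"] for t in log).most_common(1)[0][0],
    "prefers_aisle": all(t["seat"].endswith(("C", "D")) for t in log),
}
print(memory)  # {'preferred_airline': 'Delta', 'prefers_aisle': True}
```

Whoever computes and holds that second dictionary owns the customer relationship, whatever the contract says about the first.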

Here’s the bottom line

OpenAI cut $800 billion from a compute budget and hired one person. The compute investment will produce better models. It always does. But Steinberger gives OpenAI something compute can’t buy: proof that a personal agent architecture works in the wild.

What OpenAI doesn’t yet have – what none of these companies has, as far as I’ve seen – is a strategic doctrine for how this layer should be governed. Who owns the memory. Who controls the defaults. Whose interests the agent serves when there’s a conflict.

That doctrine is what separates a product from a platform shift. And it’s what I’ve spent the last year thinking through.

“The Invisible Interface: From Apps to Agents: How AI Turns Intentions into Actions – and Who Wins” publishes this May/June from Ideapress Publishing.
