Three Takes on AI and the Economy. Here’s What They Open Up.
I spent part of this week carefully reading three pieces on what AI does to the economy. Two painted opposite scenarios for 2028. The third – published in January – built what I believe is an economic foundation underneath both. Then I emailed that third author directly.
Here’s what I found.
Michael Bloch says abundant intelligence is deflationary in the good sense – costs collapse, purchasing power rises, new businesses form, living standards climb. He builds on 200 years of evidence. Every major technology wave – railroads, electricity, the internet – produced more prosperity than the pessimists predicted. Not marginally more. Dramatically more.
Citrini Research says the same wave triggers crisis. White-collar workers get displaced faster than they can be reabsorbed. Consumer spending collapses because the people being replaced were also the people doing the buying. Output rises while household finances deteriorate – what he calls Ghost GDP. The economy looks healthy on the surface while quietly rotting underneath.
Alex Imas at the University of Chicago Booth School of Business asked the question I hadn’t seen anyone else formally model: if AI automates most labor and workers’ share of income collapses, who actually buys the increased output? A good friend at UBS has heart palpitations every time we have this conversation. But Imas’s answer is measured – an outright economic collapse probably requires conditions too extreme to materialize. The more realistic concern is that demand gets suppressed enough to push AI-driven growth toward the low end of every forecast. The gap between Bloch’s boom and Citrini’s crisis isn’t random. It depends on how hollowed out the middle of the economy gets during the transition.
All three are right about the mechanisms they describe. The data backs all of them simultaneously right now. Business formation hit 5.2 million applications in 2024 – 47% above pre-pandemic baselines. That’s Bloch’s optimism showing up in real numbers – people starting new things at a record pace. But bachelor’s degree holders now represent 25% of all unemployed workers – also a record, and not a good one. Job openings per unemployed worker hit 0.9 in December 2025, the lowest since 2017 – meaning for the first time in years there are more people looking for work than there are jobs available. Consumer sentiment sits at 51 while the S&P is at record highs. That divergence – stock market up, people feeling terrible – is Citrini’s Ghost GDP signal appearing in actual data. And the income squeeze Imas modeled is visible in both datasets at the same time.
Here is where I think all three leave a central question unanswered.
Because all three are answering economics questions. What I want to add is a different kind of question – one about who controls the machinery.
Two bodies of recent research show exactly why that question matters.
Daron Acemoglu is an MIT economist who won the Nobel Prize in Economics in 2024. He draws a distinction that sounds simple but has enormous consequences: AI deployed for automation replaces what workers do. AI deployed for augmentation makes workers more capable. His argument, developed in research published last year and sharpened in a piece out this week, is that the current trajectory is heavily weighted toward automation – and that’s a deliberate design choice, not some technical inevitability baked into the technology. The economic consequences of those two paths diverge dramatically. His projection for the automation-first path: roughly a 0.5-1% gain in economic output over ten years. Not the 7% Goldman Sachs projects. Not the 3-4 percentage points annually McKinsey anticipates. A fraction. The direction of the choice determines the destination.
Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen at Stanford’s Digital Economy Lab make it concrete. Their August 2025 paper analyzed payroll records from ADP – the company that processes paychecks for a huge swath of American businesses – covering millions of workers across thousands of firms. What they found: early-career workers aged 22-25 in the jobs most exposed to AI have seen a 13-16% relative employment decline since late 2022. Software developers. Customer service. Marketing. But here’s the finding that cuts to the bone: employment is growing in jobs where AI is used to make workers better at what they do, and falling in jobs where AI is used to replace workers entirely. Same technology. Opposite outcomes. The difference is how companies choose to deploy it.
That’s not theory. That’s payroll data from millions of real people.
When I reached out to Imas this week, he drew a distinction that sharpened everything. He’s saying the problem has two distinct failure points and confusing them leads to bad policy. Regulatory solutions – things like requiring AI systems to act in your interest, ensuring you can move your data between providers, preventing monopolistic lockout – address the concentration problem before it metastasizes. Fiscal solutions like sovereign wealth funds address the income displacement problem after it happens. Both are necessary. Neither substitutes for the other.
Bloch’s historical analogies all share a structural feature worth sitting with. When railroads transformed the economy, many railroads competed. When electricity transformed the economy, many utilities competed. And when any one of them threatened to capture all the gains, governance stepped in – Standard Oil broken up, the Bell System dismantled. The surplus got distributed because the rules eventually required it to. Bloch’s optimistic scenario depends on that pattern repeating. But Acemoglu is explicit about why it might not: AI development is currently pointed at replacing workers, not empowering them, and no market mechanism automatically forces a correction. Someone has to make that call deliberately.
Citrini maps what happens if concentration runs unchecked. But here’s what his framework opens up: that concentration isn’t predetermined. It’s the consequence of specific decisions being made right now. The FTC spent a year studying the partnerships between the major cloud providers – Microsoft with OpenAI, Amazon and Google with Anthropic – representing over $20 billion in investment – and, in a report published in January 2025, formally flagged the risk of customers getting locked in with no easy way out. Meanwhile the European Union is moving to legally classify AI systems as “gatekeepers” – meaning companies with so much market power that they control access for everyone else – with requirements that they remain interoperable and open. Their formal review lands May 2026. Two continents, two opposite regulatory directions, same underlying question being answered in real time.
And Imas, by identifying the fiscal response to displacement, points upstream: how much displacement happens, how fast, how concentrated – that’s not fixed. Brynjolfsson’s payroll data shows it depends on how AI gets deployed and what it’s optimized to do. Design the system to augment workers and you get one outcome. Design it to replace them and you get another. That choice is being made right now.
Which brings me to the question all of these thinkers illuminate without making it their central focus.
There is a software layer sitting between you and every AI-powered service you use – the system that remembers your preferences, routes your requests, acts on your behalf, and learns from everything you do. Who controls that layer? Who owns the memory it builds about you? Can you take that memory with you if you switch providers? Is that layer being built to make you more capable – or to make you more dependent?
That’s not a technology question. It’s a governance question. Acemoglu calls it the direction of AI development. Brynjolfsson calls it augment versus automate. I call it the architecture of the agent layer – Personal Operating Layer (POL). Different vocabulary. Same variable. And the answer determines whether Bloch’s historical pattern repeats itself – or whether this time really is different because the infrastructure consolidates before anyone figures out the rules.
Right now, the companies building these systems are making product decisions this quarter that will shape those answers for a decade. Those decisions are happening in rooms most people don’t have access to.
Read all five pieces. They’re the sharpest thinking on this transformation published in recent months.
Once you’re done, ask who’s in the room when those decisions get made.