The Question I Keep Circling
By Harry Glorikian
A quick heads-up. My wife tells me I take too long to get to the punchline, so I’m putting some of it up here. There’s a four-question screen for management teams near the end of this piece. You can skip there if you want the tool first. The rest is how I got there, and I still hope you’ll read the whole thing, but the screen works on its own.
I’ve been going back and forth on AI and jobs since I first started playing with GPT-3, and I finally understand why.
It isn’t because the question is hard. It’s because the smartest people I read disagree with each other at a fundamental level, and the more I read, the more I realized they aren’t even answering the same question.
Dario Amodei runs Anthropic. He’s inside the lab. He sees the capability curve up close, and he’s been saying publicly that entry-level white-collar work is going to get hit hard and soon. Daron Acemoglu won the Nobel Prize in economics in 2024. He’s calculated the total US productivity impact from AI at well under one percent of GDP over ten years, a rounding error at the macro level. Recently, as part of my research at MIT, I interviewed a researcher at Microsoft who is also a trained anthropologist. She reminded me that in two hundred years of industrial history, technology has never eliminated jobs on net; humans evolve into the new work it creates. Andrej Karpathy, another voice close to the frontier, built a viral dashboard in March ranking 342 US occupations by AI exposure, then pulled parts of it down when it went sideways on X, because exposure was never the same thing as displacement.
Many serious camps. Many different altitudes. Many different answers. In between, a steady flow of op-eds, Substack pieces, and LinkedIn threads where people pick a side and argue it like the answer is obvious.
I’m not an economist, and I have no interest in pretending to be one. But the strategic implications of this technology for boards, management teams, and the workforce are exactly the kind of thing I spend my time on. I’ve learned over many years that when smart people disagree this fundamentally, the fastest way to get clarity is to read past the headlines and go to the source material. That’s what I tried to do. I want to share where it got me. (Mind you, this is not comprehensive.)
What the research I reviewed says
The piece that reframed this for me, and that this article builds on directly, was a recent Substack post by Alex Imas and Soumitra Shukla called “How Will AI-driven Automation Actually Affect Jobs?” Shukla is a research fellow at Harvard Business School and the Burning Glass Institute. Imas is an economist at the University of Chicago Booth School of Business. They were unpacking a January 2026 NBER working paper from Joshua Gans and Avi Goldfarb at the University of Toronto Rotman School of Management titled “O-Ring Automation.”
The name comes from Michael Kremer’s 1993 paper on economic development, which itself was inspired by the Challenger disaster. One failed O-ring took down the entire system. As Imas and Shukla put it, Kremer’s insight was that productivity becomes a multiplicative rather than an additive function of skill. Small quality gains on any one step compound through everything else.
Gans and Goldfarb applied that framework to AI automation. They showed mathematically that the popular exposure indices everyone cites, the ones generating the scary headline numbers, are built on an assumption that almost certainly doesn’t hold for most real jobs. They assume tasks are separable. Automate task A, and task B is unaffected. Add up the percentage of tasks that can be automated and you get an “exposure score.” In their words, that approach is mathematically inconsistent with how actual production works.
The implication of this framing can change how you read the whole debate.
When AI automates some of the tasks in a job that has many complementary pieces, the worker doesn’t disappear. They reallocate their time to the tasks that remain. Their output on those tasks goes up because they have more time to concentrate on them. Because output is multiplicative rather than additive, small quality improvements on the remaining tasks multiply through the entire job. Gans and Goldfarb call this the focus effect. The worker becomes more valuable, not less.
When AI automates the tasks in a job that has only one or two essential pieces, there is nothing to reallocate to. The job ends.
Imas and Shukla call this dimensionality. It’s simply the number of essential complementary tasks in a job. And it tells you more about displacement risk than any exposure score ever will.
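The multiplicative logic is easy to see with numbers. Here is a minimal Python toy of the focus effect, with illustrative quality figures of my own rather than anything from the Gans-Goldfarb paper:

```python
# O-ring production: output is the PRODUCT of quality across essential tasks,
# so a small gain on any one task multiplies through the whole job.
# (Illustrative numbers, not figures from the paper.)
from math import prod

def output(task_qualities):
    """Multiplicative (O-ring) production function."""
    return prod(task_qualities)

# An eight-task job done at quality 0.8 per task under time pressure.
before = [0.8] * 8

# AI takes over four tasks at the same 0.8 quality; the worker reallocates
# the freed-up time and does each remaining task slightly better, at 0.9.
after = [0.8] * 4 + [0.9] * 4

print(output(before))  # ≈ 0.168
print(output(after))   # ≈ 0.269, roughly 60 percent higher: the focus effect
```

The point of the toy: the worker’s quality on the remaining tasks only rose from 0.8 to 0.9, but because output is a product rather than a sum, total output rose by roughly sixty percent.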
There’s a second variable that matters almost as much. Demand elasticity. When productivity rises and the price of the output drops, do customers buy a lot more of it, or do they buy roughly the same amount? If a lot more, headcount can grow. Think about what happened to software developers after the cloud made them dramatically more productive. We hired more of them, not fewer. If roughly the same amount, productivity gains turn into layoffs. Think about what happened to bank tellers after ATMs, and then to bank branches after the iPhone.
There’s a third variable, and it’s the one management teams almost never discuss. How strong is the firm’s incentive to automate the whole job versus a piece of it. Imas and Shukla illustrate this with a ten-million-dollar hypothetical integration project. Spending that kind of money to automate one task inside a twenty-task job rarely works out. Spending the same amount to automate the last remaining manual task inside a two-task job can be a layup. Firms push much harder to automate low-dimensionality jobs because the return is the entire wage bill, not a productivity fraction.
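The arithmetic behind that incentive is worth making explicit. A back-of-envelope sketch in Python, where the wage and headcount are my own illustrative assumptions, not figures from Imas and Shukla:

```python
# Why capital pushes hardest at low-dimensionality jobs: compare the payback
# on the same ten-million-dollar integration project. Wage and headcount are
# illustrative assumptions of my own.

def payback_years(project_cost, annual_savings):
    """Years for the project to pay for itself at a given annual saving."""
    return project_cost / annual_savings

wage = 150_000    # assumed fully loaded annual cost per worker
headcount = 100   # workers in the role

# Automating 1 of 20 tasks in a high-dimensionality job frees at best ~5% of
# each worker's time, and the focus effect means you probably keep everyone.
partial_savings = headcount * wage * (1 / 20)

# Automating the last manual task in a two-task job eliminates the role,
# so the return is the entire wage bill.
full_savings = headcount * wage

print(payback_years(10_000_000, partial_savings))  # ≈ 13.3 years
print(payback_years(10_000_000, full_savings))     # ≈ 0.7 years
```

Same ten million dollars, wildly different paybacks. That asymmetry, not raw capability, is what decides where automation budgets actually land.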
Why I think all the camps above are right
Once I saw this, the disagreements started to make sense.
Amodei is most probably right about what the technology will be capable of in the next thirty-six months. He’s inside the lab. He sees the capability curve. He’s also probably too fast on deployment, because he doesn’t sit in boardrooms watching ten-million-dollar integration projects die in debate. On capability, though, I take him and others like him who have their hands on the fast-growing animal we call AI seriously.
Acemoglu is probably right about the macro number. Task-level adoption has real frictions. Most of what AI can technically do won’t be profitable to deploy at scale inside complex organizations within a decade. His math is sound. What macro numbers hide is distribution. The China shock showed up as net gains to US GDP and a generational catastrophe in specific counties. Both were true at the same time. Acemoglu’s aggregate calculation can be correct while specific communities still get hollowed out.
The Microsoft researcher is right about the long arc. Two hundred years of industrial history says humans rotate into new work. I believe that. What she can’t tell you is what happens during the transition. (The difference between evolution and revolution is time.) Research by James Bessen at Boston University is usually cited to prove technology doesn’t destroy jobs. Read it carefully, though, and to me it says the opposite. Bank teller employment grew from roughly 300,000 in 1970 to over 600,000 by the early 2000s, even as ATMs spread, because ATMs cut the cost of opening a branch and banks opened 43 percent more of them. Then the iPhone arrived, mobile banking killed the need to visit a branch at all, and teller employment has been declining ever since. The Bureau of Labor Statistics now projects it will fall another 13 percent by 2034. Two hundred years of adaptation is true. The transition still broke certain jobs and careers.
The Imas and Gans-Goldfarb frame is what I believe can reconcile all of them. Low dimensionality plus inelastic demand plus sharp ROI to eliminate the full wage bill is the zone where Amodei is probably right for those specific workers. High dimensionality plus elastic demand is where the anthropologist is probably right, and the focus effect actually raises wages. The aggregate of everything is where Acemoglu is probably right on the macro number. All three are correct. They’re describing different parts of the same animal.
Which means the macro debate about whether AI destroys jobs in the aggregate is, for anyone with fiduciary responsibility, a distraction from the question in front of us. The question is about specific roles inside specific companies. The screen at the end of this piece is built around that question.
Two workers
Imas and Shukla frame this with two workers, and it’s worth walking through their example because it’s clear.
Picture a management consultant. Her job is roughly eight things. Research, data analysis, client communication, slide construction, strategic reasoning, team coordination, relationship management, implementation support. AI can help with most of those. She uses Claude for first drafts. She uses specialized tools for financial modeling. She uses AI for meeting synthesis. By any exposure index you can find, her job looks highly exposed to AI.
Now picture a long-haul truck driver. His job is roughly one thing. Move the truck safely from point A to point B. Logistics, loading, and dispatch are someone else’s jobs. By any exposure index, his job looks lightly exposed to AI because he isn’t the one using AI day to day.
Which one is more likely to lose his or her job in the next three to five years?
The truck driver. It isn’t close.
Aurora Innovation peaked at ten fully driverless trucks operating commercially in December 2025, across a network that now spans ten routes through Texas, New Mexico, and Arizona, with more than two hundred fifty thousand cumulative driverless miles and zero collisions attributed to the Aurora Driver as of January 2026. Kodiak AI went public on Nasdaq in September 2025, and by the end of that month had ten driverless semis running up to twenty-four hours a day in the Permian Basin for Atlas Energy, which had placed a firm 100-truck order earlier in the year. Aurora’s guidance for the end of 2026 is more than two hundred driverless trucks on the road, with a stated vision of tens of thousands of trucks moving freight globally in the years that follow. Small numbers today. The direction is clear.
Compare that to the consultant’s world. AmLaw 100 revenue grew about 13 percent in each of the last two years, and lawyer headcount grew 7.7 percent in 2024. Medscape’s 2026 compensation survey reported radiologist compensation rose 9 percent in 2025, to an average of $571,000. The AMA reported that physician AI use jumped from 38 percent in 2023 to 81 percent in early 2026. None of those professions is shrinking. The technology is absorbing tasks, not eliminating jobs.
The high-exposure worker keeps her job and may even get a raise. The low-exposure worker loses his.
A screen that I believe management teams can use
I’ve sat in enough boardrooms over time to know how the question usually comes up. A director asks the CEO how exposed the company is to a technology shift. The CEO pulls up a chart from a consulting deck with percentages by function. Marketing at some number. Customer service at some other number. Everyone nods. The word “transformation” gets said. The meeting moves on. Nothing useful has happened, because exposure was never the right variable. (Now overlay this whole AI discussion and you can imagine the scenario.)
Here’s what I wish every management team ran on every major role inside the company at least once a year. Not a six-month engagement. Please, not a transformation initiative. A twenty-minute structured conversation per role, led by the CHRO with the CFO and the audit committee together. Before I tell you the questions, I will say it again. Do not hire outsiders to do this. Do it yourself if you want tangible insights.
Four questions. Score each one to five.
First, how few essential tasks make up this job. One essential task is a five. Two or three is a four. Four to six is a three. Seven to ten is a two. More than ten is a one. A job built around one thing is the job most at risk of being eliminated whole.
Second, what happens to demand if the price of this role’s output drops thirty percent. Flat or declining demand is a five. Stable is a four. Modest growth is a three. Meaningful growth is a two. Explosive latent demand is a one. If your customers won’t buy more when the product gets cheaper, productivity gains turn into layoffs.
Third, how clear and large is the ROI for automating this role completely. Full wage bill savings with a clear integration path is a five. A strong partial business case is a four. A muddy partial case is a three. Only possible piece by piece is a two. Technically infeasible within five years is a one. The sharper the total ROI, the harder capital will push.
Fourth, how fast is competitive pressure forcing the move. Competitors already automating at scale is a five. Competitors piloting publicly is a four. A credible new entrant building around it is a three. Nobody has moved is a two. Protected by regulation or credentialing is a one. If you don’t automate and your competitor does, your cost structure gives you about eighteen months before you become uninvestable.
Add the four numbers. Anything sixteen or higher is in the red zone and needs a transition plan ASAP. Twelve to fifteen is yellow and needs augmentation investment. That means tools, training, and role redesign so humans spend their time on the parts AI can’t touch. Below twelve is green for now, and you should rerun the screen every six to twelve months, because this moves fast and the models keep improving.
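For teams that want to operationalize this, the screen reduces to a few lines of code. A minimal Python sketch using the thresholds above; the example scores are my own reading of the two-workers illustration, not anything official:

```python
# The four-question screen as a function. Each argument is a 1-5 score from
# the questions above; the thresholds are the ones given in the text.

def screen(dimensionality, demand, roi, pressure):
    """Return (total score, zone) for a role."""
    scores = (dimensionality, demand, roi, pressure)
    assert all(1 <= s <= 5 for s in scores), "each question scores one to five"
    total = sum(scores)
    if total >= 16:
        return total, "red"      # needs a transition plan ASAP
    if total >= 12:
        return total, "yellow"   # needs augmentation investment
    return total, "green"        # rerun every six to twelve months

# Long-haul driver, scored from the example: one essential task (5), stable
# demand (4), full-wage-bill ROI (5), competitors piloting publicly (4).
print(screen(5, 4, 5, 4))  # (18, 'red')

# Management consultant: eight tasks (2), elastic demand (2), muddy partial
# ROI (3), nobody automating the whole role yet (2).
print(screen(2, 2, 3, 2))  # (9, 'green')
```

The code adds nothing the conversation doesn’t, but writing the rubric down as a function forces the scores to be explicit, which is most of the value of the exercise.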
Run it across the roles that account for most of your payroll. I think you’ll find two things. Most of your roles will score lower than the headlines imply. A handful will score higher than anyone on your executive team may be willing to admit out loud. The variance is the entire point of this exercise.
What I’m not saying
I’m not saying AI is about to vaporize the American workforce. Morgan Stanley’s April 2026 disruption tracker put the aggregate labor market effect at under ten basis points on unemployment. Goldman’s updated analysis in March 2025 called the aggregate impact negligible to date. The aggregate story is genuinely quiet.
The distribution story is not. Stanford’s Digital Economy Lab found a 16 percent relative employment decline for workers age 22 to 25 in the most AI-exposed occupations compared to older workers in those same occupations. The Burning Glass Institute estimated that roughly 18 million entry-level jobs could become obsolete as AI absorbs junior tasks, while about 29 million mastery roles become more accessible. BLS projects customer service representative employment to decline five percent by 2034, one of the few occupations projected to shrink in absolute terms. A peer-reviewed study in Management Science found that freelance writing job postings on a major global platform dropped more than 30 percent within eight months of ChatGPT’s release.
None of that is showing up in a national unemployment number. All of it shows up on a specific company’s income statement if its board hasn’t seen it coming.
To close
I spent a long time torn between the Amodei camp and the Acemoglu camp, and the reason I couldn’t get comfortable with either was that each was answering a different question than the one in front of me. The one in front of me is the one in front of every board member, every management team, and every investor trying to allocate capital intelligently over the next three to five years. Which specific roles inside which specific companies are at actual risk, and what should we do about it.
Exposure indices don’t answer that. The O-ring framework, stripped of its math, gives you three variables that do. Dimensionality of the role. Demand elasticity for the output. Strength of the firm’s incentive to eliminate the full wage bill. Add competitive pressure. Run the simple screen I shared above.
I’ll be honest with you. I came to this topic uncertain, read my way into my current state of clarity, and I’m sharing that clarity here because I think a lot of people in this conversation are picking sides when they should be picking variables. I’m not picking Amodei over Acemoglu. I’m saying they’re both right for different workers, and the productive question for anyone with fiduciary responsibility is which workers we’re talking about.
Do the analysis now. Not after the pressure arrives.
One addition. I am not saying all is roses. We do need proactive policy, before things change, not after. I have written about that in a different post.
Credit where it’s due. The core framework in this piece comes from Joshua Gans and Avi Goldfarb, “O-Ring Automation,” NBER Working Paper 34639, January 2026, which itself builds on Michael Kremer’s 1993 “O-Ring Theory of Economic Development.” Alex Imas and Soumitra Shukla brought the paper into wider circulation in their Substack post “How Will AI-driven Automation Actually Affect Jobs?” in March 2026, and the two-workers example and the dimensionality framing are theirs. The Four-Question Screen is my own synthesis for boards and management teams.
Harry Glorikian is Managing General Partner at Scientia Ventures and a Visiting Researcher at the MIT Media Lab. His book The Invisible Interface: How AI Turns Intentions Into Actions, And Who Wins (Simon & Schuster, June 2026) is available for pre-order now. He hosts The Harry Glorikian Show and previously wrote MoneyBall Medicine and The Future You.


