
The Policy Piece I Didn’t Want to Write

By Harry Glorikian

About a year ago I sat down with one of the most respected design thinkers alive. Someone who has spent his career studying how technology either serves people or quietly works against them.

I was there to talk through my research. I’d just started as an affiliate researcher at the MIT Media Lab, and I was pulling at a cluster of questions that kept landing in the same place. How does AI change the way companies actually compete? Do frameworks like Porter’s Five Forces still hold, or do they need new inputs entirely? Where does governance fit in now? Privacy? Legal liability?

My hypothesis was that management needed a completely new lens. Not because the classical frameworks were wrong, but because they were built for a world with certain fixed assumptions baked in. Chief among them: that scaling output means scaling human cognitive labor. Hire more people, produce more work. That’s how consulting firms bill. How law firms grow. How software companies have scaled for thirty years. The entire war for talent as a strategic concept only exists because human cognitive capability was assumed to be the binding constraint on what a company could do.

That assumption is cracking. And when it cracks, it doesn’t just affect workforce planning. It reshapes governance, privacy exposure, legal liability, competitive positioning. The whole strategic picture moves.

He listened carefully and responded thoughtfully. Then he said what I was doing was pointing toward policy. I was uncovering something that needed to be put into the world. Find a partner, he said. Get it out there.

I said: I’m not a policy guy. That’s not where I live.

So I wrote a book instead.

My reasoning: you can’t argue for rules governing something most people don’t understand yet. Help them see the real dynamics first. Make it concrete. Make it usable. Once people actually understand what’s shifting, the case for policy becomes obvious on its own.

I still believe that. My book, The Invisible Interface, is written for managers and board members trying to understand exactly these dynamics. How to position their companies in a world where AI sits between them and their customers. How to build the kind of trust that becomes a durable competitive advantage. How to avoid being locked out of the default position that will determine who wins. Get people seeing it clearly first. The policy case makes itself.

But here’s what I’ve been watching the last few months. The policy conversation is arriving. Smart proposals are on the table. And almost every one of them is designed to respond to damage after it shows up in the data.

That’s the pattern. We’ve seen it enough times to know how it ends.

Manufacturing moves offshore. Workers lose careers they spent decades building. The response? Retraining programs. Emergency funds. Bipartisan commissions. A decade late. By then the towns are hollow and the anger has become something harder to fix than the original economic problem ever was.

Free trade. The financial crisis. Same story both times. The policy response wasn’t stupid. It came late because we waited until the pain was measurable. And once the pain is measurable, the window to actually shape the outcome has largely closed.

AI is moving faster than either of those transitions. Significantly faster.

Some of the proposals being developed right now are serious and worth engaging.

One framework proposes shifting payroll taxes so companies replacing workers with AI pay more, while labor-intensive companies pay less. It would build automatic safeguards that kick in if labor’s share of the economy drops, funding wage insurance and a backstop for families facing mortgage pressure. The underlying logic is sound. White-collar workers drive roughly 75% of American consumer spending. Displace them fast enough and you don’t just have an unemployment problem. You have a consumer spending collapse that pulls the whole economy down.

Former Commerce Secretary Gina Raimondo made an equally serious case in the New York Times this month: a “grand bargain” where employers define what skills the AI economy actually needs and government funds the training and safety nets to move workers there quickly. She implemented the CHIPS Act, the legislation designed to bring semiconductor manufacturing back to the United States. She knows how to move money through federal systems.

I’m not dismissing either proposal.

But both are reactive. Built for the world after the disruption has landed. Neither asks the question I think we actually need to be asking.

We are not just automating tasks. Every previous wave of automation was bounded: a factory floor, a specific function, a defined category of work. Painful and real. But contained.

What’s being built right now is different in kind.

AI is becoming the layer through which people navigate economic life. Not a tool you pick up for a task. Something more like an invisible operating system, always on, sitting between you and your employer, your doctor, your bank, your next job, your government benefits. The system interprets your situation, surfaces your options, makes recommendations, sometimes completes transactions entirely on your behalf.

I’ve been calling this the Personal Operating Layer. Within five years, most of the consequential economic decisions in your life will pass through some form of AI mediation. That is an enormous amount of leverage concentrated in systems most people will never see.

There’s a question missing from the current policy debate:

When people navigate their economic lives through this layer, who does it actually serve?

Whose options appear first? What business relationships shape what gets shown to you? Is the system honest about what it doesn’t know, or just silent about it?

I call this cognitive rent. The invisible toll you pay when you depend on a system whose real incentives you can’t see. A job seeker using an AI platform to find work. A patient trying to understand what her health coverage actually covers. A worker trying to figure out which skills to develop next. If those systems have undisclosed commercial relationships, or have been trained on data that encodes existing inequalities, the person using them experiences what feels like neutral help.

It may not be neutral at all.

And this isn’t just about individuals navigating their personal lives. The same dynamics play out in every enterprise buying decision where an AI agent is choosing among vendors, routing procurement, or recommending services. The AI purchasing assistant quietly steering orders toward the supplier with the cleanest data and the highest margin rather than the best product. The contract negotiation tool that surfaces one recommended vendor while two others sit behind a menu nobody clicks. The enterprise software agent that books the renewal before the procurement team realizes the terms changed. The agent between your company and its suppliers has exactly the same incentive problem as the agent between a patient and her doctor. When the agent executes rather than just recommends, and when the incentives behind its recommendations are invisible, the market stops behaving like a market. It behaves like a funnel. And funnels concentrate power.

We built the internet without settling who would control search. Two companies now mediate most of the world’s information access. We built the smartphone without resolving who would control app distribution. Two companies now control that on-ramp entirely. We are doing the same thing with AI-mediated economic navigation, right now, with the same absence of forethought.

Own the default, own the data. Own the data, own the decade.

Here’s what I want to say carefully, because this is the part that gets misunderstood.

This is not an argument about constraining companies. It’s an argument about what makes markets work.

Every major infrastructure shift in American history required a moment where the country decided: what are the basic rules of the road so everyone can benefit from this? Not to slow it down. To make it actually function. The interstate highway system needed traffic laws before it could deliver economic value. The electricity grid needed safety standards before businesses could depend on it. Financial markets needed disclosure requirements before investors would trust them enough to fund growth at scale. The rules weren’t anti-growth. They were what made growth possible.

This shift is different from every previous technology wave for one specific reason. Previous automation changed what people and companies could do. Factories got more efficient. Software replaced back-office tasks. Painful and real, but the market that allocated value still worked. Buyers could see options. Sellers competed on visible terms. Price and quality were legible to the people making decisions.

What’s being built now changes how the market itself works. When AI mediates the decisions, the market only functions if the people and companies participating can see what they’re actually choosing. When the agent’s incentives are invisible, the market loses the visibility it needs to route value to whoever creates it. That’s not a consumer problem. That’s a market structure problem. And it hits every company trying to compete honestly just as hard as it hits every individual trying to make a good decision.

The scenario I describe in the book isn’t about bad companies. It’s about rational companies doing what rational companies do when there are no rules. Without them, the companies that win are the ones most willing to exploit what nobody can see. The companies building genuinely better products, genuinely more trustworthy systems, get undercut by competitors who figured out how to monetize the opacity instead. The market can’t reward genuine value when it can’t see it.

That’s what the rules of the road fix. Not for consumers at the expense of companies. For everyone who wants to compete on what they actually built.

Three things. None require new legislation.

The first is memory portability as a legal requirement. This one matters most for competition.

Right now the switching cost in AI isn’t about price or features. It’s personal. Your AI system accumulates months and years of context about how you make decisions. Your communication style. Your health patterns. Your financial thresholds. That’s what makes the system feel like part of how you think. And when you try to take it with you, you find out you can’t. The behavioral model built from your decisions stays with the platform. You can have your conversation logs. You cannot have the learning.

The book frames this directly: if you can’t move your cognitive twin to a competing platform in hours, not quarters, portability is theater. You’re not a customer. You’re a captive.

Phone number portability created real mobile competition by eliminating one of the most powerful lock-in mechanisms carriers had. The same principle applies here. Require that AI platforms above a meaningful scale allow users to export their full context in a standard, portable format. Not just logs. The learned model. The memory that makes the system actually valuable.
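To make the requirement concrete, here is a minimal sketch of what a standard export might contain. This is purely illustrative; the `PortableContext` type and every field name in it are my assumptions, not an existing format:

```typescript
// Hypothetical shape of a portable AI context export.
// Field names and structure are illustrative, not an existing standard.
interface Message {
  role: "user" | "assistant";
  content: string;
  timestamp: string; // ISO 8601
}

interface Preference {
  dimension: string;   // e.g. "communication style", "risk tolerance"
  value: string;
  derivedFrom: string; // provenance: which interactions produced this inference
}

interface ModelArtifact {
  format: string;      // an open serialization format for the learned model
  data: Uint8Array;    // the learning itself, not just its outputs
}

interface PortableContext {
  schemaVersion: string;            // lets a competing platform parse older exports
  exportedAt: string;               // ISO 8601 timestamp
  conversationLogs: Message[];      // what platforms already hand over today
  learnedPreferences: Preference[]; // inferred habits, thresholds, styles
  behavioralModel: ModelArtifact;   // the part that is usually withheld
}
```

The last field is the whole fight. Exports today stop at the logs; a portability requirement worth the name reaches the learned model.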

This does not hurt companies building genuinely good systems. It kills unearned lock-in and rewards genuine value. Most of the executives reading this are not running the dominant platform; you are trying to compete against one. Portability helps you. The Data Transfer Initiative is building voluntary standards for this. The EU is debating extending its Digital Markets Act to AI assistants. The technical problem is solvable. What’s missing is the legal requirement that makes solving it non-optional.

The second is default transparency for any AI acting on your behalf.

Colorado, Illinois, and New York City have all moved on AI bias audits in employment. Those efforts matter. But they’re focused on detecting discrimination after decisions are made.

What the book points at is upstream. When an AI system acts on your behalf, three things should be visible: what it recommended and why, how confident it was, and who it is working for. Not buried in a terms-of-service document. In the moment. In plain language. Before the transaction completes.

The book calls these trust signals. Show your sources. Express your confidence. Allow an override. Provide a kill switch. These are the design requirements that separate AI that serves the user from AI that extracts from the user while appearing to serve them.
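To picture what “in the moment, in plain language” could look like at the interface level, here is a hypothetical sketch of the disclosure an agent might attach to every recommendation. The `TrustSignals` type and its field names are my illustration of those requirements, not an existing specification:

```typescript
// Hypothetical disclosure attached to every agent action.
// The four requirements come from the text: sources, confidence,
// override, kill switch. All names here are illustrative.
interface TrustSignals {
  recommendation: string;            // what the agent proposes
  rationale: string;                 // why, in plain language
  sources: string[];                 // show your sources
  confidence: number;                // 0 to 1: express your confidence
  workingFor: string;                // who the agent actually serves
  commercialRelationships: string[]; // paid placement behind the options shown
  overrideAvailable: boolean;        // allow an override
  killSwitch: () => void;            // halt the agent before the transaction completes
}

// One possible rule: a recommendation missing its signals never executes.
function canExecute(s: TrustSignals): boolean {
  return s.sources.length > 0 && s.overrideAvailable && s.workingFor.length > 0;
}
```

The gating function is the design point: a recommendation that cannot show its sources, its principal, and an override path should never reach execution.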

Right now these are design choices left entirely to vendors. If you’ve invested in building something trustworthy, that’s a problem. There’s no way to prove to users you’re the honest option when the dishonest options look identical from the outside. Make these signals mandatory and suddenly trust becomes legible. Quality can compete.

The Federal Trade Commission already has authority under existing consumer protection law to treat undisclosed AI recommendations serving platform revenue as deceptive practices. That authority has never been applied this way. Using it would change the economics of every vendor in the space overnight. It does not require Congress.

The third is a demonstration project. What does it actually look like when you build this right?

The Centers for Medicare & Medicaid Services, the federal agency that runs Medicare and Medicaid, has existing authority to fund AI-powered navigation tools for the people it serves. It is the same authority the agency used to expand telehealth during COVID; no new legislation required. Millions of Americans miss benefits they qualify for. Not because they don’t care. Because navigating these systems is genuinely overwhelming and nobody has ever made it easy.

Build one with the principle of dignity hardwired in from day one. No sharing of beneficiary queries with commercial third parties. Honest about what it doesn’t know. Annual independent audits, results published publicly. A real appeals process. The kill switch visible and accessible, not buried.
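Read as engineering rather than aspiration, those commitments become defaults instead of promises. A minimal sketch, with the caveat that the names and structure below are my assumptions, not anything CMS has specified:

```typescript
// Hypothetical hard-wired defaults for a public benefits navigator.
// Each field encodes one commitment from the paragraph above;
// the names and structure are my illustration, not a CMS specification.
const NAVIGATOR_POLICY = {
  shareQueriesWithCommercialThirdParties: false, // never, by construction
  admitUncertainty: true,             // say "I don't know" rather than guess
  independentAuditIntervalMonths: 12,
  auditResultsPublic: true,
  humanAppealsProcess: true,          // a real appeals path, not a dead-end form
  killSwitchVisibleByDefault: true,   // in the interface, not buried in settings
} as const;
```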

That’s the model. Accountable AI mediation, built right, visible to everyone. Every private sector platform gets compared against it. If you’ve built something trustworthy, that comparison works in your favor. It creates a standard the market can actually see. Right now it can’t.

None of this settles who ultimately controls the default position in AI-mediated economic life a decade from now. That fight is longer. But these three things start shaping how the layer gets built while it is still being built.

Portability makes genuine competition possible. Transparency makes trust legible. The government navigator shows what accountable AI mediation looks like when it’s built right.

We have always built rules for new infrastructure. Not to slow it down. Because without them, infrastructure doesn’t deliver its potential to anyone. The roads, the grid, the markets, the communications networks: every one of them required a moment where we decided what the basic operating conditions were. That decision is what turned infrastructure into shared economic value.

This is that infrastructure. The window to set the operating conditions is open right now. It won’t stay open. The patterns are forming. The defaults are being set. The switching costs are being built in.

The conversation at MIT pointed somewhere real. I took the long route, wrote the book first, tried to help people see the dynamics before arguing for the rules. The book is for companies and executives trying to navigate this shift. This piece is about the conditions that make navigating it honestly worth doing.

I’m curious what you’re seeing. And if you have a view on which of these moves first, I’d genuinely like to know.

Harry Glorikian is Managing General Partner at Scientia Ventures and a Visiting Researcher at the MIT Media Lab. His book The Invisible Interface: How AI Turns Intentions Into Actions, And Who Wins (Simon & Schuster, June 2026) is available for pre-order now. He hosts The Harry Glorikian Show and previously wrote MoneyBall Medicine and The Future You.
