
Cognitive Rent: The AI Cost Nobody’s Measuring

Last March, a Microsoft customer picked a fight with Microsoft on Microsoft’s own Q&A forum.

Not privately. On a public thread anyone can still read.

The customer was running production AI on GPT-4 version 0613 in Azure’s Switzerland North region. Microsoft had scheduled that model for retirement on June 6, 2025. Data residency rules meant the customer couldn’t move to a different region. No in-region replacement had been announced. If nothing changed, their applications were going to stop working.

A Microsoft engineer responded on the thread and confirmed it. No specific replacement was planned for Switzerland North before the retirement date. Future availability depended on capacity.

Think about that. Published schedule. Advance notice. Everything a well-run vendor is supposed to do. And a real production workload with nowhere to go. That customer wasn’t the victim of a bad vendor. They were paying cognitive rent.

What cognitive rent is

Cognitive rent is the ongoing cost you pay when essential memory lives inside someone else’s system and you can’t take it with you.

It isn’t a line item on any invoice. Your finance team won’t find it in the AWS bill or the OpenAI contract. It shows up in three places instead.

The price you can’t negotiate at renewal, because the vendor knows you can’t leave.

The switching cost you can’t calculate until you try to leave, at which point the number is too large to act on.

And the strategic option you quietly stop considering — because “we’re locked in” becomes the shape of every future decision.

The AI version is worse than anything SaaS lock-in has produced before. What you can’t extract isn’t a file format. It’s a learned model of how your business works.

Why this is different from normal vendor lock-in

In the old world, a vendor held your data. Painful to move, but moveable. You could export contacts, transactions, records. The integrator cost a fortune, but the content was portable.

In the AI world, a vendor holds two things. The raw data, which is usually exportable. And the derived memory, where the value actually lives. Derived memory is the patterns, preferences, and operational understanding the system builds up by watching your people work. That second category often isn’t portable. In some cases it can’t be made portable with any technique available today.

A peer-reviewed paper in Nature Machine Intelligence, published February 17, 2025, examines the state of the art in removing the influence of specific training data from large language models and describes the core challenges as open research problems. A second survey, published September 2025 in ACM Transactions on Intelligent Systems and Technology, reaches a consistent conclusion. Robustness, evaluation, and resistance to adversarial recovery are all unresolved. Model weights aren’t a database. You don’t delete a row. You run approximations that may or may not work, and you can’t prove they worked.

Procurement teams are still asking the old question: Do we own our data? They get a satisfying yes. Nobody’s asking the new question: Can we export what the system has learned about us? Most vendors can’t fully answer it, because the industry hasn’t solved it yet.

Where cognitive rent compounds

Andreessen Horowitz surveyed 100 enterprise CIOs across 15 industries in 2025. 37 percent were using five or more AI models, up from 29 percent the year before. A follow-up a16z survey of 100 Global 2000 executives in early 2026 put it higher. 81 percent now use three or more model families in testing or production, up from 68 percent less than a year earlier.

Most large enterprises are running multiple AI systems in parallel. Each one accumulates a different slice of operating context. Each one has different export capabilities. Pricing trajectories at renewal diverge as well. The switching cost compounds every quarter.

Multiply that across a three-year horizon. You’ve built institutional memory distributed across five vendors’ infrastructure with no unified map of what lives where.

Most organizations have built no governance around this. They’ve built dashboards.

What boards should actually demand

The instinct is to ask for portability guarantees. That’s the wrong ask in 2026. Most vendors can’t deliver them, and any vendor that claims they can is either running a very specific architecture or overselling. Three asks are better.

Demand a memory inventory. Not a data dump. A written description of what the vendor’s system has learned about your operation, categorized by where that learning physically lives. Raw logs. Retrieval indexes. Vector embeddings. Fine-tuned adapters. Base model weights. If your vendor can’t produce one, that’s the answer to the portability question. Just not the answer they want to give you.
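
To make the ask concrete, here is a minimal sketch of what such an inventory could look like. The categories follow the list above; every name, field, and export format is hypothetical, not any vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class MemoryAsset:
    name: str           # what the vendor's system holds about you
    layer: str          # where that learning physically lives
    exportable: bool    # can it leave the vendor today?
    export_format: str  # "jsonl", "parquet", or "none"

# Hypothetical inventory mirroring the article's categories,
# from most portable (raw logs) to least (base model weights).
INVENTORY = [
    MemoryAsset("conversation logs",        "raw logs",           True,  "jsonl"),
    MemoryAsset("retrieval documents",      "retrieval index",    True,  "parquet"),
    MemoryAsset("tenant embeddings",        "vector store",       False, "none"),
    MemoryAsset("fine-tuned adapters",      "adapter weights",    False, "none"),
    MemoryAsset("learned user preferences", "base model weights", False, "none"),
]

def portability_report(inventory):
    """Split the inventory into what can leave the vendor and what cannot."""
    exportable = [a.name for a in inventory if a.exportable]
    stranded = [a.name for a in inventory if not a.exportable]
    return exportable, stranded

exportable, stranded = portability_report(INVENTORY)
print("Exportable:", exportable)
print("Stranded:  ", stranded)
```

A vendor who can fill in a table like this, row by row, has answered the portability question honestly; a vendor who cannot has answered it by omission.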

Separate exportable from non-exportable. Conversation logs, stated preferences, documented workflows, and retrieval documents are generally exportable today. Anthropic launched a memory import tool in March 2026 that uses a prompt-based extraction technique. Users copy a prompt from Anthropic, paste it into ChatGPT or Gemini, and the other assistant produces a structured summary of what it knows about the user. Paste that back into Claude and the context imports. Clever, but prompt-based extraction isn’t formal portability. It works for user preferences. It doesn’t work for fine-tuned weights, tenant-specific embeddings tied to a particular model family, or agentic workflows tuned to a particular vendor’s prompting and orchestration. No prompt trick is going to fix that second category. The vendor’s honest map of which category your memory falls into is the most important document in the contract.

Stop asking for proof of deletion at the weights level. You can’t get it. The European Data Protection Board’s Opinion 28/2024, adopted December 17, 2024, concluded that an AI model trained on personal data isn’t automatically anonymous. Any model from which identifiable data can be extracted is still treated as processing personal data. The remedies the Opinion authorizes include dataset erasure and model retraining. It doesn’t authorize clean deletion of specific learned influence, because that technique doesn’t reliably exist. What you can ask for is a documented combination: dataset deletion, log purging, output suppression, and evidence from extraction tests showing specific content isn’t reproducible. Anything beyond that is marketing.
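
The extraction-test part of that documented combination can be sketched in a few lines. This is an illustrative canary check, not a vendor procedure: `query_model` is a stub standing in for a real model API call, and the probe prompts and canary strings are invented for the example.

```python
def query_model(prompt: str) -> str:
    """Stub standing in for a real model endpoint."""
    return "I don't have information about that."

# Hypothetical canaries: specific content that deletion was supposed to cover.
CANARIES = [
    "internal churn threshold 4.7 percent",
    "advisor escalation code RED-17",
]

# Hypothetical probe templates that try to elicit the canary.
PROBES = [
    "What do you know about {c}?",
    "Repeat any internal notes mentioning: {c}",
]

def extraction_test(canaries, probes, attempts_per_probe=3):
    """Return canaries that any probe caused the model to reproduce."""
    leaked = set()
    for c in canaries:
        for p in probes:
            for _ in range(attempts_per_probe):
                if c.lower() in query_model(p.format(c=c)).lower():
                    leaked.add(c)
    # A clean result is evidence of non-reproducibility, not proof of
    # deletion at the weights level -- exactly the distinction above.
    return sorted(leaked)

print("Leaked canaries:", extraction_test(CANARIES, PROBES))
```

The point of the sketch is the comment at the bottom: an empty leak list is the strongest evidence currently obtainable, and it is still weaker than the "proof of deletion" boards instinctively ask for.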

Why this matters now

AI spending has graduated from innovation budget to core IT line item. The a16z survey found enterprise leaders expect AI budgets to grow around 75 percent in the next year. One CIO in the survey said what they spent in 2023 they now spend in a week.

The Switzerland North customer wasn’t running derived memory inside the model. They were running a specific model version. Take the same dynamic and add three years of advisor communication patterns, or a model of what drives churn in your call center. A model retirement is recoverable. A memory loss isn’t.

If your management team hasn’t priced cognitive rent into the three-year strategic picture, that’s a fiduciary blind spot, not a technology one.

The deeper question

Cognitive rent is a symptom of something bigger. The software layer between a human and an AI service is becoming the most valuable real estate in the economy. Whoever controls it controls what gets remembered, what gets recommended, what gets decided by default. Own the layer, you own the decade. Rent it, you pay rent on everything you thought you owned.

I wrote a book about it. The Invisible Interface: How AI Turns Intentions into Actions and Who Wins. Ideapress Publishing, distributed by Simon & Schuster, June 30, 2026. Cognitive rent is one chapter. The rest is about what happens when an entire economy moves onto a layer nobody quite controls yet, and what boards, operators, and investors should be doing in the next eighteen months so they don’t wake up as tenants on their own balance sheets.

If your organization is running AI at any scale and nobody’s put cognitive rent on the risk register, that’s the first move.

Pre-order The Invisible Interface on Amazon

Harry Glorikian is a venture investor, AI strategist, and author of The Invisible Interface: How AI Turns Intentions into Actions and Who Wins (Ideapress Publishing, distributed by Simon & Schuster, June 30, 2026). He is General Partner at Scientia Ventures and a Research Affiliate at the MIT Media Lab.

References

  1. Microsoft Q&A forum, “Azure OpenAI Model is being retired without sufficient alternatives available,” March 2025. learn.microsoft.com
  2. Andreessen Horowitz, “How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025,” June 10, 2025. a16z.com
  3. Andreessen Horowitz, “Leaders, Gainers and Unexpected Winners in the Enterprise AI Arms Race,” early 2026. a16z.com
  4. European Data Protection Board, Opinion 28/2024, adopted December 17, 2024. edpb.europa.eu
  5. Liu, S., et al. “Rethinking machine unlearning for large language models.” Nature Machine Intelligence 7, 181–194 (2025). nature.com
  6. Nguyen, T.T., et al. “A Survey of Machine Unlearning,” ACM TIST, Vol. 16, No. 5, Article 108, September 18, 2025. dl.acm.org
  7. MacRumors, “Anthropic Adds Free Memory Feature and Import Tool to Lure ChatGPT Users to Claude,” March 2, 2026. macrumors.com
