
You’re Paying Rent on Your Own Intelligence. You Just Don’t Know It Yet.

I’ve rewritten this piece five or six times, and I apologize for the length. Every time I thought I had it right, I realized I wasn’t being complete or honest about what’s actually happening. The dynamics are moving fast and the layers keep revealing themselves. So here’s where I’ve landed. I’m sure there’s more to say. I’m deliberately leaving pieces out so this doesn’t turn into a white paper. But I believe the core of it holds, and I think it matters.

I’ve had the pleasure (NOT) of reading vendor contracts on and off for 30 years. I know what lock-in looks like. I watched it happen in healthcare with electronic health records. I watched it happen in enterprise software with ERP systems. Every time, the pattern is the same. By the time you realize how deep you are, the cost of leaving is already higher than the cost of staying.

The AI version is going to be different. Not necessarily worse across the board. But different in ways that matter, and different in ways most of the current conversation gets wrong.

Parallels surveyed 540 IT professionals across the U.S., U.K., and Germany for their 2026 State of Cloud Computing report. 94% of organizations are now concerned about vendor lock-in. Nearly half said they’re very concerned. But here’s what caught my attention. Most of that concern is still about the old kind. Contractual terms, migration headaches, retraining staff on a new interface. That stuff is real. It’s also familiar. We’ve been dealing with it for decades.

The new kind is different. And almost nobody has a name for it yet.

What Most People Get Wrong About AI Portability

Let me start with what a lot of people may get wrong, because I got it wrong in earlier drafts of this piece myself.

Most of what enterprise AI teams have built today is more portable than most people think. If you’re running AI agents, the instructions that tell those agents what to do are typically markdown files, system prompts, custom configurations, workflow definitions. You wrote those. They’re text in your repository. They go with you. Your document knowledge base, what the industry calls retrieval-augmented generation or RAG, is yours too. A friend of mine describes it simply: copy, paste, run. Your documents get indexed, the model retrieves the relevant information at query time, and the knowledge stays in a layer you can see, manage, and move. And as Anthropic’s Model Context Protocol gets adopted by OpenAI and Google, even the tool connections are becoming standardized.
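To make the “copy, paste, run” point concrete, here is a toy sketch of the RAG pattern: the knowledge layer is plain data you own, and retrieval happens at query time. Everything here is illustrative; production systems use embedding-based vector search rather than the keyword overlap standing in for it below.

```python
# Toy RAG sketch: your documents live in a layer you can see, manage,
# and move. Keyword overlap stands in for real embedding-based retrieval.

def tokenize(text: str) -> set[str]:
    """Lowercase word set: the crudest possible document representation."""
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the context-plus-question prompt for whatever model you use."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
print(build_prompt("What is the refund policy?", docs))
```

The design point is that `docs` is just a list of text you control. Swap the model behind `build_prompt` and nothing about the knowledge layer changes.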

If you fine-tuned a model with a vendor and kept your training data, you can retrain on a new base model. It’s not free, but it’s not catastrophic.

A CTO reading this might say: I swapped models last month for half my use cases and nothing broke. Where’s the lock-in?

Fair question. Here’s where.

The Data That Stays Behind

Every interaction your team has with an AI system generates data that stays on the vendor’s platform:

  • Conversation logs showing what your organization asks about and how people phrase their questions
  • Usage patterns showing which tools get used, how often, by which teams
  • Prompt libraries and templates your people refined through the vendor’s interface rather than saving in their own repositories
  • Explicit memories the system was told to save through persistent memory features
  • Custom configurations and parameters adjusted through the vendor’s UI

None of this is the system autonomously “understanding” your organization. That’s not how current AI works. These are raw logs and records. But they’re raw material with real value. They show how your organization actually uses AI, what it asks for, what works, what doesn’t. If you leave, you lose access to all of it unless your contract specifically addresses it. And if you wanted to use that data to get a new platform up to speed, it would significantly shorten the ramp-up. Without it, your new system starts cold. Your people start over explaining things they already explained a thousand times.

I’ve started calling this cognitive rent. The ongoing cost an organization pays because someone else holds the accumulated operational record it depends on to be productive.

But I think even that understates it. What’s forming inside these vendor platforms isn’t just a collection of data. It’s the beginning of an intelligence layer that your organization increasingly runs on. Your agents, your knowledge, your workflows, your team’s accumulated corrections and preferences. All of it, taken together, is becoming the operating layer through which your organization functions. And it’s forming inside someone else’s infrastructure.

Own the memory, own the customer. Rent the memory, rent your future.

Where Every Vendor Is Heading

Now here’s where it gets more serious.

Today, the data accumulating inside vendor platforms is raw. Logs, records, usage patterns. The systems aren’t turning that data into models of how your organization behaves. But that’s explicitly where every vendor is heading. OpenAI’s persistent memory evolved from “remember what I tell you” in 2024 to “automatically reference all past conversations” by mid-2025. Anthropic and Google shipped similar features. Each step makes the system more useful. Each step also means more context accumulating inside the vendor’s infrastructure.

The next step, and it’s clearly being built toward, is systems that identify patterns across users, across time, across thousands of interactions, surfacing insights about your organization that nobody explicitly programmed. When that arrives, the lock-in changes character. You’re no longer losing raw logs when you leave. You’re losing organizational intelligence that took months or years to accumulate and that can’t be reproduced by handing a new vendor your documents and config files.

That capability isn’t here yet at enterprise scale. But the raw material for it is accumulating right now, inside your vendor’s platform, with every interaction your team has.

The Multi-Agent Complexity Trap

There’s another place this shows up that’s easy to miss. A single agent with a markdown config file is portable. An ecosystem of twenty coordinated agents with shared state, custom orchestration, and workflows that depend on how those agents hand work to each other is a different problem entirely. Moving one agent is a weekend project. Moving twenty that depend on each other is a full rebuild.

The Forced Upgrade Cycle

But here’s the thing that really changed my thinking, and it’s the reason I kept rewriting this piece.

Every previous generation of enterprise lock-in involved software that worked fine even when it was outdated. Your ERP from 2015 still processes invoices correctly. Your CRM from 2018 still tracks contacts. Outdated software misses features. It doesn’t get dumber.

An AI intelligence layer is different.

I’ve had this conversation with Raffi Krikorian on my podcast and with researchers at the MIT Media Lab. Base model capabilities improve measurably with each generation. If your competitor upgrades to a newer model and you don’t, their agents aren’t just faster. On many tasks, they’re producing higher quality reasoning, better analysis, more nuanced decisions. The gap between a current frontier model and a model from 18 months ago isn’t “missing a feature.” It’s a difference in the quality of thinking the system can do. On domain-specific tasks where you’ve fine-tuned, your older model may still outperform a newer generic one. But the base capability floor keeps rising, and competitors who retrain on newer foundations get both the domain knowledge and the improved reasoning.

That creates a forced upgrade pressure that traditional enterprise software never had. And every upgrade cycle either deepens the lock-in or tests your portability, depending on how you set things up from the beginning.

The EHR Cautionary Tale

I’ve spent three decades in healthcare watching the EHR version of this play out. A peer-reviewed study in Applied Clinical Informatics found that EHR transitions can cost hundreds of millions of dollars for mid-sized systems and over $1 billion for larger ones. Healthcare has had interoperability standards for years. HL7, FHIR, the 21st Century Cures Act. They solved data portability. They never solved context portability. The AI version will follow the same path.

The Depth vs. Portability Tradeoff

Morgan Stanley understood the architecture question before most. They spent five years curating 100,000 internal research documents before deploying a single agent to their financial advisors. Memory architecture first. Agent capability second. 98% of advisors adopted it within months. By keeping their knowledge in a curated layer they controlled, they built something portable. That was the right first move.

But it’s not the full picture. Staying in the safe, portable, RAG-only zone is a real option. Your competitor who goes deeper, who fine-tunes, who lets persistent memory accumulate, who builds complex multi-agent ecosystems, is getting more from their AI than you are. Staying portable means staying shallow. Staying shallow can mean falling behind. That’s the pressure that pushes organizations toward deeper cognitive rent even when they can see the trap coming.

This Is a Board Conversation, Not a Help Desk Ticket

And here’s what I keep seeing that worries me. Most organizations treat this as a technology question. They push it to the IT group or the data engineering team. But the question of who holds the intelligence layer your organization depends on isn’t a technical one. It’s a strategic one. It affects your negotiating leverage at renewal, your competitive position if a rival upgrades to better models before you do, and your ability to move if the landscape shifts. That’s a board conversation. Not a help desk ticket.

Who Gets Hit Hardest

I should be honest about who I’m talking to here. The dynamics I’m describing hit hardest at large organizations. They have the resources to self-host, negotiate custom contracts, and build independent logging systems. They also accumulate the deepest context and build the most complex multi-agent ecosystems. They have the tools to manage cognitive rent. They also have the most cognitive rent to manage.

But the organizations I actually worry about most are the mid-sized ones. Fifty to five hundred people. They’re deep enough into AI to accumulate meaningful usage data, conversation logs, refined prompts, and custom workflows inside vendor platforms. But they may not have the resources to self-host, the technical teams to build independent logging, or the purchasing power to negotiate custom contract terms. They’re on standard agreements. Take it or leave it. Cognitive rent accumulates. Leverage to manage it doesn’t.

If you’re running a smaller company, the underlying risk is the same but the exposure may be lower, because you’re not deep enough to accumulate heavy context. Your best move is the simplest one: keep your knowledge in files you control and don’t let your organizational context live exclusively inside a vendor’s system. Though as I write this, I find myself questioning whether even that is possible for most.

What You Can Actually Do About It

So what do you actually do? There’s no clean answer. Some cognitive rent is the rational price of AI that actually works. The question is: how much are we accepting, what value are we getting in return, and do we retain enough leverage to renegotiate?

Start with what you can control. Keep your agent instructions, workflow configs, and business logic in well-structured files in your own repositories. Not just in the vendor’s platform. That’s your intellectual property and it’s genuinely portable.
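One way to operationalize this: treat your instructions file as the asset and the vendor payload as something derived from it at deploy time. The sketch below is hypothetical; the envelope shapes and vendor names are made up for illustration, not real API schemas.

```python
# Keep the agent definition as plain text you own; derive the
# vendor-specific payload at deploy time. The adapter is the throwaway
# part; the instructions file is the asset that moves with you.

AGENT_INSTRUCTIONS = """\
You are the claims-triage agent.
- Classify each claim as routine, review, or escalate.
- Cite the policy section for every classification.
"""

def to_vendor_payload(instructions: str, vendor: str) -> dict:
    """Wrap the same instructions in whichever envelope a vendor expects.
    (Envelope shapes here are illustrative, not real API schemas.)"""
    if vendor == "chat_api":
        return {"messages": [{"role": "system", "content": instructions}]}
    if vendor == "agent_api":
        return {"system_prompt": instructions}
    raise ValueError(f"no adapter for {vendor}")

a = to_vendor_payload(AGENT_INSTRUCTIONS, "chat_api")
b = to_vendor_payload(AGENT_INSTRUCTIONS, "agent_api")
assert a["messages"][0]["content"] == b["system_prompt"]  # one asset, two envelopes
```

In practice `AGENT_INSTRUCTIONS` would live in a markdown file in your repository, version-controlled like any other business logic.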

Own your document layer the way Morgan Stanley did. Build and curate your knowledge base on infrastructure you control.

Then do the thing almost nobody is doing yet. Start logging how your team interacts with AI, independently of the vendor. The corrections they make to AI outputs. The prompts that work well. The decisions where the AI was overridden and why. Most organizations aren’t capturing these signals at all right now. They edit an AI draft in their own document and the correction never flows back to anyone. Those signals are being lost entirely. Not captured by the vendor. Not captured by you. That has to change. If you ever need to switch or upgrade, that behavioral record becomes your retraining dataset. It won’t perfectly reproduce what you had. But it’s the difference between rebuilding from something and rebuilding from zero.
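What would independent logging even look like? A minimal sketch, with an illustrative record schema I’m inventing here (field names are assumptions, not a standard): append one JSON line per interaction capturing what was asked, what the model produced, what a human actually shipped, and why it was changed.

```python
import json
import time
from pathlib import Path

def log_interaction(log_path: Path, prompt: str, model_output: str,
                    final_version: str, override_reason: str = "") -> dict:
    """Append one vendor-independent record: the prompt, the model's
    output, the version a human actually used, and the reason for any
    override. The log file lives in your infrastructure, not the vendor's."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "model_output": model_output,
        "final_version": final_version,
        "corrected": model_output != final_version,
        "override_reason": override_reason,
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: an AI draft was edited before sending, and the edit is captured
# instead of being lost in someone's document.
log = Path("interaction_log.jsonl")
rec = log_interaction(log, "Draft a renewal reminder email",
                      "Dear valued customer...",
                      "Hi Sam, your renewal is coming up...",
                      override_reason="Tone too formal for this account")
print(rec["corrected"])
```

The schema matters less than the habit: every correction that today evaporates into a private document instead becomes a row in a dataset you own.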

If you’re fine-tuning with a vendor, retain the training data and negotiate the right to use it elsewhere before you start. The fine-tuned model may not be portable. The data can be.
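Retaining the data is straightforward if you keep it in an open, vendor-neutral format from day one. A chat-style JSONL of message pairs is a common denominator most fine-tuning pipelines can ingest or be converted from; the example content below is invented, and exact field names vary by vendor.

```python
import json
from pathlib import Path

# Illustrative training pairs in a chat-style JSONL format. The point is
# that this file lives in YOUR repository, so retraining on a different
# base model starts from data, not from zero.
examples = [
    {"messages": [
        {"role": "user", "content": "Summarize this claim denial letter."},
        {"role": "assistant", "content": "The claim was denied because..."},
    ]},
]

out = Path("training_data.jsonl")
with out.open("w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Round-trip check: the data you keep is the data you can take elsewhere.
reloaded = [json.loads(line) for line in out.read_text().splitlines()]
assert reloaded == examples
```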

And pull your AI vendor contracts right now. Find the clause that covers what happens to conversation logs, usage data, and any derived outputs at renewal. Then ask the harder question: even if the contract says you own it, does the architecture actually let you extract it in a usable format?

The Open-Source Paradox

I talk to a lot of founders through my work, and some are taking a different path entirely. Running open-source models like Llama, Mistral, or Qwen on their own infrastructure. No landlord. Real freedom. But real costs too. You carry the full operational burden. On general-purpose tasks, open-source models are closing the gap with frontier vendor models, though it varies a lot by use case. And here’s the paradox: if you fine-tune a local model and a better base model comes out six months later, your fine-tuning doesn’t transfer. You retrain from scratch. You own the building. But it’s aging. And with an intelligence layer, unlike traditional software, an aging building doesn’t just lack features. On many tasks, it produces lower quality output than the competition. That changes the math.

The Missing Piece: A Portable Context Layer

The missing piece, and I think this is both a risk and a serious business opportunity, is a portable context layer. Something that sits between your organization and whatever model processes the work. A system that captures and stores your operational signals, corrections, prompt refinements, and usage patterns in a format you control, independent of any vendor. When you upgrade models, the context layer stays. When you switch vendors, it moves with you. Early consumer tools are exploring this space. Nobody has built the enterprise-grade version. That gap is real. But it is forming, I promise, in ways that are not obvious.

What Happens If the Architecture Itself Changes

One more thing. Everything I’ve described applies to the current generation of large language models. Transformer-based systems that predict the next token. If the field moves toward entirely different approaches—world models that simulate environments rather than predict text—the whole picture changes. Fine-tuned transformer weights have zero portability to a world model. Even RAG knowledge would need re-indexing against entirely different retrieval systems. That shift could dissolve current lock-in or deepen it. We don’t know which. But building your entire strategy around today’s architecture while ignoring the possibility that the architecture itself may change? That’s its own risk.

The Bottom Line

Cognitive rent isn’t a problem you solve. It’s a dynamic you manage. The organizations that go in deliberately—with their agent configs in their own repositories, their knowledge on their own infrastructure, their operational signals captured independently, and their contracts negotiated from a position of awareness—will have options. They can go deep and still retain leverage.

The ones that sleepwalk into it will discover the cost at the worst possible moment: renewal.

The Personal Version Is Coming

And everything I’ve described here is about organizations and their vendors. There’s a bigger version of this coming that almost nobody is talking about yet. The same dynamic is forming at the personal level, for every individual who uses AI as part of their daily life. Your health questions, financial decisions, communication preferences, daily rhythms. Three years of that accumulating inside a single platform. The organizational version at least has contracts and procurement teams. The personal version has nothing protecting it. No negotiating leverage. No alternative architecture. The terms of service are the terms of service.

I’ve come to think of what’s forming, for organizations and individuals alike, as a Personal Operating Layer. A continuous intelligence layer that mediates how you interact with the digital world. Whoever controls that layer controls the defaults, the data, and ultimately the leverage. That’s the argument at the center of my next book, and it’s bigger than any single vendor contract.

But start where you are. What does your renewal clause actually say?

Harry Glorikian is the author of The Invisible Interface: How the AI Layer Will Upend the Economics of Everything (Ideapress Publishing / Simon & Schuster, May 2026).

Sources

[1] Parallels, “2026 State of Cloud Computing Survey,” 540 IT professionals surveyed Nov 2025, published Feb 2026. Link

[2] Andreessen Horowitz, “How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025,” 100 CIOs across 15 industries, published June 2025. Link

[3] Bowman et al., “Transitions from One Electronic Health Record to Another: Challenges, Pitfalls, and Recommendations,” Applied Clinical Informatics, PMC, 2020. Link

[4] OpenAI, “Morgan Stanley,” case study. Link

[5] OpenAI, “Memory and New Controls for ChatGPT,” updated June 2025. Link

[6] Google, “Gemini Switching Tools” (memory and chat history import), announced April 2025. Link

[7] Prabhakar, A.V., “AI-Native Memory and the Rise of Context-Aware AI Agents,” June 2025. Link

[8] Anthropic, Model Context Protocol (MCP), open-sourced Nov 2024; adopted by OpenAI and Google DeepMind, March–April 2025. Link
