Who Owns What Your AI Remembers?
A piece in The Wall Street Journal this week carried the headline: “Workers Are Afraid AI Will Take Their Jobs. They’re Missing the Bigger Danger.” The author, Matthew Call, an associate professor of management at Texas A&M University, argues that employees are so fixated on whether AI will replace them that they’re overlooking a more immediate threat. Enterprise AI systems — Microsoft Copilot embedded in your Office suite, Salesforce Einstein woven into your CRM — are capturing how employees work. Every prompt, every workflow, every problem-solving sequence gets absorbed into company-owned infrastructure. When that employee leaves, their institutional knowledge stays behind. The company may not need them anymore, or it might simply hand their methods to a less experienced replacement.
Call recommends that workers use personal AI tools — ChatGPT, Claude, Gemini — for their strategic thinking, keeping their best insights portable and off the corporate ledger.
It’s a thought-provoking argument. And I think he’s identified something real. But having spent years in healthcare and life sciences, and the last eighteen months writing a book about exactly this dynamic, I believe the conversation needs to go further — and the technical mechanism needs to be more precise if leaders are going to act on it.
The AI Isn’t Training Itself on You. But Your Employer Is Still Capturing Your Expertise.
Call writes that enterprise AI systems “can capture everything you do at work and use that information to train itself.” This is the part that deserves a closer look.
I checked the documentation from the two platforms he specifically names. Microsoft is explicit: prompts, responses, and data accessed through Microsoft Graph are not used to train the foundation models behind Copilot. Salesforce built an entire architecture — the Einstein Trust Layer — around the guarantee that their large language models don’t retain customer data. Both companies are unambiguous on this point.
So the AI isn’t “learning how you think” in the way Call describes. The foundation models aren’t being retrained on your problem-solving patterns.
But here’s what is happening — and in my view, it’s actually more consequential than model training.
Your employer doesn’t need the model to learn from you. They just need to log what you did. Microsoft stores every Copilot interaction inside the organization’s tenant. Those prompts and responses are searchable, auditable, and retainable through tools like Microsoft Purview and eDiscovery. Your company can review every question you asked, every document you drafted, every workflow you triggered. That isn’t model training. That’s institutional memory capture — and it absolutely persists after you walk out the door.
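To make the distinction concrete, here is roughly what that capture looks like in code. This is a sketch of my own; the record format and field names are assumptions for illustration, not Microsoft's actual audit schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record format -- these field names are invented for this
# sketch, not Microsoft's actual audit schema. The point: capture needs
# no model training, only retention of interaction records in the tenant.
@dataclass
class CopilotInteraction:
    user: str
    timestamp: datetime
    prompt: str
    response: str
    documents_touched: list[str]

def knowledge_left_behind(log: list[CopilotInteraction],
                          departed_user: str) -> list[CopilotInteraction]:
    """Every question asked and answer produced stays queryable after
    the employee leaves. No weights were updated; nothing was 'learned'."""
    return [rec for rec in log if rec.user == departed_user]
```

Nothing in that sketch touches a model's weights. It is plain retention, and retention is all the capture requires.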
And the capture goes further still. Companies are actively building knowledge infrastructure on top of what employees produce. Boston Consulting Group (BCG), for example, has had its employees build over 18,000 internal custom GPTs — lightweight AI assistants wrapped around approved firm content and instructions. BCG now runs a release pipeline that includes red-teaming, data protection review, and legal sign-off before those tools go firm-wide. When a consultant builds one of those agents using the methods they’ve refined over a career, that expertise becomes a reusable company asset. It doesn’t matter that the foundation model wasn’t “trained” on their work. Their knowledge has been codified into company-owned infrastructure just the same.
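If you squint, that release pipeline is just a gate. Here is a minimal sketch of the pattern, with every check stubbed out, since BCG's actual process is obviously not public and these function names are my own invention:

```python
from typing import Callable

# Stubbed review gates -- the real reviews are human processes; these
# placeholder checks just read flags set by earlier review stages.
def passes_red_team(agent: dict) -> bool:
    return agent.get("red_team_findings", ["unreviewed"]) == []

def passes_data_protection_review(agent: dict) -> bool:
    return agent.get("uses_approved_content_only", False)

def has_legal_signoff(agent: dict) -> bool:
    return agent.get("legal_signoff", False)

GATES: list[Callable[[dict], bool]] = [
    passes_red_team, passes_data_protection_review, has_legal_signoff]

def release_firm_wide(agent: dict) -> bool:
    """An employee-built agent ships only after every gate passes."""
    return all(gate(agent) for gate in GATES)
```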
The same dynamic unfolds anywhere companies build retrieval-augmented generation (RAG) systems that index meeting transcripts, Slack threads, and employee-generated documents into searchable knowledge graphs. The AI doesn’t need to learn from you. The system just needs to organize what you’ve already produced into a form that other people — or other agents — can access after you’re gone.
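A stripped-down index makes the point. Production systems use dense embeddings and a vector database; the bag-of-words stand-in below keeps this sketch dependency-free, and every name in it is illustrative:

```python
import math
from collections import Counter

# Toy retrieval index over employee-generated text. Real RAG systems use
# dense embeddings and a vector store; bag-of-words cosine similarity
# stands in here so the sketch runs with no dependencies.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class KnowledgeIndex:
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, source: str, text: str) -> None:
        # Meeting transcripts, Slack threads, drafts: each becomes a
        # retrievable fragment of how its author worked.
        self.docs.append((source, embed(text)))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [source for source, _ in ranked[:k]]
```

Feed it a departed employee's transcripts and drafts, and retrieval keeps working exactly as well as it did the day they left.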
Healthcare Makes This Concrete
Physicians have spent careers building what you might call diagnostic intuition — the ability to look at a constellation of symptoms, lab values, and patient history and know what to pursue next. That expertise has always walked out the door when a clinician retired or changed systems.
Now, enterprise AI deployed inside electronic health record systems can capture elements of that clinical reasoning in real time. Not because the model is training on the physician’s thought process, but because the system logs every order, every pathway, every decision point. Layer a knowledge graph on top of those logs, and you have a navigable map of how your best clinician approaches complex cases. The next physician who walks in doesn’t need twenty years of experience. They need access to the system.
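Here is a deliberately simplified sketch of that layering. The decision labels and graph structure are hypothetical rather than any real EHR schema; what matters is how little machinery it takes to turn logged sequences into a navigable map:

```python
from collections import Counter, defaultdict

# Hypothetical sketch: turn logged clinical decision sequences into a
# navigable graph. Labels are invented; no real EHR schema is implied.
class DecisionGraph:
    def __init__(self) -> None:
        self.next_steps = defaultdict(Counter)  # step -> what follows, how often

    def ingest(self, pathway: list[str]) -> None:
        """One logged case: an ordered list of decision points."""
        for step, following in zip(pathway, pathway[1:]):
            self.next_steps[step][following] += 1

    def most_common_next(self, step: str) -> str | None:
        """What did the expert usually do after this decision point?"""
        options = self.next_steps.get(step)
        return options.most_common(1)[0][0] if options else None

g = DecisionGraph()
g.ingest(["elevated troponin", "order ECG", "cardiology consult"])
g.ingest(["elevated troponin", "order ECG", "cardiology consult"])
g.ingest(["elevated troponin", "order ECG", "serial troponins"])
print(g.most_common_next("order ECG"))  # -> "cardiology consult"
```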
That’s extraordinary from a patient-care standpoint. But it raises a question that every industry will eventually face: who owns the captured intelligence? The health system that deployed the software? The vendor that built the knowledge graph? Not the physician who generated it — at least not under most current agreements.
The Numbers Tell a Story
Recent research suggests employees already sense this imbalance, even if they can’t name it. A BlackFog study released last month found that 86% of employees now use AI tools weekly for work. Nearly half — 47%, according to Netskope — are doing it through personal accounts their companies can’t monitor. And BlackFog’s data shows this isn’t just a rank-and-file phenomenon: 69% of C-suite executives and 66% of directors and senior VPs believe speed outweighs privacy or security when it comes to AI tool use.
In my experience, when smart people consistently route around a system, the system is the problem, not the people. Enterprise tools may be too restrictive, too slow, or — and this is Call’s genuine insight — employees intuit that everything they do inside those platforms becomes organizational property. So they take their best thinking elsewhere.
IBM’s most recent Cost of a Data Breach study shows this workaround has costs of its own: shadow AI was responsible for one in five breaches, adding significant expense to each incident. But banning personal tools isn’t the answer. That just drives the behavior deeper underground.
What This Means for Leaders
I’ve spent the last eighteen months writing about what I call cognitive rent — the ongoing cost an organization pays because its operational memory lives inside systems it doesn’t fully control. The more useful those systems become, the harder it is to leave — for the company and the employee. I call that the sovereignty paradox.
These aren’t abstract ideas. Gartner projects that by 2027, 70% of new employment contracts will include digital-persona clauses — provisions governing how companies can use, deploy, and, upon exit, delete an AI-shaped version of you.
The companies that figure out the boundary now between institutional knowledge and personal expertise will attract and retain the best talent. The companies that treat this as an afterthought will find their best people doing exactly what Call recommends: keeping their real thinking off the corporate ledger.
In my view, the path forward isn’t adversarial. It’s architectural. Organizations need clear separation between what the company captures and what the individual keeps, with transparent rules for both. Not a policy memo. An engineering specification.
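What might that specification look like? One plausible starting point is a machine-readable boundary policy that both sides can audit. Every field below is an assumption about what such a policy would need to encode, not an existing standard:

```python
from dataclasses import dataclass, field

# One way such a specification could start -- illustrative only. Each
# field is an assumption about the company/individual boundary, not a
# standard or any vendor's product.
@dataclass
class KnowledgeBoundaryPolicy:
    retained_by_company: list[str] = field(default_factory=lambda: [
        "work product",           # documents, code, deliverables
        "interaction audit logs", # who asked what, and when
    ])
    retained_by_individual: list[str] = field(default_factory=lambda: [
        "personal prompt library",
        "methods developed before employment",
    ])
    deleted_on_exit: list[str] = field(default_factory=lambda: [
        "digital-persona models",  # per the Gartner-style clauses above
    ])
    exit_review_required: bool = True  # both parties audit the split
```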
Professor Call identified a real problem. He got the mechanism slightly wrong — the AI isn’t secretly training itself on you. But your employer is building an institutional brain out of everything you produce, and in most organizations, nobody has negotiated who owns which parts.
That’s the conversation leadership teams need to start having. And it needs to happen now, not after the next round of talent walks out the door.
These are among the questions I explore in my upcoming book, The Invisible Interface: How the AI Layer Will Upend the Economics of Everything, coming this spring from Ideapress Publishing.
References
- Matthew Call, “Workers Are Afraid AI Will Take Their Jobs. They’re Missing the Bigger Danger,” Wall Street Journal, February 2026.
- Microsoft, “Data, Privacy, and Security for Microsoft 365 Copilot,” Microsoft Learn. learn.microsoft.com
- Salesforce, “Trusted AI: The Einstein Trust Layer.” salesforce.com
- Alicia Pittman and Scott Wilder, “BCG Execs: AI Across the Company Increased Productivity,” Computerworld, February 2025. computerworld.com
- BlackFog, “Shadow AI Threat Grows Inside Enterprises,” January 27, 2026. blackfog.com
- Netskope, “Cloud and Threat Report: 2026.” netskope.com
- IBM, “Cost of a Data Breach Report 2025,” July 2025. ibm.com
- Gartner, “Top Predictions for IT Organizations and Users in 2025 and Beyond,” October 2024. gartner.com
Harry Glorikian is a General Partner of Scientia Ventures and author of MoneyBall Medicine and The Future You. His next book, The Invisible Interface, publishes Spring 2026 from Ideapress Publishing.