The New AI Middle Layer in Healthcare: FDA Narrows Its Lane, OpenAI Steps In, and “Information” Turns into Care
I’ve been struggling with how to write this.
Not because the individual headlines aren’t big – they are. It’s because the technology is knitting them together faster than categories can keep up.
And I don’t expect that to slow down. I expect it to accelerate. Which is pretty insane when you think about it. Not even my techie friends can keep up.
Ok – back to the story. The story isn’t FDA or OpenAI.
It’s the new AI layer forming between health data and human decisions (and not just in healthcare, but everywhere – that’s for a later piece), and the fact that accountability is moving to whoever builds, deploys, and distributes that layer.
Here’s my read on what FDA is doing
FDA updated guidance in two buckets:
1) “General wellness” – low-risk tools that look like lifestyle support: sleep, activity, recovery, stress, habit coaching.
FDA’s message: If you stay low-risk and don’t cross into diagnosing/treating disease, FDA will often stay out of the way.
2) “Clinical decision support (CDS)” – tools that help clinicians process information: summarize, prioritize, surface guidelines, suggest what to consider.
FDA’s message: If it’s clearly supporting the clinician – and not pushing urgent, high-stakes directives – FDA may treat it more lightly. But if it’s driving time-critical decisions, expect scrutiny.
That’s not “AI is unregulated.” It’s FDA saying: we’re drawing a lane we can police.
Why the phrase “simply providing information” is misleading
The FDA Commissioner used the phrase “simply providing information.”
That sounds harmless. It’s not – IMHO.
Because in 2026, “information” is no longer neutral. AI makes it:
– personal (“based on your meds/labs/history…”)
– ranked (“this matters most”)
– directional (“do this next”)
– persuasive (tone + confidence + framing)
At that point, it’s not a brochure.
It’s guidance.
And disclaimers don’t change behavior. They change who thinks they’re insulated.
So the real question isn’t “is it information?” The real question is: how actionable is it?
– How specific is it?
– How urgent does it feel?
– How likely is a human to follow it?
– What happens when it’s wrong?
Two examples
Example A (actually wellness): “Your sleep was shorter than usual. Consider less caffeine and earlier bedtime.”
Low stakes. Lifestyle nudge. Minimal harm.
Example B (starts behaving like care): “Your blood pressure is high. You should increase your medication.”
Now you’re in medical management territory – because people will act on it clinically.
If a number looks clinical and people treat it clinically, the “wellness” label stops protecting you.
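To make the “how actionable is it?” test concrete, here’s a toy sketch in Python. Everything in it – the keyword lists, the thresholds, the routing labels – is an invented assumption for illustration, not FDA criteria or anyone’s production logic. But it shows how the two examples above land on opposite sides of the line:

```python
# Toy illustration only: a crude "actionability" triage for AI health messages.
# Every keyword list and threshold below is an invented assumption, not FDA criteria.

CLINICAL_TERMS = {"blood pressure", "medication", "dose", "labs", "diagnosis"}
DIRECTIVE_TERMS = {"should", "increase", "stop", "start"}
URGENCY_TERMS = {"now", "immediately", "urgent", "today"}

def actionability_score(message: str) -> int:
    """Score 0-3: one point each for clinical specificity, a directive
    ('do this next'), and urgency framing."""
    text = message.lower()
    score = 0
    score += any(t in text for t in CLINICAL_TERMS)   # how specific is it?
    score += any(t in text for t in DIRECTIVE_TERMS)  # how directional is it?
    score += any(t in text for t in URGENCY_TERMS)    # how urgent does it feel?
    return score

def triage(message: str) -> str:
    """Route by score: low actionability stays in the wellness lane;
    anything higher needs a human in the loop before it drives action."""
    return "wellness lane" if actionability_score(message) <= 1 else "needs clinician review"

example_a = "Your sleep was shorter than usual. Consider less caffeine and earlier bedtime."
example_b = "Your blood pressure is high. You should increase your medication."

print(triage(example_a))  # -> wellness lane
print(triage(example_b))  # -> needs clinician review
```

A real system would need clinical review and evaluation, not string matching. The point is just that “actionability” is testable – it doesn’t have to be vibes.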
Now add the second signal: OpenAI moved into the health interface layer
This is what I think most people are underestimating.
OpenAI didn’t just “ship a feature.” It moved into the interface between health data and decisions – consumer-side and enterprise-side.
That matters because the interface layer becomes the place where:
– trust accumulates
– habits form
– workflows get rewritten
– value concentrates
The EHR stored data.
This new layer interprets it and nudges action.
That’s a different power dynamic.
And if you’re a health system, a systems vendor, or a data provider thinking “this is just another vendor,” you’re missing the strategic shift. The winners in this era are the ones who control the layer that turns raw data into decisions. Many players just got squashed and may not even realize it. (more on this later)
The upside is real – if we do this right
Let’s not pretend healthcare is working perfectly without this. If you are – or ever become – a patient, you know it’s problematic, to say the least.
There are legitimate positives to these changes:
– Patients understand their records better and can show up prepared.
– Clinicians can get relief from the administrative sludge.
– Navigation improves (right care, right time, less chaos).
– Guideline adherence can get more consistent – if outputs are grounded and monitored.
That’s the optimistic path: better access, less friction, fewer preventable failures.
The risk is also real – because influence scales faster than governance
The risk isn’t “AI makes mistakes.” Of course it does.
The risk is scaling a system that shapes decisions while pretending it’s “just information.”
That leads to predictable failure modes:
– confident errors that steer behavior
– incomplete records that lead to wrong conclusions
– drift over time as underlying models change
– over-reliance (patients and clinicians)
– unclear accountability when harm shows up
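On that last failure mode – unclear accountability – one minimal governance primitive is a decision log: record what the model saw, what it said, who saw it, and what they did with it. Here’s a sketch, with field names that are my assumptions rather than any standard schema:

```python
# Sketch of a decision-log record for an AI middle layer. Field names are
# illustrative assumptions, not a standard healthcare audit schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    model_version: str    # which model/prompt version spoke (traces drift)
    context_summary: str  # what data the model actually saw (traces incomplete records)
    output_text: str      # the exact guidance shown to a human
    shown_to: str         # "patient" or "clinician"
    human_action: str     # "followed" / "modified" / "ignored" (traces over-reliance)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = DecisionLogEntry(
    model_version="med-assist-2026-01",  # hypothetical name
    context_summary="BP readings present; current med list missing",
    output_text="Your blood pressure is high. You should increase your medication.",
    shown_to="patient",
    human_action="followed",
)
print(entry)
```

The schema itself doesn’t matter. What matters is that every failure mode on the list above maps to a field someone can audit after the fact.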
And here’s the punchline:
When FDA narrows its lane, accountability doesn’t disappear.
It migrates – to:
– health systems (procurement, deployment, monitoring, escalation)
– payers/employers (distribution at scale)
– platforms (interface incentives)
– courts (when governance is thin)
If you’re still asking “is it regulated?” let me reframe…
The question is: is it governable?
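What does “governable” look like in practice? At minimum: the deployer can intercept high-stakes outputs, escalate them to a human, and shut the layer off. Here’s a deliberately toy sketch – hypothetical controls, not any vendor’s real product:

```python
# Toy illustration of "governable". All controls here are hypothetical.
def governable_delivery(message: str, needs_review: bool, kill_switch_on: bool) -> str:
    """A deployment is governable if a human can intercept, escalate, and shut off."""
    if kill_switch_on:   # can the health system turn the layer off entirely?
        return "blocked: AI layer disabled pending review"
    if needs_review:     # can high-stakes output be held for clinician sign-off?
        return "held for clinician sign-off"
    return "delivered"   # the low-stakes path stays frictionless

print(governable_delivery("Consider an earlier bedtime.",
                          needs_review=False, kill_switch_on=False))
# -> delivered
print(governable_delivery("You should increase your medication.",
                          needs_review=True, kill_switch_on=False))
# -> held for clinician sign-off
```

If a deployment has none of those hooks, it may be regulated and still not governable.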
One last point – and then I’ll stop (for now)
I’m intentionally not going deeper here, even though there’s a lot more to say.
In the next phase, context becomes the whole game: what data the model sees, what it doesn’t, how it’s prompted, how it’s evaluated, and the workflow environment it operates inside. The same model can be safe in one context and reckless in another.
That deserves its own piece (or several). This one is just about the structural shift happening now.
The question we should stop dodging
The question isn’t “will FDA regulate this?”
It’s: when this AI middle layer is wrong, who owns it – and what evidence justified putting it in the loop?
Because the technology will accelerate. The stitching will get faster.
The only choice is whether trust and governance scale with it.
More to come later…

