What Regulators Get Wrong About AI in Healthcare

Nina Blackwood

March 7, 2026

The Rush to Regulate

AI in healthcare is high-stakes: diagnosis, treatment suggestions, and patient data. Regulators are under pressure to “do something”: protect patients, ensure safety, and keep up with innovation. But much of what gets proposed or enacted misses how AI is actually used in clinics and what would genuinely reduce harm. Here’s what regulators often get wrong, and what would actually help.

Treating AI as a Single Category

“AI in healthcare” isn’t one thing. It’s decision support that suggests a diagnosis, algorithms that flag at-risk patients, tools that help with scheduling or billing, and experimental systems still in research. Regulating them all the same way is a mistake. A chatbot that answers patient questions is different from a system that recommends a treatment; a triage tool is different from a diagnostic aid. Regulators need to tier by risk: where the output directly affects care, evidence and oversight should be strict; where it’s administrative or low-stakes, the bar can be lower. One-size-fits-all rules either block useful tools or leave dangerous ones under-scrutinized.

Over-Indexing on Explainability

“Explainable AI” is often demanded so that a doctor or patient can “understand” why the system said what it did. Explainability matters for debugging and trust, but it’s not the same as safety or effectiveness. Some of the most useful models are hard to reduce to simple rules; requiring human-interpretable explanations can force vendors to use weaker models or fake explanations. What actually protects patients is validation: does the system work in real settings, for the populations it’s used on, and what’s the evidence? Regulators should focus on outcomes and evidence, not only on whether the model is “explainable.”

Data and Bias in the Blind Spot

AI in healthcare is only as good as the data it’s trained on. Biased or unrepresentative data leads to worse outcomes for underrepresented groups. Regulators often focus on the algorithm or the device and pay too little attention to data provenance, how training data was collected, and whether performance is validated across demographics. Requiring bias audits and demographic breakdowns of performance would do more than generic “fairness” principles. So would rules that make it clear who’s responsible when a model fails for a subgroup that wasn’t well represented in training.
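To make “demographic breakdowns of performance” concrete, here is a minimal sketch of what a bias audit might compute for a binary classifier: sensitivity and specificity per subgroup. The record format and field names are illustrative assumptions, not from any standard or regulation.

```python
# Minimal sketch of a demographic performance breakdown: given model
# predictions and ground-truth labels tagged with a subgroup attribute,
# report sensitivity and specificity per subgroup. The record fields
# ("group", "label", "pred") are hypothetical, chosen for illustration.
from collections import defaultdict

def subgroup_performance(records):
    """records: iterable of dicts with keys 'group', 'label', 'pred' (labels/preds are 0/1)."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for r in records:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["tp" if r["pred"] == 1 else "fn"] += 1
        else:
            c["fp" if r["pred"] == 1 else "tn"] += 1

    report = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        report[group] = {
            "n": pos + neg,
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
        }
    return report

# Toy example: a sensitivity gap between groups is exactly what an audit should surface.
audit = subgroup_performance([
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0},   # missed positive in group B
    {"group": "B", "label": 0, "pred": 0},
])
print(audit)
```

The point isn’t the arithmetic; it’s that a requirement framed this way is checkable, whereas a generic “fairness” principle is not.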

Who’s Liable When It Goes Wrong

When an AI-assisted decision leads to harm, who’s responsible? The clinician, the hospital, the vendor, or the developer? Unclear liability leaves everyone pointing elsewhere and patients in the lurch. Regulators need to clarify accountability: for approved or cleared AI tools, what’s the duty of the clinician to override or question the output? When does liability shift to the vendor? Without that, adoption will be either reckless or paralyzed.

Approval Pathways and the Pace of Change

Medical device regulators (e.g., the FDA in the US) are used to reviewing fixed devices and software. AI that updates over time (new data, retraining, model drift) doesn’t fit the old model. Regulators are experimenting with frameworks for “locked” vs. “adaptive” algorithms and for when a change requires new clearance. Getting this wrong either freezes useful updates or allows unsafe changes to ship without review. The right balance: clear rules for when a change is substantial enough to re-trigger review, plus post-market monitoring so that real-world performance is tracked. Treating AI as “one approval and done” is one of the things regulators get wrong.
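As an illustration of what post-market monitoring could look like in practice, here is a minimal sketch that compares a recent window of real-world performance against the metrics from the original validation study and flags drift beyond a tolerance. The metric names and threshold are assumptions for the example, not anything a regulator has specified.

```python
# Illustrative post-market monitoring check: compare recent real-world
# performance against baseline validation metrics and flag any metric that
# has degraded by more than a tolerance. Thresholds and metric names are
# hypothetical, not drawn from any regulatory framework.
from dataclasses import dataclass

@dataclass
class MonitoringResult:
    metric: str
    baseline: float
    observed: float
    degraded: bool

def check_drift(baseline_metrics, observed_metrics, tolerance=0.05):
    """Flag any metric that has dropped more than `tolerance` below its baseline."""
    results = []
    for name, baseline in baseline_metrics.items():
        observed = observed_metrics.get(name)
        if observed is None:
            continue  # metric not collected in this monitoring window
        results.append(MonitoringResult(
            metric=name,
            baseline=baseline,
            observed=observed,
            degraded=(baseline - observed) > tolerance,
        ))
    return results

# Example: sensitivity has slipped on recent real-world data and gets flagged.
for r in check_drift({"sensitivity": 0.92, "specificity": 0.88},
                     {"sensitivity": 0.84, "specificity": 0.89}):
    flag = "REVIEW" if r.degraded else "ok"
    print(f"{r.metric}: baseline={r.baseline:.2f} observed={r.observed:.2f} [{flag}]")
```

A rule that says “monitor and report against validated baselines” is enforceable in a way that “one approval and done” never will be.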

International Mismatch

Different countries are taking different paths. The EU’s AI Act tiers risk and imposes strict rules for high-risk applications including some healthcare AI. The US is more fragmented—FDA for devices, state and federal rules for data and practice. That patchwork makes it hard for vendors to design once and deploy everywhere, and for patients to know what level of oversight they’re getting. Harmonization is slow, but regulators could do more to align on risk tiers and evidence requirements so that good tools aren’t blocked in one place while under-scrutinized in another.

Getting It Right

Regulators should: tier oversight by risk, demand evidence and validation rather than explainability theater, require transparency on data and bias, and clarify liability. They should also adapt approval pathways for adaptive AI and work toward international alignment. That’s harder than slapping “AI in healthcare” with a single set of rules—but it’s what would actually protect patients and allow useful innovation to move forward.
