Healthcare

Medical AI Trust Layer

Clinical AI needs an evidence and governance layer to move from clearance to adoption.

Medical devices · FDA/EMA · Venture thesis · 2026 signal
ZOAK read

The FDA has authorized 1,450+ AI/ML-enabled medical devices. 295 were cleared in 2025 alone — a record. 76% are in radiology. But 97% entered via the 510(k) pathway, which requires "substantial equivalence" to a predicate, not robust clinical trials. The trust gap between regulatory clearance and clinical adoption is the real constraint.

[Charts: Pressure index by operating layer · Signal concentration · Capitalized attention split · Problem-to-company flow]

What changed

The pace of FDA AI/ML device authorizations has accelerated sharply: from 253 in 2024 to 295 in 2025. But the regulatory pathway tells a more nuanced story. 97% of these devices enter via 510(k), a pathway that requires demonstrating equivalence to an existing device, not proving clinical superiority through prospective trials. Radiology accounts for 76% of authorizations, while cardiovascular and neurology lag far behind.

The U.S. AI medical device market is estimated at $4.82B in 2025, growing at a 15.8% CAGR. But adoption at the clinical level — radiologists and clinicians actually trusting and using AI recommendations — remains uneven. The FDA's emerging framework for Predetermined Change Control Plans (PCCPs) allows iterative model updates without full resubmission, but it raises questions about post-market monitoring and bias detection.

What leaders should do

If you're a health system: audit which AI tools your clinicians actually use vs. which ones sit idle after procurement. Build a clinical AI governance board that tracks performance, bias, and override rates. If you're a medtech company: invest in real-world evidence generation alongside your 510(k) submission — clinical adoption will increasingly depend on post-market performance data, not just regulatory clearance.

What ZOAK wants to build

A clinical AI trust layer: a governance and monitoring platform for health systems that tracks AI model performance against clinical outcomes, detects bias and drift across patient populations, logs clinician override rates, and generates regulatory-ready performance reports. The product sits between the AI tool and the clinical workflow, providing the evidence layer that drives adoption.
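The core of such a trust layer is the event log it keeps per AI recommendation. A minimal sketch of what that might look like, assuming a simple per-event schema (all names and fields here are illustrative, not an existing product's API):

```python
from dataclasses import dataclass

@dataclass
class AIRecommendationEvent:
    """One logged interaction between an AI tool and a clinician."""
    tool_id: str              # which cleared AI device produced the output
    patient_group: str        # demographic stratum, for bias monitoring
    ai_finding: bool          # AI flagged a positive finding
    clinician_accepted: bool  # clinician followed the AI recommendation
    timestamp: str            # ISO 8601, for audit trails

def override_rate(events: list[AIRecommendationEvent]) -> float:
    """Fraction of AI recommendations the clinician overrode."""
    if not events:
        return 0.0
    overridden = sum(1 for e in events if not e.clinician_accepted)
    return overridden / len(events)

events = [
    AIRecommendationEvent("cxr-triage-v2", "F/65+", True, False,
                          "2026-01-10T09:12:00Z"),
    AIRecommendationEvent("cxr-triage-v2", "M/18-40", True, True,
                          "2026-01-10T09:40:00Z"),
]
print(override_rate(events))  # 0.5
```

Sitting "between the AI tool and the clinical workflow" in practice means every recommendation passes through this log, so override rates and bias slices fall out of the same record rather than requiring separate instrumentation.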

Operating analysis

The FDA has cleared over 1,450 AI/ML devices. Over 60% of recent clearances are Software as a Medical Device (SaMD). The market is growing at a 15.8% CAGR. But the story of clinical AI is not about clearances — it's about trust. Clinicians need evidence that the AI works in their specific patient population, with their specific workflow, producing measurably better outcomes. The 510(k) pathway doesn't require this evidence. Post-market surveillance is underdeveloped. The trust layer is the gap.

Joint FDA-EMA guiding principles published in early 2026 emphasize transparency, bias mitigation, and Total Product Lifecycle monitoring. Health systems that build governance infrastructure now will be positioned for the next wave of AI deployment — beyond radiology into pathology, cardiology, and primary care.
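Drift across patient populations, one of the monitoring duties named above, can be sketched with a Population Stability Index: compare the model's score distribution at clearance against the distribution seen in deployment. This is a minimal sketch; the 0.2 alert threshold is a common industry rule of thumb, not a regulatory requirement, and the bin values are made up for illustration.

```python
import math

def population_stability_index(expected: list[float],
                               actual: list[float]) -> float:
    """PSI between a model's score histogram at validation time
    (expected) and in current deployment (actual), over matched bins.
    Rule of thumb: PSI > 0.2 suggests meaningful drift."""
    eps = 1e-6  # guard against empty bins
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi

validation_bins = [0.10, 0.20, 0.40, 0.20, 0.10]  # histogram at clearance
deployment_bins = [0.05, 0.15, 0.30, 0.30, 0.20]  # histogram this quarter
psi = population_stability_index(validation_bins, deployment_bins)
```

Running the same comparison per demographic stratum, rather than only on the pooled population, is what turns a drift monitor into a bias monitor.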

Constraint: 97% of AI devices enter via 510(k); post-market evidence and bias monitoring are underdeveloped. (Priority 1)
System response: Clinical AI governance platform — performance tracking, bias detection, override logging. (+38% clinician adoption target)
Company angle: The trust layer between AI tools and clinical workflows. (Prototype)
Signal | Why it matters | Action
Clearance acceleration | 295 AI/ML devices cleared in 2025 — record pace, 76% in radiology. | Build performance tracking for the growing portfolio of deployed AI tools.
PCCP framework | FDA's Predetermined Change Control Plans allow iterative model updates post-clearance. | Design monitoring systems that validate each model update against clinical outcomes.
SaMD dominance | Over 60% of recent clearances are Software as a Medical Device. | Focus governance tooling on SaMD lifecycle management and version control.
1. Audit deployed AI tools
2. Build governance framework
3. Deploy monitoring layer
4. Generate evidence reports
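The PCCP framework means models change after clearance, so the monitoring layer needs a concrete gate for each update. A sketch of one such gate, assuming the platform tracks a small set of clinical metrics per model version (function names, metric names, and the tolerance are illustrative assumptions):

```python
def approve_model_update(baseline: dict, candidate: dict,
                         max_regression: float = 0.01) -> bool:
    """Gate a PCCP-style model update: the candidate must not regress
    any tracked clinical metric by more than max_regression relative
    to the deployed baseline."""
    for metric in ("sensitivity", "specificity"):
        if candidate[metric] < baseline[metric] - max_regression:
            return False
    return True

deployed = {"sensitivity": 0.94, "specificity": 0.89}
update = {"sensitivity": 0.95, "specificity": 0.87}
approve_model_update(deployed, update)  # False: specificity dropped 0.02
```

The point of the gate is the audit trail as much as the decision: every blocked or approved update becomes a record in the regulatory-ready performance report.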
What would we build first?

A clinical AI performance dashboard for a single radiology department: track AI recommendation accuracy, clinician override rates, patient outcome correlation, and demographic bias patterns. Start with the 3 most-used AI tools in the department. Generate a quarterly evidence report that can be shared with the FDA under TPLC monitoring.
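The quarterly evidence report reduces to standard outcome metrics computed from confusion counts, stratified by demographic group to surface bias. A minimal sketch, with all counts invented for illustration:

```python
def report_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, and false positive rate
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return {"sensitivity": round(sensitivity, 3),
            "specificity": round(specificity, 3),
            "false_positive_rate": round(fpr, 3)}

# Stratify by demographic group: a large gap between groups on the
# same tool is a governance flag, not statistical noise to average away.
quarterly = {
    "F/65+":   report_metrics(tp=88, fp=12, tn=180, fn=20),
    "M/18-40": report_metrics(tp=45, fp=3,  tn=240, fn=5),
}
```

Here the hypothetical older-female stratum shows noticeably lower sensitivity than the younger-male stratum — exactly the kind of per-population gap the dashboard exists to catch before it reaches an adverse-event report.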

Why radiology first?

76% of FDA-authorized AI/ML devices are in radiology. It has the highest deployment density, the most mature tooling, and the clearest outcome metrics (sensitivity, specificity, false positive rate). Build the governance model in radiology, then expand to cardiology and pathology.

How would we measure success?

Clinician AI tool utilization rate should increase by 25%+ after implementing transparent performance monitoring. AI-related adverse event reporting time should decrease from weeks to same-day. Patient outcome correlation data should be available within 90 days of deployment, not years.

ZOAK_BUILD_THESIS = {
  category: "Medical AI governance",
  first_principle: "clearance ≠ trust; post-market evidence drives adoption",
  target_lift: "+38% clinician AI adoption rate",
  next_move: "prototype governance dashboard for radiology AI portfolio"
}

Sources: FDA AI/ML Device Database, Innolitics — FDA AI Device Tracker, Healthcare Foresights — Medical AI Market, 2025

Related engagement

Building or deploying clinical AI?

Tell us about your regulatory or adoption challenge — we'll scope a governance diagnostic.

Start a conversation
