Two years ago, many health systems were drafting their first AI policy. The focus was exploratory: defining principles, setting guardrails, and deciding who "owned" AI. Today, with the rapid adoption of agentic AI, governance conversations have shifted as leadership teams balance the need for stronger governance against the rising demand for AI-driven innovation.

What's driving AI governance change?

AI governance in healthcare is evolving in response to growing patient safety and organizational security concerns. Health systems can no longer treat governance as a strategy document. Rather, AI governance must operate as an operational control system across the organization.

1. Shadow AI 

A recent survey revealed that 58% of frontline healthcare workers had used unauthorized, free AI tools for work at least once in the previous month, doing everything from pasting patient notes into public LLMs to running imaging summaries through consumer-grade tools. This behavior is alarming to security, compliance and legal teams, but shadow AI isn't a workforce discipline problem. It's a systems problem.

Shadow AI emerges when enterprise-approved tools are outdated or lack features essential to real workflows, and when procurement and deployment cycles drag on.

2. Security

The financial and operational impact of shadow AI is materially different from traditional breaches. Incidents tied to unauthorized AI use cost significantly more on average, largely because the data exfiltration paths are harder to trace and harder to remediate.

Once protected health information is placed into an unvetted AI tool, the data may be stored indefinitely, used for model training or surfaced in responses to other users. And, critically, there is no reliable path for deletion or reversal. At that point, containment is no longer possible. Traditional security controls weren't designed for opaque, probabilistic systems that sit outside the enterprise boundary.

3. Patient safety

AI governance is no longer just about ethics or compliance. It is directly tied to patient safety, clinical quality and institutional trust. Consumer-grade AI tools hallucinate with confidence. If an employee uses an unapproved AI tool to draft a clinical communication and the model fabricates a medical assessment (e.g., a medication interaction), misinterprets a lab value, or recommends unnecessary testing, the organization carries the liability.

How is AI governance changing?

Healthcare organizations are responding by moving governance from static strategy documents and intermittent assessments into live operational systems across the enterprise.

1. Centralized registries

Many are replacing spreadsheets and ad hoc meetings with a centralized AI registry to inventory previously deployed tools, review new use cases across clinical and operational domains, and continuously track AI systems over time. 
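To make the registry idea concrete, here is a minimal in-memory sketch. The field names, review statuses, and query logic are illustrative assumptions, not a standard schema; a real registry would align with the organization's own risk and compliance taxonomy and live in a governed database, not application memory.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class ReviewStatus(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    RETIRED = "retired"


@dataclass
class AIRegistryEntry:
    # Hypothetical fields for illustration only.
    name: str
    owner: str
    domain: str              # e.g. "clinical" or "operational"
    handles_phi: bool
    status: ReviewStatus = ReviewStatus.PROPOSED
    last_reviewed: Optional[date] = None


class AIRegistry:
    """Inventory of AI tools, keyed by tool name."""

    def __init__(self) -> None:
        self._entries: dict[str, AIRegistryEntry] = {}

    def register(self, entry: AIRegistryEntry) -> None:
        self._entries[entry.name] = entry

    def pending_review(self) -> list[AIRegistryEntry]:
        # Tools that are still proposed or have never been reviewed.
        return [e for e in self._entries.values()
                if e.status is ReviewStatus.PROPOSED or e.last_reviewed is None]


registry = AIRegistry()
registry.register(AIRegistryEntry("note-summarizer", "CMIO office",
                                  "clinical", handles_phi=True))
print([e.name for e in registry.pending_review()])  # ['note-summarizer']
```

Even a sketch this small supports the three jobs named above: inventorying deployed tools, surfacing new use cases for review, and tracking systems continuously over time.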

2. Ethics committees

Others are supplementing existing governance bodies with dedicated ethics review committees that use standardized evaluation rubrics to assess clinical appropriateness, patient safety, bias potential and downstream impact before AI systems scale.
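A standardized rubric can be as simple as weighted ratings per dimension with an approval threshold. The dimensions below mirror the criteria named above, but the weights, 1-to-5 scale, and threshold are hypothetical; a real committee would calibrate these to its own risk framework.

```python
# Hypothetical weights; calibrate to the organization's risk framework.
RUBRIC = {
    "clinical_appropriateness": 0.35,
    "patient_safety": 0.35,
    "bias_potential": 0.20,
    "downstream_impact": 0.10,
}


def rubric_score(ratings: dict[str, int]) -> float:
    """Weighted score from 1-5 committee ratings, one per dimension."""
    return sum(RUBRIC[dim] * ratings[dim] for dim in RUBRIC)


def recommend(ratings: dict[str, int], threshold: float = 4.0) -> str:
    """Map a weighted score to a scale/hold recommendation."""
    return ("approve to scale" if rubric_score(ratings) >= threshold
            else "hold for remediation")


ratings = {"clinical_appropriateness": 4, "patient_safety": 5,
           "bias_potential": 3, "downstream_impact": 4}
print(rubric_score(ratings))  # 4.15
print(recommend(ratings))     # approve to scale
```

The value of a rubric like this is less the arithmetic than the consistency: every AI system is judged on the same dimensions before it scales.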

3. Real-time monitoring

To mitigate AI security risk, many organizations are implementing monitoring solutions that enable real-time threat detection, model drift monitoring, API usage tracking, inbound/outbound data scanning for PHI, and active detection of shadow AI.
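As a simplified sketch of the outbound-scanning idea, pattern matching can flag obvious PHI markers before a payload leaves the enterprise boundary. The patterns below (SSN, MRN, DOB formats) are illustrative assumptions only; production PHI detection combines pattern matching with NLP-based entity recognition and contextual analysis.

```python
import re

# Illustrative patterns only; real PHI detection is far broader.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{4}\b", re.IGNORECASE),
}


def scan_outbound(text: str) -> list[str]:
    """Return the PHI categories detected in an outbound payload."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]


payload = "Pt MRN: 00451234, DOB: 03/14/1962, follow up on lab results."
hits = scan_outbound(payload)
if hits:
    # In practice this decision would feed a proxy or DLP gateway
    # that blocks or redacts the request.
    print(f"BLOCK: possible PHI detected ({', '.join(hits)})")
```

The same hook that blocks a payload can also log which tool received it, which is how passive scanning doubles as active shadow AI detection.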

4. Secure sandboxes

Rather than blocking experimentation, forward-looking organizations are creating sandbox environments where staff can safely experiment with approved tools. This preserves innovation velocity while keeping patient data protected and usage observable.

Governance alone is not enough

Despite massive investment, MIT reports that 95% of enterprise generative AI initiatives, including healthcare use cases, never progress beyond pilots. The primary barrier isn't technology. It's the inability to redesign processes and decision flows around AI-driven work. For agentic AI to deliver meaningful, measurable outcomes, organizations need a radical shift in how processes are defined and executed.

1. Cultural shift

Organizations must think about workflows as if starting from a blank slate, challenging every existing step rather than simply layering technology on top of the status quo. Without this shift in thinking, an organization's AI journey devolves into disconnected AI experiments, pilot fatigue, eroded executive confidence and little to no measurable business impact.

2. Process redesign

Putting an autonomous agent on a broken process doesn't improve outcomes; it just accelerates the bad process at machine speed. How processes, users and decision flows are designed must be reimagined. Additionally, as AI shifts work away from manual execution, it exposes new constraints: approvals, dependencies and handoffs can limit AI's speed-to-value potential.

The path forward

Evolving an organization's AI governance starts with an honest assessment of where it stands today by mapping current AI tools and usage (including shadow AI), evaluating the maturity of governance controls, and identifying the broken processes that can accelerate failure if left unaddressed.  

A governance readiness assessment is the first step toward turning AI from a collection of disconnected pilots into a scalable, secure operating capability that can drive measurable impact from AI investments.