In the last Precision AI article, we explored how Palo Alto Networks' AI Runtime Security introduced real-time protection for AI systems, focusing on monitoring behavior, preventing data leakage, and enforcing controls at the point of interaction. 

That was the foundation.

But as AI has evolved, so has the problem.

AI is no longer confined to individual applications. It is now embedded across workflows, connected to enterprise data, and increasingly capable of taking action through agents. As a result, security is no longer just about protecting AI at runtime; it's about understanding and governing an entire AI ecosystem.

This is where Prisma AIRS comes in.

Prisma AIRS extends beyond runtime protection to address AI security as a system-wide challenge. It does this through a set of core capabilities spanning Discovery, Assessment, and Protection, a framework that has evolved with the recent launch of Prisma AIRS 3.0.

This article begins with the first, and most foundational, of those pillars: Discovery.

The visibility problem Prisma AIRS is solving

Before organizations can secure AI, they need to answer a basic question:

Where is AI actually being used?

In practice, that answer is rarely complete.

AI adoption doesn't follow traditional deployment patterns. It doesn't arrive as a single platform or a centrally managed system. Instead, it spreads organically, through APIs, embedded features, copilots, and experimental tools built across different teams.

Over time, these pieces begin to connect. Models interact with internal data. Applications call external services. Agents begin to automate workflows.

What emerges is not a single AI system, but a distributed, interconnected environment.

And in most organizations, that environment is only partially visible.

This is the gap Prisma AIRS is designed to address, but closing it requires more than scanning for known systems. It requires understanding how AI actually operates across an environment, not just where it was deployed. That's what the next sections examine.

Why AI breaks traditional discovery

Security teams are used to discovering assets with clear boundaries, like servers, endpoints and applications. These systems are relatively stable and centrally managed.

AI is neither.

An AI system is not one thing. It is a combination of:

  • A model (often external or open source)
  • Data (often sensitive and distributed)
  • APIs and tools (internal and third-party)
  • Logic that determines how decisions are made

These components can change quickly, and they are often assembled outside traditional governance processes.

Vibe coding is accelerating this further. Developers using AI assistants to rapidly generate applications often don't fully understand what gets assembled underneath, which models are called, which third-party skills or plugins are included, or what permissions they carry. Tools like OpenClaw illustrate the risk: an open-source AI agent that can be deployed with minimal technical knowledge, yet requires broad access to email, calendars, and messaging systems, with third-party skills that have been found capable of data exfiltration and prompt injection. The developer trusted the AI to build it. The AI trusted the skill repository. The enterprise implicitly inherited all that trust, without ever reviewing it.

This is why Prisma AIRS doesn't treat discovery as a simple inventory exercise.

Instead, it focuses on understanding how these components connect; how a model accesses data, how an application invokes that model, and how an agent may take action based on the output.

Because in AI environments, risk lives in the relationships, not just the assets.

From inventory to context

One of the most common mistakes organizations make is trying to catalog AI the same way they catalog applications.

A static inventory might tell you that a model exists. It won't tell you:

  • What data it can access
  • What systems it is connected to
  • What actions it can trigger

Prisma AIRS approaches discovery differently.

It builds a contextual view of AI by mapping:

  • Where models are being used
  • How they interact with data
  • Which applications and agents rely on them
  • What external services are involved

This creates a living picture of the AI environment, one that reflects how systems actually operate, not just how they were deployed.

That distinction is critical.

Because without context, visibility is incomplete. Without complete visibility, risk cannot be accurately understood.

Prisma AIRS
Prisma AIRS

Shadow AI: The blind spot Prisma AIRS exposes

One of the most immediate benefits of this approach is the ability to identify shadow AI. In most organizations, AI adoption is already happening outside formal processes.

Consider a financial services firm where a data analyst team, frustrated with slow internal tooling, starts routing customer query summaries through a third-party AI API to speed up reporting. No ticket is filed. No security review happens. The tool uses standard HTTPS, so it appears as normal web traffic. Within weeks, three other teams adopt it. Months later, the security team still has no idea it exists, but the model has been processing data that includes account identifiers and transaction histories. The risk isn't in the tool itself. It's in what the organization doesn't know.

These systems often operate through legitimate channels, making them difficult to detect with traditional methods. Prisma AIRS surfaces these hidden deployments by analyzing how AI is actually being used across applications, APIs, and data interactions, rather than relying solely on predefined inventories. This is reinforced by Palo Alto's Enterprise DLP and Prisma  Browser, which extend visibility to the point of interaction, capturing what data is being sent to AI tools directly from the browser, even when those tools operate over standard encrypted traffic. In the scenario above, that's precisely where the exposure would have been caught.

This is important because shadow AI doesn't just introduce unknown tools, it introduces:

  • New access paths to sensitive data
  • New decision-making layers
  • New opportunities for unintended behavior

Why discovery is the foundation for everything else

Every other aspect of AI security depends on discovery. You cannot effectively assess risk, enforce policy, monitor behavior, or control actions if you don't fully understand what exists, how it is connected, and what it is capable of doing.

Without that foundation, organizations often end up with fragmented controls and security measures that apply to known systems, while unknown systems operate outside visibility.

This is why discovery is not just a feature. It is a control point, the mechanism that shifts security from reacting to known issues to proactively identifying unknown ones. That shift is what enables the rest of the platform to function effectively.

Prisma AIRS addresses this by establishing discovery as a continuous process, not a one-time activity. As AI systems evolve, the platform updates its view of the environment, ensuring that security teams are working from current, accurate information rather than a snapshot that's already out of date.

The hard questions

Discovery answers the most uncomfortable question in AI today: "Do we actually know where AI is operating in our environment?"

Security Leaders

"We have too many security initiatives already. Why does AI discovery jump the queue?"

It doesn't. Discovery isn't a new initiative competing for resources; it's the thing that tells you whether your existing initiatives have blind spots. If AI is operating outside your visibility, every control you've already invested in has an unknown gap. Discovery closes that gap without replacing what you've built.

Consultants & Architects

"Every discovery tool promises comprehensive visibility and delivers an overwhelming, unactionable list."

That's the difference between inventory and context. Prisma AIRS doesn't just catalog assets; it maps relationships. How a model connects to data, which applications depend on it, what actions it can trigger. That context is what makes findings prioritizable rather than paralyzing.

Engineers & Practitioners

"Another agent to deploy and another dashboard to check."

That's a fair concern, and it's worth being direct: Prisma AIRS isn't designed to be checked, it's designed to be embedded. The platform offers a Python SDK that lets engineers integrate AI security scanning directly into application code, so detections happen inline rather than after the fact. Findings are pushed to your existing SIEM with direct session links, surfacing in tools you're already monitoring. And native integrations with platforms like ServiceNow and IBM mean discovery data flows into workflows you already own. The goal isn't a new console to manage, it's making what you already have aware of AI.

Discovery doesn't eliminate uncertainty, but it replaces assumption with evidence. That's where all meaningful security decisions have to start.

Putting discovery into practice with ARMOR

Discovery gives you the map. But knowing where AI exists is only the starting point — knowing what to do next is where most organizations get stuck.

WWT's AI Readiness Model for Operational Resilience (ARMOR) is a vendor-agnostic framework designed to help organizations move from visibility to action. It covers the full AI security lifecycle, from governance and model protection to infrastructure security and data protection, giving teams a structured path to secure AI without slowing adoption.

In the next article, we'll show how ARMOR's domains map directly to the Assessment pillar of Prisma AIRS, and why having a framework behind your tooling turns findings into a roadmap.

Looking ahead

Once you have a clear view of your AI environment, the next question becomes unavoidable:

Is it safe?

In the next article, we'll examine the second pillar of Prisma AIRS: Assessment. We'll explore how organizations can continuously test AI systems, simulate real-world attacks, and identify weaknesses before they turn into incidents.

Because visibility alone doesn't reduce risk.

Understanding and validating behavior does.

Technologies