The main topic at the beginning of most AI security conversations remains the same: How can AI enhance security? This is a real use case, but it excludes a significant control problem. Companies are adopting AI across users, applications, and autonomous agents faster than they are building security around those systems. The opportunity cost is a new attack surface that most security programs still lack the sophistication to handle.

The numbers are already uncomfortable; according to Gartner research from 2025, 29% of organizations have been attacked on their generative AI infrastructure. That is real-world breaches, not theoretical exposure. Just 19% characterize their GenAI security posture as "highly confident."

There is a gap that matters because AI isn't just another SaaS category. It's emerging as an operating layer the employee actor, how applications decide what to do next, and how automated systems operate. Check Point's approach is a security architecture layer named the AI Defense Plane.

The Three-Layer Problem

The first layer: Workforce AI usage.

Employees are already working with tools like ChatGPT, Claude, Microsoft Copilot, GitHub Copilot, and Cursor, etc. Security teams don't consistently see the typing, pasting, or uploading done in those tools in many environments.

ChetGPT web interface

Traditional controls were not designed for session-level governance of AI usage. A DLP rule or proxy log might indicate that a user logged into a particular site, but it doesn't necessarily reveal what private material was provided to an AI prompt. Credentials, source code, customer PII, strategy documents, and regulated data can also be put onto third-party AI platforms without the ability to enforce policy, without a usable audit trail, or to demonstrate compliance in a clear manner.

The second layer is Agentic AI.

AI agents, unlike chat tools, can take actions. They can query databases, call APIs, write to systems of record, and interact with MCP servers. This changes your risk picture.

A poor decision is then usually limited by that user's access and the scale of the action. An agent with broad tool access can cause a much bigger problem, particularly when prompt injection or indirect prompt injection affects what that agent does.

News clipping of AI Agent deleting a production database

This danger is already appearing in the data. Gartner found that 32% of organizations have faced a prompt-based attack on AI applications. Many reports state fully autonomous multi-agent kill chains capable of credential harvesting and data exfiltration without human intervention… and this is pretty low on the 'imagine what can happen' scale of creativity. The startling fact remains that even well-intentioned employee actors can trigger a catastrophic event with normal prompting while the agent takes actions it deems fit, like removing a database to fast-track the task.

The third layer is the AI Agent itself.

The trend of Voice AI Agents answering and handling complicated customer service responsibilities is on the rise, for example: if you, dear reader, have recently scheduled an appointment for a Mercedes-Benz vehicle service, the bright and cheerily helpful voice on the line sounds incredibly human. Until one asks it to use C++ to find a string within a string, and the fantasy ends.

News blurbs of AI Agent deployment
News blurbs of AI Agent deployment

Before an organization can deploy an AI assistant, copilot feature, internal knowledge tool, or customer-facing AI workflow, it must understand how that application behaves under adversarial pressure. It needs to understand whether the system is vulnerable to manipulation that could reveal regulated data, become harmful, skirt policy, or generate legal and compliance exposure.

That kind of testing really must be done before production. The costly version of the lesson is finding the gaps post-model deployment when it is attached to business data.

Check Point's AI Defense Plane

Check Point's architecture of AI security is based on the AI Defense Plane and serves three key functions: Discover, Govern, and Protect.

Check Point AI Defense Plane
Check Point AI Defense Plane

The model's strength is that it treats workforce AI, agentic AI, and AI application testing as separate but related problems. It introduces a common control system across these three areas, with consistent guidelines and a unified audit trail.

Workforce AI Security, previously named GenAI Protect, protects the use of public and enterprise AI tools by employees.

It is a small Windows and macOS endpoint browser agent, deployed via MDM or SCCM, that provides security teams with real-time visibility into AI tool usage. That includes the type of AI tools employees are using, what they are sharing, and whether the activity violates policy.

Visibility with Workforce AI

The policy model is granular. Controls can be established depending on event type, data classification, destination platform, and user group. Enforcement could also be achieved without interrupting the user's workflow.

So, for example, if you type a credential into Cursor, the system can do an inline, real-time redaction on the sensitive value before it arrives at the AI platform. The user can continue working, but the secret is never revealed. The redacted token is displayed to the user. The true credential is never received by the AI system.

Credential redaction
Credential redaction

AI Agent Security - formerly known as Lakera Guard, deals with the agentic AI layer.

It supports the discovery and inventory of AI agents: which agents are available, which tools and systems they can access, and the actions they can perform. It also has runtime guardrails to review tool calls before running.

AI Agent Security securing the prompt
AI Agent Security securing the prompt

Pre-deployment policy may allow or deny some MCP servers and tool integrations. Runtime inspection identifies prompt injection attempts in user inputs, agent inputs, tool responses, and multi-turn conversations.

Check Point is implementing this same capability in June 2026 on the Gemini Enterprise Agent Platform powered by Google Cloud. Adding to the existing and growing portfolio alongside protecting Microsoft Copilot. This renders the control model applicable to some of the largest market enterprise agentic AI deployment ecosystems.

AI Red Teaming, previously known as Lakera Red, covers pre-deployment testing of AI applications.

It leverages Lakera's adversarial AI engine to simulate attacks on three primary domains: security, safety, and responsible AI. That encompasses threats to data and system compromise, harmful content generation, and legal or compliance risk.

AI Red Teaming configuration

The platform can execute over 270 attack scenarios in less than five minutes. It comes as both a managed professional service and a self-service platform. The intent is to give teams a quantifiable overview of risk for AI applications before they go live.

AI Red Teaming process
AI Red Teaming process

Why the Architecture Matters

At many companies, AI security was seen as something they could mature later. The problem now and for the foreseeable future is to defend that approach much more forcefully.

According to the Lakera 2025 GenAI Security Readiness Report, 62% of organizations have encountered a deepfake attack. It also concluded that adversarial misuse and agentic risk have taken precedence over privacy as the number one concern for security practitioners.

Check Point's model is valuable because it treats AI security much like an enterprise control-plane problem rather than a specific issue in one tool.

Many of the same security questions arise across workforce AI, agents, and AI applications:

  • What is the AI doing? Can we see the actions, data access, and decisions the AI is making in real time?
  • Should it be doing that? Does what it is doing align with policy, intent, and acceptable risk?
  • How do I stop it from doing something bad? Can we enforce controls and intervene before damage is done?
  • What did it do? Do we have a reliable record of actions taken for investigation and compliance?
  • Who told it to do that? Can we trace the action back to a legitimate user, system, or authorization?
  • What could it do next? Do we understand the potential blast radius, and can we limit it before it happens?

Those questions apply when an employee pastes data into an AI tool, when an agent calls an API, and they apply when an AI application responds to a user in production. At each of these points, the organization must have visibility, policy enforcement, and evidence.

It'd be far more difficult to retrofit those controls after an AI incident than to bake them into the architecture. The AI Defense Plane is a valuable study of protecting AI interactions because employees using AI are already part of the basic enterprise workflow, and the attack data makes it clear that adversaries are already exploiting AI's potential for nefarious purposes.

---

World Wide Technology is a Check Point partner. Get in touch with your WWT account team to determine how the AI Defense Plane aligns with your organization's AI security posture.

Technologies