At WWT, we regularly help enterprise customers answer one question: How do we deliver, secure, and scale applications efficiently in Kubernetes environments, and do it in a way that's ready for the AI workloads arriving right now?

That question sits at the heart of ARMOR, WWT's Infrastructure Security practice. ARMOR's framework identifies the hardware layer, specifically the NVIDIA BlueField® DPU, a data processing unit that offloads, accelerates, and isolates networking, storage, and security workloads to power secure, efficient gigascale AI infrastructure, as the foundational enabler for infrastructure security. F5 BIG-IP Next for Kubernetes (BNK) running on BlueField is the reference implementation of that story: a validated, enterprise-grade proof point of what WWT can achieve with the NVIDIA BlueField DPU and F5 BNK.

In this blog, we'll explore how BNK anchors a security-first Kubernetes architecture, how its observability capabilities give SOC teams a net-new control point over AI traffic, and how it maps to real compliance requirements organizations face today.

The Evolution of Application Delivery, and Its Security Debt

Application delivery has undergone a significant transformation over the past decade. Traditional environments relied on monolithic applications running on dedicated infrastructure, where traffic management solutions were deployed at the data center edge and security posture was relatively static.

Modern applications are built on microservices distributed across Kubernetes clusters, often spanning multiple cloud environments. This shift creates challenges that are as much security problems as they are operational ones:

  • Applications expose dozens or hundreds of API endpoints, each a potential attack surface
  • Traffic patterns are dynamic and difficult to baseline
  • APIs must be observable for both performance and threat detection
  • Infrastructure must scale on demand without introducing new exposure

These aren't just architectural headaches; they are the conditions that allow threats to move laterally, exfiltrate data through API abuse, and evade detection in noisy, ephemeral environments.

BIG-IP Next for Kubernetes was built specifically to address this reality.

BNK as an Infrastructure Security Control Point

BNK brings the proven application delivery capabilities of the BIG-IP platform into a Kubernetes-native architecture. But more importantly, for ARMOR's Infrastructure Security domain, it introduces a consistent enforcement and observability layer across every workload in the cluster.

Control Plane

The control plane integrates directly with Kubernetes APIs, dynamically understanding service topology, endpoint changes, and scaling events. This means security policy follows the workload: policies don't have to be manually reconfigured when services move, scale, or update.
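To make the "policy follows the workload" pattern concrete, here is a minimal sketch of the idea. This is not BNK code (BNK's control plane is proprietary); it simply models a controller that reacts to Kubernetes-style endpoint events so that a service's security policy automatically covers pods as they appear and disappear, with all names and events invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    service: str
    rules: list                       # e.g. ["require mTLS", "allow tcp/443"]
    endpoints: set = field(default_factory=set)

class ControlPlane:
    """Toy controller: keeps each policy bound to its service's live endpoints."""
    def __init__(self):
        self.policies = {}

    def register(self, policy: Policy):
        self.policies[policy.service] = policy

    def handle_event(self, event: dict):
        # React to a Kubernetes-style endpoint event (ADDED / DELETED);
        # no human reconfigures anything when the service scales.
        policy = self.policies.get(event["service"])
        if policy is None:
            return
        if event["type"] == "ADDED":
            policy.endpoints.add(event["address"])
        elif event["type"] == "DELETED":
            policy.endpoints.discard(event["address"])

cp = ControlPlane()
cp.register(Policy("inference-api", ["require mTLS"]))
# A scale-up, then a pod termination: the policy tracks both automatically.
cp.handle_event({"type": "ADDED", "service": "inference-api", "address": "10.0.0.12:8443"})
cp.handle_event({"type": "ADDED", "service": "inference-api", "address": "10.0.0.13:8443"})
cp.handle_event({"type": "DELETED", "service": "inference-api", "address": "10.0.0.12:8443"})
print(cp.policies["inference-api"].endpoints)  # {'10.0.0.13:8443'}
```

In a real cluster, the event stream would come from the Kubernetes API server (e.g. EndpointSlice watches); the point of the sketch is only the shape of the control loop.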

Data Plane

The data plane handles all active traffic processing:

  • Layer 4 and Layer 7 load balancing
  • Traffic routing and SSL/TLS termination
  • Security policy enforcement
  • Full telemetry and observability

The separation of control and data planes means traffic enforcement scales independently from policy management, which is critical in large clusters where a configuration bottleneck can become a security gap.

Secure AI Operations: A New Control Point for SOC Teams

The fastest-growing source of new traffic in enterprise Kubernetes environments isn't traditional application traffic. It's AI.

Organizations are rapidly deploying large language models, ML inference pipelines, and AI agents that interact with external systems through APIs. These AI applications introduce traffic that is difficult to inspect, hard to baseline, and often entirely invisible to existing security tooling.

One emerging standard shaping this ecosystem is the Model Context Protocol (MCP), a framework that enables AI applications to interact with external tools, APIs, and enterprise systems while maintaining context across interactions.

An AI system operating via MCP might:

  • Retrieve sensitive data from internal enterprise APIs
  • Execute automated workflows across multiple backend services
  • Maintain conversational state that crosses trust boundaries
  • Initiate connections to external data sources on behalf of users

Each of these interactions represents a new attack surface. Without a control point sitting in the traffic path, SOC teams have no visibility into what AI agents are doing, where they're connecting, or whether those connections are legitimate.
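Part of the visibility problem is that this traffic looks unremarkable on the wire. MCP is built on JSON-RPC 2.0, so one of the interactions above, say, an agent pulling data from an internal API, travels as a small JSON message. The tool name and arguments below are hypothetical, invented purely for illustration:

```python
import json

# A hypothetical MCP-style tool invocation. MCP messages are JSON-RPC 2.0;
# the tool name and arguments here are invented for illustration.
tool_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "query_customer_api",   # hypothetical internal enterprise tool
        "arguments": {"customer_id": "C-1042", "fields": ["email", "plan"]},
    },
}
wire = json.dumps(tool_call)
# From the network's perspective this is ordinary JSON over a transport
# like HTTP -- which is exactly why security tooling that only watches for
# "known bad" traffic signatures tends to miss it entirely.
print(wire)
```

Nothing in that payload is malicious on its face; whether it is legitimate depends on which agent sent it, to where, and on whose behalf, which is precisely the context an in-path control point can supply.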

BNK gives SOC teams exactly that control point. By sitting between AI services and the systems they interact with, BNK can:

  • Enforce policy on AI-generated API traffic
  • Generate telemetry on AI application communication patterns
  • Flag anomalous behavior that deviates from baseline AI traffic profiles
  • Terminate connections that violate security policy before they reach sensitive systems

This is not a capability most organizations have today. Making it explicit and operational is one of the most concrete contributions BNK makes to a modern Secure AI Operations posture.
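The four control-point behaviors above can be sketched as a single inspection decision per connection. This is a conceptual model only, not BNK's policy engine; the destinations, baseline threshold, and agent names are assumptions made up for the example:

```python
import time

# Illustrative policy inputs -- in practice these would come from policy
# configuration and learned traffic baselines, not hard-coded values.
ALLOWED_DESTINATIONS = {"internal-api.corp.local", "vector-db.corp.local"}
BASELINE_REQS_PER_MIN = 120   # assumed per-agent baseline request rate

telemetry = []                # stand-in for a SIEM/SOC export pipeline

def inspect(agent: str, destination: str, reqs_last_min: int) -> str:
    """Return 'allow', 'flag', or 'terminate' for an AI agent's connection."""
    event = {"ts": time.time(), "agent": agent, "dest": destination}
    if destination not in ALLOWED_DESTINATIONS:
        event["action"] = "terminate"   # blocked before reaching sensitive systems
    elif reqs_last_min > BASELINE_REQS_PER_MIN:
        event["action"] = "flag"        # anomalous volume -> surfaced for SOC review
    else:
        event["action"] = "allow"
    telemetry.append(event)             # every decision generates telemetry
    return event["action"]

print(inspect("rag-agent", "internal-api.corp.local", 40))   # allow
print(inspect("rag-agent", "internal-api.corp.local", 500))  # flag
print(inspect("rag-agent", "exfil.example.net", 3))          # terminate
```

The design point worth noticing: every path through the function records telemetry, so the SOC sees allowed traffic as well as blocked traffic, which is what makes baselining possible in the first place.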

Compliance Grounding: NIST AI RMF and ISO 42001

The security section of most BNK discussions stops at "API protection" and "traffic visibility." That's not enough for enterprise security and compliance teams who need to map controls to frameworks. Here are two specific mappings worth naming.

NIST AI Risk Management Framework (AI RMF)

The NIST AI RMF's MANAGE function requires organizations to identify, analyze, and respond to risks that emerge from AI system interactions with external services. Specifically:

  • MANAGE 2.2 calls for mechanisms to monitor AI system behavior against defined risk thresholds. BNK's telemetry layer and traffic baseline capabilities directly support this — providing continuous visibility into AI-to-API communication that can be fed into SIEM or SOC workflows.
  • GOVERN 1.7 requires that AI risk management practices be integrated into enterprise risk processes. BNK sitting in the Kubernetes data path means AI traffic controls are part of the same infrastructure governance layer as all other application traffic — not a siloed, model-level control.

ISO/IEC 42001 (AI Management System Standard)

ISO 42001 requires organizations to establish controls around AI system interactions with external environments. Clause 8.4 (AI system operation) specifically addresses ensuring AI systems operate within defined boundaries, and deviations are detectable. BNK's observability capabilities combined with the enforcement capabilities of the BIG-IP policy engine directly address this clause by making AI system behavior auditable and controllable at the infrastructure layer.

Naming these controls matters. It moves the conversation from "BNK is good for security" to "BNK satisfies specific requirements your compliance team is already tracking."

Accelerating Infrastructure with NVIDIA BlueField DPUs

AI workloads require infrastructure that can handle high volumes of network traffic without competing against application-layer compute. This is where WWT's reference architecture proof points, enabled by the NVIDIA BlueField DPU and F5, become concrete.

BIG-IP Next for Kubernetes integrates with NVIDIA BlueField Data Processing Units (DPUs), dedicated hardware accelerators that offload infrastructure functions from CPUs and GPUs. In this architecture:

  • BNK's data plane runs on the BlueField DPU
  • Kubernetes application nodes remain dedicated to AI processing
  • Network security enforcement, SSL/TLS processing, and traffic inspection are handled entirely in hardware

This model, already named in ARMOR's Infrastructure Security framework as the hardware-layer foundation, offers three compounding advantages:

  1. Performance: Network operations are accelerated at the silicon level, removing them as a bottleneck for AI inference traffic
  2. Isolation: Infrastructure services are physically separated from application workloads, limiting lateral movement in the event of a compromise
  3. Efficiency: CPUs and GPUs are freed entirely for application logic, improving AI processing throughput

For organizations deploying large-scale AI inference clusters, this architecture is not just a performance optimization — it's a security architecture decision. Enforcing security at the hardware layer, below the OS and hypervisor, is significantly harder to subvert than software-only approaches.

Why BNK Matters for ARMOR's Infrastructure Security Domain

From WWT's perspective, BNK is the operational instantiation of several principles central to ARMOR:

  1. Reliable Application Delivery at Scale — Advanced load balancing and intelligent routing ensure consistent performance even as AI traffic volumes grow
  2. Security for Modern APIs and AI Services — Policy enforcement at the data plane protects both traditional application APIs and the novel traffic patterns introduced by AI agents
  3. SOC-Grade Observability — Telemetry and monitoring capabilities give security teams the visibility they need to detect anomalies in AI traffic — a gap most organizations have today
  4. Kubernetes-Native Scalability — Direct Kubernetes integration means security controls scale with workloads automatically, without manual intervention
  5. Hardware-Layer Enforcement — DPU integration anchors security below the software stack, consistent with ARMOR's hardware-layer enablement model

Next Steps: See It in the ATC Lab

BNK's value isn't theoretical — it's something organizations need to validate against their own workloads and security requirements before committing to a deployment architecture.

WWT's Advanced Technology Center (ATC) lab provides the environment to do exactly that. Through the ATC, organizations can test Kubernetes networking architectures, validate BNK traffic management configurations, evaluate AI infrastructure designs against real compliance requirements, and benchmark DPU-accelerated performance against baseline environments.

Here is a lab in the ATC where you can explore the F5 BIG-IP Next for Kubernetes solution. The F5 AI Proving Ground solution comprises a virtual three-node Ubuntu Kubernetes control plane and two worker nodes. Each worker node combines NVIDIA L40S GPUs and an NVIDIA BlueField-3 DPU in a Dell PowerEdge R760xa server. https://www.wwt.com/lab/f5-big-ip-next-for-kubernetes-on-nvidia-BlueField-3-dpus

This is also the natural integration point for ARMOR's Infrastructure Security practice. If your team is building an AI-ready Kubernetes platform and needs to map infrastructure controls to NIST AI RMF, ISO 42001, or your own internal security standards, connect with the ARMOR team or explore ATC lab resources to see how BNK, the NVIDIA BlueField DPU, and WWT's architecture expertise come together in a validated reference implementation.

Summary

The transition toward Kubernetes-based platforms and AI-driven applications is accelerating, and the security implications are arriving faster than most organizations' controls have adapted.

BIG-IP Next for Kubernetes, running on NVIDIA BlueField DPUs and integrated into WWT's ARMOR Infrastructure Security framework, represents a concrete, validated approach to closing that gap. It gives enterprises cloud-native traffic management, hardware-layer security enforcement, SOC-visible AI observability, and direct mappings to the compliance frameworks that matter: NIST AI RMF and ISO 42001.

For organizations ready to move from AI experimentation to AI infrastructure that security teams can govern, this WWT reference implementation, built on NVIDIA BlueField DPU and F5 BNK, is the starting point. Reach out to our ARMOR team or explore our ATC lab resources to see it in action.
