CNF: Powering the Future of Modern Infrastructure, 5G and Edge with BIG-IP Cloud Native Edition
Digital transformation is changing the way organizations build and run their infrastructure. Applications are no longer tied to a single data center; they are distributed across public clouds, private clouds, and edge locations.
5G traffic is growing rapidly. Edge computing is becoming standard. Kubernetes has become the control plane for modern platforms. And AI inference workloads are arriving at the edge faster than most infrastructure teams anticipated.
But here is the reality.
Traditional network functions were built for a different era. Fixed hardware appliances and large virtual machines were never designed to support:
- Elastic scaling on demand
- Microservices-based applications
- Continuous integration and continuous delivery (CI/CD)
- Distributed edge environments
- AI inference workloads running alongside 5G core functions with ultra-low latency requirements
Simply containerizing old systems does not make them cloud native. And it does not make them secure, observable, or ready for the distributed AI platforms enterprises are building today.
That is where F5 BIG-IP Cloud Native Network Function (CNF) becomes foundational: not just as a networking product, but as the traffic enforcement and observability layer that makes AI-ready 5G infrastructure governable.
This piece is part of WWT's ARMOR Infrastructure Security practice narrative, and CNF is the operational proof point for two of ARMOR's core domains: Infrastructure Security and Secure AI Operations.
F5 BIG-IP Cloud Native Network Function: Built for Kubernetes, Ready for AI
F5 BIG-IP Cloud Native Network Function is built from the ground up for Kubernetes and OpenShift environments. It combines high-performance traffic management, advanced security, and intelligent edge networking into a modern, scalable platform.
Unlike traditional appliances, CNF operates as a cloud-native service. It scales horizontally, integrates into automation pipelines, and adapts dynamically to changing traffic demands. The architecture separates the control plane (policy, orchestration) from the data plane (traffic enforcement), so each can scale independently without the other becoming a bottleneck.
What sets CNF apart in the context of ARMOR is not just what it delivers operationally but where it sits in the security architecture. CNF is the enforcement and telemetry layer between workloads, making it the right anchor for infrastructure-level security controls in both 5G core and AI edge environments.
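To make the control-plane/data-plane split and the CRD-driven policy model concrete, here is a minimal sketch of declaring traffic policy as a Kubernetes custom resource using the Python kubernetes client. The cnf.example.com group, TrafficPolicy kind, and spec fields are hypothetical placeholders rather than F5's actual CRD schema, and a reachable cluster with a local kubeconfig is assumed.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (an in-cluster config
# would work the same way inside a pod).
config.load_kube_config()
api = client.CustomObjectsApi()

# Policy as data: the control plane stores and distributes this object;
# the data-plane pods enforce it. Group, kind, and fields are illustrative.
policy = {
    "apiVersion": "cnf.example.com/v1alpha1",
    "kind": "TrafficPolicy",
    "metadata": {"name": "deny-unknown-egress", "namespace": "inference"},
    "spec": {
        "selector": {"app": "model-serving"},
        "egress": [
            # Allow calls to the feature store; drop everything else.
            {"to": {"service": "feature-store.inference.svc"}, "action": "allow"},
            {"to": {"cidr": "0.0.0.0/0"}, "action": "deny"},
        ],
    },
}

api.create_namespaced_custom_object(
    group="cnf.example.com",
    version="v1alpha1",
    namespace="inference",
    plural="trafficpolicies",
    body=policy,
)
```

Because the policy is a declarative object, the control plane that validates and distributes it can scale separately from the data-plane pods that enforce it, which is the point of the split.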
AI at the Edge: A Headline Requirement, Not a Footnote
AI inference at the edge is no longer a roadmap item. Enterprises and telcos are actively deploying large language models, computer vision pipelines, and real-time ML inference in environments that sit adjacent to 5G RAN and MEC infrastructure.
This creates infrastructure requirements that did not exist two years ago:
- GPU nodes running inference workloads are being co-located with 5G User Plane Functions (UPFs) and N6-LAN gateways
- AI agents are generating API traffic that crosses trust boundaries in ways traditional network monitoring cannot baseline
- Model serving endpoints are exposed as microservices inside Kubernetes clusters, each representing a new attack surface
- Distributed model serving means inference traffic flows laterally, east-west across nodes, before it ever hits an external API
CNF sits directly in that traffic path. By running as a cloud-native data plane in the same Kubernetes cluster as AI workloads, CNF can enforce policy on inference API traffic, generate telemetry on model-to-model communication, and provide SOC teams with visibility into AI system behavior that simply does not exist in environments relying on traditional perimeter security.
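As a rough sketch of the baselining idea, the snippet below (Python) learns the set of destinations each workload talks to and flags the first connection to any previously unobserved endpoint. The flow-record shape is an invented placeholder, not CNF's actual telemetry schema; in practice the records would arrive through CNF's export pipeline and the alerts would feed a SIEM.

```python
from collections import defaultdict

# Baseline: the set of destinations each workload has been seen talking to.
baseline: dict[str, set[str]] = defaultdict(set)

def observe(flow: dict) -> list[str]:
    """Ingest one flow record; return alerts for never-before-seen peers.

    Alerts during the initial learning window would normally be suppressed
    or triaged; the record shape here is illustrative only.
    """
    src, dst = flow["src_workload"], flow["dst_endpoint"]
    alerts = []
    if dst not in baseline[src]:
        alerts.append(f"new peer for {src}: {dst}")
        baseline[src].add(dst)
    return alerts

# An inference pod suddenly calling an unknown external endpoint stands out.
flows = [
    {"src_workload": "inference/llm-serving", "dst_endpoint": "feature-store:443"},
    {"src_workload": "inference/llm-serving", "dst_endpoint": "feature-store:443"},
    {"src_workload": "inference/llm-serving", "dst_endpoint": "203.0.113.9:8443"},
]
for flow in flows:
    for alert in observe(flow):
        print(alert)  # in practice, forwarded to the SOC/SIEM
```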
DPU Offload: Freeing GPUs for Inference
The most resource-constrained environments in AI-at-the-edge deployments are the ones where GPU compute is dedicated to inference. Every CPU cycle spent on network processing, NAT, or SSL termination is a cycle unavailable to the model.
This is where NVIDIA BlueField® DPUs change the equation, and where this WWT reference architecture, enabled by NVIDIA and F5, becomes concrete. A BlueField DPU is a data processing unit that offloads, accelerates, and isolates networking, storage, and security workloads to power secure, efficient gigascale AI infrastructure. In this model:
- CNF's data plane processing runs on the BlueField DPU rather than the application node
- GPU and CPU resources on the inference node are entirely dedicated to model serving
- Network security enforcement, traffic inspection, and SSL/TLS termination happen in dedicated hardware, below the OS layer
This is not a theoretical optimization. It is the architecture ARMOR's Infrastructure Security domain identifies as the hardware-layer foundation for secure AI infrastructure, and CNF on DPU is its reference implementation in a 5G/edge context.
The security implication is equally significant. Enforcing controls at the hardware layer, physically separate from the application workload, means a compromised inference container cannot tamper with its own network enforcement. The control plane is simply not accessible from the compromised workload.
Security: Concrete Controls, Not a Checkbox
The phrase 'integrated L3 through L7 protections' describes a capability set. It does not tell a security team what problem gets solved or what framework requirement gets satisfied. The subsections below get specific.
East-West Traffic Inspection on the 5G Core
5G core environments are particularly difficult to secure because the threat model includes lateral movement between network functions, not just perimeter intrusion. A compromised UPF (User Plane Function) is a high-consequence event: the UPF handles all subscriber data traffic, enforces QoS policies, and interfaces directly with the N6-LAN that connects to the internet and enterprise services.
What a compromised UPF looks like in practice:
- Unauthorized traffic steering: subscriber sessions are redirected to attacker-controlled endpoints
- Data exfiltration: subscriber data in transit is copied before being forwarded normally — making it invisible to traditional monitoring
- Lateral movement: the UPF's trusted position in the core is used to probe or access other network functions including the SMF, PCF, and UDM
- Policy bypass: QoS and access control decisions are manipulated to allow traffic that should be blocked
CNF addresses this by enforcing east-west traffic policy at the Kubernetes data plane level. Because CNF sits between pods, not just at the cluster ingress, it can inspect and enforce policy on inter-NF traffic that would otherwise be invisible to perimeter controls. Traffic between UPF, SMF, and N6-LAN services is subject to the same L4–L7 policy enforcement as external traffic.
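CNF expresses this with its own L4–L7 policy objects, but the shape of the control is easy to see in plain Kubernetes terms. The sketch below (Python kubernetes client) builds an L3/L4 analogy: a NetworkPolicy restricting ingress to UPF pods so only the SMF can reach them. The 5g-core namespace and nf labels are assumptions for illustration.

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# L3/L4 analogy only: admit traffic to UPF pods solely from SMF pods.
# CNF extends the same east-west idea up the stack to L7 inspection.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="upf-allow-smf-only", namespace="5g-core"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"nf": "upf"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"nf": "smf"})
                    )
                ]
            )
        ],
    ),
)

net.create_namespaced_network_policy(namespace="5g-core", body=policy)
```

Selecting the UPF pods with an ingress rule makes everything outside the allow rule default-deny for those pods, which is the posture a compromised neighbor should meet.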
Compliance Grounding: NIST AI RMF and ISO 42001
Enterprise security and compliance teams need more than architectural descriptions; they need framework mappings. Two specific controls CNF satisfies are worth naming explicitly.
NIST AI Risk Management Framework (AI RMF)
MANAGE 2.2 requires mechanisms to monitor AI system behavior against defined risk thresholds. CNF's telemetry layer and traffic baseline capabilities directly support this, providing continuous visibility into AI-to-API communication that can feed SIEM or SOC workflows. When an inference workload begins establishing connections to previously unobserved endpoints, CNF is the layer that generates the alert.
GOVERN 1.7 requires that AI risk management practices be integrated into enterprise risk processes rather than isolated at the model level. With CNF sitting in the Kubernetes data plane, AI traffic controls are part of the same infrastructure governance layer as all other application traffic: auditable, policy-driven, and consistent.
ISO/IEC 42001 (AI Management System Standard)
Clause 8.4 (AI system operation) requires that AI systems operate within defined boundaries, and that deviations are detectable and auditable. CNF's combination of policy enforcement and observability directly satisfies this clause: AI system traffic is inspectable, and deviations from defined communication patterns trigger enforcement actions rather than being silently permitted.
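MANAGE 2.2 and Clause 8.4 describe the same detect-enforce-audit contract. Here is a minimal sketch of that contract in Python, with a hypothetical allow-list standing in for CNF's policy engine: a deviation produces a deny decision and an audit record, never a silent pass.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("cnf.audit")

# The "defined boundary": permitted (source, destination) pairs. In a real
# deployment this comes from policy objects, not a hard-coded set.
ALLOWED = {("inference/llm-serving", "feature-store:443")}

def enforce(src: str, dst: str) -> bool:
    """Allow only traffic inside the defined boundary; audit every decision."""
    allowed = (src, dst) in ALLOWED
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "src": src,
        "dst": dst,
        "decision": "allow" if allowed else "deny",
    }))
    return allowed

enforce("inference/llm-serving", "feature-store:443")  # inside the boundary
enforce("inference/llm-serving", "203.0.113.9:8443")   # deviation: deny + audit
```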
Telco and 5G: The Architecture Stays
The 5G and telco framing in CNF's architecture is well-founded and should be kept. CNF's TMM-based packet processing, DNS acceleration, and N6-LAN integration are purpose-built for the performance requirements of carrier-grade environments. What changes in the ARMOR framing is the emphasis: telco is the operational context, but security and AI readiness are the reasons the architecture matters to enterprise customers today.
5G Core & Telco Cloud
- CNF deployed within Kubernetes-based 5G core environments with UPF and N6-LAN traffic flow integration
- East-west traffic inspection between network functions, not just perimeter enforcement
- Consolidated NAT, stateful firewall, and DPI services in a single cloud-native platform
Enterprise Cloud-Native Environments
- North-south and east-west traffic enforcement with L4–L7 policy
- Advanced segmentation between workloads including AI inference services
- Integrated DNS and edge acceleration
Hybrid & Multi-Cloud
- Unified policy management via Kubernetes CRDs across private and hyperscaler environments
- GitOps-driven lifecycle automation for consistent enforcement across clusters (see the sketch below)
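A minimal sketch of that GitOps flow, under stated assumptions: policy manifests live in a git-synced ./policies/ directory, and the kubeconfig contexts listed below map to the target clusters (both are illustrative). A production setup would normally use a reconciling GitOps controller such as Argo CD or Flux; kubernetes.utils.create_from_yaml appears here only to show the apply step.

```python
from pathlib import Path
from kubernetes import config, utils

# Illustrative assumptions: ./policies/ is a git-synced checkout of the
# policy repo; each context below points at one target cluster.
CLUSTERS = ["private-dc", "aws-prod", "azure-prod"]

for ctx in CLUSTERS:
    api = config.new_client_from_config(context=ctx)
    for manifest in sorted(Path("policies").glob("*.yaml")):
        # Apply the same declarative policy to every cluster so enforcement
        # does not drift between private and hyperscaler environments.
        # A real controller would reconcile (create or update), not just create.
        utils.create_from_yaml(api, str(manifest))
```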
AI and Edge Deployments
- DPU offload for network functions running adjacent to GPU inference nodes
- Traffic inspection and policy enforcement for distributed model serving endpoints
- Security controls for AI agent API traffic crossing trust boundaries
- Telemetry generation for SOC visibility into AI system behavior
WWT ATC Validation: Specific Outcomes, Not Generic Claims
WWT's Advanced Technology Center is where architecture decisions become validated reference implementations. For CNF in AI-ready 5G environments, the ATC work produces specific, repeatable outcomes, not generic capability assertions.
In the ATC lab linked below, users can explore F5 BIG-IP Next CNF use cases running in Kubernetes on the NVIDIA BlueField-3 Data Processing Unit (DPU), with applications utilizing the NVIDIA L40 GPU:
https://www.wwt.com/lab/f5-big-ip-next-cloud-native-network-functions-cnf
The ARMOR Anchor: CNF and BNK as a Consistent Narrative
CNF and BIG-IP Next for Kubernetes (BNK) are not separate product pitches. They are two layers of the same ARMOR Infrastructure Security story:
- BNK is the Kubernetes-native traffic enforcement layer for enterprise application platforms and AI agent workloads
- CNF is the cloud-native network function for 5G core and edge environments, providing the same enforcement and observability capabilities in telco and AI-at-the-edge contexts
Both run on NVIDIA BlueField DPUs. Both provide the east-west inspection and AI traffic telemetry that SOC teams need. Both map to the same NIST AI RMF and ISO 42001 controls. Together, they give WWT customers a consistent security architecture from the enterprise Kubernetes cluster to the 5G edge, anchored in ARMOR's Infrastructure Security and Secure AI Operations domains.
Conclusion
Cloud-native networking is no longer optional. For organizations running 5G core functions, AI inference at the edge, or both, it is the foundation of modern digital infrastructure, and it has to be secure by design, not secured after the fact.
F5 BIG-IP Cloud Native Edition provides the performance, scalability, and security required for Kubernetes, 5G, and AI edge environments. Running on NVIDIA BlueField DPUs and anchored in WWT's ARMOR Infrastructure Security practice, CNF is not a product deployment; it is a validated reference implementation of what secure AI-ready infrastructure looks like in practice.
WWT's ATC has done the validation work. The architecture is proven. For organizations ready to modernize network functions without sacrificing reliability, security, or AI readiness, connect with the ARMOR team or explore our ATC lab resources to see CNF in action alongside BNK as part of a complete Infrastructure Security reference architecture.