Solution overview

In this lab, you will explore the F5 BIG-IP Next for Kubernetes solution. The F5 AI Proving Ground environment consists of a virtual three-node Ubuntu Kubernetes control plane and two worker nodes. Each worker node combines NVIDIA L40S GPUs and an NVIDIA BlueField-3 DPU in a Dell PowerEdge R760xa server.
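For orientation, the sketch below uses the Kubernetes Python client to list the cluster's nodes and their allocatable GPUs. It is illustrative only: it assumes a working kubeconfig for the lab cluster and that the NVIDIA device plugin exposes GPUs under the standard nvidia.com/gpu resource name.

```python
# Minimal sketch: inventory cluster nodes and their allocatable GPUs.
# Assumes a valid kubeconfig and the NVIDIA device plugin (nvidia.com/gpu).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    roles = [
        label.split("/", 1)[1]
        for label in node.metadata.labels
        if label.startswith("node-role.kubernetes.io/")
    ]
    gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
    print(f"{node.metadata.name:30} roles={roles or ['worker']} gpus={gpus}")
```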

With this solution, there are two main components to consider: the Data Plane and the Control Plane.

The Data Plane is the Traffic Management Microkernel (TMM), while the Control Plane monitors the Kubernetes cluster's state and updates the TMM's configuration. The BIG-IP Next for Kubernetes Data Plane (TMM) secures network traffic entering and exiting the Kubernetes cluster and proxies that traffic to applications running in the cluster.
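The watch-and-reconfigure pattern the Control Plane follows can be illustrated with the Kubernetes Python client. The sketch below only prints what it would do; push_to_tmm is a hypothetical placeholder standing in for the actual BIG-IP Next configuration interface, which is not part of this lab section.

```python
# Illustrative sketch of the Control Plane pattern: watch cluster state and
# update the data plane (TMM) configuration when Services change.
# push_to_tmm() is a hypothetical placeholder, not the real BIG-IP Next API.
from kubernetes import client, config, watch

def push_to_tmm(action: str, name: str, namespace: str) -> None:
    # Placeholder: a real control plane would translate this event into
    # TMM data-plane configuration running on the BlueField-3 DPU.
    print(f"TMM config update: {action} {namespace}/{name}")

config.load_kube_config()
v1 = client.CoreV1Api()
w = watch.Watch()

# Stream Service events for a short window and hand each one to the
# placeholder configuration function.
for event in w.stream(v1.list_service_for_all_namespaces, timeout_seconds=60):
    svc = event["object"]
    push_to_tmm(event["type"], svc.metadata.name, svc.metadata.namespace)
```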

You will deploy applications to an NVIDIA L40S Kubernetes cluster. During the deployment process, you will see the importance of pairing F5 BIG-IP Next with the NVIDIA BlueField-3 Data Processing Unit (DPU). The solution shows how to fully utilize the DPU to offload traffic processing, freeing the host CPUs for applications while the DPU remains dedicated to traffic processing.
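As a preview of the deployment step, here is a minimal sketch that creates a simple Deployment with the Kubernetes Python client. The name, image, labels, and namespace are illustrative placeholders, not the lab's actual application.

```python
# Minimal sketch: deploy a placeholder application to the cluster.
# Name, image, labels, and namespace are illustrative, not the lab's actual app.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

labels = {"app": "demo-app"}  # hypothetical label set
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="demo-app",
                        image="nginx:1.25",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created; traffic to it is proxied by TMM running on the DPU.")
```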

Lab diagram


Technologies