Explore
F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs
This lab explains what F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and demonstrates the gains from offloading traffic to DPUs rather than consuming CPU resources on the physical Kubernetes host where your applications run.
Advanced Configuration Lab
•9 launches
HPE Private Cloud AI - Guided Walkthrough
This lab gives you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired it with NVIDIA AI Enterprise software, giving customers a scalable on-prem solution that handles AI workloads ranging from inference and RAG to model fine-tuning and training.
Foundations Lab
•88 launches
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and the Milvus vector database. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection models to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•11 launches
Cisco RoCE Fabrics
This lab demonstrates how the Cisco Nexus Dashboard Fabric Controller can set up an AI/ML fabric with a simple point-and-click GUI. You don't need to know the underlying protocols or best practices; the controller does the work.
Advanced Configuration Lab
•294 launches
Pure Storage GenAI Pod with NVIDIA
This lab provides a highly performant environment to test the deployment and tuning of different NVIDIA NIMs and Blueprints.
Advanced Configuration Lab
•11 launches
Liqid Composable Disaggregated Infrastructure Lab
The Liqid Composable Disaggregated Infrastructure (CDI) Lab showcases how to compose, in software, hardware configurations that would be unfeasible to build as fixed physical servers: bare metal machines can be assembled in any number of configurations on demand. The lab consists of Dell PowerEdge compute, Liqid PCIe4 Fabric, Liqid Matrix Software, Intel NVMe storage, and NVIDIA GPUs.
Foundations Lab
•44 launches
Person Tracking with Intel's AI Reference Kit
This lab focuses on implementing live person tracking using Intel's OpenVINO™, a toolkit for high-performance deep learning inference. The objective is to read frames from a video sequence, detect people within the frames, assign unique identifiers to each person, and track them as they move across frames. The tracking algorithm used here is Deep SORT (Simple Online and Realtime Tracking), an extension of SORT that incorporates appearance information along with motion for improved tracking accuracy. A minimal sketch of this detect-and-track loop follows this entry.
Advanced Configuration Lab
•43 launches
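The pipeline described above amounts to a detect-then-track loop: run a person detector on each frame, convert its outputs to pixel-space boxes, and hand those boxes to the tracker for ID association. The sketch below is a minimal illustration under stated assumptions: an SSD-style detector from the Open Model Zoo (person-detection-0202, output rows of [image_id, label, conf, x_min, y_min, x_max, y_max]), a sample video file, and a placeholder Deep SORT-style tracker exposing an update() method; the model path, file names, and tracker interface are illustrative, not the lab's exact code.

```python
# Minimal detect-then-track sketch. Assumptions: an SSD-style person detector
# (e.g. person-detection-0202 from the Open Model Zoo) and a Deep SORT-style
# tracker with an update(boxes, frame) method, represented here by a placeholder.
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("person-detection-0202.xml")   # model path is an assumption
compiled = core.compile_model(model, "CPU")
in_shape = compiled.input(0).shape                     # e.g. [1, 3, 512, 512]
h, w = int(in_shape[2]), int(in_shape[3])

cap = cv2.VideoCapture("people.mp4")                   # sample video, assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model's input resolution and reorder HWC -> NCHW.
    blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[compiled.output(0)].reshape(-1, 7)

    boxes = []
    fh, fw = frame.shape[:2]
    for _, _, conf, x0, y0, x1, y1 in detections:
        if conf < 0.5:                                  # confidence threshold
            continue
        # Detector coordinates are normalized; scale back to pixel space.
        boxes.append((int(x0 * fw), int(y0 * fh), int(x1 * fw), int(y1 * fh)))

    # tracks = tracker.update(boxes, frame)             # Deep SORT association (placeholder)
    for x0, y0, x1, y1 in boxes:
        cv2.rectangle(frame, (x0, y0), (x1, y1), (0, 255, 0), 2)
cap.release()
```

Swapping in the full Deep SORT association step (motion prediction plus appearance embeddings) replaces the placeholder update call without changing the rest of the loop.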
Deploy NVIDIA NIM for LLM on Kubernetes
NVIDIA NIM streamlines AI deployment by packaging large language models as scalable, containerized microservices. This lab shows how to integrate NIM with Kubernetes for optimized GPU performance, dynamic scaling, and robust CI/CD pipelines, simplifying complex model serving so teams can focus on innovation and intelligent feature development in enterprise-grade AI solutions. A sample request against a deployed NIM endpoint follows this entry.
Advanced Configuration Lab
•62 launches
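Once a NIM LLM microservice is running in the cluster, it serves an OpenAI-compatible HTTP API, which is what makes it easy to slot into existing applications and CI/CD checks. The minimal sketch below assumes the Kubernetes Service has been exposed locally on port 8000 (for example via kubectl port-forward) and serves a meta/llama-3.1-8b-instruct model, so the endpoint address and model name are assumptions rather than the lab's exact configuration.

```python
# Minimal sketch of querying a NIM LLM service deployed on Kubernetes.
# Assumes the service is reachable at localhost:8000 (e.g. via port-forward).
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",   # NIM's OpenAI-compatible endpoint
    json={
        "model": "meta/llama-3.1-8b-instruct",     # model name is an assumption
        "messages": [{"role": "user", "content": "Summarize what a DPU does."}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the API follows the OpenAI schema, existing OpenAI client libraries can be pointed at the same endpoint by changing only the base URL.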
Deploy NVIDIA NIM for LLM on Docker
This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker. A minimal container-launch sketch follows this entry.
Foundations Lab
•63 launches
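For context on what the Docker deployment involves, the sketch below launches a NIM for LLM container programmatically with the Docker SDK for Python. The image tag, NGC_API_KEY environment variable, cache mount, and shared-memory size follow the general NIM container pattern but are assumptions here, not the lab's exact steps; the equivalent docker run command uses the same options.

```python
# Minimal sketch of launching a NIM LLM container with the Docker SDK for Python.
# Assumptions: an NGC_API_KEY environment variable, a local model cache directory,
# and the nvcr.io/nim/meta/llama-3.1-8b-instruct image -- all names are illustrative.
import os
import docker

client = docker.from_env()
container = client.containers.run(
    "nvcr.io/nim/meta/llama-3.1-8b-instruct:latest",   # image tag is an assumption
    detach=True,
    environment={"NGC_API_KEY": os.environ["NGC_API_KEY"]},
    ports={"8000/tcp": 8000},                          # NIM serves its API on port 8000
    volumes={os.path.expanduser("~/.cache/nim"): {"bind": "/opt/nim/.cache", "mode": "rw"}},
    shm_size="16g",
    # Pass all GPUs through to the container (equivalent of --gpus all).
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
)
print(container.short_id)
```

Once the container reports healthy, the service answers OpenAI-compatible requests on port 8000, as in the Kubernetes example above.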