Explore
7 results found
HPE Private Cloud AI - Guided Walkthrough
This lab gives you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired the platform with NVIDIA AI Enterprise software, giving customers a scalable on-prem solution that handles AI workloads ranging from inference and RAG to model fine-tuning and training.
Foundations Lab
•Fundamentals
•200 launches
F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs
This lab explains what F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and how offloading traffic to BlueField-3 DPUs frees CPU resources on the physical Kubernetes host for your applications.
Advanced Configuration Lab
•Advanced
•17 launches
ATC+
Cisco Nexus Dashboard Fabric Controller (NDFC) Lab
This lab presents a simplified two-switch high-availability design for a Cisco AI deployment, highlighting both North–South and East–West traffic flows. It uses NDFC to demonstrate operational visibility and telemetry monitoring. The provided NDFC account is read-only for demonstration purposes. The topology separates administrative connectivity and GPU communication, offering a focused view of core AI networking concepts.
Advanced Configuration Lab
•Fundamentals
•16 launches
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform combines Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and the Milvus vector database. Deployed via Kubernetes or Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection models to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•Advanced
•12 launches
HPE Private Cloud AI - NIM Deployment
In this lab, a customer deploys an NVIDIA NIM to the HPE Private Cloud AI instance in the ATC. The lab walks the customer through deploying a black-forest-labs/FLUX.1-dev NIM (an image-generation NIM) and also shows how to interact with the Kubernetes layer (Ezmeral) embedded in the HPE solution.
Advanced Configuration Lab
•Fundamentals
•16 launches
Deploy NVIDIA NIM for LLM on Docker
This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker.
Foundations Lab
•Fundamentals
•66 launches
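The Docker deployment in this lab typically follows NVIDIA's documented NIM launch pattern. As a hedged sketch (the exact image tag, model, and key handling in the lab may differ), the flow looks like this:

```shell
# Sketch only: standard NIM-on-Docker launch per NVIDIA's public docs.
# NGC_API_KEY value and the model image are illustrative placeholders.
export NGC_API_KEY=<paste your NGC API key>
docker login nvcr.io -u '$oauthtoken' -p "$NGC_API_KEY"

docker run -d --name nim-llm \
  --gpus all \
  -e NGC_API_KEY \
  -v ~/.cache/nim:/opt/nim/.cache \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama3-8b-instruct:latest

# Once the model finishes loading, NIM exposes an OpenAI-compatible API:
curl http://localhost:8000/v1/models
```

The cache volume mount keeps downloaded model weights across container restarts, so subsequent launches start much faster.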
ATC+
Deploy NVIDIA NIM for LLM on Kubernetes
NVIDIA NIM streamlines AI deployment by packaging large language models as scalable, containerized microservices. It integrates with Kubernetes for optimized GPU performance, dynamic scaling, and robust CI/CD pipelines, simplifying complex model serving so teams can focus on innovation and intelligent feature development. Ideal for enterprise-grade AI solutions.
Advanced Configuration Lab
•Intermediate
•79 launches
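For orientation before the lab, a Kubernetes NIM deployment can be sketched as a minimal manifest. This is an assumption-laden illustration, not the lab's actual configuration: the image, Secret name, and resource request are placeholders, and production deployments typically use NVIDIA's NIM Helm chart instead.

```yaml
# Hedged sketch: a minimal Deployment for a NIM container on Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nim-llm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nim-llm
  template:
    metadata:
      labels:
        app: nim-llm
    spec:
      containers:
      - name: nim-llm
        image: nvcr.io/nim/meta/llama3-8b-instruct:latest  # illustrative tag
        ports:
        - containerPort: 8000
        env:
        - name: NGC_API_KEY
          valueFrom:
            secretKeyRef:
              name: ngc-api-key   # assumed pre-created Secret
              key: NGC_API_KEY
        resources:
          limits:
            nvidia.com/gpu: 1    # requires the NVIDIA device plugin
```

Requesting `nvidia.com/gpu` lets the scheduler place the pod on a GPU node; a Service or Ingress would then front the container's OpenAI-compatible endpoint on port 8000.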