Explore
HPE Private Cloud AI - Guided Walkthrough
This lab gives you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired the platform with NVIDIA AI Enterprise software, giving customers a scalable on-prem solution that can handle a range of AI workloads, from inference and RAG to model fine-tuning and training.
Foundations Lab
•171 launches
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and Milvus vector DB. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•11 launches
Pure Storage GenAI Pod with NVIDIA
This lab provides a highly performant environment for testing the deployment and tuning of different NVIDIA NIMs and Blueprints.
Advanced Configuration Lab
•12 launches
Deploy NVIDIA NIM for LLM on Kubernetes
NVIDIA NIM streamlines AI deployment by packaging large language models as scalable, containerized microservices. It integrates with Kubernetes for optimized GPU utilization, dynamic scaling, and robust CI/CD pipelines. By simplifying complex model serving, it lets teams focus on innovation and intelligent feature development, making it well suited to enterprise-grade AI solutions.
Advanced Configuration Lab
•70 launches
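As a concrete sketch of what this lab covers, a minimal Kubernetes manifest for running a NIM container might look like the following. The image tag, secret name, labels, and resource requests here are illustrative assumptions, not the lab's exact configuration:

```yaml
# Illustrative sketch only: names, image tag, and secret are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nim-llm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nim-llm
  template:
    metadata:
      labels:
        app: nim-llm
    spec:
      containers:
      - name: nim-llm
        image: nvcr.io/nim/meta/llama-3.1-8b-instruct:latest  # example NIM image
        ports:
        - containerPort: 8000    # NIM's OpenAI-compatible API port
        env:
        - name: NGC_API_KEY
          valueFrom:
            secretKeyRef:
              name: ngc-api-secret   # assumed pre-created NGC credential secret
              key: NGC_API_KEY
        resources:
          limits:
            nvidia.com/gpu: 1        # one GPU per replica
```

Dynamic scaling, as described above, would then come from adjusting `replicas` or attaching a HorizontalPodAutoscaler, with each replica pinned to its own GPU.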
Cisco AI NDFC factory lab
This lab presents a simplified two-switch high-availability design for a Cisco AI deployment, highlighting both North–South and East–West traffic flows. It uses NDFC to demonstrate operational visibility and telemetry monitoring. The provided NDFC account is read-only for demonstration purposes. The topology separates administrative connectivity and GPU communication, offering a focused view of core AI networking concepts.
Advanced Configuration Lab
•3 launches
Deploy NVIDIA NIM for LLM on Docker
This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker.
Foundations Lab
•63 launches
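Once a NIM container is running in Docker, it exposes an OpenAI-compatible API (typically on port 8000). The sketch below builds a chat-completions request for such an endpoint; the model name and URL are assumptions based on common NIM defaults, not this lab's exact values:

```python
import json

# Hypothetical endpoint; a locally running NIM container usually serves here.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="meta/llama-3.1-8b-instruct", max_tokens=64):
    """Build an OpenAI-compatible chat-completions payload for a NIM endpoint.

    The default model name is illustrative; use whichever model your
    NIM container actually serves.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("What is NVIDIA NIM?")
print(json.dumps(payload, indent=2))
```

This payload can be POSTed to the endpoint with any HTTP client; because the API follows the OpenAI schema, existing OpenAI-compatible tooling works against it unchanged.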