What's new
Deploy NVIDIA NIM for LLM on Kubernetes
NVIDIA NIM packages large language models as scalable, containerized microservices. This lab walks through deploying NIM on Kubernetes, covering GPU scheduling, dynamic scaling, and CI/CD integration, so teams can serve models reliably and focus on building intelligent features for enterprise-grade AI solutions.
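As a flavor of what the lab covers, here is a minimal sketch of creating a NIM Deployment with the official Kubernetes Python client. The image tag, secret name, and namespace are illustrative assumptions, not the lab's actual manifest; NVIDIA also publishes Helm charts for NIM that a real deployment would likely use instead.

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

container = client.V1Container(
    name="nim-llm",
    image="nvcr.io/nim/meta/llama-3.1-8b-instruct:latest",  # assumed image tag
    ports=[client.V1ContainerPort(container_port=8000)],    # OpenAI-compatible API port
    env=[client.V1EnvVar(
        name="NGC_API_KEY",
        value_from=client.V1EnvVarSource(
            secret_key_ref=client.V1SecretKeySelector(name="ngc-api", key="NGC_API_KEY")),  # assumed secret
    )],
    resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),  # one GPU per replica
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="nim-llm"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "nim-llm"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "nim-llm"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```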
Advanced Configuration Lab
•61 launches
Deploy NVIDIA NIM for LLM on Docker
This lab provides a hands-on, guided experience deploying the NVIDIA NIM for LLM microservice in Docker.
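For context on what the guided steps automate, below is a minimal sketch using the Docker SDK for Python. The image name, cache path, and key handling are assumptions for illustration; the lab supplies its own image and credentials. Once running, NIM exposes an OpenAI-compatible API on port 8000.

```python
import docker  # pip install docker

client = docker.from_env()

# Launch a NIM container with GPU access (illustrative image, key, and paths).
container = client.containers.run(
    "nvcr.io/nim/meta/llama-3.1-8b-instruct:latest",  # assumed image tag
    detach=True,
    environment={"NGC_API_KEY": "<your-ngc-key>"},     # used to pull model assets from NGC
    ports={"8000/tcp": 8000},                          # OpenAI-compatible API port
    volumes={"/opt/nim-cache": {"bind": "/opt/nim/.cache", "mode": "rw"}},  # model cache (assumed path)
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],  # all GPUs
)
print(container.short_id, container.status)
```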
Foundations Lab
•63 launches
Pure Storage GenAI Pod with NVIDIA
This lab provides a high-performance environment to test the deployment and tuning of different NVIDIA NIMs and Blueprints.
Advanced Configuration Lab
•11 launches
F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs
This lab explains what F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and how offloading traffic to BlueField-3 DPUs frees up CPU resources for your applications on the physical Kubernetes host.
Advanced Configuration Lab
•9 launches
NVIDIA Blueprint: Enterprise RAG
NVIDIA's AI Blueprint for RAG is a foundational guide for developers to build powerful data extraction and retrieval pipelines. It leverages NVIDIA NeMo Retriever models to create scalable and customizable RAG (Retrieval-Augmented Generation) applications.
This blueprint allows you to connect large language models (LLMs) to a wide range of enterprise data, including text, tables, charts, and infographics within millions of PDFs. The result is context-aware responses that can unlock valuable insights.
NVIDIA reports that this blueprint delivers up to 15x faster multimodal PDF data extraction and 50% fewer incorrect answers. That boost in performance and accuracy helps enterprises drive productivity and pull actionable insights from their data.
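To make the pipeline concrete, here is a minimal Python sketch of the request flow a RAG application like this implements: retrieve relevant chunks, then generate an answer grounded in them. The retriever endpoint, response shape, and model name are assumptions for illustration, not the blueprint's actual API.

```python
import requests

RETRIEVER_URL = "http://localhost:8001/v1/retrieval"      # hypothetical retriever endpoint
LLM_URL = "http://localhost:8000/v1/chat/completions"     # OpenAI-compatible LLM endpoint

def rag_answer(question: str) -> str:
    # 1. Retrieve the chunks most relevant to the question (response shape assumed).
    hits = requests.post(RETRIEVER_URL, json={"query": question, "top_k": 4}).json()
    context = "\n\n".join(chunk["text"] for chunk in hits["chunks"])

    # 2. Ask the LLM to answer using only the retrieved context.
    resp = requests.post(LLM_URL, json={
        "model": "meta/llama-3.1-8b-instruct",  # illustrative model name
        "messages": [
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    })
    return resp.json()["choices"][0]["message"]["content"]

print(rag_answer("What were the revenue drivers described in the uploaded PDFs?"))
```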
Sandbox Lab
•156 launches
WWT Agentic Network Assistant
Explore the WWT Agentic Network Assistant, a browser-based AI tool that converts natural language into Cisco CLI commands, executes them across multiple devices, and delivers structured analysis. Using a local LLM, it streamlines troubleshooting, summarizes device health, and compares configs, demonstrating the future of intuitive, AI-driven network operations.
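The pattern behind such a tool can be sketched in a few lines: a local LLM translates the request into a CLI command, which is then run on a device. This sketch uses Netmiko and an OpenAI-compatible local endpoint as stand-ins; the assistant's actual architecture, prompts, and endpoints will differ.

```python
import requests
from netmiko import ConnectHandler  # pip install netmiko

LLM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local OpenAI-compatible endpoint

def nl_to_cli(request_text: str) -> str:
    # Ask the local LLM for a single read-only Cisco IOS 'show' command.
    resp = requests.post(LLM_URL, json={
        "model": "local-llm",  # illustrative model name
        "messages": [
            {"role": "system", "content": "Translate the request into one Cisco IOS "
                                          "'show' command. Reply with the command only."},
            {"role": "user", "content": request_text},
        ],
    })
    return resp.json()["choices"][0]["message"]["content"].strip()

device = {  # illustrative device and credentials
    "device_type": "cisco_ios",
    "host": "10.0.0.1",
    "username": "admin",
    "password": "secret",
}

command = nl_to_cli("Show me the interface error counters")
with ConnectHandler(**device) as conn:
    print(conn.send_command(command))
```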
Foundations Lab
•538 launches
NVIDIA Run:ai Researcher Sandbox
This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.
Sandbox Lab
•165 launches
Introduction to OpenShift AI with Intel and Dell Infrastructure
Red Hat OpenShift AI, formerly known as Red Hat OpenShift Data Science, is a platform designed to streamline the process of building and deploying machine learning (ML) models. It caters to both data scientists and developers by providing a collaborative environment for the entire lifecycle of AI/ML projects, from experimentation to production.
In this lab, you will explore the features of OpenShift AI by building and deploying a fraud detection model. The environment is built on top of Dell PowerEdge R660 servers with 5th Gen Intel Xeon processors.
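As a taste of the modeling step, here is a minimal scikit-learn sketch of the kind of fraud classifier involved, trained on synthetic stand-in data; the lab's actual notebook, dataset, and serving flow differ.

```python
# pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced transactions: roughly 2% fraud (illustrative data only).
X, y = make_classification(n_samples=10_000, n_features=20, weights=[0.98], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Class weighting helps the model pay attention to the rare fraud class.
model = RandomForestClassifier(class_weight="balanced", random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test), target_names=["legit", "fraud"]))
```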
Foundations Lab
•335 launches
Liqid Composable Disaggregated Infrastructure Lab
The Liqid Composable Disaggregated Infrastructure (CDI) Lab showcases hardware configurations that would be unfeasible with fixed physical servers: bare-metal machines composed in any number of configurations via software. The lab consists of Dell PowerEdge compute, Liqid PCIe4 Fabric, Liqid Matrix Software, Intel NVMe, and NVIDIA GPUs.
Foundations Lab
•44 launches