AI Proving Ground

What's Inside the AI Proving Ground

Inside the AI Proving Ground, IT professionals can explore validated designs, reference architectures and DIY environments that fit their use cases. This includes full-stack validations spanning network, compute and storage, along with Kubernetes (K8s) platforms with MLOps integrations. IT professionals also receive guidance from our AI and infrastructure experts, as well as insights from leading AI companies, to help accelerate decision-making and implementation of AI solutions.

Hardware

High-performance compute:

Access the latest CPUs, GPUs, DPUs and SmartNICs from industry leaders like NVIDIA, AMD and Intel.

Storage:

Solutions from Dell, NetApp, Pure Storage, VAST Data, IBM, DataDirect Networks (DDN), Weka, and HPE GreenLake.

Networking:

High-speed networking solutions like InfiniBand fabrics and 400GbE Ethernet from NVIDIA, Cisco and Arista.

Explore our offerings

WWT Agentic Network Assistant

Explore the WWT Agentic Network Assistant, a browser-based AI tool that converts natural language into Cisco CLI commands, executes them across multiple devices, and delivers structured analysis. Using a local LLM, it streamlines troubleshooting, summarizes device health, and compares configurations, demonstrating the future of intuitive, AI-driven network operations.
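
The pattern behind a tool like this can be sketched in a few lines: a local, OpenAI-compatible LLM endpoint translates a plain-English request into a read-only Cisco show command, which is then run against a device over SSH. This is only an illustrative sketch, not the assistant's actual implementation; the endpoint URL, model name, credentials and device details below are all placeholders.

```python
# Minimal sketch of natural-language-to-Cisco-CLI, assuming a local
# OpenAI-compatible endpoint (e.g., Ollama) and SSH access via Netmiko.
# All hostnames, credentials, and the model name are placeholders.
from openai import OpenAI
from netmiko import ConnectHandler

llm = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

def to_cli(question: str) -> str:
    """Ask the local LLM to translate a question into one 'show' command."""
    resp = llm.chat.completions.create(
        model="llama3.1",  # placeholder local model
        messages=[
            {"role": "system",
             "content": "Translate the user's request into a single Cisco IOS "
                        "'show' command. Reply with the command only."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

device = {"device_type": "cisco_ios", "host": "10.0.0.1",
          "username": "admin", "password": "secret"}  # placeholders

command = to_cli("Which interfaces are down on this switch?")
with ConnectHandler(**device) as conn:
    output = conn.send_command(command)
print(command, output, sep="\n")
```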

Crafting Your First AI Agent

Embark on a journey to create AI agents with "Crafting Your First AI Agent." This hands-on lab introduces LangChain, LangGraph, and CrewAI frameworks, empowering you to build, run, and understand AI agents. Discover the intricacies of AI workflows and multi-agent systems, transforming curiosity into practical expertise.
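
As a taste of what the lab covers, a minimal tool-using agent can be assembled with LangGraph's prebuilt ReAct helper. This is a sketch under stated assumptions, not the lab's own exercise: it assumes the langgraph and langchain-openai packages and an OpenAI API key, and the tool and model name are illustrative placeholders.

```python
# Minimal sketch of a tool-using agent with LangGraph's prebuilt ReAct helper.
# Assumes langgraph and langchain-openai are installed and OPENAI_API_KEY is
# set; the tool and model name are placeholders for illustration only.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_gpu_inventory(site: str) -> str:
    """Return a (mock) GPU inventory for a data center site."""
    return f"{site}: 8x H100 and 16x A100 GPUs available"

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model
agent = create_react_agent(llm, tools=[get_gpu_inventory])

# The agent decides when to call the tool, then answers in natural language.
result = agent.invoke(
    {"messages": [("user", "How many GPUs are free at the Austin site?")]}
)
print(result["messages"][-1].content)
```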

NVIDIA Run:ai Researcher Sandbox

This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.

Prompt Engineering Lab

This lab will help users strengthen their prompt engineering skills using the prompt blueprint framework, which is a structured approach for writing effective prompts. Participants will refine prompts step by step and experiment with blueprint elements to achieve better model outputs.
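
The blueprint idea can be illustrated with a small helper that assembles a prompt from named elements instead of free-form text. The element names below (role, context, task, output format, constraints) are generic examples and may not match the lab's exact blueprint.

```python
# Illustrative sketch of a structured "prompt blueprint": a prompt built from
# named elements rather than free-form text. The element names are examples
# and may differ from the blueprint taught in the lab.
def build_prompt(role: str, context: str, task: str,
                 output_format: str, constraints: str) -> str:
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Output format: {output_format}",
        f"Constraints: {constraints}",
    ])

prompt = build_prompt(
    role="You are a network operations analyst.",
    context="The data center uses a 400GbE Ethernet leaf-spine fabric.",
    task="Summarize the three most likely causes of intermittent packet loss.",
    output_format="A numbered list, one sentence per item.",
    constraints="Do not recommend hardware purchases.",
)
print(prompt)
```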

NVIDIA Blueprint: Enterprise RAG

NVIDIA's AI Blueprint for RAG is a foundational guide for developers to build powerful data extraction and retrieval pipelines. It leverages NVIDIA NeMo Retriever models to create scalable and customizable RAG (Retrieval-Augmented Generation) applications. This blueprint allows you to connect large language models (LLMs) to a wide range of enterprise data, including text, tables, charts, and infographics within millions of PDFs. The result is context-aware responses that can unlock valuable insights. By using this blueprint, you can achieve 15x faster multimodal PDF data extraction and reduce incorrect answers by 50%. This boost in performance and accuracy helps enterprises drive productivity and get actionable insights from their data.
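
As a simplified illustration of the retrieve-then-generate flow the blueprint automates, the toy example below ranks documents by keyword overlap and assembles a context-grounded prompt. The real pipeline uses NeMo Retriever embedding and reranking models plus an LLM endpoint; the documents and query here are invented for illustration.

```python
# Toy illustration of the retrieve-augment-generate pattern behind RAG.
# A production deployment of the blueprint would use NeMo Retriever embedding
# and reranking models plus an LLM; here retrieval is simple keyword overlap
# so the example stays self-contained.
DOCS = {
    "q3_report": "Q3 revenue grew 12 percent, driven by GPU cluster deployments.",
    "policy":    "All model training jobs must run inside the private cloud.",
    "runbook":   "Restart the ingestion service if PDF extraction stalls.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by shared word count with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(DOCS.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def build_rag_prompt(query: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# The resulting prompt would be sent to an LLM for a context-aware answer.
print(build_rag_prompt("How much did revenue grow in Q3?"))
```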

HPE Private Cloud AI - Guided Walkthrough

This lab gives you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE pairs the platform with NVIDIA AI Enterprise software, giving customers a scalable on-prem solution that can handle a range of AI workloads, including inference, RAG, model fine-tuning and model training.

AI Prompt Injection Lab

Explore the hidden dangers of prompt injection in large language models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in unintended ways. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
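
One guardrail idea the lab explores can be sketched as a simple input filter that screens untrusted text for common injection phrases before it reaches the model. The pattern list below is a toy example of this approach, not the lab's actual guardrail, and would be far from sufficient on its own.

```python
# Illustrative guardrail: screen untrusted input for common prompt-injection
# phrases before it is placed into an LLM prompt. The pattern list is a toy
# example; production guardrails use classifiers and stricter policies.
import re

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (the )?(system|hidden) prompt",
    r"you are now",               # role-override attempts
    r"disregard .* guardrails",
]

def looks_like_injection(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

def safe_prompt(system: str, untrusted: str) -> str:
    if looks_like_injection(untrusted):
        raise ValueError("Possible prompt injection detected; input rejected.")
    # Keep trusted instructions and untrusted data clearly separated.
    return f"{system}\n\n<untrusted_input>\n{untrusted}\n</untrusted_input>"

print(safe_prompt("Summarize the document.", "Quarterly results were strong."))
```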

Generative AI Fundamentals

This lab walks users through the basics of generative AI.