Overview
Explore
Services
Select a tab
23 results found
AI Prompt Injection Lab
Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
•678 launches
NVIDIA Run:ai Researcher Sandbox
This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.
Sandbox Lab
•129 launches
AIPG: Innovation Lab with NVIDIA DGX
NVIDIA DGX H100 integration and validation environment for enterprise AI workflows.
Advanced Configuration Lab
•6 launches
NVIDIA Blueprint: PDF Ingestion
NVIDIA Blueprints: PDF Ingestion, also known as NVIDIA-Ingest, or NV-Ingest, this blueprint is a scalable, performance-oriented document content and metadata extraction microservice. Including support for parsing PDFs, Word and PowerPoint documents, it uses specialized NVIDIA NIM microservices to find, contextualize, and extract text, tables, charts and images for use in downstream generative applications.
Sandbox Lab
•117 launches
Retrieval Augmented Generation (RAG) Walk Through Lab
This lab will go into the basics of Retrieval Augmented Generation (RAG) through hands on access to a dedicated environment.
Foundations Lab
•873 launches
Introduction into OpenShift AI with Intel and Dell Infrastructure
Red Hat OpenShift AI, formerly known as Red Hat OpenShift Data Science, is a platform designed to streamline the process of building and deploying machine learning (ML) models. It caters to both data scientists and developers by providing a collaborative environment for the entire lifecycle of AI/ML projects, from experimentation to production.
In this lab, you will explore the features of OpenShift AI by building and deploying a fraud detection model. This environment is built ontop of Dell R660's and Intel Xeon's 5th generation processors.
Foundations Lab
•307 launches
AIPG: HPE Private Cloud AI
HPE Private Cloud AI, a pioneering turnkey solution, integrates NVIDIA and HPE technologies to simplify and accelerate AI development. It offers scalable configurations and enables experimentation and innovation in a secure on-premises environment. Explore its capabilities through hands-on lab experiences and easily streamline your AI journey.
Advanced Configuration Lab
AI Gateway - LiteLLM Walkthrough Lab
This lab provides hands-on experience with LiteLLM, an open-source AI gateway that centralizes and manages access to Large Language Models (LLMs). Throughout the five modules, you'll learn how to set up and use LiteLLM to control, monitor, and optimize your AI model interactions.
Foundations Lab
•30 launches
AIPG: Dell Reference Architecture for Generative AI with NVIDIA
The Dell Reference Architecture environment is a full-stack solution that includes Dedicated Dell PowerSwitch High-Speed Networking, Dell PowerEdge Accelerator Optimized Compute nodes (XE9680 and R760xa servers), and Dell PowerScale Storage (F600 Array) as the hardware components and includes multiple MLOps and Kubernetes Platform solutions.
Advanced Configuration Lab
HPE Private Cloud AI - Guided Walkthrough
This lab is intended to give you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired with NVIDIA's AI Enterprise software giving customers a scalable on-prem solution that can handle multiple different AI workloads from Inference, RAG, model fine-tuning, and model training.
Foundations Lab
•51 launches
AIPG: HPE Reference Architecture for Generative AI
HPE's full-stack Generative AI solution integrates advanced hardware and software, including NVIDIA GPUs and HPE GreenLake, to empower AI development. This lab offers hands-on experience with HPE's AI stack, enabling customization, performance validation, and accelerated AI deployment, providing a critical resource for organizations aiming to optimize their data center operations for generative AI.
Advanced Configuration Lab
AIPG: GPU-as-a-Service with Liqid
The AI Proving Ground GPU-as-a-Service environment is a fully automated solution that enables WWT engineers to build physical server environments with different server, CPU, GPU, and Operating System options.
Advanced Configuration Lab