Overview
Explore
Expertise
Ecosystem
Select a tab
9 results found
HPE Private Cloud AI - Guided Walkthrough
This lab is intended to give you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired with NVIDIA's AI Enterprise software giving customers a scalable on-prem solution that can handle multiple different AI workloads from Inference, RAG, model fine-tuning, and model training.
Foundations Lab
•69 launches
NVIDIA Run:ai Researcher Sandbox
This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.
Sandbox Lab
•164 launches
Pure Storage GenAI Pod with NVIDIA
This environment provides a highly performant environment to test the deployment and tuning of different NVIDIA NIMs and Blueprints.
Advanced Configuration Lab
•10 launches
NVIDIA Blueprint: Enterprise RAG
NVIDIA's AI Blueprint for RAG is a foundational guide for developers to build powerful data extraction and retrieval pipelines. It leverages NVIDIA NeMo Retriever models to create scalable and customizable RAG (Retrieval-Augmented Generation) applications.
This blueprint allows you to connect large language models (LLMs) to a wide range of enterprise data, including text, tables, charts, and infographics within millions of PDFs. The result is context-aware responses that can unlock valuable insights.
By using this blueprint, you can achieve 15x faster multimodal PDF data extraction and reduce incorrect answers by 50%. This boost in performance and accuracy helps enterprises drive productivity and get actionable insights from their data.
Sandbox Lab
•155 launches
F5 AI Gateway (GPU Accelerated)
This lab will provide access to an Openshift cluster running the F5 AI Gateway solution. We will walk through how the F5 AI Gateway routes requests to different models by either allowing them to pass through or, more importantly, securing them via prompt injection checking. We have also added a couple of other tests that will allow for language detection of that input that the F5 AI Gateway can also detect.
Advanced Configuration Lab
•10 launches
F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs
This lab will guide and educate you on what the F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and how it will ultimately help show the gains in offloading traffic to DPUs instead of tying up resources for your applications on the CPU of the physical K8 host.
Advanced Configuration Lab
•5 launches
Daily Ops Summary Agent
In this lab a customer will deploy the Daily Ops Summary Agent on the HPE Private Cloud AI instance in the ATC. The Daily Ops Summary Agent is built on top of the NVIDIA NeMo Agent Toolkit and leverages Open AI OSS NIMs under the hood. Leveraging representative data from IT Ops Service Management Platform, the Daily Ops Summary Agent brings the most relevant information from both Incident and Change management into an intelligent report for operation specialists.
Advanced Configuration Lab
•20 launches
Deploy Nvidia NIM for LLM on Kubernetes
NVIDIA NIM revolutionizes AI deployment by encapsulating large language models in a scalable, containerized microservice. Seamlessly integrate with Kubernetes for optimized GPU performance, dynamic scaling, and robust CI/CD pipelines. Simplify complex model serving, focusing on innovation and intelligent feature development, ideal for enterprise-grade AI solutions.
Advanced Configuration Lab
•47 launches
Deploy Nvidia NIM for LLM on Docker
This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker.
Foundations Lab
•52 launches