What's new
Breaking Data Silos: How Private Inference Unlocks GPU ROI on Sensitive Data
Private inference applies Protopia AI's Stained Glass Transform (SGT) to convert sensitive prompts into stochastic embeddings inside the data owner's root of trust, enabling NVIDIA Triton, NIM, and vLLM deployments to process regulated data without exposing plaintext. Aligned with WWT's ARMOR framework, this architecture strengthens data protection, model security, and multi-tenant GPU infrastructure ROI.
Blog
•Apr 28, 2026
Building a Trusted Agent with NemoClaw
Deploy secure, always-on AI agents in minutes using NVIDIA NemoClaw's single-command installer. In this hands-on lab, you'll install NemoClaw, configure NVIDIA OpenShell's privacy and security guardrails, and run your first autonomous agent powered by local open models like NVIDIA Nemotron. You'll see firsthand how policy-based controls govern agent behavior and keep sensitive data secure.
Advanced Configuration Lab
•Intermediate
•178 launches
Partner POV | Explaining Tokens — the Language and Currency of AI
Tokens are units of data processed by AI models during training and inference, enabling prediction, generation and reasoning.
Partner Contribution
•Dec 16, 2025
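As a rough illustration of the token concept above, here is a toy Python tokenizer. Real models use learned subword vocabularies (such as byte-pair encoding), so the whitespace scheme and function names below are illustrative assumptions only.

```python
def toy_tokenize(text: str) -> list[str]:
    # Toy scheme: lowercase and split on whitespace. Production tokenizers
    # map text to subword units drawn from a learned vocabulary instead.
    return text.lower().split()

def count_tokens(text: str) -> int:
    # Token counts are what drive context-window limits and usage-based
    # pricing, which is why tokens are called the "currency" of AI.
    return len(toy_tokenize(text))
```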
Introduction to NVIDIA NIM for LLM
This learning path introduces NVIDIA NIM for LLM microservices, covering its purpose, formats, and benefits. You'll explore deployment options via API Catalog, Docker, and Kubernetes, and complete hands-on labs for Docker and Kubernetes-based inference workflows—building skills to deploy, scale, and integrate GPU-optimized LLMs into enterprise applications.
Learning Path
•Fundamentals
What is NVIDIA NIM?
Unlock the potential of generative AI with NVIDIA NIM. This video dives into how NVIDIA NIM microservices can transform your AI deployment into a production-ready powerhouse.
Video
•1:55
•Oct 24, 2025
HPE Private Cloud AI - NIM Deployment
In this lab, the customer will deploy an NVIDIA NIM to the HPE Private Cloud AI instance in the ATC. The lab walks through how to deploy the black-forest-labs/FLUX.1-dev NIM (an image-generation NIM) and how to interact with the Kubernetes layer (Ezmeral) embedded in the HPE solution.
Advanced Configuration Lab
•Fundamentals
•16 launches
NVIDIA Blueprint: Enterprise RAG
NVIDIA's AI Blueprint for Retrieval-Augmented Generation (RAG) is a foundational guide for developers to build powerful data extraction and retrieval pipelines. It leverages NVIDIA NeMo Retriever models to create scalable and customizable RAG applications.
This blueprint allows you to connect large language models (LLMs) to a wide range of enterprise data, including text, tables, charts, and infographics within millions of PDFs. The result is context-aware responses that can unlock valuable insights.
By using this blueprint, you can achieve 15x faster multimodal PDF data extraction and reduce incorrect answers by 50%. This boost in performance and accuracy helps enterprises drive productivity and get actionable insights from their data.
Sandbox Lab
•Intermediate
•164 launches
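The retrieve-then-generate pattern behind the blueprint can be sketched with the standard library alone. The blueprint itself uses NeMo Retriever embedding and re-ranking models; the bag-of-words "embedding", cosine ranking, and helper names below are stand-ins for illustration, not the blueprint's actual components.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words term counts. The real blueprint uses
    # NeMo Retriever embedding models to produce dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank document chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # Retrieved context is prepended so the LLM answers from enterprise
    # data rather than from its parametric memory alone.
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Swapping the toy `embed` for a real embedding model and the chunk list for a vector database is what turns this sketch into a production RAG pipeline.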
RAG Customized Multimodal Chatbot Demo
WWT's Knowledge Assistant, powered by NVIDIA's Multimodal PDF Data Extraction Blueprint, accelerates AI outcomes and minimizes hallucinations. Using retrieval augmented generation (RAG), build intelligent chatbots and AI agents that deliver accurate, context-rich insights from your enterprise data.
Video
•5:19
•Apr 2, 2025
A Deep Dive into AWS and NVIDIA NIM Integration
Explore the integration of AWS and NVIDIA NIM in this comprehensive research article. Discover solutions for deploying NVIDIA NIM microservices on AWS, including GPU-backed instances, NVIDIA GPU Operator installation, and automation with Anthropic Claude 3.5. Learn how to enhance AI deployments with AWS and NVIDIA high-performance architecture for improved customer engagement and digital presence.
Blog
•Mar 4, 2025
Overview of NVIDIA NIM Microservices
Welcome to part 2 of our video series about the RAG lab infrastructure built in collaboration with NetApp, NVIDIA, and WWT. NVIDIA NIM microservices are a suite of user-friendly microservices that simplify the deployment of generative AI models, such as large language models (LLMs), embedding models, and re-ranking models, across various platforms. They make it easier for IT and DevOps teams to manage LLMs in their environments and provide standard APIs for developers to create AI-driven applications like copilots, chatbots, and assistants. NIM leverages NVIDIA GPU technology for fast, scalable deployment, ensuring efficient inference and high performance.
Video
•3:29
•Aug 28, 2024
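Those standard APIs follow the OpenAI chat-completions convention, so a deployed NIM endpoint can be called with plain Python. This is a sketch, not official client code: the base URL and model name are placeholders for whatever your deployment actually serves.

```python
import json
from urllib import request

def chat_payload(prompt: str, model: str) -> dict:
    # OpenAI-style chat-completions body accepted by a NIM endpoint.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def ask_nim(base_url: str, prompt: str, model: str) -> str:
    # base_url and model are placeholders; substitute your deployed NIM's
    # endpoint (e.g. http://localhost:8000) and its served model name.
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(chat_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the request and response shapes match the OpenAI convention, existing OpenAI-compatible client libraries can usually be pointed at a NIM endpoint by changing only the base URL.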
Accelerate your AI Journey with NVIDIA NIM
Transform AI model deployment with best-of-breed technology without sacrificing control or privacy.
Blog
•Aug 26, 2024
The NetApp and NVIDIA Infrastructure Stack
Welcome to part 1 of the video series about the RAG lab infrastructure built in collaboration with NetApp, NVIDIA, and World Wide Technology. This video series will take you behind the scenes of this state-of-the-art lab environment inside the AI Proving Ground from WWT, powered by the Advanced Technology Center.
Video
•4:45
•Aug 23, 2024