Overview
Explore
Labs
Services
Events
Select a tab
In the ATC
Level up your skills with on-demand learning labs that focus on the latest tech advancement.
Intelligent Resource Optimizer
In this lab, customers will deploy the Intelligent Resource Optimizer on the HPE Private Cloud AI instance in the ATC. Built using the NVIDIA NeMo Agent Toolkit and powered by Open AI OSS NIMs, this solution moves beyond static analysis to proactive infrastructure management. By integrating a specialized Time Series model, the Intelligent Resource Optimizer analyzes historical system utilization to accurately forecast future resource demands. This allows IT Operations teams to receive data-driven recommendations for scaling and optimization, ensuring peak performance and cost-efficiency across the environment.
Advanced Configuration Lab
7 launches
Model Context Protocol (MCP) Foundations Lab
MCP is an open-source standard designed to standardize how LLM applications connect to and work with external tools and data sources.
Foundations Lab
19 launches
F5 Distributed Cloud for LLMs
In the current AI market, the demand for scalable and secure deployments is increasing. Public cloud providers (AWS, Google, and Microsoft) are competing to provide GenAI infrastructure, driving the need for multi-cloud and hybrid cloud deployments.
However, distributed deployments come with challenges, including:
Complexity in managing multi-cloud environments.
Lack of unified visibility across clouds.
Inconsistent security and policy enforcement.
F5 Distributed Cloud provides a solution by offering a seamless, secure, and portable environment for GenAI workloads across clouds. This lab will guide you through setting up and securing GenAI applications with F5 Distributed Cloud on AWS EKS and GCP GKE.
Advanced Configuration Lab
29 launches
AI Prompt Injection Lab
Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
826 launches
Incident Knowledge Assistant
In this lab a customer will deploy the Incident Knowledge Assistant on the HPE Private Cloud AI instance in the ATC. This Assistant is built on top of the NVIDIA NeMo Agent Toolkit and leverages Open AI OSS NIMs under the hood. Leveraging representative data from an IT Service Management Platform, the Incident Knowledge Assistant identifies the most relevant information from Incident, Change and Problem Management as well as the Knowledge Base to provide fast targeted guidance for IT Operations specialists.
Advanced Configuration Lab
15 launches
Deploy NVIDIA NIM for LLM on Kubernetes
NVIDIA NIM revolutionizes AI deployment by encapsulating large language models in a scalable, containerized microservice. Seamlessly integrate with Kubernetes for optimized GPU performance, dynamic scaling, and robust CI/CD pipelines. Simplify complex model serving, focusing on innovation and intelligent feature development, ideal for enterprise-grade AI solutions.
Advanced Configuration Lab
65 launches
Daily Ops Summary Agent
In this lab a customer will deploy the Daily Ops Summary Agent on the HPE Private Cloud AI instance in the ATC. The Daily Ops Summary Agent is built on top of the NVIDIA NeMo Agent Toolkit and leverages Open AI OSS NIMs under the hood. Leveraging representative data from IT Ops Service Management Platform, the Daily Ops Summary Agent brings the most relevant information from both Incident and Change management into an intelligent report for operation specialists.
Advanced Configuration Lab
30 launches
Deploy NVIDIA NIM for LLM on Docker
This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker.
Foundations Lab
63 launches