Overview
Explore
Services
Select a tab
13 results found
Cisco AI Defense Capture the Flag (CTF)
Experience Cisco AI Defense in an interactive Capture the Flag (CTF), designed to showcase how Cisco is securing the future of GenAI.
Advanced Configuration Lab
•571 launches
AI Prompt Injection Lab
Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
•834 launches
NVIDIA Blueprint: Enterprise RAG
NVIDIA's AI Blueprint for RAG is a foundational guide for developers to build powerful data extraction and retrieval pipelines. It leverages NVIDIA NeMo Retriever models to create scalable and customizable RAG (Retrieval-Augmented Generation) applications.
This blueprint allows you to connect large language models (LLMs) to a wide range of enterprise data, including text, tables, charts, and infographics within millions of PDFs. The result is context-aware responses that can unlock valuable insights.
By using this blueprint, you can achieve 15x faster multimodal PDF data extraction and reduce incorrect answers by 50%. This boost in performance and accuracy helps enterprises drive productivity and get actionable insights from their data.
Sandbox Lab
•157 launches
HPE Private Cloud AI - Guided Walkthrough
This lab is intended to give you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired with NVIDIA's AI Enterprise software giving customers a scalable on-prem solution that can handle multiple different AI workloads from Inference, RAG, model fine-tuning, and model training.
Foundations Lab
•91 launches
Model Context Protocol (MCP) Foundations Lab
MCP is an open-source standard designed to standardize how LLM applications connect to and work with external tools and data sources.
Foundations Lab
•20 launches
NVIDIA Run:ai Researcher Sandbox
This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.
Sandbox Lab
•165 launches
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and Milvus vector DB. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•11 launches
Introduction into OpenShift AI with Intel and Dell Infrastructure
Red Hat OpenShift AI, formerly known as Red Hat OpenShift Data Science, is a platform designed to streamline the process of building and deploying machine learning (ML) models. It caters to both data scientists and developers by providing a collaborative environment for the entire lifecycle of AI/ML projects, from experimentation to production.
In this lab, you will explore the features of OpenShift AI by building and deploying a fraud detection model. This environment is built ontop of Dell R660's and Intel Xeon's 5th generation processors.
Foundations Lab
•336 launches
F5 Distributed Cloud for LLMs
In the current AI market, the demand for scalable and secure deployments is increasing. Public cloud providers (AWS, Google, and Microsoft) are competing to provide GenAI infrastructure, driving the need for multi-cloud and hybrid cloud deployments.
However, distributed deployments come with challenges, including:
Complexity in managing multi-cloud environments.
Lack of unified visibility across clouds.
Inconsistent security and policy enforcement.
F5 Distributed Cloud provides a solution by offering a seamless, secure, and portable environment for GenAI workloads across clouds. This lab will guide you through setting up and securing GenAI applications with F5 Distributed Cloud on AWS EKS and GCP GKE.
Advanced Configuration Lab
•29 launches
Retrieval Augmented Generation (RAG) - Programatic Lab
In this lab, we'll be focusing on the programatic steps of Retrieval Augmented Generation (RAG). First, we'll discuss data chunking, how we break down our documents. Then, we'll explore how these chunks become embeddings, numerical representations. Finally, we'll see how a vector database helps us efficiently retrieve this information.
Foundations Lab
•274 launches
Person Tracking with Intel's AI Reference Kit
This lab focuses on implementing live person tracking using Intel's OpenVINO™, a toolkit for high-performance deep learning inference. The objective is to read frames from a video sequence, detect people within the frames, assign unique identifiers to each person, and track them as they move across frames. The tracking algorithm utilized here is Deep SORT (Simple Online and Realtime Tracking), an extension of SORT that incorporates appearance information along with motion for improved tracking accuracy.
Advanced Configuration Lab
•45 launches
AI Gateway - LiteLLM Walkthrough Lab
This lab provides hands-on experience with LiteLLM, an open-source AI gateway that centralizes and manages access to Large Language Models (LLMs). Throughout the five modules, you'll learn how to set up and use LiteLLM to control, monitor, and optimize your AI model interactions.
Foundations Lab
•53 launches