Overview
Explore
Services
Select a tab
15 results found
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and Milvus vector DB. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•6 launches
Cisco AI Defense Capture the Flag (CTF)
Experience Cisco AI Defense in an interactive Capture the Flag (CTF), designed to showcase how Cisco is securing the future of GenAI.
Advanced Configuration Lab
•296 launches
AI Prompt Injection Lab
Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
•694 launches
NVIDIA Blueprint: Enterprise RAG
NVIDIA's AI Blueprint for RAG is a foundational guide for developers to build powerful data extraction and retrieval pipelines. It leverages NVIDIA NeMo Retriever models to create scalable and customizable RAG (Retrieval-Augmented Generation) applications.
This blueprint allows you to connect large language models (LLMs) to a wide range of enterprise data, including text, tables, charts, and infographics within millions of PDFs. The result is context-aware responses that can unlock valuable insights.
By using this blueprint, you can achieve 15x faster multimodal PDF data extraction and reduce incorrect answers by 50%. This boost in performance and accuracy helps enterprises drive productivity and get actionable insights from their data.
Sandbox Lab
•152 launches
NVIDIA Run:ai Researcher Sandbox
This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.
Sandbox Lab
•138 launches
HPE Private Cloud AI - Guided Walkthrough
This lab is intended to give you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired with NVIDIA's AI Enterprise software giving customers a scalable on-prem solution that can handle multiple different AI workloads from Inference, RAG, model fine-tuning, and model training.
Foundations Lab
•57 launches
Retrieval Augmented Generation (RAG) - Programatic Lab
In this lab, we'll be focusing on the programatic steps of Retrieval Augmented Generation (RAG). First, we'll discuss data chunking, how we break down our documents. Then, we'll explore how these chunks become embeddings, numerical representations. Finally, we'll see how a vector database helps us efficiently retrieve this information.
Foundations Lab
•234 launches
Introduction into OpenShift AI with Intel and Dell Infrastructure
Red Hat OpenShift AI, formerly known as Red Hat OpenShift Data Science, is a platform designed to streamline the process of building and deploying machine learning (ML) models. It caters to both data scientists and developers by providing a collaborative environment for the entire lifecycle of AI/ML projects, from experimentation to production.
In this lab, you will explore the features of OpenShift AI by building and deploying a fraud detection model. This environment is built ontop of Dell R660's and Intel Xeon's 5th generation processors.
Foundations Lab
•316 launches
Cisco RoCE Fabrics
This lab will demonstrate how the Cisco Nexus Dashboard Fabric Controller can easily set up an AI/ML fabric with a simple point-and-click GUI. It will only do this easily without knowing the protocols or best practices. The controller will do the work.
Advanced Configuration Lab
•241 launches
Generative AI Fundamentals
This lab will walk the lab user through the basics of Generative AI
Foundations Lab
•352 launches
Training Data Poisoning Lab
Training data poisoning poses significant risks to Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) systems. This lab explores these dangers through a case study of an online forum, demonstrating how corrupted data can compromise AI effectiveness and security, and examines methods to mitigate such threats.
Foundations Lab
•170 launches
Deploying and Securing Multi-Cloud and Edge Generative AI Workloads with F5 Distributed Cloud
In the current AI market, the demand for scalable and secure deployments is increasing. Public cloud providers (AWS, Google, and Microsoft) are competing to provide GenAI infrastructure, driving the need for multi-cloud and hybrid cloud deployments.
However, distributed deployments come with challenges, including:
Complexity in managing multi-cloud environments.
Lack of unified visibility across clouds.
Inconsistent security and policy enforcement.
F5 Distributed Cloud provides a solution by offering a seamless, secure, and portable environment for GenAI workloads across clouds. This lab will guide you through setting up and securing GenAI applications with F5 Distributed Cloud on AWS EKS and GCP GKE.
Advanced Configuration Lab
•19 launches