Explore
What's new
F5 Distributed Cloud for LLMs
In the current AI market, demand for scalable and secure GenAI deployments is increasing. Public cloud providers (AWS, Google, and Microsoft) are competing to provide GenAI infrastructure, driving the need for multi-cloud and hybrid-cloud deployments.
However, distributed deployments come with challenges, including:
Complexity in managing multi-cloud environments.
Lack of unified visibility across clouds.
Inconsistent security and policy enforcement.
F5 Distributed Cloud provides a solution by offering a seamless, secure, and portable environment for GenAI workloads across clouds. This lab will guide you through setting up and securing GenAI applications with F5 Distributed Cloud on AWS EKS and GCP GKE.
Advanced Configuration Lab
•29 launches
AI Prompt Injection Lab
Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in unintended ways. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
•809 launches
Incident Knowledge Assistant
In this lab, a customer deploys the Incident Knowledge Assistant on the HPE Private Cloud AI instance in the ATC. The Assistant is built on the NVIDIA NeMo Agent Toolkit and uses OpenAI OSS NIMs under the hood. Working from representative data in an IT Service Management platform, it identifies the most relevant information from Incident, Change, and Problem Management, as well as the Knowledge Base, to provide fast, targeted guidance for IT Operations specialists.
Advanced Configuration Lab
•12 launches
Prompt Engineering Lab
This lab will help users strengthen their prompt engineering skills using the prompt blueprint framework, which is a structured approach for writing effective prompts. Participants will refine prompts step by step and experiment with blueprint elements to achieve better model outputs.
Foundations Lab
•158 launches
Pediatric Vaccine Assistant
Primary care and pediatric providers can spend significant time manually cross-referencing multiple sources to construct accelerated catch-up protocols for unvaccinated or under-vaccinated patients. The Pediatric Vaccine Assistant streamlines this workflow, reducing administrative burden and supporting evidence-based care.
Foundations Lab
•46 launches
AWS AI Lab Nova Sonic and Bedrock
This lab deploys a real-time speech-to-speech support agent for fictional company AnyTelco, where a customer calls in for help. It uses Amazon Nova Sonic in Amazon Bedrock for natural conversation, with S3/CloudFront and Cognito for frontend/auth. A backend on ECS/NLB streams audio and invokes MCP tools (e.g., DynamoDB customer profiles, Bedrock knowledge bases). All infrastructure is provisioned via AWS CDK.
Foundations Lab
•88 launches
LLM Powered Clinical Trial Matcher
Recruiting patients for clinical trials presents significant challenges for pharmaceutical companies, often delaying drug development, increasing costs, and leading to study failure. This lab explores how AI applied to anonymized health records enables efficient trial matching at the single, bulk, and multi-trial levels, reducing time, improving accuracy, and enhancing access for researchers, providers, and patients.
Foundations Lab
•74 launches
AI-Powered Manufacturing Assistant
The WWT AI-Powered Predictive Maintenance Assistant uses machine learning and a large language model to predict equipment failures, schedule maintenance, generate instructions, and answer related questions. Through an interactive UI, users explore proactive, data-driven strategies to reduce downtime, extend machine life, and optimize resources, demonstrating AI's transformative potential in manufacturing environments.
Foundations Lab
•95 launches
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and Milvus vector DB. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•11 launches
WWT Agentic Network Assistant
Explore the WWT Agentic Network Assistant, a browser-based AI tool that converts natural language into Cisco CLI commands, executes them across multiple devices, and delivers structured analysis. Using a local LLM, it streamlines troubleshooting, summarizes device health, and compares configurations, demonstrating the future of intuitive, AI-driven network operations.
Foundations Lab
•538 launches
Cisco AI Defense Capture the Flag (CTF)
Experience Cisco AI Defense in an interactive Capture the Flag (CTF), designed to showcase how Cisco is securing the future of GenAI.
Advanced Configuration Lab
•534 launches
Crafting Your First AI Agent
Embark on a journey to create AI agents with "Crafting Your First AI Agent." This hands-on lab introduces LangChain, LangGraph, and CrewAI frameworks, empowering you to build, run, and understand AI agents. Discover the intricacies of AI workflows and multi-agent systems, transforming curiosity into practical expertise.
Foundations Lab
•589 launches
Person Tracking with Intel's AI Reference Kit
This lab focuses on implementing live person tracking using Intel's OpenVINO™, a toolkit for high-performance deep learning inference. The objective is to read frames from a video sequence, detect people within the frames, assign unique identifiers to each person, and track them as they move across frames. The tracking algorithm used here is Deep SORT (Simple Online and Realtime Tracking), an extension of SORT that incorporates appearance information along with motion for improved tracking accuracy. A minimal code sketch of this detect-and-track loop appears after this entry.
Advanced Configuration Lab
•43 launches
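For readers who want a concrete picture of the detect-and-track loop the Person Tracking lab builds, the sketch below reads frames with OpenCV and cycles through detection, ID assignment, and tracking. The detect_people function and DeepSortTracker class are hypothetical placeholders for the OpenVINO person-detection model and Deep SORT implementation the lab supplies; this is an outline of the loop, not the lab's actual code.

# Outline of a detect-and-track loop (hypothetical placeholders, not lab code).
import cv2

def detect_people(frame):
    # Placeholder: would wrap an OpenVINO person-detection model and return
    # bounding boxes with confidence scores for each detected person.
    raise NotImplementedError("plug in the OpenVINO detector here")

class DeepSortTracker:
    # Placeholder: stands in for a Deep SORT tracker that matches detections
    # to existing tracks using motion plus appearance features.
    def update(self, detections, frame):
        raise NotImplementedError("plug in a Deep SORT implementation here")

def track_video(path):
    cap = cv2.VideoCapture(path)                    # read frames from a video sequence
    tracker = DeepSortTracker()
    while True:
        ok, frame = cap.read()
        if not ok:
            break                                   # end of video
        detections = detect_people(frame)           # 1) detect people in the frame
        tracks = tracker.update(detections, frame)  # 2) assign/keep a unique ID per person
        for track_id, box in tracks:                # 3) consume the per-person tracks
            print(track_id, box)
    cap.release()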
AI Gateway - LiteLLM Walkthrough Lab
This lab provides hands-on experience with LiteLLM, an open-source AI gateway that centralizes and manages access to Large Language Models (LLMs). Across the five modules, you'll learn how to set up and use LiteLLM to control, monitor, and optimize your AI model interactions. A short example of a LiteLLM call appears after this entry.
Foundations Lab
•51 launches
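As a taste of what the LiteLLM lab covers, the snippet below sends a single chat request through LiteLLM's Python SDK. The model name, prompt, and API key are placeholders, and the lab's modules may instead route traffic through the LiteLLM proxy server with its own configuration.

# Minimal LiteLLM request sketch (placeholder model, prompt, and key; the lab's
# own setup and proxy configuration will differ).
import os
import litellm

os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # placeholder credential

response = litellm.completion(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "In one sentence, what does an AI gateway do?"}],
)
print(response.choices[0].message.content)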
Defensive AI: Fixing Vulnerabilities with AI-Powered Agents
Brews and Bytes coffee shop faces a digital dilemma: a vulnerable web server. Engage in an interactive lab where AI agents, guided by human insight, tackle security flaws. Discover how AI can revolutionize cybersecurity, transforming vulnerabilities into opportunities for innovation and defense. Can AI be your next cybersecurity ally?
Foundations Lab
•16 launches
Retrieval Augmented Generation (RAG) - Programmatic Lab
In this lab, we'll focus on the programmatic steps of Retrieval Augmented Generation (RAG). First, we'll discuss data chunking: how we break our documents into smaller pieces. Then, we'll explore how these chunks become embeddings, numerical representations of their content. Finally, we'll see how a vector database helps us efficiently retrieve this information. A toy end-to-end sketch of these steps appears after this entry.
Foundations Lab
•273 launches
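To make the chunking, embedding, and retrieval steps of the RAG lab concrete, here is a self-contained toy sketch: fixed-size character chunking, a hash-based stand-in for a real embedding model, and brute-force cosine similarity in place of a vector database. It illustrates the mechanics the lab walks through rather than the lab's actual code.

# Toy RAG retrieval sketch: fixed-size chunking, a stand-in embedding, and
# brute-force cosine search instead of a real embedding model + vector database.
import hashlib
import numpy as np

def chunk(text, size=200, overlap=50):
    # Split a document into overlapping fixed-size character chunks.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text, dim=64):
    # Toy embedding: hash word tokens into a fixed-size vector, then normalize.
    vec = np.zeros(dim)
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query, chunks, top_k=2):
    # Rank chunks by cosine similarity to the query (what a vector DB does at scale).
    q = embed(query)
    scored = [(float(np.dot(q, embed(c))), c) for c in chunks]
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]

doc = "Vector databases index embeddings for fast similarity search. " * 20
print(retrieve("How are embeddings searched?", chunk(doc)))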
HPE Private Cloud AI - Guided Walkthrough
This lab gives you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired it with NVIDIA AI Enterprise software, giving customers a scalable on-prem solution that handles a range of AI workloads, from inference and RAG to model fine-tuning and training.
Foundations Lab
•88 launches
Cisco RoCE Fabrics
This lab will demonstrate how the Cisco Nexus Dashboard Fabric Controller can easily set up an AI/ML fabric with a simple point-and-click GUI. You can do this without knowing the underlying protocols or best practices; the controller does the work.
Advanced Configuration Lab
•294 launches