Skip to content
WWT LogoWWT Logo Text
The ATC
Search...
Ctrl K
Top page results
See all search results
Featured Solutions
What's trending
Help Center
Log In
What we do
Our capabilities
AI & DataAutomationCloudConsulting & EngineeringData CenterDigitalSustainabilityImplementation ServicesLab HostingMobilityNetworkingSecurityStrategic ResourcingSupply Chain & Integration
Industries
EnergyFinancial ServicesGlobal Service ProviderHealthcareLife SciencesManufacturingPublic SectorRetailUtilities
Featured today
Learn from us
Hands on
AI Proving GroundCyber RangeLabs & Learning
Insights
ArticlesBlogCase StudiesPodcastsResearchWWT Presents
Come together
CommunitiesEvents
Featured learning path
Who we are
Our organization
About UsOur LeadershipLocationsSustainabilityNewsroom
Join the team
All CareersCareers in AmericaAsia Pacific CareersEMEA CareersInternship Program
WWT in the news
Our partners
Strategic partners
CiscoDell TechnologiesHewlett Packard EnterpriseNetAppF5IntelNVIDIAMicrosoftPalo Alto NetworksAWS
Partner spotlight
What we do
Our capabilities
AI & DataAutomationCloudConsulting & EngineeringData CenterDigitalSustainabilityImplementation ServicesLab HostingMobilityNetworkingSecurityStrategic ResourcingSupply Chain & Integration
Industries
EnergyFinancial ServicesGlobal Service ProviderHealthcareLife SciencesManufacturingPublic SectorRetailUtilities
Learn from us
Hands on
AI Proving GroundCyber RangeLabs & Learning
Insights
ArticlesBlogCase StudiesPodcastsResearchWWT Presents
Come together
CommunitiesEvents
Who we are
Our organization
About UsOur LeadershipLocationsSustainabilityNewsroom
Join the team
All CareersCareers in AmericaAsia Pacific CareersEMEA CareersInternship Program
Our partners
Strategic partners
CiscoDell TechnologiesHewlett Packard EnterpriseNetAppF5IntelNVIDIAMicrosoftPalo Alto NetworksAWS
The ATC
Overview
Explore
Events

Select a tab

12 results found

AI Prompt Injection Lab

Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
•367 launches

Retrieval Augmented Generation (RAG) Walk Through Lab

This lab will go into the basics of Retrieval Augmented Generation (RAG) through hands on access to a dedicated environment.
Foundations Lab
•845 launches

F5 AI Gateway

This lab will provide access to an Openshift cluster running the F5 AI Gateway solution. We will walk through how the F5 AI Gateway routes requests to different models by either allowing them to pass through or, more importantly, securing them via prompt injection checking. We have also added a couple of other tests that will allow for language detection of that input that the F5 AI Gateway can also detect.
Advanced Configuration Lab
•50 launches

Cisco AI Defense Demo

Demonstration app for Cisco AI Defense LLM Guardrails for Cisco Live 2025
Advanced Configuration Lab
•33 launches

AI Gateway - LiteLLM Walkthrough Lab

This lab provides hands-on experience with LiteLLM, an open-source AI gateway that centralizes and manages access to Large Language Models (LLMs). Throughout the five modules, you'll learn how to set up and use LiteLLM to control, monitor, and optimize your AI model interactions.
Foundations Lab
•26 launches

AIPG: The AI Security Enclave

The AI Security Enclave in the AI Proving Ground (AIPG) adds an environment dedicated to supporting AI security efforts and demonstrating WWT expertise and capabilities for testing innovative hardware and software security solutions.
Advanced Configuration Lab

F5 AI Gateway (GPU Accelerated)

This lab will provide access to an Openshift cluster running the F5 AI Gateway solution. We will walk through how the F5 AI Gateway routes requests to different models by either allowing them to pass through or, more importantly, securing them via prompt injection checking. We have also added a couple of other tests that will allow for language detection of that input that the F5 AI Gateway can also detect.
Advanced Configuration Lab
•7 launches

Protect AI Guardian Sandbox

Protect AI Guardian is an ML model scanner and policy enforcer that ensures ML models meet an organization's security standards. It scans model code for malicious operators and vulnerabilities, while also checking against predefined policies. Guardian covers both first-party (developed within the organization) and third-party models (from external repositories). This comprehensive approach helps organizations manage ML model risks effectively. In this Lab, you will walk through the Protect AI Interface, explore the different feature sets there, and submit example models for scanning.
Sandbox Lab
•183 launches

Deploying and Securing Multi-Cloud and Edge Generative AI Workloads with F5 Distributed Cloud

In the current AI market, the demand for scalable and secure deployments is increasing. Public cloud providers (AWS, Google, and Microsoft) are competing to provide GenAI infrastructure, driving the need for multi-cloud and hybrid cloud deployments. However, distributed deployments come with challenges, including: Complexity in managing multi-cloud environments. Lack of unified visibility across clouds. Inconsistent security and policy enforcement. F5 Distributed Cloud provides a solution by offering a seamless, secure, and portable environment for GenAI workloads across clouds. This lab will guide you through setting up and securing GenAI applications with F5 Distributed Cloud on AWS EKS and GCP GKE.
Advanced Configuration Lab
•16 launches

Training Data Poisoning Lab

Training data poisoning poses significant risks to Large Language Models (LLMs) and Retrieval Augmented Generation (RAG) systems. This lab explores these dangers through a case study of an online forum, demonstrating how corrupted data can compromise AI effectiveness and security, and examines methods to mitigate such threats.
Foundations Lab
•150 launches

Defensive AI: Fixing Vulnerabilities with AI-Powered Agents

Brews and Bytes coffee shop faces a digital dilemma: a vulnerable web server. Engage in an interactive lab where AI agents, guided by human insight, tackle security flaws. Discover how AI can revolutionize cybersecurity, transforming vulnerabilities into opportunities for innovation and defense. Can AI be your next cybersecurity ally?
Foundations Lab
•5 launches

Deep Instinct Data Security X (DSX) for NAS

Deep Instinct provides several solutions powered by deep learning to quickly identify potential attacks. This lab will demonstrate the capabilities of their DSX for NAS - NetApp solution, which can scan files in milliseconds anytime they enter the network or are edited. Files are scanned within the network environment, ensuring full data privacy, confidentiality, and compliance. Files that are found to be malicious can be either deleted or quarantined. Deep Instinct works with both network attached storages and cloud storages.
Foundations Lab
•43 launches

AI Security

AI security helps harness AI's potential while mitigating emerging threats, ensuring safe, compliant and ethical use across the organization.

122 Followers

At a glance

171Total
49Blogs
43Articles
35Videos
15Events
12Labs
10WWT Research
3Learning Paths
2Briefings
1Assessment
1Playlist
What's related
  • Security
  • AI & Data
  • Applied AI
  • Cybersecurity Risk & Strategy
  • WWT Presents
  • Blog
  • AI Proving Ground
  • ATC
  • Security Operations
  • GenAI
  • Application & API Security
  • Palo Alto Networks
  • Cisco
  • Cyber Resilience
  • Network Security
  • Cisco AI Solutions
  • Data Security
  • Cloud
  • NVIDIA
  • Cisco Security

What's related

  • About
  • Careers
  • Locations
  • Help Center
  • Sustainability
  • Blog
  • News
  • Press Kit
  • Contact Us
© 2025 World Wide Technology. All Rights Reserved
  • Privacy Policy
  • Acceptable Use Policy
  • Information Security
  • Supplier Management
  • Quality
  • Accessibility
  • Cookies