The ATC
Explore

9 results found

F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs

This lab explains what F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and the gains from offloading traffic processing to DPUs rather than tying up CPU resources on the physical Kubernetes host that your applications need.
Advanced Configuration Lab
•9 launches

HPE Private Cloud AI - Guided Walkthrough

This lab gives you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired the solution with NVIDIA AI Enterprise software, giving customers a scalable on-prem platform that can handle multiple AI workloads, from inference and RAG to model fine-tuning and training.
Foundations Lab
•88 launches

Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai

Iterate.ai's Generate platform combines Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and the Milvus vector database. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection models to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•11 launches

Cisco RoCE Fabrics

This lab demonstrates how the Cisco Nexus Dashboard Fabric Controller can set up an AI/ML fabric through a simple point-and-click GUI. You do not need to know the underlying protocols or best practices; the controller does the work.
Advanced Configuration Lab
•294 launches

Pure Storage GenAI Pod with NVIDIA

This lab provides a highly performant environment for testing the deployment and tuning of different NVIDIA NIMs and Blueprints.
Advanced Configuration Lab
•11 launches

Liqid Composable Disaggregated Infrastructure Lab

The Liqid Composable Disaggregated Infrastructure (CDI) Lab showcases how to create hardware configurations that would be impractical with fixed physical servers: bare-metal servers composed in any configuration via software. The lab consists of Dell PowerEdge compute, a Liqid PCIe Gen4 fabric, Liqid Matrix software, Intel NVMe storage, and NVIDIA GPUs.
Foundations Lab
•44 launches

Person Tracking with Intel's AI Reference Kit

This lab focuses on implementing live person tracking using Intel's OpenVINO™, a toolkit for high-performance deep learning inference. The objective is to read frames from a video sequence, detect people within the frames, assign unique identifiers to each person, and track them as they move across frames. The tracking algorithm utilized here is Deep SORT (Simple Online and Realtime Tracking), an extension of SORT that incorporates appearance information along with motion for improved tracking accuracy.
Advanced Configuration Lab
•43 launches
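The core of the tracking step described above is associating each new detection with an existing track ID. Below is a minimal pure-Python sketch of SORT-style greedy IoU matching; it is an illustration only, not the lab's code, and full Deep SORT additionally uses Kalman-filter motion prediction, appearance embeddings, and Hungarian assignment, all omitted here.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def assign_ids(tracks, detections, next_id, iou_threshold=0.3):
    """Greedy IoU matching. tracks maps track ID -> box; detections is a
    list of boxes from the current frame. Returns (updated tracks, next_id)."""
    updated, used = {}, set()
    # Match each existing track to its best-overlapping unclaimed detection.
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_threshold
        for i, dbox in enumerate(detections):
            if i in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            used.add(best)
            updated[tid] = detections[best]
    # Unmatched detections start new tracks with fresh IDs.
    for i, dbox in enumerate(detections):
        if i not in used:
            updated[next_id] = dbox
            next_id += 1
    return updated, next_id
```

Greedy matching keeps the sketch short; Deep SORT instead solves the assignment globally and fuses the IoU cost with appearance similarity so identities survive occlusions.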

Deploy NVIDIA NIM for LLM on Kubernetes

NVIDIA NIM packages large language models as scalable, containerized microservices. This lab covers integrating NIM with Kubernetes for optimized GPU utilization, dynamic scaling, and robust CI/CD pipelines, simplifying model serving so teams can focus on building intelligent features for enterprise-grade AI solutions.
Advanced Configuration Lab
•62 launches
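Once a NIM for LLM service is running in the cluster, clients talk to it through its OpenAI-compatible HTTP API. A minimal stdlib-only sketch follows; the service URL and model name are placeholder assumptions (your in-cluster Service name and deployed model will differ), not values from the lab.

```python
import json
import urllib.request

def chat_request(base_url, model, prompt, max_tokens=64):
    """Build an OpenAI-compatible chat-completions request for a NIM endpoint.
    base_url and model are assumptions; substitute your own deployment's values."""
    url = f"{base_url}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, payload

def send(url, payload):
    # POST the JSON payload and return the decoded response body.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

For example, from inside the cluster you might call `send(*chat_request("http://nim-llm:8000", "meta/llama-3.1-8b-instruct", "Hello"))`, where both the hostname and model ID are hypothetical.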

Deploy NVIDIA NIM for LLM on Docker

This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker.
Foundations Lab
•63 launches

High-Performance Architectures

At a glance

54 Total
27 Videos
10 Learning Paths
9 Labs
5 Events
3 WWT Research
What's related
  • AI Proving Ground
  • AI & Data
  • AI Infrastructure Engineers
  • Applied AI
  • NVIDIA
  • ATC
  • High-Performance Architecture (HPA)
  • WWT Presents
  • AI Practitioners
  • Cisco
  • Cisco AI Solutions
  • Dell Tech
  • Intel
  • Data Center
  • Data Center Networking
  • NVIDIA DGX Platform
  • Networking
  • AI Proving Ground Podcast
  • AI Security
  • Cisco UCS

© 2026 World Wide Technology. All Rights Reserved