The ATC
7 results found

HPE Private Cloud AI - Guided Walkthrough

This lab gives you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-premises AI workloads. HPE has paired the platform with NVIDIA AI Enterprise software, giving customers a scalable on-premises solution that handles AI workloads ranging from inference and RAG to model fine-tuning and training.
Foundations Lab
•Fundamentals
•200 launches

F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs

This lab explains what F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and how offloading traffic to DPUs frees up CPU resources on the physical Kubernetes host for your applications.
Advanced Configuration Lab
•Advanced
•17 launches
ATC+

Cisco Nexus Dashboard Fabric Controller (NDFC) Lab

This lab presents a simplified two-switch high-availability design for a Cisco AI deployment, highlighting both North–South and East–West traffic flows. It uses NDFC to demonstrate operational visibility and telemetry monitoring. The provided NDFC account is read-only for demonstration purposes. The topology separates administrative connectivity and GPU communication, offering a focused view of core AI networking concepts.
Advanced Configuration Lab
•Fundamentals
•16 launches

Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai

Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and Milvus vector DB. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•Advanced
•12 launches

HPE Private Cloud AI - NIM Deployment

In this lab, a customer will deploy an NVIDIA NIM to the HPE Private Cloud AI instance in the ATC. The lab walks the customer through deploying a black-forest-labs/FLUX.1-dev NIM (an image-generation NIM) and shows how to interact with the Kubernetes layer (Ezmeral) that is embedded in the HPE solution.
Advanced Configuration Lab
•Fundamentals
•16 launches

Deploy NVIDIA NIM for LLM on Docker

This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker.
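For orientation before launching the lab, the general shape of a NIM-for-LLM deployment on Docker looks roughly like the sketch below. This is a hypothetical illustration, not the lab's actual steps: the image name and tag are placeholders for whichever model you pull from NGC, and `NGC_API_KEY` stands in for your own NGC credential.

```shell
# Sketch only: authenticate to NVIDIA's registry, then run a NIM container.
export NGC_API_KEY="<your-ngc-api-key>"
docker login nvcr.io --username '$oauthtoken' --password "$NGC_API_KEY"

# Launch with GPU access, a cache volume so downloaded model weights persist
# across restarts, and the OpenAI-compatible API exposed on port 8000.
# The image below is a placeholder; substitute the NIM you actually pulled.
docker run -d --name nim-llm \
  --gpus all \
  -e NGC_API_KEY \
  -v "$HOME/.cache/nim:/opt/nim/.cache" \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama-3.1-8b-instruct:latest
```

Once the container reports healthy, the microservice serves an OpenAI-compatible endpoint (e.g. `http://localhost:8000/v1/chat/completions`), which is the interaction pattern the lab exercises hands-on.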
Foundations Lab
•Fundamentals
•66 launches
ATC+

Deploy NVIDIA NIM for LLM on Kubernetes

NVIDIA NIM packages large language models as scalable, containerized microservices. Integrating NIM with Kubernetes enables optimized GPU utilization, dynamic scaling, and robust CI/CD pipelines, simplifying complex model serving so teams can focus on building intelligent features for enterprise-grade AI solutions.
Advanced Configuration Lab
•Intermediate
•79 launches

High-Performance Architectures

At a glance

49 Total
24 Videos
10 Learning Paths
7 Labs
4 Events
4 WWT Research
What's related
  • AI Proving Ground
  • AI & Data
  • AI Infrastructure Engineers
  • ATC
  • Applied AI
  • NVIDIA
  • High-Performance Architecture (HPA)
  • WWT Presents
  • AI Practitioners
  • Cisco
  • Cisco AI Solutions
  • HPE
  • HPE AI and Data
  • Data Center
  • HPE Private Cloud AI
  • NVIDIA DGX Platform
  • AI Proving Ground Podcast
  • AI Security
  • Cisco UCS
  • Data Center Networking

© 2026 World Wide Technology. All Rights Reserved