What's popular

AI Essentials

Welcome to the AI Essentials Learning Series, designed to equip you with the knowledge and skills needed to understand the key aspects of Artificial Intelligence (AI). As AI continues to revolutionize industries and transform how we live and work, it is crucial to understand the fundamental components that enable its performance. These learning paths provide a comprehensive overview of AI, High Performance Storage, High Performance Networking, High Performance Compute, and Security, all tailored to the specific needs of AI applications.
Learning Series

Why Kubernetes is the Platform of Choice for Artificial Intelligence

This report explores Kubernetes' role in AI, analyzing its capabilities in managing AI workloads, its benefits over traditional infrastructure and its unique features that make it the platform of choice for AI.
WWT Research
•Aug 20, 2025
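
As a minimal sketch of the kind of AI workload management the report refers to, the Python example below uses the Kubernetes client library to submit a pod requesting a single GPU through the standard nvidia.com/gpu device-plugin resource; the cluster, namespace, container image, and command are illustrative assumptions rather than details from the report.

```python
# Minimal sketch (not from the report): requesting a GPU-backed AI workload
# on Kubernetes via the NVIDIA device-plugin resource. Assumes a reachable
# cluster in your kubeconfig and the NVIDIA device plugin installed; the
# image, names, and command are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-training-job", labels={"app": "demo"}),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # placeholder image
                command=["python", "-c", "import torch; print(torch.cuda.is_available())"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # schedule only onto a GPU node
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Once submitted, the scheduler places the pod only on a node that can satisfy the GPU limit, which is one example of the Kubernetes-native workload handling the report analyzes.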

Building for Success: A CTO's Guide to Generative AI

A strategic roadmap for Chief Technology Officers to align GenAI strategies with business goals, assess infrastructure needs, and identify the talent and skills needed to achieve sustainable GenAI transformation.
WWT Research
•Apr 17, 2025

Facilities Infrastructure Priorities in the Age of AI

AI adoption is quickly gaining momentum in the enterprise space, but there are key priorities that data center operators and IT decision-makers need to consider regarding the hardware and design of their data centers, including power and cooling. Here are your facilities infrastructure priorities for 2024.
WWT Research
•Aug 1, 2024

F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs

This lab explains what F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and the gains achieved by offloading traffic to DPUs instead of tying up CPU resources on the physical Kubernetes host where your applications run.
Advanced Configuration Lab
•9 launches

What's new

Introduction to NVIDIA NIM for LLM

This learning path introduces NVIDIA NIM for LLM microservices, covering its purpose, formats, and benefits. You'll explore deployment options via API Catalog, Docker, and Kubernetes, and complete hands-on labs for Docker and Kubernetes-based inference workflows—building skills to deploy, scale, and integrate GPU-optimized LLMs into enterprise applications.
Learning Path

AI Proving Ground: Cisco UCS C885A

Discover the latest addition to WWT's AI Proving Ground: the Cisco UCS C885A server. Technical Solutions Architect Chris Braun walks us through the unboxing of our newest hardware, part of the Cisco Secure AI Factory with NVIDIA environment. Equipped with 8 NVIDIA H200 GPUs, this powerhouse is designed for LLM training, fine-tuning, inference, and Retrieval Augmented Generation (RAG).
Video
•2:17
•Nov 17, 2025

Deploy NVIDIA NIM for LLM on Kubernetes

NVIDIA NIM streamlines AI deployment by encapsulating large language models in scalable, containerized microservices. It integrates with Kubernetes for optimized GPU performance, dynamic scaling, and robust CI/CD pipelines, simplifying complex model serving so teams can focus on innovation and intelligent feature development in enterprise-grade AI solutions.
Advanced Configuration Lab
•62 launches

Deploy NVIDIA NIM for LLM on Docker

This lab provides the learner with a hands-on, guided experience of deploying the NVIDIA NIM for LLM microservice in Docker.
Foundations Lab
•63 launches
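
Once the NIM container from this lab is running, it is typically consumed through its OpenAI-compatible HTTP API. The sketch below is not part of the lab itself; it sends a chat completion request with Python, and the localhost:8000 endpoint and model name are assumptions you would adjust to match your deployment.

```python
# Minimal sketch of querying a locally running NIM for LLM container.
# Assumes the container publishes its OpenAI-compatible API on
# http://localhost:8000; the model name is a placeholder for whichever
# NIM you deployed in the lab.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama3-8b-instruct",  # replace with your deployed NIM model
        "messages": [{"role": "user", "content": "Summarize what a DPU does."}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```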

NVIDIA Run:ai for Platform Engineers

Welcome to the NVIDIA Run:ai for Platform Engineers Learning Path! This learning path is designed to build both foundational knowledge and practical skills for platform engineers and administrators responsible for managing GPU resources at scale. It begins by introducing learners to the key components of the NVIDIA Run:ai platform, including its Control Plane and Cluster, and explains how NVIDIA Run:ai extends Kubernetes to orchestrate AI workloads efficiently. The learning path then covers essential topics such as authentication and role-based access, organizational management through projects and departments, and workload operations using assets, templates, and policies. Learners will also explore GPU fractioning to understand how NVIDIA Run:ai maximizes GPU utilization and ensures fair resource allocation across teams. All of this builds toward a hands-on lab designed to reinforce these concepts and give you practical experience working directly with NVIDIA Run:ai.
Learning Path

Cisco UCS and NVIDIA RTX PRO 6000 Server Edition: Powering the Next Wave of Enterprise AI

By combining NVIDIA RTX PRO™ 6000 Blackwell Server Edition with Cisco UCS servers, enterprises gain a powerful and scalable foundation for AI and visualization workloads.
WWT Research
•Oct 24, 2025

Pure Storage GenAI Pod with NVIDIA

This lab provides a highly performant environment for testing the deployment and tuning of different NVIDIA NIMs and Blueprints.
Advanced Configuration Lab
•11 launches

F5 BIG-IP Next for Kubernetes on NVIDIA BlueField-3 DPUs

This lab explains what F5 BIG-IP Next for Kubernetes is, how it is deployed with NVIDIA DOCA, and the gains achieved by offloading traffic to DPUs instead of tying up CPU resources on the physical Kubernetes host where your applications run.
Advanced Configuration Lab
•9 launches

NVIDIA DGX SuperPOD and DGX BasePOD Day 3 Operations

This Learning Series was created for NVIDIA DGX admins and operators to explore the tools you will use on Day 3 when administering NVIDIA DGX SuperPOD and BasePOD environments with Base Command Manager (BCM). It covers advanced topics such as cmshell, cloud bursting from BCM, high availability for head nodes, InfiniBand setup and testing of worker nodes, and Active Directory integration, as well as advanced workload topics such as deploying Kubernetes from Base Command Manager.
Learning Path

Why Kubernetes is the Platform of Choice for Artificial Intelligence

This report explores Kubernetes' role in AI, analyzing its capabilities in managing AI workloads, its benefits over traditional infrastructure and its unique features that make it the platform of choice for AI.
WWT Research
•Aug 20, 2025

Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai

Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and Milvus vector DB. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•11 launches
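
Since the stack above uses Milvus as its vector database, a rough sketch of how an application pushes and queries embeddings may help frame the lab; the connection URI, collection name, and tiny random vectors below are illustrative assumptions, not the lab's configuration.

```python
# Rough sketch of storing and searching embeddings in Milvus via pymilvus.
# The URI, collection name, and small random vectors are illustrative
# assumptions; a real deployment would use embeddings from your model.
import random
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")  # assumed Milvus endpoint

client.create_collection(collection_name="demo_docs", dimension=8)

docs = ["pump maintenance manual", "clinical trial protocol", "loan policy FAQ"]
client.insert(
    collection_name="demo_docs",
    data=[
        {"id": i, "vector": [random.random() for _ in range(8)], "text": t}
        for i, t in enumerate(docs)
    ],
)

hits = client.search(
    collection_name="demo_docs",
    data=[[random.random() for _ in range(8)]],  # stand-in for a query embedding
    limit=2,
    output_fields=["text"],
)
for hit in hits[0]:
    print(hit["distance"], hit["entity"]["text"])
```

In a real deployment the vectors would come from an embedding model rather than random numbers.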

Inside Cisco Live 2025: Go Beyond with Softchoice and World Wide Technology

Watch an exclusive webinar where we Go Beyond the headlines of Cisco Live 2025, unpacking the biggest announcements across Cisco's key focus areas: Security, Networking, and AI.
Video
•53:05
•Jul 7, 2025

AI Proving Ground: Unboxing the NVIDIA DGX B200

Take a tour of the NVIDIA DGX B200. Technical Solutions Architect Chris Braun explains the new features of the NVIDIA Blackwell chipset, showcases how we are leveraging the DGX B200 to build learning paths, articles, and proofs of concept, and discusses use cases for educating our clients and internal staff.
Video
•1:57
•Jul 2, 2025

AI's Invisible Bottleneck: Why AI Stalls at the Network, not the GPU

For many, AI success isn't limited by how many GPUs you can buy; it's limited by how fast those GPUs can talk to each other without tripping over the plumbing. In this episode of the AI Proving Ground Podcast, two of WWT's top networking minds, Justin van Schaik and Eric Fairfield, lay out the real choke points slowing AI projects to a crawl and how powerful, modernized network architectures are quietly rewriting the rulebook for scaling AI.
Video
•0:50
•Jul 1, 2025

Person Tracking with Intel's AI Reference Kit

This lab focuses on implementing live person tracking using Intel's OpenVINO™, a toolkit for high-performance deep learning inference. The objective is to read frames from a video sequence, detect people within the frames, assign unique identifiers to each person, and track them as they move across frames. The tracking algorithm utilized here is Deep SORT (Simple Online and Realtime Tracking), an extension of SORT that incorporates appearance information along with motion for improved tracking accuracy.
Advanced Configuration Lab
•43 launches
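
The lab's core loop (read frames, detect people, then hand detections to Deep SORT) can be sketched as below. This is not the lab's code: the model path, SSD-style output layout, and confidence threshold are assumptions, and the Deep SORT association step is only indicated with a comment.

```python
# Skeleton of the detect-then-track loop described above (not the lab's code).
# Assumptions: an OpenVINO IR person-detection model with NCHW input and an
# SSD-style [image_id, label, conf, xmin, ymin, xmax, ymax] output.
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("person-detection.xml")   # placeholder model path
compiled = core.compile_model(model, "CPU")
_, _, in_h, in_w = compiled.input(0).shape

cap = cv2.VideoCapture("people.mp4")              # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.resize(frame, (in_w, in_h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[compiled.output(0)].reshape(-1, 7)
    fh, fw = frame.shape[:2]
    for _, _, conf, x0, y0, x1, y1 in detections:
        if conf < 0.5:
            continue
        box = (np.array([x0, y0, x1, y1]) * [fw, fh, fw, fh]).astype(int)
        # A Deep SORT tracker would take this box plus an appearance crop here
        # and return a persistent track ID for the person.
        cv2.rectangle(frame, tuple(box[:2]), tuple(box[2:]), (0, 255, 0), 2)
cap.release()
```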

Hidden Infrastructure Demands of Enterprise AI

As AI pushes the limits of traditional IT infrastructure, enterprises are racing to modernize their data centers. In this episode, Mike Parham and Bruce Gray walk us through the behind-the-scenes decisions that matter — from power and cooling challenges to GPU readiness and sustainability. Whether you're modernizing or starting from scratch, this conversation is your blueprint for AI-ready infrastructure.
Video
•0:51
•May 20, 2025

Assessing AI Workloads: How to Choose the Right Environment for Enterprise AI

Where should your AI workloads run? It's one of the most overlooked questions in AI strategy. From surprising constraints around power, cooling and floor space, to the growing demand for GPU-as-a-Service models, this episode delivers a field-level view of the challenges enterprises face when moving from AI proof of concept to production. You'll hear why infrastructure readiness assessments are essential, how AI workloads differ from traditional IT, and what to consider before buying that next GPU cluster.
Video
•1:25
•May 6, 2025

Building for Success: A CTO's Guide to Generative AI

A strategic roadmap for Chief Technology Officers to align GenAI strategies with business goals, assess infrastructure needs, and identify the talent and skills needed to achieve sustainable GenAI transformation.
WWT Research
•Apr 17, 2025

Liqid Upgrade from 3.4 to 3.5

In this video, we guide you through upgrading your Liqid infrastructure from 3.4 to 3.5. The upgrade requires some downtime for your Liqid stack, as the main step migrates your Liqid director server from CentOS to Ubuntu. During the upgrade, all hosts and chassis must be powered off for it to complete successfully.
Video
•9:08
•Apr 15, 2025

AI at Scale: How Cisco, NVIDIA and WWT Are Powering the Future of Enterprise AI

As AI moves from predictive analytics to real-time autonomous decision-making, enterprises face a critical challenge: scaling AI securely and efficiently. This episode of the AI Proving Ground Podcast explores key takeaways from NVIDIA GTC, the evolving Cisco-NVIDIA partnership and the foundational role AI infrastructure plays in unlocking enterprise-scale AI. Featuring WWT VP of Networking and AI Solutions Neil Anderson, NVIDIA SVP of Networking Kevin Deierling and Cisco SVP of Networking Kevin Wollenweber, the episode covers insights into networking, security, and compute innovations, and how Cisco, NVIDIA and WWT are shaping the future of AI-powered businesses.
Video
•1:30
•Mar 25, 2025

High-Performance Architectures

At a glance

49 Total
27 Videos
10 Learning Paths
9 Labs
3 WWT Research
What's related
  • AI Proving Ground
  • AI & Data
  • AI Infrastructure Engineers
  • Applied AI
  • NVIDIA
  • ATC
  • High-Performance Architecture (HPA)
  • AI Practitioners
  • WWT Presents
  • Cisco
  • Cisco AI Solutions
  • Dell Tech
  • Data Center
  • Data Center Networking
  • Intel
  • NVIDIA DGX Platform
  • Networking
  • AI Proving Ground Podcast
  • AI Security
  • Cisco UCS
