31 results found
Cloud, FinOps and AI: What You Need to Know About Unit Economics, GPUs and the ROI Flywheel
AI has pushed cloud into overdrive. In this episode of the AI Proving Ground Podcast, two of our top cloud experts, Jack French and Todd Barron, reset the approach and explain why cloud is the launchpad but portability is the strategy; how to start greenfield with containers and abstraction; what a real FinOps model for AI looks like (unit economics, tagging, token/GPU visibility); where neoclouds fit versus hyperscalers; how to handle cross-cloud risk and skills gaps; and which governance moves accelerate, rather than restrict, innovation.
Video
•1:38
•Oct 14, 2025
Private AI vs. Cloud: How Enterprise Leaders Can Make Smarter Build-or-Buy Decisions
Is your organization ready to own AI, or is it better served by leveraging the speed and scale of the cloud? In this episode of the AI Proving Ground Podcast, WWT High-Performance Architecture Director Jeff Fonke and VP of Advanced Technology Solutions Jeff Wynn break down the toughest question facing IT leaders today: should you build or buy your AI capabilities? From the economics of inference costs to hybrid cloud realities, the two Jeffs share practical strategies on private AI, workload orchestration, data readiness and overcoming the enterprise skills gap.
Video
•40:55
•Sep 2, 2025
How AI Agents Are Transforming IT Ops
AI agents are moving from hype to the heart of enterprise IT. In this episode of the AI Proving Ground Podcast, Eric Jones and Ruben Ambrose — two leading AI experts — explore how intelligent, human-guided systems are transforming IT service management, incident response and operational scale to deliver faster resolutions, stronger security and smarter decisions across the enterprise.
Video
•1:38
•Oct 28, 2025
Can AI Be Trusted to Run Critical Networks?
Artificial intelligence now sits at the heart of the world's most critical infrastructure: the massive networks that keep our digital lives running. In this episode of the AI Proving Ground Podcast, two of our top trusted advisors to the world's largest network operators, Dave Clough and Yohannes Tafesse, break down the high-stakes reality of applying AI at scale, the often-overlooked work of preparing data and building trust, and why the lessons emerging from telecom will shape how every enterprise approaches AI in mission-critical environments.
Video
•0:54
•Oct 21, 2025
The NetApp and NVIDIA Infrastructure Stack
Welcome to part 1 of the video series about the RAG Lab Infrastructure built in collaboration with NetApp, NVIDIA and World Wide Technology. This video series takes you behind the scenes of this state-of-the-art lab environment inside WWT's AI Proving Ground, powered by the Advanced Technology Center.
Video
•4:45
•Aug 23, 2024
VAST Data and WWT's AI Partnership: Powering the future of AI
The AI journey is just getting started. Explore how WWT and VAST Data partner to transform enterprises into AI-powered innovators.
Video
•6:09
•Oct 3, 2024
Introduction to Run:ai on Dell's Validated Design for AI
Take a behind-the-scenes look at a state-of-the-art AI lab, showcasing how Run:ai optimizes AI workloads, Red Hat OpenShift manages and scales AI applications, and Dell's validated design provides the powerful hardware foundation for AI innovation.
Video
•1:24
•May 23, 2024
AI at Scale: How Cisco, NVIDIA and WWT Are Powering the Future of Enterprise AI
As AI moves from predictive analytics to real-time autonomous decision-making, enterprises face a critical challenge: scaling AI securely and efficiently. This episode of the AI Proving Ground Podcast explores key takeaways from NVIDIA GTC, the evolving Cisco-NVIDIA partnership and the foundational role AI infrastructure plays in unlocking enterprise-scale AI. Featuring WWT VP of Networking and AI Solutions Neil Anderson, NVIDIA SVP of Networking Kevin Deierling and Cisco SVP of Networking Kevin Wollenweber, the episode covers insights into networking, security, and compute innovations, and how Cisco, NVIDIA and WWT are shaping the future of AI-powered businesses.
Video
•1:30
•Mar 25, 2025
Agents, Copilots and Beyond: Everyday AI's Jordan Wilson on the Future of AI in the Enterprise
In this episode of the AI Proving Ground Podcast, we talk with Jordan Wilson, host of the popular Everyday AI Podcast, to unpack the realities of enterprise AI adoption. From tool sprawl and failed pilots to executive sponsorship and agentic models, Jordan shares lessons he's taken away from thousands of conversations with enterprise leaders — and explains why soft skills and unlearning old habits may be the ultimate keys to success.
Video
•2:04
•Sep 23, 2025
What Cisco Live 2025 Revealed About the Future of Enterprise AI
At Cisco Live 2025, the networking giant rolled out a sweeping agenda to make AI not just powerful, but practical — and secure. In this episode, we caught up with leaders from Cisco, NVIDIA and WWT to talk about what this year's announcements actually mean for enterprise teams tasked with building scalable, secure, AI-ready infrastructure. From the rise of the Cisco Secure AI Factory with NVIDIA to the reality of agentic workflows and persistent inference traffic, this episode unpacks the architectural shifts reshaping the modern data center.
Video
•1:24
•Jun 17, 2025
HPE Private Cloud AI - Introduction and Demo
AI has changed the world, and every business is trying to figure out how to leverage it. HPE Private Cloud solutions bring together optimized infrastructure and software in a unique cloud experience delivered through HPE GreenLake.
Video
•5:42
•Jul 29, 2024
Overview of NVIDIA NIM Microservices
Welcome to part 2 of our video series about the RAG lab infrastructure built in collaboration with NetApp, NVIDIA and WWT. NVIDIA NIM is a suite of user-friendly microservices that facilitates the deployment of generative AI models, such as large language models (LLMs), embedding models and re-ranking models, across various platforms. NIM microservices simplify the process for IT and DevOps teams to manage LLMs in their environments, providing standard APIs developers can use to create AI-driven applications like copilots, chatbots and assistants. They leverage NVIDIA's GPU technology for fast, scalable deployment, ensuring efficient inference and high performance.
Video
•3:29
•Aug 28, 2024
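Because NIM microservices expose OpenAI-compatible standard APIs, calling a deployed model is a plain HTTP request. Below is a minimal sketch, assuming a NIM container is running locally on port 8000; the base URL and model name (`meta/llama3-8b-instruct`) are placeholders for whatever your deployment actually serves, not values from the video.

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build the URL and JSON payload for an OpenAI-compatible
    chat-completions call, the standard API shape NIM exposes."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return url, payload


def chat(base_url: str, model: str, prompt: str) -> str:
    """POST the request to the microservice and return the reply text."""
    url, payload = build_chat_request(base_url, model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example (requires a running NIM container; endpoint and model are assumptions):
# print(chat("http://localhost:8000", "meta/llama3-8b-instruct", "What is RAG?"))
```

The same request shape works for any tool that speaks the OpenAI chat-completions format, which is what makes swapping models behind a NIM endpoint straightforward for application teams.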