WWT Research • Research Note • January 27, 2026 • 6 minute read

Executive Insight: Lessons in Power, Placement and the Path to AI at Scale

At a recent WWT AI Day, our keynote detailed how organizations can graduate from AI experimentation to AI as a scalable framework for durable advantage. Here's a distillation of the insights we shared.

In this report

  1. AI advantage requires decisive action
    1. Market signals leaders should act on
  2. The strategic choice: AI workload placement
  3. Architectural requirements for the AI factory
  4. Getting from idea to outcome, faster
  5. The path to durable advantage

Enterprises are rapidly moving beyond isolated AI pilots toward durable platforms and operating models that can support scale, governance and measurable business impact. Capital is shifting decisively toward generative AI (GenAI) and accelerator-centric infrastructure, while legacy on-premises environments, long the default destination for IT spend, are seeing relative decline.

This transition reframes AI decisions from a technical exercise into a strategic one. "Build versus buy" is no longer about architecture alone; it is a financial, operational and risk decision shaped by business outcomes, power and facilities readiness, data gravity and organizational maturity.

The organizations that will lead are not those that experiment the most, but those that operationalize the fastest. They compress time-to-value while avoiding stranded investments and architectural dead ends.

Listen in: The AI Proving Ground Podcast — Enterprise AI: What Actually Works in 2026

AI advantage requires decisive action

AI investment is consolidating around platforms rather than point solutions. Leaders are forced to make earlier decisions about governance, economics and operating models — often before certainty exists. At the same time, AI is becoming embedded in core workflows, meaning delays caused by unclear economics, facilities constraints or stalled pilots now translate directly into competitive risk.

In this environment, discipline matters more than novelty. Clear outcomes, pragmatic workload placement and early operationalization are becoming decisive advantages.

Market signals leaders should act on

With all the noise in today's evolving AI market, here are four signals organizations should be incorporating into their AI strategy:

  1. AI funding is durable, but trade-offs are sharper: Enterprise prioritization of AI and GenAI remains strong, even as funding for traditional infrastructure tightens. This is forcing difficult trade-offs across IT, facilities and adjacent business functions. AI strategy is increasingly inseparable from capital allocation strategy.
  2. Power and cooling are the real limiting factors: GPUs are no longer the only constraint on AI factory scaling; power availability, cooling technology and physical facility readiness now matter just as much. Liquid cooling and higher rack densities are becoming requirements, yet most enterprise data centers are unprepared without upgrades. These constraints will dictate where, and how quickly, organizations can scale through 2026 and beyond.
  3. Workloads are shifting to new venues: AI workloads are expanding into emerging GPU cloud providers (e.g., neoclouds, GPU-as-a-Service), specialized AI clouds and edge environments. As data and models move, leaders must rethink security, governance, portability and cost management.
  4. Economics are gradually clarifying: First-wave TCO, ROI and tokenomics models are beginning to show meaningful cost differences across workload placement options. While still evolving, these models are already useful when grounded in real workload traces and data movement assumptions (a simple sketch follows this list).
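
To make the economics point concrete, here is a minimal sketch of how a first-wave tokenomics comparison might be structured. The function name, venue labels and every throughput, rate and utilization figure are illustrative assumptions rather than WWT benchmarks; the model only becomes meaningful when fed real workload traces and negotiated pricing.

```python
# Illustrative sketch only -- all figures below are hypothetical assumptions,
# not WWT benchmarks. Substitute real workload traces and negotiated pricing.

def monthly_inference_cost(tokens_per_month: float,
                           tokens_per_gpu_hour: float,
                           gpu_hour_rate: float,
                           utilization: float) -> float:
    """GPU-hours needed for the token volume, inflated by idle capacity, times the hourly rate."""
    gpu_hours_needed = tokens_per_month / tokens_per_gpu_hour
    return (gpu_hours_needed / utilization) * gpu_hour_rate

workload_tokens = 5e9   # assumed monthly token volume for one production use case
throughput = 1.5e6      # assumed tokens generated per GPU-hour for the target model

scenarios = {
    # venue: (effective $/GPU-hour, achievable utilization) -- assumed values
    "on-premises (amortized)": (1.80, 0.85),
    "GPU cloud / neocloud":    (2.50, 0.60),
    "public cloud on-demand":  (4.00, 0.45),
}

for venue, (rate, util) in scenarios.items():
    cost = monthly_inference_cost(workload_tokens, throughput, rate, util)
    print(f"{venue:25s} ~${cost:,.0f}/month")
```

Even this toy version shows why utilization and data movement assumptions dominate the answer: a cheaper hourly rate can easily lose to a venue that keeps accelerators busier.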

The strategic choice: AI workload placement

There is no universal "right" answer when it comes to AI workload placement, only context-driven trade-offs, such as the following (a simple decision sketch appears after the list):

  • On-premises environments can deliver maximum control and data locality, and may be more cost-effective for stable, high-utilization workloads — that is, if the data center is AI-ready with high-density power and cooling.
  • GPU clouds can accelerate access to scarce accelerators and simplify early operations. This route is often cost-advantaged for bursty or early-stage demand, but requires careful attention to portability and data movement.
  • Public cloud provides elasticity, global reach and rich managed services, but it requires vigilance as utilization stabilizes and data gravity increases.
  • Private colocation offers a pragmatic middle path, enabling modern densities and cooling designs when enterprise facilities lag, without requiring immediate greenfield builds.
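
As a rough illustration of how these trade-offs can be encoded, the sketch below maps a few workload attributes to a suggested venue. The Workload fields, thresholds and rules are hypothetical simplifications of the bullets above, not a WWT decision tool; real placement decisions also weigh sovereignty, security posture, contract terms and migration cost.

```python
# Illustrative only: a coarse placement heuristic encoding the trade-offs above.
# The fields and thresholds are hypothetical simplifications, not a WWT tool.

from dataclasses import dataclass

@dataclass
class Workload:
    utilization: float        # expected sustained GPU utilization (0-1)
    bursty: bool              # demand is spiky or still experimental
    data_on_prem: bool        # dominant data gravity sits in enterprise systems
    facility_ai_ready: bool   # high-density power and liquid cooling available

def suggest_placement(w: Workload) -> str:
    if w.utilization >= 0.7 and w.data_on_prem and w.facility_ai_ready:
        return "on-premises"            # control + locality + favorable economics
    if w.utilization >= 0.7 and w.data_on_prem:
        return "private colocation"     # modern densities without a greenfield build
    if w.bursty:
        return "GPU cloud / neocloud"   # fast accelerator access; watch portability
    return "public cloud"               # elasticity and managed services as a default

print(suggest_placement(Workload(utilization=0.8, bursty=False,
                                 data_on_prem=True, facility_ai_ready=False)))
# -> private colocation
```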

Across industries, signals point to AI workloads being split roughly equally among public cloud, hybrid or hosted private environments, and enterprise-controlled infrastructure — adjusted for sector-specific power, sovereignty and risk considerations.

Architectural requirements for the AI factory

Successful AI factories start with the workload and the outcome. Explicit performance targets, latency and throughput requirements, and governance constraints should drive architecture rather than legacy standards.

Compute architectures are shifting from CPU-centric to GPU-centric designs, with accelerator-aware scheduling as a core capability. Networking must support massive east–west traffic, a fundamental departure from traditional north–south models. Storage strategies increasingly pair a high-performance tier with a large-scale capacity tier, as data preparation pipelines routinely reach tens of petabytes.

Leaders should aggressively standardize where it matters (e.g., observability, security, lineage) while preserving flexibility elsewhere to avoid brittle lock-in in a fast-moving ecosystem.

Finally, power and cooling must be treated as first-order design inputs, not downstream concerns. Most data centers require upgrades (or alternative hosting) to support next-generation AI densities. 
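
A quick, back-of-the-envelope calculation shows why facilities become the constraint. The per-server draw, rack packing and legacy rack budget below are assumed, round figures for a modern 8-GPU accelerated server, not measurements from any specific site.

```python
# Illustrative arithmetic only -- all figures are assumed, round numbers.

server_power_kw = 10.2    # assumed peak draw of one 8-GPU accelerated server (kW)
servers_per_rack = 4      # assumed packing for a liquid-cooled AI rack
overhead_factor = 1.15    # assumed allowance for switches, fans and power losses

ai_rack_kw = server_power_kw * servers_per_rack * overhead_factor
legacy_rack_kw = 8        # assumed power budget of a typical legacy enterprise rack

print(f"AI rack draw:       ~{ai_rack_kw:.0f} kW")
print(f"Legacy rack budget: ~{legacy_rack_kw} kW "
      f"(the AI rack needs ~{ai_rack_kw / legacy_rack_kw:.0f}x more)")
```

Under these assumptions a single AI rack draws roughly 47 kW, several times what a typical legacy enterprise rack is provisioned to deliver, which is why upgrades or alternative hosting so often precede scale.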

In the near term, GPU cloud and private colocation can maintain momentum while facilities modernize, provided architectures are designed for phased, horizontal scale.

Getting from idea to outcome, faster

Leaders should anchor early on business outcomes and workload definitions, model full economics across placement options, prioritize data locality and governance, design for optionality, and operationalize from day one. Skills, observability and security should mature alongside architecture, not after pilots conclude.

WWT's Advanced Technology Center (ATC) underpins the AI Proving Ground — a multi-vendor environment where clients can evaluate integrated reference architectures before committing capital. Through focused proofs of concept, lab-as-a-service engagements and "try before you buy" evaluations, leaders can validate performance, cost and operational readiness using their own data and controls.

Just as importantly, WWT emphasizes avoiding "POC frenzy." Every engagement is tied to a production roadmap and measurable business outcome, delivering experimentation that translates into durable capability. 

This theme is showcased in the WWT approach: "use case → demo (art of the possible) → proof of viability (POV) or minimum viable product (MVP) → production."

The path to durable advantage

Durable advantage in AI does not materialize from chasing every new model or platform. It comes from compressing the AI lifecycle from idea to operation: grounding adoption in clear outcomes, validating those outcomes through realistic pilots, and scaling on architectures that reflect today's economic, power and facilities realities.

Organizations that master this transition will not only deploy AI faster, but they will do so with confidence, control and sustained return on investment.

WWT Research
Insights powered by the ATC

This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research.


This report is compiled from surveys WWT Research conducts with clients and internal experts; conversations and engagements with current and prospective clients, partners and original equipment manufacturers (OEMs); and knowledge acquired through lab work in the Advanced Technology Center and real-world client project experience. WWT provides this report "AS-IS" and disclaims all warranties as to the accuracy, completeness or adequacy of the information.

Contributors

Chris Campbell
Sr. Director - AI Solutions - AIaaS/GPUaaS, Facilities & Infrastructure
Derek Elbert
High Performance Architecture Practice Lead
