WWT Research • Research Note • January 26, 2026 • 9 minute read

Executive Insight: The Thing About Bubbles and AI

Is AI a tech bubble waiting to burst? When it comes to AI investment and adoption, executives should understand how talk of "bubbles" may obscure the more consequential question: What does it actually take to make AI work at scale?

In this report

  1. Bubbles as a social technology
  2. Why the AI bubble narrative persists
  3. The real divide isn't optimists versus skeptics
  4. The myth of the standalone use case
  5. What bubble talk obscures
  6. The quiet importance of architecture
  7. Why governance is the unsung variable
  8. The time horizon problem
  9. So, is AI a bubble?
  10. The executive question that actually matters
  11. A final thought

Every technological cycle eventually produces the same question, asked with the same mix of urgency and self-satisfaction: Is this a bubble?

The question is rarely neutral. It's usually a warning disguised as analysis, a way of signaling prudence in the face of collective enthusiasm. To ask whether something is a bubble is to imply that others are being carried away, that you — thoughtful observer — might step aside just in time.

Artificial intelligence has now arrived at that stage. The valuations are large. The headlines are breathless. The capital expenditures are eye-watering. And so the question returns, as it always does, draped in reasonableness: Surely this can't last.

But the more interesting thing about bubbles isn't whether they pop. It's what they build while everyone is arguing about whether they will.

Bubbles as a social technology

The mistake most bubble debates make is treating bubbles as errors — deviations from rational markets that must eventually be corrected. Historically, they're something closer to a coordination mechanism.

Major technological shifts require three things to happen simultaneously: capital has to move, talent has to retool, and institutions have to rewire themselves. None of that happens at scale without excess. Over-investment isn't a pathology of transformation; it's how transformation gets financed before the returns are legible.

Take railroads, electricity and the internet. Each produced a moment where spending ran ahead of demonstrated value. Not because investors were stupid, but because the alternative was waiting for a certainty that could only emerge after the infrastructure was built.

This is why Carlota Perez's work remains so useful. What looks like a bubble from a balance-sheet perspective often looks like an installation phase from a systems perspective. Capital floods in not because every project is sound, but because enough of them might be. The losses are visible. The foundations are not.

AI fits this pattern almost uncomfortably well.

Why the AI bubble narrative persists

The persistence of the AI bubble narrative has less to do with AI itself than with our discomfort around ambiguity. AI is simultaneously overhyped and under-deployed. The models are impressive; the implementations are uneven. The demos are dazzling; the enterprise results are lumpy.

That combination invites skepticism. It feels safer to assume eventual collapse than to sit with the messiness of adoption.

There's also a category error at work. Many critiques of AI economics implicitly assume that value should show up quickly, cleanly and at the level of individual use cases. When it doesn't — when pilots stall, costs fluctuate or ROI depends on second-order effects — the conclusion is drawn that the underlying technology must be flawed.

But this is like judging the productivity of electricity by the first factory that tried to reorganize around it.

General-purpose technologies don't create value by being impressive in isolation. They create value by forcing organizations to change how they work. That change is slower, more political and far less linear than the hype cycle suggests.

Which brings us to the part of the bubble debate that actually matters.

The real divide isn't optimists versus skeptics

It's builders versus spectators.

What will separate organizations that extract durable value from AI from those that don't isn't their opinion on bubbles. It's whether they do the unglamorous work of turning novelty into infrastructure.

This is where the public conversation often misleads. From the outside, AI progress looks like a race between models: larger context windows, better reasoning, lower costs. From the inside, it looks like a struggle with data quality, change management, security reviews and ownership.

Executives who fixate on whether AI is "overvalued" are often responding to a more practical anxiety: What if we invest and don't see results? That fear is reasonable. But the answer isn't waiting for the bubble to resolve. It's understanding where results actually come from.

They rarely come from the first pilot.

The myth of the standalone use case

One of the quiet drivers of AI skepticism is the way organizations evaluate it. AI initiatives are often scoped narrowly, staffed thinly and judged quickly. A team is asked to "prove value" in 90 days using imperfect data and unfamiliar tools, while continuing to operate inside legacy processes.

When the results disappoint, the technology gets blamed.

But AI doesn't behave like a point solution. Its returns are highly sensitive to context: data readiness, architectural reuse, governance clarity and time horizon. A single use case may struggle on its own, but its components — cleaned data, prompt patterns, evaluation tooling, security reviews — are rarely single-use.

Value compounds when those components are reused.

This is why organizations that see strong returns from AI often sound oddly boring when they describe what they've done. They talk about platforms. About shared services. About standards. About investing early in capabilities that don't map cleanly to a P&L line item.

From the outside, this can look like bureaucracy. From the inside, it's how optionality is manufactured.

What bubble talk obscures

The obsession with bubbles tends to crowd out a more consequential conversation: What does it actually take to make AI work at scale?

That question has much less to do with model performance than with organizational design. It forces uncomfortable tradeoffs around centralization versus autonomy, speed versus control, experimentation versus accountability.

Most organizations, when pressed, discover they are not structured to absorb a general-purpose technology. They are optimized for incremental improvement, not for rethinking workflows. They reward local optimization, not cross-functional leverage. They fund projects, not capabilities.

AI exposes those fault lines quickly.

This is why debates about whether AI investment will slow or accelerate miss the point. Capital is not the limiting factor. Attention is. So is discipline.

The quiet importance of architecture

As AI capabilities commoditize, and they will, the locus of differentiation moves up the stack. The models matter, but they matter less than how they are embedded. The same underlying capability can produce radically different outcomes depending on how it is orchestrated, governed and extended.

This is not a new lesson. It's what happened with cloud computing. The winners weren't the ones who adopted cloud fastest, but the ones who rethought their architectures around it. Lift-and-shift delivered convenience. Cloud-native delivered leverage.

AI is following a similar path. Early wins often come from automating isolated tasks. Durable advantage comes from redesigning systems so intelligence is assumed, not bolted on.

[Figure: The AI deployment timeline, which maps investment versus value.]

That redesign is slow. It requires shared reference architectures, agreed-upon patterns and a willingness to invest ahead of certainty. It also requires accepting that while some early work may never pay off directly, it will make future work dramatically cheaper.

This is the part of the AI story that bubble narratives never capture, because it doesn't look like exuberance. It looks like plumbing.

Why governance is the unsung variable

Another casualty of bubble discourse is governance. It's often framed as a brake on innovation, something to be bolted on once the fun is over. In practice, governance is what allows experimentation to survive contact with reality.

Without clear decision rights, risk thresholds and ownership, AI initiatives stall in a familiar way: pilots proliferate but production lags. Teams learn a lot but nothing scales. Executives hear anecdotes, not aggregates.

The result is a perception gap. Operators see value. Leaders see noise.

Good governance closes that gap not by slowing things down but by making outcomes legible. It creates a shared language for success, a common set of metrics and a way to distinguish between promising failures and dead ends.

This is not glamorous work. It doesn't make headlines. But it's the difference between AI as a series of experiments and AI as an operating capability.

The time horizon problem

Perhaps the most corrosive effect of bubble thinking is the compression of time horizons. If you believe a collapse is imminent, you optimize for short-term proof. You demand immediate ROI. You underinvest in foundations.

Ironically, this behavior is what makes returns disappointing.

Organizations that extract real value from AI tend to evaluate it over realistic windows — long enough for learning to compound, but short enough to enforce discipline. They expect early inefficiencies. They plan for reuse. They measure portfolios, not anecdotes.

This doesn't mean blind faith. It means matching the evaluation model to the technology. You wouldn't judge a data platform by its first dashboard. You shouldn't judge AI by its first chatbot.

So, is AI a bubble?

It depends on what you mean.

If by bubble you mean there will be excess, failures and corrections, then yes. That's almost guaranteed. Some companies will overbuild. Some use cases will disappoint. Some valuations will look foolish in hindsight.

If by bubble you mean the underlying capability will fade or reverse, then history offers little support. The constraints around AI — energy, compute, data, talent — are real. But they define the shape of progress, not its existence.

The more interesting question for leaders isn't whether the bubble pops. It's whether their organization emerges from this period with new muscles or just scar tissue.

The executive question that actually matters

Strip away the noise and the strategic question becomes surprisingly simple: Are we using this moment to build capabilities that lower our cost of change in the future?

If the answer is yes, the bubble debate is mostly irrelevant. Excess capital and attention are doing useful work on your behalf. If the answer is no, then skepticism is just inertia in intellectual clothing.

This is the uncomfortable truth at the heart of the AI conversation: The biggest risk isn't being early; it's being unprepared when the technology stops being novel and starts being assumed.

By the time AI feels boring, the window for building advantage will have narrowed. The plumbing will be in place. The patterns will be known. The leaders will be those who invested when the conversation was still confused.

Which is, of course, exactly when bubble debates tend to be loudest.

A final thought

Every generation wants to believe it can outsmart the cycle. That it can enjoy the benefits of transformation without paying the price of excess. History suggests otherwise.

The smarter move is not to avoid the frenzy, but to use it — to let the noise create cover while you do the hard, quiet work of building.

Bubbles burst. Capabilities compound.

The difference between the organizations that thrive after the bubbles pop and those that don't is rarely timing. It's preparation.

And that is something that leadership can control.

WWT Research
Insights powered by the ATC

This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research.


This report is compiled from surveys WWT Research conducts with clients and internal experts; conversations and engagements with current and prospective clients, partners and original equipment manufacturers (OEMs); and knowledge acquired through lab work in the Advanced Technology Center and real-world client project experience. WWT provides this report "AS-IS" and disclaims all warranties as to the accuracy, completeness or adequacy of the information.

Contributors

Tim Brooks
Area VP and Global Head, AI Development & Business Advisors
