Thoughts from the Cisco AI Summit 2026

For the past several months, I've been writing about a pattern I keep seeing in AI adoption. The technology is moving fast, but organizations are struggling to absorb it in ways that actually change how work gets done. What stood out to me at the Cisco AI Summit was that this gap was no longer implied or politely sidestepped. It was named, repeatedly, and from the top.

Across product, people, and platform leaders, the message converged on the same idea. AI systems are now far more capable than the ways most organizations use them. Sam Altman described this as a "capability overhang," a growing mismatch between what models can do and what humans and institutions are ready to deploy. Put simply, the technology has sprinted ahead. Our ways of working have not.

That theme echoed throughout the summit. Jeetu Patel spoke about a paradox: AI is solving increasingly complex problems, yet leaders struggle to articulate concrete ROI because adoption stalls at the human and organizational level. Francine Katsoudas described what she called a "map problem." Traditional hierarchies no longer reveal where AI value is actually being created. The most effective users are often not the most senior or the loudest voices.

I'm seeing the same pattern in my own work. When leaders lack visibility into how AI is actually being used, they often assume it isn't being used at all. That assumption leads to parallel initiatives, duplicated effort, and teams solving the same problems in isolation. Without shared literacy, leaders lose sight of who is experimenting responsibly, who is stuck, and where real momentum is forming. Innovation fragments, and scale never follows.

Several speakers pointed out that this fragmentation is amplified by the nature of modern knowledge work. Aaron Levie argued that while coding has adopted AI rapidly because it is structured and verifiable, most enterprise work is anything but. It is context-heavy, permissioned, and full of tacit judgment. I see this gap daily, both internally and with clients. AI struggles here not because it lacks capability, but because people have not been taught how to use it effectively, how to provide the right context, or how to think about AI as part of a larger system rather than a standalone tool. That realization became clear to me in late 2024 and has shaped the enablement work I focus on today. This is a fluency gap, not a tooling one.

The human consequences are already visible. Katsoudas shared findings from Cisco's internal AI research study that surfaced what she described as an "efficiency paradox." Individual employees using AI can move at incredible speed, but when usage is uneven or invisible, team trust can fracture. High performers pull ahead, collaboration models strain, and organizations built for slower, more linear work start to feel the stress.

At the same time, the nature of value creation is shifting. Kevin Scott noted that as the cost of creation drops, the bottleneck moves to judgment. Reviewing outputs, catching errors, and deciding what should not be automated become the scarce skills. Katsoudas connected this shift to a growing need for curiosity and agency at every level, not just at the top. When fluency is left to self-selection, organizations unintentionally narrow who benefits.

My takeaway from the summit is that AI adoption does not fail because people resist change. It fails because organizations are not addressing how humans work and collaborate, even as they hand them radically different tools.

AI literacy helps people understand what these systems can and cannot do, reducing fear and magical thinking. AI fluency allows teams to supervise, judge, and collaborate with increasingly capable systems. Without both, leaders end up with isolated power users, fragmented trust, and experiments that never scale.

This is why I keep coming back to human-centric AI adoption. Not because it sounds comforting, but because it reflects reality. AI does not transform organizations by being introduced. It transforms them when people learn how to think, decide, and collaborate differently.

If your AI strategy does not include changing how humans learn and work, it isn't really an adoption strategy. It's a collection of pilots waiting to stall, no matter how advanced the technology becomes.
