Cyber experts pinpoint what to look out for in 2026
by David DiMolfetta, Nextgov/FCW
With 2025 coming to a close, Nextgov/FCW asked cybersecurity experts — including former officials, research analysts and providers — to outline their predictions for cybersecurity activity in 2026.
Morgan Adamski, former executive director at U.S. Cyber Command and deputy leader for PwC's Cyber, Data & Technology Risk Platform
Jiwon Ma, senior policy analyst at the Foundation for Defense of Democracies' Center on Cyber and Technology Innovation
John Laliberte, former NSA vulnerability researcher and CEO and founder of ClearVector
Frank Cilluffo, former homeland security official under President George W. Bush and director of the McCrary Institute for Cyber and Critical Infrastructure Security
Madison Horn, national security and critical infrastructure chief advisor at World Wide Technology:
In 2026, the most dangerous cyber events will not look like cyberattacks at all. They will look like reasonable, automated decisions made at scale until systems begin to fail.
The defining risk will be cascading AI failures across critical infrastructure. A single compromised or poorly governed AI agent in energy, transportation or logistics will trigger automated responses across tightly coupled systems. One "bad" decision will propagate instantly, not because systems were breached, but because they were trusted.
At the same time, AI supply chain compromise will eclipse zero-day exploits as the highest-impact attack vector. Poisoned training data, manipulated model weights, compromised plugins, and agent action libraries will quietly undermine AI systems long before deployment. These failures won't be detected by traditional security tooling, and most organizations won't realize they are operating on corrupted intelligence until physical or economic consequences emerge.
Beneath the AI layer, virtualization and hypervisors will become the next systemic choke point.
Hypervisors sit below cloud workloads, [operational technology] edges, and enterprise environments, yet few organizations have visibility or security ownership at this layer. As dependence on virtualization deepens, these hidden control planes will represent a single point of failure capable of producing cross-sector disruption.
By the end of 2026, identity as we know it will break. Enterprises will manage exponentially more machine, AI agent, and workload identities than human ones, and current [identity and access management] models are fundamentally incapable of governing autonomous, non-human trust at scale.
Meanwhile, quantum transition shock will arrive abruptly. Organizations will realize overnight that they should have started post-quantum migration years ago, particularly for long-life systems such as operational technology, satellites, and military communications.
As these risks converge, AI governance will move beyond corporate compliance and become a national security imperative, driving new liability frameworks, mandatory incident reporting, and sector-specific controls for AI systems with physical consequences.