CrowdStrike At GTC Makes The Case For AI Native Security
by Tony Bradley via Forbes
I've been watching NVIDIA's GPU Technology Conference evolve for years. It started as a developer event for people who worked with graphics hardware. Then AI took over, and GTC became something closer to a de facto AI conference—the place where the industry takes stock of where compute and intelligence are heading. GTC 2026 continued that trend, but with a different undercurrent than previous years.
AI infrastructure is no longer hypothetical. Enterprises are building AI factories and deploying agentic workloads. That shift is creating a security problem that can't be bolted on after the fact—and the number of security announcements coming out of GTC this year reflected that reality.
CrowdStrike was particularly active at GTC, with several announcements tied directly to NVIDIA's ecosystem. Taken together, they say something about where enterprise security is going and how tightly it's now connected to AI infrastructure decisions.
Testing Security Before AI Goes Into Production
One of the practical problems with enterprise AI adoption is that organizations are building AI factories faster than they're figuring out how to secure them. CrowdStrike and World Wide Technology tried to address that gap with the launch of the Securing AI with CrowdStrike Lab inside WWT's AI Proving Ground, which is built on NVIDIA AI factory infrastructure.
The lab gives enterprises a place to test and validate AI security controls before committing to production deployment. Misconfiguration, data exposure and prompt injection risks don't disappear because you're excited about what the GPU cluster can do. Having a validated environment to work through those issues before they show up in a breach report is a reasonable approach.