The AI Security Crossroads: Black Hat and Defcon 2025 Show What's Next
In this blog
BlackHat USA 2025 and Defcon 33, both held in Las Vegas, left no doubt that artificial intelligence is now the defining force in cybersecurity. This year's events marked a clear shift from 2024, when the focus was on protecting large language models from prompt injection, jailbreaks and retrieval augmented generation risks.
In 2025, the spotlight moved decisively toward agentic AI—autonomous, goal-oriented systems capable of executing complex, multi-step operations without human intervention.
These AI agents are becoming vital parts of defensive security strategies but are also arming attackers with unprecedented speed and capability. The result is a high-speed race in which both sides are continuously adapting, and the margin for error is shrinking.
Keynote insights and strategic themes
Jeff Moss set the tone in his opening remarks by calling for integrated resilience across cyber, physical and narrative domains—a departure from the siloed defenses that have long dominated the industry. He stressed that trust between vendors, clients and partners is now just as critical as technical speed in detection. His push for open threat intelligence frameworks to counter adaptive AI threats resonated strongly with attendees and offered a vision of broader collaboration across the sector.
Mikko Hyppönen provided a 30-year retrospective on cybersecurity, tracking its evolution from the hobbyist malware of the 1990s to the state-sponsored, AI-driven ecosystems of today. He predicted a near future where autonomous reconnaissance bots could run entire exploit chains without human oversight, and urged the embedding of adversarial AI simulations into testing—a message that dovetailed with many of the week's technical demonstrations.
Nicole Perlroth turned the focus to operational and human risks, showing how malware-as-a-service operations powered by AI could spin up targeted campaigns in minutes. Her live demonstration of deepfake-driven disinformation campaigns, including fake CEO videos, was a stark reminder that reputational damage can now occur well before a technical breach is even detected. She advocated for incident response playbooks that incorporate narrative defense alongside technical mitigation.
Kate Fazzini deepened the conversation on narrative warfare—harassment, misinformation, and even home network attacks targeting executives and analysts. She gave real-world examples of journalists collaborating with security teams to expose nation-state operations and emphasized that high-value personnel's personal environments have become viable entry points for attackers.
Notable threats and techniques
Technical briefings highlighted an array of vulnerabilities and attack strategies now in active use.
- Cloud & Identity Exploits: Researchers showed how impersonating agents could escalate privileges into high-level IAM roles. Another demonstration revealed how Active Directory-to-Entra ID trust relationships could be abused for rapid tenant mapping and privilege escalation. A case study on API and Key Vault secrets leakage underscored both the technical risk and the challenges in coordinating disclosure and remediation.
- Secrets & Policy Gaps: A flaw in security allowed bypassing lockout mechanisms and escalating privileges to root via logic errors in policy validation.
- AI-Specific Attacks: Multiple sessions showed how prompt injection could be used to exfiltrate sensitive data from enterprise LLM deployments. Data poisoning was demonstrated as a way to reduce malware detection accuracy in real time. Architectural flaws in agentic AI systems were exploited to bypass guardrails entirely.
- Infrastructure Risks: Misconfigured environments allowed privilege escalation in GPU-powered AI workloads, highlighting the importance of hardening AI infrastructure.
- Evasion Tactics: AI-obfuscated DNS tunneling was used to disguise malicious traffic, and Event Tracing for Windows logs were manipulated to blind EDR systems.
The common thread across these threats was the speed and adaptability AI now gives to attackers—compressing the kill chain from hours to minutes.
Defcon 33 hands-on highlights
While Black Hat leaned toward strategic insights and technical briefings, Defcon delivered the tactile, hands-on experience.
Attendees began with a drone hacking lab, taking apart drones to extract SD card data and manipulate software—a practical illustration of how hardware security intersects with data protection. In the IoT Village, a firmware reverse engineering lab guided participants through extracting files from firmware, revealing persistent weaknesses in device update mechanisms. Another lab on Govee smart lights showed just how easy it is to exploit Bluetooth devices: identify the network, locate the MAC address, then use a Python script to toggle the lights on and off.
One of the most talked-about sessions was GlytchC2, where presenters demonstrated hijacking offline Twitch livestream pages, bringing them online, injecting their own content, and even exfiltrating data from the stream. The fact that this vulnerability remains unpatched only heightened the urgency of the discussion.
John Hammond's open cybersecurity talk was a crowd favorite, covering topics from navigating the dark web to mobile malware capable of leaking personal photos. It was less formal and more wide-ranging, but it gave attendees a quick, intense tour of current threat realities.
A standout session, Here and Now: Exploiting the Human Language, broke down social engineering into a structured framework: perceive without bias, assess risks and opportunities, decide and act with awareness, and read attitudinal cues like pauses or tension. The emphasis was on human signals as exploitable assets—a reminder that social engineering remains a potent first step in many technical compromises.
Lab and Learning vision for 2025–2026
The lessons from both conferences feed directly into our potential content development roadmap:
- Q3 2025 – DNS Security Labs: Launch scenario-based labs using Infoblox Threat Insight and Splunk, training analysts to detect and mitigate DNS tunneling, spoofing and C2 traffic in realistic simulations.
- Q4 2025 – DNS Tunneling Learning Paths: Build structured learning paths combining Infoblox detection with Recorded Future threat intelligence, helping analysts understand both the "how" and "why" of malicious DNS activity.
- Q2 2026 – AI Threat Modeling: Partner with Protect AI and Deep Instinct to create adversarial AI threat models and test them for vulnerabilities in LLMs, agentic AI and AI-assisted malware under controlled lab conditions.
- Q3 2026 – Entra ID Attack Simulations: Expand SOC training to include lateral movement and privilege escalation scenarios leveraging Entra ID, teaching analysts to recognize abnormal identity behaviors and misconfigured trust relationships.
- Q4 2026 – Narrative Defense Modules: Introduce training with Blackbird.AI and Dataminr to detect and respond to disinformation campaigns, deepfakes, and targeted harassment in real time.
Conclusion: The AI-driven battleground
Black Hat and Defcon 2025 left no doubt that AI—particularly agentic AI—is now at the heart of the cybersecurity fight. The same technology that can automate detection, investigation, and response is enabling attackers to compress timelines, chain exploits, and evade defenses faster than ever.
Defenders who adopt AI thoughtfully, build in strong guardrails, and prepare for adversaries who blend technical, physical, and narrative tactics will be best positioned to stay ahead in this high-speed contest. The battleground is here, it is AI-driven, and it is evolving by the minute.