Partner POV | The Year in Review 2025: AI, APIs, and a Whole Lot of Audacity
In this article
- The Year in Review
- Reflections and expectations
- Roger Barranco, Vice President of Global Security Operations
- Richard Meeus, Senior Director of Security Technology and Strategy, EMEA Region
- Reuben Koh, Director of Security Technology, APJ Region
- Steve Winterfeld, Advisory CISO
- Tricia Howard, Scrybe of Cybersecurity Magicks
- A commitment to learning
This article was written and contributed by Akamai.
The Year in Review
Remember that wild heist at the Louvre Museum in Paris in October 2025? Thieves literally used a cherry picker to smash through a window and steal French crown jewels. It's honestly the perfect metaphor for what has happened in cybersecurity this year: Threat actors will use whatever tools they can get their hands on to grab your digital crown jewels.
Today's tools are both super sophisticated and frighteningly easy to access, thanks to ransomware as a service (RaaS). The kicker is that, unlike a museum, your enterprise "treasure room" is massive and spread across the entire globe. That's a lot of windows you need to keep an eye on.
Helping businesses tackle these security challenges is what we're all about at Akamai. Our State of the Internet (SOTI) reports are designed to give you the insights you need to fight back against the threats that are changing the game in cybersecurity.
And, yeah, AI is becoming a bigger player in security, but we know that nothing beats real human expertise when it comes to spotting and stopping sophisticated attacks.
Reflections and expectations
For this Year in Review, we pulled together some of the brightest cybersecurity minds from across Akamai to break down the biggest security trends from this past year and explore what might be coming in 2026.
In this blog post, Roger Barranco, Richard Meeus, Reuben Koh, Steve Winterfeld, and Tricia Howard share practical, actionable ideas to help you close security gaps and build up your defenses.
Roger Barranco, Vice President of Global Security Operations

What stood out for you in 2025?
"AI has lowered the barriers to entry for attackers. Threat actors no longer need to be skilled coders; they can use AI to verbally build and mount an attack."
We observed a dramatic increase in the number of very large distributed denial-of-service (DDoS) attacks. Attacks in the multi-terabit range are becoming a regular occurrence. So far, these attacks have been readily mitigated. However, we are also seeing an increase in more complex attacks, including those launched by nation states. This highlights the value of having a scalable platform with strong native capabilities combined with highly skilled individuals available to quickly address complex, zero-day attacks.
We've also seen a phenomenon in which some organizations are being targeted with bot attacks by an extremely diligent attacker. We mitigate the attack, but the attackers continue to come after the target, often mixing in scraping activity. This may or may not be malicious; it could be an aggregator looking to gather product information.
Often, this activity is timed around specific events, like a product launch or promotional event. When needed, we can dedicate a security architect to be on the customer bridge during these events, addressing any anomalies in real time.
The increased use of AI by attackers was another trend in 2025. AI has lowered the barriers to entry for attackers. Threat actors no longer need to be skilled coders; they can use AI to verbally build and mount an attack. Fortunately, AI has also empowered defenders by helping to quickly identify subtle behavioral anomalies that may be indicative of an attack.
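To make that defensive idea a bit more concrete, here is a deliberately simple sketch of the underlying concept: flagging behavior that deviates sharply from a recent baseline. It uses plain statistics rather than AI, and the traffic feed, window size, and threshold are illustrative assumptions, not Akamai's detection logic.

```python
# Illustrative only: flag request rates that deviate sharply from a rolling
# baseline. The window size and z-score threshold are hypothetical choices.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(requests_per_minute, window=30, z_threshold=3.0):
    """Yield (minute_index, value) pairs that look anomalous vs. recent history."""
    history = deque(maxlen=window)
    for i, value in enumerate(requests_per_minute):
        if len(history) >= 10:  # wait until there is a minimal baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                yield i, value
        history.append(value)

# A client that hovers around 100 requests/minute, then suddenly spikes
traffic = [100 + (i % 5) for i in range(60)] + [900, 950, 120]
print(list(detect_anomalies(traffic)))
```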
In last year's Year in Review, I highlighted the resurgence of the Mirai botnet and it has come back again with a vengeance. This year has seen the emergence of some extremely powerful Mirai variants that we are watching closely.
What notable issues do you foresee in 2026?
CISOs are under significant pressure to facilitate the rapid adoption of AI, and that pressure can skew the prioritization of risk register items by assigning AI-related initiatives lower risk levels than they warrant. This further highlights the need for purpose-built technologies that effectively protect AI elements, including large language models (LLMs).
The rapid adoption of AI also creates a new front in API security. Typically, an LLM's traffic will transit an API at some point, making API protection critical. Now, CISOs are recognizing that API security can be used to identify critical traffic, such as determining whether it's an LLM or a human trying to access something.
This generates an alert, enabling security teams to decide whether to allow the traffic or block it. Therefore, there is big value in monitoring APIs to identify where LLMs are or where AI is being used to reach inside the enterprise.
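As a purely illustrative sketch of that idea (not a description of any specific product), an API layer might label traffic that self-identifies as an AI agent and raise an alert so a security team or policy engine can decide whether to allow or block it. The header values and agent signatures below are hypothetical examples.

```python
# Minimal sketch: classify inbound API traffic and alert on AI agent activity.
import logging

AI_AGENT_SIGNATURES = ("gptbot", "claudebot", "perplexitybot", "ai-agent")

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api-traffic")

def classify_request(headers: dict) -> str:
    user_agent = headers.get("User-Agent", "").lower()
    if any(sig in user_agent for sig in AI_AGENT_SIGNATURES):
        return "ai-agent"
    return "human-or-unknown"

def handle_request(headers: dict, path: str, allow_agents: bool = True) -> bool:
    if classify_request(headers) == "ai-agent":
        log.info("ALERT: AI agent traffic on %s (UA=%s)", path, headers.get("User-Agent"))
        return allow_agents  # the policy decision: allow or block
    return True

handle_request({"User-Agent": "GPTBot/1.2"}, "/v1/orders")
```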
With an increased focus on AI governance, more organizations are creating rules regarding whether employees are permitted to build their own AIs to assist with their work. This is giving rise to AI control boards that determine what is allowed and provide oversight, so that AI use isn't completely unregulated as it was in the past; this oversight is vital from a security perspective.
In 2024's Year in Review, I mentioned the need to focus on quantum computing, and that continues to be a future concern. Organizations are already dealing with post-quantum–safe certificates, which are designed to remain secure against attacks from quantum computers. The challenge is how to transition to these new digital certificates when some end users are using browsers that do not support them.
In 2026, I expect to see enterprises upgrading their infrastructure and encouraging their customers to upgrade their browsers to ones that support post-quantum–safe certificates.
Richard Meeus, Senior Director of Security Technology and Strategy, EMEA Region

What stood out for you in 2025?
I think the best word to describe this past year is resilience — specifically, the lack of it. Here in the United Kingdom, we saw some significant and highly publicized attacks on retail enterprises, as well as ongoing attacks targeting some major manufacturers. Many of these attacks went through customer service in order to access user information and reset passwords. Customer service hasn't traditionally been a primary attack vector, but it emerged as one that is perhaps not red-teamed enough.
Last year, I discussed the NIS2 Directive's focus on operational resilience. It was due to be transposed into national legislation by each European Union (EU) member state by October 2024, although many countries have not yet done this.
Contrast this with the Digital Operational Resilience Act (DORA), which provides an enhanced resilience framework for financial organizations. Because DORA is a regulation, not a directive, it became applicable across the EU as a whole on January 17, 2025. NIS2, however, is proving to be a bigger lift for some countries.
In 2025, we also saw some major outages caused by faulty updates. These were extremely disruptive to businesses and consumers across a variety of industries. This issue raises important questions:
- Should you assume every vendor update can be trusted?
- Should you triage it and, if so, what processes do you have for that?
- Do you sandbox it and do a staged rollout or canary deployment?
Companies need to prepare now and think through how they will mitigate this risk; one way to approach it is sketched below.
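For illustration, here is a rough sketch of what a staged (canary) rollout of a vendor update could look like. It assumes you can apply the update to a subset of hosts and read back a health signal; apply_update() and health_check() are placeholders for your own tooling, not a prescribed process.

```python
# Hypothetical staged rollout: expand in waves, soak, and halt on bad health.
import time

STAGES = [("canary", 0.01), ("early", 0.10), ("broad", 0.50), ("full", 1.00)]

def apply_update(hosts):
    print(f"applying vendor update to {len(hosts)} hosts")  # placeholder

def health_check(hosts) -> bool:
    return True  # placeholder: error rates, crash loops, support tickets, etc.

def staged_rollout(all_hosts, soak_minutes=30):
    done = 0
    for stage_name, fraction in STAGES:
        target = int(len(all_hosts) * fraction)
        batch = all_hosts[done:target]
        if not batch:
            continue
        apply_update(batch)
        time.sleep(soak_minutes * 60)  # let the new batch soak before expanding
        if not health_check(all_hosts[:target]):
            print(f"halting and rolling back at stage '{stage_name}'")
            return False
        done = target
    return True
```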
What notable issues do you foresee in 2026?
"Just as human service agents can be socially engineered, so can chatbots. Understanding the shift from attacking customer service staff to attacking these endpoints will become a big concern for businesses."
I already mentioned customer service as an attack vector; I can see this evolving further in 2026 as human service representatives are increasingly augmented by AI chatbots. These chatbots are moving beyond hierarchical, tree-based responses to have more intelligence and the ability to provide customers with more information quickly.
However, just as human service agents can be socially engineered, so can chatbots. Understanding the shift from attacking customer service staff to attacking these endpoints will become a big concern for businesses.
It will be interesting to see what happens as AI changes the way we use the internet. The way people search hasn't changed fundamentally since the days of AltaVista. It will change, however, as people use AI agents to do their searching/booking/ordering for them.
This raises an interesting question: Will the web server or gateway treat that situation as a human request or a bot request (instigated by a human)? I think organizations will need to deduce what the impact of this interaction is and how to manage it from a security standpoint. In a way, the pendulum will shift from bots being viewed largely as a problem for businesses to bots being seen as a benefit.
So, being able to verify and validate bots will become important. This will lead to agreements with major AI vendors to cryptographically verify their queries, so those requests can be treated differently from a dodgy scraper's.
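As a loose illustration of what "cryptographically verify" could mean in practice, here is a minimal sketch in the spirit of signed requests. The key registry, the vendor name, and the signed payload format are assumptions for illustration, not an agreed standard; it uses the open source cryptography package.

```python
# Sketch: accept traffic as verified agent traffic only if its signature checks out.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a public key an AI vendor would publish; generated locally so
# this sketch actually runs end to end.
_vendor_key = Ed25519PrivateKey.generate()
TRUSTED_AGENT_KEYS = {"example-ai-vendor": _vendor_key.public_key()}

def verify_agent_request(agent_id: str, method: str, path: str, signature: bytes) -> bool:
    public_key = TRUSTED_AGENT_KEYS.get(agent_id)
    if public_key is None:
        return False  # unknown agent: treat it like any other bot
    try:
        public_key.verify(signature, f"{method} {path}".encode())
        return True
    except InvalidSignature:
        return False

# The "vendor" signs its request; the edge verifies it before applying bot policy.
signature = _vendor_key.sign(b"GET /products")
print(verify_agent_request("example-ai-vendor", "GET", "/products", signature))  # True
```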
Finally, I think we'll see an increase in DDoS attacks that exploit Internet of Things (IoT) vulnerabilities. The Cyber Resilience Act (CRA) that the EU passed in 2024 is intended to address the inherent poor security often seen in IoT devices that leads to easy abuse by attackers.
The CRA provides a harmonized approach to IoT device security designed to simplify compliance and avoid overlapping regulations. Provisions will phase in over time, so it will be interesting to see how companies comply and what impact that will have on IoT security.
Reuben Koh, Director of Security Technology, APJ Region

What stood out for you in 2025?
"AI is powering successful attacks and posing challenges for conventional defenses and security teams because the attacks are harder to detect, have more impact, and achieve their objectives faster. "
In 2025, we saw AI move from experimentation to practical implementation in cyber offensive capabilities. AI is powering successful attacks and posing challenges for conventional defenses and security teams because the attacks are harder to detect, have more impact, and achieve their objectives faster.
Attackers are actively using GenAI to develop everything from malware to AI threat agents at speed, without requiring deep technical skill sets. Defenders cannot afford to sit still. Their defenses will become obsolete very quickly if they don't adapt to this AI arms race.
This emphasizes the need for greater AI literacy. Security teams must become well versed in the technology so they can harness its power. For example, some enterprises are building their own AI chatbots to capture their accumulated security knowledge in a training dataset so less-experienced team members can get quick assistance.
With AI evolving so rapidly, teams need to brainstorm about what else they can do beyond today's capabilities. It's also important to make sure vendors and supply chain partners are focused on AI security. Enterprises must ask: "If I'm using your AI-enabled tool, how are you protecting your LLM, your dataset, and other AI elements?"
What notable issues do you foresee in 2026?
The rapid acceleration of the attack lifecycle, fueled by autonomous AI, will be a top challenge in 2026. From automated vulnerability exploitations to AI-assisted malware development, AI is driving new levels of attack precision and efficiency.
Studies have shown that the time required for AI-enabled exploits to achieve success has been compressed dramatically. A typical data breach in 2026 will require hours rather than weeks to deliver an impact. This is a particular concern in the Asia-Pacific (APAC) region, where enterprises commonly still rely on traditional perimeter defenses and human-centric response. Organizations that lack mature security operations and AI literacy will be at a significant disadvantage.
In 2026, APIs will surpass all other attack vectors to become the dominant source of application-layer data breaches, potentially accounting for more than half of all application attacks across APAC. More than 80% of APAC organizations reported experiencing at least one API security incident in the past year.
Critically, nearly two-thirds of APAC organizations don't know which of their APIs are processing sensitive data. This highlights the need for greater visibility and governance of API inventories.
The adoption of "vibe-coding," where GenAI is used to create APIs, will only increase the risk. AI-assisted coding has been linked to a rise in misconfigurations, insecure default settings, and overlooked vulnerabilities. To manage the risk, organizations will need specialized security tools to discover, test, and protect APIs throughout their lifecycle.
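As a toy illustration of that discovery problem, the sketch below walks an OpenAPI spec and flags operations whose schemas contain sensitive-looking field names. The field list is a crude, hypothetical heuristic, not a substitute for purpose-built API discovery and classification tooling.

```python
# Sketch: flag OpenAPI operations whose schemas mention sensitive-looking fields.
SENSITIVE_HINTS = {"ssn", "password", "card_number", "email", "date_of_birth"}

def find_sensitive_fields(node, route=""):
    """Recursively collect schema property names that look sensitive."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "properties" and isinstance(value, dict):
                hits += [f"{route}: {name}" for name in value
                         if name.lower() in SENSITIVE_HINTS]
            hits += find_sensitive_fields(value, route)
    elif isinstance(node, list):
        for item in node:
            hits += find_sensitive_fields(item, route)
    return hits

def audit_openapi(spec: dict):
    for route, operations in spec.get("paths", {}).items():
        for hit in find_sensitive_fields(operations, route):
            print("possible sensitive data ->", hit)

# Tiny inline example spec
spec = {
    "paths": {
        "/users": {
            "post": {
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {
                                "properties": {
                                    "email": {"type": "string"},
                                    "password": {"type": "string"},
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
audit_openapi(spec)
```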
Ransomware will be completely democratized in 2026, driven by RaaS and AI-powered "vibe-hacking." Expect to see an increase in attack frequency and speed. The lines separating cybercriminals, hacktivists, and state-aligned operators are becoming blurred, obscuring attack motives and making attribution more difficult.
In APAC, high-tech manufacturing is likely to be a prime target. A successful ransomware attack against semiconductor fabrication plants would disrupt the global chip supply and inflict tremendous economic damage. With the democratization of advanced attack tools, organizations must quickly ramp up their operational resilience and not only focus on preventing breaches.
Steve Winterfeld, Advisory CISO

What stood out for you in 2025?
This year, the SOTIs have included a guide for defenders and the latest information on apps and APIs, ransomware and DDoS, and fraud and abuse threat trends. Two patterns were clear throughout all the reports:
First, bots continue to innovate and will be as persistent as DDoS has been.
Second, we see cybercriminals continue to follow the critical data to monetize their attacks, and we see APIs as the primary focus with GenAI/LLMs emerging as the new target area.
Statistics confirm these two trends. For example, AI bot traffic has been rising by 300% year over year from July 2024, we have seen 94% growth in quarterly application-layer (Layer 7) DDoS attacks, and 47% of AppSec teams maintain full API inventories but fail to identify APIs that handle sensitive data.
Next, I should mention some of the frameworks that were reviewed in the SOTI reports. Here are the resources that stood out to me:
- The insights into mapping real-world attack alerts to different frameworks, which showed what actual attack trends were based on industry standards (sample results — OWASP 32%, MITRE 30%, ISO 22%, GDPR 21%, PCI DSS 16%)
- A compliance integration process that presented the six steps to build a unified multidisciplinary program
- Guidance on how to map OWASP Top 10 vulnerabilities to your fraud program to help prioritize what to fix first
As we think about how to take advantage of these insights, it is important to remember that we need situational awareness to understand what is happening. We also need to work with a vendor whose platform can provide the integrated ability to tailor your mitigations based on your leadership's risk appetite. Next, we need to validate our process and technical controls through exercises and testing.
Finally, it is important to consider the criminal ecosystem, in which state-sponsored attacks driven by regional wars, monetized bot and RaaS capabilities, and AI-powered fraud tools like FraudGPT and WormGPT are redefining today's cyberthreat landscape.
What notable issues do you foresee in 2026?
"Most organizations need to update their cyber risk portfolio to ensure that they can handle the latest trends, such as the new surge in scraping, the need for brand protection, and new record-setting DDoS attacks."
As we look toward 2026, we see that most organizations need to update their cyber risk portfolio to ensure that they can handle the latest trends, such as the new surge in scraping, the need for brand protection, and new record-setting DDoS attacks. But the real work is making sure they can mitigate two key threats: edge attacks and business disruption.
API and GenAI capabilities are exploding and organizations need to make certain that they are secure. They also need to detect and segment ransomware attacks so they do not have a material impact on business.
Across industries and regions, cyberthreats are escalating in both scale and sophistication — driven by AI innovation. The mandate for 2026 is clear: Build resilience through tested playbooks, leverage frameworks (like OWASP, MITRE, ISO), and run validation exercises that turn situational awareness into measurable readiness.
Tricia Howard, Scrybe of Cybersecurity Magicks

What stood out for you in 2025?
"The more AI is used by attackers, the more important it becomes to understand the people and psychology behind the attacks. "
The year 2025 continued to prove to me that the more complex the threats are, the more important the basics are. Imagine buying a US$5 million house and then finding out the foundation is bad. You'd be pretty upset, right? Imagine how much more money you'd have to spend to fix the foundation after you'd already moved in.
Security is no different — you need a solid foundation of the basics to stay safe. Some of the biggest breaches we've seen came down to simple phishing or a failure to use multi-factor authentication (MFA). The basics matter. Please, please, please patch your vulnerabilities!
Attackers have long repurposed valid, established tactics for their malicious intent, but even that has evolved. A troubling trend we're seeing more and more of is the abuse of processes and/or features that are not only legitimate, but also are intended to help. A great example of that is the new variant of the Coyote malware — the first confirmed case of maliciously using Microsoft's UI Automation (UIA) framework in the wild by weaponizing accessibility features.
Another example is the abuse of compliance processes and knowledge of the legal ramifications to ensure a payout in a ransom situation. It works because it's brilliant, which also makes it scary and worth keeping an eye out for.
Finally, AI and LLMs made an obvious impact this year, including in attacks focused on business logic abuse. The more AI is used by attackers, the more important it becomes to understand the people and psychology behind the attacks.
You're not just fighting the tools — you're fighting the humans building and using those tools. Attackers are people, too, and criminal organizations are enterprises. They're trying to do more with less like everyone else. So, rather than focusing only on the tools themselves, asking why the attackers are using these tools and what they're trying to achieve is really important for proper defense. LLMs and GenAI amplify the human factor; they don't remove it.
What notable issues do you foresee in 2026?
Organizations need to assume a breach. Full stop. We are beyond wondering if they will get in. They will get in. In fact, they've probably already been in. Disaster recovery should be a part of your business continuity plan now.
This is especially true for an enterprise — when an incident occurs, a lot of things need to happen in a very short amount of time. That means organizations need to break down the communication and technical barriers that separate groups — marketing and public relations here, IT there, and security over there — to be able to mount an effective response. Plus, the more you're communicating across teams, the more opportunities you have to find potential vulnerabilities.
I believe that third-party risk and supply chain risk are going to become really critical in 2026. Because of AI and LLMs, it's a completely different ball game from what we've seen in the past. You may have security vendors that are using these GPTs without disclosing that their customers' data is being fed into them.
As a defender or a CISO, how can you manage your risk when you don't even know what that risk is? It will be vital to assess your vendors (and their vendors), know what questions to ask, and be able to score your risk.
I expect we'll also see a continuation of hacktivism tied to geopolitics. People are even more purpose-driven today because of the harsh political climates around the world. I think we're going to see more highly targeted whale hunting attacks (i.e., sophisticated phishing attacks that target high-profile executives) that use AI tools. The agility that these tools provide allows attackers to focus on a target with significantly higher earning potential with no more effort than what's needed to target a smaller business.
Whether the attacks are financial, purely disruptive, or something else, understanding the attackers' motivation will help organizations make decisions about where to focus their defensive efforts.
A commitment to learning
We hope that this Year in Review 2025 gave you some solid insights and helpful takeaways. As we head into 2026, we're staying committed to diving deep into the cybersecurity world and covering the stuff that actually matters to you. We're excited to bring fresh perspectives to the table while keeping things relevant, whether you're dealing with global threats or regional issues.