How to Use ARMOR: A Guide to AI Security Transformation
Many IT leaders find it difficult to understand how AI security differs from traditional cybersecurity and how to manage these new risks. WWT's AI Readiness Model for Operational Resilience (ARMOR), a vendor-agnostic framework to secure AI deployments at every stage, helps organizations operationalize AI security to protect against these dynamic and fast-moving threats. To understand the path ahead with ARMOR, let's consider a couple of related questions.
Why utilize a framework?
Frameworks provide an operational architecture to clarify, understand and align tactics to strategy, especially when those activities span cross-functional areas. Frameworks can also inform technical architecture decisions, which can either help or hinder the realization and sustainability of secure enterprise AI.
What is ARMOR?
ARMOR is a vendor-agnostic AI security framework delivered by WWT, leveraging a jointly built approach with NVIDIA and strengthened through real-world collaboration with the Texas A&M University System. It delivers clear, expert guidance across six security domains: governance, risk and compliance; model protection; secure AI operations; infrastructure security; data protection; and secure development lifecycle, with the through-line topic of cyber resilience. Each domain provides structured, practical guidance to help organizations evaluate and plan the security of their AI initiatives.
ARMOR is mapped to industry standards such as NIST, CSA, ISO, CIS and OWASP to ensure consistent, familiar messaging. It then takes this a step further, offering prescriptive, operational detail tailored for today's dynamic threat landscape.
Why ARMOR?
Securing and defending AI systems requires a focused and integrated approach. Without a comprehensive framework, organizations face issues such as data leakage, ineffective AI outcomes, compliance gaps and damaging blind spots in visibility.
ARMOR's six core domains focus on key elements of securing AI systems and related processes, helping you understand security control points inside and between these systems, and in their interactions with the surrounding environment. This is key to understanding how your AI security mechanisms mesh with traditional security solutions and is vital to identifying present gaps. ARMOR also accommodates organizations at different maturity levels. While some teams may need deep guidance across all domains, others may need expert concentration on a select few. ARMOR provides an accessible path for organizations of any maturity level. Even mature organizations benefit from reviewing each domain's expert insights to validate assumptions and confirm that their security posture is ready for modern AI demands.
Our direct experience supporting real-world AI deployments shows that strong results come from addressing security comprehensively, at every stage of the AI lifecycle and across all six ARMOR domains. Many frameworks list controls without exploring how to use them together in a way that reflects the challenge of complex, multi-functional organizations and significant change over the long term. ARMOR addresses this directly, with a "plan to implement, design to operate" approach that fits the realities of cross-functional teams and long-term AI evolution. ARMOR is a proven, real-world validated AI security framework, shaped through two-way collaboration with a lighthouse customer who served as the first testing ground.
How does ARMOR apply to highly regulated industries?
Highly regulated sectors face strict requirements for security, transparency and operational control. ARMOR supports these needs by aligning with established industry standards such as NIST and ISO rather than existing as a replacement. Its value comes from practical guidance on safe AI adoption, deep implementation experience and a clear view of how to meet regulatory expectations while advancing AI initiatives.
WWT also plays a meaningful role in this process. Because we maintain a clear separation of responsibilities, we can assist with risk assessments and compliance preparation without conflicts. Our existing service offerings align to the ARMOR domains, providing an informed, structured look at your environment and its associated risk.
Regulated industries often adopt new technologies cautiously. This pattern mirrors early cloud adoption, where speed and innovation were balanced against risk and the need for strong governance. The authors of ARMOR designed the framework with these lessons in mind, ensuring that organizations can pursue AI opportunities responsibly while maintaining accountability, including when third-party providers are involved.
How do I use ARMOR?
At the highest level, the AI journey consists of three phases: readiness, initiation and transformation. Most organizations get the fastest and most sustainable results by starting with a short AI readiness accelerator. This focused engagement delivers immediate value through shared knowledge of leading practices, stronger cross-functional alignment, identification of high-value use cases, a prioritized gap analysis and a clear roadmap. It is also the ideal time to introduce ARMOR, learn how its domains support the full AI lifecycle and begin building enterprise-wide support for secure AI adoption.
No matter where you start, operationalizing ARMOR follows eight clear and repeatable steps that guide your organization from initial awareness to institutionalized, continuously improving AI security:
1. Access the ARMOR dashboard: Your AI security command center
Your journey to holistic AI security begins with the ARMOR dashboard, a centralized hub that houses all ARMOR content organized into dedicated domain research pages. From the main page, you can explore each of the six core domains: governance, risk and compliance (GRC), model protection, infrastructure security, secure AI operations, secure development lifecycle (SDLC), and data protection, along with the through-line focus of cyber resilience. The dashboard is designed to be your single source for ongoing updates, guidance and new materials as they are released. It gives you an efficient way to stay current, revisit key concepts and identify early areas of interest worth examining in more depth as your AI program evolves.
2. Introduce, distribute and align on the framework
Introduce ARMOR to key stakeholders as early as possible. Bring together IT, security, data, research teams and any other functional groups that will build or rely on AI capabilities. The goal is to establish a shared understanding of ARMOR's domains and the capabilities required to support secure AI adoption.
Early exposure creates several advantages:
- Ensures teams understand the six ARMOR domains, the capabilities within each and example activities.
- Surfaces misunderstandings about how AI security diverges from traditional cybersecurity and where new considerations arise.
- Reveals process gaps, overlaps or conflicting work across teams.
- Gives each group time to begin evaluating and budgeting for the tools and services they may need.
Early alignment builds a stable foundation for the work ahead. It prevents confusion, reduces rework and ensures teams understand how their efforts connect rather than encountering the framework after decisions have already been made.
Once the initial introduction and alignment are complete, distribute the relevant ARMOR domain content to leaders and teams for deeper review. While anyone can review the full set of materials, dividing work across subject matter experts is usually the most efficient approach. For example, your data team can focus on data protection and your GRC team can review governance, risk and compliance.
3. Design governance and the operating organization
This step marks the shift from readiness to transformation as you begin mapping your environment to the ARMOR controls. Once teams share a common understanding of the framework, the next priority is designing the governance and organizational structures that will make ARMOR's insights operational. While governance can take different shapes across organizations, the focus here is on establishing sustained oversight for the framework and its related responsibilities.
A key step is to establish a cross-functional body that can make real-time decisions on risk acceptance, policy interpretation and architectural changes. Depending on your organization, this may include:
- A cross-functional AI security governance board, empowered to approve policies, manage risks and guide strategic decisions.
- An AI or AI/ML Center of Excellence (CoE) that coordinates capability building, shared services and institutional expertise.
- Working groups dedicated to technical operations, such as model evaluation, ML security engineering or data governance.
- Documented decision rights and escalation paths for each ARMOR domain, ensuring clear guidance when roles or responsibilities are uncertain within the shared responsibility model described next.
The goal is not simply to stand up governance for its own sake. It is to create the structures that support continuous operationalization, turning the framework from a static reference into a living system. Publish the governance charter early in the process. Momentum and shared clarity are far more valuable at this stage than striving for perfection.
4. Define the shared responsibility model
A clear shared responsibility model (SRM) is essential for reducing ambiguity and managing risk. Without it, ownership becomes unclear, handoffs falter and gaps in security emerge, so invest time to get this right. Clearly document responsibilities using a RACI (responsible, accountable, consulted, informed) matrix or similar tool. Pay close attention to integration points where multiple teams collaborate, since these are often the areas most vulnerable to misalignment. Be sure to account for any third-party or managed service providers, as they influence visibility and security expectations across the AI deployment.
Developing the SRM includes:
- Creating an ARMOR-aligned RACI matrix that identifies the teams or roles responsible, accountable, consulted and informed for each key capability.
- Determining which responsibilities belong to groups such as the AI CoE, IT security, infrastructure teams, research functions, partners or other contributors.
- Identifying gaps, overlaps and handoff points across the AI lifecycle.
This stage often surfaces "quick wins," such as clarifying incident response paths for AI behaviors, or defining baseline requirements for model registration. Consider including the shared-responsibility model in onboarding materials, vendor contracts, service-level agreements and architecture review templates to reinforce consistency and ensure teams understand how their roles fit into the broader security picture.
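One lightweight way to make the shared responsibility model actionable is to capture the RACI matrix as data rather than a slide, so it can be checked automatically. The sketch below illustrates this under assumptions of our own: the capability names, team names and RACI rules shown are hypothetical examples, not ARMOR content.

```python
# Sketch: an ARMOR-aligned RACI matrix expressed as data, with a basic
# consistency check. All capability and team names below are illustrative.

raci = {
    "model registration": {
        "R": ["AI CoE"], "A": ["CISO"], "C": ["IT security"], "I": ["Infrastructure"],
    },
    "AI incident response": {
        "R": ["IT security"], "A": ["CISO"], "C": ["AI CoE"], "I": ["Research"],
    },
    "training data governance": {
        "R": ["Data team"], "A": ["CDO"], "C": ["GRC"], "I": ["AI CoE"],
    },
}

def validate_raci(matrix):
    """Flag capabilities that break two common RACI rules:
    exactly one Accountable party, and at least one Responsible party."""
    issues = []
    for capability, roles in matrix.items():
        if len(roles.get("A", [])) != 1:
            issues.append(f"{capability}: needs exactly one Accountable party")
        if not roles.get("R"):
            issues.append(f"{capability}: no Responsible party assigned")
    return issues

print(validate_raci(raci))  # an empty list means no rule violations
```

A check like this can run whenever the matrix changes, catching the ownership gaps and double-accountability conflicts this step is meant to surface.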
5. Prioritize controls and identify quick wins
Not all security controls are equally urgent. Focus initial efforts on the "lethal trifecta" of AI security risks: unauthorized access to AI systems, uncontrolled distribution of AI capabilities, and exposure to untrusted content or data. These risks represent foundational vulnerabilities that can undermine even the most mature AI programs if left unaddressed.
- Map ARMOR's domains against these three critical risk areas to identify your highest-priority controls.
- Cross-reference each ARMOR domain against the lethal trifecta to ensure coverage of these fundamental risks before expanding to other controls.
- Apply zero trust principles across your AI environment.
Applying zero trust means continuously assessing AI system access, controlling how models and data can be moved or shared, and rigorously validating all inputs to AI systems. This mindset strengthens your early security posture and often reveals quick wins that can be implemented immediately to reduce exposure.
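The cross-referencing exercise above can also be recorded as a simple coverage map, so gaps against the trifecta are visible at a glance. The sketch below assumes a made-up mapping of domains to risks; the actual mappings would come from your own control inventory, not from this example.

```python
# Sketch: cross-referencing ARMOR domains against the "lethal trifecta"
# to surface coverage gaps. The domain-to-risk mappings are illustrative
# placeholders to be filled in from your own control inventory.

TRIFECTA = {
    "unauthorized access",
    "uncontrolled capability distribution",
    "untrusted content exposure",
}

# Which trifecta risks each domain's planned controls currently address.
coverage = {
    "governance, risk and compliance": {"uncontrolled capability distribution"},
    "model protection": {"unauthorized access", "untrusted content exposure"},
    "infrastructure security": {"unauthorized access"},
    "secure AI operations": set(),  # nothing mapped yet -- a gap to close
    "data protection": {"untrusted content exposure"},
    "secure development lifecycle": {"untrusted content exposure"},
}

def coverage_gaps(cov):
    """Return trifecta risks no domain covers, and domains with no mapping."""
    covered = set().union(*cov.values())
    return sorted(TRIFECTA - covered), sorted(d for d, r in cov.items() if not r)

unmet, unmapped = coverage_gaps(coverage)
print("risks with no coverage:", unmet)
print("domains with no mapped controls:", unmapped)
```

Re-running a map like this as controls are added keeps the trifecta review from becoming a one-time exercise.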
6. Identify solutions and build the implementation roadmap
With clear governance, responsibilities and control priorities, your organization can accelerate the evaluation and selection of supporting technologies and the development of the AI security roadmap.
This includes evaluating solutions for:
- Secure AI development environments
- Data protection and lineage
- Model monitoring and behavior analytics
- Red-teaming and evaluation tooling
- Isolation boundaries and inference-time controls
- Model registries, documentation tooling and supply chain integrity
- Policy enforcement and configuration automation
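To show how a few of these solution areas intersect, here is a minimal policy-as-code sketch: a deployment gate over model registry entries, enforcing the kind of baseline registration requirements discussed in step 4. The field names and the example entry are hypothetical; adapt them to your registry's actual schema.

```python
# Sketch: a policy-as-code deployment gate over model registry entries.
# Field names below are hypothetical, not a real registry schema.

REQUIRED_FIELDS = ("owner", "eval_report", "artifact_signature", "data_lineage")

def deployment_blockers(entry):
    """Return the baseline fields a registry entry is missing or left empty."""
    return [f for f in REQUIRED_FIELDS if not entry.get(f)]

entry = {
    "name": "support-chatbot-v3",
    "owner": "AI CoE",
    "eval_report": "reports/chatbot-v3-redteam.pdf",
    "artifact_signature": None,   # unsigned artifact -- should block deployment
    "data_lineage": "lineage/chatbot-v3.json",
}

missing = deployment_blockers(entry)
print("blocked" if missing else "approved", missing)
```

Wiring a gate like this into CI or an architecture review workflow is one way policy enforcement and configuration automation make the framework operational rather than documentary.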
As these components come together, ARMOR transitions from a conceptual framework into a functioning, embedded part of how your organization builds and operates AI. This shift ensures that security, responsibility and scalability remain integral to every stage of your AI program.
7. Leverage our technical and services teams
Engage our technical experts to create space for deeper, more personalized conversations about any domains you'd like to explore. This helps refine your takeaways from reviewing ARMOR and turns its recommendations into clear, actionable next steps.
WWT's services can help you interpret regulatory drivers, clarify business goals and understand your risk landscape. This support also helps you identify which ARMOR domains are most urgent for your organization's maturity and operating model.
Importantly, engaging services is not a departure from traditional security or infrastructure engagements. It actually simplifies the process. Once your teams have reviewed ARMOR and aligned on initial insights, you will likely feel more confident pursuing assessments that match your current needs. Many organizations start with a risk and governance assessment, which provides clarity on risk appetite and helps prioritize actions based on real organizational exposure.
8. Turn insights into action
ARMOR helps you identify risk areas across your organization and provides clear, actionable guidance for building resilience. With assessment results and a roadmap in hand, you can invest confidently in the controls, processes and technologies that matter most, and evaluating solutions for your ecosystem becomes a simple, structured exercise. Our team remains a partner throughout the journey, supporting you from planning and implementation through ongoing optimization.
Key takeaways
- Frameworks matter. They provide an operational architecture that aligns strategy, operations and investment.
- ARMOR offers practicality and operational depth, integrating the best of existing AI and cybersecurity standards while focusing on multi-functional realities.
- Alignment and governance must come first. Technical controls are only effective when supported by clear decision-making and accountability.
- A shared responsibility model is essential; without it, accountability and risk management break down.
- We remain a partner throughout the process. We stay engaged as your AI program evolves, helping you navigate decisions, validate assumptions and adapt to changing risk conditions.
- Leverage our ARMOR-aligned services for a comprehensive and efficient approach. Our services align to ARMOR's domains, making it easier to assess your environment, prioritize actions and implement controls with precision.
Organizations that successfully implement AI share several traits: they use frameworks like ARMOR to establish structure and alignment, they build clear governance and shared responsibility models to anchor accountability, and they balance quick wins with long-term transformation. By following these steps, you can build an AI security program that protects against today's challenges, prepares for tomorrow's threats and evolves alongside your organization's AI journey.
Ready to get started?
Whether you're beginning your AI security journey or looking to mature your program, ARMOR provides the structure, and our services provide the momentum. Access the ARMOR dashboard, connect with us for an ARMOR briefing, and take the next step toward secure, resilient AI.