WWT, NVIDIA Introduce Framework for Secure, Scalable, Responsible AI Adoption
Developed with input from early adopters such as the Texas A&M University System, ARMOR integrates with NVIDIA AI Enterprise technologies to support secure, scalable enterprise AI deployments.
Technology services provider World Wide Technology and NVIDIA have jointly developed an AI security framework dubbed AI Readiness Model for Operational Resilience (ARMOR), designed to help organizations accelerate AI adoption while maintaining security, compliance, and operational resilience.
Structure of the Framework
The vendor-agnostic ARMOR framework provides "actionable, holistic guidance that embeds security across the full AI lifecycle from chip to deployment, whether cloud or on-premises," according to a news announcement. It's broken down into six domains to address the various aspects of security around AI. Those areas, as described by WWT, are:
Governance, Risk, and Compliance (GRC): Ensures AI operations align with regulatory requirements, organizational policies, and ethical standards, managing risks across on-premises and cloud environments.
Model Security: Protects AI models from threats such as poisoning, model inversion, and theft, ensuring integrity and reliability throughout their lifecycle.
Infrastructure Security: Secures the hardware and network foundation, including GPUs, DPUs, and cloud regions, to prevent unauthorized access or tampering.
Secure AI Operations: Enables real-time monitoring and rapid response to threats, ensuring secure operation of AI platforms in interconnected systems.
Secure Development Lifecycle (SDLC): Embeds security into the development of AI software and services, mitigating vulnerabilities like prompt injection from design to deployment.
Data Protection: Safeguards datasets, whether stored in locally connected storage or in a cloud data lake, ensuring confidentiality, integrity, and regulatory compliance without stifling innovation.
Developed with Higher Education Input
"ARMOR gives us a common language and structured approach for managing AI risk," commented Adam Mikeal, chief information security officer at Texas A&M University. "It's a practical solution for real-world AI security."