What's new
Hands-On Lab Workshop: Protect AI Guardian
Join us for the Virtual Lab Workshop! During this exclusive session, the host will introduce the Protect AI Guardian Lab.
Protect AI Guardian is an ML model scanner and policy enforcer that ensures ML models meet an organization's security standards. It scans model code for malicious operators and vulnerabilities, while also checking against predefined policies. Guardian covers both first-party and third-party models. This comprehensive approach helps organizations manage ML model risks effectively.
In this Lab, you will walk through the Protect AI interface, explore its feature sets, and submit example models for scanning. This one-hour event is limited to 60 participants, who will have the opportunity to engage in Q&A during the session. Attendees are encouraged to participate actively by launching the lab alongside the host for a hands-on experience.
Webinar
• Oct 25, 2024 • 11am
AIPG: The AI Security Enclave
The AI Security Enclave in the AI Proving Ground (AIPG) provides a dedicated environment for AI security efforts, showcasing WWT's expertise and capabilities in testing innovative hardware and software security solutions.
Advanced Configuration Lab
AI Security: Practicing Good Model File Security
As organizations continue to incorporate AI solutions into their workflows and user toolsets, the requirements for securely creating, acquiring, and storing large AI model files have grown significantly. This blog post explores the risks posed by third-party AI model files and best practices to guard against them.
Blog
• Sep 3, 2024
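To make the risk concrete: many common model formats (e.g., pickle-based PyTorch checkpoints) can execute arbitrary code when loaded. The sketch below is an illustration of this general class of attack, not an example taken from the blog post or from Guardian's documentation; the class name is hypothetical.

```python
import os
import pickle

# Hedged illustration of why untrusted model files are dangerous:
# pickle-based formats run code during deserialization.
class MaliciousModel:
    def __reduce__(self):
        # On unpickling, this invokes an OS command instead of
        # restoring model weights.
        return (os.system, ("echo 'arbitrary code ran during model load'",))

# An attacker ships this as a "model file"...
payload = pickle.dumps(MaliciousModel())

# ...and simply loading it executes the embedded command:
# pickle.loads(payload)  # never do this with untrusted files
```

This is why model scanners inspect serialized model code for suspicious operators before the file is ever loaded, rather than relying on load-time checks.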
Protect AI Guardian Sandbox
Protect AI Guardian is an ML model scanner and policy enforcer that ensures ML models meet an organization's security standards. It scans model code for malicious operators and vulnerabilities, while also checking against predefined policies. Guardian covers both first-party (developed within the organization) and third-party models (from external repositories). This comprehensive approach helps organizations manage ML model risks effectively.
In this Lab, you will walk through the Protect AI interface, explore its feature sets, and submit example models for scanning.
Sandbox Lab
• 163 launches