Situation 

Artificial intelligence (AI) is reshaping every corner of higher education, from administrative efficiency to research acceleration and intelligent classroom tools. At Washington University in St. Louis (WashU), the possibilities — and risks — were growing by the day.  

Like many institutions, WashU was seeing rapid, decentralized growth in AI use across its clinical, research, administrative and teaching domains. Faculty, students and staff alike were exploring AI tools for applications ranging from more accurate, efficient note-taking for patient visits to specialized assistance with contract development.  

Innovation was happening fast, and governance was racing to keep up. The result was a fragmented view of how AI was being used and a concern that hastily implemented guardrails might have gaps or lack the desired level of maturity. Shadow AI — the use of unauthorized AI tools — could put the organization's private data at risk. The university also had to guard against biased, incorrect and even hallucinated AI outputs, and to ensure compliance with growing regulatory requirements or risk reputational and financial damage. 

University leadership began asking the right questions: What is our exposure to AI-related risk? How do we know which tools are in use and whether they're secure, compliant and ethical? 

The differing data, regulations and responsibilities across WashU's schools and departments further complicated the challenge. From medical data protected under the Health Insurance Portability and Accountability Act (HIPAA) to student records governed by the Family Educational Rights and Privacy Act (FERPA), every environment had unique needs and varying levels of readiness.  

Any centrally defined policy would need support from a diverse set of stakeholders, including governance committees overseeing IT within administration, teaching and learning, research, and clinical operations. 

WashU's Chief Information Security Officer (CISO) was tasked with drafting a university-wide AI risk management policy that could protect the institution while empowering its people to innovate. This wasn't about saying no to AI; it was about creating a proactive structure that allowed the university to say yes with confidence, compliance and safety. 

WashU sought a partner to evaluate its AI risk posture, help benchmark its AI governance maturity, and chart a path toward consistent, ethical and secure AI use across the university. 

Approach 

WWT partnered closely with WashU to deliver an AI risk assessment built around a desired level of maturity, not just compliance. 

The first step was to understand how AI was being used across the university. 

Working with the university's stakeholders, WWT identified approximately 20 active AI use cases across the institution. Eight representative cases — spanning clinical, research, administrative and academic domains — were selected for in-depth analysis. Some use cases spanned multiple domains, offering a clearer picture of where risks overlapped and where siloed or absent governance created blind spots. 

Each use case was evaluated through multiple lenses — auditor, attacker, risk officer and innovator. This multi-lens approach helped surface vulnerabilities, policy gaps and friction points across the AI lifecycle, from initial tool adoption to long-term ethical oversight. 

WWT also reviewed WashU's existing AI policies, intake processes and governance structures.  

Throughout the process, WWT emphasized stakeholder input and encountered many instances of AI excellence in which AI tools were being used thoughtfully and responsibly. A range of voices shaped the findings and ensured that recommendations were rooted in the reality of how AI was being used and governed on campus. 

Solution  

WWT experts baselined WashU's initial controls against the 72 requirements of the NIST AI Risk Management Framework. From these foundational controls, WWT created a custom maturity rubric and heat map that cross-mapped cleanly to multiple industry frameworks. This approach highlighted where the university was already strong, where it was exposed and what actions were needed to close the gaps.  

This board-ready AI maturity model visually conveyed the university's current state, future targets and actionable next steps in clear, non-technical language to support executive decision-making. 
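To make the rubric-and-heat-map idea concrete, here is a minimal sketch of how maturity scores might roll up into a per-function heat map. The control names, scores and five-level scale below are invented for illustration; they are not WWT's actual rubric, and the NIST AI RMF functions (GOVERN, MAP, MEASURE, MANAGE) are used only as grouping labels.

```python
# Illustrative maturity heat map: score hypothetical controls on a 0-4
# scale, then average the scores per NIST AI RMF function.

MATURITY_LEVELS = ["Absent", "Initial", "Defined", "Managed", "Optimized"]

# (control id, NIST AI RMF function, maturity score 0-4) -- invented data
controls = [
    ("GOV-1: AI policy exists",          "GOVERN",  3),
    ("GOV-2: Roles and accountability",  "GOVERN",  2),
    ("MAP-1: AI use-case inventory",     "MAP",     1),
    ("MEA-1: Bias and accuracy testing", "MEASURE", 2),
    ("MAN-1: Incident response for AI",  "MANAGE",  0),
]

def heat_map(controls):
    """Average maturity per function, labeled with the nearest level."""
    totals = {}
    for _, function, score in controls:
        totals.setdefault(function, []).append(score)
    result = {}
    for fn, scores in totals.items():
        avg = sum(scores) / len(scores)
        result[fn] = (avg, MATURITY_LEVELS[round(avg)])
    return result

for fn, (avg, label) in heat_map(controls).items():
    print(f"{fn:<8} {avg:.1f}  {label}")
```

A table like this, colored by score, is the kind of board-ready visual the model describes: one row per framework function, one glance to see where the gaps are.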

The solution included the following components:

  • Visibility into the university's AI assets and systems, curated into tiered lists based on overall awareness: official, sanctioned AI systems; AI instances that needed additional vetting; and recommended systems based on stakeholder feedback and needs.
  • Detailed AI risk analysis and assessment tailored to both internally developed and third-party AI tools. This framework includes control requirements, governance triggers and role-specific guidance.
  • A roadmap for improving the university's intake and review processes so that new AI use cases are identified, reviewed and approved, containing shadow AI growth and reducing tool sprawl.
  • A roadmap for building an AI ethics review committee to supplement existing governance bodies. This would create a structured way to evaluate use cases with a standardized rubric.
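The tiered inventory in the first bullet can be sketched as a simple triage rule. Everything below is hypothetical: the tool names, the vetting fields and the ordering of the checks are illustrative assumptions, not WashU's actual intake criteria.

```python
# Hypothetical triage: sort submitted AI tools into the three awareness
# tiers described above, using invented vetting criteria.

def triage(tool):
    """Return the inventory tier for a submitted AI tool record."""
    if tool["security_reviewed"] and tool["compliance_approved"]:
        return "sanctioned"       # official, sanctioned AI systems
    if tool["stakeholder_requested"]:
        return "recommended"      # recommended based on stakeholder needs
    return "needs_vetting"        # requires additional review

submissions = [
    {"name": "ClinicalNotesAI", "security_reviewed": True,
     "compliance_approved": True,  "stakeholder_requested": False},
    {"name": "ContractDraftBot", "security_reviewed": False,
     "compliance_approved": False, "stakeholder_requested": True},
    {"name": "UnknownPlugin",    "security_reviewed": False,
     "compliance_approved": False, "stakeholder_requested": False},
]

inventory = {tool["name"]: triage(tool) for tool in submissions}
print(inventory)
```

Even a rule this simple makes the intake process auditable: every tool lands in exactly one tier, and anything not yet reviewed defaults to the vetting queue rather than slipping into shadow use.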

Outcomes and benefits

Enterprise-wide view of AI maturity
WashU now has a more comprehensive understanding of how AI is being used across the organization. This visibility provides a strategic foundation for tracking progress, identifying gaps and aligning resources more effectively.

Governance with minimal friction
Through consistent intake and review processes, teams will be able to propose and evaluate AI tools confidently, knowing there's a structure in place to support responsible experimentation. The result is a more scalable, transparent approach to managing AI innovation.

Momentum through measurable accountability
Based on the findings from this engagement, the AI risk management policy will serve as a roadmap for tracking progress toward future improvement goals and creating accountability and forward momentum. This proactive approach enables continuous monitoring and mitigation of AI-related risks rather than addressing issues after the fact.

Operational insights that reduce risk and waste
The engagement revealed underutilized infrastructure that can be optimized across departments, potentially reducing shadow AI, improving resource planning and enhancing return on existing technology investments. Because the controls are tailored to WashU's specific questions and responses, they surface valuable insights across people, processes and technologies. 

How we did it

With 35 years of experience helping the world's largest companies and government entities, we've learned that digital transformation and IT modernization thrive in the space between strategy and execution, business and technology, and the physical and digital.

Our deep domain expertise cuts across business and technology. Our ability to extensively test solutions and deploy them at scale allows us to both advise and execute to create new realities for our customers. 

Here's how we did it with WashU:

We surveyed functional areas to surface what matters
We engaged stakeholders across the university to ensure all voices were heard. By leading with empathy and curiosity, we built trust between teams, uncovered unmet needs and surfaced critical gaps that might have otherwise gone unnoticed.

We turned complexity into clarity
We translated technical and operational risks into business-friendly insights that resonated with both IT leaders and executive decision-makers. This helped drive alignment, accelerate buy-in and turn emerging AI challenges into clear, actionable strategies.

How can we help you?

If your organization is exploring or accelerating AI adoption, we can help you:

  • Assess your current AI governance posture and benchmark against industry frameworks.
  • Translate AI risk into actionable strategies for both technical and executive stakeholders.
  • Design a roadmap for trustworthy, responsible and secure AI deployment.
  • Empower stakeholders to use AI while also protecting sensitive data, critical systems and your reputation.

Let's talk about how to lead proactively in the era of AI.