Accelerating AI Adoption in Federal Agencies: Strategies for Success
This guide provides federal leaders, program managers, and technical teams with a practical starting point for adopting and operationalizing AI, including clear steps to accelerate progress.
Turning AI potential into AI progress
AI has immense potential to transform the U.S. federal government. The White House's release of more than 1,700 AI use cases at the end of last year demonstrates just how much the technology stands to impact agency operations and outcomes.
While the government often gets pegged as a technology laggard, AI is one area where it's been breaking ground for years. For example, the Department of Energy was among the first to use high-performance computing to train AI models, a practice now standard in the field. Similarly, NASA was an early pioneer in integrating AI and machine learning into digital twin technology, inspiring the use of advanced virtual models across other industries.
Now, AI initiatives are gaining momentum across a broader range of Department of Defense (DoD) and federal civilian agencies. Successful use cases are emerging in areas such as intelligence analysis, threat detection, predictive maintenance and human resource management.
Still, most agencies have yet to move beyond pockets of experimentation. AI use cases often remain siloed within research teams or innovation labs, rarely extending into daily workflows and decision-making.
As stakeholders try to advance AI initiatives, they come up against challenges related to data quality, cybersecurity and access to infrastructure capable of supporting AI workloads. Aligning AI with mission objectives, ethics and compliance adds further complexity. Common concerns include algorithmic bias, transparency in model decisioning and adherence to federal mandates such as the AI Bill of Rights and OMB AI guidance.
Our Federal Guide to AI is designed to help agencies overcome these challenges by providing a practical path to operationalizing AI — from establishing a foundation for solution development to scaling adoption across agency functions.
By applying best practices and real-world insights from our work with government and private sector clients, federal leaders can turn AI potential into AI progress.
Pursuing the right AI use cases
Agencies can pursue an abundance of AI use cases, making it easy for leaders to simply pick ones with low barriers to entry. However, agencies are better served by a more strategic approach — one that incorporates commercial best practices related to maximizing efficiency and delivering a return on investment.
This starts with mission alignment. Efforts grounded in mission outcomes garner significantly more buy-in from both internal stakeholders and the public than those focused solely on growing technical capabilities. They also help agencies measure success and make the most of available resources.
Watch this 25-minute episode of WWT Experts, "Charting the AI Journey: Insights for Federal Civilian Agencies." WWT Federal Civilian Area Vice President Kevin Pearson and Senior Director of WWT's AI Practice Mike Trojecki discuss how to identify and prioritize high-impact AI use cases.
Criteria for use case prioritization
The following criteria can help leaders prioritize use cases that create value for their agencies:
- Mission impact: How directly does the use case support the agency's core mission?
- Feasibility: Is the necessary data available in terms of quality and quantity?
- Return on investment: What are the potential cost savings or efficiency gains from implementing the use case?
- Scalability: Can the solution be scaled to address similar challenges across departments and other agencies?
- Citizen impact: Will the AI solution improve services or experiences for citizens?
- Risk mitigation: Does the use case help manage or reduce organizational risks?
Examples of successful AI implementations
With these criteria in mind, stakeholders can begin to identify high-impact, practical AI opportunities. A good starting point is looking at common AI use cases that have proven successful across agencies.
- Predictive maintenance: AI-driven systems analyze sensor data to predict equipment failures, reducing downtime and maintenance costs.
- Cybersecurity and threat detection: AI algorithms process vast amounts of data to identify potential security threats more quickly and accurately than human analysts.
- Intelligence analysis: AI-powered tools assist in processing and analyzing large volumes of intelligence data, enhancing decision-making capabilities, especially when qualified human analysts are scarce.
- Autonomous systems: AI is being integrated into unmanned platforms for quicker data analysis and decision-making in tactical environments.
- Citizen and service member engagement: AI-driven chatbots and virtual assistants improve customer service experiences across various agencies, reducing wait times and enhancing user satisfaction.
- Fraud detection: AI algorithms analyze patterns in financial transactions to identify potential fraud, saving taxpayer money and improving government efficiency.
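To make the last example more concrete, the following is a minimal sketch of pattern-based fraud screening, assuming scikit-learn is available; the transaction features, thresholds and data are invented for illustration and are not drawn from any agency system.

```python
# Minimal illustration of pattern-based fraud screening on synthetic data.
# Feature names and thresholds are hypothetical, not an agency reference design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per transaction: amount (USD), hour of day, payments in the last 24 hours.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),   # typical amounts
    rng.integers(8, 18, size=1000),                   # business hours
    rng.poisson(2, size=1000),                        # typical daily volume
])
suspicious = np.array([[25000.0, 3, 40]])             # large, off-hours, high volume

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Predictions of -1 indicate likely anomalies; these would go to analysts for review.
sample = np.vstack([normal[:3], suspicious])
for txn, label in zip(sample, model.predict(sample)):
    status = "flag for review" if label == -1 else "ok"
    print(f"amount=${txn[0]:>10.2f} hour={int(txn[1]):>2} count={int(txn[2]):>3} -> {status}")
```

In practice, flagged transactions would be routed to human analysts rather than acted on automatically.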
However, leaders can't take these AI use cases off the shelf. They must invest in aligning them with their agencies' unique missions, data maturity and operational capabilities. The following examples highlight agencies that have done a particularly good job of this alignment.
- National Oceanic and Atmospheric Administration (NOAA): Using advanced AI systems and machine learning algorithms, NOAA tracks hurricanes in real-time, improving public safety and emergency preparedness by providing more accurate predictions.
- Transportation Security Administration (TSA): By leveraging AI-powered facial recognition and Global Entry systems, TSA enhances identity verification. It also uses AI to analyze passenger flow and shorten wait times.
- U.S. Department of Agriculture (USDA): The USDA uses AI and machine learning to analyze satellite imagery and other data sources for crop monitoring and yield prediction. This helps forecast agricultural output, manage supply chains and provide early warnings about potential food shortages.
- Department of Energy (DOE): The DOE employs AI to optimize the national power grid. AI algorithms analyze vast amounts of data from sensors and grid infrastructure to predict energy demand, detect anomalies and optimize energy distribution.
Given the rapid pace of AI advancements and the evolving nature of agency goals, leaders should regularly reassess AI use cases to make sure they align with shifting priorities, available resources and emerging technologies.
Building an AI foundation
Developing AI solutions involves unique challenges compared to traditional IT solutions, making it crucial for agencies to lay a strong foundation for AI. While there are many aspects to establishing such a foundation, we suggest leaders focus on three areas: data maturity, infrastructure considerations and establishing an AI Center of Excellence.
Data maturity
To prepare for AI initiatives, agencies should create a foundation of high-quality, standardized data that's secure and accessible. Focusing on data sets identified as essential for specific AI use cases can make this process more manageable and efficient. It also allows agencies to advance their AI projects without waiting for the entire organization to reach full data maturity.
First, agencies should classify data assets based on sensitivity, relevance and potential use in AI applications. They'll also want to remove errors, duplicates and inconsistencies to prevent AI models from producing inaccurate or biased results.
Once agencies have a better sense of their data footprint, they can work on standardizing inter-agency data sharing agreements and metadata management.
As standardization matures, leaders can begin to integrate data from various systems and silos. This may involve creating a centralized data platform or data lake to facilitate easier access and analysis by AI systems.
Agencies can leverage APIs and microservices for more flexible and scalable data integration, as well as cloud-based platforms that provide standardized data storage and processing capabilities. To protect data while still making it accessible, agencies should explore solutions related to data encryption, access control and anonymization.
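As one illustration of the anonymization point, the sketch below replaces a direct identifier with a salted, keyed hash before a record enters an AI pipeline, so downstream systems can still join on a stable pseudonym without handling raw PII. It uses only the Python standard library; the field names and salt handling are assumptions, and real deployments would follow agency privacy policy and approved key management.

```python
# Minimal pseudonymization sketch using the standard library only.
# Field names and salt handling are illustrative; real systems should follow
# agency privacy policy and keep secrets in an approved secrets manager.
import hashlib
import hmac

SECRET_SALT = b"replace-with-managed-secret"  # hypothetical; never hard-code in practice

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible token for a direct identifier."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"ssn": "123-45-6789", "name": "Jane Doe", "claim_amount": 1250.00}

# Strip or tokenize direct identifiers before the record reaches model training.
safe_record = {
    "person_token": pseudonymize(record["ssn"]),  # stable join key, no raw SSN
    "claim_amount": record["claim_amount"],
}
print(safe_record)
```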
It's important that agencies don't overlook people, policy and process as they mature their data. A strong data governance framework with defined roles, stewardship policies and lifecycle processes is essential.
Lastly, remember that data maturity is an ongoing effort that needs continuous monitoring and improvement. Regular data audits and automated data quality checks help maintain data reliability and relevance for AI applications.
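To show what automated quality checks can look like in practice, here is a minimal sketch assuming tabular data in pandas; the column names, thresholds and rules are placeholders an agency would replace with checks tied to its own data standards.

```python
# Minimal automated data quality check, assuming pandas and a tabular dataset.
# Column names and thresholds are hypothetical placeholders.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return a simple report of common data quality issues."""
    report = {
        "row_count": len(df),
        "duplicate_ids": int(df["record_id"].duplicated().sum()),
        "null_rate_by_column": df.isna().mean().round(3).to_dict(),
        "out_of_range_dates": int((df["event_date"] > pd.Timestamp.today()).sum()),
    }
    report["passed"] = (
        report["duplicate_ids"] == 0
        and max(report["null_rate_by_column"].values()) < 0.05
        and report["out_of_range_dates"] == 0
    )
    return report

df = pd.DataFrame({
    "record_id": [1, 2, 2, 4],
    "event_date": pd.to_datetime(["2024-01-03", "2024-02-10", "2024-02-10", "2099-01-01"]),
    "value": [10.5, None, 7.2, 3.3],
})
print(run_quality_checks(df))  # flags the duplicate ID, the null value and the future date
```

Checks like these can run on a schedule, with failures blocking downstream training jobs until the data is remediated.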
Infrastructure considerations
As government agencies prepare to adopt AI, they must carefully evaluate and upgrade their IT infrastructure to support the demanding computational and energy requirements of AI workloads.
One of the primary considerations is the need for high-performance computing (HPC) capabilities. AI models, particularly those involving deep learning and large language models, require significant processing power.
Agencies should invest in GPU-accelerated systems or specialized AI hardware to handle these intensive workloads efficiently. As AI initiatives grow, agencies should implement scale-out architectures, which may involve adopting modular data center designs or leveraging cloud resources to accommodate fluctuating demands.
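As a small illustration of how AI software adapts to whatever accelerated hardware is available, the sketch below uses PyTorch to detect GPUs and fall back to CPU; it is a generic pattern rather than a recommendation of any specific hardware or platform.

```python
# Minimal device-detection sketch, assuming PyTorch is installed.
import torch

def select_device() -> torch.device:
    """Prefer GPU acceleration when present; fall back to CPU otherwise."""
    if torch.cuda.is_available():
        print(f"Found {torch.cuda.device_count()} GPU(s): {torch.cuda.get_device_name(0)}")
        return torch.device("cuda")
    print("No GPU detected; running on CPU (expect slower training).")
    return torch.device("cpu")

device = select_device()
model = torch.nn.Linear(128, 10).to(device)   # placeholder model moved to the chosen device
batch = torch.randn(32, 128, device=device)   # example batch allocated on the same device
print(model(batch).shape)                     # torch.Size([32, 10])
```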
Because AI workloads often involve moving large datasets, agencies will want to leverage high-bandwidth, low-latency networking. This can take the form of InfiniBand or high-speed Ethernet (100 Gbps or higher) for efficient data transfer.
For example, low-latency networks have been key to TSA's implementation of AI-powered facial recognition for Global Entry and security screening, which requires processing real-time video feeds from multiple airport locations simultaneously.
Storage infrastructure is another critical component. AI systems require fast, high-capacity storage to handle large datasets and model outputs. Agencies should consider implementing tiered storage solutions, combining high-performance flash storage for active datasets with more cost-effective options for archival data.
These investments in IT infrastructure only work when facilities infrastructure is considered as well. The increased computational density of AI systems often exceeds the capabilities of traditional data center designs. Agencies must consider advanced cooling solutions, such as liquid cooling, to manage the heat generated by AI hardware.
For example, the Department of Energy (DOE) implemented micro-grid testbeds using GPU clusters and advanced cooling solutions to simulate and optimize power distribution using AI models.
Looking forward, agencies must consider integrating edge computing capabilities into their infrastructure. As AI applications expand to include real-time processing of sensor data or autonomous systems, the ability to process data at the edge becomes increasingly essential.
Establishing an AI Center of Excellence
We recommend that agencies establish an AI Center of Excellence (CoE) to capture, evaluate and harness the potential of AI. This internal body will oversee all AI initiatives, including use case evaluation and prioritization, making sure mission outcomes related to AI are achieved in a responsible, cost-effective, trustworthy and secure manner. The CoE also plays an instrumental role in identifying current and future AI talent gaps.
Because AI touches so many aspects of agency business functions, infrastructure and day-to-day operations, you'll want to cast a wide net when staffing your CoE with subject matter experts.
For example, in addition to experts from areas like policy, legal, compliance, acquisition, budgeting and intelligence, you'll want to engage technical experts across key domains. These include cybersecurity, data science, data engineering, machine learning engineering, research science, data analytics, AI lifecycle management, software engineering, automation and orchestration, networking, storage, cloud, facilities infrastructure, and more.
Recruiting for these roles can be challenging given the federal hiring process and security clearance requirements. Agencies may need to explore interagency talent exchanges or leverage existing federal talent initiatives to fill gaps.
Once staffed, your CoE can begin to build momentum for AI initiatives by:
- Creating a regular forum where the members meet with representatives from various agency departments to brainstorm and discuss potential AI applications.
- Collaborating with agency leadership to ensure AI use cases and initiatives support overarching mission goals and priorities.
- Partnering with agency departments to create small-scale AI prototypes that demonstrate value and potential for scaling.
- Studying AI implementations in other federal agencies to identify transferable use cases and best practices.
- Creating clear guidelines for AI development, deployment and ethical considerations.
- Developing internal AI talent through continuous learning programs and hands-on experience with AI projects.
- Evaluating the success of pilots to determine if projects should be scaled or retired.
Accelerating AI adoption
Prototyping and development strategies
Agencies should adopt an agile mindset when prototyping and developing AI solutions. By prioritizing rapid prototyping and iterative development, they can keep solutions moving forward as government priorities shift, legislative mandates roll out and mission requirements evolve.
We suggest agencies start small and focus on exploring a few high-impact use cases within open-source prototype environments. In fact, many of our federal clients are surprised to learn just how much prototyping can be done in basic environments that require minimal investment.
For example, using a GPU-powered laptop and open-source software, we helped one of our DoD clients create a fully functional demo of an AI chatbot that could be queried about maritime activities.
While the demo only supported a handful of users and used publicly available information, command leadership saw how users across agencies could leverage the chatbot and how the solution could be matured through the inclusion of classified data sources.
As the DoD example shows, starting small allows agencies to refine their use cases and build momentum before investing in larger, more complex systems.
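The maritime chatbot above was a client engagement, so the sketch below is not that system. It is only a hypothetical illustration of how little is needed to prototype the retrieval step of such an assistant on a single workstation, using open-source scikit-learn and a few made-up, unclassified example documents.

```python
# Hypothetical prototype of the retrieval step behind a question-answering assistant.
# Uses only open-source scikit-learn; the documents and question are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Cargo vessel arrivals at the port increased 12 percent in March.",
    "Fishing activity near the northern boundary was within seasonal norms.",
    "Two tankers reported delays due to severe weather in the strait.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Return the documents most similar to the question (the retrieval step in RAG)."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]

# In a fuller prototype, the retrieved passages would be handed to a locally hosted
# language model to draft an answer; here we only show the retrieval result.
print(retrieve("Which ships were delayed by weather?"))
```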
In some ways, the development of AI solutions mimics the development of traditional IT solutions. For example, AI solutions should be treated as products with continuous integration and delivery (CI/CD), frequent usability testing and clearly defined product manager roles. However, as leaders develop AI solutions, they should be aware of some key differences:
- Development lifecycle: The AI development lifecycle involves continuous learning and adaptation. AI models need to be regularly updated and fine-tuned based on new data and changing conditions (see the retraining sketch after this list). By contrast, traditional IT solutions often follow a more linear development process with defined start and end points.
- Skill sets and collaboration: Developing AI solutions requires a range of skill sets, including data science, machine learning and AI-specific engineering expertise. Traditional IT development may not require such a broad range of specialized skills and cross-functional collaboration.
- Integration and deployment: AI solutions may have more complex integration paths than traditional IT solutions. For example, AI deployments require careful consideration of deployment models, such as cloud, on-premises or edge computing, to meet specific performance and scalability needs. These integration and deployment factors should be considered early in the development process.
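To illustrate the lifecycle difference, the sketch below shows one common continuous-learning pattern: track a deployed model's accuracy on recently labeled data and flag it for retraining when performance drops below an agreed floor. The metric, threshold and synthetic data are assumptions; agencies would set these against their own mission requirements.

```python
# Minimal monitor-and-retrain sketch using scikit-learn; the threshold is an assumption.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.85  # hypothetical service-level target agreed with mission owners

# Train an initial model on "historical" data.
X_hist, y_hist = make_classification(n_samples=2000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

def needs_retraining(model, X_recent, y_recent) -> bool:
    """Compare live performance on newly labeled data against the agreed floor."""
    score = accuracy_score(y_recent, model.predict(X_recent))
    print(f"accuracy on recent data: {score:.3f}")
    return score < ACCURACY_FLOOR

# Simulate newer data whose distribution has shifted since the model was trained.
X_new, y_new = make_classification(n_samples=500, n_features=10, shift=1.5, random_state=1)
if needs_retraining(model, X_new, y_new):
    model = LogisticRegression(max_iter=1000).fit(X_new, y_new)  # retrain on fresh data
    print("model retrained on recent data")
```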
Watch this 25-minute episode of WWT Experts, "Practical Strategies for Accelerating AI Adoption in the DoD." WWT DoD Systems Engineering Manager Scott Paparone and DoD Principal Solutions Architect Tim Robinson discuss how to develop and scale AI solutions.
Scaling AI solutions from experimentation to production
Scaling AI beyond prototyping presents significant hurdles — compute scarcity, talent gaps, cross-agency governance and unclear ROI being chief among them. However, by focusing on key infrastructure, tools, processes and governance, agencies can move from experimentation to production within and across agencies.
We suggest leaders:
- Standardize infrastructure: Agencies should provide a consistent environment for training, testing and deploying AI models. For example, the National Institutes of Health (NIH) provides researchers with standardized cloud platforms that can be used for large-scale AI model training on health data. These platforms give agencies the computing capacity needed to scale models.
- Centralize AI development: Agencies can centralize AI development by using government-wide cloud contracts or shared high-performance computing (HPC) resources. For example, multiple military branches can deploy AI workloads on shared GPU infrastructure via the DoD's Joint Warfighting Cloud Capability (JWCC).
- Streamline development pipelines: Agencies should work toward establishing a true DevOps cycle for AI applications to speed up model iteration and deployment. This includes incorporating necessary security controls and user testing throughout the development process (a sketch of a simple pre-deployment quality gate follows this list).
- Collaborate across agencies: By sharing resources and models between different agencies, leaders can leverage each other's capabilities and avoid duplication. For example, the General Services Administration (GSA) is working toward offering an AI tool as a shared service to other agencies.
- Invest in compliance and security: By integrating security and compliance measures into AI infrastructure, agencies can more easily confirm that their AI solutions meet the federal government's strict regulatory requirements.
- Continuously learn and improve: AI models need ongoing refinement to continuously improve solutions based on new data and feedback.
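As a companion to the pipeline recommendation above, the sketch below shows a hypothetical quality gate a CI job might run before promoting a model to production: the candidate must clear a minimum accuracy bar and keep the performance gap between groups within a set limit. The metrics, thresholds and group definitions are illustrative assumptions, not federal requirements.

```python
# Hypothetical pre-deployment gate a CI pipeline could run before promoting a model.
# Thresholds and the notion of "group" are placeholders for agency-defined policy.
from dataclasses import dataclass

@dataclass
class EvaluationResult:
    overall_accuracy: float
    accuracy_by_group: dict[str, float]   # e.g., performance per region or cohort

def promotion_gate(result: EvaluationResult,
                   min_accuracy: float = 0.90,
                   max_group_gap: float = 0.05) -> bool:
    """Return True only if the candidate model meets accuracy and parity checks."""
    gap = max(result.accuracy_by_group.values()) - min(result.accuracy_by_group.values())
    checks = {
        "accuracy": result.overall_accuracy >= min_accuracy,
        "group_gap": gap <= max_group_gap,
    }
    for name, passed in checks.items():
        print(f"{name}: {'pass' if passed else 'fail'}")
    return all(checks.values())

candidate = EvaluationResult(overall_accuracy=0.93,
                             accuracy_by_group={"region_a": 0.94, "region_b": 0.88})
print("promote" if promotion_gate(candidate) else "hold for review")
```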
Overcoming implementation challenges
As agency leaders move fast to adopt AI, they can be caught off guard by implementation challenges. Leaders should be prepared to overcome challenges related to data, compliance and integration.
Maintaining data quality
As noted earlier, data maturity is foundational to AI success. Before beginning any AI initiative, agencies should assess the quality of their data and create a roadmap for improvement. This helps align data quality efforts with AI initiatives, creating buy-in for maintaining data quality.
As agencies improve data quality, they will likely find that manual efforts related to data cleansing and validation can slow progress. Before agencies get too far down the road of maturing their data, they should consider adopting automated data cleaning pipelines to make sure data is consistent and accurate before it's fed into AI models.
While quality data fuels quality AI, agencies shouldn't let gaps in data quality stop them from pursuing use case development. They should simply start with well-understood, high-quality, manageable data sets. Then, they can expand to larger, more complex data sources.
For example, NOAA is using AI to better predict the path of hurricanes. This system likely started by leveraging basic meteorological data before incorporating complex atmospheric and oceanic data sources.
Navigating security and compliance
AI requires robust frameworks to safeguard personal information and maintain public trust. This includes setting up proper security controls in prototype environments and ensuring that full-scale deployments meet all necessary compliance standards.
Agency leaders should involve cybersecurity teams early to make sure data is protected and compliant throughout its lifecycle. For example, teams can help establish a data classification system based on sensitivity levels to help apply appropriate security measures, especially when dealing with personally identifiable information (PII) or classified data.
Security teams can also use zero-trust architecture principles to continuously authenticate user access from the time data is collected to the time it's processed in AI systems. Additionally, cybersecurity teams can prevent unauthorized access by creating secure, isolated environments for model training.
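The sketch below illustrates one way a sensitivity-based classification scheme might be expressed in code: each record carries a label, and a simple policy check decides which records a given pipeline may ingest. The labels, clearances and rule are hypothetical; actual handling of PII or classified data must follow the agency's own security authorities.

```python
# Hypothetical sensitivity labeling and policy check for an AI data pipeline.
# Labels, clearances and the allow-rule are illustrative, not an agency standard.
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 1
    INTERNAL = 2
    PII = 3
    CLASSIFIED = 4

def allowed_for_training(record_level: Sensitivity,
                         pipeline_clearance: Sensitivity) -> bool:
    """A pipeline may only ingest records at or below its authorized level."""
    return record_level <= pipeline_clearance

records = [
    {"id": 1, "level": Sensitivity.PUBLIC},
    {"id": 2, "level": Sensitivity.PII},
    {"id": 3, "level": Sensitivity.CLASSIFIED},
]

# A prototype environment cleared only for internal data filters out PII and classified records.
training_set = [r for r in records if allowed_for_training(r["level"], Sensitivity.INTERNAL)]
print([r["id"] for r in training_set])  # -> [1]
```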
AI technology is fast moving, and so are the regulations that govern its use. One of the best things agencies can do regarding compliance is prepare flexible AI strategies that can adapt to an evolving regulatory landscape.
To navigate security and compliance challenges effectively, agencies should:
- Conduct regular compliance audits of AI systems and data usage.
- Establish clear data governance policies aligned with AI initiatives.
- Engage with regulatory bodies to stay informed of upcoming changes.
- Implement robust security measures throughout the AI development lifecycle.
- Foster a culture of compliance awareness among AI development teams.
By prioritizing security and compliance, agencies can build trust in their AI initiatives and responsibly deploy AI in service of their missions.
Promoting collaboration and knowledge sharing
One of the biggest challenges to AI implementation is the siloed nature of agency systems and processes. It's important that agencies treat AI as a team sport with leaders collaborating and sharing knowledge.
Standardizing documentation is a great first step to promote collaboration and knowledge sharing. For example, an inter-agency repository of AI model specifications can help agencies quickly identify models that best suit their needs. Standard formats like model cards — a documentation framework proposed by leading AI research groups — can support consistency and transparency across agencies. These standardized documents can include the following (a minimal sketch of one such record appears after this list):
- Data requirements and preprocessing steps
- Performance metrics and limitations
- Hardware and infrastructure requirements
- Potential biases and mitigation strategies
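As a minimal sketch, assuming an agency standardizes on a simple machine-readable format, a model card entry for a shared repository might look like the following; the field names mirror the list above and every value is a made-up example.

```python
# Hypothetical model card record for an inter-agency repository.
# Field names follow the list above; all values are illustrative examples.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    intended_use: str
    data_requirements: list[str]
    performance_metrics: dict[str, float]
    limitations: list[str]
    hardware_requirements: str
    known_biases_and_mitigations: list[str] = field(default_factory=list)

card = ModelCard(
    name="permit-backlog-forecaster-v0.3",
    intended_use="Forecast weekly permit application volume for staffing decisions.",
    data_requirements=["5 years of de-identified application records", "weekly aggregation"],
    performance_metrics={"mae_weekly_volume": 41.2, "r2": 0.87},
    limitations=["Not validated for offices with fewer than 500 applications per year."],
    hardware_requirements="Single CPU node; no GPU required for inference.",
    known_biases_and_mitigations=["Under-predicts surges after policy changes; add event flags."],
)

print(json.dumps(asdict(card), indent=2))  # ready to publish to a shared repository
```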
Inevitably, mistakes will be made as agencies experiment with AI. However, by sharing lessons learned, leaders can destigmatize failure and build institutional knowledge. As agencies move through AI initiatives, leaders should:
- Document specific technical challenges encountered.
- Identify organizational or process barriers that impede success.
- Outline attempted solutions and their outcomes.
- Provide recommendations for future attempts.
To foster inter-agency collaboration, leaders should establish AI working groups by domain (e.g., customer service AI, predictive maintenance AI, fraud detection AI). This approach allows for more targeted knowledge sharing. Members should include a mix of technical and mission-focused personnel, meet regularly and maintain a repository of use cases, best practices and guidelines.
Collaboration within agency departments is just as important as collaboration between agencies. Feedback loops are critical to fostering internal collaboration and knowledge sharing across functional teams.
Feedback loops should include a user-to-developer feedback mechanism as well as regular communication from product owners to executive leadership to assess resource allocation and investment priorities.
Conclusion
The adoption of AI within federal agencies presents immense opportunities. When implemented correctly, AI can lead to new levels of operational efficiency, national security and citizen satisfaction.
However, an abundance of AI use cases can make it hard for agency leaders to turn AI potential into AI progress.
Leaders should start by letting mission goals guide their AI efforts. This will help them make smart decisions related to data management, infrastructure investments and compliance requirements.
Establishing an AI Center of Excellence and adopting best practices for solution prototyping and development will help embed AI across agency functions. Collaboration and knowledge sharing are also key, especially as the pace of AI breakthroughs accelerates.
By embracing a structured, mission-aligned approach, agencies can bridge the gap between AI's potential and its practical impact.
Agencies that can strike a balance between innovation and mission, infrastructure and operations, and speed and security will be the ones to turn AI potential into AI progress.
This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research.
This report is compiled from surveys WWT Research conducts with clients and internal experts; conversations and engagements with current and prospective clients, partners and original equipment manufacturers (OEMs); and knowledge acquired through lab work in the Advanced Technology Center and real-world client project experience. WWT provides this report "AS-IS" and disclaims all warranties as to the accuracy, completeness or adequacy of the information.