The Reality of AI in State Government and Why AI Centers of Excellence Matter
In the fast-moving world of AI, states that invest in a center of excellence will be the ones that turn experimentation into progress.
AI isn't waiting for state government
When it comes to AI in the public sector, state IT leaders find themselves caught in a balancing act. On the one hand, the pressure is on to innovate. Governors' offices and state legislatures are eager to point to any adoption of AI as a sign that their state is on the cutting edge of technology.
On the other, the same stakeholders expect IT leaders to ensure safe experimentation, protect sensitive data and comply with a growing body of AI-related legislation.
As they work to balance preparation with innovation, leaders are asking questions like:
- Which AI use cases are safe to pursue now?
- How do we protect sensitive data while fostering experimentation?
- Who is responsible for setting and enforcing AI standards?
Agencies, however, are not waiting for answers.
Large organizations with big technology budgets and federal funding are pushing forward with AI pilots. For example, health agencies are experimenting with AI in the areas of eligibility determination, case management and document processing. Transportation agencies are exploring how AI can help with predictive maintenance and fleet management. In many cases, this work is occurring outside central IT.
At an individual level, experimentation is happening faster than agencies can keep up with. Staff are spinning up small generative assistants to summarize documents, draft communications and answer internal questions — and unknowingly exposing sensitive data in the process.
Then there's the opposite end of the spectrum, where leaders are hesitant to move at all. They hear about states launching "AI labs" and "innovation hubs" and assume they're too far behind. In reality, these efforts often reflect savvy public relations more than actual capability.
For state IT leaders navigating AI in the public sector, there's no shortage of nuance and contradiction. That's why AI centers of excellence (CoEs) matter. Not as top-down mandates, but as a practical way to bring visibility, coordination and control to a fast-growing AI footprint.
An AI CoE is not a policy shop or an innovation lab. It serves as a coordination layer that helps states see where AI is already in use, surface risk early and remove friction that slows responsible experimentation.
Effective CoEs focus on shared workflows and repeatable decisions. By bringing security, data, privacy, legal and operational leaders into the same conversation early, they shorten review cycles and allow teams to move forward quickly, clearly and safely.
Why shadow AI is the first challenge every state must address
The most immediate AI risk facing states is not the possibility of future misuse. It's the deployments already happening out of sight.
Across nearly every state environment — and increasingly in local government — employees are experimenting with generative tools. They are connecting them to internal spreadsheets, automating document workflows, building simple bots with agency credentials and, in the process, unintentionally exposing endpoints. This isn't reckless behavior. It's a response to real operational pressure combined with unprecedented ease of access.
What seldom gets considered is that AI tools are not passive. Many agents can browse the web, call tools, access files and interact with internal systems. When those agents are deployed on open ports without authentication, they become attack surfaces almost instantly.
The people deploying agents rarely think of themselves as standing up infrastructure. But with AI, small experiments can create exposures that previously required a full application rollout.
In one state environment, we found that a well-intentioned employee had connected a generative assistant to a shared spreadsheet used to track internal cases. To make the tool easier to access, the assistant was deployed on a publicly reachable endpoint with no authentication. Within days, automated scans identified that the exposed interface was granting access to thousands of records containing internal identifiers and notes.
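The discovery step described here can be sketched in simplified form. The snippet below flags endpoints that return internal data to a probe carrying no credentials — the exact pattern in the example above. The `EndpointScan` fields and the flagging rule are illustrative assumptions, not the output format of any specific scanning tool.

```python
from dataclasses import dataclass

@dataclass
class EndpointScan:
    """Result of probing one host:port for an AI agent interface.
    Field names are hypothetical, for illustration only."""
    host: str
    port: int
    responded: bool       # did the port answer an HTTP request?
    status_code: int      # status returned to an unauthenticated probe
    served_records: bool  # did the response body contain internal data?

def flag_exposed_agents(scans: list[EndpointScan]) -> list[EndpointScan]:
    """Flag endpoints that serve data (HTTP 200) to a request with no
    credentials -- i.e., no authentication challenge was issued."""
    return [
        s for s in scans
        if s.responded and s.status_code == 200 and s.served_records
    ]
```

A CoE would feed results like these into a review queue rather than simply shutting the deployments down, consistent with the goal of securing experiments instead of discouraging them.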
A CoE gives states a way to discover these deployments, assess their risk and bring them into monitored, secured environments without discouraging innovation. That balance is essential. The goal is not to shut experimentation down, but to prevent well-intentioned work from creating avoidable breaches.
Why use cases are the real engine of an AI center of excellence
Everything that feels overwhelming about AI in government services — governance, data quality, cybersecurity, workforce readiness, etc. — becomes more manageable when anchored to a single, concrete workflow. Use cases force specificity. They bring data owners, security teams and operational leaders into the same conversation. They expose gaps long before any model is deployed.
States often gravitate toward their largest, most visible problems to serve as the target for their first AI initiatives. These are almost always the wrong starting points. Large, cross-agency systems require extensive data cleanup, coordination and political alignment, making early success unlikely.
Instead, agencies should prioritize use cases that are narrow in scope, tied to well-understood data and rooted in routine processes. For example, an effective starting point might be the IT project intake process.
Agencies submit project requests that often take central IT weeks to review, with submissions sent back and forth due to missing information or compliance issues. An AI-driven pre-review capability could assess completeness, validate data classification, flag fiscal anomalies and prompt clarifying questions automatically.
What previously took weeks of back-and-forth could be reduced to minutes of automated triage. And because every agency uses the same intake process, the capability could become shared infrastructure rather than a one-off pilot.
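An intake pre-review like the one described above could be sketched as a simple triage function. The field names, classification labels and fiscal threshold below are illustrative assumptions, not a real state schema.

```python
# Hypothetical intake schema -- field names, labels and the threshold
# are assumptions for illustration, not an actual state standard.
REQUIRED_FIELDS = {"agency", "title", "data_classification", "estimated_cost"}
VALID_CLASSIFICATIONS = {"public", "internal", "confidential", "restricted"}
FISCAL_REVIEW_THRESHOLD = 250_000  # flag requests above this for review

def pre_review(request: dict) -> dict:
    """Triage one project request: report missing fields, invalid data
    classification and fiscal anomalies, and generate clarifying
    questions before a human reviewer sees the submission."""
    missing = sorted(REQUIRED_FIELDS - request.keys())
    issues, questions = [], []

    if missing:
        issues.append(f"missing fields: {', '.join(missing)}")
        questions += [f"Please provide a value for '{f}'." for f in missing]

    cls = request.get("data_classification")
    if cls is not None and cls not in VALID_CLASSIFICATIONS:
        issues.append(f"unknown data classification: {cls!r}")
        questions.append("Which approved data classification applies?")

    cost = request.get("estimated_cost")
    if isinstance(cost, (int, float)):
        if cost < 0:
            issues.append("estimated cost is negative")
        elif cost > FISCAL_REVIEW_THRESHOLD:
            issues.append("estimated cost exceeds fiscal review threshold")

    return {"complete": not issues, "issues": issues, "questions": questions}
```

Because the checks are deterministic and shared, every agency's submission is triaged the same way — which is what makes the capability reusable infrastructure rather than a one-off pilot.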
That is what a strong early use case looks like: concrete, data-appropriate, politically safe and reusable. A CoE that starts here earns credibility quickly.
The data barrier: Why states feel behind and why they're not
One of the biggest obstacles to AI in public administration is psychological. State leaders frequently believe AI requires pristine, enterprise-wide data before anything meaningful can happen. CIOs describe their data as too large, too fragmented or too outdated to support AI initiatives.
While it's true that states have no shortage of unruly data, bad data alone is not the issue. The issue is what AI does with that data.
AI systems do not merely reflect data quality issues. They amplify them. Bias embedded in historical data, once subtle, can become decisive when scaled through automation and machine learning. Public examples of chatbots producing harmful or discriminatory outputs underscore how quickly this can happen when models ingest biased inputs.
In state government, those patterns often stem from decades of policy decisions embedded in eligibility systems, justice data or social services records.
But this does not mean states must fix everything before starting.
The more effective approach is to begin with use cases tied to small, well-understood data sets rather than sprawling, cross-agency systems. This reduces risk, accelerates progress and allows states to confront data issues incrementally rather than all at once.
A CoE enables this discipline by helping states choose where to begin intentionally instead of reacting under pressure.
What "good" looks like in a state AI center of excellence
A functional AI CoE is not a large new department. It is a high-trust environment that brings together security, data, privacy, legal and operational leaders early and often.
Effective programs are typically anchored by a capable AI leader who acts as a conductor, aligning different teams toward shared outcomes. Some states assume this responsibility should fall to the person with the deepest AI expertise or the greatest organizational authority.
In reality, the best leaders may not be the most senior or the most technical, but they excel at leading difficult conversations, identifying real progress, and earning trust across a variety of stakeholder groups.
With strong leadership in place, effective CoEs tend to exhibit several key traits:
- A consistent forum for early engagement among all risk owners
- Reusable guardrails that reduce friction across agencies
- A disciplined approach to use case selection based on feasibility, not ambition
- Clear roles, sequencing and decision-making authority
Most importantly, a good CoE lowers the learning curve for agencies. It gives teams a place to go when they want to move forward responsibly but lack clarity.
Starting without waiting: A parallel path forward
States do not need perfect conditions to begin. They do not need fully mature data. They do not need an ideal organizational chart. And they do not need a staff of AI experts on day one.
The most effective approach is a parallel one. One track focuses on readiness: understanding existing tools, identifying risks, mapping skills and establishing basic guardrails. The second focuses on delivery: selecting a feasible use case, building a prototype and demonstrating value early.
These tracks do not run sequentially. They run in parallel. Delivery begins as soon as there is enough insight to move responsibly.
This model reflects the realities of state government: limited time, uneven maturity, political pressure, legacy systems and competing priorities. It sustains momentum without compromising safety.
A CoE is not about designing the perfect future for AI in public services. It is about enabling progress now with structure, visibility and confidence.
Honest starts beat perfect plans
AI is already entering state environments, whether leadership has formally approved it or not. The question is not whether AI will be used, but whether states will have the coordination and visibility to shape how it is used.
An AI CoE provides that capability. Not as a command structure, but as a safe, credible space for agencies to bring ideas forward, for risk owners to engage early and for shared capabilities to emerge.
States that succeed will be those that start honestly by identifying realistic use cases, flagging real data constraints and developing a clear-eyed understanding of how their agencies actually operate.
A good CoE does not create uniformity. It creates alignment. Those that invest in that alignment will be the ones to turn AI experimentation into real progress.
This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research.
This report is compiled from surveys WWT Research conducts with clients and internal experts; conversations and engagements with current and prospective clients, partners and original equipment manufacturers (OEMs); and knowledge acquired through lab work in the Advanced Technology Center and real-world client project experience. WWT provides this report "AS-IS" and disclaims all warranties as to the accuracy, completeness or adequacy of the information.