by Tim Brooks, Managing Director & Chief AI Advisor,
World Wide Technology for The AI Journal

While the potential impact of generative AI (genAI) is widely recognized, implementation challenges persist for organizations. According to Gartner research, approximately 30% of genAI projects stall after the development and proof-of-concept phases. Gartner attributes these failures primarily to inadequate data quality and the lack of frameworks for seamless AI integration.

Organizations without a comprehensive data strategy that supports genAI now and in the future face significant operational risks: wasted resources, an eroding competitive position, and delayed innovation cycles.

Successful genAI implementation requires IT leaders to establish robust data infrastructure that encompasses quality assurance mechanisms, governance protocols, and strategic alignment with organizational objectives. 

Addressing Data Quality Assessment Complexities 

Data quality is the foundation of effective genAI, yet many organizations continue to run into data challenges that hinder strategic AI innovation, diminishing return on investment and extending deployment timelines. For example, IT teams may discover inconsistent data entry protocols, disparate categorization standards across business units, and variations in the languages used to construct AI models.

Quality inconsistencies from any data source can fundamentally undermine AI system effectiveness, and without a proper data quality assessment, IT leaders will struggle to manage IT at scale. There are several considerations to weigh when evaluating the state of enterprise data:

Tapping Data Trapped in Legacy Systems:  

For many organizations, critical enterprise data remains isolated within legacy infrastructure that was not built to run AI workloads. In fact, research shows that 40% of enterprise systems are beyond end of life or end of support. Systems such as mainframes, proprietary databases, and discontinued platforms can create data bottlenecks, making it harder for AI solutions to access the information needed to produce consistent, accurate outcomes. IT leaders who address this will be able to spend more time on strategic initiatives rather than remediating infrastructure.

Allocating Adequate Resources to Evaluate Existing Systems:  

While a complete infrastructure overhaul may not be necessary for AI readiness, organizations must thoroughly evaluate every system that contributes to AI data pipelines. This assessment is in-depth, often spanning many disparate systems and requiring more resources than companies anticipate. IT leaders who plan for this level of effort will be better positioned to integrate AI efficiently and effectively.
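The kind of assessment described above often starts with a lightweight data-quality profile: counting missing values and surfacing the distinct category labels each source actually uses. The sketch below is a minimal, hypothetical illustration of that idea — the function name, field names, and sample records are assumptions for demonstration, not a prescribed tool:

```python
from collections import defaultdict

def profile_quality(records, required_fields):
    """Summarize basic quality signals across records:
    how often required fields are missing, and which distinct
    (normalized) labels appear in each field."""
    missing = defaultdict(int)   # field -> count of absent/blank values
    labels = defaultdict(set)    # field -> distinct normalized values seen
    for rec in records:
        for field in required_fields:
            value = rec.get(field)
            if value is None or str(value).strip() == "":
                missing[field] += 1
            else:
                labels[field].add(str(value).strip().lower())
    return {
        "rows": len(records),
        "missing": dict(missing),
        "distinct_labels": {f: sorted(v) for f, v in labels.items()},
    }

# Hypothetical records from two business units with differing conventions.
crm = [{"region": "EMEA", "status": "Active"},
       {"region": "", "status": "active"}]
erp = [{"region": "Europe", "status": "ACTIVE"},
       {"region": "EMEA", "status": None}]

report = profile_quality(crm + erp, ["region", "status"])
```

Even on this toy data, the report exposes the two problems the article names: blank or missing required fields, and the same concept labeled differently across units ("EMEA" vs. "Europe") — exactly the inconsistencies a genAI pipeline would otherwise inherit silently.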
