Integrating MLOps for the AI Advantage: Top Trends in 2022
Machine Learning Operations, MLOps for short, has us at World Wide Technology pretty excited. That has been true since we realized in 2019 that our partners would need the people, processes, and technologies to scale their adoption of artificial intelligence (AI) and machine learning (ML). According to a survey by The AI Journal, 74% of leaders expect AI to play an important role in making business processes more efficient. Simplifying the management of those AI/ML models is where MLOps comes into play, and we used our AI R&D program as a launchpad for an MLOps practice to ensure we would be ready to assist customers with their maturation journeys. The practice has only grown since then, so please read on to discover the trends we see coming for the rest of 2022.
But first, a brief recap of what MLOps entails. Google kicked off the MLOps field in 2015 with the observation that actual ML code, the focus of most data science-oriented research, is only a small part of any deployed AI solution. The data ingest and validation, model evaluation, and production monitoring steps are often overlooked.
MLOps builds on DevOps concepts in that systematic processes and reusable components enable teams to build and deploy ML solutions (as opposed to software) better, faster, and cheaper. Unlike traditional software, which consists only of code, ML solutions depend on both code (the model and its algorithm) and data. By introducing a framework with which to develop and maintain an end-to-end ML solution, MLOps offers the ability to industrialize data science, guaranteeing a robust, repeatable process for delivering business value via new AI solutions. The field has come a long way since its inception in 2015 and continues to evolve quickly. The trends listed are some of the most exciting developments we have observed from working with clients in the field.
A model can only be as reliable as the data on which it was trained. This growing realization prompts a fundamental shift in machine learning solution development from being model-centric to being data-centric. According to AI thought leader Andrew Ng, a model-centric approach works with available data to iteratively improve both the code and the model. In comparison, the data-centric approach iteratively improves the data while holding the code fixed. He has demonstrated the effectiveness of this technique with steel defect detection and surface inspection use cases; focusing on improving the data improves model accuracy more significantly than focusing on the code does. "Good" data becomes even more important when training on small datasets, where each entry can have a substantial impact on the resulting model's performance. Some of the techniques to improve data include consistent labelling, ground truthing, spot-checking, and increasing the size of datasets. We expect data teams that prioritize working with the best possible data will be able to deliver more business value than those focusing on the best possible algorithms.
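To make spot-checking concrete, one simple heuristic is to flag examples whose label disagrees with the majority label of their nearest neighbours, then route those examples to a human for review. The sketch below is our own illustrative assumption of how such a pass might look (the toy dataset, function name, and neighbour-vote rule are not from Ng's work):

```python
def suspect_labels(points, k=3):
    """Flag indices whose label disagrees with the majority label of
    their k nearest neighbours -- a simple spot-checking pass.

    points: list of (feature, label) pairs with binary labels.
    """
    flagged = []
    for i, (x, y) in enumerate(points):
        # nearest neighbours by 1-D feature distance
        neighbours = sorted(
            (abs(x - x2), y2) for j, (x2, y2) in enumerate(points) if j != i
        )[:k]
        majority = sum(label for _, label in neighbours) > k / 2
        if majority != bool(y):
            flagged.append(i)
    return flagged

# a toy dataset where the last point looks mislabelled
points = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1), (0.25, 1)]
print(suspect_labels(points))  # flags index 6 for human review
```

In a real pipeline the flagged examples would feed a relabelling queue rather than being dropped automatically; the point of the data-centric loop is that the model code stays fixed while the labels improve.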
Drift, a concept highlighted in the 2021 State of AI Report, refers to changes in the data underlying any ML model and how the model responds to those changes. There are many types of drift (and even some variation between definitions) depending on what specifically changed, but they all address ways in which model performance can degrade over time: data drift, when the data the model sees in production no longer matches the data it was trained on; concept drift, when the relationship between the model's inputs and the desired prediction changes; and more. Regardless of the type, the existence of drift underscores the importance of not only deploying an ML model into production but also monitoring the performance of that model in production to make sure it continues to produce value. The ability to detect drift and respond effectively is a differentiator for data teams, enabling them to become proactive and address model issues before those issues affect the experience of end users.
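One common way to operationalize data-drift monitoring is the Population Stability Index (PSI), which compares the distribution of a feature (or model score) at training time against what the model sees in production. The sketch below is a minimal pure-Python version; the binning scheme and the conventional 0.2 alert threshold are our assumptions, not prescriptions from the report:

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between a baseline (training-time)
    sample and a live (production) sample of one feature."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(sample, b):
        left = lo + b * width
        if b == bins - 1:
            # last bin is right-inclusive so the maximum value is counted
            count = sum(x >= left for x in sample)
        else:
            count = sum(left <= x < left + width for x in sample)
        return max(count / len(sample), 1e-4)  # smooth empty bins

    return sum(
        (frac(live, b) - frac(baseline, b))
        * math.log(frac(live, b) / frac(baseline, b))
        for b in range(bins)
    )

training = [i / 100 for i in range(100)]
shifted = [x + 0.5 for x in training]  # simulated drifted production data
print(psi(training, training) < 0.2, psi(training, shifted) > 0.2)  # → True True
```

A monitoring job might compute this per feature on a schedule and page the team when the index crosses the chosen threshold, turning drift response from reactive to proactive.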
As AI/ML solutions are expected to increasingly drive decisions, leaders are faced with the need to track and quantify the value of models in use. Value tracking consists of defining key metrics to help assess the benefit of a solution. This visibility is paramount for the health, security, and reputation of organizations using the models. Tying machine learning model results back to business outcomes also builds user confidence and promotes wider adoption.
At WWT, we have developed a value tracking framework describing how MLOps enables smarter, faster, and cheaper innovation at scale. For instance, the framework includes a methodology to track model quality, visibility, and reusability by measuring the difference in development effort or the reduction in reported bugs with and without certain MLOps capabilities. We have also worked with clients to increase visibility of model performance and value tracking through a single pane of glass. This approach provides key information about models in production and connects that insight to tangible business value. It empowers leaders to understand how resilient all models in use are against bias, how reusable model components are reducing the time and resources required, and how MLOps is affecting time to model deployment. The most important principle is to connect the outputs of AI/ML models to bottom-line impact and employee satisfaction, both of which MLOps should improve.
MLOps goes beyond tools to encompass a transformation across people, process, and technologies related to AI/ML. People participation is a key ingredient to successfully implementing MLOps. In our experience, forming a group of champions or key stakeholders from different levels and departments works very well in helping develop best practices, identifying challenges, sharing knowledge, and establishing guardrails around the use of AI solutions. It also brings together people who might not otherwise interact in their daily routines.
There is currently no consensus around a single best tool or set of tools for MLOps, and we do not expect that to change soon. Given the diversity of capabilities encompassed by a comprehensive CI/CD/CM/CT pipeline (continuous integration, delivery, monitoring, and training – an extension of traditional CI/CD), there is ample opportunity for both niche and end-to-end products. Since MLOps should be a capability adoptable by organizations regardless of their cloud provider or technical stack, developing a one-size-fits-all solution would be a substantial challenge. Thus, we continue to see active development in everything from open-source tools (e.g., MLflow) to custom-built solutions from major technology companies (e.g., Metaflow by Netflix, which was open sourced in 2019) and end-to-end ML platforms (e.g., Databricks, Azure ML, SageMaker, GCP, and DataRobot, to name a few).
Data science is fundamentally a scientific process, and just as lab scientists will customize and calibrate their instruments, data scientists and machine learning engineers will do the same. As a result, we do not consistently recommend any one set of technologies when developing MLOps capabilities with our partners; we believe the right choice is specific to a given situation. So, we will be keeping an eye out for new, exciting MLOps tools (as well as possibly working on some ourselves), and we recommend you also seek the best-fit tools for your goals, shaped by the specific use cases or outcomes you want to achieve.
The goal of MLOps projects is often to accelerate the path to model deployment, as that is when models start delivering business value. According to Fortune Business Insights, the global machine learning market is projected to grow from $15.50 billion in 2021 to $152.24 billion in 2028, a CAGR of 38.6% over the forecast period. MLOps proposes that spending additional effort upfront will increase the long-term value of deployed ML solutions by reducing maintenance costs and making the process of deploying new models more repeatable. So as AI/ML becomes increasingly adopted, we expect MLOps to see similarly increasing interest.
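As a quick sanity check, the quoted figures are internally consistent: growing the 2021 market size at the stated CAGR over the seven-year forecast window lands roughly on the 2028 projection.

```python
start, cagr, years = 15.50, 0.386, 7  # figures quoted from Fortune Business Insights
projected = start * (1 + cagr) ** years
print(round(projected, 1))  # ≈ 152.3, in line with the quoted $152.24B
```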
In our digital world, it is time for ML to catch up to software development in acknowledging that written and deployed code is not the end of the road for data teams. Adopting a spirit of continuous improvement will become part of the definition of done.
Now seven years since its inception, MLOps is maturing as a discipline but is still ripe for experimentation and development. From changing attitudes on which area of the ML lifecycle warrants the greatest focus to the continual development of new tools, we anticipate MLOps will continue to be a focus area for organizations seeking to maximize the business value of AI/ML models. We at WWT will be following the space closely as well as developing MLOps capabilities alongside our customers and partners. We hope you will follow along with us.