
How to Choose MLOps Tools: The Top Considerations That Will Impact Your Decision-Making

Planning an MLOps environment and procuring the right tools for your business are critical steps in designing a successful MLOps implementation. This article will take you through the major considerations to weigh when implementing an MLOps solution.

MLOps is an automation-first approach that brings together people, process, and technology to enhance cross-team collaboration on Machine Learning (ML) projects and streamline the iteration, production, deployment, and operation of ML models. It is a broad concept with many implications for your organization, ranging from helping data scientists ramp up, to preparing the organization as a whole for change, to evaluating your organization's MLOps maturity.

When designing your MLOps implementation, it’s important to consider all three aspects of people, process, and technology. This article focuses on the technology component to help identify the tools that work best with your organization’s environment and requirements.

Top considerations for decision-making

Before diving into the world of MLOps, there are three conditions an organization must first address, each critical to ensuring a successful MLOps journey. Organizations must have an understanding of:

  1. The desired business outcome
  2. The requirements of their current data science environment
  3. The level of effort required to implement the MLOps vision

1. Understanding the desired business outcome

Building an MLOps environment is like building with LEGO. Just like LEGO blocks, every MLOps tool has a different purpose and different capabilities. This is why it's important to identify your desired business outcome and consider the state of your current data science tech stack before implementing an MLOps environment. Understanding the desired business goals will empower your team to choose the best MLOps tools for the job, ensuring your environment is implemented as seamlessly as possible and aligns closely with your business needs.

Once established, it's crucial to continually evaluate your MLOps environment against your business goals. This will enable you to identify areas of opportunity for continual improvement, as well as ensure your business needs are always met. For example, if your organization already has ML models in production but no way to validate or monitor results, then tools that excel at model validation and monitoring, such as Datatron, SAS, and TensorFlow Extended (TFX), would be the best way to help your organization keep growing its MLOps capabilities.

2. Ensuring compatibility with current data science environment

When building with LEGO, one key step is to check whether the pieces you are using fit together. The same is true in MLOps: all parts of the environment must fit and work together seamlessly to ensure the best possible results. For example, if an organization runs heavily in a commercial cloud such as AWS or Azure, it will likely require MLOps tools that can run in that environment and integrate with the other tools already there, such as MLflow, TFX, Kubernetes, or other cross-platform tools. Just as you would not lock two data scientists in separate rooms with no way to talk to each other, you should not adopt two tools that can't talk to each other.

It is very likely that MLOps will not comprise the entirety of the organization's data science environment, so it is also important to understand which parts of the overall environment will need MLOps tools, and which can continue to run as a simpler, more traditional data science workflow. Determining which parts are priorities for increased automation should aid this decision.

3. Evaluating your current level of expertise

Understanding the complexity of the tools needed is another important consideration. Some tools, especially open-source tools, have a much higher barrier to entry than paid tools. If an organization has a high level of Machine Learning expertise available, open-source tools will likely be the best option. However, if the complexity of these tools is prohibitive, it might be worth considering paid tools.

Paid tools, just like LEGO sets, offer the benefit of a more streamlined, general solution that comes pre-packaged with other tools. Open-source tools are more like designing custom LEGO models: they take more experience and knowledge to build but provide a bespoke environment that fits exact business needs.

Key offerings of MLOps tools

With everyone clamoring to break into this space, you might be dazzled by the myriad MLOps tools available on the market. You might wonder, "What are the key offerings to consider when making the decision? Which tools really excel in these areas?" There are several core capabilities that MLOps tools provide, including:

  • Data management
  • Model versioning and storage
  • Model training and deployment
  • Model validation
  • Continuous integration & continuous delivery (CI/CD)
  • Model monitoring

Figure 1. MLOps pipeline

1. Data management

Data lies at the heart of any machine learning project. As Figure 1 shows, you will always need to preprocess your data first, regardless of the fancy algorithms and models you would like to develop. Specifically, data extraction, validation, and preparation are all necessary steps that support your later modeling efforts. Without proper data exploration and processing, it is almost impossible for algorithms to learn the mapping between input data and target variables that enables the business outcome you are looking for. Some MLOps tools and platforms that excel at this are Azure ML and TFX.
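
To make these stages concrete, here is a minimal sketch of the extract, validate, and prepare steps in plain Python. The inline dataset, column names, and churn task are hypothetical stand-ins; platforms such as Azure ML and TFX automate this same pattern at scale.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    # Extraction: pull raw data from its source. A tiny inline frame stands in
    # for a real extract from a warehouse, lake, or API.
    df = pd.DataFrame({
        "tenure":          [1, 34, 2, 45, 8, 22, None, 60],
        "monthly_charges": [29.85, 56.95, 53.85, 42.30, 70.70, 99.65, 89.10, 29.75],
        "churned":         [1, 0, 1, 0, 1, 0, 1, 0],
    })

    # Validation: enforce basic expectations before any modeling begins.
    assert {"tenure", "monthly_charges", "churned"}.issubset(df.columns)
    assert df["churned"].isin([0, 1]).all(), "unexpected label values"
    df = df.dropna()  # discard incomplete records

    # Preparation: split and scale features so the algorithm can learn the
    # mapping between inputs and the target variable.
    X, y = df[["tenure", "monthly_charges"]], df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)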

2. Model versioning and storage

The output of an ML project is the result of repeated iteration on the code and model, as well as the interaction of multiple components: data, code, the model itself, and in some cases meta-information like hyperparameters. When an error or flaw surfaces, data scientists otherwise have to painstakingly trace back through and correct each version of the code. By properly implementing versioning and storage, data scientists will be able to retrain the model and reproduce its output over time. In short, versioning and storage are key to the reproducibility of machine learning, which helps overcome human error in the experimental process.

If you operate transnational businesses, regulations and compliance will also need to be factored into consideration. You need to account for rules and laws such as the CCPA (California Consumer Privacy Act of 2018) and the GDPR (General Data Protection Regulation) and maintain a clear lineage of the models deployed into the production environment. MLOps tools with a model versioning and storage offering can tag and document the exact data and models that have been deployed, in turn helping businesses comply with audits.

Current MLOps tools with this offering include MLflow, GCP AI Hub, Sagemaker, Domino Data Science Platform, and Kubeflow Fairing.
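
As an illustration, here is a minimal sketch of experiment tracking with MLflow, one of the tools listed above; the experiment name, hyperparameters, and synthetic stand-in data are assumptions for the example.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=42)  # stand-in data
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("churn-model")  # hypothetical experiment name
    with mlflow.start_run():
        params = {"n_estimators": 100, "max_depth": 5}
        model = RandomForestClassifier(**params).fit(X_train, y_train)
        # Log the hyperparameters, a metric, and the model artifact together so
        # this exact version can be reproduced, traced, and audited later.
        mlflow.log_params(params)
        mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
        mlflow.sklearn.log_model(model, "model")

Because every run records its parameters, metrics, and artifact side by side, a model in production can be traced back to the exact experiment that produced it, which is precisely the lineage that audits ask for.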

3. Model training and deployment

Modeling and deployment form an iterative process involving various stakeholders with different roles and multiple systems, tools, and environments. However, for many businesses at level 1 of the ML maturity curve, this step remains manual because an automated ML pipeline hasn't been set up yet. Manually integrating the technical package during deployment creates friction, which can jeopardize the stability of the environment. MLOps tools offer a way to streamline modeling and production deployments and easily scale this activity. In terms of specific tools, GCP AI Platform, Sagemaker, Domino Data Science Platform, TFX, and Kubeflow Pipelines all provide this functionality.
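
As a small illustration, the sketch below bundles preprocessing and the model into one versioned artifact so training and serving always apply identical transformations, removing one source of manual-deployment friction. The file name and pipeline steps are assumptions; managed platforms like those above wrap the same idea in fully automated pipelines.

    import joblib
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data

    # Bundle scaling and the classifier so the serving side cannot drift out of
    # sync with the transformations used during training.
    pipeline = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
    pipeline.fit(X, y)

    joblib.dump(pipeline, "model-v1.joblib")        # training side: publish artifact
    serving_model = joblib.load("model-v1.joblib")  # serving side: load artifact
    print(serving_model.predict(X[:5]))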

4. Model validation

Model validation is usually performed in tandem with model development and is measured using statistical metrics: quantitative measures that evaluate predictions against observations. Examples include confusion matrices, F1 scores, and AUC-ROC curves (Area Under the Receiver Operating Characteristic curve). If a model fails to meet the target metrics on new data, it goes back to the development phase. Validation matters because it helps minimize bias and enhance model explainability before a model is deployed to the production environment. By testing the model against new data, you can ensure the model performs as expected. A number of MLOps tools have this offering, including Datatron, SAS, and TFX.
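
As a sketch of what this looks like in practice, the example below scores a model on held-out data using the metrics named above and refuses to promote it below an assumed F1 bar of 0.80; the synthetic data and the threshold are both illustrative.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=1)  # stand-in data
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    preds = model.predict(X_test)
    scores = model.predict_proba(X_test)[:, 1]

    # Evaluate predictions against held-out observations.
    print(confusion_matrix(y_test, preds))
    print("F1:", f1_score(y_test, preds))
    print("AUC-ROC:", roc_auc_score(y_test, scores))

    # Gate on the metric: below the bar, the model returns to development.
    if f1_score(y_test, preds) < 0.80:  # assumed acceptance threshold
        raise ValueError("Model failed validation; back to the development phase")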

5. Continuous Integration & Continuous Delivery (CI/CD)

Once your ML model has been designed and delivered, you will find it is constantly being modified and updated. These changes need to be integrated and delivered as quickly and seamlessly as possible, which is where CI/CD comes in, introducing automation into this challenge through continuous integration and continuous delivery.

Continuous integration ensures that changes made to your model are continually tested and merged, providing a solution to the classic problem of having too many cooks in the kitchen (or too many developers in the code). Once changes have been consolidated, continuous delivery ensures that the most up-to-date version of the model is automatically uploaded to a shared repository and delivered to production. This minimizes the effort of delivering new code and increases visibility between management and development teams.

Together, these practices form what is known as the CI/CD pipeline, a key element in the lifecycle of any application that requires constant updates. Some CI/CD tools include GCP Cloud Build, AWS CodePipeline, Azure DevOps, GitLab, and Jenkins.
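
To make the continuous integration half tangible, below is a minimal sketch of a pytest-style quality gate that a CI job (in Jenkins, GitLab, or any of the tools above) could run on every proposed change; train_candidate_model() and the 0.75 accuracy bar are hypothetical stand-ins for your project's own training entry point and acceptance criteria.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split


    def train_candidate_model():
        # Hypothetical training routine; a real project would import its own pipeline.
        X, y = make_classification(n_samples=1000, random_state=2)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)
        return LogisticRegression().fit(X_train, y_train), X_test, y_test


    def test_candidate_model_meets_accuracy_bar():
        # Continuous integration: every merge must pass this check first, which
        # keeps the many cooks in the kitchen from spoiling the shared model.
        model, X_test, y_test = train_candidate_model()
        assert accuracy_score(y_test, model.predict(X_test)) >= 0.75  # assumed bar

Once the test suite passes, the continuous delivery half of the pipeline takes over and publishes the new model artifact to the shared repository automatically.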

6. Model monitoring 

In an ever-changing world, it is crucial to monitor day-to-day operations and track metrics to ensure the accuracy of model performance in production. As the data fed into the model drifts away from the data it was trained on, the model's output may no longer reflect the actual situation, resulting in outdated and misleading predictions. To catch drift and prevent its repercussions, you can turn to MLOps tools for help. These tools and platforms can monitor for drift, saving your data scientists the time and energy of constantly comparing live traffic against baseline results. Examples of these tools include Sagemaker Pipelines, Domino Model Monitor, Datatron, TFX, and Kubeflow Metadata.
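
As a simple illustration of what such monitoring does under the hood, the sketch below compares a live feature's distribution against its training baseline using a two-sample Kolmogorov-Smirnov test; the synthetic data and the 0.05 significance level are assumptions for the example.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # feature at training time
    live = rng.normal(loc=0.4, scale=1.0, size=5000)      # same feature in production

    # A small p-value means the live distribution has shifted from the baseline.
    statistic, p_value = ks_2samp(baseline, live)
    if p_value < 0.05:  # assumed significance level
        print(f"Drift detected (p={p_value:.2e}); consider retraining the model")
    else:
        print("Live traffic still matches the training baseline")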

Are you ready to start the exciting journey into the world of MLOps? WWT can be your guide!

MLOps is more than just code and tools; it's the implementation of an entire system. In order to pick the right tools, it's very important to understand not only your organization's needs and wants but also your current data science landscape and the value that MLOps can provide in these areas. If you would like to know more, stay tuned for upcoming articles that take a deeper dive into the important capabilities of these tools. Once you feel ready to take the first step into MLOps, WWT can help your organization identify its needs and requirements and build a successful MLOps environment.
