Over the past few months, we've studied and tested many of the most popular and cutting-edge MLOps platforms. We believe organizations that master the automation, organization and architecture of these platforms will be the ones to lead the way in the future business of data science.
MLOps platforms will provide value to the business by making data scientists more efficient, helping them manage their models and letting them iterate rapidly on experiments and model development. This will ultimately make data scientists more productive and models more performant, accelerating the revenue-generation or cost-savings targets the models were built to hit. It is a win-win for the organization!
This is the first in a series of articles about the nature of the landscape as it continues to grow, the business uses and value of MLOps and an evaluation of various platforms. WWT is committed to being a leader in this field and we hope you will join us in pushing the boundaries of business applications of these exciting, new and game-changing toolboxes.
What have you heard about MLOps so far?
A 2015 paper, Sculley et al.'s "Hidden Technical Debt in Machine Learning Systems," identified the problems and "hidden technical debt" the ML lifecycle can incur, which spurred the idea for centralized "machine learning operations" platforms. But it was not until 2018 that the space began to fill with enterprise-level solutions. Each vendor's platform has its own capabilities, unique features and limitations, but they all share the same goal: giving data scientists the tools they need to manage the entire machine learning lifecycle.
The buzz truly kicked off in 2017, when Google began investing heavily in the field and in its internal MLOps platform, TensorFlow Extended (TFX), and later in Kubeflow, both of which are now open source. Since then, you may have heard of MLOps from a variety of sources, because it has been a hot topic from its inception (and rightfully so).
But right now, the whirlwind of announcements, new platforms and constant churn of updates has made it difficult to get a clear picture of the landscape. Your CTO may have mentioned hearing about it at a technology conference, or maybe you read an article discussing the benefits of these toolboxes (and each really is its own stand-alone toolbox). Some toolboxes fill certain niches, like performance tracking and visualization, while others come with a full complement of tools for each task along the lifecycle. In future articles, we will break down the features and limitations of many of the leading MLOps platforms to build understanding of the space.
Why bother learning about MLOps?
It is good to ask yourself, “Why am I investing time and money into a new tool? Will it help me in the long term?” From a data scientist’s perspective, adding MLOps to your skill set is an investment with huge upside. Data preparation and model training are difficult enough for complicated business problems, but those well-known challenges do not account for the hidden technical debt the process can, and will, incur.
Once the models are developed, progress can slow dramatically. Most data scientists are not very familiar with the steps that lie between training a model and turning it into a finished product. It is a truth many of us know all too well — that writing the code is just a small part of the data science lifecycle.
MLOps platforms deliver a world where data scientists can save their models and share various iterations with their colleagues easily. It is a world where you can start experimenting collaboratively with someone else’s model in a matter of seconds. And one where data scientists are armed with the tools they need to maintain the long-term health of their models. This world is the future of the field and is quickly becoming today’s reality.
A challenge that many data science teams face is the gap between their team and their software delivery counterparts, especially in organizations that are growing their data science practice. Getting stuck in this rut between model development and model deployment and upkeep can be frustrating; it feels like the machine learning models are ready to be put into production, but either the teams lack the expertise to do this on their own or there is a disconnect between their vision and the delivery team’s.
MLOps platforms not only build a bridge over this rut, but also map and pave the paths leading to and from that bridge. They connect two separate areas of expertise: data science and software delivery.
How difficult is it to get started?
There are challenges with implementing an MLOps platform, but they are mostly encountered in the upfront time and integration costs. And remember that organizations can expect to save time and resources in the long term. To go back to the toolbox analogy, bringing a new MLOps platform online is like installing a new table saw. Just as there are frustrations and bumps along the way as you carry your bulky new saw into the workshop, navigate around tight corners and summon the strength to lift the thing, there will be some pains getting the platform installed. The more accustomed to Linux and cloud systems you are, the easier you will find the installation process.
Once you have it in your space, you will need to plug things in, move materials around to accommodate the new tool, and start learning how to use it properly (and safely). But as it becomes integrated into your work, the feeling that it is something novel and foreign will fade, and the familiarity of the tool will make it a normal part of your workflow. You will soon see it is faster and more efficient to do your work, from a deep neural net to a large set of concurrent models, with your newly integrated tools.
Does it get easier later?
Yes! And this is the beauty of using an MLOps platform: the more you and your team use it, the easier and more valuable the toolset becomes. MLOps provides tools for the machine learning team, such as experiment automation, hyperparameter tracking and collaboration across the entire team. The more experimentation is automated and collaboration increases, the better the product becomes.
The collaboration extends to the delivery team, which may have less experience working with data science teams. For data scientists, this is a no-brainer: it makes the software architecture steps in the lifecycle, which are not traditionally within a data scientist’s skill set, easier to manage, giving data scientists more time to do actual data science.
Where do we start?
Like so many things in data science, the first place to start is research and goal setting. Investigating the plethora of options you have for MLOps and deciding what features and capabilities are the most important for your organization will lead you to the right platform.
If your company is already plugged into the Microsoft Azure cloud, then staying in the same ecosystem with Azure DevOps and Azure ML may be the easiest choice. If you are looking for a hefty data science toolbox, such as notebooks for coding, and are agnostic between cloud and on-premises deployment, Kubeflow is a great choice. If your organization lacks a way to track experiments and organize model iterations, then Neptune is the lightweight tool you will want.
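To make the experiment-tracking niche concrete, here is a toy sketch in plain Python of the pattern these tools provide: log a run's hyperparameters and metrics, then persist the run so colleagues can compare iterations. The `ExperimentTracker` class below is entirely hypothetical; it stands in for what a platform like Neptune does and is not any vendor's real API.

```python
import json
import time
from pathlib import Path


class ExperimentTracker:
    """Hypothetical stand-in for a platform's experiment tracker.

    Illustrates the pattern only; real platforms add UIs,
    collaboration and model registries on top of this idea.
    """

    def __init__(self, experiment_name, log_dir="runs"):
        self.name = experiment_name
        self.log_dir = Path(log_dir)
        self.record = {
            "name": experiment_name,
            "started": time.time(),
            "params": {},
            "metrics": {},
        }

    def log_params(self, **params):
        # Record the hyperparameters used for this run.
        self.record["params"].update(params)

    def log_metric(self, key, value):
        # Append a metric observation, e.g. per-epoch training loss.
        self.record["metrics"].setdefault(key, []).append(value)

    def save(self):
        # Persist the run as JSON so iterations can be compared later.
        self.log_dir.mkdir(exist_ok=True)
        out_file = self.log_dir / f"{self.name}.json"
        out_file.write_text(json.dumps(self.record, indent=2))
        return out_file


# Example run: track a baseline model's settings and losses.
run = ExperimentTracker("baseline-logreg")
run.log_params(learning_rate=0.01, epochs=3)
for loss in [0.9, 0.6, 0.45]:
    run.log_metric("train_loss", loss)
path = run.save()
print(path)
```

Even this bare-bones version shows the payoff: every run leaves a durable, comparable record instead of living in one person's notebook, which is the core habit an MLOps platform builds into a team.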
We are currently evaluating some of the most popular services and will publish our perspective in the coming months. Stay tuned for a deep dive into the tools and functions of each by following our Data Analytics and AI topic area. Given the pace with which the market is evolving, we are certain that our favorites will continue to evolve as well.
In companies across industries, MLOps is changing the scale at which data science work is done. This is an exciting time to get involved with the field: a chance to be part of the group of early adopters who understand these tools, provide feedback on them and quickly become leaders in the machine learning space.