
The Consequences of NOT Applying MLOps to Your AI Projects

Author: Stijn Goossens

We recently published a blog post highlighting four things you must do when starting your MLOps journey. If you want to deploy effective AI solutions quickly, you need to follow MLOps best practices.

My name is Stijn Goossens. As a Senior Machine Learning Engineer at Radix, I have seen firsthand what happens when clients don’t apply MLOps to their AI projects. So, this blog post highlights the consequences of NOT using MLOps.

When You Don’t Practice MLOps

You need your AI projects to succeed. However, without MLOps, you will have a harder time deploying effective AI solutions to the market.

When you don’t apply MLOps practices to your AI projects, you:


You waste a lot of time on manual and inefficient processes

Traditional AI project development involves many manual processes. For example, you often spend time creating and running scripts for data processing, model training, evaluation, and deployment. This is not only time-intensive, but also makes it harder to figure out how exactly a certain model was trained: Where did the training data come from? How was this data processed? What script was used to train the model and what were the hyperparameters? Because of this, you risk not being able to reproduce a model that achieved good results.

MLOps encourages AI teams to set up automated pipelines and implement versioning for code, data, and hyperparameters. It makes models reproducible, so you don’t have to spend time figuring out how a given model was trained. This allows you to easily go back to an earlier model and continue improving it from that state. Furthermore, training and deploying new AI models take less effort and time because the automated pipelines replace the manually run scripts.
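To make this concrete, here is a minimal sketch of the idea behind reproducible training runs. The function and names are hypothetical, not a specific tool we use: the point is that when a run is identified by everything that defines it (data, hyperparameters, code version), you can always trace a model back to how it was produced.

```python
import hashlib
import json

def run_id(data_path: str, params: dict, code_version: str) -> str:
    """Derive a deterministic ID from everything that defines a training run."""
    payload = json.dumps(
        {"data": data_path, "params": params, "code": code_version},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Identical inputs always map to the same ID, so a stored model can be
# traced back to the exact data, hyperparameters, and code that produced it.
params = {"learning_rate": 0.01, "epochs": 20}
model_id = run_id("s3://bucket/train-v3.csv", params, "a1b2c3d")
```

Tools like DVC or MLflow implement this idea for you at scale; the sketch only illustrates why versioning all three ingredients together is what makes a run reproducible.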

You have a difficult time collaborating and onboarding newcomers

Today, many teams collaborate asynchronously, with members working on their own schedules on copies of project files stored in different locations. Onboarding new team members is challenging when the workflow involves so many files and versions, and trying to merge all the different copies usually leads to conflicts and errors. When you bring in data scientists, they tend to work in Jupyter notebooks, which don’t play well with version control.

When you follow MLOps best practices, you create AI projects with automated pipelines and versioned, documented code. The pipelines enable newcomers to kick off a model training run easily instead of having to figure out how the manually triggered scripts tie together. Furthermore, because everything is under version control, managing files and versions becomes much easier, with fewer merge conflicts and errors. All of this makes asynchronous collaboration and onboarding much smoother.

You fail to detect data drift quickly

If you spend time building AI models, you know that the data supporting your model will eventually change. For example, the Orient model we built for VDAB (described below) relies on data that changes periodically — new professions come on the market while others disappear. Changes in the data mean you will have to retrain and refine your model to ensure it continues to perform well. Without monitoring, you often will not quickly detect that the data has drifted, and the model’s performance quietly degrades.

MLOps encourages the monitoring and evaluation of models in production, so that drift and performance degradation are caught early. A monitoring system warns you in case of data drift or model performance degradation. You can also automate model retraining, so models are adjusted automatically when they start to drift or degrade.
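As an illustration of what such a drift check can look like, here is a small sketch of the Population Stability Index (PSI), a common way to compare a live feature distribution against the one seen at training time. The function, data, and the 0.2 alert threshold are illustrative assumptions, not a description of a specific client setup:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample.

    0 means identical distributions; values above ~0.2 are a common
    rule-of-thumb signal of significant drift.
    """
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        count = sum(1 for x in sample if lo + i * width <= x < lo + (i + 1) * width)
        if i == bins - 1:  # include the upper edge in the last bin
            count += sum(1 for x in sample if x == hi)
        return max(count / len(sample), 1e-6)  # clip to avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

reference = [0.1 * i for i in range(100)]  # distribution seen at training time
live = [0.1 * i + 5 for i in range(100)]   # shifted production data
alert = psi(reference, live) > 0.2         # would trigger a drift warning
```

A monitoring system runs a check like this on a schedule and raises an alert — or triggers automated retraining — when the score crosses the threshold.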

You run the risk of disruptions in production

Sometimes models break after you deploy them. However, when you have a primarily manual development process, you can’t quickly revert to a previous model version. The longer it takes to fix the model or build a new one, the longer the disruption and the greater the cost to the business.

With MLOps, you store all your models — every version — in a model repository and automate the deployment process, including model quality and load tests. For example, the Orient model uses a deploy pipeline kicked off by a commit that specifies the model ID. Reverting to a previous version simply means committing the model ID of an earlier, correctly working model. When you can immediately replace a broken model with a previous working version, you reduce the risks of disruptions in production.
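The rollback mechanism described above can be sketched in a few lines. This is a toy, in-memory model registry — real setups use a proper model repository and a commit-triggered deploy pipeline — but it shows why keeping every version retrievable makes reverting a one-line operation:

```python
class ModelRegistry:
    """Minimal sketch of a model registry: every version stays retrievable."""

    def __init__(self):
        self._models = {}   # model_id -> stored artifact
        self.deployed = None

    def register(self, model_id: str, artifact: bytes) -> None:
        self._models[model_id] = artifact

    def deploy(self, model_id: str) -> None:
        # Deploying is just pointing at a stored version, so reverting to
        # an earlier, correctly working model is the same operation.
        if model_id not in self._models:
            raise KeyError(f"unknown model {model_id}")
        self.deployed = model_id

registry = ModelRegistry()
registry.register("v41", b"...weights...")
registry.register("v42", b"...weights...")
registry.deploy("v42")  # the new model misbehaves in production
registry.deploy("v41")  # revert: point at the last known-good version
```

In the commit-triggered setup described above, that last line corresponds to committing the model ID of an earlier, correctly working model.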

When should you apply MLOps to your AI project?

Short answer: immediately!

We recommend that you start applying MLOps practices to your AI projects right away. At the minimum, you should implement MLOps practices once you’ve proven that your ML solution can bring value to the company (proof of concept) and you want to develop it further.

We don’t recommend applying MLOps near the end of your project when you plan on moving it to the maintenance stage. If you wait until the end, you will have missed out on creating efficiencies during the earlier parts of the project. And if you want to scale your AI project, we recommend that you:

  • Automate as much as possible — e.g., data processing, model training and evaluation, model deployment, and feedback loops.
  • Version your code, data, and hyperparameters.
  • Constantly monitor model performance in production.
  • Add sufficient tests — e.g., regular unit tests combined with integration tests for every pipeline.
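On the last point, here is a small sketch of what "regular unit tests combined with integration tests" can look like for a pipeline step. The `preprocess` helper and the data are hypothetical, made up for illustration:

```python
def preprocess(rows):
    """Toy data-processing step: drop incomplete rows, normalise a feature."""
    clean = [r for r in rows if r.get("income") is not None]
    hi = max(r["income"] for r in clean)
    return [{**r, "income": r["income"] / hi} for r in clean]

def test_preprocess_drops_incomplete_rows():
    # Unit test: the cleaning logic behaves as specified.
    rows = [{"income": 50}, {"income": None}, {"income": 100}]
    out = preprocess(rows)
    assert len(out) == 2
    assert all(0 <= r["income"] <= 1 for r in out)

def test_pipeline_step_is_deterministic():
    # Integration-style check: the same input always yields the same output,
    # a prerequisite for reproducible pipelines.
    rows = [{"income": 30}, {"income": 60}]
    assert preprocess(rows) == preprocess(rows)
```

Running such tests automatically on every commit (e.g., with pytest in CI) is what turns them from a chore into a safety net for the pipeline.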

Applying MLOps will help you build quality AI solutions faster and at scale.

Radix Projects with MLOps

We work with companies to help them build AI solutions that positively impact their business. And we do that by applying MLOps best practices during the development of each project. A few examples:

VDAB’s Orient 2.0

We worked with the team at VDAB to build Orient 2.0, an AI-supported orientation test that suggests professions based on the user’s interests. We put MLOps components into use, including a model building pipeline and a deploy pipeline. The model building pipeline involves:

  • Model unit tests
  • Data processing
  • Model training
  • Containerization
  • Quality, load, and bias testing

The deploy pipeline provisions the cloud infrastructure and deploys the model. Both pipelines are triggered by commits, which keeps every model fully reproducible.

Macadam AI solution for car defect inspection

We partnered with the team at Macadam to create an AI solution that uses Computer Vision to inspect cars for defects more effectively. We applied MLOps at different points throughout the project. For example, we used ML pipelines for model training and evaluation, which improved reproducibility, team collaboration, and experiment tracking. We also deployed the model on a Kubernetes cluster so that it can be scaled easily as production usage increases.

Register for Our Upcoming MLOps Webinar

Radix and Brussels Airport Company will hold a webinar on June 15, 2022, at 11:00 am CST to discuss MLOps. Join my colleagues Brecht Coghe and Xavier Goás Aguililla and Brussels Airport Company’s Thibault Verhoeven to discuss what MLOps is and why machine learning without MLOps is like a house without a foundation.

You will also have an opportunity to ask these experts your questions.

Don’t miss this chance to learn more about MLOps and register below!




About The Author

Stijn Goossens

Stijn is a Solution Architect at Radix. Following his passion for data, he graduated from the Advanced Master of Artificial Intelligence programme at KU Leuven. Stijn wants to create an impact by applying Machine Learning to real-world challenges. Previously, he worked as a functional consultant after obtaining his master's in Business Engineering.
