In the past few years, we’ve seen an increase in Artificial Intelligence and Machine Learning solutions in real-life situations. In big companies, these solutions have to be implemented in hundreds of use cases and it’s difficult to do this manually.
At the enterprise level, the deployment of AI solutions and machine learning models needs to be operationalized. Data scientists and ML engineers have tools to create and deploy models but that’s just a start. The models need to be deployed in production to deal with real-world use cases. So, we need a framework or methodology to reduce manual effort and streamline the deployment of ML models. That framework is ModelOps.
ModelOps was proposed in December 2018 by IBM researchers as “a programming model for reusable, platform-independent, and composable AI workflows”. The authors, Waldemar Hummer and Vinod Muthusamy, later expanded on the idea as a “cloud-based framework and platform for end-to-end development and lifecycle management of artificial intelligence (AI) applications”.
“Artificial intelligence (AI) model operationalization (ModelOps) is a set of capabilities that primarily focuses on the governance and the full life cycle management of all AI and decision models. This includes models based on machine learning (ML), knowledge graphs, rules, optimization, natural language techniques and agents. In contrast to MLOps (which focuses only on the operationalization of ML models) and AIOps (which is AI for IT operations), ModelOps focuses on operationalizing all AI and decision models.” – Gartner
In March 2020, ModelOp, Inc. published the first comprehensive guide to ModelOps methodology. The guide covered what ModelOps can do, along with technical and organizational requirements for implementing ModelOps practices.
Basically, ModelOps is a collection of tools, technologies, and best practices to deploy, monitor and manage machine learning models. It is the key capability for scaling and governing AI at the enterprise level.
It’s based on the concept of DevOps but adapted to ensure good quality of machine learning models. In general, ModelOps includes:
- continuous integration / continuous delivery (CI/CD),
- development environments,
- model versioning,
- model store,
- rollback, etc.
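The model store, versioning, and rollback pieces of that list can be sketched with a minimal in-memory registry. This is an illustrative toy, not any particular platform's API; real ModelOps tools back the same ideas with durable storage, access control, and audit trails.

```python
import time


class ModelRegistry:
    """Minimal sketch of a model store with versioning and rollback."""

    def __init__(self):
        self._versions = {}  # model name -> list of version records
        self._active = {}    # model name -> index of the version in production

    def register(self, name, artifact, metrics):
        """Store a new version of a model together with its evaluation metrics."""
        record = {"artifact": artifact, "metrics": metrics, "ts": time.time()}
        self._versions.setdefault(name, []).append(record)
        return len(self._versions[name]) - 1  # version id

    def promote(self, name, version):
        """Mark a stored version as the one serving production traffic."""
        self._active[name] = version

    def rollback(self, name):
        """Revert to the previously promoted version."""
        self._active[name] = max(self._active[name] - 1, 0)

    def production_model(self, name):
        """Return the artifact currently serving production."""
        return self._versions[name][self._active[name]]["artifact"]
```

For example, registering two versions of a hypothetical "churn" model, promoting the newer one, and rolling back restores the earlier artifact in production without retraining anything.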
You can think of ModelOps as an expansion of MLOps, with the main focus to keep deployed models ready for the future with continuous re-training and synchronized deployments.
“A true ModelOps framework allows you to bring standardization and scalability across these disparate environments so that development, training and deployment processes can run consistently and in a platform-agnostic manner.” – cio-wiki
Why ModelOps? The key to enterprise AI
In 2018, Gartner asked large enterprises about AI adoption. Managers expected 23% of their systems to have AI integrated by the following year. When Gartner followed up in 2019, it found that only 5% of deployments had made it to production. Most enterprises were unable to quickly scale and integrate AI into their systems.
This buildup of undeployed models can eventually affect company growth. Models need complex re-training, and each new line of business needs a new set of data. Models also have to run in a 24/7 operational environment, yet most data scientists build them in open-source tools like Jupyter Notebook or RStudio, and are often unaware of, or lack access to, the environments where production latency can be observed.
This is where ModelOps comes into the picture. It can help solve these challenges and enable organizations to scale and manage AI initiatives easily.
“Models are profoundly accountable to the business, more so than traditional software. They have to go under regulatory scrutiny and compliance. A properly operating model can dramatically change the topline performance of a particular business unit. So, integration between the business units and compliance departments is critical.” – Forbes
In the past years, enterprises have seen a surge in AI development. But, as one survey shows, “84% of C-suite executives believe they must leverage artificial intelligence (AI) to achieve their growth objectives, yet 76% report they struggle with how to scale.”
ModelOps excels in dynamic environments: a model can be adjusted easily whenever a defined condition changes. Enterprises incorporate different types of models for different business problems, and ModelOps enables them to switch or scale these systems accordingly.
ModelOps is like a bridge between data scientists, data engineers, application owners, and infrastructure owners. It fosters dynamic collaboration and improved productivity. Enterprises use ModelOps to solve challenges like:
- Regulatory Compliance – To comply with regulatory requirements, we need systematic reproduction of the training, evaluation, and scoring of each model. Model monitoring helps to enforce compliance gates and controls, ensuring that all business and regulatory requirements are satisfied.
- Siloed environments – Multiple teams are involved in a model as it goes from deployment to monitoring. Ineffective collaboration across teams can make scaling AI difficult. Teams need to come together, and ModelOps helps to create an environment where models can easily be moved from the data science team to the IT production team.
- Different models have different solutions – Enterprises will have hundreds of models for different business problems. Each model accounts for specific business process variations, unique customer segmentations, etc. ModelOps provides a single view to see workflows, auditing, performance tuning, and governance to control cost and create value.
- Complex Technology – A wide range of solutions, tools, and technologies are available for data and analytics problems. It’s difficult to manage all of them with continuous innovations. Even the most expert teams might not keep up. ModelOps makes it easier to integrate and adopt new technologies.
ModelOps use cases
Many managers have struggled to demonstrate the value of analytics because analytics solutions often don’t make it to production. ModelOps can solve this.
Here are some of the areas where ModelOps is being used extensively to overcome model deployment challenges:
- Finance – Banks have long done credit approval using statistical models, and today most operational decision-making is driven by real-time analytics. This model-based approach has helped banks reduce man-hours, but managing these complicated models at scale is difficult. The models should also be fair and robust, fostering unbiased decisions. ModelOps makes it easier to monitor models for bias or anomalies and update them accordingly.
- Healthcare – AI can improve efficiency and patient care while reducing costly administrative errors. But machine learning models have to be refreshed with current data, new KPIs, etc., and monitored for anomalies. The updated models should be readily available on different systems, for example a mobile app or a system at the lab, to keep results in sync.
- Retail – When COVID-19 hit, everything had to move online, but it was difficult to deploy and monitor AI solutions effectively. ModelOps provides the ability to monitor models and create a multilevel view of key metrics to see model performance in production. To understand areas of growth and reduce work for data scientists and IT specialists, retail companies opted for automation and standardization of ML operations. Companies like Domino’s Pizza were able to increase the efficiency of managing multiple models at scale.
There are several platforms available to guide enterprises through the model lifecycle. Some of the most commonly used ones are:
ModelOp Center
In 2016, ModelOp was founded to address the large gap between the deployment and maintenance of models. To operationalize machine learning models, the team came up with ModelOp Center, which accelerates the operationalization of models while ensuring governance and enforcing regulatory requirements.
“ModelOp Center automates the governance, management and monitoring of deployed AI, ML models across platforms and teams, resulting in reliable, compliant and scalable AI initiatives.”
ModelOp Center can help you accomplish the steps involved in a model’s life cycle:
- Register – In ModelOp Center, you register the model using the Jupyter plugin or the CLI, passing information through predefined elements: Model Source, Attachments, Schemas, and Model Platform. The model’s source code can be divided into four functions: Init, Scoring, Metrics, and Training.
You can also update the registered model whenever you want by editing schemas or adding new assets to it.
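The four-function split described above can be illustrated with a toy model source file. The function names, signatures, and the stand-in "model" here are assumptions for illustration, not ModelOp Center's exact plugin contract.

```python
# Shared state populated at init time (weights, encoders, thresholds).
state = {}


def init():
    """Init: load artifacts once, before any scoring happens."""
    state["threshold"] = 0.5


def score(record):
    """Scoring: produce a prediction for one input record."""
    probability = min(record["amount"] / 1000.0, 1.0)  # toy stand-in model
    return {"approved": probability >= state["threshold"],
            "probability": probability}


def metrics(labeled_records):
    """Metrics: compute evaluation metrics over labeled data."""
    correct = sum(score(r)["approved"] == r["label"] for r in labeled_records)
    return {"accuracy": correct / len(labeled_records)}


def train(labeled_records):
    """Training: refresh model state from new training data."""
    approved = [r["amount"] for r in labeled_records if r["label"]]
    if approved:
        state["threshold"] = min(a / 1000.0 for a in approved)
```

The point of the split is operational: the platform can call `init` once at deployment, `score` on every request, `metrics` during monitoring runs, and `train` when a retraining process is triggered.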
- Orchestrate – ModelOp Center has an MLC (Model Life Cycle) manager which automates model operations, like deployment, monitoring, and governance. The model can be deployed and monitored in production efficiently and easily. This also gives enterprises flexibility to automate only a portion of the lifecycle.
MLC manager is a framework that executes, monitors, and manages MLC processes. MLC processes are triggered through external events, like:
- Data arrival
- Manual Intervention
- Marked as – Ready for production
A few common MLC processes can be handled using the ModelOp Center: productionization, refreshing, retraining, and monitoring performance.
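The event-triggered design above can be sketched as a small dispatcher that routes external events to MLC processes. Event names and handlers are illustrative assumptions, not the MLC manager's actual interface.

```python
def retrain(model):
    """Triggered when fresh training data arrives."""
    return f"retraining {model}"


def refresh(model):
    """Triggered by manual intervention from an operator."""
    return f"refreshing {model}"


def productionize(model):
    """Triggered when a model is marked ready for production."""
    return f"deploying {model} to production"


# Map external events to the MLC process they should start.
MLC_TRIGGERS = {
    "data_arrival": retrain,
    "manual_intervention": refresh,
    "ready_for_production": productionize,
}


def handle_event(event, model):
    """Route an incoming event to its MLC process; ignore unknown events."""
    handler = MLC_TRIGGERS.get(event)
    return handler(model) if handler else None
```

Because the mapping is data, an enterprise can automate only a portion of the lifecycle by registering handlers for some events and leaving the rest to manual processes.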
- Monitor – ModelOp Center provides various metrics for comprehensive model monitoring. You can evaluate models using common metrics like F1 score and ROC AUC. Using SLA or data metrics, you can also monitor operational, quality, risk, and process performance.
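A quality metric like F1 and an SLA gate on it can be sketched in a few lines. This is a generic illustration of the monitoring idea, not ModelOp Center's implementation; the `sla_f1` threshold is a hypothetical parameter.

```python
def f1_score(y_true, y_pred):
    """F1 from raw binary labels: harmonic mean of precision and recall."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def quality_alert(y_true, y_pred, sla_f1=0.8):
    """Flag the model when its live F1 drops below the agreed SLA threshold."""
    return f1_score(y_true, y_pred) < sla_f1
```

In a monitoring loop, `quality_alert` would run on each scored-and-labeled batch, and a `True` result would trigger one of the MLC processes, such as retraining.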
- Govern – A central repository for governing each step of the model lifecycle. Governance ensures a standard representation of a model. Stakeholders can look at the production model inventory to view details, modify and improve a model, or operationalize a process.
ModelOp Center is also integrated with many development platforms, IT systems and enterprise applications. These integrations can be leveraged to extend AI investments and unlock the value of AI.
With platforms like ModelOp Center, enterprises can accelerate model deployment time and reduce business risks.
Datatron
“Building models isn’t the problem—it’s organization. We’re building Datatron to fix this, by speeding up deployments, detecting problems early, and increasing efficiency of managing multiple models at scale.”
Datatron is a one-stop solution where you can automate, optimize, and accelerate ML models. It supports a wide range of frameworks and languages, such as TensorFlow, H2O, Scikit-learn, SAS, Python, R, and Scala, and it supports both on-premises and cloud-based infrastructure. Datatron divides ModelOps activities into the following categories:
- Model Catalog – Explore models built and uploaded by your data science team, all from one centralized repository.
- Model Deployment – Create a scalable model with a few clicks and deploy using any language or framework.
- Model Monitoring – Create conditional alerts, compare model predictions, check model accuracy and check if the model is going to decay.
- Model Governance – You can easily validate your models and perform internal audits. The platform can also be used to debug and explain the model.
- Model Management – Dynamically select the best model at runtime to serve better predictions and decrease errors. The platform enforces latency limits to prevent service outages and performs A/B testing on the model sequence.
- Model Workflow – Setting up workflows can help to integrate business logic with the model’s results. For example, you can set up a pre-processing workflow where you define data sources and feature engineering processes to then input them into the model.
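The pre-processing workflow idea above can be sketched as a chain of steps feeding a model. The step names, the toy data source, and the decision model are all illustrative assumptions, not Datatron's actual workflow API.

```python
def load_records(raw_rows):
    """Parse raw rows from a hypothetical data source into typed records."""
    return [{"amount": float(amount), "country": country}
            for amount, country in raw_rows]


def engineer_features(records):
    """Derive model-ready features from parsed records."""
    return [{"amount_usd": r["amount"], "is_domestic": r["country"] == "US"}
            for r in records]


def workflow(raw_rows, model):
    """Compose loading and feature engineering into one pipeline feeding the model."""
    return [model(features) for features in engineer_features(load_records(raw_rows))]
```

A toy decision model can then be plugged in at the end, e.g. `model = lambda f: "approve" if f["amount_usd"] < 100 and f["is_domestic"] else "review"`, and swapped out without touching the pre-processing steps.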
SAS ModelOps
SAS model management software is used to streamline analytical model deployment and management. But enterprises needed help with implementation and ongoing value addition, so SAS introduced ModelOps, a combined package of model manager software and services. Some of the key features are:
- Managing analytical models
- Delivering a detailed project plan
- Assessing current models
- Implementing SAS Model Manager and providing activation support
Superwise
Superwise assures the health of multiple predictions every day. It can predict performance over time and across versions, and enables you to gain visibility even when you have no feedback.
It supports advanced incident management which helps with model monitoring and gives granular visibility of model performance.
It’s API-driven, so the solutions can be integrated easily into an existing environment. It can discover features, generate KPIs, and set thresholds by looking at the data.
Modzy
Modzy was founded in 2019 with a clear purpose: to build a world where humans and machines working together outperform either working alone.
“Modzy powers AI at a scale that couldn’t exist ten years ago, with a ModelOps platform and marketplace of ready-to-run models.”
- Modzy integrates with a wide range of tools, frameworks, data pipelines, and CI/CD systems.
- It provides a list of predefined models that you can reuse with new input data.
Benefits of ModelOps
In the previous sections, we talked about what ModelOps brings to enterprises and why they need to integrate with ModelOps to scale AI systems. Let’s explore more of these benefits:
- Accelerate Deployment – Many large enterprises find it challenging to deliver AI solutions on time. Either there’s a shortage of skilled data scientists, or it takes too much time to tune models or find problems with them. ModelOps platforms provide a single view of all models and pipelines, which helps accelerate model deployment. This way, businesses can focus more on innovation and value creation.
- Mitigate Model Drift – There can be hundreds of models, and managing them can be difficult. ModelOps helps to explain how models are derived and measure their fairness while helping to correct if there are any drifts from the defined baseline.
- AI outcome-driven insights – ModelOps helps to map model outcomes to business KPIs by generating key insights and patterns. Managers can leverage strategic enablers like automation, prediction, and optimization. This helps to create a solution that matches your business needs.
- Simpler Onboarding – ModelOps is a unified environment that leads to substantial gains while reducing time invested in model building, deployment, and management. These platforms help you onboard models and processes and let you monitor and govern them with a few clicks.
- Economic – ModelOps platforms are integrated with the cloud, which helps to optimize cloud services and AI models economically. Enterprises can choose flexible consumption of services for modelling.
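The drift mitigation point above can be made concrete with a minimal baseline check: record a feature's distribution at training time, then alert when the live distribution shifts beyond a tolerance. This is a toy mean-shift check under assumed thresholds, not a production drift detector.

```python
def mean(values):
    """Arithmetic mean of a non-empty sequence."""
    return sum(values) / len(values)


def drift_detected(baseline, live, tolerance=0.1):
    """Alert when the live mean drifts more than `tolerance` (relative)
    from the baseline mean recorded at training time."""
    base = mean(baseline)
    return abs(mean(live) - base) > tolerance * abs(base)
```

Real drift monitors compare full distributions (e.g. with population-stability or divergence measures) rather than means, but the operational pattern is the same: baseline at training, compare in production, alert on deviation.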
Features of ModelOps
We have covered some of the ModelOps tools in the previous section, and most of them have common features, as they all serve one goal – to operationalize models.
ModelOps is the key to unlocking value; it’s the connective tissue in your AI tech stack. Let’s look at some of the features and see why ModelOps matters:
- Generating Model Pipelines – With minimal human interaction and a few clicks, ModelOps platforms generate a pipeline automatically. Once the initial setup is done, the whole modelling lifecycle – preparing data, selecting models, feature engineering, hyperparameter optimization – is automated.
- Monitoring Models – Monitoring hundreds of different models can be challenging. ModelOps helps to monitor all kinds of models, watching for possible bias and helping teams resolve it.
- Deploying Models – The models built on ModelOps platforms can be integrated into any application easily. You can deploy and send models virtually anywhere.
- One-stop solution – Build, Run, and Manage Models – Data scientists build models using open-source tools like Jupyter, R, etc. Then IT professionals use a different platform to deploy these models in production. This handoff takes time and increases the time to market. ModelOps platforms rescue enterprises from this complicated and lengthy process: they are one-stop solutions where teams can build, run, and manage models easily.
Is ModelOps the same as MLOps?
There is a thin line between ModelOps and MLOps, and if you look at their architectures you might see ModelOps as an extension of MLOps. ModelOps refers to the processes of operationalizing and managing AI models in production. It can be seen as an SDLC that organizes the lifecycle flow, i.e., creating a model, then testing, deploying, and monitoring it.
ModelOps does everything MLOps does and more. It helps enterprises to get more out of their AI investments. MLOps enables data scientists and IT professionals to collaborate and communicate effectively while automating machine learning models. ModelOps is an evolution of MLOps, focused on continuous retraining, synchronized development and deployment of ML models, decision optimization, and transformational models.
“ModelOps has emerged as the critical link to addressing last mile delivery challenges for AI deployments. ModelOps is a superset of MLOps, which refers to the processes involved to operationalize and manage AI models in use in production systems.” – Modzy
ModelOps vs MLOps
It’s easy to confuse ModelOps with MLOps. To understand the differences, we need to know how exactly they help with modelling. Both are needed to create scalable AI solutions.
Let’s take a look at some of the common differences between these two:
| | MLOps | ModelOps |
| --- | --- | --- |
| Scope | Focused on the operationalization of machine learning models | Operationalization of all AI and decision models |
| Process | A continuous loop of model development, deployment, and performance monitoring | Focus on governance and full lifecycle management of models |
| Tools & platforms | Amazon SageMaker, Neptune, DataRobot, MLflow, and more | Cnvrg, Cloudera, ModelOp, Modzy, SAS |
| Aim | To create AI-enabled applications through effective collaboration among teams and stakeholders | To provide transparency into AI usage via dashboards and reporting for business leaders |
To find out more about the differences, please check ModelOp Center.
Throughout this article, we’ve learned what ModelOps is and how enterprises can use it to operationalize AI solutions. There’s a wide range of platforms and tools available that can help to create a model workflow, monitor, and govern with only a few clicks. We also identified how MLOps and ModelOps are different, but belong to the same concept.
To conclude: enterprises that create AI solutions and deploy them into production have to keep updating these models so they don’t become obsolete and can keep up with the market. Thanks to ModelOps, enterprises can automate these updates.
“By providing information and insights tailored to business leaders, ModelOps solutions address one of the most pressing issues with AI adoption today. This transparency into AI usage across the enterprise provides explainability for models in a way business leaders can understand. Bottom line: ModelOps promotes trust, which leads to increased AI adoption.” – Data Science Central