A model registry is a central repository used to version control Machine Learning (ML) models. It tracks models as they move between training, deployment, production, and monitoring, and it stores the key information associated with them, such as:
- model versions,
- and training jobs.
Because a model registry is shared by multiple team members working on the same machine learning project, model governance is one of its major advantages. The governance data tells the team:
- which dataset was used for training,
- who trained and published a model,
- what’s the predictive performance of the model,
- and finally, when the model was deployed to production.
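The governance data above can be pictured as a small record attached to every model version. Here is a minimal, stdlib-only sketch of such a record; the field names are illustrative, not any particular registry's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of the governance metadata a registry keeps per
# model version. Field names are illustrative, not a vendor's schema.
@dataclass
class GovernanceRecord:
    model_name: str
    version: int
    training_dataset: str                 # which dataset was used for training
    published_by: str                     # who trained and published the model
    metrics: dict = field(default_factory=dict)   # predictive performance
    deployed_at: Optional[datetime] = None        # when it reached production

    def mark_deployed(self) -> None:
        # Record the moment the version was promoted to production.
        self.deployed_at = datetime.now(timezone.utc)

record = GovernanceRecord(
    model_name="churn-classifier",
    version=3,
    training_dataset="customers-2021-10",
    published_by="data-team@example.com",
    metrics={"auc": 0.91},
)
record.mark_deployed()
```

With a record like this in one shared place, anyone on the team can answer the four governance questions above without digging through notebooks or chat history.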
Usually, while working in a team, different members try out different things, and only a few of those experiments are finalized and pushed to the version control tool in use. A model registry solves this: each team member can register their own model versions, and the team keeps a record of everything that was experimented with throughout the project.
This article will discuss model registry tools and the evaluation criteria for choosing one. You will also see a comparison of different model registry tools, such as:
- Neptune,
- MLflow,
- Amazon SageMaker,
- Comet,
- and Verta AI.
So let’s get started!
Evaluation criteria for choosing model registry tools
The model registry is an important part of MLOps platforms/tools. There are plenty of tools available in the market that can fulfill your ML workflow needs. Here is an illustration that classifies these tools on the basis of their specialization.
The products on the bottom right are focused on deployment and monitoring; those on the bottom left focus on training and tracking. Those at the very top aim to cover every aspect of the ML lifecycle, while those in the upper middle cover most or all of the spectrum while leaning one way or the other.
To visualize it even more precisely, let’s have a look at another image:
From the above image, it can be inferred that tools like Kubeflow and the cloud providers are the most balanced and cover every stage of ML pipeline development equally. Specialized tools like Neptune and Polyaxon sit closest to their axis, i.e., they are mainly focused on model training.
NOTE: The classification above reflects the features these tools offered at the time of writing (November 2021). Many of them have moved well beyond their original area of specialization since, so take this comparison with a pinch of salt.
However, there are some evergreen factors that are integral to determining a registry tool’s effectiveness. From my own experience, some of them are:
Installation and integration
Choosing the right model registry tool is often influenced by how it would be installed and what kind of integrations it would offer. Usually, organizations choose the tools based on their development environment. For example:
- If the organization uses AWS for all of its development and deployment, SageMaker makes a lot of sense, as there will be no compatibility issues.
- If the organization is not on AWS, tools like Neptune or MLflow can serve as the model registry.
- On the other hand, tools that are typically viewed as end-to-end, like SageMaker, are increasingly open to interoperability and to being complemented with other tools.
Integrations can be a major worry for firms committed to their current technology stack. An organization already using a particular continuous integration tool will likely prefer a model registry that blends in easily.
Ease of automation
Another requirement of a model registry tool is how easily the development team can make use of that tool.
- Some tools require you to code everything needed to store model versions.
- Some require very little coding; you just drag and drop different components to use them.
- There are also tools fully based on the concept of AutoML that do not require you to write any code to store your model versions.
AutoML tools offer the least flexibility for customization, low-code tools provide both custom and automated options, and code-first tools require everything to be written in code. You can choose a tool based on your requirements.
Updated model overview and model stages tracking
The entire purpose of a model registry tool is to provide an easy overview of all the model versions the development team has tried. While selecting a tool, make sure it provides a model overview for each version at every stage. Tracking models extends beyond development; it continues through maintenance and enhancement in staging and production as well. The whole machine learning model lifecycle, including:
- development,
- staging,
- and production,
must be tracked by the model registry tool.
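To make stage tracking concrete, here is a minimal in-memory sketch of a registry that records each version's current stage. Real tools expose this through their SDKs; the stage names and transition rules here are illustrative only:

```python
# Minimal in-memory sketch of stage tracking in a model registry.
# Stage names are illustrative; real registries define their own sets.
STAGES = ["development", "staging", "production", "archived"]

class ModelRegistry:
    def __init__(self):
        # (model name, version) -> current stage
        self._versions = {}

    def register(self, name: str, version: int) -> None:
        # New versions always start in development.
        self._versions[(name, version)] = "development"

    def transition(self, name: str, version: int, new_stage: str) -> None:
        if new_stage not in STAGES:
            raise ValueError(f"unknown stage: {new_stage}")
        self._versions[(name, version)] = new_stage

    def stage(self, name: str, version: int) -> str:
        return self._versions[(name, version)]

registry = ModelRegistry()
registry.register("churn-classifier", 1)
registry.transition("churn-classifier", 1, "staging")
registry.transition("churn-classifier", 1, "production")
```

A tool that passes this criterion gives you exactly this kind of per-version stage view, but across the whole team and with an audit trail.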
Competence in managing the model dependencies
The model registry tool must be compatible with all the dependencies your ML model needs. You should check that it supports the ML libraries, Python version, and data your model depends on. If your use case requires a special ML library and the registry tool does not support it, that tool would not make much sense for you.
Providing the flexibility of team collaboration
You should also evaluate whether you and your team can collaborate on registered models. If the model registry lets the whole team work on the same ML model, that is a strong point in its favor.
Thus, you can follow the evaluation criteria to select the best model registry tool according to your requirements.
Comparison of model registry tools
Every model registry tool has different features and performs various unique operations. Here’s how they compare:
Model registry tools
Here are a number of model registry tools that are used across the industry:
Neptune is a metadata store for experiment tracking and model registry. So registering models is one of the two key functionalities of this tool.
In general, Neptune allows you to log, compare, display, query, and organize all metadata related to ML experiments and models. It only takes a few lines of code to integrate it with your code, the API is flexible, and the UI is user-friendly but also prepared for the high volume of logged metadata.
Some of the features of Neptune’s model registry include:
- It lets you register models and model versions, along with the metadata associated with these versions. It can version model code, images, datasets, Git info, and notebooks.
- It allows you to filter and sort the versioned data easily.
- It lets you manage model stage transitions using four available stages.
- You can then query and download any stored model files and metadata.
- Additionally, it records all your metadata for machine learning model development with version control in one place.
- And it helps your team collaborate on model building and experiments by providing persistent links and share buttons for its central ML metadata store and for the table of all runs so far.
- It supports different connection modes such as asynchronous (default), synchronous, offline, read-only, and debug modes for the versioned metadata tracking.
MLflow is an open-source platform for managing the ML model lifecycle. It enables you to track the MLOps life cycle with the help of its APIs, and it provides model versioning, model lineage, annotations, and development-to-deployment transition functionality.
Some features of MLflow model registry are as follows:
- It provides chronological model lineage, i.e., which MLflow experiment and run produced the model at a given time.
- It provides predefined model stages (Staging, Production, and Archived), with each model version holding one stage at a time.
- MLflow allows you to annotate top-level models and individual model versions using Markdown.
- It offers webhooks so that you can automatically trigger actions based on registry events.
- There is also a provision for email notifications of model events.
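The webhook feature above means the registry can notify your other systems when something happens to a model. As a hedged illustration, here is the kind of event payload such a webhook might POST; the event and field names are hypothetical, not MLflow's actual webhook schema:

```python
import json

# Illustrative sketch of a registry-event webhook payload. The event name
# and fields are hypothetical, not MLflow's real schema; in practice you
# would POST this JSON to the endpoint registered for the webhook.
def build_event_payload(model_name: str, version: int,
                        from_stage: str, to_stage: str) -> str:
    return json.dumps({
        "event": "MODEL_VERSION_TRANSITIONED_STAGE",
        "model_name": model_name,
        "version": version,
        "from_stage": from_stage,
        "to_stage": to_stage,
    })

payload = build_event_payload("churn-classifier", 2, "Staging", "Production")
```

A receiving service can parse this payload and trigger a redeployment, a test suite, or a notification, which is exactly the automation the webhook bullet describes.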
Check the detailed comparison between neptune.ai and MLflow.
Developers use Amazon SageMaker for complete control of the ML development lifecycle. You can catalogue production models, associate metadata, and manage versions and approval status of models with the SageMaker registry.
First, you create a version of a model and specify the model group it belongs to. You can also use an inference pipeline to register the model with its variables and container specifications. New model versions can then be created using the AWS Python SDK, and models can be deployed straight out of the registry: the trained Machine Learning model is served with real-time inference and low latency on SageMaker endpoints. The deployed model can be monitored using the Amazon SageMaker Model Monitor feature.
Some features of the Amazon Sagemaker model registry are as follows:
- You can create a model group to solve a specific ML problem. It allows you to view all of the model versions that are associated with a model group.
- Using AWS Python SDK or Amazon Sagemaker Studio, you can view details of a specific version of a model.
- You can also associate metadata, such as training metrics, with a model and version it as a whole.
- You can approve or reject a model version within the model registry; once approved, CI/CD deployment can be carried out easily from there.
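The approve/reject workflow above acts as a gate in the CI/CD pipeline. Here is a minimal sketch of such a gate; the status values mirror SageMaker's approval statuses, but the gate logic itself is an illustration, not SageMaker code:

```python
# Hedged sketch of an approval gate like the one a registry enables:
# a CI/CD step deploys a model version only if its status is "Approved".
# The status strings mirror SageMaker's approval statuses; the gate
# function itself is illustrative.
APPROVAL_STATUSES = {"Approved", "Rejected", "PendingManualApproval"}

def deployment_action(approval_status: str) -> str:
    if approval_status not in APPROVAL_STATUSES:
        raise ValueError(f"unknown status: {approval_status}")
    return "deploy" if approval_status == "Approved" else "skip"
```

A pipeline step would call `deployment_action` with the version's current status and proceed to deployment only when it returns `"deploy"`.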
Developers can use the Comet platform to manage machine learning experiments. It allows you to version, register, and deploy models using its Python SDK.
Comet keeps track of model versions and the experiment history of each model, and you can check the detailed information of every version. It also helps you maintain your ML workflow more efficiently through model reproduction and optimization.
The feature-rich Comet has various functionalities for running and tracking ML model experiments, including:
- Comet allows you to easily check the history of evaluation/testing runs.
- You can easily compare different experiments using the Comet model registry.
- It allows you to access the code, dependencies, hyperparameters, and metrics within a single UI.
- It has in-built reporting and visualization features to communicate with team members and stakeholders.
- It lets you configure webhooks and integrate the Comet model registry with your CI/CD pipeline.
May be useful: check the detailed comparison between neptune.ai and Comet.
You can use the Verta AI tool to manage and operate models in one unified space. It provides an interactive UI where you can register ML models and publish their metadata, artifacts, and documents. To manage the end-to-end experiment, you can connect the model to the experiment tracker. Verta AI also offers version control solutions for ML projects.
Additionally, it enables you to keep track of changes made to data, code, environments, and model configuration. With access to the audit log, you can examine the model's dependability and compatibility at any time. You can also create an approval sequence appropriate for your project and integrate it with your ticketing system of choice.
Some of the main features of Verta AI’s model registry are:
- It enables end-to-end information tracking such as Model ID, description, tags, documentation, model versions, release stage, artifacts, model metadata, and more, which helps in selecting the best model.
- It works on container tools like Kubernetes and Docker and is integrable with GitOps and Jenkins, which helps in automatically tracking model versions.
- It provides access to detailed audit logs for compliance.
- It offers a Git-like environment, which makes it intuitive to use.
- You can set up granular access control for editors, reviewers, and collaborators.
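The granular access control mentioned above can be thought of as a mapping from roles to allowed actions. Here is a small sketch; the role names mirror the editors, reviewers, and collaborators in the bullet, but the permission mapping is hypothetical:

```python
# Illustrative sketch of granular access control on a registered model.
# Role names follow the bullet above; the permissions assigned to each
# role are a hypothetical example, not Verta AI's actual policy model.
PERMISSIONS = {
    "editor": {"read", "write", "transition_stage"},
    "reviewer": {"read", "approve"},
    "collaborator": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles get no permissions at all.
    return action in PERMISSIONS.get(role, set())
```

A registry enforcing rules like these ensures, for example, that a reviewer can approve a version but cannot overwrite its artifacts.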
After reading this article, I hope you now know what model registry tools are and the different criteria that one must look for while selecting a model registry tool. To offer a practical perspective, we also discussed some of the popular model registry tools and compared them with each other in several aspects. Now, let’s wrap the article with a few key takeaways:
- A model registry versions models and publishes them into production.
- Before selecting a model registry tool, you must evaluate each tool against your requirements.
- Model registry evaluation criteria can range from the capability to monitor and manage the different ML model stages and versions to its ease of use and pricing.
- You may refer to the highlighted features of different model registry tools to get a better idea of that tool’s compatibility with your use case.
With these points in mind, I hope your model registry tool search will be much easier.