
Best Tools for ML Model Governance, Provenance, and Lineage

14th November, 2023

ML software development is complex: building an ML model is one thing; improving and maintaining it is another. If you want your machine learning models to be robust, compliant, and capable of producing reproducible results, you must invest time and money in quality model management.

Model governance, model provenance, and model lineage tools help you do just that by tracking model activity, recording all changes to the data and the model, and outlining best practices for data management and disposal.

In this post, let us discuss what these tools are and how to choose the best ones. While these three practices serve different purposes, they have a lot in common, so a tool that is good for, say, model governance is usually great for the other two as well.

I will guide you through some of the most popular tools for model governance among developers and explain which one you should choose based on your particular use case. 

What is model governance?

Model governance, as you might guess from the term, is a set of practices and techniques for controlling how a model is developed and implemented. Any ML model is bound to comply with certain expectations: it should meet technical requirements, be legally compliant, and not raise any ethical concerns.

Why model governance? 

It is quite common that after a model is deployed, its results in the real world differ considerably from what you were expecting. To understand where things went wrong and fix them quickly, you need model governance.

You are probably familiar with the term ‘black box’ in the context of ML models, especially deep learning models. They are often described as mysterious and impossible to debug. However, once you have a catalog of every modification that was ever made, it becomes much easier to ensure the model functions properly.

For example, a common problem in the last few years, one that has invited a lot of backlash from the community, is models producing biased results when fed biased data. The Amazon AI hiring system is often cited as an example of biased AI: the system penalized resumes that included the word ‘women’s’, as in ‘women’s chess club’, because historically, men had occupied the majority of positions.

If you don’t want your company to end up on the ‘blacklist’ together with Amazon, Google, and Facebook, all of which have been widely criticized for discriminatory models, the ability to track data drift, anomalies, and biases can be very beneficial.

Moreover, biases are not the only kind of problem your model may face. Another challenge is guaranteeing the security of a model, especially in enterprise-level companies. If a model is accidentally exposed to another department inside your company or to third parties, the consequences are hard to predict: the model can be tampered with, which puts the enterprise at risk.

Benefits of model governance

Model governance has apparent benefits. However, it is not always easy to implement from scratch. You will need to review the ML team’s workflows first to evaluate effectiveness and cost demands. The most important thing about model governance implementation is consistency: you have to apply model governance across all models and departments, not just in a few business units. Standardization is the key to effective model governance.

What is model provenance?

Model provenance is tightly connected to the model governance process and describes the origin of the model and the processing steps that were applied to it. 

Why model provenance?

Usually, before you start working with an ML model, you have to prepare the training data, which goes through a long transformation process along the machine learning development pipeline. The idea of data provenance is to keep track of every transformation step: where the data came from, who altered it, and in what manner. Data acquisition, merging, cleaning, and feature extraction all fall under data provenance.

Benefits of model provenance

Models go through a lot of changes before the final version is deployed. The process usually involves experimenting with different ML techniques and architectures to get the best results. You can’t always predict whether your next approach will be fruitful, and it can be hard to get back to the point of optimal performance without the model’s checkpoints. Model provenance tools help you track all the inputs (including data), hyperparameters, dependencies, and more.

Documenting all of this by hand can be a solution, but it is not very convenient for large projects. Moreover, the human factor can always put your model provenance at risk: developers often forget to record something small, such as metadata notes, and that negligence can result in substantial monetary losses.

Therefore, it is worth investing in automated solutions in the form of model provenance tools. They are particularly helpful for companies because they enhance visibility: you have a map of your model at hand and can forget about ‘visibility debt’.
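
To make this concrete, below is a minimal, tool-agnostic sketch of what automated provenance capture can look like: each transformation step records where its data came from, what it produced, and which code version ran it. The names here (`ProvenanceStep`, `track_step`, the file paths) are made up for illustration and are not the API of any particular tool; it assumes you run it inside a git repository and that the referenced files exist.

```python
import hashlib
import json
import subprocess
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


def file_sha256(path: str) -> str:
    """Fingerprint a data file so any later change to it is detectable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


@dataclass
class ProvenanceStep:
    """One pipeline step: what went in, what came out, and who/what produced it."""
    step_name: str
    input_files: dict    # path -> sha256 of the consumed data
    output_files: dict   # path -> sha256 of the produced data
    code_version: str    # git commit that performed the transformation
    executed_by: str
    executed_at: str


def track_step(step_name: str, inputs: list, outputs: list, user: str) -> ProvenanceStep:
    """Record a single transformation after it has run."""
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    return ProvenanceStep(
        step_name=step_name,
        input_files={path: file_sha256(path) for path in inputs},
        output_files={path: file_sha256(path) for path in outputs},
        code_version=commit,
        executed_by=user,
        executed_at=datetime.now(timezone.utc).isoformat(),
    )


# Append each step to a provenance log as the pipeline runs.
step = track_step(
    step_name="clean_raw_events",
    inputs=["data/raw_events.csv"],
    outputs=["data/clean_events.csv"],
    user="data.engineer@example.com",
)
with open("provenance_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(step)) + "\n")
```

A real provenance tool does the same bookkeeping automatically and stores it in a queryable backend instead of a local JSONL file.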

What is model lineage?

Model lineage is another technique for enhancing the visibility of your model: it keeps a record of your model’s history. If you use automated tools, which are highly recommended for routine tasks, the records will be generated automatically each time a new version of the model is trained.

The information that model lineage tools let you keep includes the kind of data and algorithms used to build the model, the pipeline used for training, and the chosen parameters.
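
As a rough illustration (not the schema of any specific product), such a lineage record can be as small as a structured entry written every time a new model version is trained, tying that version to its dataset, code, pipeline, and configuration. All names and values below are placeholders:

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Optional


@dataclass
class ModelLineageRecord:
    """Ties one trained model version to everything that produced it."""
    model_name: str
    model_version: str
    parent_version: Optional[str]   # the version this one was iterated from, if any
    dataset_version: str            # e.g. a dataset snapshot tag or hash
    training_pipeline: str          # identifier of the training pipeline or script
    code_commit: str                # git commit of the training code
    hyperparameters: dict = field(default_factory=dict)
    metrics: dict = field(default_factory=dict)


record = ModelLineageRecord(
    model_name="churn-classifier",
    model_version="1.4.0",
    parent_version="1.3.2",
    dataset_version="customers-2023-10-snapshot",
    training_pipeline="pipelines/train_churn.py",
    code_commit="9f2c1ab",
    hyperparameters={"max_depth": 8, "learning_rate": 0.05},
    metrics={"roc_auc": 0.91},
)

# One line per trained version gives you a searchable history of the model.
with open("model_lineage.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

Because each record points to a parent version, you can walk the chain backwards and reconstruct how the current model evolved.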

Why model lineage?

ML model development is a field that requires experimentation. It usually takes weeks, if not months, to come up with the right combination of model configuration and hyperparameters. Usually, researchers work in iterations, keeping what worked best in the previous version of the model and altering what needs to be changed. Moreover, there are usually many training datasets involved that can change along the ML development pipeline.

When you finally settle on the final version of the model, it can be hard to track down which alterations actually contributed to its success. Understanding this is important not only for research purposes but also for keeping the model’s results reproducible over time. In addition, researchers usually work as part of a team and sometimes even collaborate with other teams and departments. Keeping records in spreadsheets is not very sustainable and doesn’t scale well.

When the need for collaboration arises, it is always better to use version control tools for model traceability. Knowing the history of the model makes the experiment results reproducible, which is highly valued in both business and academia. Model lineage tools make it possible to trace the relationships between a model and its components, including experiments, datasets, containers, etc. They represent the associations between the artifacts and the core model and show the structure in an intuitive, visual way.

Benefits of model lineage

Model lineage is essential for any company’s transparency and traceability. Today, your ML model cannot simply be a ‘black box’. For example, if third parties raise legal or ethical concerns, you will be able to share all the nitty-gritty details of the model: you should be able to trace back to the point where the questionable parts were introduced and by whom, and be able to explain and fix them.

Moreover, model lineage is crucial in science and business. The results that your model shows should be reproducible. Your client or your colleagues should be able to run the model and get the same result. It is much easier to build and deploy a stable model when you have all the data about its history and development.

May be useful

Top Model Versioning Tools for Your ML Workflow

How to choose the right tool 

Now that the difference and similarities between model governance, model provenance, and model lineage are clear, let us discuss how to choose the right tool for your company. 

If you have decided to adopt a model governance tool, the first thing to do is not to reach out to software vendors; in fact, that is the last thing you should do. Here is what you can do instead, step by step:

  • Model governance adoption: Take your time and evaluate what outcome your organization is trying to achieve. The best answer is a concrete one, for example, ‘Deliver quality data to increase revenue by X thousand dollars’. Your milestones should be precise and measurable; only then will you be able to monitor the efficiency of your model governance adoption after some time.
  • Operating model discovery: An operating model is a tool that helps you outline the roles, responsibilities, and business terms around model governance. For example, it establishes data owners, the individuals who decide how the company should use the data, and data stewards, who are responsible for data collection and processing. You will also need to develop a set of data governance policies, i.e., the rules your team must follow.
  • Selection of the best tools: The best governance tool will automate as much as possible and provide maximum customization, because every company is different. Don’t be afraid to talk to vendors and ask them to demonstrate their solutions and highlight essential features. Come to the meeting prepared: bring your goals and requirements and ask how the product will help you fulfill them.

Often you will have to compromise. For example, a solution that contains every feature you need and more might be too expensive. In this case, consider how important each feature is for your ML engineering team. It is helpful to have the features ranked in advance: for some teams, it is crucial that the solution supports the cloud; for others, not really. Just because everyone is working in the cloud doesn’t mean you have to.

A great way to prioritize features is to ask yourself and your team whether you often perform the tasks you want to automate. If the answer is no, perhaps the cost of doing them manually is lower than the cost of automation.

Top tools for model governance, model provenance, and model lineage

Since model governance, model provenance, and model lineage are tightly interconnected, here is a unified list of tools that help ML teams with these practices.

1. DataRobot

You can easily set up model governance from the start with DataRobot | Source

DataRobot is known to many as a platform that lets business analysts build predictive analytics solutions without in-depth programming knowledge. One of DataRobot’s key features is AutoML, which generates models quickly and easily.

However, DataRobot has also developed products for MLOps that can be used together with the DataRobot AI Cloud platform. DataRobot MLOps simplifies model deployment and monitors ML development at every stage of the life cycle. Here are the main features of this tool:

  • Roles and responsibilities: DataRobot has the necessary functionality to establish model governance from the start and define clear roles within the model lifecycle. For example, you can assign a production model manager, model administrator, model validator, etc. You can add descriptions to each role, including required qualifications or other requirements. Each user can be assigned multiple roles. 
  • Access control: You may want to establish different levels of access control between different departments or even team members. This will allow you to protect the model environment and make the development process more controllable. With DataRobot, you can quickly implement limitations using role-based access control.
  • Audit logs: DataRobot has an automated tool that will log all the changes to avoid missing anything important. Secure logging is necessary to comply with legal regulations. It will allow you to track down every change in the system and understand when it was made and by whom, which makes troubleshooting easier. 
  • Annotations: It can be challenging to understand the context of changes by simply judging from log recordings. That is why the ability of users to leave notes about their motivations is crucial. In DataRobot, users can easily add annotations to their actions for better interpretability and transparency. 
  • Model lineage: When you have the history of your model’s evolution, it becomes easy to update and maintain it. DataRobot allows developers to keep track of model history, including model artifacts and changelogs.
  • Traceable results: The results your model produces should always be attributable back to the model version. Records of the request data and the response values returned for those requests are essential. If you want to maintain the traceability of your model, especially as you continue to update it, use DataRobot’s built-in tools for model response tracking.
  • Production model lifecycle management: DataRobot can assist you at every stage of production model lifecycle management. You can use it for model retraining: the tool automatically uncovers issues with model performance and begins a testing process before the model goes into production. It also allows you to place a model in warm-up mode to observe its performance in real-world conditions.

Read also

Best DataRobot Alternatives for Model Registry

2. Dataiku

Dataiku is a one-stop platform for data processing, analysis, and machine learning. It allows you to create, share, and reuse applications that use data and machine learning to extend and automate decision-making.

Dataiku also positions itself as a powerful model governance tool that helps you manage risks and ensure compliance with the aid of advanced permissions management, SSO, and LDAP integration. Several features make Dataiku one of the preferred tools for model governance:

  • Monitoring & drift detection in MLOps: Dataiku can monitor the model’s pipeline for you to make sure that everything goes as planned. If any anomalies are detected, it alerts the engineer about the issue. It automatically checks that scoring data and training data remain consistent, for reliable results.
Dataiku has in-built tools for monitoring and drift detection | Source
  • Automatic model documentation: Dataiku makes model documentation easier with its automatic model documentation generator that uses a standard template. It also keeps track of model versions to mitigate any discrepancies in collaborative model development.
Dataiku is a tool for automated documentation generation and version control | Source
  • Permissions management: Dataiku allows you to control model accessibility. Different user roles correspond with different permission rights. Team members can belong to more than one user group and have different rights on different projects.  
Permission management with Dataiku | Source
  • SSO and LDAP: Dataiku offers different options for user authentication. You can integrate single sign-on (SSO) or directory services, including LDAP-based services like Microsoft Active Directory. Proper authentication is essential when you give people access to your critical systems and is necessary for your company to comply with internal and external regulatory controls.
  • Audit: Dataiku creates logs for all user activity and includes a rich audit trail of object changes in the system. If any problems arise, it will be easy for the team to find the source of the problem, because the log contains data about all user actions, including the user’s ID, IP address, and authentication method.
Audit trail for increased traceability with Dataiku | Source
  • Secure API Access: Dataiku enhances flexibility by providing API access control. This will allow you to design endpoint services for your business apps, such as risk assessment scoring. API keys and multi-level authentication secure API integrations.
Dataiku provides secure API access | Source

3. Domino Data Lab

Streamline model governance with Domino Data Lab platform | Source

Domino Data Lab is a feature-rich hub for MLOps and model monitoring. It helps businesses to scale successfully by supporting enterprise-wide data science safety and security. 

Using Domino Data Lab, data scientists can easily collaborate on several projects at once within the same platform. Domino Data Lab doesn’t limit them in terms of data, tools, or languages, so they don’t have to suffer from infrastructure friction.

  • Unified model monitoring: Domino Data Lab makes it easy to monitor different aspects of your machine learning model in one interface. It automatically detects changes, tracks performance, and keeps a record of all user activity. Domino Data Lab can even troubleshoot potential problems before the business is seriously impacted.
  • Enhanced reproducibility: Since Domino Data Lab automatically tracks changes in code, tools, frameworks, and packages, it becomes much easier to show reproducible results. You can provide the client-side team with all the information they need to deploy the model on their side or update it in the future.
  • Enterprise-level security: Machine learning models have a complex architecture, which makes it harder to protect them against vulnerabilities and adversarial attacks. Domino Data Lab provides an environment where every data science operation is protected by customizable permission rights, single sign-on (SAML or OIDC), and credential propagation.
  • Auditable Environment: Domino Data Lab makes it easy to monitor how models transform over time. It keeps track of the history of changes and enables you to trace back any stage of the pipeline to meet necessary regulations. 
  • Powerful Integrations: The tool integrates with many popular tools for collaboration, data science, and project management, such as Jira, Google Cloud, AWS, NVIDIA, and so on.

4. Datatron

Datatron is a centralized platform for AI ModelOps and model governance. It helps you automate model deployment, monitoring, and governance, and standardize these processes across different departments.

Here are some of the critical features of Datatron for model governance:  

  • Dashboard: The Dashboard tab gives you an overview of the health of an ML model without having to dive deep into the details. You can set up custom metrics and parameters for model investigation. For example, you can monitor CPU usage, memory, and more.
Monitor model health with Datatron dashboard | Source
  • Health score: This feature automatically evaluates the overall health of the model and displays it in an easy-to-understand way.
Datatron scores the health of your model | Source
  • ML Gateways: The gateways are there to improve the orchestration of your models and data in complex, multi-component projects. They are designed to scale quickly and support more use cases, so your project remains compliant even during periods of active growth.
  • Bias, Drift, Performance, Anomaly Detection: Biases, drift, and anomalies, if left undetected, can cause serious problems. Datatron has powerful features for bias monitoring. Its anomaly detection mechanism identifies potential issues by gathering data from multiple sources, analyzing model and system logs, and alerting engineers. It also catches data drift, i.e., changes in the data that occur after the model is deployed.
You can easily conduct anomaly detection with Datatron | Source
  • Custom KPIs: Every enterprise pursues different goals when implementing model governance strategies. This feature allows companies to define their own KPIs, set thresholds and alerts. The central governance dashboard displays the KPIs for each user.
Datatron allows companies to define their own KPIs | Source
  • Alerts: Datatron automatically notifies users if something goes wrong, for example, if the model is falling behind performance thresholds or an anomaly in data is identified. It’s possible to select from different communication channels like email, Slack, or other messengers.
Datatron sends alerts if the model doesn’t meet predefined KPIs | Source
  • Explainability: Model governance and explainability go hand in hand. Datatron’s functionality for explainability management includes monitoring models, tracking and visualizing their insights, and using a payload logging endpoint to capture scoring requests. This allows ML teams to increase stakeholders’ and users’ trust in their products.
  • Jupyter support: An exciting feature of Datatron is that it supports the direct import of Jupyter notebooks. They can be run by data scientists alongside current models, which helps conduct experiments faster and streamline the validation of hypotheses.

5. Neptune

Example custom dashboard for artifacts | See in Neptune

neptune.ai is a platform that enables data scientists and machine learning engineers to manage all their models and build, log, store, and display their MLOps metadata in a single place. It also provides tools for experiment tracking and model registry that can be useful to teams that work in research and production and run many experiments. 

Neptune hasn’t been built specifically for model governance, but it supports multiple model governance features. If you’re looking for a tool for a small or medium-sized team that wants to enhance control over model development, it’s a good choice, since it is lightweight and intuitive to use.

Here are some features that might make you want to consider Neptune for model governance: 

  • Know how the model was built: When you are putting a model in production, you want to know exactly how it was built so that you can be confident in the reproducibility of its results. Neptune lets you record exactly which artifacts were used and where they are, and enables future users to re-run the model if needed (see the sketch after this list).
  • Package, test, and review new model versions: When you get a new model from a data scientist, you want to be able to package, test, and review it quickly without wasting time adjusting it for production usage. The tool automates the process of packaging and testing the model and provides version control tools for easy deployment.
  • Have full model lineage and traceability: When someone from legal, compliance, or business asks for a production model audit, you want to have full model lineage: which data it was trained on, who built it, and when it was updated. Neptune lets you confidently present this information to business stakeholders.
  • Build an MLOps platform: When you’re building an MLOps platform for your data science team, you want to add components for experiment tracking, model registry, and data versioning, so that the model-building pipeline is manageable, reproducible, and compliant. You can do all of this with Neptune and spend less time fine-tuning it than you would with an enterprise-grade solution.
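
As a rough sketch of what this looks like in code, here is a minimal example of logging run metadata and tracking a dataset artifact with the neptune client library. The project name, file path, and values are placeholders, and the exact calls may differ between client versions, so treat it as an outline rather than a definitive recipe.

```python
import neptune

# Assumes the NEPTUNE_API_TOKEN environment variable is set and the
# project below exists; both the project name and values are placeholders.
run = neptune.init_run(project="my-workspace/churn-model")

# Record the configuration this model version was trained with.
run["parameters"] = {"max_depth": 8, "learning_rate": 0.05}

# Track the training data as an artifact so its location and hash are kept.
run["datasets/train"].track_files("data/train.csv")

# Log evaluation metrics for this version.
run["metrics/roc_auc"] = 0.91

run.stop()
```

Everything logged this way shows up in the run’s metadata, which is what makes the lineage and audit questions above answerable later on.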

See Neptune in action

Check an example project

6. Weights & Biases

Weights & Biases is a feature-rich tool for model governance, model lineage, and model provenance | Source

Weights & Biases is a solution that helps ML teams to train their models in parallel with different combinations of hyperparameters. 

It is also a useful deep learning experiment tracking tool. You just need to write the code for model training and tune the hyperparameters; the solution helps you with the rest, managing the movement of production nodes for batch and real-time evaluations. There are several reasons why Weights & Biases is worth considering for model governance (a minimal experiment-logging sketch follows the list below):

  • Personalized shareable dashboards: In Weights & Biases, you can easily create project dashboards and share them with other stakeholders. You can even schedule dashboard updates so that everyone involved in project development can see the results of your work and track KPI compliance. 
  • Simple collaboration: This tool is made to fit the needs of enterprise-level projects. If you use hundreds of different datasets and go through the steps of the data transformation process over and over again, it will still be easy to find what you’re looking for afterwards, for both you and your team.  
  • Permission control: W&B allows you to assign control rights to different team members and manage them in a few clicks. These rights cover the creation of projects, their modifications, read-only mode, and more. Users can belong to multiple groups, and their access rights can differ based on the project and/or organization. 
  • Auditability: Thanks to a transparent and easily manageable environment, it is always possible to track the changes across different projects and maintain ML model lineage. In the case of an internal or external audit, you will not have to worry about anything because all the data you need is stored and backed up securely.
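
To give a flavor of the experiment-tracking side, here is a minimal logging sketch with the wandb client. The project name, configuration, and metric values are placeholders, and governance features such as dashboards and permissions are managed in the W&B UI rather than in this code.

```python
import wandb

# Assumes you have already authenticated with `wandb login`;
# the project name and values below are placeholders.
run = wandb.init(
    project="churn-model",
    config={"max_depth": 8, "learning_rate": 0.05},
)

# In a real project these metrics would come from your training loop.
for epoch in range(3):
    wandb.log({"epoch": epoch, "val_auc": 0.85 + 0.01 * epoch})

run.finish()
```

Each run logged this way appears in the project dashboard with its configuration and metrics, which is what you would later share with stakeholders or auditors.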

Comparison between Weights & Biases and Neptune

7. Amazon SageMaker 

Amazon SageMaker can be used for production monitoring and experiment management | Source

Amazon SageMaker is a platform that helps data scientists to manage their machine learning projects. Here you can create and train machine learning models in the cloud and deploy them to your environment. 

Amazon SageMaker also equips its users with tools for ML model governance. The tool that must be mentioned when we’re talking about model governance is Amazon SageMaker Model Monitor.

This is a relatively new feature of Amazon SageMaker. Model Monitor can be helpful for your project for a couple of reasons (a simplified setup sketch follows the list below):

  • Production monitoring: The tool continuously monitors your model’s performance and tracks and records any changes, deviations, and data drift. It is helpful for experiment tracking and research but can also be applied to the development and deployment of production models. Model Monitor will alert you if any immediate action is needed.
  • Predictions: After models have been deployed, their performance can differ from what you expected. In this case, it is important to have a tool at hand that captures the requests sent to your endpoints and the responses they return, so you can analyze what went wrong after the fact.
  • Experiment management: Model Monitor can analyze the data collected during an ML model run and compare it against other training experiments.
  • ML model optimization: Amazon SageMaker automatically optimizes ML models for deployment. SageMaker Edge Manager can make your model run up to 25 times faster, depending on the hardware you choose. It also allows you to optimize models built with different frameworks, such as DarkNet, Keras, and PyTorch.
  • Integrations: You can integrate SageMaker Edge Manager with your existing apps using APIs. It supports Java, Go, Python, Ruby, and other common programming languages.
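
To give a flavor of how Model Monitor is wired up with the SageMaker Python SDK, here is a heavily simplified sketch that profiles a training dataset to produce a monitoring baseline. The IAM role, S3 paths, and instance type are placeholders, and a full setup would also attach a monitoring schedule to a deployed endpoint.

```python
from sagemaker.model_monitor import DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

# Placeholders: substitute your own IAM role ARN, bucket, and instance type.
monitor = DefaultModelMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Profile the training data to generate the statistics and constraints that
# captured production traffic will later be compared against.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/churn/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/churn/monitoring/baseline",
    wait=True,
)
```

The resulting statistics and constraints files are what Model Monitor uses to flag deviations and data drift in the captured endpoint traffic.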

Learn more

Comparison between Amazon SageMaker and Neptune

Conclusion

Choosing a model governance tool can be easy if you approach it step by step. First, you need to define your goals and expectations to select a tool with functionality that suits your business. Only after this step is complete should you start reaching out to vendors and testing their tools.

Comparison table: DataRobot, Dataiku, Domino Data Lab, Datatron, neptune.ai, Weights & Biases, and Amazon SageMaker, rated on ease of setup, integrations support, enterprise-level scalability, model versioning, ease of auditing, and documentation and community.

If you want to learn more about model governance, lineage, and provenance, check also

Machine Learning Model Management: What It Is, Why You Should Care, and How to Implement It

Best Data Lineage Tools

Doing ML Model Performance Monitoring The Right Way
