
Best Kubeflow Metadata Alternatives You Need to Check

Kubeflow is an open-source, standardized platform for deploying and managing the entire lifecycle of enterprise ML applications. Because ML systems span many applications, platforms, and resource requirements, they can be hard to maintain. Kubeflow provides tools and frameworks that make ML projects easier to develop, deploy, and manage.

Kubeflow Metadata helps data scientists track and manage the huge amounts of metadata produced by their workflows. Metadata is information about runs, models, datasets, and artifacts (files and objects in the ML workflow). 

Examples of Experiment and Model Training Metadata: 

  • Metrics
  • Hyperparameters
  • Learning curves
  • Training code and configuration files
  • Predictions (images, tables, etc)
  • Diagnostic charts (Confusion matrix, ROC curve, etc)
  • Console logs
  • Hardware logs

Examples of Artifact Metadata: 

  • Paths to the dataset or model (s3 bucket, filesystem)
  • Dataset hash
  • Dataset/prediction preview (head of the table, snapshot of the image folder)
  • Description
  • Feature column names (for tabular data)
  • Who created/modified
  • When last modified
  • Size of the dataset

Examples of Trained Model Metadata: 

  • Model binary or location to your model asset
  • Dataset versions 
  • Links to recorded model training runs and experiments 
  • Who trained the model
  • Model descriptions and notes
  • Links to observability dashboards 

If you don’t store all this experimental metadata, you can’t achieve reproducibility or compare experiment results. Let’s explore why that is, and then we’ll take a look at Kubeflow Metadata and a few alternative tools.

Why you need to store metadata from ML experiments

Creating a machine learning model is a bit like a scientific experiment: you have a hypothesis, you test it using various methods, and then pick the best method based on data. With ML, you start out with a hypothesis about which input data might produce accurate results and train multiple models using various features. 

Going back and forth between error analysis and domain experts, you build new features meant to increase performance. However, there’s no surefire way to tell whether the new model is being fairly compared against the previous version – unless you store metadata. 

Storing machine learning experiment metadata helps you with comparability – being able to compare results between experiments. When one project involves several large teams of data scientists, it becomes difficult to use the same training and test set splits, or the same validation schemes.

For example, individual data scientists within a team could be taking different ML approaches to the problem, with their own libraries and languages – with these differences, you need a standardized method to collect and store experiment metadata if you want to compare results. 

Another great thing about storing metadata is reproducibility – being able to re-run your algorithm and achieve the same results. Reproducibility ensures data consistency, reduces errors, and removes ambiguity when moving projects from development to production. Even if you lose some trained model objects or the data changes, with stored metadata you can retrain the model and deploy it to production. 

👉 Read about ML Experiment Tracking (What It Is, Why It Matters, and How to Implement It) and explore 15 Best Tools for Tracking ML Experiments

Setting up and using Kubeflow Metadata 

Kubeflow Metadata is installed as an application in your Kubernetes cluster. First, download the Kubeflow manifests repository: 

git clone https://github.com/kubeflow/manifests

Then, run these commands in order to deploy the Metadata component services: 

cd manifests/metadata
kustomize build overlays/db | kubectl apply -n kubeflow -f -

To install the Metadata SDK in your project: 

pip install kubeflow-metadata

Now, let’s try out the Metadata SDK in an example Jupyter Notebook. First, set up a Jupyter notebook with Kubeflow using the following guide: 

Getting Started with Jupyter Notebooks on Kubeflow

Next, download the demo notebook from GitHub and save the notebook code as a local file called demo.ipynb. 

Back in your Jupyter notebook server in the Kubeflow UI, upload the demo.ipynb notebook. Then, click its name to open it in the Kubeflow cluster. Finally, run the steps in the notebook in order to install and use the Kubeflow Metadata SDK. 

The resulting metadata will be visible in the Kubeflow UI, where you can browse logged artifacts and their corresponding details through the Artifact Store. 

Within the Kubeflow UI, navigate to the central dashboard and click the Artifact Store. 

Here, you can view a list of all the metadata items logged by your workflow. Clicking an item’s name reveals more details. 

When you run the demo.ipynb notebook, here’s what the items should look like: 

The Artifacts screen will include the following: 

  • MNIST (model metadata item)
  • MNIST-evaluation (metrics metadata item)
  • Mytable-dump (dataset metadata item)

Clicking on each of these will expand into more detail. 

The Kubeflow Metadata SDK has these predefined types for describing your ML workflows (a usage sketch follows the list): 

  • Dataset: metadata for a dataset, both input and output 
  • Execution: metadata for a run/execution in an ML workflow
  • Metrics: metadata for the metrics of an ML model
  • Model: metadata for an ML model 
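
As a rough illustration, here is a condensed sketch of how the demo notebook uses these types. It is based on my reading of the kubeflow-metadata SDK, so treat the host/port, URIs, and values as placeholders rather than a definitive recipe:

from kubeflow.metadata import metadata

# Connect to the Metadata gRPC service deployed in the kubeflow namespace
# (the service name and port are assumptions based on a standard install).
store = metadata.Store(grpc_host="metadata-grpc-service.kubeflow", grpc_port=8080)
ws = metadata.Workspace(store=store, name="demo-workspace",
                        description="workspace for the demo notebook")

run = metadata.Run(workspace=ws, name="run-1")
execution = metadata.Execution(name="training-1", workspace=ws, run=run)

# Log an input dataset, then an output model and its evaluation metrics
dataset = execution.log_input(metadata.DataSet(
    name="mytable-dump", uri="file://path/to/dataset",
    version="v1.0.0", owner="you@example.com"))

model = execution.log_output(metadata.Model(
    name="MNIST", uri="gcs://my-bucket/mnist",
    model_type="neural network", version="v0.0.1",
    hyperparameters={"learning_rate": 0.5}))

metrics = execution.log_output(metadata.Metrics(
    name="MNIST-evaluation", uri="gcs://my-bucket/mnist-eval.csv",
    data_set_id=str(dataset.id), model_id=str(model.id),
    metrics_type=metadata.Metrics.VALIDATION,
    values={"accuracy": 0.95}))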

However, although Kubeflow Metadata provides considerable functionality for logging metadata, it doesn’t do everything, and lacks features like: 

  • experiment management
  • notebook versioning and diffing
  • team collaboration
  • advanced UI features
  • integrations

Let’s look at a few tools that provide a more comprehensive set of functionalities for ML metadata needs. 

Kubeflow Metadata alternatives

Neptune


Neptune is a lightweight metadata store for MLOps, built for team collaboration and scalability. As one of the best experiment management tools on the market, it offers a wealth of experiment tracking features – logging metrics, data versions, hardware usage, and more – along with easy integration into your workflow. Beyond tracking, you can retrieve and analyze experiments and easily share them with team members. Neptune’s flexibility and clean, intuitive user interface place it above several competing tools. 

Neptune’s three major features are:

  • Experiment management: track, tag, filter, group, sort, and compare your team’s experiments 
  • Notebook versioning and diffing: compare two notebooks, or two checkpoints of the same notebook, side by side 
  • Team collaboration: add comments, mention teammates, compare experiment results, and assign statuses

To set up Neptune: 

  1. Sign up for a Neptune account. It’s free for individuals (non-organizational use), and you get a generous 100 GB of storage. 
  2. Create a project. In your Projects dashboard, click “New Project” and fill in the required information. Pay attention to the privacy settings!
Neptune new project
  3. Install the Neptune client library:
pip install neptune-client
  4. Add logging to your script:
import neptune.new as neptune

run = neptune.init(project='Me/MyProject')
params = {'lr': 0.1, 'dropout': 0.4}
run['parameters'] = params
# training and evaluation logic
run['logs/metrics/test_acc'].log(0.84)

Your metadata database and dashboard will look like this:  

Metadata database

You can store the experiment model and dataset metadata in the metadata database so it can be logged and queried efficiently. 

Product dashboard

This dashboard allows you to visualize the metadata, models, and datasets in your metadata database. 
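
Once runs are logged, you can also fetch them back programmatically for analysis. Here is a minimal sketch assuming the neptune.new client API and the hypothetical 'Me/MyProject' project used above:

import neptune.new as neptune

# Download the runs table of a project as a pandas DataFrame
project = neptune.get_project(name='Me/MyProject')
runs_df = project.fetch_runs_table().to_pandas()

# Reopen an existing run by its ID to read back logged metadata
# ('PROJ-1' is a placeholder run ID)
run = neptune.init(project='Me/MyProject', run='PROJ-1')
params = run['parameters'].fetch()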

TensorFlow Extended ML Metadata 

TensorFlow Extended (TFX) helps developers build production pipelines to deploy their ML models, while also covering many of the requirements of production software deployments. A TFX pipeline starts with data ingestion, then moves through data validation, feature engineering, training, evaluation, and serving. TFX ships with libraries for the major steps of an ML pipeline, and a series of pipeline components that implement these libraries. 

ML Metadata (MLMD) is a library from TensorFlow Extended, but you can use it independently. ML Metadata helps you store metadata generated by a production ML pipeline, such as: 

  • Dataset the model was trained on
  • Pipelines, other lineage information
  • Hyperparameters used to train the model 
  • TensorFlow version 
  • Failed models, errors
  • Training runs
  • Artifacts generated
  • Executions

Now, in the case of strange or unforeseen pipeline errors, you can leverage this metadata and debug by analyzing the lineage of the pipeline’s components. 

MLMD stores these three types of metadata in a database called the Metadata Store: 

  1. Metadata regarding the artifacts generated during the steps of your ML pipelines
  2. Metadata regarding executions of the steps of your ML pipelines 
  3. Metadata regarding pipelines and related lineage info. 

The Metadata Store uses APIs to record metadata to, and retrieve it from, the storage backend; it ships with reference implementations for SQLite and MySQL out of the box. 

MLMD parts
A summary of the several MLMD parts | Source

Here is the data model that the Metadata Store uses to record and retrieve metadata from the storage backend: 

  • ArtifactType: an artifact’s type and its properties 
  • Artifact: a specific instance of an ArtifactType 
  • ExecutionType: a component or step in the workflow, and its runtime parameters
  • Execution: an instance of an ExecutionType, i.e. a component run
  • Event: a record of the relationship between artifacts and executions
  • ContextType: a type of conceptual group of artifacts and executions 
  • Context: an instance of a ContextType
  • Attribution: a record of the relationship between artifacts and contexts 
  • Association: a record of the relationship between executions and contexts 

You can check out much more detailed descriptions of the data model used by the Metadata Store in the ML Metadata guide in the TFX documentation.  

To get started using ML Metadata: 

  1. Install the necessary packages. 
  2. Install and import TFX, along with any other libraries and TFX component classes you may need:
pip install -q -U tfx
  3. Import the MLMD library. 

You can find more detailed instructions in the MLMD “Get started” guide. 
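
To make the data model more concrete, here is a minimal sketch of registering an artifact type and recording an artifact against a local SQLite-backed Metadata Store. It follows my understanding of the MLMD Python API; the file path and property names are placeholders:

from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Configure a local SQLite backend for the Metadata Store
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = 'metadata.sqlite'  # placeholder path
connection_config.sqlite.connection_mode = 3               # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)

# Register an ArtifactType describing datasets
data_type = metadata_store_pb2.ArtifactType()
data_type.name = 'DataSet'
data_type.properties['split'] = metadata_store_pb2.STRING
data_type_id = store.put_artifact_type(data_type)

# Record an Artifact (a specific dataset) of that type
data_artifact = metadata_store_pb2.Artifact()
data_artifact.type_id = data_type_id
data_artifact.uri = 'path/to/data'                         # placeholder URI
data_artifact.properties['split'].string_value = 'train'
[data_artifact_id] = store.put_artifacts([data_artifact])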

MLflow Tracking

MLflow Tracking lets you log parameters, code versions, metrics, output files, and more. 

MLflow Tracking is organized around runs – executions of a piece of code. Each run records:

  • Code version
  • Start and End time
  • Source: project name or entry point 
  • Parameters: key-value input parameters, with both keys and values as strings
  • Metrics: key-value metrics that can be updated throughout the run; MLflow records a metric’s full history so you can visualize it
  • Artifacts: output files such as images, models, data files, etc. 

Runs can also be organized into experiments that execute a particular task. Using the MLflow API and UI, you can create and search for experiments. 

Runs are recorded to local files, to a SQLAlchemy-compatible database, or to a remote tracking server, while MLflow artifacts can be stored in local files or in remote file storage. MLflow uses two components for storage: the backend store, which holds MLflow entities such as metadata, and the artifact store, which holds artifacts. The user can configure both the backend and artifact storage. 

The most common configuration scenarios include: 

  • MLflow on localhost: both the backend and artifact store use the same directory on the local file system
  • MLflow on localhost using SQLite: run MLflow locally with SQLite; artifacts are stored in the local ./mlruns directory, while entities are stored in the SQLite database 
  • MLflow on localhost with a tracking server: similar to the first scenario, except that you launch a tracking server that listens for REST calls, configured through the arguments to mlflow server <args> 
  • MLflow with a remote tracking server, backend, and artifact stores: the tracking server, backend store, and artifact store all live on remote hosts; the MLflow client talks to the tracking server through REST requests to record MLflow entities (see the configuration sketch after this list) 
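
From the client’s side, switching between these scenarios mostly comes down to where you point the tracking URI. A minimal sketch using MLflow’s Python API (all URIs below are placeholders):

import mlflow

# Local files (the default): runs are recorded under ./mlruns
mlflow.set_tracking_uri("file:./mlruns")

# Local SQLite database as the backend store
mlflow.set_tracking_uri("sqlite:///mlflow.db")

# Remote tracking server reached over REST
mlflow.set_tracking_uri("http://my-tracking-server:5000")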

Examples of logging functions include: 
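
For instance, here is a minimal sketch of the most common logging calls (parameter names, values, and file paths are illustrative):

import mlflow

with mlflow.start_run(run_name="example-run"):
    # Log key-value input parameters
    mlflow.log_param("lr", 0.1)
    mlflow.log_params({"dropout": 0.4, "epochs": 10})

    # Log metrics per step so their full history is recorded
    for step in range(10):
        mlflow.log_metric("train_loss", 1.0 / (step + 1), step=step)

    # Log output files as artifacts
    mlflow.log_artifact("confusion_matrix.png")  # placeholder file path

    # Attach arbitrary tags to the run
    mlflow.set_tag("model_type", "baseline")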

Check out the MLflow docs for many more logging functions, described in detail. 

You can also launch multiple MLflow runs simultaneously.
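
One way to do this within a single program is with nested runs. A minimal sketch (the parameter values are illustrative):

import mlflow

# A parent run grouping several child runs, e.g. for a small hyperparameter sweep
with mlflow.start_run(run_name="sweep"):
    for lr in (0.01, 0.1):
        with mlflow.start_run(run_name=f"lr={lr}", nested=True):
            mlflow.log_param("lr", lr)
            mlflow.log_metric("val_acc", 0.9)  # placeholder value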

Conclusion 

As you can see, an ML metadata store is a necessity for data scientists. Kubeflow Metadata provides a solid set of tools for logging and browsing the metadata of your ML workflows – but it might not suit your workflow. In that case, take a look at the features offered by other tools. For me, Neptune provides the most thorough, all-inclusive solution. 

If you want to learn more about Neptune, check out the official documentation. If you want to try it out, create your account and start tracking your machine learning experiments with Neptune.


