
MLflow vs TensorBoard vs Neptune: What Are the Differences?

Endless columns and rows, random colors, and no idea where to find any values? Ah, the beautifully chaotic spreadsheet of experiments. Machine learning developers shouldn’t have to go through that pain.

Tracking and managing countless variables and artifacts in a spreadsheet is exhausting. You have to manually take care of:

  • Parameters: hyperparameters, model architectures, training algorithms
  • Jobs: pre-processing job, training job, post-processing job — these consume other infrastructure resources such as compute, networking and storage
  • Artifacts: training scripts, dependencies, datasets, checkpoints, trained models
  • Metrics: training and evaluation accuracy, loss
  • Debug data: weights, biases, gradients, losses, optimizer state
  • Metadata: experiment, trial and job names, job parameters (CPU, GPU and instance type), artifact locations (e.g. S3 bucket)

Switching to a dedicated experiment tracking tool is inevitable in the long run. If you’re already considering which tool is the right one for you, today we’ll compare Neptune, TensorBoard, and MLflow. Here’s what you’ll find in this article:

  • A quick overview of MLflow, TensorBoard, Neptune, and what they do;
  • A detailed chart comparing the features of MLflow, TensorBoard, and Neptune;
  • When Neptune is a better alternative to MLflow and TensorBoard;
  • How Neptune integrates with MLflow and TensorBoard.

Quick overview of MLflow, TensorBoard, and Neptune

Although you can use all three tools to solve similar problems, the differences can be really important depending on your use case. 

In Neptune, you can track machine learning experiments, log metrics, performance charts, video, audio, and text, record data exploration, and organize teamwork in an organic way. Neptune is fast, the UI is customizable, and you can manage users in an on-prem environment or in the cloud. Managing user permissions and access to projects is a breeze. Neptune also monitors hardware resource consumption, so you can optimize your code to use hardware efficiently.

Neptune has a wide range of framework integrations, so you won’t have a problem integrating your ML models, codebases, and workflows. It’s built to scale, so you won’t have any issues with your experiments getting too big. 
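To give a feel for the API, here’s a minimal sketch of logging a run with the Neptune client used throughout this article; the project name, experiment name, and metric values are placeholders:

import neptune

# Connect to a Neptune project (placeholder name)
neptune.init('USERNAME/example-project')

# Everything logged inside this block is attached to one experiment
with neptune.create_experiment(name='quickstart', params={'lr': 0.01}):
    for epoch in range(10):
        neptune.log_metric('train_loss', 1.0 / (epoch + 1))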

MLflow is an open-source platform for managing your ML lifecycle by tracking experiments, providing a packaging format for reproducible runs on any platform, and sending models to your deployment tools of choice. You can record runs, organize them into experiments, and log additional data using the MLflow tracking API and UI. 
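For comparison, a minimal MLflow tracking sketch looks like this; the experiment name, parameter, and metric values are placeholders:

import mlflow

mlflow.set_experiment('example-experiment')

# Runs are recorded locally (in ./mlruns) unless a tracking server is configured
with mlflow.start_run():
    mlflow.log_param('lr', 0.01)
    for step in range(10):
        mlflow.log_metric('train_loss', 1.0 / (step + 1), step=step)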

TensorBoard, on the other hand, specializes in visualization. You can track metrics such as loss and accuracy, and visualize model graphs. TensorBoard also lets you view histograms of weights, biases, or other tensors, project embeddings, and display images, text, and audio data.
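And the TensorBoard equivalent, writing scalar summaries that the TensorBoard UI picks up; the log directory and values are placeholders:

import tensorflow as tf

writer = tf.summary.create_file_writer('logs/run-1')
with writer.as_default():
    for step in range(10):
        tf.summary.scalar('train_loss', 1.0 / (step + 1), step=step)

# View the logs with: tensorboard --logdir logs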

Detailed chart comparing the features of MLflow, TensorBoard, and Neptune

Neptune’s flexibility in experiment tracking, framework integrations, and team collaboration places it above MLflow and TensorBoard.

| | MLflow | Neptune | TensorBoard |
| --- | --- | --- | --- |
| Pricing | Free | Free for individuals, non-profit, and educational research; paid for teams | Free |
| Open-source | ✓ | ✗ | ✓ |
| Experiment tracking features | | | |
| Data versioning | ✗ | Limited | ✗ |
| Notebook versioning | ✗ | ✓ | ✗ |
| Notebook auto snapshots | ✗ | ✓ | ✗ |
| Resources monitoring | ✗ | ✓ | Limited |
| Log audio, video, HTML | Limited | ✓ | ✓ |
| UI features | | | |
| User management | ✗ | ✓ | ✗ |
| Experiment organization | Limited | ✓ | Limited |
| Notebook diffs | ✗ | ✓ | ✗ |
| Saving experiment views | ✗ | ✓ | ✗ |
| View sharing | Limited | ✓ | ✗ |
| Grouping experiments | ✗ | ✓ | ✗ |
| Product features | | | |
| Scales to millions of runs | ✗ | ✓ | ✗ |
| Dedicated user support | ✗ | ✓ | ✗ |
| Integrations | | | |
| Scikit-Learn | ✓ | ✓ | ✓ |
| TensorBoard | ✗ | ✓ | ✓ |
| Sacred | ✗ | ✓ | ✓ |
| Catalyst | ✗ | ✓ | ✓ |
| Scikit-Optimize | ✗ | ✓ | ✗ |
| Ray | ✓ | ✗ | ✓ |
| HiPlot | ✗ | ✓ | ✗ |

When Neptune is a better alternative to MLflow and TensorBoard

Let’s explore the cases when you might want to choose Neptune over MLflow and TensorBoard. Later on, you’ll also see that all three tools can go hand-in-hand, providing you with a rich environment that meets all of your ML experiment needs.

Which tool’s visualization dashboard is easiest to set up for your entire team? 

In Neptune, you can save experiment data either by backing it up on a hosted server or through an on-prem installation. You can easily share experiments with no overhead.

TensorBoard and MLflow, on the other hand, store and track experiments locally. They have very limited user management and team setup capabilities, so I definitely recommend Neptune for large, collaborative projects.

[Image: a Neptune project]

Can you manage user permissions in MLflow and TensorBoard? 

Collaboration is very limited in MLflow and TensorBoard. In Neptune, you have full control over users and access permissions. There are three user roles: admin, contributor, and viewer.

You can add team members very easily through an email invitation:

[Image: inviting a team member to a Neptune project]

Are MLflow and TensorBoard fast with thousands of runs?

Neptune was built to scale in order to support millions of experiment runs, both on the frontend and backend.

MLflow, as an open-source tool, isn’t the fastest tool out there; with hundreds or thousands of runs, the UI can get laggy. TensorBoard is a visualization toolkit, and it is nowhere near as fast as Neptune.

[Image: the Neptune experiments dashboard]

Can you save different experiment dashboard views in MLflow and TensorBoard? 

TensorBoard and MLflow are best suited for individual work, with local storage and a local UI/dashboard. With multiple users (multi-tenant), this gets uncomfortable quickly.

Team members can have different ideas on how to design the experiment dashboard. In Neptune, everyone can customize, change, and save experiment dashboard views as they please. 

[Image: saving dashboard views in Neptune]

Can you get hardware metrics in MLflow and TensorBoard? 

Not much. The TensorBoard Profiler does profile code execution, but only for the current run. In Neptune, you can monitor hardware and resource consumption (CPU, GPU, memory) live and persistently while you train your models. With this data, you can optimize your code to utilize your hardware to the maximum.

This data is generated automatically, and you can find it in the monitoring section of the UI: 

[Image: CPU usage chart in the monitoring section of the Neptune UI]

How easy is it to log images and charts in MLflow and TensorBoard? 

With Neptune, you can automatically log images and charts to multiple image channels, browse through them to view the progress of your model as it trains, and get a better understanding of what’s happening in the training and validation loops.

All you have to do to log one or multiple images to a log section is:

neptune.log_image('predictions', image)

for image in validation_predictions:
    neptune.log_image('predictions', image)

You can then browse through your images in the “predictions” tab of the logs section of the UI.

[Image: the logs section of the Neptune UI]

You can even log interactive charts that will be rendered in the UI through neptunecontrib.api.log_chart.

[Image: an interactive matplotlib chart rendered in the Neptune UI]
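For example, a chart could be logged like this. This is a minimal sketch, assuming an experiment has already been created with neptune.create_experiment(); the figure contents are placeholders:

import matplotlib.pyplot as plt
from neptunecontrib.api import log_chart

# Build any matplotlib figure (placeholder data)
fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [1.0, 0.6, 0.35, 0.2])
ax.set_xlabel('epoch')
ax.set_ylabel('loss')

# Send the figure to the active Neptune experiment as an interactive chart
log_chart('loss-curve', fig)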

Do MLflow and TensorBoard snapshot your Jupyter notebooks automatically? 

Neptune integrates with Jupyter notebooks, so your notebook is automatically snapshotted whenever you run a cell containing neptune.create_experiment().

Regardless of whether you submit your experiment, everything will be safely versioned and ready to be explored.  

Can you track exploratory analysis with MLflow and TensorBoard? 

In Neptune, you can version your exploratory data analysis or results exploration. After saving it in Neptune, you can name, share, and download your notebook checkpoints, or see the differences between them.

[Image: comparing notebook checkpoints in Neptune]

Do MLflow and TensorBoard allow you to fetch your experiment dashboard directly to a pandas DataFrame? 

MLflow’s mlflow.search_runs() API returns your runs in a pandas DataFrame.
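For example (the experiment ID is a placeholder):

import mlflow

# Returns one row per run, with params and metrics flattened into columns
runs_df = mlflow.search_runs(experiment_ids=['0'])
print(runs_df.head())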

Neptune allows you to fetch whatever information you or your teammates tracked and explored. Exploratory features such as the HiPlot integration will help you do that.

import neptune
from neptunecontrib.viz.parallel_coordinates_plot import make_parallel_coordinates_plot

neptune.init('USERNAME/example-project')

make_parallel_coordinates_plot(
    metrics=['eval_accuracy', 'eval_loss', ...],
    params=['activation', 'batch_size', ...])
[Image: a HiPlot parallel coordinates plot]

Neptune integration with MLflow

As we mentioned before, one of the disadvantages of MLflow is that you can’t easily share experiments or collaborate on them.

In order to add organization and collaboration, you need to host the MLflow server, confirm that the right people have access, store backups, and jump through other hoops. 

The experiment comparison interface is a little lacking, especially for team projects. 

But you can integrate it with Neptune. This way, you can keep using the MLflow interface to track experiments, sync your runs folder with Neptune, and then enjoy Neptune’s flexible UI.

You don’t need to back up the mlruns folder or fire up the MLflow UI dashboard on a dedicated server. Your MLflow experiments will automatically be hosted, backed up, organized, and enabled for teamwork thanks to Neptune.

Change the workflow from: 

mlflow ui

to: 

neptune mlflow

You can do everything else as you normally would.
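In practice, the sync workflow is roughly the following. This is a sketch assuming the neptune-mlflow package and its --project flag; the API token and project name are placeholders:

pip install neptune-mlflow
export NEPTUNE_API_TOKEN='YOUR_API_TOKEN'

# Run from the directory that contains your mlruns folder
neptune mlflow --project USERNAME/example-project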

Neptune integration with TensorBoard

You can also integrate Neptune with TensorBoard to have your TensorBoard visualization hosted in Neptune, convert your TensorBoard logs directly into Neptune experiments, and instantly log major metrics. 

First, install the library: 

pip install neptune-tensorboard

After creating a simple training script with TensorBoard logging and initializing Neptune, you can integrate with two simple lines: 

import neptune_tensorboard as neptune_tb
neptune_tb.integrate_with_tensorflow()

Make sure to create the experiment!

with neptune.create_experiment(name=RUN_NAME, params=PARAMS):
    ...  # your training code, with TensorBoard logging, goes here

Now, your experiments will be logged to Neptune, and you can also enjoy the features of team collaboration.
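Putting the pieces together, a minimal end-to-end sketch might look like the following; the project name, run name, parameters, and logged values are all placeholders:

import tensorflow as tf

import neptune
import neptune_tensorboard as neptune_tb

neptune.init('USERNAME/example-project')
neptune_tb.integrate_with_tensorflow()

RUN_NAME = 'example-run'
PARAMS = {'lr': 0.01}

with neptune.create_experiment(name=RUN_NAME, params=PARAMS):
    writer = tf.summary.create_file_writer('logs/' + RUN_NAME)
    with writer.as_default():
        for step in range(10):
            # TensorBoard scalars are mirrored to the Neptune experiment
            tf.summary.scalar('train_loss', 1.0 / (step + 1), step=step)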

Learning more about Neptune…

As you can see, these tools aren’t necessarily mutually exclusive. You can benefit from your favorite features of MLflow and TensorBoard, while using Neptune as a central place for managing your experiments and collaborating on them with your team.

Would you like to learn more about Neptune?

Do you want to start tracking your experiments right away?

Happy experimenting!

