
The Best Tools to Visualize Metrics and Hyperparameters of Machine Learning Experiments

Evaluating your model on key metrics is a crucial first step in understanding its quality. Keeping track of hyperparameters and the corresponding evaluation metrics is important because small changes in hyperparameters can sometimes have a big impact on model quality.

Understanding which hyperparameters affect your evaluation metrics and which do not can therefore lead to valuable insights. This is why you should visualize the impact those parameters have on your metrics and know how your models perform across all of your ML experiments.

To help you, I’ve gathered a list of recommended tools that will do the tedious work for you.

Here are the six best tools to visualize metrics and hyperparameters of machine learning experiments.

1. Neptune


Neptune is a metadata store for MLOps built for research and production teams that run a lot of experiments.

You can use Neptune to track all the metadata generated by your runs (e.g., hyperparameters, losses, and metrics), then visualize and compare the results. Tracked data is automatically transformed into a knowledge repository that you can share and discuss with colleagues.

Neptune – summary:

  • Easily keep track of metrics and hyperparameters
  • Visualize losses and metrics as your model is training (monitor learning curves)
  • Compare learning curves across various models/experiments
  • Use an interactive comparison table that automatically shows diffs between experiments
  • Fetch experiment data and visualize parameters and metrics in notebooks. You can use the HiPlot integration or do custom analysis
  • It has other visualization features that are not parameter-metric related
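Below is a minimal logging sketch, assuming the neptune-client package and its neptune.new API; the project name is a placeholder you would replace with your own:

```python
import neptune.new as neptune

# "my-workspace/my-project" is a placeholder – use your own project here.
run = neptune.init(project="my-workspace/my-project")

# Log hyperparameters once as a dictionary.
run["parameters"] = {"lr": 0.001, "batch_size": 64, "optimizer": "Adam"}

# Log a metric repeatedly to build a live learning curve.
for epoch in range(10):
    loss = 1.0 / (epoch + 1)  # stand-in for your real training loss
    run["train/loss"].log(loss)

run.stop()
```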

SEE ALSO
The Best Tools for Machine Learning Model Visualization


2. WandB

Weights & Biases, a.k.a. WandB, is focused on deep learning. Users can log experiments to the application with its Python library and, as a team, see each other’s experiments.

The tool lets you record and visualize every detail of your research and collaborate easily with teammates. You can easily log metrics from your script to visualize results in real-time as your model trains. You can also see what your model is producing at each time step.

WandB – summary:

  • Monitor training run information such as loss and accuracy (learning curves)
  • Compare runs with a dashboard table showing auto-diffs
  • Visualize parameters and metrics via parallel coordinates plot
  • Explore how parameters affect metrics with a feature (parameter) importance visualization (at the time of writing, this feature is experimental)
  • It has other visualization features that are not parameter-metric related
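Here is a minimal logging sketch with the wandb Python library; the project name is a placeholder:

```python
import wandb

# "my-project" is a placeholder – use your own project name.
wandb.init(project="my-project", config={"lr": 0.001, "batch_size": 64})

for epoch in range(10):
    loss = 1.0 / (epoch + 1)  # stand-in for your real training loss
    wandb.log({"epoch": epoch, "loss": loss})

wandb.finish()
```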

3. Comet


Comet is a meta machine learning platform for tracking, comparing, explaining, and optimizing experiments and models. It allows you to view and compare all of your experiments in one place. It works wherever you run your code with any machine learning library, and for any machine learning task.

Comet is suitable for teams, individuals, academics, organizations, and anyone else who wants to easily visualize experiments and streamline their work.

Comet – summary:

  • You can customize and combine your visualizations
  • You can monitor your learning curves
  • Comet’s flexible experiment and visualization suite allows you to record, compare, and visualize many artifact types
  • It has other visualization features that are not parameter-metric related
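A minimal logging sketch with the comet_ml library follows; the API key and project name are placeholders:

```python
from comet_ml import Experiment

# Both arguments are placeholders – use your own API key and project name.
experiment = Experiment(api_key="YOUR_API_KEY", project_name="my-project")

# Log hyperparameters once as a dictionary.
experiment.log_parameters({"lr": 0.001, "batch_size": 64})

for epoch in range(10):
    loss = 1.0 / (epoch + 1)  # stand-in for your real training loss
    experiment.log_metric("loss", loss, step=epoch)

experiment.end()
```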

4. TensorBoard

TensorBoard is a visualization toolkit for TensorFlow that lets you analyze model training runs. It’s open-source and offers a suite of tools for visualization and debugging of machine learning models.

What’s more, it has an extensive network of engineers using the software and sharing their experience and ideas, which makes for a powerful community ready to solve any problem. The software itself, however, is best suited for individual users.

TensorBoard – summary:

  • Tracking and visualizing metrics such as loss and accuracy
  • Comparing learning curves of various runs
  • Parallel coordinates plot to visualize parameter-metric interactions
  • It has other visualization features that are not parameter-metric related
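As a sketch, here is how you might write scalar metrics with the tf.summary API and inspect them in TensorBoard (the log directory is arbitrary):

```python
import tensorflow as tf

# The log directory is arbitrary – TensorBoard reads whatever you point it at.
writer = tf.summary.create_file_writer("logs/run-1")

with writer.as_default():
    for step in range(100):
        loss = 1.0 / (step + 1)  # stand-in for your real training loss
        tf.summary.scalar("loss", loss, step=step)

# Then launch the UI with: tensorboard --logdir logs
```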

See also: The Best TensorBoard Alternatives (2020 Update)

5. Optuna

Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning.

Additionally, Optuna integrates with libraries such as LightGBM, Keras, TensorFlow, FastAI, PyTorch Ignite, and more.

Optuna – summary:

  • Visualizations in Optuna let you zoom in on hyperparameter interactions and help you decide how to run your next parameter sweep
  • plot_contour: plots parameter interactions on an interactive chart. You can choose which hyperparameters you would like to explore
  • plot_optimization_history: shows the scores from all trials as well as the best score so far at each point
  • plot_parallel_coordinate: interactively visualizes the hyperparameters and scores
  • plot_slice: shows the evolution of the search. You can see where in the hyperparameter space your search went and which parts of the space were explored more
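A minimal sketch using a toy objective to generate a study and render the plots listed above (the optuna.visualization module returns interactive plotly figures):

```python
import optuna

# Toy objective: minimize a simple quadratic over two "hyperparameters".
def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return (x - 2) ** 2 + y ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)

# Each call returns an interactive plotly figure.
optuna.visualization.plot_optimization_history(study).show()
optuna.visualization.plot_parallel_coordinate(study).show()
optuna.visualization.plot_contour(study, params=["x", "y"]).show()
optuna.visualization.plot_slice(study).show()
```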

👉 Read about Neptune’s integration with Optuna.

6. HiPlot

HiPlot is a straightforward interactive visualization tool that helps AI researchers discover correlations and patterns in high-dimensional data. It uses parallel plots and other graphical ways to represent information more clearly.

HiPlot can be run quickly from a Jupyter notebook with no setup required. The tool enables machine learning (ML) researchers to more easily evaluate the influence of their hyperparameters, such as learning rate, regularizations, and architecture. It can also be used by researchers in other fields, so they can observe and analyze correlations in data relevant to their work.

HiPlot – summary:

  • Creates an interactive parallel plot visualization to easily explore various hyperparameter-metric interactions
  • Based on the selection in the parallel plot, the experiment table is updated automatically
  • It’s super lightweight and can be used inside notebooks or as a standalone web server
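A minimal notebook sketch, using a hypothetical set of run results:

```python
import hiplot as hip

# Hypothetical results: hyperparameters plus a final metric for each run.
data = [
    {"lr": 0.001, "dropout": 0.1, "optimizer": "adam", "loss": 4.5},
    {"lr": 0.01, "dropout": 0.15, "optimizer": "sgd", "loss": 3.6},
    {"lr": 0.1, "dropout": 0.3, "optimizer": "adam", "loss": 4.1},
]

# Renders an interactive parallel coordinates plot inside the notebook.
hip.Experiment.from_iterable(data).display()
```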

Final words

Now that you have the list of the best tools, you can visualize the metrics and hyperparameters of your ML experiments. Test them yourself and see which one works best for you. We, of course, recommend Neptune – the most lightweight of them all 😉

And which tool is your favorite?

