How to Track Machine Learning Model Metrics in Your Projects

It is crucial to keep track of evaluation metrics for your machine learning models to:

  • understand how your model is doing
  • be able to compare it with previous baselines and ideas
  • understand how far you are from the project goals

“If you don’t measure it you can’t improve it.”

But what should you keep track of?

I have never found myself in a situation where I thought that I had logged too many metrics for my machine learning experiment.

Also, in a real-world project, the metrics you care about can change due to new discoveries or changing specifications, so logging more metrics can actually save you some time and trouble in the future.

Either way, my suggestion is:

“Log more metrics than you think you need.”

Ok, but how do you do that exactly?

Tracking metrics that are a single number

In many situations, you can assign a numerical value to the performance of your machine learning model. You can calculate the accuracy, AUC, or average precision on a held-out validation set and use it as your model evaluation metric.

In that case, you should keep track of all of those values for every single experiment run.
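
For example, you could compute those values with scikit-learn before logging them. This is a minimal sketch; model, X_train, y_train, X_valid, and y_valid are placeholder names for your own model and data:

from sklearn.metrics import roc_auc_score, f1_score, accuracy_score

# predicted probabilities for the positive class
train_proba = model.predict_proba(X_train)[:, 1]
valid_proba = model.predict_proba(X_valid)[:, 1]
valid_pred = (valid_proba > 0.5).astype(int)

train_auc = roc_auc_score(y_train, train_proba)
valid_auc = roc_auc_score(y_valid, valid_proba)
valid_f1 = f1_score(y_valid, valid_pred)
valid_accuracy = accuracy_score(y_valid, valid_pred)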

With Neptune you can easily do that:

neptune.log_metric('train_auc', train_auc)
neptune.log_metric('valid_auc', valid_auc)
neptune.log_metric('valid_f1', valid_f1)
neptune.log_metric('valid_accuracy', valid_accuracy)

Note:

Tracking metrics both on training and validation datasets can help you assess the risk of the model not performing well in production. The smaller the gap, the lower the risk. A great resource is this Kaggle Days talk by Jean-François Puget.
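
One simple habit is to log that gap explicitly so you can sort and compare experiments by it (a sketch, assuming the train_auc and valid_auc values from the earlier snippet; the metric name is just an example):

neptune.log_metric('train_valid_auc_gap', train_auc - valid_auc)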

That said, sometimes, a single value is not enough to tell you if your model is doing well. 

This is where performance charts come into the picture.

Tracking metrics that are performance charts

To understand if your model has improved, you may want to take a look at a chart, confusion matrix, or distribution of predictions. 

Those, in my view, are still metrics because they help you measure the performance of your machine learning model.

With Neptune logging those charts is trivial:

neptune.log_image('diagnostics', 'confusion_matrix.png')
neptune.log_image('diagnostics', 'roc_auc.png')
neptune.log_image('diagnostics', 'prediction_dist.png')
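
How you create those PNG files is up to you. Here is a minimal sketch for confusion_matrix.png using scikit-learn and matplotlib (y_valid and valid_pred are the placeholder names from the earlier snippet):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_valid, valid_pred)
fig, ax = plt.subplots()
ax.matshow(cm, cmap='Blues')
for (i, j), value in np.ndenumerate(cm):
    ax.text(j, i, str(value), ha='center', va='center')
fig.savefig('confusion_matrix.png')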

Note:

If you are working with a binary classification problem, you can log:

  • All major metrics like f1, f2, brier_loss, accuracy and more
  • All major performance charts like Confusion Matrix, ROC curve, Precision-Recall curve

With one function call!

import neptunecontrib.monitoring.metrics as npt_metrics

npt_metrics.log_binary_classification_metrics(y_test, y_test_pred)
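
In this snippet, y_test_pred is assumed to hold predicted class probabilities rather than hard labels, for example:

y_test_pred = model.predict_proba(X_test)  # hypothetical model and data, shape (n_samples, 2)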

Tracking iteration-level metrics (learning curves)

Most machine learning models converge iteratively. This is the case for deep learning models, gradient boosted trees, and many others.

You may want to keep track of evaluation metrics after each iteration, both for the training and the validation set, to monitor overfitting.

Monitoring those learning curves is a simple habit to implement, yet an important one.

For simple iteration-based training it can look like this:

for i in range(iterations):
    # training logic
    train_loss = loss(y_pred, y)
    neptune.log_metric('train_loss', train_loss)

And in the case of callback systems, used for example in most deep learning frameworks, it can look like this:

class NeptuneLoggerCallback(Callback):
    ...
    def on_batch_end(self, batch, logs={}):
        for log_name, log_value in logs.items():
            neptune.log_metric(f'batch_{log_name}', log_value)
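
A usage sketch, assuming a Keras-style fit API (the class above follows the Keras Callback interface; the data and epoch count are placeholders):

model.fit(X_train, y_train,
          validation_data=(X_valid, y_valid),
          epochs=10,
          callbacks=[NeptuneLoggerCallback()])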

Note:

Neptune integrates with most of the major machine learning frameworks, and you can track those metrics with zero effort. Check the available integrations here.

Tracking predictions after every epoch

Sometimes you may want to take a look at model predictions after every epoch or iteration. 

This is especially valuable when you are training image models that need a lot of time to converge. 

For example, in the case of image segmentation, you may want to plot predicted masks, true masks, and the original image after every epoch.

In Neptune, you can use the .log_image method to do that:

for epoch in range(epochs):
    …
    masks_pred = get_preds(model, images)
    overlayed_preds = overlay(images, masks_true, masks_pred)
    neptune.log_image('network_predictions', overlayed_preds)
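
Here, get_preds and overlay are your own helpers. One hypothetical way to implement overlay is to put the image, the true mask, and the predicted mask side by side as a single image array (assuming float arrays with values in [0, 1]):

import numpy as np

def overlay(images, masks_true, masks_pred):
    # images: (N, H, W, 3), masks: (N, H, W), all with values in [0, 1]
    panels = []
    for img, m_true, m_pred in zip(images, masks_true, masks_pred):
        true_rgb = np.repeat(m_true[..., np.newaxis], 3, axis=-1)
        pred_rgb = np.repeat(m_pred[..., np.newaxis], 3, axis=-1)
        panels.append(np.concatenate([img, true_rgb, pred_rgb], axis=1))
    return np.concatenate(panels, axis=0)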

Tracking metrics after the training is done 

In some applications, you cannot keep track of all the important metrics in the training script. 

Moreover, in real-life machine learning projects, the scope of the project, and hence the metrics you care about, can change over time.

In those cases, you will need to update experiment metrics or add new performance charts calculated when your training jobs are already finished. 

Luckily updating experiments is easy with Neptune:

exp = project.get_experiments(id='PROJ-421')[0]

exp.log_metric('test_auc', 0.62)
exp.log_image('test_performance_charts', 'roc_curve_test.png')
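
The test_auc value and the roc_curve_test.png chart have to be produced by your own evaluation code first. A minimal sketch with scikit-learn and matplotlib (y_test and test_proba are placeholder names for your test labels and predicted probabilities):

import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

test_auc = roc_auc_score(y_test, test_proba)

fpr, tpr, _ = roc_curve(y_test, test_proba)
fig, ax = plt.subplots()
ax.plot(fpr, tpr)
ax.set_xlabel('False positive rate')
ax.set_ylabel('True positive rate')
fig.savefig('roc_curve_test.png')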

Note:

Remember that introducing new metrics for one experiment or model means you should probably recalculate and update previous experiments as well. It is often the case that one model is better with respect to one metric and worse with respect to another.

Final thoughts

In this article we’ve learned:

  • That you should log your machine learning metrics
  • How to track single-valued metrics and see which models performed better
  • How to track learning curves to monitor model training live
  • How to track performance charts to see more than just the numbers
  • How to log image predictions after every epoch
  • How to update experiment metrics if you calculate evaluation metrics after the training is over

Happy training!


NEXT STEPS

Get started with Neptune in 5 minutes

If you are looking for an experiment tracking tool, you may want to take a look at Neptune.

It takes literally 5 minutes to set up, and as one of our happy users said:

“Within the first few tens of runs, I realized how complete the tracking was – not just one or two numbers, but also the exact state of the code, the best-quality model snapshot stored to the cloud, the ability to quickly add notes on a particular experiment. My old methods were such a mess by comparison.” – Edward Dixon, Data Scientist @intel

To get started follow these 4 simple steps. 

Step 1

Install the client library.

pip install neptune-client

Step 2

Connect to the tool by adding a snippet to your training code. 

For example:

import neptune

neptune.init(...) # credentials
neptune.create_experiment() # start logger

Step 3

Specify what you want to log:

neptune.log_metric('accuracy', 0.92)

for prediction_image in worst_predictions:
    neptune.log_image('worst predictions', prediction_image)

Step 4

Run your experiment as you normally would:

python train.py

And that’s it!

Your experiment is logged to a central experiment database and displayed in the experiment dashboard, where you can search, compare, and drill down to whatever information you need.

