MLOps Blog

How to Track Machine Learning Model Metrics in Your Projects

2 min
1st September, 2023

It is crucial to keep track of evaluation metrics for your machine learning models to:

  • understand how your model is doing
  • be able to compare it with previous baselines and ideas
  • evaluate and select the best model
  • understand how far you are from the project goals

"If you don't measure it, you can't improve it."

But what should you keep track of?

I have never found myself in a situation where I thought that I had logged too many metrics for my machine learning experiment.

Also, in a real-world project, the metrics you care about can change due to new discoveries or changing specifications, so logging more metrics can actually save you some time and trouble in the future.

Either way, my suggestion is:

"Log more metrics than you think you need."

Ok, but how do you do that exactly?

Tracking metrics that are a single number

In many situations, you can assign a numerical value to the performance of your machine learning model. You can calculate the accuracy, AUC, or average precision on a held-out validation set and use it as your model evaluation metric.

In that case, you should keep track of all of those values for every single experiment run.

Note: For the most up-to-date code examples, please refer to Neptune docs.

With neptune.ai, you can easily do that:

import neptune

# Connect to your project (assumes the API token and project are set via environment variables)
run = neptune.init_run()

# Log scores (single value)
run["score"] = 0.97
run["test/acc"] = 0.97

# Log metrics (series of values)
for epoch in range(100):
    # your training loop
    acc = ...
    loss = ...
    metric = ...

    run["train/accuracy"].append(acc)
    run["train/loss"].append(loss)
    run["metric"].append(metric)

Note:

Tracking metrics both on training and validation datasets can help you assess the risk of the model not performing well in production. The smaller the gap, the lower the risk. A great resource is this Kaggle Days talk by Jean-François Puget.
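
For instance, here is a minimal sketch of logging the same metric for both sets so the gap is easy to compare; train_acc and val_acc are placeholders for values computed by your training and evaluation code:

for epoch in range(100):
    train_acc = ...  # accuracy on the training set for this epoch
    val_acc = ...    # accuracy on the held-out validation set

    run["train/accuracy"].append(train_acc)
    run["val/accuracy"].append(val_acc)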

That said, sometimes, a single value is not enough to tell you if your model is doing well. 

This is where performance charts come into the picture.

Tracking metrics that are performance charts

To understand if your model has improved, you may want to take a look at a chart, confusion matrix, or distribution of predictions. 

Those, in my view, are still metrics because they help you measure the performance of your machine learning model.

With Neptune, logging those charts is trivial, as you can log them as images.
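
For example, here is a sketch of logging a confusion matrix plot built with scikit-learn and matplotlib; the labels are placeholders, and "run" is the Neptune run created earlier:

import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
from neptune.types import File

# Placeholder labels; in practice, use predictions from your validation set
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

# Build the confusion matrix plot
fig, ax = plt.subplots()
ConfusionMatrixDisplay.from_predictions(y_true, y_pred, ax=ax)

# Upload the figure to the run as an image
run["val/confusion_matrix"].upload(File.as_image(fig))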

Tracking iteration-level metrics (learning curves)

Most machine learning models converge iteratively. This is the case for deep learning models, gradient-boosted trees, and many others.

You may want to keep track of evaluation metrics after each iteration, on both the training and validation sets, to monitor whether your model is overfitting.

Monitoring those learning curves is really simple to implement, yet it's an important habit.

For simple iteration-based training, you can create a series of metrics or other values in Neptune. It can look like this:

for iteration in range(100):
    loss = ...              # computed by your training step
    iteration_config = ...  # any other per-iteration values you want to keep
    run["train/loss"].append(loss)
    run["logs"].append(iteration_config)

May be useful

Neptune integrates with most of the major machine learning frameworks, and you can track those metrics with zero effort. Check the available integrations here.
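
As an illustration, here is a sketch using the Keras integration (it assumes the neptune-tensorflow-keras package is installed and that model, x_train, and y_train are your Keras model and training data); the callback logs training metrics to the run automatically:

from neptune.integrations.tensorflow_keras import NeptuneCallback

# "run" is the Neptune run created earlier
neptune_callback = NeptuneCallback(run=run, base_namespace="training")

model.fit(
    x_train,
    y_train,
    epochs=10,
    callbacks=[neptune_callback],
)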

Tracking predictions after every epoch

Sometimes you may want to take a look at model predictions after every epoch or iteration. 

This is especially valuable when you are training image models that need a lot of time to converge. 

For example, in the case of image segmentation, you may want to plot predicted masks, true masks, and the original image after every epoch.

In Neptune, you can do that by logging a series of images, which you can then browse in the web app.
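
A minimal sketch of how this could look; num_epochs, predicted_mask, true_mask, and input_image are placeholders for values produced by your training and evaluation code:

from neptune.types import File

for epoch in range(num_epochs):
    # ... run one epoch of training and generate example predictions ...

    # Append one image per epoch to each series field
    run["train/predicted_masks"].append(File.as_image(predicted_mask))
    run["train/true_masks"].append(File.as_image(true_mask))
    run["train/input_images"].append(File.as_image(input_image))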

Tracking metrics after the training is done 

In some applications, you cannot keep track of all the important metrics in the training script. 

Moreover, in real-life machine learning projects, the scope of the project, and hence the metrics you care about, can change over time.

In those cases, you will need to update experiment metrics or add new performance charts calculated when your training jobs are already finished. 

Luckily, updating experiments is easy with Neptune.

To resume a run, you can pass its ID to the with_id argument at initialization. This lets you add new data or visualizations to a previously closed run and facilitates multi-stage training.

import neptune

# Resume a previously closed run by passing its ID
run = neptune.init_run(with_id="CLS-123")

# Download snapshot of model weights
run["train/model_weights"].download()

# 450 is the epoch from where you want to resume the training process
checkpoint = 450

# Continue training as usual
for epoch in range(checkpoint, 1000):
    run["train/accuracy"].append(0.75)
    ...

Note:

Remember that introducing new metrics for one experiment or model means you should probably recalculate and update previous experiments. It is often the case that one model is better with respect to one metric and worse with respect to another.
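
For example, you could reopen each previous run and log the newly introduced metric to it; the run ID and metric value below are just placeholders:

import neptune

# Reopen a previously closed run and add a metric computed after the fact
run = neptune.init_run(with_id="CLS-123")
run["test/f1_score"] = 0.91
run.stop()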

Final thoughts

In this article, we've learned:

  • That you should log your machine learning metrics
  • How to track single-valued metrics and see which models performed better
  • How to track learning curves to monitor model training live
  • How to track performance charts to see more than just the numbers
  • How to log image predictions after every epoch
  • How to update experiment metrics if you calculate evaluation metrics after the training is over

Happy training!
