
How to Fit Experiment Tracking Tools Into Your Project Management Setup

In machine learning, engineers tend to run a whole lot of experiments (to say the least). Because of this, experiment tracking is an essential part of every ML project. We need a tool that won't just do the tracking for us, but will also integrate easily into our existing project management setup, so that other team members can follow and track the workflow.

Neptune offers an experiment tracking solution that our team has used extensively. We recently decided to integrate Neptune into our workflow even more, extending it to common activities like regular daily meetings and weekly sprints.

It's made our entire project flow much more transparent and understandable for the whole team, while we, the ML engineers, continue to enjoy the tracking capabilities of Neptune.

A win-win solution? To me, definitely! I'll show you exactly what we did, and what the benefits are.

Tracking tool integrated into sprints

Every sprint has an objective. Our team has a biweekly meeting where we set goals to achieve within the sprint timespan.

These goals become high level tasks in Jira – our tool for managing software development. Each task has a unique ID associated with it, so we can refer to a particular task using this ID. 




Let's consider an example from a real ML project I've recently worked on. Here's what the page for a newly created task looks like in Jira:

ML PM Jira
Unique task ID is shown for a newly created sprint task

Once we've decided on a task and created it in Jira, we set up a new project in Neptune and link it to the original Jira task by its ID. The project I've created in Neptune looks like this:

ML PM Jira
Project created in Neptune and linked to Jira via the unique project ID

Now, Jira and Neptune are connected. We've allocated a dedicated project in Neptune that tackles the development problem described in the Jira task.
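For illustration, here's a minimal sketch of how a Neptune project name can be derived from the Jira task ID. The `my-team` workspace and the `project_for_task` helper are my own examples, not anything Neptune prescribes:

```python
# Sketch: derive the qualified Neptune project name from a Jira task ID.
# 'my-team' is a hypothetical workspace name; DOC-2698 is the task from the screenshot.
WORKSPACE = 'my-team'

def project_for_task(task_id: str) -> str:
    """Build the 'workspace/project' name Neptune expects, from a Jira task ID."""
    return f'{WORKSPACE}/{task_id}'

# With the legacy neptune-client, you would then connect like:
#   import neptune
#   neptune.init(project_qualified_name=project_for_task('DOC-2698'))
print(project_for_task('DOC-2698'))  # my-team/DOC-2698
```

Keeping this mapping in one helper means every notebook in the project points at the right Neptune project by construction.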

Tracking tool integrated into daily meetings

Development phase of the project

For the development environment, our ML team tends to work in Jupyter Notebooks. In order to track the overall project workflow, it’s essential to track the progress within those Notebooks. 

Neptune, when linked to Jupyter, lets us do checkpointing for the notebooks. These checkpoints are uploaded to the project page within Neptune, and look like this in Neptune’s UI:

Checkpoints in Jupyter
Checkpoints logged and listed for a Jupyter Notebook.
Screenshot from Neptune’s UI.

At daily meetings, we look at these checkpoints. It makes the development progress understandable and transparent. The screenshot above shows how my model development progressed: starting with Exploratory Data Analysis (EDA) of the input data, continuing with the creation of a data generator, and finishing with the addition of logging.

Also, note how date and time information is displayed next to the checkpoint. It makes the timeline clear and trackable. When necessary, we can compare two checkpoints (two notebooks) if detailed progress information is requested. Using Neptune’s tracking capabilities makes the project manageable and predictable.

The same tracking process can be done if you prefer to develop in an IDE. Your scripts can be stored and compared the same way we saw for Jupyter Notebooks.

Hypothesis testing phase of the project

When the development phase for the ML project is completed, we usually end up with a baseline model that shows some performance, but rarely the best possible performance. Now’s the time to test hypotheses, and check if there are any other model configurations that might lead us to better performance, or a more optimal solution.

Hypothesis testing is an essential part of every ML project, and Neptune tracks experiment results in a simple and convenient way. Tracking records can be used as logs to set up management for an ML project. Let me show you how we integrated Neptune into our overall management workflow.

Jira lets us create subtasks, so each hypothesis test starts out with its own subtask. The subtask sits under the original high-level task and belongs to it. Here's how it looks for my project:

Hypothesis testing subtask
Subtask created for a hypothesis test.
Example from Jira.

If the subtask is opened as a separate page, here's how it looks in Jira:

Subtask page within Jira
Subtask page within Jira

Note that a hypothesis (shown above as a subtask within Jira) is linked to a unique experiment created in Neptune. We do this for the sake of convenience, since it helps us navigate between Jira and Neptune.

On the other hand, if we go to Neptune and look at the experiments page, this is what we’ll see:

Experiments page in Neptune
Experiments page in Neptune

Have a look at the record bounded by the red rectangle. This is the experiment for the subtask we saw previously. Note that the experiment has some tags associated with it. If we read them, we'll find some interesting information:

  • Jira’s subtask ID 
  • Sprint number

The subtask ID helps us keep a link between a particular experiment and a task from Jira, whereas the sprint number reminds us of the sprint in which we tackled this particular hypothesis.

You might think of other tags that would be useful for your particular case. The key takeaway is that you can have them right next to the experiment in Neptune, which really eases the overall tracking process.

By the way, in case you're curious what it takes to add tags to an experiment run, here's a cell from my Jupyter Notebook where I append the tags:

import neptune  # legacy neptune-client API; assumes neptune.init(...) was called in an earlier cell

create_experiment = True
log_aug_script = False

launch_name = 'hypothesis_test'  # experiment name definition
tags = ['sprint 49', 'DOC-2698']  # sprint number and Jira subtask ID

# model config placed into a dict, so it can be logged
# (backbone, loss_name, etc. are defined in earlier notebook cells)
params = {
    'backbone': backbone.name,
    'loss': loss_name,
    'pooling': pooling_name,
    'dense_layer': dense_count,
    'input_shape': input_shape,
    'batch_norm_usage_in_top_part': batch_norm_usage_in_top_part
}

if create_experiment:
    neptune.create_experiment(name=launch_name,
                              params=params,
                              upload_source_files=['aug.py'] if log_aug_script else [])
    if tags:
        neptune.append_tags(tags)

CI/CD and capabilities for reproducibility

CI/CD applied to machine learning is attracting more and more attention these days. We want a set of practices and tools that we can use to enhance project development and ease future deployment.

Neptune covers this area of our interest as well, thanks to its logging capabilities. Neptune can log and store some of the most essential artifacts needed not only for the deployment phase, but also to reproduce what we do. Moreover, all of these artifacts and logs can be attached to a particular experiment, empowering it with the core information it needs.




Let's look at an example of our team's extensive use of these capabilities. First of all, loss and metric values are tracked by Neptune and displayed as charts.

Metrics and loss plots Neptune
Metrics and loss plots displayed in Neptune for a particular experiment. Can be tracked for both the training and validation sets.
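For reference, this is roughly how those values get into Neptune: during training, we log loss and metric values once per epoch. Here's a minimal sketch, where the epoch values are dummy numbers and the `log_epoch` helper is my own; `log_metric` is the legacy client's logging call:

```python
# Sketch: log per-epoch loss values so Neptune can draw the charts.
history = {'train_loss': [], 'val_loss': []}

def log_epoch(epoch, train_loss, val_loss, neptune_run=None):
    """Record one epoch of values locally and, when connected, in Neptune."""
    history['train_loss'].append(train_loss)
    history['val_loss'].append(val_loss)
    if neptune_run is not None:  # e.g. the legacy `neptune` module after init()
        neptune_run.log_metric('train_loss', epoch, y=train_loss)
        neptune_run.log_metric('val_loss', epoch, y=val_loss)

# Dummy training loop with illustrative loss values.
for epoch, (tr, va) in enumerate([(0.9, 1.0), (0.6, 0.8), (0.4, 0.7)]):
    log_epoch(epoch, tr, va)

print(len(history['train_loss']))  # 3
```

In a real run, the same call sits inside the training loop (or a framework callback), and Neptune turns the logged series into the charts shown above.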

Next, we can always remind ourselves which model parameters and architecture produced a given model performance. Neptune stores this information for us under the parameters tab.

Model architecture and parameters Neptune
Model architecture and parameters stored in Neptune for a particular experiment


A problem that every machine learning engineer runs into is dataset version control. Neptune lets us upload the dataset information we work with for a particular experiment.

Dataset information Neptune
Dataset information stored in Neptune


I personally work mostly with image data, and prefer to log not the images themselves, but data frames containing all the information I need (like the paths to the images and their labels). These data frames can be fetched programmatically from Neptune and used later on.
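As a sketch of that workflow (the file name and columns below are examples of mine, not a Neptune convention), the paths and labels can be written to a CSV file, which is then attached to the experiment with `log_artifact` from the legacy client:

```python
import csv
import os
import tempfile

# Example rows: image paths and labels (illustrative values only).
rows = [('images/cat_001.jpg', 'cat'), ('images/dog_042.jpg', 'dog')]

path = os.path.join(tempfile.gettempdir(), 'dataset_info.csv')
with open(path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['image_path', 'label'])  # header
    writer.writerows(rows)

# With a live experiment, the file is then attached to it:
#   neptune.log_artifact(path)
print(os.path.exists(path))  # True
```

Since the CSV captures exactly which samples an experiment saw, downloading it later is enough to reconstruct the dataset split for that run.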

As a computer vision engineer, I also like to experiment with image augmentation. I play around with multiple augmentation approaches, and need to keep track of my current augmentation set up as well. Neptune lets me attach augmentation information (as a code script) to a particular experiment, so it won’t be lost and can be restored.

Augmentation information
Augmentation information attached to the experiment


Model checkpoints can also be uploaded and stored. I usually upload the best checkpoint for a particular experiment, so I can fetch it later when needed.

Best checkpoint Neptune
Best checkpoint for an experiment is stored

In case we'd like to remind ourselves of the model's performance, we can look not only at the loss and metric values, but also at the performance plots that I usually build and attach to the experiment run. Neptune lets me upload these plots and keep them within the experiment page.

Model performance Neptune
Model performance information stored for a particular experiment. Loss values, metric values and performance plots are included.

At the very bottom of the above screenshot you might notice two plots that I attached: the ROC-AUC curve and the confusion matrix. Together, the plots, metrics and loss values give me comprehensive information about the model trained in a particular experiment run.
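As an aside, the confusion matrix behind such a plot is just a small table of counts. Here's a minimal sketch with made-up labels and predictions; the resulting matrix could then be rendered with matplotlib and attached to the experiment via the legacy client's `log_image`:

```python
from collections import Counter

# Dummy ground-truth labels and model predictions for a binary classifier.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Count (actual, predicted) pairs; matrix rows are actual classes,
# columns are predicted classes.
counts = Counter(zip(y_true, y_pred))
matrix = [[counts[(0, 0)], counts[(0, 1)]],
          [counts[(1, 0)], counts[(1, 1)]]]

print(matrix)  # [[3, 1], [1, 3]]
```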

Last but not least, I tend to include code for a complete inference call and attach it to the best experiment run. Later, when the time comes for model deployment, my teammates can fetch the best checkpoint and inference code, and use it to design a service that can be deployed. Pretty neat, right?

Conclusions

If your project has a machine learning component, you've just seen how your project management setup can benefit from a Neptune integration.

It provides tracking capabilities that help not only your ML team during development, but also the overall project management, letting you see the progress made on a daily or per-sprint basis.

You've also seen how Jira, a widely adopted software development tool, can be linked with Neptune in an easy and convenient way.

Lastly, there's an enormous number of artifacts that Neptune can store for you. All of that combined will equip your project with CI/CD capabilities, and will result in complete reproducibility of your project.

Other tools that can help you with project management:

1. https://www.ganttproject.biz/

What we have here:

  • Gantt Charts
  • Milestone Tracking
  • Percent-Complete Tracking
  • Project Planning
  • Resource Management
  • Task Management

2. https://www.meisterlabs.com/

What we have here:

  • Task tracking
  • Task planning
  • Task scheduling
  • Mind maps
  • Project planning
  • Project time tracking
  • Time tracking by project
  • Workflow management
  • Real-time reporting
  • Activity dashboard
  • Tags & keywords
  • Status tracking
  • Project tracking
  • Project workflow
  • Collaborative workspace
  • Real-time notifications

3. https://www.wrike.com/vj/

What we have here:

  • File sharing
  • Subtasks
  • Progress reports
  • Branded workspace
  • Email-to-task syncing
  • Custom calendars for vacations, sick leave etc.
  • Workload view & scheduling
  • Calendar integrations with Google and iCalendar
  • Android and iPhone apps
  • API
  • User groups
  • Shared workspace
  • Task planning
  • Task scheduling
  • Task management
  • Activity dashboard
  • File management
  • File transfer
  • Real time activity stream
  • Reporting & statistics
  • Task-related discussions
  • Automate recurring tasks & projects
  • Third party integration
  • Timeline management
  • Calendar management
  • Project time tracking
  • Time tracking by project
  • Due date tracking
  • Resource management
  • Budget tracking
  • Activity tracking
  • Activity management
  • Email & calendar synchronization
  • Synchronization management
  • Project tracking
  • Configurable workflow
  • Workflow management
  • Dashboard creation
  • User access controls
  • Permission management
  • Data backups
  • Role-based permissions
  • Password management
  • Data encryption
  • Secure data storage
  • Automatic backup
  • Real time reporting

4. https://www.orangescrum.com/

What we have here:

  • Hosting included
  • Web based
  • Dashboard
  • Daily catch-up
  • Task template
  • Email and live support
  • Google Drive and Dropbox
  • Free trial
  • Open source version
  • Multiple users
  • Respond via email
  • Desktop notification
  • Unlimited projects
  • Calendar
  • Kanban view
  • Conversation threads
  • Ticketing/Work-flow
  • Time tracking
  • Activities
  • Email notification

5. https://freedcamp.com/

What we have here:

  • Bug Tracking
  • Collaboration
  • File Sharing
  • Issue Management
  • Milestone Tracking
  • Project Planning
  • Status Tracking
  • Task Management
  • Time & Expense Tracking
