
How to Make your TensorBoard Projects Easy to Share and Collaborate on

We all know that teamwork is an essential part of every machine learning project. Although each engineer has their own piece of the project to work on, they still need to share their results with the team and be open to collaboration.

So in this article, you’ll learn how to make your ML projects easy to share and collaborate on, along with the tools that make it possible.

We’ll focus on the TensorFlow/Keras framework, widely accepted by machine learning engineers worldwide. We’ll explore the pros and cons of TensorBoard, a basic instrument for result tracking. We’ll also compare it to competitors that are more advanced and robust, to see if they can help us overcome TensorBoard’s limitations.

By the end of this article, you’ll know:

  • Why it takes less than a minute to share your entire ML workflow with your teammates;
  • How to extend what can be shareable, and start logging and sharing model attributes (parameters, checkpoints), input data, images, metrics, charts, hardware consumption and many other interesting things;
  • How you can share and collaborate if your project is in a Jupyter Notebook;
  • How to give your colleagues access to the results of your experiment runs and let them programmatically download your project results, data or model attributes, so they can work on it independently.

MIGHT INTEREST YOU
➡️ Docs: TensorBoard and Neptune integration
➡️ TensorBoard vs Neptune comparison
➡️ TensorBoard and Neptune: How are they actually different?


Are you excited to know more? Get your popcorn and keep reading!


Logging via TensorBoard

It’s always good to go back to basics, and in our case, that’s TensorBoard – the old, yet still reliable and convenient, tool that most of us like to use.

Let me tell you that this is not going to be another boring article about TensorBoard and its capabilities. We all know what TensorBoard is. I won’t go over it again. 

Instead, I want to share my experience and knowledge gained from working with TensorBoard and show you the best cases for using it, as well as the limitations.

To make it more enjoyable, I’ll base my reasoning on a real-life machine learning project: let’s build an image classifier that tells us the orientation of an input image. There are four options available, as in the plot below:

Image orientation options
Four potential orientation options of an image.
We want to build a classifier to predict orientation.
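
Before training, each image needs a ground-truth orientation label. One way such labeled data could be synthesized from upright images is sketched below; the helper name and the rotate-by-multiples-of-90-degrees scheme are my own illustration, not necessarily how the original dataset was built:

import numpy as np

def make_orientation_samples(image):
    # image: an upright H x W x 3 NumPy array
    # returns four orientation variants and their class labels
    # (labels 0, 1, 2, 3 stand for rotations of 0, 90, 180 and 270 degrees)
    samples = [np.rot90(image, k) for k in range(4)]
    labels = list(range(4))
    return samples, labels

A hypothetical helper that rotates an upright image to produce the four orientation classes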

There are many ways we can do this. For simplicity, I decided to fine-tune a ResNet50-based model to tackle this problem. Since we’re using the TensorFlow/Keras framework, I created my model architecture using the code snippet below:

# required imports (assuming TensorFlow 2.x with the Keras API)
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU, BatchNormalization, Softmax

# backbone selection (input_shape is defined earlier in the notebook, e.g. (224, 224, 3))
backbone = ResNet50(include_top = False,
                    input_shape = input_shape,
                    pooling = 'avg')

# defining architecture for the net top and defining final model
n_classes = 4

model = Sequential()
model.add(backbone)

dense_count = 256
model.add(Dense(dense_count))
model.add(LeakyReLU())
model.add(BatchNormalization())

model.add(Dense(n_classes))
model.add(Softmax())

Code snippet where I define my model architecture

I won’t spend much time explaining the rationale behind my choice of model parameters, as it’s not on today’s agenda (if you’re really interested, refer to this article). 

Quick reminder: how do you enable TensorBoard and make it responsible for tracking model logs?

First, I define a path to a directory where my logs will be stored. Then I create a TensorBoard callback object. It will store logs in a previously specified directory.

import os
from tensorflow import keras  # assuming the tf.keras API is used throughout

logdir = './logs_import/'
os.makedirs(logdir, exist_ok = True)

tbCallBack = keras.callbacks.TensorBoard(log_dir = logdir,
                                         histogram_freq = 0,
                                         write_graph = False,
                                         write_images = False)

Initiating a TensorBoard callback

With the above code snippet, my future model logs will be stored at ./logs_import/, which I just created via the os.makedirs() function from Python’s standard os module.

To launch TensorBoard, I open a new Terminal window and navigate to my project directory. From there, I execute the following command:

tensorboard --logdir=./logs_import

TensorBoard now starts a local server (by default at http://localhost:6006), which I open in my web browser. The dashboard is empty for now since we haven’t started logging yet. To start logging, I fit the model and kick off the training job, as shown in the code snippet below.

model.fit_generator(generator = train_generator,
                    steps_per_epoch = training_steps_per_epoch,
                    epochs = 100,
                    validation_data = validation_generator,
                    validation_steps = validation_steps_per_epoch,
                    callbacks = [rLrCallBack,
                                 tbCallBack,
                                 mcCallBack_loss,
                                 mcCallBack_acc,
                                 esCallBack],
                    use_multiprocessing = True,
                    workers = 8
                   )

Fitting my model to start training

Note that the previously initiated TensorBoard callback was passed in as a parameter among other callbacks.
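
For context, the other callbacks in that list (rLrCallBack, mcCallBack_loss, mcCallBack_acc, esCallBack) were defined earlier in my notebook. Here is a minimal sketch of how they could look; the monitored quantities, file name patterns, and patience values are illustrative assumptions, not necessarily the exact settings I used:

from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping

# reduce the learning rate when the validation loss plateaus
rLrCallBack = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3)

# keep the best checkpoints by validation loss and by validation accuracy
mcCallBack_loss = ModelCheckpoint('./checkpoints/epoch_{epoch:02d}-val_loss-{val_loss:.4f}.hdf5',
                                  monitor='val_loss', save_best_only=True)
mcCallBack_acc = ModelCheckpoint('./checkpoints/epoch_{epoch:02d}-val_acc-{val_sparse_categorical_accuracy:.4f}.hdf5',
                                 monitor='val_sparse_categorical_accuracy', save_best_only=True)

# stop training early if the validation loss stops improving
esCallBack = EarlyStopping(monitor='val_loss', patience=10)

A sketch of how the remaining callbacks could be defined (settings are illustrative)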

After around an hour and 20 epochs on a GPU, my model has finished training. Let’s see what TensorBoard has for us. To do that, I get back to the previously opened browser tab, and I see that 5 scalars were logged:

  • epoch_loss and epoch_val_loss are the values of the model’s loss function on the training and validation sets, respectively;
  • epoch_sparse_categorical_accuracy and epoch_val_sparse_categorical_accuracy are the values of the model’s metric on the training and validation sets, respectively;
  • epoch_lr is the learning rate used by the optimizer throughout the training process. Since it varies over time, it’s logged as well.
Tensorboard logs
Outlined in red are logs that TensorBoard keeps track of and displays in its UI.

Besides scalars, there are some other things we can track with TensorBoard. Look at the below screenshot from the TensorBoard UI to see what else we can track:

Tensorboard tracking
Outlined in red are things you can track with TensorBoard.

That was a brief reminder of what you can do with TensorBoard. Now let’s see if it has limitations in terms of result sharing.

Problem statement


Have you ever wanted to use TensorBoard to share and discuss work progress with your team? There are a few ways you can do that.

Firstly, TensorFlow has its own toolkit (TensorBoard.dev) for TensorBoard sharing. It’s a good option to consider, but keep in mind that your TensorBoard will be publicly visible, and you can’t manage access to it. This warning comes from the official TensorBoard docs: 

“…the TensorBoard you upload will be publicly visible, so do not upload sensitive data.” 

So for any project that isn’t open-source, you’d probably need a more secure option.
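
For reference, uploading logs to TensorBoard.dev is a single command (assuming a tensorboard release that ships the dev subcommand), so it can still be a quick option whenever the public-visibility caveat above isn’t a concern:

tensorboard dev upload --logdir ./logs_import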

Secondly, we used to be able to use a third-party service called Aughie Boards. Its approach to sharing was described in this article, but unfortunately, the service has been discontinued and is no longer available.

Thirdly, Neptune.ai has a well-rounded solution for sharing experiment results – you just copy a link. You can control privacy by managing access restrictions. Integration with our selected framework (TensorFlow/Keras) is fully supported, so there’s no need to change our established workflow.

Neptune seems like the most reasonable option to solve our progress-sharing problem. Let’s see how we can integrate it into our project.

Neptune integration

Luckily for us, integrating with Neptune is a smooth process. There are two options to consider:

  1. Via Direct Import of TensorBoard Logs:

This option is especially handy when your model has already been trained, and all the essential logs are stored in your logs directory. We just want to upload our results and make them available for sharing.

When I converted my logs into a Neptune experiment, here’s what I got:

Tensorboard Neptune import logs
Logs imported to Neptune

Notice that two rows (experiments) appeared in my project space. Looking closely, we see that the first row represents the training-set logs and the second row the validation logs. You can tell from the Tags and/or the tf/run/path attribute next to the experiment ID.

Let’s check if my import was successful. To do that, I click the ID of a selected experiment. Here’s what’s inside:

Tensorboard Neptune display logs
Training scalar logs imported to Neptune and displayed as plots

These two charts describe the model’s loss and accuracy (the selected metric) at each training epoch.

There are several ways to examine a chart. I can change the axis scales: switch between a linear and a logarithmic scale for the y-axis, and/or swap between wall time and the number of epochs for the x-axis. The usual TensorBoard features, like chart smoothing and zooming, are also available.

Neptune plays the role of a host for our logs. If this is what you want, let me show you how to import your logs to Neptune. There are a few simple steps:

  • Register for an account at Neptune.ai;
  • Get Python 3.x and the following libraries:
    • neptune-tensorboard;
    • tensorboard.

Install them by executing the following bash command:

pip install tensorboard==2.4.0 neptune-tensorboard==0.5.1
  • Get your API token and set it as NEPTUNE_API_TOKEN:
    • For Linux/macOS users, the bash command for setting the API token is:
export NEPTUNE_API_TOKEN='YOUR_API_TOKEN'
    • For Windows users:
set NEPTUNE_API_TOKEN="YOUR_API_TOKEN"
  • Host your TensorBoard logs at Neptune:
    • Navigate to your TensorBoard logs directory and run:
neptune tensorboard --project USER_NAME/PROJECT_NAME
    • Alternatively, you can point Neptune to your TensorBoard logs directory:
neptune tensorboard /PATH/TO/TensorBoard_logdir \
--project USER_NAME/PROJECT_NAME

If you need more details, you can watch this video guide by Jakub Czakon about importing your TensorBoard logs to Neptune.

  2. Via Integration with the TensorFlow/Keras Framework:

Just starting your project, and you want to integrate Neptune into your workflow? I would strongly recommend using this integration method. 

This method provides much more advanced functionality with just a few lines of extra code. You’ll love how your experiments/logs are organized, stored, and easy to share.

Let’s start my classification project from the beginning, but now I’ll integrate Neptune using the second integration method. Once integrated, it’s interesting to see how the experiments page for my project changes compared to the first method:

Neptune Tensorboard integration
Experiment results displayed after using the second integration method

Look at how convenient it is to see not only the experiment and its results, but also some of the parameters displayed to the right of the experiment ID (parameters like dense_layer, backbone, batch_size, etc.).

What if you had a dozen experiments displayed? In my personal practice, this kind of detailed info next to each experiment saves time when dealing with multiple experiments, comparing them, or finding a specific one.

Clicking on the experiment ID reveals pretty much the same charts we saw before, but now we have both training and validation plots under one experiment ID. It’s convenient to keep them this way, since both validation and training plots refer to one particular run.

Tensorflow Neptune integration charts
Charts for the experiment created by integrating Neptune with TensorFlow/Keras

Besides charts, there are several other things we can do in Neptune. Let’s look at the Parameters tab:

Neptune Tensorflow parameters
Parameters of a model associated with a given experiment. Also hosted at Neptune

Neptune has stored some of the model parameters that are related to the current experiment. These exact parameters were shown next to the experiment ID we saw previously.

How did Neptune capture these parameters? I’ll tell you the details later on; for now, I just want to give you a high-level overview of the tool.

Let’s move on and open up the Monitoring tab:

Neptune Tensorflow resource utilization
Resource utilization displayed within Neptune for the training job associated with a given experiment

CPU and GPU utilization are shown as an example.

We can monitor how resource utilization has been changing throughout the experiment run. We can observe both CPU and GPU load, as well as memory consumption. This information flow goes on in real-time while your model is training, and all of this info is available in a single experiment page, which is pretty neat.
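
One practical note: hardware monitoring in the legacy Neptune client relies on the psutil package, so if the CPU/GPU/memory charts don’t show up for you, installing it is usually the missing piece (this is my recollection of the client’s requirements at the time, so treat it as an assumption):

pip install psutil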

Let me show you how I was able to get the above info, and how the actual integration works out, so you can do it on your own:

  • As before, register for an account at Neptune.ai;
  • Get Python 3.x and the following libraries:
    • tensorflow;
    • neptune-contrib;
    • neptune-client.

You can easily install them by executing the following bash command:

pip install tensorflow==2.3.1 neptune-contrib==0.4.129 neptune-client==0.25.0
  • Initialize Neptune by adding an initialization code snippet to the script or Jupyter Notebook you work in. Here is how initialization is done in my project:
# enabling neptune for experiments tracking
import neptune
import neptune_tensorboard as neptune_tb

token = '**********' # your token here

neptune.init(project_qualified_name='anton-morgunov/image-orientation-detection-method',
             api_token=token)

neptune_tb.integrate_with_tensorflow()

Code snippet for initializing Neptune within the project

Note that I intentionally masked my token because it’s a private token for my account. When you register an account on Neptune’s website, your personal token will be available on your profile page.
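
A quick tip: rather than hardcoding the token in a notebook cell, you can read it from the NEPTUNE_API_TOKEN environment variable we exported earlier; a minimal sketch:

import os

# read the API token from the environment instead of hardcoding it in the notebook
token = os.getenv('NEPTUNE_API_TOKEN')

As far as I remember, the legacy client also falls back to this environment variable automatically when no api_token is passed to neptune.init().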

Also, pay attention to the .integrate_with_tensorflow() method applied in the last line of the snippet, which enables the actual integration with TensorFlow/Keras within my workspace.

  • The rest of my project code stays the same as before, except for the way I fit my model. Now it looks like this:
params = {
    'backbone': backbone.name,
    'dense_layer': dense_count,
    'batch_size_train': batch_train,
    'lr': learning_rate,
    'input_shape': input_shape
}

# training launch
with neptune.create_experiment(name='resnet50_finetuning', params=params):
    model.fit_generator(generator = train_generator,
                        steps_per_epoch = training_steps_per_epoch,
                        epochs = 100,
                        validation_data = validation_generator,
                        validation_steps = validation_steps_per_epoch,
                        callbacks = [rLrCallBack,
                                     tbCallBack,
                                     mcCallBack_loss,
                                     mcCallBack_acc,
                                     esCallBack],
                        use_multiprocessing = True,
                        workers = 8
                       )

Code snippet for the model fit when you need an integration with Neptune

You see that within the first cell of my Jupyter Notebook, I create a dictionary to specify the set of parameters that will go along with the current experiment. In the next cell, I pass these parameters during experiment creation. This is how Neptune associates a set of parameters with a given experiment run.

Model fitting is similar to what we usually do in TensorFlow/Keras; the only difference is that the call to model.fit_generator() is wrapped in the neptune.create_experiment() context manager.

This is all that you need to know to integrate Neptune with TensorFlow/Keras in your project. Quite a simple process, and it takes just a little bit of your time.

Sharing your work


Now that you have your experiments uploaded to Neptune, we can talk about result sharing.

The approach to sharing differs based on the privacy level you set for the project. Neptune has two options for project privacy: public and private.

Sharing public projects:

If there are no access restrictions on your project, Neptune provides the easiest way to share your results: a URL copied from the web page you’d like to share.

Here’s what you can share via a URL:

  • Details of your experiments, including charts and model binaries;
  • Comparison charts overlaid for different experiments you want to compare;
  • Jupyter Notebook, including checkpoints and differences between multiple notebook versions. For example, check out a complete notebook for my project that I shared.

You can find your project privacy level within the “Projects” tab of your account:

Neptune project privacy
Project privacy level outlined in red for each of the projects in my workspace.
Some of them are public, some are private.

Sharing private projects:

To restrict access to your project, you can set it to private. If you want to keep your work confidential, Neptune lets you manage who has access and what they can do.

To do that, you can invite teammates to your team workspace. Invited members form a “team” within the workspace. Each member can have one of the following roles: owner, contributor, or viewer.

Neptune private projects
New team member invitation and role assignment within the workspace in Neptune

Depending on the access level you assign to your teammates, you limit what a given user can do, ranging from monitoring the project workflow to launching their own experiments within the same workspace.

For more details, check out a short intro video on what an organized workspace looks like for a team.


EDITOR’S NOTE
Research and non-profit organizations can use the Neptune Team plan for free, but everyone can try it. Remember that you can use the Neptune Individual plan however you like (work, research, personal projects).


Other things you can log and share


It’s common practice for machine learning engineers to test out multiple approaches for solving a particular technical task. 

What does each training run end up with? We have our best model checkpoint, performance results on a separate dataset and, something I particularly love to do, plots that help me visually understand the capabilities of my model.

Wouldn’t it be great to keep this information in one place associated with the experiment run? Sounds like the ideal workspace for me.

Luckily, all of that is available in Neptune. So, let’s take advantage of it and include some extra info with our experiment run.

First, I’d like to upload my model’s best checkpoint. Here’s how:

# imports used for evaluation and the plots later in this section
from sklearn.metrics import confusion_matrix
import scikitplot as skplt
from itertools import chain
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

# path to the best checkpoint saved during training
path2best_ckpt = './checkpoints/epoch_16-val_loss-0.0019-val_acc_1.0000_import.hdf5'
best_ckpt_basename = os.path.basename(path2best_ckpt)
neptune.log_artifact(path2best_ckpt,
                     os.path.join('model_checkpoints/', best_ckpt_basename)
                    )

Uploading model’s best checkpoint for a particular experiment run to Neptune

Now, within the “Artifacts” tab of my project workspace, I see an uploaded checkpoint:

Neptune Tensorboard artifacts
Uploaded checkpoint displayed and available within the Artifacts tab of my project workspace

Now, I’ll evaluate my trained model on a test set. To do that, I simply do the following:

# eval_model is the trained model (e.g. restored from the best checkpoint uploaded above)
p = eval_model.predict_generator(test_generator)

Evaluating my trained model on a test set

Since I worked on a classifier, I’d like to evaluate it by looking at the confusion matrix and at the ROC-AUC curve. Let’s create them and attach them to our experiment:

# assumed to be available from earlier cells: numpy as np, matplotlib.pyplot as plt,
# and y_true (the ground-truth labels of the test set)
fig_roc, ax = plt.subplots(1, 1)
skplt.metrics.plot_roc_curve(y_true, np.flip(p, axis=0), ax=ax)

# confusion matrix built from the ground-truth labels and the predicted classes
conf_matrix = confusion_matrix(y_true, np.argmax(p, axis=1))
fig_cm = plt.figure()
sns.heatmap(conf_matrix, annot=True)

neptune.log_image('confusion matrix', fig_cm, image_name='conf_matrix')
neptune.log_image('roc_auc', fig_roc, image_name='roc_auc')
Neptune Tensorboard confusion matrix
Evaluating Models via ROC-AUC and Confusion Matrix plots
Uploading them to the workspace

Here are the plots uploaded to my workspace:

Neptune Tensorboard plots
Plots uploaded to the workspace become available in the Logs tab

There are plenty of things that you can log, keep track of, and share later on. Check this page to see what else is supported.
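
As a small illustration with the legacy client used throughout this article, here is a sketch of logging a few extra items to an experiment; the metric value, note text, and property name below are made up for illustration:

# log a single evaluation metric for the run
neptune.log_metric('test_accuracy', 0.97)

# log a free-form text note
neptune.log_text('notes', 'ResNet50 fine-tuning, early stopping on val_loss')

# attach a key-value property to the experiment
neptune.set_property('dataset_version', 'v1')

A sketch of extra logging calls (illustrative values)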

What’s even cooler is that your teammates can programmatically fetch all of the data that you upload, in a simple and convenient way!
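
Here is a rough sketch of what that programmatic access could look like, assuming the same legacy neptune-client (0.x) API used in this article; the experiment ID below is a placeholder, and the artifact path matches the checkpoint uploaded earlier:

import neptune

# connect to the project (the client reads NEPTUNE_API_TOKEN from the environment if api_token is omitted)
project = neptune.init(project_qualified_name='anton-morgunov/image-orientation-detection-method')

# fetch a specific experiment by its ID (placeholder ID)
experiment = project.get_experiments(id='EXPERIMENT-ID')[0]

# download logged metric values as a pandas DataFrame
losses = experiment.get_numeric_channels_values('epoch_loss', 'epoch_val_loss')

# download an uploaded artifact, e.g. the best model checkpoint
experiment.download_artifact('model_checkpoints/epoch_16-val_loss-0.0019-val_acc_1.0000_import.hdf5')

A sketch of how a teammate could fetch logs and artifacts programmatically (legacy client API)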

Conclusions

The TensorFlow and Keras frameworks are widely used, and the question of work sharing and collaboration is essential for those of us who work in ML teams. Basic tools like TensorBoard do a great job, but mostly for independent work. Other solutions are either discontinued or lack proper privacy management.

The solution developed by Neptune does a great job integrating with TensorFlow/Keras, and gives us rich functionality for experiment tracking. We can complement each experiment with logs of our choice, so it’s well described and easy to understand.

I believe that this tool will be helpful for many machine learning engineers looking for a convenient solution for experiment tracking and result sharing. It definitely helps me.


NEXT STEPS

How to get started with Neptune in 5 minutes

1. Create a free account
2. Install Neptune client library
pip install neptune-client
3. Add logging to your script
import neptune.new as neptune

run = neptune.init('Me/MyProject')
run['params'] = {'lr':0.1, 'dropout':0.4}
run['test_accuracy'] = 0.84
