
How to Make Your Sacred Projects Easy to Share and Collaborate On

4 min
27th July, 2023

Logging is king, and Sacred helps you achieve just that. It is a tool to configure, organize, log, and reproduce computational experiments. It is designed to introduce only minimal overhead while encouraging modularity and configurability of experiments.
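
If you haven't seen Sacred before, here's a minimal sketch of what an experiment looks like (the experiment name and numbers are placeholders for illustration, not from the original article):

# A minimal Sacred experiment: config values are captured,
# injected into the main function, and recorded automatically
from sacred import Experiment

ex = Experiment("my_experiment")

@ex.config
def config():
    learning_rate = 0.01  # hyperparameters defined here are tracked
    epochs = 10

@ex.automain
def main(learning_rate, epochs):
    for epoch in range(epochs):
        # Sacred can record scalar metrics per step
        ex.log_scalar("loss", 1.0 / (epoch + 1), epoch)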

But wait, the story doesn’t end here. While being light and low-overhead works in its favor, it also means that Sacred can only write database entries for your model metadata. There’s no frontend to help you analyze results. So, it’s essentially a tool that logs your metadata to a database but gives you no charts or other visualizations. It also makes it challenging to share these results with other people.
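
For context, here's roughly how that database logging looks, assuming a MongoDB instance running locally (the URL and database name are illustrative):

from sacred import Experiment
from sacred.observers import MongoObserver

ex = Experiment("my_experiment")

# Run metadata (config, metrics, artifacts) ends up as documents
# in MongoDB -- Sacred ships no UI on top of these entries
ex.observers.append(MongoObserver(url="mongodb://localhost:27017",
                                  db_name="sacred"))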

Let’s look into some key areas where Sacred misses out.

Sacred for experiment tracking: key missing features 

While Sacred lets you log model metadata and configurations, it’s limited to just that. You see, when you log experiment metadata, the intention is to look at it in a structured, visually appealing fashion. For example, instead of looking at a raw GPU usage percentage, I’d bet you’d rather see it as a live graph. That’s just how the human brain works: it appreciates images over numbers.

But that’s not the only gap. Let’s take a closer look at where Sacred misses out:

  1. Lack of user interface: Because it has no frontend, Sacred misses out on key experiment tracking features like visualizing experiments, comparing them, and saving customized views or dashboards.
  2. No collaborative features: Working and running experiments within a team requires you to collaborate on many levels, be it comparing experiments, making decisions, or monitoring model training. Sacred, being a backend-only tool, doesn’t let you add team members to a project and collaborate with them on the results in one shared workspace.
  3. No dedicated user support: Being an open-source tool, Sacred doesn’t offer dedicated user support, so you’ll have to rely on the community and GitHub issues to fix your problems. And while communities are usually very helpful, you can’t rely on them to provide an answer quickly.

This essentially means that while using Sacred, you’ll find yourself needing another tool to fill in those gaps.

Experiment tracking with Sacred and neptune.ai: how can neptune.ai fill the gaps?

neptune.ai is an ML metadata store that was built for research and production teams that run many experiments. It has a flexible metadata structure that allows you to organize training and production metadata the way you want to. It’s mostly used for experiment tracking and model registry.

neptune.ai can be a good alternative to Sacred, since it covers all the Sacred functionality and more. The web app was built for managing ML model metadata and lets you:

  • filter experiments and models with an advanced query language;
  • customize which metadata you see with flexible table views and dashboards;
  • monitor, visualize, and compare experiments and models.

Learn more

Check an in-depth comparison of neptune.ai and Sacred+Omniboard features.

But you can also stay with your beloved Sacred logging API while Neptune does all the heavy lifting of visual logging in the background. That’s possible thanks to the Neptune-Sacred integration. We’ll look into the integration in a second.

First, let’s see how Neptune fills in that gap where Sacred misses out.

Collaboration features for Sacred

Neptune offers a wide spectrum of collaborative features that include:

  • A central metadata store: Your team can train a model on different local machines and log training and model metadata onto a central metadata store to collaborate and make decisions based on data from all runs together.
  • Shared table for all runs and models: Every project created in Neptune comes with a shared table view where you can see all the models and experiments logged by your team. This way, you can compare trained model metrics, monitor training progress, and keep track of what your team is working on at any given time.
  • User roles: As a team admin, you can manage who gets access to what, so changes to the workspace stay controlled.
  • Persistent links: Neptune URLs are persistent, which means you can share any table view or page you are on, and the other person will see exactly the same thing, so you can safely use these URLs in other tools and reports. You can use the share button (available in the app) or just copy and paste the URL.
[Image: Using persistent URLs in the Neptune app]

A user interface for Sacred 

As I’ve mentioned already, Neptune comes with an intuitive and customizable interface. It’s actually one of the frontends suggested in the Sacred GitHub repository.

The Neptune app lets you display all the experiment-related metadata you’ll need for your runs. It covers every nook and cranny of model metadata, which may include but is not limited to:

  • Parameter configurations
  • Evaluation metrics 
  • Model weights
  • Performance visualizations (confusion matrix, ROC curve)  
  • Example predictions on the validation set (common in computer vision)
  • Scripts used for running the experiment
  • Environment configuration files 
  • Versions of the data used for training and evaluation

Here you can check exactly what you can log and display in the app.
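
To give a flavor of how such metadata lands in Neptune, here's a short sketch using Neptune's Python API (the field names and file paths are illustrative, and it assumes NEPTUNE_API_TOKEN and NEPTUNE_PROJECT are set in your environment):

import neptune

run = neptune.init_run()

# Parameter configurations
run["parameters"] = {"learning_rate": 0.01, "batch_size": 64}

# Evaluation metrics, logged as a series
for acc in [0.71, 0.78, 0.83]:
    run["metrics/val_accuracy"].append(acc)

# Files: model weights and performance visualizations
run["model/weights"].upload("model.pt")
run["visualizations/confusion_matrix"].upload("confusion_matrix.png")

run.stop()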

Once the metadata is there, you can easily search through it, filter it, and organize it however you want. You can also compare experiments, save table views, and create custom dashboards for analyzing results.

[Image: Example computer vision dashboard in Neptune | See in the app]

Dedicated user support

Being a managed tool, Neptune offers dedicated support based on the plan you choose (see the pricing page for more information). Apart from this, there is extensive documentation available, covering both the tool’s functionality and tutorials.

We’ve established that Neptune covers the missing pieces left by Sacred well, but there’s more. To make life easier for Sacred users, the Neptune team has built an integration that seamlessly bridges the two logging experiences. Let’s talk more about the integration.

Neptune-Sacred integration

The Neptune-Sacred integration fills the technical gap of visualizing logged metadata and unlocks collaborative features for working with your team on experiments. It works by appending an instance of Neptune’s NeptuneObserver to the observers of your Sacred experiment.

The Neptune-Sacred integration can be set up simply by installing a Python package and importing it.

# Install the integration
pip install neptune-sacred

# Import Neptune, Sacred, and the observer
import neptune
from sacred import Experiment
from neptune.integrations.sacred import NeptuneObserver

# Create a run in Neptune
run = neptune.init_run()

# Create a Sacred experiment
ex = Experiment("image_classification", interactive=True)

# Add a NeptuneObserver instance to the observers of the experiment
ex.observers.append(NeptuneObserver(run=run))

Note: For the most up-to-date code example, always refer to the Neptune-Sacred integration documentation.
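
To see the observer in action, here's a hedged continuation of the snippet above: a toy main function whose scalars get forwarded to Neptune (the metric name and values are made up for illustration):

# Metrics logged through Sacred's standard API are picked up
# by NeptuneObserver and streamed to the Neptune app
@ex.main
def train(_run):
    loss = None
    for step in range(10):
        loss = 1.0 / (step + 1)  # placeholder training loss
        _run.log_scalar("training/loss", loss, step)
    return loss  # the return value is recorded as the run result

ex.run()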

With the Neptune-Sacred integration, you can log the following metadata automatically:

  • Hyperparameters
  • Metrics and losses
  • Training code and Git information
  • Dataset version
  • Model configuration

Here’s what it looks like in the app:

[Image: Example dashboard in Neptune, presenting different metadata logged using the Neptune-Sacred integration | See in the app]

You can check this example project here (no registration is needed). 

Head over to the Neptune-Sacred integration docs for detailed, step-by-step instructions.

You’ve reached the end

Congratulations! We’ve successfully made your Sacred projects easy to share and collaborate on. This was made possible by using Neptune alongside Sacred, with Neptune acting as a frontend to Sacred’s backend. Just FYI, Neptune on its own covers all of Sacred’s functionality too. So, if you want a single tool to solve your metadata logging needs, Neptune is a great contender.

That’s it for now. Stay tuned for more, adios!
