ML Metadata Store

A single place to manage all your model-building metadata

Track experiments, register models, and integrate with any MLOps tool stack

Overview


Experiment tracking

Track, organize, and compare everything you care about in your ML experiments.
  • Monitor experiments as they are running
  • Keep track of metrics, parameters, diagnostic charts, and more
  • Search, group, and compare experiments with no effort
  • Drill down to all the experiment details you need
  • Share results with your team and access all experiment details programmatically
Learn more

Model registry

  • Version production-ready models and metadata associated with them in a single place
  • Review models and transition them between development stages
  • Access all models your team created via API or browse them in the UI
Learn more
Integrations
Get started

Get started in 5 minutes


Sign up for Neptune and install the client library

pip install neptune

Track experiments

import neptune

run = neptune.init_run()
run["params"] = {"lr": 0.1, "dropout": 0.4}
run["test_accuracy"] = 0.84

Register models

import neptune

model = neptune.init_model(key="MOD")  # a key such as "MOD" identifies the model in the project
model["model"] = {
    "size_limit": 50.0,
    "size_units": "MB",
}

Compare Neptune

Compare with other tools

Resources

Code examples, videos, projects gallery, and other resources

Our users

Trusted by 30000+ ML practitioners and 500+ commercial and research teams

“I’ve been mostly using Neptune just looking at the UI which I have, let’s say, kind of tailored to my needs. So I added some custom columns which will enable me to easily see the interesting parameters and based on this I’m just shifting over the runs and trying to capture what exactly interests me.”
Wojciech Rosiński, CTO at ReSpo.Vision
“Gone are the days of writing stuff down on Google Docs and trying to remember which run was executed with which parameters and for what reasons. Having everything in Neptune allows us to focus on the results and better algorithms.”
Andreas Malekos, Head of Artificial Intelligence at Continuum Industries
“Neptune is aesthetic. Therefore we could simply use the visualization it was generating in our reports.

We trained more than 120,000 models in total, for more than 7,000 subproblems identified by various combinations of features. Due to Neptune, we were able to filter experiments for given subproblems and compare them to find the best one. Also, we stored a lot of metadata, visualizations of hyperparameters’ tuning, predictions, pickled models, etc. In short, we were saving everything we needed in Neptune.”
Patryk Miziuła, Senior Data Scientist at
“The way we work is that we do not experiment constantly. After checking out both Neptune and Weights and Biases, Neptune made sense to us due to its pay-per-use or usage-based pricing. Now when we are doing active experiments then we can scale up and when we’re busy integrating all our models for a few months that we scale down again.”
Viet Yen Nguyen, CTO at Hypefactors

    Contact us
