Model Registry

Keep your model development under control.

Version, store, organize, and query models and model development metadata, including:

  • Dataset, code, and environment config versions
  • Parameters and evaluation metrics
  • Model binaries, descriptions, and other details
  • Testset prediction previews and model explanations
Talk to a product pro

Record and version all model development metadata

What model metadata can I version in Neptune?

Neptune lets you version, display, and query most metadata generated during model building:

  • Code, .git info, files, and Jupyter notebooks
  • Parameters and evaluation metrics
  • Datasets (hash + location)
  • Model binaries and other file objects
  • Images, interactive charts, audio, video
  • and much more!

See the full list of supported metadata types

How does the logging API work?

You connect Neptune to your model training from your code via the neptune-client library.

  • connect to your project and start a run with neptune.init()
  • log parameters, metrics, and other values by assigning them to fields on the run, for example run['acc'] = 0.92
  • upload files such as model binaries with run['model'].upload('model.pkl')

Learn more
import neptune.new as neptune

run = neptune.init('YOUR_ORG/PROJECT',
                   source_files=['**/*.py',            # scripts
                                 'requirements.yaml'])  # environment
run['params'] = {'lr': 0.21,
                 'batch_size': 128}
run['data_version'] = get_md5('data/train.csv')  # data versions
run['acc'] = 0.92  # metrics
run['model'].upload('model.pkl')  # model binaries

Which languages and ML frameworks does Neptune work with?

Neptune works with any ML framework through its official Python and R client libraries (and you can record model training from other languages as well).

To make things easier, Neptune has integrations with over 25 major ML frameworks and libraries, including:

  • Keras and TensorFlow
  • PyTorch, PyTorch Lightning, Catalyst, fastai, Ignite, and skorch
  • scikit-learn, XGBoost, and LightGBM
  • Optuna, Scikit-Optimize (skopt), and Keras Tuner
  • and more!

Learn more
# TensorFlow/Keras
model.fit(..., callbacks=[NeptuneMonitor()])

# PyTorch Lightning
trainer = pl.Trainer(..., logger=NeptuneLogger())

# LightGBM
gbm = lgb.train(..., callbacks=[neptune_monitor()])

# XGBoost
xgb.train(..., callbacks=[neptune_callback()])

# Optuna
study.optimize(..., callbacks=[NeptuneCallback()])

How can I download model information?

You can download model information directly from the UI or use the client library.

Whatever you logged to Neptune can be accessed and queried programmatically.

Learn more
# Fetch a table of all runs in the project as a pandas DataFrame
project = neptune.get_project('your-org/your-project')
project.fetch_runs_table().to_pandas()

# Re-attach to a single run and fetch or download what it logged
run = neptune.init('your-org/your-project', run='SUN-123')
run['params/batch_size'].fetch()
run['model'].download()

Organize models in a central model registry

Version and store all your model training runs in a single place

Have the data, code, parameters, and model binaries versioned for every model training run.

Run model training jobs in every environment (cloud, local, notebooks, you name it) and log whatever information you care about to Neptune.

Your team can organize, display, compare, and query it easily.

Learn more
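For example, a training script can tag its runs so teammates can find and group them later. A minimal sketch using the neptune.new API shown elsewhere on this page; the tag names and the train() helper are placeholders:

import neptune.new as neptune

# Works the same from a cloud job, a local script, or a notebook
run = neptune.init('YOUR_ORG/PROJECT')
run['sys/tags'].add(['cloud-job', 'resnet50'])  # tags your team can filter by

run['params'] = {'lr': 0.01, 'batch_size': 64}
for epoch_loss in train():  # train() stands in for your training loop
    run['train/loss'].log(epoch_loss)
run.stop()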

Filter, sort, and group model training runs in a dashboard

Find the model training runs you care about quickly (in the UI, or programmatically as sketched below):

  • search based on metric and parameter values
  • find runs with user-defined tags
  • group model training runs by parameter or tag values
Learn more
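A minimal sketch of the same filtering done programmatically, assuming the field names from the logging example above ('acc', 'params/lr') and a hypothetical 'prod' tag:

import neptune.new as neptune

project = neptune.get_project('your-org/your-project')

# Narrow down by tag on the server, then filter and sort locally in pandas
df = project.fetch_runs_table(tag='prod').to_pandas()
best = df[df['acc'] > 0.9].sort_values('acc', ascending=False)
print(best[['sys/id', 'acc', 'params/lr']])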

Compare and debug models with zero effort

  • Compare metrics and parameters in a table that automatically finds what changed between runs
  • Explore model diagnostics charts and prediction explanations when you need to dig deeper
  • Compare training runs across dataset versions

Learn more
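If you prefer to diff runs in code rather than in the UI, here is a rough sketch of finding what changed between two runs (the run IDs are placeholders):

import neptune.new as neptune

project = neptune.get_project('your-org/your-project')
df = project.fetch_runs_table().to_pandas().set_index('sys/id')

# Keep only the columns that differ between the two runs
pair = df.loc[['SUN-123', 'SUN-124']]
print(pair.loc[:, pair.nunique() > 1])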

See (only) the information you want: customize and save dashboard views

The Neptune experiment dashboard is fully customizable.

  • Choose which metrics, parameters, or other information you want to see.
  • Save dashboard views for later.
  • Keep multiple dashboard views for different purposes (exploration, comparison, reporting).
See example project

Access model information programmatically

You can download all model information that you logged, including:

  • code
  • model binaries
  • metrics and parameters
  • and more!

It is as simple as pointing Neptune to the run you want from your Python code.

Learn more
# Fetch a table of all runs in the project as a pandas DataFrame
project = neptune.get_project('your-org/your-project')
project.fetch_runs_table().to_pandas()

# Re-attach to a single run and fetch or download what it logged
run = neptune.init('your-org/your-project', run='SUN-123')
run['params/batch_size'].fetch()
run['model'].download()

Make your models reproducible and traceable

Track everything you need to reproduce every model training run

Automatically record the code, environment, parameters, model binaries, and evaluation metrics every time you run an experiment. You will never forget to commit your changes, because recording happens automatically.

Learn more
import neptune.new as neptune

run = neptune.init('YOUR_ORG/PROJECT',
                   source_files=['**/*.py',            # scripts
                                 'requirements.yaml'])  # environment
run['params'] = {'lr': 0.21,
                 'batch_size': 128}
run['data_version'] = get_md5('data/train.csv')  # data versions
run['acc'] = 0.92  # metrics
run['model'].upload('model.pkl')  # model binaries

Go back to your model training runs even months later

Find and access information from your past runs whenever you need it.

No worries about losing your work or not knowing which model is running in production.

Everything is stored, backed up, and ready for you.

See example project

Re-run every past model training

Do you need to re-run a model training job from the past?

Query all the pieces you need from Neptune.

Your code, environment files, data versions, and parameters can be attached to every model training job you run. You can access it programmatically or find it in the UI.

Learn more
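A sketch of pulling those pieces back, assuming a run logged with the snippets shown above and source files captured via the source_files argument (Neptune stores them under 'source_code/files'); the run ID is a placeholder:

import neptune.new as neptune

# Re-attach to the past run without modifying it
run = neptune.init('your-org/your-project', run='SUN-123', mode='read-only')

params = run['params'].fetch()              # parameters as a dict
data_version = run['data_version'].fetch()  # the md5 recorded at training time
run['source_code/files'].download()         # snapshotted scripts and env files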

Find out who created a model and talk to them

Do you want to know who created the model that is running in production?

Neptune automatically records that info for every experiment that is run.

Just find that run, send an experiment link to your colleague, and talk.

Link to an example run
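For example, the owner is stored in the run's system namespace, so a sketch like this (the run ID is a placeholder) returns the username:

import neptune.new as neptune

run = neptune.init('your-org/your-project', run='SUN-123', mode='read-only')
print(run['sys/owner'].fetch())          # who started the run
print(run['sys/creation_time'].fetch())  # and when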

Find out what data your model was trained on

Your production model is behaving unexpectedly, and you are wondering what it was trained on?

Neptune lets you record data versions and locations for every training run, so when you need that information you can find it in seconds.

You can even record a data snapshot to understand it better.

Learn more
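A minimal sketch of recording both the data version and a snapshot, reusing the get_md5 helper from the examples above (the paths are placeholders):

import neptune.new as neptune

run = neptune.init('YOUR_ORG/PROJECT')

# Record where the data came from and exactly which version it was
run['data/location'] = 's3://bucket/train.csv'
run['data/version'] = get_md5('data/train.csv')
run['data/snapshot'].upload('data/train.csv')  # optional: keep the file itself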

Collaborate on models with your team

See every model training your team runs

Have every artifact of every model your team builds in a central model registry.

Avoid duplicating expensive jobs.

Everything is backed up, secure, and ready to be accessed at any time.

See example project

Share anything you want with a link

Want to discuss what you see right now in the application?

Just share a link. It’s that simple. 

Experiment details, comparisons, dashboard views, and anything else get persistent links that you can paste or send!

Link to training run comparison

Have a central registry for your models and experiments

You can have all experiments and models your team runs in a single place. 

Code, parameters, and model binaries are logged for every training job so that you can reproduce, retrain, and deploy them to production.

See example project

Find and fetch everything you need programmatically

Everything that your team logs to Neptune is automatically accessible to every team member.

You can access model training run information like the code, parameters, model binary, or other objects via an API.

Learn more
# Fetch a table of all runs in the project as a pandas DataFrame
project = neptune.get_project('your-org/your-project')
project.fetch_runs_table().to_pandas()

# Re-attach to a single run and fetch or download what it logged
run = neptune.init('your-org/your-project', run='SUN-123')
run['params/batch_size'].fetch()
run['model'].download()

Focus on building models.
Leave metadata bookkeeping to Neptune.
Get started in 5 minutes.

1. Create a free account
Sign up
2. Install Neptune client library
pip install neptune-client
3. Add logging to your script
import neptune.new as neptune

run = neptune.init('Me/MyProject')
run['params'] = {'lr':0.1, 'dropout':0.4}
run['test_accuracy'] = 0.84
Try live notebook
Get started with Neptune

Not convinced?

Try in a live Notebook (zero setup, no registration)

Try now

Explore example project

Go to project

Watch screencasts

Watch now

See the docs

Check now