Experiment Tracking

Organize your ML experimentation in a single place: 

  • Log and display metrics, parameters, images, and other ML metadata
  • Search, group, and compare runs with no extra effort
  • See and debug experiments live as they are running
  • Share results by sending a persistent link
  • Query experiment metadata programmatically
Talk to a product pro ->

Log various object types and display them in the UI

What metadata types can you log and display in Neptune?

You can log virtually any object to Neptune, and most of them will be displayed nicely in the UI (a short code sketch follows the list below).

Neptune supports:

  • Metrics and learning curves
  • Parameters, tags, and properties
  • Code, .git info, files, and Jupyter notebooks
  • Hardware consumption (CPU, GPU, memory)
  • Images, interactive charts, and HTML objects
  • Audio and video files
  • Tables and .csv files
  • and more!

Learn more
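As a rough sketch, logging a few of these types with the neptune-client API used throughout this page could look like this (file names such as conf_matrix.png are placeholders):

import neptune

neptune.init('Me/MyProject')  # credentials go here
neptune.create_experiment('metadata-types-demo')

neptune.log_metric('accuracy', 0.92)                 # metrics and learning curves
neptune.set_property('data_version', 'v1.2')         # properties
neptune.append_tag('baseline')                       # tags
neptune.log_text('notes', 'first full run')          # text logs
neptune.log_image('diagnostics', 'conf_matrix.png')  # images
neptune.log_artifact('predictions.csv')              # files such as tables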

How does the logging API work?

You connect Neptune to your experiments from inside your code via the neptune-client library.

  • Initialize the client with neptune.init()
  • Start an experiment with neptune.create_experiment()
  • Log any object you want with the appropriate logging method, for example neptune.log_metric() for metrics and losses.

Learn more
import neptune

neptune.init() # credentials go here
neptune.create_experiment('my-experiment')
# training and evaluation logic here
neptune.log_metric('accuracy', 0.79)
neptune.log_image('diagnostics', roc_curve_fig)
neptune.log_artifact('model_weights.h5')
# there are many other objects you can log

Which languages and ML frameworks does Neptune work with?

Neptune works with any ML framework in Python and R via the official client libraries (you can also log experiments from other languages).

To make things easier, Neptune has integrations with over 25 major ML frameworks and libraries, including:

  • Keras and TensorFlow
  • PyTorch, PyTorch Lightning, Catalyst, fastai, Ignite, skorch
  • scikit-learn, XGBoost, LightGBM
  • Optuna, scikit-optimize (skopt), Keras Tuner
  • and more!

Learn more
# TensorFlow / Keras
model.fit(..., callbacks=[NeptuneMonitor()])

# PyTorch Lightning
trainer = pl.Trainer(..., logger=NeptuneLogger())

# LightGBM
gbm = lgb.train(..., callbacks=[neptune_monitor()])

# XGBoost
xgb.train(..., callbacks=[neptune_callback()])

# Optuna
study.optimize(..., callbacks=[NeptuneCallback()])
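For one end-to-end example, here is a minimal Keras sketch; the NeptuneMonitor import path assumes the legacy neptune-contrib package and may differ in your version:

import neptune
import numpy as np
from tensorflow import keras
from neptunecontrib.monitoring.keras import NeptuneMonitor  # assumed import path

neptune.init('Me/MyProject')
neptune.create_experiment('keras-integration-demo')

# Tiny synthetic binary-classification task, just to have something to fit
x, y = np.random.rand(256, 8), np.random.randint(0, 2, 256)
model = keras.Sequential([keras.layers.Dense(16, activation='relu'),
                          keras.layers.Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# The callback streams loss/metric values to the experiment as training runs
model.fit(x, y, epochs=3, callbacks=[NeptuneMonitor()])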

Can I update an existing experiment?

Of course. 

You can fetch an existing experiment from Neptune and log new metadata or update logs.

Learn more
import neptune

project = neptune.init('Me/MyProject') # Get Project
exp = project.get_experiments(id='exp-123')[0] # Get Experiment
exp.log_metric('f1_score', 0.92) # Log things

Organize experiments

Run experiments everywhere, keep the results in one place

You can execute experiment code anywhere, and everything will be logged to the central metadata store (a minimal connection sketch follows the list):

  • Run on a laptop, in the cloud, in online notebooks (Colab, Deepnote, Kaggle), or anywhere else
  • Use with Python, R, or any other language
  • Use the hosted version or deploy Neptune on-prem
Learn more
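The sketch below assumes your API token is in the NEPTUNE_API_TOKEN environment variable, which the client also reads by default when api_token is omitted:

import os
import neptune

# The same call works on a laptop, in the cloud, or in a hosted notebook
neptune.init('Me/MyProject',
             api_token=os.getenv('NEPTUNE_API_TOKEN'))
neptune.create_experiment('runs-anywhere')
neptune.log_metric('loss', 0.31)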

Filter, sort, and group experiments in a dashboard

Find the experiments you care about quickly:

  • Search based on metric and parameter values
  • Find experiments with user-defined tags
  • Group experiments based on parameter or tag values
Learn more

Compare experiment runs with zero effort

  • Compare metrics and parameters in a table that automatically finds what changed between experiments
  • Compare overlaid learning curves
  • Drill down to experiment details like interactive performance charts to get a better picture
Learn more
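The comparison table lives in the UI, but as a rough programmatic analogue you can diff parameters yourself from the leaderboard DataFrame; the parameter_ column prefix is an assumption about how the legacy client names leaderboard columns:

import neptune

project = neptune.init('Me/MyProject')
df = project.get_leaderboard()  # one row per experiment, as a pandas DataFrame

# Keep only parameter columns whose values differ across runs --
# a hand-rolled version of "show what changed between experiments"
params = df[[c for c in df.columns if c.startswith('parameter_')]]
print(params.loc[:, params.nunique() > 1])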

See (only) the information you want: customize and save dashboard views

The Neptune experiment dashboard is fully customizable.

  • You can choose which metrics, parameters, or other information you want to see.
  • You can save dashboard views for later.
  • You can have multiple dashboard views for different purposes (exploration, comparison, reporting to managers).
Learn more

Drill down to experiment details whenever you need to

Go from high-level analysis to details in one click. 

All the experiment information like code, parameters, evaluation metrics, or model files can be logged and displayed in Neptune.

Learn more

Access everything programmatically

You can download everything you logged to Neptune either in the UI or programmatically.

You can query:

  • experiment leaderboards
  • experiment metrics, parameters, and source code
  • experiment artifacts like model checkpoints
  • and more!
Learn more
project = neptune.init('Me/MyProject')

project.get_leaderboard()

exp = project.get_experiments(id='Proj-123')[0]
exp.get_parameters()
exp.download_artifact('model.pkl')

Make your experiments reproducible

Track everything you need for every experiment run

With Neptune, you can automatically record everything you need to reproduce an experiment:

  • code and .git info
  • environment files
  • metrics and parameters
  • model binaries
  • and more


Learn more
neptune.create_experiment(
    params={'lr': 0.21,
            'batch_size': 128},       # parameters
    upload_source_files=['**/*.py',   # scripts
                         'env.yaml']) # environment
log_data_version('data/train.csv')  # data version (from neptune-contrib)
neptune.log_metric('acc', 0.92)     # metrics
neptune.log_artifact('model.pkl')   # model binaries

Go back to your experiments even months later

Find and access information from your past experiments whenever you need them.

No worries about losing your work or not knowing which model is running in production.

Everything is stored, backed up, and ready for you.

See example project

Re-run every past experiment

Need to re-run a past experiment for work or research?

Query all the pieces you need from Neptune.

Your code, environment files, data versions, and parameters can be attached to every experiment you run. You can access them programmatically or find them in the UI.

Learn more
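A minimal sketch of pulling those pieces back, assuming the experiment id and file names shown:

import neptune

project = neptune.init('Me/MyProject')
exp = project.get_experiments(id='PRO-123')[0]

params = exp.get_parameters()       # hyperparameters as a dict
exp.download_sources()              # zip of the uploaded source files
exp.download_artifact('model.pkl')  # model binary, env files, etc.
# Recreate the environment, unzip the sources, and re-run with params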

Have a central place for all your team’s experiments

Keep every experiment and notebook from every teammate in one place

You can execute experiment code on your laptop, in a cloud environment, or on a cluster, and all the information will be logged to central storage, hosted by us or deployed on-premises.

Works out-of-the-box with Python, R, Jupyter notebooks, and other languages and environments.

Learn how to run Neptune anywhere

See what your team is doing, any time, anywhere, without logging into a remote server

You don’t have to log into your workstation to see what is going on.

See all your team’s activity with experiments, metrics, notebooks, and any other information on your desktop or mobile device.

See example project

Save dashboard or experiment views for later and link to them from other tools

Search, filter, group, and compare experiments in the dashboard.

Find out which ideas people are working on and what is bringing results.

Create and save dashboard views based on tasks, people, or results, and link to them from other tools that you are using (looking at you, Jira).

See example dashboard

Share anything you want with a link

Want to discuss what you are seeing right now in the application?

Just share a link; it's that simple. Experiment details, comparisons, dashboard views, or anything else!

Link to example dashboard

Find and query everything you need programmatically

Everything that your team logs to Neptune is automatically accessible to every team member.

You can access experiments and notebooks, including the code, parameters, model binaries, and other objects, via an API.

Learn more
import neptune

project = neptune.init('Me/MyProject')
experiment = project.get_experiments(id='PRO-332')[0]

experiment.get_parameters()
experiment.download_artifact('model.pkl')
experiment.download_sources()

Focus on ML experimentation.
Leave metadata bookkeeping to Neptune.
Get started in 5 minutes

1. Create a free account
Sign up
2. Install Neptune client library
pip install neptune-client
3. Add logging to your script
import neptune

neptune.init('Me/MyProject')
neptune.create_experiment(params={'lr':0.1, 'dropout':0.4})
# training and evaluation logic
neptune.log_metric('test_accuracy', 0.84)
Try live notebook

Not convinced?

Try it in a live notebook (zero setup, no registration)

Try now

Explore example project

Go to project

Watch screencasts

Watch now

See the docs

Check now