
Automate and organize experiment tracking for your ML model development

Log any model metadata from anywhere in your ML pipeline.
Version data and experiments for easier reproducibility.
Monitor experiments as they are running.
Search, visualize, debug, and compare experiments and datasets.
Share and collaborate on experiment results across the organization.

Log various object types and display them in the UI

What metadata types can you log and display in Neptune?

You can log virtually any object to Neptune, and most of them will be displayed nicely in the UI.

Neptune supports:

  • Metrics and learning curves
  • Parameters, tags, and properties
  • Code, .git info, files, and Jupyter notebooks
  • Hardware consumption (CPU, GPU, memory)
  • Images, interactive charts, and HTML objects
  • Audio and video files
  • Tables and .csv files
  • and more!

Learn more
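For example, here is a minimal sketch of logging a few of these types with the neptune.new Python API (the project name, field paths, and file name below are illustrative):

import neptune.new as neptune

run = neptune.init('Me/MyProject')

# metrics and learning curves: call log() repeatedly to build a curve
for loss in [0.8, 0.5, 0.3]:
    run['train/loss'].log(loss)

# parameters and tags
run['params'] = {'lr': 0.1, 'dropout': 0.4}
run['sys/tags'].add(['baseline'])

# files such as images, HTML, audio, or CSVs
run['eval/confusion_matrix'].upload('confusion_matrix.png')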

How does the logging API work?

You connect Neptune to your experiment inside your code via the neptune-client library.

  • Initialize the Neptune client with neptune.init()
  • Start an experiment with neptune.create_experiment()
  • Log whatever object you want with the appropriate logging method, for example neptune.log_metric() for metrics and losses.

Learn more
import neptune.new as neptune

run = neptune.init()  # credentials go here

# training and evaluation logic here
run["accuracy"] = 0.79

Which languages and ML frameworks does Neptune work with?

Neptune works with any ML framework in Python and R via the official client libraries (you can also log experiments from other languages).

To make things easier, Neptune has integrations with over 25 major ML frameworks and libraries, including:

  • Keras and TensorFlow
  • PyTorch and PyTorch Lightning, Catalyst, fastai, Ignite, skorch
  • Scikit-learn, XGBoost, LightGBM
  • Optuna, Skopt, Keras Tuner
  • and more!

Learn more
# TensorFlow / Keras
model.fit(..., callbacks=[NeptuneCallback()])

# PyTorch Lightning
trainer = pl.Trainer(..., logger=NeptuneLogger())

# LightGBM
gbm = lgb.train(..., callbacks=[NeptuneCallback()])

# XGBoost
xgb.train(..., callbacks=[NeptuneCallback()])

# Optuna
study.optimize(..., callbacks=[NeptuneCallback()])

Can I update an existing experiment?

Of course. 

You can fetch an existing experiment from Neptune and log new metadata or update logs.

Learn more
run = neptune.init('Me/MyProject', run='SUN-123')
run['f1_score'] = 0.92

Organize experiments

Run experiments everywhere, keep the results in one place

You can execute experiment code anywhere, and everything will be logged to the central metadata store:

  • Run on your laptop, in the cloud, in online notebooks (Colab, Deepnote, Kaggle), or anywhere else
  • Use it with Python, R, or any other language
  • Use the hosted version or deploy Neptune on-premises
Learn more
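As a sketch of what this looks like in practice (assuming the API token and project name are supplied through the standard NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables), the same logging code runs unchanged on a laptop, a cluster, or a hosted notebook:

import neptune.new as neptune

# picks up NEPTUNE_API_TOKEN and NEPTUNE_PROJECT from the environment,
# so the script does not change between your laptop, a cloud VM, or Colab
run = neptune.init()

run['environment'] = 'colab'  # illustrative field
run['accuracy'] = 0.81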

Filter, sort, and group experiments in a dashboard

Find the experiments you care about quickly:

  • Search based on metric and parameter values
  • Find experiments with user-defined tags
  • Group experiments by parameter or tag values
Learn more

Compare experiment runs with zero effort

  • Compare metrics and parameters in a table that automatically finds what changed between experiments
  • Compare overlaid learning curves
  • Drill down to experiment details like interactive performance charts to get a better picture
Learn more

See (only) the information you want: customize and save dashboard views

The Neptune experiment dashboard is fully customizable.

  • You can choose which metrics, parameters, or other information you want to see.
  • You can save dashboard views for later.
  • You can have multiple dashboard views for different purposes (exploration, comparison, reporting to managers).
Learn more

Drill down to experiment details whenever you need to

Go from high-level analysis to details in one click. 

All experiment information, like code, parameters, evaluation metrics, or model files, can be logged and displayed in Neptune.

Learn more

Access everything programmatically

You can download everything you logged to Neptune either in the UI or programmatically.

You can query:

  • the experiment leaderboard
  • experiment metrics, parameters, and source code
  • experiment artifacts like model checkpoints
  • and more!
Learn more
project = neptune.get_project('your-org/your-project')  # project-level access

run = neptune.init('your-org/your-project', run='SUN-123')  # reconnect to an existing run
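Continuing from the snippet above, here is a short sketch of what such queries can look like (the field names 'acc' and 'model' are illustrative and assume metadata was logged as in the reproducibility example below):

# the experiment leaderboard as a pandas DataFrame
runs_df = project.fetch_runs_table().to_pandas()

# individual fields from the reconnected run
accuracy = run['acc'].fetch()        # a logged metric
run['model'].download('model.pkl')   # a logged artifact, e.g. a model checkpoint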

Make your experiments reproducible

Track everything you need for every experiment run

With Neptune, you can automatically record everything you need to reproduce an experiment:

  • code and .git info
  • environment files
  • metrics and parameters
  • model binaries
  • and more

Learn more
run = neptune.init('YOUR_ORG/PROJECT',
                   source_files=['**/*.py',            # scripts
                                 'requirements.yaml'])  # environment
run['params'] = {'lr': 0.21,
                 'batch_size': 128}
run['data_version'] = get_md5('data/train.csv')  # data version
run['acc'] = 0.92  # metrics
run['model'].upload('model.pkl')  # model binaries

Go back to your experiments even months later

Find and access information from your past experiments whenever you need it.

No worries about losing your work or not knowing which model is running in production.

Everything is stored, backed up, and ready for you.

See example project

Re-run every past experiment

Need to re-run a past experiment for work or research?

Query all the pieces you need from Neptune.

Your code, environment files, data versions, and parameters can be attached to every experiment you run. You can access them programmatically or find them in the UI.

Learn more
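As a sketch, assuming the run logged source files, parameters, and a data version under the field names used in the reproducibility example above (the run ID and the 'source_code/files' path are illustrative):

import neptune.new as neptune

run = neptune.init('YOUR_ORG/PROJECT', run='SUN-123')

params = run['params'].fetch()              # hyperparameters as a dict
data_version = run['data_version'].fetch()  # logged data version
run['source_code/files'].download()         # snapshotted source files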

Have a central place for all your team’s experiments

Keep every piece of every experiment or notebook from every teammate in one place

You can execute experiment code on your laptop, in a cloud environment, or on a cluster, and all the information will be logged to the central storage, hosted by us or deployed on-premises.

Works out-of-the-box with Python, R, Jupyter notebooks, and other languages and environments.

Learn how to run Neptune anywhere

See what your team is doing, any time, anywhere, without logging into a remote server

You don’t have to log into your workstation to see what is going on.

See all your team’s activity with experiments, metrics, notebooks, and any other information on your desktop or mobile device.

See example project

Save dashboard or experiment views for later and link to them from other tools

Search, filter, group, and compare experiments in the dashboard.

Find out which ideas people are working on and what is bringing results.

Create and save dashboard views based on tasks, people, or results, and link to them from other tools that you are using (looking at you, Jira).

See example dashboard

Share anything you want with a link

Want to discuss what you are seeing right now in the application?

Just share a link; it’s that simple. Experiment details, comparisons, dashboard views, or anything else!

Link to example dashboard

Find and query everything you need programmatically

Everything that your team logs to Neptune is automatically accessible to every team member.

You can access experiments or notebooks, including the code, parameters, model binaries, or other objects, via the API.

Learn more
project = neptune.get_project('your-org/your-project')  # project-level access

run = neptune.init('your-org/your-project', run='SUN-123')  # reconnect to a teammate's run

Focus on ML experimentation.
Leave metadata bookkeeping to Neptune.
Get started in 5 minutes

1. Create a free account
Sign up
2. Install Neptune client library
pip install neptune-client
3. Add logging to your script
import neptune.new as neptune

run = neptune.init('Me/MyProject')
run['params'] = {'lr': 0.1, 'dropout': 0.4}
run['test_accuracy'] = 0.84
Try live notebook
Get started with Neptune


Code examples, videos, projects gallery, and other resources.

Trusted by 20,000+ ML practitioners
and 500+ commercial and research teams

See all case studies

Manage all your model metadata in a single place