Log various object types and display them in the UI
What metadata types can you log and display in Neptune?
You can log virtually any object to Neptune, and most of them will be displayed nicely in the UI (see the sketch after the list below).
Neptune supports:
- Metrics and learning curves
- Parameters, tags, and properties
- Code, .git info, files, and Jupyter notebooks
- Hardware consumption (CPU, GPU, Memory)
- Images, interactive charts, and HTML objects
- Audio and video files
- Tables and .csv files
- and more!
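For illustration, here is a minimal sketch of logging a few of these types with the client library; the field names, example values, and the matplotlib figure are placeholders rather than anything Neptune requires:
import neptune.new as neptune
from neptune.new.types import File
import matplotlib.pyplot as plt

run = neptune.init('Me/MyProject')

run['params'] = {'lr': 0.1, 'batch_size': 64}  # parameters
for epoch_loss in [0.52, 0.41, 0.33]:
    run['train/loss'].log(epoch_loss)  # metrics and learning curves

fig, ax = plt.subplots()  # any matplotlib figure stands in for a real chart
ax.plot([0, 1], [0, 1])
run['visuals/example_chart'].upload(File.as_image(fig))  # images and charts

run['data/sample'].upload('sample.csv')  # tables and .csv files (assuming sample.csv exists locally)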

How does the logging API work?
You connect Neptune to your experiment inside your code via the neptune-client library.
- initialize a run with neptune.init()
- log metrics and other values by assigning them to fields of the run, for example run['accuracy'] = 0.79
- upload files such as figures or model weights with the .upload() method
run = neptune.init() # credentials go here
# training and evaluation logic here
run["accuracy"] = 0.79
run["diagnostics"].upload(roc_curve_fig)
run["model"].upload("model_weights.h5")
Which languages and ML frameworks does Neptune work with?
Neptune works with any ML framework in Python and R via the official client libraries (you can also log experiments from other languages).
To make things easier, Neptune has integrations with over 25 major ML frameworks and libraries, including:
- Keras and TensorFlow
- PyTorch and PyTorch Lightning, Catalyst, fastai, Ignite, skorch
- scikit-learn, XGBoost, LightGBM
- Optuna, Skopt, Keras Tuner
- and more!
# TensorFlow/Keras
model.fit(..., callbacks=[NeptuneCallback()])
# PyTorch Lightning
trainer = pl.Trainer(..., logger=NeptuneLogger())
# LightGBM
gbm = lgb.train(..., callbacks=[NeptuneCallback()])
# XGBoost
xgb.train(..., callbacks=[NeptuneCallback()])
# Optuna
study.optimize(..., callbacks=[NeptuneCallback()])
Can I update an existing experiment?
Of course.
You can fetch an existing experiment from Neptune and log new metadata or update logs.
run = neptune.init('Me/MyProject', run='SUN-123')  # reconnect to an existing run
run['f1_score'] = 0.92  # log new or updated metadata
Organize experiments
Run experiments everywhere, keep the results in one place
You can execute experiment code anywhere, and everything will be logged to the central metadata store (see the sketch after this list):
- Run on a laptop, in the cloud, in online notebooks (Colab, Deepnote, Kaggle), or anywhere else
- Use with Python, R, or any other language
- Use the hosted version or deploy Neptune on-prem
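As a minimal sketch of what this looks like in practice, assuming your credentials and project are set through the NEPTUNE_API_TOKEN and NEPTUNE_PROJECT environment variables, the same script runs unchanged on a laptop, in Colab, or on a cluster node:
import neptune.new as neptune

# Picks up the project and API token from the environment,
# so the script itself stays identical across machines.
run = neptune.init()

run['environment'] = 'colab'  # or 'laptop', 'cluster', ... (an example field, not required)
run['metrics/accuracy'] = 0.81
run.stop()  # flush everything to the central metadata store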

Filter, sort, and group experiments in a dashboard
Find the experiments you care about quickly (the same filtering can also be done programmatically, as sketched after this list):
- search based on metric and parameter values
- find experiments with user-defined tags
- group experiments by parameter or tag values
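A rough programmatic equivalent, assuming you logged fields such as 'parameters/lr' and 'metrics/accuracy' (the column names below depend entirely on what you logged in your own runs):
import neptune.new as neptune

project = neptune.get_project('your-org/your-project')
df = project.fetch_runs_table().to_pandas()  # one row per run

# Filter and sort with plain pandas; 'metrics/accuracy' and 'parameters/lr'
# are assumed field names from your own logging, not built-in Neptune columns.
best = df[df['metrics/accuracy'] > 0.9].sort_values('metrics/accuracy', ascending=False)
print(best[['sys/id', 'parameters/lr', 'metrics/accuracy']].head())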

Compare experiment runs with zero effort
- Compare metrics and parameters in a table that automatically finds what changed between experiments
- Compare overlaid learning curves
- Drill down to experiment details like interactive performance charts to get a better picture

See (only) the information you want: customize and save dashboard views
The Neptune experiment dashboard is fully customizable.
- You can choose which metrics, parameters, or other information you want to see.
- You can save dashboard views for later.
- You can have multiple dashboard views for different purposes (exploration, comparison, reporting to managers).

Drill down to experiment details whenever you need it
Go from high-level analysis to details in one click.
All the experiment information like code, parameters, evaluation metrics, or model files can be logged and displayed in Neptune.

Access everything programmatically
You can download everything you logged to Neptune either in the UI or programmatically.
You can query:
- experiment leaderboard
- experiment metrics and parameters and source code
- experiment artifacts like model checkpoints
- And more!
project = neptune.get_project('your-org/your-project')
project.fetch_runs_table().to_pandas()  # experiment leaderboard as a DataFrame
run = neptune.init('your-org/your-project', run="SUN-123")
run['parameters/batch_size'].fetch()  # fetch logged values
run['model'].download()  # download logged artifacts
Make your experiments reproducible
Track everything you need for every experiment run
With Neptune, you can automatically record everything you need to reproduce an experiment:
- code and .git info
- environment files
- metrics and parameters
- model binaries
- and more
run = neptune.init('YOUR_ORG/PROJECT',
                   source_files=['**/*.py',           # scripts
                                 'requirements.yaml']) # environment
run['params'] = {'lr': 0.21,
                 'batch_size': 128}  # parameters
run['data_version'] = get_md5('data/train.csv')  # data versions
run['acc'] = 0.92  # metrics
run['model'].upload('model.pkl')  # model binaries
Go back to your experiments even months later
Find and access information from your past experiments whenever you need them.
No worries about losing your work or not knowing which model is running in production.
Everything is stored, backed up, and ready for you.

Re-run every past experiment
Need to re-run a past experiment for work or research?
Query all the pieces you need from Neptune.
Your code, environment files, data versions, and parameters can be attached to every experiment you run. You can access them programmatically (see the sketch below) or find them in the UI.
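As a rough sketch, assuming the run logged parameters under 'params', a data hash under 'data_version', and source files via the source_files argument (stored under the default 'source_code/files' namespace):
import neptune.new as neptune

run = neptune.init('YOUR_ORG/PROJECT', run='SUN-123')  # reconnect to the past run

lr = run['params/lr'].fetch()  # a logged parameter (assumed field name)
data_version = run['data_version'].fetch()  # data version hash logged earlier
run['source_code/files'].download()  # source files captured when the run started
run['model'].download()  # model binary logged with .upload()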

Have a central place for all your team’s experiments
Have every piece of every experiment or notebook of every teammate in one place
You can execute experiment code on your laptop, cloud environment, or a cluster, and all the information will be logged to the central storage hosted by us or deployed on-premises.
Works out-of-the-box with Python, R, Jupyter notebooks, and other languages and environments.

See what your team is doing, any time, anywhere without logging into a remote server
You don’t have to log into your workstation to see what is going on.
See all your team’s activity with experiments, metrics, notebooks, and any other information on your desktop or mobile device.

Save dashboard or experiment views for later and link to them from other tools
Search, filter, group, and compare experiments in the dashboard.
Find out which ideas people are working on and what is bringing results.
Create and save dashboard views based on tasks, people, or results and link to them from other tools that you are using (looking at you, Jira).

Share anything you want with a link
Want to discuss what you are seeing right now in the application?
Just share a link; it's that simple. Experiment details, comparisons, dashboard views, or anything else!

Find and query everything you need programmatically
Everything that your team logs to Neptune is automatically accessible to every team member.
You can access experiments or notebooks including the code, parameters, model binary, or other objects via an API.
project = neptune.get_project('your-org/your-project')
project.fetch_runs_table().to_pandas()  # experiment leaderboard as a DataFrame
run = neptune.init('your-org/your-project', run="SUN-123")
run['parameters/batch_size'].fetch()  # fetch logged values
run['model'].download()  # download logged artifacts
Focus on ML experimentation.
Leave metadata bookkeeping to Neptune.
Get started in 5 minutes
1. Create a free account
2. Install the Neptune client library
pip install neptune-client
3. Add logging to your script
import neptune.new as neptune
run = neptune.init('Me/MyProject')
run['params'] = {'lr':0.1, 'dropout':0.4}
run['test_accuracy'] = 0.84
Try live notebook

Resources
Code examples, videos, projects gallery, and other resources.
Trusted by 20,000+ ML practitioners
and 500+ commercial and research teams

“I’ve been mostly using Neptune just looking at the UI which I have, let’s say, kind of tailored to my needs. So I added some custom columns which will enable me to easily see the interesting parameters and based on this I’m just shifting over the runs and trying to capture what exactly interests me.”


“Gone are the days of writing stuff down on google docs and trying to remember which run was executed with which parameters and for what reasons. Having everything in Neptune allows us to focus on the results and better algorithms.”


“Neptune is aesthetic. Therefore we could simply use the visualization it was generating in our reports.
We trained more than 120.000 models in total, for more than 7000 subproblems identified by various combinations of features. Due to Neptune, we were able to filter experiments for given subproblems and compare them to find the best one. Also, we stored a lot of metadata, visualizations of hyperparameters’ tuning, predictions, pickled models, etc. In short, we were saving everything we needed in Neptune.”


“The way we work is that we do not experiment constantly. After checking out both Neptune and Weights and Biases, Neptune made sense to us due to its pay-per-use or usage-based pricing. Now when we are doing active experiments then we can scale up and when we’re busy integrating all our models for a few months that we scale down again.”