Metadata store for MLOps, built for research and production teams that run a lot of experiments
Feel in control of your models and experiments by having all metadata organized in a single place.
Focus on ML, leave metadata bookkeeping to Neptune.
Get started in 5 minutes.
What is a metadata store for MLOps?

An ML metadata store is an essential part of the MLOps stack that handles model-building metadata management.
It makes it easy to log, store, display, organize, compare, and query all metadata generated during the ML model lifecycle.
What ML metadata are we talking about?
Experiment and model training metadata
You can log anything that happens during an ML run, including (see the short sketch after this list):
- Metrics
- Hyperparameters
- Learning curves
- Training code and configuration files
- Predictions (images, tables, etc.)
- Diagnostic charts (confusion matrix, ROC curve, etc.)
- Console logs
- Hardware logs
- And more
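For illustration, here is a minimal sketch of what logging this kind of run metadata can look like with the Neptune Python client (the project name, metric values, and config file path are placeholders):
import neptune.new as neptune

run = neptune.init('Me/MyProject')  # placeholder project name; credentials can come from the NEPTUNE_API_TOKEN env variable

run['params'] = {'lr': 0.1, 'dropout': 0.4}   # hyperparameters
for acc in [0.71, 0.84, 0.92]:
    run['train/accuracy'].log(acc)            # metrics logged as a series become learning curves
run['source/config'].upload('config.yaml')    # a training configuration file (placeholder path)

run.stop()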

Artifact metadata
For datasets, predictions, or models you can log (see the sketch after this list):
- Paths to the dataset or model (S3 bucket, filesystem)
- Dataset hash
- Dataset/prediction preview (head of the table, snapshot of the image folder)
- Description
- Feature column names (for tabular data)
- Who created or modified it
- When it was last modified
- Size of the dataset
- And more
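For example, artifact metadata like the above can be logged as plain fields and files; the data/ namespace, bucket path, and local file paths below are illustrative rather than a required schema:
import hashlib
import neptune.new as neptune

run = neptune.init('Me/MyProject')  # placeholder project name

run['data/path'] = 's3://my-bucket/train.csv'      # where the dataset lives (placeholder URI)
run['data/version'] = hashlib.md5(open('data/train.csv', 'rb').read()).hexdigest()  # dataset hash
run['data/description'] = 'Training split, cleaned'
run['data/preview'].upload('data/train_head.csv')  # head of the table saved as a small CSV

run.stop()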

Model metadata
For trained models (production or not) you can log (see the sketch after this list):
- Model binary or the location of your model asset
- Dataset versions
- Links to recorded model training runs and experiments
- Who trained the model
- Model descriptions and notes
- Links to observability dashboards (Grafana)
- And more
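Likewise, a minimal sketch of logging model metadata; the model/ field names, user name, and dashboard URL below are placeholders:
import neptune.new as neptune

run = neptune.init('Me/MyProject')  # placeholder project name

run['model/binary'].upload('model.pkl')         # model binary (or log just a path/URI instead)
run['model/dataset_version'] = 'train.csv: v2'  # dataset version used for training (placeholder)
run['model/trained_by'] = 'jakub'               # who trained the model (placeholder)
run['model/notes'] = 'Baseline model for the churn project'
run['model/dashboard'] = 'https://grafana.example.com/d/my-model'  # link to an observability dashboard

run.stop()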

What can you use an ML metadata store for?
Experiment tracking
Track, organize, and compare everything you care about in your ML experiments.
- Monitor experiments as they are running
- Keep track of metrics, parameters, diagnostic charts, and more
- Search, group, and compare experiments with no effort
- Drill down to every piece of experiment information you need
- Share results with your team and access all experiment details programmatically

Model registry
Have your models versioned, traceable and easily accessible.
- Know exactly how every model was built
- Record dataset, code, parameters, model binaries, and more for every training run
- Get back to every piece of model-building metadata, even months later
- Share models with your team and access them programmatically

Neptune Metadata Store for MLOps = Client + Database + Dashboard

Client
A Python library that lets you log and query ML metadata directly from your code.
import neptune.new as neptune
from neptune.new.types import File

run = neptune.init('Me/MyProject')  # placeholder project name

# log
run['accuracy'] = 0.9
run['predictions'].upload(File.as_image(image_pred))  # image_pred: e.g. a matplotlib figure or PIL image
run['model_weights'].upload('model_weights.h5')

# query
run['model_weights'].download()
Metadata Database
It’s where experiment, model, and dataset metadata are stored so they can be logged and queried efficiently.

Dashboard
A visual interface to the metadata database, where you can see the metadata of your experiments, models, and datasets.

Get started in 5 minutes
1. Create a free account
2. Install the Neptune client library
pip install neptune-client
3. Add logging to your script
import neptune.new as neptune
run = neptune.init('Me/MyProject')
run['params'] = {'lr':0.1, 'dropout':0.4}
run['test_accuracy'] = 0.84
Try live notebook

Dive deeper into Neptune features
Log and display any ML metadata you care about
How can you start logging?
To log things to a Neptune experiment, you first need an experiment 🙂.
- Connect Neptune to your script (pass credentials)
- Create an experiment (start logging context)
- Log whatever you want
run = neptune.init('YOUR_ORG/PROJECT', 'YOUR_API_TOKEN')  # credentials: connect and create an experiment (run)
run['test_accuracy'] = 0.84  # log whatever you want
What can you log and display?
Neptune allows you to log most metadata related to ML models, experiments, and datasets, including:
- Training code and configuration files
- Parameters and metrics
- Hardware consumption and console logs
- Performance charts and interactive visualizations
- Model weights
- and more!

Log more with less code with our integrations!
Don’t implement the loggers yourself. We have 25+ integrations with all major ML frameworks to make things even easier.
- PyTorch and PyTorch Lightning
- TensorFlow / Keras and TensorBoard
- Scikit-Learn, LightGBM, and XGBoost
- Optuna, Scikit-Optimize, and Keras Tuner
- Bokeh, Altair, Plotly, and Matplotlib
- and more!
# TensorFlow / Keras
model.fit(..., callbacks=[NeptuneCallback()])
# PyTorch Lightning
trainer = pl.Trainer(..., logger=NeptuneLogger())
# LightGBM
gbm = lgb.train(..., callbacks=[NeptuneCallback()])
# XGBoost
xgb.train(..., callbacks=[NeptuneCallback()])
# Optuna
study.optimize(..., callbacks=[NeptuneCallback()])
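For a slightly fuller picture, here is a sketch of the TensorFlow / Keras integration end to end. It assumes the neptune-tensorflow-keras integration package is installed; import paths and argument names can differ between versions, and the project name, tiny model, and random data are placeholders:
import tensorflow as tf
import neptune.new as neptune
from neptune.new.integrations.tensorflow_keras import NeptuneCallback

run = neptune.init('YOUR_ORG/PROJECT')  # placeholder project name

# A tiny stand-in model and random data, just to show the callback wiring
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
x, y = tf.random.normal((64, 4)), tf.random.normal((64, 1))

model.fit(x, y, epochs=3, callbacks=[NeptuneCallback(run=run)])  # losses and metrics flow to Neptune

run.stop()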
Compare experiments and models with no extra effort
Experiment auto-diff in a table
Compare metrics and parameters in a table that automatically finds what changed between experiments.
See the difference between metrics, parameters, text, and more with no extra effort!

Compare learning curves
Overlay learning curves and drill down to experiment details like interactive performance charts to get a better picture.
Get all that automatically just because you logged metadata to Neptune.

Organize experiments and model training runs in a single place
Run experiments everywhere, keep the results in one place
You can execute experiment code on your laptop, cloud environment, or a cluster. All ML metadata you care about will be logged to the central storage hosted by us or deployed on-premises.
It works out-of-the-box with Python, R, Jupyter notebooks, and other languages and environments.

Structure your teams’ ML work in workspaces and projects
Clean up your ML teamwork by grouping your experiments into projects and workspaces.
If you are working on a model that powers a particular feature, just create a project for it. All the experiments in that project will be easy to compare and search through.
If your team is working for multiple clients or departments, you can create a workspace for each client or department and keep separate projects within each workspace.

Filter, sort, and compare experiments in a dashboard
Search through your experiments quickly with a dashboard built for ML experiments and models.
- Filter experiments by metric and parameter values
- Display the min/max/last value of a metric or loss series, like validation loss
- Compare everything with no extra effort

See (only) the information you want: customize and save dashboard views
Choose which metrics, parameters, or other information you want to see and customize your dashboard.
Create multiple dashboard views and save them for later.

See your ML experiments live as they are running
See learning curves live
Watch the learning curves and training metrics of your models as they are running, and compare them to past runs.
Detect unpromising training runs and react quickly.

See hardware consumption whenever you want
You can monitor the hardware for your experiment runs automatically. See how much GPU/CPU and memory your model training runs are using.
Spot performance problems quickly, even for multi-GPU training.

Look at model predictions, console logs, and anything else during training
You can log model predictions after every epoch, or stream console logs from a remote machine, and keep full control over your training process.
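For example, a minimal sketch of logging a prediction chart after every epoch; the training loop, figure, and field name below are stand-ins:
import matplotlib.pyplot as plt
import neptune.new as neptune
from neptune.new.types import File

run = neptune.init('Me/MyProject')  # placeholder project name

for epoch in range(3):
    # ... training step would go here ...
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])                            # stand-in for a real prediction plot
    run['train/predictions'].log(File.as_image(fig))   # one image per epoch, visible live in the UI
    plt.close(fig)

run.stop()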

Make experiments and model training runs reproducible and traceable
Every time you train your model, you can automatically record:
- Code and environment versions
- Data versions
- Model parameters
- Model weights and other binaries
- Evaluation metrics
- Model explanation charts
- Who trained the model
- and anything else you need
You will not forget to commit your changes because it happens automatically.
import neptune.new as neptune
from neptune.new.types import File

run = neptune.init('YOUR_ORG/PROJECT',
                   source_files=['**/*.py',
                                 'requirements.yaml'])
run['params'] = {'lr': 0.21,
                 'data_version': get_md5('data/train.csv')}
run['acc'] = 0.92
run['explanations'].upload(File.as_image(explanation_fig))
run['model'].upload('model.pkl')
Share any result or visualizations with your team by sending a link
Want to discuss what you see right now in the application?
Just share a link. It’s that simple.
Neptune has a system of persistent links that makes sharing experiment details, comparisons, dashboard views, or anything else straightforward.

Query experiment and model training metadata programmatically
You can download everything you logged to Neptune programmatically (or from the UI).
- Download the experiment table with all metrics and parameters
- Fetch the experiment object itself and access all the metadata logged to it
- Update experiment objects with new information, even after training!
project = neptune.get_project('your-org/your-project')
project.fetch_runs_table().to_pandas()  # experiment table as a pandas DataFrame

run = neptune.init('your-org/your-project', run="SUN-123")  # resume an existing run by its ID
run['parameters/batch_size'].fetch()  # read back a logged value
run['model'].download()  # download a logged file
Neptune integrates with your favourite frameworks and tools
With Neptune, you can get more out of the tools you use every day. Neptune comes with 25+ integrations with libraries used in machine learning, deep learning, and reinforcement learning.
See what our users are saying

“What we like about Neptune is that it easily hooks into multiple frameworks. Keeping track of machine learning experiments systematically over time and visualizing the output adds a lot of value for us.”
“Neptune allows us to keep all of our experiments organized in a single space. Being able to see my team’s work results any time I need makes it effortless to track progress and enables easier coordination.”
“Neptune is making it easy to share results with my teammates. I’m sending them a link and telling them what to look at, or I’m building a View on the experiments dashboard. I don’t need to generate it by myself, and everyone in my team has access to it.”
“Without the information I have in the Monitoring section I wouldn’t know that my experiments are running 10 times slower than they could.
All of my experiments are being trained on separate machines which I can access only via ssh. If I had to download and check all of this separately I would be rather discouraged.
When I want to share my results I’m simply sending a link.”
“(…) thanks for the great tool, has been really useful for keeping track of the experiments for my Master’s thesis. Way better than the other tools I’ve tried (comet / wandb).
I guess the main reason I prefer neptune is the interface, it is the cleanest and most intuitive in my opinion, the table in the center view just makes a great deal of sense. I like that it’s possible to set up and save the different view configurations as well. Also, the comparison is not as clunky as for instance with wandb. Another plus is the integration with ignite, as that’s what I’m using as the high-level framework for model training.”
“I’m working with deep learning (music information processing), previously I was using Tensorboard to track losses and metrics in TensorFlow, but now I switched to PyTorch so I was looking for alternatives and I found Neptune a bit easier to use, I like the fact that I don’t need to (re)start my own server all the time and also the logging of GPU memory etc. is nice. So far I didn’t have the need to share the results with anyone, but I may in the future, so that will be nice as well.”
“I came to Neptune as a solo experimenter and couldn’t believe the difference that it made to my workflow – better insights into my models, and zero details of training runs ever lost, and the sheer satisfaction of seeing the history of each project laid out in front of me. Since I started my own AI-focused business, it has been even more useful, because it makes helping each other so effortless (my team routinely slack URLs to dashboards for interesting runs). The dashboards are a joy – so easy and fast to configure.”
“Previously used tensorboard and azureml but Neptune is hugely better. In particular, getting started is really easy; documentation is excellent, and the layout of charts and parameters is much clearer.”
“For me the most important thing about Neptune is its flexibility. Even if I’m training with Keras or Tensorflow on my local laptop, and my colleagues are using fast.ai on a virtual machine, we can share our results in a common environment.”
“A lightweight solution to a complicated problem of experiment tracking.
What do you like best?
– Easy integration with any pipeline / flow / codebase / framework
– Easy access to logged data over an api (comes also with a simple python wrapper )
– Fast and reliable
– Versioning jupyter notebooks is a great and unique feature
What do you dislike?
– Visualization of the logged data could be improved, support for more advanced plotting would be nice, although you can always work around that by sending pictures of charts.
Recommendations to others considering the product:
If you look for a simple, flexible and powerful tool or you are tired of using excel sheets or tensorboard to track your results, neptune.ai is a good bet.
What problems are you solving with the product? What benefits have you realized?
– machine learning reproducibility
– machine learning system monitoring
– sharing experiment results
– monitoring long-running deep learning jobs.”
“Neptune provides an accessible and intuitive way to visualize, analyze and share metrics of our projects.
We can not only discuss it with other team members, but also with management, in a way that can be easily interpreted by someone not familiar with the implementation details.
Tracking and comparing different approaches has notably boosted our productivity, allowing us to focus more on the experiments, develop new, good practices within our team and make better data-driven decisions.
We love the fact that the integration is effortless. No matter what framework we use – it just works in the matter of minutes, allowing us to automate and unify our processes.”
“Exactly what I needed.
What do you like best?
The real-time charts, the simple API, the responsive support
What do you dislike?
It would be great to have more advanced API functionality.
Recommendations to others considering the product:
If you need to monitor and manage your machine learning or any other computational experiments, Neptune.ai is a great choice. It has many features that can make your life easier and your research more organized.
What problems are you solving with the product? What benefits have you realized?
I’m mostly doing academic research that involves the training of machine learning models, and also other long-running experiments which I need to track in real time. Without Neptune.ai, I would have wasted a lot of time building a client for experiment management and monitoring. It also serves as an archive, which I also find very important for my research.”
“Well this is great for multiple reasons.
For example, you continuously log some value. And then you realize that you wanted to see the min or average or whatever. Without this option, you would have to download the data and process everything on the local PC. Now you can do such stuff directly in neptune. This is great.”
“Useful tool for tracking many experiments and collaboration on them.
What do you like best?
One place to log all my experiments, very helpful when you have to find some results from a few months back.
It makes collaboration easier as well – just share the link to an experiment with a colleague and you can analyze the results together.
What do you dislike?
The UI for creating graphs with multiple lines could be more flexible.
What problems are you solving with the product? What benefits have you realized?
– tracking data about many experiments
– easily sharing experiments within lab
– going back to old results, collecting data for report/publication
– reproducibility – code and hyperparameters are stored in one place.”
“What do you like best?
At the moment everything is pretty useful.
Recommendations to others considering the product:
Everything is available to use. If you need any new features, you can simply ping them. They will consider your suggestion.
What problems are you solving with the product? What benefits have you realized?
Tracking my ML experiments. Hyperparameter tuning. Sharing the results with my colleagues and tons of other benefits.”
“The last few hours have been my first w/ Neptune and I’m really appreciative of how much time it’s saved me not having to fiddle w/ matplotlib in addition to everything else.”
“I tested multiple loggers with pytorch-lightning integrations and found neptune to be the best fit for my needs. Friendly UI, ease of use, and great documentation.”
“I just had a look at neptune logger after a year and to be honest, I am very impressed with the improvements in UI! Earlier, it was a bit hard to compare experiments with charts. I am excited to try this!”
“I had been thinking about systems to track model metadata and it occurred to me I should look for existing solutions before building anything myself.
Neptune is definitely satisfying the need to standardize and simplify tracking of experimentation and associated metadata.
My favorite feature so far is probably the live tracking of performance metrics, which is helpful to understand and troubleshoot model learning. I also find the web interface to be lightweight, flexible, and intuitive.”
“While logging experiments is great, what sets Neptune apart for us at the lab is the ease of sharing those logs. The ability to just send a Neptune link in slack and letting my coworkers see the results for themselves is awesome. Previously, we used Tensorboard + locally saved CSVs and would have to send screenshots and CSV files back and forth which would easily get lost. So I’d say Neptune’s ability to facilitate collaboration is the biggest plus.”
“Indeed it was a game-changer for me, as you know AI training workloads are lengthy in nature, sometimes also prone to hanging in colab environment, and just to be able to launch a set of tests trying different hyperparameters with the assurance that the experiment will be correctly recorded in terms of results and hyper-parameters was big for me.”
“Neptune was easy to set up and integrate into my experimental flow. The tracking and logging options are exactly what I needed and the documentation was up to date and well written.”
“I have been pleasantly surprised with how easy it was to set up Neptune in my PyTorch Lightning projects!”
“I used to keep track of my models with folders on my machine and use naming conventions to save the parameters and model architecture. Whenever I wanted to track something new about the model, I would have to update the naming structure. It was painful. There was a lot of manual work involved.
Now everything happens automatically. I can compare models in the online interface that looks great. It saves me a lot of time, and I can focus on my research instead of keeping track of everything manually.”
“I’m working on a server that is not graphical, it’s always a hassle to connect it to my local laptop to show the results in TensorBoard. I just want to see it online and Neptune lets me do that easily.
When I’m doing hyperparameter optimization and I am running a lot of experiments TensorBoard gets very cluttered. It’s hard to organize and compare things. Neptune UI is very clear, it’s intuitive, and scales with many runs. I can group my experiments by a parameter like dropout to see the impact it has on results. For us in research, it’s not just about the best model run, we need to understand how models perform on a deeper level.
I was looking for alternatives to PyTorch Lightning native logger. The main reason why I chose Neptune over TensorBoard was that you could just change the native logger to NeptuneLogger, pass your user token, and everything would work out of the box. I didn’t have to change the code other than that one line. With TensorBoard I have to change the code to log things. I can customize it to log other things like text or images easily. Neptune just has a way better user experience.”
“The problem with training models on remote clusters is that every time you want to see what is going on, you need to get your FTP client up, download the logs to a machine with a graphical interface, and plot it. I tried using TensorBoard but it was painful to set up in my situation.
With Neptune, seeing training progress was as simple as hitting refresh. The feedback loop between changing the code and seeing whether anything changed is just so much shorter. Much more fun and I get to focus on what I want to do.
I really wish that it existed 10 years ago when I was doing my PhD.”
“You can keep track of your work in spreadsheets, but it’s super error-prone.
And every experiment that I don’t use and don’t look at afterward is wasted compute: it’s bad for the environment, and it’s bad for me because I wasted my time.
So I would say the main argument for using Neptune is that you can be sure that nothing gets lost, everything is transparent, and I can always go back in history and compare.”
“Excellent support service: actually, you [Neptune] have the best customer support I have ever talked to in my life.”
“Neptune.ai provided me with something I wasn’t looking for, but now that I’ve experienced it, I can’t go back.”