Metadata store for MLOps, built for research and production teams that run a lot of experiments

Feel in control of your models and experiments by having all metadata organized in a single place.
Focus on ML, leave metadata bookkeeping to Neptune.
Get started in 5 minutes.

What is a metadata store for MLOps?

An ML metadata store is an essential part of the MLOps stack that handles model-building metadata management.
It makes it easy to log, store, display, organize, compare, and query all metadata generated during the ML model lifecycle.

What ML metadata are we talking about?

Experiment and model training metadata

You can log anything that happens during an ML run, including the items below (a short code sketch follows the list):

  • Metrics
  • Hyperparameters
  • Learning curves
  • Training code and configuration files
  • Predictions (images, tables, etc.)
  • Diagnostic charts (confusion matrix, ROC curve, etc.)
  • Console logs
  • Hardware logs
  • And more
See what else you can log
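For illustration, here is a minimal sketch using the Neptune Python client; the project name, metric values, and file names are placeholders, not part of any required schema:

import neptune

neptune.init('my-team/sandbox')  # placeholder project; API token read from the environment
exp = neptune.create_experiment(name='baseline', params={'lr': 0.01, 'batch_size': 64})

# Metric series logged per epoch become learning curves in the UI
for epoch in range(10):
    exp.log_metric('train_loss', 1.0 / (epoch + 1))      # dummy values for the sketch
    exp.log_metric('val_accuracy', 0.5 + 0.04 * epoch)

# Configuration files, diagnostic images, and console-style text
exp.log_artifact('config.yaml')             # assumes the file exists next to the script
exp.log_image('predictions', 'preds.png')   # file path or matplotlib figure
exp.log_text('notes', 'baseline run with default augmentation')
exp.stop()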

Artifact metadata

For datasets, predictions, or models you can log the following (a short sketch follows the list):

  • Paths to the dataset or model (S3 bucket, filesystem)
  • Dataset hash
  • Dataset/prediction preview (head of the table, snapshot of the image folder)
  • Description
  • Feature column names (for tabular data)
  • Who created or modified it
  • When it was last modified
  • Size of the dataset
  • And more
See what else you can log
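As a sketch, artifact metadata like this could be recorded as experiment properties and previews; the S3 path, file names, and property keys below are illustrative, not a fixed schema:

import hashlib
import neptune

neptune.init('my-team/sandbox')  # placeholder project
exp = neptune.create_experiment(name='data-snapshot')

# Dataset location and version hash (illustrative keys and paths)
with open('data/train.csv', 'rb') as f:
    data_hash = hashlib.md5(f.read()).hexdigest()

exp.set_property('train_data_path', 's3://my-bucket/data/train.csv')
exp.set_property('train_data_md5', data_hash)
exp.set_property('feature_columns', 'age,income,tenure')

# Preview and description of the dataset
exp.log_image('train_head', 'train_head.png')   # e.g. a rendered snapshot of the table head
exp.log_text('dataset_description', 'Q3 customer snapshot, 120k rows')
exp.stop()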

Model metadata

For trained models (production or not) you can log the following (a short sketch follows the list):

  • Model binary or the location of your model asset
  • Dataset versions 
  • Links to recorded model training runs and experiments 
  • Who trained the model
  • Model descriptions and notes
  • Links to observability dashboards (Grafana)
  • And more
See what else you can log
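Again as a rough sketch; the model file, dataset version tag, and dashboard URL below are placeholders:

import neptune

neptune.init('my-team/sandbox')  # placeholder project
exp = neptune.create_experiment(name='churn-model-v2', params={'lr': 0.05, 'max_depth': 6})

# Model binary plus pointers to everything it was built from
exp.log_artifact('model.pkl')                                         # or store it externally:
exp.set_property('model_path', 's3://my-bucket/models/churn-v2.pkl')  # placeholder location
exp.set_property('train_data_version', 'md5-of-train.csv')            # placeholder version tag
exp.set_property('trained_by', 'jane.doe')
exp.set_property('grafana_dashboard', 'https://grafana.example.com/d/churn')  # placeholder link
exp.log_text('model_notes', 'Gradient boosting baseline, tuned on June data')
exp.stop()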

What can you use an ML metadata store for?

Experiment tracking

Track, organize, and compare everything you care about in your ML experiments.

  • Monitor experiments as they are running
  • Keep track of metrics, parameters, diagnostic charts, and more 
  • Search, group, and compare experiments with no effort
  • Drill down to any experiment detail you need
  • Share results with your team and access all experiment details programmatically
Learn more

Model registry

Have your models versioned, traceable, and easily accessible.

  • Know exactly how every model was built
  • Record dataset, code, parameters, model binaries, and more for every training run
  • Get back to all model-building metadata even months later
  • Share models with your team and access them programmatically (see the sketch below)
Learn more
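For the programmatic side, a minimal sketch using the query API; the project name and experiment ID are placeholders:

import neptune

# Open the project and look up the run that produced the model
project = neptune.init('my-team/sandbox')        # placeholder project
exp = project.get_experiments(id='SAN-123')[0]   # placeholder experiment ID

# Everything needed to audit or redeploy the model
params = exp.get_parameters()
properties = exp.get_properties()                # e.g. dataset version, model location
exp.download_artifact('model.pkl')               # fetch the stored binary
exp.download_sources()                           # fetch the code snapshot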

Neptune Metadata Store for MLOps = Client + Database + Dashboard


Client library

It makes it easy to log and query the ML metadata database.

Check the docs
# log
experiment.log_metric('accuracy', 0.9)
experiment.log_image('predictions', image_pred)
experiment.log_artifact('model_weights.h5')
# query
project.get_experiments(id='EXP-123')[0].download_artifacts()

Metadata Database

It’s the place where experiment, model, and dataset metadata are stored so they can be logged and queried efficiently.

See example dashboard

Dashboard

A visual interface to the metadata database. A place where you can see the metadata of your experiments, models, and datasets.

See example project

Get started in 5 minutes

1. Create a free account
Sign up
2. Install Neptune client library
pip install neptune-client
3. Add logging to your script
import neptune

neptune.init('Me/MyProject')
neptune.create_experiment(params={'lr':0.1, 'dropout':0.4})
# training and evaluation logic
neptune.log_metric('test_accuracy', 0.84)
Try live notebook

Dive deeper into Neptune features

Log and display any ML metadata you care about

How can you start logging?

To log things to a Neptune experiment, you first need an experiment 🙂.

  • Connect Neptune to your script (pass credentials)
  • Create an experiment (start logging context)
  • Log whatever you want
Try live notebook
neptune.init('YOUR_ORG/PROJECT', 'YOUR_API_TOKEN') # Credentials
neptune.create_experiment('great-idea') # Start logging context
neptune.log_metric('accuracy', 0.92) # Log things

What can you log and display?

Neptune allows you to log most metadata related to ML models, experiments, and datasets, including:

  • Training code and configuration files
  • Parameters and metrics
  • Hardware consumption and console logs
  • Performance charts and interactive visualizations
  • Model weights 
  • and more!
Check the docs

Log more with less code with our integrations!

Don’t implement the loggers yourself. We have 25+ integrations with all major ML frameworks to make things even easier.

  • PyTorch and PyTorch Lightning
  • TensorFlow / Keras and TensorBoard
  • Scikit-Learn, LightGBM, and XGBoost
  • Optuna, Scikit-Optimize, and Keras Tuner
  • Bokeh, Altair, Plotly, and Matplotlib
  • and more!
See integrations
# TensorFlow / Keras
model.fit(..., callbacks=[NeptuneMonitor()])

# PyTorch Lightning
trainer = pl.Trainer(..., logger=NeptuneLogger())

# LightGBM
gbm = lgb.train(..., callbacks=[neptune_monitor()])

# XGBoost
xgb.train(..., callbacks=[neptune_callback()])

# Optuna
study.optimize(..., callbacks=[NeptuneCallback()])

Compare experiments and models with no extra effort

Experiment auto-diff in a table

Compare metrics and parameters in a table that automatically finds what changed between experiments.

See the difference between metrics, parameters, text, and more with no extra effort!

Check the docs

Compare learning curves

Overlay learning curves and drill down to experiment details like interactive performance charts to get a better picture.

Get all that automatically just because you logged metadata to Neptune. 

Check the docs

Organize experiments and model training runs in a single place

Run experiments everywhere, keep the results in one place

You can execute experiment code on your laptop, cloud environment, or a cluster. All ML metadata you care about will be logged to the central storage hosted by us or deployed on-premises. 

It works out-of-the-box with Python, R, Jupyter notebooks, and other languages and environments.

See example dashboard

Structure your teams’ ML work in workspaces and projects

Clean up your ML teamwork by grouping your experiments into projects and workspaces.

If you are working on a model that powers a particular feature, just create a project for it. All the experiments in that project will be easy to compare and search through.

If your team works for multiple clients or departments, you can create a workspace for each client and keep separate projects inside each workspace.

See example project

Filter, sort, and compare experiments in a dashboard

Search through your experiments quickly with a dashboard built for ML experiments and models.  

  • Filter experiments by metric and parameter values
  • Display min/max/last value of a metric or loss series like validation loss
  • Compare everything with no extra effort
See example dashboard

See (only) the information you want: customize and save dashboard views

Choose which metrics, parameters, or other information you want to see and customize your dashboard. 

Create multiple dashboard views and save them for later.

See example project

See your ML experiments live as they are running

See learning curves live

Take a look at the learning curves and training metrics of your models and compare them to past runs as they are running. 

Detect unpromising training runs and react quickly. 

Check the docs

See hardware consumption whenever you want

You can monitor the hardware for your experiment runs automatically. See how much GPU/CPU and memory your model training runs are using.  

Spot performance problems quickly, even for multi-GPU training.

Check the docs

Look at model predictions, console logs, and anything else during training

You can log model predictions after every epoch or console logs from a remote machine and have full control over your training process. 

Check the docs
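A sketch of per-epoch prediction logging; the matplotlib figure below stands in for whatever prediction plot you already produce:

import matplotlib.pyplot as plt
import neptune

neptune.init('my-team/sandbox')  # placeholder project
exp = neptune.create_experiment(name='monitor-predictions')

for epoch in range(5):
    # ... training step would go here ...

    # Placeholder figure standing in for a real prediction plot
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, epoch])
    ax.set_title('predictions after epoch {}'.format(epoch))

    exp.log_image('predictions', fig)               # one image per epoch in the same series
    exp.log_text('console', 'epoch {} finished'.format(epoch))
    plt.close(fig)

exp.stop()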

Make experiments and model training runs reproducible and traceable

Every time you train your model, you can automatically record:

  • Code and environment versions
  • Data versions
  • Model parameters
  • Model weights and other binaries
  • Evaluation metrics
  • Model explanation charts
  • Who trained the model
  • and anything else you need

You will not forget to commit your changes because it happens automatically. 

See example project
neptune.create_experiment(
    params={'lr': 0.21,
            'data_version': get_md5('data/train.csv')},  # get_md5: your own hashing helper
    upload_source_files=['**/*.py',
                         'requirements.yaml'])
neptune.log_metric('acc', 0.92)
neptune.log_image('explanations', explanation_fig)
neptune.log_artifact('model.pkl')

Share any result or visualizations with your team by sending a link

Want to discuss what you see right now in the application? 

Just share a link. It’s that simple. 

Neptune has a system of persistent links that makes sharing experiment details, comparisons, dashboard views, or anything else straightforward.

See example project

Query experiment and model training metadata programmatically

You can download everything you logged to Neptune programmatically (or from the UI). 

  • The experiment table with all metrics and parameters
  • The experiment object itself, with access to all the metadata logged to it
  • You can even update experiment objects with new information after training!
Check the docs
project = neptune.init('Project')  # open an existing project (use your 'workspace/project' name)

project.get_leaderboard()  # table of all runs with their metrics and parameters

exp = project.get_experiments(id='Proj-123')[0]
exp.get_parameters()
exp.download_artifact('model.pkl')
exp.download_sources()

Neptune integrates with your favourite frameworks and tools

With Neptune, you can get more out of the tools you use every day. Neptune comes with 25+ integrations with libraries used in machine learning, deep learning, and reinforcement learning.

See all integrations

See what our users are saying

What we like about Neptune is that it easily hooks into multiple frameworks. Keeping track of machine learning experiments systematically over time and visualizing the output adds a lot of value for us.

Ronert Obst
Head of Data Science @New Yorker

Neptune allows us to keep all of our experiments organized in a single space. Being able to see my team’s work results any time I need makes it effortless to track progress and enables easier coordination.

Michael Ulin
VP, Machine Learning @Zesty.ai

Neptune is making it easy to share results with my teammates. I’m sending them a link and telling them what to look at, or I’m building a view on the experiments dashboard. I don’t need to generate it myself, and everyone on my team has access to it.

Maciej Bartczak
Research Lead @Banacha Street

“I chose Neptune over WandB because it is more lightweight and I’m more comfortable working with it. The possibilities for “custom” logging are practical and the tool is really stable.

I observed the following with W&B:

– I had some errors during the experiments: symlink creation/user rights errors and model upload cycling infinitely (which made WandB not detect the end of the run).
– Slower/heavier (took more time at each epoch’s end).”

Jonathan Donzallaz
Data Science Researcher

Without the information I have in the Monitoring section, I wouldn’t know that my experiments are running 10 times slower than they could.
All of my experiments are trained on separate machines which I can access only via ssh. If I had to download and check all of this separately, I would be rather discouraged.
When I want to share my results, I simply send a link.

Michał Kardas
Machine Learning Researcher @TensorCell

Hi there, thanks for the great tool, it has been really useful for keeping track of the experiments for my Master’s thesis. Way better than the other tools I’ve tried (comet / wandb).

I guess the main reason I prefer neptune is the interface, it is the cleanest and most intuitive in my opinion, the table in the center view just makes a great deal of sense. I like that it’s possible to set up and save the different view configurations as well. Also, the comparison is not as clunky as for instance with wandb. Another plus is the integration with ignite, as that’s what I’m using as the high-level framework for model training.

Klaus-Michael Lux
Data Science and AI student, Kranenburg, Germany

I’m working with deep learning (music information processing), previously I was using Tensorboard to track losses and metrics in TensorFlow, but now I switched to PyTorch so I was looking for alternatives and I found Neptune a bit easier to use, I like the fact that I don’t need to (re)start my own server all the time and also the logging of GPU memory etc. is nice. So far I didn’t have the need to share the results with anyone, but I may in the future, so that will be nice as well.

Ondřej Cífka
PhD student in Music Information Processing at Télécom Paris

Within the first few tens of runs, I realized how complete the tracking was – not just one or two numbers, but also the exact state of the code, the best-quality model snapshot stored to the cloud, the ability to quickly add notes on a particular experiment. My old methods were such a mess by comparison.

Edward Dixon
Enterprise Data Scientist @Intel

“Previously used tensorboard and azureml but Neptune is hugely better. In particular, getting started is really easy; documentation is excellent, and the layout of charts and parameters is much clearer.”

Simon Mackenzie
AI Engineer and Data Scientist
Such a fast setup! Love it:)
Kobi Felton
PhD student in Music Information Processing at Télécom Paris

For me the most important thing about Neptune is its flexibility. Even if I’m training with Keras or Tensorflow on my local laptop, and my colleagues are using fast.ai on a virtual machine, we can share our results in a common environment.

Víctor Peinado
Senior NLP/ML Engineer

A lightweight solution to a complicated problem of experiment tracking.

What do you like best?

– Easy integration with any pipeline / flow / codebase / framework
– Easy access to logged data over an api (comes also with a simple python wrapper )
– Fast and reliable
– Versioning jupyter notebooks is a great and unique feature

What do you dislike?

– Visualization of the logged data could be improved, support for more advanced plotting would be nice, although you can always work around that by sending pictures of charts.

Recommendations to others considering the product:

If you are looking for a simple, flexible, and powerful tool, or you are tired of using Excel sheets or TensorBoard to track your results, neptune.ai is a good bet.

What problems are you solving with the product? What benefits have you realized?

– machine learning reproducibility
– machine learning system monitoring
– sharing experiment results
– monitoring long-running deep learning jobs.

Jakub Cieślik
Senior Data Scientist interested in Computer Vision

Neptune provides an accessible and intuitive way to visualize, analyze and share metrics of our projects.
We can not only discuss it with other team members, but also with management, in a way that can be easily interpreted by someone not familiar with the implementation details.
Tracking and comparing different approaches has notably boosted our productivity, allowing us to focus more on the experiments, develop new, good practices within our team and make better data-driven decisions.
We love the fact that the integration is effortless. No matter what framework we use – it just works in a matter of minutes, allowing us to automate and unify our processes.

Tomasz Grygiel
Data Scientist @idenTT

Exactly what I needed.

What do you like best?

The real-time charts, the simple API, the responsive support

What do you dislike?

It would be great to have more advanced API functionality.

Recommendations to others considering the product:

If you need to monitor and manage your machine learning or any other computational experiments, Neptune.ai is a great choice. It has many features that can make your life easier and your research more organized.

What problems are you solving with the product? What benefits have you realized?

I’m mostly doing academic research that involves training machine learning models, as well as other long-running experiments which I need to track in real time. Without Neptune.ai, I would have wasted a lot of time building a client for experiment management and monitoring. It also serves as an archive, which I also find very important for my research.

Boaz Shvartzman
Computer vision researcher and developer @TheWolf

Well this is great for multiple reasons.

For example, you continuously log some value. And then you realize that you wanted to see the min or average or whatever. Without this option, you would have to download the data and process everything on the local PC. Now you can do such stuff directly in Neptune. This is great.

Adrian Kraft
AI Robotics

Useful tool for tracking many experiments and collaboration on them.

What do you like best?

One place to log all my experiments, very helpful when you have to find some results from a few months back.

It makes collaboration easier as well – just share the link to an experiment with a colleague and you can analyze the results together.

What do you dislike?

The UI for creating graphs with multiple lines could be more flexible.

What problems are you solving with the product? What benefits have you realized?

– tracking data about many experiments
– easily sharing experiments within lab
– going back to old results, collecting data for report/publication
– reproducibility – code and hyperparameters are stored in one place.

Błażej Osiński
Researcher in reinforcement learning @deepsense.ai
This thing is so much better than Tensorboard, love you guys for creating it!
Dániel Lévai
Junior Researcher at Rényi Alfréd Institute of Mathematics in Budapest, Hungary
Fast, easy to use, supportive, update features regularly.

What do you like best?
They are responsive to feedback and suggestions, and regularly add new features for a better experience.

What do you dislike?

At the moment everything is pretty useful

Recommendations to others considering the product:

Everything is available to use. If you need any new features, you can simply ping them. They will consider your suggestion.

What problems are you solving with the product? What benefits have you realized?

Tracking my ML experiments. Hyperparameter tuning. Sharing results with my colleagues, and tons of other benefits.

Hamed Hojatian
PhD Researcher on Applied ML in telecommunication

The last few hours have been my first w/ Neptune and I’m really appreciative of how much time it’s saved me not having to fiddle w/ matplotlib in addition to everything else.

Hayden Le
Research Associate at UM’s Education Policy Initiative

I tested multiple loggers with pytorch-lightning integrations and found neptune to be the best fit for my needs. Friendly UI, ease of use, and great documentation.

Itsik Adiv
Research student at Tel Aviv University
I didn’t expect this level of support.
Daeyun Shin
PhD Candidate, UC-Irvine

I just had a look at neptune logger after a year and to be honest, I am very impressed with the improvements in UI! Earlier, it was a bit hard to compare experiments with charts. I am excited to try this!

Abhinav Moudgil
Researcher at Georgia Institute of Technology
I am super messy with my experiments, but now I have everything organized for me automatically. I love it!
Daniela Rim
MS student in Computer Science at Handong Global University

“I had been thinking about systems to track model metadata and it occurred to me I should look for existing solutions before building anything myself.
Neptune is definitely satisfying the need to standardize and simplify tracking of experimentation and associated metadata.
My favorite feature so far is probably the live tracking of performance metrics, which is helpful to understand and troubleshoot model learning.
I also find the web interface to be lightweight, flexible, and intuitive.”

Ian Miller
CTO @ Betterbin

“While logging experiments is great, what sets Neptune apart for us at the lab is the ease of sharing those logs. The ability to just send a Neptune link in slack and letting my coworkers see the results for themselves is awesome. Previously, we used Tensorboard + locally saved CSVs and would have to send screenshots and CSV files back and forth which would easily get lost. So I’d say Neptune’s ability to facilitate collaboration is the biggest plus.”

Greg Rolwes
Computer Science Undergraduate at Saint Louis University

Indeed, it was a game-changer for me. As you know, AI training workloads are lengthy in nature and sometimes prone to hanging in a Colab environment, and just being able to launch a set of tests trying different hyperparameters, with the assurance that the experiment will be correctly recorded in terms of results and hyperparameters, was big for me.

Bouhamza Khalil
Part-time PhD student at ENSIAS Mohamed V University in Rabat, Morocco

“Neptune was easy to set up and integrate into my experimental flow. The tracking and logging options are exactly what I needed and the documentation was up to date and well written.”

Varun Ravi Varma
Teaching Assistant at University of Groningen

“I have been pleasantly surprised with how easy it was to set up Neptune in my PyTorch Lightning projects!”

Alex Morehead
Ph.D. Student in Computer Science at the University of Missouri

I used to keep track of my models with folders on my machine and use naming conventions to save the parameters and model architecture. Whenever I wanted to track something new about the model, I would have to update the naming structure. It was painful. There was a lot of manual work involved.

Now everything happens automatically. I can compare models in the online interface that looks great. It saves me a lot of time, and I can focus on my research instead of keeping track of everything manually.

Abdalrheem Ijjeh
Researcher at IFFM, Polish Academy of Sciences

“I’m working on a server that is not graphical, it’s always a hassle to connect it to my local laptop to show the results in TensorBoard. I just want to see it online and Neptune lets me do that easily.

When I’m doing hyperparameter optimization and I am running a lot of experiments TensorBoard gets very cluttered. It’s hard to organize and compare things. Neptune UI is very clear, it’s intuitive, and scales with many runs. I can group my experiments by a parameter like dropout to see the impact it has on results. For us in research, it’s not just about the best model run, we need to understand how models perform on a deeper level.

I was looking for alternatives to PyTorch Lightning native logger. The main reason why I chose Neptune over TensorBoard was that you could just change the native logger to NeptuneLogger, pass your user token, and everything would work out-of-the-box. I didn’t have to change the code other than that one line. With TensorBoard I have to change the code to log things. I can customize it to log other things like text or images easily. Neptune just has a way better user experience.”

Ihab Bendidi
Biomedical AI Researcher

“The problem with training models on remote clusters is that every time you want to see what is going on, you need to get your FTP client up, download the logs to a machine with a graphical interface, and plot it. I tried using TensorBoard but it was painful to set up in my situation.

With Neptune, seeing training progress was as simple as hitting refresh. The feedback loop between changing the code and seeing whether anything changed is just so much shorter. Much more fun and I get to focus on what I want to do. I really wish that it existed 10 years ago when I was doing my PhD.”

Kaare Mikkelsen
Assistant Professor at Aarhus University

They already use Neptune to manage their ML metadata.
When will you?

Get started in 5 minutes.

Not convinced?

Try in a live Notebook (zero setup, no registration)

Try now

Explore example project

Go to project

Watch screencasts

Watch now

See the docs

Check now