Model registry for ML engineers

Have all your production-ready models in a centralized model repository

Version production-ready models and metadata associated with them in a single place.
Review models and transition them between development stages.
Access all models your team created via API or browse them in the UI.

import neptune

# Register model
model = neptune.init_model(
    name="face_detection", key="DET"
)
model["validation/dataset"].track_files("s3://datasets")
# Save version and metadata
model_version = neptune.init_model_version(
    model="FACE-DET"
)
model_version["validation/acc"] = 0.97
model_version["model/binary"].upload("model.pt")
# Transition development stage
model_version.change_stage("staging")
# Access model
model_version["model/binary"].download()

Register a model

Register a production-ready model. You can attach any metadata or artifacts to it and organize them in any structure you want.

model = neptune.init_model(
    name="face_detection", key="DET",
)
model["validation/dataset"].track_files("s3://datasets/validation")


Create model version

For any registered model, create as many model versions as you need. Again, you can attach whatever metadata you want to each version.

model_version = neptune.init_model_version(
    model="FACE-DET",
)
model_version["model/binary"].upload("model.pt")
model_version["validation/acc"] = 0.97

Version external model artifacts

Save the hash, location, and other model artifact metadata. You don't have to upload the model to Neptune. Just keep a reference to the model binary in local or S3-compatible storage.

model_version["model/binary"].track_files("model.pt")


Review and change stages

Review validation and test metrics and other model metadata, then approve stage transitions. You can move models between None, Staging, Production, and Archived.

model_version.change_stage("staging")
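
Before approving a transition, you can pull all versions of a model into a table and compare them. A sketch, assuming the validation/acc field logged above:

model = neptune.init_model(with_id="FACE-DET")
versions_df = model.fetch_model_versions_table().to_pandas()
print(versions_df[["sys/id", "sys/stage", "validation/acc"]])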


Access and share models

Every model and model version is accessible via the Neptune App or through the API. Once you have all the model artifacts, you can deploy the model in your production pipelines or serve it via an API.

model_version = neptune.init_model_version(with_id="FACE-DET-42")

model_version["model/signature"].download()


Integrate with any MLOps stack
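
Neptune does not assume any particular trainer, orchestrator, or serving layer; any pipeline step that can run Python can log to the registry. A hedged sketch of a training job registering a new version when it finishes (the train() helper and its outputs are placeholders):

import neptune

def train():
    # placeholder for your framework-specific training loop
    return 0.97, "model.pt"

acc, weights_path = train()
model_version = neptune.init_model_version(model="FACE-DET")
model_version["validation/acc"] = acc
model_version["model/binary"].upload(weights_path)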

Case studies

Trusted by 20,000+ ML practitioners and 500+ commercial and research teams

I’ve been mostly using Neptune just looking at the UI which I have, let’s say, kind of tailored to my needs. So I added some custom columns which will enable me to easily see the interesting parameters and based on this I’m just shifting over the runs and trying to capture what exactly interests me.
Wojciech Rosiński, CTO at ReSpo.Vision
Gone are the days of writing stuff down on google docs and trying to remember which run was executed with which parameters and for what reasons. Having everything in Neptune allows us to focus on the results and better algorithms.
Andreas Malekos, Head of Artificial Intelligence at Continuum Industries
Neptune is aesthetic. Therefore we could simply use the visualization it was generating in our reports.

We trained more than 120,000 models in total, for more than 7000 subproblems identified by various combinations of features. Due to Neptune, we were able to filter experiments for given subproblems and compare them to find the best one. Also, we stored a lot of metadata, visualizations of hyperparameters’ tuning, predictions, pickled models, etc. In short, we were saving everything we needed in Neptune.
Patryk Miziuła, Senior Data Scientist at deepsense.ai
The way we work is that we do not experiment constantly. After checking out both Neptune and Weights and Biases, Neptune made sense to us due to its pay-per-use or usage-based pricing. Now when we are doing active experiments we can scale up, and when we’re busy integrating all our models for a few months we scale down again.
Viet Yen Nguyen, CTO at Hypefactors

Get started

1. Sign up to Neptune and install the client library

pip install neptune

2. Track experiments

import neptune

run = neptune.init_run()
run["params"] = {
    "lr": 0.1, "dropout": 0.4
}
run["test_accuracy"] = 0.84

3. Register models

import neptune

model = neptune.init_model(key="MOD")  # a new model needs a project-unique key; "MOD" is an example
model["model"] = {
    "size_limit": 50.0,
    "size_units": "MB",
}
model["model/signature"].upload(
    "model_signature.json")
Resources

Code examples, videos, projects gallery, and other resources

Frequently asked questions

Yes, you can deploy Neptune on-premises (and other answers)

  • Read more about our deployment options here.

    But in short, yes, you can deploy Neptune on your on-prem infrastructure or in your private cloud.

    It is a set of microservices distributed as a Helm chart that you deploy on Kubernetes.

    If you don’t have your own Kubernetes cluster deployed, our installer will set up a single-node cluster for you.

    As for infrastructure requirements, you need a machine with at least 8 CPUs, 32 GB RAM, and 1 TB SSD storage.

    Read the on-prem documentation if you’re interested, or talk to us (support@neptune.ai) if you have questions.

    If you have any trouble, our deployment engineers will help you all the way.

  • Yes, you can just reference datasets that sit on your infrastructure or in the cloud.

    For example, you can have your datasets on S3 and just reference the bucket:

    run["train_dataset"].track_files("s3://datasets/train.csv")

    Neptune will save the following metadata about this dataset:

    • version (hash)
    • location (path)
    • size
    • folder structure and contents (files)

    Neptune never uploads the dataset, just logs the metadata about it.

    You can later compare datasets or group experiments by dataset version in the UI; a sketch of reading this metadata back through the API follows below.
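
    A minimal sketch of that read-back, assuming a hypothetical run ID; fetch_hash() and fetch_files_list() are the artifact-field accessors:

    import neptune

    run = neptune.init_run(with_id="PROJ-1")  # hypothetical run ID
    print(run["train_dataset"].fetch_hash())
    for f in run["train_dataset"].fetch_files_list():
        print(f.file_path, f.size)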

  • Short version. People choose Neptune when:

    • They don’t want to maintain infrastructure (including autoscaling, backups etc.),
    • They keep scaling their projects (and get into thousands of runs),
    • They collaborate with a team (and want user access, multi-tenant UI etc.).

    For the long version, read this full feature-by-feature comparison.

  • Short version. People choose Neptune when:

    • They want to pay a reasonable price and be able to invite unlimited users for free,
    • They want a super flexible tool (customizable logging structure, dashboards, works great with time series ML),
    • They want a component for experiment tracking and model registry, NOT an end-to-end platform (WandB has HPO, orchestration, model deployment, etc. We integrate with best-in-class tools in the space).

    For the long version, read this full feature-by-feature comparison.

  • It depends on what “model monitoring” you mean.

    As we talk to teams, it seems that “model monitoring” means six different things to three different people:Ā 

    • (1) Monitor model performance in production: see whether model performance decays over time and the model needs re-training
    • (2) Monitor model input/output distribution: see how the distributions of input data, features, and predictions change over time
    • (3) Monitor model training and re-training: see learning curves, the trained model’s prediction distribution, or the confusion matrix during training and re-training
    • (4) Monitor model evaluation and testing: log metrics, charts, predictions, and other metadata for your automated evaluation or testing pipelines
    • (5) Monitor hardware metrics: see how much CPU/GPU or memory your models use during training and inference
    • (6) Monitor CI/CD pipelines for ML: see the evaluations from your CI/CD pipeline jobs and compare them visually

    So when looking at the tooling landscape and Neptune:

    • Neptune does (3) and (4) really well, but we saw teams use it for (5) and (6); a small sketch of (3) follows after this list
    • Prometheus + Grafana is really good at (5), but people use it for (1) and (2)
    • WhyLabs or Arize are really good at (1) and (2)
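
    As a minimal sketch of (3), logging learning curves during (re-)training uses the same client as the registry snippets above; the loss values here are stand-ins:

    import neptune

    run = neptune.init_run()  # hardware metrics, case (5), are also captured by default
    for epoch in range(10):
        loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
        run["train/loss"].append(loss)
    run.stop()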