Have all your production-ready models in a centralized model repository
Version production-ready models and metadata associated with them in a single place.
Review models and transition them between development stages.
Access all models your team created via API, or browse them in the UI.

import neptune
# Register model
model = neptune.init_model(
    name="face_detection", key="DET"
)
model["validation/dataset"].track_files("s3://datasets")
# Save version and metadata
model_version = neptune.init_model_version(
    model="FACE-DET"
)
model_version["validation/acc"] = 0.97
model_version["model/binary"].upload("model.pt")
# Transition development stage
model_version.change_stage("staging")
# Access model
model_version["model/binary"].download()

Register a model
Register a production-ready model. You can attach any metadata or artifacts to it and organize them in any structure you want.

model = neptune.init_model(
    name="face_detection", key="DET",
)
model["validation/dataset"].track_files("s3://datasets/validation")
Create model version
For any registered model, create as many model versions as you want. Again, you can attach whatever metadata you want to each version.

model_version = neptune.init_model_version(
    model="FACE-DET",
)
model_version["model/binary"].upload("model.pt")
model_version["validation/acc"] = 0.97
Version external model artifacts
Save the hash, location, and other model artifact metadata. You don't have to upload the model to Neptune. Just keep a reference to the model in local or S3-compatible storage.

model_version["model/binary"].track_files("model.pt")
Review and change stages
Look at the validation and test metrics and other model metadata, and approve stage transitions. You can move models between None/Staging/Production/Archived.

model_version.change_stage("staging")
Integrate with any MLOps stack
Trusted by 20,000+ ML practitioners and 500+ commercial and research teams
Get started
Sign up to Neptune and install the client library
pip install neptune
Track experiments
import neptune
run = neptune.init_run()
run["params"] = {
"lr": 0.1, "dropout": 0.4
}
run["test_accuracy"] = 0.84
Register models
import neptune
# a model key is required when registering a new model; "MOD" is a placeholder
model = neptune.init_model(key="MOD")
model["model"] = {
    "size_limit": 50.0,
    "size_units": "MB",
}
model["model/signature"].upload(
    "model_signature.json")
Code examples, videos, projects gallery, and other resources
Yes, you can deploy Neptune on-premises (and other answers)
-
Read more about our deployment options here.
But in short, yes, you can deploy Neptune on your on-prem infrastructure or in your private cloud.
It is a set of microservices distributed as a Helm chart that you deploy on Kubernetes.
If you don't have your own Kubernetes cluster deployed, our installer will set up a single-node cluster for you.
As for infrastructure requirements, you need a machine with at least 8 CPUs, 32 GB of RAM, and 1 TB of SSD storage.
Read the on-prem documentation if you’re interested, or talk to us (support@neptune.ai) if you have questions.
If you have any trouble, our deployment engineers will help you all the way.
-
Yes, you can just reference datasets that sit on your infrastructure or in the cloud.
For example, you can have your datasets on S3 and just reference the bucket:
run["train_dataset"].track_files("s3://datasets/train.csv")
Neptune will save the following metadata about this dataset:
- version (hash),
- location (path),
- size,
- folder structure and contents (files).
Neptune never uploads the dataset, just logs the metadata about it.
You can later compare datasets or group experiments by dataset version in the UI.
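For example, a minimal sketch (the path and field name are placeholders) of reading the recorded dataset version back so you can compare runs programmatically as well:

import neptune

run = neptune.init_run()
run["train_dataset"].track_files("s3://datasets/train.csv")
run.wait()  # flush pending operations so the artifact hash is available

# The hash identifies the exact dataset version this run was trained on
dataset_version = run["train_dataset"].fetch_hash()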
-
Short version. People choose Neptune when:
- They don't want to maintain infrastructure (including autoscaling, backups, etc.),
- They keep scaling their projects (and get into thousands of runs),
- They collaborate with a team (and want user access, multi-tenant UI, etc.).
For the long version, read this full feature-by-feature comparison.
-
Short version. People choose Neptune when:
- They want to pay a reasonable price and be able to invite unlimited users for free,
- They want a super flexible tool (customizable logging structure, dashboards, works great with time series ML),
- They want a component for experiment tracking and model registry, NOT an end-to-end platform (WandB has HPO, orchestration, model deployment, etc. We integrate with best-in-class tools in the space).
For the long version, read this full feature-by-feature comparison.
-
It depends on what you mean by "model monitoring".
As we talk to teams, it seems that "model monitoring" means six different things to three different people:
- (1) Monitor model performance in production: See if the model performance decays over time and whether you should re-train it
- (2) Monitor model input/output distribution: See how the distribution of input data, features, and predictions changes over time
- (3) Monitor model training and re-training: See learning curves, trained model predictions distribution, or confusion matrix during training and re-training
- (4) Monitor model evaluation and testing: Log metrics, charts, predictions, and other metadata for your automated evaluation or testing pipelines
- (5) Monitor hardware metrics: See how much CPU, GPU, or memory your models use during training and inference
- (6) Monitor CI/CD pipelines for ML: See the evaluations from your CI/CD pipeline jobs and compare them visually
So, looking at the tooling landscape and where Neptune fits:
- Neptune does (3) and (4) really well, but we've seen teams use it for (5) and (6) as well
- Prometheus + Grafana is really good at (5), but people use it for (1) and (2)
- WhyLabs or Arize are really good at (1) and (2)
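To make (3) and (4) concrete, here is a minimal sketch of what that logging looks like with Neptune (the metric values and file name are placeholders); hardware metrics for (5) are collected in the background while the run is active:

import neptune

run = neptune.init_run()

# (3) Monitor training/re-training: log learning curves as the model trains
for epoch in range(10):
    train_loss = 0.9 ** epoch  # placeholder value
    run["train/loss"].append(train_loss)

# (4) Monitor evaluation/testing: log metrics and diagnostic charts from test pipelines
run["eval/accuracy"] = 0.92
run["eval/confusion_matrix"].upload("confusion_matrix.png")

run.stop()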