Register a model
Register a production-ready model.
You can attach any metadata or artifacts to it and organize them in any structure you want.
import neptune.new as neptune

model = neptune.init_model(
    name="face_detection",
    key="DET",
)
model["validation/dataset"].track_files(
    "s3://datasets/validation"
)
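The slash-separated paths above define an arbitrary nested namespace for your metadata. As a rough illustration of that structure (plain Python, not Neptune's API), slash-delimited keys can be thought of as a nested dictionary:

```python
def to_nested(flat: dict) -> dict:
    """Expand slash-delimited keys into a nested dict,
    mirroring how namespaced metadata paths are organized."""
    nested = {}
    for path, value in flat.items():
        node = nested
        parts = path.split("/")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested

meta = {
    "validation/dataset": "s3://datasets/validation",
    "validation/acc": 0.97,
    "model/binary": "model.pt",
}
print(to_nested(meta))
# {'validation': {'dataset': 's3://datasets/validation', 'acc': 0.97}, 'model': {'binary': 'model.pt'}}
```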
Create a model version
For any registered model, create as many model versions as you want.
Again, you can attach whatever metadata you want to it.
model_version = neptune.init_model_version(
    model="FACE-DET",
)
model_version["model/binary"].upload(
    "model.pt"
)
model_version["validation/acc"] = 0.97
Version external model artifacts
Save the hash, location, and other model artifact metadata.
You don’t have to upload the model to Neptune.
Just track a reference to the model in local or S3-compatible storage.
model_version["model/binary"].track_files(
    "model.pt"
)
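What gets recorded for a tracked reference is essentially artifact metadata rather than the file itself. A minimal sketch of that idea (plain Python with hashlib, not Neptune's internals; the field names here are made up for illustration):

```python
import hashlib
import os

def artifact_metadata(path: str) -> dict:
    """Record a content hash, location, and size for a file
    instead of uploading the file itself."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return {
        "location": os.path.abspath(path),
        "sha256": sha.hexdigest(),
        "size_bytes": os.path.getsize(path),
    }

# Example: write a stand-in "model" file and record its metadata.
with open("model.pt", "wb") as f:
    f.write(b"fake model weights")
print(artifact_metadata("model.pt"))
```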
Review and change stages
Review the validation and test metrics and other model metadata, then approve stage transitions.
You can move model versions between the None, Staging, Production, and Archived stages.
model_version.change_stage("staging")
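The stage lifecycle itself is simple to model. As a self-contained sketch (plain Python; the stage names mirror the ones listed above, but this validation logic is illustrative, not Neptune's implementation):

```python
STAGES = ("none", "staging", "production", "archived")

class ModelVersionStage:
    """Tiny stand-in tracking a model version's lifecycle stage."""

    def __init__(self) -> None:
        self.stage = "none"

    def change_stage(self, new_stage: str) -> None:
        if new_stage not in STAGES:
            raise ValueError(f"unknown stage: {new_stage!r}")
        self.stage = new_stage

mv = ModelVersionStage()
mv.change_stage("staging")     # promote after validation review
mv.change_stage("production")  # approve for serving
print(mv.stage)  # production
```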
Access and share models
Every model and model version is accessible via the Neptune app or through the API.
Once you have all the model artifacts, you can deploy your model in your production pipelines or serve it via an API.
import neptune.new as neptune

model_version = neptune.init_model_version(
    version="FACE-DET-42"
)
model_version["model/signature"].download()

Resources
Code examples, videos, projects gallery, and other resources.
Trusted by 20,000+ ML practitioners
and 500+ commercial and research teams

“I’ve been mostly using Neptune just looking at the UI which I have, let’s say, kind of tailored to my needs. So I added some custom columns which will enable me to easily see the interesting parameters and based on this I’m just shifting over the runs and trying to capture what exactly interests me.”


“Gone are the days of writing stuff down on Google Docs and trying to remember which run was executed with which parameters and for what reasons. Having everything in Neptune allows us to focus on the results and better algorithms.”


“Neptune is aesthetic. Therefore we could simply use the visualization it was generating in our reports.
We trained more than 120,000 models in total, for more than 7,000 subproblems identified by various combinations of features. Due to Neptune, we were able to filter experiments for given subproblems and compare them to find the best one. Also, we stored a lot of metadata, visualizations of hyperparameters’ tuning, predictions, pickled models, etc. In short, we were saving everything we needed in Neptune.”


“The way we work is that we do not experiment constantly. After checking out both Neptune and Weights & Biases, Neptune made sense to us due to its pay-per-use, usage-based pricing. Now, when we are doing active experiments, we can scale up, and when we’re busy integrating all our models for a few months, we scale down again.”