Experiment tracking tool for machine learning projects
Track and organize your entire experimentation process, from exploratory analysis to model training runs and hyperparameter sweeps, and everything in between.
Log metrics, hyperparameters, data versions, hardware usage and more. Work on any infra, any language, scripts or notebooks.
Record data exploration
Experiments don’t have to stop with running training scripts. Version your exploratory data analysis and share it with your team.
Manage your team with organizations, projects, and user roles. Organize experiments with tags and custom views.
Quick and simple setup
Start tracking experiments in minutes. Work like you used to… just log it.
Insert a few lines of code into your standard training and validation scripts and start logging your experiment data.
Run on your laptop, in the cloud, on Google Colab or wherever you want.
Use it in scripts or in Jupyter notebooks. Run experiments your way; just let us track them.
pip install neptune-client
import neptune

neptune.init('awesome-project')
neptune.create_experiment('great-idea')

# any training or validation code you want

neptune.log_metric('auc', score)
neptune.log_image('diagnostics', 'roc_auc.png')
neptune.log_artifact('model_weights.h5')
UI that scales, super customizable and designed for teams
Log and organize millions of experiment runs.
Create custom views for data scientists or managers, and save them for later.
Search through experiments quickly with a powerful query language.
Intelligent table that shows you diffs and more
When you compare multiple experiment runs, it is sometimes difficult to figure out what is different and what you should look for.
We’ve created a table that automatically finds the columns whose values differ and displays them for you!
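The idea behind such a diff table can be sketched in a few lines of Python. This is a generic illustration of the technique, not Neptune’s actual implementation:

```python
def differing_columns(runs):
    """Given experiment runs as dicts of column -> value, return only
    the columns whose values are not identical across all runs."""
    columns = set().union(*(run.keys() for run in runs))
    diffs = {}
    for col in sorted(columns):
        values = [run.get(col) for run in runs]
        if len(set(values)) > 1:  # at least two runs disagree on this column
            diffs[col] = values
    return diffs

runs = [
    {"lr": 0.01, "batch_size": 32, "auc": 0.91},
    {"lr": 0.001, "batch_size": 32, "auc": 0.94},
]
print(differing_columns(runs))  # only 'lr' and 'auc' differ
```

Columns that are identical everywhere (here, batch_size) are hidden, so the table shows only what actually changed between runs.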
Track versions of your datasets, group results by datasets
Datasets change during the project lifetime. You can log dataset signatures as you run experiments and group the results by dataset in the UI.
log_data_version('/path/to/my_data.csv')  # local
log_s3_data_version('my-bucket', 'train_dir/')  # S3
Organize your projects, give different roles to different people
You can assign people to different organizations and projects.
You can choose whether they should be able to edit experiment data or simply view what is happening and comment on it.
Experiment in the notebooks, let us autosave your .ipynb code
When you are running quick-and-dirty experiments in notebooks, some parameters or code changes can get lost.
So we created an extension that snapshots your .ipynb code whenever you run an experiment!
Notebook Versioning and Diffing
Version your exploratory analysis, get notebook diffs for free
Experimentation doesn’t stop at the training script, so we created an extension to track the exploratory data analysis or results exploration that you do in your Jupyter notebooks.
With it you can save, share, and diff the analysis of your entire team!
Fetch experiment data from the app, visualize results in notebooks
Do you want to access experiment data like metrics, hyperparameters, or model binaries programmatically?
Neptune lets you fetch everything you or your teammates logged, directly into your scripts or notebooks!
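The fetch-and-analyze workflow looks roughly like this. Here a small in-memory store stands in for the tracking server; the class and method names below are illustrative, not the actual Neptune client API:

```python
class ExperimentStore:
    """Toy stand-in for a tracking backend that serves logged run data."""

    def __init__(self):
        self._runs = {}

    def log_metric(self, run_id, name, value):
        """Record a metric value for a given run."""
        self._runs.setdefault(run_id, {})[name] = value

    def fetch(self, run_id):
        """Return everything logged for a run, e.g. to plot in a notebook."""
        return dict(self._runs.get(run_id, {}))

store = ExperimentStore()
store.log_metric("exp-1", "auc", 0.94)
store.log_metric("exp-1", "loss", 0.21)
print(store.fetch("exp-1"))
```

The point is the round trip: anything a teammate logged during training can later be pulled back into an analysis notebook as plain Python data.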
Connect with the tools you use, start tracking in minutes
Neptune provides loggers for all the major machine learning frameworks and hyperparameter optimization libraries so that you don’t have to implement them yourself.
Something missing? Tell us and we will add it for you!
# PyTorch Lightning
trainer = Trainer(logger=NeptuneLogger(...))

# Catalyst
runner = SupervisedRunner()
runner.train(callbacks=[NeptuneLogger(...)])

# Fastai
learn.callbacks.append(NeptuneMonitor())
learn.fit_one_cycle(...)

# Optuna
study.optimize(..., callbacks=[NeptuneMonitor()])
Try it out on Google Colab. No registration needed.
We integrate with your favourite frameworks and tools
“Lets me see the progress anytime”
“Gives us flexibility we need”
“Hooks to multiple frameworks”
Manage your ML experiments. Create a Free Account and Get Started, It’s Free!