Organize your Machine Learning projects

Organize experiments

Run experiments everywhere, keep the results in one place

You can execute experiment code on your laptop, in a cloud environment, or on a cluster, and all the information will be logged to central storage, hosted by us or deployed on-premises. Works out-of-the-box with Python, R, Jupyter notebooks, and other languages and environments.

import neptune

neptune.init('MyProject')    # connect to your project
neptune.create_experiment()  # start a run; everything logged from here goes to Neptune

Filter, sort, and compare experiments in a dashboard

Search through your experiments, compare them, and find the information you need in minutes.

See (only) the information you want: customize and save dashboard views

Choose which metrics, parameters, or other information you want to see and customize your dashboard. Create multiple dashboard views and save them for later.

Drill down to experiment details whenever you need it

Go from high-level analysis to details in one click. All the experiment information, like code, parameters, evaluation metrics, or model files, can be logged and displayed in Neptune.

Check out what you can log to Neptune here.
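
For example, metrics, text notes, images, and files all land on the experiment's detail page through the same client. A minimal sketch using the legacy Python client shown above (the metric names and file paths are placeholders, not required conventions):

import neptune

neptune.init('MyProject')
neptune.create_experiment(name='drill-down-example')  # hypothetical run name

neptune.log_metric('train/accuracy', 0.92)                 # numeric values become charts
neptune.log_text('notes', 'baseline run on v1 features')   # free-form text
neptune.log_image('confusion_matrix', 'cm.png')            # image file from disk
neptune.log_artifact('model.pkl')                          # any file, e.g. the model binary

neptune.stop()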

Access everything programmatically

You can download everything you logged to Neptune either in the UI or programmatically.

import neptune

project = neptune.init('MyProject')

df = project.get_leaderboard(tag=['features_v1'])  # experiments table as a pandas DataFrame
df.head()

Organize models

Have data, code, parameters, and model binaries versioned for every model training run

You can log every important piece of information about the model. Whether it is parameters, source code, environment definition, data version, or the model binary itself, it can be easily logged to Neptune for every training run.

neptune.create_experiment(params={'lr': 0.1, 'dropout': 0.3},
                          upload_source_files=['**/*.py', 'environment.yaml'])
...
log_data_version('path/to/my/data')  # data-versioning helper from the neptune-contrib package
...
neptune.log_artifact('model.pkl')    # upload the saved model file

Access model information programmatically

You can download all the model information that you logged, including code, model binary, metrics, and parameters. It is as simple as pointing Neptune to the experiment run that you want in your Python code.

import neptune

project = neptune.init('Project')
exp = project.get_experiments(id='Proj-123')[0]  # pick the run you want by its id

exp.get_parameters()                 # logged parameters as a dict
exp.download_artifact('model.pkl')   # download the model binary
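
From there it is plain Python. Assuming the artifact was downloaded into the current working directory, the model can be loaded back with pickle:

import pickle

# 'model.pkl' was fetched by exp.download_artifact() above;
# assuming it ends up in the current working directory
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)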

Organize data exploration and error analysis

Have all of your notebook checkpoints stored and rendered in one place

Every notebook checkpoint you save to Neptune is stored and rendered in one place, so you can go back to any stage of your data exploration or error analysis and see exactly what the notebook looked like.

Organize your notebook checkpoints: add name and description

Searching through notebook checkpoints and remembering to name everything as you do your exploratory analysis is difficult. But with Neptune you can version everything and name only the important parts of the analysis. Once you name a checkpoint or add a description to it, you will be able to find it easily whenever you need it!

Easily find and download notebook checkpoints directly into your Jupyter Notebook or JupyterLab

Finding the analysis you care about can be tricky, especially if it was quick and ad-hoc. Neptune lets you search through the notebook checkpoints you saved and find the one that you care about. Once you find it you can fetch it directly into your notebook!

Organize teamwork

Structure your teams’ ML work in organizations and projects

Clean up your ML teamwork by grouping your experiments into projects and organizations.

If you are working on a model that powers a particular feature, just create a project for it. All the experiments in that project will be easy to compare and search through.

If your team is working for multiple clients or departments, you can create an organization for each client and separate projects within each organization.
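
In code, a project is addressed by its qualified name, organization/project, so pointing a run at the right client and project is a one-line change. The organization and project names below are made up for illustration:

import neptune

# hypothetical setup: one organization per client, one project per model or feature
neptune.init('client-a/churn-prediction')
neptune.create_experiment(name='baseline')
neptune.stop()

neptune.init('client-b/demand-forecasting')
neptune.create_experiment(name='baseline')
neptune.stop()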

Manage who can access each project and what they can do

Give different roles to different people in your projects. You can have data scientists with full edit rights and clients or business stakeholders with view-only access, defined for every single project.

Thousands of Data Scientists already have their ML experimentation in order.
When will you?

✓ Sign up for a free account
✓ Add a few lines to your code
✓ Get back to running your experiments

Start tracking for FREE