Monitor, version, and share Keras experiments. It’s just one callback.

See experiment information you care about: learning curves, hardware consumption, parameters, and more. Have experiments versioned, easy to find and compare, and share them with others. Run everywhere, see in one place.

It really is free and takes 5 minutes to set up

Start collaborating for FREE

It is a good practice to have control over your model training process

And hundreds of data scientists are using Neptune to see that their experiments are running smoothly. 

“Within the first few tens of runs, I realized how complete the tracking was – not just one or two numbers, but also the exact state of the code, the best-quality model snapshot stored to the cloud, the ability to quickly add notes on a particular experiment. My old methods were such a mess by comparison.”

Edward Dixon

Data Scientist @Intel

OK, but how is it better, then?

Logging to console

  • Runs are not versioned -> no parameters, results, or source code info is saved anywhere
  • No way to compare your running and previous experiments
  • Hard to see hardware consumption over the entire training and evaluation process
  • Impossible to share your experiment runs with others

TensorBoard

  • You have to maintain your logdir and dashboard yourself -> easy to lose your work
  • Hard to compare experiments with your team when everyone runs things on different machines
  • No experiment dashboard to see metrics and parameters at a high level
  • Limited experiment comparison options
  • No easy way to log model checkpoints, HTML visualizations, or performance charts

It really is free and takes 5 minutes to set up

Start collaborating for FREE

Monitor, version and share your Keras experiments in a few steps

Add a few lines to your scripts

Connect Neptune to your project by adding just 3 lines at the top of your training script.

import neptune

neptune.init('Me/My-Project')
neptune.create_experiment(name='keras-integration-example')

Add a Callback

Pass a callback to your model.fit -> everything that happens during training will be logged automatically.

from neptunecontrib.monitoring.keras import NeptuneMonitor

model.fit(x_train, y_train, ...,
          callbacks=[NeptuneMonitor()])
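For intuition, Keras calls each callback's `on_epoch_end` with that epoch's metrics, and that hook is what a monitoring callback plugs into. Here is a minimal pure-Python sketch of the pattern; the `log_metric` stub below is ours, standing in for a call like `neptune.log_metric`, and the class is a simplified stand-in for the real Keras callback, not Neptune's implementation:

```python
# Sketch of the Keras callback protocol a monitoring callback relies on.
# log_metric is a stub standing in for a real tracking call.
logged = []

def log_metric(name, value):  # stand-in for neptune.log_metric
    logged.append((name, value))

class MonitorSketch:
    """Mimics the on_epoch_end hook of keras.callbacks.Callback."""
    def on_epoch_end(self, epoch, logs=None):
        # Keras passes the epoch's metrics as a dict; forward each one.
        for name, value in (logs or {}).items():
            log_metric(name, value)

# Simulate two training epochs, the way model.fit would drive the hook.
monitor = MonitorSketch()
monitor.on_epoch_end(0, {'loss': 0.9, 'acc': 0.6})
monitor.on_epoch_end(1, {'loss': 0.5, 'acc': 0.8})
```

Because the hook fires once per epoch, every metric in `logs` ends up tracked with no extra code in your training loop.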

Run it anywhere and see your model training live

You can run it on your laptop, remote cluster, or cloud. Works with Notebooks too, Colab and Kaggle kernels included.

Your learning curves, hardware consumption, scripts or notebooks, git info, and more are logged automatically. 

Compare your running experiments with previous versions

See and compare your experiments on metrics, parameters and other things in an interactive dashboard. 

You can also compare learning curves of your running and previous experiments live!

Share your runs or experiment comparisons with a link

Compared experiments and seen something you want to discuss with other data scientists?

Want to share progress with your manager?

Build a model and want to share it with the production team?

Just send a link, it’s that simple. Experiment comparisons, model details, dashboard views, notebook checkpoints, or anything else!

Additionally…

Log parameters

You can keep track of your hyperparameters and compare experiments based on those in the UI.

...
neptune.create_experiment(params={'epoch_nr': 5,
                                  'batch_size': 256,
                                  'lr': 0.005,
                                  'dropout': 0.05})
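Parameters are passed as flat key-value pairs; if your config lives in a nested dict, you can flatten it first so each hyperparameter gets its own comparable column. A small stdlib-only sketch; the `flatten_params` helper is our own illustration, not part of Neptune's API:

```python
def flatten_params(params, prefix=''):
    """Flatten a nested config dict into dot-separated keys."""
    flat = {}
    for key, value in params.items():
        name = f'{prefix}{key}'
        if isinstance(value, dict):
            # Recurse, extending the key path with a dot.
            flat.update(flatten_params(value, prefix=f'{name}.'))
        else:
            flat[name] = value
    return flat

config = {'optimizer': {'lr': 0.005, 'momentum': 0.9}, 'batch_size': 256}
flat = flatten_params(config)
# flat: {'optimizer.lr': 0.005, 'optimizer.momentum': 0.9, 'batch_size': 256}
# which you could then pass as: neptune.create_experiment(params=flat)
```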

Log charts and predictions

You can log things like image predictions after every epoch or ROC curves; it works with interactive HTML charts as well!

neptune.log_image('predictions', image)

Log model checkpoints

You can save your model checkpoint after every epoch or at the end of the training. 

Everyone on your team can access them from the UI or programmatically!

neptune.log_artifact('model.h5')
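A common pattern is to write a checkpoint file per epoch (as `keras.callbacks.ModelCheckpoint` would) and upload the final one as an artifact. The sketch below is stdlib-only: the `log_artifact` stub stands in for `neptune.log_artifact`, and the file contents are placeholders rather than real HDF5 weights:

```python
import os
import tempfile

uploaded = []

def log_artifact(path):  # stand-in for neptune.log_artifact
    uploaded.append(os.path.basename(path))

# Pretend each epoch produced a checkpoint file, then upload the last one.
with tempfile.TemporaryDirectory() as workdir:
    last_path = None
    for epoch in range(3):
        last_path = os.path.join(workdir, f'model-epoch-{epoch}.h5')
        with open(last_path, 'wb') as f:
            f.write(b'fake-weights')  # placeholder for real saved weights
    log_artifact(last_path)
```

In a real script you would call `model.save(path)` per epoch and `neptune.log_artifact(path)` on the checkpoint you want to keep.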

It really is free and takes 5 minutes to set up

Start collaborating for FREE

“But I have my runs logged with TensorBoard. What do I do?”
-> It’s simple, we have an integration.

Convert logdir to Neptune experiments

You can convert your existing experiments by sending your TensorBoard logdir to Neptune:

neptune tensorboard /path/to/tensorboard/logdir 

All your previous experiments are now in Neptune.

Plug in Neptune to TensorBoard

To log new experiments just add a few lines to your existing scripts:

import neptune
import neptune_tensorboard

neptune.init('Me/My-Project')
neptune.create_experiment(name='keras-integration-example')
neptune_tensorboard.integrate_with_keras()

Your experiments are now versioned, you can see them live in the app, and share with the team.

Extend -> log other things you care about

And you can extend it by logging HTML visualizations, model checkpoints, or any of the many other objects that Neptune supports.

...
from neptunecontrib.api import log_chart

neptune.log_artifact('model.h5')
neptune.log_image('predictions', image)
log_chart('ROC curve', plotly_fig)

What our users say

Over 3,500 ML people started monitoring their experiments with Neptune this year – read what some of them have to say:


“Such a fast setup! Love it:)”

Kobi Felton

“Without the information I have in the Monitoring section I wouldn’t know that my experiments are running 10 times slower than they could.
All of my experiments are being trained on separate machines which I can access only via ssh. If I would need to download and check all of this separately I would be rather discouraged :).”

Michał Kordas

Machine Learning Researcher @TensorCell

They already have their ML experimentation in order.
When will you?

✓ Sign up for a free account
✓ Add a few lines to your code
✓ Get back to running your experiments

Start tracking for FREE