Log every object you care about for every experiment you run and have it displayed in a beautiful UI.

How can you start logging?

To log things to Neptune you need an experiment 🙂 There are two ways to get one, and both take just two lines of code.

Create a new experiment

You can create a new experiment object with the neptune.create_experiment method.

import neptune

neptune.init('shared/pytorch-lightning-integration') # Get Project
neptune.create_experiment() # Create new experiment

neptune.log_metric('accuracy', 0.92) # Log things

Update existing experiment

You can fetch an existing experiment from Neptune and log new information or update existing logs.

import neptune

project = neptune.init('shared/pytorch-lightning-integration') # Get Project

experiment = project.get_experiments(id='PYTOR-63')[0] # Get Experiment
experiment.log_metric('f1_score', 0.92) # Log things

What types of things can you log?

You can log a ton of different object types to Neptune and it will understand how to display them.

Metrics

You can log one or multiple metrics to a log section with the neptune.log_metric method:

neptune.log_metric('test_accuracy', 0.76)
for epoch in range(epoch_nr):
    ... 
    neptune.log_metric('test_accuracy', epoch_accuracy)

You can compare runs based on metrics in the experiment dashboard:

For every metric in the “Logs” section, an interactive chart will be created in the “Charts” section.


Images

You can log one or multiple images to a log section with the neptune.log_image method:

neptune.log_image('predictions', image)
for image in validation_predictions:
    neptune.log_image('predictions', image)

Your images will be browsable in the “predictions” tab of the “Logs” section of the UI.

Note:

You can create multiple log tabs with images and log multiple images to a single log tab.

There are a few supported types:

Matplotlib

import matplotlib.pyplot as plt
fig = plt.figure()
plt.hist(x)

neptune.log_image('prediction_histogram', fig)

PIL

from PIL import Image

image = Image.open('/path/to/my_image.png')
neptune.log_image('training_images', image)

Numpy

import numpy as np

array = np.random.random((10,10))
neptune.log_image('heatmap', array)

Interactive charts

You can log interactive charts with the neptunecontrib.api.log_chart method and they will be rendered interactively in the UI.

Note:

Unlike neptune.log_image, logging with log_chart creates an object in the “charts” subfolder of the “Artifacts” section. If you log two charts under the same name, the second one will overwrite the first.

There are many supported chart flavors:

Matplotlib

import matplotlib.pyplot as plt

figure, axs = plt.subplots(2, 2)
axs[0, 0].hist(data[0])
axs[1, 0].scatter(data[0], data[1])
axs[0, 1].plot(data[0], data[1])
axs[1, 1].hist2d(data[0], data[1])

from neptunecontrib.api import log_chart

log_chart(name='mpl_chart', chart=figure)

Altair

import altair as alt
...
brush = alt.selection(type='interval')

points = alt.Chart(source).mark_point().encode(
...).add_selection(
    brush)

bars = alt.Chart(source).mark_bar().encode(
...).transform_filter(
    brush)

chart = points & bars

from neptunecontrib.api import log_chart

log_chart(name='alt_chart', chart=chart)

Bokeh

from bokeh.plotting import figure
...
p = figure(title="Texas Unemployment, 2009", ...)
p.hover.point_policy = "follow_mouse"
p.patches('x', 'y', source=data,...)

from neptunecontrib.api import log_chart

log_chart(name='bokeh_chart', chart=p)

Plotly

import plotly.express as px
...
fig = px.scatter_3d(df, x='sepal_length', y='sepal_width', z='petal_width',
              color='species')

from neptunecontrib.api import log_chart

log_chart('3d point clouds', fig)

Video

You can log video files and then watch them directly from the Web UI. Videos appear in the experiment, in the “Artifacts/video” folder.

from neptunecontrib.api.video import log_video

log_video('my_video.mp4')


Audio

If you work with audio files, you can log and listen to them directly from the Web UI. They appear in the experiment, in the “Artifacts/audio” folder.

from neptunecontrib.api.audio import log_audio

log_audio('my_audio.mp3')

Tables

You can log tables like Pandas dataframes or csv files to Neptune and they will be displayed in the “Artifacts” section in the UI.

Pandas

from neptunecontrib.api import log_table

log_table('pandas_df', df)

.csv files

neptune.log_artifact('features_sample.csv')


HTML objects

You can log any HTML object to Neptune and it will be displayed in the “html” subfolder of the “Artifacts” section in the UI.

from neptunecontrib.api import log_html
...
log_html('button_example', html)
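Since log_html takes a name and a string of HTML markup, any way of producing that string works. A minimal hand-assembled snippet (the label and message below are made up for illustration):

```python
# log_html expects the html argument to be a string of markup, so you can
# assemble it however you like. Label and message here are illustrative.
def make_button(label, message):
    return '<button onclick="alert(\'{}\')">{}</button>'.format(message, label)

html = make_button('Click me', 'Hello from Neptune')
```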

File objects

You can log any file to Neptune as an artifact. Those could be model binaries, prediction results, or any other file. Just use the neptune.log_artifact method:

neptune.log_artifact('model.pkl')

Python objects

You can log any picklable Python object to Neptune as an artifact. Those could be models, tables, or custom objects, as long as they can be pickled. Just use the neptunecontrib.api.pickle_and_log_artifact method:

from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()

from neptunecontrib.api import pickle_and_log_artifact

pickle_and_log_artifact(rf, 'rf')
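Under the hood this roughly amounts to pickling the object to a file and uploading that file as an artifact. A hedged sketch of that pattern with the standard library (names and paths here are illustrative only, not the actual Neptune implementation):

```python
# Sketch of the pickle-then-upload pattern: serialize the object to a file,
# then that file can be uploaded with neptune.log_artifact.
import os
import pickle
import tempfile

def pickle_to_file(obj, name):
    # Write the pickled object to a temporary file and return its path
    path = os.path.join(tempfile.gettempdir(), name + '.pkl')
    with open(path, 'wb') as f:
        pickle.dump(obj, f)
    return path

path = pickle_to_file({'n_estimators': 100, 'max_depth': 5}, 'rf_params')
with open(path, 'rb') as f:
    restored = pickle.load(f)
```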

Parameters

You can log the (hyper)parameters of your model training to Neptune. You can do that when you create your experiment.

neptune.create_experiment(params={'lr':0.01, 'dropout':0.1},...)
...

They will be displayed in the “Parameters” section of the UI.
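Parameters are displayed as flat name: value entries, so if your training config is nested you may want to flatten it first. A small helper for that (this helper is not part of the Neptune API):

```python
# Flatten a nested config dict into "section/name" keys before passing
# it to create_experiment(params=...). Not part of the Neptune API.
def flatten(config, prefix=''):
    flat = {}
    for key, value in config.items():
        name = prefix + str(key)
        if isinstance(value, dict):
            flat.update(flatten(value, name + '/'))
        else:
            flat[name] = value
    return flat

params = flatten({'optimizer': {'lr': 0.01, 'momentum': 0.9}, 'dropout': 0.1})
```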


Code

You can version your code with Neptune. Some things are versioned automatically and some you can decide to log or not. 

Git

Your commit information (id, message, author, date), branch, and remote repository address are logged automatically whenever you run your experiment. What is more, Neptune logs the entrypoint file so that you have all the information about the run.
It is shown in the “Details” section of the UI.

Code Snapshots

You can snapshot some files or folders and have them displayed in the Source code section of the UI.
To do that, just specify the files (or glob patterns) that you want to snapshot when you create an experiment.

neptune.create_experiment(upload_source_files=['**/*.py', 'config.yaml'])
...
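If you are unsure what a pattern like '**/*.py' will pick up, you can preview the matches before creating the experiment. This assumes matching is relative to the current working directory with standard glob semantics; check the Neptune docs for the exact matching rules:

```python
# Preview which files a set of snapshot patterns would match, assuming
# standard recursive glob semantics relative to the given root directory.
from pathlib import Path

def preview_snapshot(patterns, root='.'):
    root = Path(root)
    files = set()
    for pattern in patterns:
        files.update(str(p) for p in root.glob(pattern) if p.is_file())
    return sorted(files)

matched = preview_snapshot(['**/*.py', 'config.yaml'])
```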

Notebook Snapshots

You can also save notebook checkpoints to Neptune. You just need to install an additional notebook extension. With it, you can snapshot the whole notebook with the click of a button, or let Neptune auto-snapshot the notebook whenever you create a new experiment.
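If you want to try it, the extension is installed from PyPI and then enabled in Jupyter. These commands are from memory of the extension's install instructions, so check the current Neptune docs for the exact package name and flags:

```shell
pip install neptune-notebooks
jupyter nbextension enable --py neptune-notebooks
```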


Data versions

You can easily log data versions to Neptune and know on which data your experiment was run. To save the location and the hash of the data, both for the local filesystem and S3 buckets, you can use functions from the neptunecontrib.versioning.data module.

from neptunecontrib.versioning.data import log_data_version

log_data_version('/path/to/data/my_data.csv')
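The fingerprint such data versioning records is essentially a content hash of the file. A sketch of that idea with the standard library (the exact hash Neptune uses may differ; MD5 here is illustrative):

```python
# Compute a content hash that changes whenever the data file changes.
import hashlib
import os
import tempfile

def file_md5(path, chunk_size=8192):
    # Stream the file in chunks so large datasets don't need to fit in memory
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk_size), b''):
            md5.update(block)
    return md5.hexdigest()

# Create a tiny sample "dataset" to hash
path = os.path.join(tempfile.gettempdir(), 'my_data.csv')
with open(path, 'w') as f:
    f.write('feature,target\n0.1,1\n')

data_version = file_md5(path)
```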


Hardware consumption

You can monitor the hardware for your experiment runs automatically. You can see the consumption of GPU, CPU, and memory in the “Monitoring” section of the UI:


Experiment description and tags

You can add a description to your experiments or tag them to remember what it was about. It can be done both programmatically or from the “Details” section of the UI.

neptune.create_experiment(description='this is a quick hyperparameter sweep',
                          tags=['hpo', 'lgbm', 'features_v1'])
...

Properties

You can log user-defined properties of your model runs to Neptune.

Those could be data versions, URL or path to the model on your filesystem, or anything else that is a name(str): value(str) pair. 

There are two ways to do it:

When you create an experiment: 


neptune.create_experiment(properties={'data_version':'1231qwd0-i3'},...)

or during training:

neptune.set_property('data_version', '1231qwd0-i3')

They will be displayed in the “Details” section of the UI.
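Since each property is a name(str): value(str) pair, it can help to cast non-string values explicitly before logging them. The values below are made up:

```python
# Property values are stored as strings, so cast everything up front
# (keys and values here are illustrative only).
raw = {'data_version': '1231qwd0-i3', 'n_rows': 100000, 'lr_used': 0.01}
properties = {key: str(value) for key, value in raw.items()}
```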

Framework integrations

For most machine learning frameworks we’ve created integrations so that you don’t have to implement logging utilities yourself.

TensorFlow

import neptune
import neptune_tensorboard as neptune_tb

neptune.init(...)
neptune_tb.integrate_with_tensorflow()

neptune.create_experiment(...)

Keras

import neptune
import neptune_tensorboard as neptune_tb

neptune.init(...)
neptune_tb.integrate_with_keras()

neptune.create_experiment(...)
...
model.fit()

Pytorch Lightning

from pytorch_lightning.logging.neptune import NeptuneLogger

trainer = Trainer(logger=NeptuneLogger(...))

trainer.fit(model)

Pytorch Ignite

from ignite.contrib.handlers.neptune_logger import *

npt_logger = NeptuneLogger(...)

npt_logger.attach(validation_evaluator,
                  log_handler=OutputHandler(tag="validation",
                                            metric_names=["loss", "accuracy"],
                                            another_engine=trainer),
                  event_name=Events.EPOCH_COMPLETED)

trainer.run(...)

Catalyst

from catalyst.contrib.dl.callbacks.neptune import NeptuneLogger

runner = SupervisedRunner()
runner.train(..., callbacks=[NeptuneLogger(...)])

skorch

import neptune
from skorch.callbacks.logging import NeptuneLogger

neptune.init(...)
experiment = neptune.create_experiment(...)

net = NeuralNetClassifier(..., callbacks=[NeptuneLogger(experiment, ...)])
net.fit(X, y)

Fastai

import neptune
from neptunecontrib.monitoring.fastai import NeptuneMonitor

neptune.init(...)
neptune.create_experiment(...)

learn = cnn_learner(..., callback_fns=[NeptuneMonitor])
learn.fit_one_cycle()

XGBoost

import neptune
from neptunecontrib.monitoring.xgboost import neptune_callback

neptune.init(...)
neptune.create_experiment(...)

xgb.train(..., callbacks=[neptune_callback()])

LightGBM

import neptune
from neptunecontrib.monitoring.lightgbm import neptune_monitor

neptune.init(...)
neptune.create_experiment(...)

gbm = lgb.train(..., callbacks=[neptune_monitor()])

Optuna

import neptune
from neptunecontrib.monitoring.optuna import NeptuneCallback

neptune.init(...)
neptune.create_experiment(...)

study.optimize(..., callbacks=[NeptuneCallback()])

Scikit-Optimize

import neptune
from neptunecontrib.monitoring.skopt import NeptuneCallback

neptune.init(...)
neptune.create_experiment(...)

results = skopt.forest_minimize(..., callback=[NeptuneCallback()])

Thousands of Data Scientists already have their ML experimentation in order. When will you?

✓ Sign up for a free account
✓ Add a few lines to your code
✓ Get back to running your experiments

Start tracking for FREE