We are excited to announce that Neptune and Modelbit have partnered to release an integration that brings together ML model deployment and experiment tracking. Data scientists and machine learning engineers can use the integration to train and deploy machine learning models in Modelbit while logging and visualizing training progress in Neptune.
If you are not already familiar, Neptune is a lightweight experiment tracker for MLOps. It offers a single place to track, compare, store, and collaborate on experiments and models.
Modelbit is a machine learning platform that makes deploying custom ML models to REST Endpoints as simple as calling “modelbit.deploy()” in any data science notebook or Python editor.
In this post, we will cover the following topics:
- Setting up the integration between Modelbit and Neptune
- Creating a training job in Modelbit that:
  - Logs the model’s hyperparameters and accuracy to Neptune
  - Deploys the model to a REST endpoint
In case you want to jump right into setting up the integration, you can follow the instructions in Modelbit’s documentation.
Setting up the integration
To get started, you’ll need to create free accounts with both Modelbit and Neptune.
Modelbit integrates with Neptune using your Neptune API token so you can log training metadata and model performance to your Neptune projects.
To add your Neptune API token to Modelbit, go to the Integrations tab of Settings in your Modelbit account, click the “Neptune” tile, and add your “NEPTUNE_API_TOKEN”. This token will be available in your training jobs’ environments as an environment variable so you can automatically authenticate with Neptune.
Creating a Modelbit training job that uses Neptune
We’ll make a training job that trains a model to predict flower types, using the Iris dataset from scikit-learn. We’ll log the model’s hyperparameters and accuracy to Neptune and then deploy the model to a REST endpoint.
Our model is very simple and relies on two features to predict the flower type.
First, import “modelbit” and “neptune” and authenticate your notebook with Modelbit:
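A minimal setup sketch (assuming the “modelbit” and “neptune” packages are installed, e.g. via pip):

```python
# Assumes both clients are installed: pip install modelbit neptune
import modelbit
import neptune

# modelbit.login() prints an authentication link; open it to connect
# this notebook session to your Modelbit account
mb = modelbit.login()
```

The returned “mb” session object is what we’ll use for the Modelbit calls below.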
If your “NEPTUNE_API_TOKEN” isn’t already in your notebook’s environment, add it:
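One way to do this from Python (the token value here is a placeholder for your real Neptune API token):

```python
import os

# Placeholder value; paste your actual Neptune API token instead.
# setdefault leaves the variable untouched if it is already set.
os.environ.setdefault("NEPTUNE_API_TOKEN", "YOUR_NEPTUNE_API_TOKEN")
```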
Creating the training job
We’ll create a function to encapsulate our training logic. At the top of the function, we call “run = neptune.init_run(…)” to start a run and record our hyperparameters with “run[…]”. Be sure to change the “project=” parameter in “neptune.init_run” to your own Neptune project.
Then we create and fit the model, logging the model’s accuracy to Neptune and saving the model with “mb.add_model”.
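Putting those steps together, the training function might look like the following sketch. The project name and hyperparameters are illustrative, and “mb” is the authenticated session returned by “modelbit.login()” earlier:

```python
import neptune
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def train_flower_predictor():
    # Start a Neptune run; the client reads NEPTUNE_API_TOKEN from the
    # environment. Replace the project with your own "workspace/project".
    run = neptune.init_run(project="your-workspace/flower-predictor")

    # Illustrative hyperparameters, logged to Neptune up front
    params = {"max_iter": 200, "random_state": 42}
    run["parameters"] = params

    # Keep only the first two features (sepal length and width)
    X, y = load_iris(return_X_y=True)
    X = X[:, :2]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, random_state=params["random_state"]
    )

    # Fit the model and log its held-out accuracy to Neptune
    model = LogisticRegression(max_iter=params["max_iter"]).fit(X_train, y_train)
    run["accuracy"] = model.score(X_test, y_test)
    run.stop()

    # Save the trained model so the inference deployment can use it
    mb.add_model("flower_predictor", model)
```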
Deploy and run the training job
We can now deploy our training function to Modelbit with “mb.add_job”:
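Registering the function as a job is a single call; a minimal sketch (Modelbit’s documentation lists the optional keyword arguments):

```python
# Package the training function and its dependencies as a Modelbit job
mb.add_job(train_flower_predictor)
```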
Click the “View in Modelbit” button, then click “Run Now”. Once the job completes, head over to your Neptune project to see that the job logged a new run!
Create a REST endpoint
Finally, we’ll deploy our flower predictor model to a REST endpoint. We’ll make an inference function that accepts two input features and calls the model we trained, returning the predicted flower type:
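A sketch of such a function, where “model” is the classifier produced by the training job (here we train a stand-in locally so the snippet is self-contained, and the class names follow scikit-learn’s Iris “target_names”):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Stand-in for the model produced by the training job above
iris = load_iris()
model = LogisticRegression(max_iter=200).fit(iris.data[:, :2], iris.target)

def predict_flower(sepal_length: float, sepal_width: float) -> str:
    # The deployed function captures `model` in its closure
    prediction = model.predict([[sepal_length, sepal_width]])[0]
    return str(iris.target_names[prediction])
```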
Deploy the inference function to create a REST endpoint:
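As with the training job, deployment is one call; Modelbit captures the function along with its dependencies (including the “model” variable) and creates the endpoint:

```python
# Creates a REST endpoint for predict_flower in your Modelbit workspace
mb.deploy(predict_flower)
```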
Our flower-predicting model is live as a REST endpoint, and every time we retrain it, the hyperparameters and accuracy are logged to Neptune for careful tracking.
Neptune and Modelbit share a vision of empowering ML teams to confidently ship impactful ML models into production. With this integration, machine learning engineers and data scientists can train and deploy models in Modelbit while logging and visualizing every training run in Neptune.