Compare Neptune vs Comet

Super similar software
Polar opposite price


Both Neptune and Comet let you effectively track, visualize, and compare your ML model metadata.
Both offer flexible cloud-hosted or on-prem deployment options. And both make it easy to
collaborate and share experiment results with your stakeholders.

But only Neptune will fit nicely into your budget.

Unlimited users on every paid plan
Flexible data structures for seamless migration
Clear, detailed docs & tutorials

Choose Neptune for faster ML metadata management
For much, much less

Cost

Reasonable pricing that fits your reality

Per-user pricing for metadata management doesn’t make sense when half your team only needs to view the UI. Or if you frequently add and remove users. Or when you can go weeks on end without training any models. 

With Neptune, you pay a low monthly base fee — and get unlimited access for your entire team. (Plus, you get 3.5X more logging hours — even on the most affordable plan.)

Interface

The user interface that works as fast as you do

You shouldn’t have to live with a lagging UI or long experiment load times. Neptune is — and always will be — everything you need to manage your ML metadata. And nothing else. We built it as lightweight as possible. So you can work as fast as possible.

Scale from 10 to 2000+ runs. With zero effect on speed.

Feature-by-feature comparison

Take a deep dive into what makes Neptune different

Commercial Requirements

Standalone component or a part of a broader ML platform?
Neptune: Standalone component. ML metadata store that focuses on experiment tracking and model registry.
Comet: Stand-alone tool with community, self-serve, and managed deployment options.

Is the product available on-premises and/or in your private/public cloud?
Neptune: Yes, available as a managed cloud service or deployed on-premises / in your own cloud.
Comet: Yes, you can deploy Comet on any cloud environment or on-premises.

Is the product delivered as commercial software, open-source software, or a managed cloud service?
Neptune: Managed cloud service
Comet: Managed cloud service

Support: Does the vendor provide 24×7 support? | Neptune: – | Comet: –

SSO, ACL: Does the vendor provide user access management? | Neptune: – | Comet: –

General Capabilities

Setup

What are the infrastructure requirements?
Neptune: No special requirements other than having the neptune-client library installed and internet access if using managed hosting. See the Neptune docs for on-prem infrastructure requirements.
Comet: No special requirements other than having the comet_ml library installed and internet access if using managed hosting. See the Comet docs for on-prem deployment.

How much do you have to change in your training process? (see the sketch below)
Neptune: Minimal. Just a few lines of code needed for tracking.
Comet: Minimal. A few lines of code needed for tracking.

Does it integrate with the training process via CLI/YAML/client library?
Neptune: Yes, through the neptune-client library
Comet: Yes, through the comet_ml library

Does it come with a web UI or is it console-based? | Neptune: Web UI | Comet: Web UI

Serverless UI | Neptune: No | Comet: No
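For context, this is roughly what the few-lines-of-code integration looks like in both clients. A minimal sketch with placeholder project names and tokens, assuming neptune-client 1.x and the comet_ml Python SDK:

    import neptune
    from comet_ml import Experiment

    # Neptune: create a run and log to it (placeholder credentials).
    run = neptune.init_run(project="my-workspace/my-project", api_token="YOUR_TOKEN")
    run["parameters"] = {"lr": 1e-3, "batch_size": 64}   # hyperparameters
    for epoch in range(10):
        loss = 1.0 / (epoch + 1)                         # stand-in for a real training loss
        run["train/loss"].append(loss)                   # metric series
    run.stop()

    # Comet: the equivalent with an Experiment object (placeholder credentials).
    exp = Experiment(api_key="YOUR_KEY", project_name="my-project")
    exp.log_parameters({"lr": 1e-3, "batch_size": 64})
    for epoch in range(10):
        exp.log_metric("train_loss", 1.0 / (epoch + 1), step=epoch)
    exp.end()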

Flexibility, speed, and accessibility

Customizable metadata structure | Neptune: Yes | Comet: Yes
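Customizable here means the metadata hierarchy is up to you. In Neptune, any slash-separated field path creates a nested structure on the fly; the field names below are hypothetical, reusing the run object from the sketch above:

    run["data/train/path"] = "s3://bucket/train.csv"   # your own namespace layout
    run["data/train/size"] = 120_000
    run["model/architecture"] = "resnet50"
    run["eval/accuracy"] = 0.918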

How can you access model metadata?
– gRPC API | Neptune: No | Comet: No
– CLI / custom API | Neptune: Yes | Comet: Yes
– REST API | Neptune: No | Comet: Yes
– Python SDK | Neptune: Yes | Comet: Yes
– R SDK | Neptune: Yes | Comet: Yes
– Java SDK | Neptune: No | Comet: Yes
– Julia SDK | Neptune: No | Comet: No

Supported operations
– Search | Neptune: Yes | Comet: Yes
– Update | Neptune: Yes | Comet: Yes
– Delete | Neptune: Yes | Comet: Yes
– Download | Neptune: Yes | Comet: Yes

Distributed training support | Neptune: Yes | Comet: Yes

Pipelining support | Neptune: Yes | Comet: Only for Kubeflow and Vertex AI

Logging modes (see the sketch below)
– Offline | Neptune: Yes | Comet: Yes
– Debug | Neptune: Yes | Comet: No
– Asynchronous | Neptune: Yes | Comet: Yes, by default
– Synchronous | Neptune: Yes | Comet: No explicit selection
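In Neptune, the logging mode is picked once at initialization; a minimal sketch, assuming current neptune-client (supported values include "async", "sync", "offline", "debug", and "read-only"):

    import neptune

    offline_run = neptune.init_run(project="my-workspace/my-project", mode="offline")  # store locally, sync later
    debug_run = neptune.init_run(project="my-workspace/my-project", mode="debug")      # nothing is sent or saved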

Live monitoring | Neptune: Yes | Comet: Yes

Mobile support | Neptune: No | Comet: No

Webhooks and notifications
Neptune: No
Comet: Webhooks only for model management. Notifications only for changes in experiment status.

Experiment Tracking

Log and display of metadata

Dataset
– Location (path/S3) | Neptune: Yes | Comet: Yes
– Hash (MD5) | Neptune: Yes | Comet: Yes
– Preview table | Neptune: Yes | Comet: Yes
– Preview image | Neptune: Yes | Comet: Yes
– Preview text | Neptune: Yes | Comet: Yes
– Preview rich media | Neptune: Yes | Comet: –
– Multifile support | Neptune: Yes | Comet: Yes
– Dataset slicing support | Neptune: No | Comet: No

Code versions
– Git | Neptune: Yes | Comet: Yes
– Source | Neptune: Yes | Comet: Yes
– Notebooks | Neptune: Yes | Comet: Yes

Parameters | Neptune: Yes | Comet: Yes

Metrics and losses
– Single values | Neptune: Yes | Comet: Yes
– Series values | Neptune: Yes | Comet: Yes
– Series aggregates (min/max/avg/var/last) | Neptune: Yes | Comet: Yes

Tags | Neptune: Yes | Comet: Yes

Descriptions/comments | Neptune: Yes | Comet: Yes

Rich format
– Images (support for labels and descriptions) | Neptune: Yes | Comet: No
– Plots | Neptune: Yes | Comet: Yes
– Interactive visualizations (widgets and plugins) | Neptune: Yes | Comet: Yes
– Video | Neptune: Yes | Comet: No
– Audio | Neptune: Yes | Comet: Yes
– Neural network histograms | Neptune: No | Comet: Yes
– Prediction visualization (tabular) | Neptune: No | Comet: Yes
– Prediction visualization (image) | Neptune: No | Comet: N/A
– Prediction visualization (image: interactive confusion matrix for image classification) | Neptune: No | Comet: Yes
– Prediction visualization (image: overlaid prediction masks for image segmentation) | Neptune: No | Comet: No
– Prediction visualization (image: overlaid prediction bounding boxes for object detection) | Neptune: No | Comet: Only with the YOLOv5 integration

Hardware consumption
– CPU | Neptune: Yes | Comet: Yes
– GPU | Neptune: Yes | Comet: Yes
– TPU | Neptune: No | Comet: No
– Memory | Neptune: Yes | Comet: Yes

System information
– Console logs (stderr, stdout) | Neptune: Yes | Comet: No
– Error stack trace | Neptune: Yes | Comet: No
– Execution command | Neptune: No | Comet: No
– System details (host, user, hardware specs) | Neptune: Yes | Comet: Yes

Environment config
– pip requirements.txt | Neptune: Yes | Comet: Yes
– conda env.yml | Neptune: Yes | Comet: Yes
– Docker Dockerfile | Neptune: Yes | Comet: Yes

Files
– Model binaries | Neptune: Yes | Comet: Yes
– CSV | Neptune: Yes | Comet: Yes
– External file reference (S3 buckets) (see the sketch below) | Neptune: Yes | Comet: Logs meta-info for datasets in S3-like paths
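For the dataset rows above, Neptune records location and a content hash through artifact tracking; a minimal sketch with hypothetical paths, reusing the run from earlier:

    run["datasets/train"].track_files("s3://my-bucket/datasets/train")  # external S3 reference
    run["datasets/sample"].track_files("data/train.csv")                # local file; location and hash are recorded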

Comparing experiments

Table format diff | Neptune: Yes | Comet: Yes

Overlaid learning curves | Neptune: Yes | Comet: Yes

Parameters and metrics
– Group-by on experiment values (parameters) | Neptune: Yes | Comet: Yes
– Parallel coordinates plots | Neptune: Yes | Comet: Yes
– Parameter importance plot | Neptune: No | Comet: No
– Slice plot | Neptune: No | Comet: No
– EDF plot | Neptune: No | Comet: No

Rich format (side by side)
– Image | Neptune: Yes | Comet: No
– Video | Neptune: No | Comet: No
– Audio | Neptune: No | Comet: No
– Plots | Neptune: No | Comet: Yes
– Interactive visualization (HTML) | Neptune: No | Comet: No
– Text | Neptune: Yes | Comet: Yes
– Neural network histograms | Neptune: No | Comet: No
– Prediction visualization (tabular) | Neptune: Yes | Comet: Yes
– Prediction visualization (image, video, audio) | Neptune: No | Comet: No

Code
– Git | Neptune: No | Comet: No
– Source files | Neptune: No | Comet: No
– Notebooks | Neptune: Yes | Comet: No

Environment
– pip requirements.txt | Neptune: No | Comet: No
– conda env.yml | Neptune: No | Comet: No
– Docker Dockerfile | Neptune: No | Comet: No

Hardware
– CPU | Neptune: Yes | Comet: No
– GPU | Neptune: Yes | Comet: No
– Memory | Neptune: Yes | Comet: No

System information
– Console logs (stderr, stdout) | Neptune: Yes | Comet: Yes
– Error stack trace | Neptune: Yes | Comet: Yes
– Execution command | Neptune: No | Comet: No
– System details (host, owner) | Neptune: Yes | Comet: Yes

Data versions
– Location | Neptune: Yes | Comet: Yes
– Hash | Neptune: Yes | Comet: Yes
– Dataset diff | Neptune: Yes | Comet: No
– External reference version diff (S3) | Neptune: No | Comet: No

Files
– Models | Neptune: No | Comet: No
– CSV | Neptune: No | Comet: No

Custom compare dashboards
– Combining multiple metadata types (image, learning curve, hardware) | Neptune: Yes | Comet: Yes
– Logging custom comparisons from notebooks/code | Neptune: No | Comet: No
– Compare/diff of multiple (3+) experiments/runs | Neptune: Yes | Comet: No

Organizing and searching experiments and metadata

Experiment table customization
– Adding/removing columns | Neptune: Yes | Comet: Yes
– Renaming columns in the UI | Neptune: Yes | Comet: Yes
– Adding colors to columns | Neptune: Yes | Comet: Yes
– Displaying aggregates (min/max/avg/var/last) for series like training metrics in a table | Neptune: Yes | Comet: No
– Automagical column suggestion | Neptune: Yes | Comet: No

Experiment filtering and searching
– Searching on multiple criteria | Neptune: Yes | Comet: Yes
– Query language vs. fixed selectors | Neptune: Query language | Comet: Query language
– Saving filters and search history | Neptune: Yes | Comet: Yes

Custom dashboards for a single experiment
– Can combine different metadata types in one view | Neptune: Yes | Comet: Yes, via Reports
– Saving experiment table views | Neptune: Yes | Comet: Yes, via project views
– Logging project-level metadata | Neptune: Yes | Comet: Yes, but only environment and Git hash
– Custom widgets and plugins | Neptune: No | Comet: Yes, can be custom-coded in Comet Panels

Tagging and searching on tags (see the sketch below) | Neptune: Yes | Comet: Yes

Nested metadata structure support in the UI | Neptune: Yes | Comet: No
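Tags in Neptune are just a reserved field, so tagging from code is one line; a minimal sketch, reusing the run from earlier:

    run["sys/tags"].add(["baseline", "resnet50"])  # searchable in the experiment table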

Reproducibility and traceability

One-command experiment re-run | Neptune: No | Comet: No

Experiment lineage
– List of datasets used downstream | Neptune: No | Comet: No
– List of other artifacts (models) used downstream | Neptune: No | Comet: No
– Downstream artifact dependency graph | Neptune: No | Comet: No

Reproducibility protocol | Neptune: Limited | Comet: Yes

Is the environment versioned and reproducible? | Neptune: Yes | Comet: Yes

Saving/fetching/caching datasets for experiments | Neptune: No | Comet: No

Collaboration and knowledge sharing

Sharing UI links with project members | Neptune: Yes | Comet: Yes

Sharing UI links with external people | Neptune: Yes | Comet: Yes

Commenting | Neptune: Yes | Comet: –

Interactive project-level reports | Neptune: No | Comet: Yes

Model Registry

Model versioning

Code versions (used for training) | Neptune: Yes | Comet: Yes
Environment versions | Neptune: No | Comet: –
Parameters | Neptune: Yes | Comet: Yes
Dataset versions | Neptune: Yes | Comet: –
Results (metrics, visualizations) | Neptune: Yes | Comet: Yes
Explanations (SHAP, DALEX) | Neptune: – | Comet: –
Model files (packaged models, model weights, pointers to artifact storage) | Neptune: Yes | Comet: Yes

Model lineage and evaluation history

Models/experiments created downstream | Neptune: No | Comet: No
History of evaluation/testing runs | Neptune: No | Comet: Yes
Support for continuous testing | Neptune: No | Comet: No
Users who created a model or downstream experiments | Neptune: No | Comet: No

Access control, model review, and promoting models

Main stage transition tags (develop, stage, production) (see the sketch below) | Neptune: Yes | Comet: Yes
Custom stage tags | Neptune: No | Comet: Yes
Locking model version and downstream runs, experiments, and artifacts | Neptune: No | Comet: No
Adding annotations/comments and approvals from the UI | Neptune: – | Comet: –
Model compare (current vs. challenger, etc.) | Neptune: Limited | Comet: No
Compatibility audit (input/output schema) | Neptune: No | Comet: No
Compliance audit (datasets used, creation process approvals, results/explanations approvals) | Neptune: No | Comet: No
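For the versioning and stage rows above, Neptune's model registry is driven from the client; a minimal sketch with a hypothetical model key, assuming current neptune-client:

    import neptune

    # Create a version of an already-registered model and attach metadata to it.
    model_version = neptune.init_model_version(model="PROJ-MOD")
    model_version["model/binary"].upload("model.pt")      # packaged weights
    model_version["model/parameters"] = {"lr": 1e-3}      # training parameters
    model_version.change_stage("production")              # e.g. staging -> production
    model_version.stop()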

CI/CD/CT compatibility

Webhooks | Neptune: No | Comet: Yes
Model accessibility | Neptune: No | Comet: No
Support for continuous testing | Neptune: No | Comet: No
Integrations with CI/CD tools | Neptune: No | Comet: No

Model searching

Registered models | Neptune: Yes | Comet: Yes
Active models | Neptune: Yes | Comet: –
By metadata/artifacts used to create it | Neptune: Yes | Comet: –
By date | Neptune: Yes | Comet: Yes
By user/owner | Neptune: Yes | Comet: Yes
By production stage | Neptune: Yes | Comet: No
Search query language | Neptune: Yes | Comet: No

Model packaging

Native packaging system | Neptune: No | Comet: No
Compatibility with packaging protocols (ONNX, etc.) | Neptune: No | Comet: No
One model one file or flexible structure | Neptune: No | Comet: No
Integrations with packaging frameworks | Neptune: No | Comet: No

Integrations and Support

Languages

Java | Neptune: No | Comet: Yes
Julia | Neptune: No | Comet: No
Python | Neptune: Yes | Comet: Yes
REST API | Neptune: No | Comet: Yes

Model training

Catalyst | Neptune: Yes | Comet: No
CatBoost | Neptune: No | Comet: No
fastai | Neptune: Yes | Comet: No
FBProphet | Neptune: Yes | Comet: Yes
Gluon | Neptune: No | Comet: No
HuggingFace | Neptune: Yes | Comet: Yes
H2O | Neptune: No | Comet: No
LightGBM | Neptune: Yes | Comet: Yes
Paddle | Neptune: No | Comet: No
PyTorch | Neptune: Yes | Comet: Yes
PyTorch Ignite | Neptune: Yes | Comet: No
PyTorch Lightning (see the sketch below) | Neptune: Yes | Comet: Yes
scikit-learn | Neptune: Yes | Comet: Yes
skorch | Neptune: Yes | Comet: No
spaCy | Neptune: No | Comet: No (only used as a dependency of the Ludwig integration)
Spark MLlib | Neptune: No | Comet: No
Statsmodels | Neptune: No | Comet: No
TensorFlow / Keras | Neptune: Yes | Comet: Yes
XGBoost | Neptune: Yes | Comet: Yes
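Most of the Yes cells above are one-line callback or logger integrations. For example, the Neptune logger for PyTorch Lightning ships with Lightning itself; a minimal sketch with placeholder credentials:

    from pytorch_lightning import Trainer
    from pytorch_lightning.loggers import NeptuneLogger

    neptune_logger = NeptuneLogger(project="my-workspace/my-project", api_key="YOUR_TOKEN")
    trainer = Trainer(logger=neptune_logger, max_epochs=10)
    # trainer.fit(model, datamodule)  # your LightningModule and data go here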

Hyperparameter Optimization

Hyperopt | Neptune: No | Comet: No
Keras Tuner | Neptune: No | Comet: –
Optuna (see the sketch below) | Neptune: Yes | Comet: No
Ray Tune | Neptune: No | Comet: No
Scikit-Optimize | Neptune: No | Comet: –
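The Optuna integration follows the same callback pattern; a minimal sketch with a toy objective, assuming the neptune-optuna package is installed:

    import optuna
    import neptune
    import neptune.integrations.optuna as optuna_utils

    run = neptune.init_run(project="my-workspace/my-project")
    neptune_callback = optuna_utils.NeptuneCallback(run)  # logs every trial to the run

    def objective(trial):
        x = trial.suggest_float("x", -10, 10)
        return (x - 2) ** 2

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=20, callbacks=[neptune_callback])
    run.stop()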

Model visualization and debugging

DALEX | Neptune: No | Comet: –
Netron | Neptune: No | Comet: No
SHAP | Neptune: No | Comet: Yes
TensorBoard | Neptune: Yes | Comet: –

IDEs and Notebooks

JupyterLab and Jupyter Notebook | Neptune: Yes | Comet: Yes
Google Colab | Neptune: Yes | Comet: Yes
Deepnote | Neptune: Yes | Comet: No
AWS SageMaker | Neptune: Yes | Comet: No

Data versioning

DVC | Neptune: Yes | Comet: No

Orchestration and pipelining

Airflow | Neptune: No | Comet: No
Argo | Neptune: No | Comet: No
Kedro | Neptune: Yes | Comet: No
Kubeflow | Neptune: No | Comet: No
ZenML | Neptune: Yes | Comet: –

Experiment tracking tools

MLflow | Neptune: Yes | Comet: –
Sacred | Neptune: Yes | Comet: No
TensorBoard | Neptune: Yes | Comet: –

CI/CD

GitHub Actions | Neptune: No | Comet: No
GitLab CI | Neptune: No | Comet: No
CircleCI | Neptune: No | Comet: No
Travis | Neptune: No | Comet: No
Jenkins | Neptune: No | Comet: No

Model serving

Seldon | Neptune: No | Comet: No
Cortex | Neptune: No | Comet: No
Databricks | Neptune: No | Comet: No

Model monitoring

Seldon | Neptune: No | Comet: No
Fiddler.ai | Neptune: No | Comet: No
Arthur.ai | Neptune: No | Comet: No

LLMs

LangChain | Neptune: No | Comet: Yes

This table was updated on 10 October 2022. Some information may be outdated.
Report outdated information here.
“Neptune is way better than the other tools I’ve tried, like Comet and WandB. In my opinion, Neptune has the cleanest and most intuitive interface — that’s the main reason I prefer using it.”
Klaus-Michael Lux, Data Scientist

You deserve fairly priced ML metadata management

With Neptune, that’s exactly what you get.

Check out the best-fit plan for your business today.

Contact us
