
ML Model Registry: What It Is, Why It Matters, How to Implement It

Stephen Oladele
11th May, 2023

Why do you need to know about model registries? If you were once the only data scientist on your team, you can probably relate to this: you start working on a machine learning project and perform a series of experiments that produce various models (and artifacts) that you “track” through non-standard naming conventions. Since the naming conventions you used for your model files were unclear, it took you a while to find the best model you trained. When you finally did, you decided to either hand the raw artifacts over to the operations team or, worse, deploy the model yourself.

The operations team collected your model and told you they needed more information about: 

  1. How to use it
  2. Whether the model has been tested
  3. The runtime dependencies of the model
  4. Other crucial operational information

Because all you did was build the model and hand it off, collaborating with them on a successful deployment was difficult.

Now imagine another scenario. Your company is planning to ship more ML-based products/features. A data scientist, an engineer, and maybe even a product manager join the team. When you were working alone, your workflow worked, even though your model was a pain to deploy (if it was deployed at all). Now that you have new teammates who have started asking about your model versions, you realize that storing models in files is not that manageable after all. This is the moment when you really feel the pain of not having an efficient way to share your model versions, artifacts, and model metadata. 

The thing is, at this point, there is no easy way to go back in time and set up something proper. 

And your new and improved cross-functional team is asking you about:

  • Where can we find the best version of this model so we can audit, test, deploy, or reuse it?
  • How was this model trained?
  • How can we track the documentation for each model to make sure it is compliant and people can find the necessary details about it, including the metadata?
  • How can we review models before they are put to use or even after they have been deployed?
  • How can we integrate with tools and services that make shipping new projects easier?

Can you blame them?

They want to understand what is running in production, how to improve it, or how to roll back to previous versions. It makes perfect sense. 

So, armed with all that hard-won experience, you start your next project and look for a tool that deals with these problems. And you find this article about the ML model registry. 

What is a model registry?

A model registry is a central repository that allows model developers to publish production-ready models for ease of access. With the registry, developers can also work with other teams and stakeholders to collaboratively manage the lifecycle of all models in the organization. A data scientist can push trained models to the model registry. Once in the registry, your models are ready to be tested, validated, and deployed to production in a workflow similar to the one below:

What is a model registry? | Source: Author

The model registry provides:

  • Centralized storage for all types of models,
  • And a collaborative unit for model lifecycle management.

Let’s take a closer look at the points above:

Centralized storage

The model registry provides a central storage unit that holds models (including model artifacts) for easy retrieval by an application (or service). Without a model registry, model artifacts would be stored in files that are difficult to track, saved to whatever source code repository happens to be used. With a model registry, the process is simpler because these models are kept in one centralized area of storage.

The centralized storage also enables data teams to have a single view of the status of all models, making collaboration easier. Here is an example showing a single view of different models with model artifacts stored in a model registry:

A list of different models registered in the model registry | Source

Collaborative unit

The model registry provides a collaborative unit for ML teams to work with—and share—models. It enables collaboration in the following ways:

  • Bridging the gap between experiment and production activities.
  • Providing a central UI (user interface) for teams to collaborate on models.
  • Providing an interface for downstream systems to consume models.

Bridging the gap between experiment and production activities

The model registry is a central component of MLOps that enables model development teams, software development teams, and operational teams to collaborate. This is an important part of technology and culture within the organization. The all-too-familiar gap between building machine learning models and operationalizing them can be bridged by using a model registry as part of your MLOps stack. 

Model registry is a central component of MLOps | Source: modified and adapted from YouTube

This registry, along with the automated training pipeline, can also enable continuous integration, delivery, and training (CI/CD/CT) of a model service in production, as models can be frequently updated, pushed to this registry, and deployed, all within the pipeline.

Read also

How Machine Learning Teams Use CI/CD in Production [Examples]

Providing a central UI (user interface) for teams to collaborate on models

The model registry provides teams with visibility over their models. With a central interface, teams can: 

  • Search for models,
  • View the status of models (if they are being staged, deployed, or retired),
  • Approve or disapprove models across different stages,
  • And view the necessary documentation. 

This makes model discovery easier for everyone on the team. If a model needs to be deployed, the operations teams can easily:

  • Search for it,
  • Look up the validation results and other metrics,
  • Package the model (if needed),
  • And move it from the staging environment to the production environment. 

This improves the way cross-functional teams collaborate on ML projects.

Inspecting a model’s versions through the dashboard in neptune.ai | Source

Through the UI, the model reviewer (or QA engineer) can also audit a model to ensure it is suitable for production before approving and releasing it, or audit models already in production for governance. 

Providing an interface for downstream services to consume models

Model registries provide interfaces that enable downstream services to consume models through API integration. Downstream systems can easily pull the latest (or any acceptable) version of a model through this integration, which can also track offline and online evaluation metrics for the models. This makes it easy to build an automated CI/CD/CT setup with ML pipelines, as we discussed in a previous section. The downstream service could be a model user, an automated job, or a REST service that consumes the most stable (or any) version of the model. 
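
As a concrete illustration (not tied to any particular vendor), here is a minimal sketch of a downstream service pulling the production version of a model, using MLflow's model URI scheme as one example. The model name "churn-model", the feature columns, and a configured tracking server are all assumptions:

```python
import pandas as pd
import mlflow.pyfunc

# Pull whatever version currently sits in the "Production" stage of the
# (hypothetical) "churn-model" entry in the registry.
model = mlflow.pyfunc.load_model("models:/churn-model/Production")

# Score a batch of incoming records; the columns must match the signature
# the model was registered with.
batch = pd.DataFrame({"tenure": [12, 48], "monthly_charges": [70.5, 29.9]})
predictions = model.predict(batch)
```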

Read also

Best Alternatives to MLflow Model Registry

Why do you need a model registry?

Now that the previous section has introduced what a model registry is, you might be wondering why it is useful to you and what benefits it brings to your workflow. The model registry enables machine learning operations. Recall that the four pillars of MLOps are:

  1. Production model deployment
  2. Production model monitoring
  3. Model lifecycle management
  4. Production model governance

Now let’s learn how a model registry component in your MLOps workflow can enable the deployment, management, and governance pillars.

Model registry enables faster deployment of your models

You learned earlier that one of the ways a model registry enables collaboration is that it bridges the gap between experiment and production activities. This results in a faster rollout of your production models. In addition, model registries store trained models for fast and easy retrieval by any integrated application or one of the model deployment tools, which is ultimately what you want in an ideal automation setup. 

With a model registry, software engineers and reviewers can easily identify and select only the best version of the trained models (based on the evaluation metrics), so the model can be tested, reviewed, and released to production. This makes it a good unifying component for both training and deployment pipelines, as there is less friction in the hand-off of production-ready models from experiment to production environments.

Model registry simplifies model lifecycle management

When you work in a large organization with lots of experiments running, many models, and cross-functional teams, managing the lifecycle of these models is often a challenging process. While management might be possible with one or a few models, in most cases you will have a lot of models running in production and servicing different use cases. The model registry helps tackle this challenge and simplify the management of your model lifecycle. With the registry, you can:

  • Register, track, and version your trained, deployed, and retired models in a central repository that is organized and searchable.
  • Store the metadata for your trained models, as well as their runtime dependencies so the deployment process is eased.
  • Build automated pipelines that make continuous integration, delivery, and training of your production model possible.
  • Compare models running in production (champion models) to freshly trained models (or challenger models) in the staging environment.

Here is an example of a registered and versioned trained model in a model registry, with the model training summary and relevant metadata included:

Example of a trained model’s version in the Neptune model registry | Source

The registry can also track and store online and offline evaluation metrics for the models. With this functionality, you can easily look up models that are in production to detect a drop in the performance of the model (or concept drift). You can also compare their online and offline performance to see which of the production models need to be reviewed, maintained, or archived.

Not only can you track evaluation metrics for the model both in production and training, but you can also track the system metrics to understand which models are consuming the most application resources (CPU, memory, and GPU usage). Here is an example of the neptune.ai model registry tracking the offline system and evaluation metrics for a top-performing model:

Trained model monitoring on neptune.ai | Source

Model registry enables production model governance

One thing that the model registry does really well is centralizing models and organizing their relevant details. With the registry, you have a central source of truth for your models throughout different stages of their lifecycle, including: 

  • Development, 
  • Validation,
  • Deployment,
  • And monitoring.

This helps create visibility and model discovery, which is crucial for models that require thorough regulatory compliance processes in specific industries such as health, finance, and law.

A user in charge of ensuring legal compliance should be able to easily review the models in the registry and understand: 

  • How the model was trained,
  • What version of data the model is trained on,
  • The conditions under which a model performs best and produces consistent results, so they are well informed of the model’s capabilities and limitations.

A standard model registry will also enforce the documentation and reporting of models, ensuring results are repeatable and can be reproduced by any auditing user. Review, approve, release, and rollback are all steps in the model launch process that the registry may help with. These choices are based on a variety of factors, including offline performance, bias and fairness measures, and the results of online experiments.

Model registry can also improve model security

Models, as well as the underlying packages used to build them, must be scanned for vulnerabilities, especially when a large number of packages are used to develop and deploy the models. Because the model registry manages the specific versions of these packages, you can scan for and remove security vulnerabilities that may pose a threat to the system.

Models are likewise vulnerable to adversarial attacks, and as a result, they must be maintained and secured. In some cases, the principle of least-privilege access must be employed so that only authorized users have access to specified model information, preserving data privacy and protecting PII and other resources.

Where does a model registry fit in the MLOps stack?

If you want to run machine learning projects efficiently and at scale, you will most likely need to add a model registry to your MLOps stack. Depending on the implementation level of your MLOps stack, your needs and requirements for a model registry will differ. Where does it fit? Well, recall we learned earlier that the model registry sits between machine learning development and deployment. 

Model registry in MLOps level 0

If you are at level 0 implementation of MLOps, your workflow with a model registry could look like this: 

MLOps level 0 workflow with a model registry | Source (modified)

The output from the experimentation step is fed into the model registry. This involves a manual process where the data scientist prepares the model artifact and metadata, and could also package them (serialization, containerization) before registering them. The operations team can push the packaged model to the staging environment for testing before deploying it to a prediction service engine that can integrate with other applications.
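
To make that manual packaging step concrete, here is a rough sketch of what a data scientist might do before registering a model; the model, file names, and metadata fields are all made up for illustration:

```python
import json
import pickle

from sklearn.linear_model import LogisticRegression

# Stand-in for the model produced by the experimentation step.
model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

# Serialize the model artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Bundle the metadata the operations team will need (hypothetical fields).
metadata = {
    "name": "example-classifier",
    "version": "0.1.0",
    "framework": "scikit-learn",
    "python_version": "3.10",
    "validation_accuracy": 0.93,  # placeholder value
}
with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Both files are now ready to be registered in the model registry.
```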

May interest you

ML Model Testing: 4 Teams Share How They Test Their Models

Model Deployment Strategies

Model registry in MLOps level 1

As opposed to level 0 (where the workflow is a manual process), the goal of the workflow in level 1 is to perform continuous training of the model by automating the ML pipeline. This is one process a model registry enables well because of its ability to integrate with the pipeline. At this level, the entire pipeline is deployed: when models are trained on the provided dataset, the output (trained model and metadata) is fed into the model registry, where it is staged and, if it passes the necessary tests and checks, fed to the continuous delivery pipeline for release.

MLOps level 1 workflow with a model registry | Source (modified)

Model registry in MLOps level 2

The role of the model registry in level 1 of the MLOps workflow is also the same as that of level 2—the automated pipeline delivers the trained model to the model registry where it is staged, may be passed through QA checks, and sent to the continuous delivery pipeline:

Stages of the CI/CD automated ML pipeline with a model registry | Source (modified)

The model registry serves as a crucial component in any automated pipeline, because event triggers can be integrated with it to promote models with good metrics after retraining on fresh data, or to archive models.

Key functionalities of a model registry

In the previous section, we learned how the model registry fits into your MLOps workflow. To understand the key functionalities of a model registry and its must-haves, let’s take a look at how it fits between development and operations.

Workflow components of an ideal model registry | Source: Author

The key functionalities in the model registry include the following:

  • Integrates with experiment management systems or training pipelines.
  • Provides a staging environment for your trained models.
  • Integrates with model delivery tools and services for automation.
  • Integrates with model deployment tools.

Integrate with experiment management systems or training pipelines

Model registries must be able to integrate with systems that output the trained models. The trained models could be raw artifacts (model weights, configuration, and metadata), models serialized into a file (e.g., an ONNX file) for compatibility with the production environment, or models containerized (using Docker) for export to the production environment.
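
For instance, here is a minimal sketch of serializing a scikit-learn model to ONNX with the skl2onnx package before registration; the model, feature count, and file name are arbitrary stand-ins:

```python
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.linear_model import LogisticRegression

# Stand-in for a trained model with two input features.
model = LogisticRegression().fit([[0.0, 1.0], [1.0, 0.0]], [0, 1])

# Convert to ONNX; initial_types declares the input schema
# (a float tensor with a dynamic batch size and 2 features).
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 2]))]
)

# Write the serialized model to a file that can be registered.
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```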

The model registry should be able to:

  • Register the model, 
  • Assign a version to it,
  • Note the version of the dataset the model was trained on,
  • Add annotations and tags,
  • Retrieve the parameters, validation results (including metrics and visualizations), and other relevant model metadata from the experiment management system.

To make collaboration easier, the registry should also include details such as:

  • The model owner or developer, 
  • The experiment run ID the model was trained under, 
  • Versioned model source code, 
  • Environment runtime dependencies used to train the model (and the versions),
  • Comments and model change history, 
  • And the model documentation.
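
As one simplified illustration of registering a model version together with this kind of collaboration metadata, here is a sketch using the MLflow client API; the model name, artifact path, and tag values are hypothetical:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Create the registry entry once per model (the name is hypothetical).
client.create_registered_model(
    name="churn-model",
    description="Predicts customer churn from account features.",
)

# Register a new version pointing at the trained artifact, with
# collaboration details attached as tags.
version = client.create_model_version(
    name="churn-model",
    source="s3://my-bucket/artifacts/churn/model",  # hypothetical path
    tags={
        "owner": "data-science-team",
        "experiment_run_id": "run-42",  # hypothetical run ID
        "dataset_version": "v3",
        "code_version": "git:abc123",
    },
)
print(f"Registered churn-model version {version.version}")
```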

Integrate with staging environment for your trained models

The model registry should provide the functionality for integrating with the staging environment for running all types of checks and balances on the model. These checks can include integration testing (with other applications) and other QA tests before the model can be promoted to the production environment. 

Sharing and collaboration should be enabled for models in this environment so that deployment engineers can work with data scientists to test models and ensure they are good to deploy.

In the staging environment, the model reviewers should also be able to perform fairness checks on the model to make sure it:

  • Outputs explainable results
  • Complies with regulatory requirements
  • And provides useful business benefits.

Generally, the governance and approval workflows should be configured in this environment. There should also be access level control and secure authorization to models in this environment, especially models trained on data with sensitive information.

Integrate with model delivery (CI/CD) tools and services for automation

Automation is a critical part of building any scalable software. In machine learning, building automated pipelines will allow you to spend more time building new products rather than maintaining old models.

A model registry should be able to integrate with pipeline automation tools and provide custom APIs that can allow you to plug custom workflow tools. For example, using webhooks to trigger downstream actions based on predefined events in the registry. 

You should also be able to configure model promotion schemes through different environments like development (training), staging (testing), and production (serving). Performance is a crucial requirement for building automated pipelines: model registries should be highly available to automated jobs that are event- or schedule-based, to enable continuous training and delivery of the model.
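
To make the promotion-scheme idea concrete, here is a hedged sketch of an automated gate that promotes a staged version to production when its accuracy beats the current champion, again using MLflow's stage-transition API as one example. It assumes the producing runs logged an "accuracy" metric and that a (hypothetical) "churn-model" entry exists:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
MODEL = "churn-model"  # hypothetical registry entry

def accuracy_of(version):
    # Assumes the run that produced this version logged an "accuracy" metric.
    return client.get_run(version.run_id).data.metrics["accuracy"]

champions = client.get_latest_versions(MODEL, stages=["Production"])
challengers = client.get_latest_versions(MODEL, stages=["Staging"])

if challengers and (
    not champions or accuracy_of(challengers[0]) > accuracy_of(champions[0])
):
    # Promote the challenger and archive the old production version.
    client.transition_model_version_stage(
        name=MODEL,
        version=challengers[0].version,
        stage="Production",
        archive_existing_versions=True,
    )
```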

Integrate with model deployment tools

Eventually, models have to be deployed, and the more efficient the deployment process, the better. Model registries should be able to integrate with downstream services and REST serving tools that can consume the model and serve it in the production environment. 

The registry should also be able to collect real-time (or aggregated) metrics on the production model, to log performance details of the model. This will be helpful for comparison between models (deployed and staged), as well as auditing the production model for review.
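
As a small illustration of feeding production metrics back into the registry, here is a sketch using neptune's series fields as one example; the version ID and metric values are placeholders, and credentials are assumed to be configured:

```python
import neptune

# Re-open an existing (placeholder) model version in the registry.
model_version = neptune.init_model_version(with_id="PROJ-FRAUD-12")

# Append online evaluation metrics as they arrive, so deployed and
# staged versions can be compared later.
for batch_accuracy in (0.92, 0.91, 0.93):
    model_version["production/accuracy"].append(batch_accuracy)

model_version.stop()
```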

How do you set up an ML model registry?

Build vs. maintain vs. purchase

Setting up a model registry for your MLOps workflow requires you to decide whether to build one, maintain one, or buy one. So far in this guide, we have focused on understanding what a model registry is and why you need one. You have also learned where a model registry fits at each implementation level of the MLOps workflow, and we have established that a model registry is useful regardless of your MLOps implementation level. If anything, the higher the level, the more you need one.

One of the more crucial decisions is whether you should build your own solution, manage or self-host an existing one, or purchase a fully-managed solution. Let’s take a closer look at each of these options and the factors to consider before making a choice.

Building a model registry solution

Like any software solution, if you understand the key functionalities and requirements, you can build out a system yourself. This is the case with a model registry. You may want to set up the following:

  • Object storage for models and artifacts.
  • Database for logging model details.
  • API integration for receiving models, promoting models across various environments, and collecting model information from those environments.
  • User interface (UI) for ML teams to interact with a visual workflow. 

While building the solution yourself might seem ideal, you should consider the following factors:

  • Incentive: What’s the incentive to build out your solution? Is it for customization or for owning a proprietary license to the solution?
  • Human resources: Do you have the talents and skills to build out your solution?
  • Time: How long would it take you to build out a solution and is it worth the wait?
  • Operations: When the solution is eventually built out, who would maintain its operations?
  • Cost: What would it cost you to build a solution, including the maintenance of the solution?

Maintaining a self-hosted model registry

Another option to consider—if you do not want to build out a solution—is to maintain an existing solution yourself. In this scenario, the solution has been built out already, but you might have to manage some features such as the object storage and the database. Most of these existing solutions are open source solutions.

The following are the factors to consider:

  • Type of solution: Are you going to opt for an open-source solution with no license cost or a closed-source solution with license cost?
  • Operations: Who is going to manage the solution? Does the solution support consistent maintenance and software updates?
  • Cost: What is the cost of operating the solution in terms of the infrastructure to host it and the running cost? 
  • Features: What features have already been implemented and what features do you have to build and manage yourself? Is it worth adopting compared to building out your solution?
  • Support: What type of support is available in case things break during operations? Is there a community or dedicated customer support channel? For open-source solutions, while you might have a community, you will likely lack the necessary developer support required to fix things compared to closed-source solutions.
  • Accessibility: How easy is it to get started with the solution? Is the documentation comprehensive enough? Can everyone from the model reviewers, to the model developers, and software engineers intuitively use the solution?

Purchase the license to a fully-managed solution

The final option to consider is subscribing to a fully managed solution where the operations and management of the registry are handled by the solution vendor. In this case, you do not have to worry about building or maintaining a solution. You just have to ensure your systems and services can integrate with the registry.

Here are the factors to consider: 

  • Industry type: What type of industry is the model built for? What sensitive information have the models learned? Are there data privacy compliance measures? Is the model only allowed to stay on-premises?
  • Features: Are the key features and functionalities of any model registry available in this solution? What extra features are available and how relevant are they to your workflow?
  • Cost: What’s the cost for purchasing a license and do the features justify the cost?
  • Security: How secure is the platform hosting the solution? Is it resistant to third-party attacks?
  • Performance: Is the registry highly performant? For situations where models are too large, can the registry provide models for services to consume at low latency?
  • Availability: What’s the uptime of the solution and does it meet your required service level agreement (SLA)?
  • Support: What level of support is available in case things go south?
  • Accessibility: How easy is it to get started with the solution? Is the documentation and learning support decent enough? What’s the learning curve in terms of usage?

You have now seen the options available to you when choosing a solution, and the factors to consider carefully under each option so you can make an optimal decision. Let’s take a look at some of the model registry solutions on the market. 

What ML model registry solutions are out there?

1. neptune.ai model registry

Type: Proprietary, with free and paid offerings.

Options: Managed (self-hosted), fully-managed offering.

Neptune model registry dashboard | Source

Neptune is a metadata store for MLOps, built for research and production teams that run a lot of experiments. 

It gives you a central place to log, store, display, organize, compare, and query all metadata generated during the machine learning lifecycle. 

Individuals and organizations use Neptune for experiment tracking and model registry to have control over their experimentation and model development. 

Features

Neptune lets you:

  • Create models and track generic model metadata, such as the model signature and validation dataset.
  • Create versions of your models:
    • Log parameters and other metadata that might change from one version to another.
    • Track or store model binaries.
    • Track the performance of specific model versions.
  • Manage model stage transitions using four available stages.
  • Query and download any stored model files and metadata.

Check the model registry documentation for more details.
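
Here is a hedged sketch of that flow with the neptune client library; the project name, model key, file path, and metric value are placeholders, and an API token is assumed to be configured:

```python
import neptune

# Create a registry entry for the model family (done once; key is a placeholder).
model = neptune.init_model(project="my-workspace/my-project", key="FRAUD")
model["signature"] = "tabular, 12 features"  # generic model metadata

# Register a specific version with its binary and validation metrics.
model_version = neptune.init_model_version(model="PROJ-FRAUD")  # placeholder ID
model_version["model/binary"].upload("model.pkl")
model_version["validation/accuracy"] = 0.93  # placeholder metric

# Move the version through the available stages.
model_version.change_stage("staging")

model.stop()
model_version.stop()
```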

Neptune stores:

  • Dataset metadata,
  • Model source code version,
  • Environment configuration versions,
  • Model parameters,
  • Model evaluation metrics,
  • Model binaries.

It supports and stores many ML model-related metadata types, and you can version, display, and query most metadata generated during model building.

In terms of cost, Neptune has both a self-hosted option available and a fully-managed Cloud offering with various subscription tiers.

If you want to learn more about it:

See the product page

See the documentation

Check out an example project (no registration required)

2. MLflow model registry

Type: Open source

Options: Managed (self-hosted), fully-managed offering.

MLflow model registry dashboard | Source

The MLflow Model Registry component is a centralized model store, a set of APIs, and a UI for collaboratively managing the full lifecycle of an MLflow Model. It provides model lineage (which MLflow experiment and run produced the model), model versioning, stage transitions (for example, from staging to production), and annotations. The model registry component was one of the most requested features among MLflow users in 2019. 

The MLflow Model Registry is one of the few open-source model registries available in the market today. You can decide to manage this on your infrastructure or use a fully-managed implementation on a platform like Databricks.

Features

MLflow provides: 

  • Annotation and description tools for tagging models, providing documentation and model information such as the date the model was registered, modification history of the registered model, the model owner, stage, version, and so on.
  • Model versioning to automatically keep track of versions for registered models when updated.
  • An API integration to serve machine learning models as RESTful APIs for online testing, dashboard updates, etc.
  • CI/CD workflow integration to record stage transitions, request, review, and approve changes as part of CI/CD pipelines for better control and governance.
  • A model stage feature to assign preset or custom stages to each model version, like “Staging” and “Production” to represent the lifecycle of a model.
  • Configuration for promotion schemes to easily move models across different stages.
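
As a brief illustration of the automatic versioning feature, logging a model with a registered_model_name both stores the artifact and creates (or increments) the registry entry. The model and training data below are stand-ins, and a running tracking server is assumed:

```python
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression

# Stand-in for a real training step.
model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

with mlflow.start_run():
    # Logs the artifact and registers it; if "churn-model" already exists
    # in the registry, a new version is created automatically.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-model",
    )
```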

MLflow stores:

  • The model artifacts,
  • Metadata,
  • Parameters,
  • Metrics.

The pricing will depend on the option you are opting for—a self-hosted solution or a fully-managed offering.

You can learn more about the workflow here and get started with MLflow here.

3. Amazon SageMaker model registry

Type: Bundled with SageMaker’s free-tier and on-demand pricing.

Options: Fully managed offering

Amazon SageMaker model registry | Source

Amazon SageMaker is a fully managed service that developers can use for every step of ML development, including model registry. The model registry is part of the suite of MLOps offerings in SageMaker that helps users build and operationalize machine learning solutions by automating and standardizing MLOps practices across their organization.

Features

With the SageMaker model registry you can do the following:

  • Catalog models for production.
  • Manage model versions.
  • Associate metadata, such as training metrics, with a model.
  • Manage the approval status of a model.
  • Deploy models to production.
  • Automate model deployment with CI/CD.

You can make a model group to keep track of all the models you’ve trained to solve a specific problem. Each model you train can then be registered, and the model registry will add it to the model group as a new model version. A typical workflow might look like the following:

  • Create a model group.
  • Create an ML pipeline with SageMaker Pipelines that trains a model.
  • For each run of the ML pipeline, create a model version that you register in the model group you created in the first step.
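
Here is a hedged sketch of that workflow using boto3; the group name, container image, and S3 path are placeholders, and AWS credentials and permissions are assumed:

```python
import boto3

sm = boto3.client("sagemaker")

# 1. Create a model group for the problem you are solving.
sm.create_model_package_group(
    ModelPackageGroupName="churn-prediction",  # placeholder name
    ModelPackageGroupDescription="Models that predict customer churn.",
)

# 2. Register a trained model as a new version in the group.
sm.create_model_package(
    ModelPackageGroupName="churn-prediction",
    ModelApprovalStatus="PendingManualApproval",
    InferenceSpecification={
        "Containers": [
            {
                "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/churn:latest",  # placeholder
                "ModelDataUrl": "s3://my-bucket/churn/model.tar.gz",  # placeholder
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
```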

The cost of using the Model Registry is bundled with the SageMaker pricing tiers. You can learn more about the model registry component of SageMaker in the documentation.

Read also

The Best Amazon SageMaker Alternatives [for Experiment Tracking and Model Management]

4. Verta.ai model registry

Type: Proprietary, with Open Source, SaaS, and Enterprise offerings.

Options: Fully managed offering.

Verta.ai model registry dashboard | Source

The Verta.ai Model Registry helps you manage your AI‑ML models in one place. It provides features that enable you to package, validate, and reliably promote release-ready models and apply safe release and governance practices.

Features

  • It provides a unified hub to publish release-ready models by allowing you to:
    • Connect to an experiment management system for end-to-end information tracking.
    • Publish all the model metadata, documentation and artifacts in one central repository.
    • Select the best fit models from model experiments and stage them for release.
    • Record state transitions and manage release lifecycle from development, staging, production to archived.
  • It enables model validation and CI/CD automation by allowing you to:
    • Integrate with existing CI/CD pipelines like Jenkins, Chef, and so on.
    • Use webhooks to trigger downstream actions for model validation and deployment.
    • Automatically track model versions and tagged releases.
  • Set up granular access control for editors, reviewers, and collaborators.
  • Access detailed audit logs for compliance.
  • Release models once they pass basic security and fairness checks.
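
Webhook payloads and endpoints vary between registries, so as a generic (and entirely hypothetical) illustration of webhook-triggered automation, here is a minimal Flask receiver that kicks off a validation job when a registry posts a "model version created" event:

```python
from flask import Flask, request

app = Flask(__name__)

def start_validation_job(model_name: str, version: str) -> None:
    # Placeholder: trigger your CI system here (e.g., a Jenkins job).
    print(f"Validating {model_name} v{version}...")

@app.route("/registry-webhook", methods=["POST"])
def on_registry_event():
    event = request.get_json(force=True)
    # Hypothetical payload fields; real registries name these differently.
    if event.get("event_type") == "MODEL_VERSION_CREATED":
        start_validation_job(event["model_name"], event["version"])
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(port=8080)
```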

You can learn more about the pricing tiers available on this page, and more about the Verta Registry on this page.

Want to learn about more solutions available in the market?

Best Alternatives to MLflow Model Registry

Model Registry Makes MLOps Work – Here’s Why

Clearing some model registry misconceptions

Model registry vs model store

A common misconception is that a model registry is just a marketing term for a model store. While the two components are almost indistinguishable in terms of functionality, and some practitioners use the terms interchangeably, there are subtle differences between them.

The model store is a superset of a model registry: within a model store, you can find the model registry component. The store is a service other services can interface with to retrieve models from the registry. In a model registry, you may store and fetch models (like a Docker registry), but a model store can offer the full package: logging, discovery, assets, pipelines, metadata, all model information, and even blueprints for building new models. Examples of typical model stores are Google’s AI Hub and Hugging Face’s model library.

Model registry vs experiment tracking

Another common misconception in the MLOps world is that the registry is just “experiment tracking renamed”. As you may have learned, this is far from the truth. The model registry has to integrate with the experiment management system (which tracks the experiments) to register models from various experiment runs to make them easier to find and work with. Let’s take a look at some of the key differences between a model registry and an experiment management system.

 
| | Model registry | Experiment tracking |
| --- | --- | --- |
| Purpose | To store trained, production, and retired models in a central repository | To track experiment runs with different parameter configurations and combinations |
| Priority | To make sure models are discoverable and can be accessed by any user or system | To make sure experiments are easier to manage and collaborate on |
| Integration | Integrates with the experiment tracking system to register models from successful experiments, including the model and experiment metadata | Integrates with the training pipeline to perform experiment runs and track experiment details, including the dataset version and metadata |
| MLOps | A crucial piece of MLOps and production models | Most useful in the model development phase, but has an indirect impact on operationalizing the model |

Conclusion

In this guide, you have learned that a model registry enables the successful operationalization of machine learning projects. It provides visibility into your models and makes it easy for users to discover and work with them. With the model registry, you can:

  • Deploy models with certainty.
  • Manage the model lifecycle effectively.
  • Enable an automated workflow.
  • Share and collaborate on models and projects in an organized workflow.
  • Govern machine learning models appropriately.

The next step is for you to pick a solution and see if it improves your MLOps workflow. If you are an individual data scientist, Neptune is free and you can get started within 5 minutes. But of course, let the article be your guide to making a concrete decision based on your level of MLOps implementation and the factors worth taking into account.

Here’s to more building and Ops-ing!

References and resources