
Real-World MLOps Examples: Model Development in Hypefactors

In this first installment of the series “Real-world MLOps Examples,” Jules Belveze, an MLOps Engineer, will walk you through the model development process at Hypefactors, including the types of models they build, how they design their training pipeline, and other details you may find valuable. Enjoy the chat!

Company profile

Media monitoring dashboard in Hypefactors | Source

Hypefactors provides an all-in-one media intelligence solution for managing PR and communications, tracking trust, product launches, and market and financial intelligence. They operate large data pipelines that stream in the world’s media data continuously, in real time. AI powers many automations that were previously performed manually.

Guest introduction

Could you introduce yourself to our readers?

Hey Stephen, thanks for having me! My name is Jules. I am 26, born and raised in Paris, and currently living in Copenhagen.

Hey Jules! Thanks for the intro. Walk me through your background and how you got to Hypefactors.

I hold a Bachelor’s in statistics and probability and a Master’s in general engineering from universities in France. On top of that, I also graduated in Data Science with a focus on deep learning from the Technical University of Denmark. I’m fascinated by multilingual natural language processing (and therefore specialized in it). I also researched anomaly detection on high-dimensional time series during my graduate studies with Microsoft.

Today, I work for a media intelligence tech company called Hypefactors, where I develop NLP models to help our users gain insights from the media landscape. What works well for me is having the opportunity to take models from prototyping all the way to production. I guess you could call me a nerd, at least that’s how my friend describes me, as I spend most of my free time either coding or listening to disco vinyl.

Model development at Hypefactors

Could you elaborate on the types of models you build at Hypefactors?

Even though we also have computer vision models running in production, we mainly build NLP (Natural Language Processing) models for various use cases. We need to cover multiple countries and handle many languages, and the multilingual aspect makes developing with “classical machine learning” approaches hard. We craft deep learning models on top of the transformers library.

We run all sorts of models in production, varying from span extraction or sequence classification to text generation. Those models are designed to serve different use cases, like topic classification, sentiment analysis, or summarisation.

Could you pick one use case from Hypefactors and walk me through your machine learning workflow end-to-end?

All our machine learning projects tend to follow a similar life cycle. We start an ML project either to improve our users’ experience or to add a meaningful feature for our clients, and we then translate that goal into an ML task.

Let me walk you through the process we followed for our latest addition, a named entity recognition model. We started by crafting a POC (proof of concept) using out-of-the-box models, but due to some drift between our production data and the data those models were fine-tuned on, we had to label our data internally, following annotation guidelines that we carefully defined. We then designed a relatively simple model and iterated on it until we reached performance comparable to the SOTA (state of the art). The model was then optimized for inference and tested under real-life conditions.
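To make the PoC step concrete, here is a minimal sketch of what an out-of-the-box NER baseline can look like with the Hugging Face transformers pipeline. The checkpoint name is only an illustrative public multilingual NER model, not the one Hypefactors used.

```python
# Hypothetical out-of-the-box NER PoC; the checkpoint name is illustrative.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Davlan/bert-base-multilingual-cased-ner-hrl",  # example multilingual NER model
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

print(ner("Hypefactors is headquartered in Copenhagen."))
# e.g. [{'entity_group': 'ORG', 'word': 'Hypefactors', ...},
#       {'entity_group': 'LOC', 'word': 'Copenhagen', ...}]
```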

Based on the outcome of the QA (quality assurance) session, we iterate on the data (e.g., refining the annotation guidelines) as well as the model (e.g., improving its precision) before deploying it to production. Once deployed, our models are continuously monitored and regularly improved using active learning.

ML workflow at Hypefactors | Source: Author

Could you describe your tool stack for model development?

We use several different tools for model development. We recently migrated our codebase to a combination of PyTorch Lightning and Hydra to reduce boilerplate. The former structures the code around four main components:

  • Data
  • Model
  • Optimization
  • Non-essentials

PyTorch Lightning abstracts away all the boilerplate code and engineering logic. Since its adoption, we have noticed a significant speedup when iterating on models or launching new PoCs (proofs of concept).
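As an illustration (not Hypefactors’ actual code), a LightningModule roughly separates those concerns like this, with data handled by a DataModule and the “non-essentials” delegated to the Trainer:

```python
import pytorch_lightning as pl
import torch


class SequenceClassifier(pl.LightningModule):
    """Minimal sketch: the model and its optimization live in the module."""

    def __init__(self, backbone: torch.nn.Module, lr: float = 2e-5):
        super().__init__()
        self.backbone = backbone                    # model
        self.lr = lr

    def training_step(self, batch, batch_idx):      # training logic
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.backbone(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):                 # optimization
        return torch.optim.AdamW(self.parameters(), lr=self.lr)


# Data goes into a pl.LightningDataModule; checkpointing, logging, and device
# handling (the "non-essentials") are delegated to pl.Trainer.
```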

Additionally, Hydra helps us “elegantly” write configuration files. To help us design and implement neural networks, we heavily rely on the Transformers library. For experiment tracking and data versioning, we use Neptune.ai, which integrates smoothly with Lightning. Finally, we picked Metaflow over other tools to design and run our training pipelines.
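Here is a hedged sketch of how these pieces can fit together; the Neptune project name and the config keys are made up for illustration.

```python
import hydra
import pytorch_lightning as pl
from omegaconf import DictConfig
from pytorch_lightning.loggers import NeptuneLogger


@hydra.main(config_path="conf", config_name="config")
def train(cfg: DictConfig) -> None:
    # Credentials are read from the NEPTUNE_API_TOKEN environment variable.
    logger = NeptuneLogger(project="my-workspace/ner")   # hypothetical Neptune project
    model = hydra.utils.instantiate(cfg.model)           # class chosen in the YAML config
    datamodule = hydra.utils.instantiate(cfg.data)
    trainer = pl.Trainer(logger=logger, max_epochs=cfg.trainer.max_epochs)
    trainer.fit(model, datamodule=datamodule)


if __name__ == "__main__":
    train()
```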

Hypefactors model training and evaluation stack | Source: Author

How does your NLP use case drive the training pipeline design choices? 

Running an end-to-end NLP training pipeline requires a lot of computing power. To me, one of the most arduous tasks in natural language processing is data cleaning. This becomes even more relevant when working with textual data directly extracted from the web or social media. Even though big language models like BERT or GPT are fairly robust, data cleaning is a crucial step as this can directly impact a model’s performance. This implies quite heavy preprocessing and thus the need for parallel computing. Also, fine-tuning pre-trained language models requires running training on hardware optimized for computation (e.g., GPU, TPU, or IPU).
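For illustration only (these are not Hypefactors’ actual cleaning rules), this is the flavor of heavy, parallelizable preprocessing that web and social-media text typically needs:

```python
import re
from multiprocessing import Pool

URL_RE = re.compile(r"https?://\S+")
WS_RE = re.compile(r"\s+")

def clean(text: str) -> str:
    text = URL_RE.sub(" ", text)   # drop raw URLs scraped from the web
    text = WS_RE.sub(" ", text)    # collapse whitespace and newlines
    return text.strip()

if __name__ == "__main__":
    docs = ["Breaking:   read more at https://example.com !!"] * 1_000
    with Pool(processes=8) as pool:    # spread the cleaning across CPU cores
        cleaned = pool.map(clean, docs)
    print(cleaned[0])
```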

Also, we treat the evaluation of our NLP models differently than “regular” ones. Even though evaluation metrics are quite representative of a model’s performance, one cannot rely on them alone. A good illustration of this problem is the ROUGE score, used for abstractive summarization. Even though the ROUGE score gives a decent picture of the n-gram overlap between the summary and the original text, manual inspection is needed to assess semantic and factual correctness. This makes it really hard to have a fully automated pipeline that does not require any human intervention.
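A tiny, self-contained example of the issue, using the rouge-score package with made-up summaries: a factually wrong summary can match or even beat a faithful one on unigram overlap.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

reference = "the company reported higher profits in 2021"
faithful = "the company reported higher profits"
wrong = "the company reported lower profits in 2021"

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
print(scorer.score(reference, faithful)["rouge1"].fmeasure)  # ~0.83
print(scorer.score(reference, wrong)["rouge1"].fmeasure)     # ~0.86, despite the factual error
```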

What tools do you use for your training pipelines, and what are their main components? 

We recently started to design reusable end-to-end training pipelines, mainly to save us time. Our pipelines are built with Netflix’s Metaflow, and they all share the same building blocks.

We first fetch fresh, manually annotated data from our labeling tool before processing it. Once processed, the dataset is versioned along with a configuration file.

We also save code and git hashes, making it possible to reproduce the exact same experiment. We then start training the desired model. 

At the end of the training, the best weights are saved into an in-house tool and a training report is generated, enabling us to compare this run with previous ones. We finally export our checkpoints to ONNX and optimize the model for inference. 

See: Scaling-up PyTorch inference: serving billions of daily NLP inferences with ONNX runtime [Microsoft open source blog]

The way our pipelines are designed, anyone with a bit of technical knowledge can either reproduce an experiment or train a new version of an existing model with freshly annotated data or a different configuration.
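As a hedged sketch of the pipeline shape Jules describes (every step body is a stub, not Hypefactors’ internal code), a Metaflow flow with those building blocks could look like this:

```python
from metaflow import FlowSpec, step


class TrainingFlow(FlowSpec):
    """Sketch of the described pipeline; all step bodies are placeholders."""

    @step
    def start(self):
        # Fetch freshly annotated data from the labeling tool (placeholder data).
        self.raw_data = [{"text": " Hypefactors is based in Copenhagen ", "labels": ["ORG", "LOC"]}]
        self.next(self.preprocess)

    @step
    def preprocess(self):
        # Clean the data; the real pipeline also versions the dataset together
        # with its configuration file and git hash.
        self.dataset = [{**ex, "text": ex["text"].strip()} for ex in self.raw_data]
        self.next(self.train)

    @step
    def train(self):
        # Fine-tune the model, save the best weights, and generate a training report.
        self.checkpoint = "best.ckpt"
        self.next(self.export)

    @step
    def export(self):
        # Export the best checkpoint to ONNX and optimize it for inference.
        self.onnx_path = "model.onnx"
        self.next(self.end)

    @step
    def end(self):
        print("Training flow finished:", self.onnx_path)


if __name__ == "__main__":
    TrainingFlow()
```

Saved as a script, such a flow would be launched with `python training_flow.py run`, which is what makes it easy for anyone with a bit of technical knowledge to retrain or reproduce a model.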

What kinds of tools are easily available out there and what tools are required to be implemented in-house?

Regarding the modeling aspect, we heavily rely on the transformers library. However, due to the specificity of our use cases (web data and multilingual needs), we craft models on top of it. One of the drawbacks of working with such massive models is that they are hard to scale. There are quite a few tools available to shrink transformer-based models (e.g., DeepSpeed, DeepSparse), but they suffer from base-model limitations. We have therefore implemented an in-house tool that enables us to train various early-exiting architectures and perform model distillation, pruning, and quantization.
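This is not the in-house tool itself, just a minimal example of one of the shrinking techniques mentioned: post-training dynamic quantization with plain PyTorch (the checkpoint name is illustrative).

```python
import torch
from transformers import AutoModelForSequenceClassification

# Illustrative fine-tuned checkpoint, not one of Hypefactors' models.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Swap the weights of every Linear layer for int8 equivalents;
# activations are quantized dynamically during inference.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```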

The experiment tracking and metadata store space offers plenty of mature, easy-to-use tools, so there was no need for us to reinvent the wheel.

The same goes for ML workflow orchestrators. We actually spent quite some time picking one that was mature enough and whose learning curve was not too steep. We ended up choosing Metaflow over Kubeflow or MLflow because of its ease of adoption, its available features, and its growing community.

In general, there are a plethora of tools available for all the different building blocks of a machine learning workflow, which might also be overwhelming.

What type of hardware do you use to train your models and do you use any kind of parallel computing?

All our training flows run on machines featuring one or more GPUs, depending on the compute power required for the given task. PyTorch Lightning makes it relatively easy to switch from single- to multi-GPU training and comes with various backends and distributed modes. NLP tasks require relatively heavy preprocessing, so we use distributed training through PyTorch’s DDP mode, which uses multiprocessing instead of threading to overcome Python’s GIL limitation. Along with this, we try to maximize the use of tensor operations when designing models, to fully leverage the GPUs’ capabilities.
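In Lightning, the single-to-multi-GPU switch is mostly a Trainer configuration change; a minimal sketch follows, where `model` and `datamodule` stand in for the project-specific objects.

```python
import pytorch_lightning as pl

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,          # number of GPUs on the machine
    strategy="ddp",     # one process per GPU, sidestepping Python's GIL
    precision=16,       # mixed precision to make the most of the hardware
)
# trainer.fit(model, datamodule=datamodule)
```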

As we only fine-tune models, there has not been a need for us to perform sharded training. However, we occasionally train models on TPUs when we need to iterate fast.

When it comes to data processing, we use “datasets,” a Python library built on top of Apache Arrow that enables faster I/O operations.
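An illustrative use of the datasets library, with a public dataset standing in for Hypefactors’ data: Arrow-backed loading plus a parallel map is where the faster I/O and multiprocessing pay off.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")   # placeholder public dataset

def normalize(example):
    example["text"] = " ".join(example["text"].split())
    return example

ds = ds.map(normalize, num_proc=4)         # preprocessing parallelized across workers
print(ds[0]["text"][:80])
```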

What tool(s) do you wish to see coming out in the near future?

I think every Machine Learning Engineer will agree that what is currently missing is one tool to rule them all. You need at least 5 to 6 different tools for the training part alone, which makes the stack hard to maintain as well as to pick up. I really hope we will soon see tools emerge that encompass multiple steps.

Closer to the NLP space, I am seeing more and more people focusing on ensuring annotation quality, but we are still quite limited by the nature of the problem. Spotting wrong labels is difficult, but a solid tool for it could really be a game-changer. I think most data scientists will agree that data inspection is a really time-consuming task.

Also, an important aspect of the workflow is model testing. It is really tricky in NLP to find relevant metrics guaranteeing the faithfulness of a model. There are a couple of tools popping up (e.g., we started using Microsoft’s “CheckList”), but having a wider range of tools in this area could, in my opinion, be interesting.

For each task, our data experts come up with a set of behavioral test cases, from relatively simple to more complex, divided into “test aspects.” We then use CheckList to generate a summary of the different tests and compare experiments. The same goes for model explainability.
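To give a feel for what a behavioral test case is (this toy example does not use the CheckList API itself, and the predictor is a stand-in for a real model), an invariance check might look like this:

```python
def predict_sentiment(text: str) -> str:
    # Stand-in for a real sentiment model.
    return "positive" if "good" in text.lower() else "negative"

# Pairs where a harmless perturbation should not flip the prediction.
invariance_cases = [
    ("The launch was good.", "The launch was good!!!"),   # punctuation invariance
    ("Good coverage overall.", "good coverage overall."), # casing invariance
]

failures = [
    (a, b) for a, b in invariance_cases
    if predict_sentiment(a) != predict_sentiment(b)
]
print(f"{len(failures)} invariance failures out of {len(invariance_cases)} cases")
```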


Thanks to Jules Belveze and the team at Hypefactors for working with us to create this article!

