Case Study

ReSpo.Vision

Neptune helped us reach our objective of easier pipeline tracking. We can more easily debug pipeline problems and assess the outputs' performance and quality.
Wojtek RosiƄski
Chief Technology Officer at ReSpo.Vision

ReSpo.Vision uses computer vision and machine learning in sports data analysis to extract 3D data from single-view camera sports broadcast videos, providing players, scouts, managers, clubs and federations with an unmatched depth of knowledge. They concentrate on football (soccer) and target sports clubs, leagues, bookmakers and media.

Football match analytics with ReSpo.Vision technology | Source

The team works on all aspects of machine learning. They collect raw data, label it, and add new datasets to training and evaluation pipelines. If an iteration yields an improvement, they push the new models into their production system.

Kedro pipelines are central to their tech stack: the team defines their models' training steps and data processing methods as pipelines, which lets them easily set parameters for multiple jobs and keeps the workflow manageable and reproducible.

Problem

The ReSpo.Vision team works on improving their core pipelines, which consist of stacks of models that extract 3D data from the football pitch. To get and process this data from football (soccer) games, they run a lot of Kedro pipelines.

This worked well at the start, but when they scaled up the number of matches processed, and with it the number of Kedro pipelines run to build different models, they realized that:

  • Managing many Kedro pipelines running different experiments would be hard.
  • Reporting the quality of pipeline results to clients and other non-technical stakeholders would not be possible.
  • Debugging pipeline failures at scale would not be productive.

Managing the scale of many Kedro pipelines

One of the biggest challenges [we had] was managing the pipelines and the process itself because we had 40 to 50 different pipelines. Depending on the exact use case or what kind of data we’d like to output, we could have different combinations for running them to get different outputs. So basically, the entire system isn’t so simple.
Wojtek RosiƄski Chief Technology Officer at ReSpo.Vision

Wojtek said that without keeping track of the metadata for their pipeline experiments, it was hard to run many experiments matching different use cases. They would usually have a hard time figuring out:

  • Whether all their pipelines finished successfully.
  • The dataset used to create an experiment run.
  • The exact parameters used for each run.
  • How the results of each run compared to previous runs.

When tracking the parameters for each run, whenever one of the experiments produced great results, it took additional effort to figure out which combination of parameters from the Optuna search had produced them.

When they increased the number of football games their pipelines handled, the problem became clear.

Reporting the quality of pipeline results to clients and non-technical stakeholders

We wanted a friendly method for even a non-technical person to look at a couple of plots, scores, or similar, and decide if we wish to send the processed data (to the client), or maybe someone else with more knowledge should investigate it.
Ɓukasz Grad Chief Data Scientist at ReSpo.Vision

ReSpo.Vision's business is directly affected when valuable outputs (processed data) for analytics are sent to client applications and downstream pipelines, yet the team had difficulty communicating the results to clients, non-technical people, and even some engineers on the team.

Debugging pipeline failures at scale was not productive 

Debugging issues with the experiment results was difficult because they ran many pipelines at scale, and most pipeline results depended on the output from upstream pipelines.

Solution

The objectives of the team at ReSpo.Vision are to:

  • Work on their core pipeline models to improve the quality of the output data they send to their customers.
  • Leverage wrappers and tools to facilitate their ML workflow.

For the team to achieve these goals, they need to be able to run their pipelines successfully at scale. The blockers described in the previous section stood in the way of doing this.

They needed a better way to manage their pipeline runs and make the best use of their resources. Any solution had to meet the following requirements:

  • Simple integration with Kedro pipelines.
  • Readable, real-time logging of large volumes of pipeline metadata.
  • An accessible and intuitive comparison feature for pipeline experiment runs.

The team used Neptune in a project they previously worked on and found that it met those requirements.

We didn’t look for other solutions to Neptune because we had a pleasant experience with it before. We knew it would be hugely beneficial to have one place to sort and filter all the results we generated from running the pipelines.
Wojtek RosiƄski Chief Technology Officer at ReSpo.Vision

Neptune provided the team with these features:

  1. The kedro-neptune plugin and 25+ more integrations
  2. Experiment tracking for complex, high-performance Kedro pipelines
  3. An intuitive user experience and an interface that works for all stakeholders
  4. Monitoring compute usage for their pipelines
  5. Dashboards and UI that work
  6. Responsive developer support

  • “We were able to integrate Neptune with our ML and kedro code in 1-2 days, so it was pretty swift as there were no problems at this stage.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    One of the requirements Neptune needed to meet for Wojtek and his team was easy integration with Kedro, which turned out to be the case. With the kedro-neptune plugin, the team quickly connected their Kedro pipelines to their Neptune account.

    They also found that Neptune has 25+ more integrations with other Python machine learning libraries in their stack, which made it easy to adapt their workflows. As Ɓukasz put it:

    “I was surprised at how many frameworks and machine learning tools Neptune integrates with. For example, we use PyTorch Lightning and Neptune integrates with this framework, so it was also effortless to add logging outside of Kedro.” – Ɓukasz Grad, Chief Data Scientist at ReSpo.Vision
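    The integration steps the team describes are short. A sketch, assuming the kedro-neptune plugin's documented CLI (command and file names may differ across plugin versions):

```shell
# Install the plugin alongside Kedro and the Neptune client
pip install kedro-neptune

# Inside an existing Kedro project: scaffold the plugin's config files
# (a conf/base/neptune.yml plus a local credentials file)
kedro neptune init

# Subsequent pipeline executions are then logged as Neptune runs
kedro run
```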

  • “Experiment tracking is an important part of the pipeline process because we can easily assess which models are the best and not waste compute resources on training non-optimal models.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    Aside from the scale of the pipelines the team runs, another aspect of their use case that stood out was the sheer complexity of the pipelines they ran and the amount of compute resources they consumed.

    “When we run ten parallel pipelines and some fail, if we run them as detached processes or delegate them, it would become quite challenging to track them on the machine itself. Neptune is very helpful in these situations because it lets me sort through those runs and easily catch runs executed with an error. Then, I can triage the error to see what happened and debug the pipeline.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    Neptune helps the ReSpo.Vision team to:

    • Track parameters (such as batch sizes and confidence thresholds)
    • Compute experiment statistics for all their Kedro pipelines, regardless of the run sequence

    This holds regardless of the scale of the pipelines and the workload they handle.

    During inference, it is not always easy to tell what a good result looks like, since a couple of their pipelines depend on upstream pipelines for a valuable output. So the team often leverages the summary statistics logged to Neptune.

    “We use Neptune to compute some statistics or aggregate the runs and know what pipelines we should run based on the output of previous runs and if there were issues with the last run. Because it’s tough to look at every football match we process and the outputs (bounding boxes, etc.) and tell whether they’re good or bad—it would be challenging to work with manually.” – Ɓukasz Grad, Chief Data Scientist at ReSpo.Vision


    The team uses Neptune to monitor the statistics of their training and data processing pipeline | Source
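    The kind of per-match aggregation Ɓukasz describes can be sketched with a plain DataFrame. The column names, values, and threshold below are invented for illustration, not taken from ReSpo.Vision's pipelines:

```python
import pandas as pd

# Toy table standing in for per-match pipeline run metadata logged to a tracker.
runs = pd.DataFrame({
    "match_id": ["m1", "m2", "m3", "m4"],
    "status": ["ok", "ok", "failed", "ok"],
    "mean_box_conf": [0.91, 0.62, None, 0.88],  # mean bounding-box confidence
})

# Flag a match for investigation if its pipeline failed or its aggregate
# quality score falls below a threshold, instead of eyeballing every output.
needs_rerun = runs[(runs["status"] != "ok") | (runs["mean_box_conf"] < 0.75)]
print(sorted(needs_rerun["match_id"]))  # → ['m2', 'm3']
```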

  • “When we use Neptune with kedro, we can easily track the progress of pipelines being run on many machines, because often we run many pipelines concurrently, so comfortably tracking each of them becomes almost impossible. With Neptune, we can also easily run several pipelines using different parameters and then compare the results via UI.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    Neptune’s experiment tracking also provides more functionality than Kedro’s native experiment tracking (in beta as of this writing), so it was a no-brainer for the team to keep using Neptune.

    They didn’t have to write any custom logic to track the results of the pipeline experiments. Once they chose the metadata and metrics they wanted to monitor, they left the rest to Neptune, which logged that metadata and tracked the pipeline runs.

    The dashboards and user interface also gave their workflow a distinctive feel. They mostly interact with Neptune through the UI, which they tailor to their needs, adding custom columns that make it easy to see insights from pipeline runs through charts and other visualizations.

    “I like those dashboards because we need several metrics, so you code the dashboard once, have those styles, and easily see it on one screen. Then, any other person can view the same thing, so that’s pretty nice.” – Ɓukasz Grad, Chief Data Scientist at ReSpo.Vision

    Thanks to the visualizations, they could quickly identify whatever drew their attention. Both technical and non-technical users found the reports and dashboards interactive and intuitive.

    “For all our running pipelines, we use those Neptune dashboards. We try to actively have one dashboard per pipeline that will summarize the execution, provide those aggregates, some plots, and other things that we can look at to judge whether or not the output is good.” – Ɓukasz Grad, Chief Data Scientist at ReSpo.Vision

    They can also interact with Neptune through the API, which lets them, for example, query the execution times of different configurations and then analyze them from a notebook or other applications.
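    Such a query can be sketched as follows. The Neptune client's `init_project` and `fetch_runs_table` calls exist in the current API, but the project name is a placeholder, the `sys/running_time` column should be checked against your own runs table, and `summarize_execution_times` is a hypothetical convenience helper:

```python
import pandas as pd

def fetch_runs(project_name: str) -> pd.DataFrame:
    """Fetch a project's runs table as a DataFrame (requires Neptune credentials)."""
    import neptune  # lazy import: needs `pip install neptune` and an API token
    project = neptune.init_project(project=project_name, mode="read-only")
    return project.fetch_runs_table().to_pandas()

def summarize_execution_times(runs: pd.DataFrame,
                              time_col: str = "sys/running_time") -> pd.Series:
    """Aggregate execution times across runs, e.g. to compare configurations."""
    return runs[time_col].describe()

# With credentials configured, notebook analysis might look like:
#   runs = fetch_runs("my-workspace/match-processing")  # placeholder name
#   print(summarize_execution_times(runs))
```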

  • “We monitor the parameters in each workflow stage between data preprocessing, prediction, and data post-processing. We can define, for example, the number of workers which will handle each of those stages to optimize the performance as much as possible and ensure that the GPUs have the highest possible throughput.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    Neptune also helped the team by giving them the information they needed to make the most of the compute used by their pipelines. Some model pipelines use datasets with hundreds of thousands of images to train. Because most models will also need high-resolution datasets, the amount of computation required to train them is massive.

    Most of the time, they run kedro pipelines in the cloud. They usually set up big machines with, say, 100 GPUs, and then divide pipeline tasks among the GPUs.

    “For some of the pipelines, Neptune was helpful for us to see the utilization of the GPUs. The utilization graphs in the dashboard are a perfect proxy for finding some bottlenecks in the performance, especially if we are running many pipelines of those (football) matches.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    By logging their functional (model evaluation) and operational (system and resource usage) metrics, they keep track of the results and the GPU consumption rate of each pipeline in real time. They can then use what they learn to improve their experiments and make sure running jobs use all available GPUs.

    “The major point was that during the machine learning experiment, you want the GPU to be at 100% throughout the whole experiment, and it’s not so obvious when you look at it at one point in time. So that Neptune feature was very nice because the compute utilization is tracked by default, so that’s very handy.” – Ɓukasz Grad, Chief Data Scientist at ReSpo.Vision


    The team uses Neptune to monitor the resource consumption of their training and data processing pipelines | Source

  • “Pay-as-you-go is a practical pricing model for us because some months we will be running lots of pipelines and other months just common ones. So, when we can adjust the amount of computation or logging time, what we get from Neptune is a good solution.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    Neptune’s pricing met the team’s expectations because most of their logging time comes from running inference pipelines rather than model training. In some months, they only pay for the monitoring hours needed to process hundreds of football matches.

    In addition to the pricing model, the team found Neptune’s developer support responsive right from the onboarding stage.

Results

From my perspective, I expected Neptune to be able to track our kedro pipelines easily, and this is what Neptune delivered.
Wojtek RosiƄski Chief Technology Officer at ReSpo.Vision
The team can share the processed data for analytics and other output metadata directly with clients and non-technical stakeholders | Source

The ReSpo.Vision team was able to do the following because they used Neptune as their experiment tracking tool for Kedro pipelines:

  • “Neptune made it much easier to compare the models and select the best one over the last couple of months, especially since we’ve been working on this player and team separation model in an unsupervised way, during a match, to split the players into two separate teams.” – Ɓukasz Grad, Chief Data Scientist at ReSpo.Vision

    The team had a number of ways to approach the problem of splitting players into teams, and they would usually train hundreds of models. Because they could work on a single task for three months or so, Neptune was useful for selecting the best model(s). They wouldn’t have been able to recall how each model performed if they hadn’t logged metadata and results to Neptune throughout that time.

    What does the “best performing model” mean for their business?

    “If we can choose the best performing model, then we can save time because we would need fewer integrations to ensure high data quality. Customers are much happier because they receive higher quality data, enabling them to perform more detailed match analytics.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

  • “If we know which models will be the best and how to choose the best parameters for them to run many pipelines, then we will just run fewer pipelines. This, in turn, will cause the compute time to be shorter, and then we save money by not running unnecessary pipelines that will deliver suboptimal results.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    Choosing the best model(s) from hundreds and the corresponding parameters allows the team to run pipelines that only provide the best value, reducing development and compute time, saving money, and improving workflow.

  • “Neptune helped us achieve our objective of easier pipeline tracking. We can more easily debug pipeline problems and assess the outputs’ performance and quality.” – Wojtek RosiƄski, Chief Technology Officer at ReSpo.Vision

    Because they now deal only with the pipelines that are relevant and provide value, the team’s workflow improved, making Kedro pipeline debugging easier and faster. And because their pipeline runs log parameters, it’s now easier for them to parse those logs with Neptune and identify problems with failed runs.


Thanks to Wojtek RosiƄski and Ɓukasz Grad for working with us to create this case study!
