
How to Use SHAP Values to Optimize and Debug ML Models

7 min read
28th August, 2023

Picture this: you’ve dedicated countless hours to training and fine-tuning your model and meticulously analyzing mountains of data. Yet you lack a clear understanding of the factors influencing its predictions and, as a result, find it hard to improve it further.

If you have ever found yourself in such a situation, trying to make sense of what goes on inside this black box, you are in the right place. This article will dive deep into the captivating realm of SHAP (SHapley Additive exPlanations) values, a powerful framework that helps explain a model’s decision-making process, and show how you can harness its power to optimize and debug your ML models.

So without further ado, let’s begin!

SHAP values explained | Modified based on the source

Debugging models using SHAP values

Model debugging is an essential process that involves pinpointing and rectifying issues that emerge during the training and evaluation of machine learning models. This is the arena where SHAP values step in, offering significant assistance. They help us with the following:

  • 1 Identifying features that affect prediction
  • 2 Exploring model behavior
  • 3 Detecting bias in models
  • 4 Assessing model robustness

Identifying features that affect prediction

An integral part of model debugging involves determining the features that significantly influence predictions. SHAP values serve as a precise tool for this task, empowering you to identify the key variables that shape a model’s output.

By utilizing SHAP values, you can evaluate each feature’s relative contribution and gain insight into the key factors that drive your model’s predictions. Scrutinizing SHAP values across multiple instances can help ascertain the model’s consistency or reveal whether particular features exert excessive impact, potentially leading to bias or compromising the reliability of predictions.

Therefore, SHAP values emerge as a potent instrument in pinpointing influential features within a model’s prediction landscape. They assist in refining and debugging models, while summary and dependence plots act as effective visualization aids for understanding feature importance. We will take a look at some of these plots in upcoming sections.
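As a rough illustration of what this looks like in code, the sketch below assumes you already have a trained tree-based model (`model`) and a feature DataFrame `X` (both hypothetical names); it computes SHAP values and ranks features by their mean absolute contribution.

```python
import numpy as np
import shap

# Hypothetical inputs: `model` is a trained tree-based model (e.g., XGBoost)
# and `X` is the pandas DataFrame of features it was trained on.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)  # shap.Explanation: rows = instances, columns = features

# Rank features by their average absolute contribution to the predictions.
mean_abs = np.abs(shap_values.values).mean(axis=0)
for name, impact in sorted(zip(X.columns, mean_abs), key=lambda t: t[1], reverse=True)[:10]:
    print(f"{name}: {impact:.3f}")

# The summary (beeswarm) plot conveys the same ranking visually.
shap.plots.beeswarm(shap_values)
```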

Exploring model behavior

Models sometimes exhibit perplexing outputs or unexpected behaviors, making it critical to understand their inner workings. For example, let’s say you have a fraud detection model that unexpectedly flagged a legitimate transaction as fraudulent, causing inconvenience for the customer. This is where SHAP can prove to be invaluable. 

  • By quantifying the contributions of each feature to a prediction, SHAP values can help explain why a certain transaction was classified as fraudulent. 
  • SHAP values can enable practitioners to explore how a change in a feature like credit history influences the classification. 
  • Analyzing SHAP values across multiple instances can unveil scenarios where this model may underperform or fail. 
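To make the fraud example concrete, here is a minimal sketch. The names `fraud_model` (a trained tree-based classifier), `transactions` (its feature DataFrame), and `i` (the row index of the flagged transaction) are all hypothetical stand-ins, not objects from the article’s notebook.

```python
import shap

# Hypothetical objects: a trained tree-based classifier and its feature DataFrame.
explainer = shap.Explainer(fraud_model, transactions)
explanation = explainer(transactions.iloc[[i]])  # explain a single flagged transaction

# Per-feature contributions for this one prediction (log-odds for tree classifiers;
# some model types add an extra class dimension to `.values`).
for name, contribution in zip(explanation.feature_names, explanation.values[0]):
    print(f"{name}: {contribution:+.3f}")

# A waterfall plot of the same instance shows what pushed it toward "fraud".
shap.plots.waterfall(explanation[0])
```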

Detecting bias in models

Bias in models can have profound implications, exacerbating social disparities and injustices. SHAP values facilitate the identification of potential bias sources by quantifying each feature’s effect on model predictions. 

A meticulous examination of SHAP values allows data scientists to discern whether the model’s decisions are influenced by discriminatory factors. Such awareness helps practitioners mitigate bias by adjusting feature representations, rectifying data imbalances, or adopting fairness-aware methodologies.

Equipped with this information, practitioners can actively work towards bias reduction, ensuring their models uphold fairness. Addressing bias and guaranteeing fairness in machine learning models is an essential ethical obligation. 
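One simple way to start such an examination is sketched below, assuming a trained `model`, a feature DataFrame `X`, and a protected attribute column named "race" (all hypothetical names): it checks how strongly the protected attribute itself drives predictions and whether attribution patterns differ between groups.

```python
import numpy as np
import pandas as pd
import shap

# Hypothetical inputs: trained `model`, feature DataFrame `X` with a protected
# attribute column named "race".
shap_values = shap.Explainer(model, X)(X)

# How much does the protected attribute itself drive predictions on average?
race_column = list(X.columns).index("race")
print("mean |SHAP| of 'race':", np.abs(shap_values.values[:, race_column]).mean())

# Do average feature attributions differ between groups? Large gaps are a
# prompt for a closer fairness review, not proof of bias on their own.
contributions = pd.DataFrame(shap_values.values, columns=X.columns, index=X.index)
print(contributions.groupby(X["race"]).mean().round(3))
```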

Assessing model robustness 

Model robustness plays a vital role in model performance, ensuring its reliability in various scenarios.  

  • By examining the consistency of feature contributions across different samples, SHAP values enable data scientists to gauge a model’s stability and dependability. 
  • By scrutinizing the stability of SHAP values for each feature, practitioners can identify inconsistent or volatile behavior. 
  • By identifying features with unstable contributions, practitioners can focus on improving those aspects through data preprocessing, feature engineering, or model adjustments. 

These irregularities act as warning signs, highlighting potential weaknesses or instabilities in the model. Armed with this understanding, data scientists can take targeted measures to enhance the model’s reliability. 
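One simple heuristic for spotting such irregularities, sketched below with the same hypothetical `model` and `X` as before, is to compare each feature’s average impact with how much its SHAP values fluctuate across instances.

```python
import numpy as np
import shap

# Hypothetical inputs: trained tree-based `model` and feature DataFrame `X`.
shap_values = shap.Explainer(model, X)(X)

mean_abs = np.abs(shap_values.values).mean(axis=0)   # average impact per feature
spread = shap_values.values.std(axis=0)              # variability per feature

# Features whose contributions swing widely relative to their average impact
# are candidates for a closer look (a rough heuristic, not a formal test).
ratio = spread / (mean_abs + 1e-9)
for name, r in sorted(zip(X.columns, ratio), key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name}: spread/impact ratio = {r:.2f}")
```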

Optimizing models using SHAP values

SHAP values can help data scientists optimize machine learning models for better performance and efficiency by informing the following:

  • 1 Feature engineering
  • 2 Model selection
  • 3 Hyperparameter tuning

Feature engineering

Effective feature engineering is a well-known technique to enhance model performance. By understanding the impact of different features on predictions, you can prioritize and optimize your feature engineering efforts. SHAP values provide important insights into this process. 

Analyzing SHAP values allows data scientists to grasp feature importance, interactions, and relationships more precisely. It equips them to conduct focused feature engineering, maximizing the extraction of relevant and impactful features.

With SHAP values, practitioners can:

  • Uncover influential features: SHAP values highlight features with substantial impact on predictions, enabling their prioritization during feature engineering.
  • Recognize irrelevant features: Features with persistently low SHAP values across instances may be less consequential and can potentially be pruned to simplify the model.
  • Discover interactions: SHAP values can expose unforeseen feature interactions, promoting the generation of new, performance-enhancing features.

Thus, SHAP values streamline the feature engineering process, amplifying the model’s predictive prowess by facilitating the extraction of the most pertinent features.
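For instance, a quick pass over the mean absolute SHAP values can surface pruning candidates. The sketch below again assumes a trained `model` and feature DataFrame `X`; the 1% cut-off is purely illustrative.

```python
import numpy as np
import shap

# Hypothetical inputs: trained `model` and feature DataFrame `X`.
shap_values = shap.Explainer(model, X)(X)
mean_abs = np.abs(shap_values.values).mean(axis=0)

# Flag features whose average impact is below 1% of the strongest feature's
# impact (an illustrative threshold) as candidates for removal.
threshold = 0.01 * mean_abs.max()
low_impact = [name for name, impact in zip(X.columns, mean_abs) if impact < threshold]
print("Candidates for pruning:", low_impact)
```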

Model selection

Model selection, a critical step in building high-performing models, entails picking the optimal model from a pool of candidate models. SHAP values can assist in this process through:

  • Model comparison: SHAP values, calculated for each model, allow you to contrast feature importance rankings, granting insights into how different models utilize features to form predictions.
  • Complexity evaluation: SHAP values can indicate models with excessive reliance on complex interactions or high-cardinality features, which might be more susceptible to overfitting.
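To make the comparison concrete, the sketch below contrasts the mean absolute SHAP values of two hypothetical candidate models, `model_a` and `model_b`, evaluated on the same feature DataFrame `X`.

```python
import numpy as np
import pandas as pd
import shap

def mean_abs_shap(model, X):
    """Average absolute SHAP value per feature for a fitted model."""
    shap_values = shap.Explainer(model, X)(X)
    return pd.Series(np.abs(shap_values.values).mean(axis=0), index=X.columns)

# Hypothetical candidate models trained on the same data.
comparison = pd.DataFrame({
    "model_a": mean_abs_shap(model_a, X),
    "model_b": mean_abs_shap(model_b, X),
}).sort_values("model_a", ascending=False)

print(comparison.head(10))  # do the candidates rely on the same features?
```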

Hyperparameter tuning 

Hyperparameter tuning, a crucial phase in boosting model performance, involves optimizing a model’s hyperparameters. SHAP values can aid this process by:

  • Guiding the tuning process: If SHAP values indicate a tree-based model’s excessive dependence on a particular feature, reducing the max_depth hyperparameter could coax the model into utilizing other features more.
  • Evaluating tuning results: A comparison of SHAP values pre and post-tuning provides an in-depth understanding of the tuning process’s influence on the model’s feature utilization.

Insights derived from SHAP values allow data scientists to pinpoint the configurations leading to optimal performance. 
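A minimal sketch of that idea, assuming training data `X_train` and `y_train` and using illustrative `max_depth` values, might compare the attribution profiles of a deeper and a shallower XGBoost model.

```python
import numpy as np
import pandas as pd
import shap
import xgboost

def mean_abs_shap(model, X):
    """Average absolute SHAP value per feature for a fitted model."""
    return pd.Series(np.abs(shap.Explainer(model, X)(X).values).mean(axis=0), index=X.columns)

# Hypothetical training data; the max_depth values are illustrative, not tuned.
deep = xgboost.XGBClassifier(max_depth=8).fit(X_train, y_train)
shallow = xgboost.XGBClassifier(max_depth=3).fit(X_train, y_train)

# Did the shallower model spread its attributions across more features?
print(pd.DataFrame({"max_depth=8": mean_abs_shap(deep, X_train),
                    "max_depth=3": mean_abs_shap(shallow, X_train)}).round(3))
```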

SHAP library features for ML debugging and optimization

To provide a comprehensive understanding of the SHAP library’s features for machine learning (ML) debugging and optimization, we will illustrate its capabilities through a practical binary classification use case.

For this demonstration, we will utilize the Adult Income dataset, which is available on Kaggle. The Adult Income dataset comprises various attributes that contribute to determining an individual’s income level. The primary objective is to predict whether an individual’s income exceeds a certain threshold, specifically $50,000 per year.

In our exploration of the SHAP functionalities, we will dive into the capabilities it offers with a model like the XGBoost classifier. The complete process, including the data preprocessing and model training steps, can be found in a notebook logged to neptune.ai, chosen for its convenient metadata storage, quick comparison, and sharing capabilities.
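For readers who want to follow along without the notebook, the self-contained sketch below approximates the setup: it uses the copy of the Adult Income dataset bundled with the shap package (the article’s notebook preprocesses the Kaggle version, so column names and exact numbers may differ) and a default XGBoost classifier.

```python
import shap
import xgboost
from sklearn.model_selection import train_test_split

# shap ships a preprocessed copy of the Adult Income dataset; the target is
# whether income exceeds $50,000 per year.
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# An XGBoost classifier with default-ish settings stands in for the article's model.
model = xgboost.XGBClassifier(n_estimators=300, max_depth=6)
model.fit(X_train, y_train)

# Compute SHAP values for the test set; `shap_values` is the Explanation object
# reused by all of the plots below.
explainer = shap.Explainer(model, X_train)
shap_values = explainer(X_test)
```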


SHAP beeswarm plot

The SHAP beeswarm plot visualizes the distribution of SHAP values across features in a dataset. Resembling a swarm of bees, the arrangement of points reveals insights into the role and impact of each feature on the model’s predictions. 

On the plot’s x-axis, dots represent the SHAP values of individual data instances, providing crucial information about feature influence. A wider spread or higher density of dots indicates more significant variability or a more substantial impact on the model’s predictions. This allows us to evaluate the significance of features in contributing to the model’s output.

Additionally, the plot colors each dot according to the underlying feature value, ranging from low to high by default. This color scheme aids in identifying patterns and trends in the distribution of feature values across instances.
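With the Explanation object from the setup sketch above, a single call produces the beeswarm plot:

```python
# `shap_values` is the Explanation computed in the setup sketch above.
shap.plots.beeswarm(shap_values, max_display=10)
```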

SHAP values and the beeswarm plot of the XGBoost model

Here, the SHAP beeswarm plot of the XGBoost model pinpoints the top five critical features for predicting whether an individual’s income exceeds $50,000 per year: Marital Status, Age, Capital Gain, Education Level (denoted as Education Number), and Weekly Working Hours.

The SHAP beeswarm plot defaults to ordering features based on the mean absolute value of the SHAP values, which represents the average impact across all instances. This prioritizes features with a broad and consistent influence but may overlook rare instances with high impact.

To focus on features that have high impacts on individual people, an alternative sorting method can be used. By sorting features based on the maximum absolute value of the SHAP values, we highlight those that have the most substantial impact on specific individuals, regardless of their frequency or occurrence. 

Sorting features by their maximum absolute SHAP value allows us to pinpoint the features that exhibit rare but highly influential effects on the model’s predictions. This approach enables us to identify the key factors that have significant impacts on individual instances, providing a more detailed understanding of feature importance.
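In code, this alternative ordering is a matter of passing an `order` argument built from the Explanation object:

```python
# Order features by their largest single-instance |SHAP| value instead of the mean.
shap.plots.beeswarm(shap_values, order=shap_values.abs.max(0))
```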

Sorting features based on the maximum absolute value of the SHAP values

Sorting features based on the maximum absolute value of the SHAP values reveals the top 5 influential features: capital gain, capital loss, age, education level, and marital status. These features demonstrate the highest absolute impact on individual predictions, regardless of their average impact. 

By considering the maximum absolute SHAP values, we can uncover rare but impactful features that greatly affect individual predictions. This sorting approach enables us to gain valuable insights into the key factors driving income levels within the adult income model.

SHAP bar plot 

The SHAP bar plot is a powerful visualization tool that provides insights into the importance of each feature in an ML model. It employs horizontal bars to represent the magnitude and direction of the effects that features have on the model’s predictions. 

By ranking the features based on their average absolute SHAP values, the bar plot offers a clear indication of which features carry the most significant influence on the model’s predictions. 

The length of each bar in the SHAP bar plot corresponds to the magnitude of a feature’s contribution to the prediction. Longer bars indicate greater importance, signifying that the corresponding feature has a more substantial impact on the model’s output. 

To enhance interpretability, the bars in the plot are often color-coded to denote the direction of a feature’s impact. Positive contributions may be depicted in one color, while negative contributions are represented in another color. This color scheme allows for easy and intuitive comprehension of whether a feature positively or negatively affects the prediction.
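Both the global and the local variants come from the same function, applied either to the full Explanation from the setup sketch or to a single row of it:

```python
# Global bar plot: mean |SHAP| per feature across the whole test set.
shap.plots.bar(shap_values)

# Local bar plot: feature contributions for a single instance (the first test row here).
shap.plots.bar(shap_values[0])
```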

SHAP values and the bar plot

Bar plots can also be computed for a single prediction. The information derived from such a local bar plot is invaluable for debugging and optimization, as it helps identify features that require further analysis or modification to improve the model’s performance.

In the case of the local bar plot, let’s consider the example where the race feature has a SHAP value of -0.29 and ranks as the fourth most predictive feature for the first data instance.

The local SHAP bar plot

This indicates that the race feature has a negative influence on the prediction for that particular data point. This finding draws attention to the need for investigations into building fairness-aware models. Analyzing potential biases and ensuring fairness is crucial, especially if race is considered a protected attribute. 

Special attention should be given to evaluating the model’s performance across different racial groups and mitigating any discriminatory effects. The combination of both global and local bar plots provides valuable insights for model debugging and optimization. 

SHAP waterfall plot 

The SHAP waterfall plot is a great tool for understanding the contribution of individual features to a specific prediction. It provides a concise and intuitive visualization that allows data scientists to assess the incremental effect of each feature on the model’s output, aiding in model optimization and debugging. 

The plot starts from a baseline prediction and visually represents how the addition or removal of each feature alters the prediction. Positive contributions are depicted as bars that push the prediction higher, while negative contributions are represented as bars that pull the prediction lower. 

The length and direction of these bars in the SHAP waterfall plot provide valuable insights into the influence of each feature on the model’s decision-making process.
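Generating the plot for a single instance is again a one-liner on the Explanation object from the setup sketch:

```python
# Waterfall plot for one prediction: starts at the base value E[f(X)] and adds
# each feature's contribution until it reaches the model's output for this row.
shap.plots.waterfall(shap_values[0])
```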

SHAP values and the waterfall plot

SHAP force plot

The SHAP force plot and the waterfall plot are similar in that both show how the features of a single data point contribute to the model’s prediction, representing the magnitude and direction of each contribution as arrows or bars.

The main difference between the two plots is their orientation. Force plots lay the features out horizontally, with positive contributions pushing the prediction higher from the left and negative contributions pulling it lower from the right. Waterfall plots stack the features vertically, ordered by the magnitude of their contribution, with bars extending right for positive contributions and left for negative ones.

SHAP values and the force plot

The stacked force plot, which rotates many individual force plots and lines them up side by side, is particularly useful for examining misclassified instances and gaining insights into the factors driving those misclassifications. This allows for a deeper understanding of the model’s decision-making process and helps pinpoint areas that require further investigation or improvement.

However, it’s important to note that generating and interpreting stacked force plots can be time-consuming, especially when dealing with large datasets.
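A rough sketch of both variants, continuing with `shap_values` and `X_test` from the setup above (the interactive rendering assumes a notebook environment):

```python
shap.initjs()  # enable the interactive JavaScript rendering in notebooks

# Force plot for a single prediction.
shap.plots.force(shap_values[0])

# Stacked force plot over a subset of instances; larger slices render slowly.
shap.plots.force(shap_values.base_values[0], shap_values.values[:200], X_test.iloc[:200])
```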

SHAP values and the stacked force plot

SHAP dependence plot

The SHAP dependence plot is a visualization tool that helps understand the relationship between a feature and the model’s prediction. It lets you see how the feature’s contribution to the prediction changes as the feature’s value changes.

In a SHAP dependence scatter plot, the feature of interest is represented along the horizontal axis, while the corresponding SHAP values are plotted on the vertical axis. Each data point on the scatter plot represents an instance from the dataset, with the feature’s value and the corresponding SHAP value associated with that instance.
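A basic dependence scatter plot takes a single column of the Explanation object; the column name ("Age" in the bundled dataset used in the setup sketch) depends on how the data was preprocessed.

```python
# Dependence scatter for one feature: its value on the x-axis, its SHAP value on the y-axis.
shap.plots.scatter(shap_values[:, "Age"])
```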

SHAP values and the dependence plot

In this example, the SHAP dependence scatter plot showcases the non-linear relationship between the “age” feature and its corresponding SHAP values. On the x-axis, the “age” values are displayed, while the y-axis represents the SHAP values associated with each “age” value. 

By examining the scatter plot, we can observe a positive trend where the contribution of the “age” feature increases as the “age” value increases. This suggests that higher values of “age” have a positive impact on the model’s prediction. 

To identify potential interaction effects between features, we can enhance the Age dependence scatter plot by incorporating color coding based on another feature. By passing the entire Explanation object to the color parameter, the scatter plot algorithm attempts to identify the feature column that exhibits the strongest interaction with Age, or we can define the feature ourselves. 

By examining the scatter plot, we can analyze the pattern and trend of the relationship between age and the model’s output while taking into account different levels of hours_per_week. If there is an interaction effect, it will be evident through distinct patterns in the scatter plot.

SHAP values and the interactive plot

In this plot, we can observe that individuals who work fewer hours per week are more likely to be in their 20s. This age group typically includes students or individuals who are just starting their careers. The plot indicates that these individuals have a lower likelihood of earning over $50k. 

This pattern suggests that the model has learned from the data that individuals in their 20s, who tend to work fewer hours per week, are less likely to earn higher incomes.

Conclusion

In this article, we explored how to utilize SHAP values to optimize and debug machine learning models. SHAP values provide a powerful tool for understanding model behavior and identifying important features for predictions. 

We discussed various features of the SHAP library, including beeswarm plots, bar plots, waterfall plots, force plots, and dependence plots, which aid in visualizing and interpreting SHAP values.

Key takeaways from the article include:

  • SHAP values help us understand how models work and identify influential features.
  • SHAP values can highlight irrelevant features that have little impact on predictions.
  • SHAP values provide insights for improving model performance by identifying areas for enhancement.
  • The SHAP library offers a range of visualization techniques for better understanding and debugging models.

I hope after reading this article, you will treat SHAP as a valuable tool in your arsenal for debugging and optimizing your ML models.

