Your model is only valuable when it’s used

In the past six months, I have spoken with representatives of nearly a hundred companies and had insightful conversations about the data science activities in their organisations. Besides discovering, time and again, how many businesses underestimate the hurdle of deploying, upscaling, and monitoring their machine learning (ML) efforts, there is another major challenge around the corner.

While in 2019 data science skills to build models were in highest demand, technology leaders now face the challenge of convincing their business counterparts (and other stakeholders) to actually use their models. Because what is the point of building a model if we do not use it? Unfortunately, models too often never make it into production use (87%, according to VentureBeat). Recently I talked to a major grid operator that uses analytics for predictive maintenance. The data lab has developed models that perform excellently, but the maintenance engineers won’t use them, because they do not trust them and/or do not understand the outcomes. A large amount of money, time, and effort is flushed away building models that are never used, not to mention the frustration of the data scientists who built them.

Why are models not used? 

The reason why models are not used (assuming they are successfully deployed to the production environment) often has to do with explainability. What is explainability in machine learning? Loosely defined, it means “any technique that helps the user or developer of ML models understand why models behave the way they do”. In our case, we are most interested in the user perspective of explainability. The engineers at the grid operator simply did not trust the models sufficiently, because they did not understand why a model predicted X, and what factors led to that prediction. In this article, we focus on “local” explainability only. This answers questions such as: which feature was most important for a specific prediction? What is the minimal change to the input x required to change the output of that single prediction (causality)?

“Figuring out causal factors is the holy grail of explainability”

The challenge is most prevalent in use cases where the results are of a critical nature, for example in the insurance and healthcare sectors. If an employee cannot explain why a request for a loan has been denied, he or she also cannot ensure that the model does not discriminate. Likewise, it is important to establish trust in the model when a doctor plans to decide on cancer treatment based on a predicted diagnosis. Sometimes, for instance, accepting a lower overall accuracy can be worthwhile if it prevents false negatives, because a false negative can lead to a decision not to perform treatment (Zambri Long, 2018).
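To make that trade-off concrete, here is a minimal sketch of my own (not taken from Zambri Long): lowering a classifier’s decision threshold catches more positives and so reduces false negatives, usually at some cost in overall accuracy. The dataset and threshold values are arbitrary assumptions for illustration only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Relabel so that 1 = malignant, the outcome we must not miss.
X, y = load_breast_cancer(return_X_y=True)
y = 1 - y
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
proba_malignant = model.predict_proba(X_test)[:, 1]

# Threshold values are arbitrary; a lower threshold flags more cases as
# malignant, trading some accuracy for fewer missed cancers.
for threshold in (0.5, 0.2):
    y_pred = (proba_malignant >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    print(f"threshold={threshold}: "
          f"accuracy={accuracy_score(y_test, y_pred):.3f}, false negatives={fn}")
```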

Moreover, GDPR Articles 13-15 state that data subjects must have access to “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. In other words, the prediction must be explainable.

Then let’s explain the model! 

How to explain ML models?

In all honesty, there is no uniform way to explain every model’s outcome (yet). However, there is a variety of methods available that attempt to help you answer the question “Where did that local prediction come from, exactly?”.

Types of explanation methods  

There are two types of models to distinguish (see the graph below). First, there are the “interpretable models” or “glass box models”. Linear regression, logistic regression, and simple decision trees are examples of such models. These models are highly interpretable because they consist of a simple mathematical formula or a handful of yes/no thresholds. Therefore, these models do not require a sophisticated explainability method. However, their performance is often far below that of more complex models, such as deep learning or boosted decision trees.
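As a small illustration of why such models count as glass boxes, consider a logistic regression: its fitted coefficients already are the explanation, so no separate explainability method is needed. This is a minimal sketch of my own, using scikit-learn and an arbitrary example dataset.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# The whole model is one formula: log-odds = intercept + sum(coef_i * x_i).
# Ranking the coefficients by magnitude is therefore already an explanation.
coefs = model.named_steps["logisticregression"].coef_[0]
for i in np.argsort(np.abs(coefs))[::-1][:5]:
    print(f"{data.feature_names[i]:>25s}: {coefs[i]:+.2f}")
```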

[Image: interpretable “glass box” models versus non-interpretable “black box” models]

What is a non-interpretable model? 

Second, there are the “non-interpretable models” or “black box models”, such as neural networks or boosting models. These are difficult, if not impossible, to interpret directly, and thus require methods to explain them. For this, we can use either “model-agnostic methods” or “model-specific interpretation methods”; we will not discuss the latter here, because each such method is limited to a single type of model.

“Model-agnostic methods” are methods whereby the explanation is separated from the model itself. Such methods are advantageous because they are applicable to any type of model, regardless of its complexity, so we can apply them to complex and simple models alike. While applying such a method to a simple model is possible, it is likely overkill and may lead to diverging explanations.
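To make the idea concrete, here is a sketch of one simple model-agnostic technique, permutation importance: it only needs a fitted model and its predictions, so exactly the same code explains a glass box and a black box. Note that permutation importance is a global method, shown here purely to illustrate the model-agnostic idea; the local methods this article focuses on follow next. The models and dataset are my own arbitrary choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The same explanation code runs on a glass box and a black box: the method
# only shuffles one feature at a time and measures how much the score drops.
models = (make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
          GradientBoostingClassifier())
for model in models:
    model.fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    top = result.importances_mean.argsort()[::-1][:3]
    print(type(model).__name__, "most important feature indices:", top)
```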

Game theory and Shapley values

An example of a model-agnostic method that is gaining popularity is Shapley values. Shapley values come from game theory: each feature is treated as a player, and the prediction is the payout of the game. You can then find out how much each feature contributed to a single prediction compared to the average prediction. The big advantages of Shapley values are that the prediction is fairly distributed among the features and that the method is based on solid theory.
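For intuition, the sketch below computes exact Shapley values for a single prediction by brute force, directly from the game-theoretic definition: every coalition of features is evaluated, and a feature’s Shapley value is its weighted average marginal contribution. Treating “absent” features by substituting a baseline value is one common convention and an assumption of this sketch; the toy linear model is my own example.

```python
from itertools import combinations
from math import factorial

import numpy as np


def shapley_values(predict, x, baseline):
    """Exact Shapley values for a single instance x.

    predict  : function mapping a 2D array of inputs to predictions
    x        : 1D array, the instance to explain
    baseline : 1D array standing in for "absent" features (e.g. feature means)
    """
    n = len(x)

    def value(coalition):
        # Payout of a coalition: the prediction when only those features take
        # their real values and the rest keep the baseline values.
        z = baseline.astype(float)
        z[list(coalition)] = x[list(coalition)]
        return predict(z.reshape(1, -1))[0]

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi  # phi.sum() equals predict(x) - predict(baseline)


# Toy usage with a hand-written linear "model" on three features.
predict = lambda Z: Z @ np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 3.0, 2.0])
baseline = np.zeros(3)
print(shapley_values(predict, x, baseline))  # approximately [2., -3., 1.]
```

For this toy linear model, each feature’s Shapley value is simply its weight times its deviation from the baseline, and the values sum to the difference between the prediction and the baseline prediction: the fair-distribution property mentioned above.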

Asymmetric Shapley values (ASVs)

Calculating exact Shapley values for each prediction requires a lot of computational power, because the number of feature coalitions to evaluate grows exponentially with the number of features (more info here). To overcome this issue, many variants of the method are being researched. Examples of such variants are SHAP (SHapley Additive exPlanations) and Asymmetric Shapley values (ASVs), both of which attempt to address the shortcomings of exact Shapley values (Frye et al., 2019). Other model-agnostic methods include LIME and scoped rules. Whereas LIME seems to be less popular, SHAP variants are gaining traction. The main advantage of SHAP compared to LIME is that with SHAP the difference between the prediction and the average prediction is fairly distributed among the features.

Application of the SHAP (SHapley Additive exPlanations) method

A good example of how SHAP is applied and how to interpret the results is this article by Raoniar on Towards Data Science (2020). Raoniar applies the explanation technique to an ML model and explains step by step how to interpret the results.
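For readers who want to try this themselves, the following is a hedged sketch of how the shap Python package is typically applied; the model and dataset are my own arbitrary choices, not the setup used in Raoniar’s walkthrough.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row of attributions per sample

# Local explanation of the first prediction: which features pushed the model
# output away from the average prediction (explainer.expected_value).
contribution = shap_values[0]
for i in np.argsort(np.abs(contribution))[::-1][:5]:
    print(f"{X.columns[i]:>25s}: {contribution[i]:+.3f}")
```

In practice these raw attributions are usually visualized with shap’s built-in force and summary plots, which is what makes the method accessible to non-technical stakeholders.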

What makes explainability hard?

Now that we know that we can technically explain the specific outcome of a model, why does it still pose a challenge to use such methods in practice? Below are some of the concerns that prevent organizations from explaining their models (Bhatt et al., 2020).

  1. Causality of models is lacking

Very few methods manage to explain causal relationships, even though this is the “holy grail of explainability” (Bhatt et al., 2020). Many organizations would be keen to use such methods if they were available and practical. One method that claims to explain causality is Asymmetric Shapley values (Frye et al., 2019).

  2. Privacy

In some cases, full explainability of predictions can be used to learn about the training data: previously unknown relationships may be discovered about specific instances, violating the privacy of those instances.

  3. Performance improvement

While we have not touched upon the data scientist/developer side of explainability so far, it is important to include it now. Some data scientists may use explainability methods to improve a model’s performance, but in some cases this could also increase the capabilities and performance of malicious systems, affecting the end user negatively (source here).

  4. Different stakeholders

Different stakeholders require different ways of explaining a model. One may use an explanation as a sanity check; another may review the prediction and explanation in order to improve the model’s performance. Preece et al. (2018) describe how different stakeholders have different explainability needs, and your methods should align with that. Frye et al. established an approach to overcome this challenge, using a three-step method: 1) identify the stakeholders, 2) engage with the stakeholders (what do they need?), and 3) understand the purpose of the explanation. More on that here.

Recommendation

My advice to organizations experimenting with data science and trying to turn their efforts into value is to critically review what their stakeholders expect in terms of explainability. For example, do you need to show causality, or is feature importance enough? And do you need black box models at all, or will glass box models suffice? Of course, you can test both and compare their performance.

Fast iterations and the ability to easily run pilots can help you align with your stakeholders’ expectations. While this article only scratches the surface of “explainability” and does not offer a full solution, I hope it contributes to a broader understanding and awareness of the topic, and stimulates further discussion, pilots, and research.

Conclusion 

While we can develop very complex and high-performing models, we struggle to explain them to our stakeholders. Especially in situations where predictions are critical (e.g. healthcare, insurance), explainability is a must; only then will models deliver the value we hope for. Methods to explain such black box models exist, but they have limitations (e.g. LIME). More recent methods such as Asymmetric Shapley values promise to tackle those limitations, such as the inability to determine causality. However, these methods are still relatively new and immature, and we should test them more thoroughly before drawing conclusions.
