TY - GEN
T1 - A Local Non-Additive Framework for Explaining Black-Box Predictive Models
AU - Mohammadi, Majid
AU - Tiddi, Ilaria
AU - Ten Teije, Annette
N1 - Publisher Copyright:
© 2023 The Authors.
PY - 2023
Y1 - 2023
AB - Understanding the rationale behind the predictions made by machine learning models is of paramount importance across numerous applications. Various explainable models have been developed to shed light on these predictions by assessing the individual contributions of features to the outcome of black-box models. However, existing methods often overlook the crucial aspect of interactions among features, restricting the explanation to isolated feature attributions. In this paper, we introduce a novel Choquet integral-based explainable method, termed ChoquEx, which not only accounts for interactions among features but also enables the computation of contributions for any subset of features. To achieve this, we propose an algorithm based on support vector regression that efficiently estimates the contributions of all feature subsets. We further leverage game-theoretic concepts, including Shapley values and the interaction index, to calculate both feature importance and interaction strength, adding interpretability and insight into the model's decision-making process. To evaluate the effectiveness of ChoquEx, we conduct extensive experiments on diverse real-world scenarios. Our results demonstrate the superiority of the proposed model over existing explainable techniques. With its ability to unravel feature interactions and furnish comprehensive explanations, ChoquEx significantly enhances our understanding of predictive models, opening new avenues for applying machine learning in critical domains that require transparent decision-making.
UR - http://www.scopus.com/inward/record.url?scp=85175828512&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85175828512&partnerID=8YFLogxK
U2 - 10.3233/FAIA230458
DO - 10.3233/FAIA230458
M3 - Conference contribution
AN - SCOPUS:85175828512
SN - 9781643684369
T3 - Frontiers in Artificial Intelligence and Applications
SP - 1728
EP - 1738
BT - ECAI 2023
A2 - Gal, Kobi
A2 - Nowe, Ann
A2 - Nalepa, Grzegorz J.
A2 - Fairstein, Roy
A2 - Radulescu, Roxana
PB - IOS Press BV
T2 - 26th European Conference on Artificial Intelligence, ECAI 2023
Y2 - 30 September 2023 through 4 October 2023
ER -