TY - GEN
T1 - Machine learning explainability in breast cancer survival
AU - Jansen, Tom
AU - Geleijnse, Gijs
AU - van Maaren, Marissa
AU - Hendriks, Mathijs P.
AU - Ten Teije, Annette
AU - Moncada-Torres, Arturo
PY - 2020
Y1 - 2020
N2 - Machine Learning (ML) can improve the diagnosis, treatment decisions, and understanding of cancer. However, the low explainability of how “black box” ML methods produce their output hinders their clinical adoption. In this paper, we used data from the Netherlands Cancer Registry to generate an ML-based model to predict 10-year overall survival of breast cancer patients. Then, we used Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to interpret the model's predictions. We found that, overall, LIME and SHAP tend to be consistent when explaining the contribution of different features. Nevertheless, the feature ranges where they mismatch can also be of interest, since they can help us identify “turning points” where features go from favoring survived to favoring deceased (or vice versa). Explainability techniques can pave the way for better acceptance of ML techniques. However, their evaluation and translation to real-life scenarios need to be researched further.
AB - Machine Learning (ML) can improve the diagnosis, treatment decisions, and understanding of cancer. However, the low explainability of how “black box” ML methods produce their output hinders their clinical adoption. In this paper, we used data from the Netherlands Cancer Registry to generate an ML-based model to predict 10-year overall survival of breast cancer patients. Then, we used Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to interpret the model's predictions. We found that, overall, LIME and SHAP tend to be consistent when explaining the contribution of different features. Nevertheless, the feature ranges where they mismatch can also be of interest, since they can help us identify “turning points” where features go from favoring survived to favoring deceased (or vice versa). Explainability techniques can pave the way for better acceptance of ML techniques. However, their evaluation and translation to real-life scenarios need to be researched further.
KW - Artificial Intelligence
KW - Interpretability
KW - Oncology
KW - Prediction model
UR - http://www.scopus.com/inward/record.url?scp=85086886132&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85086886132&partnerID=8YFLogxK
U2 - 10.3233/SHTI200172
DO - 10.3233/SHTI200172
M3 - Conference contribution
C2 - 32570396
AN - SCOPUS:85086886132
SN - 9781643680828
T3 - Studies in Health Technology and Informatics
SP - 307
EP - 311
BT - Digital Personalized Health and Medicine
A2 - Pape-Haugaard, Louise B.
A2 - Lovis, Christian
A2 - Madsen, Inge Cort
A2 - Weber, Patrick
A2 - Nielsen, Per Hostrup
A2 - Scott, Philip
PB - IOS Press
T2 - 30th Medical Informatics Europe Conference, MIE 2020
Y2 - 28 April 2020 through 1 May 2020
ER -