Fairness Evaluation of Risk Estimation Models for Lung Cancer Screening

Shaurya Gaur, Michel Vitale, Alessa Hering, Johan Kwisthout, Colin Jacobs, Lena Philipp, Fennie van der Graaf

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

Lung cancer is the leading cause of cancer-related mortality in adults worldwide. Screening high-risk individuals with annual low-dose CT (LDCT) can support earlier detection and reduce deaths, but widespread implementation may strain the already limited radiology workforce. Artificial intelligence (AI) models have shown potential in estimating lung cancer risk from LDCT scans. However, high-risk populations for lung cancer are diverse, and these models’ performance across diverse demographic groups remains an open question. In this study, we used the JustEFAB ethical framework to evaluate potential performance disparities and fairness in two AI-based risk estimation models for lung cancer screening: the Sybil lung cancer risk model and the Venkadesh21 nodule risk estimator. We also examined disparities in the PanCan2b logistic regression model recommended in the British Thoracic Society nodule management guideline. Both AI-based models were trained on data from the U.S.-based National Lung Screening Trial (NLST) and assessed on a held-out NLST validation set. We evaluated area under the ROC curve (AUROC), sensitivity, and specificity across demographic subgroups, and explored potential confounding from clinical risk factors. We observed a statistically significant AUROC difference in Sybil’s performance between women (0.88, 95% CI: 0.86, 0.90) and men (0.81, 95% CI: 0.78, 0.84, p < .001). At 90% specificity, Venkadesh21 showed lower sensitivity for Black (0.39, 95% CI: 0.23, 0.59) than for White participants (0.69, 95% CI: 0.65, 0.73). These differences were not explained by available clinical confounders and may be classified as unfair biases according to JustEFAB. Our findings highlight the importance of improving and monitoring model performance across underrepresented subgroups in lung cancer screening, as well as the need for further research on algorithmic fairness in this field.
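The subgroup comparison described in the abstract (per-group AUROC with 95% confidence intervals) can be sketched as below. This is a minimal, stdlib-only illustration under assumed conventions, not the authors' implementation: the function names, the percentile-bootstrap setup, and the toy per-group data are all hypothetical.

```python
import random

def auroc(y_true, scores):
    """Rank-based AUROC: probability that a random positive case
    scores above a random negative case (ties count as 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(y_true, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for AUROC
    (illustrative choice; the paper's CI method may differ)."""
    rng = random.Random(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        yt = [y_true[i] for i in sample]
        sc = [scores[i] for i in sample]
        if 0 < sum(yt) < len(yt):  # skip resamples with only one class
            stats.append(auroc(yt, sc))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

# Hypothetical per-subgroup evaluation (labels and scores are made up).
groups = {
    "women": ([0, 0, 1, 1] * 10, [0.2, 0.3, 0.6, 0.9] * 10),
    "men":   ([0, 1, 0, 1] * 10, [0.1, 0.7, 0.5, 0.8] * 10),
}
for name, (y, s) in groups.items():
    lo, hi = bootstrap_ci(y, s)
    print(f"{name}: AUROC={auroc(y, s):.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

Comparing the per-group point estimates together with their intervals, as the abstract does for Sybil on women versus men, guards against reading sampling noise as a disparity.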
Original language: English
Pages (from-to): 559-593
Number of pages: 35
Journal: Machine Learning for Biomedical Imaging
Volume: 3
Issue number: Special issue
Early online date: 21 Dec 2025
DOIs
Publication status: Published - 2025

Funding

Special issue on FAIMI
