TY - GEN
T1 - Continual Class Incremental Learning for CT Thoracic Segmentation
AU - Elskhawy, Abdelrahman
AU - Lisowska, Aneta
AU - Keicher, Matthias
AU - Henry, Joseph
AU - Thomson, Paul
AU - Navab, Nassir
PY - 2020
Y1 - 2020
N2 - Deep learning organ segmentation approaches require large amounts of annotated training data, which is in limited supply due to confidentiality constraints and the time required for expert manual annotation. It is therefore desirable to train models incrementally without access to previously used data. A common form of sequential training is fine-tuning (FT), in which a model learns a new task effectively but loses performance on previously learned tasks. The Learning without Forgetting (LwF) approach addresses this issue by replaying its own predictions for past tasks during model training. In this work, we evaluate FT and LwF for class incremental learning in multi-organ segmentation using the publicly available AAPM dataset. We show that LwF can successfully retain knowledge of previous segmentations; however, its ability to learn a new class decreases with the addition of each class. To address this problem, we propose an adversarial continual learning segmentation approach (ACLSeg), which disentangles the feature space into task-specific and task-invariant features. This enables preservation of performance on past tasks and effective acquisition of new knowledge.
AB - Deep learning organ segmentation approaches require large amounts of annotated training data, which is in limited supply due to confidentiality constraints and the time required for expert manual annotation. It is therefore desirable to train models incrementally without access to previously used data. A common form of sequential training is fine-tuning (FT), in which a model learns a new task effectively but loses performance on previously learned tasks. The Learning without Forgetting (LwF) approach addresses this issue by replaying its own predictions for past tasks during model training. In this work, we evaluate FT and LwF for class incremental learning in multi-organ segmentation using the publicly available AAPM dataset. We show that LwF can successfully retain knowledge of previous segmentations; however, its ability to learn a new class decreases with the addition of each class. To address this problem, we propose an adversarial continual learning segmentation approach (ACLSeg), which disentangles the feature space into task-specific and task-invariant features. This enables preservation of performance on past tasks and effective acquisition of new knowledge.
UR - http://www.scopus.com/inward/record.url?scp=85092200855&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-60548-3_11
DO - 10.1007/978-3-030-60548-3_11
M3 - Conference contribution
SN - 9783030605476
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 106
EP - 116
BT - Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning - 2nd MICCAI Workshop, DART 2020, and 1st MICCAI Workshop, DCL 2020, Held in Conjunction with MICCAI 2020, Proceedings
A2 - Albarqouni, S.
A2 - Bakas, S.
A2 - Kamnitsas, K.
A2 - Cardoso, M.J.
A2 - Landman, B.
A2 - Li, W.
A2 - Milletari, F.
A2 - Rieke, N.
A2 - Xu, Z.
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2020, and the 1st MICCAI Workshop on Distributed and Collaborative Learning, DCL 2020, held in conjunction with Medical Image Computing and Computer Assisted Intervention, MICCAI 2020
Y2 - 4 October 2020 through 8 October 2020
ER -