TY - GEN
T1 - When Machine Unlearning Jeopardizes Privacy
AU - Chen, Min
AU - Zhang, Zhikun
AU - Wang, Tianhao
AU - Backes, Michael
AU - Humbert, Mathias
AU - Zhang, Yang
PY - 2021
Y1 - 2021
AB - The right to be forgotten states that a data owner has the right to erase their data from an entity storing it. In the context of machine learning (ML), the right to be forgotten requires an ML model owner to remove the data owner's data from the training set used to build the ML model, a process known as machine unlearning. While originally designed to protect the privacy of the data owner, we argue that machine unlearning may leave some imprint of the data in the ML model and thus create unintended privacy risks. In this paper, we perform the first study investigating the unintended information leakage caused by machine unlearning. We propose a novel membership inference attack that leverages the different outputs of two versions of an ML model to infer whether a target sample is part of the training set of the original model but not of the training set of the corresponding unlearned model. Our experiments demonstrate that the proposed membership inference attack achieves strong performance. More importantly, we show that in multiple cases our attack outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive effects on privacy. We notice that the privacy degradation is especially significant for well-generalized ML models where classical membership inference does not perform well. We further investigate four mechanisms to mitigate the newly discovered privacy risks and show that releasing the predicted label only, temperature scaling, and differential privacy are effective. We believe that our results can help improve privacy protection in practical implementations of machine unlearning. Our code is available at https://github.com/MinChen00/UnlearningLeaks.
UR - http://www.scopus.com/inward/record.url?scp=85117762075&partnerID=8YFLogxK
U2 - 10.1145/3460120.3484756
DO - 10.1145/3460120.3484756
M3 - Conference contribution
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 896
EP - 911
BT - CCS 2021: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
ER -