When Machine Unlearning Jeopardizes Privacy

Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

The right to be forgotten states that a data owner has the right to erase their data from an entity storing it. In the context of machine learning (ML), the right to be forgotten requires an ML model owner to remove the data owner's data from the training set used to build the ML model, a process known as machine unlearning. While machine unlearning was originally designed to protect the privacy of the data owner, we argue that it may leave an imprint of the data in the ML model and thus create unintended privacy risks. In this paper, we perform the first study of the unintended information leakage caused by machine unlearning. We propose a novel membership inference attack that leverages the different outputs of an ML model's two versions to infer whether a target sample is part of the training set of the original model but not of the training set of the corresponding unlearned model. Our experiments demonstrate that the proposed membership inference attack achieves strong performance. More importantly, we show that in multiple cases our attack outperforms the classical membership inference attack on the original ML model, which indicates that machine unlearning can have counterproductive effects on privacy. We observe that the privacy degradation is especially significant for well-generalized ML models, on which classical membership inference performs poorly. We further investigate four mechanisms to mitigate the newly discovered privacy risks and show that three of them are effective: releasing only the predicted label, temperature scaling, and differential privacy. We believe that our results can help improve privacy protection in practical implementations of machine unlearning. Our code is available at https://github.com/MinChen00/UnlearningLeaks.
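The attack the abstract describes lends itself to a short illustration. The following is a minimal, hypothetical Python sketch of the core idea: the adversary queries both the original and the unlearned model on a target sample, turns the two posterior vectors into features, and feeds them to a binary attack classifier trained on shadow model pairs. Every name and design choice here (the feature construction, the logistic-regression attack model, the shadow-pair setup) is an assumption for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def posterior_difference_features(p_original, p_unlearned):
    """Build attack features for one sample from the posteriors of the two
    model versions. Concatenating both vectors with their element-wise
    difference is one simple way to expose how the output changed after
    unlearning (an illustrative choice, not necessarily the paper's)."""
    p_original = np.asarray(p_original, dtype=float)
    p_unlearned = np.asarray(p_unlearned, dtype=float)
    return np.concatenate([p_original, p_unlearned, p_original - p_unlearned])


def train_attack_model(shadow_pairs, labels):
    """shadow_pairs: (posterior_original, posterior_unlearned) tuples from
    shadow model pairs the adversary trained itself; labels: 1 if the sample
    was deleted between the two shadow versions, 0 if it was never a member."""
    X = np.stack([posterior_difference_features(po, pu) for po, pu in shadow_pairs])
    attack = LogisticRegression(max_iter=1000)
    attack.fit(X, np.asarray(labels))
    return attack


def infer_membership(attack, p_original, p_unlearned):
    """Estimated probability that the target sample was in the original
    model's training set but removed before the unlearned model was built."""
    x = posterior_difference_features(p_original, p_unlearned).reshape(1, -1)
    return attack.predict_proba(x)[0, 1]


# Toy usage with made-up 3-class posteriors: deleted samples tend to show a
# large output shift between the two versions, non-members a small one.
shadow_pairs = [
    ([0.90, 0.05, 0.05], [0.40, 0.30, 0.30]),  # deleted sample
    ([0.30, 0.40, 0.30], [0.31, 0.39, 0.30]),  # non-member
] * 10
labels = [1, 0] * 10
attack = train_attack_model(shadow_pairs, labels)
print(infer_membership(attack, [0.85, 0.10, 0.05], [0.35, 0.35, 0.30]))
```

Viewed this way, the mitigations in the abstract become concrete: releasing only the predicted label removes the posterior vectors the features are built from, while temperature scaling and differential privacy flatten or randomize the gap between the two outputs.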
Original language: English
Title of host publication: CCS 2021: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 896-911
Number of pages: 16
ISBN (Electronic): 9781450384544
DOIs
Publication status: Published - 2021

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Funding

We thank our shepherd Gautam Kamath and the anonymous reviewers for their constructive comments. This work is partially funded by the Helmholtz Association within the project “Trustworthy Federated Data Analytics” (TFDA) (funding number ZT-I-OO1 4). Tianhao Wang is funded by the National Science Foundation (NSF) under Grant No. 1931443, a Bilsland Dissertation Fellowship, and a Packard Fellowship.

Funders | Funder number
TFDA | ZT-I-OO1 4
National Science Foundation | 1931443
Helmholtz Association |

    • When Machine Unlearning Jeopardizes Privacy

      Chen, M., Zhang, Z., Wang, T., Backes, M., Humbert, M. & Zhang, Y., 14 Sept 2021, arXiv, p. 896-911, 16 p.

      Research output: Working paper / Preprint › Preprint › Academic

      Open Access
