Private and Secure Distributed Deep Learning: A Survey

Corinne Allaart*, Saba Amiri*, Henri Bal*, Adam Belloum*, Leon Gommans*, Aart Van Halteren*, Sander Klous*

*Corresponding author for this work

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

Traditionally, deep learning practitioners would bring data into a central repository for model training and inference. Recent developments in distributed learning, such as federated learning and deep learning as a service (DLaaS), do not require centralized data and instead push computing to where the distributed datasets reside. These decentralized training schemes, however, introduce additional security and privacy challenges. This survey first structures the field of distributed learning into two main paradigms and then provides an overview of the recently published protective measures for each. This work highlights both secure training methods and private inference measures. Our analyses show that recent publications report progress in terms of security, privacy, and efficiency, although the reported gains depend heavily on the problem definition. Nevertheless, we also identify several current issues within the private and secure distributed deep learning (PSDDL) field that require more research. We discuss these issues and provide a general overview of how they might be resolved.
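
The federated learning paradigm described in the abstract can be made concrete with a short sketch. The example below (a minimal NumPy illustration, not code from the survey itself) shows the canonical federated averaging loop: each client fits a model on its own data, and only the resulting weights, never the raw datasets, are sent to the server for averaging. The three-client linear-regression setup, learning rate, epoch count, and number of rounds are hypothetical choices made for brevity.

    # Minimal federated averaging sketch -- illustrative only, not the survey's code.
    # Each client holds its data locally; only model weights reach the server.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: 3 clients, each with a private linear-regression dataset.
    true_w = np.array([2.0, -1.0])
    clients = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))                 # local features (never leave the client)
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients.append((X, y))

    def local_update(w, X, y, lr=0.1, epochs=5):
        """One client's local gradient-descent training on its own data."""
        w = w.copy()
        for _ in range(epochs):
            grad = 2.0 * X.T @ (X @ w - y) / len(y)  # MSE gradient
            w -= lr * grad
        return w

    # Server-side loop: broadcast global weights, train locally, average.
    w_global = np.zeros(2)
    for _ in range(10):
        local_weights = [local_update(w_global, X, y) for X, y in clients]
        w_global = np.mean(local_weights, axis=0)    # aggregation step

    print("recovered weights:", w_global)            # converges to ~[2.0, -1.0]

Real deployments layer the protective measures surveyed here, such as secure aggregation or differential privacy, on top of this basic loop, since the exchanged weights themselves can still leak information about the local data.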

Original language: English
Article number: 3703452
Pages (from-to): 1-43
Number of pages: 43
Journal: ACM Computing Surveys
Volume: 57
Issue number: 4
Early online date: 9 Dec 2024
DOI: 10.1145/3703452
Publication status: E-pub ahead of print - 9 Dec 2024

Bibliographical note

Publisher Copyright:
© 2024 Copyright held by the owner/author(s).

Keywords

  • Deep learning
  • distributed learning
  • privacy
  • security
