TY - GEN
T1 - Property inference attacks on convolutional neural networks
T2 - 18th International Conference on Security and Cryptography, SECRYPT 2021
AU - Parisot, M.
AU - Pejó, B.
AU - Spagnuelo, D.
PY - 2021
Y1 - 2021
AB - Machine learning models aim to make correct predictions for specific tasks by learning important properties and patterns from data. In doing so, a model may also learn properties that are unrelated to its primary task. Property Inference Attacks exploit this and aim to infer, from a given model (i.e., the target model), properties of the training dataset that are seemingly unrelated to the model’s primary goal. If the training data is sensitive, such an attack could lead to privacy leakage. In this paper, we investigate the influence of the target model’s complexity on the accuracy of this type of attack, focusing on convolutional neural network classifiers. We perform attacks on models trained on facial images to predict whether a person’s mouth is open. Our attacks’ goal is to infer whether the training dataset is balanced gender-wise. Our findings reveal that the risk of a privacy breach is present independently of the target model’s complexity: for all studied architectures, the attack’s accuracy is clearly above the baseline.
DO - 10.5220/0010555607150721
M3 - Conference contribution
SP - 715
EP - 721
BT - Proceedings of the 18th International Conference on Security and Cryptography, SECRYPT 2021
A2 - De Capitani di Vimercati, S.
A2 - Samarati, P.
PB - SciTePress
Y2 - 6 July 2021 through 8 July 2021
ER -