Membership Privacy for Machine Learning Models Through Knowledge Transfer
Large capacity machine learning (ML) models are prone to membership inference
attacks (MIAs), which aim to infer whether the target sample is a member of the
target model's training dataset. The serious privacy concerns raised by
membership inference have motivated multiple defenses against MIAs, e.g.,
differential privacy and adversarial regularization. Unfortunately, these
defenses produce ML models with unacceptably low classification performance.
Our work proposes a new defense, called distillation for membership privacy
(DMP), against MIAs that preserves the utility of the resulting models
significantly better than prior defenses. DMP leverages knowledge distillation
to train ML models with membership privacy. We provide a novel criterion to
tune the data used for knowledge transfer in order to amplify the membership
privacy of DMP. Our extensive evaluation shows that DMP provides significantly
better tradeoffs between membership privacy and classification accuracies
compared to state-of-the-art MIA defenses. For instance, DMP achieves ~100%
accuracy improvement over adversarial regularization for DenseNet trained on
CIFAR100, for similar membership privacy (measured using MIA risk): when the
MIA risk is 53.7%, adversarially regularized DenseNet is 33.6% accurate, while
DMP-trained DenseNet is 65.3% accurate.
Comment: To appear in the 35th AAAI Conference on Artificial Intelligence, 2021
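The abstract does not spell out DMP's training procedure, but the knowledge-transfer step it builds on is standard knowledge distillation: a teacher trained on the private data produces softened predictions on a separate transfer set, and the released student is trained only on those soft labels. Below is a minimal PyTorch sketch of that generic step, assuming hypothetical `teacher`, `student`, and `transfer_loader` objects and a temperature `T`; it is an illustration of distillation in general, not the paper's exact DMP algorithm or its data-selection criterion.

```python
import torch
import torch.nn.functional as F

def distill(teacher, student, transfer_loader, epochs=10, T=4.0, lr=0.01, device="cpu"):
    """Train `student` to match the teacher's softened predictions on a
    transfer set that is disjoint from the teacher's private training data."""
    teacher.eval()
    student.train()
    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, _ in transfer_loader:          # labels of the transfer set are unused
            x = x.to(device)
            with torch.no_grad():
                soft_targets = F.softmax(teacher(x) / T, dim=1)
            log_probs = F.log_softmax(student(x) / T, dim=1)
            # KL divergence between softened teacher and student distributions,
            # scaled by T^2 as in standard distillation
            loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Because the student never sees the private training records directly, its predictions depend on them only through the teacher's soft labels, which is what the defense relies on for membership privacy.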
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
The arms race between attacks and defenses for machine learning models has
come to a forefront in recent years, in both the security community and the
privacy community. However, one big limitation of previous research is that the
security domain and the privacy domain have typically been considered
separately. It is thus unclear whether the defense methods in one domain will
have any unexpected impact on the other domain.
In this paper, we take a step towards resolving this limitation by combining
the two domains. In particular, we measure the success of membership inference
attacks against six state-of-the-art defense methods that mitigate the risk of
adversarial examples (i.e., evasion attacks). Membership inference attacks
determine whether or not an individual data record has been part of a model's
training set. The accuracy of such attacks reflects the information leakage of
training algorithms about individual members of the training set. Defenses
against adversarial examples shape the model's decision boundaries so that
predictions remain unchanged within a small region around each input. However,
this objective is optimized only on the training data, so individual training
records exert a significant influence on robust models, which makes them more
vulnerable to membership inference attacks.
To perform the membership inference attacks, we leverage the existing
inference methods that exploit model predictions. We also propose two new
inference methods that exploit structural properties of robust models on
adversarially perturbed data. Our experimental evaluation demonstrates that
compared with the natural training (undefended) approach, adversarial defense
methods can indeed increase the target model's risk against membership
inference attacks.
Comment: ACM CCS 2019, code is available at
https://github.com/inspire-group/privacy-vs-robustness
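One widely used prediction-based membership inference baseline of the kind the abstract refers to simply thresholds the model's confidence on the true label: training members tend to receive higher confidence than non-members. The sketch below is a minimal PyTorch version of that baseline, with all names (`model`, the loaders, `threshold`) assumed for illustration; it is not the paper's new structural attacks on adversarially perturbed data.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_scores(model, loader, device="cpu"):
    """Return the model's softmax confidence on the true label for each example."""
    model.eval()
    scores = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        probs = F.softmax(model(x), dim=1)
        scores.append(probs[torch.arange(len(y)), y].cpu())
    return torch.cat(scores)

def membership_attack_accuracy(model, member_loader, nonmember_loader, threshold=0.9):
    """Guess 'member' when confidence exceeds the threshold; report balanced accuracy."""
    m = confidence_scores(model, member_loader)
    n = confidence_scores(model, nonmember_loader)
    tpr = (m > threshold).float().mean()   # members correctly flagged
    tnr = (n <= threshold).float().mean()  # non-members correctly rejected
    return 0.5 * (tpr + tnr)
```

An attack accuracy near 50% means members and non-members are indistinguishable to this signal; the gap above 50% is the leakage such evaluations measure.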
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Deep neural networks are susceptible to various inference attacks as they
remember information about their training data. We design white-box inference
attacks to perform a comprehensive privacy analysis of deep learning models. We
measure the privacy leakage through parameters of fully trained models as well
as the parameter updates of models during training. We design inference
algorithms for both centralized and federated learning, with respect to passive
and active inference attackers, and assuming different adversary prior
knowledge.
We evaluate our novel white-box membership inference attacks against deep
learning algorithms to trace their training data records. We show that a
straightforward extension of the known black-box attacks to the white-box
setting (through analyzing the outputs of activation functions) is ineffective.
We therefore design new algorithms tailored to the white-box setting that
exploit the privacy vulnerabilities of the stochastic gradient descent
algorithm used to train deep neural networks. We
investigate the reasons why deep learning models may leak information about
their training data. We then show that even well-generalized models are
significantly susceptible to white-box membership inference attacks, by
analyzing state-of-the-art pre-trained and publicly available models for the
CIFAR dataset. We also show how adversarial participants, in the federated
learning setting, can successfully run active membership inference attacks
against other participants, even when the global model achieves high prediction
accuracies.
Comment: 2019 IEEE Symposium on Security and Privacy (SP)
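The paper's white-box attacks feed gradient-based signals into a learned attack model; as a rough illustration of why SGD leaks membership, the sketch below computes just one simple per-example signal, the norm of the loss gradient with respect to the parameters, under assumed `model` and `loader` objects. Training members, which SGD has already fit, typically yield smaller gradients than non-members. This is an illustrative signal only, not the paper's full attack pipeline.

```python
import torch
import torch.nn.functional as F

def per_example_gradient_norms(model, loader, device="cpu"):
    """White-box membership signal: the L2 norm of the loss gradient w.r.t. the
    model parameters, computed separately for each example."""
    model.eval()
    norms = []
    for x, y in loader:
        for xi, yi in zip(x, y):
            model.zero_grad()
            loss = F.cross_entropy(model(xi.unsqueeze(0).to(device)),
                                   yi.unsqueeze(0).to(device))
            loss.backward()
            # Sum of squared gradients over all parameters, then take the root
            sq = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
            norms.append(sq.sqrt().item())
    return norms
```

In the federated setting, the same kind of signal can be computed on the parameter updates exchanged in each round, which is what makes active attacks by adversarial participants possible.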
Machine Learning Models that Remember Too Much
Machine learning (ML) is becoming a commodity. Numerous ML frameworks and
services are available to data holders who are not ML experts but want to train
predictive models on their data. It is important that ML models trained on
sensitive inputs (e.g., personal images or documents) not leak too much
information about the training data.
We consider a malicious ML provider who supplies model-training code to the
data holder, does not observe the training, but then obtains white- or
black-box access to the resulting model. In this setting, we design and
implement practical algorithms, some of them very similar to standard ML
techniques such as regularization and data augmentation, that "memorize"
information about the training dataset in the model while keeping the model as
accurate and predictive as a conventionally trained one. We then explain how
the adversary can extract the memorized information from the model.
We evaluate our techniques on standard ML tasks for image classification
(CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20
Newsgroups and IMDB). In all cases, we show how our algorithms create models
that have high predictive power yet allow accurate extraction of subsets of
their training data.
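One of the simpler white-box memorization ideas in this line of work is to hide data in the low-order bits of the trained parameters, which barely affect accuracy. The NumPy sketch below illustrates that idea with hypothetical helpers (`embed_bits_in_params`, `extract_bits_from_params`); it is a minimal illustration of the concept, not the paper's exact encoding schemes.

```python
import numpy as np

def embed_bits_in_params(params, data_bits, n_lsb=8):
    """Overwrite the n_lsb low-order mantissa bits of each float32 parameter
    with bits of the secret data; the perturbation to each value is tiny,
    so the model's accuracy is essentially unchanged."""
    flat = params.astype(np.float32).ravel()
    as_int = flat.view(np.uint32)
    bits = np.array(data_bits, dtype=np.uint32)
    n = min(len(flat), len(bits) // n_lsb)
    mask = ~np.uint32((1 << n_lsb) - 1)
    for i in range(n):
        chunk = 0
        for b in bits[i * n_lsb:(i + 1) * n_lsb]:
            chunk = (chunk << 1) | int(b)
        as_int[i] = (as_int[i] & mask) | np.uint32(chunk)
    return as_int.view(np.float32).reshape(params.shape)

def extract_bits_from_params(params, n_values, n_lsb=8):
    """Recover the embedded bits from the low-order bits of the parameters
    (requires white-box access to the released model)."""
    as_int = params.astype(np.float32).ravel().view(np.uint32)
    out = []
    for i in range(n_values):
        chunk = int(as_int[i]) & ((1 << n_lsb) - 1)
        out.extend((chunk >> (n_lsb - 1 - j)) & 1 for j in range(n_lsb))
    return out
```

The key point is that the data holder sees only ordinary-looking parameters and ordinary accuracy, while a provider with white-box access can read the hidden bits back out.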