Privacy Risks of Securing Machine Learning Models against Adversarial Examples
The arms race between attacks and defenses for machine learning models has
come to the forefront in recent years, in both the security community and the
privacy community. However, a major limitation of previous research is that the
security domain and the privacy domain have typically been considered
separately. It is thus unclear whether the defense methods in one domain have
any unexpected impact on the other domain.
In this paper, we take a step towards resolving this limitation by combining
the two domains. In particular, we measure the success of membership inference
attacks against six state-of-the-art defense methods that mitigate the risk of
adversarial examples (i.e., evasion attacks). Membership inference attacks
determine whether or not an individual data record has been part of a model's
training set. The accuracy of such attacks reflects the information leakage of
training algorithms about individual members of the training set. Defense
methods against adversarial examples influence the model's decision boundaries
so that predictions remain unchanged in a small region around each input.
However, this objective is optimized on the training data, so individual
training records have a significant influence on robust models, which makes
these models more vulnerable to membership inference attacks.
To perform the membership inference attacks, we leverage the existing
inference methods that exploit model predictions. We also propose two new
inference methods that exploit structural properties of robust models on
adversarially perturbed data. Our experimental evaluation demonstrates that
compared with the natural training (undefended) approach, adversarial defense
methods can indeed increase the target model's risk against membership
inference attacks.
Comment: ACM CCS 2019, code is available at https://github.com/inspire-group/privacy-vs-robustnes
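The two attack families described above can be sketched in a few lines. Below is a minimal, hypothetical illustration in Python/NumPy: the names probs, model_fn, and threshold are assumptions, random noise stands in for the paper's adversarially perturbed inputs, and calibrating the thresholds on known members and non-members is left out.

    import numpy as np

    def confidence_attack(probs, labels, threshold):
        # Prediction-based attack: flag a record as a training-set member
        # when the model's confidence in its true label exceeds a threshold,
        # since models tend to be more confident on data they were fit to.
        conf = probs[np.arange(len(labels)), labels]
        return conf >= threshold

    def perturbation_attack(model_fn, x, y, eps=0.03, n_dirs=20, seed=0):
        # Structural attack: measure how often the prediction stays correct
        # under small perturbations of the input (assumed scaled to [0, 1]).
        # Robust training enforces this stability mainly around training
        # points, so a high survival rate is evidence of membership.
        rng = np.random.default_rng(seed)
        survived = 0
        for _ in range(n_dirs):
            delta = rng.uniform(-eps, eps, size=x.shape)
            if model_fn(np.clip(x + delta, 0.0, 1.0)).argmax() == y:
                survived += 1
        return survived / n_dirs  # compare against a calibrated threshold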
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Deep neural networks are susceptible to various inference attacks as they
remember information about their training data. We design white-box inference
attacks to perform a comprehensive privacy analysis of deep learning models. We
measure the privacy leakage through parameters of fully trained models as well
as the parameter updates of models during training. We design inference
algorithms for both centralized and federated learning, with respect to passive
and active inference attackers, and assuming different adversary prior
knowledge.
We evaluate our novel white-box membership inference attacks against deep
learning algorithms to trace their training data records. We show that a
straightforward extension of the known black-box attacks to the white-box
setting (through analyzing the outputs of activation functions) is ineffective.
We therefore design new algorithms tailored to the white-box setting by
exploiting the privacy vulnerabilities of stochastic gradient descent, the
algorithm used to train deep neural networks. We
investigate the reasons why deep learning models may leak information about
their training data. We then show that even well-generalized models are
significantly susceptible to white-box membership inference attacks, by
analyzing state-of-the-art pre-trained and publicly available models for the
CIFAR dataset. We also show how adversarial participants, in the federated
learning setting, can successfully run active membership inference attacks
against other participants, even when the global model achieves high prediction
accuracies.
Comment: 2019 IEEE Symposium on Security and Privacy (SP)
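As a rough sketch of the white-box signal exploited here, the following hypothetical PyTorch snippet scores an example by the norm of the loss gradient with respect to the model parameters; SGD drives this norm toward zero on training points, so members tend to score lower. This shows only the underlying signal, not the paper's full attacks, which feed such features into a learned inference model.

    import torch
    import torch.nn.functional as F

    def grad_norm_score(model, x, y):
        # x: a single input tensor; y: a scalar class-index tensor.
        # Per-example loss gradient w.r.t. all trainable parameters.
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(
            loss, [p for p in model.parameters() if p.requires_grad])
        # Training-set members tend to have smaller gradient norms.
        return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

    def infer_membership(model, x, y, threshold):
        # Threshold calibrated on known member/non-member examples.
        return grad_norm_score(model, x, y) < threshold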
Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
Machine learning algorithms, when applied to sensitive data, pose a distinct
threat to privacy. A growing body of prior work demonstrates that models
produced by these algorithms may leak specific private information in the
training data to an attacker, either through the models' structure or their
observable behavior. However, the underlying cause of this privacy risk is not
well understood beyond a handful of anecdotal accounts that suggest overfitting
and influence might play a role.
This paper examines the effect that overfitting and influence have on the
ability of an attacker to learn information about the training data from
machine learning models, either through training set membership inference or
attribute inference attacks. Using both formal and empirical analyses, we
illustrate a clear relationship between these factors and the privacy risk that
arises in several popular machine learning algorithms. We find that overfitting
is sufficient to allow an attacker to perform membership inference and, when
the target attribute meets certain conditions about its influence, attribute
inference attacks. Interestingly, our formal analysis also shows that
overfitting is not necessary for these attacks and begins to shed light on what
other factors may be in play. Finally, we explore the connection between
membership inference and attribute inference, showing that there are deep
connections between the two that lead to effective new attacks.
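A standard way to make the overfitting connection concrete is a loss-threshold membership rule of the kind formalized in this line of work, sketched below with hypothetical names: its advantage over random guessing is zero for a perfectly generalized model and grows with the gap between training and test losses.

    import numpy as np

    def loss_threshold_attack(losses, tau):
        # Overfit models assign smaller losses to training points, so
        # predict "member" whenever the per-example loss is below tau.
        return losses < tau

    def membership_advantage(member_losses, nonmember_losses, tau):
        # Advantage = TPR - FPR of the membership decision rule; it
        # widens as the train/test loss gap (overfitting) grows.
        tpr = np.mean(member_losses < tau)
        fpr = np.mean(nonmember_losses < tau)
        return tpr - fpr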
Distribution inference risks: Identifying and mitigating sources of leakage
A large body of work shows that machine learning (ML) models can leak
sensitive or confidential information about their training data. Recently,
leakage due to distribution inference (or property inference) attacks is
gaining attention. In this attack, the goal of an adversary is to infer
distributional information about the training data. So far, research on
distribution inference has focused on demonstrating successful attacks, with
little attention given to identifying the potential causes of the leakage and
to proposing mitigations. To bridge this gap, as our main contribution, we
theoretically and empirically analyze the sources of information leakage that
allow an adversary to perpetrate distribution inference attacks. We identify
three sources of leakage: (1) memorizing specific information about the
$\mathbb{E}[Y|X]$ (expected label given the feature values) of interest to the
adversary, (2) wrong inductive bias of the model, and (3) finiteness of the
training data. Next, based on our analysis, we propose principled mitigation
techniques against distribution inference attacks. Specifically, we demonstrate
that causal learning techniques are more resilient to a particular type of
distribution inference risk termed distributional membership inference than
associative learning methods. And lastly, we present a formalization of
distribution inference that allows for reasoning about more general adversaries
than was previously possible.
Comment: 14 pages, 8 figures
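The generic shadow-model formulation of distribution inference can be sketched as follows. This illustrates the attack being analyzed, not the paper's causal-learning mitigation; train_shadow and featurize are hypothetical callables supplied by the attacker.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def distribution_inference(train_shadow, featurize, target_features,
                               ratios=(0.2, 0.8), n_shadow=20, seed=0):
        # Train shadow models whose training data has a known value of the
        # distributional property (e.g., an attribute ratio), featurize
        # each model (weights, or outputs on probe inputs), then learn a
        # meta-classifier mapping model features to the property value.
        rng = np.random.default_rng(seed)
        X, y = [], []
        for label, ratio in enumerate(ratios):
            for _ in range(n_shadow):
                model = train_shadow(ratio, seed=int(rng.integers(1 << 31)))
                X.append(featurize(model))
                y.append(label)
        meta = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
        # Guess which candidate ratio generated the target model's data.
        return ratios[int(meta.predict(np.asarray(target_features)[None])[0])]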
Data Poisoning Attacks Against Multimodal Encoders
Traditional machine learning (ML) models usually rely on large-scale labeled
datasets to achieve strong performance. However, such labeled datasets are
often challenging and expensive to obtain. Moreover, the predefined categories
limit the model's ability to generalize to other visual concepts, since doing
so requires additional labeled data. In contrast, recently emerged multimodal
models, which combine visual and linguistic modalities, learn visual concepts
from raw text. This is a promising way to address the above problems: such
models can construct their training datasets from easy-to-collect image-text
pairs, and the raw texts cover almost unlimited categories through their
semantics. However, learning from a large-scale unlabeled dataset also exposes
the model to the risk of potential poisoning attacks, whereby the adversary
aims to perturb the model's training dataset to trigger malicious behaviors in
it. Previous work has mainly focused on the visual modality. In this paper, we
instead focus on answering two questions: (1) Is the linguistic modality also
vulnerable to poisoning attacks? and (2) Which modality is more vulnerable? To
answer the two questions, we conduct three types of poisoning attacks against
CLIP, the most representative multimodal contrastive learning framework.
Extensive evaluations on different datasets and model architectures show that
all three attacks can perform well on the linguistic modality with only a
relatively low poisoning rate and limited epochs. We also observe that the
poisoning effect differs across modalities, i.e., a lower MinRank in the
visual modality and a higher Hit@K (for small K) in the linguistic modality.
To mitigate the attacks, we propose both pre-training and post-training
defenses. We empirically show that both defenses can significantly reduce the
attack performance while preserving the model's utility.
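For intuition, a linguistic-modality poisoning step of the kind studied here might look like the hypothetical sketch below: replace the captions of a small fraction of target-class images with an attacker-chosen caption, so contrastive training pulls those images toward the attacker's text embedding. The paper's actual attacks and objectives differ in their details; poison_captions and its parameters are illustrative names.

    import random

    def poison_captions(pairs, target_class, attack_caption,
                        poison_rate=0.01, seed=0):
        # pairs: iterable of (image, caption, label) training triples.
        rng = random.Random(seed)
        poisoned = []
        for image, caption, label in pairs:
            # Overwrite captions only for a small fraction of the target
            # class, keeping the overall poisoning rate low.
            if label == target_class and rng.random() < poison_rate:
                caption = attack_caption
            poisoned.append((image, caption, label))
        return poisoned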