Measuring Information Leakage using Generalized Gain Functions
This paper introduces g-leakage, a rich generalization of the min-entropy model of quantitative information flow. In g-leakage, the benefit that an adversary derives from a certain guess about a secret is specified using a gain function g. Gain functions allow a wide variety of operational scenarios to be modeled, including those where the adversary benefits from guessing a value close to the secret, guessing a part of the secret, guessing a property of the secret, or guessing the secret within some number of tries. We prove important properties of g-leakage, including bounds between min-capacity, g-capacity, and Shannon capacity. We also show a deep connection between a strong leakage ordering on two channels, C1 and C2, and the possibility of factoring C1 into C2C3, for some C3. Based on this connection, we propose a generalization of the Lattice of Information from deterministic to probabilistic channels.
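
As a rough illustration of these definitions, the following NumPy sketch computes prior and posterior g-vulnerability and multiplicative g-leakage, assuming the gain function is given as a matrix indexed by guess and secret; the example prior, channel, and function names are illustrative and not taken from the paper.

    import numpy as np

    def prior_g_vulnerability(prior, gain):
        # V_g(pi) = max_w sum_x pi[x] * g(w, x); gain is indexed [w, x]
        return max(float(np.dot(gain[w], prior)) for w in range(gain.shape[0]))

    def posterior_g_vulnerability(prior, channel, gain):
        # V_g(pi, C) = sum_y max_w sum_x pi[x] * C[x, y] * g(w, x)
        joint = prior[:, None] * channel  # joint[x, y] = pi[x] * C[x, y]
        return sum(max(float(np.dot(gain[w], joint[:, y])) for w in range(gain.shape[0]))
                   for y in range(channel.shape[1]))

    def g_leakage(prior, channel, gain):
        # Multiplicative g-leakage in bits: log2(V_g(pi, C) / V_g(pi))
        return np.log2(posterior_g_vulnerability(prior, channel, gain)
                       / prior_g_vulnerability(prior, gain))

    # With the identity gain function g(w, x) = 1 iff w == x,
    # g-leakage reduces to ordinary min-entropy leakage.
    prior = np.array([0.5, 0.25, 0.25])
    channel = np.array([[1.0, 0.0],   # channel[x, y] = P(y | x)
                        [0.5, 0.5],
                        [0.0, 1.0]])
    print(g_leakage(prior, channel, np.eye(3)))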
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Deep neural networks are susceptible to various inference attacks as they
remember information about their training data. We design white-box inference
attacks to perform a comprehensive privacy analysis of deep learning models. We
measure the privacy leakage through parameters of fully trained models as well
as the parameter updates of models during training. We design inference
algorithms for both centralized and federated learning, with respect to passive
and active inference attackers, and assuming different adversary prior
knowledge.
We evaluate our novel white-box membership inference attacks against deep
learning algorithms to trace their training data records. We show that a
straightforward extension of the known black-box attacks to the white-box
setting (through analyzing the outputs of activation functions) is ineffective.
We therefore design new algorithms tailored to the white-box setting by
exploiting the privacy vulnerabilities of the stochastic gradient descent
algorithm, which is the algorithm used to train deep neural networks. We
investigate the reasons why deep learning models may leak information about
their training data. We then show that even well-generalized models are
significantly susceptible to white-box membership inference attacks, by
analyzing state-of-the-art pre-trained and publicly available models for the
CIFAR dataset. We also show how adversarial participants, in the federated
learning setting, can successfully run active membership inference attacks
against other participants, even when the global model achieves high prediction
accuracies. Comment: 2019 IEEE Symposium on Security and Privacy (SP).
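
As a hedged sketch of the kind of gradient-based white-box feature described above, the PyTorch snippet below computes, for a single candidate record, the norm of the loss gradient with respect to all model parameters; the function name and the usage note are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def gradient_feature(model, x, y):
        # White-box attack feature: norm of the loss gradient w.r.t. all model
        # parameters for one candidate record (x, y). Records seen during
        # training tend to yield smaller gradients, because SGD has already
        # minimized the loss on them.
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, list(model.parameters()))
        return torch.cat([g.flatten() for g in grads]).norm().item()

    # Hypothetical usage: compute this feature for known members and
    # non-members (e.g., shadow data), then fit a binary classifier that
    # predicts membership from the feature value.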
Information Leakage Games
We consider a game-theoretic setting to model the interplay between attacker
and defender in the context of information flow, and to reason about their
optimal strategies. In contrast with standard game theory, in our games the
utility of a mixed strategy is a convex function of the distribution on the
defender's pure actions, rather than the expected value of their utilities.
Nevertheless, the important properties of game theory, notably the existence of
a Nash equilibrium, still hold for our (zero-sum) leakage games, and we provide
algorithms to compute the corresponding optimal strategies. As typical in
(simultaneous) game theory, the optimal strategy is usually mixed, i.e.,
probabilistic, for both the attacker and the defender. From the point of view
of information flow, this was to be expected in the case of the defender, since
it is well known that randomization at the level of the system design may help
to reduce information leaks. Regarding the attacker, however, this appears to be
the first work (with respect to the information-flow literature) to prove formally
that in certain cases the optimal attack strategy is necessarily probabilistic.
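
To illustrate why the utility of a mixed strategy is a convex function of the defender's distribution rather than its expected value, here is a small NumPy sketch assuming the defender's pure actions are channels and that a mixed strategy yields their hidden-choice (convex) combination; the toy channels are illustrative only.

    import numpy as np

    def posterior_vulnerability(prior, channel):
        # Bayes (min-entropy) vulnerability: V(pi, C) = sum_y max_x pi[x] * C[x, y]
        return (prior[:, None] * channel).max(axis=0).sum()

    prior = np.array([0.5, 0.5])      # one-bit secret, uniform prior
    C1 = np.array([[1.0, 0.0],        # pure action 1: report the bit as-is
                   [0.0, 1.0]])
    C2 = np.array([[0.0, 1.0],        # pure action 2: report the flipped bit
                   [1.0, 0.0]])

    for p in (0.0, 0.5, 1.0):
        # The attacker observes the output but not which channel was chosen,
        # so the effective channel is the convex combination p*C1 + (1-p)*C2.
        mixed = p * C1 + (1 - p) * C2
        print(p, posterior_vulnerability(prior, mixed))

    # p = 0 or p = 1 leaks the bit completely (vulnerability 1.0), while
    # p = 0.5 reveals nothing (vulnerability 0.5, the prior value): the
    # defender's best strategy here is genuinely probabilistic.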