Roulette: A Semantic Privacy-Preserving Device-Edge Collaborative Inference Framework for Deep Learning Classification Tasks
Deep learning classifiers are crucial in the age of artificial intelligence.
Device-edge collaborative inference has been widely adopted as an
efficient framework for deploying such classifiers in IoT and 5G/6G networks.
However, it suffers from accuracy degradation under non-i.i.d. data
distributions and from privacy disclosure. For accuracy degradation, directly
applying transfer learning or split learning is costly, and privacy issues
remain. For privacy disclosure, cryptography-based approaches incur huge overhead.
Other lightweight methods assume that the ground truth is non-sensitive and can
be exposed, but in many applications the ground truth is precisely the user's
privacy-sensitive information. In this paper, we propose Roulette, a
task-oriented, semantic privacy-preserving collaborative inference framework
for deep learning classifiers. Beyond the input data, we treat the ground truth
of the data as private information. We develop a novel
paradigm of split learning where the back-end DNN is frozen and the front-end
DNN is retrained to be both a feature extractor and an encryptor. Moreover, we
provide a differential privacy guarantee and analyze the hardness of ground
truth inference attacks. To validate the proposed Roulette, we conduct
extensive performance evaluations using realistic datasets, which demonstrate
that Roulette can effectively defend against various attacks and meanwhile
achieve good model accuracy. In a setting where the data distribution is
severely non-i.i.d., Roulette improves inference accuracy by 21% averaged over
benchmarks, while reducing the accuracy of discrimination attacks to nearly
that of random guessing.
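As a rough sketch of the frozen-back-end split-learning paradigm described above (a minimal PyTorch-style illustration; the layer shapes, the noise added at the split point, and the training details are assumptions made for illustration, not Roulette's exact construction):

```python
import torch
import torch.nn as nn

# Front-end runs on the device: retrained to act as both a feature
# extractor and an encryptor of the intermediate representation.
front_end = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
)

# Back-end runs on the edge server and stays frozen throughout.
back_end = nn.Sequential(
    nn.Linear(32 * 8 * 8, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
for p in back_end.parameters():
    p.requires_grad = False  # frozen: only the front-end is retrained

def device_side(x: torch.Tensor, noise_scale: float = 0.1) -> torch.Tensor:
    """Encode the input and perturb the split-layer activations.

    Adding calibrated noise at the split is one common way to obtain a
    differential-privacy-style guarantee on what the edge server sees;
    the exact mechanism used by Roulette may differ.
    """
    z = front_end(x)
    return z + noise_scale * torch.randn_like(z)

def edge_side(z: torch.Tensor) -> torch.Tensor:
    """Classify the received (noisy) representation with the frozen back-end."""
    return back_end(z)

# Retraining step (sketch): gradients flow through the frozen back-end
# into the front-end, adapting it to the device's non-i.i.d. local data.
opt = torch.optim.Adam(front_end.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
loss = loss_fn(edge_side(device_side(x)), y)
loss.backward()
opt.step()
```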
A Hybrid Approach to Privacy-Preserving Federated Learning
Federated learning facilitates the collaborative training of models without
the sharing of raw data. However, recent attacks demonstrate that simply
maintaining data locality during training processes does not provide sufficient
privacy guarantees. Rather, we need a federated learning system capable of
preventing inference over both the messages exchanged during training and the
final trained model while ensuring the resulting model also has acceptable
predictive accuracy. Existing federated learning approaches either use secure
multiparty computation (SMC), which is vulnerable to inference attacks, or
differential privacy, which can lead to low accuracy given a large number of
parties each holding relatively small amounts of data. In this paper, we
present an alternative
approach that utilizes both differential privacy and SMC to balance these
trade-offs. Combining differential privacy with secure multiparty computation
enables us to reduce the growth of noise injection as the number of parties
increases, without sacrificing privacy, while maintaining a pre-defined rate of
trust. Our system is therefore a scalable approach that protects against
inference threats and produces models with high accuracy. Additionally, our
system can be used to train a variety of machine learning models, which we
validate with experimental results on three different machine learning
algorithms. Our experiments demonstrate that our approach outperforms
state-of-the-art solutions.
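To make the noise-reduction idea concrete, here is a minimal numerical sketch (the function name and the trust parameter t are illustrative, not the system's API; in the actual protocol the aggregation runs inside SMC, so no individual noisy update is ever revealed):

```python
import numpy as np

def local_update_with_partial_noise(update: np.ndarray,
                                    sigma: float,
                                    t: int,
                                    rng: np.random.Generator) -> np.ndarray:
    """Each party adds Gaussian noise with variance sigma^2 / t.

    If at least t parties are honest (the pre-defined trust threshold),
    the securely aggregated sum carries at least sigma^2 total noise
    variance -- the same protection as one party adding sigma alone --
    so per-party noise shrinks as the trusted group grows.
    """
    return update + rng.normal(0.0, sigma / np.sqrt(t), size=update.shape)

rng = np.random.default_rng(0)
n_parties, t, sigma = 50, 10, 1.0
updates = [np.ones(4) for _ in range(n_parties)]  # toy model updates

# In the real system this sum is computed under SMC, so the aggregator
# only ever sees the aggregate, never an individual noisy update.
aggregate = sum(local_update_with_partial_noise(u, sigma, t, rng)
                for u in updates) / n_parties
print(aggregate)  # close to the true mean: per-party noise is sqrt(t)x
                  # smaller than standalone local DP would require
```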
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Deep neural networks are susceptible to various inference attacks as they
remember information about their training data. We design white-box inference
attacks to perform a comprehensive privacy analysis of deep learning models. We
measure the privacy leakage through parameters of fully trained models as well
as the parameter updates of models during training. We design inference
algorithms for both centralized and federated learning, with respect to passive
and active inference attackers, and assuming different adversary prior
knowledge.
We evaluate our novel white-box membership inference attacks against deep
learning algorithms to trace their training data records. We show that a
straightforward extension of the known black-box attacks to the white-box
setting (through analyzing the outputs of activation functions) is ineffective.
We therefore design new algorithms tailored to the white-box setting by
exploiting the privacy vulnerabilities of stochastic gradient descent, the
algorithm used to train deep neural networks. We
investigate the reasons why deep learning models may leak information about
their training data. We then show that even well-generalized models are
significantly susceptible to white-box membership inference attacks, by
analyzing state-of-the-art pre-trained and publicly available models for the
CIFAR dataset. We also show how adversarial participants, in the federated
learning setting, can successfully run active membership inference attacks
against other participants, even when the global model achieves high prediction
accuracies.
Comment: 2019 IEEE Symposium on Security and Privacy (SP)
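As a toy illustration of the gradient signal these white-box attacks exploit (a sketch under simplifying assumptions; the paper's attacks feed gradients, activations, and losses into a learned attack model rather than applying a fixed threshold):

```python
import torch
import torch.nn as nn

def gradient_norm_score(model: nn.Module,
                        x: torch.Tensor,
                        y: torch.Tensor) -> float:
    """White-box membership signal: norm of the loss gradient w.r.t. the
    model parameters for a single record.

    Because SGD drives training-set gradients toward zero, members tend
    to produce noticeably smaller gradient norms than non-members.
    """
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(x.unsqueeze(0)),
                                       y.unsqueeze(0))
    loss.backward()
    sq = sum((p.grad ** 2).sum() for p in model.parameters()
             if p.grad is not None)
    return float(sq.sqrt())

# Toy usage: flag a record as a member if its gradient norm falls below
# a threshold tuned on known member/non-member examples.
model = nn.Linear(20, 5)
x, y = torch.randn(20), torch.tensor(3)
is_member = gradient_norm_score(model, x, y) < 1.0  # threshold is illustrative
```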