RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense
Federated learning (FL) provides a variety of privacy advantages by allowing
clients to collaboratively train a model without sharing their private data.
However, recent studies have shown that private information can still be leaked
through shared gradients. To further minimize the risk of privacy leakage,
existing defenses usually require clients to locally modify their gradients
(e.g., differential privacy) prior to sharing with the server. While these
approaches are effective in certain cases, they treat the data as a single
entity to protect, which usually comes at a large cost in model utility.
In this paper, we seek to reconcile utility and privacy in FL by proposing a
user-configurable privacy defense, RecUP-FL, that can better focus on the
user-specified sensitive attributes while obtaining significant improvements in
utility over traditional defenses. Moreover, we observe that existing inference
attacks often rely on a machine learning model to extract the private
information (e.g., attributes). We thus formulate such a privacy defense as an
adversarial learning problem, where RecUP-FL generates slight perturbations
that can be added to the gradients before sharing to fool adversary models. To
improve the transferability to un-queryable black-box adversary models,
inspired by the idea of meta-learning, RecUP-FL forms a model zoo containing a
set of substitute models and iteratively alternates between simulations of the
white-box and the black-box adversarial attack scenarios to generate
perturbations. Extensive experiments on four datasets under various adversarial
settings (both attribute inference attack and data reconstruction attack) show
that RecUP-FL can meet user-specified privacy constraints over the sensitive
attributes while significantly improving the model utility compared with
state-of-the-art privacy defenses.
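The perturbation mechanism described in this abstract can be sketched concretely. The snippet below is a minimal illustration (not the authors' code) of the core idea: optimize a small additive perturbation on a flattened gradient so that substitute attribute-inference models in a zoo misclassify the sensitive attribute, with the perturbation clamped to limit utility loss. All names, shapes, and hyperparameters (`eps`, `steps`, `lr`) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the perturbation idea above:
# craft a small additive perturbation for a shared gradient so that substitute
# attribute-inference models (a "model zoo") misclassify the sensitive
# attribute. All shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def perturb_gradient(grad, model_zoo, true_attr, eps=0.05, steps=10, lr=0.01):
    """Return `grad` plus a bounded perturbation that raises the attribute-
    inference loss of every substitute adversary in `model_zoo`."""
    delta = torch.zeros_like(grad, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        # Average over substitute adversaries; minimizing the negated
        # cross-entropy maximizes each adversary's prediction error.
        loss = torch.stack([-ce(adv(grad + delta), true_attr)
                            for adv in model_zoo]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep distortion small to retain utility
    return (grad + delta).detach()

# Illustrative usage: three substitute adversaries that map a 1000-dim
# flattened gradient to a binary sensitive attribute.
zoo = [nn.Sequential(nn.Linear(1000, 64), nn.ReLU(), nn.Linear(64, 2))
       for _ in range(3)]
g = torch.randn(1, 1000)           # stand-in for a real flattened gradient
attr = torch.tensor([1])           # the client's true sensitive attribute
shared_grad = perturb_gradient(g, zoo, attr)
```

Cycling the loss over several substitute models is a stand-in for the meta-learning-style alternation between white-box and black-box simulations that the abstract credits for transferability to un-queryable adversaries.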
Adversarial Robustness in Unsupervised Machine Learning: A Systematic Review
As the adoption of machine learning models increases, ensuring that they are
robust against adversarial attacks is increasingly important. With unsupervised
machine learning gaining more attention, its robustness against attacks is
vital as well. This paper conducts a systematic literature review on the
robustness of unsupervised learning, collecting 86 papers. Our results show
that most research focuses on privacy attacks, for which effective defenses
exist; however, many other attacks lack effective and general defensive
measures. Based on these results, we formulate a model of the properties of an
attack on unsupervised learning, providing a model for future research to
build on.
Comment: 38 pages, 11 figures
A Survey of Privacy Attacks in Machine Learning
As machine learning becomes more widely used, the need to study its
implications in security and privacy becomes more urgent. Although the body of
work in privacy has been steadily growing over the past few years, research on
the privacy aspects of machine learning has received less focus than the
security aspects. Our contribution in this research is an analysis of more than
40 papers related to privacy attacks against machine learning that have been
published during the past seven years. We propose an attack taxonomy, together
with a threat model that allows different attacks to be categorized based on
the adversarial knowledge and the assets under attack. An initial
exploration of the causes of privacy leaks is presented, as well as a detailed
analysis of the different attacks. Finally, we present an overview of the most
commonly proposed defenses and a discussion of the open problems and future
directions identified during our analysis.
Comment: Under review
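As one concrete example of the attack classes such a taxonomy covers, the sketch below shows the classic shadow-model membership inference recipe; it is illustrative only and not taken from the survey. An attack classifier is trained on a shadow model's confidence vectors to predict whether a record was part of the training set; all data, models, and sizes are synthetic assumptions.

```python
# Illustrative sketch (not from the survey) of one heavily studied attack
# class: shadow-model membership inference. An attack classifier is trained
# on a shadow model's confidence vectors to predict training membership.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_in, y_in, X_out = X[:1000], y[:1000], X[1000:]   # members vs. non-members

shadow = MLPClassifier(max_iter=300).fit(X_in, y_in)  # mimics the target model

# Attack features: the shadow model's confidence vector for each record;
# label 1 = member of the shadow training set, 0 = non-member.
feats = np.vstack([shadow.predict_proba(X_in), shadow.predict_proba(X_out)])
member = np.r_[np.ones(1000), np.zeros(1000)]
attack = RandomForestClassifier().fit(feats, member)

# Applied to a real target's confidence scores, `attack` exploits the higher
# confidence models typically assign to records they were trained on.
```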
Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms
The advent of federated learning has enabled large-scale knowledge exchange
amongst machine learning models while maintaining privacy. Despite its brief
history, federated learning is evolving rapidly to become practical for wider
use. One of the most significant advancements in this domain is the
incorporation of transfer learning into federated learning, which overcomes
fundamental constraints of basic federated learning, particularly in terms of
security. This chapter presents a comprehensive survey of the intersection of
federated and transfer learning from a security point of view. The main goal
of this study is to uncover the potential vulnerabilities that might
compromise the privacy and performance of systems using federated and transfer
learning, together with the defense mechanisms that guard against them.
Comment: Accepted for publication in the edited book "Federated and Transfer Learning", Springer, Cham
A Survey on Federated Learning Poisoning Attacks and Defenses
As a distributed machine learning technique, federated learning enables
multiple clients to collaboratively build a model across decentralized data
without explicitly aggregating the data. Due to its ability to break data
silos, federated learning has received increasing attention in many fields,
including finance, healthcare, and education. However, the invisibility of
clients' training data and of the local training process gives rise to
security issues. Recently, many works have studied security attacks and
defenses in federated learning, but there has been no dedicated survey of
poisoning attacks on federated learning and the corresponding defenses. In
this paper, we investigate the most advanced federated learning poisoning
attacks and defenses and point out future directions in these areas.
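For context on the setting this survey addresses, the sketch below (illustrative, not from the paper) contrasts plain FedAvg averaging with coordinate-wise median, one family of robust-aggregation defenses commonly compared in such surveys, against a single model-poisoning client that scales up its update. Client counts and scales are arbitrary assumptions.

```python
# Illustrative numpy sketch (not from the paper): FedAvg averaging of client
# updates, one model-poisoning client that scales its update, and
# coordinate-wise median as a robust-aggregation defense.
import numpy as np

rng = np.random.default_rng(1)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(9)]  # benign updates
poisoned = [10.0 * rng.normal(size=10)]                     # scaled attack
updates = np.stack(honest + poisoned)

fedavg = updates.mean(axis=0)        # the mean is dominated by the attacker
median = np.median(updates, axis=0)  # the honest majority wins per coordinate

print("max |FedAvg update| :", np.abs(fedavg).max())  # inflated by poisoning
print("max |median update| :", np.abs(median).max())  # near the honest scale
```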