Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark
Federated learning has emerged as a promising paradigm for privacy-preserving
collaboration among different parties. With its growing popularity, a wave of
approaches has recently been proposed to address its various real-world
challenges. In this survey, we provide a systematic overview of important
recent developments in federated learning research. First, we introduce the
history and key terminology of the area. Then,
we comprehensively review three basic lines of research: generalization,
robustness, and fairness, by introducing their respective background concepts,
task settings, and main challenges. We also offer a detailed overview of
representative literature on both methods and datasets. We further benchmark
the reviewed methods on several well-known datasets. Finally, we point out
several open issues in this field and suggest opportunities for further
research. We also provide a public website to continuously track developments
in this fast-advancing field: https://github.com/WenkeHuang/MarsFL
Comment: 22 pages, 4 figures
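As background for the methods the survey benchmarks, the following is a minimal sketch of one FedAvg-style aggregation round, the standard baseline for collaborative training without data sharing. The linear model, simulated clients, and function names are our own illustrative assumptions, not taken from the survey.

```python
# Minimal sketch of one FedAvg round: each client runs local SGD on its own
# data, and the server averages the resulting weights, weighted by local
# dataset size. Illustrative only; a linear least-squares model stands in
# for an arbitrary learner.
import numpy as np

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: SGD on a linear least-squares loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """Aggregate client updates, weighted by local dataset size."""
    total = sum(len(y) for _, y in clients)
    updates = [local_sgd(global_w, X, y) * (len(y) / total) for X, y in clients]
    return np.sum(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):                              # three simulated clients
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

w = np.zeros(2)
for _ in range(10):                             # ten communication rounds
    w = fedavg_round(w, clients)
print("estimated weights:", w)
```

Weighting each client's update by its local dataset size keeps the average unbiased when data volumes differ, a detail that fairness-oriented aggregation schemes typically revisit.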
Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning
In today's data-driven landscape, the delicate equilibrium between
safeguarding user privacy and unleashing data potential stands as a paramount
concern. Federated learning, which enables collaborative model training without
necessitating data sharing, has emerged as a privacy-centric solution. This
decentralized approach, however, introduces security challenges, notably
poisoning and backdoor attacks, in which malicious entities inject corrupted
data. Our research,
initially spurred by test-time evasion attacks, investigates the intersection
of adversarial training and backdoor attacks within federated learning,
introducing Adversarial Robustness Unhardening (ARU). ARU is employed by a
subset of adversaries to intentionally undermine model robustness during
decentralized training, rendering models susceptible to a broader range of
evasion attacks. We present extensive empirical experiments evaluating ARU's
impact on adversarial training and existing robust aggregation defenses against
poisoning and backdoor attacks. Our findings inform strategies for enhancing
ARU to counter current defensive measures and highlight the limitations of
existing defenses, offering insights for bolstering them against ARU.
Comment: 8 pages (6 main pages of text), 4 figures, 2 tables. Prepared for a
NeurIPS workshop on backdoor attacks
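To make the unhardening idea concrete, here is a hypothetical numpy sketch contrasting a benign client that hardens its local model with FGSM-style adversarial training against an ARU-style adversary that deliberately trains on clean data only. The logistic model, the FGSM perturbation, and all names are illustrative assumptions under simplified conditions, not the paper's exact procedure.

```python
# Sketch: a benign client trains on FGSM-perturbed inputs (hardening), while
# an ARU-style client trains on clean inputs only; naively averaging the two
# updates drags the aggregate away from the robust solution.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def client_update(w, X, y, lr=0.1, steps=20, eps=0.0):
    """Local logistic-regression training; eps > 0 enables FGSM hardening."""
    w = w.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        if eps > 0:
            # FGSM input perturbation: sign of the per-example input gradient,
            # which for logistic regression is (p_i - y_i) * w for example i.
            X_train = X + eps * np.sign(np.outer(p - y, w))
        else:
            X_train = X                       # ARU-style client: clean data only
        p = sigmoid(X_train @ w)
        w -= lr * X_train.T @ (p - y) / len(y)  # cross-entropy gradient step
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)
benign = client_update(w, X, y, eps=0.3)      # adversarially hardened update
aru = client_update(w, X, y, eps=0.0)         # unhardening adversary's update
print("naive FedAvg of the two updates:", 0.5 * (benign + aru))
```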
Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning
Federated learning (FL) is vulnerable to poisoning attacks, where adversaries
corrupt the global aggregation results and cause denial-of-service (DoS).
Unlike recent model poisoning attacks that optimize the amplitude of malicious
perturbations along certain prescribed directions to cause DoS, we propose a
Flexible Model Poisoning Attack (FMPA) that can achieve versatile attack goals.
We consider a practical threat scenario where no extra knowledge about the FL
system (e.g., aggregation rules or updates on benign devices) is available to
adversaries. FMPA exploits the global historical information to construct an
estimator that predicts the next round of the global model as a benign
reference. It then fine-tunes the reference model to obtain the desired
poisoned model with low accuracy and small perturbations. Besides the goal of
causing DoS, FMPA can be naturally extended to launch a fine-grained
controllable attack, making it possible to precisely reduce the global
accuracy. Armed with such precise control, malicious FL service providers can
gain advantages over their competitors without being noticed, opening a new
attack surface in FL beyond DoS. Even for the purpose of DoS, experiments
show that FMPA significantly decreases the global accuracy, outperforming six
state-of-the-art attacks.
Comment: Accepted by the 32nd International Joint Conference on Artificial
Intelligence (IJCAI-23, Main Track)
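The following is an illustrative sketch of the FMPA idea under strong simplifying assumptions: predict the next global model by linear extrapolation of its history as a benign reference, then accumulate a small perturbation of that reference until accuracy falls to a chosen target. The paper's actual estimator and fine-tuning are more sophisticated; the linear classifier and the names predict_reference and craft_poison are ours.

```python
# Sketch: historical extrapolation gives a benign reference model; small
# nudges orthogonal to it rotate the decision boundary until accuracy drops
# to a chosen target, illustrating fine-grained (rather than DoS-only) control.
import numpy as np

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == (y > 0)))

def predict_reference(history):
    """One-step linear extrapolation of past global models."""
    return history[-1] + (history[-1] - history[-2])

def craft_poison(w_ref, X, y, target_acc, step=0.05, max_iter=200):
    """Accumulate small nudges orthogonal to the reference until global
    accuracy falls to the target, keeping the total perturbation modest."""
    w = w_ref.copy()
    d = np.array([w_ref[1], -w_ref[0]])       # direction orthogonal to w_ref
    d /= np.linalg.norm(d)
    for _ in range(max_iter):
        if accuracy(w, X, y) <= target_acc:
            break
        w += step * d
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] - X[:, 1])                          # "true" labels
history = [np.array([0.8, -0.8]), np.array([0.9, -0.9])]  # past global models
w_ref = predict_reference(history)            # predicted next global model
w_bad = craft_poison(w_ref, X, y, target_acc=0.6)  # controlled accuracy drop
print("reference acc:", accuracy(w_ref, X, y))
print("poisoned  acc:", accuracy(w_bad, X, y))
```

Stopping at a target accuracy rather than minimizing it outright is what distinguishes the fine-grained attack goal from plain DoS in this toy setting.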
- …