AFLGuard: Byzantine-robust Asynchronous Federated Learning
Federated learning (FL) is an emerging machine learning paradigm, in which
clients jointly learn a model with the help of a cloud server. A fundamental
challenge of FL is that the clients are often heterogeneous, e.g., they have
different computing powers, and thus the clients may send model updates to the
server with substantially different delays. Asynchronous FL aims to address
this challenge by enabling the server to update the model once any client's
model update reaches it without waiting for other clients' model updates.
However, like synchronous FL, asynchronous FL is also vulnerable to poisoning
attacks, in which malicious clients manipulate the model via poisoning their
local data and/or model updates sent to the server. Byzantine-robust FL aims to
defend against poisoning attacks. In particular, Byzantine-robust FL can learn
an accurate model even if some clients are malicious and have Byzantine
behaviors. However, most existing studies on Byzantine-robust FL focused on
synchronous FL, leaving asynchronous FL largely unexplored. In this work, we
bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL
method. We show that, both theoretically and empirically, AFLGuard is robust
against various existing and adaptive poisoning attacks (both untargeted and
targeted). Moreover, AFLGuard outperforms existing Byzantine-robust
asynchronous FL methods.
Comment: Accepted by ACSAC 202
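As an illustration of the asynchronous update pattern described above, the following is a minimal sketch in which the server applies each client's update as soon as it arrives, after a simple robustness check; the acceptance rule (distance to a trusted reference update) and all names here are illustrative assumptions, not AFLGuard's actual criterion.

import numpy as np

def async_robust_update(global_model, client_update, trusted_update, lr=0.1, tol=2.0):
    """Apply one asynchronous update if it passes a robustness check.

    All arrays are 1-D numpy parameter/gradient vectors. The acceptance rule
    below (distance to a trusted reference update) is a simplified stand-in,
    not AFLGuard's actual filtering criterion.
    """
    # Accept the update only if it is not too far from the trusted reference.
    if np.linalg.norm(client_update - trusted_update) <= tol * np.linalg.norm(trusted_update):
        # Update immediately, without waiting for other clients' updates.
        return global_model - lr * client_update
    # Otherwise discard the suspicious update and keep the current model.
    return global_model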
SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection
In this work, we introduce SureFED, a novel framework for Byzantine-robust
federated learning. Unlike many existing defense methods that rely on
statistically robust quantities, making them vulnerable to stealthy and
colluding attacks, SureFED establishes trust using the local information of
benign clients. SureFED utilizes uncertainty-aware model evaluation and
introspection to safeguard against poisoning attacks. In particular, each
client independently trains a clean local model exclusively using its local
dataset, acting as the reference point for evaluating model updates. SureFED
leverages Bayesian models that provide model uncertainties and play a crucial
role in the model evaluation process. Our framework exhibits robustness even
when the majority of clients are compromised, remains agnostic to the number of
malicious clients, and is well-suited for non-IID settings. We theoretically
prove the robustness of our algorithm against data and model poisoning attacks
in a decentralized linear regression setting. Proof-of-concept evaluations on
benchmark image classification data demonstrate the superiority of SureFED over
state-of-the-art defense methods under various colluding and non-colluding
data and model poisoning attacks.
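A minimal sketch of the kind of uncertainty-aware inspection the abstract describes, assuming a Gaussian posterior over each parameter of the client's clean local model; the z-score rule and all names are illustrative stand-ins for SureFED's Bayesian model evaluation.

import numpy as np

def inspect_update(peer_params, own_mean, own_std, z_max=3.0):
    """Evaluate a peer's model update against the client's own clean Bayesian model.

    own_mean / own_std: per-parameter posterior mean and standard deviation of
    the locally trained reference model. The z-score threshold is an
    illustrative assumption, not SureFED's actual decision rule.
    """
    z = np.abs(peer_params - own_mean) / (own_std + 1e-8)
    # Treat the peer update as trustworthy only if it stays within the
    # uncertainty band of the client's own clean model.
    return bool(np.mean(z) <= z_max)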
Privacy and Robustness in Federated Learning: Attacks and Defenses
As data are increasingly being stored in different silos and societies
becoming more aware of data privacy issues, the traditional centralized
training of artificial intelligence (AI) models is facing efficiency and
privacy challenges. Recently, federated learning (FL) has emerged as an
alternative solution and continues to thrive in this new reality. Existing FL
protocol design has been shown to be vulnerable to adversaries within or
outside of the system, compromising data privacy and system robustness. Besides
training powerful global models, it is of paramount importance to design FL
systems that have privacy guarantees and are resistant to different types of
adversaries. In this paper, we conduct the first comprehensive survey on this
topic. Through a concise introduction to the concept of FL, and a unique
taxonomy covering 1) threat models, 2) poisoning attacks on robustness and
their defenses, and 3) inference attacks on privacy and their defenses, we provide an
accessible review of this important topic. We highlight the intuitions, key
techniques, and fundamental assumptions adopted by various attacks and
defenses. Finally, we discuss promising future research directions towards
robust and privacy-preserving federated learning.
Comment: arXiv admin note: text overlap with arXiv:2003.02133; text overlap
with arXiv:1911.11815 by other authors
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks
Federated learning is a promising direction to tackle the privacy issues
related to sharing patients' sensitive data. Often, federated systems in the
medical image analysis domain assume that the participating local clients are
honest. Several studies report mechanisms through which a set of
malicious clients can be introduced to poison the federated setup,
hampering the performance of the global model. To overcome this, robust
aggregation methods have been proposed that defend against those attacks. We
observe that most of the state-of-the-art robust aggregation methods are
heavily dependent on the distance between the parameters or gradients of
malicious clients and benign clients, which makes them prone to local model
poisoning attacks when the parameters or gradients of malicious and benign
clients are close. Leveraging this, we introduce DISBELIEVE, a local model
poisoning attack that creates malicious parameters or gradients whose
distance to the benign clients' parameters or gradients, respectively, is low,
while their adverse effect on the global model's performance is high.
Experiments on three publicly available medical image datasets demonstrate the
efficacy of the proposed DISBELIEVE attack as it significantly lowers the
performance of state-of-the-art robust aggregation methods for
medical image analysis. Furthermore, compared to state-of-the-art local model
poisoning attacks, the DISBELIEVE attack is also effective on natural images, where
we observe a severe drop in the global model's multi-class classification
performance on the benchmark CIFAR-10 dataset.
Comment: Accepted by MICCAI 2023 - DeCa
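The core idea of the attack can be sketched as follows: craft a malicious update that stays within the benign clients' own spread around their mean, so distance-based aggregation cannot separate it, while pushing in a harmful direction. The distance budget and the harmful direction below are illustrative assumptions, not the paper's exact optimization.

import numpy as np

def disbelieve_style_update(benign_updates, harmful_direction):
    """Craft a malicious update that stays close to benign ones.

    benign_updates: (n_clients, dim) array of benign parameter/gradient updates
    the attacker observes or estimates. harmful_direction: direction the
    attacker wants the aggregate to move along. The budget (the benign spread
    around the mean) is an illustrative stand-in for the paper's constraint.
    """
    mean = benign_updates.mean(axis=0)
    budget = np.linalg.norm(benign_updates - mean, axis=1).max()
    unit = harmful_direction / (np.linalg.norm(harmful_direction) + 1e-12)
    # Push exactly as far as the benign spread allows, so distance-based
    # defenses cannot distinguish the malicious update from benign ones.
    return mean + budget * unit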
FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs
This paper introduces FedMLSecurity, a benchmark that simulates adversarial
attacks and corresponding defense mechanisms in Federated Learning (FL). As an
integral module of the open-sourced library FedML that facilitates FL algorithm
development and performance comparison, FedMLSecurity enhances the security
assessment capacity of FedML. FedMLSecurity comprises two principal components:
FedMLAttacker, which simulates attacks injected into FL training, and
FedMLDefender, which emulates defensive strategies designed to mitigate the
impacts of the attacks. FedMLSecurity is open-sourced and is customizable to
a wide range of machine learning models (e.g., Logistic Regression, ResNet,
GAN, etc.) and federated optimizers (e.g., FedAVG, FedOPT, FedNOVA, etc.).
Experimental evaluations in this paper also demonstrate the ease of application
of FedMLSecurity to Large Language Models (LLMs), further reinforcing its
versatility and practical utility in various scenarios.
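To make the attacker/defender split concrete, here is a library-agnostic sketch of the pattern the benchmark describes: one component injects corrupted client updates, another hardens the aggregation. All class and method names below are hypothetical and do not reflect FedML's or FedMLSecurity's actual API.

import numpy as np

class RandomNoiseAttacker:
    """Hypothetical attacker: corrupts a fraction of client updates with noise."""
    def __init__(self, malicious_fraction=0.2, scale=10.0, seed=0):
        self.malicious_fraction = malicious_fraction
        self.scale = scale
        self.rng = np.random.default_rng(seed)

    def inject(self, client_updates):
        # Replace the first few updates with large Gaussian noise.
        n_bad = int(len(client_updates) * self.malicious_fraction)
        for i in range(n_bad):
            client_updates[i] = self.rng.normal(0.0, self.scale, size=client_updates[i].shape)
        return client_updates

class MedianDefender:
    """Hypothetical defender: coordinate-wise median instead of a plain mean."""
    def aggregate(self, client_updates):
        return np.median(np.stack(client_updates), axis=0)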
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning
Federated learning, as a distributed learning paradigm that trains models on local devices without accessing the training data,
is vulnerable to Byzantine poisoning adversarial attacks. We argue that the federated learning model has to avoid these kinds of
adversarial attacks by filtering out the adversarial clients through the federated aggregation operator. We propose a
dynamic federated aggregation operator that dynamically discards adversarial clients and prevents the corruption of
the global learning model. We assess it as a defense against adversarial attacks by deploying a deep learning classification model in
a federated learning setting on the Fed-EMNIST Digits, Fashion MNIST, and CIFAR-10 image datasets. The results show that the
dynamic selection of the clients to aggregate enhances the performance of the global learning model and discards the adversarial
and poor (low-quality) clients.
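A minimal sketch of a dynamic aggregation operator in the spirit of the abstract: each round, clients are scored (for instance, by validation accuracy on server-held data), and only those clearing a threshold recomputed that round are averaged. The scoring and quantile threshold are illustrative assumptions, not the paper's exact operator.

import numpy as np

def dynamic_filtering_aggregate(client_params, client_scores, keep_quantile=0.5):
    """Average only clients whose score clears a dynamically recomputed threshold.

    client_params: list of 1-D parameter arrays. client_scores: per-client
    quality scores (e.g., validation accuracy). The quantile threshold is an
    illustrative assumption, not the paper's exact selection rule.
    """
    scores = np.asarray(client_scores)
    threshold = np.quantile(scores, keep_quantile)  # recomputed every round
    kept = [p for p, s in zip(client_params, scores) if s >= threshold]
    # Adversarial and poor-quality clients fall below the threshold and are
    # simply excluded from this round's average.
    return np.mean(np.stack(kept), axis=0)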
A Secure Federated Learning Framework for Residential Short Term Load Forecasting
Smart meter measurements, though critical for accurate demand forecasting,
face several drawbacks, including consumers' privacy and data breach issues, to
name a few. Recent literature has explored Federated Learning (FL) as a
promising privacy-preserving machine learning alternative which enables
collaborative learning of a model without exposing private raw data for short
term load forecasting. Despite its virtues, standard FL is still vulnerable to
an intractable cyber threat known as the Byzantine attack, carried out by faulty
and/or malicious clients. Therefore, to improve the robustness of federated
short-term load forecasting against Byzantine threats, we develop a
state-of-the-art differentially private secured FL-based framework that ensures
the privacy of the individual smart meter's data while protecting the security of
FL models and architecture. Our proposed framework leverages the idea of
gradient quantization through the Sign Stochastic Gradient Descent (SignSGD)
algorithm, where the clients only transmit the `sign' of the gradient to the
control centre after local model training. As we highlight through our
experiments involving benchmark neural networks with a set of Byzantine attack
models, our proposed approach mitigates such threats quite effectively and thus
outperforms conventional Fed-SGD models.
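A minimal sketch of the SignSGD exchange the abstract describes: clients transmit only the sign of each gradient coordinate, and the control centre aggregates the signs by majority vote before taking a small step. The vote rule and step size are standard SignSGD choices, not details taken from the paper.

import numpy as np

def client_message(local_gradient):
    """Client side: transmit only the sign of each gradient coordinate."""
    return np.sign(local_gradient)

def server_step(global_params, sign_messages, lr=1e-3):
    """Control centre: majority-vote the signs and take a small step.

    A minority of Byzantine clients can only flip coordinates where the benign
    majority is already nearly split, which is the intuition behind the
    robustness of sign-based aggregation.
    """
    vote = np.sign(np.sum(np.stack(sign_messages), axis=0))
    return global_params - lr * vote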