FedGT: Identification of Malicious Clients in Federated Learning with Secure Aggregation
We propose FedGT, a novel framework for identifying malicious clients in
federated learning with secure aggregation. Inspired by group testing, the
framework leverages overlapping groups of clients to identify the presence of
malicious clients in the groups via a decoding operation. The clients
identified as malicious are then removed from the training of the model, which
is performed over the remaining clients. By choosing the size, number, and
overlap between groups, FedGT strikes a balance between privacy and security.
Specifically, the server learns the aggregated model of the clients in each
group: vanilla federated learning and secure aggregation correspond to the
extreme cases of FedGT with group size equal to one and to the total number of
clients, respectively. The effectiveness of FedGT is demonstrated through
extensive experiments on the MNIST, CIFAR-10, and ISIC2019 datasets in a
cross-silo setting under different data-poisoning attacks. These experiments
showcase FedGT's ability to identify malicious clients, resulting in high model
utility. We further show that FedGT significantly outperforms the private
robust aggregation approach based on the geometric median recently proposed by
Pillutla et al. on heterogeneous client data (ISIC2019) and in the presence of
targeted attacks (CIFAR-10 and ISIC2019).

Comment: 27 pages, 13 figures
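To illustrate the group-testing idea behind the abstract, the following is a minimal, hypothetical sketch of a COMP-style (combinatorial orthogonal matching pursuit) decoder over overlapping client groups. The assignment matrix, outcome model, and decoding rule here are illustrative assumptions, not the paper's actual decoder: a client that belongs to at least one group whose aggregate looks clean is declared benign, and all remaining clients are flagged as potentially malicious.

```python
import numpy as np

def comp_decode(assignment, outcomes):
    """COMP-style decoding sketch (hypothetical, not the paper's decoder).

    assignment: (n_groups, n_clients) 0/1 matrix; entry (g, i) = 1 if
                client i is in group g.
    outcomes:   length-n_groups booleans; True if group g's aggregated
                update is judged poisoned by some test.
    Returns the indices of clients flagged as potentially malicious.
    """
    assignment = np.asarray(assignment, dtype=bool)
    outcomes = np.asarray(outcomes, dtype=bool)
    benign = np.zeros(assignment.shape[1], dtype=bool)
    for g, positive in enumerate(outcomes):
        if not positive:
            # Every client in a clean (negative) group is cleared.
            benign |= assignment[g]
    return [i for i in range(assignment.shape[1]) if not benign[i]]

# Toy example: 6 clients, 4 overlapping groups; client 2 is malicious.
A = [[1, 1, 1, 0, 0, 0],
     [0, 0, 1, 1, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1]]
truth = {2}  # ground-truth malicious set, used only to simulate outcomes
outcomes = [any(c in truth for c, m in enumerate(row) if m) for row in A]
print(comp_decode(A, outcomes))  # -> [2]
```

With overlapping groups, a handful of group-level tests can localize the malicious clients while the server only ever sees per-group aggregates, which is the privacy/security trade-off the abstract describes.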