Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning

Abstract

Federated learning, as a distributed learning paradigm that trains on local devices without accessing the training data, is vulnerable to Byzantine poisoning adversarial attacks. We argue that the federated learning model has to defend against such adversarial attacks by filtering out the adversarial clients through the federated aggregation operator. We propose a dynamic federated aggregation operator that discards adversarial clients and prevents the corruption of the global learning model. We assess it as a defense against adversarial attacks by deploying a deep learning classification model in a federated learning setting on the Fed-EMNIST Digits, Fashion MNIST, and CIFAR-10 image datasets. The results show that dynamically selecting the clients to aggregate enhances the performance of the global learning model and discards both the adversarial clients and the poor clients (those with low-quality models).
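
The abstract does not reproduce the operator itself. As a minimal sketch of the idea, assuming an aggregation rule that scores each client's update against a robust center and averages only the survivors, one round of dynamic filtering could look as follows. The function name dynamic_filtered_average and the MAD-based cutoff are illustrative assumptions, not the authors' exact operator:

    import numpy as np

    def dynamic_filtered_average(client_updates, threshold=2.0):
        """Average client updates after discarding likely-poisoned outliers.

        client_updates: list of equally shaped numpy arrays (one per client).
        Clients whose distance to the coordinate-wise median deviates from
        the typical distance by more than `threshold` median absolute
        deviations are dropped from this round's aggregation.
        (Illustrative sketch only; not the operator proposed in the paper.)
        """
        updates = np.stack([u.ravel() for u in client_updates])

        # Robust center of the round's updates and each client's distance to it.
        center = np.median(updates, axis=0)
        distances = np.linalg.norm(updates - center, axis=1)

        # Median absolute deviation of the distances (robust spread estimate).
        med_dist = np.median(distances)
        mad = np.median(np.abs(distances - med_dist)) + 1e-12

        # Keep clients whose distance is not an outlier; fall back to plain
        # averaging if the rule would flag every client.
        keep = np.abs(distances - med_dist) / mad <= threshold
        if not keep.any():
            keep[:] = True

        aggregated = updates[keep].mean(axis=0).reshape(client_updates[0].shape)
        return aggregated, keep

For instance, mixing eight honest updates drawn around 0 with two poisoned updates drawn around 5 yields a keep mask that flags the two outliers as False, so only the honest clients enter the round's average and the global model is shielded from the poisoned contributions.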
