A Survey on Federated Learning Poisoning Attacks and Defenses

Abstract

As a distributed machine learning technique, federated learning enables multiple clients to collaboratively build a model over decentralized data without explicitly aggregating that data. Because of its ability to break data silos, federated learning has received increasing attention in many fields, including finance, healthcare, and education. However, the invisibility of clients' training data and of the local training process gives rise to security issues. Many recent works have studied security attacks and defenses in federated learning, but there has been no dedicated survey on poisoning attacks against federated learning and the corresponding defenses. In this paper, we review state-of-the-art federated learning poisoning attacks and defenses, and point out future directions in these areas.
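To make the setting concrete, the following is a minimal sketch of federated averaging (FedAvg), the canonical aggregation protocol the abstract alludes to: clients train locally on private data and only send model updates to the server. The toy linear model, function names, and the comment marking where a poisoning attack could enter are illustrative assumptions, not details taken from the surveyed paper.

    import numpy as np

    def local_update(global_weights, client_data, lr=0.1, epochs=1):
        """One client's local training step: refine the global linear model
        on private data (features X, labels y) without sharing the data.
        A poisoning (malicious) client would instead return a manipulated
        update, e.g. one trained on flipped labels or scaled adversarially."""
        w = global_weights.copy()
        X, y = client_data
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        return w

    def federated_averaging(client_datasets, rounds=10, dim=3):
        """Server-side FedAvg: only model updates reach the server,
        which averages them weighted by local dataset size."""
        global_w = np.zeros(dim)
        for _ in range(rounds):
            sizes, updates = [], []
            for data in client_datasets:
                updates.append(local_update(global_w, data))
                sizes.append(len(data[1]))
            weights = np.array(sizes) / sum(sizes)
            global_w = sum(wk * uk for wk, uk in zip(weights, updates))
        return global_w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_w = np.array([1.0, -2.0, 0.5])
        clients = []
        for _ in range(5):
            X = rng.normal(size=(50, 3))
            clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
        print("learned weights:", federated_averaging(clients))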
