352 research outputs found

    Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee

    The growing literature on Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL), in which multiple agents federatively build a better decision-making policy without sharing raw trajectories. Despite its promising applications, existing work on FRL fails to (I) provide a theoretical analysis of its convergence, and (II) account for random system failures and adversarial attacks. To this end, we propose the first FRL framework whose convergence is guaranteed while tolerating less than half of the participating agents suffering random system failures or acting as adversarial attackers. We prove that the sample efficiency of the proposed framework is guaranteed to improve with the number of agents and that the framework accounts for such potential failures and attacks. All theoretical results are empirically verified on various RL benchmark tasks.
    Comment: Published at NeurIPS 2021. Extended version with proofs and additional experimental details and results. New version changes: reduced file size of figures; added a diagram illustrating the problem setting; added link to code on GitHub; modified proof for Theorem 6 (highlighted in red).
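    A minimal sketch of the kind of fault tolerance the abstract describes (the paper's actual aggregation rule is not given here, so the coordinate-wise median below is an illustrative assumption): as long as fewer than half of the agents' updates are arbitrarily corrupted, a median-based aggregate stays close to the honest gradient.

```python
import numpy as np

def robust_aggregate(gradients: list[np.ndarray]) -> np.ndarray:
    """Aggregate per-agent policy gradients with a coordinate-wise median.

    The median of n values is unaffected as long as fewer than half of them
    are arbitrarily corrupted, matching the < 50% tolerance regime described
    in the abstract (the specific robust aggregator is an assumption here).
    """
    stacked = np.stack(gradients, axis=0)      # shape: (num_agents, dim)
    return np.median(stacked, axis=0)

# Toy usage: 5 agents, 2 of them adversarial (fewer than half).
rng = np.random.default_rng(0)
true_grad = np.ones(4)
honest = [true_grad + 0.01 * rng.standard_normal(4) for _ in range(3)]
malicious = [100.0 * rng.standard_normal(4) for _ in range(2)]
print(robust_aggregate(honest + malicious))    # stays close to the true gradient
```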

    Trustworthy Federated Learning: A Survey

    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in the literature on trustworthy centralized Machine Learning (ML)/Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, as well as to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust and is further broken down into different notions. Our survey covers trustworthiness challenges at every level of the FL setting. We present a comprehensive architecture of Trustworthy FL, address the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape.
    Comment: 45 pages, 8 figures, 9 tables.

    Secure and Efficient Federated Learning in Edge Computing

    Federated Learning (FL) has emerged as a promising paradigm for privacy-preserving Machine Learning (ML). It enables distributed end devices (clients) to collaboratively train a shared global model without exposing their local data. However, FL typically assumes that all clients are benign and trust the coordinating central server, which is unrealistic in many real-world scenarios. In practice, clients can harm the FL process by sharing poisonous model updates (known as a poisoning attack) or by sending counterfeit yet harmless parameters to the central server to obtain the trained global model without making an actual contribution (known as a free-riding attack), while the central server itself may malfunction or misbehave. Moreover, the deployment of FL in real-world applications is hindered by the high communication overhead between the server and clients, which are often at the network edge with limited bandwidth. This thesis aims to develop novel FL approaches toward secure and efficient FL in edge computing. First, a novel lightweight blockchain-based FL framework is devised to mitigate the single point of failure of traditional FL; this is achieved by moving centralized model aggregation to distributed blockchain nodes. Incorporating the Inter-Planetary File System and a Verifiable Random Function, the proposed framework is energy-efficient and scales with the blockchain network size. Next, a secure and efficient federated edge learning system is proposed on top of the blockchain-based FL framework, with a communication-efficient training scheme to reduce the communication cost of clients and a secure model aggregation protocol to defend against poisoning attacks. Then, an original Shapley value-based defense mechanism is designed to further enhance the robustness of FL, not only against adversarial poisoning attacks but also against stealthy free-riding attacks. Extensive experiments show that the proposed approach detects typical free-riding attacks with high precision and is resistant to poisoning attacks launched by adversarial clients.
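    A minimal sketch of how a Shapley value-based contribution score can expose free-riders, assuming a hypothetical `utility` function (e.g., validation accuracy of a model aggregated from a subset of client updates); the thesis's actual defense mechanism is not reproduced here, only the general Shapley-value idea.

```python
from itertools import permutations

def shapley_contributions(clients, utility):
    """Exact Shapley value of each client under a coalition utility function.

    `utility(coalition)` is assumed to return, e.g., the validation accuracy
    of a model aggregated from that subset of clients' updates. Clients whose
    Shapley value is near zero contribute nothing and can be flagged as
    potential free-riders.
    """
    values = {c: 0.0 for c in clients}
    perms = list(permutations(clients))
    for order in perms:
        coalition = []
        prev = utility(tuple(coalition))
        for c in order:
            coalition.append(c)
            cur = utility(tuple(coalition))
            values[c] += (cur - prev) / len(perms)   # average marginal gain
            prev = cur
    return values

# Toy usage: clients A and B contribute, C free-rides (adds no value).
base = {frozenset(): 0.0, frozenset({"A"}): 0.6,
        frozenset({"B"}): 0.5, frozenset({"A", "B"}): 0.8}
def utility(coalition):
    return base[frozenset(coalition) - {"C"}]        # C never improves utility

print(shapley_contributions(["A", "B", "C"], utility))  # C's score is 0
```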