    Tiny Groups Tackle Byzantine Adversaries

    A popular technique for tolerating malicious faults in open distributed systems is to establish small groups of participants, each of which has a non-faulty majority. These groups are used as building blocks to design attack-resistant algorithms. Despite over a decade of active research, current constructions require group sizes of O(log n), where n is the number of participants in the system. This group size is important since communication and state costs scale polynomially with this parameter. Given the stubbornness of this logarithmic barrier, a natural question is whether better bounds are possible. Here, we consider an attacker that controls a constant fraction of the total computational resources in the system. By leveraging proof-of-work (PoW), we demonstrate how to reduce the group size exponentially to O(log log n) while maintaining strong security guarantees. This reduction in group size yields a significant improvement in communication and state costs.
    Comment: This work is supported by National Science Foundation grant CCF 1613772 and a C Spire Research Gift.
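    To make the claimed exponential reduction concrete, the minimal Python sketch below evaluates log n versus log log n for a few system sizes. It ignores constant factors and is only an illustration of the growth rates, not the paper's actual group-size formula.

```python
import math

# Illustrative only: compare the asymptotic group-size bounds from the abstract
# for several system sizes n. Constants are ignored, so these are growth rates,
# not the paper's exact group sizes.
for n in [10**3, 10**6, 10**9, 10**12]:
    log_n = math.log2(n)
    loglog_n = math.log2(log_n)
    print(f"n = {n:>15,}:  log n ~ {log_n:6.1f}   log log n ~ {loglog_n:4.1f}")
```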

    SPFL: A Self-purified Federated Learning Method Against Poisoning Attacks

    While federated learning (FL) is attractive for training on distributed data while preserving privacy, the questionable credibility of participating clients and their non-inspectable data pose new security threats. Among these, poisoning attacks are particularly rampant and hard to defend against without compromising the privacy, performance, or other desirable properties of FL. To tackle this problem, we propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model to supervise the training of the aggregated model in each iteration. The purification is performed by attention-guided self-knowledge distillation, where the teacher and student models are optimized locally for the task loss, distillation loss, and attention-based loss simultaneously. SPFL imposes no restriction on the communication protocol or the aggregator at the server, and it can work in tandem with any existing secure aggregation algorithms and protocols for augmented security and privacy guarantees. We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against various poisoning attacks. The attack success rate of an SPFL-trained model is at most 3% above that of a clean model, even when the poisoning attack is launched in every iteration and all but one of the clients are malicious. Meanwhile, SPFL improves model quality on normal inputs compared to FedAvg, both under attack and in the absence of an attack.
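    As a rough illustration of how a task loss, a distillation loss, and an attention-based loss can be optimized together, here is a PyTorch sketch. The function name `spfl_local_loss`, the temperature, and the weights `alpha`/`beta` are hypothetical placeholders; this is not the paper's exact loss formulation or attention-extraction scheme.

```python
import torch
import torch.nn.functional as F

def spfl_local_loss(student_logits, teacher_logits, student_attn, teacher_attn,
                    labels, temperature=4.0, alpha=0.5, beta=0.5):
    """Sketch of a combined objective: task loss + distillation loss + attention loss.

    `student_*` would come from the aggregated model being trained and `teacher_*`
    from the trusted, locally purified model; the weighting scheme here is assumed.
    """
    # Standard supervised task loss on the local data.
    task_loss = F.cross_entropy(student_logits, labels)

    # Soft-label distillation loss from the purified (teacher) model.
    distill_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Attention-based loss: match L2-normalized attention maps.
    def normalize(a):
        a = a.flatten(start_dim=1)
        return a / (a.norm(dim=1, keepdim=True) + 1e-8)

    attn_loss = F.mse_loss(normalize(student_attn), normalize(teacher_attn))

    return task_loss + alpha * distill_loss + beta * attn_loss
```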

    Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting

    Federated learning has exhibited vulnerabilities to Byzantine attacks, where Byzantine attackers can send arbitrary gradients to the central server to destroy the convergence and performance of the global model. A wealth of robust AGgregation Rules (AGRs) has been proposed to defend against Byzantine attacks. However, Byzantine clients can still circumvent robust AGRs when data is non-Identically and Independently Distributed (non-IID). In this paper, we first reveal the root causes of the performance degradation of current robust AGRs in non-IID settings: the curse of dimensionality and gradient heterogeneity. To address this issue, we propose GAS, a gradient splitting approach that can successfully adapt existing robust AGRs to non-IID settings. We also provide a detailed convergence analysis for the case where existing robust AGRs are combined with GAS. Experiments on various real-world datasets verify the efficacy of the proposed GAS. The implementation code is available at https://github.com/YuchenLiu-a/byzantine-gas
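    The abstract does not spell out the splitting and selection steps, so the sketch below only illustrates the general gradient-splitting idea under assumed choices: coordinate-wise median as the per-split robust rule and a simple distance-based client score. It should not be read as the authors' exact GAS algorithm.

```python
import numpy as np

def split_and_aggregate(client_grads, num_splits=10, num_select=None):
    """Illustrative gradient-splitting aggregation (assumptions, not the paper's GAS).

    Each flattened client gradient is split into `num_splits` low-dimensional
    sub-vectors; a stand-in robust rule (coordinate-wise median) aggregates each
    split, clients are scored by their distance to the robust aggregates, and the
    gradients of the lowest-scoring clients are averaged.
    """
    grads = np.stack(client_grads)                  # shape: (num_clients, dim)
    splits = np.array_split(grads, num_splits, axis=1)

    scores = np.zeros(grads.shape[0])
    for sub in splits:                              # sub: (num_clients, sub_dim)
        robust = np.median(sub, axis=0)             # per-split robust aggregate
        scores += np.linalg.norm(sub - robust, axis=1)

    if num_select is None:
        num_select = max(1, grads.shape[0] // 2)    # keep the most consistent half
    keep = np.argsort(scores)[:num_select]
    return grads[keep].mean(axis=0)
```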