FLOD: Oblivious Defender for Private Byzantine-Robust Federated Learning with Dishonest-Majority

Abstract

Privacy and Byzantine-robustness are two major concerns of federated learning (FL), but mitigating both threats simultaneously is highly challenging: privacy-preserving strategies prohibit access to individual model updates to avoid leakage, while Byzantine-robust methods require such access for comprehensive mathematical analysis. Moreover, most Byzantine-robust methods only work in the honest-majority setting. We present FLOD, a novel oblivious defender for private Byzantine-robust FL in the dishonest-majority setting. At its core, we propose a novel Hamming distance-based aggregation method that resists > 1/2 Byzantine attacks by using a small root-dataset and server-model to bootstrap trust. Furthermore, we employ two non-colluding servers and use additive homomorphic encryption (AHE) and secure two-party computation (2PC) primitives to construct efficient privacy-preserving building blocks for secure aggregation, in which we propose two novel in-depth variants of Beaver multiplication triples (MTs) that significantly reduce the overhead of Bit to Arithmetic (Bit2A) conversion and vector weighted sum aggregation (VSWA). Experiments on real-world and synthetic datasets demonstrate our effectiveness and efficiency: (i) FLOD defeats known Byzantine attacks with a negligible effect on accuracy and convergence, (ii) it achieves a reduction of ≈ 2× in the offline (resp. online) overhead of Bit2A and VSWA compared to ABY-AHE (resp. ABY-MT) based methods (NDSS'15), and (iii) it reduces total online communication and run-time by 167-1416× and 3.1-7.4× compared to FLGUARD (Cryptology ePrint Archive 2021/025).
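To make the aggregation rule concrete, the following is a minimal, non-private sketch of a Hamming distance-based weighting scheme in the spirit of the abstract: each client update is sign-quantized, compared against the server-model update trained on the small root-dataset, and weighted accordingly. The function names, the linear thresholding rule, and all parameters are illustrative assumptions; the actual FLOD protocol evaluates its aggregation obliviously under AHE/2PC rather than in the clear.

```python
import numpy as np

def sign_bits(update):
    """Quantize a model update to a 0/1 sign vector."""
    return (update >= 0).astype(np.uint8)

def hamming_aggregate(client_updates, server_update, threshold):
    """Weight each client by how closely its update signs match the
    server-model update, then take a weighted sum of the updates.
    The thresholding rule here is only an illustration."""
    server_bits = sign_bits(server_update)
    weights = []
    for upd in client_updates:
        # Hamming distance between the client's and the server's sign vectors.
        dist = np.count_nonzero(sign_bits(upd) ^ server_bits)
        # Clients whose distance exceeds the threshold contribute nothing.
        weights.append(max(threshold - dist, 0))
    weights = np.array(weights, dtype=np.float64)
    if weights.sum() == 0:
        return server_update  # fall back to the trusted server model
    weights /= weights.sum()
    return sum(w * upd for w, upd in zip(weights, client_updates))

# Usage: three benign clients and one sign-flipping Byzantine client.
rng = np.random.default_rng(0)
true_update = rng.normal(size=1000)
benign = [true_update + 0.1 * rng.normal(size=1000) for _ in range(3)]
byzantine = [-true_update]
agg = hamming_aggregate(benign + byzantine, true_update, threshold=300)
```

In this toy example the Byzantine client's sign vector disagrees with the server model on essentially every coordinate, so its weight is clipped to zero, while the benign clients retain nonzero weights and dominate the weighted sum.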
