Dordis: Efficient Federated Learning with Dropout-Resilient Differential Privacy
Federated learning (FL) is increasingly deployed among multiple clients to
train a shared model over decentralized data. To address privacy concerns, FL
systems need to safeguard the clients' data from disclosure during training and
control data leakage through trained models when exposed to untrusted domains.
Distributed differential privacy (DP) offers an appealing solution in this
regard as it achieves a balanced tradeoff between privacy and utility without a
trusted server. However, existing distributed DP mechanisms are impractical in
the presence of client dropout, resulting in poor privacy guarantees or
degraded training accuracy. In addition, these mechanisms suffer from severe
efficiency issues.
We present Dordis, a distributed differentially private FL framework that is
highly efficient and resilient to client dropout. Specifically, we develop a
novel "add-then-remove" scheme that enforces the required noise level precisely
in each training round, even if some sampled clients drop out. This ensures
that the privacy budget is spent prudently despite unpredictable client
dynamics.
To boost performance, Dordis operates as a distributed parallel
architecture by encapsulating communication and computation operations
into stages. It automatically divides the global model aggregation into
several chunk-aggregation tasks and pipelines them for optimal speedup.
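As a rough illustration of the chunk-pipelining idea, the following minimal Python sketch overlaps the communication stage for one chunk with the computation stage for the previous one. Here receive_chunk and aggregate_chunk are hypothetical stand-ins; the actual system's stage decomposition and scheduling are more elaborate.

```python
import queue
import threading

def pipelined_aggregation(chunk_ids, receive_chunk, aggregate_chunk):
    # Overlap the communication stage (receiving one chunk's uploads)
    # with the computation stage (aggregating the previous chunk).
    q = queue.Queue(maxsize=2)  # small buffer decoupling the two stages

    def comm_stage():
        for cid in chunk_ids:
            q.put(receive_chunk(cid))  # network-bound work
        q.put(None)                    # sentinel: no more chunks

    t = threading.Thread(target=comm_stage)
    t.start()

    results = []
    while (payload := q.get()) is not None:
        results.append(aggregate_chunk(payload))  # compute-bound work
    t.join()
    return results
```

The bounded queue keeps the two stages loosely coupled: the receiver can run at most one chunk ahead, so memory stays bounded while neither stage sits idle.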
Large-scale deployment evaluations demonstrate that Dordis efficiently handles
client dropout in various realistic FL scenarios, achieving the optimal
privacy-utility tradeoff and accelerating training by up to 2.4x
compared to existing solutions.

Comment: This article has been accepted to ACM EuroSys '24.