Traditional federated learning (FL) allows clients to collaboratively train
a global model under the coordination of a central server, which has sparked
great interest in exploiting the private data distributed across clients.
However, the central server is a single point of failure: if it fails, the
whole system crashes. In addition, FL usually involves a large number of
clients, which incurs expensive communication costs. These challenges
motivate a communication-efficient design of decentralized FL.
In this paper, we propose CFL, an efficient and privacy-preserving global
model training protocol for FL in large-scale peer-to-peer networks. The
proposed CFL protocol aggregates local contributions hierarchically through a
cluster-based aggregation mode and leverages an authenticated encryption
scheme to secure the communication, with keys distributed by a modified
secure communication key establishment protocol.
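To make the cluster-based aggregation mode concrete, the following minimal sketch (our illustration, not the paper's implementation; all function and variable names are hypothetical) shows cluster heads averaging their members' updates, with the cluster-level results then combined using size weights so that the outcome equals the plain average over all clients while reducing per-node communication.

```python
# Hypothetical sketch of cluster-based hierarchical aggregation
# (illustrative only; not the CFL authors' implementation).
from typing import Dict, List
import numpy as np

def aggregate_cluster(updates: List[np.ndarray]) -> np.ndarray:
    """A cluster head averages the local model updates of its members."""
    return np.mean(updates, axis=0)

def aggregate_global(cluster_means: Dict[str, np.ndarray],
                     cluster_sizes: Dict[str, int]) -> np.ndarray:
    """Combine cluster-level averages, weighted by cluster size, so the
    result equals the average over all participating clients."""
    total = sum(cluster_sizes.values())
    return sum(cluster_sizes[c] * m for c, m in cluster_means.items()) / total

# Two clusters: A with two members, B with one.
heads = {"A": aggregate_cluster([np.ones(3), 3 * np.ones(3)]),
         "B": aggregate_cluster([5 * np.ones(3)])}
global_update = aggregate_global(heads, {"A": 2, "B": 1})  # == 3 * ones(3)
```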
Theoretical analyses show that CFL guarantees the privacy of the local model
update parameters, as well as their integrity and authenticity, under the
widely adopted internal semi-honest and external malicious threat models.
In particular, the proposed key revocation based on public voting can
effectively defend against external adversaries that hijack honest
participants, ensuring the confidentiality of the communication keys.
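As a hedged illustration of how such voting-based revocation could be tallied (a sketch under an assumed majority-quorum rule; the paper's actual voting mechanism may differ, and all names are hypothetical), a node's keys are revoked once more than a quorum of peers accuse it:

```python
# Hypothetical sketch of public-voting key revocation (illustrative
# only; the quorum rule and all identifiers are our assumptions).
from collections import Counter
from typing import Dict, Set

def tally_revocation(votes: Dict[str, Set[str]], n_peers: int,
                     quorum: float = 0.5) -> Set[str]:
    """Revoke the keys of every node accused by more than a quorum of
    the n_peers participants. votes maps voter id -> accused node ids."""
    counts = Counter(node for accused in votes.values() for node in accused)
    return {node for node, c in counts.items() if c > quorum * n_peers}

votes = {"p1": {"p9"}, "p2": {"p9"}, "p4": {"p9"}, "p5": set()}
print(tally_revocation(votes, n_peers=5))  # {'p9'}: a majority accuses p9
```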
Moreover, the modified secure communication key establishment protocol
achieves a high network connectivity probability, ensuring the transmission
security of the system.
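The abstract does not name a concrete cipher; as one hedged example, an AEAD construction such as AES-GCM provides the confidentiality, integrity, and authenticity properties claimed above. The snippet below (using the third-party cryptography package; the key source, nonce handling, and metadata are illustrative assumptions) shows how a participant could protect a serialized model update under an established pairwise key.

```python
# Hypothetical example: protecting a serialized model update with an
# AEAD cipher (AES-GCM). In CFL the key would come from the key
# establishment protocol; here it is generated locally for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # stand-in for an established key
aead = AESGCM(key)

update_bytes = b"serialized local model update"  # placeholder payload
header = b"round=42;sender=client_7"             # authenticated, unencrypted
nonce = os.urandom(12)                           # must be unique per message

ciphertext = aead.encrypt(nonce, update_bytes, header)
# The receiver verifies integrity and authenticity while decrypting;
# tampering with the ciphertext or header raises InvalidTag.
plaintext = aead.decrypt(nonce, ciphertext, header)
assert plaintext == update_bytes
```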