Despite the great potential of Federated Learning (FL) for large-scale
distributed learning, current FL systems remain vulnerable to privacy
leakage because the local models trained by clients are exposed to the
central server. Consequently, secure aggregation protocols for FL have been
developed to conceal the local models from the server. However, we show that,
by manipulating the client selection process, the server can circumvent
secure aggregation and learn the local model of a victim client, indicating
that secure aggregation alone is inadequate for privacy protection. To tackle
this issue, we leverage blockchain technology to propose a verifiable client
selection protocol. Owing to the immutability and transparency of blockchain,
our protocol enforces a random selection of clients, preventing the server
from controlling the selection process at its discretion. We present security
proofs showing that our protocol resists this attack. Additionally,
we conduct several experiments on an Ethereum-like blockchain to demonstrate
the feasibility and practicality of our solution.

Comment: IEEE ICBC 202
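To make the idea of verifiable, server-independent client selection concrete, here is a minimal sketch of one possible approach: deriving the selected client set deterministically from a public block hash, so that any participant can recompute and audit the selection. The function name, hash-based scoring, and parameters below are illustrative assumptions for exposition, not the paper's actual protocol.

```python
import hashlib

def verifiable_select(block_hash: str, client_ids: list, k: int) -> list:
    """Pseudo-randomly select k clients using a public blockchain hash as seed.

    Because the block hash is immutable and publicly visible, every client
    can re-run this function and verify that the server did not bias the
    selection toward a victim client.
    """
    scored = []
    for cid in client_ids:
        # Score each client by hashing the block hash together with its ID;
        # the server cannot influence these scores without rewriting the chain.
        digest = hashlib.sha256(f"{block_hash}:{cid}".encode()).hexdigest()
        scored.append((digest, cid))
    # Sort by score and take the k lowest: a deterministic, auditable choice.
    scored.sort()
    return [cid for _, cid in scored[:k]]
```

Anyone holding the same block hash and client list reproduces exactly the same selection, which is what removes the server's unilateral control over the process.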