Federated learning (FL) has enabled multiple data owners (a.k.a. FL clients)
to train machine learning models collaboratively without revealing private
data. Since the FL server can only engage a limited number of clients in each
training round, FL client selection has become an important research problem.
Existing approaches generally focus on either enhancing FL model performance or
ensuring the fair treatment of FL clients. The problem of balancing
performance and fairness considerations when selecting FL clients remains open.
To address this problem, we propose the Fairness-aware Federated Client
Selection (FairFedCS) approach. Based on Lyapunov optimization, it dynamically
adjusts FL clients' selection probabilities by jointly considering their
reputations, how often they have participated in FL tasks, and their
contributions to the resulting model performance. By avoiding threshold-based reputation filtering,
it provides FL clients with opportunities to redeem their reputations after a
perceived poor performance, thereby further enhancing fair client treatment.
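The selection rule described above can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the drift-plus-penalty scoring, the virtual fairness queue, the field names (`reputation`, `contrib`, `queue`), and the trade-off parameter `V` are all assumptions chosen to show the general Lyapunov-optimization pattern of balancing per-round utility against accumulated fairness deficits.

```python
def select_clients(clients, k, V=1.0):
    """Illustrative Lyapunov-style client selection (NOT FairFedCS itself).

    Each client dict carries (hypothetical fields):
      'queue'      - virtual fairness queue, grows while under-selected,
      'reputation' - reputation score in [0, 1],
      'contrib'    - estimated contribution to model performance.
    """
    # Drift-plus-penalty style score: the queue term favors clients that
    # have been selected less often; V trades fairness for utility.
    def score(c):
        return c['queue'] + V * (c['reputation'] + c['contrib'])

    chosen = sorted(clients, key=score, reverse=True)[:k]
    chosen_ids = {id(c) for c in chosen}

    # Queue update: every client accrues a fair-share target of k/n per
    # round; selected clients pay down their backlog by 1.
    fair_share = k / len(clients)
    for c in clients:
        served = 1.0 if id(c) in chosen_ids else 0.0
        c['queue'] = max(c['queue'] + fair_share - served, 0.0)
    return chosen
```

Because the fairness queue keeps growing for any client that is passed over, even a client with a low reputation or contribution score is eventually selected again, mirroring the "reputation redemption" property: no client is permanently filtered out.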
Extensive experiments based on real-world multimedia datasets show that
FairFedCS achieves 19.6% higher fairness and 0.73% higher test accuracy on
average than the best-performing state-of-the-art approach.

Comment: Accepted by ICME 202