Federated learning (FL) has garnered considerable attention for its
privacy-preserving properties. However, because the server has no direct
control over how user data are distributed or curated, FL models can suffer
from group fairness issues: they may be biased with respect to sensitive
attributes such as race or gender, even when trained through a legally
compliant process. To address this concern, this paper proposes a novel
FL algorithm explicitly designed to mitigate group fairness issues. We show
empirically on the CelebA and ImSitu datasets that the proposed method
improves fairness both quantitatively and qualitatively, with minimal loss in
accuracy, under statistical heterogeneity and across varying numbers of
clients. Beyond improving fairness, the proposed FL algorithm is compatible
with local differential privacy (LDP), incurs negligible communication costs,
and requires minimal overhead when migrating existing FL systems from common
FL protocols such as FederatedAveraging (FedAvg). We also provide a
theoretical convergence rate guarantee for the proposed algorithm, along with
the Gaussian-mechanism noise level required to achieve a desired degree of
LDP. This
innovative approach holds significant potential to enhance the fairness and
effectiveness of FL systems, particularly in sensitive applications such as
healthcare or criminal justice.
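
The abstract invokes the Gaussian mechanism for LDP and compatibility with
FedAvg without detailing either. As background only, the minimal Python
sketch below shows the standard pattern these terms usually refer to: each
client clips its model update, adds Gaussian noise calibrated to a target
(epsilon, delta) before transmission, and the server averages the privatized
updates FedAvg-style. Everything here (the least-squares stand-in objective,
`clip_norm`, `epsilon`, `delta`, and all function names) is a hypothetical
illustration, not the paper's algorithm or its derived noise level.

```python
# Illustrative sketch only: a FedAvg-style client update with Gaussian-mechanism
# LDP noise added to the model delta before it leaves the device. This is NOT
# the paper's proposed algorithm; all hyperparameters are hypothetical.
import numpy as np

def gaussian_mechanism_sigma(clip_norm: float, epsilon: float, delta: float) -> float:
    """Textbook Gaussian-mechanism calibration:
    sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon, valid for epsilon < 1.
    Tighter analyses exist; this is the classic bound."""
    return clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def ldp_client_update(global_weights, local_data, lr=0.1, clip_norm=1.0,
                      epsilon=0.5, delta=1e-5, rng=None):
    """One local SGD step on a least-squares loss, then clip and noise the delta."""
    rng = rng or np.random.default_rng()
    X, y = local_data
    # Local gradient step (least-squares loss as a stand-in for the real objective).
    grad = X.T @ (X @ global_weights - y) / len(y)
    delta_w = -lr * grad
    # Clip so the update's L2 sensitivity is bounded by clip_norm.
    delta_w *= min(1.0, clip_norm / (np.linalg.norm(delta_w) + 1e-12))
    # Add Gaussian noise calibrated to (epsilon, delta)-LDP before sending to the server.
    sigma = gaussian_mechanism_sigma(clip_norm, epsilon, delta)
    return delta_w + rng.normal(0.0, sigma, size=delta_w.shape)

def fedavg_aggregate(global_weights, client_deltas):
    """Server side: FedAvg-style averaging of the already-privatized client deltas."""
    return global_weights + np.mean(client_deltas, axis=0)
```

Because the noise is injected on-device, the server only ever sees privatized
updates, which is what makes this pattern a *local* DP guarantee rather than
the central-DP variant where the server adds noise after aggregation.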