Federated Learning (FL) is a distributed machine learning scheme that enables
clients to train a shared global model without exchanging local data. The
presence of label noise can severely degrade FL performance, and some
existing studies have focused on algorithm design for label denoising. However,
they overlook an important issue: clients may not apply costly label
denoising strategies because they are self-interested and have heterogeneous
valuations of the FL performance. To fill this gap, we model the clients'
interactions as a novel label denoising game and characterize its equilibrium.
We also analyze the price of stability, which quantifies the difference in the
system performance (e.g., global model accuracy, social welfare) between the
equilibrium outcome and the socially optimal solution. We prove that the
equilibrium outcome always leads to a lower global model accuracy than the
socially optimal solution does. We further design an efficient algorithm to
compute the socially optimal solution. Numerical experiments on the MNIST dataset
show that the price of stability increases as the clients' data become noisier,
calling for an effective incentive mechanism.

Comment: Accepted to IEEE GLOBECOM 202