Federated learning allows mobile clients to jointly train a global model
without sending their private data to a central server. Extensive work has
studied the performance guarantees of the global model; however, it remains
unclear how each individual client influences the collaborative training
process. In this work, we define a new notion, called {\em Fed-Influence}, to
quantify this influence over the model parameters, and propose an effective
and efficient algorithm to estimate this metric. In particular, our design
satisfies several desirable properties: (1) it requires neither retraining nor
retracing, adding only linear computational overhead to clients and the server;
(2) it strictly maintains the tenets of federated learning, without revealing
any client's local private data; and (3) it works well on both convex and
non-convex loss functions, and does not require the final model to be optimal.
Empirical results on a synthetic dataset and the FEMNIST dataset demonstrate
that our estimation method can approximate Fed-Influence with small bias.
Further, we show an application of Fed-Influence in model debugging.