Federated learning algorithms perform reasonably well on independent and
identically distributed (IID) data. However, they degrade substantially in
heterogeneous environments, i.e., on Non-IID data. Although many studies have
sought to address this issue, recent findings indicate that existing methods
remain sub-optimal compared to training on IID data.
In this work, we carefully analyze existing methods in heterogeneous
environments. Interestingly, we find that regularizing the classifier's outputs
is quite effective in preventing performance degradation on Non-IID data.
Motivated by this observation, we propose Learning from Drift (LfD), a novel
method for effectively training models in heterogeneous settings. Our scheme
comprises two key components: drift estimation and drift regularization.
Specifically, LfD first estimates how different the local model is from the
global model (i.e., drift). The local model is then regularized such that it
does not move further in the direction of the estimated drift. In our
experiments, we evaluate each method through the lens of five key aspects of
federated learning: Generalization, Heterogeneity, Scalability, Forgetting, and
Efficiency. Comprehensive evaluation results clearly support the superiority of
LfD in federated learning with Non-IID data.
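To make the two components concrete, the following is a minimal sketch of a drift-regularized local objective, assuming drift is estimated as the gap between the local and global classifiers' outputs; the function name `drift_regularized_loss` and the weight `lambda_reg` are illustrative choices, not identifiers from the paper.

```python
# Minimal sketch of drift estimation and drift regularization (PyTorch).
# Assumption: drift is measured on classifier outputs (logits), and the
# regularizer penalizes local outputs that move further along that drift.
# Names (drift_regularized_loss, lambda_reg) are illustrative only.
import torch
import torch.nn.functional as F

def drift_regularized_loss(local_model, global_model, x, y, lambda_reg=0.1):
    local_logits = local_model(x)
    with torch.no_grad():
        global_logits = global_model(x)   # frozen global reference, no gradients
    drift = local_logits - global_logits  # drift estimation: local vs. global outputs
    # Drift regularization: penalize the squared drift so the local model
    # is discouraged from moving further away from the global model.
    reg = drift.pow(2).mean()
    return F.cross_entropy(local_logits, y) + lambda_reg * reg
```

In this sketch, the penalty simply shrinks the estimated drift; the paper's actual regularizer may differ in how the drift direction is penalized.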