As a distributed machine learning technique, federated learning (FL) enables
clients to collaboratively train a shared model under the coordination of an
edge server without exposing their local data. However, the heterogeneous data
distribution among clients often degrades model performance. To tackle this
issue, this paper introduces a prototype-based regularization strategy.
Specifically, the regularization
process involves the server aggregating local prototypes from distributed
clients to generate a global prototype, which is then sent back to the
individual clients to guide their local training. The experimental results on
MNIST and Fashion-MNIST show that our proposal achieves improvements of 3.3%
and 8.9% in average test accuracy, respectively, compared to the popular
FedAvg baseline. Furthermore, our approach exhibits a fast convergence rate in
heterogeneous settings.
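
To make the prototype exchange concrete, the sketch below illustrates one plausible realization of the procedure described above, under stated assumptions: local prototypes are taken as class-wise mean embeddings, the server performs sample-count-weighted averaging, and each client adds a squared-distance regularizer toward the global prototypes to its local loss. The function names (`compute_local_prototypes`, `aggregate_prototypes`, `prototype_regularizer`) and the weighting scheme are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch (assumed details, not the paper's exact method):
# class-wise local prototypes, weighted server-side aggregation, and a
# prototype-based regularization term added to each client's local loss.
from collections import defaultdict
import numpy as np


def compute_local_prototypes(features, labels):
    """One client: return {class: (mean embedding, sample count)}."""
    protos = {}
    for c in np.unique(labels):
        mask = labels == c
        protos[int(c)] = (features[mask].mean(axis=0), int(mask.sum()))
    return protos


def aggregate_prototypes(client_protos):
    """Server: sample-count-weighted average of local prototypes per class."""
    sums, counts = defaultdict(float), defaultdict(int)
    for protos in client_protos:
        for c, (proto, n) in protos.items():
            sums[c] = sums[c] + n * proto
            counts[c] += n
    return {c: sums[c] / counts[c] for c in sums}


def prototype_regularizer(features, labels, global_protos, mu=1.0):
    """Client: squared distance between local embeddings and the matching
    global prototypes, to be added to the usual task loss."""
    reg = 0.0
    for f, c in zip(features, labels):
        c = int(c)
        if c in global_protos:
            reg += np.sum((f - global_protos[c]) ** 2)
    return mu * reg / len(labels)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy clients with heterogeneous (non-IID) class distributions.
    clients = [
        (rng.normal(0, 1, (20, 8)), rng.choice([0, 1], 20, p=[0.9, 0.1])),
        (rng.normal(0, 1, (30, 8)), rng.choice([0, 1], 30, p=[0.2, 0.8])),
    ]
    local = [compute_local_prototypes(x, y) for x, y in clients]
    global_protos = aggregate_prototypes(local)               # server aggregation
    print(prototype_regularizer(*clients[0], global_protos))  # local penalty
```

In this sketch, the regularizer pulls each client's feature embeddings toward the globally aggregated class prototypes, which is one common way such prototype-based guidance is realized; the actual loss formulation used in the paper may differ.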