Personalized federated learning (PFL) was proposed to address the poor
convergence of federated learning on heterogeneous data. However, most
existing PFL frameworks require strong assumptions to guarantee convergence.
In this paper, we propose an alternating direction method of multipliers
(ADMM) for training PFL models with the Moreau envelope (FLAME), which
achieves a sublinear convergence rate under the relatively weak assumption of
gradient Lipschitz continuity.
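For reference, a Moreau-envelope formulation of PFL (in the style of pFedMe; the notation here is ours and an assumption, not necessarily the paper's exact objective) can be sketched as:

```latex
% Sketch of a Moreau-envelope PFL objective (notation assumed, not
% taken verbatim from the paper)
\[
  \min_{w \in \mathbb{R}^d} \ \frac{1}{N} \sum_{i=1}^{N} F_i(w),
  \qquad
  F_i(w) = \min_{\theta_i \in \mathbb{R}^d}
    \Big\{ f_i(\theta_i) + \frac{\lambda}{2} \lVert \theta_i - w \rVert^2 \Big\}
\]
```

Here $f_i$ is client $i$'s local loss, $\theta_i$ its personalized model, $w$ the shared global model, and $\lambda > 0$ controls how strongly personalized models are pulled toward $w$; each $F_i$ is the Moreau envelope of $f_i$.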
Moreover, due to the gradient-free nature of ADMM, FLAME alleviates the need
for hyperparameter tuning; in particular, it avoids adjusting the learning
rate when training the global model.
In addition, we propose a biased client selection strategy to expedite the
training convergence of PFL models.
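A minimal sketch of one plausible biasing rule, selecting clients with probability proportional to their most recent local loss (a hypothetical criterion chosen for illustration; the abstract does not specify FLAME's exact rule):

```python
import random

def biased_client_selection(client_losses, k, rng=None):
    """Sample k distinct clients, favoring those with higher recent local
    loss (weighted sampling without replacement). The loss-proportional
    bias is a hypothetical criterion used only for illustration."""
    rng = rng or random.Random()
    clients = list(client_losses)
    weights = [client_losses[c] for c in clients]
    selected = []
    for _ in range(min(k, len(clients))):
        total = sum(weights)
        r = rng.uniform(0.0, total)
        acc, idx = 0.0, 0
        for idx, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        selected.append(clients.pop(idx))
        weights.pop(idx)
    return selected

# Example: five clients; the two high-loss clients are favored.
losses = {"c0": 0.2, "c1": 1.5, "c2": 0.9, "c3": 0.1, "c4": 2.3}
print(biased_client_selection(losses, k=2, rng=random.Random(0)))
```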
Our theoretical analysis establishes global convergence under both unbiased
and biased client selection strategies. Our experiments validate that
FLAME, when trained on heterogeneous data, outperforms state-of-the-art methods
in terms of model performance. Regarding communication efficiency, it exhibits
an average speedup of 3.75x compared to the baselines. Furthermore,
the experimental results confirm that the biased client selection strategy speeds
up the convergence of both the personalized and global models.