Personalization aims to characterize individual preferences and is widely
applied across many fields. However, conventional personalization methods operate
in a centralized manner and risk exposing raw data when pooling
individual information. In this paper, motivated by privacy considerations, we develop
a flexible and interpretable personalized framework within the paradigm of
Federated Learning, called PPFL (Population Personalized Federated Learning).
By leveraging canonical models to capture the fundamental characteristics of a
heterogeneous population and membership vectors to reveal clients'
preferences, PPFL models heterogeneity as clients' varying preferences for
these characteristics, thereby offering insight into client
characteristics that existing Personalized Federated Learning
(PFL) methods lack. Furthermore, we examine the relationship between our method and the
three main branches of PFL methods: multi-task PFL, clustered FL, and
decoupling PFL, and demonstrate the advantages of PPFL. To solve PPFL (a
non-convex constrained optimization problem), we propose a novel random block
coordinate descent algorithm and establish its convergence properties. We conduct
experiments on both pathological and practical datasets, and the results
validate the effectiveness of PPFL.
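
The abstract only names the key ingredients (canonical models, membership vectors, and a random block coordinate descent solver). Below is a minimal illustrative sketch of that idea, assuming linear models with squared loss and a simplex constraint on each membership vector; the function names, update rules, and synthetic data are our own assumptions and do not reproduce the paper's actual PPFL formulation or algorithm.

```python
import numpy as np

# Hypothetical toy sketch: each client's personalized model is a convex
# combination of K shared "canonical" models, weighted by a client-specific
# membership vector on the probability simplex. Everything here is an
# illustrative assumption, not the paper's PPFL method.

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def ppfl_sketch(clients, K, d, iters=600, lr=0.1, seed=0):
    """Toy random block coordinate descent for linear models with squared loss.

    Each iteration updates one randomly chosen block: either a single
    client's membership vector pi_i (projected back onto the simplex) or
    the shared canonical model matrix W of shape (d, K).
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(d, K))
    Pis = [np.full(K, 1.0 / K) for _ in clients]
    for _ in range(iters):
        block = rng.integers(len(clients) + 1)
        if block < len(clients):
            # Client block: gradient step on this client's membership vector.
            X, y = clients[block]
            resid = X @ (W @ Pis[block]) - y
            grad_pi = W.T @ (X.T @ resid) / y.size
            Pis[block] = project_simplex(Pis[block] - lr * grad_pi)
        else:
            # Server block: aggregate client gradients to update the canonical models.
            grad_W = np.zeros_like(W)
            for (X, y), pi in zip(clients, Pis):
                resid = X @ (W @ pi) - y
                grad_W += np.outer(X.T @ resid / y.size, pi)
            W -= lr * grad_W / len(clients)
    return W, Pis

# Usage: two synthetic clients whose data mix the same two canonical models
# with different weights. The printed membership vectors need not match the
# generating weights exactly, since the mixture is identifiable only up to a
# relabeling/remixing of the canonical models.
rng = np.random.default_rng(1)
d, K, n = 5, 2, 200
true_W = rng.normal(size=(d, K))
clients = []
for pi_true in (np.array([0.9, 0.1]), np.array([0.2, 0.8])):
    X = rng.normal(size=(n, d))
    y = X @ (true_W @ pi_true) + 0.01 * rng.normal(size=n)
    clients.append((X, y))
W_hat, Pis_hat = ppfl_sketch(clients, K, d)
print([np.round(p, 2) for p in Pis_hat])
```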