A distinguishing characteristic of federated learning is that the (local)
client data can exhibit statistical heterogeneity. This heterogeneity has
motivated the design of personalized learning, where individual (personalized)
models are trained through collaboration. Various personalization methods have
been proposed in the literature, with seemingly very different forms, ranging
from the use of a single global model for local regularization and model
interpolation to the use of multiple global models for personalized
clustering. In this work, we begin with a generative
framework that could potentially unify several different algorithms as well as
suggest new algorithms. We apply our generative framework to personalized
estimation and connect it to the classical empirical Bayes methodology. We
develop private personalized estimation under this framework. We then use our
generative framework for learning, which unifies several known personalized FL
algorithms and also suggests new ones; we propose and study a new algorithm,
AdaPeD, based on knowledge distillation, which numerically outperforms several
known algorithms. We also develop privacy-preserving personalized learning
methods with guarantees for user-level privacy and composition. We numerically
evaluate the performance as well as the privacy of the proposed methods for
both the estimation and learning problems, demonstrating their advantages.
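
To make the empirical Bayes connection concrete, the following is a minimal
numerical sketch for a simple Gaussian model, where each client's parameter is
drawn as theta_i ~ N(mu, tau^2) and its samples as x_ij ~ N(theta_i, sigma^2);
the function name and the method-of-moments estimates of (mu, tau^2) are
illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def eb_personalized_means(samples_per_client, sigma2):
    """Empirical-Bayes shrinkage estimates of per-client means.

    Illustrative model: theta_i ~ N(mu, tau2), x_ij ~ N(theta_i, sigma2).
    The prior parameters (mu, tau2) are estimated from the pooled local
    means; each client's estimate is then the posterior mean, which
    interpolates between its local mean and the population mean.
    """
    local_means = np.array([np.mean(x) for x in samples_per_client])
    n = np.array([len(x) for x in samples_per_client], dtype=float)

    mu_hat = local_means.mean()
    # Method of moments: Var(local mean) = tau2 + sigma2/n; clip at zero.
    tau2_hat = max(local_means.var() - np.mean(sigma2 / n), 0.0)

    # Posterior-mean weight: trust local data more when tau2 is large
    # (heterogeneous clients) or when the client has many samples.
    w = tau2_hat / (tau2_hat + sigma2 / n)
    return w * local_means + (1.0 - w) * mu_hat

rng = np.random.default_rng(0)
thetas = rng.normal(0.0, 1.0, size=50)                # heterogeneous clients
data = [rng.normal(t, 2.0, size=20) for t in thetas]  # sigma2 = 4.0
print(np.round(eb_personalized_means(data, sigma2=4.0)[:5], 3))
```

The shrinkage weight interpolates between each client's local mean and the
estimated population mean, which is precisely the flavor of
collaboration-assisted personalization described above.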
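For the learning side, here is a minimal sketch of a knowledge-distillation-
regularized local objective of the kind AdaPeD builds on, assuming a PyTorch
setting; the fixed weight lam and temperature T are hypothetical
simplifications, and the adaptive weighting suggested by AdaPeD's name is not
reproduced here.

```python
import torch
import torch.nn.functional as F

def local_kd_loss(student_logits, teacher_logits, targets, lam=0.5, T=2.0):
    """Local objective: task loss plus a distillation penalty pulling the
    personalized (student) model toward the shared global (teacher) model.
    lam and T are hypothetical fixed knobs for this sketch.
    """
    task = F.cross_entropy(student_logits, targets)
    # KL divergence between softened teacher and student predictions;
    # the teacher is treated as fixed, hence detach().
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return task + lam * kd

# Toy usage with random logits.
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
local_kd_loss(student, teacher, y).backward()
```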