PeFLL: A Lifelong Learning Approach to Personalized Federated Learning
Personalized federated learning (pFL) has emerged as a popular approach to
dealing with the challenge of statistical heterogeneity between the data
distributions of the participating clients. Instead of learning a single global
model, pFL aims to learn an individual model for each client while still making
use of the data available at other clients. In this work, we present PeFLL, a
new pFL approach rooted in lifelong learning that performs well not only on
clients present during its training phase, but also on any that may emerge in
the future. PeFLL learns to output client-specific models by jointly training
an embedding network and a hypernetwork. The embedding network learns to
represent clients in a latent descriptor space in a way that reflects their
similarity to each other. The hypernetwork learns a mapping from this latent
space to the space of possible client models. We demonstrate experimentally
that PeFLL produces models of superior accuracy compared to previous methods,
especially for clients not seen during training, and that it scales well to
large numbers of clients. Moreover, generating a personalized model for a new
client is efficient as no additional fine-tuning or optimization is required by
either the client or the server. We also present theoretical results supporting
PeFLL in the form of a new PAC-Bayesian generalization bound for lifelong
learning, and we prove the convergence of our proposed optimization procedure.
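For orientation, the following is a minimal sketch of the embedding-network plus
hypernetwork pipeline described above, written in PyTorch. All dimensions, layer
choices, and the use of a simple linear client model are hypothetical
simplifications for illustration; this is not the authors' implementation, and
the joint training loop across clients is omitted.

    import torch
    import torch.nn as nn

    IN_DIM, DESC_DIM, HIDDEN = 32, 8, 64   # assumed toy dimensions
    CLIENT_IN, CLIENT_OUT = 32, 10         # shape of each personalized linear model

    class EmbeddingNet(nn.Module):
        """Maps a batch of a client's examples to a latent client descriptor."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(IN_DIM, HIDDEN), nn.ReLU(),
                                     nn.Linear(HIDDEN, DESC_DIM))

        def forward(self, x):                # x: (batch, IN_DIM)
            return self.net(x).mean(dim=0)   # pool examples into one descriptor

    class HyperNet(nn.Module):
        """Maps a client descriptor to the weights of a linear client model."""
        def __init__(self):
            super().__init__()
            n_params = CLIENT_OUT * CLIENT_IN + CLIENT_OUT
            self.net = nn.Sequential(nn.Linear(DESC_DIM, HIDDEN), nn.ReLU(),
                                     nn.Linear(HIDDEN, n_params))

        def forward(self, v):
            flat = self.net(v)
            W = flat[:CLIENT_OUT * CLIENT_IN].view(CLIENT_OUT, CLIENT_IN)
            b = flat[CLIENT_OUT * CLIENT_IN:]
            return W, b

    embed, hyper = EmbeddingNet(), HyperNet()

    # Personalizing a new (possibly unseen) client is a single forward pass,
    # with no per-client fine-tuning or optimization:
    client_batch = torch.randn(16, IN_DIM)   # the new client's local data
    W, b = hyper(embed(client_batch))        # client-specific model weights
    logits = client_batch @ W.t() + b        # predictions with that model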
Communication-Efficient Federated Learning With Data and Client Heterogeneity
Federated Learning (FL) enables large-scale distributed training of machine
learning models while still allowing individual nodes to maintain their data
locally. However, executing FL at scale comes with inherent practical
challenges: 1) heterogeneity of the local node data distributions,
2) heterogeneity of node computational speeds (asynchrony), and 3) constraints
on the amount of communication between the clients and the server. In this
work, we present the first variant of the classic federated averaging (FedAvg)
algorithm that simultaneously supports data heterogeneity, partial client
asynchrony, and communication compression. Our algorithm comes with a rigorous
analysis showing that, in spite of these system relaxations, it can provide
convergence similar to that of FedAvg in interesting parameter regimes.
Experimental results on the rigorous LEAF benchmark, on setups of up to nodes,
show that our algorithm ensures fast convergence for standard federated tasks,
improving upon prior quantized and asynchronous approaches.
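As a rough illustration of one ingredient named above, communication
compression, here is a minimal sketch of a FedAvg-style round in which clients
send stochastically quantized updates. The quantizer is a standard unbiased
scheme; the paper's actual compressor, asynchrony handling, and server logic
are not reproduced here, and all names are hypothetical.

    import numpy as np

    def stochastic_quantize(update, levels=8):
        """Unbiasedly round each coordinate onto a `levels`-point grid."""
        norm = np.max(np.abs(update))
        if norm == 0.0:
            return update
        scaled = np.abs(update) / norm * (levels - 1)   # in [0, levels-1]
        lower = np.floor(scaled)
        round_up = np.random.rand(*update.shape) < (scaled - lower)
        return np.sign(update) * (lower + round_up) / (levels - 1) * norm

    def fedavg_round(global_model, client_updates):
        """Average compressed client updates and apply them to the model."""
        compressed = [stochastic_quantize(u) for u in client_updates]
        return global_model - np.mean(compressed, axis=0)

    # Toy usage: three clients send quantized updates for a 5-parameter model.
    model = np.zeros(5)
    updates = [np.random.randn(5) for _ in range(3)]
    model = fedavg_round(model, updates)

Because the quantizer is unbiased (each coordinate rounds up with probability
equal to its fractional part), the averaged update equals the uncompressed one
in expectation, which is the usual starting point for convergence analyses of
compressed FedAvg-style methods.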