Personalized Graph Federated Learning with Differential Privacy
This paper presents a personalized graph federated learning (PGFL) framework
in which a network of interconnected servers and their respective edge devices
collaboratively learn device- or cluster-specific models while preserving the
privacy of every individual device. The proposed approach exploits similarities
among different models to provide a more relevant experience for each device,
even in situations with diverse data distributions and disproportionate
datasets. Furthermore, to ensure a secure and efficient approach to
collaborative personalized learning, we study a variant of the PGFL
implementation that utilizes differential privacy, specifically
zero-concentrated differential privacy, where a noise sequence perturbs model
exchanges. Our mathematical analysis shows that the proposed privacy-preserving
PGFL algorithm converges to the optimal cluster-specific solution for each
cluster in linear time. It also shows that exploiting similarities among
clusters leads to an alternative output whose distance to the original solution
is bounded, and that this bound can be adjusted by modifying the algorithm's
hyperparameters. Further, our analysis shows that the algorithm ensures local
differential privacy for all clients in terms of zero-concentrated differential
privacy. Finally, the performance of the proposed PGFL algorithm is examined
through numerical experiments on regression and classification tasks using
synthetic data and the MNIST dataset.
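
To make the privacy mechanism concrete, here is a minimal sketch of how a device might clip and perturb a model update before exchanging it, assuming the standard Gaussian mechanism, which satisfies rho-zCDP with rho = Delta^2 / (2 sigma^2) for L2 sensitivity Delta. The function names, the clipping step, and all parameter values are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def zcdp_gaussian_sigma(sensitivity: float, rho: float) -> float:
    """Noise scale for the Gaussian mechanism under rho-zCDP.

    The Gaussian mechanism with L2 sensitivity `sensitivity` and noise
    standard deviation `sigma` satisfies rho-zCDP with
    rho = sensitivity**2 / (2 * sigma**2), hence sigma below.
    """
    return sensitivity / np.sqrt(2.0 * rho)

def perturb_model_update(update: np.ndarray, clip_norm: float, rho: float,
                         rng: np.random.Generator) -> np.ndarray:
    """Clip a local update to bound its sensitivity, then add Gaussian noise
    before the update is shared with the server (illustrative sketch)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = zcdp_gaussian_sigma(clip_norm, rho)
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Example: a device perturbs its update before every exchange.
rng = np.random.default_rng(0)
local_update = rng.normal(size=10)          # stand-in for a model update
noisy = perturb_model_update(local_update, clip_norm=1.0, rho=0.1, rng=rng)
```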
Multitask Online Mirror Descent
We introduce and analyze MT-OMD, a multitask generalization of Online Mirror
Descent (OMD) which operates by sharing updates between tasks. We prove that
the regret of MT-OMD is of order $\sqrt{NT(1 + \sigma^2(N-1))}$, where
$\sigma^2$ is the task variance according to the geometry induced by the
regularizer, $N$ is the number of tasks, and $T$ is the time horizon. Whenever
tasks are similar, that is $\sigma^2 \le 1$, our method improves upon the
$N\sqrt{T}$ bound obtained by running independent OMDs on each task. We further
provide a matching lower bound, and show that our multitask extensions of
Online Gradient Descent and Exponentiated Gradient, two major instances of OMD,
enjoy closed-form updates, making them easy to use in practice. Finally, we
present experiments on both synthetic and real-world datasets supporting our
findings.
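
To illustrate the idea of sharing updates between tasks, here is a minimal sketch of a multitask Online Gradient Descent in which each task takes an independent gradient step and every iterate is then shrunk toward the across-task mean. The coupling rule, parameter values, and all names are illustrative assumptions, not the exact MT-OMD update.

```python
import numpy as np

def mt_ogd(loss_grad, N, T, d, eta=0.1, coupling=0.3, seed=0):
    """Illustrative multitask OGD: one gradient step per task per round,
    then each iterate is pulled toward the mean iterate (update sharing)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((N, d))                               # one weight vector per task
    for t in range(T):
        for i in range(N):
            W[i] -= eta * loss_grad(t, i, W[i], rng)   # independent OGD step
        mean = W.mean(axis=0)
        W = (1.0 - coupling) * W + coupling * mean     # share updates across tasks
    return W

# Toy stream: each task sees noisy linear data with a task-specific target,
# and the targets are close to each other (low task variance).
def make_loss_grad(targets):
    def loss_grad(t, i, w, rng):
        x = rng.normal(size=w.shape)
        y = x @ targets[i] + 0.1 * rng.normal()
        return 2.0 * (x @ w - y) * x                   # squared-loss gradient
    return loss_grad

targets = 1.0 + 0.1 * np.random.default_rng(1).normal(size=(4, 5))
W = mt_ogd(make_loss_grad(targets), N=4, T=500, d=5)
```

When the tasks are similar, pulling the iterates toward their mean lets each task benefit from the others' observations, which is the intuition behind the improved regret above.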
Heterogeneous Federated Learning: State-of-the-art and Research Challenges
Federated learning (FL) has drawn increasing attention owing to its potential
use in large-scale industrial applications. Existing federated learning work
mainly focuses on model-homogeneous settings. However, practical federated
learning typically faces the heterogeneity of data distributions, model
architectures, network environments, and hardware devices among participant
clients. Heterogeneous Federated Learning (HFL) is much more challenging, and
corresponding solutions are diverse and complex. A systematic survey of the
research challenges and the state of the art in this area is therefore essential.
In this survey, we first summarize the various research challenges in HFL
from five aspects: statistical heterogeneity, model heterogeneity,
communication heterogeneity, device heterogeneity, and additional challenges.
In addition, recent advances in HFL are reviewed and a new taxonomy of existing
HFL methods is proposed with an in-depth analysis of their pros and cons. We
classify existing methods from three different levels according to the HFL
procedure: data-level, model-level, and server-level. Finally, several critical
and promising future research directions in HFL are discussed, which may
facilitate further developments in this field. A periodically updated
collection on HFL is available at https://github.com/marswhu/HFL_Survey.
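
As a concrete example of statistical heterogeneity, the sketch below shows the Dirichlet label-partitioning scheme widely used to simulate non-IID client data in federated learning experiments. This scheme is a common benchmarking device rather than anything prescribed by the survey, and all names and parameters are illustrative.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with per-class proportions drawn
    from Dirichlet(alpha); smaller alpha -> more skewed (non-IID) clients."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class-c samples assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]

# Example: 10 clients over a toy 10-class label vector.
labels = np.random.default_rng(1).integers(0, 10, size=5000)
parts = dirichlet_partition(labels, n_clients=10, alpha=0.3)
```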