Applications of Federated Learning in Smart Cities: Recent Advances, Taxonomy, and Open Challenges
Federated learning plays an important role in the development of smart cities.
As big data and artificial intelligence advance, protecting data privacy
becomes a pressing problem in this process, and federated learning is capable
of addressing it. Starting from the current state of federated learning and
its applications across fields, we conduct a comprehensive investigation. This
paper summarizes the latest research on applying federated learning in the
various domains of smart cities, providing an in-depth view of its development
in the Internet of Things, transportation, communications, finance, medicine,
and other fields. We first introduce the background, definition, and key
technologies of federated learning, and then review those key technologies and
the latest results. Finally, we discuss future applications and research
directions of federated learning in smart cities.
Towards Efficient Communications in Federated Learning: A Contemporary Survey
In the traditional distributed machine learning scenario, users' private data
is transmitted between nodes and a central server, creating serious privacy
risks. To balance data privacy against the joint training of models, federated
learning (FL) was proposed as a special form of distributed machine learning
with a privacy protection mechanism, enabling multi-party collaborative
computing without revealing the original data. In practice, however, FL faces
many challenging communication problems. This review aims to clarify the
relationships among these problems and to systematically analyze the research
progress on FL communication from three perspectives: communication
efficiency, communication environment, and communication resource allocation.
First, we sort out the current challenges in FL communications. Second, we
compile the literature on FL communications and describe the development trend
of the field guided by the logical relationships among these works. Finally,
we point out future research directions for communications in FL.
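Communication-efficiency work of the kind this survey covers often studies gradient compression before upload. A minimal top-k sparsification sketch (purely illustrative; not any specific method from the survey):

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient,
    zeroing the rest -- a common way to shrink FL upload traffic."""
    flat = grad.ravel()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

grad = np.array([0.1, -2.0, 0.05, 3.0, -0.2])
print(topk_sparsify(grad, 2))  # only -2.0 and 3.0 survive
```

In practice such schemes are usually paired with error feedback, where the zeroed-out residual is accumulated locally and added to the next round's gradient.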
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
Deep neural networks are susceptible to various inference attacks as they
remember information about their training data. We design white-box inference
attacks to perform a comprehensive privacy analysis of deep learning models. We
measure the privacy leakage through parameters of fully trained models as well
as the parameter updates of models during training. We design inference
algorithms for both centralized and federated learning, with respect to passive
and active inference attackers, and assuming different adversary prior
knowledge.
We evaluate our novel white-box membership inference attacks against deep
learning algorithms to trace their training data records. We show that a
straightforward extension of the known black-box attacks to the white-box
setting (through analyzing the outputs of activation functions) is ineffective.
We therefore design new algorithms tailored to the white-box setting by
exploiting the privacy vulnerabilities of the stochastic gradient descent
algorithm, which is the algorithm used to train deep neural networks. We
investigate the reasons why deep learning models may leak information about
their training data. We then show that even well-generalized models are
significantly susceptible to white-box membership inference attacks, by
analyzing state-of-the-art pre-trained and publicly available models for the
CIFAR dataset. We also show how adversarial participants, in the federated
learning setting, can successfully run active membership inference attacks
against other participants, even when the global model achieves high prediction
accuracies.
Comment: 2019 IEEE Symposium on Security and Privacy (SP)
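The white-box attacks above exploit how SGD fits its training data. As a loose illustration of that intuition (a toy linear model, not the paper's algorithm), points the model was fitted to exhibit far smaller gradient norms than outsiders:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_norm(w, x, y):
    """Gradient norm of the squared loss for a linear model w.
    Training members tend to have near-zero gradients -- one of the
    signals white-box membership inference attacks can exploit."""
    residual = w @ x - y
    return np.linalg.norm(2 * residual * x)

# Fit w to "member" data generated from an exact linear relation.
X_member = rng.normal(size=(20, 5))
w_true = rng.normal(size=5)
y_member = X_member @ w_true
w = np.linalg.lstsq(X_member, y_member, rcond=None)[0]

member_norms = [grad_norm(w, x, y) for x, y in zip(X_member, y_member)]
# An outsider point drawn from a different relation is poorly fit.
x_out = rng.normal(size=5)
outsider_norm = grad_norm(w, x_out, x_out @ rng.normal(size=5))
print(max(member_norms) < outsider_norm)
```

Real attacks on deep networks use much richer per-layer gradient and activation features, but the member/non-member asymmetry in the loss surface is the same underlying signal.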
Vertical Federated Learning
Vertical Federated Learning (VFL) is a federated learning setting where
multiple parties with different features about the same set of users jointly
train machine learning models without exposing their raw data or model
parameters. Motivated by the rapid growth in VFL research and real-world
applications, we provide a comprehensive review of the concept and algorithms
of VFL, as well as current advances and challenges in various aspects,
including effectiveness, efficiency, and privacy. We provide an exhaustive
categorization for VFL settings and privacy-preserving protocols and
comprehensively analyze the privacy attacks and defense strategies for each
protocol. In the end, we propose a unified framework, termed VFLow, which
considers the VFL problem under communication, computation, privacy, and
effectiveness constraints. Finally, we review the most recent advances in
industrial applications, highlighting open challenges and future directions for
VFL.
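As a minimal illustration of the VFL setting (a toy linear model for intuition, not the VFLow framework), two parties holding disjoint feature columns for the same users can produce a joint prediction while exchanging only partial scores:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two parties hold disjoint feature columns for the same 4 users.
features_a = rng.normal(size=(4, 3))  # party A: 3 features per user
features_b = rng.normal(size=(4, 2))  # party B: 2 features per user
w_a = rng.normal(size=3)              # each party keeps its own weights
w_b = rng.normal(size=2)

# Each party shares only its partial score, never raw features.
partial_a = features_a @ w_a
partial_b = features_b @ w_b
joint_score = partial_a + partial_b

# Equivalent to one centralized model on the concatenated features.
centralized = np.hstack([features_a, features_b]) @ np.concatenate([w_a, w_b])
print(np.allclose(joint_score, centralized))  # True
```

Real VFL protocols additionally protect the partial scores themselves (e.g. with homomorphic encryption or secret sharing), since even aggregated intermediates can leak feature information, as the privacy attacks surveyed above show.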
Federated Multi-Armed Bandits
Federated multi-armed bandits (FMAB) is a new bandit paradigm that parallels
the federated learning (FL) framework in supervised learning. It is inspired by
practical applications in cognitive radio and recommender systems, and enjoys
features that are analogous to FL. This paper proposes a general framework of
FMAB and then studies two specific federated bandit models. We first study the
approximate model where the heterogeneous local models are random realizations
of the global model from an unknown distribution. This model introduces a new
uncertainty of client sampling, as the global model may not be reliably learned
even if the finite local models are perfectly known. Furthermore, this
uncertainty cannot be quantified a priori without knowledge of the
suboptimality gap. We solve the approximate model by proposing Federated Double
UCB (Fed2-UCB), which constructs a novel "double UCB" principle accounting for
uncertainties from both arm and client sampling. We show that gradually
admitting new clients is critical in achieving an O(log(T)) regret while
explicitly considering the communication cost. The exact model, where the
global bandit model is the exact average of heterogeneous local models, is then
studied as a special case. We show that, somewhat surprisingly, the
order-optimal regret can be achieved independent of the number of clients with
a careful choice of the update periodicity. Experiments using both synthetic
and real-world datasets corroborate the theoretical analysis and demonstrate
the effectiveness and efficiency of the proposed algorithms.Comment: AAAI 2021, Camera Ready. Code is available at:
https://github.com/ShenGroup/FMA
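For readers unfamiliar with the UCB principle that Fed2-UCB builds on, here is a minimal single-agent UCB1 sketch (illustrative only; the paper's "double UCB" adds a second confidence term for client-sampling uncertainty on top of the usual arm-sampling term):

```python
import numpy as np

def ucb1(means, horizon, seed=0):
    """Minimal UCB1 on Bernoulli arms: pull each arm once, then pull
    the arm maximizing (empirical mean + confidence width)."""
    rng = np.random.default_rng(seed)
    n_arms = len(means)
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    for t in range(horizon):
        if t < n_arms:
            arm = t  # initialization: one pull per arm
        else:
            width = np.sqrt(2 * np.log(t + 1) / counts)
            arm = np.argmax(sums / counts + width)
        counts[arm] += 1
        sums[arm] += rng.random() < means[arm]  # Bernoulli reward
    return counts

counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
print(np.argmax(counts))  # the 0.8 arm is pulled most often
```

The confidence width shrinks as an arm is pulled more, which yields the familiar O(log T) regret; Fed2-UCB's contribution is handling the extra uncertainty from only observing a finite sample of heterogeneous clients.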