10,758 research outputs found
Federated Learning and Wireless Communications
Federated learning is becoming increasingly attractive in the areas of wireless
communications and machine learning due to its powerful capabilities and
potential applications. In contrast to other machine learning tools that
require no communication resources, federated learning exploits communication between the
central server and the distributed local clients to train and optimize a
machine learning model. Therefore, how to efficiently assign limited
communication resources to train a federated learning model becomes critical to
performance optimization. On the other hand, federated learning, as an emerging
tool, can potentially enhance the intelligence of wireless networks. In this
article, we provide a comprehensive overview of the relationship between
federated learning and wireless communications, including the basic principles of
federated learning, efficient communications for training a federated learning
model, and federated learning for intelligent wireless applications. We also
identify some future research challenges and directions at the end of this
article.
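The "basic principle of federated learning" this overview refers to can be sketched as a minimal federated averaging (FedAvg) loop; this is an illustrative toy on synthetic least-squares data, not the article's system, and all names here are hypothetical.

```python
# Minimal FedAvg sketch: clients train locally, the server averages
# their models weighted by local data size. Synthetic linear data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few full-batch SGD steps."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """Server aggregates client models weighted by local data size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return sum(s * w for s, w in zip(sizes / sizes.sum(), local_ws))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, same underlying linear relation
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
# w converges toward true_w = [2, -1]
```

Only model weights cross the network; the raw `(X, y)` pairs never leave their client, which is the communication pattern the abstract contrasts with centralized training.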
Federated Machine Learning: Concept and Applications
Today's AI still faces two major challenges. One is that in most industries,
data exists in the form of isolated islands. The other is the strengthening of
data privacy and security. We propose a possible solution to these challenges:
secure federated learning. Beyond the federated learning framework first
proposed by Google in 2016, we introduce a comprehensive secure federated
learning framework, which includes horizontal federated learning, vertical
federated learning and federated transfer learning. We provide definitions,
architectures and applications for the federated learning framework, and
provide a comprehensive survey of existing works on this subject. In addition,
we propose building data networks among organizations based on federated
mechanisms as an effective solution to allow knowledge to be shared without
compromising user privacy.
Interpret Federated Learning with Shapley Values
Federated Learning is introduced to protect privacy by distributing training
data across multiple parties. Each party trains its own model, and a meta-model
is constructed from the sub-models. In this way, the details of the data are
not disclosed between the parties. In this paper we investigate model
interpretation methods for Federated Learning, specifically on the measurement
of feature importance of vertical Federated Learning where feature space of the
data is divided between two parties, namely the host and the guest. When the
host party interprets a single prediction of a vertical Federated Learning
model, the interpretation results, namely the feature importances, are very
likely to reveal protected data from the guest party. We propose a method to balance
model interpretability and data privacy in vertical Federated Learning by using
Shapley values to reveal detailed feature importance for host features and a
unified importance value for federated guest features. Our experiments indicate
robust and informative results for interpreting Federated Learning models.
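The reporting scheme the abstract describes can be illustrated with a linear model, where Shapley values have a closed form: the value of feature i is w_i * (x_i - E[x_i]), and values for a group of features simply add up, so the guest side can be collapsed into one number. This is a hedged sketch under that linear-model assumption, not the paper's implementation; `federated_shapley` and its arguments are hypothetical names.

```python
# Per-feature Shapley values for host features, one unified value for
# guest features (hides which guest feature drove the prediction).
import numpy as np

def federated_shapley(w, x, baseline, host_idx, guest_idx):
    phi = w * (x - baseline)                 # exact Shapley values for a linear model
    host_importance = {i: phi[i] for i in host_idx}
    guest_importance = phi[guest_idx].sum()  # unified value for the guest party
    return host_importance, guest_importance

w = np.array([1.0, -2.0, 0.5, 3.0])   # joint model weights
x = np.array([0.2, 0.1, 1.0, -0.5])   # one sample to explain
baseline = np.zeros(4)                # assumed feature means (centered data)
host, guest = federated_shapley(w, x, baseline,
                                host_idx=[0, 1], guest_idx=[2, 3])
```

The completeness property still holds: the host values plus the single guest value sum to the prediction's deviation from the baseline, so the explanation stays faithful while guest detail stays hidden.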
Incentive Design for Efficient Federated Learning in Mobile Networks: A Contract Theory Approach
To strengthen data privacy and security, federated learning as an emerging
machine learning technique is proposed to enable large-scale nodes, e.g.,
mobile devices, to distributedly train and globally share models without
revealing their local data. This technique can not only significantly improve
privacy protection for mobile devices, but also ensure good performance of the
trained results collectively. Currently, most existing studies focus on
optimizing federated learning algorithms to improve model training performance.
However, incentive mechanisms to motivate the mobile devices to join model
training have been largely overlooked. The mobile devices suffer from
considerable overhead in terms of computation and communication during the
federated model training process. Without well-designed incentives,
self-interested mobile devices will be unwilling to join federated learning
tasks, which hinders the adoption of federated learning. To bridge this gap, in
this paper, we adopt the contract theory to design an effective incentive
mechanism for stimulating the mobile devices with high-quality (i.e.,
high-accuracy) data to participate in federated learning. Numerical results
demonstrate that the proposed mechanism is efficient for federated learning
with improved learning accuracy.
Applied Federated Learning: Improving Google Keyboard Query Suggestions
Federated learning is a distributed form of machine learning where both the
training data and model training are decentralized. In this paper, we use
federated learning in a commercial, global-scale setting to train, evaluate and
deploy a model to improve virtual keyboard search suggestion quality without
direct access to the underlying user data. We describe our observations in
federated training, compare metrics to live deployments, and present resulting
quality increases. In whole, we demonstrate how federated learning can be
applied end-to-end to both improve user experiences and enhance user privacy.
Bayesian Nonparametric Federated Learning of Neural Networks
In federated learning problems, data is scattered across different servers
and exchanging or pooling it is often impractical or prohibited. We develop a
Bayesian nonparametric framework for federated learning with neural networks.
Each data server is assumed to provide local neural network weights, which are
modeled through our framework. We then develop an inference approach that
allows us to synthesize a more expressive global network without additional
supervision or data pooling, and with as few as a single communication round. We
then demonstrate the efficacy of our approach on federated learning problems
simulated from two popular image classification datasets. Comment: ICML 201
Agnostic Federated Learning
A key learning scenario in large-scale applications is that of federated
learning, where a centralized model is trained based on data originating from a
large number of clients. We argue that, with the existing training and
inference, federated models can be biased towards different clients. Instead,
we propose a new framework of agnostic federated learning, where the
centralized model is optimized for any target distribution formed by a mixture
of the client distributions. We further show that this framework naturally
yields a notion of fairness. We present data-dependent Rademacher complexity
guarantees for learning with this objective, which guide the definition of an
algorithm for agnostic federated learning. We also give a fast stochastic
optimization algorithm for solving the corresponding optimization problem, for
which we prove convergence bounds, assuming a convex loss function and
hypothesis set. We further empirically demonstrate the benefits of our approach
in several datasets. Beyond federated learning, our framework and algorithm can
be of interest to other learning scenarios such as cloud computing, domain
adaptation, drifting, and other contexts where the training and test
distributions do not coincide.
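The agnostic objective the abstract describes, minimizing the loss under the worst-case mixture of client distributions, can be sketched by alternating gradient descent on the model with exponentiated-gradient ascent on the mixture weights. This is an illustrative toy on synthetic data under those assumptions, not the paper's algorithm; all names and step sizes are hypothetical.

```python
# Minimax sketch: lambda shifts weight toward the clients with the
# highest current loss; w is then updated under that adversarial mixture.
import numpy as np

def client_loss_and_grad(w, X, y):
    r = X @ w - y
    return 0.5 * np.mean(r**2), X.T @ r / len(y)

def agnostic_step(w, lam, clients, lr_w=0.1, lr_lam=0.5):
    losses, grads = zip(*(client_loss_and_grad(w, X, y) for X, y in clients))
    losses = np.array(losses)
    lam = lam * np.exp(lr_lam * losses)   # ascent: upweight hard clients
    lam /= lam.sum()                      # project back to the simplex
    g = sum(l * gr for l, gr in zip(lam, grads))
    return w - lr_w * g, lam              # descent under current mixture

rng = np.random.default_rng(1)
clients = []
for shift in (0.0, 0.0, 2.0):             # one client with shifted inputs
    X = rng.normal(loc=shift, size=(40, 2))
    clients.append((X, X @ np.array([1.0, 1.0])))

w = np.zeros(2)
lam = np.ones(3) / 3
for _ in range(200):
    w, lam = agnostic_step(w, lam, clients)
```

With a uniform average, the two easy clients would dominate the gradient; the adversarial `lam` instead concentrates on whichever client currently suffers the largest loss, which is the fairness notion the abstract alludes to.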
How To Backdoor Federated Learning
Federated learning enables thousands of participants to construct a deep
learning model without sharing their private training data with each other. For
example, multiple smartphones can jointly train a next-word predictor for
keyboards without revealing what individual users type. We demonstrate that any
participant in federated learning can introduce hidden backdoor functionality
into the joint global model, e.g., to ensure that an image classifier assigns
an attacker-chosen label to images with certain features, or that a word
predictor completes certain sentences with an attacker-chosen word.
We design and evaluate a new model-poisoning methodology based on model
replacement. An attacker selected in a single round of federated learning can
cause the global model to immediately reach 100% accuracy on the backdoor task.
We evaluate the attack under different assumptions for the standard
federated-learning tasks and show that it greatly outperforms data poisoning.
Our generic constrain-and-scale technique also evades anomaly detection-based
defenses by incorporating the evasion into the attacker's loss function during
training.
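The model-replacement arithmetic behind the single-round attack can be sketched in a few lines: if the server averages n updates with global rate eta, the attacker scales its poisoned model so the aggregate lands exactly on the backdoored weights. This toy assumes, for illustration, that the attacker can estimate the honest deviations (the paper argues they roughly cancel near convergence); all names are hypothetical.

```python
# Toy model-replacement attack against a FedAvg-style server.
import numpy as np

def server_aggregate(global_w, client_ws, eta):
    """FedAvg-style update: move toward the mean client model."""
    mean_w = np.mean(client_ws, axis=0)
    return global_w + eta * (mean_w - global_w)

n, eta = 10, 1.0
global_w = np.zeros(4)
backdoored = np.array([5.0, -3.0, 1.0, 0.0])  # attacker's target weights

honest = [global_w + np.random.default_rng(i).normal(scale=0.01, size=4)
          for i in range(n - 1)]
# Scale the malicious update by n/eta so averaging yields the target,
# cancelling the (estimated) honest deviations.
malicious = (n / eta) * (backdoored - global_w) + global_w \
            - sum(h - global_w for h in honest)
new_global = server_aggregate(global_w, honest + [malicious], eta)
# new_global equals the backdoored model after this single round
```

Because averaging divides each contribution by n, a factor-of-n/eta scaling is exactly what one compromised participant needs to overwrite the global model, which is why the abstract reports immediate backdoor success after one round.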
Towards Federated Learning at Scale: System Design
Federated Learning is a distributed machine learning approach which enables
model training on a large corpus of decentralized data. We have built a
scalable production system for Federated Learning in the domain of mobile
devices, based on TensorFlow. In this paper, we describe the resulting
high-level design, sketch some of the challenges and their solutions, and touch
upon the open problems and future directions.
A Federated Learning Framework for Healthcare IoT devices
The Internet of Things (IoT) revolution has shown potential to give rise to
many medical applications with access to large volumes of healthcare data
collected by IoT devices. However, the increasing demand for healthcare data
privacy and security makes each IoT device an isolated island of data. Further,
the limited computation and communication capacity of wearable healthcare
devices restrict the application of vanilla federated learning. To this end, we
propose an advanced federated learning framework to train deep neural networks,
where the network is partitioned and allocated to IoT devices and a centralized
server. Then most of the training computation is handled by the powerful
server. The sparsification of activations and gradients significantly reduces
the communication overhead. Empirical studies suggest that the proposed
framework incurs only a low accuracy loss while requiring only 0.2% of the
synchronization traffic of vanilla federated learning.
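The sparsification step credited with that traffic reduction is commonly realized as top-k selection: keep only the k largest-magnitude entries of a gradient or activation tensor and transmit (index, value) pairs. A minimal sketch, assuming top-k magnitude sparsification (the abstract does not specify the exact rule) and a hypothetical k:

```python
# Top-k sparsification: send only the largest-magnitude entries.
import numpy as np

def sparsify_topk(tensor, k):
    """Return flat indices and values of the k largest-magnitude entries."""
    flat = tensor.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx, vals, shape):
    """Reconstruct a dense tensor on the receiving side."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)

g = np.random.default_rng(0).normal(size=(1000,))
idx, vals = sparsify_topk(g, k=2)          # transmit 0.2% of 1000 entries
g_sparse = densify(idx, vals, g.shape)
```

With k chosen at 0.2% of the tensor size, only that fraction of values (plus their indices) crosses the wireless link each synchronization, matching the traffic figure the abstract reports.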