Mobile Edge Computing and Artificial Intelligence: A Mutually-Beneficial Relationship
This article provides an overview of mobile edge computing (MEC) and
artificial intelligence (AI) and discusses the mutually-beneficial relationship
between them. AI provides revolutionary solutions in nearly every important
aspect of the MEC offloading process, such as resource management and
scheduling. On the other hand, MEC servers can be leveraged to enable a distributed
and parallelized learning framework, namely mobile edge learning.
Comment: 6 pages, 2 figures, IEEE ComSoc Technical Committees Newsletter
Federated Learning and Wireless Communications
Federated learning becomes increasingly attractive in the areas of wireless
communications and machine learning due to its powerful functions and potential
applications. Unlike other machine learning tools, which require no
communication resources, federated learning exploits communication between the
central server and the distributed local clients to train and optimize a
machine learning model. Therefore, how to efficiently assign limited
communication resources to train a federated learning model becomes critical to
performance optimization. On the other hand, federated learning, as a brand new
tool, can potentially enhance the intelligence of wireless networks. In this
article, we provide a comprehensive overview of the relationship between
federated learning and wireless communications, covering the basic principles of
federated learning, efficient communications for training a federated learning
model, and federated learning for intelligent wireless applications. We also
identify some future research challenges and directions at the end of this
article.
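The server-client training loop the article describes is commonly realized as federated averaging (FedAvg): each client trains on its own data, and the central server averages the returned models weighted by local dataset size. The following is a minimal sketch of that pattern on a hypothetical toy linear-regression task; the model, learning rate, and client setup are illustrative assumptions, not details from the article.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One client's local training: plain gradient descent on a
    least-squares loss (a stand-in for any local optimizer)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One communication round: each client trains locally, then the
    server averages the returned models weighted by dataset size."""
    updates, sizes = [], []
    for data, labels in clients:
        updates.append(local_update(global_w, data, labels))
        sizes.append(len(labels))
    sizes = np.array(sizes, dtype=float)
    return sum(s * w for s, w in zip(sizes, updates)) / sizes.sum()

# Hypothetical toy setup: three clients sharing a linear model y = x @ w_true
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
```

With one local epoch per round, the weighted average of per-client gradients equals the gradient on the pooled data, so the global model converges to `w_true`; each round costs only one model upload per client, which is why assigning the limited communication resources per round matters.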
Adaptive Task Allocation for Asynchronous Federated and Parallelized Mobile Edge Learning
This paper proposes a scheme to efficiently execute distributed learning
tasks in an asynchronous manner while minimizing the gradient staleness on
wireless edge nodes with heterogeneous computing and communication capacities.
The approach considered in this paper ensures that all devices work for a
certain duration that covers the time for data/model distribution, learning
iterations, model collection and global aggregation. The resulting problem is
an integer non-convex program with quadratic equality constraints as well as
linear equality and inequality constraints. Because the problem is NP-hard, we
relax the integer constraints in order to solve it efficiently with available
solvers. Analytical bounds are derived using the KKT conditions and Lagrangian
analysis in conjunction with the suggest-and-improve approach. Results show
that our approach reduces the gradient staleness and can offer better accuracy
than the synchronous scheme and the asynchronous scheme with equal task
allocation.
Comment: 7 pages, 3 figures, submitted to IEEE TVT as a correspondence paper (conference paper), 3 appendices
Overcoming Forgetting in Federated Learning on Non-IID Data
We tackle the problem of federated learning in the non-i.i.d. case, in which
local models drift apart, inhibiting learning. Building on an analogy with
Lifelong Learning, we adapt a solution for catastrophic forgetting to Federated
Learning. We add a penalty term to the loss function, compelling all local
models to converge to a shared optimum. We show that this can be done
efficiently for communication (adding no further privacy risks), scaling with
the number of nodes in the distributed setting. Our experiments show that this
method is superior to competing ones for image recognition on the MNIST
dataset.
Comment: Accepted to the NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality
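The penalty-term idea above can be illustrated with a simple quadratic penalty that pulls each local model back toward the shared global model during local training. This is a generic sketch of the mechanism, not the paper's exact penalty (which adapts a catastrophic-forgetting remedy); the toy data and coefficient `mu` are assumptions for illustration.

```python
import numpy as np

def penalized_local_update(w_global, data, labels, mu=1.0, lr=0.1, steps=20):
    """Local training with an added quadratic penalty (mu/2)*||w - w_global||^2
    that discourages the local model from drifting away from the shared one."""
    w = w_global.copy()
    for _ in range(steps):
        grad_task = data.T @ (data @ w - labels) / len(labels)
        grad_penalty = mu * (w - w_global)   # pulls w back toward w_global
        w -= lr * (grad_task + grad_penalty)
    return w

# Hypothetical skewed local task: this client's data favors a different optimum
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = X @ np.array([3.0, 0.0])
w0 = np.zeros(2)                                   # current shared model
w_free = penalized_local_update(w0, X, y, mu=0.0)  # unconstrained local drift
w_reg  = penalized_local_update(w0, X, y, mu=5.0)  # penalized local update
```

The penalty costs nothing extra in communication, since it only uses the global model the client already received; that matches the paper's claim of communication efficiency without added privacy risk.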
LEAF: A Benchmark for Federated Settings
Modern federated networks, such as those comprised of wearable devices,
mobile phones, or autonomous vehicles, generate massive amounts of data each
day. This wealth of data can help to learn models that can improve the user
experience on each device. However, the scale and heterogeneity of federated
data present new challenges in research areas such as federated learning,
meta-learning, and multi-task learning. As the machine learning community
begins to tackle these challenges, we are at a critical time to ensure that
developments made in these areas are grounded with realistic benchmarks. To
this end, we propose LEAF, a modular benchmarking framework for learning in
federated settings. LEAF includes a suite of open-source federated datasets, a
rigorous evaluation framework, and a set of reference implementations, all
geared towards capturing the obstacles and intricacies of practical federated
environments.
E-Tree Learning: A Novel Decentralized Model Learning Framework for Edge AI
Traditionally, AI models are trained on the central cloud with data collected
from end devices. This leads to high communication cost, long response time and
privacy concerns. Recently, edge-empowered AI, namely Edge AI, has been proposed
to support AI model learning and deployment at the network edge closer to the
data sources. Existing research including federated learning adopts a
centralized architecture for model learning where a central server aggregates
the model updates from the clients/workers. The centralized architecture has
drawbacks such as performance bottleneck, poor scalability and single point of
failure. In this paper, we propose a novel decentralized model learning
approach, namely E-Tree, which makes use of a well-designed tree structure
imposed on the edge devices. The tree structure and the locations and orders of
aggregation on the tree are optimally designed to improve the training
convergence and model accuracy. In particular, we design an efficient device
clustering algorithm, named KMA, for E-Tree by taking into account the data
distribution on the devices as well as the network distance. Evaluation
results show E-Tree significantly outperforms the benchmark approaches such as
federated learning and Gossip learning under non-IID data in terms of model
accuracy and convergence.
Comment: IEEE Internet of Things Journal, 202
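The tree-structured aggregation that E-Tree imposes can be sketched as bottom-up weighted averaging: each internal node combines its children's models before passing the result upward. The `Node` class and the two-level tree below are illustrative assumptions, not the paper's implementation; they show only the aggregation order, not the KMA clustering.

```python
import numpy as np

class Node:
    """A device in the aggregation tree; leaves hold local model updates."""
    def __init__(self, update=None, children=(), n_samples=0):
        self.update = update          # local model vector (leaves only)
        self.children = list(children)
        self.n_samples = n_samples    # sample count behind this leaf

def aggregate(node):
    """Bottom-up weighted averaging along the tree: each internal node
    merges its children's models, then forwards the merged model upward."""
    if not node.children:             # leaf: return its own local update
        return node.update, node.n_samples
    models, counts = zip(*(aggregate(c) for c in node.children))
    counts = np.array(counts, dtype=float)
    merged = sum(n * m for n, m in zip(counts, models)) / counts.sum()
    return merged, counts.sum()

# Hypothetical two-level tree: two edge clusters of leaf devices
leaves1 = [Node(np.array([1.0, 0.0]), n_samples=10),
           Node(np.array([0.0, 1.0]), n_samples=30)]
leaves2 = [Node(np.array([2.0, 2.0]), n_samples=20)]
root = Node(children=[Node(children=leaves1), Node(children=leaves2)])
model, total = aggregate(root)
```

Because weighted averaging is associative, the root's model equals the flat weighted average over all leaves; the tree changes *where* and *in what order* aggregation happens, which is what removes the central-server bottleneck.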
Differential Privacy-enabled Federated Learning for Sensitive Health Data
Leveraging real-world health data for machine learning tasks requires
addressing many practical challenges, such as distributed data silos, privacy
concerns with creating a centralized database from person-specific sensitive
data, resource constraints for transferring and integrating data from multiple
sites, and risk of a single point of failure. In this paper, we introduce a
federated learning framework that can learn a global model from distributed
health data held locally at different sites. The framework offers two levels of
privacy protection. First, it does not move or share raw data across sites or
with a centralized server during the model training process. Second, it uses a
differential privacy mechanism to further protect the model from potential
privacy attacks. We perform a comprehensive evaluation of our approach on two
healthcare applications, using real-world electronic health data of 1 million
patients. We demonstrate the feasibility and effectiveness of the federated
learning framework in offering an elevated level of privacy and maintaining
utility of the global model.
Comment: Machine Learning for Health (ML4H) at NeurIPS 201
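The second protection level described above is commonly implemented with the Gaussian mechanism: clip each client's model update to bound its sensitivity, then add calibrated noise before transmission. The sketch below shows that standard clip-and-noise pattern; the clipping norm and noise multiplier are illustrative parameters, and the paper's exact mechanism may differ.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Gaussian-mechanism step used in DP federated learning: clip the
    client's update to bound its L2 sensitivity, then add Gaussian noise
    scaled to that bound before sending the update to the server."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(scale=noise_mult * clip_norm, size=update.shape)
    return clipped + noise

# A large update gets clipped to the sensitivity bound before noising
out = privatize_update(np.array([3.0, 4.0]), clip_norm=1.0, noise_mult=0.0)
```

Clipping caps how much any single patient's record can shift the transmitted model, and the noise scale tied to that cap is what yields a formal differential-privacy guarantee; larger `noise_mult` gives stronger privacy at some cost in global-model utility, matching the privacy/utility trade-off the paper evaluates.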
Federated Neuromorphic Learning of Spiking Neural Networks for Low-Power Edge Intelligence
Spiking Neural Networks (SNNs) offer a promising alternative to conventional
Artificial Neural Networks (ANNs) for the implementation of on-device low-power
online learning and inference. On-device training is, however, constrained by
the limited amount of data available at each device. In this paper, we propose
to mitigate this problem via cooperative training through Federated Learning
(FL). To this end, we introduce an online FL-based learning rule for networked
on-device SNNs, which we refer to as FL-SNN. FL-SNN leverages local feedback
signals within each SNN, in lieu of backpropagation, and global feedback
through communication via a base station. The scheme demonstrates significant
advantages over separate training and features a flexible trade-off between
communication load and accuracy via the selective exchange of synaptic weights.
Comment: submitted for conference publication
Age-Based Scheduling Policy for Federated Learning in Mobile Edge Networks
Federated learning (FL) is a machine learning model that preserves data
privacy in the training process. Specifically, FL brings the model directly to
the user equipments (UEs) for local training, where an edge server periodically
collects the trained parameters to produce an improved model and sends it back
to the UEs. However, since communication usually occurs through a limited
spectrum, only a portion of the UEs can update their parameters upon each
global aggregation. As such, new scheduling algorithms have to be engineered to
facilitate the full implementation of FL. In this paper, based on a metric
termed the age of update (AoU), we propose a scheduling policy by jointly
accounting for the staleness of the received parameters and the instantaneous
channel qualities to improve the running efficiency of FL. The proposed
algorithm has low complexity and its effectiveness is demonstrated by Monte
Carlo simulations.
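A joint staleness-and-channel policy of the kind described can be sketched as a greedy top-k selection. The multiplicative score below is an illustrative assumption, not the paper's actual AoU weighting; it only shows how scheduling resets the age of the chosen UEs while the rest grow staler.

```python
import numpy as np

def schedule_by_aou(aou, channel_gain, k):
    """Pick k UEs for this round by a joint score favoring clients whose
    parameters are stale (high age of update) AND whose instantaneous
    channel is good; the product weighting is an assumption."""
    score = aou * channel_gain
    return set(np.argsort(score)[-k:])

def step(aou, channel_gain, k):
    """One global round: scheduled UEs reset their age to 1, the
    unscheduled ones become one round staler."""
    chosen = schedule_by_aou(aou, channel_gain, k)
    new_aou = np.array([1 if i in chosen else aou[i] + 1
                        for i in range(len(aou))])
    return new_aou, chosen

# Hypothetical round: 4 UEs, spectrum for only k=2 uploads
aou = np.array([3.0, 1.0, 5.0, 2.0])
gains = np.array([0.9, 0.5, 0.2, 0.8])
new_aou, chosen = step(aou, gains, k=2)
```

Note that UE 2, the stalest, is skipped this round because its channel is poor; a pure age-based policy would waste spectrum on it, which is the trade-off the joint metric is meant to manage.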
D2D-Enabled Data Sharing for Distributed Machine Learning at Wireless Network Edge
Mobile edge learning is an emerging technique that enables distributed edge
devices to collaborate in training shared machine learning models by exploiting
their local data samples and communication and computation resources. To deal
with the straggler issue faced in this technique, this paper proposes a
new device-to-device (D2D) enabled data sharing approach, in which different edge
devices share their data samples among each other over communication links, in
order to properly adjust their computation loads for increasing the training
speed. Under this setup, we optimize the radio resource allocation for both
data sharing and distributed training, with the objective of minimizing the
total training delay under fixed numbers of local and global iterations.
Numerical results show that the proposed data sharing design significantly
reduces the training delay, and also enhances the training accuracy when the
data samples are non-independent and identically distributed (non-IID) among edge
devices.
Comment: Submitted to IEEE Wireless Communications Letters
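The load-adjustment goal behind the data sharing above can be sketched simply: move samples so that each device's per-iteration compute time is equal, which removes the straggler. This is a stand-in for the paper's joint radio/compute optimization, with hypothetical device speeds.

```python
def balanced_loads(speeds, total_samples):
    """Share data so every device finishes a local iteration at the same
    time: device i receives samples in proportion to its processing speed
    (samples per second). Rounding drift is absorbed by the last device."""
    total_speed = sum(speeds)
    loads = [round(total_samples * s / total_speed) for s in speeds]
    loads[-1] += total_samples - sum(loads)   # keep the total exact
    return loads

# Hypothetical cluster: the fastest device is 5x the slowest
loads = balanced_loads([1, 2, 5], 800)
```

With proportional loads, every device needs the same wall-clock time per local iteration, so synchronous aggregation no longer waits on the slowest node; the D2D transfers that realize this reassignment are what the radio resource allocation must pay for.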