
    Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT

    The rapidly expanding number of IoT devices is generating huge quantities of data, but public concern over data privacy means users are apprehensive about sending data to a central server for Machine Learning (ML) purposes. The easily changed behaviour of edge infrastructure that Software Defined Networking provides makes it possible to collate IoT data at edge servers and gateways, where Federated Learning (FL) can be performed: building a central model without uploading data to the server. FedAvg is an FL algorithm that has been the subject of much study; however, it suffers from a large number of rounds to convergence with non-Independent, Identically Distributed (non-IID) client datasets and from high communication costs per round. We propose adapting FedAvg to use a distributed form of Adam optimisation, greatly reducing the number of rounds to convergence, along with novel compression techniques, to produce Communication-Efficient FedAvg (CE-FedAvg). We perform extensive experiments with the MNIST/CIFAR-10 datasets, IID/non-IID client data, varying numbers of clients, client participation rates, and compression rates. These show CE-FedAvg can converge to a target accuracy in up to 6× fewer rounds than similarly compressed FedAvg, while uploading up to 3× less data, and is more robust to aggressive compression. Experiments on an edge-computing-like testbed using Raspberry Pi clients also show CE-FedAvg is able to reach a target accuracy in up to 1.7× less real time than FedAvg.
    Funding: Engineering and Physical Sciences Research Council (EPSRC).
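    For orientation, below is a minimal sketch of the FedAvg skeleton with each client running Adam locally, in the spirit of CE-FedAvg as described above. The toy model, data, function names, and hyperparameters are illustrative assumptions; the actual algorithm also aggregates the Adam moment estimates and applies the paper's compression schemes before upload, neither of which is shown here.

```python
import numpy as np

def local_adam_step(w, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a client's local copy of the weights."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def fedavg_round(global_w, client_grad_fn, client_sizes, local_steps=5):
    """One communication round: each client trains locally from the current
    global weights; the server averages the returned weights, weighted by
    local dataset size."""
    results = []
    for k, n_k in enumerate(client_sizes):
        w = global_w.copy()
        m, v = np.zeros_like(w), np.zeros_like(w)
        for t in range(1, local_steps + 1):
            g = client_grad_fn(k, w)          # gradient on client k's data only
            w, m, v = local_adam_step(w, g, m, v, t)
        results.append((n_k, w))
    total = sum(client_sizes)
    return sum(n_k * w_k for n_k, w_k in results) / total

# Toy usage: three clients jointly fit y = 2x, each holding a private data shard.
rng = np.random.default_rng(0)
shards = [rng.normal(size=(20, 1)) for _ in range(3)]

def grad_fn(k, w):
    x = shards[k]
    return (2 * x * (x * w - 2 * x)).mean(axis=0)   # d/dw of (w*x - 2x)^2

w = np.zeros(1)
for _ in range(30):
    w = fedavg_round(w, grad_fn, [len(s) for s in shards])
print(w)   # close to [2.]
```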

    Federated Learning's Blessing: FedAvg has Linear Speedup

    Federated learning (FL) learns a model jointly from a set of participating devices without sharing each other's privately held data. The characteristics of non-IID data across the network, low device participation, and the mandate that data remain private bring challenges in understanding the convergence of FL algorithms, particularly with regard to how convergence scales with the number of participating devices. In this paper, we focus on Federated Averaging (FedAvg)--the most widely used and effective FL algorithm today--and provide a comprehensive study of its convergence rate. Although FedAvg has recently been studied by an emerging line of literature, it remains open how FedAvg's convergence scales with the number of participating devices in the FL setting--a crucial question whose answer would shed light on the performance of FedAvg in large FL systems. We fill this gap by establishing convergence guarantees for FedAvg under three classes of problems: strongly convex smooth, convex smooth, and overparameterized strongly convex smooth problems. We show that FedAvg enjoys linear speedup in each case, although with different convergence rates. For each class, we also characterize the corresponding convergence rates for the Nesterov accelerated FedAvg algorithm in the FL setting: to the best of our knowledge, these are the first linear speedup guarantees for FedAvg when Nesterov acceleration is used. To accelerate FedAvg, we also design a new momentum-based FL algorithm that further improves the convergence rate in overparameterized linear regression problems. Empirical studies of the algorithms in various settings support our theoretical results.
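    As a rough guide to what "linear speedup" means in results of this kind (standard usage, not this paper's exact bound or constants): with N participating devices and T iterations, the leading term of the optimality gap shrinks in proportion to 1/(NT), so the number of iterations needed to reach a target accuracy falls linearly as more devices participate. A hedged sketch for the strongly convex, smooth case:

```latex
% Illustrative form only; constants and lower-order terms differ by analysis.
\[
  \mathbb{E}\,[\,f(\bar{w}_T)\,] - f(w^\star)
  \;=\; \mathcal{O}\!\left(\frac{\sigma^2}{\mu\, N\, T}\right)
  \;+\; \text{lower-order terms},
\]
% where $\mu$ is the strong-convexity constant and $\sigma^2$ bounds the
% stochastic-gradient variance; doubling $N$ at fixed $T$ roughly halves the
% leading term -- this is the "linear speedup".
```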

    Continual Local Training for Better Initialization of Federated Models

    Federated learning (FL) refers to the learning paradigm that trains machine learning models directly on decentralized systems of smart edge devices without transmitting the raw data, which avoids heavy communication costs and privacy concerns. Given the typically heterogeneous data distributions in such settings, the popular FL algorithm \emph{Federated Averaging} (FedAvg) suffers from weight divergence and thus cannot achieve competitive performance for the global model (denoted as the \emph{initial performance} in FL) compared to centralized methods. In this paper, we propose a local continual training strategy to address this problem. Importance weights are evaluated on a small proxy dataset on the central server and then used to constrain the local training. With this additional term, we alleviate the weight divergence and continually integrate the knowledge from different local clients into the global model, which ensures better generalization ability. Experiments on various FL settings demonstrate that our method significantly improves the initial performance of federated models at little extra communication cost.
    Comment: Accepted to the 2020 IEEE International Conference on Image Processing (ICIP 2020).
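    The constraint described in this abstract can be pictured as an importance-weighted pull toward the global model during local training. Below is a minimal PyTorch-style sketch under stated assumptions: a Fisher-style squared-gradient importance estimate on the server's proxy dataset and a quadratic penalty; the paper's exact estimator and penalty term may differ, and the function names are hypothetical.

```python
import torch

def importance_weights(model, proxy_loader, loss_fn):
    """Estimate per-parameter importance on the server's small proxy dataset
    (here: mean squared gradient, Fisher-information style -- an assumption)."""
    omega = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in proxy_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for o, p in zip(omega, model.parameters()):
            o += p.grad.detach() ** 2
    return [o / max(len(proxy_loader), 1) for o in omega]

def local_loss(model, global_params, omega, x, y, loss_fn, lam=0.1):
    """Task loss plus an importance-weighted pull toward the global model,
    which limits weight divergence during local training."""
    task = loss_fn(model(x), y)
    penalty = sum((o * (p - g) ** 2).sum()
                  for o, p, g in zip(omega, model.parameters(), global_params))
    return task + lam * penalty
```

    Only the importance weights (one scalar per parameter) travel to the clients alongside the global model, which is consistent with the abstract's claim of little extra communication cost.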