
    Applications of Federated Learning in Smart Cities: Recent Advances, Taxonomy, and Open Challenges

    Federated learning plays an important role in building smart cities. As big data and artificial intelligence develop, protecting data privacy becomes a pressing problem in this process, and federated learning is capable of solving it. Starting from the current state of federated learning and its applications in various fields, we conduct a comprehensive investigation. This paper summarizes the latest research on applying federated learning across the domains of smart cities, giving an in-depth view of its current development in the Internet of Things, transportation, communications, finance, healthcare, and other fields. Beforehand, we introduce the background, definition, and key technologies of federated learning, and we review those key technologies together with the latest results. Finally, we discuss future applications and research directions of federated learning in smart cities.
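
    Among the key technologies such surveys cover, federated averaging (FedAvg) is the basic aggregation scheme most applications build on. The abstract gives no algorithm, so the following is only a minimal illustrative sketch of one FedAvg round in NumPy; the logistic-regression local update, the synthetic clients, and the names `local_update` and `fed_avg` are assumptions for illustration, not the paper's method.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Plain logistic-regression SGD on one client's private data (illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: weight each client's update by its sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One training run over three hypothetical clients with synthetic data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
global_w = np.zeros(4)
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```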

    Federated Embedded Systems – a review of the literature in related fields

    This report is concerned with the vision of smart interconnected objects, a vision that has attracted much attention lately. The focus is on embedded, interconnected, open, and heterogeneous control systems, formally referred to as Federated Embedded Systems (FES). To place FES into context, a review of related research directions is presented, covering concepts such as systems of systems, cyber-physical systems, ubiquitous computing, the Internet of Things, and multi-agent systems. Interestingly, the reviewed fields appear to overlap with each other in an increasing number of ways.

    Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions

    Sixth-generation (6G) networks are expected to intelligently support a wide range of smart services and innovative applications. Such a context urges heavy use of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions/operations able to fulfill the various requirements of the envisioned 6G services. Specifically, collaborative ML/DL deploys a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing time/communication overhead. This work provides a comprehensive study of how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a recently emerged technique that promises better performance than existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, namely federated learning, split learning, and split federated learning, as well as an overview of 6G networks, their main vision, and the timeline of key developments. We then highlight the need for split federated learning in upcoming 6G networks across both 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets and frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
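
    To make the split federated learning idea concrete, the sketch below (a rough PyTorch illustration under assumptions, not the paper's implementation) splits a small network at an assumed cut layer: each client runs the forward pass up to the cut, the server completes the pass and returns gradients of the smashed activations, and the client-side segments are then federated-averaged as in FedAvg. All module and function names are hypothetical.

```python
import torch
import torch.nn as nn

class ClientNet(nn.Module):
    """Client-side segment up to an assumed cut layer."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
    def forward(self, x):
        return self.layers(x)

class ServerNet(nn.Module):
    """Server-side segment from the cut layer to the output."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(32, 2))
    def forward(self, h):
        return self.layers(h)

def sfl_round(clients, server, server_opt, loss_fn):
    """One SFL round: split forward/backward per client, then FedAvg the client segments."""
    for client_net, client_opt, x, y in clients:
        smashed = client_net(x)                        # client forward to the cut layer
        detached = smashed.detach().requires_grad_()   # what would cross the network
        loss = loss_fn(server(detached), y)            # server-side forward
        server_opt.zero_grad(); client_opt.zero_grad()
        loss.backward()                                # server backward down to the cut
        server_opt.step()
        smashed.backward(detached.grad)                # client backward with returned grads
        client_opt.step()
    # Fed server: average the client-side weights, as in FedAvg.
    state = clients[0][0].state_dict()
    for key in state:
        state[key] = torch.stack([c[0].state_dict()[key] for c in clients]).mean(0)
    for client_net, *_ in clients:
        client_net.load_state_dict(state)

# Hypothetical usage with two clients on synthetic data.
loss_fn = nn.CrossEntropyLoss()
server = ServerNet(); server_opt = torch.optim.SGD(server.parameters(), lr=0.1)
clients = []
for _ in range(2):
    net = ClientNet()
    clients.append((net, torch.optim.SGD(net.parameters(), lr=0.1),
                    torch.rand(32, 16), torch.randint(0, 2, (32,))))
for _ in range(5):
    sfl_round(clients, server, server_opt, loss_fn)
```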

    Decentralized Federated Learning on the Edge over Wireless Mesh Networks

    The rapid growth of Internet of Things (IoT) devices has generated vast amounts of data, leading to the emergence of federated learning as a novel distributed machine learning paradigm. Federated learning enables model training at the edge, leveraging the processing capacity of edge devices while preserving privacy and mitigating data transfer bottlenecks. However, the conventional centralized federated learning architecture suffers from a single point of failure and susceptibility to malicious attacks. In this study, we delve into an alternative approach called decentralized federated learning (DFL), conducted over a wireless mesh network as the communication backbone. We perform a comprehensive network performance analysis using stochastic geometry theory and physical interference models, offering fresh insights into the convergence analysis of DFL. Additionally, we conduct system simulations to assess the proposed decentralized architecture under various network parameters and different aggregation methods, namely FedAvg, Krum, and Median. Our model is trained on the widely recognized EMNIST dataset for benchmarking handwritten digit classification. To minimize the model's size at the edge and reduce communication overhead, we employ a cutting-edge compression technique based on genetic algorithms. Our simulation results reveal that the compressed decentralized architecture achieves performance comparable to the baseline centralized architecture and traditional DFL in terms of accuracy and average loss for our classification task. Moreover, it significantly reduces the size of the models shared over the wireless channel, compressing participants' local models to nearly half their original size relative to the baselines, effectively reducing complexity and communication overhead.
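
    The aggregation rules compared in the simulations (FedAvg, Krum, Median) differ in how an aggregating node combines the participants' updates. Below is a rough NumPy sketch of the standard Krum and coordinate-wise median rules over flattened model vectors; it is a generic form assumed for illustration, not the authors' code.

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of the participants' flattened model vectors."""
    return np.median(np.stack(updates), axis=0)

def krum_aggregate(updates, n_byzantine=1):
    """Krum: return the update whose summed squared distance to its
    n - f - 2 nearest neighbours is smallest (f = assumed Byzantine count)."""
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    k = n - n_byzantine - 2                                    # neighbours counted per score
    scores = [np.sort(row)[1:k + 1].sum() for row in dists]   # drop the zero self-distance
    return updates[int(np.argmin(scores))]

# Hypothetical use: five participants' flattened models, one assumed outlier.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, 10) for _ in range(4)] + [rng.normal(5.0, 0.1, 10)]
robust = krum_aggregate(updates, n_byzantine=1)   # ignores the outlying update
med = median_aggregate(updates)
```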

    Resource optimized federated learning-enabled cognitive internet of things for smart industries

    Leveraging the cognitive Internet of Things (C-IoT), emerging computing technologies, and machine learning schemes for industries can assist in streamlining manufacturing processes, revolutionizing operational analytics, and maintaining factory efficiency. However, further adoption of centralized machine learning in industries is restricted by data privacy concerns. Federated learning has the potential to bring predictive features to industrial systems without leaking private information, but its implementation involves key challenges including resource optimization, robustness, and security. In this article, we propose a novel dispersed federated learning (DFL) framework that provides resource optimization, while its distributed fashion of learning offers robustness. We formulate an integer linear optimization problem to minimize the overall federated learning cost of the DFL framework. To solve the formulated problem, we first decompose it into two sub-problems: association and resource allocation. Second, we relax these sub-problems to make them convex optimization problems. Later, we use a rounding technique to obtain binary association and resource allocation variables. Our proposed algorithm works iteratively by fixing one problem variable (for example, association) and computing the other (for example, resource allocation); the iterations continue until the formulated cost optimization problem converges. Furthermore, we compare the proposed DFL with two baseline schemes, namely random resource allocation and random association. Numerical results show the superiority of the proposed DFL scheme.
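
    The abstract does not reproduce the ILP or its exact cost terms, so the following is only a toy sketch of the decompose-and-alternate idea: fix the association and solve a convex resource-allocation sub-problem in closed form, then fix the allocation and choose a binary association, repeating until the cost stops improving. The cost model (link latency plus data divided by allocated capacity), the closed-form allocation, and every constant are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dev, n_srv = 8, 2
data = rng.uniform(1.0, 5.0, n_dev)              # per-device local data (illustrative units)
latency = rng.uniform(1.0, 3.0, (n_dev, n_srv))  # per-link access latency (illustrative)
capacity = np.array([10.0, 8.0])                 # per-server capacity budgets (illustrative)

def allocation_step(assoc):
    """Resource sub-problem (association fixed): split each server's capacity
    proportionally to sqrt(d_i), which minimises sum_i d_i / (r_i * C_j)."""
    r = np.zeros((n_dev, n_srv))
    for j in range(n_srv):
        members = np.where(assoc == j)[0]
        if members.size:
            w = np.sqrt(data[members])
            r[members, j] = w / w.sum()
    return r

def association_step(r_prev):
    """Association sub-problem (allocation fixed): each device picks the server
    with the lowest estimated cost, i.e. a binary (rounded) assignment."""
    share = np.maximum(r_prev, 1.0 / n_dev)       # optimistic share for servers not yet used
    per_dev = latency + data[:, None] / (share * capacity)
    return np.argmin(per_dev, axis=1)

def total_cost(assoc, r):
    idx = np.arange(n_dev)
    return float(np.sum(latency[idx, assoc] + data / (r[idx, assoc] * capacity[assoc])))

assoc = rng.integers(0, n_srv, n_dev)             # random initial association
prev = np.inf
for _ in range(20):                               # alternate until no further improvement
    r = allocation_step(assoc)
    assoc = association_step(r)
    r = allocation_step(assoc)
    cost = total_cost(assoc, r)
    if prev - cost < 1e-6:
        break
    prev = cost
```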