
    Fair Selection of Edge Nodes to Participate in Clustered Federated Multitask Learning

    Clustered federated multitask learning is introduced as an efficient technique when data is unbalanced and distributed amongst clients in a non-independent and identically distributed manner. While a similarity metric can provide client groups with specialized models according to their data distributions, this process can be time-consuming because the server must first capture the data distribution of every client to perform the correct clustering. Due to resource and time constraints at the network edge, only a fraction of devices is selected every round, necessitating an efficient scheduling technique to address these issues. Thus, this paper introduces a two-phased client selection and scheduling approach that improves the convergence speed while capturing all data distributions. The approach ensures correct clustering and fairness between clients by leveraging bandwidth reuse for participants that spent a longer time training their models and by exploiting device heterogeneity to schedule participants according to their delay. The server then performs the clustering according to predetermined thresholds and stopping criteria. When a given cluster approaches a stopping point, the server employs a greedy selection for that cluster, picking the devices with lower delay and better resources. A convergence analysis is provided, showing the relationship between the proposed scheduling approach and the convergence rate of the specialized models, to obtain convergence bounds under non-i.i.d. data distribution. We carry out extensive simulations, and the results demonstrate that the proposed algorithms reduce training time and improve the convergence speed while equipping every user with a customized model tailored to its data distribution.
    Comment: To appear in IEEE Transactions on Network and Service Management, Special Issue on Federated Learning for the Management of Networked Systems.
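
To make the selection step concrete, here is a minimal Python sketch of the greedy per-cluster selection the abstract describes: once a cluster nears its stopping criterion, prefer the clients with the lowest delay and the best resources. The `Client` fields and the function name are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the greedy per-cluster selection idea:
# near a cluster's stopping point, pick low-delay, well-resourced clients.
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    delay: float      # estimated per-round latency (s); assumed field
    capacity: float   # relative compute/bandwidth score; assumed field

def select_greedy(cluster_clients, k):
    """Pick the k clients with the lowest delay, breaking ties by capacity."""
    ranked = sorted(cluster_clients, key=lambda c: (c.delay, -c.capacity))
    return ranked[:k]

clients = [Client(0, 2.1, 0.9), Client(1, 0.7, 0.4), Client(2, 0.9, 0.8)]
print([c.cid for c in select_greedy(clients, 2)])  # -> [1, 2]
```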

    Machine learning methods for service placement: a systematic review

    With the growth of real-time and latency-sensitive applications in the Internet of Everything (IoE), service placement cannot rely on cloud computing alone. In response to this need, several computing paradigms, such as Mobile Edge Computing (MEC), Ultra-dense Edge Computing (UDEC), and Fog Computing (FC), have emerged. These paradigms aim to bring computing resources closer to the end user, reducing delay and wasted backhaul bandwidth. One of the major challenges of these new paradigms is the limitation of edge resources and the dependencies between different service parts. Some solutions, such as the microservice architecture, allow different parts of an application to be processed simultaneously. However, due to the ever-increasing number of devices and incoming tasks, the service placement problem can no longer be solved by rule-based deterministic solutions. In such a dynamic and complex environment, many factors can influence the solution. Optimization and Machine Learning (ML) are the two tools most widely used for service placement, and both typically rely on a cost function. In ML, the cost function usually expresses the difference between predicted and actual values, and learning aims to minimize it; in simpler terms, ML minimizes the gap between prediction and reality based on historical data rather than relying on explicit rules. Due to the NP-hard nature of the service placement problem, classical optimization methods are not sufficient; instead, metaheuristic and heuristic methods are widely used. In addition, the ever-changing big data in IoE environments requires specialized ML methods. In this systematic review, we present a taxonomy of ML methods for the service placement problem. Our findings show that 96% of the surveyed applications use a distributed microservice architecture, 51% of the studies rely on on-demand resource estimation methods, and 81% are multi-objective. This article also outlines open questions and future research trends. Our literature review shows that one of the most important trends in ML for service placement is reinforcement learning, with a 56% share of the research.
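
As a concrete illustration of the cost-minimization view sketched in this abstract, the following toy Python snippet fits a line by gradient descent on a mean-squared-error cost, shrinking the gap between predictions and historical observations. It is purely didactic and not taken from the review.

```python
# Gradient descent on J = (1/n) * sum((w*x + b - y)^2):
# ML as "minimize the gap between prediction and reality".
def fit_linear(xs, ys, lr=0.05, steps=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # partial derivatives of the mean-squared-error cost
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]   # historical data from y = 2x + 1
print(fit_linear(xs, ys))             # ~ (2.0, 1.0)
```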

    Quantum Machine Learning for 6G Communication Networks: State-of-the-Art and Vision for the Future

    The upcoming 5th Generation (5G) of wireless networks is expected to lay the foundation of intelligent networks with the provision of some isolated Artificial Intelligence (AI) operations. However, fully-intelligent network orchestration and management for providing innovative services will only be realized in Beyond 5G (B5G) networks. To this end, we envisage that the 6th Generation (6G) of wireless networks will be driven by on-demand self-reconfiguration to ensure a many-fold increase in network performance and service types. The increasingly stringent performance requirements of emerging networks may finally trigger the deployment of some interesting new technologies such as large intelligent surfaces, electromagnetic-orbital angular momentum, visible light communications, and cell-free communications, to name a few. Our vision for 6G is a massively connected complex network capable of rapidly responding to users' service calls through real-time learning of the network state as described by the network edge (e.g., base-station locations, cache contents, etc.), air interface (e.g., radio spectrum, propagation channel, etc.), and the user side (e.g., battery life, locations, etc.). The multi-state, multi-dimensional nature of the network state, requiring real-time knowledge, can be viewed as a quantum uncertainty problem. In this regard, the emerging paradigms of Machine Learning (ML), Quantum Computing (QC), and Quantum ML (QML), and their synergies with communication networks, can be considered core 6G enablers. Considering these potentials, starting with the 5G target services and enabling technologies, we provide a comprehensive review of the related state of the art in the domains of ML (including deep learning), QC, and QML, and identify their potential benefits, issues, and use cases for their applications in B5G networks. Subsequently, we propose a novel QC-assisted and QML-based framework for 6G communication networks while articulating its challenges and potential enabling technologies at the network infrastructure, network edge, air interface, and user end. Finally, some promising future research directions for the quantum- and QML-assisted B5G networks are identified and discussed.

    Unleashing the Power of Edge-Cloud Generative AI in Mobile Networks: A Survey of AIGC Services

    Artificial Intelligence-Generated Content (AIGC) is an automated method for creatively generating, manipulating, and modifying valuable and diverse data using AI algorithms. This survey paper focuses on the deployment of AIGC applications, e.g., ChatGPT and DALL-E, at mobile edge networks, namely mobile AIGC networks, which provide personalized and customized AIGC services in real time while maintaining user privacy. We begin by introducing the background and fundamentals of generative models and the lifecycle of AIGC services at mobile AIGC networks, which includes data collection, training, fine-tuning, inference, and product management. We then discuss the collaborative cloud-edge-mobile infrastructure and technologies required to support AIGC services and enable users to access AIGC at mobile edge networks. Furthermore, we explore AIGC-driven creative applications and use cases for mobile AIGC networks. Additionally, we discuss the implementation, security, and privacy challenges of deploying mobile AIGC networks. Finally, we highlight some future research directions and open issues for the full realization of mobile AIGC networks.
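
As a toy illustration of the collaborative cloud-edge split discussed above, the sketch below routes an AIGC inference request to the edge when a model is cached there and falls back to the cloud otherwise. The model names and routing policy are hypothetical, not the survey's design.

```python
# Hypothetical edge-first routing for AIGC inference requests.
EDGE_MODELS = {"sd-lite"}  # models assumed resident at the edge cache

def serve(request_model: str) -> str:
    """Serve from the edge when possible; otherwise fall back to the cloud."""
    if request_model in EDGE_MODELS:
        return f"edge inference with {request_model} (low latency, local data)"
    return f"cloud inference with {request_model} (higher latency)"

print(serve("sd-lite"))
print(serve("gpt-xl"))
```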

    Novel Processing and Transmission Techniques Leveraging Edge Computing for Smart Health Systems

    The abstract is in the attachment.

    Energy-Sustainable IoT Connectivity: Vision, Technological Enablers, Challenges, and Future Directions

    Technology solutions must effectively balance economic growth, social equity, and environmental integrity to achieve a sustainable society. Notably, although the Internet of Things (IoT) paradigm constitutes a key sustainability enabler, critical issues such as increasing maintenance operations, energy consumption, and the manufacturing/disposal of IoT devices have long-term negative economic, societal, and environmental impacts and must be efficiently addressed. This calls for self-sustainable IoT ecosystems that require minimal external resources and intervention, effectively utilize renewable energy sources, and recycle materials whenever possible, thus encompassing energy sustainability. In this work, we focus on energy-sustainable IoT during the operation phase, although our discussions sometimes extend to other sustainability aspects and IoT lifecycle phases. Specifically, we provide a fresh look at energy-sustainable IoT and identify energy provision, energy transfer, and energy efficiency as the three main energy-related processes whose harmonious coexistence pushes toward realizing self-sustainable IoT systems. Their main related technologies, recent advances, challenges, and research directions are also discussed. Moreover, we overview relevant performance metrics for assessing the energy-sustainability potential of a given technique, technology, device, or network, and list some target values for the next generation of wireless systems. Overall, this paper offers insights that are valuable for advancing sustainability goals for present and future generations.
    Comment: 25 figures, 12 tables, submitted to IEEE Open Journal of the Communications Society.
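
For readers unfamiliar with the kind of metric such assessments use, here is a minimal sketch of one common energy-efficiency figure of merit, bits delivered per joule consumed. The numeric values are made up for illustration and are not the paper's target values.

```python
# Link-level energy efficiency: useful bits delivered per joule consumed.
def energy_efficiency(bits_delivered: float, energy_joules: float) -> float:
    return bits_delivered / energy_joules

throughput_bps = 2e6   # 2 Mbit/s sustained (illustrative)
power_watts = 0.5      # average device power draw (illustrative)
duration_s = 10.0
ee = energy_efficiency(throughput_bps * duration_s, power_watts * duration_s)
print(f"{ee:.0f} bit/J")  # -> 4000000 bit/J
```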

    A comprehensive survey on radio resource management in 5G HetNets: current solutions, future trends and open issues

    The 5G network technologies are intended to accommodate innovative services with a large influx of data traffic, lower energy consumption, and increased levels of quality of service and user quality of experience. To meet 5G expectations, heterogeneous networks (HetNets) have been introduced. They involve the deployment of additional low-power nodes within the coverage area of conventional high-power nodes, placed closer to users to form underlay HetNets. Due to the increased density of small-cell networks and radio access technologies, radio resource management (RRM) for potential 5G HetNets has emerged as a critical research avenue. It plays a pivotal role in enhancing spectrum utilization, load balancing, and network energy efficiency. In this paper, we summarize the key challenges emerging in 5G HetNets, i.e., cross-tier interference, co-tier interference, and user association, resource allocation, and power allocation (UA-RA-PA), and highlight their significance. In addition, we present a comprehensive survey of RRM schemes based on interference management (IM), UA-RA-PA, and combined approaches (UA-RA-PA + IM). We introduce a taxonomy for individual (IM, UA-RA-PA) and combined approaches as a framework for systematically studying the existing schemes. These schemes are also qualitatively analyzed and compared to each other. Finally, challenges and opportunities for RRM in 5G are outlined, and design guidelines along with possible solutions for advanced mechanisms are presented.
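
As a toy illustration of the user-association (UA) subproblem this survey taxonomizes, the sketch below attaches each user to the base station with the best SINR subject to a per-cell load cap, a simple load-balancing heuristic. All names and values are illustrative, not a scheme from the survey.

```python
# Greedy max-SINR user association with a per-cell load cap (illustrative).
def associate(users_sinr, capacity):
    """users_sinr: {user: {bs: sinr_dB}}; capacity: {bs: max_users}."""
    load = {bs: 0 for bs in capacity}
    assignment = {}
    for user, sinrs in users_sinr.items():
        # try base stations in decreasing SINR order, skip full cells
        for bs in sorted(sinrs, key=sinrs.get, reverse=True):
            if load[bs] < capacity[bs]:
                assignment[user] = bs
                load[bs] += 1
                break
    return assignment

users = {"u1": {"macro": 12.0, "small": 15.0},
         "u2": {"macro": 10.0, "small": 14.0},
         "u3": {"macro": 9.0,  "small": 13.0}}
print(associate(users, {"macro": 2, "small": 2}))
# -> {'u1': 'small', 'u2': 'small', 'u3': 'macro'}
```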