
    Unmanned Aerial Vehicle-Enabled Mobile Edge Computing for 5G and Beyond

    Get PDF
    The technological evolution of fifth-generation (5G) and beyond wireless networks not only enables the ubiquitous connectivity of massive numbers of user equipment (UEs), e.g., smartphones, laptops, and tablets, but also boosts the development of various emerging applications, such as smart navigation, augmented reality (AR), virtual reality (VR), and online gaming. However, due to the limited battery capacity and computational capability (e.g., central processing unit (CPU), storage, and memory) of UEs, running these computationally intensive applications is challenging for UEs in terms of latency and energy consumption. To realize the targets of 5G, such as higher data rates and reliability, lower latency, and reduced energy consumption, mobile edge computing (MEC) and unmanned aerial vehicles (UAVs) have been developed as key 5G technologies. Consequently, the combination of MEC and UAVs is becoming increasingly important in current communication systems. Specifically, as MEC servers are deployed at the network edge, more and more applications can benefit from task offloading, which saves energy and reduces round-trip latency. Additionally, UAVs deployed in 5G and beyond networks can play various roles, such as relaying, data collection, delivery, and simultaneous wireless information and power transfer (SWIPT), which can flexibly enhance the quality of service (QoS) for customers and reduce the network load. In this regard, the main objective of this thesis is to investigate the UAV-enabled MEC system and propose novel artificial intelligence (AI)-based algorithms for optimizing challenging variables such as computation resources, the offloading strategy (user association), and the UAVs' trajectories. To this end, several existing research challenges in UAV-enabled MEC are tackled by the AI- and deep reinforcement learning (DRL)-based approaches proposed in this thesis. First, a multi-UAV-enabled MEC system (UAVE) is studied, where several UAVs are deployed as flying MEC platforms to provide computing resources to ground UEs.
In this context, the user association between multiple UEs and UAVs and the resource allocation from UAVs to UEs are optimized by the proposed reinforcement learning-based user association and resource allocation (RLAA) algorithm, which is based on the well-known Q-learning method and aims at minimizing the overall energy consumption of UEs. Note that in the architecture of Q-learning, a Q-table is maintained to store the values of all state-action pairs, and it is updated continually until convergence is reached. The proposed RLAA algorithm is shown to match the optimal performance of exhaustive search in small-scale cases and to achieve considerable performance gains over typical algorithms in large-scale cases. Then, to tackle more complicated problems in the UAV-enabled MEC system, we first propose a convex-optimization-based trajectory control algorithm (CAT), which jointly optimizes the user association, resource allocation, and trajectory of UAVs in an iterative way, aiming at minimizing the overall energy consumption of UEs. Considering the dynamics of the communication environment, we further propose a deep-reinforcement-learning-based trajectory control algorithm (RAT), which combines deep neural networks (DNNs) with reinforcement learning (RL). Specifically, a DNN optimizes the UAV trajectory in a continuous manner, while the user association and resource allocation are optimized with a matching algorithm; this design is more stable during training. The simulation results show that the proposed CAT and RAT algorithms both achieve considerable performance and outperform traditional benchmarks. Next, geographical fairness in the UAV-enabled MEC system is considered as an additional metric. To make the DRL-based approaches more practical and easier to implement in the real world, we further consider a multi-agent reinforcement learning system.
To this end, a multi-agent deep-reinforcement-learning-based trajectory control algorithm (MAT) is proposed to optimize the UAV trajectories, in which each UAV is controlled by its own dedicated agent. The experimental results show that it has considerable performance benefits over traditional algorithms and can flexibly adapt to changes in the environment. Finally, the integration of a UAV in emergency situations is studied, where a UAV is deployed to support ground UEs with emergency communications. A deep Q-network (DQN)-based algorithm is proposed to jointly optimize the UAV trajectory and the power control of each UE, while considering the number of UEs served, fairness, and the overall uplink data rate. The numerical simulations demonstrate that the proposed DQN-based algorithm outperforms existing benchmark algorithms.
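As a rough illustration of the tabular Q-learning machinery behind approaches like RLAA, the sketch below maintains a Q-table over state-action pairs and applies the standard update rule. The toy environment, reward, and problem sizes are invented for illustration and are not the thesis's actual UAV-MEC formulation.

```python
import numpy as np

# Hypothetical sizes: 4 abstract "network states" and 3 candidate actions
# (e.g., which UAV a UE associates with). Not taken from the thesis.
n_states, n_actions = 4, 3
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))  # the Q-table storing all state-action values
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount factor, exploration rate

def step(state, action):
    # Toy environment: reward is higher when the action "matches" the state,
    # loosely mimicking a good UE-to-UAV association; next state is random.
    reward = 1.0 if action == state % n_actions else 0.0
    return int(rng.integers(n_states)), reward

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```

After training, the greedy policy `argmax(Q[s])` recovers the rewarding action in each state, which is the sense in which the Q-table "converges" in the abstract above.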

    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    Get PDF
    The limited resources of edge servers make it difficult to serve a large number of Mobile Devices' (MDs) requests simultaneously. The Mobile Network Operator (MNO) must therefore decide how to delegate MD requests to its Mobile Edge Computing (MEC) server in order to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can improve MNO performance thanks to the flexible deployment and high mobility of UAVs and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, meanwhile, is a promising way to bridge the gap between MDs' limited processing resources and the high computing demands of upcoming applications. This study reviews research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models utilized for computation offloading in the UAV-MEC network. In addition, this article examines several intelligent pricing techniques under different structures in the UAV-MEC network. Finally, this work highlights important open research issues and future research directions for AI in computation offloading and in applying intelligent pricing strategies in the UAV-MEC network.

    Joint Trajectory and Resource Optimization of MEC-Assisted UAVs in Sub-THz Networks: A Resources-based Multi-Agent Proximal Policy Optimization DRL with Attention Mechanism

    Full text link
    THz-band communication technology will be used in 6G networks to meet high-speed and high-capacity data service demands. However, THz communication suffers losses owing to limitations such as molecular absorption, rain attenuation, and a short coverage range. Furthermore, to maintain steady THz communications and overcome coverage distances in rural and suburban regions, the required number of base stations (BSs) is very high. Consequently, a new communication platform that enables aerial communication services is required. Such an airborne platform supports line-of-sight (LoS) rather than non-line-of-sight (NLoS) communications, which helps overcome these losses. Therefore, in this work, we investigate deployment and resource optimization for MEC-enabled UAVs, which can provide THz-based communications in remote regions. To this end, we formulate an optimization problem to minimize the sum of the energy consumption of both the MEC-UAV and the mobile users (MUs) and the delay incurred by the MUs under the given task information. The formulated problem is a mixed-integer nonlinear programming (MINLP) problem, which is NP-hard. To address it, we decompose the main problem into two subproblems. We solve the first subproblem with a standard optimization solver, i.e., CVXPY, due to its convex nature. To solve the second subproblem, we design a resources-based multi-agent proximal policy optimization (RMAPPO) DRL algorithm with an attention mechanism. The attention mechanism is utilized to encode a varying number of observations and is designed by the network coordinator to provide a differentiated, fitting reward to each agent in the network. The simulation results show that the proposed algorithm outperforms the benchmarks and yields a network utility that is 2.22%, 15.55%, and 17.77% higher than the benchmarks. Comment: 13 pages, 12 figures
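The decompose-then-solve pattern described above can be illustrated on a toy convex subproblem. The paper solves its convex subproblem with CVXPY; the sketch below instead uses SciPy's SLSQP on an assumed cost model, computation energy kappa*c_i*f_i^2 plus delay c_i/f_i under a shared CPU-frequency budget, which is a common textbook form and not the paper's exact formulation. All numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

c = np.array([2.0, 1.0, 3.0])   # per-user CPU-cycle demands (hypothetical units)
kappa, F = 0.1, 4.0             # energy coefficient and total frequency budget (assumed)

def cost(f):
    # weighted energy-plus-delay objective: kappa*c_i*f_i^2 + c_i/f_i
    return float(np.sum(kappa * c * f**2 + c / f))

res = minimize(
    cost,
    x0=np.full(3, F / 3),                        # feasible, strictly positive start
    method="SLSQP",
    bounds=[(1e-3, None)] * 3,                   # f_i > 0 so the delay term is defined
    constraints=[{"type": "ineq", "fun": lambda f: F - f.sum()}],  # sum(f) <= F
)
```

Because the per-user cost is convex in f_i for f_i > 0 and the budget constraint is linear, a local solver such as SLSQP reaches the global optimum here, mirroring why the paper can hand its first subproblem to a convex solver.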

    A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems

    Get PDF
    In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. The design of algorithms to decide which applications or tasks should be offloaded, and where to execute them, is therefore crucial. One option that has been gaining momentum lately is the use of Reinforcement Learning (RL) and, in particular, Deep Reinforcement Learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them, but not the primary focus. Other surveys have reviewed techniques to solve the computation offloading problem, with RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey, where we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and time-varying characteristics considered in the analysed scenarios.
In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and we analyse which scenarios and techniques are most frequently considered in the literature. Finally, this survey identifies a number of research challenges, future directions, and areas for further study. Consejería de Educación de la Junta de Castilla y León y FEDER (VA231P20); Ministerio de Ciencia e Innovación y Agencia Estatal de Investigación (Proyecto PID2020-112675RB-C42, PID2021-124463OBI00 y RED2018-102585-T, financiados por MCIN/AEI/10.13039/501100011033).
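At its core, the offloading decision that the surveyed RL methods learn can be framed as comparing a weighted latency/energy cost of local execution against offloading. The minimal sketch below uses common textbook cost models; the parameter values (device and edge CPU frequencies, uplink rate, transmit power, weights) are invented for illustration and do not come from the survey.

```python
def local_cost(cycles, f_local=1e9, kappa=1e-27, w_time=0.5):
    # Weighted sum of local execution latency (cycles/f) and CPU energy
    # (kappa * cycles * f^2, the usual effective-capacitance model).
    time = cycles / f_local
    energy = kappa * cycles * f_local**2
    return w_time * time + (1 - w_time) * energy

def offload_cost(cycles, bits, rate=5e6, p_tx=0.5, f_edge=10e9, w_time=0.5):
    # Uplink transmission plus remote execution at the faster edge CPU;
    # downloading the (small) result is ignored for simplicity.
    time = bits / rate + cycles / f_edge
    energy = p_tx * bits / rate   # device energy spent transmitting
    return w_time * time + (1 - w_time) * energy

def should_offload(cycles, bits):
    # The binary decision an RL agent would learn to approximate,
    # here computed directly from the static cost models.
    return offload_cost(cycles, bits) < local_cost(cycles)
```

Under these assumed models, a compute-heavy task with a small input (e.g., 1e9 cycles, 1e5 bits) favors offloading, while a light task with a large input (1e7 cycles, 1e8 bits) favors local execution; RL becomes valuable precisely when rates, loads, and channel states vary over time and such static comparisons no longer suffice.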

    Energy-efficient non-orthogonal multiple access for wireless communication system

    Get PDF
    Non-orthogonal multiple access (NOMA) has been recognized as a potential solution for enhancing the throughput of next-generation wireless communications. NOMA is a promising option for 5G networks due to its superior spectral efficiency (SE) compared to orthogonal multiple access (OMA). From the perspective of green communication, energy efficiency (EE) has become a key performance indicator. A systematic literature review is conducted to investigate the energy-efficient approaches researchers have employed in NOMA. We identified 19 subcategories related to EE in NOMA across 108 publications, 92 of which are from the IEEE website. To aid comprehension, each category is summarized and elaborated in detail. From the literature review, it is observed that NOMA can enhance the EE of wireless communication systems. At the end of this survey, future research directions, particularly machine learning algorithms such as reinforcement learning (RL) and deep reinforcement learning (DRL) for NOMA, are also discussed.
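The SE advantage of NOMA over OMA mentioned above can be reproduced in a toy two-user downlink example under standard textbook assumptions (perfect successive interference cancellation, a fixed power split, time-sharing OMA). The channel gains and powers below are hypothetical and chosen only to make the comparison concrete.

```python
import numpy as np

g_near, g_far = 1.0, 0.1        # channel gains (near user has the stronger channel)
p_total, noise = 1.0, 0.01      # total transmit power and noise power (assumed)
a_near, a_far = 0.2, 0.8        # NOMA power split: more power to the weak (far) user

# NOMA: the far user decodes its signal treating the near user's as interference;
# the near user cancels the far user's signal via SIC before decoding its own.
r_far_noma = np.log2(1 + a_far * p_total * g_far / (a_near * p_total * g_far + noise))
r_near_noma = np.log2(1 + a_near * p_total * g_near / noise)

# OMA (time sharing): each user gets half the time with full power.
r_far_oma = 0.5 * np.log2(1 + p_total * g_far / noise)
r_near_oma = 0.5 * np.log2(1 + p_total * g_near / noise)

se_noma = r_near_noma + r_far_noma    # sum spectral efficiency (bit/s/Hz)
se_oma = r_near_oma + r_far_oma
ee_noma = se_noma / p_total           # energy efficiency (bit/s/Hz per watt)
ee_oma = se_oma / p_total
```

With these numbers the NOMA sum SE exceeds the OMA sum SE, and since both schemes spend the same total power, NOMA's EE is higher as well, which is the basic mechanism the surveyed works build on.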

    A survey of multi-access edge computing in 5G and beyond : fundamentals, technology integration, and state-of-the-art

    Get PDF
    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth-generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large data before sending it to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications where real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works, and discuss challenges and potential future directions for MEC research.

    A Comprehensive Overview on 5G-and-Beyond Networks with UAVs: From Communications to Sensing and Intelligence

    Full text link
    Due to the advancements in cellular technologies and the dense deployment of cellular infrastructure, integrating unmanned aerial vehicles (UAVs) into the fifth-generation (5G) and beyond cellular networks is a promising solution to achieve safe UAV operation as well as enabling diversified applications with mission-specific payload data delivery. In particular, 5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC). On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in three-dimensional (3D) space. On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference. Besides the requirement of high-performance wireless communications, the ability to support effective and efficient sensing as well as network intelligence is also essential for 5G-and-beyond 3D heterogeneous wireless networks with coexisting aerial and ground users. In this paper, we provide a comprehensive overview of the latest research efforts on integrating UAVs into cellular networks, with an emphasis on how to exploit advanced techniques (e.g., intelligent reflecting surface, short packet transmission, energy harvesting, joint communication and radar sensing, and edge intelligence) to meet the diversified service requirements of next-generation wireless systems. Moreover, we highlight important directions for further investigation in future work. Comment: Accepted by IEEE JSA