
    Unmanned Aerial Vehicle-Enabled Mobile Edge Computing for 5G and Beyond

    The technological evolution of fifth generation (5G) and beyond wireless networks not only enables the ubiquitous connectivity of massive numbers of user equipments (UEs), e.g., smartphones, laptops and tablets, but also boosts the development of various emerging applications, such as smart navigation, augmented reality (AR), virtual reality (VR) and online gaming. However, due to the limited battery capacity and computational capability (e.g., central processing unit (CPU), storage and memory) of UEs, running these computationally intensive applications is challenging for UEs in terms of latency and energy consumption. In order to realize the targets of 5G, such as higher data rate and reliability, lower latency and reduced energy consumption, mobile edge computing (MEC) and unmanned aerial vehicles (UAVs) have been developed as key technologies of 5G. Consequently, the combination of MEC and UAVs is becoming increasingly important in current communication systems. Specifically, as the MEC server is deployed at the network edge, more and more applications can benefit from task offloading, which saves energy and reduces round-trip latency. Additionally, UAVs deployed in 5G and beyond networks can play various roles, such as relaying, data collection, delivery and simultaneous wireless information and power transfer (SWIPT), which can flexibly enhance the QoS of customers and reduce the network load. In this regard, the main objective of this thesis is to investigate UAV-enabled MEC systems and to propose novel artificial intelligence (AI)-based algorithms for optimizing challenging variables such as the computation resource, the offloading strategy (user association) and the UAVs' trajectories. To achieve this, several existing research challenges in UAV-enabled MEC are tackled by the AI- or deep reinforcement learning (DRL)-based approaches proposed in this thesis. First, a multi-UAV-enabled MEC (UAVE) system is studied, where several UAVs are deployed as flying MEC platforms to provide computing resources to ground UEs. In this context, the user association between multiple UEs and UAVs and the resource allocation from UAVs to UEs are optimized by the proposed reinforcement learning-based user association and resource allocation (RLAA) algorithm, which is based on the well-known Q-learning method and aims at minimizing the overall energy consumption of UEs. Note that in the Q-learning architecture, a Q-table stores the values of all state-action pairs and is updated until convergence is reached. The proposed RLAA algorithm is shown to match the optimal performance of exhaustive search in small-scale cases and to offer considerable performance gains over typical algorithms in large-scale cases. Then, in order to tackle more complicated problems in the UAV-enabled MEC system, we first propose a convex optimization based trajectory control algorithm (CAT), which jointly optimizes the user association, resource allocation and trajectory of UAVs in an iterative manner, aiming at minimizing the overall energy consumption of UEs. Considering the dynamics of the communication environment, we further propose a deep reinforcement learning based trajectory control algorithm (RAT), which deploys deep neural network (DNN) and reinforcement learning (RL) techniques. Specifically, we apply a DNN to optimize the UAV trajectory in a continuous manner and optimize the user association and resource allocation with a matching algorithm; this design is more stable during training.
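The abstract does not include an implementation; as a minimal sketch of the tabular Q-learning update that an RLAA-style algorithm builds on, one might write the following, where the state/action encoding (candidate UE-UAV associations and resource assignments) and the reward (negative UE energy consumption) are illustrative assumptions rather than the thesis's exact formulation:

```python
import numpy as np

# Illustrative tabular Q-learning loop in the spirit of the RLAA algorithm.
# The environment interface (step/reset) and the reward definition are
# assumptions made for this sketch, not the thesis's actual design.

def q_learning(n_states, n_actions, step, reset,
               episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """step(s, a) -> (next_state, reward, done); reset() -> initial state."""
    q = np.zeros((n_states, n_actions))          # Q-table over all state-action pairs
    for _ in range(episodes):
        s, done = reset(), False
        while not done:
            # epsilon-greedy exploration over candidate association/allocation actions
            a = np.random.randint(n_actions) if np.random.rand() < eps else int(q[s].argmax())
            s_next, r, done = step(s, a)         # r could be minus the energy consumed
            # Bellman update of the visited state-action pair
            q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
            s = s_next
    return q
```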
Simulation results show that the proposed CAT and RAT algorithms both achieve considerable performance and outperform other traditional benchmarks. Next, another metric, geographical fairness, is considered in the UAV-enabled MEC system. In order to make the DRL-based approaches more practical and easier to implement in the real world, we further consider a multi-agent reinforcement learning framework. To this end, a multi-agent deep reinforcement learning based trajectory control algorithm (MAT) is proposed to optimize the UAV trajectories, in which each UAV is instructed by its own dedicated agent. The experimental results show that it offers considerable performance benefits over other traditional algorithms and can flexibly adapt to changes in the environment. Finally, the integration of UAVs in emergency situations is studied, where a UAV is deployed to support ground UEs for emergency communications. A deep Q network (DQN) based algorithm is proposed to jointly optimize the UAV trajectory and the power control of each UE, while considering the number of UEs served, fairness, and the overall uplink data rate. Numerical simulations demonstrate that the proposed DQN-based algorithm outperforms the existing benchmark algorithms.
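As a hedged illustration of the DQN-style agent mentioned above (the state and action dimensions, network sizes and reward shaping from served UEs, fairness and uplink rate are placeholders, not the thesis's exact setup), a minimal value network and its temporal-difference loss in PyTorch could look like this:

```python
import torch
import torch.nn as nn

# Minimal DQN value network for a UAV trajectory / power-control agent (sketch).
# All dimensions below are illustrative assumptions.

class QNet(nn.Module):
    def __init__(self, state_dim=8, n_actions=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),            # one Q-value per discrete action
        )

    def forward(self, state):
        return self.net(state)

def td_loss(q_net, target_net, batch, gamma=0.99):
    """One DQN temporal-difference loss on a replay batch (s, a, r, s_next, done)."""
    s, a, r, s_next, done = batch                 # a: LongTensor, done: 0/1 FloatTensor
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1.0 - done) * q_next
    return nn.functional.mse_loss(q_sa, target)
```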

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.

    Echo State Learning for Wireless Virtual Reality Resource Allocation in UAV-enabled LTE-U Networks

    In this paper, the problem of resource management is studied for a network of wireless virtual reality (VR) users communicating over an unmanned aerial vehicle (UAV)-enabled LTE-U network. In the studied model, the UAVs act as VR control centers that collect tracking information from the VR users over the wireless uplink and then send the constructed VR images to the VR users over an LTE-U downlink. Therefore, resource allocation in such a UAV-enabled LTE-U network must jointly consider the uplink and downlink over both licensed and unlicensed bands. In such a VR setting, the UAVs can dynamically adjust the quality and format of each VR image to change its data size and thus meet the delay requirement. Therefore, resource allocation must also take into account the image quality and format. This VR-centric resource allocation problem is formulated as a noncooperative game that enables a joint allocation of licensed and unlicensed spectrum bands, as well as a dynamic adaptation of VR image quality and format. To solve this game, a learning algorithm based on the machine learning tools of echo state networks (ESNs) with leaky integrator neurons is proposed. Unlike conventional ESN-based learning algorithms that are suitable for discrete-time systems, the proposed algorithm can dynamically adjust the update speed of the ESN's state and hence enable the UAVs to learn the continuous dynamics of their associated VR users. Simulation results show that the proposed algorithm achieves up to 14% and 27.1% gains in terms of the total VR QoE of all users compared to Q-learning using LTE-U and Q-learning using LTE, respectively.
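The leaky-integrator reservoir update that this class of ESN algorithm relies on can be sketched as follows; the reservoir size, leak rate, input dimension and weight scaling are illustrative assumptions, and the paper's actual output-weight training and game-theoretic wrapper are not reproduced here:

```python
import numpy as np

# Sketch of a leaky-integrator echo state network state update.
# The leak rate alpha controls how quickly the reservoir state tracks its
# input, which is what allows the update speed to be adjusted for
# continuous dynamics. All sizes and constants are assumed for illustration.

rng = np.random.default_rng(0)
n_in, n_res = 4, 200                         # input and reservoir dimensions (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # keep spectral radius below 1

def esn_step(x, u, alpha=0.3):
    """Leaky-integrator update: x <- (1 - alpha) * x + alpha * tanh(W_in u + W x)."""
    return (1.0 - alpha) * x + alpha * np.tanh(W_in @ u + W @ x)

x = np.zeros(n_res)
for u in rng.standard_normal((10, n_in)):    # drive the reservoir with dummy inputs
    x = esn_step(x, u)
```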