65 research outputs found

    Interference Management by Harnessing Multi-Domain Resources in Spectrum-Sharing Aided Satellite-Ground Integrated Networks

    Full text link
    A spectrum-sharing satellite-ground integrated network is conceived, consisting of a pair of non-geostationary orbit (NGSO) constellations and multiple terrestrial base stations, which impose co-frequency interference (CFI) on each other. The CFI may intensify as the number of satellites grows. To manage the potentially severe interference, we propose joint multi-domain resource aided interference management (JMDR-IM). Specifically, the coverage overlap of the constellations considered is analyzed. Then, multi-domain resources - including both the beam domain and the power domain - are jointly utilized for managing the CFI in an overlapping coverage region. This joint resource utilization relies on our specifically designed beam-shut-off and switching based beam scheduling, as well as on long short-term memory based joint autoregressive moving average assisted deep Q-network aided power scheduling. Moreover, the outage probability (OP) of the proposed JMDR-IM scheme is derived, and an asymptotic analysis of the OP is also provided. Our performance evaluations demonstrate the superiority of the proposed JMDR-IM scheme in terms of its increased throughput and reduced OP. Comment: Submitted to IEEE Transactions on Vehicular Technology, under review
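    The abstract above derives the OP analytically; as a rough illustration of what that metric captures, the sketch below estimates P(SINR < threshold) by Monte Carlo for a single desired satellite link disturbed by one co-frequency interferer under Rayleigh fading. All parameter values and the one-interferer channel model are illustrative assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's system model)
n_trials = 1_000_000
snr_db = 10.0          # mean SNR of the desired satellite link
inr_db = 3.0           # mean interference-to-noise ratio of the CFI
sinr_th_db = 0.0       # outage threshold

snr = 10 ** (snr_db / 10)
inr = 10 ** (inr_db / 10)
sinr_th = 10 ** (sinr_th_db / 10)

# Rayleigh fading -> exponentially distributed channel power gains
g_desired = rng.exponential(1.0, n_trials)
g_interf = rng.exponential(1.0, n_trials)

sinr = (snr * g_desired) / (inr * g_interf + 1.0)  # noise power normalized to 1
op = np.mean(sinr < sinr_th)
print(f"Estimated outage probability: {op:.4f}")
```

    Under these toy assumptions the estimate can be sanity-checked against the closed form 1 - exp(-θ/SNR)·SNR/(SNR + θ·INR) for threshold θ, which follows from averaging over the two exponential gains.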

    UAV-Assisted Space-Air-Ground Integrated Networks: A Technical Review of Recent Learning Algorithms

    Full text link
    Recent technological advancements in space, air, and ground components have made possible a new network paradigm called the "space-air-ground integrated network" (SAGIN). Unmanned aerial vehicles (UAVs) play a key role in SAGINs. However, owing to UAVs' high dynamics and complexity, real-world deployment remains a major barrier to realizing SAGINs. Compared to the space and terrestrial components, UAVs are expected to meet performance requirements with high flexibility and dynamics using limited resources. Therefore, employing UAVs in various usage scenarios requires well-designed planning in algorithmic approaches. In this paper, we provide a comprehensive review of recent learning-based algorithmic approaches. We consider possible reward functions and discuss the state-of-the-art algorithms for optimizing them, including Q-learning, deep Q-learning, multi-armed bandit (MAB), particle swarm optimization (PSO) and satisfaction-based learning algorithms. Unlike other survey papers, we focus on the methodological perspective of the optimization problem, which is applicable to various UAV-assisted missions on a SAGIN using these algorithms. We simulate users and environments according to real-world scenarios and compare the learning-based and PSO-based methods in terms of throughput, load, fairness, computation time, etc. We also implement and evaluate the two-dimensional (2D) and three-dimensional (3D) variations of these algorithms to reflect different deployment cases. Our simulations suggest that the 3D satisfaction-based learning algorithm outperforms the other approaches on most metrics. We conclude by discussing open challenges; our findings aim to provide design guidelines for algorithm selection when optimizing the deployment of UAV-assisted SAGINs. Comment: Submitted to the IEEE Internet of Things Journal in June 202
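    Of the algorithms surveyed above, the MAB formulation is the most compact to sketch. Below is a minimal epsilon-greedy bandit in which each arm stands for a hypothetical candidate UAV placement and the reward is a noisy throughput observation; the arm count, reward model, and all numbers are placeholder assumptions rather than the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_arms = 5                                    # hypothetical candidate UAV placements
true_mean_tput = rng.uniform(10, 50, n_arms)  # unknown to the learner (Mbps, assumed)
eps, n_rounds = 0.1, 5000

counts = np.zeros(n_arms)
est_tput = np.zeros(n_arms)   # running mean reward per arm

for t in range(n_rounds):
    if rng.random() < eps:
        a = int(rng.integers(n_arms))           # explore
    else:
        a = int(np.argmax(est_tput))            # exploit
    r = rng.normal(true_mean_tput[a], 5.0)      # noisy throughput observation
    counts[a] += 1
    est_tput[a] += (r - est_tput[a]) / counts[a]  # incremental mean update

print("best placement found:", int(np.argmax(est_tput)),
      "true best:", int(np.argmax(true_mean_tput)))
```

    The incremental-mean update is the standard sample-average bandit rule; with a small fixed epsilon the agent keeps exploring, which suits the non-stationary environments the survey emphasizes.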

    Revolutionizing Future Connectivity: A Contemporary Survey on AI-empowered Satellite-based Non-Terrestrial Networks in 6G

    Full text link
    Non-Terrestrial Networks (NTN) are expected to be a critical component of 6th Generation (6G) networks, providing ubiquitous, continuous, and scalable services. Satellites emerge as the primary enabler for NTN, leveraging their extensive coverage, stable orbits, scalability, and adherence to international regulations. However, satellite-based NTN presents unique challenges, including long propagation delay, high Doppler shift, frequent handovers, spectrum sharing complexities, and intricate beam and resource allocation, among others. The integration of NTNs into existing terrestrial networks in 6G introduces a further range of novel challenges, including task offloading, network routing, and network slicing. To tackle these obstacles, this paper proposes Artificial Intelligence (AI) as a promising solution, harnessing its ability to capture intricate correlations among diverse network parameters. We begin by providing a comprehensive background on NTN and AI, highlighting the potential of AI techniques in addressing various NTN challenges. Next, we present an overview of existing works, emphasizing AI as an enabling tool for satellite-based NTN, and explore potential research directions. Furthermore, we discuss ongoing research efforts that aim to enable AI in satellite-based NTN through software-defined implementations, while also discussing the associated challenges. Finally, we conclude by providing insights and recommendations for enabling AI-driven satellite-based NTN in future 6G networks. Comment: 40 pages, 19 figures, 10 tables, Survey
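    To make the "high Doppler shift" challenge concrete, the following sketch computes the worst-case Doppler for a LEO satellite from the standard relation f_d = (v/c)·f_c, with the orbital velocity obtained from v = sqrt(mu/(R_E + h)). The 550 km altitude and 20 GHz carrier are illustrative assumptions, not values from the survey.

```python
import math

# Illustrative LEO parameters (assumptions)
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3        # mean Earth radius, m
altitude = 550e3         # LEO altitude, m
f_c = 20e9               # Ka-band carrier frequency, Hz
c = 299_792_458.0        # speed of light, m/s

v = math.sqrt(MU / (R_EARTH + altitude))   # circular orbital velocity
f_d_max = (v / c) * f_c                    # worst case: satellite moving along line of sight

print(f"orbital velocity: {v/1e3:.2f} km/s")
print(f"max Doppler shift at {f_c/1e9:.0f} GHz: {f_d_max/1e3:.0f} kHz")
```

    At these assumed values the shift is on the order of 500 kHz, several orders of magnitude beyond what terrestrial cellular synchronization loops are designed for, which is why the survey singles it out.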

    Five Facets of 6G: Research Challenges and Opportunities

    Full text link
    Whilst fifth-generation (5G) systems are being rolled out across the globe, researchers have turned their attention to the exploration of radical next-generation solutions. At this early evolutionary stage we survey five main research facets of this field, namely Facet 1: next-generation architectures, spectrum and services; Facet 2: next-generation networking; Facet 3: the Internet of Things (IoT); Facet 4: wireless positioning and sensing; as well as Facet 5: applications of deep learning in 6G networks. In this paper, we provide a critical appraisal of the literature on promising techniques, ranging from the associated architectures and networking to applications and designs. We portray a plethora of heterogeneous architectures relying on cooperative hybrid networks supported by diverse access and transmission mechanisms. The vulnerabilities of these techniques are also addressed and carefully considered, highlighting the most promising future research directions. Additionally, we list a rich suite of learning-driven optimization techniques. We conclude by observing the evolutionary paradigm shift that has taken place from pure single-component bandwidth-efficiency, power-efficiency or delay optimization towards multi-component designs, as exemplified by the twin-component ultra-reliable low-latency mode of the 5G system. We advocate a further evolutionary step towards multi-component Pareto optimization, which requires exploring the entire Pareto front of all optimal solutions, where none of the components of the objective function may be improved without degrading at least one of the other components.
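    The closing notion of a Pareto front can be made concrete in a few lines: given candidate designs scored on several objectives (higher taken as better), keep exactly those not dominated by any other design. The sketch below uses random placeholder scores and a hypothetical pareto_front helper; it illustrates the concept only, not the survey's methodology.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder designs scored on (bandwidth efficiency, power efficiency, -delay);
# higher is better in every column.
scores = rng.random((200, 3))

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Indices of non-dominated points (maximization in all objectives)."""
    keep = []
    for i, p in enumerate(points):
        # q dominates p iff q >= p everywhere and q > p somewhere
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(scores)
print(f"{len(front)} of {len(scores)} designs are Pareto-optimal")
```

    Every index returned corresponds to a design that cannot improve in one objective without losing in another, which is precisely the multi-component optimality the paper advocates exploring in full.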

    Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey

    Full text link
    The ongoing amalgamation of unmanned aerial vehicle (UAV) and machine learning (ML) techniques is creating significant synergy and empowering UAVs with unprecedented intelligence and autonomy. This survey aims to provide a timely and comprehensive overview of the ML techniques used in UAV operations and communications and to identify potential growth areas and research gaps. We emphasise the four key components of UAV operations and communications to which ML can significantly contribute, namely, perception and feature extraction, feature interpretation and regeneration, trajectory and mission planning, and aerodynamic control and operation. We classify the latest popular ML tools based on their applications to the four components and conduct gap analyses. This survey also takes a step forward by pointing out significant challenges in the upcoming realm of ML-aided automated UAV operations and communications. It is revealed that different ML techniques dominate the applications to the four key modules of UAV operations and communications. While there is an increasing trend towards cross-module designs, little effort has been devoted to an end-to-end ML framework spanning perception and feature extraction through to aerodynamic control and operation. It is also unveiled that the reliability and trustworthiness of ML in UAV operations and applications require significant attention before full automation of UAVs and potential cooperation between UAVs and humans come to fruition. Comment: 36 pages, 304 references, 19 figures

    Distributed 3D-Beam Reforming for Hovering-Tolerant UAVs Communication over Coexistence: A Deep-Q Learning for Intelligent Space-Air-Ground Integrated Networks

    Full text link
    In this paper, we present a novel distributed UAV beam reforming approach to dynamically form and reform a space-selective beam path, addressing coexistence with satellite and terrestrial communications. Despite the unique advantage of supporting wider coverage in UAV-enabled cellular communications, the challenges reside in the array response's sensitivity to the random rotational motion and hovering nature of the UAVs. A model-free reinforcement learning (RL) based unified UAV beam selection and tracking approach is presented to effectively realize dynamic, distributed, and collaborative beamforming. The combined impact of the UAVs' hovering and rotational motions is considered while addressing the impairment due to interference from orbiting satellites and neighboring networks. The main objectives of this work are two-fold: first, to acquire channel awareness so as to uncover its impairments; second, to overcome beam distortion so as to meet the quality-of-service (QoS) requirements. To overcome the impact of the interference and maximize the beamforming gain, we define and apply a new optimal UAV selection algorithm based on a brute-force criterion. Results demonstrate that the detrimental effects of channel fading and of the interference from orbiting satellites and neighboring networks can be overcome using the proposed approach. Subsequently, an RL algorithm based on a Deep Q-Network (DQN) is developed for real-time beam tracking. By augmenting the system with the impairments due to hovering and rotational motion, we show that the proposed DQN algorithm can reform the beam in real time with negligible error and attains an exceptional performance improvement, requiring only a few iterations to fine-tune its parameters without encountering plateaus, irrespective of the hovering tolerance.
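    As a toy illustration of the brute-force selection step described above, the sketch below exhaustively scores every k-of-n UAV subset by an idealized coherent-combining gain (the squared sum of channel amplitudes, i.e., perfect phase alignment). The Rayleigh channel draw, subset size, and the beamforming_gain helper are assumptions for illustration only, not the paper's model.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

n_uavs, k = 8, 4   # choose k of n_uavs for the distributed array (assumed sizes)
# Complex channel coefficients from each UAV to the ground user (placeholder Rayleigh model)
h = (rng.normal(size=n_uavs) + 1j * rng.normal(size=n_uavs)) / np.sqrt(2)

def beamforming_gain(subset):
    # Idealized distributed beamforming: with perfect phase alignment the
    # coherent-combining gain is (sum of channel amplitudes)^2.
    return np.sum(np.abs(h[list(subset)])) ** 2

# Brute force: evaluate every subset and keep the best one.
best = max(combinations(range(n_uavs), k), key=beamforming_gain)
print("selected UAVs:", best, f"gain: {beamforming_gain(best):.2f}")
```

    Exhaustive search scales combinatorially, which is why the paper pairs it with a learned DQN tracker for the real-time part of the problem.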

    Application of NOMA in 6G Networks: Future Vision and Research Opportunities for Next Generation Multiple Access

    Full text link
    As a prominent member of the next generation multiple access (NGMA) family, non-orthogonal multiple access (NOMA) has been recognized as a promising multiple access candidate for sixth-generation (6G) networks. This article focuses on applying NOMA in 6G networks, with an emphasis on proposing the so-called "One Basic Principle plus Four New" concept. Starting with the basic NOMA principle, the importance of successive interference cancellation (SIC) becomes evident. In particular, the advantages and drawbacks of both channel state information based SIC and quality-of-service based SIC are discussed. Then, the application of NOMA to meet the new 6G performance requirements, especially massive connectivity, is explored. Furthermore, the integration of NOMA with new physical layer techniques is considered, followed by new application scenarios for NOMA towards 6G. Finally, the application of machine learning in NOMA networks is investigated, ushering in the machine learning empowered NGMA era. Comment: 14 pages, 5 figures, 1 table
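    The SIC principle mentioned above is easy to illustrate for the uplink: the base station decodes users in descending received-power order, treating not-yet-decoded users as interference and subtracting each decoded signal before moving on. The sketch below computes the resulting per-user rates; the three received-power values are arbitrary assumptions.

```python
import numpy as np

# Uplink NOMA toy example: the base station applies SIC, decoding users in
# descending received-power order and cancelling each decoded signal.
# Received powers and noise are illustrative assumptions (linear scale).
p_rx = np.array([8.0, 3.0, 1.0])   # received powers of three users
noise = 1.0

order = np.argsort(p_rx)[::-1]     # strongest user decoded first
remaining = p_rx.sum()
for u in order:
    interference = remaining - p_rx[u]           # not-yet-decoded users
    sinr = p_rx[u] / (interference + noise)
    rate = np.log2(1 + sinr)
    print(f"user {u}: SINR={sinr:.2f}, rate={rate:.2f} bit/s/Hz")
    remaining -= p_rx[u]                         # cancel the decoded signal
```

    Note how the last-decoded (weakest) user sees no residual NOMA interference at all, which is the essential rate benefit SIC provides over treating all overlapping signals as noise.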

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Full text link
    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures

    Multi-Drone-Cell 3D Trajectory Planning and Resource Allocation for Drone-Assisted Radio Access Networks

    Get PDF
    Equipped with communication modules, drones can perform as drone-cells (DCs) that provide on-demand communication services to users in various scenarios, such as traffic monitoring, Internet of things (IoT) data collection, and temporary communication provisioning. As aerial relay nodes between terrestrial users and base stations (BSs), DCs are leveraged to extend wireless connections to uncovered users of radio access networks (RANs), forming the drone-assisted RAN (DA-RAN). In DA-RAN, communication coverage, quality-of-service (QoS) performance, and deployment flexibility can be improved thanks to the line-of-sight DC-to-ground (D2G) wireless links and the dynamic deployment capabilities of DCs. Considering the special mobility pattern, channel model, energy consumption, and other features of DCs, it is essential yet challenging to design the flying trajectories and resource allocation schemes for DA-RAN. Specifically, given the emerging D2G communication models and the dynamic deployment capability of DCs, new DC deployment strategies are required. Moreover, to exploit the fully controlled mobility of DCs and promote user fairness, the flying trajectories of DCs and the D2G communications must be jointly optimized. Further, to serve high-mobility users (e.g., vehicular users) whose mobility patterns are hard to model, both the trajectory planning and resource allocation schemes for DA-RAN should be redesigned to adapt to variations of terrestrial traffic.

    To address these challenges, this thesis proposes a DA-RAN architecture in which multiple DCs relay data between BSs and terrestrial users. Based on theoretical analyses of D2G communication, DC energy consumption, and DC mobility features, the deployment, trajectory planning, and communication resource allocation of multiple DCs are jointly investigated for both quasi-static and high-mobility users.

    First, we analyze the communication coverage, drone-to-BS (D2B) backhaul link quality, and optimal flying height of the DC according to state-of-the-art drone-to-user (D2U) and D2B channel models. We then formulate the multi-DC three-dimensional (3D) deployment problem with the objective of maximizing the ratio of effectively covered users while guaranteeing D2B link quality. To solve the problem, a per-drone iterated particle swarm optimization (DI-PSO) algorithm is proposed, which avoids the large particle search space and the high constraint-violation probability of a pure PSO based algorithm. Simulations show that the DI-PSO algorithm achieves a higher coverage ratio with lower complexity than the pure PSO based algorithm.

    Secondly, to improve overall network performance and the fairness between edge and central users, we design 3D trajectories for multiple DCs in DA-RAN. The multi-DC 3D trajectory planning and scheduling is formulated as a mixed-integer non-linear programming (MINLP) problem with the objective of maximizing the average D2U throughput. To address the non-convexity and NP-hardness of the MINLP problem caused by the 3D trajectory, we first decouple it into multiple integer linear programming and quasi-convex sub-problems, in which the user association, D2U communication scheduling, horizontal trajectories, and flying heights of DCs are respectively optimized. Then, we design a multi-DC 3D trajectory planning and scheduling algorithm to solve the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation scheme and a search-based start slot scheduling scheme are also designed to improve network performance and to control mutual interference between DCs, respectively. Compared with static DC deployment, the proposed trajectory planning scheme achieves a much lower average value and standard deviation of D2U path loss, indicating improvements in network throughput and user fairness.

    Thirdly, considering the highly dynamic and uncertain environment created by high-mobility users, we propose a hierarchical deep reinforcement learning (DRL) based multi-DC trajectory planning and resource allocation (HDRLTPRA) scheme for high-mobility users. The objective is to maximize the accumulative network throughput while satisfying user fairness, DC power consumption, and DC-to-ground link quality constraints. To address the high uncertainty of the environment, we decouple the multi-DC TPRA problem into two hierarchical sub-problems: a higher-level global trajectory planning sub-problem and a lower-level local TPRA sub-problem. The global trajectory planning sub-problem addresses trajectory planning for multiple DCs in the RAN over a long time period; to solve it, we propose a multi-agent DRL based global trajectory planning (MARL-GTP) algorithm in which the non-stationary state space caused by the multi-DC environment is handled by the multi-agent fingerprint technique. Based on the global trajectory planning results, the local TPRA (LTPRA) sub-problem is then investigated independently for each DC to control its movement and transmit power allocation according to real-time user traffic variations, and a deep deterministic policy gradient based LTPRA (DDPG-LTPRA) algorithm is proposed to solve it. With the two algorithms addressing the sub-problems at different decision granularities, the multi-DC TPRA problem is resolved by the HDRLTPRA scheme. Simulation results show that the proposed HDRLTPRA scheme achieves a 40% network throughput improvement over a non-learning-based TPRA scheme.

    In summary, this thesis has investigated multi-DC 3D deployment, trajectory planning, and communication resource allocation in DA-RAN under different user mobility patterns. The proposed schemes and theoretical results should provide useful guidelines for future research on DC trajectory planning and resource allocation, as well as for the real deployment of DCs in complex environments with diversified users.
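    The k-means-based initial trajectory generation is not detailed in the abstract above; a minimal sketch of that idea follows: cluster user positions with plain Lloyd's k-means, take the centroids as initial waypoints, and order them into a rough tour by greedy nearest neighbour. The area size, user count, waypoint count, and tour heuristic are all assumptions for illustration, not the thesis's scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

users = rng.uniform(0, 1000, (120, 2))   # user positions in a 1 km x 1 km area (assumed)
k = 6                                    # number of initial waypoints (assumed)

# Plain Lloyd's k-means: centroids of user clusters become initial waypoints.
centroids = users[rng.choice(len(users), size=k, replace=False)]
for _ in range(50):
    dists = np.linalg.norm(users[:, None] - centroids[None], axis=2)
    labels = np.argmin(dists, axis=1)
    new = np.array([users[labels == j].mean(axis=0) if np.any(labels == j)
                    else centroids[j] for j in range(k)])
    if np.allclose(new, centroids):
        break
    centroids = new

# Order waypoints into a rough tour by greedy nearest neighbour from waypoint 0.
tour, left = [0], set(range(1, k))
while left:
    last = centroids[tour[-1]]
    nxt = min(left, key=lambda j: np.linalg.norm(centroids[j] - last))
    tour.append(nxt)
    left.remove(nxt)

print("initial waypoint order:", tour)
```

    Such an initialization only seeds the BCD iterations described above; the joint optimization then refines the horizontal trajectories and flying heights.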

    Unmanned Aerial Vehicle-Enabled Mobile Edge Computing for 5G and Beyond

    Get PDF
    The technological evolution of fifth generation (5G) and beyond wireless networks not only enables the ubiquitous connectivity of massive numbers of user equipments (UEs), i.e., smartphones, laptops, and tablets, but also boosts the development of various emerging applications, such as smart navigation, augmented reality (AR), virtual reality (VR), and online gaming. However, due to the limited battery capacity and computational capability (e.g., central processing unit (CPU), storage, and memory) of UEs, running these computationally intensive applications is challenging for UEs in terms of latency and energy consumption. In order to realize the targets of 5G, such as higher data rate and reliability, lower latency, and reduced energy consumption, mobile edge computing (MEC) and unmanned aerial vehicles (UAVs) have been developed as key 5G technologies. Essentially, the combination of MEC and UAVs is becoming increasingly important in current communication systems. Since the MEC server is deployed at the network edge, more applications can benefit from task offloading, which saves energy and reduces round-trip latency. Additionally, UAVs in 5G and beyond networks can play various roles, such as relaying, data collection, delivery, and simultaneous wireless information and power transfer (SWIPT), which can flexibly enhance the QoS of customers and reduce the network load. In this regard, the main objective of this thesis is to investigate UAV-enabled MEC systems and propose novel artificial intelligence (AI) based algorithms for optimizing challenging variables such as the computation resources, the offloading strategy (user association), and the UAVs' trajectories. To this end, several existing research challenges in UAV-enabled MEC are tackled by the AI or deep reinforcement learning (DRL) based approaches proposed in this thesis.

    First, a multi-UAV-enabled MEC (UAVE) system is studied, where several UAVs are deployed as a flying MEC platform to provide computing resources to ground UEs. In this context, the user association between multiple UEs and UAVs and the resource allocation from UAVs to UEs are optimized by the proposed reinforcement learning based user association and resource allocation (RLAA) algorithm, which builds on the well-known Q-learning method and aims to minimize the overall energy consumption of UEs. Note that in Q-learning, a Q-table is maintained to store the values of all state-action pairs and is updated until convergence is obtained. The proposed RLAA algorithm is shown to achieve optimal performance compared with exhaustive search in small-scale cases, and to have considerable performance gains over typical algorithms in large-scale cases.

    Then, to tackle more complicated problems in UAV-enabled MEC systems, we first propose a convex optimization based trajectory control algorithm (CAT), which jointly optimizes the user association, resource allocation, and trajectories of UAVs in an iterative way, aiming to minimize the overall energy consumption of UEs. Considering the dynamics of the communication environment, we further propose a deep reinforcement learning based trajectory control algorithm (RAT), which deploys deep neural network (DNN) and reinforcement learning (RL) techniques. Precisely, we apply a DNN to optimize the UAV trajectories in a continuous manner and optimize the user association and resource allocation with a matching algorithm, which performs more stably during training. The simulation results show that the proposed CAT and RAT algorithms both achieve considerable performance and outperform traditional benchmarks.

    Next, geographical fairness in UAV-enabled MEC systems is considered. To make the DRL based approaches more practical and easier to implement in the real world, we further consider a multi-agent reinforcement learning system. To this end, a multi-agent deep reinforcement learning based trajectory control algorithm (MAT) is proposed to optimize the UAV trajectories, in which each UAV is instructed by its own dedicated agent. The experimental results show that it has considerable performance benefits over traditional algorithms and can flexibly adapt to changes in the environment.

    Finally, the integration of UAVs in emergency situations is studied, where a UAV is deployed to support ground UEs for emergency communications. A deep Q-network (DQN) based algorithm is proposed to optimize the UAV trajectory and the power control of each UE, while considering the number of UEs served, fairness, and the overall uplink data rate. The numerical simulations demonstrate that the proposed DQN based algorithm outperforms the existing benchmark algorithms.
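    The RLAA algorithm above is described as Q-table based; the heavily simplified sketch below captures that flavour: a single agent associates UEs with UAVs one by one, receives the negative serving energy as reward, and updates a tabular Q-function. The state/action encoding and the random energy matrix are placeholder assumptions, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(5)

n_ues, n_uavs = 10, 3
# Placeholder energy cost of serving UE i via UAV a (unknown to the agent)
energy = rng.uniform(1.0, 5.0, (n_ues, n_uavs))

Q = np.zeros((n_ues, n_uavs))            # Q-table: state = UE index, action = UAV
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(2000):
    for s in range(n_ues):               # associate UEs one by one
        a = int(rng.integers(n_uavs)) if rng.random() < eps else int(np.argmax(Q[s]))
        r = -energy[s, a] + rng.normal(0, 0.1)   # noisy reward: negative energy cost
        s_next = (s + 1) % n_ues
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

assoc = Q.argmax(axis=1)
print("learned association:", assoc)
print("greedy-optimal:     ", energy.argmin(axis=1))
```

    The Q-table here has n_ues x n_uavs entries; the thesis motivates the later DNN-based (RAT, MAT, DQN) algorithms precisely because such tables stop scaling once trajectories and continuous variables enter the state space.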