
    A Markovian model for satellite integrated cognitive and D2D HetNets

    Next-generation wireless systems are expected to deliver bandwidth-hungry services in a cost-efficient and ubiquitous manner. D2D communications, spectrum sharing and heterogeneous network architectures (HetNets) are touted as crucial enablers for attaining these goals. Moreover, the shift of network traffic towards content consumption necessitates content-centric architectures and protocols. In this work, we propose a comprehensive analytical model for a content-oriented heterogeneous wireless network with cognitive capability. We model our HetNet architecture with a Continuous Time Markov Chain (CTMC) and characterize the trade-off between energy efficiency and system goodput. We elaborate on the novel elements of our model, namely the integration of the universal source concept (modeling content retrieval from external networks), caching, and overlaying in D2D mode. In addition, our investigation of network mode selection provides further insight into how resource allocation and performance are intertwined.
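The CTMC machinery this abstract refers to can be illustrated with a minimal sketch. The three-state connectivity model and every rate below are invented placeholders, not the model from the paper:

```python
import numpy as np

# Hypothetical 3-state connectivity model (idle, cellular, D2D); the rates
# below are illustrative placeholders, not values from the paper.
LAMBDA_C, LAMBDA_D = 0.6, 0.3   # request arrival rates into each mode
MU_C, MU_D = 1.0, 0.8           # service (completion) rates

# CTMC generator matrix Q: off-diagonals are transition rates,
# each diagonal entry makes its row sum to zero.
Q = np.array([
    [-(LAMBDA_C + LAMBDA_D), LAMBDA_C, LAMBDA_D],  # idle -> cellular / D2D
    [MU_C, -MU_C, 0.0],                            # cellular -> idle
    [MU_D, 0.0, -MU_D],                            # D2D -> idle
])

def stationary_distribution(Q: np.ndarray) -> np.ndarray:
    """Solve pi @ Q = 0 with sum(pi) = 1 via a least-squares system."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalization row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary_distribution(Q)   # long-run fraction of time in each mode
```

The stationary distribution gives the long-run fraction of time spent in each mode, from which mode-dependent quantities such as energy consumption and goodput could then be weighted.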

    Connectivity mode management for user devices in heterogeneous D2D networks

    The Device-to-Device (D2D) paradigm is one of the key enablers for boosting the energy efficiency (EE) of networks. To realize EE gains in multi-mode heterogeneous D2D networks, resource allocation must be managed accordingly. We therefore study resource management in heterogeneous D2D networks comprising a LEO satellite and a cellular base station (BS), with multi-mode user devices (satellite, BS and D2D connectivity modes) dispersed in the terrestrial cell. We formulate the EE optimization of mode selection, constrained by a designated total network service capacity threshold. We also implement a sub-optimal pattern search algorithm (PSA) that aims to enhance EE while keeping total system service capacity at an acceptable level, and compare it to alternative resource management schemes. In our simulations, PSA achieves the best EE among the compared schemes while still meeting the goodput threshold. Additionally, we discuss the multi-mode nature of our heterogeneous network from the EE and goodput perspectives.
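A compass-style pattern search of the kind this abstract describes can be sketched as follows. The per-mode rates, power costs, user count and goodput threshold are all assumed toy values, not the paper's parameters:

```python
import numpy as np

# Toy per-mode characteristics (satellite, BS, D2D); illustrative numbers only.
RATE  = np.array([2.0, 10.0, 6.0])   # goodput per user in each mode (Mbps)
POWER = np.array([5.0, 8.0, 1.5])    # power cost per user in each mode (W)
GOODPUT_MIN = 150.0                  # assumed service capacity threshold (Mbps)

def goodput(x):
    """Total goodput for assignment x (users per mode)."""
    return float(RATE @ x)

def energy_efficiency(x):
    """Goodput per watt; -inf marks assignments violating the threshold."""
    if goodput(x) < GOODPUT_MIN:
        return -np.inf
    return goodput(x) / float(POWER @ x)

def pattern_search(x0, step=8, min_step=1):
    """Compass-style pattern search over integer mode assignments."""
    x, best = x0.copy(), energy_efficiency(x0)
    while step >= min_step:
        improved = False
        for i in range(3):              # try moving `step` users mode i -> j
            for j in range(3):
                if i == j or x[i] < step:
                    continue
                cand = x.copy()
                cand[i] -= step
                cand[j] += step
                ee = energy_efficiency(cand)
                if ee > best:
                    x, best, improved = cand, ee, True
        if not improved:
            step //= 2                  # shrink the pattern when stuck
    return x, best

x0 = np.array([10, 10, 10])             # start: users split evenly across modes
x_opt, ee_opt = pattern_search(x0)
```

The search only ever accepts moves that keep goodput above the threshold, so the returned assignment is feasible by construction; shrinking the step mimics the mesh refinement of standard pattern search.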

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential for supporting a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios in future wireless networks. (46 pages, 22 figures)
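As a concrete instance of the reinforcement-learning family surveyed here, a tabular Q-learning agent for a toy transmit-mode selection task might look like the following. The states, actions, rewards and transition probabilities are all invented for illustration, not drawn from the survey:

```python
import random

random.seed(0)

# Toy MDP: state = channel quality (0 bad, 1 good); action = transmit mode
# (0 cellular, 1 D2D). Rewards and transitions are invented placeholders.
REWARD = {(0, 0): 1.0, (0, 1): 0.2, (1, 0): 1.5, (1, 1): 3.0}

def step(state, action):
    reward = REWARD[(state, action)]
    next_state = 1 if random.random() < 0.7 else 0   # channel evolves randomly
    return next_state, reward

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

state = 0
for _ in range(20000):
    # epsilon-greedy exploration
    if random.random() < EPS:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    nxt, r = step(state, action)
    # Q-learning temporal-difference update
    target = r + GAMMA * max(Q[(nxt, 0)], Q[(nxt, 1)])
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    state = nxt

# Learned greedy policy: prefer D2D only when the channel is good
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
```

Even this tiny example exhibits the ingredients the survey catalogues: an exploration/exploitation trade-off, a temporal-difference update, and a learned state-dependent policy.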

    Software-defined Networking enabled Resource Management and Security Provisioning in 5G Heterogeneous Networks

    Due to the explosive growth of mobile data traffic and the shortage of spectral resources, 5G networks are envisioned to have a densified heterogeneous network (HetNet) architecture, combining multiple radio access technologies (multi-RATs) into a single holistic network. The co-existence of multi-tier architectures brings new challenges, especially for resource management and security provisioning, due to the lack of a common interface and consistent policy across HetNets. In this thesis, we aim to address the technical challenges of data traffic management, coordinated spectrum sharing and security provisioning in 5G HetNets through a programmable management platform based on software-defined networking (SDN). To address the spectrum shortage problem in cellular networks, cellular data traffic is efficiently offloaded to the Wi-Fi network, and the quality of service of user applications is guaranteed with the proposed delay-tolerance-based partial data offloading algorithm. A two-layered information collection scheme is also applied to support load-balancing decision-making. Numerical results show that the proposed schemes exploit an SDN controller's global view of the HetNets and take optimized resource allocation decisions. To support growing vehicle-generated data traffic in 5G vehicular ad hoc networks (VANETs), an SDN-enabled adaptive vehicle clustering algorithm is proposed based on real-time road traffic conditions collected from the HetNet infrastructure. Traffic offloading is achieved within each cluster, and dynamic beamformed transmission is applied to improve trunk-link communication quality. To further achieve coordinated spectrum sharing across HetNets, an SDN-enabled orchestrated spectrum sharing scheme is proposed that integrates participating HetNets into an amalgamated network through a common configuration interface and real-time information exchange.
In order to effectively protect incumbent users, a real-time 3D interference map is developed to guide spectrum access based on the SDN global view. MATLAB simulations confirm that both the average interference at incumbents and the average number of denied access requests are reduced. Moreover, to reconcile the stringent latency requirements of 5G with the potential delay induced by frequent authentications in 5G small cells and HetNets, an SDN-enabled fast authentication scheme is proposed in this thesis to simplify authentication handover, through sharing of user-dependent secure context information (SCI) among related access points. The proposed SCI is a weighted combination of user-specific attributes, which provides a unique fingerprint of the specific device without additional hardware or computation cost. Numerical results show that the proposed non-cryptographic authentication scheme achieves security comparable to traditional cryptographic algorithms, while reducing authentication complexity and latency, especially when network load is high.
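The SCI idea, a weighted combination of user-specific attributes acting as a device fingerprint, can be sketched as follows. The attribute names, weights and acceptance threshold are illustrative assumptions, not the thesis's actual design:

```python
import hashlib

# Hypothetical attribute set and weights; the thesis describes SCI as a
# weighted combination of user-specific attributes, but these concrete
# fields, weights and threshold are invented for illustration.
WEIGHTS = {"imsi_hash": 0.5, "device_model": 0.2, "radio_fingerprint": 0.3}
MATCH_THRESHOLD = 0.8   # assumed acceptance threshold

def _digest(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

def build_sci(attributes: dict) -> dict:
    """Store hashed attributes so raw identifiers never leave the AP."""
    return {k: _digest(v) for k, v in attributes.items()}

def match_score(stored_sci: dict, presented: dict) -> float:
    """Weighted fraction of attributes matching the stored context."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        if stored_sci.get(attr) == _digest(presented.get(attr, "")):
            score += weight
    return score

profile = build_sci({"imsi_hash": "user-42", "device_model": "modelX",
                     "radio_fingerprint": "rf-sig-A"})
# Same device but a changed radio signature: partial match only.
probe = {"imsi_hash": "user-42", "device_model": "modelX",
         "radio_fingerprint": "rf-sig-B"}
score = match_score(profile, probe)
accepted = score >= MATCH_THRESHOLD
```

During handover, a target access point holding the shared SCI could compute this score instead of running a full cryptographic handshake, which is where the latency saving would come from.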

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in emerging 5G network design, dimensioning and optimization. Designing, dimensioning and optimizing communication network resources and services have long been an inseparable part of telecom network development. These networks must convey a large volume of traffic, serving traffic streams with highly differentiated requirements in terms of bit-rate, service time, and required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, all of which are discussed in this book.

    Resource Management and Backhaul Routing in Millimeter-Wave IAB Networks Using Deep Reinforcement Learning

    Thesis (PhD (Electronic Engineering)), University of Pretoria, 2023. The increased densification of wireless networks has led to the development of integrated access and backhaul (IAB) networks. In this thesis, deep reinforcement learning was applied to solve resource management and backhaul routing problems in millimeter-wave IAB networks. First, a resource management solution that aims to avoid congestion for access users in an IAB network was proposed and implemented. The proposed solution applies deep reinforcement learning to learn an optimized policy that achieves effective resource allocation while minimizing congestion and satisfying user requirements. In addition, a deep reinforcement learning-based backhaul adaptation strategy that leverages a recursive discrete choice model was implemented in simulation. Simulation results comparing the proposed algorithms with two baseline methods showed that the proposed scheme provides better throughput and delay performance. (Sentech Chair in Broadband Wireless Multimedia Communications.)
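One way to encode the congestion-avoidance objective in a reinforcement-learning reward is sketched below, with invented numbers and a deliberately simplified form rather than the thesis's actual formulation:

```python
# Minimal reward-shaping sketch for congestion-aware resource allocation.
# The reward design and all parameters are illustrative assumptions only.

def reward(allocated: list, demand: list, capacity: float,
           congestion_penalty: float = 5.0) -> float:
    """Reward served traffic, penalize allocations that overload the link."""
    served = sum(min(a, d) for a, d in zip(allocated, demand))
    load = sum(allocated)
    over = max(0.0, load - capacity)   # amount by which the backhaul is overloaded
    return served - congestion_penalty * over

# Staying within capacity beats over-allocating, even though the
# over-allocation nominally "serves" more traffic.
r_ok   = reward([3.0, 4.0], demand=[5.0, 5.0], capacity=8.0)
r_over = reward([6.0, 6.0], demand=[5.0, 5.0], capacity=8.0)
```

A DRL agent trained against such a reward is pushed toward allocations that satisfy user demand without exceeding backhaul capacity, which is the congestion-avoidance behavior the thesis targets.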

    Multi-Drone-Cell 3D Trajectory Planning and Resource Allocation for Drone-Assisted Radio Access Networks

    Equipped with communication modules, drones can perform as drone-cells (DCs) that provide on-demand communication services to users in various scenarios, such as traffic monitoring, Internet of Things (IoT) data collection, and temporary communication provisioning. As aerial relay nodes between terrestrial users and base stations (BSs), DCs are leveraged to extend wireless connections to uncovered users of radio access networks (RAN), forming the drone-assisted RAN (DA-RAN). In DA-RAN, communication coverage, quality-of-service (QoS) performance and deployment flexibility can be improved thanks to the line-of-sight DC-to-ground (D2G) wireless links and the dynamic deployment capabilities of DCs. Considering the special mobility pattern, channel model, energy consumption, and other features of DCs, it is essential yet challenging to design the flying trajectories and resource allocation schemes for DA-RAN. Specifically, given the emerging D2G communication models and the dynamic deployment capability of DCs, new DC deployment strategies are required for DA-RAN. Moreover, to exploit the fully controlled mobility of DCs and promote user fairness, the flying trajectories of DCs and the D2G communications must be jointly optimized. Further, to serve high-mobility users (e.g., vehicular users) whose mobility patterns are hard to model, both the trajectory planning and resource allocation schemes for DA-RAN should be re-designed to adapt to variations in terrestrial traffic. To address the above challenges, in this thesis we propose a DA-RAN architecture in which multiple DCs are leveraged to relay data between BSs and terrestrial users. Based on theoretical analyses of D2G communication, DC energy consumption, and DC mobility features, the deployment, trajectory planning and communication resource allocation of multiple DCs are jointly investigated for both quasi-static and high-mobility users.
We first analyze the communication coverage, drone-to-BS (D2B) backhaul link quality, and optimal flying height of the DC according to state-of-the-art drone-to-user (D2U) and D2B channel models. We then formulate the multi-DC three-dimensional (3D) deployment problem with the objective of maximizing the ratio of effectively covered users while guaranteeing D2B link qualities. To solve the problem, a per-drone iterated particle swarm optimization (DI-PSO) algorithm is proposed, which avoids the large particle search space and the high constraint-violation probability of the pure PSO-based algorithm. Simulations show that the DI-PSO algorithm achieves a higher coverage ratio with lower complexity compared to the pure PSO-based algorithm. Secondly, to improve overall network performance and the fairness between edge and central users, we design 3D trajectories for multiple DCs in DA-RAN. The multi-DC 3D trajectory planning and scheduling is formulated as a mixed integer non-linear programming (MINLP) problem with the objective of maximizing the average D2U throughput. To address the non-convexity and NP-hardness of the MINLP problem due to the 3D trajectory, we first decouple the MINLP problem into multiple integer linear programming and quasi-convex sub-problems, in which user association, D2U communication scheduling, horizontal trajectories and flying heights of the DCs are respectively optimized. Then, we design a multi-DC 3D trajectory planning and scheduling algorithm that solves the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation scheme and a search-based start slot scheduling scheme are also designed to improve network performance and control mutual interference between DCs, respectively.
Compared with static DC deployment, the proposed trajectory planning scheme achieves a much lower average value and standard deviation of D2U pathloss, indicating improvements in network throughput and user fairness. Thirdly, considering the highly dynamic and uncertain environment composed of high-mobility users, we propose a hierarchical deep reinforcement learning (DRL) based multi-DC trajectory planning and resource allocation (HDRLTPRA) scheme for high-mobility users. The objective is to maximize the accumulative network throughput while satisfying user fairness, DC power consumption, and DC-to-ground link quality constraints. To address the high uncertainty of the environment, we decouple the multi-DC TPRA problem into two hierarchical sub-problems, i.e., the higher-level global trajectory planning sub-problem and the lower-level local TPRA sub-problem. First, the global trajectory planning sub-problem addresses trajectory planning for multiple DCs in the RAN over a long time period. To solve this sub-problem, we propose a multi-agent DRL based global trajectory planning (MARL-GTP) algorithm in which the non-stationary state space caused by the multi-DC environment is addressed by the multi-agent fingerprint technique. Second, based on the global trajectory planning results, the local TPRA (LTPRA) sub-problem is investigated independently for each DC to control movement and transmit power allocation based on real-time user traffic variations. A deep deterministic policy gradient based LTPRA (DDPG-LTPRA) algorithm is then proposed to solve the LTPRA sub-problem. With the two algorithms addressing the sub-problems at different decision granularities, the multi-DC TPRA problem can be resolved by the HDRLTPRA scheme. Simulation results show that the proposed HDRLTPRA scheme achieves a 40% network throughput improvement over the non-learning-based TPRA scheme.
In summary, we have investigated the multi-DC 3D deployment, trajectory planning and communication resource allocation in DA-RAN, considering different user mobility patterns, in this thesis. The proposed schemes and theoretical results should provide useful guidelines for future research on DC trajectory planning and resource allocation, as well as for the real deployment of DCs in complex environments with diversified users.
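The per-drone iterated PSO (DI-PSO) idea, optimizing one drone-cell's position at a time while the others stay fixed, can be sketched as follows. The user layout, coverage radius and PSO hyper-parameters are illustrative assumptions, not the thesis's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative scenario: 60 users in a 100 x 100 area; radius and swarm
# parameters are invented for this sketch.
USERS = rng.uniform(0, 100, size=(60, 2))   # terrestrial user positions
RADIUS = 18.0                               # assumed DC coverage radius
N_DCS, SWARM, ITERS = 3, 25, 60

def coverage(positions, users):
    """Number of users within RADIUS of at least one drone-cell."""
    d = np.linalg.norm(users[:, None, :] - positions[None, :, :], axis=2)
    return int((d.min(axis=1) <= RADIUS).sum())

def pso_place_one(fixed, users):
    """Place a single DC with a basic PSO, keeping the other DCs fixed."""
    x = rng.uniform(0, 100, size=(SWARM, 2))      # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pbest_f = np.array(
        [coverage(np.vstack([fixed, p[None]]), users) for p in x])
    g = pbest[pbest_f.argmax()].copy()            # global best position
    for _ in range(ITERS):
        r1, r2 = rng.random((SWARM, 1)), rng.random((SWARM, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 0, 100)
        f = np.array([coverage(np.vstack([fixed, p[None]]), users) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmax()].copy()
    return g

# Per-drone iteration: each DC searches only its own 2D position,
# which keeps the particle search space small.
placed = np.empty((0, 2))
for _ in range(N_DCS):
    placed = np.vstack([placed, pso_place_one(placed, USERS)])

covered = coverage(placed, USERS)
```

Because each swarm searches only one drone's coordinates, the search space stays two-dimensional per iteration, which is the complexity advantage DI-PSO claims over a joint PSO across all drone positions.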