
    A survey of machine learning applications to handover management in 5G and beyond

    Handover (HO) is one of the key aspects of next-generation (NG) cellular communication networks that needs to be properly managed, since it poses multiple threats to quality of service (QoS), such as a reduction in average throughput as well as service interruptions. With the introduction of new enablers for fifth-generation (5G) networks, such as millimetre-wave (mm-wave) communications, network densification, and the Internet of things (IoT), HO management is expected to become more challenging, as the number of base stations (BSs) per unit area and the number of connections have been rising dramatically. Considering the stringent requirements newly released in the 5G standards, the level of the challenge is multiplied. To this end, intelligent HO management schemes have been proposed and tested in the literature, paving the way for tackling these challenges more efficiently and effectively. In this survey, we aim at revealing the current status of cellular networks and discussing mobility and HO management in 5G alongside the general characteristics of 5G networks. We provide an extensive tutorial on HO management in 5G networks, accompanied by a discussion on machine learning (ML) applications to HO management. A novel taxonomy in terms of the source of the data used to train ML algorithms is produced, with two broad categories considered: visual data and network data. The state of the art on ML-aided HO management in cellular networks under each category is extensively reviewed alongside the most recent studies, and the challenges as well as future research directions are detailed.

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating in connecting things among themselves as well as to humans. In the era of the Internet of things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey on existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, as well as non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum, are presented. Aligned with the main key performance indicators of 5G and beyond-5G networks, we investigate solutions and standards that enable the energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies like artificial intelligence, non-terrestrial networks, and new spectra is elaborated. Finally, future research directions toward beyond-5G IoT networks are pointed out.

    Unmanned aerial vehicle communications for civil applications: a review

    The use of drones, formally known as unmanned aerial vehicles (UAVs), has significantly increased across a variety of applications over the past few years. This is due to the rapid advancement in the design and production of inexpensive and dependable UAVs and the growing demand for such platforms, particularly in civil applications. With their intrinsic attributes such as high mobility, rapid deployment, and flexible altitude, UAVs have the potential to be utilized in many wireless system applications. On the one hand, UAVs are able to operate as flying mobile terminals within wireless/cellular networks to support a variety of missions such as goods delivery, search and rescue, precision agriculture monitoring, and remote sensing. On the other hand, UAVs can be utilized as aerial base stations to increase the coverage, reliability, and capacity of wireless systems without additional investment in wireless infrastructure. The aim of this article is to review the current applications of UAVs for civil and commercial purposes. The focus of this paper is on the challenges and communication requirements associated with UAV-based communication systems. This article initially classifies UAVs in terms of various parameters, some of which can impact UAVs’ communication performance. It then provides an overview of aerial networking and specifically investigates UAV routing protocols, which are considered one of the key challenges in UAV communication. This article later investigates the use of UAV networks in a variety of civil applications and considers the many challenges and communication demands of these applications. Subsequently, different types of simulation platforms are investigated from a communication and networking viewpoint. Finally, it identifies areas of future research.

    Unmanned Aerial Vehicle (UAV)-Enabled Wireless Communications and Networking

    The emerging massive density of human-held and machine-type nodes implies larger traffic deviations in the future than we are facing today. In the future, the network will be characterized by a high degree of flexibility, allowing it to adapt smoothly, autonomously, and efficiently to quickly changing traffic demands in both time and space. This flexibility cannot be achieved while the network’s infrastructure remains static. To this end, the topic of UAV (unmanned aerial vehicle)-enabled wireless communications and networking has received increased attention. As mentioned above, the network must serve a massive density of nodes that can be either human-held (user devices) or machine-type nodes (sensors). If we wish to properly serve these nodes and optimize their data, a proper wireless connection is fundamental. This can be achieved by using UAV-enabled communications and networks. This Special Issue addresses the many issues that still exist before UAV-enabled wireless communications and networking can be properly rolled out.

    Improved handover decision scheme for 5G mm-wave communication: optimum base station selection using machine learning approach.

    A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Information and Communication Science and Engineering of the Nelson Mandela African Institution of Science and Technology. The rapid growth in mobile and wireless devices has led to an exponential demand for data traffic and exacerbated the burden on conventional wireless networks. Fifth-generation (5G) and beyond networks are expected to not only accommodate this growth in data demand but also provide additional services beyond the capability of existing wireless networks, while maintaining a high quality of experience (QoE) for users. The need for a several-orders-of-magnitude increase in system capacity has necessitated the use of millimetre-wave (mm-wave) frequencies as well as the proliferation of low-power small cells overlaying the existing macro-cell layer. These approaches offer a potential increase in throughput of several gigabits per second and a reduction in transmission latency, but they also present new challenges. For example, mm-wave frequencies have higher propagation losses and a limited coverage area, thereby escalating mobility challenges such as more frequent handovers (HOs). In addition, the advent of low-power small cells with smaller footprints also causes signal fluctuations across the network, resulting in repeated HOs (ping-pong) from one small cell (SC) to another. Therefore, efficient HO management is critical in future cellular networks, since frequent HOs pose multiple threats to the quality of service (QoS), such as a reduction in system throughput as well as service interruptions, which result in a poor QoE for the user. However, HO management is a significant challenge in 5G networks due to the use of mm-wave frequencies, which have much smaller footprints.
To address these challenges, this work investigates the HO performance of 5G mm-wave networks and proposes a novel method for achieving seamless user mobility in dense networks. The proposed model is based on a double deep reinforcement learning (DDRL) algorithm. To test the performance of the model, a comparative study was made between the proposed approach and benchmark solutions, including a benchmark developed as part of this thesis. The evaluation metrics considered include system throughput, execution time, ping-pong, and the scalability of the solutions. The results reveal that the developed DDRL-based solution vastly outperforms not only conventional methods but also other machine-learning-based benchmark techniques. The main contribution of this thesis is to provide an intelligent framework for mobility management in the connected state (i.e., HO management) in 5G. Though primarily developed for mm-wave links between UEs and BSs in ultra-dense heterogeneous networks (UDHNs), the proposed framework can also be applied to sub-6 GHz frequencies.

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next-generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as addressing the massive growth required in coverage, capacity, and traffic, providing a better quality of service and experience to users, supporting ultra-high data rates and reliability, and ensuring ultra-low latency. However, due to the characteristics of mmWave, such as short transmission distance, high sensitivity to blockage, and large propagation path loss, there are challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next-generation cellular network will be more complex, and with a more complex network come more complex problems. The plethora of possibilities makes planning and managing a complex network system more difficult. Specifically, to provide better quality of service and quality of experience for users in such a network, providing efficient and effective handover for mobile users is important. The probability of a handover being triggered will significantly increase in the next-generation network due to dense small-cell deployment. Since the resources in the base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for the users, the line-of-sight (LOS) channel would be the main transmission channel. However, due to the characteristics of mmWave and the complexity of the environment, the LOS channel is not always available. Non-line-of-sight (NLOS) channels should be explored and used as backup links to serve the users. With all of these problems trending toward being complex and nonlinear, and data traffic dramatically increasing, conventional methods are no longer effective and efficient.
In this case, how to solve these problems in the most efficient manner becomes important, and some new concepts, as well as novel technologies, need to be explored. Among them, one promising solution is the utilization of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, allowing the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, the complexity and workload can be reduced, while the huge number of devices and the data they generate can be efficiently managed. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management. To be specific, first, a procedure is proposed to predict the angle of arrival (AOA) and angle of departure (AOD), in both azimuth and elevation, in non-line-of-sight mmWave communications based on a deep neural network. Moreover, alongside the AOA and AOD prediction, trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray-tracing technology and used to generate data. Based on the generated data, two deep neural networks (DNNs) predict the AOA/AOD in azimuth (AAOA/AAOD) and the AOA/AOD in elevation (EAOA/EAOD). Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and input into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and the results indicate that a DNN is a promising tool for predicting the AOA and AOD in an NLOS scenario.
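The DNN angle-regression step can be caricatured with a minimal two-layer network trained by gradient descent on synthetic position-to-angle pairs. This is a toy sketch only: the thesis trains on ray-traced channel data with its own architecture, whereas here the input features, network size, and the azimuth/elevation targets are all invented for illustration.

```python
import numpy as np

# Synthetic stand-in data: map a 2-D UE "position" to two angle-like
# targets.  A real pipeline would use ray-traced AOA/AOD labels.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(256, 2))
Y = np.stack([np.arctan2(X[:, 1], X[:, 0]),          # "azimuth" target
              0.5 * np.hypot(X[:, 0], X[:, 1])], 1)  # "elevation" proxy

# One hidden layer of 32 tanh units, linear output head.
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses, lr = [], 0.05
for epoch in range(300):
    h, pred = forward(X)
    err = pred - Y
    losses.append(float(np.mean(err ** 2)))      # MSE on the angle pair
    # Manual backprop through the two layers (full-batch gradient descent).
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)             # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The point is only the shape of the pipeline (features in, angle estimates out, MSE loss); swapping in ray-traced training data and a deeper network yields the kind of AAOA/AAOD and EAOA/EAOD predictors the abstract describes.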
Second, a novel handover scheme is designed, aiming to optimize the overall system throughput and the total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm and optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) determines the handover trigger conditions. Further, an optimization problem is formulated in conjunction with MAPPO to select the target base station and determine beam selection; it evaluates and optimizes the system performance in terms of total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied on each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the quality of service. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment, and the MADDPG algorithm learns from the experience of these changes. Based on that, the beam management in the SCBS is optimized according to the reward or penalty received when serving different UEs. The approach improves the overall system throughput and delay performance compared with traditional beam management methods. The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network, and they provide specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be maintained within an acceptable range of ±2.
Further, for the handover optimization part, the numerical results show that the system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, deep deterministic policy gradient (DDPG) and deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence of MADDPG and its superiority in improving the system throughput compared with other typical RL algorithms and the traditional beam management method.
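The "select the target BS and beam after the handover trigger" step described above can be sketched as a brute-force search over (BS, beam) pairs under a per-UE QoS floor. The rate table below is random placeholder data, not the thesis's system model, and since each UE here picks independently, maximizing per-UE rate also maximizes the sum rate; a real formulation would couple the UEs through interference and BS resource limits.

```python
import numpy as np

# Hypothetical sizes and achievable rates (placeholders, not from the thesis).
rng = np.random.default_rng(2)
N_BS, N_BEAM, N_UE = 4, 8, 3
rate = rng.uniform(0.1, 3.0, size=(N_UE, N_BS, N_BEAM))  # rate per (UE, BS, beam)
QOS_FLOOR = 0.5                                          # minimum per-UE rate

def select(rate, floor):
    """For each UE, pick the (BS, beam) pair with the highest achievable
    rate among those meeting the QoS floor; return choices and sum rate."""
    choices, total = [], 0.0
    for u in range(rate.shape[0]):
        feasible = rate[u] >= floor              # QoS constraint mask
        masked = np.where(feasible, rate[u], -np.inf)
        bs, beam = np.unravel_index(int(np.argmax(masked)), masked.shape)
        choices.append((int(bs), int(beam)))
        total += float(rate[u, bs, beam])
    return choices, total

choices, total = select(rate, QOS_FLOOR)
```

In an O-MAPPO-style scheme, the RL policy would decide *when* to run this selection, while the optimization step itself would be a properly constrained problem rather than this exhaustive toy search.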

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning, and optimization of emerging 5G networks. Designing, dimensioning, and optimizing communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit rate and service time, as well as the required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, lower-layer network design, network management and security issues, and new technologies in general, which are discussed in this book.