    Partially Blind Handovers for mmWave New Radio Aided by Sub-6 GHz LTE Signaling

    For a base station that supports cellular communications in both sub-6 GHz LTE and millimeter wave (mmWave) bands, we propose a supervised machine learning algorithm that improves the handover success rate between the two bands using prior sub-6 GHz and mmWave channel measurements within a temporal window. The main contributions of our paper are to 1) introduce partially blind handovers, 2) employ machine learning to predict handover success from sub-6 GHz to mmWave frequencies, and 3) show that this machine-learning-based algorithm, combined with partially blind handovers, improves the handover success rate in a realistic network setup of colocated cells. Simulation results show improved handover success rates for our proposed algorithm compared to standard handover algorithms.
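    The sketch below illustrates the kind of supervised predictor this abstract describes: a classifier that maps a temporal window of sub-6 GHz and mmWave channel measurements to a handover success/failure label. The window length, feature layout, gradient-boosted model, and synthetic labels are all illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of windowed-measurement handover success prediction.
# All feature/label choices here are assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

WINDOW = 8          # number of past measurement instants (assumed)
N_SAMPLES = 2000    # synthetic training-set size

# Each row: WINDOW sub-6 GHz RSRP values followed by WINDOW mmWave RSRP values (dBm).
X = np.hstack([
    rng.normal(-85, 6, size=(N_SAMPLES, WINDOW)),   # sub-6 GHz history
    rng.normal(-95, 10, size=(N_SAMPLES, WINDOW)),  # mmWave history
])
# Toy label: the handover "succeeds" when recent mmWave RSRP clears a threshold.
y = (X[:, WINDOW:].mean(axis=1) > -95).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")

# At run time the base station would evaluate clf.predict_proba on the live
# measurement window and trigger the sub-6 GHz -> mmWave handover only when
# the predicted success probability is high ("partially blind" otherwise).
```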

    Smart Pattern V2I Handover Based on Machine Learning Vehicle Classification

    Millimeter wave (mmWave) frequencies will be widely used in future vehicular communications. At these frequencies, the radio channel becomes much more vulnerable to slight changes in the environment, such as device motion, reflections, or blockage. In high-mobility vehicular communications, rapidly changing vehicle environments and the large overhead of frequent beam training are the critical obstacles to deploying these systems at mmWave frequencies. Hence, smart beam management procedures are needed to establish and maintain the radio channels. In this thesis, we propose that using the positions and respective velocities of vehicles for dynamic beam-pair selection, and then adapting to changing environments with machine learning algorithms, can improve both network performance and communication stability in high-mobility vehicular communications.
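    As a rough illustration of the position/velocity-aided selection this thesis motivates, the sketch below predicts the vehicle's position over the beam-switch latency and picks the closest beam from a codebook. The uniform codebook, the constant-velocity motion model, and the latency value are illustrative assumptions.

```python
# Minimal sketch of position/velocity-aided beam selection (assumed setup).
import numpy as np

N_BEAMS = 64                                   # assumed codebook size
beam_angles = np.linspace(-np.pi, np.pi, N_BEAMS, endpoint=False)

def select_beam(bs_xy, veh_xy, veh_vel, latency_s=0.01):
    """Return the codebook index pointing at the vehicle's predicted position."""
    predicted = np.asarray(veh_xy) + latency_s * np.asarray(veh_vel)
    bearing = np.arctan2(predicted[1] - bs_xy[1], predicted[0] - bs_xy[0])
    # Wrap angle differences to (-pi, pi] before picking the nearest beam.
    return int(np.argmin(np.abs(np.angle(np.exp(1j * (beam_angles - bearing))))))

# Example: vehicle 40 m east of the roadside unit, moving north at 30 m/s.
print(select_beam(bs_xy=(0.0, 0.0), veh_xy=(40.0, 0.0), veh_vel=(0.0, 30.0)))
```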

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next-generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as addressing the massive growth required in coverage, capacity, and traffic; providing a better quality of service and experience to users; supporting ultra-high data rates and reliability; and ensuring ultra-low latency. However, due to the characteristics of mmWave, such as short transmission distance, high sensitivity to blockage, and large propagation path loss, mmWave cellular network design faces several challenges. In this context, to enjoy the benefits of mmWave networks, the architecture of the next-generation cellular network will be more complex, and a more complex network brings more complex problems; the plethora of possibilities makes planning and managing such a network system more difficult. Specifically, to provide better quality of service and quality of experience for users in such a network, efficient and effective handover for mobile users is essential. The probability of triggering a handover will increase significantly in the next-generation network due to dense small-cell deployment, and since the resources in each base station (BS) are limited, handover management becomes a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel; however, due to the characteristics of mmWave and the complexity of the environment, the LOS channel is not always available, so non-line-of-sight (NLOS) channels should be explored and used as backup links to serve users. With all these problems tending to be complex and nonlinear, and with data traffic increasing dramatically, conventional methods are no longer effective or efficient, so solving these problems efficiently becomes important, and new concepts as well as novel technologies need to be explored. Among them, one promising solution is the use of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, allowing the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, the complexity and workload can be reduced, while the huge numbers of devices and volumes of data can be managed efficiently. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management.

    To be specific, first, a procedure is proposed to predict the angle of arrival (AOA) and angle of departure (AOD), in both azimuth and elevation, in non-line-of-sight mmWave communications based on a deep neural network. Moreover, along with the AOA and AOD prediction, a trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray-tracing technology to generate data. Based on the generated data, two deep neural networks (DNNs) predict AOA/AOD in azimuth (AAOA/AAOD) and AOA/AOD in elevation (EAOA/EAOD).
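    A minimal PyTorch sketch of one of the two angle-regression networks described above, assuming the input features are 3-D UE positions; the layer sizes, the loss, and the placeholder training data are illustrative, since the thesis trains on ray-tracing samples.

```python
# Sketch of an azimuth AOA/AOD regressor (assumed architecture and features).
import torch
import torch.nn as nn

class AzimuthAngleNet(nn.Module):
    def __init__(self, in_dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # [AAOA, AAOD] in degrees
        )

    def forward(self, x):
        return self.net(x)

model = AzimuthAngleNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch standing in for ray-tracing samples: positions -> angles.
pos = torch.randn(256, 3)
angles = torch.randn(256, 2) * 60.0
for _ in range(100):                       # short illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(pos), angles)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.2f}")

# A twin network of the same shape would regress the elevation pair
# (EAOA/EAOD), matching the two-DNN split described above.
```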
    Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and fed into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and we conclude that the DNN is a promising tool for predicting AOA and AOD in NLOS scenarios.

    Second, a novel handover scheme is designed to optimize the overall system throughput and total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm with optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) determines the handover trigger conditions, and an optimization problem formulated in conjunction with MAPPO selects the target base station and determines beam selection. The aim is to evaluate and optimize the system performance in terms of total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made.

    Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied at each small-cell base station (SCBS) to maximize the system throughput while guaranteeing quality of service. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment, and the MADDPG algorithm learns from the experience of these changes; based on that, beam management at each SCBS is optimized according to the reward or penalty received when serving different UEs. This approach improves the overall system throughput and delay performance compared with traditional beam management methods.

    The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network and provide specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be kept within an acceptable range of ±2°. For the handover optimization part, numerical results show that system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, deep deterministic policy gradient (DDPG) and deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence of MADDPG and its superiority in improving system throughput compared with other typical RL algorithms and the traditional beam management method.
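    To make the handover-as-RL formulation above concrete, the following is a deliberately simplified single-agent REINFORCE skeleton, not the O-MAPPO scheme itself: the state (serving and strongest-neighbor signal levels), the action set {stay, hand over}, the reward (signal level minus a fixed handover cost), and the toy dynamics are all assumptions made for illustration.

```python
# Simplified sketch: handover triggering cast as an RL decision.
# Single-agent REINFORCE, not multi-agent PPO; all dynamics are toy values.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def step_env(state, action):
    """Toy environment: reward staying on the stronger cell, charge a
    fixed cost for each handover (assumed dynamics, for illustration)."""
    serving, neighbor = state
    if action == 1:                      # handover: swap serving/neighbor
        serving, neighbor = neighbor, serving
        reward = serving - 3.0           # fixed handover cost
    else:
        reward = serving
    drift = torch.randn(2) * 0.5         # channels evolve between decisions
    return torch.stack([serving, neighbor]) + drift, reward

for episode in range(200):
    state = torch.randn(2)
    log_probs, rewards = [], []
    for _ in range(20):                  # decision instants per episode
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, r = step_env(state, action.item())
        rewards.append(r)
    ret = torch.stack(rewards).sum()     # undiscounted episode return
    loss = -torch.stack(log_probs).sum() * ret.detach()
    opt.zero_grad()
    loss.backward()
    opt.step()
```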

    A Survey on Cellular-connected UAVs: Design Challenges, Enabling 5G/B5G Innovations, and Experimental Advancements

    As an emerging field of aerial robotics, Unmanned Aerial Vehicles (UAVs) have gained significant research interest within the wireless networking research community. As soon as national legislation allows UAVs to fly autonomously, we will see swarms of UAVs populating the sky of our smart cities to accomplish different missions: parcel delivery, infrastructure monitoring, event filming, surveillance, tracking, etc. The UAV ecosystem can benefit from existing 5G/B5G cellular networks, which can be exploited in different ways to enhance UAV communications. Because of the inherent characteristics of UAVs, namely flexible mobility in 3D space, autonomous operation, and intelligent placement, these smart devices cater to a wide range of wireless applications and use cases. This work presents an in-depth exploration of integration synergies between 5G/B5G cellular systems and UAV technology, where the UAV is integrated as a new aerial User Equipment (UE) into existing cellular networks. In this integration, the UAVs perform the role of flying users within cellular coverage, and thus they are termed cellular-connected UAVs (a.k.a. UAV-UE, drone-UE, 5G-connected drone, or aerial user). The main focus of this work is an extensive study of integration challenges along with key 5G/B5G technological innovations and ongoing efforts in design prototyping and field trials corroborating cellular-connected UAVs. This study highlights recent progress with respect to 3GPP standardization and emphasizes socio-economic concerns that must be accounted for before successful adoption of this promising technology. Various open problems paving the path to future research opportunities are also discussed.