
    Hybrid Precoding in Cooperative Millimeter Wave Networks

    In this paper, we study the performance of cooperative millimeter wave (mmWave) networks with hybrid precoding architectures. Considering joint transmissions and a BS silence strategy, we propose hybrid precoding algorithms that minimize the sum power consumption of the base stations (BSs), for both fully- and partially-connected hybrid precoding (FHP and PHP, respectively) schemes, in single-carrier and orthogonal frequency-division multiplexing systems. We reformulate the analog precoding part as an equal-gain transmission problem, which depends only on the channel information, and the digital precoding part as a relaxed convex semidefinite program subject to per-user quality-of-service constraints, which gives the optimal sum power consumption in terms of the BS silence strategy. To reduce the complexity of the hybrid precoding algorithm with the optimal BS silence strategy, we propose a sub-optimal hybrid precoding algorithm that iteratively puts BSs with small power into silent mode. The simulation results show that, depending on the parameter settings, the power consumption of the PHP may be dominated by the RF transmit power, and it may result in larger power consumption than the FHP. For the cases with 2 BSs and 4 users, implementing the FHP and the PHP in cooperative networks reduces the required RF transmit power, compared to a non-cooperative network, by 71% and 56%, respectively.
    Comment: 16 pages, 10 figures, accepted to IEEE Transactions on Wireless Communications
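The equal-gain reformulation of the analog stage can be illustrated with a minimal numpy sketch: the analog precoder's unit-modulus entries are matched to the channel phases, so it depends only on channel information. The array size and the i.i.d. Rayleigh channel below are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

n_ant = 16  # antennas at the BS (assumed for illustration)

# Narrowband channel from the BS array to one user (i.i.d. Rayleigh, assumed).
h = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)

# Equal-gain analog precoder: every entry has the same magnitude, and its
# phase is matched to the channel, so only channel information is needed.
f_analog = np.exp(1j * np.angle(h)) / np.sqrt(n_ant)

# Coherent phase alignment: the effective gain equals the sum of the
# channel magnitudes, normalized by sqrt(n_ant).
gain = np.abs(h.conj() @ f_analog)
print(round(float(gain), 4))
```

The digital stage would then be obtained by a convex solver (e.g. a relaxed SDP with per-user QoS constraints) for these fixed analog precoders; that step is omitted in this sketch.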

    Coverage Analysis and Cooperative Hybrid Precoding for 5G Cellular Networks

    5G innovations have been made in both network deployment and transceiver architectures in order to increase coverage, energy efficiency, and spectrum efficiency. Future base stations (BSs) are expected to be densely deployed in places such as walls and lamp posts and to cover a smaller area than current macro-BS systems. Using the large spectrum available at millimeter-wave (mmWave) frequency bands and highly directional beamforming with large antenna arrays, 5G will bring gigabit-per-second data rates and low-latency communications and enable many novel services, such as high-speed mmWave wireless interconnections between devices and vehicular communications. Moreover, mmWave communication systems will be based on novel hybrid beamforming architectures, which reduce hardware power consumption and cost. Thus, for a better understanding of 5G performance and limitations, one of the main goals of this thesis is to analyze new models that give tractable performance metrics for dense small-BS networks. Another goal is to study mmWave hybrid beamforming schemes that enable joint transmissions in multi-cell multi-user systems. In the thesis, we show the advantages of small cells in increasing the coverage probability and reducing path loss and shadowing, and we show the value of cooperation in terms of power consumption and outage. In [Paper A] we derive analytical expressions for the successful reception probability of the equal-gain combining receiver in a network where interfering transmitters are distributed according to a Poisson point process and interfering signals are spatially correlated. The results show that the spatial correlation reduces the successful reception probability and that the effect of the spatial correlation increases with the number of antennas. [Paper B] then studies the performance of a partial zero-forcing receiver.
The results are simulated in an environment with blockages and are analyzed under both Rayleigh and Rician channels. The coverage probability is shown to be maximized when a subset of the antennas' degrees of freedom is used for useful-signal enhancement and the remaining degrees of freedom are used for canceling the interference from the strongest interferers. Finally, in [Paper C], we propose a hybrid beamforming scheme that minimizes the total power consumption of a multi-cell multi-user network, subject to per-user quality-of-service constraints. The proposed scheme is based on decoupling the analog and digital precoding: the analog precoders depend only on the local channel state information at each BS, and the digital precoders are then obtained by solving a relaxed convex optimization for the given analog precoders. Simulation results show that the proposed algorithm leads to almost the same RF transmit power as fully digital precoding, while saving considerable hardware power due to the reduced number of RF chains and digital-to-analog converters.
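The Poisson-field setting of [Paper A] can be illustrated with a toy Monte Carlo estimate of the coverage probability under Rayleigh fading. All densities, distances, and the path-loss exponent below are assumed for illustration; the snippet does not reproduce the thesis's closed-form analysis and ignores the spatial correlation and combining schemes studied there.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 1e-4          # interferer density per m^2 (assumed)
radius = 500.0      # simulation disc radius in metres (assumed)
alpha = 3.5         # path-loss exponent (assumed)
d0 = 50.0           # serving-link distance in metres (assumed)
theta = 1.0         # SIR threshold (assumed)
trials = 2000

covered = 0
area = np.pi * radius**2
for _ in range(trials):
    # Draw a Poisson number of interferers, uniformly placed in the disc.
    n = rng.poisson(lam * area)
    r = radius * np.sqrt(rng.random(n))
    # Rayleigh fading gives exponentially distributed power gains.
    fading = rng.exponential(size=n)
    interference = np.sum(fading * r ** (-alpha)) if n else 0.0
    signal = rng.exponential() * d0 ** (-alpha)
    covered += signal > theta * interference

p_cov = covered / trials
print(round(p_cov, 3))
```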

    A Tutorial on Beam Management for 3GPP NR at mmWave Frequencies

    The millimeter wave (mmWave) frequencies offer huge bandwidths that can provide unprecedented data rates to next-generation cellular mobile terminals. However, mmWave links are highly susceptible to rapid channel variations and suffer from severe free-space path loss and atmospheric absorption. To address these challenges, the base stations and the mobile terminals will use highly directional antennas to achieve a sufficient link budget in wide-area networks. The consequence is the need for precise alignment of the transmitter and receiver beams, an operation which may increase the latency of establishing a link and has important implications for control-layer procedures such as initial access, handover, and beam tracking. This tutorial provides an overview of recently proposed measurement techniques for beam and mobility management in mmWave cellular networks and gives insights into the design of accurate, reactive, and robust control schemes suitable for a 3GPP NR cellular network. We illustrate that the best strategy depends on the specific environment in which the nodes are deployed, and we give guidelines to inform the optimal choice as a function of the system parameters.
    Comment: 22 pages, 19 figures, 10 tables, published in IEEE Communications Surveys and Tutorials. Please cite it as M. Giordani, M. Polese, A. Roy, D. Castor and M. Zorzi, "A Tutorial on Beam Management for 3GPP NR at mmWave Frequencies," in IEEE Communications Surveys & Tutorials, vol. 21, no. 1, pp. 173-196, First quarter 201
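The beam-alignment step the tutorial surveys can be sketched as an exhaustive sweep over a DFT codebook, the simplest of the measurement strategies: the receiver measures the power on every candidate beam and reports the strongest. The array size, codebook, and single-path channel below are simplifying assumptions, not NR parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

n_ant = 8     # ULA size at the BS (assumed)
n_beams = 8   # DFT codebook size (assumed)

# DFT beam codebook: column k steers towards spatial frequency k / n_beams.
codebook = np.exp(-2j * np.pi * np.outer(np.arange(n_ant),
                                         np.arange(n_beams)) / n_beams)
codebook /= np.sqrt(n_ant)

# Single-path channel towards a random angle of departure (simplified
# mmWave model with one dominant path).
aod = rng.uniform(-np.pi / 2, np.pi / 2)
h = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(aod))

# Exhaustive beam sweep: measure power on every beam, pick the strongest.
powers = np.abs(h.conj() @ codebook) ** 2
best = int(np.argmax(powers))
print(best, round(float(powers[best]), 3))
```

Hierarchical or context-aware schemes discussed in the tutorial reduce the number of measurements this sweep requires, at the cost of extra control logic.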

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next-generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as the massive growth in coverage, capacity, and traffic, a better quality of service and experience for users, ultra-high data rates and reliability, and ultra-low latency. However, characteristics of mmWave such as short transmission distance, high sensitivity to blockage, and large propagation path loss pose challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next-generation cellular network will be more complex, and a more complex network brings more complex problems. The plethora of possibilities makes planning and managing such a network system more difficult. Specifically, to provide better Quality of Service and Quality of Experience for users in such a network, efficient and effective handover for mobile users is important. The probability of triggering a handover will increase significantly in the next-generation network due to dense small-cell deployment, and since the resources at a base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel; however, due to the characteristics of mmWave and the complexity of the environment, a LOS channel is not always available. Non-line-of-sight channels should therefore be explored and used as backup links to serve users. With the problems becoming complex and nonlinear and data traffic dramatically increasing, conventional methods are no longer effective or efficient.
In this case, solving these problems in the most efficient manner becomes important, and new concepts as well as novel technologies need to be explored. One promising solution is the use of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, which allows the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, the complexity and workload can be reduced while the huge number of devices and amount of data are efficiently managed. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management.
First, a procedure based on a deep neural network is proposed to predict the angle of arrival (AOA) and angle of departure (AOD), in both azimuth and elevation, in NLOS mmWave communications. Along with the AOA and AOD prediction, a trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray-tracing technology and used to generate data, from which two deep neural networks (DNNs) are trained to predict the AOA/AOD in azimuth (AAOA/AAOD) and in elevation (EAOA/EAOD). Furthermore, under the assumption that the user equipment (UE) mobility and precise location are unknown, the UE trajectory is predicted and fed into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and we conclude that the DNN is a promising tool for predicting the AOA and AOD in NLOS scenarios.
Second, a novel handover scheme, called O-MAPPO, is designed to optimize the overall system throughput and the total system delay while guaranteeing the quality of service (QoS) of each UE. The proposed scheme integrates reinforcement learning (RL) and optimization theory: an RL algorithm known as multi-agent proximal policy optimization (MAPPO) determines the handover trigger conditions, and an optimization problem formulated in conjunction with MAPPO selects the target base station and the serving beam. The scheme evaluates and optimizes the total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made.
Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied at each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the QoS. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately: the mobility of UEs causes dynamic changes in the network environment, the MADDPG algorithm learns from these changes, and the beam management at each SCBS is optimized according to the reward or penalty received when serving different UEs. The approach can improve the overall system throughput and delay performance compared with traditional beam management methods.
The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network and provide specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam-tracking part, simulation results show that the prediction errors of the AOA and AOD can be maintained within an acceptable range of ±2.
Further, for the handover optimization part, the numerical results show that the system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, deep deterministic policy gradient (DDPG) and deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence of MADDPG and its superiority in improving the system throughput compared with other typical RL algorithms and the traditional beam management method.
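As a much-simplified stand-in for RL-driven cell selection, the sketch below uses an epsilon-greedy bandit that learns which candidate BS offers the best average rate from noisy observations. The thesis's O-MAPPO scheme is far richer (multi-agent, QoS-constrained, with an optimization stage); the rates and noise level here are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean achievable rate of each candidate BS (assumed, unknown to the agent).
true_rates = np.array([1.0, 2.5, 1.8])
q = np.zeros(3)        # running rate estimate per BS
counts = np.zeros(3)   # number of times each BS was selected
eps = 0.1              # exploration probability

for step in range(2000):
    # Explore with probability eps, otherwise attach to the best-known BS.
    bs = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(q))
    reward = true_rates[bs] + rng.normal(scale=0.2)  # noisy observed rate
    counts[bs] += 1
    q[bs] += (reward - q[bs]) / counts[bs]           # incremental mean

print(int(np.argmax(q)))  # index of the BS the agent settles on
```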

    On the Intersection of Communication and Machine Learning

    The intersection of communication and machine learning is attracting increasing interest from both communities. On the one hand, the development of modern communication systems brings large amounts of data and high performance requirements, which challenge the classic analytical-derivation-based study philosophy and encourage researchers to explore data-driven methods, such as machine learning, to solve problems of high complexity and large scale. On the other hand, the use of distributed machine learning introduces the communication cost as one of the basic considerations in the design of machine learning algorithms and systems. In this thesis, we first explore the application of machine learning to one of the classic problems in wireless networks, resource allocation, for heterogeneous millimeter wave networks in highly dynamic environments; we address practical concerns by providing an efficient online and distributed framework. In the second part, sampling-based communication-efficient distributed learning algorithms are proposed; we exploit the trade-off between local computation and total communication cost and propose algorithms with good theoretical bounds. In more detail, this thesis makes the following contributions:
    - We introduce a reinforcement learning framework to solve resource allocation problems in heterogeneous millimeter wave networks. The large state/action space is decomposed according to the topology of the network and handled by an efficient distributed message-passing algorithm. We further speed up the inference process with an online updating procedure.
    - We propose a distributed coreset-based boosting framework. An efficient coreset construction algorithm is designed based on the prior knowledge provided by clustering, and the coreset is then integrated with boosting with an improved convergence rate. We extend the proposed boosting framework to the distributed setting, where the communication cost is reduced by the good approximation provided by the coreset.
    - We propose a selective sampling framework to construct a subset of samples that effectively represents the model space. Based on the prior distribution of the model space, or a large number of samples from the model space, we derive a computationally efficient method to construct such a subset by minimizing the error of classifying a classifier.
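The clustering-based coreset idea, representing many points by a few weighted ones, can be sketched as follows. This toy uses plain k-means centres weighted by cluster sizes and omits the boosting integration and approximation guarantees of the thesis; the data and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: 1000 points in 2-D with some cluster structure (assumed).
X = rng.standard_normal((1000, 2)) + rng.integers(0, 5, size=(1000, 1))

k = 5
# A few Lloyd iterations of k-means from random initial centres.
centres = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centres[j] = X[labels == j].mean(axis=0)

# The coreset: k weighted points, where each weight is the cluster size.
weights = np.bincount(labels, minlength=k).astype(float)

# A weighted coreset of cluster means preserves the first-order statistics
# of the full data set exactly.
full_mean = X.mean(axis=0)
coreset_mean = (weights[:, None] * centres).sum(axis=0) / weights.sum()
print(np.round(np.abs(full_mean - coreset_mean), 6))
```

Downstream learners can then be trained on the k weighted points instead of shipping all 1000 samples, which is the source of the communication savings in the distributed setting.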

    Study, Measurements and Characterisation of a 5G system using a Mobile Network Operator Testbed

    The goals for 5G are aggressive. It promises to deliver an enhanced end-user experience by offering new applications and services through gigabit speeds and significantly improved performance and reliability. The enhanced mobile broadband (eMBB) 5G use case, for instance, targets peak data rates as high as 20 Gbps in the downlink (DL) and 10 Gbps in the uplink (UL). While there are different ways to improve data rates, spectrum is at the core of enabling higher mobile broadband data rates. 5G New Radio (NR) specifies new frequency bands below 6 GHz and also extends into mmWave frequencies, where more contiguous bandwidth is available for transmitting large amounts of data. However, at mmWave frequencies, signals are more susceptible to impairments, so extra consideration is needed to determine test approaches that provide the precision required to accurately evaluate 5G components and devices. Therefore, the aim of this thesis is to provide a deep dive into 5G technology, explore its testing and validation, and thereafter present the OTE (Hellenic Telecommunications Organisation) 5G testbed, including the measurement results obtained and its characterisation based on key performance indicators (KPIs).
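The eMBB peak-rate target gives a quick back-of-envelope sense of why mmWave spectrum matters: at an assumed peak spectral efficiency (the 25 bit/s/Hz below is an illustrative figure, not a 3GPP number), reaching the 20 Gbps downlink target requires hundreds of megahertz of contiguous bandwidth.

```python
# Bandwidth needed to reach the eMBB downlink peak-rate target at an
# assumed peak spectral efficiency (illustrative value, not from 3GPP).
peak_rate = 20e9        # bit/s, eMBB DL target from the text
spectral_eff = 25.0     # bit/s/Hz (assumed)

bandwidth_hz = peak_rate / spectral_eff
print(bandwidth_hz / 1e6)  # required bandwidth in MHz
```

Contiguous allocations of that size are only available at mmWave frequencies, which is the motivation for the NR mmWave bands discussed above.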