Learning-Based Predictive Transmitter-Receiver Beam Alignment in Millimeter Wave Fixed Wireless Access Links
Millimeter wave (mmWave) fixed wireless access is a key enabler of 5G-and-beyond small cell network deployment, exploiting the abundant mmWave spectrum to provide Gbps backhaul and access links. Large antenna arrays and extremely directional beamforming are necessary to combat the mmWave path loss. However, narrow beams increase sensitivity to physical perturbations caused by environmental factors. To address this issue, in this paper we propose a predictive transmit-receive beam alignment process. We construct an explicit mapping between transmit (or receive) beams and physical coordinates via a Gaussian process, which can incorporate environmental uncertainty. To make full use of the underlying correlation between transmitter and receiver and of accumulated experience, we further construct a hierarchical Bayesian learning model and design an efficient beam prediction algorithm. To reduce dependency on physical position measurements, a reverse mapping that predicts physical coordinates from beam experience is also constructed. The designed algorithms offer two advantages. First, thanks to Bayesian learning, good performance can be achieved even in small-sample settings, with as few as 10 samples in our scenarios, which drastically reduces training time and is therefore very appealing for wireless communications. Second, in contrast to most existing algorithms that output only one beam in each time slot, the designed algorithms generate the most promising beam subset, which improves robustness to environmental uncertainty. Simulation results demonstrate the effectiveness and superiority of the designed algorithms against the state of the art.
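As a rough illustration of the mapping idea only (not the paper's hierarchical Bayesian model), the sketch below fits a one-dimensional Gaussian process from receiver position to the best-serving beam angle and then reports a subset of codebook beams around the posterior mean rather than a single beam. All positions, angles, the kernel length scale, and the codebook are made-up values for the sketch.

```python
import numpy as np

def rbf(a, b, ls=2.0):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

# Hypothetical training data: 1-D receiver positions (meters) and the
# beam angle (degrees) that maximized SNR at each position.
pos = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
ang = np.array([10.0, 14.0, 19.0, 25.0, 32.0, 40.0])

noise = 1e-2
mean_ang = ang.mean()
K = rbf(pos, pos) + noise * np.eye(len(pos))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, ang - mean_ang))

def predict(x):
    """GP posterior mean and std of the best beam angle at position x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    ks = rbf(x, pos)
    mu = ks @ alpha + mean_ang
    v = np.linalg.solve(L, ks.T)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 0.0, None)
    return mu, np.sqrt(var)

# Instead of a single beam, report the subset of codebook beams within
# roughly two posterior standard deviations of the predicted angle.
codebook = np.arange(0.0, 61.0, 3.0)   # hypothetical 3-degree-spaced beams
mu, sd = predict(5.0)
subset = codebook[np.abs(codebook - mu[0]) <= 2.0 * sd[0] + 3.0]
```

Reporting a subset rather than the argmax beam is what gives the robustness the abstract mentions: if the environment has shifted slightly, one of the neighboring beams in the subset is still likely to serve the link.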
Beam Drift in Millimeter Wave Links: Beamwidth Tradeoffs and Learning Based Optimization
Millimeter wave (mmWave) communications, envisaged for next-generation wireless networks, rely on large antenna arrays and very narrow, high-gain beams. This poses significant challenges to beam alignment between transmitter and receiver, which has attracted considerable research attention. Even when alignment is achieved, the link is subject to beam drift (BD). BD, caused by non-ideal features inherent in practical beams and by rapidly changing environments, refers to the phenomenon in which the center of the main lobe of the beam in use deviates from the true dominant channel direction, further degrading system performance. To mitigate the BD effect, in this paper we first theoretically analyze its impact on outage probability and effective achievable rate, taking practical factors (e.g., the rate of change of the environment, beamwidth, transmit power) into account. Then, departing from conventional practice, we propose a novel design philosophy in which multi-resolution beams with varying beamwidths are used for data transmission while narrow beams are employed for beam training. Finally, we design an efficient learning-based algorithm that can adaptively choose an appropriate beamwidth according to the environment. Simulation results demonstrate the effectiveness and superiority of our proposals.
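The beamwidth tradeoff above can be caricatured with a toy bandit: narrow beams give high gain but fall into outage when the angular drift leaves the main lobe, while wide beams are robust but low-gain. The sketch below is not the paper's algorithm; it is a minimal epsilon-greedy learner over a hypothetical set of beamwidths, with an invented outage-plus-gain reward model and invented numbers throughout.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical tradeoff model: reward is the achievable rate if the random
# drift stays inside the main lobe, and zero (outage) otherwise.
widths = np.array([4.0, 8.0, 16.0, 32.0])   # candidate beamwidths (degrees)
drift_std = 2.0                              # assumed angular drift (degrees)

def effective_rate(w):
    drift = abs(drift_std * rng.standard_normal())
    return np.log2(1.0 + 100.0 / w) if drift < w / 2 else 0.0

def eps_greedy(T=8000, eps=0.2):
    """Epsilon-greedy bandit over beamwidths; returns empirical mean rates."""
    counts = np.zeros(len(widths))
    means = np.zeros(len(widths))
    for t in range(T):
        if t < len(widths) or rng.random() < eps:
            a = int(rng.integers(len(widths)))   # explore
        else:
            a = int(np.argmax(means))            # exploit
        r = effective_rate(widths[a])
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]
    return means

means = eps_greedy()
best_width = widths[int(np.argmax(means))]
```

Under this toy model the learner settles on an intermediate beamwidth: wide enough to tolerate the assumed drift, narrow enough to keep useful gain, which is the qualitative behavior the abstract describes.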
Robust Symbol-Level Precoding Beyond CSI Models: A Probabilistic-Learning Based Approach
The use of large-scale antenna arrays makes it difficult to obtain perfect channel state information (CSI) in multi-antenna communication systems, yet perfect CSI is essential for precoding optimization. To tackle this issue, in this paper we propose a probabilistic-learning based approach (PLA) that relaxes the requirement for perfect CSI. The rationale is that existing precoding algorithms that output a single precoder are often overconfident in their representational abilities and in the obtained CSI. To avoid such overconfidence, we incorporate the idea of regularization from machine learning (ML) into precoding models, so as to limit the representational abilities of the precoding models. Compared to state-of-the-art robust precoding designs, an important advantage of PLA is that CSI uncertainty models are not required. As a specific application of PLA, we design an efficient robust symbol-level hybrid precoding algorithm for the millimeter wave system and confirm the effectiveness of PLA via simulations.
Low-Rank Channel Estimation for Millimeter Wave and Terahertz Hybrid MIMO Systems
Massive multiple-input multiple-output (MIMO) is one of the fundamental technologies for 5G and beyond. The increased number of antenna elements at both the transmitter and the receiver translates into a large-dimensional channel matrix. In addition, the power requirements of massive MIMO systems are high, especially when fully digital transceivers are deployed. To address this challenge, hybrid analog-digital transceivers are considered a viable alternative. However, for hybrid systems, the number of observations during each channel use is reduced. The high dimensionality of the channel matrix and the reduced number of observations make the channel estimation task challenging, and channel estimation may therefore require increased training overhead and higher computational complexity.
The need for high data rates is increasing rapidly, forcing a shift of wireless communication towards higher frequency bands such as millimeter wave (mmWave) and terahertz (THz). The wireless channel at these bands comprises only a few dominant paths. This makes the channel sparse in the angular domain, and the resulting channel matrix has low rank. This thesis aims to provide channel estimation solutions that benefit from the low-rank and sparse nature of the channel. The motivation behind this thesis is to offer a desirable trade-off between training overhead and computational complexity while providing an accurate estimate of the channel.
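The low-rank structure the abstract relies on follows directly from the few-path model: a P-path channel is a sum of P rank-one terms, so its matrix has rank at most P. The sketch below (illustrative only, not the thesis's estimator) builds such a channel for a hypothetical ULA geometry and shows the simplest way to exploit low rank, truncated-SVD denoising of a noisy observation.

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, P = 16, 32, 3   # rx/tx antennas, number of dominant paths (assumed known)

def steer(n, theta):
    """Half-wavelength ULA steering vector (hypothetical geometry)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

# A P-path channel is a sum of P rank-one outer products, so rank(H) <= P.
angles = rng.uniform(-np.pi / 2, np.pi / 2, size=(P, 2))
gains = rng.standard_normal(P) + 1j * rng.standard_normal(P)
H = sum(g * np.outer(steer(Nr, a), steer(Nt, b).conj())
        for g, (a, b) in zip(gains, angles))

# Noisy full observation of the channel (stand-in for a raw LS estimate).
N0 = 0.02
Y = H + N0 * (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)))

# Low-rank denoising: keep only the P dominant singular components.
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
H_hat = (U[:, :P] * s[:P]) @ Vh[:P]

err_raw = np.linalg.norm(Y - H) / np.linalg.norm(H)
err_lr = np.linalg.norm(H_hat - H) / np.linalg.norm(H)
```

Truncating the SVD discards the noise energy that lives outside the P-dimensional signal subspace, which is why the low-rank estimate beats the raw observation; the thesis's methods additionally handle the reduced observations of hybrid front ends, which this sketch does not model.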
Beam Training and Tracking with Limited Sampling Sets: Exploiting Environment Priors
Beam training and tracking (BTT) are key technologies for millimeter wave communications. However, since the effectiveness of BTT methods heavily depends on the wireless environment, the complexity and randomness of practical environments severely limit the application scope of many BTT algorithms and can even invalidate them. To tackle this issue, in this paper we take a stochastic process (SP) perspective: we model beam directions as an SP and address the BTT problem via process inference. The benefit of the SP design methodology is that environment priors and uncertainties can be naturally taken into account (e.g., by encoding them into the SP distribution) to improve prediction performance (e.g., accuracy and robustness). We take the Gaussian process (GP) as an example to elaborate on the design methodology and propose novel learning methods to optimize the prediction models. In particular, the beam training subset is optimized based on the derived posterior distribution. The GP-based SP methodology enjoys two advantages. First, good performance can be achieved even with little data, which is very appealing in dynamic communication scenarios. Second, in contrast to most BTT algorithms that predict only a single beam, our algorithms output an optimizable beam subset, which enables a flexible tradeoff between training overhead and desired performance. Simulation results show the superiority of our approach.
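One simple way to pick a training subset from a GP posterior, which is an illustration of the general idea rather than the paper's method, is greedy uncertainty sampling: repeatedly probe the beam whose posterior variance is largest, then condition the covariance on that measurement. The codebook size, kernel, and noise level below are all assumed values.

```python
import numpy as np

def rbf(a, b, ls=3.0):
    """Smoothness prior: nearby beams have correlated gains."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

beams = np.arange(32, dtype=float)          # hypothetical 32-beam codebook
K = rbf(beams, beams) + 1e-6 * np.eye(32)   # prior covariance of beam gains

def greedy_training_subset(K, k, noise=1e-2):
    """Greedily pick k beams to probe: each step takes the beam with the
    largest remaining posterior variance, then conditions on probing it."""
    cov = K.copy()
    chosen = []
    for _ in range(k):
        i = int(np.argmax(np.diag(cov)))
        chosen.append(i)
        ci = cov[:, i].copy()
        cov -= np.outer(ci, ci) / (cov[i, i] + noise)  # rank-one posterior update
    return chosen

subset = greedy_training_subset(K, 5)
```

Because measuring one beam also shrinks the variance of its correlated neighbors, the greedy rule automatically spreads the probes across the codebook instead of clustering them, which is how a posterior-driven subset can cover the beam space with few measurements.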
Millimeter Wave Beamforming Training: A Reinforcement Learning Approach
Beamforming training (BT) is considered an essential process for establishing communications in the millimeter wave (mmWave) band, i.e., 30-300 GHz. This process aims to find the best transmit/receive antenna beams to compensate for the impairments of the mmWave channel and successfully establish the mmWave link. Typically, the mmWave BT process is highly time-consuming, affecting the overall throughput and energy consumption of mmWave link establishment. In this paper, a machine learning (ML) approach, specifically reinforcement learning (RL), is utilized to enable the mmWave BT process by modeling it as a multi-armed bandit (MAB) problem with the aim of maximizing the long-term throughput of the constructed mmWave link. Based on this formulation, MAB algorithms such as upper confidence bound (UCB), Thompson sampling (TS), and epsilon-greedy (e-greedy) are utilized to address the problem and accomplish the mmWave BT process. Numerical simulations confirm the superior performance of the proposed MAB approach over existing mmWave BT techniques.
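The MAB view treats each candidate beam as an arm and each training slot as a pull. A minimal UCB1-style sketch of this formulation is below; the per-beam mean rewards and noise level are invented for illustration and stand in for the throughput feedback the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-beam mean rewards (e.g., normalized SNR); beam 2 is best.
true_snr = np.array([0.2, 0.5, 0.9, 0.4])

def ucb_beam_training(T=2000, c=2.0, noise=0.1):
    """UCB1-style bandit: each slot, probe the beam with the highest
    optimistic estimate (empirical mean + exploration bonus)."""
    n = len(true_snr)
    counts = np.zeros(n)
    means = np.zeros(n)
    for t in range(1, T + 1):
        if t <= n:
            arm = t - 1                                  # probe each beam once
        else:
            arm = int(np.argmax(means + np.sqrt(c * np.log(t) / counts)))
        reward = true_snr[arm] + noise * rng.standard_normal()
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts

counts = ucb_beam_training()
best_beam = int(np.argmax(counts))
```

The exploration bonus shrinks as a beam accumulates probes, so the bandit spends only a logarithmic fraction of slots on suboptimal beams and concentrates the rest on the best one, which is exactly the long-term-throughput objective of the formulation.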
A Tutorial on Environment-Aware Communications via Channel Knowledge Map for 6G
Sixth-generation (6G) mobile communication networks are expected to have dense infrastructures, large-dimensional channels, cost-effective hardware, diversified positioning methods, and enhanced intelligence. Such trends bring both new challenges and opportunities for the practical design of 6G. On one hand, acquiring channel state information (CSI) in real time for all wireless links becomes quite challenging in 6G. On the other hand, there would be numerous data sources in 6G containing high-quality location-tagged channel data, making it possible to better learn the local wireless environment. By exploiting such new opportunities to tackle the CSI acquisition challenge, there is a promising paradigm shift from conventional environment-unaware communications to new environment-aware communications based on the novel approach of the channel knowledge map (CKM). This article aims to provide a comprehensive tutorial overview of environment-aware communications enabled by CKM to fully harness its benefits for 6G. First, the basic concept of CKM is presented, and a comparison of CKM with various existing channel inference techniques is discussed. Next, the main techniques for CKM construction are discussed, including both model-free and model-assisted approaches. Furthermore, a general framework is presented for the utilization of CKM to achieve environment-aware communications, followed by some typical CKM-aided communication scenarios. Finally, important open problems in CKM research are highlighted and potential solutions are discussed to inspire future work.
Array Architectures and Physical Layer Design for Millimeter-Wave Communications Beyond 5G
Ever-increasing demands for mobile data rates have resulted in the exploration of millimeter-wave (mmW) frequencies for next-generation (5G) wireless networks. Communication at mmW frequencies presents two key challenges. First, high propagation loss requires base stations (BSs) and user equipment (UEs) to use large numbers of antennas and narrow beams to close the link with sufficient received signal power. Consequently, communicating with narrow beams creates a new challenge in channel estimation and link establishment based on fine angular probing. Current mmW systems use analog phased arrays that can probe only one angle at a time, which results in high latency during link establishment and channel tracking. It is desirable to design low-latency beam training by exploring both physical-layer designs and array architectures that could replace current 5G approaches and pave the way to communications in the higher mmW bands and the sub-THz region, where larger antenna arrays and wider bandwidths can be exploited. To this end, we propose novel signal processing techniques exploiting unique properties of the mmW channel and show, theoretically as well as in simulations and experiments, their advantages over conventional approaches. Second, we explore different array architecture designs and analyze their trade-offs among spectral efficiency, power consumption, and area. For a comprehensive comparison, we have developed a methodology for the optimal design of system parameters for different array architecture candidates based on a spectral efficiency target, and we use these parameters to estimate array area and power consumption based on circuits reported in the literature.
We show that hybrid analog-digital architectures have severe scalability concerns in radio-frequency signal distribution as array size and spatial multiplexing levels increase, while fully digital array architectures have the best performance and power/area trade-offs. The developed approaches are based on cross-disciplinary research that combines innovation in model-based signal processing, machine learning, and radio hardware. This work is the first to apply compressive sensing (CS), a signal processing tool that exploits the sparsity of the mmW channel model, to accelerate beam training in mmW cellular systems. The algorithm is designed to address practical issues, including the requirement of cell discovery and synchronization, which involves estimating the angular channel together with carrier frequency and timing offsets. We have analyzed the algorithm's performance in 5G-compliant simulations and shown that an order-of-magnitude saving in initial access latency is achieved for the desired channel estimation accuracy. Moreover, we are the first to develop and implement a neural-network-assisted compressive beam alignment to deal with hardware impairments in mmW radios. Using a 60 GHz mmW testbed, we performed experiments showing that the neural network approach improves the alignment rate compared to CS. To further accelerate beam training, we proposed novel frequency-selective probing beams using the true-time-delay (TTD) analog array architecture. Our approach uses different subcarriers to scan different directions and achieves single-shot beam alignment, the fastest approach reported to date. Our comprehensive analysis of different array architectures and exploration of emerging architectures enabled us to develop order-of-magnitude faster and more energy-efficient approaches for initial access and channel estimation in mmW systems.
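The CS idea the thesis applies to beam training can be illustrated with a generic orthogonal matching pursuit (OMP) recovery of a sparse angular channel; this is a textbook sketch under assumed dimensions and a random probing matrix, not the thesis's 5G-compliant algorithm (which additionally handles cell discovery, carrier frequency offset, and timing offsets).

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, S = 64, 32, 2   # codebook size, compressive probes, sparsity (paths)

# Random complex probing matrix standing in for compressive beam measurements.
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)

# Sparse angular channel: only two codebook directions carry energy.
x = np.zeros(N, dtype=complex)
x[12] = 2.0 + 1.0j
x[45] = -1.5j
y = A @ x + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

def omp(A, y, S):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0, dtype=complex)
    for _ in range(S):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = 0.0                    # never re-pick a chosen column
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1], dtype=complex)
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, S)
```

The point of the sketch is the measurement count: M = 32 compressive probes recover a 64-direction sparse channel, whereas exhaustive angular probing would need all 64, which is the latency saving the thesis exploits.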