89 research outputs found

    Learning-Based Predictive Transmitter-Receiver Beam Alignment in Millimeter Wave Fixed Wireless Access Links

    Millimeter wave (mmWave) fixed wireless access is a key enabler of 5G-and-beyond small-cell network deployment, exploiting the abundant mmWave spectrum to provide Gbps backhaul and access links. Large antenna arrays and highly directional beamforming are necessary to combat the mmWave path loss. However, narrow beams increase sensitivity to physical perturbations caused by environmental factors. To address this issue, in this paper we propose a predictive transmit-receive beam alignment process. We construct an explicit mapping between transmit (or receive) beams and physical coordinates via a Gaussian process, which can incorporate environmental uncertainty. To make full use of the underlying correlation between transmitter and receiver and of accumulated experience, we further construct a hierarchical Bayesian learning model and design an efficient beam prediction algorithm. To reduce dependence on physical position measurements, a reverse mapping that predicts physical coordinates from beam experience is also constructed. The designed algorithms enjoy two advantages. First, thanks to Bayesian learning, good performance can be achieved even in small-sample settings, with as few as 10 samples in our scenarios, which drastically reduces training time and is therefore very appealing for wireless communications. Second, in contrast to most existing algorithms that output only one beam in each time slot, the designed algorithms generate the most promising beam subset, which improves robustness to environmental uncertainty. Simulation results demonstrate the effectiveness and superiority of the designed algorithms against the state of the art.
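
    The beam-coordinate mapping described in this abstract can be sketched as a small Gaussian process regression. The sketch below is illustrative only, not the paper's hierarchical Bayesian model: the RBF kernel, the 16-beam codebook, and the training data are all assumptions for the example.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between coordinate sets a (n,d) and b (m,d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict_beam_angle(X_train, y_train, X_test, noise=1e-2):
    """GP posterior mean/variance of the best beam angle at new coordinates."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# Hypothetical data: 10 (x, y) receiver positions with measured best beam angles
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(10, 2))
y = 0.1 * X[:, 0] + 0.05 * X[:, 1]        # stand-in for measured best angles (rad)
mu, var = gp_predict_beam_angle(X, y, np.array([[5.0, 5.0]]))

# Map the predicted angle to the nearest beam in a uniform 16-beam codebook
codebook = np.linspace(0, np.pi / 2, 16)
best_beam = int(np.argmin(np.abs(codebook - mu[0])))
```

    Even with only 10 position-beam pairs, the posterior mean gives a usable prediction, and the posterior variance indicates where the mapping is uncertain, which is what makes a most-promising beam subset (rather than a single beam) natural to extract.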

    Beam Drift in Millimeter Wave Links: Beamwidth Tradeoffs and Learning Based Optimization

    Millimeter wave (mmWave) communications, envisaged for next-generation wireless networks, rely on large antenna arrays and very narrow, high-gain beams. This poses significant challenges to beam alignment between transmitter and receiver, which has attracted considerable research attention. Even when alignment is achieved, the link is subject to beam drift (BD): the phenomenon in which the center of the main lobe of the beam in use deviates from the true dominant channel direction. BD is caused by non-ideal features inherent in practical beams and by rapidly changing environments, and it further degrades system performance. To mitigate the BD effect, in this paper we first theoretically analyze its impact on outage probability as well as effective achievable rate, taking practical factors (e.g., the rate of change of the environment, beamwidth, transmit power) into account. Then, departing from conventional practice, we propose a novel design philosophy in which multi-resolution beams with varying beamwidths are used for data transmission while narrow beams are employed for beam training. Finally, we design an efficient learning-based algorithm that can adaptively choose an appropriate beamwidth according to the environment. Simulation results demonstrate the effectiveness and superiority of our proposals.
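
    The beamwidth tradeoff under beam drift can be illustrated with a toy model: narrower beams give higher gain but a larger chance that the drifted channel direction falls outside the main lobe. All formulas and numbers below are illustrative assumptions, not the paper's analysis.

```python
from math import erf, sqrt
import numpy as np

def effective_rate(beamwidth, drift_std, snr0=100.0):
    """Toy effective rate: alignment probability times the aligned-beam rate.

    Gain is modeled as ~1/beamwidth; drift is zero-mean Gaussian, so the
    beam stays aligned when |drift| < beamwidth/2.
    """
    gain = 1.0 / beamwidth
    p_aligned = erf((beamwidth / 2) / (drift_std * sqrt(2)))
    return p_aligned * np.log2(1 + snr0 * gain)

widths = np.linspace(0.01, 0.5, 50)
rates = [effective_rate(w, drift_std=0.05) for w in widths]
best_width = widths[int(np.argmax(rates))]   # interior optimum: neither extreme wins
```

    The optimum lies strictly between the narrowest and widest beams, which is exactly the tradeoff the learning-based algorithm in the paper adapts to as the environment changes.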

    Robust Symbol-Level Precoding Beyond CSI Models: A Probabilistic-Learning Based Approach

    The use of large-scale antenna arrays poses great difficulties in obtaining perfect channel state information (CSI) in multi-antenna communication systems, which is essential for precoding optimization. To tackle this issue, in this paper we propose a probabilistic-learning based approach (PLA), aiming to alleviate the requirement of perfect CSI. The rationale is that existing precoding algorithms that output a single precoder are often overconfident in their abilities and in the obtained CSI. To avoid overconfidence, we incorporate the idea of regularization from machine learning (ML) into precoding models, so as to limit the representational capacity of the precoding models. Compared to state-of-the-art robust precoding designs, an important advantage of PLA is that CSI uncertainty models are not required. As a specific application of PLA, we design an efficient robust symbol-level hybrid precoding algorithm for the millimeter wave system and confirm the effectiveness of PLA via simulations.

    Low-Rank Channel Estimation for Millimeter Wave and Terahertz Hybrid MIMO Systems

    Massive multiple-input multiple-output (MIMO) is one of the fundamental technologies for 5G and beyond. The increased number of antenna elements at both the transmitter and the receiver translates into a large-dimension channel matrix. In addition, the power requirements of massive MIMO systems are high, especially when fully digital transceivers are deployed. To address this challenge, hybrid analog-digital transceivers are considered a viable alternative. However, for hybrid systems, the number of observations during each channel use is reduced. The high dimensions of the channel matrix and the reduced number of observations make the channel estimation task challenging, so channel estimation may require increased training overhead and higher computational complexity. The need for high data rates is increasing rapidly, forcing a shift of wireless communication towards higher frequency bands such as millimeter wave (mmWave) and terahertz (THz). The wireless channel at these bands comprises only a few dominant paths. This makes the channel sparse in the angular domain, and the resulting channel matrix has low rank. This thesis aims to provide channel estimation solutions that benefit from the low-rank and sparse nature of the channel. The motivation behind this thesis is to offer a desirable trade-off between training overhead and computational complexity while providing a desirable estimate of the channel.
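
    The low-rank structure the abstract exploits can be demonstrated with a minimal sketch: a channel built from a few steering-vector paths has (near) rank L, so truncating the SVD of a noisy observation to rank L suppresses most of the noise. The array sizes, path gains, and idealized full observation below are assumptions for the example, not the thesis's estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt, L = 32, 64, 3            # receive/transmit antennas, dominant paths

def steering(n, theta):
    """Half-wavelength ULA steering vector for angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

# Sparse mmWave/THz channel: a few dominant paths => a (near) rank-L matrix
gains = [1.0, 0.7, 0.5]          # illustrative per-path gains
H = sum(g * np.outer(steering(Nr, rng.uniform(-np.pi / 2, np.pi / 2)),
                     steering(Nt, rng.uniform(-np.pi / 2, np.pi / 2)).conj())
        for g in gains)

noise = 0.01 * (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt)))
Y = H + noise                    # idealized noisy full observation

# Low-rank estimate: keep only the top-L singular components of Y
U, s, Vh = np.linalg.svd(Y)
H_hat = (U[:, :L] * s[:L]) @ Vh[:L]

err_lr = np.linalg.norm(H_hat - H) / np.linalg.norm(H)
err_raw = np.linalg.norm(Y - H) / np.linalg.norm(H)
```

    The rank-L estimate is noticeably closer to the true channel than the raw observation, because the truncation discards the noise energy lying outside the L-dimensional signal subspaces.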

    Beam Training and Tracking with Limited Sampling Sets: Exploiting Environment Priors

    Beam training and tracking (BTT) are key technologies for millimeter wave communications. However, since the effectiveness of BTT methods heavily depends on the wireless environment, the complexity and randomness of practical environments severely limit the application scope of many BTT algorithms and can even invalidate them. To tackle this issue, from the perspective of stochastic processes (SPs), in this paper we propose to model beam directions as an SP and address the BTT problem via process inference. The benefit of the SP design methodology is that environment priors and uncertainties can be naturally taken into account (e.g., by encoding them into the SP distribution) to improve prediction efficiency (e.g., accuracy and robustness). We take the Gaussian process (GP) as an example to elaborate on the design methodology and propose novel learning methods to optimize the prediction models. In particular, the beam training subset is optimized based on the derived posterior distribution. The GP-based SP methodology enjoys two advantages. First, good performance can be achieved even with small data sets, which is very appealing in dynamic communication scenarios. Second, in contrast to most BTT algorithms that predict only a single beam, our algorithms output an optimizable beam subset, which enables a flexible tradeoff between training overhead and desired performance. Simulation results show the superiority of our approach.
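
    Selecting a beam training subset from a posterior distribution can be sketched as ranking codebook beams by an optimistic score (posterior mean plus a multiple of the posterior standard deviation) and keeping the top k. The posterior values and the scoring rule below are illustrative assumptions, not the paper's optimization.

```python
import numpy as np

def top_k_beam_subset(post_mean, post_var, k=4, kappa=1.0):
    """Rank codebook beams by posterior mean + kappa posterior std
    and return the indices of the best k, best first."""
    score = post_mean + kappa * np.sqrt(post_var)
    return np.argsort(score)[::-1][:k]

# Hypothetical GP posterior over a 16-beam codebook: the gain peaks near
# beam 9, while beam 0 has rarely been tried, so its variance is large
mean = np.exp(-0.5 * ((np.arange(16) - 9) / 2.0) ** 2)
var = np.full(16, 0.01)
var[0] = 0.5
subset = top_k_beam_subset(mean, var, k=4)
# The subset mixes exploitation (beams around 9) with exploration (beam 0)
```

    Varying k directly trades training overhead against the probability that the subset contains the true best beam, which is the flexible tradeoff the abstract highlights.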

    Millimeter Wave Beamforming Training: A Reinforcement Learning Approach

    Beamforming training (BT) is considered an essential process for accomplishing communications in the millimeter wave (mmWave) band, i.e., 30-300 GHz. This process aims to find the best transmit/receive antenna beams to compensate for the impairments of the mmWave channel and successfully establish the mmWave link. Typically, the mmWave BT process is highly time-consuming, affecting the overall throughput and energy consumption of mmWave link establishment. In this paper, a machine learning (ML) approach, specifically reinforcement learning (RL), is utilized to enable the mmWave BT process by modeling it as a multi-armed bandit (MAB) problem with the aim of maximizing the long-term throughput of the constructed mmWave link. Based on this formulation, MAB algorithms such as upper confidence bound (UCB), Thompson sampling (TS), and epsilon-greedy are utilized to address the problem and accomplish the mmWave BT process. Numerical simulations confirm the superior performance of the proposed MAB approach over existing mmWave BT techniques.
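
    The MAB formulation can be sketched with UCB1: each arm is a candidate beam, and the reward is a Bernoulli ACK indicating whether the beam supported the link in that slot. The per-beam success probabilities, horizon, and exploration constant below are hypothetical values for the example, not results from the paper.

```python
import numpy as np

def ucb_beam_training(success_prob, horizon=2000, c=2.0, seed=0):
    """UCB1 over a beam codebook; returns the most frequently chosen beam."""
    rng = np.random.default_rng(seed)
    n_beams = len(success_prob)
    counts = np.zeros(n_beams)
    means = np.zeros(n_beams)
    for t in range(horizon):
        if t < n_beams:
            beam = t                                   # try every beam once
        else:
            ucb = means + np.sqrt(c * np.log(t) / counts)
            beam = int(np.argmax(ucb))                 # optimism under uncertainty
        reward = rng.random() < success_prob[beam]     # Bernoulli ACK
        counts[beam] += 1
        means[beam] += (reward - means[beam]) / counts[beam]
    return int(np.argmax(counts))

# Hypothetical per-beam link success probabilities (beam 2 is best)
p = np.array([0.2, 0.35, 0.9, 0.5, 0.1, 0.4, 0.3, 0.25])
best = ucb_beam_training(p)
```

    The confidence bonus shrinks as a beam accumulates trials, so exploration concentrates on the best beam over time, which is what lets the bandit view cut the exhaustive-sweep training overhead.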

    A Tutorial on Environment-Aware Communications via Channel Knowledge Map for 6G

    Sixth-generation (6G) mobile communication networks are expected to have dense infrastructures, large-dimensional channels, cost-effective hardware, diversified positioning methods, and enhanced intelligence. Such trends bring both new challenges and opportunities for the practical design of 6G. On one hand, acquiring channel state information (CSI) in real time for all wireless links becomes quite challenging in 6G. On the other hand, there will be numerous data sources in 6G containing high-quality location-tagged channel data, making it possible to better learn the local wireless environment. By exploiting such new opportunities to tackle the CSI acquisition challenge, there is a promising paradigm shift from conventional environment-unaware communications to new environment-aware communications based on the novel approach of the channel knowledge map (CKM). This article aims to provide a comprehensive tutorial overview of environment-aware communications enabled by CKM to fully harness its benefits for 6G. First, the basic concept of CKM is presented, and a comparison of CKM with various existing channel inference techniques is discussed. Next, the main techniques for CKM construction are discussed, including both model-free and model-assisted approaches. Furthermore, a general framework is presented for the utilization of CKM to achieve environment-aware communications, followed by some typical CKM-aided communication scenarios. Finally, important open problems in CKM research are highlighted, and potential solutions are discussed to inspire future work.
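
    The core CKM idea, inferring channel knowledge at an unmeasured location from location-tagged data, can be sketched as a channel-gain map with inverse-distance-weighted interpolation. This is a minimal model-free construction for illustration; the class name, the interpolation rule, and the four tagged measurements are all assumptions, not from the tutorial.

```python
import numpy as np

class ChannelGainMap:
    """Minimal channel-gain-map flavor of a CKM: store location-tagged gain
    measurements and interpolate the gain at unmeasured locations."""

    def __init__(self, locations, gains_db):
        self.loc = np.asarray(locations, float)   # (n, 2) tagged positions
        self.g = np.asarray(gains_db, float)      # (n,) measured gains in dB

    def query(self, pos, eps=1e-9):
        d = np.linalg.norm(self.loc - np.asarray(pos, float), axis=1)
        if d.min() < eps:                          # exact hit in the database
            return float(self.g[int(np.argmin(d))])
        w = 1.0 / d**2                             # inverse-distance weights
        return float(w @ self.g / w.sum())

# Hypothetical location-tagged measurements at four grid corners
ckm = ChannelGainMap([[0, 0], [0, 10], [10, 0], [10, 10]],
                     [-60.0, -70.0, -70.0, -80.0])
gain = ckm.query([5, 5])   # interpolated gain where no measurement exists
```

    Queries at tagged locations return the stored measurement directly, while other queries blend nearby measurements, which is the sense in which a CKM substitutes environment knowledge for real-time CSI acquisition.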