
    Beam Drift in Millimeter Wave Links: Beamwidth Tradeoffs and Learning Based Optimization

    Millimeter wave (mmWave) communications, envisaged for next-generation wireless networks, rely on large antenna arrays and very narrow, high-gain beams. This poses significant challenges for beam alignment between transmitter and receiver, which has attracted considerable research attention. Even when alignment is achieved, the link remains subject to beam drift (BD): the phenomenon in which the center of the main lobe of the beam in use deviates from the true dominant channel direction. BD is caused by non-ideal features inherent in practical beams and by rapidly changing environments, and it further degrades system performance. To mitigate the BD effect, in this paper we first theoretically analyze its impact on outage probability and effective achievable rate, taking practical factors (e.g., the rate of change of the environment, beamwidth, transmit power) into account. Then, departing from conventional practice, we propose a novel design philosophy in which multi-resolution beams with varying beamwidths are used for data transmission while narrow beams are employed for beam training. Finally, we design an efficient learning-based algorithm that adaptively chooses an appropriate beamwidth according to the environment. Simulation results demonstrate the effectiveness and superiority of our proposals.
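    As a rough illustration of the kind of learning-based beamwidth adaptation this abstract describes, the sketch below runs a standard UCB1 bandit over a small set of candidate beamwidths. The beamwidth values, the reward simulator, and the drift model are invented for illustration and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical candidate beamwidths (radians); the paper's codebook is not specified here.
        beamwidths = [0.02, 0.05, 0.1, 0.2]

        def effective_rate(width):
            """Stand-in simulator: narrower beams give higher peak gain but
            suffer beam drift more often; all numbers are illustrative."""
            gain = 1.0 / width                                  # higher gain when narrower
            drifted = rng.random() < 0.5 * (0.02 / width)       # drift more likely when narrow
            return 0.0 if drifted else np.log2(1.0 + gain)

        counts = np.zeros(len(beamwidths))
        means = np.zeros(len(beamwidths))

        for t in range(1, 2001):
            # UCB1: sample every arm once, then pick the arm with the best optimism bonus.
            if 0 in counts:
                a = int(np.argmin(counts))
            else:
                a = int(np.argmax(means + np.sqrt(2.0 * np.log(t) / counts)))
            r = effective_rate(beamwidths[a])
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]

        print("selected beamwidth:", beamwidths[int(np.argmax(means))])

    The toy reward deliberately encodes the paper's tradeoff: the narrowest beam has the highest gain but drifts most, so an intermediate beamwidth maximizes the average rate.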

    Radar Enhanced Multi-Armed Bandit for Rapid Beam Selection in Millimeter Wave Communications

    Multi-armed bandit (MAB) algorithms have been used to learn optimal beams for millimeter wave communication systems. However, the complexity of learning the optimal beam scales linearly with the number of beams, leading to high latency when the beam codebook is large. In this work, we propose to integrate radar with communication to enhance MAB learning performance by searching only those beams in which the radar detects a scatterer. Further, we use radar to distinguish beams containing mobile targets from those indicating only static clutter, thereby reducing the number of beams to scan. Simulations show that our proposed radar-enhanced MAB reduces exploration time by searching only the beams with distinct radar mobile targets, resulting in improved throughput.
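    A minimal sketch of the radar-vetting idea under assumed models: a hypothetical radar pass keeps only beams with a detected scatterer whose Doppler indicates motion (static clutter is dropped), and an epsilon-greedy bandit then explores just those beams. The detection probabilities, Doppler threshold, and SNR model are illustrative assumptions, not the paper's.

        import numpy as np

        rng = np.random.default_rng(1)
        NUM_BEAMS = 64

        # Hypothetical radar pass: keep a beam only if a scatterer is detected
        # AND its Doppler estimate indicates motion (static clutter discarded).
        has_scatterer = rng.random(NUM_BEAMS) < 0.2
        doppler = rng.normal(0.0, 1.0, NUM_BEAMS)
        candidates = [b for b in range(NUM_BEAMS)
                      if has_scatterer[b] and abs(doppler[b]) > 0.5]
        if not candidates:
            candidates = list(range(NUM_BEAMS))   # fall back to a full sweep

        BEST = candidates[0]  # pretend the first vetted beam is the true best

        def snr_sample(beam):
            """Stand-in channel measurement; numbers are illustrative only."""
            return rng.normal(1.0 if beam == BEST else 0.2, 0.1)

        # epsilon-greedy bandit restricted to the radar-vetted beams
        eps = 0.1
        counts = {b: 0 for b in candidates}
        means = {b: 0.0 for b in candidates}

        for t in range(500):
            explore = rng.random() < eps
            a = int(rng.choice(candidates)) if explore else max(candidates, key=means.get)
            r = snr_sample(a)
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]

        print("selected beam:", max(candidates, key=means.get))

    The latency gain comes entirely from the smaller arm set: the bandit's exploration cost scales with len(candidates) rather than with NUM_BEAMS.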

    Millimeter Wave Beamforming Training: A Reinforcement Learning Approach

    Beamforming training (BT) is considered an essential process for establishing communications in the millimeter wave (mmWave) band, i.e., 30–300 GHz. This process aims to find the best transmit/receive antenna beams to compensate for the impairments of the mmWave channel and successfully establish the mmWave link. The mmWave BT process is typically highly time-consuming, affecting the overall throughput and energy consumption of mmWave link establishment. In this paper, a machine learning (ML) approach, specifically reinforcement learning (RL), is utilized to enable the mmWave BT process by modeling it as a multi-armed bandit (MAB) problem with the aim of maximizing the long-term throughput of the constructed mmWave link. Based on this formulation, MAB algorithms such as upper confidence bound (UCB), Thompson sampling (TS), and epsilon-greedy (ε-greedy) are utilized to solve the problem and accomplish the mmWave BT process. Numerical simulations confirm the superior performance of the proposed MAB approach over existing mmWave BT techniques.
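    To make the MAB formulation concrete, here is a minimal Thompson-sampling sketch that treats each beam as an arm with a Bernoulli "packet delivered" reward and a Beta posterior. The per-beam success probabilities are synthetic stand-ins for the mmWave channel, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(2)
        NUM_BEAMS = 32

        # Hypothetical per-beam link-success probabilities; in the paper these
        # arise from the mmWave channel, here they are synthetic.
        p_success = rng.uniform(0.1, 0.5, NUM_BEAMS)
        p_success[rng.integers(NUM_BEAMS)] = 0.9   # one clearly best beam

        # Thompson sampling: Beta(alpha, beta) posterior per beam
        alpha = np.ones(NUM_BEAMS)
        beta = np.ones(NUM_BEAMS)

        for t in range(2000):
            theta = rng.beta(alpha, beta)          # sample a success rate per beam
            a = int(np.argmax(theta))              # transmit on the most promising beam
            reward = rng.random() < p_success[a]   # Bernoulli "packet delivered"
            alpha[a] += reward
            beta[a] += 1 - reward

        posterior_mean = alpha / (alpha + beta)
        print("best beam estimate:", int(np.argmax(posterior_mean)))

    UCB and ε-greedy drop into the same loop by replacing the posterior sampling step with an optimism bonus or a random-exploration coin flip, which is what makes the MAB framing convenient for comparing BT policies.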

    Best Arm Identification Based Beam Acquisition in Stationary and Abruptly Changing Environments

    We study the initial beam acquisition problem in millimeter wave (mm-wave) networks from the perspective of best arm identification in multi-armed bandits (MABs). For the stationary environment, we propose a novel algorithm called concurrent beam exploration (CBE), in which multiple beams are grouped based on the beam indices and are simultaneously activated to detect the presence of the user. The best beam is then identified using a Hamming decoding strategy. For the case of orthogonal and highly directional thin beams, we characterize the performance of CBE in terms of the probability of missed detection and false alarm in a beam group (BG). Leveraging this, we derive the probability of beam selection error and prove that CBE outperforms state-of-the-art strategies on this metric. Then, for abruptly changing environments, e.g., in the case of moving blockages, we characterize the performance of the classical sequential halving (SH) algorithm. In particular, we derive the conditions on the distribution of the change for which the beam selection error is exponentially bounded. When the change is restricted to a subset of the beams, we devise a strategy called K-sequential halving and exhaustive search (K-SHES), which leads to an improved bound on the beam selection error as compared to SH. This policy is particularly useful when a near-optimal beam becomes optimal during the beam-selection procedure due to abruptly changing channel conditions. Finally, we demonstrate the efficacy of the proposed scheme by employing it in a tandem beam refinement and data transmission scheme.
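    For reference, a compact sketch of the classical sequential halving (SH) routine this abstract builds on, applied to noisy per-beam SNR measurements: split the sampling budget evenly across log2(K) rounds and keep the better half of the surviving beams each round. The budget split, channel model, and noise level are illustrative assumptions; the K-SHES variant is not reproduced here.

        import math
        import numpy as np

        rng = np.random.default_rng(3)

        def sequential_halving(num_beams, budget, snr_sample):
            """Best-arm identification by sequential halving: spread the budget
            over ceil(log2(K)) rounds, keeping the better half each round."""
            alive = list(range(num_beams))
            rounds = math.ceil(math.log2(num_beams))
            for _ in range(rounds):
                pulls = max(1, budget // (len(alive) * rounds))
                means = {b: np.mean([snr_sample(b) for _ in range(pulls)]) for b in alive}
                alive.sort(key=lambda b: means[b], reverse=True)
                alive = alive[: max(1, len(alive) // 2)]
            return alive[0]

        # Illustrative channel: beam 10 is best; measurements are noisy.
        snr = lambda b: rng.normal(1.0 if b == 10 else 0.3, 0.2)
        print("identified beam:", sequential_halving(64, 4000, snr))

    Because the surviving set halves each round, later rounds spend more pulls per beam exactly where the remaining arms are hardest to distinguish, which is the property the abstract's error bounds exploit.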