Beam Drift in Millimeter Wave Links: Beamwidth Tradeoffs and Learning Based Optimization
Millimeter wave (mmWave) communications, envisaged for next-generation wireless networks, rely on large antenna arrays and very narrow, high-gain beams. This poses significant challenges to beam alignment between transmitter and receiver, which has attracted considerable research attention. Even after alignment is achieved, the link remains subject to beam drift (BD): the phenomenon in which the center of the main lobe of the beam in use deviates from the true dominant channel direction, caused by non-ideal features inherent in practical beams and by rapidly changing environments, which further degrades system performance. To mitigate the BD effect, in this paper we first theoretically analyze its impact on the outage probability and the effective achievable rate, taking practical factors (e.g., the rate of change of the environment, beam width, and transmit power) into account. Then, departing from conventional practice, we propose a novel design philosophy in which multi-resolution beams with varying beam widths are used for data transmission while narrow beams are employed for beam training. Finally, we design an efficient learning-based algorithm that adaptively chooses an appropriate beam width according to the environment. Simulation results demonstrate the effectiveness and superiority of our proposals.
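The adaptive beam-width selection described above can be illustrated with a minimal epsilon-greedy bandit sketch. Everything below is an illustrative assumption rather than the paper's algorithm: the candidate widths, the Gaussian pointing-error model, and the toy reward (zero rate when drift exceeds the half beam width, otherwise a rate proportional to the inverse width as a stand-in for beam gain).

```python
import random

random.seed(0)

def select_beamwidth(widths, counts, rewards, eps=0.1):
    """Epsilon-greedy choice over candidate beam widths:
    explore while any width is untried or with probability eps,
    otherwise exploit the width with the best empirical mean reward."""
    if 0 in counts or random.random() < eps:
        return random.randrange(len(widths))
    means = [r / c for r, c in zip(rewards, counts)]
    return max(range(len(widths)), key=lambda i: means[i])

widths = [5.0, 10.0, 20.0]        # hypothetical candidate beam widths (degrees)
counts = [0] * len(widths)
rewards = [0.0] * len(widths)
for _ in range(1000):
    i = select_beamwidth(widths, counts, rewards)
    drift = abs(random.gauss(0.0, 6.0))             # toy pointing error (degrees)
    gain = 1.0 / widths[i]                          # narrower beam -> higher peak gain
    rate = gain if drift < widths[i] / 2 else 0.0   # outage once drift exceeds half-width
    counts[i] += 1
    rewards[i] += rate
```

The sketch captures the paper's central trade-off: a wide beam tolerates more drift but pays in gain, so the learner's empirical means encode which width suits the assumed drift statistics.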
Radar Enhanced Multi-Armed Bandit for Rapid Beam Selection in Millimeter Wave Communications
Multi-armed bandit (MAB) algorithms have been used to learn optimal beams for
millimeter wave communication systems. However, the complexity of learning the
optimal beam scales linearly with the number of beams, leading to high latency
when there are many beams. In this work, we propose to integrate
radar with communication to enhance the MAB learning performance by searching
only those beams where the radar detects a scatterer. Further, we use radar to
distinguish the beams that show mobile targets from those which indicate the
presence of static clutter, thereby reducing the number of beams to scan.
Simulations show that our proposed radar-enhanced MAB reduces the exploration
time by searching only the beams with distinct radar-detected mobile targets,
resulting in improved throughput.
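The pruning idea above can be sketched as a standard UCB1 learner restricted to the radar-flagged beams. The `active` set of radar detections, the Bernoulli link-success rewards, and all numbers are hypothetical stand-ins, not the paper's model.

```python
import math
import random

def ucb_select(counts, sums, t, active):
    """UCB1 restricted to the subset of beams the radar flagged as
    containing mobile scatterers; play each active beam once first."""
    for i in active:
        if counts[i] == 0:
            return i
    return max(active, key=lambda i: sums[i] / counts[i]
               + math.sqrt(2 * math.log(t) / counts[i]))

random.seed(1)
n_beams = 64
active = [7, 23, 41]                       # hypothetical radar-detected beams
true_snr = {7: 0.3, 23: 0.8, 41: 0.5}      # illustrative mean link-success probs
counts = [0] * n_beams
sums = [0.0] * n_beams
for t in range(1, 301):
    i = ucb_select(counts, sums, t, active)
    r = random.random() < true_snr[i]      # Bernoulli reward (link success/failure)
    counts[i] += 1
    sums[i] += r
```

Because exploration never touches the 61 beams without radar returns, the regret and latency scale with `len(active)` rather than `n_beams`, which is the mechanism the abstract describes.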
Millimeter Wave Beamforming Training: A Reinforcement Learning Approach
Beamforming training (BT) is considered an essential process for accomplishing communications in the millimeter wave (mmWave) band, i.e., 30–300 GHz. This process aims to identify the best transmit/receive antenna beams to compensate for the impairments of the mmWave channel and successfully establish the mmWave link. Typically, the mmWave BT process is highly time-consuming, affecting the overall throughput and energy consumption of mmWave link establishment. In this paper, a machine learning (ML) approach, specifically reinforcement learning (RL), is utilized to enable the mmWave BT process by modeling it as a multi-armed bandit (MAB) problem with the aim of maximizing the long-term throughput of the constructed mmWave link. Based on this formulation, MAB algorithms such as upper confidence bound (UCB), Thompson sampling (TS), and epsilon-greedy (e-greedy) are utilized to address the problem and accomplish the mmWave BT process. Numerical simulations confirm the superior performance of the proposed MAB approach over existing mmWave BT techniques.
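As one concrete instance of the MAB formulation above, Thompson sampling with Beta posteriors over Bernoulli per-beam rewards can be sketched as follows. The ACK/NACK-style feedback, the beam count, and the success probabilities are illustrative assumptions, not the paper's simulation setup.

```python
import random

def thompson_select(alpha, beta):
    """Thompson sampling: draw one sample from each beam's Beta posterior
    and transmit on the beam whose sample is largest."""
    samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
    return max(range(len(samples)), key=lambda i: samples[i])

random.seed(0)
n_beams = 16
p_success = [0.2] * n_beams
p_success[5] = 0.9                   # illustrative: beam 5 is the best TX/RX pair
alpha = [1] * n_beams                # Beta(1, 1) uniform priors
beta = [1] * n_beams
for _ in range(500):
    i = thompson_select(alpha, beta)
    if random.random() < p_success[i]:   # ACK -> success, update posterior
        alpha[i] += 1
    else:                                # NACK -> failure
        beta[i] += 1
```

Each transmission doubles as a training measurement, so the posterior concentrates on the strongest beam while throughput is being earned, which is what lets the MAB view trade BT overhead against long-term rate.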
Best Arm Identification Based Beam Acquisition in Stationary and Abruptly Changing Environments
We study the initial beam acquisition problem in millimeter wave (mm-wave)
networks from the perspective of best arm identification in multi-armed bandits
(MABs). For the stationary environment, we propose a novel algorithm called
concurrent beam exploration, CBE, in which multiple beams are grouped based on
the beam indices and are simultaneously activated to detect the presence of the
user. The best beam is then identified using a Hamming decoding strategy. For
the case of orthogonal and highly directional thin beams, we characterize the
performance of CBE in terms of the probability of missed detection and false
alarm in a beam group (BG). Leveraging this, we derive the probability of beam
selection error and prove that CBE outperforms the state-of-the-art strategies
in this metric.
Then, for abruptly changing environments, e.g., in the case of moving
blockages, we characterize the performance of the classical sequential halving
(SH) algorithm. In particular, we derive the conditions on the distribution of
the change for which the beam selection error is exponentially bounded. In case
the change is restricted to a subset of the beams, we devise a strategy called
K-sequential halving and exhaustive search, K-SHES, that leads to an improved
bound for the beam selection error as compared to SH. This policy is
particularly useful when a near-optimal beam becomes optimal during the
beam-selection procedure due to abruptly changing channel conditions. Finally,
we demonstrate the efficacy of the proposed scheme by employing it in a tandem
beam refinement and data transmission scheme.
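The classical sequential halving (SH) algorithm whose performance is characterized above can be sketched in a few lines. The per-beam success probabilities, the budget split across rounds, and the Bernoulli reward model below are illustrative assumptions; the K-SHES variant (halving on a subset followed by exhaustive search) is not reproduced here.

```python
import math
import random

def sequential_halving(arms, budget, pull):
    """Sequential halving: spread the budget over ceil(log2(K)) rounds,
    pull every surviving arm equally often, keep the better half each round."""
    active = list(arms)
    rounds = math.ceil(math.log2(len(active)))
    for _ in range(rounds):
        per_arm = max(1, budget // (len(active) * rounds))
        means = {a: sum(pull(a) for _ in range(per_arm)) / per_arm
                 for a in active}
        active.sort(key=lambda a: means[a], reverse=True)
        active = active[: max(1, len(active) // 2)]
    return active[0]

random.seed(2)
snr = [0.1, 0.3, 0.9, 0.4, 0.2, 0.5, 0.35, 0.25]   # illustrative per-beam success probs
best = sequential_halving(range(len(snr)), budget=4000,
                          pull=lambda a: random.random() < snr[a])
```

The abrupt-change analysis in the abstract asks when this elimination schedule still finds the best beam if the identity of the best arm switches mid-run; plain SH can discard the eventual winner in an early round, which is the failure mode K-SHES is designed to repair.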
Beam alignment for millimeter wave vehicular communications
Millimeter wave (mmWave) has the potential to provide vehicles with high data rate communications that will enable a whole new range of applications. Its use, however, is not straightforward due to its challenging propagation characteristics. One approach to overcoming the propagation challenge is the use of directional beams, but this requires proper alignment and presents a challenging engineering problem, especially under high vehicular mobility.
In this dissertation, fast and efficient beam alignment solutions suitable for vehicular applications are developed. To better quantify the problem, first the impact of directional beams on the temporal variation of the channels is investigated theoretically. The proposed model includes both the Doppler effect and the pointing error due to mobility. The channel coherence time is derived, and a new concept called the beam coherence time is proposed for capturing the overhead of mmWave beam alignment.
Next, an efficient learning-based beam alignment framework is proposed. The core of this framework is a set of beam pair selection methods that use side information (position, in this case) and past beam measurements to identify promising beam directions and eliminate unnecessary beam training. Three offline learning methods for beam pair selection are proposed: two statistics-based methods and one machine-learning-based method. The two statistical learning methods consist of a heuristic selection and an optimal selection that minimizes the misalignment probability. The third uses a learning-to-rank approach from the recommender systems literature. The proposed approach shows an order of magnitude lower overhead than the existing standard (IEEE 802.11ad), enabling it to support large arrays at high speed.
Finally, an online version of the optimal statistical learning method is developed. The solution is based on the upper confidence bound algorithm with a newly introduced risk-aware feature that helps avoid severe misalignment during learning. Along with the online beam pair selection, an online beam pair refinement is also proposed that learns to adapt the codebook to the environment and further maximize the beamforming gain. The combined solution shows fast learning behavior that quickly achieves positive gain over exhaustive search on the original (unrefined) codebook. The results show that side information can help reduce mmWave link configuration overhead.
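The dissertation does not spell out its risk-aware feature in this abstract, so the sketch below is only one plausible reading: a UCB1 learner that screens out beam pairs whose lower confidence bound has fallen below a floor, so that a severely misaligned pair stops being replayed. The floor, the under-sampling exemption, and the reward model are all assumptions made for illustration.

```python
import math
import random

def risk_aware_ucb(counts, sums, t, arms, lcb_floor=0.2):
    """UCB1 with an assumed risk filter: arms whose lower confidence bound
    drops below lcb_floor are screened out (unless still under-sampled),
    so badly misaligned beam pairs are abandoned early."""
    untried = [a for a in arms if counts[a] == 0]
    if untried:
        return untried[0]
    def bonus(a):
        return math.sqrt(2 * math.log(t) / counts[a])
    safe = [a for a in arms
            if sums[a] / counts[a] - bonus(a) >= lcb_floor
            or counts[a] < 10]           # keep under-sampled arms in play
    candidates = safe or list(arms)      # never let the candidate set go empty
    return max(candidates, key=lambda a: sums[a] / counts[a] + bonus(a))

random.seed(3)
n = 8
p = [0.1, 0.2, 0.85, 0.3, 0.15, 0.4, 0.25, 0.2]   # illustrative success probs
counts = [0] * n
sums = [0.0] * n
for t in range(1, 401):
    a = risk_aware_ucb(counts, sums, t, range(n))
    counts[a] += 1
    sums[a] += random.random() < p[a]
```

The point of the filter is the one the abstract makes: plain UCB happily revisits very bad arms while their confidence intervals are wide, which in a live mmWave link means outage during learning; a lower-bound screen caps how often that can happen.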