Robust Location-Aided Beam Alignment in Millimeter Wave Massive MIMO
Location-aided beam alignment has been proposed recently as a potential
approach for fast link establishment in millimeter wave (mmWave) massive MIMO
(mMIMO) communications. However, due to mobility and other imperfections in the
estimation process, the spatial information obtained at the base station (BS)
and the user equipment (UE) is likely to be noisy, degrading beam alignment performance.
In this paper, we introduce a robust beam alignment framework in order to
exhibit resilience with respect to this problem. We first recast beam alignment
as a decentralized coordination problem where BS and UE seek coordination on
the basis of correlated yet individual position information. We formulate the
optimum beam alignment strategy as the solution of a Bayesian team decision
problem. We then propose a suite of algorithms to approach optimality with
reduced complexity. The effectiveness of the robust beam alignment procedure,
compared with classical designs, is then verified on simulation settings with
varying location information accuracies.
Comment: 24 pages, 7 figures. The short version of this paper has been accepted to IEEE Globecom 201
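As a rough illustration of the robustness idea in this abstract (a toy model, not the paper's actual team-decision formulation), the sketch below shows how, under noisy angle information, a wider, lower-gain beam can yield a higher expected gain than a narrow, high-gain beam pointed at the noisy estimate; the angles, beam widths, and noise level are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Posterior samples of the true user angle, given a noisy estimate of
# 10 degrees with an assumed 8-degree standard deviation.
true_angle_samples = 10.0 + 8.0 * rng.standard_normal(10000)

def expected_gain(center, width, peak):
    # Idealized flat-top beam: full gain inside the beamwidth, zero outside.
    hit = np.abs(true_angle_samples - center) <= width / 2
    return peak * hit.mean()

narrow = expected_gain(center=10.0, width=5.0, peak=10.0)   # high gain, easy to miss
wide = expected_gain(center=10.0, width=30.0, peak=3.0)     # lower gain, robust

print(f"expected gain narrow: {narrow:.2f}, wide: {wide:.2f}")
```

Under this noise level the robust (wide) choice wins in expectation, which is the kind of trade-off a Bayesian team-decision design exploits.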
Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)
Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression.
Contextual Beamforming: Exploiting Location and AI for Enhanced Wireless Telecommunication Performance
The pervasive nature of wireless telecommunication has made it the foundation
for mainstream technologies like automation, smart vehicles, virtual reality,
and unmanned aerial vehicles. As these technologies experience widespread
adoption in our daily lives, ensuring the reliable performance of cellular
networks in mobile scenarios has become a paramount challenge. Beamforming, an
integral component of modern mobile networks, enables spatial selectivity and
improves network quality. However, many beamforming techniques are iterative,
introducing unwanted latency to the system. In recent times, there has been a
growing interest in leveraging mobile users' location information to expedite
beamforming processes. This paper explores the concept of contextual
beamforming, discussing its advantages, disadvantages, and implications.
Notably, the study presents an impressive 53% improvement in signal-to-noise
ratio (SNR) by implementing the maximum ratio transmission (MRT) adaptive beamforming algorithm compared
to scenarios without beamforming. It further elucidates how MRT contributes to
contextual beamforming. The importance of localization in implementing
contextual beamforming is also examined. Additionally, the paper delves into
the use of artificial intelligence schemes, including machine learning and deep
learning, in implementing contextual beamforming techniques that leverage user
location information. Based on the comprehensive review, the results suggest
that the combination of MRT and Zero forcing (ZF) techniques, alongside deep
neural networks (DNN) employing Bayesian Optimization (BO), represents the most
promising approach for contextual beamforming. Furthermore, the study discusses
the future potential of programmable switches, such as Tofino, in enabling
location-aware beamforming.
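A minimal sketch of the MRT principle discussed above, assuming a single-user link with a synthetic Rayleigh channel (the antenna count and noise power are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8-antenna base station, single-antenna user: Rayleigh channel.
n_antennas = 8
h = (rng.standard_normal(n_antennas) + 1j * rng.standard_normal(n_antennas)) / np.sqrt(2)

# MRT weights: conjugate of the channel, normalized to unit transmit power.
w_mrt = np.conj(h) / np.linalg.norm(h)

noise_power = 1.0
snr_no_bf = np.abs(h[0]) ** 2 / noise_power        # single-antenna baseline
snr_mrt = np.abs(h @ w_mrt) ** 2 / noise_power     # coherent combining gain

print(f"SNR without beamforming: {10*np.log10(snr_no_bf):.1f} dB")
print(f"SNR with MRT:            {10*np.log10(snr_mrt):.1f} dB")
```

MRT achieves the matched-filter bound, an SNR of ||h||^2 / noise power, which is why it serves as the beamforming baseline in comparisons like the one reported here.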
Fastening the Initial Access in 5G NR Sidelink for 6G V2X Networks
The ever-increasing demand for intelligent, automated, and connected mobility
solutions pushes for the development of an innovative sixth Generation (6G) of
cellular networks. A radical transformation on the physical layer of vehicular
communications is planned, with a paradigm shift towards beam-based millimeter
Waves or sub-Terahertz communications, which require precise beam pointing for
guaranteeing the communication link, especially in high mobility. A key design
aspect is a fast and proactive Initial Access (IA) algorithm to select the
optimal beam to be used. In this work, we investigate alternative IA techniques
to speed up the procedure of the current fifth-generation (5G) standard, targeting an efficient 6G
design. First, we discuss cooperative schemes that rely on position
information. Then, motivated by the intuition of a non-uniform
distribution of the communication directions due to road topology constraints,
we design two Probabilistic Codebook (PCB) techniques of prioritized beams. In
the first one, the PCBs are built leveraging past collected traffic
information, while in the second one, we use the Hough Transform over the
digital map to extract dominant road directions. We also show that the
information coming from the angular probability distribution allows designing
a non-uniform codebook quantization, reducing the performance degradation
compared to a uniform one. Numerical simulations on realistic scenarios show that
PCBs-based beam selection outperforms the 5G standard in terms of the number of
IA trials, with a performance comparable to position-based methods, without
requiring the signaling of sensitive information.
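The prioritized-codebook idea can be sketched as follows, assuming dominant road directions are already available as angle samples (e.g. from past traffic data or a Hough transform over a map); the angles, beam grid, and sample counts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical angular samples concentrated around two road directions (degrees).
angles = np.concatenate([
    rng.normal(30, 5, 700),    # main road
    rng.normal(120, 5, 300),   # crossing road
])

# Empirical angular distribution over a uniform 16-beam grid...
n_beams = 16
edges = np.linspace(0, 180, n_beams + 1)
hist, _ = np.histogram(angles, bins=edges)
prob = hist / hist.sum()

# ...then sweep beams in decreasing probability order instead of sequentially,
# so the beams most likely to succeed are tried first, reducing IA trials.
sweep_order = np.argsort(prob)[::-1]
print("Prioritized beam sweep order:", sweep_order[:4], "...")
```

The same probability vector could also drive the non-uniform codebook quantization mentioned in the abstract, allocating more beams where the angular mass concentrates.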
Millimeter Wave Beamforming Training: A Reinforcement Learning Approach
Beamforming training (BT) is considered an essential process for accomplishing communications in the millimeter wave (mmWave) band, i.e., 30 to 300 GHz. This process aims to find the best transmit/receive antenna beams to compensate for the impairments of the mmWave channel and successfully establish the mmWave link. Typically, the mmWave BT process is highly time-consuming, affecting the overall throughput and energy consumption of the mmWave link establishment. In this paper, a machine learning (ML) approach, specifically reinforcement learning (RL), is utilized to enable the mmWave BT process by modeling it as a multi-armed bandit (MAB) problem with the aim of maximizing the long-term throughput of the constructed mmWave link. Based on this formulation, MAB algorithms such as upper confidence bound (UCB), Thompson sampling (TS), and epsilon-greedy (e-greedy) are utilized to address the problem and accomplish the mmWave BT process. Numerical simulations confirm the superior performance of the proposed MAB approach over the existing mmWave BT techniques.
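A minimal sketch of the MAB formulation with the UCB1 index, using synthetic per-beam throughput means (the beam count, horizon, and reward model are assumptions for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 16 candidate beams with unknown mean throughputs in [0, 1];
# beam 5 is made the best arm. Each pull returns a noisy throughput observation.
n_beams, horizon = 16, 2000
true_means = rng.uniform(0.1, 0.5, n_beams)
true_means[5] = 0.9

counts = np.zeros(n_beams)
est_means = np.zeros(n_beams)

for t in range(1, horizon + 1):
    if t <= n_beams:
        beam = t - 1                     # try every beam once first
    else:
        # UCB1 index: empirical mean plus an exploration bonus that shrinks
        # as a beam accumulates observations.
        ucb = est_means + np.sqrt(2 * np.log(t) / counts)
        beam = int(np.argmax(ucb))
    reward = np.clip(true_means[beam] + 0.1 * rng.standard_normal(), 0, 1)
    counts[beam] += 1
    est_means[beam] += (reward - est_means[beam]) / counts[beam]  # running mean

print("Most-played beam:", int(np.argmax(counts)))
```

Over the horizon the index concentrates plays on the best beam, which is the mechanism that lets BT converge without exhaustively sweeping the codebook each time.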