78 research outputs found
A Kronecker-Based Sparse Compressive Sensing Matrix for Millimeter Wave Beam Alignment
Millimeter wave beam alignment (BA) is a challenging problem, especially for a
large number of antennas. Compressed sensing (CS) tools have been exploited due
to the sparse nature of such channels. This paper presents a novel
deterministic CS approach for BA. Our proposed sensing matrix, which has a
Kronecker-based structure, is sparse and therefore computationally
efficient. We show that our sensing matrix satisfies the restricted
isometry property (RIP) condition, which guarantees the reconstruction of the
sparse vector. Our approach outperforms existing random beamforming techniques
in practical low signal-to-noise ratio (SNR) scenarios.
Comment: Accepted to the 13th International Conference on Signal Processing and Communication Systems (ICSPCS'2019).
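The Kronecker structure is what makes such a sensing matrix cheap to apply: a measurement y = (A1 ⊗ A2)x never requires forming the full matrix. A minimal numpy sketch of this idea (the sparse factor matrices below are illustrative random placeholders, not the paper's deterministic construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse factor matrices (random placeholders, not the
# paper's deterministic construction).
m1, n1, m2, n2 = 4, 8, 4, 8
A1 = np.where(rng.random((m1, n1)) < 0.25,
              rng.choice([-1.0, 1.0], size=(m1, n1)), 0.0)
A2 = np.where(rng.random((m2, n2)) < 0.25,
              rng.choice([-1.0, 1.0], size=(m2, n2)), 0.0)

# Kronecker-structured sensing matrix: sparse by construction.
A = np.kron(A1, A2)                      # shape (m1*m2, n1*n2)

# Sparse "beamspace" vector with a few dominant entries.
x = np.zeros(n1 * n2)
x[[3, 40, 57]] = [1.0, -0.5, 0.8]

# Direct measurement vs. fast application via the identity
# (A1 kron A2) vec(X) = vec(A2 @ X @ A1.T)  (column-major vec).
y_direct = A @ x
X = x.reshape(n1, n2).T                  # so that vec(X) == x
y_fast = (A2 @ X @ A1.T).T.reshape(-1)

assert np.allclose(y_direct, y_fast)
```

The reshape identity reduces the cost of one measurement from O(m1·m2·n1·n2) to two small matrix products, on top of the savings from sparsity itself.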
Array Architectures and Physical Layer Design for Millimeter-Wave Communications Beyond 5G
Ever-increasing demands in mobile data rates have resulted in the exploration of millimeter-wave (mmW) frequencies for the next generation (5G) of wireless networks. Communication at mmW frequencies presents two key challenges. Firstly, high propagation loss requires base stations (BSs) and user equipment (UEs) to use a large number of antennas and narrow beams to close the link with sufficient received signal power. Consequently, communication using narrow beams creates new challenges in channel estimation and link establishment based on fine angular probing. Current mmW systems use analog phased arrays that can probe only one angle at a time, which results in high latency during link establishment and channel tracking. It is desirable to design low-latency beam training by exploring both physical layer designs and array architectures that could replace current 5G approaches and pave the way to communications in the higher mmW band and sub-THz region, where larger antenna arrays and communication bandwidths can be exploited. To this end, we propose novel signal processing techniques exploiting unique properties of the mmW channel, and show theoretically, in simulation, and in experiments their advantages over conventional approaches. Secondly, we explore different array architecture designs and analyze their trade-offs among spectral efficiency, power consumption, and area. For a comprehensive comparison, we have developed a methodology for the optimal design of system parameters for different array architecture candidates based on a spectral efficiency target, and use these parameters to estimate array area and power consumption based on circuits reported in the literature.
We show that the hybrid analog and digital architectures have severe scalability concerns in radio frequency signal distribution as array size and spatial multiplexing levels increase, while fully digital array architectures have the best performance and power/area trade-offs. The developed approaches are based on cross-disciplinary research that combines innovation in model-based signal processing, machine learning, and radio hardware. This work is the first to apply compressive sensing (CS), a signal processing tool that exploits the sparsity of the mmW channel model, to accelerate beam training in mmW cellular systems. The algorithm is designed to address practical issues, including the requirements of cell discovery and synchronization, which involve estimating the angular channel together with the carrier frequency and timing offsets. We have analyzed the algorithm's performance in a 5G-compliant simulation and showed that an order-of-magnitude saving in initial access latency is achieved for the desired channel estimation accuracy. Moreover, we are the first to develop and implement a neural-network-assisted compressive beam alignment that deals with hardware impairments in mmW radios. We have used a 60 GHz mmW testbed to perform experiments and show that the neural network approach enhances the alignment rate compared to CS. To further accelerate beam training, we proposed novel frequency-selective probing beams using the true-time-delay (TTD) analog array architecture. Our approach uses different subcarriers to scan different directions and achieves single-shot beam alignment, the fastest approach reported to date. Our comprehensive analysis of different array architectures and exploration of emerging architectures enabled us to develop order-of-magnitude faster and more energy-efficient approaches for initial access and channel estimation in mmW systems.
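The frequency-selective probing idea can be illustrated numerically: with per-element true-time delays, the beamforming phase grows linearly with frequency, so different OFDM subcarriers steer toward different angles in a single shot. A toy sketch (array size, delay increment, and subcarrier offsets are illustrative assumptions, not the dissertation's parameters):

```python
import numpy as np

def ttd_beam_angles(fc=60e9, n_elem=16, dtau=1e-9,
                    offsets_hz=(-2e8, -1e8, 0.0, 1e8, 2e8)):
    """Beam direction (degrees) per subcarrier for a TTD ULA with
    per-element delay n*dtau; all parameter values are illustrative."""
    c = 3e8
    d = c / fc / 2                       # half-wavelength spacing at the carrier
    n = np.arange(n_elem)
    grid = np.linspace(-np.pi / 2, np.pi / 2, 1441)
    beams = []
    for off in offsets_hz:
        f = fc + off
        # TTD weights: the phase 2*pi*f*n*dtau grows with frequency, unlike a
        # phase-shifter array, so each subcarrier steers differently.
        w = np.exp(-1j * 2 * np.pi * f * n * dtau)
        # Array response over a grid of candidate angles at this subcarrier.
        a = np.exp(1j * 2 * np.pi * f * d / c * np.sin(grid)[:, None] * n)
        beams.append(float(np.degrees(grid[np.abs(a @ w).argmax()])))
    return beams

print(ttd_beam_angles())  # a distinct beam angle per subcarrier, swept in one shot
```

Because the delay increment is large relative to one carrier period, the modulo-2π phase differs across subcarriers, which is what spreads the beams; a conventional phase-shifter array would point all subcarriers in (nearly) the same direction.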
Fast channel estimation in the transformed spatial domain for analog millimeter wave systems
Fast channel estimation in millimeter-wave (mmWave) systems is a fundamental enabler of high-gain beamforming, which boosts coverage and capacity. The channel estimation stage typically involves an initial beam training process in which a subset of the possible beam directions at the transmitter and receiver is scanned along a predefined codebook. Unfortunately, the high number of transmit and receive antennas deployed in mmWave systems increases the complexity of the beam selection and channel estimation tasks. In this work, we tackle the channel estimation problem in analog systems from a different perspective than previous works. In particular, we propose to move the channel estimation problem from the angular domain into the transformed spatial domain, in which estimating the angles of arrival and departure corresponds to estimating the angular frequencies of the paths constituting the mmWave channel. The proposed approach, referred to as the transformed spatial domain channel estimation (TSDCE) algorithm, is robust to additive white Gaussian noise, combining low-rank approximations and sample autocorrelation functions for each path in the transformed spatial domain. Numerical results evaluate the mean square error of the channel estimate and the direction-of-arrival estimation capability. TSDCE significantly reduces the former while exhibiting remarkably low computational complexity compared with well-known benchmark schemes.
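The core reduction in TSDCE — angles become angular frequencies in the transformed spatial domain — can be seen in a single-path toy example. The lag-1 sample autocorrelation estimator below is a simplified stand-in for the paper's combination of low-rank approximation and per-path autocorrelation, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                   # ULA elements (illustrative)
theta_true = np.radians(25.0)            # angle of departure of a single path
omega = np.pi * np.sin(theta_true)       # spatial frequency, half-wavelength spacing
n = np.arange(N)

snr_lin = 100.0                          # 20 dB
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(0.5 / snr_lin)
y = np.exp(1j * omega * n) + noise       # noisy array response across the aperture

# Lag-1 sample autocorrelation: its phase estimates omega, and the white-noise
# cross terms average out over the aperture.
r1 = np.mean(y[1:] * np.conj(y[:-1]))
omega_hat = np.angle(r1)
theta_hat = np.degrees(np.arcsin(omega_hat / np.pi))

print(f"estimated AoD: {theta_hat:.2f} deg (true 25.00 deg)")
```

Estimating one scalar frequency per path is what keeps the complexity low relative to scanning a full angular codebook.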
Contextual Beamforming: Exploiting Location and AI for Enhanced Wireless Telecommunication Performance
The pervasive nature of wireless telecommunication has made it the foundation
for mainstream technologies like automation, smart vehicles, virtual reality,
and unmanned aerial vehicles. As these technologies experience widespread
adoption in our daily lives, ensuring the reliable performance of cellular
networks in mobile scenarios has become a paramount challenge. Beamforming, an
integral component of modern mobile networks, enables spatial selectivity and
improves network quality. However, many beamforming techniques are iterative,
introducing unwanted latency to the system. In recent times, there has been a
growing interest in leveraging mobile users' location information to expedite
beamforming processes. This paper explores the concept of contextual
beamforming, discussing its advantages, disadvantages and implications.
Notably, the study presents an impressive 53% improvement in signal-to-noise
ratio (SNR) by implementing the adaptive beamforming (MRT) algorithm compared
to scenarios without beamforming. It further elucidates how MRT contributes to
contextual beamforming. The importance of localization in implementing
contextual beamforming is also examined. Additionally, the paper delves into
the use of artificial intelligence schemes, including machine learning and deep
learning, in implementing contextual beamforming techniques that leverage user
location information. Based on the comprehensive review, the results suggest
that the combination of MRT and zero-forcing (ZF) techniques, alongside deep
neural networks (DNNs) employing Bayesian Optimization (BO), represents the most
promising approach for contextual beamforming. Furthermore, the study discusses
the future potential of programmable switches, such as Tofino, in enabling
location-aware beamforming.
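The MRT scheme credited above with the SNR improvement admits a compact illustration: the transmit weights are the conjugate channel, normalized to unit power, which maximizes receive SNR for a known channel. A minimal sketch (antenna count and channel model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8                                     # transmit antennas (illustrative)
# Rayleigh channel to a single-antenna user.
h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

# MRT: conjugate-matched weights with unit transmit power.
w_mrt = h.conj() / np.linalg.norm(h)
snr_gain_mrt = np.abs(h @ w_mrt) ** 2     # equals ||h||^2, the maximum possible
snr_gain_single = np.abs(h[0]) ** 2       # single antenna, no beamforming

print(f"MRT gain over no beamforming: "
      f"{10 * np.log10(snr_gain_mrt / snr_gain_single):.1f} dB")
```

ZF, the other technique highlighted in the review, instead inverts the multi-user channel to null inter-user interference, trading some per-user SNR for interference suppression.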
Modeling and analyzing the evolution of cellular networks using stochastic geometry
The increasing complexity of cellular networks due to their continuous evolution has made conventional system-level simulations time-consuming and cost-prohibitive. By modeling base station (BS) and user locations as spatial point processes, stochastic geometry has recently been recognized as a tractable and efficient analytical tool to quantify key performance metrics. The goal of this dissertation is to leverage stochastic geometry to develop an accurate spatial point process model for the conventional homogeneous macro cellular network, and to address the design and analysis challenges of emerging cellular networks that will explore new spectrum for cellular communications. First, this dissertation proposes to use repulsive determinantal point processes (DPPs) as an accurate model for macro BS locations in a cellular network. Based on three unique computational properties of DPPs, exact expressions for several fundamental performance metrics of cellular networks with DPP-configured BSs are analytically derived and numerically evaluated. Using hypothesis testing for various performance metrics of interest, the DPPs are validated to be more accurate than the Poisson point process (PPP) or the deterministic grid model. The focus of this dissertation then shifts to emerging networks that exploit new spectrum for cellular communications. One promising option is to allow the centrally scheduled cellular system to also access the unlicensed spectrum, wherein a carrier sensing multiple access with collision avoidance (CSMA/CA) protocol is usually used, as in Wi-Fi. A stochastic geometry-based analytical framework is developed to characterize the performance metrics for neighboring Wi-Fi and cellular networks under various coexistence mechanisms. In order to guarantee fair coexistence with Wi-Fi, it is shown that the cellular network needs to adopt either a discontinuous transmission pattern or its own CSMA/CA-like mechanism.
Next, this dissertation considers cellular networks operating in the millimeter wave (mmWave) band, where directional beamforming is required to establish viable connections. A major design challenge is therefore to learn the necessary beamforming directions through the procedures that establish the initial connection between the mobile user and the network. These procedures are referred to as initial access, wherein cell search on the downlink and random access on the uplink are the two major steps. Stochastic geometry is again utilized to develop a unified analytical framework for three directional initial access protocols under a high-mobility scenario where users and random blockers move at high speed. The expected delay for a user to succeed in initial access, and the average user-perceived downlink throughput accounting for the initial access overhead, are derived for all three protocols. In particular, the protocol with low beam-sweeping overhead during cell search is found to achieve a good trade-off between initial access delay and user-perceived throughput. Finally, in contrast to the high-mobility scenario for initial access, the directional cell search delay in a slowly mobile network is analyzed. Specifically, the BS and user locations are fixed for long periods of time, and therefore a strong temporal correlation in SINR is experienced. A closed-form expression for the expected cell search delay is derived, indicating that the expected cell search delay is infinite for noise-limited networks (e.g., mmWave) whenever the non-line-of-sight path loss exponent is larger than 2. By contrast, the expected cell search delay for interference-limited networks is proved to be infinite when the number of beams to search at the BS is smaller than a certain threshold, and finite otherwise.
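The kind of stochastic-geometry experiment described above can be sketched with the tractable PPP baseline that the DPP model is compared against: drop BSs as a homogeneous Poisson point process, attach a user at the origin to its nearest BS, and estimate SINR coverage by Monte Carlo (density, path loss exponent, and simulation window below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def coverage_probability(lam=1e-5, alpha=4.0, sinr_thresh_db=0.0,
                         radius=5e3, trials=2000):
    """Monte Carlo SINR coverage for a PPP of BSs: nearest-BS association,
    Rayleigh fading, interference-limited (thermal noise neglected)."""
    thresh = 10.0 ** (sinr_thresh_db / 10.0)
    hits = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)   # BS count in the disk
        if n == 0:
            continue
        r = radius * np.sqrt(rng.random(n))          # uniform radii => PPP in a disk
        power = rng.exponential(size=n) * r ** (-alpha)  # fading x path loss
        serve = r.argmin()                           # associate with nearest BS
        interference = power.sum() - power[serve]
        if power[serve] > thresh * interference:
            hits += 1
    return hits / trials

# For alpha = 4, no noise, and a 0 dB threshold, the known closed form gives
# P(SINR > 1) = 1 / (1 + pi/4) ~ 0.56; the estimate should land nearby.
print(coverage_probability())
```

The same Monte Carlo scaffold, with the PPP draw replaced by a repulsive DPP sample, is what allows the model-accuracy comparisons via hypothesis testing described above.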