A Tutorial on Extremely Large-Scale MIMO for 6G: Fundamentals, Signal Processing, and Applications
Extremely large-scale multiple-input-multiple-output (XL-MIMO), which offers
vast spatial degrees of freedom, has emerged as a potentially pivotal enabling
technology for the sixth generation (6G) of wireless mobile networks. With its
growing significance, both opportunities and challenges are concurrently
manifesting. This paper presents a comprehensive survey of research on XL-MIMO
wireless systems. In particular, we introduce four XL-MIMO hardware
architectures: uniform linear array (ULA)-based XL-MIMO, uniform planar array
(UPA)-based XL-MIMO utilizing either patch antennas or point antennas, and
continuous aperture (CAP)-based XL-MIMO. We comprehensively analyze and discuss
their characteristics and interrelationships. Following this, we examine exact
and approximate near-field channel models for XL-MIMO. Given the distinct
electromagnetic properties of near-field communications, we present a range of
channel models to demonstrate the benefits of XL-MIMO. We further motivate and
discuss low-complexity signal processing schemes to promote the practical
implementation of XL-MIMO. Furthermore, we explore the interplay between
XL-MIMO and other emergent 6G technologies. Finally, we outline several
compelling research directions for future XL-MIMO wireless communication
systems.

Comment: 38 pages, 10 figures
Low-Rank Channel Estimation for Millimeter Wave and Terahertz Hybrid MIMO Systems
Massive multiple-input multiple-output (MIMO) is one of the fundamental technologies for 5G and beyond. The increased number of antenna elements at both the transmitter and the receiver translates into a large-dimension channel matrix. In addition, the power requirements for the massive MIMO systems are high, especially when fully digital transceivers are deployed. To address this challenge, hybrid analog-digital transceivers are considered a viable alternative. However, for hybrid systems, the number of observations during each channel use is reduced. The high dimensions of the channel matrix and the reduced number of observations make the channel estimation task challenging. Thus, channel estimation may require increased training overhead and higher computational complexity.
The need for high data rates is increasing rapidly, forcing a shift of wireless communication towards higher frequency bands such as millimeter wave (mmWave) and terahertz (THz). The wireless channel at these bands comprises only a few dominant paths. This makes the channel sparse in the angular domain, and the resulting channel matrix has a low rank. This thesis aims to provide channel estimation solutions that benefit from the low-rank and sparse nature of the channel. The motivation behind this thesis is to offer a favorable trade-off between training overhead and computational complexity while providing an accurate estimate of the channel.
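To illustrate the low-rank structure this line of work exploits, the sketch below (not the thesis's actual estimator) builds a geometric mmWave channel with a handful of paths, confirms its numerical rank equals the number of paths, and shows that truncating the SVD of a noisy observation to that rank reduces the estimation error. All dimensions, path counts, and the noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mmWave geometric channel with a few dominant paths.
Nt, Nr, L = 64, 32, 3   # transmit antennas, receive antennas, paths

def ula_response(N, theta):
    # Half-wavelength ULA steering vector.
    return np.exp(-1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

# H = sum over paths of gain * a_r(theta_r) a_t(theta_t)^H  ->  rank <= L.
H = np.zeros((Nr, Nt), dtype=complex)
for _ in range(L):
    gain = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    theta_t, theta_r = rng.uniform(-np.pi / 2, np.pi / 2, size=2)
    H += np.sqrt(Nt * Nr / L) * gain * np.outer(
        ula_response(Nr, theta_r), np.conj(ula_response(Nt, theta_t)))

# Singular values collapse after the L-th: the channel matrix is low rank.
sv = np.linalg.svd(H, compute_uv=False)
numerical_rank = int(np.sum(sv > 1e-10 * sv[0]))
print("numerical rank:", numerical_rank)

# Low-rank denoising: truncate the SVD of a noisy observation to rank L.
Y = H + 0.05 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
H_hat = (U[:, :L] * s[:L]) @ Vh[:L]

err_raw = np.linalg.norm(Y - H) / np.linalg.norm(H)
err_lr = np.linalg.norm(H_hat - H) / np.linalg.norm(H)
print(f"relative error: raw {err_raw:.3f} -> rank-{L} truncated {err_lr:.3f}")
```

The rank-L truncation keeps only the L-dimensional path subspace, discarding the noise energy outside it; hybrid systems complicate matters further by observing the channel only through analog combining, but the same low-rank prior remains the key structural assumption.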
Machine Learning for Metasurfaces Design and Their Applications
Metasurfaces (MTSs) are increasingly emerging as enabling technologies to
meet the demands for multi-functional, small form-factor, efficient,
reconfigurable, tunable, and low-cost radio-frequency (RF) components because
of their ability to manipulate waves in a sub-wavelength thickness through
modified boundary conditions. They enable the design of reconfigurable
intelligent surfaces (RISs) for adaptable wireless channels and smart radio
environments, wherein the inherently stochastic nature of the wireless
environment is transformed into a programmable propagation channel. In
particular, space-limited RF applications, such as communications and radar,
that have strict radiation requirements are currently being investigated for
potential RIS deployment. The RIS comprises sub-wavelength units or meta-atoms,
which are independently controlled and whose geometry and material determine
the spectral response of the RIS. Conventionally, designing a RIS to yield the
desired electromagnetic (EM) response requires trial and error, iteratively
exploring a large space of candidate geometries and materials through thousands
of full-wave EM simulations. In this context, machine/deep learning (ML/DL)
techniques are proving critical in reducing the computational cost and time of
RIS inverse design. Instead of explicitly solving Maxwell's equations, DL
models learn physics-based relationships through supervised training data. The
ML/DL techniques also aid in RIS deployment for numerous wireless applications,
which requires dealing with multiple channel links between the base station
(BS) and the users. As a result, the BS and RIS beamformers require a joint
design, wherein the RIS elements must be rapidly reconfigured. This chapter
provides a synopsis of DL techniques for both inverse RIS design and
RIS-assisted wireless systems.

Comment: Book chapter, 70 pages, 12 figures, 2 tables. arXiv admin note:
substantial text overlap with arXiv:2101.09131, arXiv:2009.0254
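A minimal sketch of the surrogate-modeling idea behind DL-based inverse design: a small neural network is trained on simulated geometry/response pairs so it can replace the expensive solver in the design loop. Here a toy closed-form function stands in for the full-wave EM simulator, and the three "geometry parameters" are hypothetical; a real pipeline would use solver outputs and a deeper network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for full-wave EM simulation data: each meta-atom is described
# by 3 hypothetical geometry parameters, and the smooth function below plays
# the role of the simulated spectral response (it is NOT a real EM solver).
def toy_em_response(g):
    return np.sin(g @ np.array([2.0, -1.0, 0.5])) + 0.3 * (g ** 2).sum(axis=1)

X = rng.uniform(-1, 1, size=(2000, 3))    # sampled meta-atom geometries
y = toy_em_response(X)                    # "simulated" responses

# One-hidden-layer MLP surrogate trained by full-batch gradient descent.
H = 32
W1 = rng.standard_normal((3, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal(H) * 0.5
b2 = 0.0
lr, n = 0.1, len(X)

for _ in range(3000):
    z = np.tanh(X @ W1 + b1)              # hidden activations
    err = z @ W2 + b2 - y                 # prediction error
    dz = np.outer(err, W2) * (1 - z ** 2) # backprop through tanh
    W2 -= lr * (z.T @ err) / n
    b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dz) / n
    b1 -= lr * dz.mean(axis=0)

# Once trained, the surrogate evaluates candidate geometries in microseconds,
# replacing thousands of full-wave simulations in the inverse-design loop.
X_test = rng.uniform(-1, 1, size=(500, 3))
y_test = toy_em_response(X_test)
pred = np.tanh(X_test @ W1 + b1) @ W2 + b2
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"surrogate RMSE: {rmse:.3f} (response std: {y_test.std():.3f})")
```

The design choice this sketch highlights is the trade: a one-time training cost against cheap repeated evaluation, which is exactly what makes ML/DL attractive when each full-wave simulation is expensive.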