Downlink Extrapolation for FDD Multiple Antenna Systems Through Neural Network Using Extracted Uplink Path Gains
When base stations (BSs) are deployed with multiple antennas, they need to
have downlink (DL) channel state information (CSI) to optimize DL
transmissions via beamforming. The DL CSI is usually measured at mobile stations
(MSs) through DL training and fed back to the BS in frequency division
duplexing (FDD). The DL training and uplink (UL) feedback may become
infeasible when the coherence time is too short, i.e., when the channel
changes rapidly because of high MS mobility. Without feedback from the MSs, it may be
possible for the BS to directly obtain the DL CSI using the inherent relation
of UL and DL channels even in FDD, which is called DL extrapolation. Although
the exact relation would be highly nonlinear, previous studies have shown that
a neural network (NN) can be used to estimate the DL CSI from the UL CSI at the
BS. Most previous works along this line trained the NN using the
full-dimensional UL and DL channels; however, the training complexity grows
severe as the number of antennas at the BS increases. To reduce the training
complexity and improve DL CSI estimation quality, this paper proposes a novel
DL extrapolation technique with simplified NN input and output. Many
measurement campaigns have shown that the UL and DL channels in FDD still
share common components, such as path delays and angles. The proposed
technique first extracts these common components from the UL and DL channels
and trains the NN using only the path gains, which depend on the frequency
band and have a much smaller dimension than the full UL and DL channels. Extensive
simulation results show that the proposed technique outperforms the
conventional approach, which relies on the full UL and DL channels to train the
NN, regardless of the MS speed.
Comment: accepted for IEEE Access
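As an illustrative sketch of this idea (not the paper's exact pipeline), the NN can be trained on the low-dimensional path gains instead of the full channels; the path count L, the network shape, and the training details below are all assumptions.

    import torch
    import torch.nn as nn

    L = 8  # number of propagation paths shared by UL and DL (assumed)

    # Instead of mapping full UL channels (antennas x subcarriers) to full
    # DL channels, the NN maps only the L complex UL path gains to the L
    # complex DL path gains; the frequency-independent delays and angles
    # extracted from the UL channel are reused directly for the DL band.
    model = nn.Sequential(
        nn.Linear(2 * L, 128),  # real and imaginary parts stacked
        nn.ReLU(),
        nn.Linear(128, 128),
        nn.ReLU(),
        nn.Linear(128, 2 * L),
    )
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # ul_gains, dl_gains: (batch, 2L) tensors of path gains, assumed to
    # come from a separate parameter-extraction stage.
    def train_step(ul_gains, dl_gains):
        opt.zero_grad()
        loss = loss_fn(model(ul_gains), dl_gains)
        loss.backward()
        opt.step()
        return loss.item()

The input and output dimension here is 2L rather than twice the product of the antenna and subcarrier counts, which is where the reduction in training complexity comes from.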
Transformer-Empowered 6G Intelligent Networks: From Massive MIMO Processing to Semantic Communication
It is anticipated that 6G wireless networks will accelerate the convergence
of the physical and cyber worlds and enable a paradigm shift in the way we
deploy and exploit communication networks. Machine learning, in particular deep
learning (DL), is expected to be one of the key technological enablers of 6G by
offering a new paradigm for the design and optimization of networks with a high
level of intelligence. In this article, we introduce an emerging DL
architecture, known as the transformer, and discuss its potential impact on 6G
network design. We first discuss the differences between the transformer and
classical DL architectures, and emphasize the transformer's self-attention
mechanism and strong representation capabilities, which make it particularly
appealing for tackling various challenges in wireless network design.
Specifically, we propose transformer-based solutions for various massive
multiple-input multiple-output (MIMO) and semantic communication problems, and
show their superiority over other architectures. Finally, we discuss key
challenges and open issues in transformer-based solutions, and identify future
research directions for their deployment in intelligent 6G networks.
Comment: 9 pages, 6 figures. The current version has been accepted by IEEE Wireless Communications Magazine
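As a generic illustration of the self-attention mechanism mentioned above (not the article's specific architecture), scaled dot-product attention can be written in a few lines; the token count and model width are assumptions.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)  # for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention over a sequence X of shape (n, d).

        Every element attends to every other element, which is what lets a
        transformer capture long-range dependencies (e.g., across antennas
        or subcarriers) in a single layer.
        """
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n, n) pairwise similarities
        return softmax(scores, axis=-1) @ V      # attention-weighted values

    # Toy usage with assumed sizes: 16 tokens of width 32.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((16, 32))
    Wq, Wk, Wv = (rng.standard_normal((32, 32)) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)  # shape (16, 32)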
A Learnable Optimization and Regularization Approach to Massive MIMO CSI Feedback
Channel state information (CSI) plays a critical role in achieving the
potential benefits of massive multiple input multiple output (MIMO) systems. In
frequency division duplex (FDD) massive MIMO systems, the base station (BS)
relies on sustained and accurate CSI feedback from the users. However, due to
the large number of antennas and users being served in massive MIMO systems,
feedback overhead can become a bottleneck. In this paper, we propose a
model-driven deep learning method for CSI feedback, called learnable
optimization and regularization algorithm (LORA). Instead of using the l1-norm as
the regularization term, a learnable regularization module is introduced in
LORA to automatically adapt to the characteristics of CSI. We unfold the
conventional iterative shrinkage-thresholding algorithm (ISTA) into a neural
network and learn both the optimization process and regularization term by
end-to-end training. We show that LORA improves the CSI feedback accuracy and
speed. Besides, a novel learnable quantization method and the corresponding
training scheme are proposed, and it is shown that LORA can operate
successfully at different bit rates, providing flexibility in terms of the CSI
feedback overhead. Various realistic scenarios are considered to demonstrate
the effectiveness and robustness of LORA through numerical simulations.
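To make the unfolding idea concrete, below is a minimal sketch of one ISTA-style layer with a learnable step size and shrinkage threshold, a simplified instance of learning the regularizer end-to-end in the spirit of (but not identical to) LORA; the measurement operator A, the layer count, and the initial parameter values are assumptions.

    import torch
    import torch.nn as nn

    class ISTALayer(nn.Module):
        """One unfolded ISTA iteration with learnable parameters.

        Classical ISTA for min_x ||y - x A^T||^2 + lambda ||x||_1 iterates
            x <- soft(x + step * (y - x A^T) A, threshold)
        with a fixed step and threshold; unfolding makes both trainable,
        one pair per layer, so the effective regularizer is learned from
        data instead of being fixed to the l1-norm.
        """
        def __init__(self):
            super().__init__()
            self.step = nn.Parameter(torch.tensor(0.1))
            self.theta = nn.Parameter(torch.tensor(0.01))

        def forward(self, x, y, A):
            r = x + self.step * (y - x @ A.T) @ A  # gradient step on the fit term
            return torch.sign(r) * torch.clamp(r.abs() - self.theta, min=0.0)

    class UnfoldedISTA(nn.Module):
        def __init__(self, num_layers=8):  # layer count is an assumption
            super().__init__()
            self.layers = nn.ModuleList(ISTALayer() for _ in range(num_layers))

        def forward(self, y, A):
            # y: (batch, m) compressed feedback, A: (m, n) measurement matrix
            x = torch.zeros(y.shape[0], A.shape[1])
            for layer in self.layers:
                x = layer(x, y, A)
            return x

Training all layers end-to-end on CSI data replaces the hand-picked l1 proximal step with a learned one; a learnable quantizer for the feedback bits would be trained jointly in the same fashion.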