Multiuser MIMO-OFDM for Next-Generation Wireless Systems
This overview portrays the 40-year evolution of orthogonal frequency division multiplexing (OFDM) research. The combination of powerful multicarrier OFDM arrangements with multiple-input multiple-output (MIMO) systems has numerous benefits, which are detailed in this treatise. We continue by highlighting the limitations of conventional detection and channel estimation techniques designed for multiuser MIMO-OFDM systems in the so-called rank-deficient scenarios, where the number of users supported or the number of transmit antennas employed exceeds the number of receiver antennas. This is often encountered in practice, unless we limit the number of users granted access in the base station’s or radio port’s coverage area. Following a historical perspective on the associated design problems and their state-of-the-art solutions, the second half of this treatise details a range of classic multiuser detectors (MUDs) designed for MIMO-OFDM systems and characterizes their achievable performance. A further section aims to identify novel cutting-edge genetic algorithm (GA)-aided detector solutions, which have found numerous applications in wireless communications in recent years. In an effort to stimulate the cross-pollination of ideas across the machine learning, optimization, signal processing, and wireless communications research communities, we review the broadly applicable principles of various GA-assisted optimization techniques, which were recently proposed also for employment in multiuser MIMO-OFDM. In order to stimulate new research, we demonstrate that the family of GA-aided MUDs is capable of achieving near-optimum performance at a significantly lower computational complexity than that imposed by the optimum maximum-likelihood (ML) MUD. The paper is concluded by outlining a range of future research options that may find their way into next-generation wireless systems.
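The GA-aided MUD concept can be illustrated with a toy example. The sketch below (the array sizes, BPSK alphabet, and all GA hyperparameters are illustrative assumptions, not taken from the treatise) evolves a population of candidate symbol vectors toward the maximum-likelihood metric ||y − Hs||² in a rank-deficient setting with more users than receive antennas:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed rank-deficient setup: 4 single-antenna users, 2 receive antennas, BPSK.
U, R = 4, 2
H = (rng.normal(size=(R, U)) + 1j * rng.normal(size=(R, U))) / np.sqrt(2)
s_true = rng.choice([-1.0, 1.0], size=U)
y = H @ s_true + 0.1 * (rng.normal(size=R) + 1j * rng.normal(size=R))

def ml_metric(s):
    """ML decision metric: squared Euclidean distance between y and H s."""
    return np.linalg.norm(y - H @ s) ** 2

def ga_mud(pop_size=32, generations=40, p_mut=0.1):
    """Toy GA-aided MUD: evolve BPSK symbol vectors toward the ML metric."""
    pop = rng.choice([-1.0, 1.0], size=(pop_size, U))
    for _ in range(generations):
        fitness = np.array([ml_metric(ind) for ind in pop])
        order = np.argsort(fitness)
        elite = pop[order[: pop_size // 2]]            # selection: keep best half
        # Uniform crossover between randomly paired elite parents.
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        mask = rng.random((pop_size, U)) < 0.5
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        flip = rng.random((pop_size, U)) < p_mut       # mutation: flip symbols
        pop = np.where(flip, -pop, pop)
        pop[0] = elite[0]                              # elitism: keep best so far
    fitness = np.array([ml_metric(ind) for ind in pop])
    return pop[np.argmin(fitness)]

s_hat = ga_mud()
```

With only 2^4 = 16 candidate vectors the GA trivially reaches the ML solution here; the complexity advantage over exhaustive ML search only materializes at the realistic user counts discussed in the treatise.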
Deep Learning for Frame Error Probability Prediction in BICM-OFDM Systems
In the context of wireless communications, we propose a deep learning
approach to learn the mapping from the instantaneous state of a frequency
selective fading channel to the corresponding frame error probability (FEP) for
an arbitrary set of transmission parameters. We propose an abstract model of a
bit interleaved coded modulation (BICM) orthogonal frequency division
multiplexing (OFDM) link chain and show that the maximum likelihood (ML)
estimator of the model parameters estimates the true FEP distribution. Further,
we exploit deep neural networks as a general purpose tool to implement our
model and propose a training scheme for which, even while training with the
binary frame error events (i.e., ACKs / NACKs), the network outputs converge to
the FEP conditioned on the input channel state. We provide simulation results
that demonstrate gains in the FEP prediction accuracy with our approach as
compared to the traditional exponential effective SIR mapping (EESM) approach
for a range of channel code rates, and show that these gains can be exploited
to increase the link throughput.
Comment: Submitted to 2018 IEEE International Conference on Acoustics, Speech and Signal Processing
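The key training idea in this abstract, that a model fitted with binary cross-entropy on ACK/NACK labels converges to the conditional frame error probability, can be sketched in miniature. Here the scalar channel feature, the "true" FEP curve, and the one-layer logistic model are hypothetical stand-ins for the paper's deep network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed ground truth for illustration: FEP as a sigmoid of a scalar "SNR" x.
def true_fep(x):
    return 1.0 / (1.0 + np.exp(-(2.0 - x)))

# Binary frame-error events (ACK = 0, NACK = 1) drawn from the true FEP.
x = rng.uniform(0.0, 4.0, size=20000)
labels = (rng.random(x.shape) < true_fep(x)).astype(float)

# Logistic model trained with binary cross-entropy: the BCE gradient pulls
# the output toward E[label | x], i.e. toward the FEP conditioned on x.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    grad = p - labels                    # dBCE/dlogit
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

# The fitted model predicts a probability, not a 0/1 decision.
pred = 1.0 / (1.0 + np.exp(-(w * 1.0 + b)))
```

Even though every training label is binary, the fitted output approaches the underlying error probability, which is the property the paper's training scheme relies on.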
Deep Learning Based on Orthogonal Approximate Message Passing for CP-Free OFDM
Channel estimation and signal detection are very challenging for an
orthogonal frequency division multiplexing (OFDM) system without cyclic prefix
(CP). In this article, deep learning based on orthogonal approximate message
passing (DL-OAMP) is used to address these problems. The DL-OAMP receiver
includes a channel estimation neural network (CE-Net) and a signal detection
neural network based on OAMP, called OAMP-Net. The CE-Net is initialized by the
least-squares channel estimation algorithm and refined by a minimum mean-squared
error (MMSE) neural network. The OAMP-Net is established by unfolding the
iterative OAMP algorithm and adding some trainable parameters to improve the
detection performance. The DL-OAMP receiver has low complexity and can
estimate time-varying channels with only a single training. Simulation results
demonstrate that the bit-error rate (BER) of the proposed scheme is lower than
that of competitive algorithms for high-order modulation.
Comment: 5 pages, 4 figures, updated manuscript, International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019). arXiv admin note: substantial text overlap with arXiv:1903.0476
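The deep-unfolding idea behind OAMP-Net can be illustrated with a much simpler stand-in: a fixed number of iterations treated as network "layers", each carrying its own trainable parameter of the kind such networks learn from data. The model sizes, the plain gradient step, and the tanh denoiser below are illustrative assumptions, not the paper's OAMP recursion:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model y = H x + n (sizes assumed for illustration).
N = 8
H = rng.normal(size=(N, N)) / np.sqrt(N)
x_true = rng.choice([-1.0, 1.0], size=N)
y = H @ x_true + 0.05 * rng.normal(size=N)

def unfolded_detector(y, H, thetas):
    """Unfold len(thetas) iterations into 'layers'; each theta is the kind of
    per-layer trainable scalar an OAMP-Net-style network learns from data."""
    x = np.zeros(H.shape[1])
    for theta in thetas:                  # one layer per iteration
        x = x + theta * H.T @ (y - H @ x) # gradient step on ||y - Hx||^2
        x = np.tanh(2.0 * x)              # soft symbol estimate (denoiser stand-in)
    return x

# Untrained initialization: every layer uses the same step size.
thetas = [0.5] * 10
x_hat = unfolded_detector(y, H, thetas)
```

Training would replace the shared 0.5 with per-layer values learned by backpropagation through this fixed computation graph, which is what distinguishes an unfolded network from the plain iterative algorithm.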
Performance Improvement of Neural Network Based RLS Channel Estimators in MIMO-OFDM Systems
The objective of this study was to introduce a recursive least squares (RLS) parameter estimator enhanced by a neural network (NN) to reduce the bit error rate (BER) during channel estimation of a multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system over a Rayleigh multipath fading channel. Recursive least squares is an efficient approach to neural network training: first, the neural network estimator learns to adapt to the channel variations, then it estimates the channel frequency response. Simulation results show that the proposed method outperforms the conventional least squares (LS) and original RLS methods and is more robust at high mobile speeds.
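For reference, the standard RLS recursion that the NN-enhanced estimator builds on can be sketched as follows (the tap count, pilot design, forgetting factor, and noise level are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate channel taps w from pilot symbols (sizes assumed for illustration).
n_taps, n_pilots = 4, 200
w_true = rng.normal(size=n_taps)
X = rng.choice([-1.0, 1.0], size=(n_pilots, n_taps))  # pilot regressors
d = X @ w_true + 0.01 * rng.normal(size=n_pilots)     # received pilot observations

lam = 0.99                   # forgetting factor, allows tracking time variation
w = np.zeros(n_taps)         # running channel estimate
P = np.eye(n_taps) * 1e3     # inverse correlation matrix (large initial value)

for x, target in zip(X, d):
    # Standard RLS recursion: gain vector, a-priori error, then updates.
    k = P @ x / (lam + x @ P @ x)
    e = target - w @ x
    w = w + k * e
    P = (P - np.outer(k, x @ P)) / lam
```

The forgetting factor below 1 discounts old pilots, which is what lets RLS follow a time-varying Rayleigh channel; the paper's contribution layers an NN on top of this recursion rather than replacing it.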
Single-Frequency Network Terrestrial Broadcasting with 5GNR Numerology
The abstract is in the attachment.