A Reduced Complexity Ungerboeck Receiver for Quantized Wideband Massive SC-MIMO
Employing low-resolution analog-to-digital converters in massive
multiple-input multiple-output (MIMO) systems has many advantages in terms of
total power consumption, cost, and feasibility. However, such
advantages come together with significant challenges in channel estimation and
data detection due to the severe quantization noise present. In this study, we
propose a novel iterative receiver for quantized uplink single carrier MIMO
(SC-MIMO) utilizing an efficient message passing algorithm based on the
Bussgang decomposition and Ungerboeck factorization, which avoids the use of a
complex whitening filter. A reduced state sequence estimator with bidirectional
decision feedback is also derived, achieving remarkable complexity reduction
compared to the existing receivers for quantized SC-MIMO in the literature,
without any requirement on the sparsity of the transmission channel. Moreover,
the linear minimum mean-square-error (LMMSE) channel estimator for SC-MIMO
under frequency-selective channels, which does not require any cyclic-prefix
overhead, is also derived. We observe that the proposed receiver achieves
significant performance gains over the existing receivers in the literature
under imperfect channel state information.

Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
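The Bussgang decomposition underlying the proposed receiver rewrites a quantizer's output as a scaled copy of its input plus a distortion term that is uncorrelated with the input. A minimal numerical sketch for a real-valued 1-bit (sign) quantizer follows, computing the Bussgang gain both empirically and from its known closed form for a Gaussian input; all variable names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma = 1.5
x = sigma * rng.standard_normal(n)     # Gaussian input signal
y = np.sign(x)                         # 1-bit quantizer output

# Bussgang gain A = E[x*y] / E[x^2]; for sign() of a zero-mean Gaussian
# the closed form is sqrt(2/pi) / sigma
A = np.mean(x * y) / np.mean(x * x)
A_theory = np.sqrt(2 / np.pi) / sigma

# Distortion term: by construction of A it is uncorrelated with x
d = y - A * x
corr = np.mean(x * d)                  # ~0 up to floating-point error
```

The same decomposition extends to complex MIMO observations, where the gain becomes a matrix determined by the input covariance; that matrix form is what the paper's message-passing receiver builds on.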
A Study on the Impact of Integrating Reinforcement Learning for Channel Prediction and Power Allocation Scheme in MISO-NOMA System
In this study, the influence of adopting Reinforcement Learning (RL) to predict the channel parameters for user devices in a Power-Domain Multiple-Input Single-Output Non-Orthogonal Multiple Access (MISO-NOMA) system is investigated. In the channel-prediction RL approach, a Q-learning algorithm is developed and incorporated into the NOMA system so that the trained Q-model can be employed to predict the channel coefficients for every user device. The purpose of the developed Q-learning procedure is to maximize the received downlink sum rate and decrease the estimation loss. To this end, the Q-algorithm is initialized using different channel statistics and then updated through interaction with the environment in order to approximate the channel coefficients for each device. The predicted parameters are utilized at the receiver side to recover the desired data. Furthermore, by maximizing the sum rate of the examined user devices, the power factors for each user can be deduced analytically, allocating the optimal power factor to every user device in the system. In addition, this work examines how the channel prediction based on the developed Q-learning model and the power allocation policy can be jointly incorporated for multiuser detection in the examined MISO-NOMA system. Simulation results, based on several performance metrics, demonstrate that the developed Q-learning algorithm is competitive for channel estimation when compared to benchmark schemes such as deep-learning-based long short-term memory (LSTM), the RL-based actor-critic algorithm, the RL-based state-action-reward-state-action (SARSA) algorithm, and a standard channel estimation scheme based on the minimum mean square error procedure.
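Tabular Q-learning, the algorithm this study builds on, updates a state-action value table toward the observed reward plus the discounted value of the next state. The toy sketch below is not the paper's channel-prediction model; it only illustrates the standard Q-learning update on a hypothetical discretized channel-state space, where the best action is to pick the candidate estimate matching the true state (state/action spaces, reward shape, and hyperparameters are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: channel gain quantized into discrete states;
# each action picks one of the candidate coefficient estimates.
n_states, n_actions = 8, 8
alpha, gamma, eps = 0.1, 0.9, 0.2
Q = np.zeros((n_states, n_actions))

def reward(state, action):
    # highest reward when the chosen estimate matches the true state
    return 1.0 if action == state else -abs(action - state) / n_states

state = int(rng.integers(n_states))
for _ in range(50_000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    r = reward(state, action)
    next_state = int(rng.integers(n_states))   # fading: state changes randomly
    # standard Q-learning update rule
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

greedy = Q.argmax(axis=1)   # learned best estimate for each channel state
```

After training, the greedy policy per state plays the role of the paper's learned channel predictor: given an observed (quantized) channel condition, it returns the value-maximizing estimate.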
Data-Aided Channel Estimator for MIMO Systems via Reinforcement Learning
This paper presents a data-aided channel estimator that reduces the channel estimation error of the conventional linear minimum-mean-squared-error (LMMSE) method for multiple-input multiple-output communication systems. The basic idea is to selectively exploit detected symbol vectors obtained from data detection as additional pilot signals. To optimize the selection of the detected symbol vectors, a Markov decision process (MDP) is defined which finds the best selection to minimize the mean-squared-error (MSE) of the channel estimate. Then a reinforcement learning algorithm is developed to solve this MDP in a computationally efficient manner. Simulation results demonstrate that the presented channel estimator significantly reduces the MSE of the channel estimate and therefore improves the block error rate of the system, compared to the conventional LMMSE method.
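The data-aided principle can be sketched numerically: estimate the channel from pilots with LMMSE, detect the data symbols using that estimate, then re-run LMMSE treating the detections as extra pilots. The paper selects which detected vectors to reuse by solving an MDP with reinforcement learning; the toy Monte Carlo below simply reuses all detections on a flat-fading single-antenna channel, so it illustrates only the data-aided gain, not the selection policy (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
n_pilot, n_data, sigma2, trials = 4, 64, 0.05, 500
n = n_pilot + n_data

mse_pilot = mse_aided = 0.0
for _ in range(trials):
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    s = qpsk[rng.integers(4, size=n)]   # first n_pilot symbols are pilots
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n)
                                   + 1j * rng.standard_normal(n))
    y = h * s + noise

    # pilot-only LMMSE estimate (h ~ CN(0, 1), unit-modulus symbols)
    h_p = s[:n_pilot].conj() @ y[:n_pilot] / (n_pilot + sigma2)

    # detect data symbols with the pilot-based estimate (nearest QPSK point)
    z = y[n_pilot:] / h_p
    det = qpsk[np.argmin(np.abs(z[:, None] - qpsk[None, :]), axis=1)]

    # data-aided LMMSE: reuse the detections as additional pilots
    s_aug = np.concatenate([s[:n_pilot], det])
    h_a = s_aug.conj() @ y / (n + sigma2)

    mse_pilot += abs(h_p - h) ** 2 / trials
    mse_aided += abs(h_a - h) ** 2 / trials
```

Because detection errors pollute the augmented pilot set, reusing every detection is suboptimal; that is precisely the motivation for the paper's MDP-based selection of which symbol vectors to trust.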