
    High Performance Interference Suppression in Multi-User Massive MIMO Detector

    In this paper, we propose a new nonlinear detector with improved interference suppression for Multi-User Multiple-Input, Multiple-Output (MU-MIMO) systems. The proposed detector combines the following parts: QR decomposition (QRD), low-complexity user sorting before QRD, a sorting-reduced (SR) K-best method, and minimum mean square error (MMSE) pre-processing. Our method significantly outperforms a linear interference rejection combining (IRC, i.e., MMSE) method in both strong-interference and additive-white-noise scenarios, with both ideal and real channel estimation. This result is of wide practical importance for scenarios with strong interference, e.g., when co-located users access the internet in a stadium, on a highway, or in a shopping center. Simulation results are presented for the non-line-of-sight 3D-UMa model of the 5G QuaDRiGa 2.0 channel with 16 highly correlated single-antenna users, QAM16 modulation, and a 64-antenna Massive MIMO base station. The performance was compared with MMSE and other detection approaches.
    Comment: Accepted for presentation at the VTC2020-Spring conference.
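    As a rough illustration of how a sorted-QRD K-best detector of this kind operates, the sketch below (plain NumPy, not the paper's implementation) sorts users by channel energy, QR-decomposes the permuted channel, and expands and prunes K survivor paths layer by layer; the paper's low-complexity user sorting and MMSE pre-processing are reduced here to a simple column-norm sort.

```python
import numpy as np

def kbest_detect(y, H, constellation, K=8):
    """Hedged sketch of a sorted-QRD K-best detector; sorting and
    pre-processing are simplified relative to the paper's method."""
    order = np.argsort(np.linalg.norm(H, axis=0))  # weakest column first,
    Q, R = np.linalg.qr(H[:, order])               # so it is detected last
    z = Q.conj().T @ y
    n = H.shape[1]
    survivors = [([], 0.0)]         # (symbols for layers i+1..n-1, metric)
    for i in range(n - 1, -1, -1):
        candidates = []
        for syms, metric in survivors:
            # Interference from layers that were already detected.
            interf = sum(R[i, j] * syms[j - i - 1] for j in range(i + 1, n))
            for s in constellation:
                m = metric + abs(z[i] - interf - R[i, i] * s) ** 2
                candidates.append(([s] + syms, m))
        candidates.sort(key=lambda c: c[1])
        survivors = candidates[:K]  # keep only the K best partial paths
    best = np.array(survivors[0][0])
    x_hat = np.empty_like(best)
    x_hat[order] = best             # undo the detection ordering
    return x_hat
```

    For QAM16, `constellation` would simply be the 16 complex points of the QAM grid; larger K trades complexity for detection performance.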

    Theoretical Performance Bound of Uplink Channel Estimation Accuracy in Massive MIMO

    In this paper, we present a new performance bound for uplink channel estimation (CE) accuracy in Massive Multiple-Input, Multiple-Output (MIMO) systems. The proposed approach is based on predicting the noise power at the output of the CE unit. Our bound is tighter than the well-known Cramer-Rao lower bound (CRLB) because it takes more statistics into account: the achievable accuracy strongly depends on the number of channel taps and the power ratio between them. Simulation results are presented for the non-line-of-sight (NLOS) 3D-UMa model of the 5G QuaDRiGa 2.0 channel and compared with the CRLB and state-of-the-art CE algorithms.
    Comment: Accepted for presentation in a poster session at the ICASSP 2020 conference.
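    The dependence on the number of taps and their power ratio can be seen in a toy least-squares CE experiment (an illustration of the general principle, not the paper's bound): if the estimator knows that only L of the N resolvable taps carry power, the residual noise power after the CE unit drops from the naive CRLB of sigma^2 to roughly (L/N)*sigma^2.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, snr_db = 64, 4, 10                # resolvable taps, active taps, pilot SNR
noise_var = 10 ** (-snr_db / 10)

# Exponential power-delay profile: the tap-power ratio the bound depends on.
pdp = np.exp(-np.arange(L)); pdp /= pdp.sum()
h = np.zeros(N, dtype=complex)
h[:L] = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) * np.sqrt(pdp / 2)

trials, mse_all, mse_trunc = 5000, 0.0, 0.0
for _ in range(trials):
    # LS estimate over a unit-power pilot: per-tap noise variance noise_var / N.
    e = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(noise_var / (2 * N))
    h_hat = h + e
    mse_all += np.sum(np.abs(h_hat - h) ** 2) / trials            # all N taps kept
    mse_trunc += np.sum(np.abs(h_hat[:L] - h[:L]) ** 2) / trials  # known support only

print(f"CRLB over N unknown taps : {noise_var:.4f}")
print(f"empirical MSE, all taps  : {mse_all:.4f}")
print(f"empirical MSE, L taps    : {mse_trunc:.4f}  (~ L/N of the CRLB)")
```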

    Contextual Beamforming: Exploiting Location and AI for Enhanced Wireless Telecommunication Performance

    The pervasive nature of wireless telecommunication has made it the foundation for mainstream technologies like automation, smart vehicles, virtual reality, and unmanned aerial vehicles. As these technologies experience widespread adoption in our daily lives, ensuring the reliable performance of cellular networks in mobile scenarios has become a paramount challenge. Beamforming, an integral component of modern mobile networks, enables spatial selectivity and improves network quality. However, many beamforming techniques are iterative, introducing unwanted latency into the system. In recent times, there has been growing interest in leveraging mobile users' location information to expedite beamforming processes. This paper explores the concept of contextual beamforming, discussing its advantages, disadvantages, and implications. Notably, the study presents a 53% improvement in signal-to-noise ratio (SNR) from implementing the adaptive maximum ratio transmission (MRT) beamforming algorithm compared to scenarios without beamforming, and elucidates how MRT contributes to contextual beamforming. The importance of localization in implementing contextual beamforming is also examined. Additionally, the paper delves into the use of artificial intelligence schemes, including machine learning and deep learning, in implementing contextual beamforming techniques that leverage user location information. Based on the comprehensive review, the results suggest that the combination of MRT and zero-forcing (ZF) techniques, alongside deep neural networks (DNNs) employing Bayesian optimization (BO), represents the most promising approach for contextual beamforming. Furthermore, the study discusses the future potential of programmable switches, such as Tofino, in enabling location-aware beamforming.
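    A minimal narrowband downlink sketch (illustrative assumptions: perfect CSI, unit-norm precoders, unit noise power; not the paper's simulation setup) makes the MRT-vs-ZF trade-off concrete: MRT maximizes each user's received power, while ZF nulls inter-user interference.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 64, 4                       # base-station antennas, single-antenna users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# MRT: steer each user's beam along its own channel (maximizes per-user
# received power, ignores inter-user interference).
W_mrt = H.conj().T / np.linalg.norm(H, axis=1)

# ZF: invert the channel so each user sees zero interference from the others.
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W_zf /= np.linalg.norm(W_zf, axis=0)

def sinr(H, W, noise_var=1.0):
    G = np.abs(H @ W) ** 2         # G[k, j]: power of stream j at user k
    sig = np.diag(G)
    interf = G.sum(axis=1) - sig
    return sig / (interf + noise_var)

print("MRT SINR per user (dB):", 10 * np.log10(sinr(H, W_mrt)))
print("ZF  SINR per user (dB):", 10 * np.log10(sinr(H, W_zf)))
```

    At high SNR, ZF's interference nulling dominates; at low SNR, MRT's array gain dominates, which is consistent with the review's conclusion that combining the two is the most promising approach.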

    An Orthogonal-SGD based Learning Approach for MIMO Detection under Multiple Channel Models

    In this paper, an orthogonal stochastic gradient descent (O-SGD) based learning approach is proposed to tackle the wireless-channel over-training problem inherent in artificial neural network (ANN)-assisted MIMO signal detection. Our basic idea lies in discovering and exploiting the orthogonality between the training samples of the current epoch and those of past epochs. Unlike conventional SGD, which updates the neural network based only on the current training samples, O-SGD measures the correlation between the current training samples and historical training data and then updates the network using only the uncorrelated components; the update is thus confined to the identified null subspaces. In this way, the neural network can learn and retain the components that differ between wireless channels, making it more robust to channel variations. This hypothesis is confirmed through extensive computer simulations and a performance comparison with the conventional SGD approach.
    Comment: 6 pages, 4 figures, conference.
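    The update rule can be sketched in a few lines (an interpretation of the stated idea with illustrative names; the paper's exact construction of the null subspaces may differ): keep an orthonormal basis for the directions already used in past epochs, and apply only the component of each new gradient that is orthogonal to that basis.

```python
import numpy as np

class OrthogonalSGD:
    """Sketch of the orthogonal-update idea. `max_basis` and the use of
    past gradient directions as the retained subspace are illustrative
    choices, not the paper's exact construction."""
    def __init__(self, dim, lr=0.01, max_basis=50):
        self.lr, self.max_basis = lr, max_basis
        self.basis = np.zeros((0, dim))   # rows: orthonormal past directions

    def step(self, w, grad):
        # Project out the components already spanned by past training data,
        # so the update lives in the identified null subspace.
        g = grad - self.basis.T @ (self.basis @ grad)
        norm = np.linalg.norm(g)
        if norm > 1e-8 and self.basis.shape[0] < self.max_basis:
            # Remember this direction so later epochs stay orthogonal to it.
            self.basis = np.vstack([self.basis, g / norm])
        return w - self.lr * g

# Usage: train on channel model A, then continue on channel model B; the
# updates for B cannot cancel components the network learned under A.
```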