
    Large-Scale MIMO Detection for 3GPP LTE: Algorithms and FPGA Implementations

    Full text link
    Large-scale (or massive) multiple-input multiple-output (MIMO) is expected to be one of the key technologies in next-generation multi-user cellular systems based, for example, on the upcoming 3GPP LTE Release 12 standard. In this work, we propose, to the best of our knowledge, the first VLSI design enabling high-throughput data detection in single-carrier frequency-division multiple access (SC-FDMA)-based large-scale MIMO systems. We propose a new approximate matrix inversion algorithm relying on a Neumann series expansion, which substantially reduces the complexity of linear data detection. We analyze the associated error, and we compare its performance and complexity to those of an exact linear detector. We present corresponding VLSI architectures, which perform exact and approximate soft-output detection for large-scale MIMO systems with various antenna/user configurations. Reference implementation results for a Xilinx Virtex-7 XC7VX980T FPGA show that our designs are able to achieve more than 600 Mb/s for a 128-antenna, 8-user 3GPP LTE-based large-scale MIMO system. We finally provide a performance/complexity trade-off comparison using the presented FPGA designs, which reveals that the detector circuit of choice is determined by the ratio between BS antennas and users, as well as the desired error-rate performance. Comment: To appear in the IEEE Journal of Selected Topics in Signal Processing.
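
    A minimal Python sketch of the Neumann-series idea described above: the inverse needed for linear MMSE detection is approximated by a K-term series built from the diagonal of the regularized Gram matrix. The antenna/user sizes, variable names, and the plain MMSE formulation are illustrative assumptions, not the paper's VLSI formulation.

```python
import numpy as np

def neumann_inverse(A, K=3):
    """Approximate A^{-1} with a K-term Neumann series.

    A is split as A = D + E with D = diag(A), and A^{-1} is approximated by
    sum_{k=0}^{K-1} (-D^{-1} E)^k D^{-1}. The series converges when the
    spectral radius of D^{-1} E is below one, as is typical when the number
    of BS antennas greatly exceeds the number of users.
    """
    D_inv = np.diag(1.0 / np.diag(A))
    E = A - np.diag(np.diag(A))
    M = -D_inv @ E
    approx = D_inv.copy()          # k = 0 term
    term = D_inv.copy()
    for _ in range(1, K):
        term = M @ term            # k-th term: M^k D^{-1}
        approx = approx + term
    return approx

# Hypothetical MMSE detection for a 128-antenna, 8-user uplink (toy data).
rng = np.random.default_rng(0)
B, U, N0 = 128, 8, 0.1
H = (rng.standard_normal((B, U)) + 1j * rng.standard_normal((B, U))) / np.sqrt(2)
x = np.sign(rng.standard_normal(U)) + 0j                    # BPSK symbols for simplicity
y = H @ x + np.sqrt(N0 / 2) * (rng.standard_normal(B) + 1j * rng.standard_normal(B))
A = H.conj().T @ H + N0 * np.eye(U)                         # regularized Gram matrix
x_hat = neumann_inverse(A, K=3) @ (H.conj().T @ y)          # approximate linear MMSE estimate
```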

    Low Complexity V-BLAST MIMO-OFDM Detector by Successive Iterations Reduction

    Full text link
    The V-BLAST detection method suffers from high computational complexity due to its successive detection of symbols. In this paper, we propose a modified V-BLAST algorithm that decreases the computational complexity by reducing the number of detection iterations required in MIMO communication systems. We begin by showing the existence of a maximum number of iterations, beyond which no significant improvement is obtained. We establish a criterion for the maximum number of effective iterations. We propose a modified algorithm that uses the measured SNR to dynamically set the number of iterations needed to achieve an acceptable bit-error rate. Then, we replace the feedback algorithm with an approximate linear function to reduce the complexity. Simulations show that a significant reduction in computational complexity is achieved compared to ordinary V-BLAST, while maintaining good BER performance. Comment: 6 pages, 7 figures, 2 tables. The final publication is available at www.aece.r
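
    For context, the sketch below implements plain zero-forcing V-BLAST with ordered successive interference cancellation and a cap on the number of SIC iterations, finishing any remaining layers with a single linear step. The SNR-driven rule for choosing the cap and the approximate linear feedback function from the paper are not reproduced; all names and parameters are illustrative.

```python
import numpy as np

def vblast_zf_sic(H, y, constellation, max_iters=None):
    """Zero-forcing V-BLAST with ordered successive interference cancellation,
    capped at `max_iters` SIC iterations; leftover layers are detected with
    one joint linear (pseudo-inverse) step."""
    H = H.astype(complex).copy()
    y = y.astype(complex).copy()
    Nt = H.shape[1]
    x_hat = np.zeros(Nt, dtype=complex)
    remaining = list(range(Nt))                              # original stream indices
    max_iters = Nt if max_iters is None else min(max_iters, Nt)

    for _ in range(max_iters):
        W = np.linalg.pinv(H)                                # ZF nulling matrix
        k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))   # strongest layer first
        s = constellation[np.argmin(np.abs(constellation - W[k] @ y))]
        x_hat[remaining[k]] = s
        y -= H[:, k] * s                                     # cancel the detected layer
        H = np.delete(H, k, axis=1)
        remaining.pop(k)
        if not remaining:
            return x_hat

    # Remaining layers: a single ZF step instead of further SIC iterations.
    z = np.linalg.pinv(H) @ y
    for idx, zi in zip(remaining, z):
        x_hat[idx] = constellation[np.argmin(np.abs(constellation - zi))]
    return x_hat

# Hypothetical use: 4x4 MIMO, QPSK, at most two SIC iterations.
rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, size=4)]
x_hat = vblast_zf_sic(H, H @ x, qpsk, max_iters=2)
```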

    FlexCore: Massively Parallel and Flexible Processing for Large MIMO Access Points

    Get PDF
    Large MIMO base stations remain among wireless network designers’ best tools for increasing wireless throughput while serving many clients, but current system designs sacrifice throughput by settling for simple linear MIMO detection algorithms. Higher-performance detection techniques are known, but they remain off the table because these systems parallelize their computation at the level of a whole OFDM subcarrier, which suffices only for the less demanding linear detection approaches they adopt. This paper presents FlexCore, the first computational architecture capable of parallelizing the detection of large numbers of mutually-interfering information streams at a granularity below individual OFDM subcarriers, in a nearly embarrassingly parallel manner, while utilizing any number of available processing elements. For 12 clients sending 64-QAM symbols to a 12-antenna base station, our WARP testbed evaluation shows network throughput similar to the state of the art while using an order of magnitude fewer processing elements. For the same scenario, our combined WARP-GPU testbed evaluation demonstrates a 19x computational speedup and 97% higher energy efficiency compared with the state of the art. Finally, for the same scenario, an FPGA-based comparison between FlexCore and the state of the art shows that FlexCore can achieve up to 96% better energy efficiency and can offer up to 32x the processing throughput.
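
    The core idea, parallelism below the subcarrier level, can be caricatured with the toy Python sketch below: each root-symbol hypothesis for one subcarrier is expanded on its own worker and the candidate with the smallest Euclidean metric is kept. This is only a schematic illustration under assumed names and a crude ZF expansion, not FlexCore's actual partitioning or scheduling scheme.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def expand_hypothesis(H, y, s0, constellation):
    """Fix the last stream to hypothesis s0, fill the remaining streams with
    a crude ZF estimate, and return (candidate vector, Euclidean metric)."""
    Nt = H.shape[1]
    x = np.zeros(Nt, dtype=complex)
    x[-1] = s0
    z = np.linalg.pinv(H[:, :-1]) @ (y - H[:, -1] * s0)
    for i, zi in enumerate(z):
        x[i] = constellation[np.argmin(np.abs(constellation - zi))]
    return x, float(np.linalg.norm(y - H @ x) ** 2)

def parallel_detect(H, y, constellation, workers=4):
    """Evaluate each root-symbol hypothesis for one subcarrier on its own
    worker and keep the candidate with the smallest metric."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda s0: expand_hypothesis(H, y, s0, constellation),
                           constellation)
        return min(results, key=lambda r: r[1])[0]
```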

    Signal detection for 3GPP LTE downlink: algorithm and implementation.

    Get PDF
    In this paper, we investigate an efficient signal detection algorithm, which combines lattice reduction (LR) and list decoding (LD) techniques, for 3GPP long term evolution (LTE) downlink systems. The resulting detector, referred to as the LRLD-based detector, operates within a successive interference cancellation (SIC) framework, which takes full advantage of the reliable LR-aided detection. We then study the implementation feasibility of the LRLD-based detector and provide a reference for a possible silicon implementation. Simulation results show that the proposed detector achieves near maximum-likelihood (ML) performance with significantly reduced complexity.
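
    To show what the LR stage does, here is a compact, textbook-style LLL reduction in Python (real-valued, with the Gram-Schmidt vectors recomputed for clarity rather than speed). The list-decoding and SIC parts of the proposed LRLD-based detector are not shown; delta, the variable names, and the column convention are assumptions.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Plain LLL reduction of the columns of a full-rank real basis B.
    Returns (B_red, T) with B_red = B @ T and T unimodular."""
    B = B.astype(float).copy()
    n = B.shape[1]
    T = np.eye(n)

    def gso(B):
        """Gram-Schmidt orthogonalization with projection coefficients mu."""
        Q = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu

    Q, mu = gso(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):                           # size reduction
            q = int(np.rint(mu[k, j]))
            if q != 0:
                B[:, k] -= q * B[:, j]
                T[:, k] -= q * T[:, j]
                Q, mu = gso(B)
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1                                               # Lovasz condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]                  # swap columns, step back
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            Q, mu = gso(B)
            k = max(k - 1, 1)
    return B, T

# An LR-aided detector would equalize in the reduced basis B @ T and map the
# estimate back through T before quantizing to the constellation.
```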

    Decision-Directed Channel Estimation Implementation for Spectral Efficiency Improvement in Mobile MIMO-OFDM

    Get PDF
    Channel estimation algorithms and their implementations for mobile receivers are considered in this paper. The 3GPP long term evolution (LTE)-based pilot structure is used as a benchmark in a multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) receiver. The decision-directed (DD) space-alternating generalized expectation-maximization (SAGE) algorithm is used to improve on the performance of the pilot-symbol-based least-squares (LS) channel estimator. The performance is improved at high user velocities, where the pilot symbol density is not sufficient. Minimum mean square error (MMSE) filtering is also used to estimate the channel between pilot symbols. With DD channel estimation, the pilot overhead can be reduced to a third of the LTE pilot overhead, yielding a ten percent increase in data throughput. Complexity reduction and latency issues are considered in the architecture design. The pilot-based LS, MMSE, and SAGE channel estimators are implemented with a high-level synthesis tool, synthesized in UMC 0.18 μm CMOS technology, and the performance-complexity trade-offs are studied. The MMSE estimator improves on the simple LS estimator with the LTE pilot structure and has low power consumption. The SAGE estimator has high power consumption but can be used with reduced pilot density to increase the data rate.
    Funding: National Science Foundation, Tekes, Elektrobit, Renesas Mobile Europe, Academy of Finland, Nokia Siemens Networks, Xilinx.
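
    To make the pilot-based baseline concrete, the Python sketch below performs LS estimation at the pilot symbols followed by MMSE interpolation across OFDM symbols under an assumed Jakes time-correlation model. The DD-SAGE estimator and the exact LTE pilot pattern are not reproduced; the pilot positions, Doppler, and SNR values are illustrative.

```python
import numpy as np
from scipy.special import j0        # zeroth-order Bessel function for the Jakes model

def ls_pilot_estimates(y_pilot, x_pilot):
    """Least-squares channel estimates at the pilot positions."""
    return y_pilot / x_pilot

def mmse_time_interpolation(h_ls, pilot_idx, n_symbols, fd_ts, snr):
    """MMSE interpolation of pilot LS estimates across OFDM symbols,
    assuming a Jakes time correlation r(k) = J0(2*pi*fd*Ts*k)."""
    lags = np.arange(n_symbols)
    r = j0(2 * np.pi * fd_ts * lags)                          # channel autocorrelation
    R_pp = r[np.abs(pilot_idx[:, None] - pilot_idx[None, :])] \
           + np.eye(len(pilot_idx)) / snr                     # pilot covariance + noise
    R_dp = r[np.abs(lags[:, None] - pilot_idx[None, :])]      # data-to-pilot correlation
    W = R_dp @ np.linalg.inv(R_pp)                            # MMSE interpolation filter
    return W @ h_ls

# Hypothetical use: one subcarrier over a 14-symbol subframe with 4 pilots.
rng = np.random.default_rng(2)
pilot_idx = np.array([0, 4, 7, 11])                           # assumed pilot positions
x_pilot = np.ones(4, dtype=complex)                           # known pilot symbols
y_pilot = 0.8 * np.exp(1j * 0.3) * x_pilot + 0.05 * (rng.standard_normal(4)
                                                     + 1j * rng.standard_normal(4))
h_hat = mmse_time_interpolation(ls_pilot_estimates(y_pilot, x_pilot),
                                pilot_idx, n_symbols=14, fd_ts=0.01, snr=100.0)
```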