    Dual-lattice ordering and partial lattice reduction for SIC-based MIMO detection

    In this paper, we propose low-complexity lattice detection algorithms for successive interference cancellation (SIC) in multi-input multi-output (MIMO) communications. First, we present a dual-lattice view of vertical Bell Labs Layered Space-Time (V-BLAST) detection. We show that V-BLAST ordering is equivalent to applying sorted QR decomposition to the dual basis, or equivalently, applying sorted Cholesky decomposition to the associated Gram matrix. This new view results in lower detection complexity and allows simultaneous ordering and detection. Second, we propose a partial reduction algorithm that performs lattice reduction only for the last several (weakest) substreams; its implementation is also facilitated by the dual-lattice view. By tuning the block size of the partial reduction (and hence its complexity), the algorithm achieves a variable diversity order, offering a graceful tradeoff between performance and complexity for SIC-based MIMO detection. Numerical results are presented to compare the computational costs and to verify the achieved diversity order.
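
    The paper's contribution is the dual-lattice formulation of the ordering; as background, the sketch below shows only a generic sorted-QR (SQRD) SIC structure applied to the primal channel matrix. The greedy minimum-norm sorting rule and the plain integer slicer are illustrative assumptions, not the paper's dual-basis algorithm.

```python
import numpy as np

def sorted_qr(H):
    """Greedy sorted QR (SQRD): at each step pick the remaining column with
    the smallest residual norm, so weak layers end up detected last."""
    m, n = H.shape
    Q = H.astype(float)
    R = np.zeros((n, n))
    perm = np.arange(n)
    for i in range(n):
        k = i + int(np.argmin(np.sum(Q[:, i:] ** 2, axis=0)))
        Q[:, [i, k]] = Q[:, [k, i]]
        R[:, [i, k]] = R[:, [k, i]]
        perm[[i, k]] = perm[[k, i]]
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        for j in range(i + 1, n):
            R[i, j] = Q[:, i] @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
    return Q, R, perm

def sic_detect(y, H, slicer=np.round):
    """SIC with sorted QR: back-substitute from the last layer, slicing each
    symbol to the nearest integer (a real finite alphabet would also clip)."""
    Q, R, perm = sorted_qr(H)
    z = Q.T @ y
    n = R.shape[0]
    x_hat = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x_hat[i] = slicer((z[i] - R[i, i + 1:] @ x_hat[i + 1:]) / R[i, i])
    x_out = np.zeros(n)
    x_out[perm] = x_hat          # undo the detection-order permutation
    return x_out
```

    Here the ordering falls out of the same factorization used for detection; the paper obtains the V-BLAST ordering itself by working with the dual basis (or the Gram matrix) instead.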

    Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding

    Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest-plane algorithm (i.e., successive interference cancellation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample some lattice points, and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point and only needs to be run once during pre-processing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is two-fold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius over Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved with a moderate number of samples, even when the dimension is as high as 32.
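
    As a rough illustration of the sampling idea (not the paper's optimized decoder), the sketch below draws K randomized nearest-plane (Babai/SIC) solutions, with each layer's integer coefficient sampled from a discretized Gaussian centred on the SIC estimate, and keeps the closest sample. The width parameter c, the 8-integer candidate window, and the sample count K are assumptions, and the LLL pre-processing the paper relies on is omitted here.

```python
import numpy as np

def randomized_nearest_plane(y, Q, R, c=1.0, rng=None):
    """One randomized Babai (nearest-plane/SIC) pass: at each layer, sample the
    integer coefficient from a discretized Gaussian centred on the SIC estimate,
    with standard deviation roughly c / |R[i, i]| (illustrative choice)."""
    rng = np.random.default_rng() if rng is None else rng
    n = R.shape[0]
    z = Q.T @ y
    u = np.zeros(n)
    for i in range(n - 1, -1, -1):
        center = (z[i] - R[i, i + 1:] @ u[i + 1:]) / R[i, i]
        sigma = c / abs(R[i, i]) + 1e-12
        cand = np.arange(np.floor(center) - 3, np.floor(center) + 5)  # 8 integers
        d2 = (cand - center) ** 2
        w = np.exp(-(d2 - d2.min()) / (2 * sigma ** 2))   # stabilized weights
        u[i] = rng.choice(cand, p=w / w.sum())
    return u

def sampling_decoder(y, H, K=32, c=1.0, seed=0):
    """Draw K randomized nearest-plane samples and keep the one closest to y.
    As c -> 0 every sample collapses to the plain Babai/SIC point."""
    Q, R = np.linalg.qr(H)
    rng = np.random.default_rng(seed)
    best_u, best_d = None, np.inf
    for _ in range(K):
        u = randomized_nearest_plane(y, Q, R, c=c, rng=rng)
        d = np.linalg.norm(y - H @ u)
        if d < best_d:
            best_u, best_d = u, d
    return best_u
```

    Increasing K trades extra (parallelizable) work for a better chance of hitting the closest point, which mirrors the performance/complexity tradeoff described in the abstract.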

    On the Proximity Factors of Lattice Reduction-Aided Decoding

    Lattice reduction-aided decoding features reduced decoding complexity and near-optimum performance in multi-input multi-output communications. In this paper, a quantitative analysis of lattice reduction-aided decoding is presented. To this end, proximity factors are defined to measure the worst-case losses in distance relative to closest-point search (in an infinite lattice). Upper bounds on the proximity factors are derived, which are functions of the dimension n of the lattice alone. The study is then extended to dual-basis reduction, and it is found that the bounds for dual-basis reduction may be smaller. Reasonably good bounds are derived in many cases. The constant bounds on the proximity factors not only imply the same diversity order in fading channels, but also relate the error probabilities of (infinite) lattice decoding and lattice reduction-aided decoding.
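
    The sketch below is a small numerical illustration, not the paper's analysis: for random two-dimensional lattices it compares the distance achieved by nearest-plane (SIC) decoding on a Lagrange/Gauss-reduced basis (equivalent to LLL in 2-D) with the true closest-point distance found by brute force. The per-instance distance ratio is an empirical counterpart of the loss that the proximity factors bound in the worst case. The trial count, coefficient search box, and Gaussian basis model are all illustrative choices.

```python
import numpy as np
from itertools import product

def gauss_reduce(B):
    """Lagrange/Gauss reduction of a 2-column basis (LLL in dimension 2)."""
    b1, b2 = B[:, 0].astype(float), B[:, 1].astype(float)
    if b1 @ b1 > b2 @ b2:
        b1, b2 = b2, b1
    while True:
        mu = round((b1 @ b2) / (b1 @ b1))
        b2 = b2 - mu * b1
        if b2 @ b2 >= b1 @ b1:
            return np.column_stack([b1, b2])
        b1, b2 = b2, b1

def babai_nearest_plane(y, B):
    """Nearest-plane (SIC) decoding on the basis B."""
    Q, R = np.linalg.qr(B)
    z = Q.T @ y
    n = B.shape[1]
    u = np.zeros(n)
    for i in range(n - 1, -1, -1):
        u[i] = round((z[i] - R[i, i + 1:] @ u[i + 1:]) / R[i, i])
    return B @ u

def closest_point(y, B, box=5):
    """Brute-force closest lattice point over a small coefficient box
    (adequate only for well-conditioned, tiny-dimensional examples)."""
    c = np.round(np.linalg.lstsq(B, y, rcond=None)[0])
    best, best_d = None, np.inf
    for offs in product(range(-box, box + 1), repeat=B.shape[1]):
        p = B @ (c + np.array(offs))
        d = np.linalg.norm(y - p)
        if d < best_d:
            best, best_d = p, d
    return best

rng = np.random.default_rng(0)
ratios = []
for _ in range(500):
    B = rng.normal(size=(2, 2))
    y = rng.normal(size=2)
    d_lr = np.linalg.norm(y - babai_nearest_plane(y, gauss_reduce(B)))
    d_cp = np.linalg.norm(y - closest_point(y, B))
    ratios.append(d_lr / max(d_cp, 1e-12))
print(f"largest observed distance ratio over 500 trials: {max(ratios):.3f}")
```

    The printed maximum ratio is an empirical, instance-wise counterpart of what the proximity factors bound analytically.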

    DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models

    The work identifies the first general, explicit, and non-random MIMO encoder-decoder structures that guarantee optimality with respect to the diversity-multiplexing tradeoff (DMT) without employing a computationally expensive maximum-likelihood (ML) receiver. Specifically, the work establishes the DMT optimality of a class of regularized lattice decoders and, more importantly, the DMT optimality of their lattice-reduction (LR)-aided linear counterparts. The results hold for all channel statistics, for all channel dimensions, and, most interestingly, irrespective of the particular lattice code applied. As a special case, it is established that the LLL-based LR-aided linear implementation of the MMSE-GDFE lattice decoder facilitates DMT-optimal decoding of any lattice code at a worst-case complexity that grows at most linearly in the data rate. This represents a fundamental reduction in decoding complexity compared with ML decoding, whose complexity is generally exponential in the rate. The generality of the results makes them applicable to a broad range of pertinent communication scenarios, such as quasi-static MIMO, MIMO-OFDM, ISI, cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality of the LR-aided linear decoder is guaranteed. The adopted approach yields insight into, and motivates further study of, joint transceiver designs with an improved SNR gap to ML decoding.
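
    A minimal sketch of the generic LR-aided linear pipeline referred to above is given below: LLL-reduce an MMSE-extended channel matrix, detect linearly in the reduced domain, round, and map back through the unimodular transform. This is a textbook-style construction, not the paper's MMSE-GDFE formulation; constellation shifting/scaling and boundary control are omitted, and the noise level sigma is assumed known.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Plain LLL on the columns of B; returns (B_red, T) with B_red = B @ T and
    T unimodular. Naive implementation (QR recomputed each pass) for clarity."""
    B = B.astype(float).copy()
    n = B.shape[1]
    T = np.eye(n, dtype=int)
    k = 1
    while k < n:
        _, R = np.linalg.qr(B)
        for j in range(k - 1, -1, -1):          # size-reduce column k
            mu = int(round(R[j, k] / R[j, j]))
            if mu:
                B[:, k] -= mu * B[:, j]
                T[:, k] -= mu * T[:, j]
                _, R = np.linalg.qr(B)
        if R[k, k] ** 2 >= (delta - (R[k - 1, k] / R[k - 1, k - 1]) ** 2) * R[k - 1, k - 1] ** 2:
            k += 1                               # Lovász condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]  # swap and step back
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B, T

def lr_aided_mmse_linear(y, H, sigma):
    """Detect integer symbols x from y = H x + n via LLL-aided linear MMSE."""
    m, n = H.shape
    H_ext = np.vstack([H, sigma * np.eye(n)])    # MMSE extension of the channel
    y_ext = np.concatenate([y, np.zeros(n)])
    H_red, T = lll_reduce(H_ext)
    z_hat = np.linalg.pinv(H_red) @ y_ext        # linear estimate in reduced domain
    x_hat = T @ np.round(z_hat)                  # quantize, then map back
    return x_hat
```

    The reduction runs once per channel realization; detecting each received vector afterwards costs only a linear filter plus rounding.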