    On the Proximity Factors of Lattice Reduction-Aided Decoding

    Lattice reduction-aided decoding features reduced decoding complexity and near-optimum performance in multi-input multi-output communications. In this paper, a quantitative analysis of lattice reduction-aided decoding is presented. To this end, the proximity factors are defined to measure the worst-case losses in distances relative to closest point search (in an infinite lattice). Upper bounds on the proximity factors are derived, which are functions of the dimension $n$ of the lattice alone. The study is then extended to dual-basis reduction. It is found that the bounds for dual-basis reduction may be smaller. Reasonably good bounds are derived in many cases. The constant bounds on proximity factors not only imply the same diversity order in fading channels, but also relate the error probabilities of (infinite) lattice decoding and lattice reduction-aided decoding.
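
    The decoder this analysis targets is Babai's nearest plane algorithm (equivalently, SIC) applied to a reduced basis. A minimal sketch in Python/numpy follows; the QR-based formulation and the toy basis are illustrative choices, not taken from the paper.

```python
import numpy as np

def babai_nearest_plane(B, y):
    """Babai's nearest plane algorithm (SIC) via QR: returns an integer vector z
    such that B @ z is close to y.  For the proximity-factor bounds to apply,
    B should already be lattice-reduced (e.g. LLL-reduced)."""
    Q, R = np.linalg.qr(B)        # B = Q @ R, R upper triangular
    c = Q.T @ y                   # target in the rotated coordinates
    n = B.shape[1]
    z = np.zeros(n, dtype=int)
    for i in reversed(range(n)):  # successive interference cancellation
        z[i] = round((c[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i])
    return z

# Toy example (hypothetical 2-D basis, assumed already reduced).
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])
y = np.array([2.4, 2.9])
z = babai_nearest_plane(B, y)     # -> array([1, 1])
x_hat = B @ z                     # decoded lattice point (3., 3.)
```

    The proximity factors quantify how much larger the error norm of this decoder can be, in the worst case, than that of exact closest point search on the same lattice.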

    Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding

    Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest plane algorithm (i.e., successive interference cancellation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample some lattice points, and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point, and only needs to be run once during pre-processing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is two-fold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive, while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved by a moderate number of samples, even when the dimension is as high as 32.
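
    A minimal sketch of the sampling idea follows (Python/numpy). It replaces the deterministic rounding of the nearest plane algorithm with a randomized, Gaussian-like rounding and keeps the best of several candidates; the small candidate window and the parameter rho are illustrative simplifications, not the optimized decoding radius derived in the paper.

```python
import numpy as np

def klein_sample(R, c, rho, rng):
    """One randomized nearest-plane pass (Klein-style rounding): each integer
    coefficient is drawn from a discrete-Gaussian-like distribution centred on
    the SIC value instead of being rounded deterministically."""
    n = R.shape[1]
    z = np.zeros(n, dtype=int)
    for i in reversed(range(n)):
        mu = (c[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
        ks = np.arange(int(np.floor(mu)) - 2, int(np.ceil(mu)) + 3)  # small window
        e = -np.pi * (R[i, i] * (ks - mu) / rho) ** 2                # Gaussian exponents
        w = np.exp(e - e.max())                                      # unnormalized weights
        z[i] = rng.choice(ks, p=w / w.sum())
    return z

def sampling_decode(B, y, num_samples=64, rho=1.0, seed=0):
    """Draw several randomized candidates and keep the one closest to y.
    B should be lattice-reduced (once, during pre-processing)."""
    rng = np.random.default_rng(seed)
    Q, R = np.linalg.qr(B)
    c = Q.T @ y
    cands = [klein_sample(R, c, rho, rng) for _ in range(num_samples)]
    return min(cands, key=lambda z: np.linalg.norm(B @ z - y))
```

    Because the passes are independent, the candidate generation parallelizes trivially, which is the efficiency argument made in the abstract.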

    Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    The closest vector problem (CVP) and shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's embedding technique is a powerful technique for solving the approximate CVP, yet its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a bounded distance decoding (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra, Lenstra and Lovász (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest plane algorithm (LLL-aided SIC). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius $\lambda_1/(2\gamma)$, where $\lambda_1$ is the minimum distance of the lattice and $\gamma \approx O(2^{n/4})$. This substantially improves the previously best known correct decoding bound $\gamma \approx O(2^{n})$. Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT), and we propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance.
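
    A minimal sketch of the embedding construction follows (Python/numpy). The textbook LLL routine and the choice of the embedding parameter t are illustrative; in practice t is chosen on the order of the noise norm (or of $\lambda_1/2$), and any LLL/SVP solver can replace the one sketched here.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of the columns of B (real entries are fine).
    The Gram-Schmidt data is recomputed after every change for simplicity."""
    B = B.astype(float)
    n = B.shape[1]

    def gso():
        G = np.zeros_like(B)                  # Gram-Schmidt vectors
        M = np.zeros((n, n))                  # projection coefficients mu
        for i in range(n):
            G[:, i] = B[:, i]
            for j in range(i):
                M[i, j] = (B[:, i] @ G[:, j]) / (G[:, j] @ G[:, j])
                G[:, i] -= M[i, j] * G[:, j]
        return G, M

    G, M = gso()
    k = 1
    while k < n:
        for j in reversed(range(k)):          # size reduction
            q = round(M[k, j])
            if q:
                B[:, k] -= q * B[:, j]
                G, M = gso()
        if G[:, k] @ G[:, k] >= (delta - M[k, k - 1] ** 2) * (G[:, k - 1] @ G[:, k - 1]):
            k += 1                            # Lovasz condition satisfied
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]   # swap and step back
            G, M = gso()
            k = max(k - 1, 1)
    return B

def embedding_decode(B, y, t):
    """Kannan's embedding for bounded distance decoding: append the target y
    as an extra basis column with embedding parameter t, reduce, and read the
    error vector off a reduced vector whose last coordinate is +/- t."""
    m, n = B.shape
    E = np.zeros((m + 1, n + 1))
    E[:m, :n] = B
    E[:m, n] = y
    E[m, n] = t
    Red = lll_reduce(E)
    for i in range(n + 1):
        w = Red[m, i] / t                     # last coordinate is (integer) * t
        if abs(abs(w) - 1.0) < 1e-9:          # vector of the form (B z + w y, w t)
            return y - np.sign(w) * Red[:m, i]   # decoded lattice point
    return None                               # no +/- t vector in the reduced basis
```

    When the noise is within the decoding radius, the short vector containing the error dominates the reduced basis, which is why the embedded problem behaves like a (unique) SVP instance.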

    DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models

    The work identifies the first general, explicit, and non-random MIMO encoder-decoder structures that guarantee optimality with respect to the diversity-multiplexing tradeoff (DMT), without employing a computationally expensive maximum-likelihood (ML) receiver. Specifically, the work establishes the DMT optimality of a class of regularized lattice decoders, and more importantly the DMT optimality of their lattice-reduction (LR)-aided linear counterparts. The results hold for all channel statistics, for all channel dimensions, and most interestingly, irrespective of the particular lattice code applied. As a special case, it is established that the LLL-based LR-aided linear implementation of the MMSE-GDFE lattice decoder facilitates DMT optimal decoding of any lattice code at a worst-case complexity that grows at most linearly in the data rate. This represents a fundamental reduction in the decoding complexity when compared to ML decoding, whose complexity is generally exponential in the rate. The generality of the results renders them applicable to a broad range of pertinent communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI, cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality of the LR-aided linear decoder is guaranteed. The adopted approach yields insight, and motivates further study, into joint transceiver designs with an improved SNR gap to ML decoding.
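
    As a rough illustration of the decoder structure whose DMT optimality is established, the following sketch (Python/numpy) combines MMSE-style regularization with LLL reduction and a linear zero-forcing estimate on the reduced basis. It reuses the lll_reduce routine from the embedding sketch above; the channel augmentation used here is one common way to realize the regularization and is not claimed to be the paper's exact MMSE-GDFE front end.

```python
import numpy as np

def lr_aided_mmse_linear(H, y, snr):
    """LR-aided linear decoding with MMSE-style regularization (sketch).
    Augment the channel as [H; I/sqrt(snr)], LLL-reduce the augmented matrix,
    take a zero-forcing estimate on the reduced basis, round it, and map the
    result back to the original integer coordinates.
    lll_reduce is the textbook LLL sketched earlier (any LLL routine works)."""
    m, n = H.shape
    H_aug = np.vstack([H, np.eye(n) / np.sqrt(snr)])         # regularized channel
    y_aug = np.concatenate([y, np.zeros(n)])
    H_red = lll_reduce(H_aug)                                # H_red = H_aug @ T, T unimodular
    T = np.rint(np.linalg.pinv(H_aug) @ H_red).astype(int)   # recover the unimodular map
    z = np.rint(np.linalg.pinv(H_red) @ y_aug).astype(int)   # ZF + rounding on the reduced basis
    return T @ z                                             # estimate of the integer data vector
```

    Since the reduction and the pseudoinverses depend only on the channel, they are computed once per channel realization; the per-symbol work is a matrix-vector product and a rounding, which is where the rate-independent complexity claim comes from.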

    Integer-Forcing MIMO Linear Receivers Based on Lattice Reduction

    A new architecture called the integer-forcing (IF) linear receiver has recently been proposed for multiple-input multiple-output (MIMO) fading channels, wherein an appropriate integer linear combination of the received symbols has to be computed as a part of the decoding process. In this paper, we propose a method based on the Hermite-Korkine-Zolotareff (HKZ) and Minkowski lattice basis reduction algorithms to obtain the integer coefficients for the IF receiver. We show that the proposed method provides a lower bound on the ergodic rate and achieves the full receive diversity. The suitability of the complex Lenstra-Lenstra-Lovász (CLLL) lattice reduction algorithm for this problem is also investigated. Furthermore, we establish the connection between the proposed IF linear receivers and lattice reduction-aided MIMO detectors (with equivalent complexity), and point out the advantages of the former class of receivers over the latter. For $2 \times 2$ and $4 \times 4$ MIMO channels, we compare the coded-block error rate and bit error rate of the proposed approach with those of other linear receivers. Simulation results show that the proposed approach outperforms the zero-forcing (ZF) receiver, the minimum mean square error (MMSE) receiver, and the lattice reduction-aided MIMO detectors.
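
    The coefficient-selection step can be sketched as a basis-reduction problem (Python/numpy). The sketch below uses LLL in place of the HKZ/Minkowski reductions studied in the paper and a common MMSE formulation of the effective-noise metric; it reuses the lll_reduce routine from the embedding sketch above.

```python
import numpy as np

def integer_forcing_coefficients(H, snr):
    """Pick an integer coefficient matrix A for an integer-forcing receiver
    (sketch).  Good coefficient vectors a make a^T G a small, where
    G = (I + snr * H^T H)^{-1}; these are short vectors of the lattice whose
    Gram matrix is G, and basis reduction exposes them.
    lll_reduce is the textbook LLL sketched earlier (any reduction works)."""
    n = H.shape[1]
    G = np.linalg.inv(np.eye(n) + snr * H.T @ H)
    L = np.linalg.cholesky(G)                  # G = L @ L.T
    F = L.T                                    # ||F @ a||^2 == a^T G a
    F_red = lll_reduce(F)                      # reduced basis of the metric lattice
    A = np.rint(np.linalg.solve(F, F_red)).astype(int)   # unimodular: F_red = F @ A
    return A                                   # columns are the integer combinations to decode
```

    Swapping lll_reduce for a stronger reduction such as HKZ or Minkowski yields shorter coefficient vectors at higher preprocessing cost, which is the direction the paper pursues.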