
    Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    The closest vector problem (CVP) and shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's \emph{embedding technique} is a powerful technique for solving the approximate CVP, yet its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a \emph{bounded distance decoding} (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra, Lenstra and Lovász (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest plane algorithm (LLL-aided SIC). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius $\lambda_1/(2\gamma)$, where $\lambda_1$ is the minimum distance of the lattice and $\gamma \approx O(2^{n/4})$. This substantially improves the previously best known correct decoding bound $\gamma \approx O(2^{n})$. Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT), and propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance. Comment: To appear in IEEE Transactions on Information Theory.
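    To make the construction concrete, below is a minimal sketch of the standard embedding of a CVP instance into an SVP instance, assuming a square basis B (lattice vectors as columns), a target vector t, and an externally supplied lattice-reduction routine such as LLL. The function name embed_and_decode, the argument lll_reduce, and the embedding parameter beta are illustrative assumptions, not the paper's exact formulation.

        # Minimal sketch of Kannan's embedding of CVP into SVP, assuming:
        #   B : n x n basis matrix (lattice vectors as columns)
        #   t : length-n target vector
        #   lll_reduce : externally supplied reduction routine returning a
        #                reduced basis (illustrative name, not a specific API)
        import numpy as np

        def embed_and_decode(B, t, lll_reduce, beta=1.0):
            n = B.shape[1]
            # Embedded (n+1)-dimensional basis   [ B   t    ]
            #                                    [ 0   beta ]
            B_emb = np.zeros((n + 1, n + 1))
            B_emb[:n, :n] = B
            B_emb[:n, n] = t
            B_emb[n, n] = beta
            # If t is close to the lattice, the reduced basis is expected to
            # contain a short vector of the form (t - B x, -beta).
            R = lll_reduce(B_emb)
            for v in R.T:                           # columns of the reduced basis
                if np.isclose(abs(v[n]), beta):     # last coordinate is +/- beta
                    e = -np.sign(v[n]) * v[:n]      # recover the error vector e = t - B x
                    return t - e                    # estimate of the closest lattice point
            return None                             # embedding heuristic failed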

    On the sphere-decoding algorithm I. Expected complexity

    The problem of finding the least-squares solution to a system of linear equations where the unknown vector is comprised of integers, but the coefficient matrix and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore, in this paper, rather than dwell on the worst-case complexity of the integer least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the "sphere decoding" algorithm of Fincke and Pohst, we find a closed-form expression for the expected complexity, both for the infinite and the finite lattice. It is demonstrated in the second part of this paper that, for a wide range of signal-to-noise ratios (SNRs) and numbers of antennas, the expected complexity is polynomial, in fact, often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can, in fact, be implemented in real time - a result with many practical implications.
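    For reference, below is a minimal sketch of Fincke-Pohst-style sphere decoding for the integer least-squares problem min over integer s of ||y - H s||^2. The fixed initial radius, the plain depth-first enumeration, and the function name sphere_decode are illustrative simplifications and assumptions; they do not reproduce the paper's exact algorithm or its expected-complexity analysis.

        # Minimal sketch of a Fincke-Pohst-style sphere decoder for
        #   min over integer s of || y - H s ||^2
        # with a fixed, user-supplied search radius (illustrative choice).
        import numpy as np

        def sphere_decode(H, y, radius):
            n = H.shape[1]
            Q, R = np.linalg.qr(H)                 # H = Q R, R upper triangular
            z = Q.T @ y                            # rotate: ||y - H s||^2 -> ||z - R s||^2 + const
            best = {"s": None, "d2": float(radius) ** 2}

            def search(level, s, partial_d2):
                r = R[level, level]
                # Integer interval for s[level] that keeps the point inside the sphere
                center = (z[level] - R[level, level + 1:] @ s[level + 1:]) / r
                span = np.sqrt(max(best["d2"] - partial_d2, 0.0)) / abs(r)
                for cand in range(int(np.ceil(center - span)),
                                  int(np.floor(center + span)) + 1):
                    s[level] = cand
                    d2 = partial_d2 + (r * (cand - center)) ** 2
                    if d2 > best["d2"]:
                        continue                   # prune: outside the current sphere
                    if level == 0:
                        best["s"], best["d2"] = s.copy(), d2   # new best lattice point
                    else:
                        search(level - 1, s, d2)   # descend to the next coordinate

            search(n - 1, np.zeros(n, dtype=int), 0.0)
            return best["s"]                       # None if no point lies within the radius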