
    On the Proximity Factors of Lattice Reduction-Aided Decoding

    Lattice reduction-aided decoding features reduced decoding complexity and near-optimum performance in multi-input multi-output communications. In this paper, a quantitative analysis of lattice reduction-aided decoding is presented. To this end, proximity factors are defined to measure the worst-case losses in distances relative to closest-point search (in an infinite lattice). Upper bounds on the proximity factors are derived; they are functions of the dimension $n$ of the lattice alone. The study is then extended to dual-basis reduction, and it is found that the bounds for dual-basis reduction may be smaller. Reasonably good bounds are derived in many cases. The constant bounds on proximity factors not only imply the same diversity order in fading channels, but also relate the error probabilities of (infinite) lattice decoding and lattice reduction-aided decoding.
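
    To make the decoding step concrete, the following is a minimal numpy sketch (not taken from the paper) of the kind of reduction-aided decoder being analysed: the received point is decoded with Babai's nearest-plane rule on a basis instead of by exact closest-point search. The toy basis and noise level below are placeholder assumptions; in practice the basis would first be LLL-reduced.

        import numpy as np

        def gram_schmidt(B):
            # Gram-Schmidt orthogonalisation of the rows of B (no normalisation).
            Bstar = np.zeros_like(B, dtype=float)
            for i in range(B.shape[0]):
                proj = sum((B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
                           for j in range(i))
                Bstar[i] = B[i] - proj
            return Bstar

        def babai_nearest_plane(B, t):
            # Babai's nearest-plane rule: approximate closest lattice point to t,
            # where the rows of B are the (ideally reduced) basis vectors.
            Bstar = gram_schmidt(B.astype(float))
            y = t.astype(float).copy()
            coeffs = np.zeros(B.shape[0])
            for j in range(B.shape[0] - 1, -1, -1):
                c = round((y @ Bstar[j]) / (Bstar[j] @ Bstar[j]))
                coeffs[j] = c
                y = y - c * B[j]
            return coeffs @ B, coeffs  # decoded lattice point and its integer coordinates

        # Toy example (hypothetical basis): decode a noisy lattice point.
        B = np.array([[2.0, 1.0], [1.0, 3.0]])
        x = np.array([3.0, -2.0])               # transmitted integer vector
        t = x @ B + np.array([0.2, -0.1])       # received point = lattice point + noise
        print(babai_nearest_plane(B, t))        # recovers x for small enough noise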

    Decoding by Embedding: Correct Decoding Radius and DMT Optimality

    The closest vector problem (CVP) and the shortest (nonzero) vector problem (SVP) are the core algorithmic problems on Euclidean lattices. They are central to the applications of lattices in many problems of communications and cryptography. Kannan's \emph{embedding technique} is a powerful technique for solving the approximate CVP, yet its remarkable practical performance is not well understood. In this paper, the embedding technique is analyzed from a \emph{bounded distance decoding} (BDD) viewpoint. We present two complementary analyses of the embedding technique: we establish a reduction from BDD to Hermite SVP (via unique SVP), which can be used along with any Hermite SVP solver (including, among others, the Lenstra, Lenstra and Lov\'asz (LLL) algorithm), and we show that, in the special case of LLL, it performs at least as well as Babai's nearest-plane algorithm (LLL-aided SIC). The former analysis helps to explain the folklore practical observation that unique SVP is easier than standard approximate SVP. It is proven that when the LLL algorithm is employed, the embedding technique can solve the CVP provided that the noise norm is smaller than a decoding radius $\lambda_1/(2\gamma)$, where $\lambda_1$ is the minimum distance of the lattice and $\gamma \approx O(2^{n/4})$. This substantially improves the previously best known correct decoding bound $\gamma \approx O(2^{n})$. Focusing on the applications of BDD to decoding of multiple-input multiple-output (MIMO) systems, we also prove that BDD of the regularized lattice is optimal in terms of the diversity-multiplexing gain tradeoff (DMT), and we propose practical variants of embedding decoding which require no knowledge of the minimum distance of the lattice and/or further improve the error performance. (To appear in IEEE Transactions on Information Theory.)
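
    As a rough, self-contained illustration of the embedding step (a generic sketch, not the paper's construction or parameter choices): a CVP/BDD instance (B, t) is lifted to an (n+1)-dimensional lattice whose unusually short vectors have the form +/-(t - xB, M), so any SVP or Hermite-SVP solver can be applied. Here an exhaustive search over small coefficients stands in for that solver, and the embedding factor M is an assumed tuning parameter.

        import itertools
        import numpy as np

        def kannan_embedding(B, t, M=1.0):
            # Embed the CVP instance (B, t) into an (n+1)-dimensional lattice basis.
            # Rows are basis vectors; the last row carries the target t and the factor M.
            n = B.shape[0]
            top = np.hstack([B, np.zeros((n, 1))])
            bottom = np.hstack([t.reshape(1, -1), [[M]]])
            return np.vstack([top, bottom])

        def brute_force_shortest(B, bound=3):
            # Stand-in for an SVP solver: exhaustive search over small integer coefficients.
            best, best_norm = None, np.inf
            for c in itertools.product(range(-bound, bound + 1), repeat=B.shape[0]):
                v = np.array(c) @ B
                nrm = np.linalg.norm(v)
                if 0 < nrm < best_norm:
                    best, best_norm = np.array(c), nrm
            return best

        # Toy BDD instance: the target t is a lattice point plus small noise e.
        B = np.array([[3.0, 1.0], [1.0, 4.0]])
        x = np.array([1.0, -2.0])
        t = x @ B + np.array([0.3, -0.2])

        E = kannan_embedding(B, t, M=0.5)
        c = brute_force_shortest(E)
        # A shortest vector of the embedded lattice is +/-(t - xB, M), so the CVP
        # solution is read off from the coefficient of the last (target) row.
        if c[-1] != 0:
            print("recovered coefficients:", -c[:-1] / c[-1])   # matches x for small noise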

    Classification of eight dimensional perfect forms

    In this paper, we classify the perfect lattices in dimension 8; there are 10916 of them. Our classification relies heavily on exploiting symmetry in polyhedral computations. Here we describe the algorithms that make the classification possible.
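
    For background, the standard notion being classified can be stated in one line (textbook definition, not quoted from the paper); in LaTeX:

        % Background definition (Voronoi): a perfect form is pinned down by its minimal vectors.
        Let $Q$ be a positive definite quadratic form on $\mathbb{R}^n$, with arithmetical minimum
        $m(Q) = \min_{v \in \mathbb{Z}^n \setminus \{0\}} Q[v]$ and minimal vectors
        $\operatorname{Min}(Q) = \{ v \in \mathbb{Z}^n : Q[v] = m(Q) \}$.
        The form $Q$ is \emph{perfect} if it is the unique symmetric form $Q'$ satisfying
        \[
            Q'[v] = m(Q) \quad \text{for all } v \in \operatorname{Min}(Q),
        \]
        i.e.\ the linear conditions imposed by $\operatorname{Min}(Q)$ determine $Q$ completely.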

    Integer-Forcing MIMO Linear Receivers Based on Lattice Reduction

    A new architecture called the integer-forcing (IF) linear receiver has recently been proposed for multiple-input multiple-output (MIMO) fading channels, wherein an appropriate integer linear combination of the received symbols has to be computed as part of the decoding process. In this paper, we propose a method based on the Hermite-Korkine-Zolotareff (HKZ) and Minkowski lattice basis reduction algorithms to obtain the integer coefficients for the IF receiver. We show that the proposed method provides a lower bound on the ergodic rate and achieves full receive diversity. The suitability of the complex Lenstra-Lenstra-Lovasz (CLLL) lattice reduction algorithm for solving the problem is also investigated. Furthermore, we establish the connection between the proposed IF linear receivers and lattice reduction-aided MIMO detectors (of equivalent complexity), and point out the advantages of the former class of receivers over the latter. For $2 \times 2$ and $4 \times 4$ MIMO channels, we compare the coded-block error rate and bit error rate of the proposed approach with those of other linear receivers. Simulation results show that the proposed approach outperforms the zero-forcing (ZF) receiver, the minimum mean square error (MMSE) receiver, and the lattice reduction-aided MIMO detectors.
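
    As a rough sketch of the step the paper is concerned with, choosing the integer coefficients (a generic illustration with a real-valued toy channel, not the HKZ/Minkowski procedure of the paper): in the standard integer-forcing formulation, good coefficient vectors a for channel H at signal-to-noise ratio rho are short vectors of the lattice with Gram matrix (I + rho H^T H)^{-1}, and the per-stream computation rate follows from the resulting quadratic form. A brute-force search stands in for the lattice-reduction step.

        import itertools
        import numpy as np

        def if_coefficients(H, snr, bound=3):
            # Integer-forcing coefficient selection: good rows a minimise the quadratic
            # form a^T (I + snr * H^T H)^{-1} a. Exhaustive search over small integers
            # stands in here for the HKZ/Minkowski (or CLLL) reduction step.
            nt = H.shape[1]
            G = np.linalg.inv(np.eye(nt) + snr * H.T @ H)   # Gram matrix of the search lattice
            candidates = sorted(
                (np.array(a) for a in itertools.product(range(-bound, bound + 1), repeat=nt)
                 if any(a)),
                key=lambda a: a @ G @ a)
            A = []                      # keep the shortest linearly independent vectors
            for a in candidates:
                if np.linalg.matrix_rank(np.array(A + [a])) == len(A) + 1:
                    A.append(a)
                if len(A) == nt:
                    break
            A = np.array(A)
            rates = [0.5 * np.log2(1.0 / (a @ G @ a)) for a in A]   # per-stream rates (bits)
            return A, rates

        rng = np.random.default_rng(0)
        H = rng.standard_normal((2, 2))          # toy 2x2 real-valued channel realisation
        A, rates = if_coefficients(H, snr=100.0)
        print(A)        # full-rank integer coefficient matrix for the IF receiver
        print(rates)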

    Tensor-based trapdoors for CVP and their application to public key cryptography

    We propose two trapdoors for the closest vector problem in lattices (CVP), related to the lattice tensor product. Using these trapdoors, we set up a lattice-based cryptosystem which resembles the McEliece scheme.
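
    The central object here, the tensor product of two lattices, is easy to write down explicitly; as a generic illustration with toy bases (not the paper's trapdoor construction), a basis of the tensor-product lattice is the Kronecker product of the two bases:

        import numpy as np

        # Toy bases (rows are basis vectors); these are illustrative, not the
        # trapdoor lattices of the paper.
        B1 = np.array([[2, 1],
                       [0, 3]])
        B2 = np.array([[1, 1],
                       [0, 2]])

        # A basis of the tensor-product lattice is the Kronecker product of the
        # two bases; its rank is rank(L1) * rank(L2).
        B_tensor = np.kron(B1, B2)
        print(B_tensor.shape)    # (4, 4)

        # Determinants multiply accordingly: det = det(B1)^n2 * det(B2)^n1.
        d1, d2 = abs(np.linalg.det(B1)), abs(np.linalg.det(B2))
        print(np.isclose(abs(np.linalg.det(B_tensor)), d1**2 * d2**2))   # True (n1 = n2 = 2)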

    Dual lattice attacks for closest vector problems (with preprocessing)

    The dual attack has long been considered a relevant attack on lattice-based cryptographic schemes relying on the hardness of learning with errors (LWE) and its structured variants. As solving LWE corresponds to finding a nearest point on a lattice, one may naturally wonder how efficient this dual approach is for solving more general closest vector problems, such as the classical closest vector problem (CVP), the variants bounded distance decoding (BDD) and approximate CVP, and preprocessing versions of these problems. While primal, sieving-based solutions to these problems (with preprocessing) were recently studied in a series of works on approximate Voronoi cells, no such overview exists for the dual attack, especially for problems with preprocessing. With one of the take-away messages of the approximate Voronoi cell line of work being that primal attacks work well for approximate CVP(P) but scale poorly for BDD(P), one may wonder if the dual attack suffers the same drawbacks, or if it is a better method for solving BDD(P). In this work we provide an overview of cost estimates for dual algorithms for solving these "classical" closest lattice vector problems. Heuristically, we expect to solve the search version of average-case CVPP in time and space $2^{0.293d + o(d)}$. For the distinguishing version of average-case CVPP, where we wish to distinguish between random targets and targets planted at distance approximately the Gaussian heuristic from the lattice, we obtain the same complexity in the single-target model, and we obtain query time and space complexities of $2^{0.195d + o(d)}$ in the multi-target setting, where we are given a large number of targets from either target distribution. This suggests an inequivalence between distinguishing and searching, as we do not expect a similar improvement in the multi-target setting to hold for search-CVPP. We analyze three slightly different decoders, both for distinguishing and searching, and experimentally obtain concrete cost estimates for the dual attack in dimensions 50 to 80, which confirm our heuristic assumptions and show that the hidden order terms in the asymptotic estimates are quite small. Our main take-away message is that the dual attack appears to mirror the approximate Voronoi cell line of work: whereas using approximate Voronoi cells works well for approximate CVP(P) but scales poorly for BDD(P), the dual approach scales well for BDD(P) instances but performs poorly on approximate CVP(P).
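
    To illustrate the distinguishing flavour of the dual attack in its simplest textbook form (a generic sketch, not one of the three decoders analysed in the paper): short vectors w of the dual lattice yield a score, the average of cos(2 pi <w, t>), that stays close to 1 for targets planted near the lattice and fluctuates around 0 for random targets. A brute-force search for short dual vectors stands in for the sieving/preprocessing step, and the toy basis and noise level below are assumptions.

        import itertools
        import numpy as np

        def dual_basis(B):
            # Dual of a full-rank basis B (rows are basis vectors): the rows of
            # inv(B).T have integer inner products with every lattice vector of B.
            return np.linalg.inv(B).T

        def short_dual_vectors(B, bound=2, keep=20):
            # Brute-force stand-in for the preprocessing/sieving step:
            # enumerate dual vectors with small coefficients and keep the shortest.
            D = dual_basis(B)
            vecs = [np.array(c) @ D
                    for c in itertools.product(range(-bound, bound + 1), repeat=B.shape[0])
                    if any(c)]
            vecs.sort(key=np.linalg.norm)
            return vecs[:keep]

        def dual_score(t, dual_vecs):
            # Distinguishing statistic: near 1 for targets close to the lattice,
            # near 0 on average for uniformly random targets.
            return float(np.mean([np.cos(2 * np.pi * (w @ t)) for w in dual_vecs]))

        rng = np.random.default_rng(1)
        B = np.array([[4.0, 1.0], [1.0, 5.0]])
        W = short_dual_vectors(B)

        planted = np.array([2.0, -1.0]) @ B + 0.05 * rng.standard_normal(2)   # BDD-style target
        random_t = rng.uniform(0, 10, size=2)                                 # random target
        print("planted:", dual_score(planted, W))    # close to 1
        print("random :", dual_score(random_t, W))   # typically much smaller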

    Recursive lattice reduction -- A framework for finding short lattice vectors

    We propose a new framework called recursive lattice reduction for finding short non-zero vectors in a lattice or for finding dense sublattices of a lattice. At a high level, the framework works by recursively searching for dense sublattices of dense sublattices (or their duals). Eventually, the procedure encounters a recursive call on a lattice $\mathcal{L}$ with relatively low rank $k$, at which point we simply use a known algorithm to find a short non-zero vector in $\mathcal{L}$. We view our framework as complementary to basis reduction algorithms, which similarly work to reduce an $n$-dimensional lattice problem with some approximation factor $\gamma$ to an exact lattice problem in dimension $k < n$, with a tradeoff between $\gamma$, $n$, and $k$. Our framework provides an alternative and arguably simpler perspective, which in particular can be described without explicitly referencing any specific basis of the lattice, Gram-Schmidt vectors, or even projection (though implementations of algorithms in this framework will likely make use of such things). We present a number of specific instantiations of our framework. Our main concrete result is a reduction that matches the tradeoff between $\gamma$, $n$, and $k$ achieved by the best-known basis reduction algorithms (in terms of the Hermite factor, up to low-order terms) across all parameter regimes. In fact, this reduction can also be used to find dense sublattices of any rank $\ell$ satisfying $\min\{\ell, n-\ell\} \leq n-k+1$, using only an oracle for SVP (or even just Hermite SVP) in $k$ dimensions, which is itself a novel result (as far as the authors know). We also show a very simple reduction that achieves the same tradeoff in quasipolynomial time. Finally, we present an automated approach for searching for algorithms in this framework that (provably) achieve better approximations with fewer oracle calls.