    The closest vector problem in tensored root lattices of type A and in their duals

    In this work we consider the closest vector problem (CVP), a problem also known as maximum-likelihood decoding, in the tensor of two root lattices of type A ($A_m \otimes A_n$), as well as in their duals ($A^*_m \otimes A^*_n$). This problem is mainly motivated by lattice-based cryptography, where the cyclotomic rings $\mathbb{Z}[\zeta_c]$ (resp. their co-different $\mathbb{Z}[\zeta_c]^\vee$) play a central role, and turn out to be isomorphic as lattices to tensors of $A^*$ lattices (resp. $A$ root lattices). In particular, our results lead to solving CVP in $\mathbb{Z}[\zeta_c]$ and in $\mathbb{Z}[\zeta_c]^\vee$ for conductors of the form $c = 2^\alpha p^\beta q^\gamma$ for any two odd primes $p, q$. For the primal case $A_m \otimes A_n$, we provide a full characterization of the Voronoi region in terms of simple cycles in the complete directed bipartite graph $K_{m+1,n+1}$. This leads, relying on the Bellman-Ford algorithm for negative cycle detection, to a CVP algorithm running in polynomial time. Precisely, our algorithm performs $O(l\, m^2 n^2 \min\{m,n\})$ operations on reals, where $l$ is the number of bits per coordinate of the input target. For the dual case, we use a gluing construction to solve CVP in sub-exponential time $O(n m^{n+1})$.
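
    Below is a minimal Python sketch of the generic subroutine named above, Bellman-Ford negative-cycle detection. The reduction that turns a CVP instance in $A_m \otimes A_n$ into edge weights on $K_{m+1,n+1}$ is the paper's contribution and is not reproduced here; `has_negative_cycle` and its `edges` input are hypothetical illustrations.

```python
# Generic Bellman-Ford negative-cycle detection: the subroutine the
# abstract's polynomial-time CVP algorithm relies on. How a CVP instance
# produces the weighted graph is the paper's result and is not shown;
# `edges` is a hypothetical input.
def has_negative_cycle(num_vertices, edges):
    """edges: iterable of (u, v, weight) with u, v in 0..num_vertices-1.

    Returns True iff some directed cycle has negative total weight.
    """
    # Distances start at 0, as if a virtual source had 0-weight edges
    # to every vertex, so every negative cycle is reachable.
    dist = [0.0] * num_vertices
    # Shortest simple paths use at most num_vertices - 1 edges, so
    # relax every edge that many times.
    for _ in range(num_vertices - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:        # stable early: no negative cycle exists
            return False
    # If one more round of relaxation still improves a distance,
    # a negative cycle must exist.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Example: the 2-cycle 0 -> 1 -> 0 has total weight -1.
print(has_negative_cycle(2, [(0, 1, 2.0), (1, 0, -3.0)]))  # True
```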

    Lattice-based locality sensitive hashing is optimal

    Locality sensitive hashing (LSH) was introduced by Indyk and Motwani (STOC ‘98) to give the first sublinear time algorithm for the c-approximate nearest neighbor (ANN) problem using only polynomial space. At a high level, an LSH family hashes “nearby” points to the same bucket and “far away” points to different buckets. The quality measure of an LSH family is its LSH exponent, which helps determine both query time and space usage. In a seminal work, Andoni and Indyk (FOCS ‘06) constructed an LSH family based on random ball partitionings of space that achieves an LSH exponent of $1/c^2$ for the $\ell_2$ norm, which was later shown to be optimal by Motwani, Naor and Panigrahy (SIDMA ‘07) and O’Donnell, Wu and Zhou (TOCT ‘14). Although optimal in the LSH exponent, the ball partitioning approach is computationally expensive. So, in the same work, Andoni and Indyk proposed a simpler and more practical hashing scheme based on Euclidean lattices and provided computational results using the 24-dimensional Leech lattice. However, no theoretical analysis of the scheme was given, thus leaving open the question of finding the exponent of lattice-based LSH. In this work, we resolve this question by showing the existence of lattices achieving the optimal LSH exponent of $1/c^2$ using techniques from the geometry of numbers. At a more conceptual level, our results show that optimal LSH space partitions can have periodic structure. Understanding the extent to which additional structure can be imposed on these partitions, e.g., to yield low space and query complexity, remains an important open problem.
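
    As an illustration of the hashing scheme's shape, here is a minimal Python sketch in which coordinate-wise rounding (the $\mathbb{Z}^n$ lattice) stands in for the Leech lattice decoder; `make_lattice_hash`, the scale, and the random shift are illustrative choices, not the construction analyzed in this work.

```python
# Lattice LSH sketch: hash a point to the nearest point of a randomly
# shifted, rescaled lattice. Z^dim (coordinate-wise rounding) stands in
# for the Leech lattice, whose decoder is far more involved; `scale`
# and the random shift are illustrative, not the paper's parameters.
import numpy as np

def make_lattice_hash(dim, scale, rng):
    shift = rng.uniform(0.0, scale, size=dim)   # random translation
    def h(x):
        # Decode (x + shift)/scale to the nearest Z^dim point; nearby
        # inputs tend to share a lattice point, i.e., a hash bucket.
        return tuple(np.rint((np.asarray(x) + shift) / scale).astype(int))
    return h

rng = np.random.default_rng(0)
h = make_lattice_hash(dim=3, scale=1.0, rng=rng)
print(h([0.10, 0.20, 0.30]))   # two nearby points will usually
print(h([0.12, 0.19, 0.33]))   # fall in the same bucket
```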

    Error Correction and Ciphertext Quantization in Lattice Cryptography

    Recent work in the design of rate $1 - o(1)$ lattice-based cryptosystems has used two distinct design paradigms, namely replacing the noise-tolerant encoding $m \mapsto (q/2)m$ present in many lattice-based cryptosystems with a more efficient encoding, and post-processing traditional lattice-based ciphertexts with a lossy compression algorithm, using a technique very similar to the technique of “vector quantization” within coding theory. We introduce a framework for the design of lattice-based encryption that captures both of these paradigms, and prove information-theoretic rate bounds within this framework. These bounds separate the settings of trivial and non-trivial quantization, and show the impossibility of rate $1 - o(1)$ encryption using both trivial quantization and a polynomial modulus. They furthermore put strong limits on the rate of constructions that utilize lattices built by tensoring a lattice of small dimension with $\mathbb{Z}^k$, which is ubiquitous in the literature. We additionally introduce a new cryptosystem that matches the rate of the highest-rate currently known scheme, while encoding messages with a “gadget”, which may be useful for constructions of Fully Homomorphic Encryption.
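
    For concreteness, the noise-tolerant encoding $m \mapsto (q/2)m$ mentioned above can be sketched in a few lines: a bit is placed at $0$ or $q/2$ modulo $q$, and decoding rounds to the nearer of the two, correcting any noise of magnitude below $q/4$. The modulus $q = 3329$ below is illustrative, not a parameter fixed by the paper.

```python
# The noise-tolerant encoding m -> (q/2)*m from the abstract: a bit sits
# at 0 or q/2 (mod q), and decoding picks the nearer of the two, so any
# noise of magnitude < q/4 is corrected. q = 3329 is illustrative only.
q = 3329

def encode_bit(m):                 # m in {0, 1}
    return ((q // 2) * m) % q

def decode_bit(c):                 # c in Z_q, possibly noisy
    # c decodes to 1 exactly when it lies nearer to q/2 than to 0.
    return 1 if q // 4 <= c % q < 3 * q // 4 else 0

assert decode_bit((encode_bit(1) + 500) % q) == 1   # noise 500 < q/4
assert decode_bit((encode_bit(0) - 700) % q) == 0   # noise 700 < q/4
```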

    Cryptographic decoding of the Leech lattice

    Advancements in quantum computing have spurred the development of new asymmetric cryptographic primitives that are conjectured to be secure against quantum attackers. One promising class of these primitives is based on lattices, leading to encryption protocols based on the Learning With Errors (LWE) problem. Key exchange algorithms based on this problem are computationally efficient and enjoy a strong worst-case hardness guarantee. However, despite recent improvements, the resulting handshake sizes are still significantly larger than those in use today. This thesis looks at the possibility of applying the Leech lattice code to one such scheme, with the goal of decreasing the size of the resulting handshake. We also look at the feasibility of a cryptographically safe implementation of a Leech lattice decoder (available at https://github.com/avanpo/leech-decoding), and the resulting impact on efficiency.
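
    One requirement behind a cryptographically safe decoder is that control flow and memory access must not depend on secret data. The Python sketch below shows a generic branchless selection idiom of the kind such a decoder composes; it is an assumed illustration (in practice fixed-width C integers would be used), not code from the linked repository.

```python
# Branchless selection: the kind of primitive a constant-time decoder
# is built from, so that timing does not leak secrets. A generic idiom
# over 32-bit values, not code from the linked leech-decoding repo;
# Python integers only stand in for fixed-width C integers.
MASK32 = 0xFFFFFFFF

def ct_select(bit, a, b):
    """Return a if bit == 1 else b, without branching on bit."""
    mask = (-bit) & MASK32                 # all-ones if bit == 1, else 0
    return (a & mask) | (b & ~mask & MASK32)

def ct_min_index(d0, d1):
    """Index of the smaller 32-bit distance, computed without a branch."""
    lt = ((d0 - d1) >> 31) & 1             # sign bit: 1 iff d0 < d1
    return ct_select(lt, 0, 1)

assert ct_min_index(5, 9) == 0
assert ct_min_index(9, 5) == 1
```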

    Gauge Theory, Ramification, And The Geometric Langlands Program

    In the gauge theory approach to the geometric Langlands program, ramification can be described in terms of “surface operators,” which are supported on two-dimensional surfaces somewhat as Wilson or ’t Hooft operators are supported on curves. We describe the relevant surface operators in $\mathcal{N}=4$ super Yang-Mills theory, and the parameters they depend on, and analyze how S-duality acts on these parameters. Then, after compactifying on a Riemann surface, we show that the hypothesis of S-duality for surface operators leads to a natural extension of the geometric Langlands program for the case of tame ramification. The construction involves an action of the affine Weyl group on the cohomology of the moduli space of Higgs bundles with ramification, and an action of the affine braid group on A-branes or B-branes on this space.