
    Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions

    We study under what conditions the conjectured one-wayness of the knapsack function (with polynomially bounded inputs) over an arbitrary finite abelian group implies that the output of the function is pseudorandom, i.e., computationally indistinguishable from a uniformly chosen group element. Previous work of Impagliazzo and Naor (J. Cryptology 9(4):199-216, 1996) considers only specific families of finite abelian groups and uniformly chosen random \emph{binary} inputs. Our work substantially extends previous results and provides a much more general reduction that applies to arbitrary finite abelian groups and input distributions with polynomially bounded coefficients. As an application of the new result, we give \emph{sample preserving} search-to-decision reductions for the Learning With Errors (LWE) problem, introduced by Regev (J. ACM 56(6):34, 2009) and widely used in lattice-based cryptography.
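
    To make the object concrete, here is a minimal Python sketch (not from the paper) of the knapsack function over the group $G = \mathbb{Z}_q^n$ with polynomially bounded inputs; the parameter values, the choice of group, and the input distribution are illustrative assumptions.

        import random

        # Illustrative parameters: modulus, group rank, input length, input bound.
        q, n, m, B = 257, 8, 32, 4

        # Public description: m uniformly random group elements g_1, ..., g_m in Z_q^n.
        g = [[random.randrange(q) for _ in range(n)] for _ in range(m)]

        def knapsack(x):
            """f_g(x) = sum_i x_i * g_i, computed coordinate-wise mod q."""
            out = [0] * n
            for xi, gi in zip(x, g):
                for j in range(n):
                    out[j] = (out[j] + xi * gi[j]) % q
            return out

        # Bounded input (here uniform over {0, ..., B-1}); pseudorandomness asserts
        # that (g, f_g(x)) is indistinguishable from (g, uniform group element).
        x = [random.randrange(B) for _ in range(m)]
        print(knapsack(x))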

    On the Hardness of Learning With Errors with Binary Secrets

    We give a simple proof that the decisional Learning With Errors (LWE) problem with binary secrets (and an arbitrary polynomial number of samples) is at least as hard as the standard LWE problem (with unrestricted, uniformly random secrets, and a bounded, quasi-linear number of samples). This proves that the binary-secret LWE distribution is pseudorandom, under standard worst-case complexity assumptions on lattice problems. Our results are similar to those proved by Brakerski, Langlois, Peikert, Regev and Stehlé (STOC 2013), but provide a shorter, more direct proof, and a small improvement in the noise growth of the reduction.
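
    The two distributions compared in the reduction can be sketched in a few lines of Python; the rounded-Gaussian error and all parameter values below are illustrative assumptions, not the paper's.

        import random

        q, n, sigma = 3329, 16, 3.0  # illustrative modulus, dimension, noise width

        def lwe_samples(secret, num):
            """Return num samples (a, <a, s> + e mod q) with rounded-Gaussian error."""
            out = []
            for _ in range(num):
                a = [random.randrange(q) for _ in range(n)]
                e = round(random.gauss(0, sigma))
                b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
                out.append((a, b))
            return out

        s_uniform = [random.randrange(q) for _ in range(n)]  # standard LWE secret
        s_binary = [random.randrange(2) for _ in range(n)]   # binary-secret variant
        print(lwe_samples(s_binary, 2))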

    Circuit-ABE from LWE: Unbounded Attributes and Semi-adaptive Security

    We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth $d$ circuit can have as many as $2^d$ inputs). This is in contrast to previous LWE-based schemes where the length of the public parameters has to grow linearly with the maximal attribute length. We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the “complexity leveraging” technique is not applicable for unbounded attributes.) We believe that our techniques are of at least as much interest as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.
    Funding: National Science Foundation (U.S.) (Award CNS-1350619); National Science Foundation (U.S.) (Grant CNS-1413964); United States-Israel Binational Science Foundation (Grant 712307).

    On the Hardness of Learning with Rounding over Small Modulus

    We show the following reductions from the learning with errors problem (LWE) to the learning with rounding problem (LWR): (1) learning the secret and (2) distinguishing samples from random strings are each at least as hard for LWR as they are for LWE for efficient algorithms, provided the number of samples is no larger than $O(q/(Bp))$, where $q$ is the LWR modulus, $p$ is the rounding modulus, and the noise is sampled from any distribution supported over the set $\{-B,\dots,B\}$. Our second result generalizes a theorem of Alwen, Krenn, Pietrzak and Wichs (CRYPTO 2013) and provides an alternate proof of it. Unlike Alwen et al., we do not impose any number-theoretic restrictions on the modulus $q$. The first result also extends to variants of LWR and LWE over polynomial rings. As additional results we show that (3) distinguishing any number of LWR samples from random strings is of equivalent hardness to LWE whose noise distribution is uniform over the integers in the range $[-q/(2p), q/(2p))$ provided $q$ is a multiple of $p$, and (4) the noise flooding technique for converting faulty LWE noise to a discrete Gaussian distribution can be applied whenever $q = \Omega(B\sqrt{m})$. All our reductions preserve sample complexity and have time complexity at most polynomial in $q$, the dimension, and the number of samples.
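
    As a rough illustration of the rounding map underlying LWR, the Python sketch below shows why an error of magnitude at most $B$ is usually absorbed by rounding from $\mathbb{Z}_q$ down to $\mathbb{Z}_p$ (it survives only near a rounding boundary); all parameters are illustrative.

        import random

        q, p, B, n = 1024, 8, 4, 16  # illustrative LWR modulus, rounding modulus, noise bound

        def round_qp(v):
            """The LWR rounding map: round(v * p / q) mod p."""
            return ((v * p + q // 2) // q) % p

        s = [random.randrange(q) for _ in range(n)]
        a = [random.randrange(q) for _ in range(n)]
        inner = sum(ai * si for ai, si in zip(a, s)) % q

        e = random.randint(-B, B)  # bounded LWE-style error
        # The two rounded values agree unless <a, s> lies near a boundary.
        print(round_qp(inner), round_qp((inner + e) % q))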

    An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices

    In this paper, we study the Learning With Errors problem and its binary variant, where secrets and errors are binary or taken in a small interval. We introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving, within half a day, an LWE instance with dimension $n = 128$, modulus $q = n^2$, Gaussian noise $\alpha = 1/(\sqrt{n/\pi} \log^2 n)$ and binary secret, using $2^{28}$ samples, while the previous best result based on BKW claims a time complexity of $2^{74}$ with $2^{60}$ samples for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with $n$ samples in subexponential time $2^{(\ln 2/2+o(1))n/\log \log n}$. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea by Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time for density $o(1)$, which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters proposed at TCC 2010 for a cryptosystem based on this problem.
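
    The modulus-switching step that the quantization technique generalizes can be sketched as follows (a simplified illustration, not the paper's algorithm; parameters are made up): an LWE sample mod $q$ is rescaled to a smaller modulus at the cost of a small extra rounding error.

        import random

        q, q_new, n = 16384, 512, 16  # illustrative old and new moduli, dimension

        def switch_modulus(a, b):
            """Rescale a sample (a, b) mod q to a sample mod q_new."""
            a2 = [round(ai * q_new / q) % q_new for ai in a]
            b2 = round(b * q_new / q) % q_new
            return a2, b2

        a = [random.randrange(q) for _ in range(n)]
        b = random.randrange(q)
        print(switch_modulus(a, b))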

    Quantum Search-to-Decision Reduction for the LWE Problem

    The learning with errors (LWE) problem is one of the fundamental problems in cryptography, with many applications in post-quantum cryptography. There are two variants of the problem: the decisional-LWE problem and the search-LWE problem. An LWE search-to-decision reduction shows that the hardness of the search-LWE problem can be reduced to the hardness of the decisional-LWE problem. We initiate a study of quantum search-to-decision reductions for the LWE problem and propose a reduction that is sample-preserving, i.e., it preserves all parameters, even the number of instances. Notably, our quantum reduction invokes the distinguisher only twice to solve the search-LWE problem, while classical reductions require a polynomial number of invocations. Furthermore, we give a way to amplify the success probability of the reduction algorithm. Our amplified reduction works with fewer LWE samples than the classical reduction that has a high success probability. Our reduction algorithm supports a wide class of error distributions and also provides a search-to-decision reduction for the learning parity with noise problem. In the process of constructing the search-to-decision reduction, we give a quantum Goldreich-Levin theorem over $\mathbb{Z}_q$ where $q$ is prime. In short, this theorem states that, if a hardcore predicate $a \cdot s \pmod q$ can be predicted with probability distinctly greater than $1/q$ with respect to a uniformly random $a \in \mathbb{Z}_q^n$, then it is possible to determine $s \in \mathbb{Z}_q^n$.
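
    The role of the hardcore predicate is easiest to see in the noiseless case. The sketch below (a classical toy illustration, not the quantum reduction; the secret and parameters are made up) shows that a perfect predictor for $a \cdot s \pmod q$ immediately yields $s$ via standard basis queries; the point of the Goldreich-Levin theorem is to handle predictors whose advantage is only slightly above $1/q$.

        q, n = 7, 5
        s = [3, 1, 4, 1, 5]  # hypothetical secret in Z_q^n

        def predictor(a):
            """Stand-in for a perfect predictor of a . s (mod q)."""
            return sum(ai * si for ai, si in zip(a, s)) % q

        recovered = []
        for i in range(n):
            e_i = [1 if j == i else 0 for j in range(n)]  # i-th standard basis vector
            recovered.append(predictor(e_i))              # e_i . s = s_i (mod q)

        assert recovered == s
        print(recovered)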

    Finding Significant Fourier Coefficients: Clarifications, Simplifications, Applications and Limitations

    Ideas from Fourier analysis have been used in cryptography for the last three decades. Akavia, Goldwasser and Safra unified some of these ideas to give a complete algorithm that finds significant Fourier coefficients of functions on any finite abelian group. Their algorithm stimulated a lot of interest in the cryptography community, especially in the context of 'bit security'. This manuscript attempts to be a friendly and comprehensive guide to the tools and results in this field. The intended readership is cryptographers who have heard about these tools and seek an understanding of their mechanics and their usefulness and limitations. A compact overview of the algorithm is presented with emphasis on the ideas behind it. We show how these ideas can be extended to a 'modulus-switching' variant of the algorithm. We survey some applications of this algorithm, and explain that several results should be taken in the right context. In particular, we point out that some of the most important bit security problems are still open. Our original contributions include: a discussion of the limitations on the usefulness of these tools; an answer to an open question about the modular inversion hidden number problem.
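
    For orientation, the task the Akavia-Goldwasser-Safra algorithm solves in sublinear time can be stated via the brute-force baseline below (an illustrative Python sketch over $\mathbb{Z}_N$; the example function and threshold are made up): compute every Fourier coefficient and keep those above a significance threshold.

        import cmath

        N, tau = 64, 0.2  # illustrative group size and significance threshold

        def f(x):
            # Example signal: one dominant character plus a small perturbation.
            return cmath.exp(2j * cmath.pi * 5 * x / N) + 0.1 * ((-1) ** x)

        def fourier_coefficient(alpha):
            """f_hat(alpha) = (1/N) * sum_x f(x) * exp(-2*pi*i*alpha*x/N)."""
            return sum(f(x) * cmath.exp(-2j * cmath.pi * alpha * x / N)
                       for x in range(N)) / N

        significant = [a for a in range(N) if abs(fourier_coefficient(a)) >= tau]
        print(significant)  # expected: [5], the dominant frequency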

    Revisiting the Hardness of Binary Error LWE

    Binary error LWE is the particular case of the learning with errors (LWE) problem in which errors are chosen in $\{0,1\}$. It has various cryptographic applications, and in particular, has been used to construct efficient encryption schemes for use in constrained devices. Arora and Ge showed that the problem can be solved in polynomial time given a number of samples quadratic in the dimension $n$. On the other hand, the problem is known to be as hard as standard LWE given only slightly more than $n$ samples. In this paper, we first examine more generally how the hardness of the problem varies with the number of available samples. Under standard heuristics on the Arora-Ge polynomial system, we show that, for any $\epsilon > 0$, binary error LWE can be solved in polynomial time $n^{O(1/\epsilon)}$ given $\epsilon \cdot n^{2}$ samples. Similarly, it can be solved in subexponential time $2^{\tilde O(n^{1-\alpha})}$ given $n^{1+\alpha}$ samples, for $0 < \alpha < 1$. As a second contribution, we also generalize the binary error LWE problem to the case of a non-uniform error probability, and analyze the hardness of non-uniform binary error LWE with respect to the error rate and the number of available samples. We show that, for any error rate $0 < p < 1$, non-uniform binary error LWE is also as hard as worst-case lattice problems provided that the number of samples is suitably restricted. This is a generalization of Micciancio and Peikert's hardness proof for uniform binary error LWE. Furthermore, we also discuss attacks on the problem when the number of available samples is linear but significantly larger than $n$, and show that for sufficiently low error rates, subexponential or even polynomial time attacks are possible.
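
    The Arora-Ge approach referenced above linearizes each sample. With errors in $\{0,1\}$, a sample $(a, b)$ with $b = \langle a, s \rangle + e \pmod q$ satisfies $(b - \langle a, s \rangle)(b - \langle a, s \rangle - 1) \equiv 0 \pmod q$, which is linear in the monomials $s_i$ and $s_i s_j$. The Python sketch below (illustrative; the linear-algebra solving step over roughly $n^2/2$ monomials is omitted) builds one row of that linearized system.

        from itertools import combinations_with_replacement

        q, n = 97, 4  # illustrative modulus and dimension

        def linearized_row(a, b):
            """Coefficients of (b - <a,s>)(b - <a,s> - 1) mod q over the
            monomial basis: () -> 1, (i,) -> s_i, (i, j) -> s_i * s_j."""
            row = {(): (b * (b - 1)) % q}                # constant term
            for i, ai in enumerate(a):                   # linear terms
                row[(i,)] = (-ai * (2 * b - 1)) % q
            for i, j in combinations_with_replacement(range(n), 2):
                c = a[i] * a[j] if i == j else 2 * a[i] * a[j]
                row[(i, j)] = c % q                      # quadratic terms
            return row

        print(linearized_row([3, 1, 4, 1], 5))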

    Hardness of SIS and LWE with Small Parameters

    The Short Integer Solution (SIS) and Learning With Errors (LWE) problems are the foundations for countless applications in lattice-based cryptography, and are provably as hard as approximate lattice problems in the worst case. An important question from both a practical and theoretical perspective is how small their parameters can be made, while preserving their hardness. We prove two main results on SIS and LWE with small parameters. For SIS, we show that the problem retains its hardness for moduli $q \geq \beta \cdot n^{\delta}$ for any constant $\delta > 0$, where $\beta$ is the bound on the Euclidean norm of the solution. This improves upon prior results which required $q \geq \beta \cdot \sqrt{n \log n}$, and is essentially optimal since the problem is trivially easy for $q \leq \beta$. For LWE, we show that it remains hard even when the errors are small (e.g., uniformly random from $\{0,1\}$), provided that the number of samples is small enough (e.g., linear in the dimension $n$ of the LWE secret). Prior results required the errors to have magnitude at least $\sqrt{n}$ and to come from a Gaussian-like distribution.
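
    The remark that SIS is trivially easy for $q \leq \beta$ has a one-line witness, sketched below in Python with made-up parameters: $z = (q, 0, \dots, 0)$ is a nonzero solution of norm exactly $q$, since $Az \equiv 0 \pmod q$ for every matrix $A$.

        import math
        import random

        q, n, m, beta = 64, 8, 32, 64.0  # illustrative parameters with q <= beta

        A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

        z = [q] + [0] * (m - 1)  # the trivial short solution
        Az = [sum(row[j] * z[j] for j in range(m)) % q for row in A]
        norm = math.sqrt(sum(zi * zi for zi in z))
        print(Az == [0] * n and norm <= beta)  # True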