
    Sampling From Arbitrary Centered Discrete Gaussians For Lattice-Based Cryptography

    Non-centered discrete Gaussian sampling is a fundamental building block in many lattice-based constructions in cryptography, such as signature and identity-based encryption schemes. On the one hand, center-dependent approaches, e.g. cumulative distribution tables (CDT), Knuth-Yao, the alias method, discrete Ziggurat and their variants, are the fastest known algorithms for sampling from a discrete Gaussian distribution. However, they require a relatively large precomputed table for each possible real center in [0,1), making them impractical for non-centered discrete Gaussian sampling. On the other hand, rejection sampling can sample from a discrete Gaussian distribution for any real center without prohibitive precomputation cost, but it needs costly floating-point arithmetic and several trials per sample. In this work, we study how to reduce the number of centers for which tables must be precomputed, and propose a non-centered CDT algorithm with a practical precomputed-table size that is as fast as its centered variant. Finally, we provide experimental results for our open-source C++ implementation indicating that our sampler increases the rate of Peikert's algorithm for sampling from arbitrary lattices (and cosets) by a factor of 3, with precomputation storage of up to 6.2 MB.
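
    To make the table-based approach concrete, the sketch below is a minimal centered CDT sampler in Python: one cumulative table is built for a single fixed center, and sampling is a binary search over it, which is precisely the per-center cost the paper's non-centered variant removes. The tail cut at tau*sigma, the use of double-precision floats instead of fixed-point probabilities, and the parameter values are simplifying assumptions, not the paper's construction.

```python
import bisect
import math
import random

def build_cdt(sigma, center=0.0, tau=10):
    """Precompute a cumulative distribution table for D_{Z,sigma,center},
    tail-cut to |k - center| <= tau*sigma (double floats, illustration only)."""
    lo = math.floor(center - tau * sigma)
    hi = math.ceil(center + tau * sigma)
    support = list(range(lo, hi + 1))
    weights = [math.exp(-((k - center) ** 2) / (2 * sigma ** 2)) for k in support]
    total = sum(weights)
    cdt, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdt.append(acc)
    cdt[-1] = 1.0                      # guard against floating-point shortfall
    return support, cdt

def sample_cdt(support, cdt):
    """Draw one sample by binary search over the precomputed table."""
    u = random.random()
    return support[bisect.bisect_right(cdt, u)]

# One table per (center, sigma) pair -- the cost a non-centered sampler must avoid.
support, cdt = build_cdt(sigma=4.0, center=0.0)
print([sample_cdt(support, cdt) for _ in range(10)])
```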

    Markov Chain Monte Carlo Algorithms for Lattice Gaussian Sampling

    Sampling from a lattice Gaussian distribution is emerging as an important problem in various areas such as coding and cryptography. The default sampling algorithm, Klein's algorithm, yields a distribution close to the lattice Gaussian only if the standard deviation is sufficiently large. In this paper, we propose the Markov chain Monte Carlo (MCMC) method for lattice Gaussian sampling when this condition is not satisfied. In particular, we present a sampling algorithm based on Gibbs sampling, which converges to the target lattice Gaussian distribution for any value of the standard deviation. To improve the convergence rate, a more efficient algorithm referred to as Gibbs-Klein sampling is proposed, which samples block by block using Klein's algorithm. We show that Gibbs-Klein sampling yields a distribution close to the target lattice Gaussian under a less stringent condition than that of the original Klein algorithm.
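
    To illustrate the Gibbs approach, the sketch below runs a plain coordinate-wise Gibbs scan for a lattice Gaussian in Python: writing x = Bz, the conditional of each coefficient z_i given the others is a one-dimensional discrete Gaussian, sampled here by simple rejection. The paper's Gibbs-Klein variant instead updates whole blocks with Klein's algorithm; the rejection subroutine, tail-cut parameter, iteration count and toy basis are illustrative assumptions.

```python
import math
import random

import numpy as np

def sample_dgauss_1d(center, sigma, tau=10):
    """Rejection-sample D_{Z,sigma,center}: uniform integer proposal on a
    tau*sigma tail cut, accepted with the (unnormalised) Gaussian weight."""
    lo, hi = math.floor(center - tau * sigma), math.ceil(center + tau * sigma)
    while True:
        k = random.randint(lo, hi)
        if random.random() < math.exp(-((k - center) ** 2) / (2 * sigma ** 2)):
            return k

def gibbs_lattice_gaussian(B, c, sigma, n_iter=200):
    """Coordinate-wise Gibbs scan targeting D_{Lambda(B),sigma,c}.
    Each conditional on z_i is a 1-D discrete Gaussian; the Gibbs-Klein
    variant of the paper updates whole blocks with Klein's sampler instead."""
    B = np.asarray(B, dtype=float)
    n = B.shape[1]
    z = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        for i in range(n):
            b_i = B[:, i]
            residual = c - B @ z + b_i * z[i]          # target minus the other coordinates
            center = float(b_i @ residual) / float(b_i @ b_i)
            z[i] = sample_dgauss_1d(center, sigma / np.linalg.norm(b_i))
    return B @ z

B = np.array([[2.0, 1.0], [0.0, 3.0]])
print(gibbs_lattice_gaussian(B, c=np.array([0.5, -1.0]), sigma=3.0))
```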

    Attacks on the Search-RLWE problem with small errors

    The Ring Learning-With-Errors (RLWE) problem shows great promise for post-quantum cryptography and homomorphic encryption. We describe a new attack on the non-dual search RLWE problem with small error widths, using ring homomorphisms to finite fields and the chi-squared statistical test. In particular, we identify a "subfield vulnerability" (Section 5.2) and give a new attack which finds this vulnerability by mapping to a finite field extension and detecting non-uniformity with respect to the number of elements in the subfield. We use this attack to give examples of vulnerable RLWE instances in Galois number fields. We also extend the well-known search-to-decision reduction result to Galois fields with any unramified prime modulus $q$, regardless of the residue degree $f$ of $q$, and we use this in our attacks. The time complexity of our attack is $O(n q^{2f})$, where $n$ is the degree of the number field $K$ and $f$ is the residue degree of $q$ in $K$. We also show an attack on the non-dual (resp. dual) RLWE problem with narrow error distributions in prime cyclotomic rings when the modulus is a ramified prime (resp. any integer). We demonstrate the attacks in practice by finding many vulnerable instances and successfully attacking them. We include the code for all attacks.
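
    The statistical core of such attacks is a chi-squared test for non-uniformity. The sketch below implements only that generic test (NumPy/SciPy); the ring homomorphism into a finite field or subfield, where the attack itself lives, is out of scope, and the bin count and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def chi_squared_uniformity_test(observations, num_bins, alpha=0.001):
    """Generic chi-squared test for uniformity over `num_bins` categories.
    The RLWE attack applies such a test to errors mapped into a finite
    field (or counted per subfield coset); that mapping is not shown here."""
    counts = np.bincount(observations, minlength=num_bins)
    expected = len(observations) / num_bins
    statistic = float(np.sum((counts - expected) ** 2 / expected))
    p_value = chi2.sf(statistic, df=num_bins - 1)
    return statistic, p_value, p_value < alpha   # True => looks non-uniform

# Uniform data should (almost always) pass; skewed data should be flagged.
rng = np.random.default_rng(0)
uniform = rng.integers(0, 17, size=20000)
skewed = np.concatenate([uniform, np.zeros(2000, dtype=int)])
print(chi_squared_uniformity_test(uniform, 17))
print(chi_squared_uniformity_test(skewed, 17))
```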

    An erf Analog for Discrete Gaussian Sampling

    Most current lattice-based cryptosystems rely on finding Gaussian samples from a lattice that are close to a given target. To that end, two popular distributions have historically been defined and studied: the rounded Gaussian distribution and the discrete Gaussian distribution. The first is nearly trivial to sample: simply round the coordinates of continuous Gaussian samples to the nearest integers. Unfortunately, the security of the resulting cryptosystems is not as well understood. In contrast, the second distribution is only implicitly defined, by restricting the support of the continuous Gaussian distribution to the discrete lattice points. Thus, algorithms to sample from this distribution are more involved, even in dimension one. The justification for exerting this computational effort is that the resulting lattice-based cryptographic schemes are validated by rigorous security proofs, often leveraging the facts that the distribution is radial and that discrete Gaussians behave well under convolutions, enabling arithmetic between samples as well as decomposition across dimensions. In this work, we unify both worlds. We construct, out of infinite series, the cumulative distribution function of a new continuous distribution that acts as a surrogate for the cumulative distribution of the discrete Gaussian. If $\mu$ is a center and $x$ a sample from this distribution, then rounding $\mu+x$ yields a faithful discrete Gaussian sample. This new sampling algorithm naturally splits into a pre-processing/offline phase and a very efficient online phase. The online phase is simple and has a trivial constant-time implementation. Modulo the offline phase, our algorithm offers both the efficiency of rounding and the security guarantees associated with discrete Gaussian sampling.
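
    The appeal of the construction is that the online step is only an addition and a rounding. The sketch below shows that offline/online split in Python, with the precomputed pool drawn from a plain normal distribution as a stand-in (which yields only the rounded-Gaussian approximation); the paper's surrogate distribution, built from infinite series so that the rounding becomes an exact discrete Gaussian sample, is not reproduced here.

```python
import random

# Offline phase (sketch): draw continuous samples ahead of time. The paper draws
# them from its surrogate distribution so that the rounding below is an exact
# discrete Gaussian; a plain normal is used here only as a stand-in.
SIGMA = 4.0
POOL = [random.gauss(0.0, SIGMA) for _ in range(1 << 12)]

def online_sample(mu, pool=POOL):
    """Online phase: one table read, one addition, one rounding.
    This is the part that is trivial to implement in constant time."""
    x = pool.pop()            # in practice: read the next precomputed sample
    return round(mu + x)

print([online_sample(0.37) for _ in range(8)])
```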

    Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures

    We show direct and conceptually simple reductions between the classical learning with errors (LWE) problem and its continuous analog, CLWE (Bruna, Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful machinery of LWE-based cryptography to the applications of CLWE. For example, we obtain the hardness of CLWE under the classical worst-case hardness of the gap shortest vector problem. Previously, this was known only under quantum worst-case hardness of lattice problems. More broadly, with our reductions between the two problems, any future developments to LWE will also apply to CLWE and its downstream applications. As a concrete application, we show an improved hardness result for density estimation for mixtures of Gaussians. In this computational problem, given sample access to a mixture of Gaussians, the goal is to output a function that estimates the density function of the mixture. Under the (plausible and widely believed) exponential hardness of the classical LWE problem, we show that Gaussian mixture density estimation in $\mathbb{R}^n$ with roughly $\log n$ Gaussian components given $\mathsf{poly}(n)$ samples requires time quasi-polynomial in $n$. Under the (conservative) polynomial hardness of LWE, we show hardness of density estimation for $n^{\epsilon}$ Gaussians for any constant $\epsilon > 0$, which improves on Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least $\sqrt{n}$ Gaussians under polynomial (quantum) hardness assumptions. Our key technical tool is a reduction from classical LWE to LWE with $k$-sparse secrets where the multiplicative increase in the noise is only $O(\sqrt{k})$, independent of the ambient dimension $n$.
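
    For readers less familiar with CLWE, the sketch below generates toy LWE and CLWE sample sets side by side (NumPy). The CLWE form used, a standard Gaussian vector $y$ paired with $\gamma\langle y, w\rangle + e \bmod 1$ for a secret unit vector $w$, follows the usual statement of Bruna et al. but should be treated as an assumption here; the parameters are arbitrary and no reduction between the two problems is implemented.

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_samples(n=32, m=64, q=3329, sigma=3.2):
    """Toy LWE: (A, b = A s + e mod q) with a small rounded-Gaussian error e."""
    s = rng.integers(0, q, size=n)
    A = rng.integers(0, q, size=(m, n))
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
    b = (A @ s + e) % q
    return A, b, s

def clwe_samples(n=32, m=64, gamma=8.0, beta=0.05):
    """Toy CLWE (continuous analog): (y, z = gamma*<y, w> + e mod 1)
    with a secret unit vector w (formulation assumed, parameters arbitrary)."""
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    y = rng.normal(size=(m, n))
    e = rng.normal(0, beta, size=m)
    z = (gamma * (y @ w) + e) % 1.0
    return y, z, w

A, b, s = lwe_samples()
y, z, w = clwe_samples()
print(A.shape, b.shape, y.shape, z.shape)
```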

    Constant-time discrete Gaussian sampling

    © 2018 IEEE. Sampling from a discrete Gaussian distribution is an indispensable part of lattice-based cryptography. Several recent works have shown that the timing leakage from a non-constant-time implementation of a discrete Gaussian sampling algorithm can be exploited to recover the secret. In this paper, we propose a constant-time implementation of the Knuth-Yao random-walk algorithm for discrete Gaussian sampling. Since the random walk is dictated by a set of input random bits, we can express the generated sample as a function of the input random bits. Hence, our constant-time implementation expresses the unique mapping of the input random bits to the output sample bits as a Boolean expression of the random bits. We use bit-slicing to generate multiple samples in batches and thus increase the throughput of our constant-time sampling many-fold. Our experiments on an Intel i7 Broadwell processor show that our method can be as much as 2.4 times faster than a constant-time implementation of cumulative distribution table (CDT) based sampling, and consumes exponentially less memory than the Knuth-Yao algorithm with shuffling for a similar level of security.
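
    As a small illustration of what constant time means for table-based sampling, the sketch below shows the CDT-style baseline such papers compare against: the whole table is scanned with branch-free arithmetic, so the sequence of operations does not depend on the sampled value. It is not the bit-sliced Knuth-Yao circuit of the paper, the table entries are made up for illustration, and Python itself gives no genuine timing guarantees.

```python
import secrets

# Illustrative 16-bit fixed-point CDT (not taken from the paper); the last
# entry is 2^16 so that every 16-bit value of u selects a valid index.
CDT = [1310, 9011, 29491, 55705, 63570, 65300, 65520, 65536]
SUPPORT = [-3, -2, -1, 0, 1, 2, 3, 4]       # illustrative support

def constant_time_cdt_sample():
    """Scan the whole table and build the output index with branch-free
    arithmetic, so the work done is independent of the sampled value."""
    u = secrets.randbits(16)
    index = 0
    for threshold in CDT:
        index += ((threshold - 1 - u) >> 31) & 1    # adds 1 exactly when u >= threshold
    return SUPPORT[index]

print([constant_time_cdt_sample() for _ in range(10)])
```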

    Isochronous Gaussian Sampling: From Inception to Implementation

    Gaussian sampling over the integers is a crucial tool in lattice-based cryptography, but it has proven over recent years to be surprisingly challenging to perform in a generic, efficient and provably secure manner. In this work, we present a modular framework for generating discrete Gaussians with arbitrary center and standard deviation. Our framework is extremely simple, and it is precisely this simplicity that allowed us to make it easy to implement, provably secure, portable, efficient, and provably resistant against timing attacks. Our sampler is a good candidate for any trapdoor sampling scheme, and it is in fact the one recently implemented in the Falcon signature scheme. Our second contribution aims at systematizing the detection of implementation errors in Gaussian samplers. We provide a statistical testing suite for discrete Gaussians called SAGA (Statistically Acceptable GAussian). In a nutshell, our two contributions take a step towards trustworthy and robust real-world implementations of Gaussian sampling.
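
    In the spirit of a sampler test suite, though this is not SAGA itself and its checks are far less extensive, the sketch below compares the empirical mean and variance of a batch of samples against the intended parameters with loose CLT-style tolerances; the rounded-Gaussian source in the demo is only a stand-in.

```python
import math
import random
import statistics

def check_moments(samples, mu, sigma, num_sigmas=4.0):
    """Crude sanity check for a discrete Gaussian sampler (not SAGA itself):
    compare the empirical mean and variance with the intended parameters."""
    n = len(samples)
    mean = statistics.fmean(samples)
    var = statistics.pvariance(samples)
    mean_tol = num_sigmas * sigma / math.sqrt(n)             # std. error of the mean
    var_tol = num_sigmas * sigma ** 2 * math.sqrt(2.0 / n)   # rough bound for the variance
    ok = abs(mean - mu) <= mean_tol and abs(var - sigma ** 2) <= var_tol
    return ok, (mean, var)

# Smoke test against a rounded-Gaussian stand-in; a real suite would also test
# tails, goodness of fit, and behaviour across many centers.
samples = [round(random.gauss(0.0, 10.0)) for _ in range(100_000)]
print(check_moments(samples, mu=0.0, sigma=10.0))
```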

    COSAC: COmpact and Scalable Arbitrary-Centered Discrete Gaussian Sampling over Integers

    The arbitrary-centered discrete Gaussian sampler is a fundamental subroutine in implementing lattice trapdoor sampling algorithms. However, existing approaches typically rely either on a fast implementation of another discrete Gaussian sampler or on pre-computations tied to specific discrete Gaussian distributions with fixed centers and standard deviations. Such approaches may only support standard deviations within a limited range, or cannot efficiently sample from arbitrary standard deviations determined on the fly at run-time. In this paper, we propose a compact and scalable rejection sampling algorithm that samples from a continuous normal distribution and performs rejection sampling on the rounded samples. Our scheme does not require pre-computations related to any specific discrete Gaussian distribution, and it can sample from both arbitrary centers and arbitrary standard deviations determined on the fly at run-time. In addition, we show that our scheme requires only a low number of trials per sample, close to 2 on average, and that it maintains good performance as the standard deviation scales up. We also provide a concrete error analysis of our scheme based on the Rényi divergence. We implement our sampler and analyse its performance in terms of storage and speed compared to previous results. Our sampler's running time is center-independent and is therefore applicable to implementations of convolution-style lattice trapdoor sampling and identity-based encryption that are resistant against timing side-channel attacks.
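
    To illustrate rejection on rounded continuous samples, the sketch below is a generic rejection sampler for an arbitrary-centered discrete Gaussian built that way. It is not the COSAC algorithm itself, whose acceptance test, constants and trial count differ; the choice of the bound M and all parameter values are assumptions made for illustration.

```python
import math
import random

def make_arbitrary_centered_sampler(sigma, c):
    """Rejection sampler for D_{Z,sigma,c} whose proposal is 'round a continuous
    N(c, sigma^2) sample'. A generic textbook variant in the spirit of COSAC,
    not the paper's algorithm."""
    def target(k):          # unnormalised discrete Gaussian weight at integer k
        return math.exp(-((k - c) ** 2) / (2 * sigma ** 2))

    def proposal_pmf(k):    # Pr[round(N(c, sigma^2)) = k], via erfc for tail stability
        a = (k - 0.5 - c) / (sigma * math.sqrt(2))
        b = (k + 0.5 - c) / (sigma * math.sqrt(2))
        if a >= 0:
            return 0.5 * (math.erfc(a) - math.erfc(b))
        if b <= 0:
            return 0.5 * (math.erfc(-b) - math.erfc(-a))
        return 0.5 * (math.erf(b) - math.erf(a))

    # The ratio target/proposal is largest at the integer closest to c, so
    # taking M there keeps every acceptance probability at most 1.
    k0 = round(c)
    M = target(k0) / proposal_pmf(k0)

    def sample():
        while True:
            k = round(random.gauss(c, sigma))
            if random.random() < target(k) / (M * proposal_pmf(k)):
                return k
    return sample

sample = make_arbitrary_centered_sampler(sigma=4.0, c=0.37)
print([sample() for _ in range(10)])
```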