81 research outputs found

    Markov Chain Monte Carlo Algorithms for Lattice Gaussian Sampling

    Full text link
    Sampling from a lattice Gaussian distribution is emerging as an important problem in various areas such as coding and cryptography. The default sampling algorithm, Klein's algorithm, yields a distribution close to the lattice Gaussian only if the standard deviation is sufficiently large. In this paper, we propose the Markov chain Monte Carlo (MCMC) method for lattice Gaussian sampling when this condition is not satisfied. In particular, we present a sampling algorithm based on Gibbs sampling, which converges to the target lattice Gaussian distribution for any value of the standard deviation. To improve the convergence rate, a more efficient algorithm referred to as Gibbs-Klein sampling is proposed, which samples block by block using Klein's algorithm. We show that Gibbs-Klein sampling yields a distribution close to the target lattice Gaussian under a less stringent condition than that of the original Klein algorithm. Comment: 5 pages, 1 figure, IEEE International Symposium on Information Theory (ISIT) 201
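
    As a rough illustration of the coordinate-wise Gibbs approach described above, the Python sketch below samples from a lattice Gaussian by updating one integer coordinate at a time from its one-dimensional discrete Gaussian conditional. The basis, parameters, iteration count and truncation window are arbitrary choices for the example, and this is plain single-coordinate Gibbs sampling, not the block Gibbs-Klein variant proposed in the paper.

```python
# A minimal sketch of single-coordinate Gibbs sampling for a lattice Gaussian
# D_{L,sigma,c}(Bz) proportional to exp(-||Bz - c||^2 / (2 sigma^2)), z integer.
# Illustrative assumptions: the basis B, sigma, c, the iteration count and the
# +/- 10 standard-width truncation window; not the paper's Gibbs-Klein sampler.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_lattice_gaussian(B, sigma, c, iterations=1000):
    """Return integer coordinates z whose law approaches D_{L,sigma,c} as iterations grow."""
    n = B.shape[1]
    z = np.zeros(n, dtype=np.int64)
    for _ in range(iterations):
        for i in range(n):
            # Residual once all coordinates except z_i are fixed.
            r = c - B @ z + B[:, i] * z[i]
            # Completing the square in ||t*b_i - r||^2 shows the conditional of z_i
            # is a 1D discrete Gaussian on Z with center mu and width s.
            bi2 = B[:, i] @ B[:, i]
            mu = (B[:, i] @ r) / bi2
            s = sigma / np.sqrt(bi2)
            # Sample it by truncating the support to a wide window around mu.
            t = np.arange(int(np.floor(mu - 10 * s)), int(np.ceil(mu + 10 * s)) + 1)
            w = np.exp(-((t - mu) ** 2) / (2 * s ** 2))
            z[i] = rng.choice(t, p=w / w.sum())
    return z

B = np.array([[3.0, 1.0], [1.0, 2.0]])   # toy 2-dimensional basis
print(gibbs_lattice_gaussian(B, sigma=1.5, c=np.array([0.3, -0.7])))
```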

    Worst-Case Hermite-Korkine-Zolotarev Reduced Lattice Bases

    Get PDF
    The Hermite-Korkine-Zolotarev reduction plays a central role in strong lattice reduction algorithms. By building upon a technique introduced by Ajtai, we show the existence of Hermite-Korkine-Zolotarev reduced bases that are arguably least reduced. We prove that for such bases, Kannan's algorithm solving the shortest lattice vector problem requires $d^{\frac{d}{2e}(1+o(1))}$ bit operations in dimension $d$. This matches the best complexity upper bound known for this algorithm. These bases also provide lower bounds on Schnorr's constants $\alpha_d$ and $\beta_d$ that are essentially equal to the best upper bounds. Finally, we also show the existence of particularly bad bases for Schnorr's hierarchy of reductions.

    Time- and Space-Efficient Evaluation of Some Hypergeometric Constants

    Get PDF
    The currently best known algorithms for the numerical evaluation of hypergeometric constants such as $\zeta(3)$ to $d$ decimal digits have time complexity $O(M(d) \log^2 d)$ and space complexity of $O(d \log d)$ or $O(d)$. Following work from Cheng, Gergel, Kim and Zima, we present a new algorithm with the same asymptotic complexity, but more efficient in practice. Our implementation of this algorithm improves slightly over existing programs for the computation of $\pi$, and we announce a new record of 2 billion digits for $\zeta(3)$.
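
    The kind of computation involved can be illustrated with a small binary-splitting sketch in Python, here applied to Apéry's series $\zeta(3) = \frac{5}{2} \sum_{n \geq 1} \frac{(-1)^{n-1}}{n^3 \binom{2n}{n}}$. This is only a generic instance of the binary-splitting technique on which such algorithms are built; it is not the authors' algorithm or implementation, and it makes no attempt at saving time or memory.

```python
# Hedged sketch: binary splitting for zeta(3) via Apery's series
# zeta(3) = (5/2) * sum_{n>=1} (-1)^(n-1) / (n^3 * binom(2n, n)).

def p(k):          # numerator factor of the term ratio t_k / t_(k-1)
    return 1 if k == 1 else -(k - 1) ** 3

def q(k):          # denominator factor of the term ratio
    return 2 if k == 1 else 2 * k * k * (2 * k - 1)

def bsplit(a, b):
    """Return (P, Q, T) with T/Q = sum_{n=a}^{b-1} prod_{k=a}^{n} p(k)/q(k)."""
    if b - a == 1:
        return p(a), q(a), p(a)
    m = (a + b) // 2
    Pl, Ql, Tl = bsplit(a, m)
    Pr, Qr, Tr = bsplit(m, b)
    return Pl * Pr, Ql * Qr, Tl * Qr + Pl * Tr

def zeta3_digits(d):
    """Approximate zeta(3) to about d decimal digits, with exact integer arithmetic."""
    n_terms = int(d / 0.6) + 10          # roughly 0.6 digits gained per term
    _, Q, T = bsplit(1, n_terms + 1)
    return 5 * T * 10 ** d // (2 * Q)    # floor(zeta(3) * 10^d), up to the last few digits

print(zeta3_digits(50))   # zeta(3) = 1.2020569031595942853997381615...
```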

    A long note on Mulders' short product

    Get PDF
    The short product of two power series is the meaningful part of the product of these objects, i.e., $\sum_{i+j < n} a_i b_j x^{i+j}$. Mulders gives an algorithm to compute a short product faster than the full product in the case of Karatsuba's multiplication. This algorithm works by selecting a cutoff point $k$ and performing a full $k \times k$ product and two $(n-k) \times (n-k)$ short products recursively. Mulders also gives a heuristically optimal cutoff point. In this paper, we determine the optimal cutoff point in Mulders' algorithm. We also give a slightly more general description of Mulders' method.
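
    A minimal Python sketch of this recursion is given below, on coefficient lists in low-to-high order. The schoolbook full product stands in for Karatsuba's multiplication, and the cutoff fraction 0.7 is an arbitrary illustrative choice, not the optimal cutoff point determined in the paper.

```python
# Hedged sketch of Mulders' short-product recursion: one full k x k product plus
# two (n-k)-term short products, with k >= ceil(n/2) so no cross terms are missed.

def fullmul(a, b):
    """Schoolbook full product of two coefficient lists (stand-in for Karatsuba)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def shortmul(a, b, n):
    """Low n coefficients of a*b, assuming len(a) == len(b) == n."""
    if n <= 8:                                   # small base case: truncate a full product
        return (fullmul(a, b) + [0] * n)[:n]
    k = max((7 * n + 9) // 10, (n + 1) // 2)     # illustrative cutoff, k >= ceil(n/2)
    c = (fullmul(a[:k], b[:k]) + [0] * n)[:n]    # one full k x k product, truncated
    s1 = shortmul(a[k:], b[:n - k], n - k)       # high part of a times low part of b
    s2 = shortmul(a[:n - k], b[k:], n - k)       # low part of a times high part of b
    for i in range(n - k):
        c[k + i] += s1[i] + s2[i]
    return c

if __name__ == "__main__":
    import random
    n = 100
    a = [random.randrange(-9, 10) for _ in range(n)]
    b = [random.randrange(-9, 10) for _ in range(n)]
    assert shortmul(a, b, n) == fullmul(a, b)[:n]
```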

    Floating-Point $L^2$-Approximations

    Get PDF
    Computing good polynomial approximations to usual functions is an important topic for the computer evaluation of those functions. These approximations can be good under several criteria, the most desirable probably being that the relative error is as small as possible in the $L^{\infty}$ sense, i.e. everywhere on the interval under study. In the present paper, we investigate a simpler criterion, the $L^2$ case. Though finding a best polynomial $L^2$-approximation with real coefficients is quite easy, we show that if the coefficients are restricted to be floating-point numbers to some precision, the problem becomes a general instance of the CVP problem, and hence is NP-hard. We investigate the practical behaviour of exact and approximate algorithms for this problem. The conclusion is that it is possible to obtain, in a short amount of time, a relative or absolute best $L^2$-approximation. The main applications are for large dimensions, as a preliminary step in finding $L^{\infty}$-approximations, and for functions with large variations, for which relative best approximation is far more interesting than absolute approximation.
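
    The "easy" real-coefficient problem mentioned above amounts to solving a small linear system with the Gram matrix of the chosen basis. The Python sketch below does this for the monomial basis on $[0, 1]$ with a crude quadrature; the basis, interval, degree and example function are illustrative assumptions, and rounding the resulting coefficients one by one to low-precision floats is in general suboptimal, which is where the CVP formulation enters.

```python
# A minimal sketch of best L2 approximation with real coefficients on [0, 1],
# using the monomial basis and a midpoint-rule quadrature. Degree, interval,
# basis and the example function are illustrative assumptions; the Hilbert-type
# Gram matrix is ill-conditioned, so this is only reasonable for modest degrees.
import numpy as np

def best_l2_poly(f, degree, samples=10_000):
    """Coefficients c_0..c_degree of the polynomial minimizing the L2 error to f on [0, 1]."""
    n = degree + 1
    # Gram matrix of the monomials: G[i, j] = integral of x^(i+j) dx = 1/(i+j+1).
    G = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
    # Right-hand side m[i] = integral of f(x) * x^i dx, midpoint rule on [0, 1].
    x = (np.arange(samples) + 0.5) / samples
    m = np.array([np.mean(f(x) * x**i) for i in range(n)])
    return np.linalg.solve(G, m)   # normal equations of the least-squares problem

coeffs = best_l2_poly(np.exp, degree=5)
print(coeffs)
# Rounding each coefficient independently to a low-precision float does not in
# general give the best float-coefficient polynomial: that discrete problem is
# the CVP instance discussed in the abstract.
```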

    Primality Proving with Elliptic Curves

    Get PDF
    Elliptic curves are fascinating mathematical objects. In this paper, we present the way they have been represented inside the Coq system, and how we have proved that the classical composition law on the points is internal and gives them a group structure. We then describe how having elliptic curves inside a prover makes it possible to derive a checker for proving the primality of natural numbers.
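
    For readers unfamiliar with the composition law in question, the sketch below gives the classical chord-and-tangent addition on an affine Weierstrass curve over a prime field in plain Python. The curve, prime and point are toy values chosen for the example; this is only an illustration of the law whose group properties are formalized in the paper, not the Coq development itself.

```python
# Chord-and-tangent composition law on y^2 = x^3 + a*x + b over F_p, with None
# as the point at infinity. Toy values; not the Coq formalization of the paper.

def ec_add(P, Q, a, p):
    """Add two points of y^2 = x^3 + a*x + b (mod p); None encodes the identity."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Toy curve y^2 = x^3 + 2x + 3 over F_97, with P = (3, 6) on it.
a, b, p = 2, 3, 97
on_curve = lambda P: P is None or (P[1] ** 2 - P[0] ** 3 - a * P[0] - b) % p == 0
P = (3, 6)
P2 = ec_add(P, P, a, p)
P3 = ec_add(P2, P, a, p)
assert on_curve(P2) and on_curve(P3)
# Consistency check of the law: 2P + 3P agrees with (2P + 2P) + P.
assert ec_add(P2, P3, a, p) == ec_add(ec_add(P2, P2, a, p), P, a, p)
```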

    Moyennes de certaines fonctions multiplicatives sur les entiers friables, 2

    Get PDF
    We evaluate the averages indicated in the title under analytic hypotheses on the associated Dirichlet series.

    Terminating BKZ

    Get PDF
    Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT'91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt'08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension. In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as inputs a basis $(b_i)_{i\leq n} \in \mathbb{Q}^{n \times n}$ of a lattice $L$ and a block-size $\beta$, and if terminated after $\Omega\left(\frac{n^3}{\beta^2}(\log n + \log \log \max_i \|\vec{b}_i\|)\right)$ calls to a $\beta$-dimensional HKZ-reduction (or SVP) subroutine, then BKZ returns a basis whose first vector has norm $\leq 2 \gamma_{\beta}^{\frac{n-1}{2(\beta-1)}+\frac{3}{2}} \cdot (\det L)^{\frac{1}{n}}$, where $\gamma_{\beta} \leq \beta$ is the maximum of Hermite's constants in dimensions $\leq \beta$. To obtain this result, we develop a completely new elementary technique based on discrete-time affine dynamical systems, which could lead to the design of improved lattice reduction algorithms.
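
    The guarantee on the first output vector can be evaluated numerically. The small Python helper below does so using the crude bound $\gamma_\beta \leq \beta$ from the statement; the numeric inputs are arbitrary illustrative values, not parameters taken from the paper.

```python
# Evaluating the quality guarantee above on the norm of the first output vector,
# with the crude bound gamma_beta <= beta on Hermite's constant. The inputs are
# arbitrary illustrative values, not experimental parameters from the paper.
def bkz_first_vector_bound(n, beta, det_L):
    """Upper bound 2 * gamma_beta^((n-1)/(2(beta-1)) + 3/2) * det(L)^(1/n) on ||b_1||."""
    gamma_beta = beta                 # gamma_beta <= beta, as in the statement above
    return 2.0 * gamma_beta ** ((n - 1) / (2.0 * (beta - 1)) + 1.5) * det_L ** (1.0 / n)

print(bkz_first_vector_bound(n=120, beta=30, det_L=2.0 ** 360))   # det(L)^(1/n) = 8 here
```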

    The Middle Product Algorithm, I.

    Get PDF
    We present new algorithms for the inverse, division, and square root of power series. The key trick is a new algorithm, MiddleProduct (or, for short, MP), computing the $n$ middle coefficients of a $(2n-1) \times n$ full product in the same number of multiplications as a full $n \times n$ product. This improves previous work of Brent, Mulders, Karp and Markstein, Burnikel and Ziegler. A forthcoming paper will study the floating-point case.
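
    The following Python sketch fixes the definition of the middle product with a naive quadratic implementation and shows how it plugs into a Newton iteration for power series inversion. It is only meant to illustrate the role MP plays; the contribution of the paper is precisely that these $n$ middle coefficients can be obtained at the cost of a single $n \times n$ full product, which the naive version below does not achieve.

```python
# The middle product (naive quadratic version, to fix the definition) and its use
# in Newton inversion of a power series: if g inverts f modulo x^k, then
# f*g = 1 + x^k * e (mod x^(2k)) and e is exactly a middle product, so the update
# g <- g - x^k * (g*e mod x^k) doubles the precision. Coefficient lists are
# low-to-high and f[0] is assumed to be 1 for simplicity.

def mp(a, b):
    """Middle product: a has 2n-1 coefficients, b has n; returns the n coefficients
    c_j = sum_{i+l = j+n-1} a_i * b_l, j = 0..n-1 (the middle of the full product)."""
    n = len(b)
    assert len(a) == 2 * n - 1
    return [sum(a[j + n - 1 - l] * b[l] for l in range(n)) for j in range(n)]

def lowmul(a, b, n):
    """Low n coefficients of a*b (a short product)."""
    return [sum(a[i] * b[j - i] for i in range(j + 1)) for j in range(n)]

def inverse_series(f, n):
    """First n coefficients of 1/f, assuming f[0] == 1."""
    g, k = [1], 1
    while k < n:
        e = mp((f + [0] * (2 * k))[1:2 * k], g)   # coefficients k..2k-1 of f*g
        corr = lowmul(g, e, k)
        g = g + [-v for v in corr]                # now g inverts f modulo x^(2k)
        k *= 2
    return g[:n]

f = [1, 2, 3, 4, 5, 6, 7, 8]
g = inverse_series(f, 8)
assert lowmul(f, g, 8) == [1, 0, 0, 0, 0, 0, 0, 0]
```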