Markov Chain Monte Carlo Algorithms for Lattice Gaussian Sampling
Sampling from a lattice Gaussian distribution is emerging as an important problem in various areas such as coding and cryptography. The default sampling algorithm, Klein's algorithm, yields a distribution close to the lattice Gaussian only if the standard deviation is sufficiently large. In this paper, we propose Markov chain Monte Carlo (MCMC) methods for lattice Gaussian sampling when this condition is not satisfied. In particular, we present a sampling algorithm based on Gibbs sampling, which converges to the target lattice Gaussian distribution for any value of the standard deviation. To improve the convergence rate, a more efficient algorithm referred to as Gibbs-Klein sampling is proposed, which samples block by block using Klein's algorithm. We show that Gibbs-Klein sampling yields a distribution close to the target lattice Gaussian under a less stringent condition than that of the original Klein algorithm.
Comment: 5 pages, 1 figure, IEEE International Symposium on Information Theory (ISIT) 201
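As a rough illustration of the basic (non-block) Gibbs approach, the sketch below resamples one integer coordinate at a time from its one-dimensional conditional discrete Gaussian. The basis matrix, enumeration window, and function names are illustrative assumptions; this is a minimal sketch, not the paper's Gibbs-Klein algorithm.

```python
import math
import random

def gibbs_lattice_gaussian(B, sigma, x0, iters, window=20, rng=random):
    """Systematic-scan Gibbs sampler for a discrete Gaussian over the
    lattice {B z : z integer vector}, centered at the origin (sketch).

    Each step resamples one coordinate z_i from its 1-D conditional
    distribution, approximated by enumerating 2*window+1 integers
    around the current value."""
    n = len(x0)
    z = list(x0)

    def weight(v):
        # unnormalized density exp(-||B v||^2 / (2 sigma^2))
        w = [sum(B[r][c] * v[c] for c in range(n)) for r in range(len(B))]
        return math.exp(-sum(t * t for t in w) / (2 * sigma ** 2))

    for _ in range(iters):
        for i in range(n):
            cands = list(range(z[i] - window, z[i] + window + 1))
            ws = []
            for c in cands:
                t = z[:]
                t[i] = c
                ws.append(weight(t))
            z[i] = rng.choices(cands, weights=ws)[0]
    return z
```

With `B` the identity, this reduces to independent one-dimensional discrete Gaussians; for a skewed basis the conditionals couple the coordinates, which is why convergence-rate questions arise and why block updates can help.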
Worst-Case Hermite-Korkine-Zolotarev Reduced Lattice Bases
The Hermite-Korkine-Zolotarev (HKZ) reduction plays a central role in strong lattice reduction algorithms. By building upon a technique introduced by Ajtai, we show the existence of HKZ-reduced bases that are arguably least reduced. We prove that for such bases, Kannan's algorithm for solving the shortest lattice vector problem requires $d^{\frac{d}{2e}(1+o(1))}$ bit operations in dimension $d$. This matches the best complexity upper bound known for this algorithm. These bases also provide lower bounds on Schnorr's constants that are essentially equal to the best upper bounds. Finally, we show the existence of particularly bad bases for Schnorr's hierarchy of reductions.
Time- and Space-Efficient Evaluation of Some Hypergeometric Constants
The currently best known algorithms for the numerical evaluation of hypergeometric constants such as $\zeta(3)$ to $d$ decimal digits have time complexity $O(M(d) \log^2 d)$ and space complexity $O(d \log d)$ or $O(d)$. Following work of Cheng, Gergel, Kim and Zima, we present a new algorithm with the same asymptotic complexity, but more efficient in practice. Our implementation of this algorithm improves slightly over existing programs for the computation of $\pi$, and we announce a new record of 2 billion digits for $\zeta(3)$.
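The classical tool behind such evaluations is binary splitting, which the paper refines to reduce space usage. As a minimal sketch of the plain technique (not the paper's space-efficient variant), here it is applied to the simple hypergeometric series $e = \sum_k 1/k!$:

```python
def binsplit(a, b):
    """Return integers (P, Q) with P/Q = sum_{k=a+1}^{b} 1/((a+1)*(a+2)*...*k),
    computed by splitting the summation range in two; only O(log(b-a))
    levels of big-integer products are needed."""
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p_left, q_left = binsplit(a, m)
    p_right, q_right = binsplit(m, b)
    # S(a,b) = P_left/Q_left + (1/Q_left) * P_right/Q_right
    return p_left * q_right + p_right, q_left * q_right

def e_digits(d, terms=None):
    """First d decimal digits of e (truncated), via binary splitting."""
    n = terms or (d + 10)              # crude bound: n! quickly exceeds 10^d
    p, q = binsplit(0, n)              # p/q = sum_{k=1}^{n} 1/k!
    return str((q + p) * 10 ** d // q)  # e ~ 1 + p/q, scaled and floored
```

The point of the recursion is that all divisions are postponed: only one big division happens at the end, and the intermediate products are balanced in size.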
A long note on Mulders' short product
The short product of two power series is the meaningful part of the product of these objects, i.e., $\sum_{i+j < n} a_i b_j x^{i+j}$. Mulders gives an algorithm to compute a short product faster than the full product in the case of Karatsuba's multiplication. This algorithm works by selecting a cutoff point $k$ and performing a full $k \times k$ product and two $(n-k) \times (n-k)$ short products recursively. Mulders also gives a heuristically optimal cutoff point. In this paper, we determine the optimal cutoff point in Mulders' algorithm. We also give a slightly more general description of Mulders' method.
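A minimal sketch of the recursion described above. The 0.7 cutoff ratio and the base-case threshold are illustrative choices, not the optimal values the paper determines:

```python
def full_product(a, b):
    """Schoolbook full product of two coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def short_product(a, b, cutoff=0.7):
    """Mulders-style short product: the first n coefficients of a*b.

    One full k x k product plus two (n-k) x (n-k) short products,
    with k >= n/2 so the a_hi * b_hi block lies entirely above x^n."""
    n = len(a)
    assert len(b) == n
    if n <= 4:
        return full_product(a, b)[:n]
    k = max((n + 1) // 2, min(n - 1, int(cutoff * n)))
    c = full_product(a[:k], b[:k])[:n]      # low k x k block, truncated
    c += [0] * (n - len(c))
    lo = short_product(a[k:], b[:n - k])    # a_hi * b_lo mod x^(n-k)
    hi = short_product(a[:n - k], b[k:])    # a_lo * b_hi mod x^(n-k)
    for t in range(n - k):
        c[k + t] += lo[t] + hi[t]
    return c
```

The cost recurrence is T(n) = M(k) + 2 T(n-k), where M is the full-product cost; choosing the cutoff k is exactly the optimization question the paper settles.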
Floating-Point $L^2$-Approximations
Computing good polynomial approximations to usual functions is an important topic for the computer evaluation of those functions. These approximations can be good under several criteria, the most desirable probably being that the relative error is as small as possible in the $L^\infty$ sense, i.e., everywhere on the interval under study. In the present paper, we investigate a simpler criterion, the $L^2$ case. Though finding a best polynomial $L^2$-approximation with real coefficients is quite easy, we show that if the coefficients are restricted to be floating-point numbers of some precision, the problem becomes a general instance of the closest vector problem (CVP), and hence is NP-hard. We investigate the practical behaviour of exact and approximate algorithms for this problem. The conclusion is that it is possible, in a short amount of time, to obtain a relative or absolute best $L^2$-approximation. The main applications are for large dimensions, as a preliminary step of finding $L^\infty$-approximations, and for functions with large variations, for which relative best approximation is by far more interesting than absolute best approximation.
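For the easy real-coefficient case mentioned above, the best $L^2$ approximation is obtained by solving the normal equations of the associated Gram (here, Hilbert) matrix. A small self-contained sketch using exact rational arithmetic, with the toy target $f(x) = x^4$ on $[0,1]$ (the degree and target function are illustrative assumptions):

```python
from fractions import Fraction as F

def solve_exact(A, b):
    """Gauss-Jordan elimination over the rationals, with pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Best L2 approximation of f(x) = x^4 on [0,1] by polynomials of degree <= 2:
# minimize ||f - p||_2 by solving the normal equations G c = v, where
# G[i][j] = <x^i, x^j> = 1/(i+j+1)   (a Hilbert matrix) and
# v[i]    = <f, x^i>   = 1/(i+5).
d = 3
G = [[F(1, i + j + 1) for j in range(d)] for i in range(d)]
v = [F(1, i + 5) for i in range(d)]
coeffs = solve_exact(G, v)   # exact real (rational) optimum
```

Rounding each entry of `coeffs` to the nearest floating-point number of the target precision is generally not optimal; as the abstract notes, finding the best floating-point coefficient vector is an instance of CVP.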
Proceedings of the 7th Conference on Real Numbers and Computers (RNC'7)
These are the proceedings of RNC'7.
Primality Proving with Elliptic Curves
Elliptic curves are fascinating mathematical objects. In this paper, we present the way they have been represented inside the Coq system, and how we have proved that the classical composition law on the points is internal and gives them a group structure. We then describe how having elliptic curves inside a prover makes it possible to derive a checker for proving the primality of natural numbers.
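The classical composition law the abstract refers to is the chord-and-tangent construction on a Weierstrass curve. A minimal affine sketch over a prime field (in Python rather than Coq; the curve and points below are illustrative):

```python
def ec_add(P, Q, a, p):
    """Chord-and-tangent addition on y^2 = x^3 + a*x + b over F_p.
    (b is implicit: it only constrains which points lie on the curve.)
    None plays the role of the point at infinity, the neutral element."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                        # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)
```

Proving that this operation is internal (the result stays on the curve) and associative is precisely the delicate part that the paper carries out formally inside Coq.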
Moyennes de certaines fonctions multiplicatives sur les entiers friables, 2
We evaluate the averages indicated in the title under analytic hypotheses concerning the associated Dirichlet series.
Terminating BKZ
Strong lattice reduction is the key element for most attacks against lattice-based cryptosystems. Between the strongest but impractical HKZ reduction and the weak but fast LLL reduction, there have been several attempts to find efficient trade-offs. Among them, the BKZ algorithm introduced by Schnorr and Euchner [FCT'91] seems to achieve the best time/quality compromise in practice. However, no reasonable complexity upper bound is known for BKZ, and Gama and Nguyen [Eurocrypt'08] observed experimentally that its practical runtime seems to grow exponentially with the lattice dimension.
In this work, we show that BKZ can be terminated long before its completion, while still providing bases of excellent quality. More precisely, we show that if given as input a basis of an $n$-dimensional lattice $L$ and a block-size $\beta$, and if terminated after polynomially many calls to a $\beta$-dimensional HKZ-reduction (or SVP) subroutine, then BKZ returns a basis whose first vector has norm at most $2 \nu_\beta^{\frac{n-1}{2(\beta-1)}+\frac{3}{2}} \cdot (\det L)^{1/n}$, where $\nu_\beta$ is the maximum of Hermite's constants in dimensions at most $\beta$. To obtain this result, we develop a completely new elementary technique based on discrete-time affine dynamical systems, which could lead to the design of improved lattice reduction algorithms.
The Middle Product Algorithm, I.
We present new algorithms for the inverse, division, and square root of power series. The key trick is a new algorithm, MiddleProduct (MP for short), computing the n middle coefficients of a (2n-1) x n full product in the same number of multiplications as a full n x n product. This improves on previous work of Brent; Mulders; Karp and Markstein; and Burnikel and Ziegler. A forthcoming paper will study the floating-point case.
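A Karatsuba-style sketch of the idea, for n a power of two (the splitting identity used is the standard one; falling back to the naive method on odd sizes keeps the sketch short):

```python
def mp_naive(a, b):
    """Middle product by definition: coefficients n-1 .. 2n-2 of a*b,
    where a has 2n-1 coefficients and b has n."""
    n = len(b)
    assert len(a) == 2 * n - 1
    return [sum(a[k + n - 1 - j] * b[j] for j in range(n)) for k in range(n)]

def mp(a, b):
    """Divide-and-conquer middle product: three half-size middle
    products per level, mirroring Karatsuba's multiplication count."""
    n = len(b)
    assert len(a) == 2 * n - 1
    if n == 1:
        return [a[0] * b[0]]
    if n % 2:                       # keep the sketch simple on odd sizes
        return mp_naive(a, b)
    m = n // 2
    a_lo, a_mid, a_hi = a[: 2*m - 1], a[m : 3*m - 1], a[2*m : 4*m - 1]
    b0, b1 = b[:m], b[m:]
    alpha = mp(a_mid, [x + y for x, y in zip(b0, b1)])
    beta = mp([x - y for x, y in zip(a_lo, a_mid)], b1)
    gamma = mp([x - y for x, y in zip(a_mid, a_hi)], b0)
    low = [x + y for x, y in zip(alpha, beta)]    # MP(a_mid,b0)+MP(a_lo,b1)
    high = [x - y for x, y in zip(alpha, gamma)]  # MP(a_hi,b0)+MP(a_mid,b1)
    return low + high
```

Here the multiplication count satisfies M(n) = 3 M(n/2), the same recurrence as Karatsuba's full product, which is the point of the paper's trick.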