26 research outputs found
Orthogonalized Lattice Enumeration for Solving SVP
In 2014, the orthogonalized integer representation was proposed independently by Ding et al., using a genetic algorithm, and by Fukase et al., using a sampling technique, to solve SVP. Their results are promising. In this paper, we consider sparse orthogonalized integer representations of shortest vectors and propose a new enumeration method, called orthogonalized enumeration, built on this representation. Furthermore, we present a mixed BKZ method, called MBKZ, which alternately applies orthogonalized enumeration and other existing enumeration methods. Compared to the existing ones, our methods are more efficient and achieve exponential speedups, both in theory and in practice, for solving SVP. Implementations of our algorithms have proved effective on challenging lattice problems. We also develop a new technique for reducing the enumeration space, which has been demonstrated to be efficient experimentally, though a quantitative analysis of its success probability is not yet available.
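As background for the representation above: the orthogonalized integer representation is defined with respect to the Gram-Schmidt orthogonalization of the lattice basis. A minimal, illustrative sketch of that orthogonalization in plain Python (exact rational arithmetic; function names are ours, not from the paper):

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Compute the Gram-Schmidt orthogonalization b*_1, ..., b*_n of a basis.

    Each b*_i is b_i minus its projections onto the previous b*_j,
    so the resulting vectors are pairwise orthogonal.
    """
    ortho = []
    for b in basis:
        b = [Fraction(x) for x in b]
        for bs in ortho:
            # projection coefficient mu = <b, b*> / <b*, b*>
            mu = sum(x * y for x, y in zip(b, bs)) / sum(y * y for y in bs)
            b = [x - mu * y for x, y in zip(b, bs)]
        ortho.append(b)
    return ortho

B = [[3, 1], [2, 2]]
Bstar = gram_schmidt(B)
# b*_1 = (3, 1); b*_2 = (-2/5, 6/5) is orthogonal to it
assert sum(x * y for x, y in zip(Bstar[0], Bstar[1])) == 0
```

Expressing candidate short vectors over the orthogonal b*_i rather than the basis vectors themselves is what makes sparse representations of this kind possible.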
Reduction algorithms for the cryptanalysis of lattice based asymmetrical cryptosystems
Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2008. Includes bibliographical references (leaves 79-91). Text in English; abstract in Turkish and English. xi, 119 leaves.
The theory of lattices has attracted a great deal of attention in cryptology in recent years. Several cryptosystems are constructed based on the hardness of lattice problems such as the shortest vector problem and the closest vector problem. The aim of this thesis is to study the most commonly used lattice basis reduction algorithms, namely the Lenstra-Lenstra-Lovasz (LLL) and Block Korkine-Zolotarev (BKZ) algorithms, which are utilized to approximately solve the aforementioned lattice problems. Furthermore, the most popular practical variants of these algorithms are evaluated experimentally by varying the common reduction parameter delta, in order to offer some practical assessments of the effect of this parameter on the basis reduction process. Such practical assessments are believed to have a non-negligible impact on the theory of lattice reduction, and hence on the cryptanalysis of lattice cryptosystems, since in practice the reduction process is largely controlled by heuristics.
Fast Lattice Point Enumeration with Minimal Overhead
Enumeration algorithms are the best currently known methods for solving lattice problems, both in theory (within the class of polynomial space algorithms) and in practice (where they are routinely used to evaluate the concrete security of lattice cryptography). However, there is an uncomfortable gap between our theoretical understanding and the practical performance of lattice point enumeration algorithms.
The algorithms typically used in practice have worst-case asymptotic running time 2^{O(n^2)}, but perform extremely well in practice, at least for all values of the lattice dimension for which experimentation is feasible. At the same time, theoretical algorithms (Kannan, Mathematics of Operations Research 12(3):415-440, 1987) are asymptotically superior (achieving n^{O(n)} running time), but they are never used in practice because they incur a substantial overhead that makes them uncompetitive for all reasonable values of the lattice dimension n. This gap is especially troublesome when algorithms are run in practice to evaluate the concrete security of a cryptosystem, and experimental results are then extrapolated to much larger dimensions where solving lattice problems is computationally infeasible.
We introduce a new class of (polynomial space) lattice enumeration algorithms that simultaneously achieve asymptotic efficiency (meeting the theoretical time bound) and practicality, matching or surpassing the performance of practical algorithms already in moderately low dimension. The key technical contributions that allow us to achieve this result are: a new analysis technique that greatly reduces the number of recursive calls performed during preprocessing (from superexponential to single exponential, or even polynomial, in the dimension); a new enumeration technique that can be applied directly to projected lattice (basis) vectors, without the need to remove linear dependencies; and a modified block basis reduction method with fast (logarithmic) convergence. The last technique is used to obtain a new SVP enumeration procedure with running time n^{n/(2e)+o(n)}, matching (even in the constant in the exponent) the optimal worst-case analysis (Hanrot and Stehlé, CRYPTO 2007) of Kannan's theoretical algorithm, but with far superior performance in practice.
We complement our theoretical analysis with a comprehensive set of experiments that not only support our practicality claims, but also allow us to estimate the cross-over points between different versions of enumeration algorithms, as well as asymptotically faster (but not quite practical) algorithms running in single exponential time and space.
Hard Mathematical Problems in Cryptography and Coding Theory
In this thesis, we are concerned with certain interesting computationally hard problems and the complexities of their associated algorithms. All of these problems share a common feature in that they arise from, or have applications to, cryptography or the theory of error-correcting codes. Each chapter of the thesis is based on a stand-alone paper attacking a particular hard problem; the problems and the techniques employed in attacking them are described in detail. The first problem concerns integer factorization: given a positive integer N, the problem is to find its unique prime factors. This problem, historically of only academic interest to number theorists, has in recent decades assumed a central importance in public-key cryptography. We propose a method for factorizing a given integer using a graph-theoretic algorithm employing Binary Decision Diagrams (BDDs). The second problem that we consider is related to the classification of certain naturally arising classes of error-correcting codes, called self-dual additive codes over GF(4), the finite field of four elements. We address the problem of classifying self-dual additive codes, determining their weight enumerators, and computing their minimum distance. There is a natural relation between self-dual additive codes over GF(4) and graphs via isotropic systems. Utilizing the properties of the corresponding graphs, and again employing BDDs to compute the weight enumerators, we obtain a theoretical speed-up of the previously developed algorithm for the classification of these codes. The third problem that we investigate deals with one of the central issues in cryptography, with historical origins in the geometry of numbers: the shortest vector problem in lattices. One method used both in theory and in practice to solve the shortest vector problem is enumeration algorithms.
Lattice enumeration is an exhaustive search whose goal is to find a shortest vector, given a lattice basis as input. In our work, we focus on speeding up the lattice enumeration algorithm, and we propose two new ideas to this end. A shortest vector in a lattice can be written as v = a_1 b_1 + ... + a_n b_n, where the a_i are integer coefficients and the b_i are the lattice basis vectors. First, we propose an enumeration algorithm, called hybrid enumeration, which is a greedy approach for computing a short interval of possible integer values for the coefficients of a shortest lattice vector. Second, we provide an algorithm for estimating the signs (+ or -) of the coefficients of a shortest vector v. Both of these algorithms result in a reduction in the number of nodes in the search tree. Finally, the fourth problem that we deal with arises in the arithmetic of the class groups of imaginary quadratic fields. We follow the results of Soleng and Gillibert pertaining to the class numbers of certain sequences of imaginary quadratic fields arising in the arithmetic of elliptic and hyperelliptic curves, and compute a bound on the effective estimates for the orders of class groups of a family of imaginary quadratic number fields. That is, suppose we have a sequence of positive numbers tending to infinity. Given any positive real number X, an effective estimate is to find the smallest positive integer, depending on X, beyond which every term of the sequence exceeds X. In other words, given a constant C, we find an index N such that the order of the ideal class in the ring (provided by the homomorphism in Soleng's paper) is greater than C beyond that index. In summary, in this thesis we attack some hard problems in computer science arising from arithmetic, the geometry of numbers, and coding theory, which have applications in the mathematical foundations of cryptography and error-correcting codes.
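For contrast with the hybrid enumeration described above, the baseline exhaustive search over integer coefficient vectors can be sketched as follows (a naive illustration, not the thesis's algorithm; the coefficient bound is chosen arbitrarily):

```python
from itertools import product

def shortest_vector_bruteforce(basis, bound):
    """Naive lattice enumeration: try every integer coefficient vector
    (a_1, ..., a_n) with |a_i| <= bound and return the shortest nonzero
    lattice vector sum_i a_i * b_i found, with its squared norm."""
    dim = len(basis[0])
    best, best_norm = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        if all(a == 0 for a in coeffs):
            continue  # skip the zero vector
        v = [sum(a * b[j] for a, b in zip(coeffs, basis)) for j in range(dim)]
        norm = sum(x * x for x in v)
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

# Toy 2-dimensional example: shortest vector has squared norm 13
v, n2 = shortest_vector_bruteforce([[5, 1], [3, 4]], bound=3)
assert n2 == 13
```

Both ideas in the thesis (interval computation for the a_i and sign estimation) shrink exactly this search space of coefficient vectors.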
Lattice sparsification and the Approximate Closest Vector Problem
We give a deterministic algorithm for solving the
(1+\eps)-approximate Closest Vector Problem (CVP) on any
n-dimensional lattice and in any near-symmetric norm in
2^{O(n)}(1+1/\eps)^n time and 2^n\poly(n) space. Our algorithm
builds on the lattice point enumeration techniques of Micciancio and
Voulgaris (STOC 2010, SICOMP 2013) and Dadush, Peikert and Vempala
(FOCS 2011), and gives an elegant, deterministic alternative to the
"AKS Sieve"-based algorithms for (1+\eps)-CVP (Ajtai, Kumar, and
Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the
existence of a \poly(n)-space and 2^{O(n)}-time algorithm for
exact CVP in the \ell_2 norm, the space complexity of our algorithm
can be reduced to polynomial.
Our main technical contribution is a method for "sparsifying" any
input lattice while approximately maintaining its metric structure. To
this end, we employ the idea of random sublattice restrictions, which
was first used by Khot (FOCS 2003, J. Comp. Syst. Sci. 2006) for
the purpose of proving hardness of the Shortest Vector Problem (SVP)
under \ell_p norms.
A preliminary version of this paper appeared in the Proc. 24th Annual
ACM-SIAM Symp. on Discrete Algorithms (SODA'13)
(http://dx.doi.org/10.1137/1.9781611973105.78)
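As a toy illustration of sublattice restriction (not the paper's randomized construction): restricting Z^n to the vectors whose coordinate sum vanishes modulo a prime p yields an index-p sublattice, i.e. the lattice is thinned by a factor p while its ambient geometry is unchanged:

```python
def sparsify_Zn(n, p):
    """Return a basis of the sublattice {x in Z^n : sum(x) ≡ 0 (mod p)}.

    The basis matrix below is triangular with determinant p, so the
    sublattice has index p in Z^n: sparsification keeps only a
    1/p fraction of the lattice points."""
    basis = [[0] * n for _ in range(n)]
    basis[0][0] = p            # p * e_1
    for i in range(1, n):
        basis[i][0] = -1       # e_{i+1} - e_1
        basis[i][i] = 1
    return basis

B = sparsify_Zn(3, 5)
# every basis vector (hence every integer combination of them)
# has coordinate sum divisible by 5
assert all(sum(row) % 5 == 0 for row in B)
```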
Estimation of the Success Probability of Random Sampling by the Gram-Charlier Approximation
The lattice basis reduction algorithm is a method for solving the Shortest Vector Problem (SVP) on lattices. There are many variants of the lattice basis reduction algorithm, such as LLL, BKZ, and RSR. Though BKZ has been the most widely used, it has recently been shown that some variants of RSR are quite efficient for solving high-dimensional SVP instances (they achieved many best scores in the TU Darmstadt SVP challenge). RSR alternates between generating new very short lattice vectors from the current basis (a procedure we call "random sampling") and improving the current basis using the generated very short lattice vectors. Therefore, to investigate and improve RSR, it is important to estimate the success probability of finding very short lattice vectors by combining vectors of the current basis. In this paper, we propose a new method for estimating the success probability via the Gram-Charlier approximation, a basic asymptotic expansion of a probability distribution in terms of its higher-order cumulants, such as the skewness and the kurtosis. The proposed method uses a "parametric" model for estimating the probability, which gives a closed-form expression with a few parameters. Therefore, the proposed method is much more efficient than previous methods based on non-parametric estimation. This enables us to investigate the lattice basis reduction algorithm intensively in various situations and clarify its properties. Numerical experiments verified that the Gram-Charlier approximation estimates the actual distribution quite accurately. In addition, we investigated RSR and its variants with the proposed method. The results showed that weighted random sampling is useful for generating shorter lattice vectors, and that periodically improving the current basis is crucial for solving the SVP.
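The Gram-Charlier A series mentioned above corrects a Gaussian density using the third and fourth cumulants. A minimal sketch of the standard textbook form (an illustration, not the paper's full estimator):

```python
import math

def gram_charlier_pdf(x, skew, ex_kurt):
    """Gram-Charlier A series: the standard normal density corrected by
    the third and fourth cumulants (skewness and excess kurtosis) via
    the probabilists' Hermite polynomials He3 and He4."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # N(0,1) density
    he3 = x**3 - 3 * x
    he4 = x**4 - 6 * x * x + 3
    return phi * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)

# with zero skewness and zero excess kurtosis the correction vanishes
# and the series reduces to the standard normal density
assert abs(gram_charlier_pdf(0.0, 0.0, 0.0) - 1 / math.sqrt(2 * math.pi)) < 1e-12
```

The closed-form expression depends only on a few cumulants, which is what makes this parametric estimate cheap compared to non-parametric alternatives.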
Time-Memory Trade-Off for Lattice Enumeration in a Ball
Enumeration algorithms in lattices are a well-known technique for solving the Shortest Vector Problem (SVP) and for improving blockwise lattice reduction algorithms. Here, we propose a new algorithm for enumerating the lattice points in a ball whose radius is prescribed relative to the length of the shortest vector of the lattice. We then show how this method can be used for solving SVP and the Closest Vector Problem (CVP) with an approximation factor in an n-dimensional lattice. Previous enumeration algorithms, such as Kannan's, take super-exponential running time with polynomial memory. Ours, by contrast, also requires exponential memory, and we propose different time/memory trade-offs.
Recently, Aggarwal, Dadush, Regev and Stephens-Davidowitz described a randomized 2^{n+o(n)}-time algorithm for solving SVP at STOC 2015, and for approximation versions of SVP and CVP at FOCS 2015. However, it is not possible to apply a time/memory trade-off to their algorithms. Their main result is an algorithm that samples an exponential number of random vectors from a discrete Gaussian distribution with width below the smoothing parameter of the lattice. Our algorithm is related to the hill-climbing procedure of Liu, Lyubashevsky and Micciancio from RANDOM 2006 for solving the bounded distance decoding problem with preprocessing, which was later improved by Dadush, Regev and Stephens-Davidowitz for solving the CVP with preprocessing problem at CCC 2014. However, the latter algorithm looks for only one lattice vector, while we show that we can enumerate all lattice vectors in a ball. Finally, these papers use a preprocessing step to obtain a succinct representation of some lattice function. We show, as a first step, that we can obtain the same information using an exponential-time algorithm based on a collision search similar to the reduction of Micciancio and Peikert for the SIS problem with small modulus at CRYPTO 2013.
PotLLL: A Polynomial Time Version of LLL With Deep Insertions
Lattice reduction algorithms have numerous applications in number theory and algebra, as well as in cryptanalysis. The most famous lattice reduction algorithm is LLL, which computes, in polynomial time, a reduced basis with provable output quality. One early improvement of the LLL algorithm was
LLL with deep insertions (DeepLLL). The output of this variant has higher quality in practice, but its running time seems to explode. Weaker variants of DeepLLL, in which the insertions are restricted to blocks, behave nicely in practice with respect to running time; however, no proof of polynomial running time is known. In this paper we present PotLLL, a new variant of DeepLLL with provably polynomial running time. We compare the practical behavior of the new algorithm to classical LLL, BKZ, and blockwise variants of DeepLLL with regard to both output quality and running time.
Comment: 17 pages, 8 figures; extended version of arXiv:1212.5100 [cs.CR]
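Polynomial running time for LLL-type algorithms is typically argued via a potential function of the basis that every swap strictly decreases. A sketch of the standard LLL potential (an illustration of the general proof technique; the precise potential used by PotLLL may differ):

```python
from fractions import Fraction

def lll_potential(basis):
    """Standard LLL potential Pot(B) = prod_i ||b*_i||^(2(n-i+1)),
    where b*_1, ..., b*_n is the Gram-Schmidt orthogonalization.
    Swaps that move shorter vectors to the front strictly decrease
    this quantity, which bounds the number of iterations."""
    # Gram-Schmidt orthogonalization in exact rational arithmetic
    ortho = []
    for b in basis:
        b = [Fraction(x) for x in b]
        for bs in ortho:
            mu = sum(p * q for p, q in zip(b, bs)) / sum(q * q for q in bs)
            b = [p - mu * q for p, q in zip(b, bs)]
        ortho.append(b)
    n = len(basis)
    pot = Fraction(1)
    for i, bs in enumerate(ortho):
        norm_sq = sum(q * q for q in bs)  # ||b*_i||^2
        pot *= norm_sq ** (n - i)         # exponent 2(n-i+1) on the norm
    return pot

# a basis with a shorter first vector for the same lattice
# has strictly smaller potential
assert lll_potential([[1, -1], [2, 2]]) < lll_potential([[3, 1], [2, 2]])
```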
An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices
In this paper, we study the Learning With Errors (LWE) problem and its binary variant, where secrets and errors are binary or taken from a small interval. We introduce a new variant of the Blum, Kalai and Wasserman (BKW) algorithm, relying on a quantization step that generalizes and fine-tunes modulus switching. In general, this new technique yields a significant gain in the constant in front of the exponent in the overall complexity. We illustrate this by solving, within half a day, an LWE instance with dimension n = 128, Gaussian noise, and binary secret, while the previous best result based on BKW claims a much larger time and sample complexity for the same parameters. We then introduce variants of BDD, GapSVP and UniqueSVP, where the target point is required to lie in the fundamental parallelepiped, and show how the previous algorithm is able to solve these variants in subexponential time. Moreover, we also show how the previous algorithm can be used to solve the BinaryLWE problem with n samples in subexponential time. This analysis does not require any heuristic assumption, contrary to other algebraic approaches; instead, it uses a variant of an idea of Lyubashevsky to generate many samples from a small number of samples. This makes it possible to asymptotically and heuristically break the NTRU cryptosystem in subexponential time (without contradicting its security assumption). We are also able to solve subset sum problems in subexponential time at low density, which is of independent interest: for such density, the previous best algorithm requires exponential time. As a direct application, we can solve in subexponential time the parameters of a cryptosystem based on this problem proposed at TCC 2010.
Comment: CRYPTO 2015
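The modulus switching that the quantization step above generalizes can be sketched as scaled rounding of LWE sample coordinates (an illustration of the standard idea, not the paper's fine-tuned variant):

```python
def modulus_switch(a, q, p):
    """Switch a vector of LWE coefficients from Z_q to Z_p (p < q) by
    scaled rounding: a_i -> round(p/q * a_i) mod p.

    This shrinks the modulus, and hence the cost of BKW-style
    reduction steps, at the price of a small extra rounding noise."""
    return [round(p * ai / q) % p for ai in a]

# coefficients mod 400 mapped down to mod 4
assert modulus_switch([0, 100, 200, 300], 400, 4) == [0, 1, 2, 3]
```

The trade-off driving the technique: a smaller modulus p makes collisions in BKW cheaper to find, while the rounding error adds to the Gaussian noise and must stay below the decoding threshold.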