
    Self-avoiding walks and polygons on the triangular lattice

    We use new algorithms, based on the finite lattice method of series expansion, to extend the enumeration of self-avoiding walks and polygons on the triangular lattice to length 40 and 60, respectively. For self-avoiding walks to length 40 we also calculate series for the metric properties of mean-square end-to-end distance, mean-square radius of gyration and the mean-square distance of a monomer from the end points. For self-avoiding polygons to length 58 we calculate series for the mean-square radius of gyration and the first 10 moments of the area. Analysis of the series yields accurate estimates for the connective constant of triangular self-avoiding walks, $\mu = 4.150797226(26)$, and confirms to a high degree of accuracy several theoretical predictions for universal critical exponents and amplitude combinations. Comment: 24 pages, 6 figures.
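
    As a concrete toy, the following Python sketch counts self-avoiding walks on the triangular lattice by plain depth-first search. It is not the finite lattice method used in the paper (which reaches length 40); brute-force counting is only feasible for short walks, but it makes explicit the quantity $c_n$ whose growth rate $c_n^{1/n} \to \mu$ the series analysis pins down.

        # Naive enumeration of self-avoiding walks (SAWs) on the triangular
        # lattice; illustrative only, NOT the paper's finite lattice method.
        # Axial coordinates give each site 6 neighbours.
        STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

        def count_saws(n, pos=(0, 0), visited=None):
            """Count self-avoiding walks of length n starting at the origin."""
            if visited is None:
                visited = {pos}
            if n == 0:
                return 1
            total = 0
            for dx, dy in STEPS:
                nxt = (pos[0] + dx, pos[1] + dy)
                if nxt not in visited:          # self-avoidance constraint
                    visited.add(nxt)
                    total += count_saws(n - 1, nxt, visited)
                    visited.remove(nxt)
            return total

        # c_1 = 6, c_2 = 30, ...; c_n^(1/n) approaches mu ~ 4.1508 very slowly.
        for n in range(1, 9):
            c = count_saws(n)
            print(n, c, round(c ** (1 / n), 4))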

    Faster tuple lattice sieving using spherical locality-sensitive filters

    To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve. Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
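
    A minimal sketch of the bucketing idea behind spherical locality-sensitive filters, with illustrative parameters (the filter count and threshold below are arbitrary, and the tuple-sieve machinery of the paper is omitted): each filter is a random unit vector, a point survives a filter when its normalized inner product with it is large, and near-neighbor candidates are drawn only from shared buckets.

        # Toy spherical locality-sensitive filtering (in the spirit of
        # Becker-Ducas-Gama-Laarhoven); parameters are illustrative, and the
        # tuple-sieving application is omitted.
        import numpy as np

        rng = np.random.default_rng(0)
        d, n_filters, alpha = 24, 200, 0.4

        # Each filter is a uniformly random unit vector.
        U = rng.standard_normal((n_filters, d))
        U /= np.linalg.norm(U, axis=1, keepdims=True)

        def survived(points):
            """Boolean matrix: filter i 'catches' normalised point j."""
            P = points / np.linalg.norm(points, axis=1, keepdims=True)
            return U @ P.T >= alpha

        def candidates(points, q):
            """Indices of points sharing at least one filter with query q."""
            mask = survived(points)
            q_filters = np.flatnonzero(U @ (q / np.linalg.norm(q)) >= alpha)
            cand = set()
            for i in q_filters:
                cand.update(np.flatnonzero(mask[i]).tolist())
            return cand

        pts = rng.standard_normal((1000, d))
        q = pts[0] + 0.1 * rng.standard_normal(d)   # planted near neighbour of pts[0]
        c = candidates(pts, q)
        print(len(c), 0 in c)   # typically a small candidate set containing index 0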

    Parallel improved Schnorr-Euchner enumeration SE++ on shared and distributed memory systems, with and without extreme pruning

    The security of lattice-based cryptography relies on the hardness of problems based on lattices, such as the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). This paper presents two parallel implementations of SE++, with and without extreme pruning. SE++ is an enumeration-based CVP-solver, which can easily be adapted to solve the SVP. We improved the SVP version of SE++ with an optimization that avoids symmetric branches, improving its performance by ≈50%, and applied the extreme pruning technique to this improved version. Extreme pruning is the fastest known way to compute the SVP with enumeration to date: it solves the SVP for lattices in much higher dimensions in less time than implementations without extreme pruning. Our parallel implementation of SE++ with extreme pruning targets distributed memory multi-core CPU systems, while our SE++ without extreme pruning is designed for shared memory multi-core CPU systems. These implementations address load balancing problems for optimal performance, with a master-slave mechanism in the distributed memory implementation and specific bounds for task creation in the shared memory implementation. The parallel implementation of SE++ without extreme pruning scales linearly for up to 8 threads and almost linearly for 16 threads. In addition, it achieves super-linear speedups on some instances, as the workload may be shortened: some threads may find shorter vectors earlier than the sequential implementation would. Tests with our improved SE++ implementation showed that it outperforms the state-of-the-art implementation by between 35% and 60%, while maintaining scalability similar to the SE++ implementation. Our parallel implementation of SE++ with extreme pruning achieves linear speedups for up to 8 (working) processes and speedups of up to 13x for 16 (working) processes.
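
    For orientation, here is a stripped-down serial sketch of the depth-first enumeration at the core of solvers like SE++. It keeps only the interval bounds derived from the Gram-Schmidt data; the Schnorr-Euchner zig-zag visiting order, the symmetric-branch optimization described above, the CVP variant, pruning, and all parallelization are omitted. The toy basis is arbitrary.

        # Plain depth-first SVP enumeration over the Gram-Schmidt data of a
        # basis; a pedagogical skeleton, not the SE++ implementation.
        import math
        import numpy as np

        def gram_schmidt(B):
            """GSO coefficients mu[i][j] and squared norms of the b_i*."""
            n = len(B)
            mu = np.zeros((n, n))
            Bstar = np.array(B, dtype=float)
            for i in range(n):
                for j in range(i):
                    mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
                    Bstar[i] -= mu[i, j] * Bstar[j]
            return mu, (Bstar * Bstar).sum(axis=1)

        def svp_enum(B, radius2):
            """Coefficients of the shortest nonzero vector with ||.||^2 < radius2."""
            n = len(B)
            mu, c2 = gram_schmidt(B)
            best = {"len2": radius2, "coeffs": None}

            def dfs(k, rho, v):
                if k < 0:                                 # leaf: full vector fixed
                    if 0 < rho < best["len2"]:
                        best["len2"], best["coeffs"] = rho, v.copy()
                    return
                if rho >= best["len2"]:                   # pruned by a better vector
                    return
                # centre and half-width of the admissible interval at level k
                c = -sum(v[j] * mu[j, k] for j in range(k + 1, n))
                r = math.sqrt((best["len2"] - rho) / c2[k])
                for x in range(math.ceil(c - r), math.floor(c + r) + 1):
                    v[k] = x
                    dfs(k - 1, rho + (x - c) ** 2 * c2[k], v)
                v[k] = 0

            dfs(n - 1, 0.0, [0] * n)
            return best

        B = np.array([[201, 37], [1648, 297]])            # arbitrary toy basis
        print(svp_enum(B, float(B[0] @ B[0])))            # -> ||s||^2 = 1025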

    Hard Mathematical Problems in Cryptography and Coding Theory

    In this thesis, we are concerned with certain interesting computationally hard problems and the complexities of their associated algorithms. All of these problems share a common feature in that they all arise from, or have applications to, cryptography or the theory of error-correcting codes. Each chapter in the thesis is based on a stand-alone paper which attacks a particular hard problem. The problems and the techniques employed in attacking them are described in detail. The first problem concerns integer factorization: given a positive integer $N$, the problem is to find the unique prime factors of $N$. This problem, which was historically of only academic interest to number theorists, has in recent decades assumed a central importance in public-key cryptography. We propose a method for factorizing a given integer using a graph-theoretic algorithm employing Binary Decision Diagrams (BDDs). The second problem that we consider is related to the classification of certain naturally arising classes of error-correcting codes, called self-dual additive codes over the finite field of four elements, $GF(4)$. We address the problem of classifying self-dual additive codes, determining their weight enumerators, and computing their minimum distance. There is a natural relation between self-dual additive codes over $GF(4)$ and graphs via isotropic systems. Utilizing the properties of the corresponding graphs, and again employing Binary Decision Diagrams to compute the weight enumerators, we obtain a theoretical speed-up of the previously developed algorithm for the classification of these codes. The third problem that we investigate deals with one of the central issues in cryptography, with historical origins in the geometry of numbers: the shortest vector problem in lattices. One method used both in theory and in practice to solve the shortest vector problem is enumeration. Lattice enumeration is an exhaustive search whose goal is to find the shortest vector given a lattice basis as input. In our work, we focus on speeding up the lattice enumeration algorithm, and we propose two new ideas to this end. The shortest vector in a lattice can be written as $\mathbf{s} = v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \ldots + v_n\mathbf{b}_n$, where the $v_i \in \mathbb{Z}$ are integer coefficients and the $\mathbf{b}_i$ are the lattice basis vectors. First, we propose an enumeration algorithm, called hybrid enumeration, which is a greedy approach for computing a short interval of possible integer values for the coefficients $v_i$ of a shortest lattice vector. Second, we provide an algorithm for estimating the signs $+$ or $-$ of the coefficients $v_1, v_2, \ldots, v_n$ of a shortest vector $\mathbf{s} = \sum_{i=1}^{n} v_i\mathbf{b}_i$. Both of these algorithms result in a reduction in the number of nodes in the search tree. Finally, the fourth problem that we deal with arises in the arithmetic of the class groups of imaginary quadratic fields. We follow the results of Soleng and Gillibert pertaining to the class numbers of certain sequences of imaginary quadratic fields arising in the arithmetic of elliptic and hyperelliptic curves, and compute a bound on the effective estimates for the orders of class groups of a family of imaginary quadratic number fields. That is, suppose $f(n)$ is a sequence of positive numbers tending to infinity. Given any positive real number $L$, an effective estimate is to find the smallest positive integer $N = N(L)$, depending on $L$, such that $f(n) > L$ for all $n > N$. In other words, given a constant $M > 0$, we find a value $N$ such that the order of the ideal class $I_n$ in the ring $R_n$ (provided by the homomorphism in Soleng's paper) is greater than $M$ for any $n > N$. In summary, in this thesis we attack some hard problems in computer science arising from arithmetic, geometry of numbers, and coding theory, which have applications in the mathematical foundations of cryptography and error-correcting codes.
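
    The thesis's hybrid enumeration and sign-estimation algorithms are not reproduced here, but the following toy brute force (hypothetical basis and bounds) shows why predicted coefficient signs pay off: confining each $v_i$ to one sign shrinks a coefficient search box, and hence the search tree, by a factor of roughly $2^n$.

        # Toy illustration of the payoff of sign estimation; NOT the thesis's
        # algorithm. Brute-force search over a coefficient box, with and
        # without a predicted sign pattern for (v_1, v_2).
        import itertools
        import numpy as np

        B = np.array([[201, 37], [1648, 297]])    # hypothetical 2D basis, rows b_i
        m = 50                                    # coefficient bound |v_i| <= m

        def shortest(box):
            """Shortest nonzero s = v_1 b_1 + v_2 b_2 with v restricted to box."""
            best = None
            for v in itertools.product(*box):
                s = np.array(v) @ B
                n2 = int(s @ s)
                if n2 and (best is None or n2 < best[0]):
                    best = (n2, v)
            return best

        full = [range(-m, m + 1)] * 2             # no sign information: (2m+1)^2 nodes
        print(shortest(full))                     # -> (1025, (-41, 5))

        # With the sign pattern (+, -) predicted for (v_1, v_2), the box is a
        # quarter of the size and still contains a shortest vector (up to sign):
        half = [range(0, m + 1), range(-m, 1)]
        print(shortest(half))                     # -> (1025, (41, -5))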

    Quantum Lattice Enumeration in Limited Depth

    In 2018, Aono et al. (ASIACRYPT 2018) proposed to use quantum backtracking algorithms (Montanaro, TOC 2018; Ambainis and Kokainis, STOC 2017) to speed up lattice point enumeration. Quantum lattice sieving algorithms had already been proposed (Laarhoven et al., PQCRYPTO 2013) and shown to provide an asymptotic speedup over their classical counterparts, but also to lose competitiveness at dimensions relevant to cryptography once practical considerations about quantum computer architecture are taken into account (Albrecht et al., ASIACRYPT 2020). Aono et al.'s work argued that quantum walk speedups can be applied to lattice enumeration, achieving at least a quadratic asymptotic speedup à la Grover search while not requiring exponential amounts of quantumly accessible classical memory, as is the case for sieving. In this work, we explore how to lower bound the cost of using Aono et al.'s techniques on lattice enumeration with extreme cylinder pruning, assuming a limit on the maximum depth that a quantum computation can reach without decohering, with the objective of better understanding the practical applicability of quantum backtracking in lattice cryptanalysis.
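
    As a rough, generic illustration of why a depth limit matters (a textbook depth-limited Grover estimate, not the paper's backtracking cost model): searching T leaves needs about sqrt(T) sequential depth, and once circuits must stop at MAXDEPTH, the total cost degrades toward T/MAXDEPTH.

        # Back-of-the-envelope depth-limited quantum search cost; illustrative
        # numbers, not the paper's lower bounds. With a depth cap D, Grover
        # search over T items costs ~sqrt(T) if sqrt(T) <= D, else ~T/D
        # (Zalka-style partitioning into depth-D chunks).
        def search_cost_log2(log_T, log_D):
            """log2 of the total query cost under a 2^log_D depth limit."""
            return log_T / 2 if log_T / 2 <= log_D else log_T - log_D

        for log_T in (40, 64, 96, 128):
            for log_D in (40, 64, 96):
                print(f"T=2^{log_T}, MAXDEPTH=2^{log_D}: "
                      f"cost ~ 2^{search_cost_log2(log_T, log_D):.0f}")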

    Lattice Enumeration Using Extreme Pruning

    Lattice enumeration algorithms are the most basic algorithms for solving hard lattice problems such as the shortest vector problem and the closest vector problem, and are often used in public-key cryptanalysis either as standalone algorithms or as subroutines in lattice reduction algorithms. Here we revisit these fundamental algorithms and show that surprising exponential speedups can be achieved both in theory and in practice by using a new technique, which we call extreme pruning. We also provide what is arguably the first sound analysis of pruning, which was introduced in the 1990s by Schnorr et al.
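
    The bounding-function idea can be grafted onto the plain enumeration sketch given earlier in this list (reusing its gram_schmidt helper): at depth i the partial squared norm may spend only a fraction p_i of the radius budget, with p_i rising to 1 at the bottom of the tree. The linear profile below is illustrative, not the optimized bounding functions of the paper; a single pruned run misses the shortest vector with significant probability, so the full method rerandomizes the basis and repeats many cheap runs, which is what makes the pruning "extreme".

        # Sketch of enumeration with a pruning profile p (extreme pruning in
        # the style of this paper, but with a crude illustrative profile).
        # Reuses gram_schmidt() from the enumeration sketch earlier in the list.
        import math

        def linear_profile(n):
            """Crude bounding function: fraction of R^2 usable at depth i."""
            return [min(1.0, 1.2 * i / n) for i in range(1, n + 1)]

        def pruned_enum(B, R2, p):
            """Short vector with ||.||^2 < R2, searching only the pruned tree."""
            n = len(B)
            mu, c2 = gram_schmidt(B)
            out = {"len2": R2, "coeffs": None}

            def dfs(k, rho, v):
                if k < 0:
                    if 0 < rho < out["len2"]:
                        out["len2"], out["coeffs"] = rho, v.copy()
                    return
                depth = n - k                          # levels fixed after this one
                budget = p[depth - 1] * R2 - rho       # pruned radius at this depth
                if budget <= 0:
                    return
                c = -sum(v[j] * mu[j, k] for j in range(k + 1, n))
                r = math.sqrt(budget / c2[k])
                for x in range(math.ceil(c - r), math.floor(c + r) + 1):
                    v[k] = x
                    dfs(k - 1, rho + (x - c) ** 2 * c2[k], v)
                v[k] = 0

            dfs(n - 1, 0.0, [0] * n)
            # A single pruned run may miss the shortest vector; extreme pruning
            # rerandomises the basis and retries.
            return out

        # Usage (with the toy basis from the enumeration sketch):
        #   print(pruned_enum(B, float(B[0] @ B[0]), linear_profile(len(B))))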