
    Shortest vector from lattice sieving: A few dimensions for free

    Asymptotically, the best known algorithms for solving the Shortest Vector Problem (SVP) in a lattice of dimension n are sieve algorithms, which have heuristic complexity estimates ranging from (4/3)^{n+o(n)} down to (3/2)^{n/2+o(n)} when Locality Sensitive Hashing techniques are used. In practice, however, sieve algorithms are outperformed by pruned enumeration algorithms by several orders of magnitude, despite the larger super-exponential asymptotic complexity 2^{Θ(n log n)} of the latter. In this work, we show a concrete improvement of sieve-type algorithms. Precisely, we show that a few calls to the sieve algorithm in lattices of dimension less than n - d solve SVP in dimension n, where d = Θ(n / log n). Although our improvement is only sub-exponential, its practical effect in relevant dimensions is quite significant. We implemented it over a simple sieve algorithm with (4/3)^{n+o(n)} complexity, and it outperforms the best sieve algorithms from the literature by a factor of 10 in dimensions 70-80. It performs less than an order of magnitude slower than pruned enumeration in the same range. By design, this improvement can also be applied to most other variants of sieve algorithms, including LSH sieve algorithms and tuple-sieve algorithms. In this light, we may expect sieve techniques to outperform pruned enumeration in practice in the near future.
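    The key step behind these "dimensions for free" is to run the sieve in the lattice projected orthogonally to the first d basis vectors and then lift the short projected vectors back to the full lattice with Babai's nearest-plane algorithm. The sketch below illustrates only this lift step, under simplifying assumptions (integer basis given as rows, integer coefficients of the projected short vector already known); the function names are illustrative, not the paper's implementation.

        # Illustrative lift step (a sketch, not the paper's code): given the integer
        # coefficients of a short vector of the lattice projected orthogonally to
        # B[:d], build a full lattice vector with the same projection and shrink its
        # component along the first d Gram-Schmidt directions via Babai's nearest plane.
        import numpy as np

        def gram_schmidt(B):
            """Gram-Schmidt orthogonalisation of the rows of B."""
            Bs = B.astype(float).copy()
            for i in range(B.shape[0]):
                for j in range(i):
                    Bs[i] -= (B[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
            return Bs

        def lift(B, d, coeffs):
            """B: basis (rows), d: number of dimensions 'for free', coeffs: integer
            coefficients of a short projected vector w.r.t. the projections of B[d:].
            Returns a vector of the full lattice L(B) with that projection."""
            Bs = gram_schmidt(B)
            w = np.asarray(coeffs) @ B[d:]   # lattice vector projecting onto the target
            for i in reversed(range(d)):     # nearest-plane reduction against B[:d]
                c = int(round((w @ Bs[i]) / (Bs[i] @ Bs[i])))
                w = w - c * B[i]
            return w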

    Progressive lattice sieving

    Most algorithms for hard lattice problems are based on the principle of rank reduction: to solve a problem in a d-dimensional lattice, one first solves one or more problem instances in a sublattice of rank d - 1, and then uses this information to find a solution to the original problem. Existing lattice sieving methods, however, tackle lattice problems such as the shortest vector problem (SVP) directly, and work with the full-rank lattice from the start. Lattice sieving further seems to benefit less from starting with reduced bases than other methods, and finding an approximate solution takes almost as long as finding an exact solution. These properties currently set sieving apart from other methods. In this work we consider a progressive approach to lattice sieving, where we gradually introduce new basis vectors only when the sieve has stabilized on the previous basis vectors. This leads to improved (heuristic) guarantees on finding approximate shortest vectors, a bigger practical impact of the quality of the basis on the run-time, better memory management, a smoother and more predictable behavior of the algorithm, and significantly faster convergence: compared to traditional approaches, we save a factor of 20 to 40 in the time complexity for SVP.
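    The progressive idea can be sketched as follows, with a naive pairwise reduction standing in for a real sieve and small random combinations standing in for a proper sampler (all names and parameters below are illustrative assumptions, not the authors' implementation): sieve in the sublattice spanned by the first k basis vectors, then reuse the resulting database when the (k+1)-th basis vector is introduced.

        # Illustrative progressive sieving skeleton.
        import numpy as np

        def pair_reduce(db):
            # Naive stand-in for a sieve: replace v by v +/- w whenever that is shorter.
            changed = True
            while changed:
                changed = False
                for i, v in enumerate(db):
                    for w in db:
                        if w is v:
                            continue
                        for cand in (v - w, v + w):
                            if 0 < cand @ cand < v @ v:
                                db[i] = v = cand
                                changed = True
            return db

        def progressive_sieve(B, samples_per_rank=200, rng=np.random.default_rng(1)):
            """B: basis with rows as basis vectors; returns the shortest vector found."""
            db = []
            for k in range(2, B.shape[0] + 1):
                # Introduce the k-th basis vector only after stabilising on the first k-1.
                for c in rng.integers(-2, 3, size=(samples_per_rank, k)):
                    v = c @ B[:k]
                    if v @ v > 0:
                        db.append(v)
                db = pair_reduce(db)
            return min(db, key=lambda v: v @ v)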

    Faster tuple lattice sieving using spherical locality-sensitive filters

    To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension d in time 2^{0.3717d + o(d)}, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time 2^{0.3588d + o(d)}. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve. Comment: 12 pages + references, 2 figures; subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
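    The filter idea underlying these tradeoffs can be sketched as follows (the number of filters and the threshold below are illustrative choices, not the asymptotically optimal parameters of the paper): every vector is inserted into each bucket whose random unit-vector center makes a small enough angle with it, and candidate pairs or tuples are only searched for inside shared buckets.

        # Illustrative spherical locality-sensitive filter (a simplified sketch).
        import numpy as np
        from collections import defaultdict

        class SphericalLSF:
            def __init__(self, dim, num_filters=256, alpha=0.5, rng=np.random.default_rng(2)):
                centers = rng.normal(size=(num_filters, dim))
                self.centers = centers / np.linalg.norm(centers, axis=1, keepdims=True)
                self.alpha = alpha                     # inner-product threshold
                self.buckets = defaultdict(list)

            def _relevant(self, v):
                u = v / np.linalg.norm(v)
                return np.nonzero(self.centers @ u >= self.alpha)[0]

            def insert(self, v):
                for f in self._relevant(v):
                    self.buckets[int(f)].append(v)

            def candidates(self, q):
                # Union of the buckets the query falls into; usually a small
                # fraction of the whole database.
                out = []
                for f in self._relevant(q):
                    out.extend(self.buckets[int(f)])
                return out

        rng = np.random.default_rng(7)
        lsf = SphericalLSF(dim=32)
        points = rng.normal(size=(1000, 32))
        for p in points:
            lsf.insert(p)
        print(len(lsf.candidates(points[0])))   # typically far fewer than 1000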

    LUSA: the HPC library for lattice-based cryptanalysis

    This paper introduces LUSA - the Lattice Unified Set of Algorithms library - a C++ library that comprises many high-performance, parallel implementations of lattice algorithms, with a particular focus on lattice-based cryptanalysis. Currently, LUSA offers algorithms for lattice reduction and the SVP. LUSA was designed to 1) be simple to install and use, 2) have no external dependencies, 3) target lattice-based cryptanalysis specifically, covering the majority of the most relevant algorithms in this field, and 4) offer efficient, parallel and scalable implementations of those algorithms. LUSA explores parallelism mainly at the thread level, being based on OpenMP. However, the code is also written to be efficient at the cache and operation level, taking advantage of carefully sorted data structures and data-level parallelism. This paper shows that LUSA delivers on these promises, by being simple to use while consistently outperforming its counterparts, such as NTL, plll and fplll, and offering scalable, parallel implementations of the most relevant algorithms to date, which are currently not available in other libraries.

    Heterogeneous implementation of a Voronoi cell-based SVP solver

    This paper presents a new, heterogeneous CPU+GPU attack against lattice-based (post-quantum) cryptosystems based on the Shortest Vector Problem (SVP), a central problem in lattice-based cryptanalysis. To the best of our knowledge, this is the first SVP attack against lattice-based cryptosystems using CPUs and GPUs simultaneously. We show that Voronoi-cell based CPU+GPU attacks, algorithmically improved in previous work, are suitable for the proposed massively parallel platforms. Results show that 1) heterogeneous platforms are useful in this scenario, as they increase the overall memory available in the system (since GPU memory can be used effectively), memory being a typical bottleneck for Voronoi-cell algorithms, and we have also been able to increase the performance of the algorithm on such a platform by successfully using the GPU as a co-processor, 2) this attack can be successfully accelerated using conventional GPUs, and 3) we can take advantage of multiple GPUs to attack lattice-based cryptosystems. Experimental results show a speedup of up to 7.6× for 2 GPUs hosted by an Intel Xeon E5-2695 v2 CPU (12 cores × 2 sockets) using only 1 core, and gains in the order of 20% for 2 GPUs hosted by the same machine using 22 CPU threads (2 are reserved for orchestrating the GPUs), compared to single-CPU execution using all 24 available threads. This work was supported in part by the Instituto de Telecomunicações, in part by the Fundação para a Ciência e a Tecnologia (FCT) under Grant UID/EEA/50008/2019 and Grant PTDC/EEI-HAC/30485/2017, and in part by the National Funds through the Portuguese Funding Agency, FCT - Fundação para a Ciência e a Tecnologia, under Grant UID/EEA/50014/2019. The work of A. Mariano was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Grant 382285730.
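    At the core of Voronoi-cell algorithms sits the iterative slicer, which reduces a target against a precomputed list of Voronoi-relevant vectors; since a lattice can have up to 2(2^n - 1) relevant vectors, storing this list is the memory bottleneck mentioned above. The sketch below shows only this reduction loop, assuming the relevant vectors are already available; it is not the paper's CPU+GPU implementation.

        # Illustrative iterative slicer (relevant vectors assumed precomputed).
        import numpy as np

        def iterative_slicer(t, relevant):
            """Reduce target t modulo the lattice using its relevant vectors;
            returns the error vector lying in the Voronoi cell around the origin."""
            t = np.asarray(t, dtype=float)
            changed = True
            while changed:
                changed = False
                for v in relevant:
                    if (t - v) @ (t - v) < t @ t:
                        t = t - v
                        changed = True
            return t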

    Polytopes, Lattices, and Spherical Codes for the Nearest Neighbor Problem

    We study locality-sensitive hash methods for the nearest neighbor problem for the angular distance, focusing on the approach of first projecting down onto a random low-dimensional subspace, and then partitioning the projected vectors according to the Voronoi cells induced by a well-chosen spherical code. This approach generalizes and interpolates between the fast but asymptotically suboptimal hyperplane hashing of Charikar [STOC 2002], and the asymptotically optimal but practically often slower hash families of e.g. Andoni-Indyk [FOCS 2006], Andoni-Indyk-Nguyen-Razenshteyn [SODA 2014] and Andoni-Indyk-Laarhoven-Razenshteyn-Schmidt [NIPS 2015]. We set up a framework for analyzing the performance of any spherical code in this context, and we provide results for various codes appearing in the literature, such as those related to regular polytopes and root lattices. Similar to hyperplane hashing, and unlike e.g. cross-polytope hashing, our analysis of collision probabilities and query exponents is exact and does not hide any order terms which vanish only for large d, thus facilitating an easier parameter selection in practical applications. For the two-dimensional case, we analytically derive closed-form expressions for arbitrary spherical codes, and we show that the equilateral triangle is optimal, achieving a better performance than the two-dimensional analogues of hyperplane and cross-polytope hashing. In three and four dimensions, we numerically find that the tetrahedron and 5-cell (the 3-simplex and 4-simplex) and the 16-cell (the 4-orthoplex) achieve the best query exponents, while in five or more dimensions orthoplices appear to outperform regular simplices, as well as the root lattice families A_k and D_k, in terms of minimizing the query exponent. We provide lower bounds based on spherical caps, and we predict that in higher dimensions larger spherical codes exist which outperform orthoplices in terms of the query exponent, and we argue why using the D_k root lattices will likely lead to better results in practice as well (compared to using cross-polytopes), due to a better trade-off between the asymptotic query exponent and the concrete costs of hashing.
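    A hash function from this family can be sketched as follows (the projection dimension and the spherical code below are placeholder choices; the paper analyzes many codes): project onto a random low-dimensional subspace and output the index of the Voronoi cell of the code the projected vector falls into, i.e. the nearest codeword. The example uses the k-orthoplex (cross-polytope) code, whose codewords are the vectors +/- e_i.

        # Illustrative spherical-code hash: random projection + nearest codeword.
        import numpy as np

        def make_hash(dim, k=4, rng=np.random.default_rng(3)):
            A = rng.normal(size=(k, dim))               # random projection onto k dimensions
            code = np.vstack([np.eye(k), -np.eye(k)])   # 2k codewords of the k-orthoplex

            def h(v):
                p = A @ v
                p = p / np.linalg.norm(p)
                return int(np.argmax(code @ p))         # Voronoi cell = nearest codeword
            return h

        h = make_hash(dim=128)
        x = np.random.default_rng(4).normal(size=128)
        print(h(x))   # bucket index in {0, ..., 2k - 1}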

    Hypercube LSH for Approximate near Neighbors

    A celebrated technique for finding near neighbors for the angular distance involves using a set of random hyperplanes to partition the space into hash regions [Charikar, STOC 2002]. Experiments later showed that using a set of orthogonal hyperplanes, thereby partitioning the space into the Voronoi regions induced by a hypercube, leads to even better results [Terasawa and Tanaka, WADS 2007]. However, no theoretical explanation for this improvement was ever given, and it remained unclear how the resulting hypercube hash method scales in high dimensions. In this work, we provide explicit asymptotics for the collision probabilities when using hypercubes to partition the space. For instance, two near-orthogonal vectors are expected to collide with probability (1/pi)^d in dimension d, compared to (1/2)^d when using random hyperplanes. Vectors at angle pi/3 collide with probability (sqrt(3)/pi)^d, compared to (2/3)^d for random hyperplanes, and near-parallel vectors collide with similar asymptotic probabilities in both cases. For c-approximate nearest neighbor searching, this translates to a decrease in the exponent rho of locality-sensitive hashing (LSH) methods of a factor up to log2(pi) ~ 1.652 compared to hyperplane LSH. For c = 2, we obtain rho ~ 0.302 for hypercube LSH, improving upon the rho ~ 0.377 for hyperplane LSH. We further describe how to use hypercube LSH in practice, and we consider an example application in the area of lattice algorithms.
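    Because the Voronoi regions induced by the vertices of a hypercube are exactly the orthants, hypercube hashing amounts to applying a random rotation and recording the sign pattern of the resulting coordinates. The sketch below illustrates this under that simplification; the rotation, dimensions and bucket encoding are illustrative choices, not those of the paper.

        # Illustrative hypercube hash: random rotation + sign pattern (orthant index).
        import numpy as np

        def random_rotation(dim, rng):
            # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
            Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
            return Q

        def hypercube_hash(v, Q, k):
            signs = (Q @ v)[:k] > 0
            return int("".join("1" if s else "0" for s in signs), 2)

        rng = np.random.default_rng(5)
        Q = random_rotation(64, rng)
        v = rng.normal(size=64)
        print(hypercube_hash(v, Q, k=8))   # bucket index in {0, ..., 2^8 - 1}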

    SCloud: Public Key Encryption and Key Encapsulation Mechanism Based on Learning with Errors

    We propose a new family of public key encryption (PKE) and key encapsulation mechanism (KEM) schemes based on the plain learning with errors (LWE) problem. Two new design techniques are adopted in the proposed scheme, named SCloud: the sampling method and the error-reconciliation mechanism. The new sampling method is obtained by studying the properties of the convolution of a central binomial distribution and a bounded uniform distribution, which achieves higher efficiency and more flexibility with respect to the parameter choice. Moreover, it is shown to be more secure against the dual attack due to its advantage in terms of the distinguishing property. The new error-reconciliation mechanism is constructed by combining binary linear codes and Gray codes. It reduces the size of the parameters, and thereby improves the encryption/decryption efficiency as well as the communication efficiency, by making full use of the encryption space. Based on these two techniques, SCloud can provide various sets of parameters for refined security levels.
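    The sampling idea can be illustrated as follows (the parameters eta and bound below are placeholders, not SCloud's actual parameters): each error coefficient is drawn as the sum of a centered binomial sample and a bounded uniform sample, so its distribution is the convolution of the two.

        # Illustrative sampler: convolution of a centered binomial distribution
        # with parameter eta and a uniform distribution on {-bound, ..., bound}.
        import numpy as np

        def sample_error(n, eta=2, bound=1, rng=np.random.default_rng(6)):
            cbd = (rng.integers(0, 2, size=(n, eta)).sum(axis=1)
                   - rng.integers(0, 2, size=(n, eta)).sum(axis=1))
            uniform = rng.integers(-bound, bound + 1, size=n)
            return cbd + uniform

        print(sample_error(8))   # e.g. an array of small signed errors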