
    Approximate Voronoi cells for lattices, revisited

    We revisit the approximate Voronoi cells approach for solving the closest vector problem with preprocessing (CVPP) on high-dimensional lattices, and settle the open problem of Doulgerakis-Laarhoven-De Weger [PQCrypto, 2019] of determining exact asymptotics on the volume of these Voronoi cells under the Gaussian heuristic. As a result, we obtain improved upper bounds on the time complexity of the randomized iterative slicer when using less than 2^{0.076d + o(d)} memory, and we show how to obtain time-memory trade-offs even when using less than 2^{0.048d + o(d)} memory. We also settle the open problem of obtaining a continuous trade-off between the size of the advice and the query time complexity, as the time complexity with subexponential advice in our approach scales as d^{d/2 + o(d)}, matching worst-case enumeration bounds and achieving the same asymptotic scaling as average-case enumeration algorithms for the closest vector problem.
    Comment: 18 pages, 1 figure
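    The iterative slicer at the heart of this approach is simple to state. Below is a minimal Python sketch under our reading of the abstract: the function names, the rerandomization coefficient range, and the trial count are illustrative choices, not the paper's; the preprocessed list of short lattice vectors is assumed given. One pass greedily reduces the target into an approximate Voronoi cell, and rerandomizing over the target's coset amplifies the success probability, as in the randomized slicer analysed above.

```python
import numpy as np

def iterative_slicer(target, short_vectors):
    """Greedily reduce the target with a preprocessed list of short
    lattice vectors until no vector in the list shortens it further.
    The output lies in the same coset of the lattice as the input."""
    t = np.asarray(target, dtype=float)
    reduced = True
    while reduced:
        reduced = False
        for v in short_vectors:
            if np.linalg.norm(t - v) < np.linalg.norm(t):
                t = t - v
                reduced = True
    return t

def randomized_slicer(target, short_vectors, basis, trials=100, seed=0):
    """Rerandomize the target by adding random lattice vectors (here with
    small integer coefficients, an illustrative choice) and keep the
    shortest coset representative found across all trials."""
    rng = np.random.default_rng(seed)
    t = np.asarray(target, dtype=float)
    best = iterative_slicer(t, short_vectors)
    for _ in range(trials):
        shift = rng.integers(-2, 3, size=basis.shape[0]) @ basis  # random lattice vector
        cand = iterative_slicer(t + shift, short_vectors)
        if np.linalg.norm(cand) < np.linalg.norm(best):
            best = cand
    return t - best  # heuristic guess for the lattice vector closest to the target
```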

    Sieve, Enumerate, Slice, and Lift: Hybrid Lattice Algorithms for SVP via CVPP

    Motivated by recent results on solving large batches of closest vector problem (CVP) instances, we study how these techniques can be combined with lattice enumeration to obtain faster methods for solving the shortest vector problem (SVP) on high-dimensional lattices. Theoretically, under common heuristic assumptions we show how to solve SVP in dimension d with a cost proportional to running a sieve in dimension d - \Theta(d / \log d), resulting in a 2^{\Theta(d / \log d)} speedup and memory reduction compared to running a full sieve. Combined with techniques from [Ducas, Eurocrypt 2018] we can asymptotically get a total of [\log(13/9) + o(1)] \cdot d / \log d dimensions "for free" for solving SVP. Practically, the main obstacles for observing a speedup in moderate dimensions appear to be that the leading constant in the \Theta(d / \log d) term is rather small; that the overhead of the (batched) slicer may be large; and that competitive enumeration algorithms heavily rely on aggressive pruning techniques, which appear to be incompatible with our algorithms. These obstacles prevented this asymptotic speedup (compared to full sieving) from being observed in our experiments. However, it could be expected to become visible once optimized CVPP techniques are used in higher-dimensional experiments.
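    In outline, the hybrid combines enumeration over a few top-level coefficients with a batch of CVPP calls in a lower-dimensional sublattice. The sketch below is a simplified illustration under that reading, not the paper's algorithm: it enumerates coefficients for the last k basis vectors over an illustrative range and delegates the remaining dimensions to an assumed, preprocessed oracle cvpp_solve, omitting the projection, lifting, and batching optimizations that drive the asymptotic speedup.

```python
import numpy as np
from itertools import product

def hybrid_svp(basis, k, cvpp_solve, coeff_range=range(-2, 3)):
    """Hybrid SVP sketch: enumerate coefficients for the last k basis
    vectors; for each guess, the best completion is the vector in the
    sublattice spanned by the first d-k rows that is closest to the
    negated residual, found via a preprocessed CVPP oracle."""
    d = basis.shape[0]
    head, tail = basis[:d - k], basis[d - k:]
    best = None
    for coeffs in product(coeff_range, repeat=k):
        if not any(coeffs):
            continue  # skip the all-zero guess (it yields only the zero target)
        w = np.asarray(coeffs, dtype=float) @ tail
        v = cvpp_solve(-w, head)  # closest vector to -w in the sublattice L(head)
        cand = w + v
        if best is None or np.linalg.norm(cand) < np.linalg.norm(best):
            best = cand
    return best
```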

    The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier

    Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways:
    - We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis–Laarhoven–De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019].
    - We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities.
    - We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker–Gama–Joux [Cryptology ePrint Archive, 2015]. Using 2^{0.185d + o(d)} memory, we can solve a single CVPP instance in 2^{0.264d + o(d)} time.
    - We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using [...] memory, we can heuristically solve CVPP instances in [...] amortized time, for batches of size at least [...].
    Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
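    The shortest-path view mentioned above translates directly into a numerical procedure. The sketch below is a minimal illustration: the discretized state space (e.g. squared target norms) and the transition_prob model (e.g. derived from the Gaussian heuristic) are assumed inputs rather than anything fixed by the paper. Maximizing a product of step probabilities equals minimizing a sum of negative log-probabilities, so Dijkstra's algorithm recovers the optimal reduction path and its overall success probability.

```python
import heapq
import math

def optimal_path(states, transition_prob, start, goal):
    """Most likely multi-step path of a random walk over a list of
    discretized states: edge weight -log(p) turns probability
    maximization into a shortest path problem solved with Dijkstra."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in states:
            if v == u:
                continue
            p = transition_prob(u, v)  # assumed model of one-step success
            if p <= 0.0:
                continue
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if goal not in dist:
        return None, 0.0
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[goal])  # optimal path and its probability
```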

    Lower bounds on lattice sieving and information set decoding

    In two of the main areas of post-quantum cryptography, based on lattices and codes, nearest neighbor techniques have been used to speed up state-of-the-art cryptanalytic algorithms, and to obtain the lowest asymptotic cost estimates to date [May-Ozerov, Eurocrypt 2015; Becker-Ducas-Gama-Laarhoven, SODA 2016]. These upper bounds are useful for assessing the security of cryptosystems against known attacks, but to guarantee long-term security one would like to have closely matching lower bounds, showing that improvements on the algorithmic side will not drastically reduce the security in the future. As existing lower bounds from the nearest neighbor literature do not apply to the nearest neighbor problems appearing in this context, one might wonder whether further speedups to these cryptanalytic algorithms can still be found by only improving the nearest neighbor subroutines. We derive new lower bounds on the costs of solving the nearest neighbor search problems appearing in these cryptanalytic settings. For the Euclidean metric we show that for random data sets on the sphere, the locality-sensitive filtering approach of [Becker-Ducas-Gama-Laarhoven, SODA 2016] using spherical caps is optimal, and hence within a broad class of lattice sieving algorithms covering almost all approaches to date, their asymptotic time complexity of 2^{0.292d + o(d)} is optimal. Similar conditional optimality results apply to lattice sieving variants, such as the 2^{0.265d + o(d)} complexity for quantum sieving [Laarhoven, PhD thesis 2016] and previously derived complexity estimates for tuple sieving [Herold-Kirshanova-Laarhoven, PKC 2018]. For the Hamming metric we derive new lower bounds for nearest neighbor searching which almost match the best upper bounds from the literature [May-Ozerov, Eurocrypt 2015]. As a consequence we derive conditional lower bounds on decoding attacks, showing that also here one should search for improvements elsewhere to significantly undermine security estimates from the literature.
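    For concreteness, spherical-cap locality-sensitive filtering works as sketched below. This is a minimal illustration: the filter count and the alpha/beta thresholds are placeholder values, not the asymptotically optimal parameters of [Becker-Ducas-Gama-Laarhoven, SODA 2016], and the structured filters used there for efficient decoding are replaced by plain random caps.

```python
import numpy as np

class SphericalCapLSF:
    """Nearest neighbour filtering with spherical caps: a point survives
    a random unit-vector filter f when <point, f> >= alpha; nearby points
    on the sphere tend to survive the same filters, so a query only
    compares against points in the filters it itself survives."""
    def __init__(self, dim, num_filters=1000, alpha=0.5, beta=0.5, seed=0):
        rng = np.random.default_rng(seed)
        f = rng.standard_normal((num_filters, dim))
        self.filters = f / np.linalg.norm(f, axis=1, keepdims=True)
        self.alpha, self.beta = alpha, beta
        self.buckets = [[] for _ in range(num_filters)]

    def insert(self, point):
        # store the point in every filter (cap) it survives
        for i in np.flatnonzero(self.filters @ point >= self.alpha):
            self.buckets[i].append(point)

    def query(self, point, max_dist):
        seen, result = set(), []
        for i in np.flatnonzero(self.filters @ point >= self.beta):
            for p in self.buckets[i]:
                key = id(p)  # dedupe points shared across buckets
                if key not in seen and np.linalg.norm(p - point) <= max_dist:
                    seen.add(key)
                    result.append(p)
        return result
```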

    Dual lattice attacks for closest vector problems (with preprocessing)

    The dual attack has long been considered a relevant attack on lattice-based cryptographic schemes relying on the hardness of learning with errors (LWE) and its structured variants. As solving LWE corresponds to finding a nearest point on a lattice, one may naturally wonder how efficient this dual approach is for solving more general closest vector problems, such as the classical closest vector problem (CVP), the variants bounded distance decoding (BDD) and approximate CVP, and preprocessing versions of these problems. While primal, sieving-based solutions to these problems (with preprocessing) were recently studied in a series of works on approximate Voronoi cells, for the dual attack no such overview exists, especially for problems with preprocessing. With one of the take-away messages of the approximate Voronoi cell line of work being that primal attacks work well for approximate CVP(P) but scale poorly for BDD(P), one may wonder if the dual attack suffers the same drawbacks, or if it is a better method for solving BDD(P). In this work we provide an overview of cost estimates for dual algorithms for solving these "classical" closest lattice vector problems. Heuristically we expect to solve the search version of average-case CVPP in time and space 2^{0.293d + o(d)}. For the distinguishing version of average-case CVPP, where we wish to distinguish between random targets and targets planted at distance approximately the Gaussian heuristic from the lattice, we obtain the same complexity in the single-target model, and we obtain query time and space complexities of 2^{0.195d + o(d)} in the multi-target setting, where we are given a large number of targets from either target distribution. This suggests an inequivalence between distinguishing and searching, as we do not expect a similar improvement in the multi-target setting to hold for search-CVPP. We analyze three slightly different decoders, both for distinguishing and searching, and experimentally obtain concrete cost estimates for the dual attack in dimensions 50 to 80, which confirm our heuristic assumptions and show that the hidden order terms in the asymptotic estimates are quite small. Our main take-away message is that the dual attack appears to mirror the approximate Voronoi cell line of work: whereas using approximate Voronoi cells works well for approximate CVP(P) but scales poorly for BDD(P), the dual approach scales well for BDD(P) instances but performs poorly on approximate CVP(P).
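    The distinguishing variant described above admits a compact sketch. The code below illustrates the standard dual distinguisher under our reading of the abstract, not any specific decoder from the paper: the preprocessed list of short dual vectors and the acceptance threshold are assumed inputs, with the threshold calibrated in practice to the planted distance and the dual vectors' lengths. For targets near the lattice, inner products with short dual vectors are close to integers, so the cosine score concentrates away from zero.

```python
import numpy as np

def dual_score(target, dual_vectors):
    """Average cosine statistic of the dual attack: <w, t> mod 1 is
    biased towards 0 when t lies near the lattice, so the score is
    noticeably positive for planted targets and near 0 for random ones."""
    phases = dual_vectors @ np.asarray(target, dtype=float)
    return float(np.mean(np.cos(2 * np.pi * phases)))

def distinguish(target, dual_vectors, threshold):
    # Guess "planted" when the score exceeds the threshold; averaging
    # scores over many targets sharpens the distinction, matching the
    # multi-target setting discussed above.
    return dual_score(target, dual_vectors) >= threshold
```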
