
    Faster tuple lattice sieving using spherical locality-sensitive filters

    To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve.
    Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
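
    As a rough illustration of the filtering primitive, here is a minimal spherical locality-sensitive filter in Python. This is only a sketch, not the paper's optimized construction (real implementations use structured, efficiently decodable filter families rather than a linear scan over random centers), and the thresholds alpha and beta, which trade insertion cost against query cost, are placeholder parameters.

        import numpy as np

        def random_unit_vectors(m, d, rng):
            """Sample m uniformly random points on the unit sphere in R^d."""
            f = rng.standard_normal((m, d))
            return f / np.linalg.norm(f, axis=1, keepdims=True)

        class SphericalLSF:
            """Bucket unit vectors by inner product with random filter centers."""
            def __init__(self, m, d, alpha, seed=0):
                self.filters = random_unit_vectors(m, d, np.random.default_rng(seed))
                self.alpha = alpha
                self.buckets = [[] for _ in range(m)]

            def insert(self, v):
                # v survives filter i iff <v, f_i> >= alpha
                for i in np.flatnonzero(self.filters @ v >= self.alpha):
                    self.buckets[i].append(v)

            def query(self, q, beta):
                # Collect candidates from every filter that q passes at threshold
                # beta; nearby vectors pass many of the same filters with high
                # probability, so the candidate lists are short but relevant.
                cands = []
                for i in np.flatnonzero(self.filters @ q >= beta):
                    cands.extend(self.buckets[i])
                return cands

    Raising alpha makes buckets sparser (cheaper queries, more filters needed), while lowering beta widens the query cone; tuning this pair is exactly the space-time tradeoff the paper studies.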

    Fast optimization algorithms and the cosmological constant

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of an NP-hard problem. The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order $10^{-120}$ in a randomly generated $10^9$-dimensional ADK landscape.
    Comment: 19 pages, 5 figures
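
    In an ADK-type landscape the vacuum energy is roughly a sum of independent contributions with free signs, $\Lambda = \Lambda_0 + \sum_i \eta_i \epsilon_i$ with $\eta_i = \pm 1$, so minimizing $|\Lambda|$ is a number-partitioning problem. As a hedged illustration of why heuristics beat the $2^N$ brute-force search, here is the classic Karmarkar-Karp differencing heuristic in Python; this is a standard number-partitioning technique, not necessarily the paper's algorithm, and the toy landscape below (with $\Lambda_0 = 0$) is an assumption for demonstration only.

        import heapq
        import random

        def karmarkar_karp(contributions):
            """Differencing heuristic: repeatedly replace the two largest
            magnitudes by their difference (committing them to opposite signs).
            The final residue is an achievable |sum_i eta_i * eps_i|."""
            heap = [-abs(x) for x in contributions]  # max-heap via negation
            heapq.heapify(heap)
            while len(heap) > 1:
                a = -heapq.heappop(heap)   # largest remaining magnitude
                b = -heapq.heappop(heap)   # second largest
                heapq.heappush(heap, -(a - b))
            return -heap[0]

        # Toy landscape: N random contributions; brute force would cost 2^N.
        random.seed(0)
        eps = [random.random() for _ in range(10_000)]
        print(f"residual |Lambda| ~ {karmarkar_karp(eps):.3e}")

    Differencing reaches residues far below what random sign choices give, in near-linear time, which is the qualitative point: structure in the problem makes the astronomically large search tractable.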

    Two Compact Incremental Prime Sieves

    A prime sieve is an algorithm that finds the primes up to a bound $n$. We say that a prime sieve is incremental if it can quickly determine whether $n+1$ is prime after having found all primes up to $n$. We say a sieve is compact if it uses roughly $\sqrt{n}$ space or less. In this paper we present two new results: (1) we describe the rolling sieve, a practical, incremental prime sieve that takes $O(n \log\log n)$ time and $O(\sqrt{n} \log n)$ bits of space, and (2) we show how to modify the sieve of Atkin and Bernstein (2004) to obtain a sieve that is simultaneously sublinear, compact, and incremental. The second result solves an open problem posed by Paul Pritchard in 1994.
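
    For intuition, here is a standard incremental sieve in Python, shown only to illustrate the "incremental" property: each prime is stored with the next composite it must cross off, so primality of the next integer is decided without restarting the sieve. This well-known dictionary-based variant is not the paper's rolling sieve and does not meet its $O(\sqrt{n} \log n)$ space bound.

        def incremental_primes():
            """Yield primes indefinitely; each prime p is stored under the
            next multiple of p that still needs to be crossed off."""
            composites = {}  # next composite -> list of primes that hit it
            n = 2
            while True:
                if n not in composites:
                    yield n                      # n is prime
                    composites[n * n] = [n]      # first relevant multiple is n^2
                else:
                    for p in composites.pop(n):  # n is composite; roll each witness
                        composites.setdefault(n + p, []).append(p)
                n += 1

        gen = incremental_primes()
        print([next(gen) for _ in range(10)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]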

    Sifting data in the real world

    In the real world, experimental data are rarely, if ever, distributed as a normal (Gaussian) distribution. As an example, a large set of data--such as the cross sections for particle scattering as a function of energy contained in the archives of the Particle Data Group--is a compendium of all published data, and hence, unscreened. Inspection of similar data sets quickly shows that, for many reasons, these data sets have many outliers--points well beyond what is expected from a normal distribution--thus ruling out the use of conventional $\chi^2$ techniques. This note suggests an adaptive algorithm that allows a phenomenologist to apply to the data sample a sieve whose mesh is coarse enough to let the background fall through, but fine enough to retain the preponderance of the signal, thus sifting the data. A prescription is given for finding a robust estimate of the best-fit model parameters in the presence of a noisy background, together with a robust estimate of the model parameter errors, as well as a determination of the goodness-of-fit of the data to the theoretical hypothesis. Extensive computer simulations are carried out to test the algorithm for both its accuracy and stability under varying background conditions.
    Comment: 29 pages, 13 figures. Version to appear in Nucl. Instr. & Meth.
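
    A toy version of such a sieve, under simplifying assumptions (a straight-line model fit by weighted least squares, and a hypothetical per-point $\chi^2$ cut of 6; the paper's actual prescription uses a robust fit and its own tuned cut), might look like this in Python: fit, discard points whose individual $\chi^2$ contribution exceeds the cut, and refit until the kept sample stabilizes.

        import numpy as np

        def sieve_fit(x, y, sigma, cut=6.0, max_rounds=20):
            """Iteratively fit y ~ a*x + b, dropping points whose individual
            chi^2 contribution exceeds `cut`, until the kept set stabilizes."""
            keep = np.ones_like(x, dtype=bool)
            for _ in range(max_rounds):
                # Weighted least-squares fit on the currently kept points.
                coeffs = np.polyfit(x[keep], y[keep], deg=1, w=1.0 / sigma[keep])
                resid = (y - np.polyval(coeffs, x)) / sigma
                new_keep = resid**2 <= cut
                if np.array_equal(new_keep, keep):
                    break
                keep = new_keep
            return coeffs, keep

    Because the cut is applied to all points on each round, previously discarded points can re-enter if the refit moves toward them, which keeps the trimming from running away when the background is mild.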

    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time $2^{1.799n + o(n)}$, improving upon the classical time complexity of $2^{2.465n + o(n)}$ of Pujol and Stehlé and the $2^{2n + o(n)}$ of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time $2^{0.312n + o(n)}$, improving upon the classical time complexity of $2^{0.384n + o(n)}$ of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem.
    Comment: 19 pages
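
    The shape of these savings follows from a back-of-the-envelope argument (a sketch, not the paper's derivation): Grover search finds a marked item in an unstructured list of size $N$ in $O(\sqrt{N})$ queries, so if a sieve performs $2^{c_1 n + o(n)}$ outer iterations, each searching a list of $2^{c_2 n + o(n)}$ vectors, the per-iteration cost drops from $2^{c_2 n}$ to $2^{c_2 n/2}$:

        \[
          T_{\text{classical}} = 2^{(c_1 + c_2)\,n + o(n)}
          \quad\longrightarrow\quad
          T_{\text{quantum}} = 2^{(c_1 + \tfrac{1}{2} c_2)\,n + o(n)}.
        \]

    The quoted exponents do not follow this pattern exactly because list sizes and iteration counts must be re-optimized jointly once the search subroutine gets cheaper, but the square-root saving on inner search is the driving effect.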