25,541 research outputs found

    Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search

    By applying Grover's quantum search algorithm to the lattice algorithms of Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and Stehlé, we obtain improved asymptotic quantum results for solving the shortest vector problem. With quantum computers we can provably find a shortest vector in time $2^{1.799n + o(n)}$, improving upon the classical time complexity of $2^{2.465n + o(n)}$ of Pujol and Stehlé and the $2^{2n + o(n)}$ of Micciancio and Voulgaris, while heuristically we expect to find a shortest vector in time $2^{0.312n + o(n)}$, improving upon the classical time complexity of $2^{0.384n + o(n)}$ of Wang et al. These quantum complexities will be an important guide for the selection of parameters for post-quantum cryptosystems based on the hardness of the shortest vector problem. Comment: 19 pages.
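    As a rough back-of-the-envelope illustration of where the quantum speedup comes from (an assumption-laden sketch, not the paper's actual analysis): under the standard heuristic that a sieve maintains a list of roughly $N \approx (4/3)^{n/2} \approx 2^{0.2075n}$ lattice vectors and performs about $N$ searches over that list, Grover's algorithm replaces each linear-time search by an $O(\sqrt{N})$-query quantum search:

    \[
    T_{\mathrm{classical}} \approx N \cdot N \approx 2^{0.415n + o(n)},
    \qquad
    T_{\mathrm{quantum}} \approx N \cdot \sqrt{N} \approx 2^{0.311n + o(n)},
    \]

    which is in line with the heuristic $2^{0.312n + o(n)}$ exponent quoted above.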

    Faster tuple lattice sieving using spherical locality-sensitive filters

    To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve. Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
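    The core filtering idea can be sketched in a few lines (a minimal illustration under assumed parameters, not the paper's actual construction or its optimized filter families; the class name SphericalFilters and the parameters num_filters and alpha are hypothetical): each filter is a random direction on the unit sphere, a vector is placed only in the buckets of filters it correlates with above a threshold alpha, and reductions are attempted only between vectors that share a bucket.

        import numpy as np
        from collections import defaultdict

        class SphericalFilters:
            """Toy spherical locality-sensitive filters: a vector goes into the
            bucket of every random filter direction it correlates with above alpha."""

            def __init__(self, dim, num_filters=200, alpha=0.5, seed=0):
                rng = np.random.default_rng(seed)
                directions = rng.normal(size=(num_filters, dim))
                # Normalize the filter directions onto the unit sphere.
                self.directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)
                self.alpha = alpha
                self.buckets = defaultdict(list)

            def _relevant_filters(self, v):
                u = v / np.linalg.norm(v)
                return np.nonzero(self.directions @ u > self.alpha)[0]

            def insert(self, idx, v):
                for f in self._relevant_filters(v):
                    self.buckets[f].append(idx)

            def candidates(self, v):
                # Only vectors sharing a bucket with v are compared, instead of the whole list.
                cand = set()
                for f in self._relevant_filters(v):
                    cand.update(self.buckets[f])
                return cand

    In a (tuple) sieve, every list vector is inserted once and each new vector is only tested against candidates(v); the space-time tradeoff is steered by the number of filters and the threshold alpha, which is what the paper optimizes asymptotically.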

    On the Quantitative Hardness of CVP

    For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem in the $\ell_p$ norm ($\mathrm{CVP}_p$) over rank $n$ lattices cannot be solved in $2^{(1-\varepsilon)n}$ time for any constant $\varepsilon > 0$ unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of $p \geq 1$, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of $\mathrm{CVP}_2$ (i.e., $\mathrm{CVP}$ in the Euclidean norm), for which a $2^{n+o(n)}$-time algorithm is known. In particular, our result applies for any $p = p(n) \neq 2$ that approaches $2$ as $n \to \infty$. We also show a similar SETH-hardness result for $\mathrm{SVP}_\infty$; hardness of approximating $\mathrm{CVP}_p$ to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for $\mathrm{CVP}_p$ and $\mathrm{CVPP}_p$ for any $1 \leq p < \infty$ under different assumptions.
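    For reference, the hypothesis these lower bounds are conditioned on is the standard one (this is the textbook statement of SETH, not a definition specific to this paper):

    \[
    \text{SETH: for every } \varepsilon > 0 \text{ there exists a } k \text{ such that } k\text{-SAT on } n \text{ variables cannot be solved in } 2^{(1-\varepsilon)n} \text{ time.}
    \]

    In this form, a $2^{(1-\varepsilon)n}$-time algorithm for $\mathrm{CVP}_p$, for the values of $p$ covered above, would refute SETH.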

    Solving the Closest Vector Problem in $2^n$ Time -- The Discrete Gaussian Strikes Again!

    We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $\widetilde{O}(4^{n})$-time and $\widetilde{O}(2^{n})$-space algorithm of Micciancio and Voulgaris. We achieve our main result in three steps. First, we show how to modify the sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian sampling over lattice shifts, $L - t$, with very low parameters. While the actual algorithm is a natural generalization of [ADRS15], the analysis uses substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for approximate CVP for any approximation factor $\gamma = 1 + 2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target vector $t$ can be grouped into "lower-dimensional clusters," and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that "behaves well with these clusters." Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP. The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.
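    To make the central primitive concrete, here is a toy one-dimensional discrete Gaussian sampler over the integers (a standard rejection-sampling sketch, not the $n$-dimensional lattice sampler of [ADRS15] or of this paper; the function name and parameters are illustrative):

        import math
        import random

        def sample_discrete_gaussian_Z(center=0.0, s=3.0, tail=12, rng=random):
            """Sample x in Z with probability proportional to exp(-pi*(x - center)^2 / s^2),
            i.e. the discrete Gaussian D_{Z, s, center}, by rejection sampling over a
            truncated support."""
            lo = int(math.floor(center - tail * s))
            hi = int(math.ceil(center + tail * s))
            while True:
                x = rng.randint(lo, hi)            # uniform proposal from the truncated range
                rho = math.exp(-math.pi * (x - center) ** 2 / s ** 2)
                if rng.random() < rho:             # accept with the Gaussian weight
                    return x

    Sampling each coordinate independently gives a discrete Gaussian over $\mathbb{Z}^n$; the difficulty the paper addresses is doing this over an arbitrary lattice shift $L - t$, with parameters small enough that the samples reveal exact closest vectors.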

    A new Lenstra-type Algorithm for Quasiconvex Polynomial Integer Minimization with Complexity 2^O(n log n)

    We study the integer minimization of a quasiconvex polynomial with quasiconvex polynomial constraints. We propose a new algorithm that improves upon the best known algorithm due to Heinz (Journal of Complexity, 2005). This improvement is achieved by applying a new modern Lenstra-type algorithm, finding optimal ellipsoid roundings, and considering sparse encodings of polynomials. For the bounded case, our algorithm attains a time complexity of $s (r l M d)^{O(1)} 2^{2n \log_2(n) + O(n)}$, where $M$ is a bound on the number of monomials in each polynomial and $r$ is the binary encoding length of a bound on the feasible region. In the general case, the complexity is $s l^{O(1)} d^{O(n)} 2^{2n \log_2(n) + O(n)}$. In both cases we assume that $d \geq 2$ is a bound on the total degree of the polynomials and $l$ bounds the maximum binary encoding size of the input. Comment: 28 pages, 10 figures.

    An evaluation of best compromise search in graphs

    This work evaluates two different approaches to multicriteria graph search problems using compromise preferences. Compromise search focuses on a single solution that represents a balanced tradeoff between objectives, rather than on the whole set of Pareto-optimal solutions. We review the main concepts underlying compromise preferences and the two main approaches proposed for their solution in heuristic graph problems: naive Pareto search (NAMOA*) and a k-shortest-path approach (kA*). The performance of both approaches is evaluated on sets of standard bicriterion road map problems. The experiments reveal that the k-shortest-path approach loses effectiveness in favor of naive Pareto search as graph size increases. The reasons for this behavior are analyzed and discussed. Partially funded by P07-TIC-03018, Cons. Innovación, Ciencia y Empresa (Junta Andalucía), and Univ. Málaga, Campus Excel. Int. Andalucía Tec
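    As a minimal illustration of what a "compromise" solution means here, one common scalarization picks the candidate minimizing the weighted Chebyshev distance to the ideal point. This is an illustrative sketch only, not the paper's evaluation criterion nor the NAMOA*/kA* search algorithms themselves; the function names are hypothetical.

        def ideal_point(costs):
            """Component-wise minimum over all candidate cost vectors."""
            return tuple(min(c[i] for c in costs) for i in range(len(costs[0])))

        def best_compromise(costs, weights=None):
            """Return the cost vector minimizing the weighted Chebyshev distance
            to the ideal point -- one formalization of a 'balanced tradeoff'."""
            z = ideal_point(costs)
            if weights is None:
                weights = [1.0] * len(z)
            def chebyshev(c):
                return max(w * (ci - zi) for w, ci, zi in zip(weights, c, z))
            return min(costs, key=chebyshev)

        # Example with three bicriterion path costs: (4, 5) is the balanced choice,
        # even though (1, 9) and (9, 1) are each better on one objective.
        print(best_compromise([(1, 9), (9, 1), (4, 5)]))  # -> (4, 5)

    Both approaches evaluated in the paper aim to return a single such balanced solution for bicriterion road map problems without requiring the user to inspect the full Pareto set.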

    On the Hardness of Partially Dynamic Graph Problems and Connections to Diameter

    Conditional lower bounds for dynamic graph problems have received a great deal of attention in recent years. While many results are now known for the fully dynamic case, and such bounds often imply worst-case bounds for the partially dynamic setting, it seems much more difficult to prove amortized bounds for incremental and decremental algorithms. In this paper we consider partially dynamic versions of three classic problems in graph theory. Based on popular conjectures we show that:
    -- No algorithm with amortized update time $O(n^{1-\varepsilon})$ exists for incremental or decremental maximum cardinality bipartite matching. This significantly improves on the $O(m^{1/2-\varepsilon})$ bound for sparse graphs of Henzinger et al. [STOC'15] and the $O(n^{1/3-\varepsilon})$ bound of Kopelowitz, Pettie and Porat. Our linear bound also appears more natural. In addition, the result we present separates the node-addition model from the edge-insertion model, as an algorithm with total update time $O(m\sqrt{n})$ exists for the former by Bosek et al. [FOCS'14].
    -- No algorithm with amortized update time $O(m^{1-\varepsilon})$ exists for incremental or decremental maximum flow in directed and weighted sparse graphs. No such lower bound was known for partially dynamic maximum flow previously. Furthermore, no algorithm with amortized update time $O(n^{1-\varepsilon})$ exists for directed and unweighted graphs or undirected and weighted graphs.
    -- No algorithm with amortized update time $O(n^{1/2-\varepsilon})$ exists for incrementally or decrementally $(4/3-\varepsilon')$-approximating the diameter of an unweighted graph. We also show a slightly stronger bound if node additions are allowed. [...]
    Comment: To appear at ICALP'16. Abstract truncated to fit arXiv limit.