Solving the Shortest Vector Problem in Lattices Faster Using Quantum Search
By applying Grover's quantum search algorithm to the lattice algorithms of
Micciancio and Voulgaris, Nguyen and Vidick, Wang et al., and Pujol and
Stehl\'{e}, we obtain improved asymptotic quantum results for solving the
shortest vector problem. With quantum computers we can provably find a shortest
vector in time 2^{1.799n+o(n)}, improving upon the classical time
complexities of 2^{2.465n+o(n)} of Pujol and Stehl\'{e} and 2^{2n+o(n)} of
Micciancio and Voulgaris, while heuristically we expect to find a shortest
vector in time 2^{0.312n+o(n)}, improving upon the classical time complexity
of 2^{0.384n+o(n)} of Wang et al. These quantum complexities will be an
important guide for the selection of parameters for post-quantum
cryptosystems based on the hardness of the shortest vector problem.
Comment: 19 pages
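To see where such quantum exponents can come from, here is a back-of-the-envelope sketch (our illustration, not the paper's analysis), assuming a heuristic sieve that keeps a list of N = 2^{0.2075n} vectors (the Nguyen-Vidick list size) and scans the list once per vector; Grover's search replaces each linear scan of cost ~N by ~sqrt(N).

```python
# Exponent arithmetic for Grover-accelerated sieving (illustrative assumptions:
# list size N = 2^{c*n} with c = 0.2075, one search over the list per vector).
c = 0.2075

classical_exponent = 2.0 * c   # ~N * N: each of the N vectors scans the whole list
quantum_exponent = 1.5 * c     # Grover cuts each scan to ~sqrt(N), giving ~N^1.5

print(f"classical time ~ 2^({classical_exponent:.3f} n)")  # ~2^(0.415 n)
print(f"quantum time   ~ 2^({quantum_exponent:.3f} n)")    # ~2^(0.311 n)
```

The resulting ~2^{0.311n} is in line with the heuristic quantum exponent quoted above.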
Faster tuple lattice sieving using spherical locality-sensitive filters
To overcome the large memory requirement of classical lattice sieving
algorithms for solving hard lattice problems, Bai-Laarhoven-Stehl\'{e} [ANTS
2016] studied tuple lattice sieving, where tuples instead of pairs of lattice
vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017]
recently improved upon their results for arbitrary tuple sizes, for example
showing that a triple sieve can solve the shortest vector problem (SVP) in
dimension d in time 2^{0.3717d+o(d)}, using a technique similar to
locality-sensitive hashing for finding nearest neighbors.
In this work, we generalize the spherical locality-sensitive filters of
Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near
neighbor searching on dense data sets, and we apply these techniques to tuple
lattice sieving to obtain even better time complexities. For instance, our
triple sieve heuristically solves SVP in time 2^{0.3588d+o(d)}. For
practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this
shows that a triple sieve uses less space and less time than the current best
near-linear space double sieve.
Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology
ePrint Archive 2017/228, available at https://ia.cr/2017/122
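As background for the filtering technique mentioned above, here is a toy sketch of spherical locality-sensitive filters (the parameters d, n_filters, and alpha are illustrative, not the paper's): each unit vector is placed in the buckets of all random filter centers with which its inner product exceeds a threshold, and near-neighbor candidates are the points sharing a bucket with the query.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_filters, alpha = 32, 256, 0.3          # illustrative parameters

# Random filter centers on the unit sphere.
centers = rng.standard_normal((n_filters, d))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)

def relevant_filters(v):
    """Indices of filter centers whose inner product with v exceeds alpha."""
    return np.flatnonzero(centers @ v > alpha)

# Index a data set of unit vectors into the filter buckets.
points = rng.standard_normal((1000, d))
points /= np.linalg.norm(points, axis=1, keepdims=True)
buckets = {}
for i, p in enumerate(points):
    for f in relevant_filters(p):
        buckets.setdefault(f, []).append(i)

# Near-neighbor candidates for a query are the points sharing at least one bucket.
query = points[0]
candidates = {i for f in relevant_filters(query) for i in buckets[f]} - {0}
```

Tuning the threshold alpha (and the number of filters) is what yields the space-time tradeoffs referred to in the abstract.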
On the Quantitative Hardness of CVP
For odd integers p \geq 1 (and p = \infty), we show that the Closest Vector Problem
in the \ell_p norm (\CVP_p) over rank n lattices cannot be solved in
2^{(1-\eps) n} time for any constant \eps > 0 unless the Strong Exponential
Time Hypothesis (SETH) fails. We then extend this result to "almost all" values
of p \geq 1, not including the even integers. This comes tantalizingly close
to settling the quantitative time complexity of the important special case of
\CVP_2 (i.e., \CVP in the Euclidean norm), for which a 2^{n+o(n)}-time
algorithm is known. In particular, our result applies for any p \neq 2
that approaches 2 as n \to \infty.
We also show a similar SETH-hardness result for \SVP_\infty; hardness of
approximating \CVP_p to within some constant factor under the so-called
Gap-ETH assumption; and other quantitative hardness results for \CVP_p and
\CVPP_p for any p \geq 1 under different assumptions.
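To make the problem in these hardness statements concrete: \CVP_p asks, given a basis B and a target t, for the lattice vector Bx (with integer x) minimizing ||Bx - t||_p. Below is a tiny brute-force illustration of the problem itself (ours; it reflects nothing of the paper's reductions, which concern the exact exponential complexity in the rank n).

```python
import itertools
import numpy as np

def cvp_bruteforce(B, t, p, box=5):
    """Return the lattice vector B @ x (x integer, |x_i| <= box) closest to t in the l_p norm."""
    best, best_dist = None, float("inf")
    for x in itertools.product(range(-box, box + 1), repeat=B.shape[1]):
        v = B @ np.array(x)
        dist = np.linalg.norm(v - t, ord=p)
        if dist < best_dist:
            best, best_dist = v, dist
    return best, best_dist

B = np.array([[2, 1], [0, 3]])      # basis of a rank-2 lattice
t = np.array([1.4, 2.2])            # target vector
print(cvp_bruteforce(B, t, p=1))    # closest lattice vector in the l_1 norm
```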
Solving the Closest Vector Problem in 2^n Time --- The Discrete Gaussian Strikes Again!
We give a 2^{n+o(n)}-time and space randomized algorithm for solving the
exact Closest Vector Problem (CVP) on n-dimensional Euclidean lattices. This
improves on the previous fastest algorithm, the deterministic
\widetilde{O}(4^n)-time and \widetilde{O}(2^n)-space algorithm of
Micciancio and Voulgaris.
We achieve our main result in three steps. First, we show how to modify the
sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian
sampling over lattice shifts, L - t, with very low parameters. While the
actual algorithm is a natural generalization of [ADRS15], the analysis uses
substantial new ideas. This yields a 2^{n+o(n)}-time algorithm for
approximate CVP for any approximation factor \gamma = 1 + 2^{-o(n/\log n)}.
Second, we show that the approximate closest vectors to a target vector t can
be grouped into "lower-dimensional clusters," and we use this to obtain a
recursive reduction from exact CVP to a variant of approximate CVP that
"behaves well with these clusters." Third, we show that our discrete Gaussian
sampling algorithm can be used to solve this variant of approximate CVP.
The analysis depends crucially on some new properties of the discrete
Gaussian distribution and approximate closest vectors, which might be of
independent interest.
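As background for the first step, the discrete Gaussian over a shifted lattice L - t with parameter s gives each point x a mass proportional to exp(-pi ||x||^2 / s^2). A one-dimensional toy sampler built directly from this standard definition (ours, not the sampling algorithm of the paper):

```python
import numpy as np

def discrete_gaussian_1d(shift, s, trunc=50):
    """Probabilities of the discrete Gaussian on the shifted lattice shift + Z (truncated)."""
    xs = shift + np.arange(-trunc, trunc + 1)
    weights = np.exp(-np.pi * xs**2 / s**2)
    return xs, weights / weights.sum()

xs, probs = discrete_gaussian_1d(shift=0.3, s=2.0)
sample = np.random.default_rng(0).choice(xs, p=probs)   # one sample from the coset
```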
A new Lenstra-type Algorithm for Quasiconvex Polynomial Integer Minimization with Complexity 2^O(n log n)
We study the integer minimization of a quasiconvex polynomial with
quasiconvex polynomial constraints. We propose a new algorithm that is an
improvement upon the best known algorithm due to Heinz (Journal of Complexity,
2005). This improvement is achieved by applying a new modern Lenstra-type
algorithm, finding optimal ellipsoid roundings, and considering sparse
encodings of polynomials. For the bounded case, our algorithm attains a
time complexity of s (r l M d)^{O(1)} 2^{2n log_2(n) + O(n)}, where M is a bound
on the number of monomials in each polynomial and r is the binary encoding
length of a bound on the feasible region. In the general case, the time
complexity is s l^{O(1)} d^{O(n)} 2^{2n log_2(n) + O(n)}. In each case we assume
d >= 2 is a bound on the total degree of the polynomials and l bounds the
maximum binary encoding size of the input.
Comment: 28 pages, 10 figures
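To fix ideas about the problem being solved (a toy instance only; the Lenstra-type algorithm itself, with its ellipsoid roundings and hyperplane branching, is not reproduced here): minimize a quasiconvex polynomial over the integer points satisfying quasiconvex polynomial constraints.

```python
import itertools

def objective(x, y):
    return (x - 1) ** 4 + (2 * y + 1) ** 2   # a quasiconvex polynomial objective

def feasible(x, y):
    return x ** 2 + y ** 2 <= 25             # a quasiconvex polynomial constraint

# Brute force over a small box stands in for the actual algorithm.
best = min(
    (pt for pt in itertools.product(range(-5, 6), repeat=2) if feasible(*pt)),
    key=lambda pt: objective(*pt),
)
print(best, objective(*best))
```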
An evaluation of best compromise search in graphs
This work evaluates two different approaches for multicriteria graph
search problems using compromise preferences. This approach focuses search on
a single solution that represents a balanced tradeoff between objectives, rather
than on the whole set of Pareto optimal solutions. We review the main concepts
underlying compromise preferences, and two main approaches proposed for their
solution in heuristic graph problems: naive Pareto search (NAMOA*), and a
k-shortest-path approach (kA*). The performance of both approaches is evaluated
on sets of standard bicriterion road map problems. The experiments reveal that
the k-shortest-path approach loses effectiveness in favor of naive Pareto search
as graph size increases. The reasons for this behavior are analyzed and discussed.
Partially funded by P07-TIC-03018, Cons. Innovación, Ciencia y Empresa (Junta
Andalucía), and Univ. Málaga, Campus Excel. Int. Andalucía Tech.
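As a concrete illustration of what a single "balanced tradeoff" solution can mean (our own scalarization for illustration, not necessarily the exact compromise model evaluated in the paper): among candidate bicriterion solutions, pick the one minimizing the weighted Chebyshev distance to the ideal point.

```python
# Hypothetical Pareto-optimal (cost1, cost2) pairs for a bicriterion path problem.
solutions = [(10, 42), (14, 30), (25, 18), (40, 12)]
ideal = (min(c1 for c1, _ in solutions), min(c2 for _, c2 in solutions))
weights = (0.5, 0.5)

def chebyshev_deviation(sol):
    """Weighted max deviation of a solution from the ideal point."""
    return max(w * (c - i) for w, c, i in zip(weights, sol, ideal))

compromise = min(solutions, key=chebyshev_deviation)
print(compromise)
```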
On the Hardness of Partially Dynamic Graph Problems and Connections to Diameter
Conditional lower bounds for dynamic graph problems have received a great deal
of attention in recent years. While many results are now known for the
fully-dynamic case and such bounds often imply worst-case bounds for the
partially dynamic setting, it seems much more difficult to prove amortized
bounds for incremental and decremental algorithms. In this paper we consider
partially dynamic versions of three classic problems in graph theory. Based on
popular conjectures we show that:
-- No algorithm with amortized update time O(n^{1-\eps}) exists for
incremental or decremental maximum cardinality bipartite matching. This
significantly improves on the O(m^{1/2-\eps}) bound for sparse graphs
of Henzinger et al. [STOC'15] and the O(n^{1/3-\eps}) bound of Kopelowitz,
Pettie and Porat. Our linear bound also appears more natural. In addition, the
result we present separates the node-addition model from the edge insertion
model, as an algorithm with total update time O(m \sqrt{n}) exists for the
former by Bosek et al. [FOCS'14].
-- No algorithm with amortized update time O(m^{1-\eps}) exists for
incremental or decremental maximum flow in directed and weighted sparse graphs.
No such lower bound was known for partially dynamic maximum flow previously.
Furthermore no algorithm with amortized update time O(n^{1-\eps})
exists for directed and unweighted graphs or undirected and weighted graphs.
-- No algorithm with amortized update time O(n^{1/2-\eps}) exists
for incremental or decremental (4/3-\eps')-approximating the diameter
of an unweighted graph. We also show a slightly stronger bound if node
additions are allowed. [...]
Comment: To appear at ICALP'16. Abstract truncated to fit arXiv limit
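For readers less familiar with the setting these lower bounds live in, here is a small, unrelated positive example (ours) of a partially dynamic algorithm: an incremental algorithm sees only insertions, and its cost is measured as amortized time per update, as with union-find for incremental connectivity.

```python
class UnionFind:
    """Incremental connectivity with near-constant amortized update time."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):                                  # one update (edge insertion)
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

uf = UnionFind(5)
for u, v in [(0, 1), (3, 4), (1, 3)]:    # a stream of edge insertions
    uf.union(u, v)
print(uf.find(0) == uf.find(4))           # True: 0 and 4 are now connected
```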