Improved Hardness of BDD and SVP Under Gap-(S)ETH
We show improved fine-grained hardness of two key lattice problems in the ℓ_p norm: Bounded Distance Decoding to within an α factor of the minimum distance (BDD_{p,α}) and the (decisional) γ-approximate Shortest Vector Problem (GapSVP_{p,γ}), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:
1. For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p,α} for any constant α > α_kn, where α_kn = 2^{-c_kn} and c_kn is the kissing-number constant, unless non-uniform Gap-ETH is false.
2. For all p ∈ [1, ∞), there is no 2^{o(n)}-time algorithm for BDD_{p,α} for any constant α > α_p, where α_p is explicit and satisfies α_p = 1 for p ∈ [1, 2], α_p < 1 for p > 2, and α_p → 1/2 as p → ∞, unless randomized Gap-ETH is false.
3. For all p ∈ [1, ∞) ∖ 2ℤ and all C > 1, there is no 2^{n/C}-time algorithm for BDD_{p,α} for any constant α > α_{p,C}, where α_{p,C} is explicit and satisfies α_{p,C} → 1 as C → ∞ for any fixed p, unless non-uniform Gap-SETH is false.
4. For all p > 2, p ∉ 2ℤ, and all C > C_p, there is no 2^{n/C}-time algorithm for GapSVP_{p,γ} for some constant γ > 1, where C_p is explicit and satisfies C_p → 1 as p → ∞, unless randomized Gap-SETH is false.
Comment: ITCS 202
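For reference, the two problems in the statements above can be written out formally; the notation below is a standard textbook formulation (editorial, not quoted from the paper), with λ_1^{(p)} denoting the minimum distance of the lattice in the ℓ_p norm.

```latex
% Standard formulations (editorial notation, not quoted from the paper).
% \lambda_1^{(p)}(\mathcal{L}) is the minimum distance of \mathcal{L} in \ell_p.
\[
\mathrm{BDD}_{p,\alpha}:\ \text{given a basis } B \text{ and a target } t
\text{ with } \operatorname{dist}_p\bigl(t, \mathcal{L}(B)\bigr)
  \le \alpha \cdot \lambda_1^{(p)}(\mathcal{L}(B)),
\text{ find a closest lattice vector to } t.
\]
\[
\mathrm{GapSVP}_{p,\gamma}:\ \text{given } (B, r), \text{ decide whether }
\lambda_1^{(p)}(\mathcal{L}(B)) \le r
\text{ or } \lambda_1^{(p)}(\mathcal{L}(B)) > \gamma \cdot r.
\]
```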
Approximate CVP_p in Time 2^{0.802 n}
We show that a constant factor approximation of the shortest and closest lattice vector problem w.r.t. any ℓ_p-norm can be computed in time 2^{(0.802+ε)n}. This matches the currently fastest constant factor approximation algorithm for the shortest vector problem w.r.t. ℓ_2. To obtain our result, we combine the latter algorithm w.r.t. ℓ_2 with geometric insights related to coverings.
Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH
The k-Even Set problem is a parameterized variant of the Minimum Distance Problem of linear codes over F_2, which can be stated as follows: given a generator matrix A and an integer k, determine whether the code generated by A has distance at most k. Here, k is the parameter of the problem. The question of whether k-Even Set is fixed parameter tractable (FPT) has been repeatedly raised in the literature and has earned its place in Downey and Fellows' book (2013) as one of the "most infamous" open problems in the field of Parameterized Complexity.
In this work, we show that k-Even Set does not admit FPT algorithms under the (randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Dinur'16, Manurangsi-Raghavendra'16]. In fact, our result rules out not only exact FPT algorithms, but also any constant factor FPT approximation algorithms for the problem. Furthermore, our result holds even under the following weaker assumption, which is also known as the Parameterized Inapproximability Hypothesis (PIH) [Lokshtanov et al.'17]: no (randomized) FPT algorithm can distinguish a satisfiable 2CSP instance from one which is only 0.99-satisfiable (where the parameter is the number of variables).
We also consider the parameterized k-Shortest Vector Problem (SVP), in which we are given a lattice whose basis vectors are integral and an integer k, and the goal is to determine whether the norm of the shortest vector (in the l_p norm for some fixed p) is at most k. Similar to k-Even Set, this problem is also a long-standing open problem in the field of Parameterized Complexity. We show that, for any p > 1, k-SVP is hard to approximate (in FPT time) to some constant factor, assuming PIH. Furthermore, for the case of p = 2, the inapproximability factor can be amplified to any constant.
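As a concrete illustration of the k-Even Set decision problem described above, here is a minimal brute-force sketch (exponential in the code dimension, so only for toy sizes; the generator matrix at the end is a made-up example, not from the paper):

```python
from itertools import product

def min_distance_f2(A):
    """Minimum Hamming weight of a nonzero codeword of the binary code
    generated by the rows of A (brute force over all 2^k combinations)."""
    k, n = len(A), len(A[0])
    best = None
    for coeffs in product([0, 1], repeat=k):
        if not any(coeffs):
            continue  # skip the all-zero combination
        word = [sum(c * row[j] for c, row in zip(coeffs, A)) % 2 for j in range(n)]
        w = sum(word)
        if w and (best is None or w < best):
            best = w
    return best

def k_even_set(A, k):
    """k-Even Set decision: does the code generated by A have distance <= k?"""
    d = min_distance_f2(A)
    return d is not None and d <= k

# Made-up toy generator matrix of a [4, 2] binary code.
A = [[1, 1, 0, 0],
     [0, 1, 1, 1]]
```

Here the nonzero codewords are 1100, 0111, and their sum 1011, so the minimum distance of this toy code is 2.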
Imperfect Gaps in Gap-ETH and PCPs
We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a way to transform a PCP with imperfect completeness to one with perfect completeness, when the initial gap is a constant. We show that PCP_{c,s}[r, q] ⊆ PCP_{1,s'}[r + O(1), q + O(r)] for c − s = Ω(1), which in turn implies that one can convert imperfect completeness to perfect in linear-sized PCPs for NP with an O(log n) additive loss in the query complexity q. We show our result by constructing a "robust circuit" using threshold gates. These results are a gap amplification procedure for PCPs (when completeness is not 1), analogous to questions studied in parallel repetition [Anup Rao, 2011] and pseudorandomness [David Gillman, 1998], and might be of independent interest.
We also investigate the time-complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness, i.e., MAX 3SAT(1−ε, 1−δ), δ > ε, has 2^{o(n)}-time algorithms if and only if MAX 3SAT(1, 1−δ) has 2^{o(n)}-time algorithms. We also relate the time complexities of these two problems in a more fine-grained way to show that T_2(n) ≤ T_1(n (log log n)^{O(1)}), where T_1(n), T_2(n) denote the randomized time-complexity of approximating MAX 3SAT with perfect and imperfect completeness, respectively.
Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes
We give a simple proof that the (approximate, decisional) Shortest Vector Problem is NP-hard under a randomized reduction. Specifically, we show that for any p ≥ 1 and any constant γ < 2^{1/p}, the γ-approximate problem in the ℓ_p norm (γ-GapSVP_p) is not in RP unless NP ⊆ RP. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of γ-GapSVP_p using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices.
As in all known NP-hardness results for GapSVP_p with finite p, our reduction uses randomness. Indeed, it is a notorious open problem to prove NP-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006).
As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding n-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an O(√(log n)) factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
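To make "Construction A" concrete: applying it to a code C ⊆ F_q^n yields the lattice L = C + qZ^n, i.e., all integer vectors that reduce mod q to a codeword. Below is a minimal sketch with made-up toy parameters (not those of the paper), using a Reed-Solomon code of length q over F_q:

```python
from itertools import product

q = 7   # prime modulus (made-up toy parameter)
k = 2   # Reed-Solomon dimension: polynomials of degree < k
points = list(range(q))  # evaluation points, so the code length is n = q

def rs_codeword(coeffs):
    """Evaluate the polynomial with the given coefficients at every point of F_q."""
    return tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
                 for x in points)

# All q^k codewords of the Reed-Solomon code C.
code = {rs_codeword(c) for c in product(range(q), repeat=k)}

def in_lattice(v):
    """Membership in the Construction A lattice L = C + q * Z^n."""
    return tuple(x % q for x in v) in code

# A lattice point: a codeword shifted by q times an integer vector.
w = list(rs_codeword((3, 1)))
w[0] += q
w[2] -= 2 * q
```

Distinct polynomials of degree < k give distinct codewords (since k < n), so the sketch enumerates exactly q^k cosets of qZ^n inside L.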
Improved Classical and Quantum Algorithms for the Shortest Vector Problem via Bounded Distance Decoding
The most important computational problem on lattices is the Shortest Vector
Problem (SVP). In this paper, we present new algorithms that improve the
state-of-the-art for provable classical/quantum algorithms for SVP. We present
the following results.
A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer q > 1, our algorithm takes q^{13n+o(n)} time and requires poly(n)·q^{16n/q^2} memory. This tradeoff, which ranges from enumeration (q = √n) to sieving (q constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter.
A quantum algorithm for SVP that runs in time 2^{0.950n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. In the Quantum Random Access Memory (QRAM) model this algorithm takes only 2^{0.835n+o(n)} time and requires a QRAM of size 2^{0.293n+o(n)}, poly(n) qubits and 2^{0.5n+o(n)} classical space. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [ADRS15] that has a time and space complexity of 2^{n+o(n)}.
A classical algorithm for SVP that runs in time 2^{1.669n+o(n)} and requires 2^{0.5n+o(n)} space. This improves over an algorithm of [CCL18] that has the same space complexity.
The time complexity of our classical and quantum algorithms is obtained using a known upper bound on a quantity related to the lattice kissing number, which is 2^{0.402n}. We conjecture that for most lattices this quantity is 2^{o(n)}. Assuming that this is the case, our classical algorithm runs in time 2^{1.292n+o(n)}, our quantum algorithm runs in time 2^{0.750n+o(n)}, and our quantum algorithm in the QRAM model runs in time 2^{0.667n+o(n)}.
Comment: Faster Quantum Algorithm for SVP in QRAM, 43 pages, 4 figures
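For intuition about the distribution driving the tradeoff above, here is a minimal one-dimensional sampler for the discrete Gaussian D_{Z,s} (Pr[x] ∝ exp(−πx²/s²)). It is an illustration only, not the paper's sampler, and the truncation bound is a crude made-up choice:

```python
import math
import random

def sample_dgauss_z(s, tail=12):
    """Sample from the discrete Gaussian D_{Z,s} by explicit inversion over
    a truncated support [-ceil(tail*s), ceil(tail*s)]; the mass outside this
    range is negligible for this tail. Illustration only, not efficient."""
    bound = int(math.ceil(tail * s))
    support = range(-bound, bound + 1)
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in support]
    u = random.random() * sum(weights)
    acc = 0.0
    for x, wt in zip(support, weights):
        acc += wt
        if u <= acc:
            return x
    return bound  # unreachable in practice; guards against float round-off
```

The standard deviation of D_{Z,s} is roughly s/√(2π), so samples concentrate within a few multiples of s around zero.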
Hardness of Bounded Distance Decoding on Lattices in ?_p Norms
Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1, ∞) ∖ 2ℤ and constants C > 1, ε > 0, there is no 2^{(1−ε)n/C}-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available.
Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_1 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
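A toy worked example of the BDD_{p,α} promise may help fix ideas. The basis, target, and coefficient range below are made-up illustrative choices (the promise holds comfortably here, since the target lies at distance ≈ 0.14 from the lattice while λ₁ = 2), and the exhaustive search is exponential time:

```python
from itertools import product

def lp_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def bdd_brute_force(B, t, p, R=5):
    """Toy Bounded Distance Decoding: try every integer coefficient vector in
    [-R, R]^k and return the lattice vector closest to t in the l_p norm,
    together with its distance. Exponential time; illustration only."""
    n = len(B[0])
    best_v, best_d = None, float("inf")
    for coeffs in product(range(-R, R + 1), repeat=len(B)):
        v = [sum(c * b[j] for c, b in zip(coeffs, B)) for j in range(n)]
        d = lp_norm([ti - vi for ti, vi in zip(t, v)], p)
        if d < best_d:
            best_v, best_d = v, d
    return best_v, best_d

# Made-up instance: basis of the lattice 2*Z^2, so lambda_1 = 2 in any l_p norm.
B = [[2, 0], [0, 2]]
t = [1.9, 0.1]  # promised close to the lattice point (2, 0)
```

Because the target is well inside the unique-decoding radius (α < 1/2), the closest lattice vector is unique, which is exactly the regime the hardness results above concern.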