Improved Hardness of BDD and SVP Under Gap-(S)ETH
We show improved fine-grained hardness of two key lattice problems in the
ℓ_p norm: Bounded Distance Decoding to within an α factor of the
minimum distance (BDD_{p,α}) and the (decisional)
γ-approximate Shortest Vector Problem (GapSVP_{p,γ}),
assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH).
Specifically, we show:
1. For all p ∈ [1, ∞), there is no 2^(o(n))-time algorithm for
BDD_{p,α} for any constant α > α_kn,
where α_kn = 2^(-c_kn) and
c_kn is the kissing-number constant, unless non-uniform Gap-ETH is false.
2. For all p ∈ [1, ∞), there is no 2^(o(n))-time algorithm for
BDD_{p,α} for any constant α > α^‡_p, where
α^‡_p is explicit and satisfies α^‡_p = 1 for p ≤ 2, α^‡_p < 1 for p > 2, and α^‡_p → 1/2 as p → ∞, unless randomized Gap-ETH is false.
3. For all p ∈ [1, ∞) \ 2ℤ and all C > 1, there
is no 2^(n/C)-time algorithm for BDD_{p,α} for any constant
α > α^†_{p,C}, where α^†_{p,C} is explicit and
satisfies α^†_{p,C} → 1 as C → ∞ for any fixed p, unless non-uniform Gap-SETH is false.
4. For all p > 2, p ∉ 2ℤ, and all C > 1, there is no 2^(n/C)-time algorithm for GapSVP_{p,γ} for
some constant γ = γ_{p,C} > 1, where γ_{p,C} is explicit and satisfies γ_{p,C} → 1 as C → ∞, unless randomized Gap-SETH is false.

Comment: ITCS 202
Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms
Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1, ∞) \ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available.
Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p₁ ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
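To make the BDD promise concrete, here is a toy brute-force decoder (not the paper's reduction): given a basis and a target promised to lie within α·λ₁ of the lattice in the ℓ_p norm, it recovers the closest lattice vector by enumeration. All function names and the example lattice are illustrative inventions; this is only feasible in tiny dimension.

```python
# Toy illustration of Bounded Distance Decoding (BDD_{p,alpha}): brute-force
# search over small integer combinations of the basis. Illustrative only.
from itertools import product

def lp_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def lattice_points(basis, bound):
    """Enumerate integer combinations of the basis with coefficients in [-bound, bound]."""
    dim = len(basis)
    for coeffs in product(range(-bound, bound + 1), repeat=dim):
        yield tuple(sum(c * b[i] for c, b in zip(coeffs, basis))
                    for i in range(len(basis[0])))

def shortest_vector_norm(basis, p, bound=3):
    """lambda_1 of the lattice (within the enumerated window)."""
    return min(lp_norm(v, p) for v in lattice_points(basis, bound) if any(v))

def bdd_decode(basis, target, p, alpha, bound=3):
    """Return the closest lattice point, checking the promise dist(t, L) <= alpha * lambda_1."""
    lam1 = shortest_vector_norm(basis, p, bound)
    best = min(lattice_points(basis, bound),
               key=lambda v: lp_norm([t - x for t, x in zip(target, v)], p))
    dist = lp_norm([t - x for t, x in zip(target, best)], p)
    assert dist <= alpha * lam1, "BDD promise violated"
    return best

basis = [(2, 0), (0, 3)]  # the lattice 2Z x 3Z, with lambda_1 = 2 in l_2
print(bdd_decode(basis, (1.9, 0.2), p=2, alpha=0.5))  # -> (2, 0)
```

For α ≤ 1/2 the promise places the target within the unique-decoding radius, so the answer is unambiguous; the hardness results above concern exactly this regime as p grows.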
Lattice Problems Beyond Polynomial Time
We study the complexity of lattice problems in a world where algorithms,
reductions, and protocols can run in superpolynomial time, revisiting four
foundational results: two worst-case to average-case reductions and two
protocols. We also show a novel protocol.
1. We prove that secret-key cryptography exists if
Õ(√n)-approximate SVP is hard for 2^(εn)-time
algorithms. I.e., we extend to our setting (Micciancio and Regev's improved
version of) Ajtai's celebrated polynomial-time worst-case to average-case
reduction from Õ(n)-approximate SVP to SIS.
2. We prove that public-key cryptography exists if
Õ(n)-approximate SVP is hard for 2^(εn)-time
algorithms. This extends to our setting Regev's celebrated polynomial-time
worst-case to average-case reduction from Õ(n^1.5)-approximate
SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical,
generalizing Peikert's polynomial-time classical reduction from
Õ(n^1.5)-approximate SVP.
3. We show a 2^(Õ(n/γ²))-time coAM protocol for γ-approximate
CVP, generalizing the celebrated polynomial-time protocol for O(√(n/log n))-CVP due to Goldreich and Goldwasser. These results show
complexity-theoretic barriers to extending the recent line of fine-grained
hardness results for CVP and SVP to larger approximation factors. (This result
also extends to arbitrary norms.)
4. We show a 2^(Õ(n/γ²))-time co-non-deterministic protocol for
γ-approximate SVP, generalizing the (also celebrated!)
polynomial-time protocol for √n-CVP due to Aharonov and Regev.
5. We give a novel coMA protocol for γ-approximate CVP with a
2^(Õ(n/γ²))-time verifier.
All of the results described above are special cases of more general theorems
that achieve time-approximation factor tradeoffs.
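The core of the Goldreich–Goldwasser-style protocol in item 3 is a ball-overlap experiment: the verifier perturbs either a lattice point or the target, and the prover can reliably tell which only when the target is far from the lattice. A minimal one-dimensional simulation of that idea (everything here — the lattice {0}, the uniform noise, the function name — is a simplifying assumption, not the paper's protocol):

```python
# Toy simulation of the ball-overlap idea behind the Goldreich-Goldwasser
# coAM protocol for approximate CVP, in one dimension. The verifier picks
# b in {0,1}, perturbs either the lattice point 0 (b=0) or the target (b=1)
# with uniform noise, and an optimal prover guesses b by proximity.
import random

def simulate(target_dist, radius, trials=10000, seed=1):
    """Fraction of rounds an optimal (proximity-based) prover answers correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        b = rng.randrange(2)
        center = 0.0 if b == 0 else target_dist
        x = center + rng.uniform(-radius, radius)
        guess = 0 if abs(x) <= abs(x - target_dist) else 1
        correct += (guess == b)
    return correct / trials

# Far target: the two noise balls are disjoint, so the prover is always right.
print(simulate(target_dist=3.0, radius=1.0))
# Close target: the balls overlap heavily, so the prover is barely above 50%.
print(simulate(target_dist=0.2, radius=1.0))
```

The soundness gap shrinks as the balls overlap, which is why larger approximation factors (more overlap tolerated) trade off against the verifier's running time in the general theorems.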
Parameterized Inapproximability of the Minimum Distance Problem over All Fields and the Shortest Vector Problem in All ℓ_p Norms
Funding Information: M. Cheraghchi’s research was partially supported by the National Science Foundation under Grants No. CCF-2006455 and CCF-2107345. V. Guruswami’s research was supported in part by NSF grants CCF-2228287 and CCF-2210823 and a Simons Investigator award. J. Ribeiro’s research was supported by NOVA LINCS (UIDB/04516/2020) with the financial support of FCT - Fundação para a Ciência e a Tecnologia and by the NSF grants CCF-1814603 and CCF-2107347 and the following grants of Vipul Goyal: the NSF award 1916939, DARPA SIEVE program, a gift from Ripple, a DoE NETL award, a JP Morgan Faculty Fellowship, a PNC center for financial services innovation award, and a Cylab seed funding award. Publisher Copyright: © 2023 ACM.

We prove that the Minimum Distance Problem (MDP) on linear codes over any fixed finite field, parameterized by the input distance bound, is W[1]-hard to approximate within any constant factor. We also prove analogous results for the parameterized Shortest Vector Problem (SVP) on integer lattices. Specifically, we prove that SVP in the ℓ_p norm is W[1]-hard to approximate within any constant factor for any fixed p > 1, and W[1]-hard to approximate within a factor approaching 2 for p = 1. (We show hardness under randomized reductions in each case.) These results answer the main questions left open (and explicitly posed) by Bhattacharyya, Bonnet, Egri, Ghoshal, Karthik C. S., Lin, Manurangsi, and Marx (Journal of the ACM, 2021) on the complexity of parameterized MDP and SVP. For MDP, they established similar hardness for binary linear codes and left the case of general fields open. For SVP in ℓ_p norms with p > 1, they showed inapproximability within some constant factor (depending on p) and left open showing such hardness for arbitrary constant factors. They also left open showing W[1]-hardness even of exact SVP in the ℓ_1 norm.
Why we couldn't prove SETH hardness of the Closest Vector Problem for even norms, and of the Subset Sum Problem!
Recent work [BGS17, ABGS19] has shown SETH hardness of some constant factor
approximate CVP in the ℓ_p norm for any p that is not an even integer.
This result was shown by giving a Karp reduction from k-SAT on n variables
to approximate CVP on a lattice of rank n. In this work, we show a barrier
towards proving a similar result for CVP in the ℓ_p norm where p is an
even integer. We show that for any constant γ ≥ 1, if for every ε > 0, there
exists an efficient reduction that maps a k-SAT instance on n variables to
a γ-CVP instance for a lattice of rank at most n^(k(1/2 - ε)) in the
Euclidean norm, then coNP ⊆ NP/poly. We prove a
similar result for γ-CVP for all even ℓ_p norms under a mild
additional promise that the ratio of the distance of the target from the
lattice and the length of the shortest non-zero vector in the lattice is bounded by
2^(poly(n)).
Furthermore, we show that for any constant γ ≥ 1, and any even integer p, if
for every ε > 0, there exists an efficient reduction that maps a k-SAT
instance on n variables to a γ-SVP_p instance for a lattice
of rank at most n^(k(1/p - ε)), then coNP ⊆ NP/poly. The
result for SVP does not require any additional promise.
While prior results have indicated that lattice problems in the ℓ_2 norm
(Euclidean norm) are easier than lattice problems in other ℓ_p norms, this is the
first result that shows a separation between these problems.
We achieve this by using a result by Dell and van Melkebeek [JACM, 2014] on
the impossibility of the existence of a reduction that compresses an arbitrary
k-SAT instance into a string of length O(n^(k-ε)) for any
ε > 0. In addition to CVP, we also show that the same result holds for
the Subset-Sum problem using similar techniques.

Comment: 32 pages, 3 figures
QSETH strikes again: finer quantum lower bounds for lattice problem, strong simulation, hitting set problem, and more
While seemingly undesirable, it is not surprising that there are
certain problems for which quantum computers offer no computational advantage
over their respective classical counterparts. Moreover, there are problems for
which no `useful' computational advantage is possible with current
quantum hardware. This situation, however, can be beneficial if we don't want
quantum computers to solve certain problems fast, say, problems relevant to
post-quantum cryptography. In such a situation, we would like to have evidence
that it is difficult to solve those problems on quantum computers; but what is
their exact complexity?
To do so, one has to prove lower bounds, but proving unconditional time lower
bounds has never been easy. As a result, resorting to conditional lower bounds
has been quite popular in the classical community and is gaining momentum in
the quantum community. In this paper, by the use of the QSETH framework
[Buhrman-Patro-Speelman 2021], we are able to understand the quantum complexity
of a few natural variants of CNFSAT, such as parity-CNFSAT or counting-CNFSAT,
and also to comment on the non-trivial complexity of
approximate-#CNFSAT; both of these have interesting implications for the
complexity of (variations of) lattice problems, strong simulation, the hitting
set problem, and more.
In the process, we explore the QSETH framework in greater detail than was
(required and) discussed in the original paper, thus also serving as a useful
guide on how to effectively use the QSETH framework.

Comment: 34 pages, 2 tables, 2 figures
Approximate CVP_p in Time 2^{0.802 n}
We show that a constant factor approximation of the shortest and closest lattice vector problem w.r.t. any ℓ_p-norm can be computed in time 2^((0.802 + ε)n). This matches the currently fastest constant factor approximation algorithm for the shortest vector problem w.r.t. ℓ_2. To obtain our result, we combine the latter algorithm w.r.t. ℓ_2 with geometric insights related to coverings.
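The covering step rests on standard comparisons between ℓ_p balls and Euclidean balls. As a numeric sanity check (not the paper's actual covering bound), the inequality ||x||_2 ≤ n^(1/2 - 1/p) · ||x||_p for p ≥ 2 can be verified on random vectors; the parameters n = 16, p = 4 below are arbitrary illustration choices:

```python
# Numeric check of the standard norm comparison ||x||_2 <= n^(1/2 - 1/p) * ||x||_p
# for p >= 2 (a consequence of Holder's inequality). This kind of inequality
# controls how an l_p ball relates to scaled Euclidean balls in covering arguments.
import random

def lp(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

random.seed(0)
n, p = 16, 4
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(n)]
    assert lp(x, 2) <= n ** (0.5 - 1.0 / p) * lp(x, p) + 1e-9
print("inequality verified on 1000 random vectors")
```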