
    Improved Hardness of BDD and SVP Under Gap-(S)ETH

    We show improved fine-grained hardness of two key lattice problems in the $\ell_p$ norm: Bounded Distance Decoding to within an $\alpha$ factor of the minimum distance ($\mathrm{BDD}_{p,\alpha}$) and the (decisional) $\gamma$-approximate Shortest Vector Problem ($\mathrm{SVP}_{p,\gamma}$), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:

    1. For all $p \in [1, \infty)$, there is no $2^{o(n)}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha_\mathsf{kn}$, where $\alpha_\mathsf{kn} = 2^{-c_\mathsf{kn}} < 0.98491$ and $c_\mathsf{kn}$ is the $\ell_2$ kissing-number constant, unless non-uniform Gap-ETH is false.

    2. For all $p \in [1, \infty)$, there is no $2^{o(n)}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha^\ddagger_p$, where $\alpha^\ddagger_p$ is explicit and satisfies $\alpha^\ddagger_p = 1$ for $1 \leq p \leq 2$, $\alpha^\ddagger_p < 1$ for $p > 2$, and $\alpha^\ddagger_p \to 1/2$ as $p \to \infty$, unless randomized Gap-ETH is false.

    3. For all $p \in [1, \infty) \setminus 2\mathbb{Z}$ and all $C > 1$, there is no $2^{n/C}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha^\dagger_{p, C}$, where $\alpha^\dagger_{p, C}$ is explicit and satisfies $\alpha^\dagger_{p, C} \to 1$ as $C \to \infty$ for any fixed $p \in [1, \infty)$, unless non-uniform Gap-SETH is false.

    4. For all $p > p_0 \approx 2.1397$ with $p \notin 2\mathbb{Z}$, and all $C > C_p$, there is no $2^{n/C}$-time algorithm for $\mathrm{SVP}_{p, \gamma}$ for some constant $\gamma > 1$, where $C_p > 1$ is explicit and satisfies $C_p \to 1$ as $p \to \infty$, unless randomized Gap-SETH is false.

    (ITCS 2022)
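    For reference, here is a minimal LaTeX sketch of the two problems as they are standardly formulated (textbook-style definitions, not text from the paper; $\mathcal{L}(B)$ denotes the lattice generated by basis $B$ and $\lambda_1^{(p)}$ its minimum distance in the $\ell_p$ norm):

```latex
% Standard promise-problem formulations (sketch, not the paper's exact phrasing).
\[
  \mathrm{BDD}_{p,\alpha}:\quad \text{given } B \text{ and a target } t \text{ with }
  \mathrm{dist}_p\bigl(t, \mathcal{L}(B)\bigr) \le \alpha \cdot \lambda_1^{(p)}\bigl(\mathcal{L}(B)\bigr),
  \text{ find } v \in \mathcal{L}(B) \text{ closest to } t \text{ in the } \ell_p \text{ norm.}
\]
\[
  \mathrm{SVP}_{p,\gamma}:\quad \text{given } B \text{ and } d > 0, \text{ decide whether }
  \lambda_1^{(p)}\bigl(\mathcal{L}(B)\bigr) \le d \text{ or }
  \lambda_1^{(p)}\bigl(\mathcal{L}(B)\bigr) > \gamma d.
\]
```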

    Approximate CVP_p in Time 2^{0.802 n}

    We show that a constant factor approximation of the shortest and closest lattice vector problem w.r.t. any ℓ_p-norm can be computed in time 2^{(0.802+ε)n}. This matches the currently fastest constant factor approximation algorithm for the shortest vector problem w.r.t. ℓ_2. To obtain our result, we combine the latter algorithm w.r.t. ℓ_2 with geometric insights related to coverings.

    Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH

    The k-Even Set problem is a parameterized variant of the Minimum Distance Problem of linear codes over F_2, which can be stated as follows: given a generator matrix A and an integer k, determine whether the code generated by A has distance at most k. Here, k is the parameter of the problem. The question of whether k-Even Set is fixed parameter tractable (FPT) has been repeatedly raised in the literature and has earned its place in Downey and Fellows' book (2013) as one of the "most infamous" open problems in the field of Parameterized Complexity. In this work, we show that k-Even Set does not admit FPT algorithms under the (randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Dinur'16, Manurangsi-Raghavendra'16]. In fact, our result rules out not only exact FPT algorithms, but also any constant factor FPT approximation algorithms for the problem. Furthermore, our result holds even under the following weaker assumption, which is also known as the Parameterized Inapproximability Hypothesis (PIH) [Lokshtanov et al.'17]: no (randomized) FPT algorithm can distinguish a satisfiable 2CSP instance from one which is only 0.99-satisfiable (where the parameter is the number of variables). We also consider the parameterized k-Shortest Vector Problem (SVP), in which we are given a lattice whose basis vectors are integral and an integer k, and the goal is to determine whether the norm of the shortest vector (in the ℓ_p norm for some fixed p) is at most k. Similar to k-Even Set, this problem is also a long-standing open problem in the field of Parameterized Complexity. We show that, for any p > 1, k-SVP is hard to approximate (in FPT time) to some constant factor, assuming PIH. Furthermore, for the case of p = 2, the inapproximability factor can be amplified to any constant.
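    To make the k-Even Set problem concrete, here is a small brute-force sketch (helper names are ours, not from the paper; runtime is exponential in the code dimension, so it is for illustration on tiny instances only) that checks whether the F_2 code generated by A has distance at most k:

```python
import itertools
import numpy as np

def even_set_brute_force(A: np.ndarray, k: int) -> bool:
    """Return True iff the F_2 code generated by the rows of A contains
    a nonzero codeword of Hamming weight <= k (i.e., has distance <= k).
    Brute force over all nonzero messages; illustration only."""
    m, _ = A.shape
    for msg in itertools.product([0, 1], repeat=m):
        if not any(msg):
            continue  # skip the zero message
        codeword = np.mod(np.array(msg) @ A, 2)
        if 0 < codeword.sum() <= k:
            return True
    return False

# Tiny example: the [3,1] repetition code has distance 3.
A = np.array([[1, 1, 1]])
print(even_set_brute_force(A, 2))  # False: the minimum distance is 3
print(even_set_brute_force(A, 3))  # True
```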

    Imperfect Gaps in Gap-ETH and PCPs

    We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a way to transform a PCP with imperfect completeness into one with perfect completeness, when the initial gap is a constant. We show that PCP_{c,s}[r,q] ⊆ PCP_{1,s'}[r+O(1),q+O(r)] for c-s = Omega(1), which in turn implies that one can convert imperfect completeness to perfect in linear-sized PCPs for NP with an O(log n) additive loss in the query complexity q. We show our result by constructing a "robust circuit" using threshold gates. These results give a gap amplification procedure for PCPs (when completeness is not 1), analogous to questions studied in parallel repetition [Anup Rao, 2011] and pseudorandomness [David Gillman, 1998], and might be of independent interest. We also investigate the time complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness, i.e., MAX 3SAT(1-epsilon,1-delta), delta > epsilon, has 2^{o(n)} algorithms if and only if MAX 3SAT(1,1-delta) has 2^{o(n)} algorithms. We also relate the time complexities of these two problems in a more fine-grained way to show that T_2(n) <= T_1(n(log log n)^{O(1)}), where T_1(n), T_2(n) denote the randomized time complexity of approximating MAX 3SAT with perfect and imperfect completeness respectively.
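    As a toy illustration of the quantity these gap problems are about (brute force over all assignments, so usable only on very small instances; helper names are ours), the following sketch computes the maximum fraction of simultaneously satisfiable clauses of a 3SAT instance, which is what separates MAX 3SAT(1, 1-delta) from MAX 3SAT(1-epsilon, 1-delta):

```python
import itertools

def max_sat_fraction(clauses, num_vars):
    """clauses: list of 3-tuples of nonzero ints; literal v means x_v, -v means NOT x_v.
    Returns the maximum fraction of clauses satisfied by any assignment (brute force)."""
    best = 0
    for bits in itertools.product([False, True], repeat=num_vars):
        satisfied = sum(
            any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best / len(clauses)

# A perfectly satisfiable instance: the value is 1.0. Gap-ETH (with or without
# perfect completeness) concerns distinguishing value ~1 from value 1 - delta.
clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, 3), (1, 2, -3)]
print(max_sat_fraction(clauses, 3))  # 1.0
```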

    Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes

    We give a simple proof that the (approximate, decisional) Shortest Vector Problem is $\mathsf{NP}$-hard under a randomized reduction. Specifically, we show that for any $p \geq 1$ and any constant $\gamma < 2^{1/p}$, the $\gamma$-approximate problem in the $\ell_p$ norm ($\gamma$-$\mathrm{GapSVP}_p$) is not in $\mathsf{RP}$ unless $\mathsf{NP} \subseteq \mathsf{RP}$. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of $\gamma$-$\mathrm{GapSVP}_p$ using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known $\mathsf{NP}$-hardness results for $\mathrm{GapSVP}_p$ with $p < \infty$, our reduction uses randomness. Indeed, it is a notorious open problem to prove $\mathsf{NP}$-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding $n$-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an $O(\sqrt{\log n})$ factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
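    To illustrate what a "Construction A" lattice is (the standard definition $\mathcal{L} = \{x \in \mathbb{Z}^n : x \bmod q \in C\}$, not the paper's specific parameter choices; helper names are ours), the following sketch builds a tiny Reed-Solomon code over F_q by brute force and tests membership of integer vectors in the associated lattice:

```python
import itertools

def rs_codewords(q, evaluation_points, k):
    """All codewords of the Reed-Solomon code over F_q: evaluations of
    polynomials of degree < k at the given distinct points (brute force)."""
    words = set()
    for coeffs in itertools.product(range(q), repeat=k):
        word = tuple(sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q
                     for a in evaluation_points)
        words.add(word)
    return words

def in_construction_a_lattice(x, codewords, q):
    """x lies in the Construction A lattice iff its coordinatewise
    reduction mod q is a codeword."""
    return tuple(v % q for v in x) in codewords

q, k = 5, 2
points = [0, 1, 2, 3]                                   # n = 4 evaluation points
C = rs_codewords(q, points, k)
print(in_construction_a_lattice([5, 0, 0, 0], C, q))    # True: reduces to the zero codeword
print(in_construction_a_lattice([1, 2, 3, 4], C, q))    # True: evaluations of 1 + x
print(in_construction_a_lattice([1, 0, 0, 0], C, q))    # False for this code
```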

    Improved Classical and Quantum Algorithms for the Shortest Vector Problem via Bounded Distance Decoding

    The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP. We present the following results.

    • A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer $4 \leq q \leq \sqrt{n}$, our algorithm takes $q^{13n+o(n)}$ time and requires $\mathrm{poly}(n) \cdot q^{16n/q^2}$ memory. This tradeoff, which ranges from enumeration ($q = \sqrt{n}$) to sieving ($q$ constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter.

    • A quantum algorithm for SVP that runs in time $2^{0.953n+o(n)}$ and requires $2^{0.5n+o(n)}$ classical memory and $\mathrm{poly}(n)$ qubits. In the Quantum Random Access Memory (QRAM) model this algorithm takes only $2^{0.873n+o(n)}$ time and requires a QRAM of size $2^{0.1604n+o(n)}$, $\mathrm{poly}(n)$ qubits, and $2^{0.5n}$ classical space. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [ADRS15], which has time and space complexity $2^{n+o(n)}$.

    • A classical algorithm for SVP that runs in time $2^{1.741n+o(n)}$ and uses $2^{0.5n+o(n)}$ space. This improves over an algorithm of [CCL18] that has the same space complexity.

    The time complexities of our classical and quantum algorithms are obtained using a known upper bound, $2^{0.402n}$, on a quantity related to the lattice kissing number. We conjecture that for most lattices this quantity is $2^{o(n)}$. Assuming that this is the case, our classical algorithm runs in time $2^{1.292n+o(n)}$, our quantum algorithm runs in time $2^{0.750n+o(n)}$, and our quantum algorithm in the QRAM model runs in time $2^{0.667n+o(n)}$.
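    As a quick numerical illustration of the stated time-memory tradeoff (plain arithmetic on the exponents quoted in the abstract, ignoring the $o(n)$ and $\mathrm{poly}(n)$ factors), this sketch prints the per-dimension exponents of the $q^{13n}$ time and $q^{16n/q^2}$ memory bounds for a few values of $q$:

```python
import math

def tradeoff_exponents(q: int):
    """Per-dimension log2 exponents of the q^{13n} time and q^{16n/q^2} memory bounds."""
    time_exp = 13 * math.log2(q)           # time ~ 2^{time_exp * n}
    mem_exp = (16 / q ** 2) * math.log2(q)  # memory ~ 2^{mem_exp * n}
    return time_exp, mem_exp

for q in [4, 8, 16, 64]:
    t, m = tradeoff_exponents(q)
    print(f"q={q:3d}: time ~ 2^{{{t:.2f} n}}, memory ~ 2^{{{m:.3f} n}}")
# Constant q behaves like sieving (2^{O(n)} time and memory), while q ~ sqrt(n)
# drives the memory exponent toward 0 (polynomial memory), like enumeration.
```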

    Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms

    Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) \ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_1 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
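    To make the BDD_{p,α} promise concrete, here is a toy brute-force sketch (helper names are ours and the enumeration is exponential in the dimension; this is not the paper's reduction). It computes the ℓ_p minimum distance of a small integer lattice by enumeration and then decodes a target that satisfies the α-bounded-distance promise:

```python
import itertools
import numpy as np

def lp_norm(v, p):
    return float(np.sum(np.abs(v) ** p) ** (1 / p))

def enumerate_lattice(B, radius):
    """All lattice vectors c @ B with integer coefficients in a small box
    (large enough here because B is near-orthogonal; toy setting only)."""
    n = B.shape[0]
    for coeffs in itertools.product(range(-radius, radius + 1), repeat=n):
        yield np.array(coeffs) @ B

def bdd_brute_force(B, t, p, coeff_radius=3):
    """Return the lattice vector closest to t in the l_p norm (brute force)."""
    return min(enumerate_lattice(B, coeff_radius), key=lambda v: lp_norm(t - v, p))

B = np.array([[2, 0], [1, 3]])  # basis rows generate the lattice
p = 3
lam1 = min(lp_norm(v, p) for v in enumerate_lattice(B, 3) if np.any(v))
alpha = 0.4
t = np.array([2.0, 0.0]) + 0.3 * np.array([1.0, 1.0])   # close to the lattice point (2, 0)
assert lp_norm(t - np.array([2, 0]), p) <= alpha * lam1  # the BDD_{p,alpha} promise holds
print(bdd_brute_force(B, t, p))                          # recovers [2 0]
```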