
    On the Lattice Distortion Problem

    We introduce and study the \emph{Lattice Distortion Problem} (LDP). LDP asks how "similar" two lattices are; that is, what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to an $n^{O(\log n)}$ factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a $2^{O(n \log \log n/\log n)}$ factor of optimal in polynomial time and within an $n^{O(\log n)}$ factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.
    Comment: This is the full version of a paper that appeared in ESA 2016.
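    To make the objective concrete, the sketch below computes the Euclidean distortion of one candidate linear bijection between two lattices, using the common convention that the distortion of $T$ is $\|T\| \cdot \|T^{-1}\|$ (the paper's exact normalization and choice of norms may differ, so treat this as an assumption); LDP asks to minimize this quantity over all unimodular re-bases $U$.

        import numpy as np

        def distortion(B1, B2, U):
            """Euclidean distortion of the linear bijection T with T @ B1 = B2 @ U.

            B1, B2 are basis matrices (columns are basis vectors) and U is a
            unimodular integer matrix, so T maps the lattice of B1 onto the
            lattice of B2. Convention assumed (not necessarily the paper's):
            distortion(T) = ||T|| * ||T^{-1}||, i.e. the ratio of the largest
            to the smallest singular value of T.
            """
            T = (B2 @ U) @ np.linalg.inv(B1)
            s = np.linalg.svd(T, compute_uv=False)   # singular values, descending
            return s[0] / s[-1]

        # Toy example: Z^2 versus a sheared copy of Z^2, under one (hypothetical) choice of U.
        B1 = np.eye(2)
        B2 = np.array([[1.0, 0.5], [0.0, 1.0]])
        U = np.eye(2, dtype=int)
        print(distortion(B1, B2, U))   # distortion of this particular mapping, not the LDP optimum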

    On the Quantitative Hardness of CVP

    For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem in the $\ell_p$ norm ($\mathrm{CVP}_p$) over rank-$n$ lattices cannot be solved in $2^{(1-\varepsilon)n}$ time for any constant $\varepsilon > 0$ unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of $p \geq 1$, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of $\mathrm{CVP}_2$ (i.e., CVP in the Euclidean norm), for which a $2^{n+o(n)}$-time algorithm is known. In particular, our result applies for any $p = p(n) \neq 2$ that approaches $2$ as $n \to \infty$. We also show a similar SETH-hardness result for $\mathrm{SVP}_\infty$; hardness of approximating $\mathrm{CVP}_p$ to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for $\mathrm{CVP}_p$ and $\mathrm{CVPP}_p$ for any $1 \leq p < \infty$ under different assumptions.
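    For reference, a standard statement of the problems named above (background definitions, not quoted from the paper):

        \[
          \mathrm{CVP}_p:\ \text{given a basis } B \text{ of a lattice } \mathcal{L}(B) = \{Bz : z \in \mathbb{Z}^n\} \text{ and a target } t,\ \text{find } v \in \mathcal{L}(B) \text{ minimizing } \|v - t\|_p.
        \]

    $\mathrm{SVP}_p$ asks instead for a shortest nonzero lattice vector, and $\mathrm{CVPP}_p$ is the preprocessing variant in which arbitrary precomputation on $B$ is allowed before the target $t$ arrives.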

    Hardness of Bounded Distance Decoding on Lattices in ℓ_p Norms

    Bounded Distance Decoding BDD_{p,α} is the problem of decoding a lattice when the target point is promised to be within an α factor of the minimum distance of the lattice, in the ℓ_p norm. We prove that BDD_{p,α} is NP-hard under randomized reductions where α → 1/2 as p → ∞ (and for α = 1/2 when p = ∞), thereby showing the hardness of decoding for distances approaching the unique-decoding radius for large p. We also show fine-grained hardness for BDD_{p,α}. For example, we prove that for all p ∈ [1,∞) \ 2ℤ and constants C > 1, ε > 0, there is no 2^((1-ε)n/C)-time algorithm for BDD_{p,α} for some constant α (which approaches 1/2 as p → ∞), assuming the randomized Strong Exponential Time Hypothesis (SETH). Moreover, essentially all of our results also hold (under analogous non-uniform assumptions) for BDD with preprocessing, in which unbounded precomputation can be applied to the lattice before the target is available. Compared to prior work on the hardness of BDD_{p,α} by Liu, Lyubashevsky, and Micciancio (APPROX-RANDOM 2008), our results improve the values of α for which the problem is known to be NP-hard for all p > p_1 ≈ 4.2773, and give the very first fine-grained hardness for BDD (in any norm). Our reductions rely on a special family of "locally dense" lattices in ℓ_p norms, which we construct by modifying the integer-lattice sparsification technique of Aggarwal and Stephens-Davidowitz (STOC 2018).
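    For context, the promise problem above can be stated as follows (a standard formulation, not quoted from the paper), where $\lambda_1^{(p)}(\mathcal{L})$ denotes the length of a shortest nonzero vector of $\mathcal{L}$ in the $\ell_p$ norm:

        \[
          \mathrm{BDD}_{p,\alpha}:\ \text{given a lattice } \mathcal{L} \text{ and a target } t \text{ promised to satisfy } \mathrm{dist}_p(t, \mathcal{L}) \leq \alpha \cdot \lambda_1^{(p)}(\mathcal{L}),\ \text{find a closest lattice vector to } t.
        \]

    When $\alpha < 1/2$ the closest vector is unique, which is why hardness for $\alpha$ approaching $1/2$ is described as approaching the unique-decoding radius.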

    Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes

    We give a simple proof that the (approximate, decisional) Shortest Vector Problem is NP-hard under a randomized reduction. Specifically, we show that for any $p \geq 1$ and any constant $\gamma < 2^{1/p}$, the $\gamma$-approximate problem in the $\ell_p$ norm ($\gamma$-$\mathrm{GapSVP}_p$) is not in $\mathsf{RP}$ unless $\mathsf{NP} \subseteq \mathsf{RP}$. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of $\gamma$-$\mathrm{GapSVP}_p$ using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known NP-hardness results for $\mathrm{GapSVP}_p$ with $p < \infty$, our reduction uses randomness. Indeed, it is a notorious open problem to prove NP-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding $n$-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an $O(\sqrt{\log n})$ factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
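    As a rough illustration of the building block named above, here is a minimal membership test for a "Construction A Reed-Solomon lattice" with toy parameters of my own choosing (not the paper's): Construction A applied to a q-ary linear code C yields the lattice { v ∈ Z^n : v mod q ∈ C }, and a Reed-Solomon codeword is the evaluation table of a polynomial of degree less than k over F_q.

        # Minimal sketch with assumed toy parameters (not the paper's construction).
        q, k = 13, 3                  # prime modulus; codewords come from polynomials of degree < k
        evals = list(range(q))        # evaluation points: all of F_q, so n = q
        n = len(evals)

        def is_rs_codeword(c):
            """Check that c in F_q^n is the evaluation table of some polynomial of degree < k."""
            def interp_at(x):
                # Lagrange-interpolate through the first k points, evaluate at x (mod q).
                total = 0
                for j in range(k):
                    num, den = 1, 1
                    for m in range(k):
                        if m != j:
                            num = num * (x - evals[m]) % q
                            den = den * (evals[j] - evals[m]) % q
                    total = (total + c[j] * num * pow(den, -1, q)) % q
                return total
            return all(interp_at(evals[i]) == c[i] for i in range(k, n))

        def in_construction_a_lattice(v):
            """v in Z^n lies in the Construction A lattice iff v reduced mod q is a codeword."""
            return is_rs_codeword([x % q for x in v])

        # Example: evaluations of 2*x^2 + 1 (degree 2 < k), shifted by a vector in q*Z^n.
        v = [(2 * x * x + 1) % q + q * (x % 2) for x in evals]
        print(in_construction_a_lattice(v))   # True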

    On Percolation and NP-Hardness

    The edge-percolation and vertex-percolation random graph models start with an arbitrary graph G and randomly delete edges or vertices of G with some fixed probability. We study the computational hardness of problems whose inputs are obtained by applying percolation to worst-case instances. Specifically, we show that a number of classical NP-hard graph problems remain essentially as hard on percolated instances as they are in the worst case (assuming NP ⊄ BPP). We also prove hardness results for other NP-hard problems such as Constraint Satisfaction Problems, where random deletions are applied to clauses or variables. We focus on proving the hardness of the Maximum Independent Set problem and the Graph Coloring problem on percolated instances. To show this we establish the robustness of the corresponding parameters α(·) and χ(·) to percolation, which may be of independent interest. Given a graph G, let G' be the graph obtained by randomly deleting edges of G. We show that if α(G) is small, then α(G') remains small with probability at least 0.99. Similarly, we show that if χ(G) is large, then χ(G') remains large with probability at least 0.99.
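    To make the model concrete, here is a minimal sketch (my own illustration, with a brute-force α(·) that is only feasible for tiny graphs) of edge percolation and of measuring how the independence number changes:

        import itertools, random

        def percolate_edges(edges, delete_prob, rng=random):
            """Edge percolation: independently delete each edge with probability delete_prob."""
            return [e for e in edges if rng.random() >= delete_prob]

        def independence_number(n, edges):
            """Brute-force alpha(G) on vertex set {0,...,n-1}; exponential time, tiny graphs only."""
            forbidden = {frozenset(e) for e in edges}
            for size in range(n, 0, -1):
                for subset in itertools.combinations(range(n), size):
                    if all(frozenset(pair) not in forbidden
                           for pair in itertools.combinations(subset, 2)):
                        return size
            return 0

        # Toy demo on a 5-cycle: deleting edges can only increase alpha(G).
        C5 = [(i, (i + 1) % 5) for i in range(5)]
        print(independence_number(5, C5))                          # 2
        print(independence_number(5, percolate_edges(C5, 0.5)))    # between 2 and 5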

    Improved Hardness of BDD and SVP Under Gap-(S)ETH

    We show improved fine-grained hardness of two key lattice problems in the $\ell_p$ norm: Bounded Distance Decoding to within an $\alpha$ factor of the minimum distance ($\mathrm{BDD}_{p,\alpha}$) and the (decisional) $\gamma$-approximate Shortest Vector Problem ($\mathrm{SVP}_{p,\gamma}$), assuming variants of the Gap (Strong) Exponential Time Hypothesis (Gap-(S)ETH). Specifically, we show:
    1. For all $p \in [1, \infty)$, there is no $2^{o(n)}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha_\mathsf{kn}$, where $\alpha_\mathsf{kn} = 2^{-c_\mathsf{kn}} < 0.98491$ and $c_\mathsf{kn}$ is the $\ell_2$ kissing-number constant, unless non-uniform Gap-ETH is false.
    2. For all $p \in [1, \infty)$, there is no $2^{o(n)}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha^\ddagger_p$, where $\alpha^\ddagger_p$ is explicit and satisfies $\alpha^\ddagger_p = 1$ for $1 \leq p \leq 2$, $\alpha^\ddagger_p < 1$ for $p > 2$, and $\alpha^\ddagger_p \to 1/2$ as $p \to \infty$, unless randomized Gap-ETH is false.
    3. For all $p \in [1, \infty) \setminus 2\mathbb{Z}$ and all $C > 1$, there is no $2^{n/C}$-time algorithm for $\mathrm{BDD}_{p,\alpha}$ for any constant $\alpha > \alpha^\dagger_{p,C}$, where $\alpha^\dagger_{p,C}$ is explicit and satisfies $\alpha^\dagger_{p,C} \to 1$ as $C \to \infty$ for any fixed $p \in [1, \infty)$, unless non-uniform Gap-SETH is false.
    4. For all $p > p_0 \approx 2.1397$, $p \notin 2\mathbb{Z}$, and all $C > C_p$, there is no $2^{n/C}$-time algorithm for $\mathrm{SVP}_{p,\gamma}$ for some constant $\gamma > 1$, where $C_p > 1$ is explicit and satisfies $C_p \to 1$ as $p \to \infty$, unless randomized Gap-SETH is false.
    Comment: ITCS 2022.
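    For reference, the flavor of hypothesis assumed above, stated from general background rather than from the paper (whose results use specific randomized and non-uniform variants); Gap-SETH is the analogous gapped strengthening of SETH:

        \[
          \textsf{Gap-ETH (informal)}:\ \exists\,\varepsilon, \delta > 0 \text{ such that no } 2^{\delta n}\text{-time algorithm, given a 3-CNF formula } \varphi \text{ on } n \text{ variables, distinguishes } \mathrm{val}(\varphi) = 1 \text{ from } \mathrm{val}(\varphi) \leq 1 - \varepsilon,
        \]

    where $\mathrm{val}(\varphi)$ denotes the maximum fraction of clauses satisfiable by a single assignment.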

    Parameterized Inapproximability of the Minimum Distance Problem over All Fields and the Shortest Vector Problem in All ℓ_p Norms

    We prove that the Minimum Distance Problem (MDP) on linear codes over any fixed finite field and parameterized by the input distance bound is W[1]-hard to approximate within any constant factor. We also prove analogous results for the parameterized Shortest Vector Problem (SVP) on integer lattices. Specifically, we prove that SVP in the ℓ_p norm is W[1]-hard to approximate within any constant factor for any fixed p > 1, and W[1]-hard to approximate within a factor approaching 2 for p = 1. (We show hardness under randomized reductions in each case.) These results answer the main questions left open (and explicitly posed) by Bhattacharyya, Bonnet, Egri, Ghoshal, Karthik C. S., Lin, Manurangsi, and Marx (Journal of the ACM, 2021) on the complexity of parameterized MDP and SVP. For MDP, they established similar hardness for binary linear codes and left the case of general fields open. For SVP in ℓ_p norms with p > 1, they showed inapproximability within some constant factor (depending on p) and left open showing such hardness for arbitrary constant factors. They also left open showing W[1]-hardness even of exact SVP in the ℓ_1 norm.
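    For reference, the parameterized problem in question, stated from standard background (notation mine): MDP is parameterized by the distance bound $k$, and its $\gamma$-approximate version must distinguish minimum distance at most $k$ from greater than $\gamma k$.

        \[
          \mathrm{MDP}:\ \text{given a generator matrix of a linear code } \mathcal{C} \subseteq \mathbb{F}_q^n \text{ and } k \in \mathbb{N},\ \text{decide whether } \min_{c \in \mathcal{C} \setminus \{0\}} \mathrm{wt}(c) \leq k,
        \]

    where $\mathrm{wt}(\cdot)$ is Hamming weight; parameterized SVP is analogous, with $\mathcal{C}$ replaced by an integer lattice and $\mathrm{wt}$ by the $\ell_p$ norm.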

    Matrix Multiplication Verification Using Coding Theory

    We study the Matrix Multiplication Verification Problem (MMV), where the goal is, given three $n \times n$ matrices $A$, $B$, and $C$ as input, to decide whether $AB = C$. A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in $\widetilde{O}(n^2)$ time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in $o(n^\omega)$ time). To that end, we give two algorithms for MMV in the case where $AB - C$ is sparse. Specifically, when $AB - C$ has at most $O(n^\delta)$ non-zero entries for a constant $0 \leq \delta < 2$, we give (1) a deterministic $O(n^{\omega - \varepsilon})$-time algorithm for constant $\varepsilon = \varepsilon(\delta) > 0$, and (2) a randomized $\widetilde{O}(n^2)$-time algorithm using $\delta/2 \cdot \log_2 n + O(1)$ random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when $\delta \geq 1.056$, and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses $\log_2 n + O(1)$ random bits (in turn fewer than Freivalds's algorithm). We additionally study the complexity of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require $\Omega(n^\omega)$ time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic $\widetilde{O}(n^2)$-time reductions).
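    For context, the classic Freivalds check referenced above compares $A(Bx)$ with $Cx$ for a random vector $x \in \{0,1\}^n$: each trial costs three matrix-vector products, i.e. $O(n^2)$ arithmetic operations, and accepts a false product with probability at most $1/2$. A minimal sketch:

        import random

        def freivalds_verify(A, B, C, trials=20):
            """Freivalds' check: accept AB == C if A(Bx) == Cx for random x in {0,1}^n.

            Each trial uses three matrix-vector products (O(n^2) time) and, when
            AB != C, falsely accepts with probability at most 1/2 per trial.
            """
            n = len(A)
            def matvec(M, x):
                return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
            for _ in range(trials):
                x = [random.randint(0, 1) for _ in range(n)]
                if matvec(A, matvec(B, x)) != matvec(C, x):
                    return False          # found a witness: AB != C for sure
            return True                   # AB == C with probability >= 1 - 2**(-trials)

        # Example: verify a correct product and catch a corrupted one.
        A = [[1, 2], [3, 4]]
        B = [[5, 6], [7, 8]]
        C = [[19, 22], [43, 50]]          # C equals A times B
        print(freivalds_verify(A, B, C))  # True
        C[0][0] += 1
        print(freivalds_verify(A, B, C))  # almost surely False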