
    On the Closest Vector Problem with a Distance Guarantee

    We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any $\alpha < 1/2$, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most $\alpha$ times the length of the shortest nonzero lattice vector, requires as preprocessing advice only $N \approx \widetilde{O}(n \exp(\alpha^2 n /(1-2\alpha)^2))$ vectors, and runs in time $\widetilde{O}(nN)$. As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of $O(n/\sqrt{\log n})$ for search CVPP, improving on the previous best of $O(n^{1.5})$ due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm we obtain, somewhat surprisingly, that only $O(n)$ vectors of preprocessing advice are sufficient to solve CVPP with (the only slightly worse) approximation factor of $O(n)$.
    Comment: An early version of the paper was titled "On Bounded Distance Decoding and the Closest Vector Problem with Preprocessing". Conference on Computational Complexity (2014).
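
    As a point of reference only (this is not the algorithm from the paper), the sketch below shows the classical Babai rounding heuristic in Python: express the target in the given basis over the reals, round the coefficients, and map back to the lattice. When the target is very close to the lattice and the basis is reasonably orthogonal, this recovers the closest lattice point; the basis and target here are made-up illustrative values.

        import numpy as np

        def babai_round(basis, target):
            """Return a nearby lattice point by solving B c = target and rounding c.

            basis: list of n linearly independent vectors (rows of the basis matrix).
            """
            B = np.array(basis, dtype=float).T        # columns are the basis vectors
            coeffs = np.linalg.solve(B, np.array(target, dtype=float))
            return B @ np.rint(coeffs)                # round coefficients, map back to the lattice

        # The target (4.1, 6.05) lies within ~0.11 of the lattice point (4, 6),
        # well inside the bounded-distance regime, and rounding recovers it.
        print(babai_round([[2, 0], [1, 3]], [4.1, 6.05]))   # -> [4. 6.]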

    Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard

    Maximum-likelihood decoding is one of the central algorithmic problems in coding theory. It has been known for over 25 years that maximum-likelihood decoding of general linear codes is NP-hard. Nevertheless, it was so far unknown whether maximum-likelihood decoding remains hard for any specific family of codes with nontrivial algebraic structure. In this paper, we prove that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes remains hard even with unlimited preprocessing, thereby strengthening a result of Bruck and Naor.
    Comment: 16 pages, no figures
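
    To make the problem concrete, here is a toy sketch (not from the paper) of exhaustive maximum-likelihood decoding of a tiny Reed-Solomon code over F_7: every message polynomial is encoded and compared to the received word in Hamming distance. The exhaustive search over all q^k messages is exactly the kind of brute force the NP-hardness result suggests cannot be avoided in general; the parameters below are illustrative only.

        from itertools import product

        q, k = 7, 2                      # toy field size and message length
        points = [1, 2, 3, 4, 5]         # distinct evaluation points in F_q

        def encode(msg):
            """Evaluate the message polynomial sum(msg[i] * x**i) at each point, mod q."""
            return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q for x in points]

        def ml_decode(received):
            """Exhaustive maximum-likelihood decoding: try every message and keep the
            codeword closest to `received` in Hamming distance (q**k candidates)."""
            return min((encode(m) for m in product(range(q), repeat=k)),
                       key=lambda c: sum(a != b for a, b in zip(c, received)))

        # Corrupt one symbol of the codeword for message (3, 1) and decode it back;
        # the minimum distance n - k + 1 = 4 makes a single error uniquely correctable.
        word = encode([3, 1])
        word[2] = (word[2] + 4) % q
        print(ml_decode(word), encode([3, 1]))   # the two lists should match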

    Parameterized Intractability of Even Set and Shortest Vector Problem from Gap-ETH

    The k-Even Set problem is a parameterized variant of the Minimum Distance Problem of linear codes over F_2, which can be stated as follows: given a generator matrix A and an integer k, determine whether the code generated by A has distance at most k. Here, k is the parameter of the problem. The question of whether k-Even Set is fixed parameter tractable (FPT) has been repeatedly raised in the literature and has earned its place in Downey and Fellows' book (2013) as one of the "most infamous" open problems in the field of Parameterized Complexity. In this work, we show that k-Even Set does not admit FPT algorithms under the (randomized) Gap Exponential Time Hypothesis (Gap-ETH) [Dinur'16, Manurangsi-Raghavendra'16]. In fact, our result rules out not only exact FPT algorithms, but also any constant factor FPT approximation algorithms for the problem. Furthermore, our result holds even under the following weaker assumption, which is also known as the Parameterized Inapproximability Hypothesis (PIH) [Lokshtanov et al.'17]: no (randomized) FPT algorithm can distinguish a satisfiable 2CSP instance from one which is only 0.99-satisfiable (where the parameter is the number of variables). We also consider the parameterized k-Shortest Vector Problem (SVP), in which we are given a lattice whose basis vectors are integral and an integer k, and the goal is to determine whether the norm of the shortest vector (in the l_p norm for some fixed p) is at most k. Similar to k-Even Set, this problem is also a long-standing open problem in the field of Parameterized Complexity. We show that, for any p > 1, k-SVP is hard to approximate (in FPT time) to some constant factor, assuming PIH. Furthermore, for the case of p = 2, the inapproximability factor can be amplified to any constant.
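
    For concreteness, the illustrative sketch below (not from the paper) decides the k-Even Set question by brute force: it enumerates all messages of a binary generator matrix and checks whether any nonzero codeword has Hamming weight at most k. The open question discussed above is whether this exponential dependence on the matrix dimensions can be traded for an FPT dependence on k alone; the example matrix is made up.

        from itertools import product

        def has_codeword_of_weight_at_most_k(A, k):
            """Naive k-Even Set check: A is a generator matrix over F_2 (list of rows,
            each row a list of 0/1); decide whether some nonzero message yields a
            codeword of Hamming weight <= k.  Runs in 2^(number of rows) time."""
            for msg in product([0, 1], repeat=len(A)):
                if not any(msg):
                    continue                                  # skip the zero codeword
                codeword = [sum(m * a for m, a in zip(msg, col)) % 2
                            for col in zip(*A)]               # msg * A over F_2
                if sum(codeword) <= k:
                    return True
            return False

        # This [4, 2] code has minimum distance 2, so the answer flips at k = 2.
        A = [[1, 0, 1, 1],
             [0, 1, 1, 0]]
        print(has_codeword_of_weight_at_most_k(A, 1),    # False
              has_codeword_of_weight_at_most_k(A, 2))    # True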

    Cryptography from tensor problems

    We describe a new proposal for a trap-door one-way function. The new proposal belongs to the "multivariate quadratic" family, but the trap-door is different from existing methods, and is simpler.

    On the Quantitative Hardness of CVP

    For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem in the $\ell_p$ norm ($\mathrm{CVP}_p$) over rank $n$ lattices cannot be solved in $2^{(1-\varepsilon)n}$ time for any constant $\varepsilon > 0$ unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of $p \geq 1$, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of $\mathrm{CVP}_2$ (i.e., CVP in the Euclidean norm), for which a $2^{n+o(n)}$-time algorithm is known. In particular, our result applies for any $p = p(n) \neq 2$ that approaches $2$ as $n \to \infty$. We also show a similar SETH-hardness result for $\mathrm{SVP}_\infty$; hardness of approximating $\mathrm{CVP}_p$ to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for $\mathrm{CVP}_p$ and $\mathrm{CVPP}_p$ for any $1 \leq p < \infty$ under different assumptions.
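
    As a baseline illustration (not the $2^{n+o(n)}$-time algorithm referenced above), the sketch below solves tiny $\mathrm{CVP}_p$ instances exactly by enumerating integer coefficient vectors in a bounded box; the exponential blow-up in the rank n is precisely the quantity whose necessity the paper studies. The basis, target, and box size are illustrative assumptions.

        from itertools import product

        def cvp_bruteforce(basis, target, p, box=3):
            """Illustrative exact-CVP search in the l_p norm: enumerate integer
            coefficient vectors in [-box, box]^n and return the lattice point closest
            to `target`.  Correct only if the true closest vector has coefficients
            inside the box; the point is the exponential (2*box+1)^n cost."""
            n = len(basis)
            def lattice_point(coeffs):
                return [sum(c * basis[i][j] for i, c in enumerate(coeffs))
                        for j in range(len(basis[0]))]
            def lp_dist(v):
                return sum(abs(v_j - t_j) ** p for v_j, t_j in zip(v, target)) ** (1 / p)
            return min((lattice_point(c) for c in product(range(-box, box + 1), repeat=n)),
                       key=lp_dist)

        # Closest point of the lattice spanned by (3, 1) and (0, 2) to the target
        # (2.4, 2.1), measured in the l_1 norm.
        print(cvp_bruteforce([[3, 1], [0, 2]], [2.4, 2.1], p=1))   # -> [3, 3]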

    Improved Lower Bounds for Approximating Parameterized Nearest Codeword and Related Problems under ETH

    In this paper we present a new gap-creating randomized self-reduction for the parameterized Maximum Likelihood Decoding problem over $\mathbb{F}_p$ ($k$-$\mathrm{MLD}_p$). The reduction takes a $k$-$\mathrm{MLD}_p$ instance with $k \cdot n$ vectors as input, runs in time $f(k) n^{O(1)}$ for some computable function $f$, and outputs a $(3/2-\varepsilon)$-Gap-$k'$-$\mathrm{MLD}_p$ instance for any $\varepsilon > 0$, where $k' = O(k^2 \log k)$. Using this reduction, we show that assuming the randomized Exponential Time Hypothesis (ETH), no algorithm can approximate $k$-$\mathrm{MLD}_p$ (and therefore its dual problem $k$-$\mathrm{NCP}_p$) within factor $(3/2-\varepsilon)$ in $f(k) \cdot n^{o(\sqrt{k/\log k})}$ time for any $\varepsilon > 0$. We then use the reduction by Bhattacharyya, Ghoshal, Karthik and Manurangsi (ICALP 2018) to amplify the $(3/2-\varepsilon)$-gap to any constant. As a result, we show that assuming ETH, no algorithm can approximate $k$-$\mathrm{NCP}_p$ and $k$-$\mathrm{MDP}_p$ within a factor of $\gamma$ in $f(k) n^{o(k^{\varepsilon_\gamma})}$ time for some constant $\varepsilon_\gamma > 0$. Combining with the gap-preserving reduction by Bennett, Cheraghchi, Guruswami and Ribeiro (STOC 2023), we also obtain similar lower bounds for $k$-$\mathrm{MDP}_p$, $k$-$\mathrm{CVP}_p$ and $k$-$\mathrm{SVP}_p$. These results improve upon the previous $f(k) n^{\Omega(\mathrm{poly}\log k)}$ lower bounds for these problems under ETH, using reductions by Bhattacharyya et al. (J. ACM 2021) and Bennett et al. (STOC 2023).
    Comment: 32 pages, 3 figures
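
    To fix notation, here is an illustrative brute-force decision procedure (not the reduction from the paper) for the nearest-codeword question underlying $k$-$\mathrm{NCP}_p$: enumerate all messages of a code over F_p and check whether some codeword lies within Hamming distance k of the target. The lower bounds above concern how much faster than such exhaustive search one can go as a function of the parameter k; the small code over F_3 is made up.

        from itertools import product

        def k_ncp_decision(A, target, k, p):
            """Naive decision version of k-NCP_p: A generates a code over F_p (rows are
            generators); return True iff some codeword lies within Hamming distance k
            of `target`.  Exhaustive over all p^(number of rows) messages."""
            for msg in product(range(p), repeat=len(A)):
                codeword = [sum(c * a for c, a in zip(msg, col)) % p for col in zip(*A)]
                if sum(x != y for x, y in zip(codeword, target)) <= k:
                    return True
            return False

        A = [[1, 0, 2, 1],
             [0, 1, 1, 2]]                              # a small code over F_3
        print(k_ncp_decision(A, [1, 2, 0, 0], k=1, p=3))  # -> True: [1, 1, 0, 0] is at distance 1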

    A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms

    Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area from both the algorithmic and the hardness perspectives, with emphasis on new techniques and potential future research directions.