Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard
Maximum-likelihood decoding is one of the central algorithmic problems in
coding theory. It has been known for over 25 years that maximum-likelihood
decoding of general linear codes is NP-hard. Nevertheless, it was so far
unknown whether maximum-likelihood decoding remains hard for any specific
family of codes with nontrivial algebraic structure. In this paper, we prove
that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon
codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes
remains hard even with unlimited preprocessing, thereby strengthening a result
of Bruck and Naor. Comment: 16 pages, no figures.
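For readers outside coding theory: a Reed-Solomon codeword is the vector of evaluations of a low-degree message polynomial at fixed field points, and maximum-likelihood decoding asks for the codeword closest in Hamming distance to a received word. A toy illustration over a prime field (the function names and parameters below are ours, not the paper's):

```python
# Toy Reed-Solomon encoder over the prime field GF(p).
# A message (m_0, ..., m_{k-1}) defines the polynomial
# m(x) = m_0 + m_1*x + ... + m_{k-1}*x^{k-1}; the codeword is
# the evaluation vector (m(a) for a in evals), computed mod p.
def rs_encode(msg, evals, p):
    return [sum(m * pow(a, i, p) for i, m in enumerate(msg)) % p
            for a in evals]

# Hamming distance between two words of equal length.
def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))
```

Maximum-likelihood decoding means finding, among all codewords, one minimizing this Hamming distance to the received word; the paper's result is that computing such a minimizer for Reed-Solomon codes is NP-hard.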
Linear-time nearest point algorithms for Coxeter lattices
The Coxeter lattices, which we denote A_{n/m}, are a family of lattices
containing many of the important lattices in low dimensions. This includes
A_n, E_7, E_8 and their duals A_n^*, E_7^* and E_8^*. We consider
the problem of finding a nearest point in a Coxeter lattice. We describe two
new algorithms, one with worst case arithmetic complexity O(n log n) and the
other with worst case complexity O(n), where n is the dimension of the
lattice. We show that for the particular lattices A_n and A_n^* the
algorithms reduce to simple nearest point algorithms that already exist in the
literature. Comment: submitted to IEEE Transactions on Information Theory.
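As a concrete point of comparison, the classic nearest-point algorithm for the lattice A_n (integer vectors with zero coordinate sum), due to Conway and Sloane, rounds each coordinate and then repairs the sum by adjusting the coordinates with extreme rounding residuals. A sketch (assuming the target already lies in the sum-zero hyperplane; this illustrates the style of algorithm discussed, not the paper's A_{n/m} algorithm itself):

```python
import numpy as np

def nearest_point_An(y):
    # y must lie in the hyperplane sum(y) == 0 (the span of A_n).
    f = np.rint(y)                  # round each coordinate to an integer
    delta = int(round(f.sum()))     # deficiency: how far the sum is from 0
    if delta == 0:
        return f
    r = y - f                       # rounding residuals, each in [-0.5, 0.5]
    order = np.argsort(r)           # indices sorted by ascending residual
    if delta > 0:
        # decrement the delta coordinates with the smallest residuals
        f[order[:delta]] -= 1
    else:
        # increment the |delta| coordinates with the largest residuals
        f[order[delta:]] += 1
    return f
```

The sort makes this O(n log n); replacing it with a linear-time selection of the |delta| extreme residuals gives an O(n) variant, matching the two complexities quoted in the abstract.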
On the Closest Vector Problem with a Distance Guarantee
We present a substantially more efficient variant, both in terms of running
time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky,
and Micciancio for solving CVPP (the preprocessing version of the Closest
Vector Problem, CVP) with a distance guarantee. For instance, for any
α < 1/2, our algorithm finds the (unique) closest lattice point for any target
point whose distance from the lattice is at most α times the length of
the shortest nonzero lattice vector, requires as preprocessing advice only
Õ(n) vectors, and runs in time Õ(n²).
As our second main contribution, we present reductions showing that it
suffices to solve CVP, both in its plain and preprocessing versions, when the
input target point is within some bounded distance of the lattice. The
reductions are based on ideas due to Kannan and a recent sparsification
technique due to Dadush and Kun. Combining our reductions with the LLM
algorithm gives an approximation factor of O(n/√(log n)) for search
CVPP, improving on the previous best of O(n^{1.5}) due to Lagarias, Lenstra,
and Schnorr. When combined with our improved algorithm we obtain, somewhat
surprisingly, that only O(n) vectors of preprocessing advice are sufficient to
solve CVPP with (the only slightly worse) approximation factor of O(n).
Comment: An early version of the paper was titled "On Bounded Distance
Decoding and the Closest Vector Problem with Preprocessing". Conference on
Computational Complexity (2014).
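For background (this is not the paper's algorithm), the simplest approximation heuristic for CVP is Babai's rounding: express the target in lattice coordinates and round to integers. Its quality degrades badly with the basis, which is one reason preprocessing (CVPP) and distance guarantees matter. A minimal sketch:

```python
import numpy as np

def babai_rounding(B, t):
    """Return a point of the lattice with basis matrix B (columns are
    basis vectors) near the target t, by rounding t's coordinates in
    that basis. A baseline heuristic only: with a bad basis the answer
    can be far from the true closest vector."""
    coords = np.linalg.solve(B, t)   # t expressed in lattice coordinates
    return B @ np.rint(coords)       # round to an integer combination
```

With a well-reduced basis and a target within a suitable distance guarantee, rounding-type procedures can recover the exact closest point; the preprocessing advice in CVPP algorithms plays an analogous role to such a good basis.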
On the Quantitative Hardness of CVP
For odd
integers p ≥ 1 (and p = ∞), we show that the Closest Vector Problem
in the ℓ_p norm (CVP_p) over rank-n lattices cannot be solved in
2^{(1-ε)n} time for any constant ε > 0 unless the Strong Exponential
Time Hypothesis (SETH) fails. We then extend this result to "almost all" values
of p ≥ 1, not including the even integers. This comes tantalizingly close
to settling the quantitative time complexity of the important special case of
CVP_2 (i.e., CVP in the Euclidean norm), for which a 2^{n+o(n)}-time
algorithm is known. In particular, our result applies for any p ≠ 2
that approaches 2 as n → ∞.
We also show a similar SETH-hardness result for SVP_∞; hardness of
approximating CVP_p to within some constant factor under the so-called
Gap-ETH assumption; and other quantitative hardness results for CVP_p and
CVPP_p for any p under different assumptions.