On the Lattice Distortion Problem
We introduce and study the Lattice Distortion Problem (LDP). LDP asks
how "similar" two lattices are. That is, what is the minimal distortion of a
linear bijection between the two lattices? LDP generalizes the Lattice
Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply
asks whether the minimal distortion is one.
As our first contribution, we show that the distortion between any two
lattices is approximated up to a factor by a simple function of
their successive minima. Our methods are constructive, allowing us to compute
low-distortion mappings that are within a factor
of optimal in polynomial time and within a factor of optimal in
singly exponential time. Our algorithms rely on a notion of basis reduction
introduced by Seysen (Combinatorica 1993), which we show is intimately related
to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to
within any constant factor (under randomized reductions), by a reduction from
the Shortest Vector Problem.
Comment: This is the full version of a paper that appeared in ESA 2016
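The distortion of a linear bijection T between lattices is typically measured as ||T||·||T⁻¹||, i.e. the ratio of T's largest to smallest singular value (its condition number); distortion one means the map is a similarity. A small NumPy sketch under that assumption (the function name and example bases are illustrative, not from the paper):

```python
import numpy as np

def distortion(T):
    """Distortion of the linear bijection T, measured as the ratio of
    its largest to its smallest singular value (the condition number)."""
    s = np.linalg.svd(T, compute_uv=False)
    return s[0] / s[-1]

# Columns of B1, B2 are basis vectors.  L1 = Z^2; L2 is Z^2 rotated
# by 45 degrees and scaled, so the two lattices have the same "shape".
B1 = np.eye(2)
B2 = np.array([[1.0, -1.0],
               [1.0,  1.0]])

# Every bijection mapping L1 onto L2 has the form T = B2 @ U @ inv(B1)
# for some unimodular integer matrix U; here we simply try U = I.
T = B2 @ np.linalg.inv(B1)
print(distortion(T))        # ~1.0: a scaled rotation distorts nothing

# A shear, by contrast, has distortion strictly greater than one.
print(distortion(np.array([[1.0, 0.5],
                           [0.0, 1.0]])))
```

Finding the unimodular U that minimizes the distortion over all bijections is the hard part, which is what LDP asks.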
On the Quantitative Hardness of CVP
For odd
integers p >= 1 (and p = infinity), we show that the Closest Vector Problem
in the l_p norm (CVP_p) over rank-n lattices cannot be solved in
2^{(1-eps) n} time for any constant eps > 0 unless the Strong Exponential
Time Hypothesis (SETH) fails. We then extend this result to "almost all" values
of p >= 1, not including the even integers. This comes tantalizingly close
to settling the quantitative time complexity of the important special case of
CVP_2 (i.e., CVP in the Euclidean norm), for which a 2^{n+o(n)}-time
algorithm is known. In particular, our result applies for any p = p(n)
that approaches 2 as n goes to infinity.
We also show a similar SETH-hardness result for SVP_infinity; hardness of
approximating CVP_p to within some constant factor under the so-called
Gap-ETH assumption; and other quantitative hardness results for CVP_p and
CVPP_p for any 1 <= p < infinity under different assumptions.
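The problem itself is simple to state: given a basis B and a target t, find the lattice vector minimizing the l_p distance to t. A brute-force Python sketch (basis, target, and enumeration radius are illustrative; the point of the hardness results above is that roughly 2^n running time is likely unavoidable):

```python
import itertools
import numpy as np

def cvp_bruteforce(B, t, p, R=3):
    """Exact CVP_p on a small lattice by exhaustive enumeration: try
    every coefficient vector in {-R,...,R}^n and keep the lattice
    vector closest to t in the l_p norm.  Exponential in the rank n,
    which is exactly the regime the SETH lower bound addresses."""
    n = B.shape[1]
    best, best_dist = None, np.inf
    for coeffs in itertools.product(range(-R, R + 1), repeat=n):
        v = B @ np.array(coeffs, dtype=float)
        d = np.linalg.norm(v - t, ord=p)
        if d < best_dist:
            best, best_dist = v, d
    return best, best_dist

B = np.array([[2.0, 1.0],
              [0.0, 2.0]])    # basis vectors as columns
t = np.array([2.4, 0.9])
v, d = cvp_bruteforce(B, t, p=1)
print(v, d)                   # closest lattice point in the l_1 norm
```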
Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One
We show the first dimension-preserving search-to-decision reductions for
approximate SVP and CVP. In particular, for any sufficiently small constant
approximation factor gamma > 1, we obtain an efficient dimension-preserving
reduction from approximate SVP to gamma-GapSVP and an efficient
dimension-preserving reduction from approximate CVP to gamma-GapCVP. These
results generalize the known equivalences of the search and decision versions
of these problems in the exact case, when gamma = 1. For SVP, we actually
obtain something slightly stronger than a search-to-decision reduction: we
reduce approximate SVP to gamma-unique SVP, a potentially easier problem than
gamma-GapSVP.
Comment: Updated to acknowledge additional prior work
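One easy warm-up direction of the exact (gamma = 1) equivalence is recovering the *length* of the shortest vector from a decision oracle by binary search. A toy Python sketch, with the oracle simulated by brute force (all names are ours; the reductions above do far more, recovering the vector itself while preserving dimension, even for gamma > 1):

```python
import itertools
import numpy as np

def lambda1_bruteforce(B, R=4):
    """Ground-truth shortest-vector length by enumeration.
    Hypothetical helper used only to simulate the oracle below."""
    n = B.shape[1]
    best = np.inf
    for c in itertools.product(range(-R, R + 1), repeat=n):
        if any(c):
            best = min(best, np.linalg.norm(B @ np.array(c, dtype=float)))
    return best

def gap_svp_oracle(B, r):
    """Exact decision oracle (gamma = 1): does the lattice contain a
    nonzero vector of norm at most r?"""
    return lambda1_bruteforce(B) <= r

def estimate_lambda1(B, lo=0.0, hi=10.0, iters=40):
    """Binary search on the decision oracle: halve the interval
    [lo, hi] until it pins down lambda_1 to high precision."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if gap_svp_oracle(B, mid):
            hi = mid
        else:
            lo = mid
    return hi

B = np.array([[3.0, 1.0],
              [0.0, 2.0]])
print(estimate_lambda1(B))    # length of the shortest nonzero vector
```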
New Shortest Lattice Vector Problems of Polynomial Complexity
The Shortest Lattice Vector (SLV) problem is in general hard to solve, except
for special cases (such as root lattices and lattices for which an obtuse
superbase is known). In this paper, we present a new class of SLV problems that
can be solved efficiently. Specifically, if for an n-dimensional lattice, a
Gram matrix is known that can be written as the difference of a diagonal matrix
and a positive semidefinite matrix of rank k (for some constant k), we show
that the SLV problem can be reduced to a k-dimensional optimization problem
with countably many candidate points. Moreover, we show that the number of
candidate points is bounded by a polynomial function of the ratio of the
smallest diagonal element and the smallest eigenvalue of the Gram matrix.
Hence, as long as this ratio is upper bounded by a polynomial function of n,
the corresponding SLV problem can be solved in polynomial complexity. Our
investigations are motivated by the emergence of such lattices in the field of
Network Information Theory. Further applications may exist in other areas.
Comment: 13 pages
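The structural condition is easy to experiment with numerically. A NumPy sketch that builds a Gram matrix of the stated form G = D - P and computes the diagonal-to-eigenvalue ratio that controls the candidate count (the particular construction is ours, chosen only so that G comes out positive definite):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2                      # lattice dimension n, constant rank k

# P: positive semidefinite of rank k.
A = rng.standard_normal((n, k))
P = A @ A.T

# D: a diagonal matrix large enough that G = D - P is a valid
# (positive definite) Gram matrix.
D = (np.linalg.eigvalsh(P).max() + 1.0) * np.eye(n)
G = D - P

eigs = np.linalg.eigvalsh(G)
assert eigs.min() > 0            # G is positive definite

# The complexity bound depends on this ratio: the number of candidate
# points is polynomial in (smallest diagonal entry) / (smallest eigenvalue).
ratio = np.diag(G).min() / eigs.min()
print(ratio)
```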
Solving the Closest Vector Problem in 2^n Time --- The Discrete Gaussian Strikes Again!
We give a 2^{n+o(n)}-time and space randomized algorithm for solving the
exact Closest Vector Problem (CVP) on n-dimensional Euclidean lattices. This
improves on the previous fastest algorithm, the deterministic
O~(4^n)-time and O~(2^n)-space algorithm of
Micciancio and Voulgaris.
We achieve our main result in three steps. First, we show how to modify the
sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian
sampling over lattice shifts, L - t, with very low parameters. While the
actual algorithm is a natural generalization of [ADRS15], the analysis uses
substantial new ideas. This yields a 2^{n+o(n)}-time algorithm for
approximate CVP for any approximation factor close to one.
Second, we show that the approximate closest vectors to a target vector can
be grouped into "lower-dimensional clusters," and we use this to obtain a
recursive reduction from exact CVP to a variant of approximate CVP that
"behaves well with these clusters." Third, we show that our discrete Gaussian
sampling algorithm can be used to solve this variant of approximate CVP.
The analysis depends crucially on some new properties of the discrete
Gaussian distribution and approximate closest vectors, which might be of
independent interest.
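The object being sampled assigns each point x of the shifted lattice L - t a probability proportional to exp(-pi * ||x||^2 / s^2). A one-dimensional toy sampler over Z - t with truncated support (the algorithm in the paper handles general n-dimensional lattices and the delicate low-parameter regime; this sketch only illustrates the distribution itself):

```python
import math
import random

def discrete_gaussian_shift(t, s, cutoff=20):
    """Sample from the discrete Gaussian over the shifted lattice
    Z - t with parameter s: point x gets weight exp(-pi*x^2/s^2).
    Support is truncated to |z| <= cutoff, which is harmless for
    small s since the tail weights are negligible."""
    points = [z - t for z in range(-cutoff, cutoff + 1)]
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in points]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for x, w in zip(points, weights):
        acc += w
        if r <= acc:
            return x
    return points[-1]

random.seed(1)
samples = [discrete_gaussian_shift(t=0.3, s=2.0) for _ in range(10000)]
print(sum(samples) / len(samples))   # empirical mean, close to 0
```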
Compute-and-Forward: Finding the Best Equation
Compute-and-Forward is an emerging technique to deal with interference. It
allows the receiver to decode a suitably chosen integer linear combination of
the transmitted messages. The integer coefficients should be adapted to the
channel fading state. Optimizing these coefficients is a Shortest Lattice
Vector (SLV) problem. In general, the SLV problem is known to be prohibitively
complex. In this paper, we show that the particular SLV instance resulting from
the Compute-and-Forward problem can be solved in low polynomial complexity and
give an explicit deterministic algorithm that is guaranteed to find the optimal
solution.
Comment: Paper presented at 52nd Allerton Conference, October 2014
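Assuming the standard computation-rate objective of Nazer and Gastpar, R(h, a) = (1/2) log2( 1 / (||a||^2 - P (h.a)^2 / (1 + P ||h||^2)) ) clipped at zero, the naive baseline is exhaustive search over integer coefficient vectors. A Python sketch of that baseline (channel values and search radius are illustrative; the contribution above is replacing this exponential search with a polynomial-complexity exact algorithm):

```python
import itertools
import math

def computation_rate(h, a, P):
    """Computation rate for integer coefficient vector a at channel
    vector h and transmit power P (in bits, clipped at zero)."""
    ha = sum(hi * ai for hi, ai in zip(h, a))
    hh = sum(hi * hi for hi in h)
    aa = sum(ai * ai for ai in a)
    # By Cauchy-Schwarz this denominator is positive for nonzero a.
    denom = aa - P * ha * ha / (1.0 + P * hh)
    return max(0.0, 0.5 * math.log2(1.0 / denom))

def best_equation_bruteforce(h, P, R=5):
    """Exhaustive search over nonzero a in {-R,...,R}^n for the
    coefficient vector maximizing the computation rate."""
    best_a, best_rate = None, -1.0
    for a in itertools.product(range(-R, R + 1), repeat=len(h)):
        if not any(a):
            continue
        r = computation_rate(h, a, P)
        if r > best_rate:
            best_a, best_rate = a, r
    return best_a, best_rate

h = [1.2, -0.4, 0.9]       # channel fading state (illustrative)
a, rate = best_equation_bruteforce(h, P=10.0)
print(a, rate)
```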