Solving the Closest Vector Problem in 2^n Time --- The Discrete Gaussian Strikes Again!
We give a 2^{n+o(n)}-time and space randomized algorithm for solving the
exact Closest Vector Problem (CVP) on n-dimensional Euclidean lattices.
This improves on the previous fastest algorithm, the deterministic
Õ(4^n)-time and Õ(2^n)-space algorithm of Micciancio and Voulgaris.
We achieve our main result in three steps. First, we show how to modify the
sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian
sampling over lattice shifts, L - t, with very low parameters. While the
actual algorithm is a natural generalization of [ADRS15], the analysis uses
substantial new ideas. This yields a 2^{n+o(n)}-time algorithm for
approximate CVP for any approximation factor γ = 1 + 2^{-o(n/log n)}.
Second, we show that the approximate closest vectors to a target vector can
be grouped into "lower-dimensional clusters," and we use this to obtain a
recursive reduction from exact CVP to a variant of approximate CVP that
"behaves well with these clusters." Third, we show that our discrete Gaussian
sampling algorithm can be used to solve this variant of approximate CVP.
The analysis depends crucially on some new properties of the discrete
Gaussian distribution and approximate closest vectors, which might be of
independent interest.
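The role of "very low parameters" can be seen already in one dimension. The sketch below is a toy illustration, not the paper's algorithm: the discrete Gaussian over the shifted lattice Z - t with parameter s gives each point x mass proportional to e^{-π x²/s²}, and as s shrinks, the mass concentrates on the point of Z - t nearest the origin, which encodes the closest lattice point to the target t.

```python
import math

def dgauss_pmf(t, s, tail=20):
    """PMF of the discrete Gaussian D_{Z - t, s} over the shifted lattice
    Z - t, where a point x gets mass proportional to exp(-pi * x^2 / s^2).
    The support is truncated to |k| <= tail, which is harmless for small s."""
    support = [k - t for k in range(-tail, tail + 1)]
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in support]
    total = sum(weights)
    return {x: w / total for x, w in zip(support, weights)}

# For a small parameter s, almost all of the mass sits on the point of
# Z - t closest to the origin, which is why low-parameter sampling yields
# (approximate) closest vectors.
pmf = dgauss_pmf(t=0.3, s=0.5)
mode = max(pmf, key=pmf.get)
```

With t = 0.3 and s = 0.5, over 99% of the mass already sits on the single point -0.3, i.e. on k = 0, the integer closest to the target.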
Computing the theta function
Let q: R^n → R be a positive definite quadratic
form and let y ∈ R^n be a point. We present a fully polynomial
randomized approximation scheme (FPRAS) for computing Σ_{x ∈ Z^n} e^{-q(x)},
provided the eigenvalues of q lie in a suitable interval, and for computing
Σ_{x ∈ Z^n} e^{-q(x-y)}, provided the eigenvalues of q lie in another
suitable interval. To compute the first sum, we represent it as
the integral of an explicit log-concave function on R^n, and to
compute the second sum, we use the reciprocity relation for theta functions. We
then apply our results to test the existence of many short integer vectors in a
given subspace V ⊂ R^n, to estimate the distance from a given
point to a lattice, and to sample a random lattice point from the discrete
Gaussian distribution.
Comment: 29 pages, various improvements
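In one dimension, the reciprocity relation for theta functions is the classical Poisson summation identity, which trades a slowly converging sum for a rapidly converging one. The following minimal numerical check (a sketch, not the paper's FPRAS) illustrates it:

```python
import math

def theta_direct(c, N=60):
    # Truncated theta sum: sum over x in Z of exp(-c * x^2).
    return sum(math.exp(-c * x * x) for x in range(-N, N + 1))

def theta_reciprocal(c, N=60):
    # Poisson summation / theta reciprocity in one dimension:
    #   sum_n exp(-c n^2) = sqrt(pi/c) * sum_k exp(-pi^2 k^2 / c).
    # The right-hand side converges fast exactly when c is small,
    # which is when the left-hand side converges slowly.
    return math.sqrt(math.pi / c) * sum(
        math.exp(-math.pi ** 2 * k * k / c) for k in range(-N, N + 1)
    )
```

For c = 0.01 the direct sum needs thousands of non-negligible terms in higher dimensions, while the reciprocal side is dominated by the single k = 0 term sqrt(pi/c); both truncations above agree to within floating-point accuracy.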
Dimension reduction techniques for the minimization of theta functions on lattices
We consider the minimization of theta functions
θ_Λ(α) = Σ_{p ∈ Λ} e^{-πα|p|²} amongst
lattices Λ ⊂ R^d, by reducing the dimension of the
problem, following as a motivation the case d = 3, where minimizers are
conjectured to be either the BCC or the FCC lattices. A first way to reduce
dimension is by considering layered lattices, and minimizing either among
competitors presenting different sequences of repetitions of the layers, or
among competitors presenting different shifts of the layers with respect to
each other. The second case raises the problem of minimizing theta functions
also on translated lattices, namely minimizing θ_{Λ+t}(α) over
translates Λ + t. Another way to reduce dimension is by considering
lattices with a product structure or by successively minimizing over concentric
layers. The first direction leads to the question of minimization amongst
orthorhombic lattices, whereas the second is relevant for asymptotic
questions, which we study in detail in two dimensions.
Comment: 45 pages, 7 figures
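As a concrete two-dimensional data point for this minimization problem, one can numerically compare the theta functions of the two classical unit-density lattices; by Montgomery's theorem, the triangular (hexagonal) lattice gives the smaller value for every α > 0. A brute-force truncated sum (a sketch, assuming the unit-density normalization):

```python
import math

def lattice_theta(b1, b2, alpha, N=30):
    """Truncated lattice theta function
    theta_L(alpha) = sum_{p in L} exp(-pi * alpha * |p|^2)
    for the 2D lattice spanned by basis vectors b1, b2."""
    total = 0.0
    for i in range(-N, N + 1):
        for j in range(-N, N + 1):
            x = i * b1[0] + j * b2[0]
            y = i * b1[1] + j * b2[1]
            total += math.exp(-math.pi * alpha * (x * x + y * y))
    return total

# Unit-density square lattice Z^2.
theta_square = lattice_theta((1.0, 0.0), (0.0, 1.0), alpha=1.0)

# Triangular (hexagonal) lattice rescaled to covolume 1: the basis
# (1, 0), (1/2, sqrt(3)/2) has covolume sqrt(3)/2, so we scale each
# basis vector by (2/sqrt(3))^(1/2).
c = (2.0 / math.sqrt(3.0)) ** 0.5
theta_tri = lattice_theta((c, 0.0), (c / 2.0, c * math.sqrt(3.0) / 2.0),
                          alpha=1.0)
```

At α = 1 the truncated values come out near 1.159 for the triangular lattice versus 1.180 for the square lattice, consistent with the triangular lattice being the minimizer.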
On computing high-dimensional Riemann theta functions
Riemann theta functions play a crucial role in the field of nonlinear Fourier analysis, where they are used to realize inverse nonlinear Fourier transforms for periodic signals. The practical applicability of this approach has however been limited, since Riemann theta functions are multi-dimensional Fourier series whose computation suffers from the curse of dimensionality. In this paper, we investigate several new approaches to compute Riemann theta functions with the goal of unlocking their practical potential. Our first contributions are novel theoretical lower and upper bounds on the series truncation error. These bounds allow us to rule out several of the existing approaches for the high-dimension regime. We then propose to consider low-rank tensor and hyperbolic cross based techniques. We first examine a tensor-train based algorithm which utilizes the popular scaling and squaring approach. We show theoretically that this approach cannot break the curse of dimensionality. Finally, we investigate two other tensor-train based methods numerically and compare them to hyperbolic cross based methods. Using finite-genus solutions of the Korteweg–de Vries (KdV) and nonlinear Schrödinger (NLS) equations, we demonstrate the accuracy of the proposed algorithms. The tensor-train based algorithms are shown to work well for low-genus solutions with real arguments but are limited by memory for higher genera. The hyperbolic cross based algorithm also achieves high accuracy for low-genus solutions. Its novelty is the ability to feasibly compute moderately accurate solutions (a relative error of magnitude 0.01) for high dimensions (up to 60). It therefore enables the computation of complex inverse nonlinear Fourier transforms that were so far out of reach.
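To make the curse of dimensionality concrete: the Riemann theta function is the series θ(z | Ω) = Σ_{n ∈ Z^g} exp(2πi(½ nᵀΩn + nᵀz)), where Ω is a symmetric g × g matrix with positive definite imaginary part. The naive truncation below (a sketch, only feasible for small genus g) visits (2N+1)^g index vectors, which is exactly the cost that the tensor-train and hyperbolic cross methods aim to avoid:

```python
import cmath
import itertools

def riemann_theta(z, Omega, N=8):
    """Naively truncated Riemann theta function
        theta(z | Omega) = sum_{n in Z^g}
                           exp(2*pi*i*(0.5 * n^T Omega n + n^T z)),
    where Omega is a symmetric g x g matrix with positive definite
    imaginary part (which makes the series converge). The loop visits
    (2N+1)^g index vectors: the curse of dimensionality."""
    g = len(z)
    total = 0.0 + 0.0j
    for n in itertools.product(range(-N, N + 1), repeat=g):
        quad = sum(n[i] * Omega[i][j] * n[j]
                   for i in range(g) for j in range(g))
        lin = sum(n[i] * z[i] for i in range(g))
        total += cmath.exp(2j * cmath.pi * (0.5 * quad + lin))
    return total

# Genus 1, Omega = [[i]], z = 0 reduces to the classical real value
# sum_n exp(-pi * n^2).
val = riemann_theta([0.0], [[1j]])
```

Already at g = 10 and N = 8 the loop would run through 17^10 ≈ 2·10^12 terms, which is why the truncation-error bounds and low-rank approximations studied in the paper matter.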