
    Solving the Closest Vector Problem in $2^n$ Time --- The Discrete Gaussian Strikes Again!

    We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $\widetilde{O}(4^{n})$-time and $\widetilde{O}(2^{n})$-space algorithm of Micciancio and Voulgaris. We achieve our main result in three steps. First, we show how to modify the sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian sampling over lattice shifts, $L - t$, with very low parameters. While the actual algorithm is a natural generalization of [ADRS15], the analysis uses substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for approximate CVP for any approximation factor $\gamma = 1+2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target vector $t$ can be grouped into "lower-dimensional clusters," and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that "behaves well with these clusters." Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP. The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.
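
    As a toy illustration of the distribution at the core of this paper (not of the $2^{n+o(n)}$-time algorithm itself), the following Python sketch samples from a discrete Gaussian over the shifted one-dimensional lattice $\mathbb{Z} - t$ by brute-force enumeration over a truncated support; the function name and the tail-cut parameter are illustrative choices, not taken from the paper.

```python
import math
import random

def discrete_gaussian_shifted(t, s, tail=12):
    """Sample from the discrete Gaussian over Z - t with parameter s:
    support {z - t : z in Z}, Pr[x] proportional to exp(-pi * x^2 / s^2).
    The support is truncated to |x| <= tail * s, which discards only a
    negligible amount of probability mass for moderate `tail`."""
    zmin = math.floor(t - tail * s)
    zmax = math.ceil(t + tail * s)
    support = [z - t for z in range(zmin, zmax + 1)]
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in support]
    return random.choices(support, weights=weights, k=1)[0]

# Draw a few samples from the discrete Gaussian over Z - 0.3 with s = 1.5;
# they concentrate near 0, i.e. near the integer closest to t = 0.3.
samples = [discrete_gaussian_shifted(0.3, 1.5) for _ in range(5)]
print(samples)
```

    In higher dimensions the support can no longer be enumerated explicitly, which is why sampling with very low parameters requires the nontrivial machinery described in the abstract.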

    Computing the theta function

    Let $f: \mathbb{R}^n \longrightarrow \mathbb{R}$ be a positive definite quadratic form and let $y \in \mathbb{R}^n$ be a point. We present a fully polynomial randomized approximation scheme (FPRAS) for computing $\sum_{x \in \mathbb{Z}^n} e^{-f(x)}$, provided the eigenvalues of $f$ lie in the interval roughly between $s$ and $e^{s}$, and for computing $\sum_{x \in \mathbb{Z}^n} e^{-f(x-y)}$, provided the eigenvalues of $f$ lie in the interval roughly between $e^{-s}$ and $s^{-1}$ for some $s \geq 3$. To compute the first sum, we represent it as the integral of an explicit log-concave function on $\mathbb{R}^n$, and to compute the second sum, we use the reciprocity relation for theta functions. We then apply our results to test the existence of many short integer vectors in a given subspace $L \subset \mathbb{R}^n$, to estimate the distance from a given point to a lattice, and to sample a random lattice point from the discrete Gaussian distribution.
    Comment: 29 pages, various improvements.
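
    The quantity being approximated is simple to state but expensive to compute directly. The Python sketch below evaluates it by brute force over a box of integer points, which is exponential in $n$, whereas the paper's FPRAS runs in polynomial time; the function name and truncation radius are assumptions made for this example.

```python
import itertools
import math
import numpy as np

def theta_sum_bruteforce(A, y=None, radius=8):
    """Approximate sum_{x in Z^n} exp(-f(x - y)) for f(x) = x^T A x,
    with A positive definite, by summing over the box |x_i| <= radius.
    The number of terms is (2*radius + 1)^n, so this only works for
    small n; the paper approximates the same sum in polynomial time."""
    n = A.shape[0]
    if y is None:
        y = np.zeros(n)
    total = 0.0
    for x in itertools.product(range(-radius, radius + 1), repeat=n):
        v = np.asarray(x, dtype=float) - y
        total += math.exp(-float(v @ A @ v))
    return total

# Example: a 2x2 positive definite form, summed with and without a shift y.
A = np.array([[1.0, 0.3], [0.3, 2.0]])
print(theta_sum_bruteforce(A))
print(theta_sum_bruteforce(A, y=np.array([0.5, 0.25])))
```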

    Dimension reduction techniques for the minimization of theta functions on lattices

    We consider the minimization of theta functions $\theta_\Lambda(\alpha)=\sum_{p\in\Lambda}e^{-\pi\alpha|p|^2}$ amongst lattices $\Lambda\subset \mathbb{R}^d$, by reducing the dimension of the problem, following as a motivation the case $d=3$, where minimizers are expected to be either the BCC or the FCC lattices. A first way to reduce dimension is by considering layered lattices, and to minimize either among competitors presenting different sequences of repetitions of the layers, or among competitors presenting different shifts of the layers with respect to each other. The second case leads to the problem of minimizing theta functions also on translated lattices, namely minimizing $(L,u)\mapsto \theta_{L+u}(\alpha)$. Another way to reduce dimension is by considering lattices with a product structure, or by successively minimizing over concentric layers. The first direction leads to the question of minimization amongst orthorhombic lattices, whereas the second is relevant for asymptotic questions, which we study in detail in two dimensions.
    Comment: 45 pages, 7 figures.
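
    As a small numerical sketch of the objects being minimized, the Python snippet below evaluates $\theta_\Lambda(\alpha)$ by truncated summation for two unit-density planar lattices: the square lattice and the triangular lattice (the known two-dimensional minimizer, and the planar analogue of the BCC/FCC question above). The basis matrices and truncation radius are illustrative choices, not taken from the paper.

```python
import itertools
import math
import numpy as np

def lattice_theta(basis, alpha, radius=30):
    """Approximate theta_Lambda(alpha) = sum_{p in Lambda} exp(-pi*alpha*|p|^2)
    for the lattice Lambda = basis @ Z^2 (columns of `basis` are the lattice
    vectors), truncating the integer coordinates to |z_i| <= radius; the
    Gaussian decay makes the discarded tail negligible."""
    total = 0.0
    for z in itertools.product(range(-radius, radius + 1), repeat=2):
        p = basis @ np.asarray(z, dtype=float)
        total += math.exp(-math.pi * alpha * float(p @ p))
    return total

# Unit-covolume bases: the square lattice Z^2 and the triangular lattice.
square = np.eye(2)
a = math.sqrt(2.0 / math.sqrt(3.0))  # rescaling so both lattices have density 1
triangular = np.array([[a, a / 2.0], [0.0, a * math.sqrt(3.0) / 2.0]])

# The triangular lattice gives the smaller value of theta at each alpha.
for alpha in (0.5, 1.0, 2.0):
    print(alpha, lattice_theta(square, alpha), lattice_theta(triangular, alpha))
```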

    On computing high-dimensional Riemann theta functions

    Riemann theta functions play a crucial role in the field of nonlinear Fourier analysis, where they are used to realize inverse nonlinear Fourier transforms for periodic signals. The practical applicability of this approach has however been limited, since Riemann theta functions are multi-dimensional Fourier series whose computation suffers from the curse of dimensionality. In this paper, we investigate several new approaches to compute Riemann theta functions with the goal of unlocking their practical potential. Our first contributions are novel theoretical lower and upper bounds on the series truncation error. These bounds allow us to rule out several of the existing approaches for the high-dimension regime. We then propose to consider low-rank tensor and hyperbolic cross based techniques. We first examine a tensor-train based algorithm which utilizes the popular scaling and squaring approach. We show theoretically that this approach cannot break the curse of dimensionality. Finally, we investigate two other tensor-train based methods numerically and compare them to hyperbolic cross based methods. Using finite-genus solutions of the Korteweg–de Vries (KdV) and nonlinear Schrödinger (NLS) equations, we demonstrate the accuracy of the proposed algorithms. The tensor-train based algorithms are shown to work well for low-genus solutions with real arguments but are limited by memory for higher genera. The hyperbolic cross based algorithm also achieves high accuracy for low-genus solutions. Its novelty is the ability to feasibly compute moderately accurate solutions (a relative error on the order of 0.01) in high dimensions (up to 60). It therefore enables the computation of complex inverse nonlinear Fourier transforms that were so far out of reach.
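
    To make the curse of dimensionality concrete, the following Python sketch evaluates the genus-$g$ Riemann theta function by naively truncating its multi-dimensional Fourier series; this is the baseline that tensor-train and hyperbolic cross methods aim to beat, and the period matrix, argument, and truncation radius below are illustrative values, not taken from the paper.

```python
import itertools
import numpy as np

def riemann_theta_naive(z, Omega, radius=10):
    """Naive evaluation of the genus-g Riemann theta function
        theta(z | Omega) = sum_{n in Z^g} exp(pi*i*n^T Omega n + 2*pi*i*n^T z)
    by truncating each summation index to |n_i| <= radius. Omega must be
    symmetric with positive definite imaginary part. The number of terms,
    (2*radius + 1)^g, grows exponentially in the genus g, which is the
    curse of dimensionality discussed in the abstract."""
    g = Omega.shape[0]
    total = 0.0 + 0.0j
    for n in itertools.product(range(-radius, radius + 1), repeat=g):
        n = np.asarray(n, dtype=float)
        total += np.exp(1j * np.pi * (n @ Omega @ n) + 2j * np.pi * (n @ z))
    return total

# Genus-2 example with a diagonally dominant period matrix (illustrative values).
Omega = np.array([[1.0j, 0.1], [0.1, 1.5j]])
z = np.array([0.2 + 0.1j, -0.3 + 0.05j])
print(riemann_theta_naive(z, Omega))
```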