29 research outputs found

    Near-Optimal Deterministic Algorithms for Volume Computation and Lattice Problems via M-Ellipsoids

    We give a deterministic 2^{O(n)} algorithm for computing an M-ellipsoid of a convex body, matching a known lower bound. This has several interesting consequences, including improved deterministic algorithms for volume estimation of convex bodies and for the shortest and closest lattice vector problems under general norms.
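
    For reference, the M-ellipsoid notion above can be stated via covering numbers; this is the standard formulation rather than anything specific to this paper. Writing N(A, B) = \min\{|T| : A \subseteq \bigcup_{t \in T}(t + B)\} for the minimum number of translates of B needed to cover A, an ellipsoid E is an M-ellipsoid of a convex body K \subseteq \mathbb{R}^n if N(K, E) \le 2^{O(n)} and N(E, K) \le 2^{O(n)}, i.e., E and K cover each other using at most singly-exponentially many translates.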

    Deterministic Construction of an Approximate M-Ellipsoid and its Application to Derandomizing Lattice Algorithms

    We give a deterministic O(log n)^n algorithm for the Shortest Vector Problem (SVP) of a lattice under any norm, improving on the previous best deterministic bound of n^{O(n)} for general norms and nearly matching the bound of 2^{O(n)} for the standard Euclidean norm established by Micciancio and Voulgaris (STOC 2010). Our algorithm can be viewed as a derandomization of the AKS randomized sieve algorithm, which can be used to solve SVP for any norm in 2^{O(n)} time with high probability. We use the technique of covering a convex body by ellipsoids, as introduced for lattice problems in (Dadush et al., FOCS 2011). Our main contribution is a deterministic approximation of an M-ellipsoid of any convex body. We achieve this via a convex programming formulation of the optimal ellipsoid, with the objective function being an n-dimensional integral that we show can be approximated deterministically, a technique that appears to be of independent interest.
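
    To see schematically how such a covering is used (a sketch of the standard reduction, not of the paper's exact procedure): if K is the unit ball of the given norm and E is an M-ellipsoid of K, then for any radius r the lattice points of norm at most r satisfy \mathcal{L} \cap rK \subseteq \bigcup_{i=1}^{N}(x_i + rE) with N \le 2^{O(n)}, so they can be enumerated by running a Euclidean-style enumeration inside each of the 2^{O(n)} translated copies of rE.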

    Randomized Rounding for the Largest Simplex Problem

    The maximum volume j-simplex problem asks to compute the j-dimensional simplex of maximum volume inside the convex hull of a given set of n points in \mathbb{Q}^d. We give a deterministic approximation algorithm for this problem which achieves an approximation ratio of e^{j/2 + o(j)}. The problem is known to be NP-hard to approximate within a factor of c^j for some constant c > 1. Our algorithm also gives a factor e^{j + o(j)} approximation for the problem of finding the principal j \times j submatrix of a rank-d positive semidefinite matrix with the largest determinant. We achieve our approximation by rounding solutions to a generalization of the D-optimal design problem, or, equivalently, the dual of an appropriate smallest enclosing ellipsoid problem. Our arguments give a short and simple proof of a restricted invertibility principle for determinants.
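
    To make the objective concrete (this illustrates only the quantity being maximized, not the paper's rounding algorithm; all names below are illustrative), the volume of a j-simplex is a normalized determinant of its edge vectors, which the following numpy sketch computes together with an exhaustive baseline over all (j+1)-point subsets:

```python
# Illustrative sketch: the maximum-volume j-simplex objective with an
# exhaustive baseline; this is NOT the paper's approximation algorithm.
from itertools import combinations
from math import factorial

import numpy as np

def simplex_volume(vertices: np.ndarray) -> float:
    """Volume of the j-simplex whose j+1 vertices are the rows of `vertices`."""
    edges = vertices[1:] - vertices[0]               # j x d matrix of edge vectors
    gram = edges @ edges.T                           # Gram matrix handles any d >= j
    j = edges.shape[0]
    return float(np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(j))

def best_simplex_bruteforce(points: np.ndarray, j: int):
    """Exhaustive baseline: the maximum-volume j-simplex over all (j+1)-subsets."""
    best_vol, best_idx = 0.0, None
    for idx in combinations(range(len(points)), j + 1):
        vol = simplex_volume(points[list(idx)])
        if vol > best_vol:
            best_vol, best_idx = vol, idx
    return best_vol, best_idx

# Example: best 3-simplex among 30 random points in dimension 4.
pts = np.random.default_rng(0).random((30, 4))
print(best_simplex_bruteforce(pts, 3))
```

    The paper's algorithm avoids this exhaustive search by instead rounding a solution to the generalization of the D-optimal design problem described in the abstract.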

    Lattice sparsification and the Approximate Closest Vector Problem

    We give a deterministic algorithm for solving the (1+\epsilon)-approximate Closest Vector Problem (CVP) on any n-dimensional lattice and in any near-symmetric norm in 2^{O(n)}(1+1/\epsilon)^n time and 2^n \cdot \mathrm{poly}(n) space. Our algorithm builds on the lattice point enumeration techniques of Micciancio and Voulgaris (STOC 2010, SICOMP 2013) and Dadush, Peikert and Vempala (FOCS 2011), and gives an elegant, deterministic alternative to the "AKS Sieve"-based algorithms for (1+\epsilon)-CVP (Ajtai, Kumar, and Sivakumar; STOC 2001 and CCC 2002). Furthermore, assuming the existence of a \mathrm{poly}(n)-space and 2^{O(n)}-time algorithm for exact CVP in the \ell_2 norm, the space complexity of our algorithm can be reduced to polynomial. Our main technical contribution is a method for "sparsifying" any input lattice while approximately maintaining its metric structure. To this end, we use random sublattice restrictions, an idea first employed by Khot (FOCS 2003, J. Comp. Syst. Sci. 2006) to prove hardness of the Shortest Vector Problem (SVP) under \ell_p norms. A preliminary version of this paper appeared in the Proc. 24th Annual ACM-SIAM Symp. on Discrete Algorithms (SODA'13) (http://dx.doi.org/10.1137/1.9781611973105.78).
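
    The random sublattice restriction idea can be sketched concretely (a minimal illustration of the restriction step under simplifying assumptions; it is not the paper's sparsification procedure, and the names below are chosen for illustration). Given a basis B of a lattice and a prime p, one passes to the index-p sublattice of points whose coefficient vector is orthogonal, modulo p, to a random vector z:

```python
# Illustrative sketch of a random sublattice restriction (Khot-style),
# not the sparsification procedure of the paper itself.
import numpy as np

def random_sublattice_basis(B: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """Basis of {B x : x in Z^n, <z, x> = 0 (mod p)} for a random nonzero z in Z_p^n.

    B has the lattice basis vectors as its columns; p should be prime.
    """
    n = B.shape[1]
    rng = np.random.default_rng(seed)
    z = rng.integers(0, p, size=n)
    while not z.any():                               # resample until z != 0 (mod p)
        z = rng.integers(0, p, size=n)
    i = int(np.flatnonzero(z)[0])                    # coordinate with z_i invertible mod p
    z_i_inv = pow(int(z[i]), -1, p)
    # {x in Z^n : <z, x> = 0 mod p} is spanned by p*e_i and, for j != i,
    # e_j - ((z_j * z_i^{-1}) mod p) * e_i; write these as the columns of C.
    C = np.eye(n, dtype=np.int64)
    C[i, i] = p
    for j in range(n):
        if j != i:
            C[i, j] = -int((int(z[j]) * z_i_inv) % p)
    return B @ C                                     # columns form a basis of the sublattice
```

    A fixed lattice point whose coefficient vector is nonzero modulo p survives this restriction with probability roughly 1/p over the choice of z, which is the kind of control the sparsification argument needs while the metric structure is approximately preserved.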

    A polynomial time approximation scheme for computing the supremum of Gaussian processes

    We give a polynomial time approximation scheme (PTAS) for computing the supremum of a Gaussian process. That is, given a finite set of vectors V \subseteq \mathbb{R}^d, we compute a (1+\varepsilon)-factor approximation to \mathbb{E}_{X \leftarrow \mathcal{N}^d}[\sup_{v \in V} |\langle v, X\rangle|] deterministically in time \mathrm{poly}(d) \cdot |V|^{O_\varepsilon(1)}. Previously, only a constant-factor deterministic polynomial time approximation algorithm was known, due to the work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471]. This answers an open question of Lee (2010) and Ding [Ann. Probab. 42 (2014) 464-496]. The study of suprema of Gaussian processes is of considerable importance in probability, with applications in functional analysis, convex geometry, and, in light of the recent breakthrough work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471], random walks on finite graphs. As such, our result could be of use elsewhere. In particular, combined with the work of Ding [Ann. Probab. 42 (2014) 464-496], our result yields a PTAS for computing the cover time of bounded-degree graphs. Previously, such algorithms were known only for trees. Along the way, we also give an explicit oblivious estimator for semi-norms in Gaussian space with optimal query complexity. Our algorithm and its analysis are elementary in nature, using two classical comparison inequalities: Slepian's lemma and Kanter's lemma. Comment: Published at http://dx.doi.org/10.1214/13-AAP997 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
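
    To make the quantity concrete, the expectation being approximated is easy to estimate by naive Monte Carlo sampling; the sketch below (with illustrative names) shows only what is being computed, whereas the point of the paper is to achieve a (1+\varepsilon)-approximation deterministically:

```python
# Monte Carlo illustration of the Gaussian supremum quantity;
# this randomized estimator is NOT the deterministic PTAS of the paper.
import numpy as np

def gaussian_sup_monte_carlo(V: np.ndarray, samples: int = 10_000, seed: int = 0) -> float:
    """Randomized estimate of E_{X ~ N(0, I_d)}[ sup_{v in V} |<v, X>| ].

    V is an (m, d) array whose rows are the vectors of the finite Gaussian process.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((samples, V.shape[1]))    # one Gaussian vector per row
    return float(np.abs(X @ V.T).max(axis=1).mean())  # sup over V, then empirical mean

# Example: for V = I_5, this approximates E[max_i |g_i|] over 5 standard Gaussians.
print(gaussian_sup_monte_carlo(np.eye(5)))
```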

    A parameterized view to the robust recoverable base problem of matroids under structural uncertainty

    We study a robust recoverable version of the matroid base problem in which the uncertainty is imposed on the combinatorial structure rather than on the weights, which is the setting studied in the literature. We prove that the problem is NP-hard even when the given matroid is uniform or graphic. On the other hand, we prove that the problem is fixed-parameter tractable with respect to the number of scenarios.