
    A Cubic Algorithm for Computing Gaussian Volume

    We present randomized algorithms for sampling the standard Gaussian distribution restricted to a convex set and for estimating the Gaussian measure of a convex set, in the general membership oracle model. The complexity of integration is O^*(n^3), while the complexity of sampling is O^*(n^3) for the first sample and O^*(n^2) for every subsequent sample. These bounds improve on the corresponding state of the art by a factor of n. Our improvement comes from several aspects: better isoperimetry, smoother annealing, avoiding the transformation to isotropic position, and the use of the "speedy walk" in the analysis. (23 pages.)
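
    As a point of reference for the membership oracle model used above, here is a minimal sketch of the naive Monte Carlo estimator for the Gaussian measure of a convex body. It is not the paper's O^*(n^3) annealing algorithm, and it is only accurate when the measure is not very small; the example body (a Euclidean ball) is an assumption for illustration.

    # Naive baseline: estimate P[X in K] for X ~ N(0, I_n) given only a membership
    # oracle for K. A sketch for illustration, not the paper's annealing scheme.
    import numpy as np

    def gaussian_measure(membership_oracle, n, num_samples=100_000, rng=None):
        """Fraction of standard Gaussian samples in R^n that land inside K."""
        rng = np.random.default_rng(rng)
        hits = sum(bool(membership_oracle(rng.standard_normal(n)))
                   for _ in range(num_samples))
        return hits / num_samples

    if __name__ == "__main__":
        n = 10
        ball = lambda x: float(x @ x) <= n   # assumed example body: ball of radius sqrt(n)
        print(gaussian_measure(ball, n, rng=0))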

    Towards a Constructive Version of Banaszczyk's Vector Balancing Theorem

    An important theorem of Banaszczyk (Random Structures & Algorithms 1998) states that for any sequence of vectors of l_2 norm at most 1/5 and any convex body K of Gaussian measure 1/2 in R^n, there exists a signed combination of these vectors which lands inside K. A major open problem is to devise a constructive version of Banaszczyk's vector balancing theorem, i.e. to find an efficient algorithm which constructs the signed combination. We make progress towards this goal along several fronts. As our first contribution, we show an equivalence between Banaszczyk's theorem and the existence of O(1)-subgaussian distributions over signed combinations. For the case of symmetric convex bodies, our equivalence implies the existence of a universal signing algorithm (i.e. independent of the body), which simply samples from the subgaussian sign distribution and checks to see if the associated combination lands inside the body. For asymmetric convex bodies, we provide a novel recentering procedure, which allows us to reduce to the case where the body is symmetric. As our second main contribution, we show that the above framework can be efficiently implemented when the vectors have length O(1/sqrt{log n}), recovering Banaszczyk's results under this stronger assumption. More precisely, we use random walk techniques to produce the required O(1)-subgaussian signing distributions when the vectors have length O(1/sqrt{log n}), and use a stochastic gradient ascent method to implement the recentering procedure for asymmetric bodies.
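
    To make the universal signing idea concrete, the following sketch repeatedly samples a sign vector and tests whether the signed combination lands in the body. Uniform random signs stand in here for the O(1)-subgaussian distribution constructed in the paper, and the body (an l_infinity ball) is an assumed example, not taken from the abstract.

    # Sketch of the "sample signs, then test membership" strategy described above.
    # Uniform signs are a stand-in for the paper's O(1)-subgaussian distribution.
    import numpy as np

    def universal_signing(vectors, in_body, max_tries=10_000, rng=None):
        """vectors: (m, n) array with rows of l_2 norm <= 1/5.
        in_body: membership test for the convex body K.
        Returns eps in {-1, +1}^m with eps @ vectors in K, or None on failure."""
        rng = np.random.default_rng(rng)
        for _ in range(max_tries):
            eps = rng.choice([-1.0, 1.0], size=len(vectors))
            if in_body(eps @ vectors):
                return eps
        return None

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        V = rng.standard_normal((50, 20))
        V *= 0.2 / np.linalg.norm(V, axis=1, keepdims=True)   # rows of norm 1/5
        K = lambda y: np.max(np.abs(y)) <= 1.0                # example symmetric body (assumed)
        print(universal_signing(V, K, rng=1))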

    Practical Volume Computation of Structured Convex Bodies, and an Application to Modeling Portfolio Dependencies and Financial Crises

    We examine volume computation of general-dimensional polytopes and more general convex bodies, defined by intersecting a simplex with a family of parallel hyperplanes and with either a second family of parallel hyperplanes or a family of concentric ellipsoids. Such convex bodies appear in modeling and predicting financial crises. The impact of crises on the economy (labor, income, etc.) makes their detection of prime interest for the public in general and for policy makers in particular. Certain features of dependencies in the markets clearly identify times of turmoil. We describe the relationship between asset characteristics by means of a copula; each characteristic is either a linear or a quadratic form of the portfolio components, hence the copula can be constructed by computing volumes of convex bodies. We design and implement practical algorithms in the exact and approximate settings, compare them experimentally, and study the tradeoff between exactness or accuracy and speed. We analyze the following methods in order of increasing generality: rejection sampling, relying on uniform sampling of the simplex, which is the fastest approach but inaccurate for small volumes; exact formulae based on the computation of integrals of probability distribution functions, which are the method of choice for intersections with a single hyperplane; an optimized Lawrence sign-decomposition method, since the polytopes at hand are shown to be simple with additional structure; and Markov chain Monte Carlo algorithms using random walks based on the hit-and-run paradigm, generalized to nonlinear convex bodies and relying on new methods for computing a ball enclosed in the given body, such as a second-order cone program; the latter approach is experimentally extended to non-convex bodies, with very encouraging results. Our C++ software, based on CGAL and Eigen and available on GitHub, is shown to be very effective in up to 100 dimensions. Our results offer novel, effective means of computing portfolio dependencies and an indicator of financial crises, which is shown to correctly identify past crises.
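
    The rejection-sampling method mentioned above admits a very short sketch: draw points uniformly from the unit simplex and keep those satisfying the extra hyperplane constraints, so that the acceptance ratio estimates the volume of the body relative to the simplex. The constraint values below are illustrative assumptions, not the paper's data, and, as the abstract notes, the estimate degrades when the true fraction is small.

    # Rejection sampling on the simplex {x >= 0, sum x = 1}: the accepted fraction
    # estimates the relative volume of the body cut out by the extra constraints.
    import numpy as np

    def relative_volume(n, constraints, num_samples=200_000, rng=None):
        """constraints: predicates f(x) -> bool, e.g. linear cuts c @ x <= t."""
        rng = np.random.default_rng(rng)
        pts = rng.dirichlet(np.ones(n), size=num_samples)   # uniform on the simplex
        keep = np.ones(num_samples, dtype=bool)
        for f in constraints:
            keep &= np.array([bool(f(p)) for p in pts])
        return keep.mean()

    if __name__ == "__main__":
        n = 5
        c = np.linspace(0.0, 1.0, n)                         # assumed linear form
        cuts = [lambda x: c @ x <= 0.6, lambda x: c @ x >= 0.3]
        print(relative_volume(n, cuts, rng=0))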

    An Almost Constant Lower Bound of the Isoperimetric Coefficient in the KLS Conjecture

    We prove an almost constant lower bound on the isoperimetric coefficient in the KLS conjecture. The lower bound has dimension dependency d^{-o_d(1)}. When the dimension is large enough, our lower bound is tighter than the previous best bound, which has dimension dependency d^{-1/4}. Improving the current best lower bound on the isoperimetric coefficient in the KLS conjecture has many implications, including improvements of the current best bounds in Bourgain's slicing conjecture and in the thin-shell conjecture, better concentration inequalities for Lipschitz functions of log-concave measures, and better mixing time bounds for MCMC sampling algorithms on log-concave measures. (25 pages, 1 figure; accepted to the GAFA journal.)
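
    For context, the isoperimetric coefficient in question is standardly defined as follows (the notation below is our assumption, not taken from the abstract): for an isotropic log-concave probability measure \mu on R^d,

        \psi_\mu \;=\; \inf_{S \subseteq \mathbb{R}^d}
          \frac{\mu^+(\partial S)}{\min\{\mu(S),\, 1 - \mu(S)\}},
        \qquad
        \mu^+(\partial S) \;=\; \liminf_{\varepsilon \to 0^+}
          \frac{\mu(S_\varepsilon) - \mu(S)}{\varepsilon},

    where S_\varepsilon is the \varepsilon-neighborhood of S. The KLS conjecture asserts \psi_\mu \geq c for a universal constant c; the result above gives the lower bound \psi_\mu \geq d^{-o_d(1)}.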

    Fast MCMC sampling algorithms on polytopes

    We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope. Both random walks are sampling algorithms derived from interior point methods. The former is based on the volumetric-logarithmic barrier introduced by Vaidya, whereas the latter uses John's ellipsoids. We show that the Vaidya walk mixes in significantly fewer steps than the logarithmic-barrier-based Dikin walk studied in past work. For a polytope in R^d defined by n > d linear constraints, we show that the mixing time from a warm start is bounded as O(n^{0.5} d^{1.5}), compared to the O(nd) mixing time bound for the Dikin walk. The cost of each step of the Vaidya walk is of the same order as that of the Dikin walk, and at most twice as large in terms of constant pre-factors. For the John walk, we prove an O(d^{2.5} log^4(n/d)) bound on its mixing time and conjecture that an improved variant of it could achieve a mixing time of O(d^2 polylog(n/d)). Additionally, we propose variants of the Vaidya and John walks that mix in polynomial time from a deterministic starting point. The speed-up of the Vaidya walk over the Dikin walk is illustrated in numerical examples. (86 pages, 9 figures; the first two authors contributed equally.)
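
    For readers who want something executable, here is a hedged sketch of the Dikin walk that the abstract uses as its baseline, in its standard form with a uniform proposal in the Dikin ellipsoid and a Metropolis filter; the Vaidya and John walks replace the log-barrier Hessian below with volumetric-barrier or John-ellipsoid weights. The step radius and the example polytope are assumptions for illustration, not the paper's parameters.

    # Dikin walk sketch: propose uniformly in the Dikin ellipsoid E_x(r) of the
    # log-barrier at x, then apply a Metropolis filter so the stationary
    # distribution is uniform over the polytope P = {x : A x <= b}.
    import numpy as np

    def dikin_walk(A, b, x0, num_steps=1_000, r=0.5, rng=None):
        """x0 must be strictly feasible; returns the chain as an array of iterates."""
        rng = np.random.default_rng(rng)
        d = A.shape[1]

        def hessian(x):
            s = b - A @ x                          # slacks, positive inside P
            return (A / s[:, None] ** 2).T @ A     # H(x) = sum_i a_i a_i^T / s_i^2

        def in_ellipsoid(center, H, y):
            diff = y - center
            return diff @ H @ diff <= r ** 2

        x, Hx = x0.copy(), hessian(x0)
        samples = []
        for _ in range(num_steps):
            # uniform point in E_x(r): y = x + r * L^{-T} w, w uniform in the unit ball
            L = np.linalg.cholesky(Hx)
            g = rng.standard_normal(d)
            w = g / np.linalg.norm(g) * rng.uniform() ** (1.0 / d)
            y = x + r * np.linalg.solve(L.T, w)
            if np.all(A @ y < b):                  # stay strictly feasible
                Hy = hessian(y)
                if in_ellipsoid(y, Hy, x):         # reversibility: x must lie in E_y(r)
                    # vol(E_x)/vol(E_y) = sqrt(det H(y) / det H(x))
                    log_ratio = 0.5 * (np.linalg.slogdet(Hy)[1] - np.linalg.slogdet(Hx)[1])
                    if np.log(rng.uniform()) < min(0.0, log_ratio):
                        x, Hx = y, Hy
            samples.append(x.copy())
        return np.array(samples)

    if __name__ == "__main__":
        # example polytope (assumed): the unit hypercube [0, 1]^3
        d = 3
        A = np.vstack([np.eye(d), -np.eye(d)])
        b = np.concatenate([np.ones(d), np.zeros(d)])
        chain = dikin_walk(A, b, x0=np.full(d, 0.5), rng=0)
        print(chain.mean(axis=0))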