
    On tail decay and moment estimates of a condition number for random linear conic systems

    In this paper we study the distribution tails and the moments of a condition number which arises in the study of homogeneous systems of linear inequalities. We consider the case where this system is defined by a Gaussian random matrix and characterise the exact decay rates of the distribution tails, improve the existing moment estimates, and prove various limit theorems for large-scale systems. Our results are of complexity-theoretic interest, because interior-point methods and relaxation methods for the solution of systems of linear inequalities have running times that are bounded in terms of the logarithm and the square of the condition number, respectively.

    Felipe Cucker has been substantially funded by a grant from the Research Grants Council of the Hong Kong SAR (project number CityU 1085/02P). Raphael Hauser has been supported by Felipe Cucker's grant from the Research Grants Council of the Hong Kong SAR (project number CityU 1085/02P) and through a grant of the Nuffield Foundation under the "Newly Appointed Lecturers" grant scheme (project number NAL/00720/G).
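    As a purely hypothetical illustration of the kind of tail estimate discussed above, the sketch below samples Gaussian random systems and records empirical tail probabilities. The classical spectral condition number is used only as a computable stand-in; the conic condition number studied in the paper is defined through a conic feasibility problem and is not computed here.

```python
# Hypothetical sketch: Monte Carlo estimate of the distribution tails of a
# condition number for Gaussian random input. The spectral condition number
# (ratio of extreme singular values) is only a stand-in for the conic
# condition number of the paper, which is not computed here.
import numpy as np

def tail_estimate(n, m, thresholds, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    kappas = np.empty(trials)
    for t in range(trials):
        A = rng.standard_normal((n, m))          # Gaussian random system
        s = np.linalg.svd(A, compute_uv=False)
        kappas[t] = s[0] / s[-1]                 # stand-in condition number
    # Empirical tail probabilities P(kappa >= tau).
    return {tau: float(np.mean(kappas >= tau)) for tau in thresholds}

if __name__ == "__main__":
    for tau, p in tail_estimate(n=11, m=10, thresholds=[10, 50, 100, 500]).items():
        print(f"P(kappa >= {tau:>3}) ~ {p:.4f}")
```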

    Algebraic Tail Decay of Condition Numbers for Random Conic Systems under a General Family of Input Distributions

    We consider the conic feasibility problem associated with linear homogeneous systems of inequalities. The complexity of iterative algorithms for solving this problem depends on a condition number. When studying the typical behaviour of algorithms under stochastic input one is therefore naturally led to investigate the fatness of the distribution tails of the random condition number that ensues. We study an unprecedentedly general class of probability models for the random input matrix and show that the tails decay at algebraic rates, with an exponent that emerges naturally when applying a theory of uniform absolute continuity which is also developed in this paper.

    Raphael Hauser was supported through grant NAL/00720/G from the Nuffield Foundation and through grant GR/M30975 from the Engineering and Physical Sciences Research Council of the UK. Tobias Müller was partially supported by EPSRC, the Department of Statistics, the Bekker-la-Bastide fonds, Dr Hendrik Muller's Vaderlandsch fonds, and the Prins Bernhard Cultuurfonds.
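    A hypothetical numerical sketch of what algebraic tail decay looks like: if P(C >= t) behaves like c*t^(-alpha) for large t, then log P(C >= t) is roughly affine in log t, and the slope of a least-squares fit estimates -alpha. The samples below follow a Pareto law purely as a stand-in for a random condition number.

```python
# Hypothetical sketch: estimating an algebraic tail exponent from samples.
import numpy as np

rng = np.random.default_rng(1)
alpha_true = 2.0
# Pareto(alpha_true) samples: P(X >= t) = t^(-alpha_true) for t >= 1,
# used here only as a stand-in for a random condition number.
samples = (1.0 - rng.random(100_000)) ** (-1.0 / alpha_true)

ts = np.logspace(0.3, 1.7, 15)                          # thresholds t
tail = np.array([np.mean(samples >= t) for t in ts])    # empirical P(X >= t)
slope, _ = np.polyfit(np.log(ts), np.log(tail), 1)
print(f"estimated tail exponent ~ {-slope:.2f} (true value {alpha_true})")
```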

    Robust Smoothed Analysis of a Condition Number for Linear Programming

    We perform a smoothed analysis of the GCC condition number $C(A)$ of the linear programming feasibility problem $\exists x\in\mathbb{R}^{m+1}\colon Ax < 0$. Suppose that $\bar{A}$ is any matrix with rows $\bar{a}_i$ of Euclidean norm 1 and, independently for all $i$, let $a_i$ be a random perturbation of $\bar{a}_i$ following the uniform distribution in the spherical disk in $S^m$ of angular radius $\arcsin\sigma$ and centered at $\bar{a}_i$. We prove that $E(\ln C(A)) = O(mn/\sigma)$. A similar result was shown for Renegar's condition number and Gaussian perturbations by Dunagan, Spielman, and Teng [arXiv:cs.DS/0302011]. Our result is robust in the sense that it easily extends to radially symmetric probability distributions supported on a spherical disk of radius $\arcsin\sigma$, whose density may even have a singularity at the center of the perturbation. Our proofs combine ideas from a recent paper of Bürgisser, Cucker, and Lotz (Math. Comp. 77, No. 263, 2008) with techniques of Dunagan et al.
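    The perturbation model above can be made concrete with a small sampling sketch. This is a hypothetical illustration only: each row is drawn uniformly from the spherical cap of angular radius arcsin(sigma) around a given unit vector, using plain rejection sampling (exact but slow when sigma is small); the condition number analysis itself is not reproduced.

```python
# Hypothetical sketch of the perturbation model: rows a_i drawn uniformly from
# the spherical cap of angular radius arcsin(sigma) on S^m centred at abar_i.
import numpy as np

def sample_cap(abar, sigma, rng):
    """Uniform sample from {x on S^m : angle(x, abar) <= arcsin(sigma)} by rejection."""
    abar = abar / np.linalg.norm(abar)
    theta_max = np.arcsin(sigma)
    while True:
        z = rng.standard_normal(abar.shape)
        z /= np.linalg.norm(z)                                # uniform point on S^m
        if np.arccos(np.clip(z @ abar, -1.0, 1.0)) <= theta_max:
            return z

rng = np.random.default_rng(2)
n, m, sigma = 8, 3, 0.5
Abar = rng.standard_normal((n, m + 1))
Abar /= np.linalg.norm(Abar, axis=1, keepdims=True)           # rows of unit norm
A = np.stack([sample_cap(row, sigma, rng) for row in Abar])   # perturbed input matrix
print(A.shape)
```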

    Generic Error Bounds for the Generalized Lasso with Sub-Exponential Data

    This work performs a non-asymptotic analysis of the generalized Lasso under the assumption of sub-exponential data. Our main results continue recent research on the benchmark case of (sub-)Gaussian sample distributions and thereby explore what conclusions remain valid when going beyond. While many statistical features of the generalized Lasso remain unaffected (e.g., consistency), the key difference manifests itself in how the complexity of the hypothesis set is measured. It turns out that the estimation error can be controlled by means of two complexity parameters that arise naturally from a generic-chaining-based proof strategy. The output model can be non-realizable, while the only requirement on the input vector is a generic concentration inequality of Bernstein type, which can be implemented for a variety of sub-exponential distributions. This abstract approach allows us to reproduce, unify, and extend previously known guarantees for the generalized Lasso. In particular, we present applications to semi-parametric output models and phase retrieval via the lifted Lasso. Moreover, our findings are discussed in the context of sparse recovery and high-dimensional estimation problems.
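    As a hypothetical illustration of the setting (not the paper's estimator or proof technique), the sketch below draws sub-exponential Laplace inputs, generates observations through a non-realizable, sign-quantized output model, and fits an l1-regularized Lasso with scikit-learn as a stand-in for the constrained generalized Lasso; under such single-index models the estimate is typically expected to align with the true parameter direction only up to scaling.

```python
# Hypothetical sketch: Lasso with sub-exponential inputs and a non-realizable output model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p, s = 400, 1000, 10
beta = np.zeros(p)
beta[:s] = 1.0 / np.sqrt(s)                               # sparse unit-norm ground truth
X = rng.laplace(size=(n, p))                              # sub-exponential inputs
y = np.sign(X @ beta) + 0.1 * rng.standard_normal(n)      # nonlinear (non-realizable) outputs

coef = Lasso(alpha=0.05).fit(X, y).coef_
cos = coef @ beta / (np.linalg.norm(coef) * np.linalg.norm(beta) + 1e-12)
print(f"cosine similarity with true direction ~ {cos:.2f}")
```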

    Coverage processes on spheres and condition numbers for linear programming

    This paper has two agendas. Firstly, we exhibit new results for coverage processes. Let $p(n,m,\alpha)$ be the probability that $n$ spherical caps of angular radius $\alpha$ in $S^m$ do not cover the whole sphere $S^m$. We give an exact formula for $p(n,m,\alpha)$ in the case $\alpha\in[\pi/2,\pi]$ and an upper bound for $p(n,m,\alpha)$ in the case $\alpha\in[0,\pi/2]$ which tends to $p(n,m,\pi/2)$ when $\alpha\to\pi/2$. In the case $\alpha\in[0,\pi/2]$ this yields upper bounds for the expected number of spherical caps of radius $\alpha$ that are needed to cover $S^m$. Secondly, we study the condition number $\mathscr{C}(A)$ of the linear programming feasibility problem $\exists x\in\mathbb{R}^{m+1}\colon Ax\le 0,\ x\ne 0$, where $A\in\mathbb{R}^{n\times(m+1)}$ is randomly chosen according to the standard normal distribution. We exactly determine the distribution of $\mathscr{C}(A)$ conditioned on $A$ being feasible and provide an upper bound on the distribution function in the infeasible case. Using these results, we show that $\mathbf{E}(\ln\mathscr{C}(A))\le 2\ln(m+1)+3.31$ for all $n>m$, the sharpest known bound for this expectation to date. Both agendas are related through a result which translates between coverage and condition. Published in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/09-AOP489.
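    The link between the two agendas is easiest to see at angular radius pi/2: the caps centred at the rows of A fail to cover S^m exactly when the strict system Ax < 0 is feasible. The sketch below is a hypothetical numerical check of this equivalence, estimating the non-coverage probability by linear programming feasibility tests and comparing it with Wendel's classical formula for this special case.

```python
# Hypothetical sketch: Monte Carlo estimate of p(n, m, pi/2), the probability that
# n half-caps with uniformly random centres do not cover S^m. Non-coverage is
# equivalent to feasibility of Ax < 0, tested here via the rescaled LP Ax <= -1.
import numpy as np
from scipy.optimize import linprog
from scipy.special import comb

def p_not_covered(n, m, trials=2000, seed=4):
    rng = np.random.default_rng(seed)
    fails = 0
    for _ in range(trials):
        A = rng.standard_normal((n, m + 1))
        A /= np.linalg.norm(A, axis=1, keepdims=True)        # cap centres on S^m
        res = linprog(c=np.zeros(m + 1), A_ub=A, b_ub=-np.ones(n),
                      bounds=[(None, None)] * (m + 1), method="highs")
        fails += (res.status == 0)                           # Ax < 0 is feasible
    return fails / trials

n, m = 8, 2
print("Monte Carlo estimate:", p_not_covered(n, m))
print("Wendel's formula:    ", 2.0 ** (1 - n) * sum(comb(n - 1, k) for k in range(m + 1)))
```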

    Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations

    We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs, in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as uncertainty quantification.
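    The following is a hypothetical sketch of the simplest instance of such a reformulation, under assumptions not stated in the abstract: a piecewise-affine loss max_k (a_k' xi + b_k), a 1-Wasserstein ball (Euclidean ground metric) of radius eps around the empirical distribution, and an unbounded support set. In that special case the worst-case expected loss reduces to the sample-average loss plus eps times the largest slope norm.

```python
# Hypothetical sketch of a worst-case expected loss over a 1-Wasserstein ball,
# valid only for piecewise-affine losses and unbounded support:
#   sup_Q E_Q[max_k (a_k' xi + b_k)]
#     = (1/N) sum_i max_k (a_k' xi_i + b_k) + eps * max_k ||a_k||_2
import numpy as np

def worst_case_expected_loss(A, b, samples, eps):
    """A: (K, d) slopes, b: (K,) intercepts, samples: (N, d), eps: Wasserstein radius."""
    empirical = np.mean(np.max(samples @ A.T + b, axis=1))   # sample-average of the max branch
    lipschitz = np.max(np.linalg.norm(A, axis=1))            # Lipschitz constant of the loss
    return empirical + eps * lipschitz

rng = np.random.default_rng(5)
samples = rng.standard_normal((100, 2))                      # finite training dataset
A = np.array([[1.0, 0.5], [-0.3, 2.0]])
b = np.array([0.0, -1.0])
for eps in (0.0, 0.1, 0.5):
    print(f"eps = {eps}: worst-case expected loss ~ {worst_case_expected_loss(A, b, samples, eps):.3f}")
```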