46 research outputs found

    Some Characterizations and Properties of the "Distance to Ill-Posedness" and the Condition Measure of a Conic Linear System

    A conic linear system is a system of the form P: find x that solves b − Ax ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed convex cones, and the data for the system is d = (A, b). This system is "well-posed" to the extent that (small) changes in the data (A, b) do not alter the status of the system (the system remains solvable or not). Intuitively, the more well-posed the system is, the easier it should be to solve the system or to demonstrate its infeasibility via a theorem of the alternative. Renegar defined the "distance to ill-posedness," ρ(d), to be the smallest distance from the data d = (A, b) to data d' = (A', b') for which the system P is "ill-posed," i.e., d' = (A', b') lies in the intersection of the closures of the sets of feasible and infeasible instances of P. Renegar also defined the "condition measure" of the data instance d as C(d) = ||d||/ρ(d), and showed that this measure is a natural extension of the familiar condition measure associated with systems of linear equations. This study presents two categories of results related to ρ(d), the distance to ill-posedness, and C(d), the condition measure of d. The first category of results involves the approximation of ρ(d) as the optimal value of certain mathematical programs. We present ten different mathematical programs, each of whose optimal values provides an approximation of ρ(d) to within certain constant factors, depending on whether P is feasible or not. The second category of results involves the existence of certain inscribed and intersecting balls involving the feasible region of P or the feasible region of its alternative system, in the spirit of the ellipsoid algorithm. These results roughly state that the feasible region of P (or of its alternative system when P is not feasible) will contain a ball of radius r that is itself no more than a distance R from the origin, where the ratio R/r satisfies R/r ≤ O(C(d)) and R ≤ O(n C(d)), where n is the dimension of the feasible region. Therefore the condition measure C(d) is a relevant tool in proving the existence of an inscribed ball in the feasible region of P that is not too far from the origin and whose radius is not too small.
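Renegar's C(d) = ||d||/ρ(d) generalizes the classical condition number κ(A) = ||A||·||A⁻¹|| of a square linear system. A minimal numerical sketch of that familiar special case (the matrix below is purely illustrative, not from the paper):

```python
import numpy as np

# Illustrative 2x2 system: the familiar condition measure
# kappa(A) = ||A|| * ||A^-1|| that C(d) = ||d||/rho(d) extends
# to conic linear systems.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# numpy computes the same quantity directly as np.linalg.cond(A, 2).
```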

    Pre-Conditioners and Relations between Different Measures of Conditioning for Conic Linear Systems

    In recent years, new and powerful research into "condition numbers" for convex optimization has been developed, aimed at capturing the intuitive notion of problem behavior. This research has been shown to be important in studying the efficiency of algorithms, including interior-point algorithms, for convex optimization, as well as other behavioral characteristics of these problems such as problem geometry, deformation under data perturbation, etc. This paper studies measures of conditioning for a conic linear system of the form (FPd): Ax = b, x ∈ C_X, whose data is d = (A, b). We present a new measure of conditioning, denoted μ_d, and we show implications of μ_d for problem geometry and algorithm complexity, and demonstrate that the value of μ_d is independent of the specific data representation of (FPd). We then prove certain relations among a variety of condition measures for (FPd), including μ_d, σ_d, χ_d, and C(d). We discuss some drawbacks of using the condition number C(d) as the sole measure of conditioning of a conic linear system, and we then introduce the notion of a "pre-conditioner" for (FPd), which results in an equivalent formulation (FPd̄) of (FPd) with a better condition number C(d̄). We characterize the best such pre-conditioner and provide an algorithm for constructing an equivalent data instance d̄ whose condition number C(d̄) is within a known factor of the best possible.
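The idea of pre-conditioning to improve a condition number can be made concrete in the familiar matrix setting. The sketch below uses a generic Jacobi-style diagonal scaling on an illustrative matrix; it is not the paper's characterization of the best pre-conditioner, only the basic phenomenon that an equivalent reformulation can be much better conditioned:

```python
import numpy as np

# An ill-conditioned symmetric matrix (illustrative data).
A = np.array([[1.0e6, 1.0],
              [1.0,   1.0]])

# Jacobi-style diagonal pre-conditioner: scale by D^{-1/2} on both sides,
# where D = diag(A).  B represents an equivalent system.
d_inv_sqrt = 1.0 / np.sqrt(np.diag(A))
B = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# cond(A) is about 1e6, while cond(B) is close to 1.
```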

    Condition-Measure Bounds on the Behavior of the Central Trajectory of a Semi-Definite Program

    We present bounds on various quantities of interest regarding the central trajectory of a semi-definite program (SDP), where the bounds are functions of Renegar's condition number C(d) and other naturally-occurring quantities such as the dimensions n and m. The condition number C(d) is defined in terms of the data instance d = (A, b, C) for SDP; it is the inverse of a relative measure of the distance of the data instance to the set of ill-posed data instances, that is, data instances for which arbitrarily small perturbations can make the corresponding SDP either feasible or infeasible. We provide upper and lower bounds on the solutions along the central trajectory, and upper bounds on changes in solutions and objective function values along the central trajectory when the data instance is perturbed and/or when the path parameter defining the central trajectory is changed. Based on these bounds, we prove that the solutions along the central trajectory grow at most linearly and at a rate proportional to the inverse of the distance to ill-posedness, and grow at least linearly and at a rate proportional to the inverse of C(d)^2, as the trajectory approaches an optimal solution to the SDP. Furthermore, the change in solutions and in objective function values along the central trajectory is at most linear in the size of the changes in the data. All such bounds involve polynomial functions of C(d), the size of the data, the distance to ill-posedness of the data, and the dimensions n and m of the SDP.
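The central trajectory can be made concrete on a toy one-dimensional problem, min x subject to 0 ≤ x ≤ 1 (a hypothetical example chosen for illustration, not from the paper): the log-barrier minimizer x(μ) has a closed form, and it approaches the optimum x* = 0 linearly in the path parameter μ, echoing the linear growth bounds above.

```python
import numpy as np

def central_path(mu):
    """Minimizer of x - mu*(log(x) + log(1 - x)) on (0, 1).

    Setting the derivative 1 - mu/x + mu/(1 - x) to zero gives the
    quadratic x^2 - (1 + 2*mu)*x + mu = 0; we take the root in (0, 1).
    """
    b = 1.0 + 2.0 * mu
    return (b - np.sqrt(b * b - 4.0 * mu)) / 2.0

mus = [1.0, 1e-2, 1e-4, 1e-6]
xs = [central_path(mu) for mu in mus]
# x(mu) decreases toward x* = 0 as mu -> 0, and x(mu) ~ mu for small mu.
```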

    An Efficient Re-Scaled Perceptron Algorithm for Conic Systems

    The classical perceptron algorithm is an elementary row-action/relaxation algorithm for solving a homogeneous linear inequality system Ax > 0. A natural condition measure associated with this algorithm is the Euclidean width τ of the cone of feasible solutions, and the iteration complexity of the perceptron algorithm is bounded by 1/τ^2, see Rosenblatt 1962. Dunagan and Vempala have developed a re-scaled version of the perceptron algorithm with an improved complexity of O(n ln(1/τ)) iterations (with high probability), which is theoretically efficient in τ, and in particular is polynomial-time in the bit-length model. We explore extensions of the concepts of these perceptron methods to the general homogeneous conic system Ax ∈ int K, where K is a regular convex cone. We provide a conic extension of the re-scaled perceptron algorithm based on the notion of a deep-separation oracle of a cone, which essentially computes a certificate of strong separation. We give a general condition under which the re-scaled perceptron algorithm is itself theoretically efficient; this includes the cases when K is the cross-product of half-spaces, second-order cones, and the positive semi-definite cone.
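The classical perceptron iteration that these methods build on can be sketched in a few lines. This is a generic textbook implementation with row normalization; the instance below is illustrative, constructed so that every row has a guaranteed margin along a common feasible direction:

```python
import numpy as np

def perceptron(A, max_iters=10000):
    """Classical row-action perceptron for the homogeneous system A x > 0.

    Repeatedly pick a violated row (a_i . x <= 0) and add its unit vector
    to x; if the feasible cone has Euclidean width tau, this terminates
    within O(1/tau^2) iterations.
    """
    rows = A / np.linalg.norm(A, axis=1, keepdims=True)  # normalize rows
    x = np.zeros(A.shape[1])
    for _ in range(max_iters):
        violated = np.flatnonzero(rows @ x <= 0)
        if violated.size == 0:
            return x                    # strictly feasible: A x > 0
        x = x + rows[violated[0]]
    raise RuntimeError("no strictly feasible point found in max_iters steps")

# Illustrative instance: random rows tilted to share a feasible direction u.
rng = np.random.default_rng(0)
u = np.ones(3) / np.sqrt(3.0)
G = rng.normal(size=(20, 3))
A = G * np.sign(G @ u)[:, None] + 0.3 * u  # each row has margin >= 0.3 along u
x = perceptron(A)
```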

    Computational Complexity versus Statistical Performance on Sparse Recovery Problems

    We show that several classical quantities controlling compressed sensing performance directly match classical parameters controlling algorithmic complexity. We first describe linearly convergent restart schemes on first-order methods solving a broad range of compressed sensing problems, where sharpness at the optimum controls convergence speed. We show that for sparse recovery problems, this sharpness can be written as a condition number, given by the ratio between true signal sparsity and the largest signal size that can be recovered by the observation matrix. In a similar vein, Renegar's condition number is a data-driven complexity measure for convex programs, generalizing classical condition numbers for linear systems. We show that for a broad class of compressed sensing problems, the worst-case value of this algorithmic complexity measure taken over all signals matches the restricted singular value of the observation matrix, which controls robust recovery performance. Overall, this means in both cases that, in compressed sensing problems, a single parameter directly controls both computational complexity and recovery performance. Numerical experiments illustrate these points using several classical algorithms. Comment: final version, to appear in Information and Inference.
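The first-order methods in question can be illustrated with a minimal proximal-gradient (ISTA) sketch for the LASSO formulation of sparse recovery. This is a generic textbook method on illustrative data, not the paper's restart scheme; the restart schemes described above wrap accelerated variants of exactly this kind of iteration:

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L   # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Illustrative instance: 3-sparse signal, 30 Gaussian measurements.
rng = np.random.default_rng(1)
n, m, lam = 50, 30, 1e-3
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.0, -1.0, 2.0]
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = ista(A, b, lam)
```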

    Condition-based complexity of convex optimization in conic linear form via the ellipsoid algorithm

    "September 1997." Includes bibliographical references (p. 28-29). By R.M. Freund and J.R. Vera.

    Condition measures and properties of the central trajectory of a linear program

    Includes bibliographical references (p. 37-39). By M.A. Nunez and R.M. Freund.