52 research outputs found

    Asymptotic behavior of underlying NT paths in interior point methods for monotone semidefinite linear complementarity problems

    2010-2011 > Academic research: refereed > Publication in refereed journal > Accepted Manuscript > Published

    Asymptotic Behavior of HKM Paths in Interior Point Method for Monotone Semidefinite Linear Complementarity Problem: General Theory

    Abstract An interior point method (IPM) defines a search direction at each interior point of the feasible region. These search directions form a direction field, which in turn defines a system of ordinary differential equations (ODEs). It is therefore natural to define the underlying paths of the IPM as the solutions of this system of ODEs. We then show that if the given semidefinite linear complementarity problem (SDLCP) has a unique solution, the first derivative of its off-central path, as a function of √µ, is bounded. We work under the assumption that the given SDLCP satisfies the strict complementarity condition.
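    The ODE construction the abstract refers to can be sketched as follows (a notational sketch, not the paper's exact formulation; the linear map 𝒜 and matrices X, Y are generic placeholders): the central path is defined by perturbed complementarity, and differentiating along it produces the direction field.

    ```latex
    % SDLCP: find X, Y in the PSD cone S^n_+ with
    %   Y = \mathcal{A}(X) + B, \qquad XY = 0.
    % Central path (perturbed complementarity), parametrized by \mu > 0:
    \[
      X(\mu)\,Y(\mu) = \mu I, \qquad Y(\mu) = \mathcal{A}\bigl(X(\mu)\bigr) + B.
    \]
    % Differentiating with respect to \mu gives the ODE system whose
    % solutions are the underlying paths of the IPM:
    \[
      \dot X(\mu)\,Y(\mu) + X(\mu)\,\dot Y(\mu) = I,
      \qquad
      \dot Y(\mu) = \mathcal{A}\bigl(\dot X(\mu)\bigr).
    \]
    ```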

    Superlinear convergence of an infeasible predictor-corrector path-following interior point algorithm for a semidefinite linear complementarity problem using the Helmberg-Kojima-Monteiro direction

    2010-2011 > Academic research: refereed > Publication in refereed journal > Version of Record > Published

    Conic Optimization: Optimal Partition, Parametric, and Stability Analysis

    A linear conic optimization problem consists of the minimization of a linear objective function over the intersection of an affine space and a closed convex cone. In recent years, linear conic optimization has received significant attention, partly because it can be used to reformulate and approximate intractable optimization problems. Steady advances in computational optimization have enabled us to approximately solve a wide variety of linear conic optimization problems in polynomial time. Nevertheless, preprocessing methods, rounding procedures, and sensitivity analysis tools are still the missing parts of conic optimization solvers. Given the output of a conic optimization solver, we need methodologies to generate approximate complementary solutions or to speed up convergence to an exact optimal solution. A preprocessing method reduces the size of a problem by finding the minimal face of the cone which contains the set of feasible solutions. However, such a preprocessing method assumes knowledge of an exact solution. More importantly, we need robust sensitivity and post-optimal analysis tools for an optimal solution of a linear conic optimization problem. Motivated by the vital importance of linear conic optimization, we take active steps to fill this gap. This thesis is concerned with several aspects of a linear conic optimization problem, from algorithms through solution identification to parametric analysis, which have not been fully addressed in the literature. We specifically focus on three special classes of linear conic optimization problems, namely semidefinite and second-order conic optimization, and their common generalization, symmetric conic optimization. We propose a polynomial time algorithm for symmetric conic optimization problems.
    We show how to approximate/identify the optimal partition of semidefinite optimization and second-order conic optimization, a concept which has its origin in linear optimization. Further, we use the optimal partition information either to generate an approximate optimal solution or to speed up the convergence of a solution identification process to the unique optimal solution of the problem. Finally, we study the parametric analysis of semidefinite and second-order conic optimization problems. We investigate the behavior of the optimal partition and the optimal set mapping under perturbation of the objective function vector.
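    The problem class the abstract describes has the following standard form (a notational sketch; the symbols A, b, c, and 𝒦 are generic, not the thesis's exact notation):

    ```latex
    \[
      \min_{x}\ \langle c, x\rangle
      \quad \text{subject to} \quad Ax = b,\ \ x \in \mathcal{K},
    \]
    % where \mathcal{K} is a closed convex cone. The three classes mentioned
    % in the abstract correspond to three choices of \mathcal{K}:
    %   \mathcal{K} = \mathcal{S}^n_+            (semidefinite optimization),
    %   \mathcal{K} = a product of second-order cones  (second-order conic optimization),
    %   \mathcal{K} = a symmetric cone           (their common generalization).
    ```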

    Error Bounds and Singularity Degree in Semidefinite Programming

    An important process in optimization is determining the quality of a proposed solution. This usually entails calculating the distance of a proposed solution to the optimal set and is referred to as forward error. Since the optimal set is not known, we generally view forward error as intractable. An alternative to forward error is to measure the violation of the constraints or optimality conditions. This is referred to as backward error, and it is generally easy to compute. A major issue in optimization occurs when a proposed solution has small backward error, i.e., looks good to the user, but has large forward error, i.e., is far from the optimal set. In 2001, Jos Sturm developed a remarkable upper bound on forward error for spectrahedra (optimal sets of semidefinite programs) in terms of backward error. His bound creates a hierarchy among spectrahedra based on singularity degree, an integer between 0 and n-1 derived from facial reduction. For problems with small singularity degree, forward error is similar to backward error, but this may not be true for problems with large singularity degree. In this thesis we provide a method to obtain numerical lower bounds on forward error, thereby complementing the bounds of Sturm. While the bounds of Sturm identify good convergence, our bounds allow us to detect poor convergence. Our approach may also be used to provide lower bounds on singularity degree, a measure that is difficult to compute in some instances. We show that large singularity degree leads to some undesirable convergence properties for a specific family of central paths. We apply our results in a theoretical sense to some Toeplitz matrix completion problems and in a numerical sense to several test spectrahedra.
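    The forward/backward error distinction the abstract draws can be illustrated on a toy spectrahedron (a minimal sketch, not the thesis's method; the function name `backward_error` and the norms used to aggregate the violations are illustrative choices):

    ```python
    import numpy as np

    def backward_error(X, A_list, b):
        """Backward error of a candidate X for the spectrahedron
        {X : <A_i, X> = b_i, X PSD}: violation of the linear equality
        constraints plus the distance of X from the PSD cone. Both
        pieces are cheap to compute, unlike the forward error
        (distance to the unknown optimal set)."""
        # residuals of the equality constraints <A_i, X> = b_i
        eq = np.array([np.tensordot(A, X) - bi for A, bi in zip(A_list, b)])
        # distance to the PSD cone = norm of the negative eigenvalue part
        w = np.linalg.eigvalsh((X + X.T) / 2)
        psd_dist = np.linalg.norm(np.minimum(w, 0.0))
        return np.linalg.norm(eq) + psd_dist

    # toy spectrahedron: {X in S^2_+ : trace(X) = 1}
    A_list = [np.eye(2)]
    b = [1.0]
    X_good = np.diag([0.5, 0.5])   # feasible, so backward error 0.0
    X_bad = np.diag([1.5, -0.1])   # violates both the trace and PSD constraints
    print(backward_error(X_good, A_list, b))  # → 0.0
    print(backward_error(X_bad, A_list, b))   # ≈ 0.5 (0.4 trace violation + 0.1 negative eigenvalue)
    ```

    The thesis's point is that a small value of such a backward error need not imply a small forward error when the singularity degree is large.
    
    
    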

    Real Algebraic Geometry With a View Toward Moment Problems and Optimization

    Continuing the tradition initiated in the MFO workshop held in 2014, the aim of this workshop was to foster interaction between real algebraic geometry, operator theory, optimization, and algorithms for systems control. Particular emphasis was given to moment problems, through a dialogue between researchers working on these problems in finite- and infinite-dimensional settings, from which new challenges and interdisciplinary applications emerged.

    Stochastic Minimization in the Conformal Bootstrap
