32 research outputs found

    Duality Results for Conic Convex Programming

    This paper presents a unified study of duality properties for the problem of minimizing a linear function over the intersection of an affine space with a convex cone in finite dimension. Existing duality results are carefully surveyed and some new duality properties are established. Examples are given to illustrate these new properties. The topics covered in this paper include Gordan-Stiemke type theorems, Farkas type theorems, perfect duality, the Slater condition, regularization, Ramana's duality, and approximate dualities. The dual representations of various convex sets, convex cones and conic convex programs are also discussed.
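
    For orientation, the underlying primal-dual pair can be written (in standard equality form, an illustrative assumption; the paper's own formulation may differ in details) as $\min\{c^\top x : Ax = b,\ x \in K\}$ for the primal and $\max\{b^\top y : A^\top y + s = c,\ s \in K^*\}$ for the dual, where $K$ is a closed convex cone and $K^*$ its dual cone. Weak duality then follows from the identity

        $c^\top x - b^\top y = x^\top s \ge 0$

    valid for any primal-dual feasible pair.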

    Superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming

    This paper establishes the superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming under the assumptions that the semidefinite program has a strictly complementary primal-dual optimal solution and that the size of the central path neighborhood tends to zero. The interior point algorithm considered here closely resembles the Mizuno-Todd-Ye predictor-corrector method for linear programming, which is known to be quadratically convergent. It is shown that when the iterates are well centered, the duality gap is reduced superlinearly after each predictor step. Indeed, if each predictor step is succeeded by $r$ consecutive corrector steps, then the predictor reduces the duality gap superlinearly with order $\frac{2}{1+2^{-2r}}$. The proof relies on a careful analysis of the central path for semidefinite programming. It is shown that under the strict complementarity assumption, the primal-dual central path converges to the analytic center of the primal-dual optimal solution set, and the distance from any point on the central path to this analytic center is bounded by the duality gap.
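
    In standard duality-gap notation (an assumed notation, not taken from the abstract), writing $\mu_k$ for the duality gap after the $k$-th predictor step, the claimed rate reads

        $\mu_{k+1} = O\big(\mu_k^{\,2/(1+2^{-2r})}\big)$,

    so a single corrector step ($r = 1$) already gives order $8/5$, and the exponent approaches the quadratic value $2$ as $r$ grows.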

    Conic convex programming and self-dual embedding

    How to initialize an algorithm to solve an optimization problem is of great theoretical and practical importance. In the simplex method for linear programming this issue is resolved by either the two-phase approach or the so-called big-M technique. In the interior point method, there is a more elegant way to deal with the initialization problem, viz. the self-dual embedding technique proposed by Ye, Todd and Mizuno (30). For linear programming this technique makes it possible to identify an optimal solution or to conclude that the problem is infeasible or unbounded by solving its embedded self-dual problem. The embedded self-dual problem has a trivial initial solution and has the same structure as the original problem; hence, it eliminates the need to consider the initialization problem at all. In this paper, we extend this approach to solve general conic convex programming, including semidefinite programming. Because a nonlinear conic convex programming problem may lack the so-called strict complementarity property, difficulties arise in identifying solutions for the original problem based on solutions for the embedded self-dual system. We provide numerous examples from semidefinite programming to illustrate various possibilities which have no analogue in the linear programming case.
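
    As a rough illustration of the idea (a sketch of the homogeneous self-dual model commonly used in the conic setting; the exact embedding analysed in the paper may differ), one seeks $(x, y, s, \tau, \kappa)$ satisfying

        $Ax - b\tau = 0$,
        $A^\top y + s - c\tau = 0$,
        $b^\top y - c^\top x - \kappa = 0$,
        $x \in K$, $s \in K^*$, $\tau \ge 0$, $\kappa \ge 0$.

    If a solution with $x^\top s + \tau\kappa = 0$ has $\tau > 0$, then $(x/\tau, y/\tau, s/\tau)$ solves the original primal-dual pair; if instead $\kappa > 0$, one obtains a certificate of primal or dual infeasibility. The difficulties mentioned in the abstract arise when the computed solution has $\tau = \kappa = 0$, a case with no analogue in linear programming.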

    Matrix convex functions with applications to weighted centers for semidefinite programming

    In this paper, we develop various calculus rules for general smooth matrix-valued functions and for the class of matrix convex (or concave) functions first introduced by Loewner and Kraus in the 1930s. We then use these calculus rules and the matrix convex function $-\log X$ to study a new notion of weighted convex centers for semidefinite programming (SDP) and show that, with this definition, some known properties of weighted centers for linear programming can be extended to SDP. We also show how the calculus rules for matrix convex functions can be used in the implementation of barrier methods for optimization problems involving nonlinear matrix functions.
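
    For concreteness (a standard definition, not specific to this paper), matrix convexity of $-\log X$ means that for all positive definite matrices $X, Y$ of the same size and all $t \in [0,1]$,

        $-\log\big((1-t)X + tY\big) \preceq -(1-t)\log X - t\log Y$,

    where $\log$ is the matrix logarithm and $\preceq$ denotes the Loewner (positive semidefinite) ordering; taking traces recovers the familiar concavity of $\log\det$ used in SDP barrier functions.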

    On the extensions of the Frank-Wolfe theorem

    In this paper we consider optimization problems defined by a quadratic objective function and a finite number of quadratic inequality constraints. Given that the objective function is bounded below over the feasible set, we present a comprehensive study of the conditions under which the optimal solution set is nonempty, thus extending the so-called Frank-Wolfe theorem. In particular, we first prove a general continuity result for the solution set defined by a system of convex quadratic inequalities. This result immediately implies that the optimal solution set of the aforementioned problem is nonempty when all the quadratic functions involved are convex. In the absence of convexity of the objective function, we give examples showing that the optimal solution set may be empty either when there are two or more convex quadratic constraints, or when the Hessian of the objective function has two or more negative eigenvalues. In the case when there exists only one convex quadratic inequality constraint (together with other linear constraints), or when the constraint functions are all convex quadratic and the objective function is quasi-convex (thus allowing one negative eigenvalue in its Hessian matrix), we prove that the optimal solution set is nonempty.
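
    Written out (with notation assumed here for illustration), the problem class is

        $\min_x\ q_0(x)$ subject to $q_i(x) \le 0,\ i = 1,\dots,m$, where $q_i(x) = x^\top Q_i x + 2 b_i^\top x + c_i$,

    and the classical Frank-Wolfe theorem corresponds to the special case of linear constraints ($Q_i = 0$ for $i \ge 1$): if $q_0$ is bounded below on a nonempty feasible set, its minimum is attained.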