5 research outputs found

    A new perspective on the complexity of interior point methods for linear programming

    Get PDF
    In a dynamical-systems paradigm, many optimization algorithms are equivalent to applying the forward Euler method to the system of ordinary differential equations defined by the vector field of the search directions. The stiffness of such vector fields therefore plays an essential role in the complexity of these methods. We first exemplify this point with a theoretical result for general linesearch methods for unconstrained optimization, which we then employ to investigate the complexity of a primal short-step path-following interior point method for linear programming. Our analysis involves showing that the Newton vector field associated with the primal logarithmic barrier is nonstiff in a sufficiently small and shrinking neighbourhood of its minimizer. Thus, by confining the iterates to these neighbourhoods of the primal central path, our algorithm has a nonstiff vector field of search directions, and we can give a worst-case bound on its iteration complexity. Furthermore, owing to the generality of our vector-field setting, we can perform a similar (global) iteration complexity analysis when the Newton direction of the interior point method is computed only approximately, using some direct method for solving linear systems of equations.
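    The forward-Euler view described in the abstract can be illustrated with a toy quadratic (this is an illustration of the general principle, not the paper's algorithm): gradient descent is forward Euler applied to x'(t) = -∇f(x(t)), and the admissible step size is limited by the stiffness of that vector field.

```python
# Sketch: gradient descent as forward Euler on x'(t) = -grad f(x(t)).
# For the quadratic f(x) = 0.5 * x.T @ A @ x the vector field is -A @ x,
# and forward Euler with step h is stable only when h < 2 / lambda_max(A):
# the stiffer the field (larger lambda_max), the smaller the usable step.
import numpy as np

A = np.diag([1.0, 100.0])          # stiff: eigenvalue ratio of 100
x0 = np.array([1.0, 1.0])

def euler_gd(h, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x = x - h * (A @ x)        # one forward Euler step on x' = -A x
    return np.linalg.norm(x)

print(euler_gd(0.019))             # h < 2/100: iterates contract
print(euler_gd(0.021))             # h > 2/100: iterates blow up
```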

    A structured modified Newton approach for solving systems of nonlinear equations arising in interior-point methods for quadratic programming

    Full text link
    The focus in this work is on interior-point methods for inequality-constrained quadratic programs, and particularly on the system of nonlinear equations to be solved for each value of the barrier parameter. Newton iterations give high-quality solutions, but we are interested in modified Newton systems that are computationally less expensive at the expense of lower-quality solutions. We propose a structured modified Newton approach where each modified Jacobian is composed of a previous Jacobian, plus one low-rank update matrix per succeeding iteration. Each update matrix is, for a given rank, chosen such that the distance to the Jacobian at the current iterate is minimized, in both 2-norm and Frobenius norm. The approach is structured in the sense that it preserves the nonzero pattern of the Jacobian. The choice of update matrix is supported by results in an ideal theoretical setting. We also produce numerical results with a basic interior-point implementation to investigate the practical performance within and beyond the theoretical framework. To improve performance beyond the theoretical framework, we also motivate and construct two heuristics to be added to the method.
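    The norm-minimizing low-rank update in the abstract can be sketched via the Eckart–Young theorem (an illustration of the underlying idea only, not the authors' exact method; in particular, this sketch ignores the sparsity-pattern constraint the paper imposes):

```python
# Sketch: given a previous Jacobian B and the Jacobian J at the current
# iterate, the rank-r correction U that minimizes || J - (B + U) || in
# Frobenius norm (and 2-norm) is the truncated SVD of the difference
# J - B (Eckart-Young).
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))                    # previous Jacobian
J = B + np.outer(rng.standard_normal(5),
                 rng.standard_normal(5)) \
      + 0.01 * rng.standard_normal((5, 5))         # near-rank-1 change

def low_rank_update(B, J, r):
    U, s, Vt = np.linalg.svd(J - B)
    return B + (U[:, :r] * s[:r]) @ Vt[:r]         # best rank-r correction

B1 = low_rank_update(B, J, r=1)
print(np.linalg.norm(J - B1), np.linalg.norm(J - B))  # distance shrinks
```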

    Effects Of Finite-Precision Arithmetic On Interior-Point Methods For Nonlinear Programming

    No full text
    We show that the effects of finite-precision arithmetic in forming and solving the linear system that arises at each iteration of primal-dual interior-point algorithms for nonlinear programming are benign. When we replace the standard assumption that the active constraint gradients are independent by the weaker Mangasarian-Fromovitz constraint qualification, rapid convergence usually is attainable, even when cancellation and roundoff errors occur during the calculations. In deriving our main results, we prove a key technical result about the size of the exact primal-dual step. This result can be used to modify existing analysis of primal-dual interior-point methods for convex programming, making it possible to extend the superlinear local convergence results to the nonconvex case.
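    The finite-precision concern the abstract addresses comes from the fact that the per-iteration linear system becomes severely ill-conditioned as the barrier parameter shrinks. A hypothetical 2x2 model (not taken from the paper) shows the effect:

```python
# Sketch (toy 2x2 model, not from the paper): interior-point linear
# systems contain diagonal terms scaling like 1/mu, so their condition
# number grows without bound as the barrier parameter mu -> 0 -- the
# regime in which the paper shows finite-precision effects stay benign.
import numpy as np

def ipm_like_matrix(mu):
    # A [[H, A^T], [A, -mu I]]-style toy system with a 1/mu scaling
    return np.array([[1.0 + 1.0 / mu, 1.0],
                     [1.0,           -mu]])

conds = [np.linalg.cond(ipm_like_matrix(mu)) for mu in (1e-2, 1e-4, 1e-6)]
print(conds)   # conditioning worsens monotonically as mu -> 0
```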

    Spectral estimates and preconditioning for saddle point systems arising from optimization problems

    Get PDF
    In this thesis, we consider the problem of solving large and sparse linear systems of saddle point type stemming from optimization problems. The focus of the thesis is on iterative methods, and new preconditioning strategies are proposed, along with novel spectral estimates for the matrices involved.
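    A generic example of the setting the thesis studies (a standard textbook construction, not the thesis's own preconditioners): saddle point matrices K = [[H, Bᵀ], [B, 0]] are symmetric indefinite, so MINRES applies, and a classical block-diagonal preconditioner is built from H and the Schur complement B H⁻¹ Bᵀ.

```python
# Sketch: preconditioned MINRES on a small dense saddle point system.
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(1)
n, m = 8, 3
H = np.diag(rng.uniform(1.0, 2.0, n))              # SPD Hessian block
B = rng.standard_normal((m, n))                    # constraint Jacobian
K = np.block([[H, B.T], [B, np.zeros((m, m))]])    # symmetric indefinite
b = rng.standard_normal(n + m)

S = B @ np.linalg.inv(H) @ B.T                     # Schur complement
P_inv = np.linalg.inv(np.block([[H, np.zeros((n, m))],
                                [np.zeros((m, n)), S]]))
# MINRES expects the preconditioner as an approximate inverse of K;
# block-diag(H, S) is SPD, as MINRES requires.
M = LinearOperator((n + m, n + m), matvec=lambda v: P_inv @ v)

x, info = minres(K, b, M=M)
print(info, np.linalg.norm(K @ x - b))             # 0 means converged
```

With the exact Schur complement, the preconditioned matrix has only three distinct eigenvalues, so MINRES converges in a handful of iterations; practical preconditioners replace S with a cheap approximation.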