    Convergence Analysis of an Inexact Feasible Interior Point Method for Convex Quadratic Programming

    In this paper we discuss two variants of an inexact feasible interior point algorithm for convex quadratic programming. We consider two different neighbourhoods: a (small) one induced by the Euclidean norm, which yields a short-step algorithm, and a symmetric one induced by the infinity norm, which yields a (practical) long-step algorithm. Both algorithms allow the Newton equation system to be solved inexactly. For both algorithms we provide conditions on the acceptable level of error in the Newton equation and establish worst-case complexity results.
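
    As a rough illustration of the two neighbourhoods (our notation and thresholds, not necessarily the paper's exact definitions): with duality measure mu = x's/n, the small neighbourhood bounds the Euclidean norm of the centrality residual x*s - mu*e, while the symmetric one bounds every complementarity product x_i*s_i from both sides. A minimal sketch:

```python
import numpy as np

def in_small_neighbourhood(x, s, theta=0.1):
    """Euclidean-norm neighbourhood: ||x*s - mu*e||_2 <= theta*mu, where
    mu = x's/n is the duality measure. This is the short-step neighbourhood."""
    mu = x @ s / len(x)
    return bool(np.linalg.norm(x * s - mu) <= theta * mu)

def in_symmetric_neighbourhood(x, s, gamma=0.01):
    """Symmetric (infinity-norm) neighbourhood: gamma*mu <= x_i*s_i <= mu/gamma
    for every i. This is the wide neighbourhood of long-step methods."""
    mu = x @ s / len(x)
    prod = x * s
    return bool(np.all(prod >= gamma * mu) and np.all(prod <= mu / gamma))

# A perfectly centred point (all x_i * s_i equal) lies in both neighbourhoods.
x = np.array([1.0, 2.0, 0.5])
s = 1.0 / x
print(in_small_neighbourhood(x, s), in_symmetric_neighbourhood(x, s))
```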

    A new perspective on the complexity of interior point methods for linear programming

    In a dynamical systems paradigm, many optimization algorithms are equivalent to applying the forward Euler method to the system of ordinary differential equations defined by the vector field of the search directions. The stiffness of such vector fields therefore plays an essential role in the complexity of these methods. We first exemplify this point with a theoretical result for general linesearch methods for unconstrained optimization, which we then employ to investigate the complexity of a primal short-step path-following interior point method for linear programming. Our analysis involves showing that the Newton vector field associated with the primal logarithmic barrier is nonstiff in a sufficiently small and shrinking neighbourhood of its minimizer. Thus, by confining the iterates to these neighbourhoods of the primal central path, our algorithm has a nonstiff vector field of search directions, and we can give a worst-case bound on its iteration complexity. Furthermore, due to the generality of our vector field setting, we can perform a similar (global) iteration complexity analysis when the Newton direction of the interior point method is computed only approximately, using some direct method for solving linear systems of equations.
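
    The forward-Euler correspondence is easy to make concrete. A minimal sketch (our own illustration, not the paper's algorithm): gradient descent with step size h is exactly the explicit Euler discretization of the gradient flow x'(t) = -grad f(x), so a stiff vector field caps the usable step size in the same way it does for the ODE.

```python
import numpy as np

def gradient_flow_euler(grad, x0, h, steps):
    """Explicit (forward) Euler on the ODE x'(t) = -grad(x): with step size h
    this is exactly gradient descent, x_{k+1} = x_k - h * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * grad(x)
    return x

# Stiff quadratic f(x) = 0.5 * x'Ax with curvatures 1 and 100: forward Euler
# is stable only for h < 2/100, so the stiff direction dictates the step size.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x
print(gradient_flow_euler(grad, [1.0, 1.0], h=0.019, steps=2000))  # -> ~0
print(gradient_flow_euler(grad, [1.0, 1.0], h=0.021, steps=2000))  # blows up
```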

    Interior point method for linear and convex optimizations.

    by Shiu-Tung Ng. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 100-103). Abstract also in Chinese.

    Contents:
    Chapter 1: Preliminary
      1.1 Linear and Convex Optimization Model
      1.2 Notations for Linear Optimization
      1.3 Definition and Properties of Convexities
      1.4 Useful Theorem for Unconstrained Minimization
    Chapter 2: Linear Optimization
      2.1 Self-dual Linear Optimization Model
      2.2 Definitions and Main Theorems
      2.3 Self-dual Embedding and Simple Example
      2.4 Newton Step
      2.5 Rescaling and Definition of δ(xs, w)
      2.6 An Interior Point Method
        2.6.1 Algorithm with Full Newton Steps
        2.6.2 Iteration Bound
      2.7 Background and Rounding Procedure for Interior-point Solution
      2.8 Solving Some LP Problems
      2.9 Remarks
    Chapter 3: Convex Optimization
      3.1 Introduction
        3.1.1 Convex Optimization Problem
        3.1.2 Idea of Interior Point Method
      3.2 Logarithmic Barrier Method
        3.2.1 Basic Concepts and Properties
        3.2.2 k-Self-Concordance Condition
        3.2.3 Short-step Logarithmic Barrier Algorithm
        3.2.4 Initialization Algorithm
      3.3 Center Method
        3.3.1 Basic Concepts and Properties
        3.3.2 Short-step Center Algorithm
        3.3.3 Initialization Algorithm
      3.4 Properties and Examples on Self-Concordance
      3.5 Examples of Convex Optimization Problem
        3.5.1 Self-concordant Logarithmic Barrier and Distance Function
        3.5.2 General Convex Optimization Problems
      3.6 Remarks
    Bibliography

    A polynomial-time interior-point method for conic optimization, with inexact barrier evaluations

    We consider a primal-dual short-step interior-point method for conic convex optimization problems for which exact evaluation of the gradient and Hessian of the primal and dual barrier functions is either impossible or prohibitively expensive. As our main contribution, we show that if approximate gradients and Hessians of the primal barrier function can be computed, and the relative errors in such quantities are not too large, then the method has polynomial worst-case iteration complexity. (In particular, polynomial iteration complexity ensues when the gradient and Hessian are evaluated exactly.) In addition, the algorithm requires no evaluation, or even approximate evaluation, of quantities related to the barrier function for the dual cone, even for problems in which the underlying cone is not self-dual.
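
    A sketch of the kind of inexact barrier oracle the abstract describes, in our own notation and for the toy log barrier of the nonnegative orthant (the tolerance, damping rule, and test problem are illustrative assumptions, not the paper's method): the gradient and Hessian are returned with a bounded relative error, and the Newton step is formed from the perturbed quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def inexact_log_barrier(x, rel_err=1e-3):
    """Gradient and Hessian of the log barrier -sum(log x_i) for the
    nonnegative orthant, each entry perturbed by a relative error of at
    most rel_err: a stand-in for an inexact barrier oracle."""
    g = -1.0 / x
    H = np.diag(1.0 / x**2)
    g = g * (1.0 + rel_err * rng.uniform(-1, 1, size=g.shape))
    H = H * (1.0 + rel_err * rng.uniform(-1, 1, size=H.shape))
    return g, H

def inexact_newton_step(x, c, t, rel_err=1e-3):
    """One damped Newton step for min_x t*c'x - sum(log x_i), built from
    the perturbed gradient and Hessian."""
    g, H = inexact_log_barrier(x, rel_err)
    dx = np.linalg.solve(H, -(t * c + g))
    # crude damping so the iterate stays strictly inside the orthant
    alpha = min(1.0, 0.5 / max(float(np.max(-dx / x)), 1e-12))
    return x + alpha * dx

x = np.array([1.0, 2.0, 3.0])
c = np.ones(3)
for _ in range(25):
    x = inexact_newton_step(x, c, t=10.0)
print(x)  # close to the exact minimizer x_i = 1/(t*c_i) = 0.1
```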

    A New Adaptive Algorithm for Convex Quadratic Multicriteria Optimization

    We present a new adaptive algorithm for convex quadratic multicriteria optimization. The algorithm adaptively refines the approximation to the set of efficient points by way of a warm-start interior-point scalarization approach. Numerical results show that this technique is an order of magnitude faster than a standard method used for this problem.
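
    A toy sketch of the warm-start scalarization idea (the bi-criteria problem, weight grid, and BFGS solver are our own stand-ins; the authors warm-start an interior-point method, not BFGS): sweep the scalarization weight over a grid and seed each subproblem with the previous solution, so that consecutive, closely related subproblems are cheap to solve.

```python
import numpy as np
from scipy.optimize import minimize

# Two convex quadratic criteria f_i(x) = 0.5*x'Q_i x + c_i'x (toy data).
Q1, c1 = np.diag([2.0, 1.0]), np.array([-2.0, 0.0])
Q2, c2 = np.diag([1.0, 3.0]), np.array([0.0, -3.0])

def scalarized(w):
    """Weighted-sum scalarization: a single convex quadratic objective."""
    Q = w * Q1 + (1 - w) * Q2
    c = w * c1 + (1 - w) * c2
    return (lambda x: 0.5 * x @ Q @ x + c @ x,   # objective
            lambda x: Q @ x + c)                 # gradient

x = np.zeros(2)            # cold start only once, for the first weight
efficient_points = []
for w in np.linspace(0.0, 1.0, 21):
    f, g = scalarized(w)
    # warm start: the previous efficient point seeds the next subproblem
    x = minimize(f, x, jac=g, method="BFGS").x
    efficient_points.append(x)
print(np.array(efficient_points))  # approximation of the efficient set
```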

    A geodesic interior-point method for linear optimization over symmetric cones

    We develop a new interior-point method for symmetric-cone optimization, a common generalization of linear, second-order-cone, and semidefinite programming. Our key idea is to update iterates along a geodesic of the cone instead of the kernel of the linear constraints. This approach yields a primal-dual-symmetric, scale-invariant, and line-search-free algorithm that uses just half the variables of a standard primal-dual method. With elementary arguments, we establish polynomial-time convergence matching the standard O(√n) bound. Finally, we prove global convergence of a long-step variant and compare the approaches computationally. For linear programming, our algorithms reduce to central-path tracking in the log domain.
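
    For the simplest symmetric cone, the nonnegative orthant, the geodesic update is easy to state (a minimal sketch of the idea under that assumption; the paper treats general symmetric cones): the geodesic through x in direction d is the elementwise multiplicative curve x(t) = x * exp(t*d), a straight line in log coordinates, so every step stays strictly inside the cone and no line search is needed to preserve feasibility.

```python
import numpy as np

def geodesic_step(x, d, t):
    """Geodesic of the nonnegative orthant through x with direction d:
    x(t) = x * exp(t*d) elementwise, i.e. a straight line in log coordinates.
    Unlike the additive step x + t*d, it stays inside the cone for every t."""
    return x * np.exp(t * d)

x = np.array([1.0, 0.5, 2.0])
d = np.array([-3.0, 1.0, -2.0])
for t in (0.5, 1.0, 5.0):
    print(geodesic_step(x, d, t))   # always strictly positive
print(x + 5.0 * d)                  # the additive step leaves the cone
```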

    Convergence and polynomiality of primal-dual interior-point algorithms for linear programming with selective addition of inequalities

    This paper presents the convergence proof and complexity analysis of an interior-point framework that solves linear programming problems by dynamically selecting and adding relevant inequalities. First, we formulate a new primal-dual interior-point algorithm for solving linear programmes in non-standard form with equality and inequality constraints. The algorithm uses a primal-dual path-following predictor-corrector short-step interior-point method that starts with a reduced problem without any inequalities and selectively adds a given inequality only if it becomes active on the way to optimality. Second, we prove convergence of this algorithm to an optimal solution at which all inequalities are satisfied, regardless of whether they have been added by the algorithm or not. We thus provide a theoretical foundation for similar schemes already used in practice. We also establish conditions under which the complexity of such an algorithm is polynomial in the problem dimension, and we discuss the limitations that remain in the absence of these conditions as directions for further research.
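
    The selective-addition idea can be mimicked with a cutting-plane-style outer loop (a stand-in sketch: the paper embeds the selection inside a single predictor-corrector interior-point run, whereas this toy re-solves from scratch with scipy.optimize.linprog): solve a reduced problem without inequalities, then add a violated inequality and repeat until the optimum satisfies them all.

```python
import numpy as np
from scipy.optimize import linprog

def solve_with_selective_rows(c, A, b, box=100.0, tol=1e-9):
    """Cutting-plane-style stand-in: solve with only an active subset of the
    inequalities A x <= b, add the most violated row, and re-solve until the
    optimum satisfies every inequality (added or not). The finite box keeps
    the initial reduced problem, which has no inequalities, bounded."""
    active = []                      # indices of inequalities added so far
    while True:
        res = linprog(c,
                      A_ub=A[active] if active else None,
                      b_ub=b[active] if active else None,
                      bounds=[(0.0, box)] * len(c))
        violation = A @ res.x - b
        worst = int(np.argmax(violation))
        if violation[worst] <= tol:  # all inequalities satisfied: done
            return res.x, active
        active.append(worst)         # add the violated inequality and repeat

# min -x1 - x2  s.t.  x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  x >= 0
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
x_opt, rows_used = solve_with_selective_rows(c, A, b)
print(x_opt, rows_used)  # optimum near (1.6, 1.2); both rows end up added
```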