
    An infeasible interior-point arc-search method with Nesterov's restarting strategy for linear programming problems

    Full text link
    An arc-search interior-point method is a type of interior-point method that approximates the central path by an ellipsoidal arc, which can often reduce the number of iterations. In this work, to further reduce the number of iterations and the computation time for solving linear programming problems, we propose two arc-search interior-point methods using Nesterov's restarting strategy, a well-known technique for accelerating the gradient method with a momentum term. The first method generates a sequence of iterates in a neighborhood of the central path, and we prove that the generated sequence converges to an optimal solution and that the computational complexity is polynomial. The second method incorporates the concept of the Mehrotra-type interior-point method to improve numerical performance. Numerical experiments demonstrate that the second method reduces both the number of iterations and the computational time; in particular, the average number of iterations was reduced compared to existing interior-point methods thanks to the momentum term.
    Comment: 33 pages, 6 figures, 2 tables
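The restarting idea the first method borrows can be illustrated in isolation. The sketch below applies Nesterov's momentum extrapolation to a plain gradient step and resets the momentum whenever the objective increases; all names and the quadratic test objective are illustrative, and this is not the paper's arc-search iteration itself.

```python
import numpy as np

def nesterov_restart(f, grad, x0, step=0.1, iters=200):
    # Nesterov momentum with a function-value restart: when the objective
    # increases, the momentum term is discarded, which empirically restores
    # the accelerated rate.  Illustrative sketch only.
    x = np.asarray(x0, dtype=float)
    y, t, f_prev = x.copy(), 1.0, f(x)
    for _ in range(iters):
        x_new = y - step * grad(y)          # plain gradient step from y
        if f(x_new) > f_prev:               # restart: reset momentum
            y, t = x.copy(), 1.0
            continue
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t, f_prev = x_new, t_new, f(x_new)
    return x

# minimize f(x) = ||x||^2 on R^2
x_star = nesterov_restart(lambda z: float(z @ z), lambda z: 2.0 * z, [3.0, -4.0])
```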

    On Polynomial-time Path-following Interior-point Methods with Local Superlinear Convergence

    Get PDF
    Interior-point methods provide one of the most popular ways of solving convex optimization problems. Two advantages of modern interior-point methods over other approaches are: (1) robust global convergence, and (2) the ability to obtain high accuracy solutions in theory (and in practice, if the algorithms are properly implemented, and as long as numerical linear system solvers continue to provide high accuracy solutions) for well-posed problem instances. This second ability is typically demonstrated by asymptotic superlinear convergence properties. In this thesis, we study superlinear convergence properties of interior-point methods with proven polynomial iteration complexity. Our focus is on linear programming and semidefinite programming special cases. We provide a survey on polynomial iteration complexity interior-point methods which also achieve asymptotic superlinear convergence. We analyze the elements of superlinear convergence proofs for a dual interior-point algorithm of Nesterov and Tunçel and a primal-dual interior-point algorithm of Mizuno, Todd and Ye. We present the results of our computational experiments which observe and track superlinear convergence for a variant of Nesterov and Tunçel's algorithm.

    Predictor-corrector interior-point algorithm for sufficient linear complementarity problems based on a new type of algebraic equivalent transformation technique

    Get PDF
    We propose a new predictor-corrector (PC) interior-point algorithm (IPA) for solving linear complementarity problems (LCPs) with P_*(κ)-matrices. The introduced IPA uses a new type of algebraic equivalent transformation (AET) on the centering equations of the system defining the central path. The new technique was introduced by Darvay et al. [21] for linear optimization. The search direction discussed in this paper can be derived from a positive-asymptotic kernel function using the function φ(t) = t^2 in the new type of AET. We prove that the IPA has O((1 + 4κ)√n log((3nμ^0)/ε)) iteration complexity, where κ is an upper bound on the handicap of the input matrix. To the best of our knowledge, this is the first PC IPA for P_*(κ)-LCPs based on this search direction.

    Solving Reduced KKT Systems in Barrier Methods for Linear and Quadratic Programming

    Full text link

    Interior-Point Algorithms Based on Primal-Dual Entropy

    Get PDF
    We propose a family of search directions based on primal-dual entropy in the context of interior point methods for linear programming. This new family contains previously proposed search directions in the context of primal-dual entropy. We analyze the new family of search directions by studying their primal-dual affine-scaling and constant-gap centering components. We then design primal-dual interior-point algorithms by utilizing our search directions in a homogeneous and self-dual framework. We present iteration complexity analysis of our algorithms and provide the results of computational experiments on NETLIB problems
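One way to make the primal-dual entropy notion tangible: for a strictly feasible pair (x, s), the scaled products v_i = x_i s_i / μ sum to n, so their entropy Σ v_i ln v_i is nonnegative and vanishes exactly on the central path. The sketch below uses this natural entropy measure purely for illustration; it is not necessarily the exact functional used in the paper.

```python
import numpy as np

def primal_dual_entropy(x, s):
    # v_i = x_i * s_i / mu sums to n; by convexity of t*ln(t),
    # sum(v * ln(v)) >= 0 with equality iff v = e, i.e. iff (x, s)
    # lies on the central path.  Illustrative proximity measure.
    mu = float(x @ s) / len(x)
    v = x * s / mu
    return float(np.sum(v * np.log(v)))

on_path  = primal_dual_entropy(np.array([2.0, 1.0]), np.array([0.5, 1.0]))  # v = (1, 1)
off_path = primal_dual_entropy(np.array([2.0, 1.0]), np.array([0.9, 1.0]))
```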

    Interior Point Methods 25 Years Later

    Get PDF
    Interior point methods for optimization have been around for more than 25 years now. Their presence has shaken up the field of optimization. Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. Among the most impressive of them are their low-degree polynomial worst-case complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Interior point methods are competitive when dealing with small problems of dimensions below one million constraints and variables and are beyond competition when applied to large problems of dimensions going into millions of constraints and variables. In this survey we will discuss several issues related to interior point methods including the proof of the worst-case complexity result, the reasons for their amazingly fast practical convergence and the features responsible for their ability to solve very large problems. The ever-growing sizes of optimization problems impose new requirements on optimization methods and software.
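The "almost constant number of iterations" observed in practice contrasts sharply with even the best worst-case bounds. The toy calculator below evaluates the short-step path-following bound O(√n · log(nμ⁰/ε)) with the constant set to 1 (a simplifying assumption; names and defaults are illustrative) to show how slowly the theoretical bound itself grows with dimension:

```python
import math

def iteration_bound(n, mu0=1.0, eps=1e-8):
    # Short-step path-following worst-case iteration bound, up to a
    # constant factor: O(sqrt(n) * log(n * mu0 / eps)).
    return math.ceil(math.sqrt(n) * math.log(n * mu0 / eps))

# bound grows only like sqrt(n) * log(n) in the dimension
bounds = {n: iteration_bound(n) for n in (10**2, 10**4, 10**6)}
```

In practice, as the survey stresses, observed iteration counts are far below these bounds and barely move with problem size.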