
    On inexact Newton directions in interior point methods for linear optimization

    In each iteration of the interior point method (IPM) at least one linear system has to be solved, and the main computational effort of IPMs consists in solving these linear systems. Solving them with a direct method becomes very expensive for large-scale problems. In this thesis we are concerned with using an iterative method for solving the reduced KKT systems arising in IPMs for linear programming. The augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the normal equations form. We design a block triangular preconditioner for this system, constructed from a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. We use the preconditioned conjugate gradient (PCG) method to solve the augmented system. Although the augmented system is indefinite, short-recurrence iterative methods such as PCG can be applied to indefinite systems in certain situations. This approach has been implemented within the HOPDM interior point solver. Since the KKT system is solved only approximately, it becomes necessary to study the convergence of the IPM in this inexact case. We present a convergence analysis of the inexact infeasible path-following algorithm, prove the global convergence of the method, and provide a complexity analysis.
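    The following is a minimal sketch (not the HOPDM implementation) of the idea: solve the indefinite augmented KKT system iteratively with a block preconditioner. A generic block-diagonal preconditioner and MINRES stand in for the basis-matrix block triangular preconditioner and the PCG variant described above; all matrix sizes and values are illustrative assumptions.

        # Augmented KKT system  [ -Theta^{-1}  A^T ] [dx]   [r1]
        #                       [      A        0  ] [dy] = [r2]
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        m, n = 50, 120                                  # constraints, variables (assumed)
        rng = np.random.default_rng(0)
        A = sp.random(m, n, density=0.05, random_state=0, format="csr")
        A = A + sp.hstack([sp.eye(m), sp.csr_matrix((m, n - m))])  # ensure full row rank
        theta = rng.uniform(0.5, 2.0, n)                # IPM scaling Theta (assumed values)

        K = sp.bmat([[-sp.diags(1.0 / theta), A.T],
                     [A, None]], format="csr")
        rhs = rng.standard_normal(n + m)

        # Block-diagonal preconditioner diag(Theta^{-1}, A Theta A^T): a generic
        # stand-in for the basis-matrix block triangular preconditioner above.
        schur = spla.factorized(sp.csc_matrix(A @ sp.diags(theta) @ A.T))
        P = spla.LinearOperator((n + m, n + m),
                                matvec=lambda v: np.concatenate([theta * v[:n],
                                                                 schur(v[n:])]))

        dxy, info = spla.minres(K, rhs, M=P)            # MINRES copes with indefiniteness
        print(info, np.linalg.norm(K @ dxy - rhs))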

    Convergence Analysis of an Inexact Feasible Interior Point Method for Convex Quadratic Programming

    In this paper we discuss two variants of an inexact feasible interior point algorithm for convex quadratic programming. We consider two different neighbourhoods: a (small) one induced by the Euclidean norm, which yields a short-step algorithm, and a symmetric one induced by the infinity norm, which yields a (practical) long-step algorithm. Both algorithms allow the Newton equation system to be solved inexactly. For both algorithms we provide conditions on the level of error acceptable in the Newton equation and establish worst-case complexity results.
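    For reference, the two neighbourhoods have the following standard textbook forms (a sketch with generic parameters; the paper's exact constants may differ):

        % Standard forms of the two neighbourhoods, with $\mu = x^{\top} s / n$
        % and parameters $\theta, \gamma \in (0,1)$:
        \mathcal{N}_2(\theta) = \{(x,y,s) \in \mathcal{F}^0 : \|XSe - \mu e\|_2 \le \theta\mu\}
        % small, Euclidean-norm neighbourhood -> short-step algorithm
        \mathcal{N}_s(\gamma) = \{(x,y,s) \in \mathcal{F}^0 : \gamma\mu \le x_i s_i \le \mu/\gamma,\ i = 1,\dots,n\}
        % symmetric, infinity-norm-induced neighbourhood -> long-step algorithm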

    Convergence analysis of an Inexact Infeasible Interior Point method for Semidefinite Programming

    In this paper we present an extension to semidefinite programming (SDP) of the well-known infeasible interior point method for linear programming of Kojima, Megiddo and Mizuno (A primal-dual infeasible-interior-point algorithm for linear programming, Math. Progr., 1993). The extension developed here allows the use of inexact search directions; i.e., the linear systems defining the search directions can be solved with an accuracy that increases as the solution is approached. A convergence analysis is carried out and the global convergence of the method is proved.
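    A common way to formalize such an inexactness condition (a sketch; the paper's precise bound may differ) is to require the residual of the Newton system at iteration k to shrink with the duality-gap measure:

        \| r_k \| \le \eta \, \mu_k, \qquad
        \mu_k = \frac{\operatorname{tr}(X_k S_k)}{n}, \qquad \eta \in (0,1),
        % so the permitted residual shrinks as the iterates approach the
        % solution, matching the "accuracy that increases" described above.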

    A distributed primal-dual interior-point method for loosely coupled problems using ADMM

    In this paper we propose an efficient distributed algorithm for solving loosely coupled convex optimization problems. The algorithm is based on a primal-dual interior-point method in which we use the alternating direction method of multipliers (ADMM) to compute the primal-dual directions at each iteration. This enables us to combine the exceptional convergence properties of primal-dual interior-point methods with the remarkable parallelizability of ADMM. The resulting algorithm has superior computational properties compared with ADMM applied directly to our problem: the amount of computation each agent must perform is far smaller. In particular, the updates for all variables can be expressed in closed form, irrespective of the type of optimization problem. The most expensive computations of the algorithm occur in the updates of the primal variables and can be precomputed in each iteration of the interior-point method. We verify and compare our method to ADMM in numerical experiments.
    Comment: extended version, 50 pages, 9 figures.
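    As a concrete illustration of the ADMM building block, the following is a generic scaled-ADMM iteration for a lasso problem. It is not the paper's primal-dual direction computation; the problem sizes and constants are assumptions.

        # Scaled ADMM for  min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
        import numpy as np

        rng = np.random.default_rng(1)
        m, n, lam, rho = 40, 100, 0.1, 1.0
        A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

        x = z = u = np.zeros(n)
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))    # factor once, reuse
        Atb = A.T @ b
        for _ in range(200):
            # x-update: closed-form ridge solve, reusing the factorization
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            # z-update: elementwise soft-thresholding (trivially parallel)
            v = x + u
            z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
            u = u + x - z                                    # scaled dual update
        print("nonzeros in z:", np.count_nonzero(z))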

    Harmonic and Refined Harmonic Shift-Invert Residual Arnoldi and Jacobi--Davidson Methods for Interior Eigenvalue Problems

    This paper concerns the harmonic shift-invert residual Arnoldi (HSIRA) and harmonic Jacobi--Davidson (HJD) methods as well as their refined variants, RHSIRA and RHJD, for the interior eigenvalue problem. Each method needs to solve an inner linear system to expand the subspace successively. When the linear systems are solved only approximately, we are led to the inexact methods. We prove that the inexact HSIRA, RHSIRA, HJD and RHJD methods mimic their exact counterparts well when the inner linear systems are solved with only low or modest accuracy. We show that (i) the exact HSIRA and HJD expand subspaces better than the exact SIRA and JD, and (ii) the exact RHSIRA and RHJD expand subspaces better than the exact HSIRA and HJD. Based on this theory, we design stopping criteria for the inner solves. To make the methods practical, we present restarted HSIRA, HJD, RHSIRA and RHJD algorithms. Numerical results demonstrate that these algorithms are much more efficient than the restarted standard SIRA and JD algorithms, and that the refined harmonic algorithms substantially outperform the harmonic ones.
    Comment: 15 pages, 4 figures.
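    The flavour of the inexact inner solve can be sketched as follows: one generic shift-invert residual Arnoldi expansion step, not the harmonic or refined variants analysed in the paper; the matrix, shift and iteration limits are assumptions.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, sigma = 500, 0.7                        # problem size and target shift (assumed)
        A = sp.random(n, n, density=0.002, random_state=2) + sp.diags(np.linspace(0, 2, n))
        A = ((A + A.T) / 2).tocsr()                # symmetrize for a clean example

        rng = np.random.default_rng(3)
        V = np.linalg.qr(rng.standard_normal((n, 5)))[0]    # current search subspace
        theta, Y = np.linalg.eigh(V.T @ (A @ V))            # projected eigenproblem
        k = int(np.argmin(np.abs(theta - sigma)))           # Ritz value nearest the shift
        u = V @ Y[:, k]
        r = A @ u - theta[k] * u                            # eigenresidual

        # Inner solve (A - sigma*I) w = r, deliberately stopped early: this is
        # the "low or modest accuracy" inexact solve discussed above.
        w, _ = spla.gmres(A - sigma * sp.eye(n), r, restart=20, maxiter=1)

        w -= V @ (V.T @ w)                                  # orthogonalize against V
        V = np.hstack([V, (w / np.linalg.norm(w))[:, None]])  # expand the subspace
        print("eigenresidual norm:", np.linalg.norm(r))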

    On affine scaling inexact dogleg methods for bound-constrained nonlinear systems

    Within the framework of affine scaling trust-region methods for bound-constrained problems, we discuss the use of an inexact dogleg method as a tool for simultaneously handling the trust region and the bound constraints while seeking an approximate minimizer of the model. Focusing on bound-constrained systems of nonlinear equations, we describe an inexact affine scaling method for large-scale problems that employs the inexact dogleg procedure. Global convergence results are established without any Lipschitz assumption on the Jacobian matrix, and fast local convergence is shown under standard assumptions. The convergence analysis is performed without specifying the scaling matrix used to handle the bounds, and a rather general class of scaling matrices is allowed in actual algorithms. Numerical results showing the performance of the method are also given.
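    For orientation, the classical (exact) dogleg step looks as follows. This is a baseline sketch only: the method above uses an inexact Newton step and an affine scaling matrix for the bounds, both omitted here.

        import numpy as np

        def dogleg_step(g, B, delta):
            """Approximately minimize g.T p + 0.5 p.T B p subject to ||p|| <= delta."""
            p_newton = -np.linalg.solve(B, g)              # full (exact) Newton step
            if np.linalg.norm(p_newton) <= delta:
                return p_newton                            # Newton step fits in the region
            p_cauchy = -(g @ g) / (g @ B @ g) * g          # minimizer along steepest descent
            if np.linalg.norm(p_cauchy) >= delta:
                return -delta * g / np.linalg.norm(g)      # truncated Cauchy step
            # Walk the segment from p_cauchy towards p_newton until ||p|| = delta:
            d = p_newton - p_cauchy
            a, b, c = d @ d, 2 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta**2
            tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
            return p_cauchy + tau * d

        B = np.array([[4.0, 1.0], [1.0, 3.0]])             # illustrative model Hessian
        g = np.array([1.0, 2.0])                           # illustrative gradient
        print(dogleg_step(g, B, delta=0.5))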

    Inexact Convex Relaxations for AC Optimal Power Flow: Towards AC Feasibility

    Convex relaxations of AC optimal power flow (AC-OPF) problems have attracted significant interest as in several instances they provably yield the global optimum of the original non-convex problem. If, however, the relaxation is inexact, the obtained solution is not AC-feasible. The quality of the obtained solution is essential for several practical applications of AC-OPF, but detailed analyses are lacking in the existing literature. This paper aims to fill this gap. We provide an in-depth investigation of the solution characteristics when convex relaxations are inexact, we assess the most promising AC feasibility recovery methods for large-scale systems, and we propose two new metrics that lead to a better understanding of the quality of the identified solutions. We perform a comprehensive assessment on 96 different test cases, ranging from 14 to 3120 buses, and show the following. (i) Despite an optimality gap of less than 1%, several test cases still exhibit substantial distances to both AC feasibility and local optimality, and the newly proposed metrics characterize these deviations. (ii) Penalization methods fail to recover an AC-feasible solution in 15 out of 45 cases, and using the proposed metrics we show that most failed test instances exhibit substantial distances to both AC feasibility and local optimality. For failed test instances with small distances, we show how our proposed metrics inform a fine-tuning of the penalty weights to obtain AC-feasible solutions. (iii) The computational benefit of warm-starting non-convex solvers varies significantly, but a computational speedup exists in over 75% of the cases.
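    For context, the optimality gap referred to above is commonly defined relative to the relaxation's lower bound (a standard definition; the paper's two new metrics are additional to this and are not reproduced here):

        % Relative optimality gap of a convex relaxation: f_relax is the
        % relaxation objective (a lower bound on the non-convex optimum) and
        % f_AC the objective of an AC-feasible (e.g. locally optimal) solution.
        \mathrm{gap} = \frac{f_{\mathrm{AC}} - f_{\mathrm{relax}}}{f_{\mathrm{AC}}} \times 100\%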

    Fast algorithms for large scale generalized distance weighted discrimination

    High-dimension, low-sample-size statistical analysis is important in a wide range of applications. In such situations, the highly appealing discrimination method, the support vector machine, can be improved to alleviate data piling at the margin. This leads naturally to distance weighted discrimination (DWD), which can be modeled as a second-order cone programming problem and solved by interior-point methods when the scale (in sample size and feature dimension) of the data is moderate. Here, we design a scalable and robust algorithm for solving large-scale generalized DWD problems. Numerical experiments on real data sets from the UCI repository demonstrate that our algorithm is highly efficient in solving large-scale problems, and is sometimes even more efficient than the highly optimized LIBLINEAR and LIBSVM for solving the corresponding SVM problems.
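    The generalized DWD model referred to above is commonly stated as follows (a sketch of the standard formulation from the DWD literature; q = 1 recovers the original DWD, and the constraint on w gives the second-order cone structure):

        % Generalized distance weighted discrimination (standard form), with
        % labels y_i in {-1, +1}, margins r_i, slacks xi_i and penalty C:
        \min_{w,\beta,\xi,r}\ \sum_{i=1}^{n} \frac{1}{r_i^{q}} + C \sum_{i=1}^{n} \xi_i
        \quad \text{s.t.} \quad r_i = y_i (x_i^{\top} w + \beta) + \xi_i,\
        r_i > 0,\ \xi_i \ge 0,\ \|w\|_2 \le 1.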