35 research outputs found

    Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers

    In this paper we describe a unified scheme for implementing an interior point method (IPM) across a range of computer architectures. In the inner iteration of the IPM, a search direction is computed using Newton's method; computationally, this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures that take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the solver within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
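    The SSPD system in the inner iteration typically takes the form of the normal equations M Δy = r with M = A D² Aᵀ, solved by a Cholesky factorization. The following is a minimal dense pure-Python sketch of that factor-and-solve step; the matrix A, scaling d and right-hand side r are made-up toy data, and a real IPM implementation would exploit sparsity as the paper discusses.

```python
# Toy illustration: solve the SPD "normal equations" M dy = r that arise in an
# IPM inner iteration, where M = A D^2 A^T. Dense and pure Python for clarity.

def cholesky(M):
    """Return lower-triangular L with L L^T = M (M symmetric positive definite)."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (M[i][i] - s) ** 0.5
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

def solve_spd(M, r):
    """Solve M x = r via Cholesky: forward then backward substitution."""
    n = len(M)
    L = cholesky(M)
    y = [0.0] * n                        # forward solve: L y = r
    for i in range(n):
        y[i] = (r[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n                        # backward solve: L^T x = y
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# Build M = A D^2 A^T for a small made-up constraint matrix A and scaling d > 0.
A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0]]
d = [1.0, 0.5, 2.0]                      # diagonal of D
M = [[sum(A[i][k] * d[k] ** 2 * A[j][k] for k in range(3))
      for j in range(2)] for i in range(2)]
r = [1.0, 1.0]
dy = solve_spd(M, r)                     # the dual component of the search direction
```

The direct/indirect choice the abstract mentions is exactly the choice between a factorization like this and an iterative solver such as conjugate gradients applied to the same system.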

    The Dikin-Karmarkar Principle for Steepest Descent

    This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16565
    Steepest feasible descent methods for inequality constrained optimization problems have commonly been plagued by short steps. The consequence of taking short steps is slow convergence to non-stationary points (zigzagging). In linear programming, both the projective algorithm of Karmarkar (1984) and its affine variant, originally proposed by Dikin (1967), can be viewed as steepest feasible descent methods. However, both of these algorithms have been demonstrated to be effective and seem to have overcome the problem of short steps. These algorithms share a common norm; it is this choice of norm, in the context of steepest feasible descent, that we refer to as the Dikin-Karmarkar Principle. This research develops mathematical theory to quantify the short-step behavior of Euclidean norm steepest feasible descent methods and the avoidance of short steps for steepest feasible descent with respect to the Dikin-Karmarkar norm. While the theory is developed for linear programming problems with only nonnegativity constraints on the variables, our numerical experimentation demonstrates that this behavior carries over to the more general linear program with equality constraints added. Our numerical results also suggest that taking longer steps is not sufficient to ensure the efficiency of a steepest feasible descent algorithm; the uniform way in which the Dikin-Karmarkar norm treats every part of the boundary is important in obtaining satisfactory convergence.
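    The short-step contrast can be sketched concretely. For min cᵀx over x ≥ 0 at an interior point x > 0, Euclidean steepest descent uses the direction −c, while steepest descent in the Dikin-Karmarkar norm ‖v‖ₓ = ‖X⁻¹v‖ (X = diag(x)) gives the affine-scaling direction −X²c. The toy data below is my own, not the thesis's experiments:

```python
# Compare Euclidean vs Dikin-norm steepest feasible descent directions for
# min c^T x subject to x >= 0, at an interior point x > 0. Toy data only.

def euclidean_direction(c):
    # Steepest descent in the Euclidean norm: ignores proximity to the boundary.
    return [-ci for ci in c]

def dikin_direction(x, c):
    # Steepest descent in the Dikin norm ||v||_x = ||X^{-1} v||: direction -X^2 c,
    # damped componentwise by x_i^2, so it bends away from nearby boundary facets.
    return [-(xi ** 2) * ci for xi, ci in zip(x, c)]

def max_feasible_step(x, d):
    # Largest t with x + t*d >= 0 (infinite if d >= 0 componentwise).
    ratios = [-xi / di for xi, di in zip(x, d) if di < 0]
    return min(ratios) if ratios else float("inf")

c = [1.0, 1.0]
x = [1e-3, 1.0]                    # first coordinate is almost on the boundary

d_euc = euclidean_direction(c)     # [-1, -1]: hits the boundary after t = 1e-3
d_dik = dikin_direction(x, c)      # [-1e-6, -1]: scaled step stays well interior

print(max_feasible_step(x, d_euc))  # 0.001 -> the "short step" phenomenon
print(max_feasible_step(x, d_dik))  # 1.0   -> the Dikin norm permits a full step
```

The componentwise x² damping is the uniform treatment of the boundary that the abstract credits for avoiding zigzagging.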

    Projective transformations for interior-point algorithms, and a superlinearly convergent algorithm for the w-center problem

    Includes bibliographical references. By Robert M. Freund.

    Projective transformations for interior point methods, Part I: Basic Theory and Linear Programming

    Includes bibliographical references (p. [53]-[55]). By Robert M. Freund.

    On the convergence of the affine-scaling algorithm

    Cover title. Includes bibliographical references (p. 20-22). Research partially supported by the National Science Foundation (NSF-ECS-8519058), the U.S. Army Research Office (DAAL03-86-K-0171), and the Science and Engineering Research Board of McMaster University. By Paul Tseng and Zhi-Quan Luo.

    Polynomial-time algorithms for linear programming based only on primal scaling and projected gradients of a potential function

    Includes bibliographical references (p. 28-29). By Robert M. Freund.


    Convergence property of the Iri-Imai algorithm for some smooth convex programming problems

    In this paper, the Iri-Imai algorithm for solving linear and convex quadratic programming is extended to solve some other smooth convex programming problems. The globally linear convergence rate of this extended algorithm is proved under the condition that the objective and constraint functions satisfy a certain type of convexity, called harmonic convexity in this paper. A characterization of this convexity condition is given; the same condition was used by Mehrotra and Sun to prove the convergence of a path-following algorithm. The Iri-Imai algorithm is a natural generalization of the original Newton algorithm to constrained convex programming, whereas other known convergent interior-point algorithms for smooth convex programming are mainly based on the path-following approach.
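    For the LP case that the extension starts from, the Iri-Imai method applies Newton's method to a multiplicative penalty of the form F(x) = (cᵀx − z*)ⁿ⁺¹ / ∏ xᵢ, which grows both at the boundary and away from the optimal value z*. A toy evaluation (the exponent n+1, the positive-orthant-only setup, and all data below are illustrative assumptions, not the paper's extended algorithm):

```python
# Toy sketch of the Iri-Imai multiplicative penalty for an LP
#   min c^T x  subject to x > 0,  with known optimal value z_star:
#   F(x) = (c^T x - z_star)^(n+1) / prod(x_i).
# Minimizing F drives c^T x toward z_star while the denominator keeps x
# strictly interior. Illustrative assumptions only, not the paper's method.

import math

def iri_imai_potential(x, c, z_star):
    n = len(x)
    gap = sum(ci * xi for ci, xi in zip(c, x)) - z_star
    return gap ** (n + 1) / math.prod(x)

c = [1.0, 2.0]
z_star = 0.0                      # optimum of min c^T x over x >= 0 sits at x = 0

far = iri_imai_potential([1.0, 1.0], c, z_star)
near = iri_imai_potential([0.5, 0.25], c, z_star)       # smaller gap, still interior
boundary = iri_imai_potential([1e-6, 1.5], c, z_star)   # near the boundary, large gap
```

Here `near < far` (progress toward the optimum lowers F) while `boundary` is enormous (the product in the denominator penalizes premature approach to the boundary), which is the mechanism Newton's method exploits in this family of algorithms.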