156 research outputs found

    Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers

    In this paper we describe a unified scheme for implementing an interior point method (IPM) over a range of computer architectures. In the inner iteration of the IPM a search direction is computed using Newton's method. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice between direct and indirect methods for the solution of this system, and the design of data structures to take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the system within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
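    As a rough illustration of the linear algebra involved (a minimal sketch, not the authors' code), the snippet below forms the SSPD normal-equations matrix A D^2 A^T that a primal-dual IPM assembles at each Newton step for a standard-form LP, and solves it both with a direct factorization and with an iterative (indirect) method. The matrix sizes, the scaling D^2 = X S^{-1} and the use of SciPy are assumptions made only for this example.

```python
# Hedged sketch: one Newton-step linear solve inside an interior point method.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ipm_normal_equations_step(A, x, s, rhs):
    """Solve (A D^2 A^T) dy = rhs with D^2 = diag(x / s).

    A   : sparse m x n constraint matrix of a standard-form LP
    x,s : current strictly positive primal / dual-slack iterates
    rhs : right-hand side assembled from the Newton-system residuals
    """
    D2 = sp.diags(x / s)               # scaling matrix D^2 = X S^{-1}
    M = (A @ D2 @ A.T).tocsc()         # SSPD normal-equations matrix

    # Direct method: sparse LU (an SPD code would normally use Cholesky;
    # LU is used here only because it ships with SciPy).
    dy_direct = spla.splu(M).solve(rhs)

    # Indirect method: conjugate gradients, the classic iterative choice
    # for SPD systems on massively parallel hardware.
    dy_iter, info = spla.cg(M, rhs, maxiter=500)
    return dy_iter if info == 0 else dy_direct

# Tiny example: identity columns guarantee full row rank of A.
rng = np.random.default_rng(0)
A = sp.hstack([sp.eye(5), sp.random(5, 7, density=0.4, random_state=0)]).tocsr()
x = rng.uniform(0.5, 2.0, 12)
s = rng.uniform(0.5, 2.0, 12)
rhs = rng.standard_normal(5)
print(ipm_normal_equations_step(A, x, s, rhs))
```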

    A dual version of Tardos's algorithm for linear programming

    By James B. Orlin. Bibliography: p. 11.

    On the worst case complexity of potential reduction algorithms for linear programming

    By Dimitris Bertsimas and Xiaodong Luo. Includes bibliographical references (p. 16-17). Supported by a Presidential Young Investigator Award (DDM-9158118) and by Draper Laboratory.

    The Dikin-Karmarkar Principle for Steepest Descent

    This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16565. Steepest feasible descent methods for inequality constrained optimization problems have commonly been plagued by short steps. The consequence of taking short steps is slow convergence to non-stationary points (zigzagging). In linear programming, both the projective algorithm of Karmarkar (1984) and its affine variant, originally proposed by Dikin (1967), can be viewed as steepest feasible descent methods. However, both of these algorithms have been demonstrated to be effective and seem to have overcome the problem of short steps. These algorithms share a common norm. It is this choice of norm, in the context of steepest feasible descent, that we refer to as the Dikin-Karmarkar Principle. This research develops mathematical theory to quantify the short-step behavior of Euclidean-norm steepest feasible descent methods and the avoidance of short steps for steepest feasible descent with respect to the Dikin-Karmarkar norm. While the theory is developed for linear programming problems with only nonnegativity constraints on the variables, our numerical experimentation demonstrates that this behavior also occurs for the more general linear program with equality constraints added. Our numerical results also suggest that taking longer steps is not sufficient to ensure the efficiency of a steepest feasible descent algorithm; the uniform way in which the Dikin-Karmarkar norm treats every boundary is important in obtaining satisfactory convergence.
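    As a small numerical illustration of the principle (my own hedged sketch, not the thesis's code), the snippet below compares the maximum feasible step along the Euclidean steepest descent direction with the step along the Dikin-Karmarkar (affine-scaling) direction d = -X^2 c for an LP with only nonnegativity constraints, when one variable is already close to its bound.

```python
# Hedged sketch: Euclidean vs. Dikin-Karmarkar steepest feasible descent
# for  minimize c^T x  subject to  x >= 0,  at an interior point x > 0.
# Steepest descent in the scaled norm ||v||_x = ||X^{-1} v|| gives d = -X^2 c.
import numpy as np

def step_to_boundary(x, d):
    """Largest alpha with x + alpha*d >= 0 (np.inf if no bound is hit)."""
    mask = d < 0
    if not mask.any():
        return np.inf
    return np.min(-x[mask] / d[mask])

c = np.array([1.0, 1.0])
x = np.array([1e-4, 1.0])        # one variable already near its bound

d_euclid = -c                    # ordinary (Euclidean) steepest descent
d_dikin = -(x**2) * c            # Dikin-Karmarkar / affine-scaling direction

print(step_to_boundary(x, d_euclid))   # ~1e-4: the small coordinate forces a short step
print(step_to_boundary(x, d_dikin))    # 1.0: the nearly-active bound no longer limits progress
```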

    On the convergence of the affine-scaling algorithm

    By Paul Tseng and Zhi-Quan Luo. Cover title. Includes bibliographical references (p. 20-22). Research partially supported by the National Science Foundation (NSF-ECS-8519058), the U.S. Army Research Office (DAAL03-86-K-0171), and the Science and Engineering Research Board of McMaster University.

    On implementation of a self-dual embedding method for convex programming.

    By Cheng Tak Wai, Johnny. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 59-62). Abstracts in English and Chinese. Contents:
    Chapter 1 Introduction (p.1)
    Chapter 2 Background (p.7): 2.1 Self-dual embedding (p.7); 2.2 Conic optimization (p.8); 2.3 Self-dual embedded conic optimization (p.9); 2.4 Connection with convex programming (p.11); 2.5 Chapter summary (p.15)
    Chapter 3 Implementation of the algorithm (p.17): 3.1 The new search direction (p.17); 3.2 Select the step-length (p.23); 3.3 The multi-constraint case (p.25); 3.4 Chapter summary (p.32)
    Chapter 4 Numerical results on randomly generated problems (p.34): 4.1 Single-constraint problems (p.35); 4.2 Multi-constraint problems (p.36); 4.3 Running time and the size of the problem (p.39); 4.4 Chapter summary (p.42)
    Chapter 5 Geometric optimization (p.45): 5.1 Geometric programming (p.45), with 5.1.1 Monomials and posynomials (p.45), 5.1.2 Geometric programming (p.46), 5.1.3 Geometric program in convex form (p.47); 5.2 Conic transformation (p.48); 5.3 Computational results of geometric optimization problem (p.50); 5.4 Chapter summary (p.55)
    Chapter 6 Conclusion (p.5)
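    For context, the self-dual embedding referred to in the title can be sketched as follows (my own illustration for the linear programming case; the thesis works with the more general conic/convex setting). Given the primal-dual pair (P) min c^T x, Ax = b, x >= 0 and (D) max b^T y, A^T y <= c, the homogeneous self-dual model is:

```latex
% Hedged sketch of the homogeneous self-dual embedding (LP case only).
\begin{aligned}
\text{find } (x,\,y,\,\tau)\quad \text{s.t.}\quad
  & A x - b\tau = 0, \\
  & -A^{\mathsf T} y + c\tau \ge 0, \\
  & \phantom{-}b^{\mathsf T} y - c^{\mathsf T} x \ge 0, \\
  & x \ge 0,\ \tau \ge 0 .
\end{aligned}
```

    A strictly complementary solution with tau > 0 recovers an optimal pair (x/tau, y/tau); if instead the slack of the last inequality is positive with tau = 0, primal or dual infeasibility is certified. The embedding always admits a trivial interior starting point, which is what makes it attractive for implementation.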

    Karmarkar's algorithm : a view from nonlinear programming

    Karmarkar's algorithm for linear programming has become a highly active field of research, because it is claimed to be supremely efficient for the solution of very large calculations, because it has polynomial-time complexity, and because its theoretical properties are interesting. We describe and study the algorithm in the usual way that employs projective transformations and that requires the linear programming problem to be expressed in a standard form, the only inequality constraints being simple bounds on the variables. We then eliminate the dependence on the transformations analytically, which gives the form of the algorithm that can be viewed as a barrier function method from nonlinear programming. In this case the directions of the changes to the variables are solutions of quadratic programming calculations that have no general inequality constraints. By using some of the equalities to eliminate variables, we find a way of applying the algorithm directly to linear programming problems in general form. Thus, except for the addition of at most two new variables that make all but one of the constraints homogeneous, there is no need to increase the original number of variables, even when there are very many constraints. We apply this procedure to a two-variable problem with an infinite number of constraints that are derived from tangents to the unit circle. We find that convergence occurs to a point that, unfortunately, is not the solution of the calculation. In finite cases, however, our way of treating general linear constraints directly does preserve all the convergence properties of the standard form of Karmarkar's algorithm.
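    To make the barrier-function viewpoint concrete, here is a short hedged sketch (my own Python illustration, not the paper's procedure) of the direction-finding subproblem for a standard-form LP: the step minimizes c^T d plus a quadratic term in the scaled norm subject to A d = 0, i.e. an equality-constrained QP with no general inequality constraints, as the abstract describes.

```python
# Hedged sketch: barrier-method view of a Karmarkar-type step for
#   min c^T x  s.t.  A x = b, x > 0.
# At an interior point x the step solves
#   min_d  c^T d + (mu/2) * d^T X^{-2} d   s.t.  A d = 0,
# whose solution is the scaled gradient projected onto the null space of A X.
import numpy as np

def barrier_step(A, c, x, mu):
    X = np.diag(x)
    AX = A @ X
    # Projection onto the null space of A X (dense, for illustration only).
    P = np.eye(len(x)) - AX.T @ np.linalg.solve(AX @ AX.T, AX)
    return -(1.0 / mu) * X @ P @ X @ c    # direction d satisfying A d = 0

# Tiny example: min x1 + 2*x2  subject to  x1 + x2 + x3 = 1, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([1.0, 2.0, 0.0])
x = np.array([0.3, 0.3, 0.4])             # strictly feasible interior point
d = barrier_step(A, c, x, mu=1.0)
print(np.round(d, 4), np.round(A @ d, 10))  # second output is ~0
```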