
    Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers

    In this paper we describe a unified scheme for implementing an interior point method (IPM) over a range of computer architectures. In the inner iteration of the IPM, a search direction is computed using Newton's method. Computationally, this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the system within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
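    As a concrete illustration of the inner iteration, the following is a minimal dense sketch of the SSPD solve behind the Newton direction; the sparse, architecture-specific data structures discussed in the paper are out of scope here, and the names A, x, z, r are illustrative assumptions rather than the paper's notation.

    ```python
    import numpy as np

    def ipm_newton_direction(A, x, z, r):
        """Sketch: compute an IPM search direction by solving the
        normal equations A D^2 A^T dy = r, with D^2 = diag(x/z).
        The coefficient matrix is symmetric positive definite, so a
        Cholesky factorization applies (sparse in a real solver)."""
        D2 = np.diag(x / z)            # primal-dual scaling matrix
        M = A @ D2 @ A.T               # SSPD normal-equations matrix
        L = np.linalg.cholesky(M)      # M = L L^T
        y = np.linalg.solve(L, r)      # forward substitution
        dy = np.linalg.solve(L.T, y)   # back substitution
        return dy
    ```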

    The Dikin-Karmarkar Principle for Steepest Descent

    This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16565. Steepest feasible descent methods for inequality constrained optimization problems have commonly been plagued by short steps. The consequence of taking short steps is slow convergence to non-stationary points (zigzagging). In linear programming, both the projective algorithm of Karmarkar (1984) and its affine variant, originally proposed by Dikin (1967), can be viewed as steepest feasible descent methods. However, both of these algorithms have been demonstrated to be effective and seem to have overcome the problem of short steps. These algorithms share a common norm. It is this choice of norm, in the context of steepest feasible descent, that we refer to as the Dikin-Karmarkar Principle. This research develops mathematical theory to quantify the short-step behavior of Euclidean-norm steepest feasible descent methods and the avoidance of short steps for steepest feasible descent with respect to the Dikin-Karmarkar norm. While the theory is developed for linear programming problems with only nonnegativity constraints on the variables, our numerical experimentation demonstrates that this behavior also occurs for the more general linear program with equality constraints added. Our numerical results also suggest that taking longer steps is not sufficient to ensure the efficiency of a steepest feasible descent algorithm. The uniform way in which the Dikin-Karmarkar norm treats every boundary is important in obtaining satisfactory convergence.
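    A minimal sketch contrasting the two directions discussed above, for min c^T x s.t. Ax = b, x > 0. This is the standard statement of the affine-scaling (Dikin) direction versus the Euclidean projected gradient, not code from the thesis, and all names are assumptions.

    ```python
    import numpy as np

    def dikin_direction(A, c, x):
        """Sketch: steepest feasible descent in the Dikin-Karmarkar norm
        ||X^{-1} d||. Rescale by X = diag(x), project the rescaled gradient
        onto the null space of A X, and map back. Components near the
        boundary are damped, which is what avoids short steps."""
        X = np.diag(x)
        B = A @ X                        # constraint matrix in scaled space
        g = X @ c                        # gradient in scaled space
        w = np.linalg.solve(B @ B.T, B @ g)
        d_scaled = g - B.T @ w           # projection of g onto null(B)
        return -X @ d_scaled             # descent direction in x-space

    def euclidean_direction(A, c):
        """Euclidean-norm counterpart: project -c onto null(A); near the
        boundary this direction forces the short steps (zigzagging)
        quantified in the theory above."""
        w = np.linalg.solve(A @ A.T, A @ c)
        return -(c - A.T @ w)
    ```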

    On the worst case complexity of potential reduction algorithms for linear programming

    By Dimitris Bertsimas and Xiaodong Luo. Includes bibliographical references (p. 16-17). Supported by a Presidential Young Investigator Award (DDM-9158118) and by Draper Laboratory.

    Theoretical Efficiency of A Shifted Barrier Function Algorithm for Linear Programming

    This paper examines the theoretical efficiency of solving a standard-form linear program by solving a sequence of shifted-barrier problems of the form

    minimize  c^T x - ε Σ_{j=1}^{n} ln(x_j + ε h_j)   subject to  Ax = b,  x + ε h > 0,

    for a given and fixed shift vector h > 0, and for a sequence of values of ε > 0 that converges to zero. The resulting sequence of solutions to the shifted-barrier problems will converge to a solution of the standard-form linear program. The advantage of the shifted-barrier approach is that a starting feasible solution is unnecessary, and there is no need for a Phase I-Phase II approach to solving the linear program, either directly or through the addition of an artificial variable. Furthermore, the algorithm can be initiated with a "warm start," i.e., an initial guess of a primal solution x that need not be feasible. The number of iterations needed to solve the linear program to a desired level of accuracy will depend on a measure of how close the initial solution x is to being feasible. The number of iterations will also depend on the judicious choice of the shift vector h. If an approximate center of the dual feasible region is known, then h can be chosen so that the guaranteed fractional decrease in ε at each iteration is (1 - 1/(6√n)), which contributes a factor of 6√n to the number of iterations needed to solve the problem. The paper also analyzes the complexity of computing an approximate center of the dual feasible region from a "warm start," i.e., an initial (possibly infeasible) guess π of a solution to the center problem of the dual. Key words: linear program, interior-point algorithm, center, barrier function, shifted-barrier function, Newton step.
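    A minimal sketch of one shifted-barrier subproblem's objective and gradient, following the reconstruction above; ε, h, and x are illustrative inputs, and the paper's Newton machinery is omitted.

    ```python
    import numpy as np

    def shifted_barrier(c, x, h, eps):
        """Sketch: the shifted-barrier objective
           c^T x - eps * sum_j ln(x_j + eps * h_j),
        defined whenever x + eps*h > 0, so x itself need not be
        feasible (the 'warm start' property described above)."""
        s = x + eps * h
        assert np.all(s > 0), "x + eps*h must be strictly positive"
        f = c @ x - eps * np.sum(np.log(s))
        grad = c - eps / s               # gradient with respect to x
        return f, grad
    ```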

    Karmarkar's algorithm: a view from nonlinear programming

    Karmarkar's algorithm for linear programming has become a highly active field of research, because it is claimed to be supremely efficient for the solution of very large calculations, because it has polynomial-time complexity, and because its theoretical properties are interesting. We describe and study the algorithm in the usual way that employs projective transformations and that requires the linear programming problem to be expressed in a standard form, the only inequality constraints being simple bounds on the variables. We then eliminate the dependence on the transformations analytically, which gives the form of the algorithm that can be viewed as a barrier function method from nonlinear programming. In this case the directions of the changes to the variables are solutions of quadratic programming calculations that have no general inequality constraints. By using some of the equalities to eliminate variables, we find a way of applying the algorithm directly to linear programming problems in general form. Thus, except for the addition of at most two new variables that make all but one of the constraints homogeneous, there is no need to increase the original number of variables, even when there are very many constraints. We apply this procedure to a two-variable problem with an infinite number of constraints that are derived from tangents to the unit circle. We find that convergence occurs to a point that, unfortunately, is not the solution of the calculation. In finite cases, however, our way of treating general linear constraints directly does preserve all the convergence properties of the standard form of Karmarkar's algorithm.
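    Since the quadratic programming calculations mentioned above have no general inequality constraints, each direction comes from a single KKT solve. The following is a minimal sketch under an assumed model min g^T d + ½ d^T H d s.t. A d = 0; the particular g and H arising from the paper's barrier-function view are not reproduced here.

    ```python
    import numpy as np

    def equality_qp_direction(H, g, A):
        """Sketch: solve min_d g^T d + 0.5 d^T H d  s.t.  A d = 0
        via its KKT system; no general inequality constraints appear,
        matching the barrier-function view described above."""
        m, n = A.shape
        K = np.block([[H, A.T],
                      [A, np.zeros((m, m))]])   # KKT matrix
        rhs = np.concatenate([-g, np.zeros(m)])
        sol = np.linalg.solve(K, rhs)
        return sol[:n]                          # search direction d
    ```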

    On the Complexity of Linear Programming

    In this paper we give a simple treatment of the complexity of linear programming. We describe the short-step primal-dual path-following algorithm and show that it solves the linear programming problem.
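    A minimal sketch of one iteration of the short-step primal-dual path-following algorithm, assuming a strictly feasible starting triple (x, y, s); the centering parameter σ = 1 - δ/√n is the classical short-step choice, and all variable names are illustrative.

    ```python
    import numpy as np

    def short_step_iteration(A, x, y, s, delta=0.25):
        """Sketch: one short-step primal-dual iteration. Target
        mu+ = sigma * mu with sigma = 1 - delta/sqrt(n), then take a
        full Newton step on the perturbed KKT conditions
            A dx = 0,  A^T dy + ds = 0,  S dx + X ds = sigma*mu*e - XSe."""
        m, n = A.shape
        mu = x @ s / n                       # duality measure
        sigma = 1 - delta / np.sqrt(n)       # short-step centering
        d = x / s
        rhs_xs = sigma * mu - x * s          # centering residual
        # Eliminate dx and ds to obtain the normal equations in dy.
        dy = np.linalg.solve(A @ np.diag(d) @ A.T, -A @ (rhs_xs / s))
        ds = -A.T @ dy
        dx = (rhs_xs - x * ds) / s
        return x + dx, y + dy, s + ds
    ```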

    Polynomial-time algorithms for linear programming based only on primal scaling and projected gradients of a potential function

    By Robert M. Freund. Bibliography: p. 28-29.

    A Regularized Jacobi Method for Large-Scale Linear Programming

    A parallel algorithm based on Jacobi iterations is proposed to minimize the augmented Lagrangian functions of the multiplier method for large-scale linear programming. Sparsity is efficiently exploited for determining stepsizes (column-wise) for the Jacobi iterations. Linear convergence is shown, with a convergence ratio that depends on sparsity but not on the penalty parameter or the problem size. Employing simulation of parallel computations, an experimental code is tested extensively on 68 Netlib problems. Results are compared with the simplex method, an interior point algorithm and a Gauss-Seidel approach. We observe that the speedup over the simplex method generally increases with problem size, while the parallel solution times increase slowly, if at all. Our preliminary results compared with the other two methods are highly encouraging as well.
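    A minimal serial sketch of the underlying iteration: a damped Jacobi sweep on the augmented Lagrangian of the LP, followed by a multiplier update. The column-wise stepsizes below only loosely mirror the sparsity-based stepsizes described above; ρ, ω, and the update rule are simplified assumptions, not the paper's method.

    ```python
    import numpy as np

    def jacobi_al_step(A, b, c, x, lam, rho=1.0, omega=0.5):
        """Sketch: one Jacobi sweep on the augmented Lagrangian
           L(x, lam) = c^T x + lam^T (Ax - b) + (rho/2) ||Ax - b||^2
        for min c^T x s.t. Ax = b, x >= 0. All coordinates are updated
        simultaneously (hence parallelizable), each with a stepsize from
        its own column curvature rho * ||A_j||^2; omega damps the sweep."""
        r = A @ x - b
        grad = c + A.T @ lam + rho * (A.T @ r)     # coordinate gradients
        curv = rho * np.maximum(np.sum(A * A, axis=0), 1e-12)
        x_new = np.maximum(0.0, x - omega * grad / curv)
        lam_new = lam + rho * (A @ x_new - b)      # multiplier update
        return x_new, lam_new
    ```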