31 research outputs found

    Projective transformations for interior-point algorithms, and a superlinearly convergent algorithm for the w-center problem

    Includes bibliographical references. By Robert M. Freund.

    Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers

    In this paper we describe a unified scheme for implementing an interior point method (IPM) over a range of computer architectures. In the inner iteration of the IPM, a search direction is computed using Newton's method. Computationally this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the system within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
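    The inner-iteration solve the abstract describes, a sparse symmetric positive definite system arising from Newton's method, can be sketched roughly as follows. The normal-equations form, the scaling matrix `D^2 = diag(x/s)`, and the dense Cholesky solve are illustrative assumptions for a minimal example, not the paper's actual data structures or architecture-specific implementation:

```python
import numpy as np

def ipm_newton_direction(A, x, s, mu):
    """Sketch of one inner-iteration direction for an interior point method.

    Forms the symmetric positive definite normal-equations matrix
    M = A D^2 A^T with D^2 = diag(x/s), then solves M dy = rhs by a
    direct (Cholesky) method. Hypothetical illustration only.
    """
    D2 = np.diag(x / s)            # scaling matrix D^2 (SPD for x, s > 0)
    M = A @ D2 @ A.T               # SSPD coefficient matrix
    rhs = A @ (x - mu / s)         # illustrative right-hand side
    L = np.linalg.cholesky(M)      # direct factorization M = L L^T
    dy = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    return dy
```

    In a real sparse solver the Cholesky factor would be computed with a sparse, fill-reducing factorization (or replaced by an iterative method), which is exactly the direct-versus-indirect trade-off the paper examines.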

    Complexity analysis of a linear complementarity algorithm based on a Lyapunov function

    Cover title. Revised version of LIDS-P-1819. Includes bibliographical references. Partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems) under contract DAAL03-86-K-0171, and by the National Science Foundation under grant NSF-ECS-8519058. By Paul Tseng.

    Projective Transformations for Interior Point Methods, Part II: Analysis of An Algorithm for Finding the Weighted Center of a Polyhedral System

    In Part II of this study, the basic theory of Part I is applied to the problem of finding the w-center of a polyhedral system X. We present a projective transformation algorithm, analogous to but more general than Karmarkar's algorithm, for finding the w-center of X. The algorithm exhibits superlinear convergence. At each iteration, the algorithm either improves the objective function (the weighted logarithmic barrier function) by a fixed amount, or improves it at a linear rate. This linear rate of improvement increases to unity, and so the algorithm is superlinearly convergent. The algorithm also updates an upper bound on the optimal objective value of the weighted logarithmic barrier function at each iteration. The direction chosen at each iteration is shown to be positively proportional to the projected Newton direction. This has two consequences. On the theoretical side, this broadens a result of Bayer and Lagarias regarding the connection between projective transformation methods and Newton's method. In terms of algorithms, it means that our algorithm specializes to Vaidya's algorithm if it is used with a line search, and so we see that Vaidya's algorithm is superlinearly convergent as well. Finally, we show how to use the algorithm to construct well-scaled containing and contained ellipsoids centered at near-optimal solutions to the w-center problem. After a fixed number of iterations, the current iterate of the algorithm can be used as an approximate w-center, and one can easily construct well-scaled containing and contained ellipsoids centered at the current iterate, whose scale factor is of the same order as for the w-center itself. Keywords: analytic center, w-center, projective transformation, Newton method, ellipsoid, linear program.
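    The weighted logarithmic barrier function the abstract refers to, and the Newton direction it is maximized along, can be written down concretely. The sketch below is a plain (untransformed) Newton step for the objective F(x) = sum_i w_i ln(b_i - a_i^T x) on the polyhedron {x : Ax < b}; the function name and the dense linear algebra are hypothetical, and the paper's projective transformation algorithm is not reproduced here:

```python
import numpy as np

def w_center_newton_step(A, b, w, x):
    """Newton (ascent) direction for the weighted log barrier
    F(x) = sum_i w_i * ln(b_i - a_i^T x), whose maximizer is the w-center.
    Hypothetical sketch, not the paper's projective-transformation method.
    """
    s = b - A @ x                      # slacks; x must be interior
    assert np.all(s > 0), "x must satisfy Ax < b"
    grad = -A.T @ (w / s)              # gradient of F
    H = -(A.T * (w / s**2)) @ A        # Hessian of F (negative definite)
    return np.linalg.solve(-H, grad)   # Newton ascent direction
```

    At the w-center itself the gradient vanishes and the step is zero; away from it, the step points back toward the center, which is the behavior the algorithm's projected Newton direction refines.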

    A simple polynomial-time algorithm for convex quadratic programming

    Caption title. Includes bibliographical references. This research is partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems), contract DAAL03-86-K-0171, and by the National Science Foundation, grant NSF-ECS-8519058. By Paul Tseng.

    Karmarkar's algorithm : a view from nonlinear programming

    Karmarkar's algorithm for linear programming has become a highly active field of research, because it is claimed to be supremely efficient for the solution of very large calculations, because it has polynomial-time complexity, and because its theoretical properties are interesting. We describe and study the algorithm in the usual way that employs projective transformations and that requires the linear programming problem to be expressed in a standard form, the only inequality constraints being simple bounds on the variables. We then eliminate the dependence on the transformations analytically, which gives the form of the algorithm that can be viewed as a barrier function method from nonlinear programming. In this case the directions of the changes to the variables are solutions of quadratic programming calculations that have no general inequality constraints. By using some of the equalities to eliminate variables, we find a way of applying the algorithm directly to linear programming problems in general form. Thus, except for the addition of at most two new variables that make all but one of the constraints homogeneous, there is no need to increase the original number of variables, even when there are very many constraints. We apply this procedure to a two-variable problem with an infinite number of constraints that are derived from tangents to the unit circle. We find that convergence occurs to a point that, unfortunately, is not the solution of the calculation. In finite cases, however, our way of treating general linear constraints directly does preserve all the convergence properties of the standard form of Karmarkar's algorithm.
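    The infinite constraint family mentioned in the abstract, tangents to the unit circle, has a simple explicit form: the tangent half-plane at angle t is cos(t) x1 + sin(t) x2 <= 1. The sketch below samples a finite truncation of that family; the sampling scheme is an illustrative assumption, not the authors' actual test problem:

```python
import numpy as np

def tangent_constraints(k):
    """Finite sample of the tangent half-planes to the unit circle,
    cos(t)*x1 + sin(t)*x2 <= 1, at k equally spaced angles t.
    Hypothetical truncation of the infinite family in the abstract.
    """
    t = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
    A = np.column_stack([np.cos(t), np.sin(t)])   # row i = outward normal at angle t_i
    b = np.ones(k)
    return A, b
```

    The feasible region of the finite sample is a polygon circumscribing the unit disc; as k grows it shrinks toward the disc itself, which is the limiting feasible set of the infinite family.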

    Some Modifications and Extensions of Karmarkar's Main Algorithm with Computational Experiences

    The Karmarkar algorithm and its modifications are studied in this thesis. A modified line search algorithm, with the search bound extended to the facet of the simplex, is developed and implemented. Using this modification, a modified row partition method is tested. Both algorithms are coded in Fortran 77 and their performance is compared with that of the original Karmarkar algorithm. The modifications are promising and other extensions are encouraged. Computing and Information Science

    An Implementation of a Revised Karmarkar's Method

    We will show a variant of Karmarkar's algorithm for LPs with sparse matrices. We deal with the standard-form LP. Starting from an initial interior point, one iteration of our method consists of the choice of a basis, factorization of the basis, an optimality test, a reduced-gradient step, the conjugate gradient method, and determination of the next iterate. A combination of the reduced gradient and the conjugate gradient method is used for generating the steepest descent direction of the transformed objective function. Bases which are maintained and updated throughout the iterations are effectively utilized. As a basis, we choose the linearly independent columns of the coefficient matrix corresponding to the decreasing order of the variables. The basis is then factorized in LU form, which is used in the computations throughout the iteration. Preliminary numerical experiments will be reported. Emphasis is laid on the implementational issues of the sparse basis.
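    The conjugate gradient method used in the iteration above needs only a matrix-vector product, which is what makes it attractive for the sparse systems the abstract targets. A minimal textbook sketch of that solver, assuming a symmetric positive definite system, follows; it is a generic illustration, not the authors' implementation with its reduced-gradient coupling:

```python
import numpy as np

def conjugate_gradient(matvec, rhs, tol=1e-10, max_iter=200):
    """Plain conjugate gradient for a symmetric positive definite system,
    given only a matrix-vector product. Generic sketch of the kind of
    iterative solve the abstract combines with a reduced-gradient step.
    """
    x = np.zeros_like(rhs)
    r = rhs - matvec(x)            # initial residual
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate update of the direction
        rs = rs_new
    return x
```

    Because only `matvec` is needed, the basis factorized in LU form can supply the product (or a preconditioner) without ever forming the system matrix densely.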