
    Theoretical Efficiency of A Shifted Barrier Function Algorithm for Linear Programming

    This paper examines the theoretical efficiency of solving a standard-form linear program by solving a sequence of shifted-barrier problems of the form: minimize over x the objective c^T x − ε ∑_{j=1}^{n} ln(x_j + ε h_j), subject to Ax = b and x + ε h > 0, for a given and fixed shift vector h > 0 and for a sequence of values of ε > 0 that converges to zero. The resulting sequence of solutions to the shifted-barrier problems converges to a solution of the standard-form linear program. The advantage of the shifted-barrier approach is that a starting feasible solution is unnecessary, and there is no need for a Phase I-Phase II approach to solving the linear program, either directly or through the addition of an artificial variable. Furthermore, the algorithm can be initiated with a "warm start," i.e., an initial guess of a primal solution x that need not be feasible. The number of iterations needed to solve the linear program to a desired level of accuracy depends on a measure of how close the initial solution x is to being feasible. The number of iterations also depends on the judicious choice of the shift vector h. If an approximate center of the dual feasible region is known, then h can be chosen so that the guaranteed fractional decrease in ε at each iteration is (1 − 1/(6√n)), which contributes a factor of 6√n to the number of iterations needed to solve the problem. The paper also analyzes the complexity of computing an approximate center of the dual feasible region from a "warm start," i.e., an initial (possibly infeasible) guess π of a solution to the center problem of the dual. Key Words: linear program, interior-point algorithm, center, barrier function, shifted-barrier function, Newton step
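
    The following is a minimal sketch of the outer loop implied by the abstract: repeatedly minimize the shifted-barrier objective c^T x − ε ∑_j ln(x_j + ε h_j) over Ax = b while shrinking ε. It is not the paper's Newton-step algorithm; the solver choice, the ε schedule, and all names are illustrative assumptions.

```python
# Minimal sketch of the shifted-barrier outer loop; the paper's algorithm takes
# Newton steps with provable complexity rather than calling a general NLP solver.
import numpy as np
from scipy.optimize import minimize

def shifted_barrier_lp(c, A, b, h, x0, eps0=1.0, shrink=0.5, tol=1e-6):
    """Approximately solve  min c^T x  s.t. Ax = b, x >= 0  via a sequence of
    shifted-barrier problems  min c^T x - eps*sum(log(x + eps*h))  s.t. Ax = b.
    The warm start x0 need not satisfy x >= 0; it only needs x0 + eps0*h > 0."""
    x, eps = np.asarray(x0, dtype=float), eps0
    equality = {"type": "eq", "fun": lambda x: A @ x - b}   # encodes Ax = b
    while eps > tol:
        def barrier_obj(x, eps=eps):
            # assumes iterates keep x + eps*h > 0; a robust code would safeguard this
            return c @ x - eps * np.sum(np.log(x + eps * h))
        x = minimize(barrier_obj, x, constraints=[equality], method="SLSQP").x
        eps *= shrink                        # drive the shift/barrier weight to zero
    return x
```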

    A Variable Metric Variant of the Karmarkar Algorithm for Linear Programming

    The most time-consuming part of the Karmarkar algorithm for linear programming is the projection of a vector onto the nullspace of a matrix that changes at each iteration. We present a variant of the Karmarkar algorithm that uses standard variable-metric techniques in an innovative way to approximate this projection. In limited tests, this modification greatly reduces the number of matrix factorizations needed for the solution of linear programming problems.
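
    For context, the sketch below shows the exact nullspace projection that dominates each iteration; the paper's contribution, not reproduced here, is to approximate this projection with variable-metric updates so a full refactorization is not needed every time. The Cholesky-based formulation and the names B, v are illustrative assumptions.

```python
# The expensive step the paper approximates: projecting a vector onto the nullspace
# of the rescaled constraint matrix, which changes at every iteration.  This sketch
# computes the exact projection via a Cholesky factorization of B B^T.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def nullspace_projection(B, v):
    """Return (I - B^T (B B^T)^{-1} B) v, the orthogonal projection of v onto null(B).
    Assumes B has full row rank."""
    factor = cho_factor(B @ B.T)          # O(m^3) work, repeated whenever B changes
    return v - B.T @ cho_solve(factor, B @ v)
```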

    A Regularized Jacobi Method for Large-Scale Linear Programming

    A parallel algorithm based on Jacobi iterations is proposed to minimize the augmented Lagrangian functions of the multiplier method for large-scale linear programming. Sparsity is efficiently exploited to determine column-wise stepsizes for the Jacobi iterations. Linear convergence is shown, with a convergence ratio that depends on sparsity but not on the penalty parameter or the problem size. Using a simulation of parallel computations, an experimental code is tested extensively on 68 Netlib problems. Results are compared with the simplex method, an interior point algorithm, and a Gauss-Seidel approach. We observe that speedup against the simplex method generally increases with the problem size, while the parallel solution times increase slowly, if at all. Our preliminary results compared with the other two methods are highly encouraging as well.
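
    A generic sketch of the multiplier method with Jacobi inner iterations for a standard-form LP is shown below. The diagonal scaling, damping factor, and iteration counts are illustrative assumptions; the paper's sparsity-based column-wise stepsizes are not reproduced.

```python
# Generic sketch of multiplier-method / Jacobi iterations for  min c^T x  s.t. Ax = b, x >= 0.
import numpy as np

def regularized_jacobi_lp(c, A, b, rho=1.0, omega=0.1, mu=1e-3, outer=100, inner=200):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    d = rho * np.sum(A * A, axis=0) + mu          # per-column (Jacobi) scaling + regularization
    for _ in range(outer):                        # multiplier (outer) updates
        for _ in range(inner):                    # Jacobi (inner) sweeps, parallel over columns
            g = c + A.T @ (y + rho * (A @ x - b)) # gradient of the augmented Lagrangian in x
            x = np.maximum(0.0, x - omega * g / d)  # damped, projected Jacobi step
        y = y + rho * (A @ x - b)                 # dual / multiplier step
    return x, y
```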

    Adaptive Use of Iterative Methods in Interior Point Methods for Linear Programming

    In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the updating of preconditioners computed for previous barrier parameters. The second is an adaptive automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given the costs observed in determining earlier directions. These ideas are tested by applying a modified version of the OB1-R code of Lustig, Marsten, and Shanno to a variety of problems from the NETLIB and other collections. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained. (Also cross-referenced as UMIACS-TR-95-111.)
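
    As a rough illustration of the adaptive idea (not the actual OB1-R modification), the sketch below picks among a direct solve, an iterative solve with a rebuilt preconditioner, and an iterative solve with an updated preconditioner, using costs observed for earlier search directions. The cost model and the drift penalty are invented for illustration.

```python
# Illustrative cost-model decision between solver strategies for the next search direction.
from dataclasses import dataclass

@dataclass
class SolverStats:
    direct_time: float        # last measured time for a direct factorization + solve
    iter_time_reinit: float   # last time for iterative solve with a rebuilt preconditioner
    iter_time_update: float   # last time for iterative solve with an updated preconditioner
    drift: float              # how much the barrier parameter / KKT system has changed

def choose_strategy(s: SolverStats, drift_penalty: float = 2.0) -> str:
    # Predicted cost of reusing the old preconditioner grows as the system drifts.
    predicted_update = s.iter_time_update * (1.0 + drift_penalty * s.drift)
    best = min(("direct", s.direct_time),
               ("iterative-reinit", s.iter_time_reinit),
               ("iterative-update", predicted_update),
               key=lambda pair: pair[1])
    return best[0]
```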

    Calculation of chemical and phase equilibria

    Bibliography: pages 167-169. The computation of chemical and phase equilibria is an essential aspect of chemical engineering design and development. Important applications range from flash calculations to distillation and pyrometallurgy. Despite the firm theoretical foundations on which the theory of chemical equilibrium is based, there are two major difficulties that prevent the equilibrium state from being accurately determined. The first of these hindrances is the inaccuracy or total absence of pertinent thermodynamic data. The second is the complexity of the required calculation. It is the latter consideration which is the sole concern of this dissertation.

    Existence and computation of a Cournot-Walras equilibrium

    In this paper we present a general approach to existence problems in Cournot-Walras (CW) economies, based on mathematical programming theory. We propose a definition of the decision problem of firms which avoids the profit maximization rule as the only rational criterion for the firms and uses the excess demand function instead of the inverse demand function. We prove the existence of a CW equilibrium and we state practical conditions to characterize a CW equilibrium. We also propose efficient algorithms for computing CW equilibria. Finally, we consider some extensions such as externalities and the Stackelberg, collusive, and Nash equilibrium models.
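
    As a loose illustration of computing an equilibrium by fixed-point iteration, the sketch below runs damped best responses in a textbook Cournot game with a linear inverse demand and linear costs. The paper's Cournot-Walras setting instead works with an excess demand function, and all parameter values here are made up.

```python
# Textbook Cournot-Nash best-response iteration with inverse demand p(Q) = a - b*Q
# and constant marginal costs; purely illustrative of fixed-point equilibrium computation.
import numpy as np

def cournot_nash(a=100.0, b=1.0, costs=(10.0, 15.0, 20.0), iters=200):
    c = np.array(costs, dtype=float)
    q = np.zeros_like(c)
    for _ in range(iters):
        q_others = q.sum() - q                                   # rivals' total output per firm
        br = np.maximum(0.0, (a - c - b * q_others) / (2.0 * b)) # each firm's best response
        q = 0.5 * q + 0.5 * br                                   # damping keeps the iteration stable
    price = a - b * q.sum()
    return q, price
```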