
    High performance simplex solver

    The dual simplex method is frequently the most efficient technique for solving linear programming (LP) problems. This thesis describes an efficient implementation of the sequential dual simplex method and the design and development of two parallel dual simplex solvers. In serial, many advanced techniques for the (dual) simplex method are implemented, including sparse LU factorization, hyper-sparse linear system solution techniques, efficient approaches to updating LU factors, and sophisticated dual simplex pivoting rules. These techniques, some of which are novel, lead to serial performance comparable with the best public-domain dual simplex solver, providing a solid foundation for the simplex parallelization. During the implementation of the sequential dual simplex solver, the study of classic LU factor update techniques led to the development of three novel update variants. One of them is comparable to the most efficient established approach but is much simpler to implement, and the other two are especially useful for one of the parallel simplex solvers. In addition, the study of dual simplex pivoting rules identifies, and motivates further investigation of, how hyper-sparsity may be promoted. In parallel, two high performance simplex solvers are designed and developed. One, based on a lesser-known dual pivoting rule called suboptimization, exploits parallelism across multiple iterations (PAMI). The other, based on the regular dual pivoting rule, exploits purely single iteration parallelism (SIP). The performance of PAMI is comparable to that of a world-leading commercial simplex solver, and SIP is frequently complementary to PAMI, achieving speedup on instances where PAMI results in slowdown.
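
    For orientation, the skeleton of the dual simplex iteration that the thesis refines can be sketched in a few lines of Python. This is a minimal dense version with textbook pivot rules; the sparse LU factorization, factor updates, hyper-sparsity exploitation, and parallel schemes (PAMI, SIP) described above are exactly what it omits, and the function name and tolerances are illustrative.

    import numpy as np

    def dual_simplex(A, b, c, basis):
        # Minimal dense dual simplex for min c@x s.t. A@x = b, x >= 0.
        # `basis` must index a dual-feasible basis (all reduced costs >= 0).
        # No anti-cycling safeguard: a sketch, not a production solver.
        m, n = A.shape
        basis = list(basis)
        while True:
            B = A[:, basis]
            xB = np.linalg.solve(B, b)               # basic primal values
            if np.all(xB >= -1e-9):                  # primal feasible: optimal
                x = np.zeros(n)
                x[basis] = xB
                return x
            r = int(np.argmin(xB))                   # leaving row: most infeasible
            nonbasic = [j for j in range(n) if j not in basis]
            y = np.linalg.solve(B.T, c[basis])       # dual values
            cbar = c[nonbasic] - y @ A[:, nonbasic]  # reduced costs, all >= 0
            e_r = np.zeros(m)
            e_r[r] = 1.0
            alpha = np.linalg.solve(B.T, e_r) @ A[:, nonbasic]  # pivot row
            cand = np.flatnonzero(alpha < -1e-9)
            if cand.size == 0:
                raise ValueError("primal infeasible")
            # Dual ratio test: keep reduced costs nonnegative after the pivot.
            q = cand[np.argmin(cbar[cand] / -alpha[cand])]
            basis[r] = nonbasic[q]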

    The double pivot simplex method

    The simplex method, created by George Dantzig, optimally solves a linear program by pivoting. Dantzig's pivots move from one basic feasible solution to another by exchanging exactly one basic variable with a nonbasic variable. This paper introduces the double pivot simplex method (DPSM), which can transition between basic feasible solutions using two variables instead of one. Double pivots are performed by identifying the optimal basis of a two variable linear program using a new method called the slope algorithm. The slope algorithm is fast and allows an iteration of DPSM to have the same theoretical running time as an iteration of the simplex method. Computational experiments demonstrate that DPSM decreases the average number of pivots by approximately 41% on a small set of benchmark instances.
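
    The two-variable subproblem at the heart of a double pivot can be made concrete. The sketch below is not the slope algorithm itself: it brute-forces the answer by intersecting every pair of constraint boundaries, which yields the same optimal basis the slope algorithm identifies by comparing constraint slopes against the objective slope. The function name and tolerance are illustrative, and a bounded feasible region is assumed.

    import itertools
    import numpy as np

    def best_vertex_2d(A, b, c, tol=1e-9):
        # Solve max c@x s.t. A@x <= b for x in R^2 by enumerating every
        # intersection of two constraint boundaries (O(m^2) brute force;
        # the slope algorithm avoids this enumeration).
        best_x, best_val, best_pair = None, -np.inf, None
        for i, j in itertools.combinations(range(len(b)), 2):
            M = A[[i, j], :]
            if abs(np.linalg.det(M)) < tol:
                continue                          # parallel boundaries: no vertex
            x = np.linalg.solve(M, b[[i, j]])
            if np.all(A @ x <= b + tol) and c @ x > best_val:
                best_x, best_val, best_pair = x, c @ x, (i, j)
        return best_x, best_val, best_pair        # vertex, value, optimal basis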

    Implementation of an Interior Point Method with Basis Preconditioning


    Large-Scale Linear Programming

    During the week of June 2-6, 1980, the System and Decision Sciences Area of the International Institute for Applied Systems Analysis organized a workshop on large-scale linear programming in collaboration with the Systems Optimization Laboratory (SOL) of Stanford University, co-sponsored by the Mathematical Programming Society (MPS). The participants were invited from among those who actively contribute to research in large-scale linear programming methodology, including the development of algorithms and software. The first volume of the Proceedings contains five chapters. The first is a historical review by George B. Dantzig of his own and related research on time-staged linear programming problems. Chapter 2 contains five papers which address various techniques for exploiting sparsity and degeneracy in the now standard LU decomposition of the basis used with the simplex algorithm for standard (unstructured) problems. The six papers of Chapter 3 concern variants of the simplex method which take into account, through basis factorization, the specific block-angular structure of constraint matrices generated by dynamic and/or stochastic linear programs. In Chapter 4, five papers address extensions of the original Dantzig-Wolfe procedure for exploiting the structure of planning problems by decomposing the original LP into LP subproblems coordinated by a relatively simple LP master problem of a certain type. Chapter 5 contains four papers which constitute a mini-symposium on the now famous Shor-Khachian ellipsoidal method applied to both real and integer linear programs. The first chapter of Volume 2 contains three papers on non-simplex methods for linear programming. The remaining chapters of Volume 2 concern topics of present interest in the field. A bibliography of large-scale linear programming research completes Volume 2.

    Advances in design and implementation of optimization software

    Developing optimization software that is capable of solving large and complex real-life problems is a huge effort. It is based on deep knowledge of four areas: the theory of optimization algorithms, relevant results of computer science, principles of software engineering, and computer technology. The paper highlights the diverse requirements of optimization software and introduces the ingredients needed to fulfill them. After a review of the hardware/software environment, it gives a survey of computationally successful techniques for continuous optimization. It also outlines the perspective offered by parallel computing and stresses the importance of optimization modeling systems. The many references included are intended both to give due credit to results in the field of optimization software and to help readers obtain more detailed information on issues of interest.

    Two dimensional search algorithms for linear programming

    Linear programming is one of the most important classes of optimization problems. These mathematical models have been used by academics and practitioners to solve numerous real world applications. Quickly solving linear programs impacts decision makers in both the public and private sectors. Substantial research has been performed to solve this class of problems faster, and the vast majority of solution techniques can be categorized as one dimensional search algorithms: these methods successively move from one solution to another by solving a one dimensional subspace linear program at each iteration. This dissertation proposes novel algorithms that move between solutions by repeatedly solving a two dimensional subspace linear program. Computational experiments demonstrate the potential of these newly developed algorithms and show an average improvement of nearly 25% in solution time when compared to the corresponding one dimensional search versions.

    The core concept of these two dimensional search algorithms is a fast technique to determine an optimal basis and an optimal solution to linear programs with only two variables. This method, called the slope algorithm, compares the slope formed by the objective function with the slope formed by each constraint to determine a pair of constraints that intersect at an optimal basis and an optimal solution. The slope algorithm is implemented within a simplex framework to perform two dimensional searches, resulting in the double pivot simplex method. Unlike the well-known simplex method, the double pivot simplex method simultaneously pivots up to two basic variables with two nonbasic variables at each iteration. The theoretical computational complexity of the double pivot simplex method is identical to that of the simplex method. Computational results show that this new algorithm reduces the number of pivots needed to solve benchmark instances by approximately 40% when compared to the classical implementation of the simplex method, and by 20% when compared to the primal simplex implementation of CPLEX, a high performance mathematical programming solver. Solution times of some random linear programs are also improved by nearly 25% on average.

    This dissertation also presents a novel technique, called the ratio algorithm, to find an optimal basis and an optimal solution to linear programs with only two constraints. When the ratio algorithm is implemented within a simplex framework to perform two dimensional searches, it results in the double pivot dual simplex method, which behaves similarly to the dual simplex method but exchanges two variables at every step.

    Two dimensional searches are also implemented within an interior point framework. This dissertation creates a set of four two dimensional search interior point algorithms derived from primal and dual affine scaling and logarithmic barrier search directions. Each iteration of these techniques quickly solves a two dimensional subspace linear program formed by the intersection of two search directions and the feasible region of the linear program. Search directions are derived by orthogonally partitioning the objective function vector, which allows these novel methods to improve the objective function value at each step by at least as much as the corresponding one dimensional search version. Computational experiments performed on benchmark linear programs demonstrate that these two dimensional search interior point algorithms improve the average solution time by approximately 12% and the average number of iterations by 15%.

    In conclusion, this dissertation provides a change of paradigm in linear programming optimization algorithms. Implementing two dimensional searches within both simplex and interior point frameworks typically reduces the computational time and number of iterations needed to solve linear programs. Furthermore, this dissertation sets the stage for future research in multidimensional search algorithms to solve not only linear programs but also other critical classes of optimization problems. Consequently, this research can become one of the first steps toward changing how commercial and open source mathematical programming software solves optimization problems.
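
    As a hedged illustration of the interior point variants, each iteration reduces to a linear program in two step lengths. The sketch below restricts the LP to the plane spanned by two given directions; the dissertation's actual directions come from orthogonally partitioning the objective vector (affine scaling and logarithmic barrier variants), and the `shrink` factor stands in for whatever safeguard keeps iterates in the interior. All names are illustrative.

    import numpy as np
    from scipy.optimize import linprog

    def two_dim_step(A, b, c, x, d1, d2, shrink=0.9):
        # One two dimensional search step: maximize c over the plane
        # x + s1*d1 + s2*d2 intersected with {z : A@z <= b}.  The
        # restriction is itself an LP in the two variables (s1, s2).
        A2 = np.column_stack([A @ d1, A @ d2])    # constraints on (s1, s2)
        b2 = b - A @ x                            # slack at the current point
        c2 = np.array([c @ d1, c @ d2])
        res = linprog(-c2, A_ub=A2, b_ub=b2, bounds=[(None, None)] * 2)
        if not res.success:                       # e.g. unbounded in this plane
            return x
        s1, s2 = shrink * res.x                   # step short of the boundary
        return x + s1 * d1 + s2 * d2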

    Row generation techniques for approximate solution of linear programming problems

    Ankara: Department of Industrial Engineering and the Institute of Engineering and Science of Bilkent University, 2010. Thesis (M.S.), Bilkent University, 2010. Includes bibliographical references (leaves 69-77). Author: Paç, A. Burak.

    In this study, row generation techniques are applied to general linear programming problems with a very large number of constraints relative to the problem dimension. A lower bound is obtained for the change in the objective value caused by the generation of a specific row. To achieve row selection that results in a large shift in the feasible region and the objective value at each row generation iteration, the lower bound is used to compare row generation candidates. For a warm start to the solution procedure, an effective selection of the subset of constraints that constitutes the initial LP is considered. Several strategies are discussed for forming a small subset of constraints so as to obtain an initial solution close to the feasible region of the original LP. Approximation schemes are designed and compared to make it possible to terminate row generation at a solution in the proximity of an optimal solution of the input LP. The row generation algorithm presented in this study, enhanced with a warm-start strategy and an approximation scheme, is implemented and tested for computation time and the number of rows generated. Two efficient primal simplex method variants are used for benchmarking computation times, and the row generation algorithm performs better than at least one of them, especially when the number of constraints is large.
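
    The overall loop of such a row generation scheme is easy to sketch. The version below starts from a given subset of rows and repeatedly adds the most violated constraint; the study's contributions, namely the lower bound used to rank candidate rows, the warm-start selection of the initial subset, and the approximation schemes for early termination, are deliberately not reproduced. Names and tolerances are illustrative, and scipy stands in for the primal simplex variants used in the thesis.

    import numpy as np
    from scipy.optimize import linprog

    def row_generation(A, b, c, init_rows, tol=1e-6, max_rows=1000):
        # Solve min c@x s.t. A@x >= b by generating violated rows on demand.
        # `init_rows` should make the initial relaxation bounded (this is
        # what the warm-start strategies aim for).
        rows = list(init_rows)
        while len(rows) <= max_rows:
            res = linprog(c, A_ub=-A[rows], b_ub=-b[rows],
                          bounds=[(None, None)] * A.shape[1])
            if not res.success:
                raise RuntimeError(res.message)
            violation = b - A @ res.x             # positive entry: violated row
            worst = int(np.argmax(violation))
            if violation[worst] <= tol:           # every row satisfied: optimal
                return res.x, rows
            rows.append(worst)                    # generate the violated row
        raise RuntimeError("row limit reached")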

    Improved Primal Simplex: A More General Theoretical Framework and an Extended Experimental Analysis

    In this article, we propose a general framework for an algorithm derived from the primal simplex that guarantees a strict improvement in the objective after each iteration. Our approach relies on the identification of compatible variables that ensure a nondegenerate iteration if pivoted into the basis. The problem of finding a strict improvement in the objective function is proved to be equivalent to two smaller problems focusing on compatible and incompatible variables, respectively. We then show that the improved primal simplex (IPS) of Elhallaoui et al. is a particular implementation of this generic theoretical framework. The resulting new description of IPS naturally emphasizes what should be considered necessary adaptations of the framework versus specific implementation choices. This provides original insight into IPS that allows for the identification of weaknesses and potential alternative choices that would extend the efficiency of the method to a wider set of problems. We perform experimental tests on an extended collection of data sets, including instances of Mittelmann's benchmark for linear programming. The results confirm the excellent potential of IPS and highlight some of its limits, while showing a path toward an improved implementation of the generic algorithm.
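
    A small computation conveys the notion the framework is built on. In the primal ratio test, the step length for an entering column a_j is the minimum of xB[i] / d[i] over rows with d[i] > 0, where d = B^{-1} a_j, so an iteration is nondegenerate exactly when no zero-valued basic variable blocks the step. The sketch below checks that condition; the article's actual characterization of compatible variables and its two-problem decomposition are not reproduced, and names are illustrative.

    import numpy as np

    def pivot_is_nondegenerate(B, xB, a_j, tol=1e-9):
        # True when pivoting column a_j into basis matrix B would give a
        # strictly positive step, i.e. no degenerate basic variable
        # (xB[i] == 0) has a positive entry in d = B^{-1} a_j.
        d = np.linalg.solve(B, a_j)
        degenerate = np.abs(xB) <= tol
        blocking = d > tol
        return not np.any(degenerate & blocking)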

    Dynamic Factorization in Large-Scale Optimization

    Mathematical Programming, 64, pp. 17-51.

    Factorization of linear programming (LP) models enables a large portion of the LP tableau to be represented implicitly and generated from the remaining explicit part. Dynamic factorization admits algebraic elements which change in dimension during the course of solution. A unifying mathematical framework for dynamic row factorization is presented, with three algorithms which derive from different LP model row structures: generalized upper bound rows, pure network rows, and generalized network rows. Each of these structures is a generalization of its predecessors, and each corresponding algorithm exhibits just enough additional richness to accommodate the structure at hand within the unified framework. Implementation and computational results are presented for a variety of real-world models. These results suggest that each of these algorithms is superior to the traditional, non-factorized approach, with the degree of improvement depending upon the size and quality of the row factorization identified.
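
    A generic block elimination shows why such factorization pays off. If the basis rows are ordered so that B = [[D, C], [E, F]] with D arising from the structured rows, then solving with B requires only D-solves, which are cheap when D comes from generalized upper bound or network rows, plus a dense solve with the much smaller Schur complement. This is a standard linear-algebra sketch under that assumption, not the paper's algorithms; dense solves stand in for the cheap structured ones.

    import numpy as np

    def solve_factorized(D, C, E, F, r1, r2):
        # Solve [[D, C], [E, F]] @ [z1, z2] = [r1, r2] by eliminating the
        # structured block D first; only the Schur complement S, whose
        # dimension matches the explicit part, is handled densely.
        Dinv_C = np.linalg.solve(D, C)            # cheap if D is structured
        Dinv_r1 = np.linalg.solve(D, r1)
        S = F - E @ Dinv_C                        # small explicit system
        z2 = np.linalg.solve(S, r2 - E @ Dinv_r1)
        z1 = Dinv_r1 - Dinv_C @ z2
        return z1, z2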