Alternative methods for representing the inverse of linear programming basis matrices
Methods for representing the inverse of Linear Programming (LP) basis matrices are closely related to techniques for solving a system of sparse unsymmetric linear equations by direct methods. It is now well accepted that for these problems the static process of reordering the matrix into lower block triangular (LBT) form constitutes the initial step. We introduce a combined static and dynamic factorisation of a basis matrix and derive its inverse, which we call the partial elimination form of the inverse (PEFI). This factorisation takes advantage of the LBT structure and produces a sparser representation of the inverse than the elimination form of the inverse (EFI). In this we make use of the original columns (of the constraint matrix) which are in the basis. To represent the factored inverse it is, however, necessary to introduce special data structures which are used in the forward and the backward transformations (the two major algorithmic steps) of the simplex method. These correspond to solving a system of equations and solving a system of equations with the transposed matrix, respectively. In this paper we compare the nonzero build-up of PEFI with that of EFI. We have also investigated alternative methods for updating the basis inverse in the PEFI representation. The results of our experimental investigation are presented in this paper.
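The forward and backward transformations mentioned above can be illustrated in dense form. The sketch below shows FTRAN and BTRAN against an explicit LU factorisation; it does not reproduce the paper's PEFI data structures, and the names `lu_nopivot`, `ftran` and `btran` are illustrative only:

```python
import numpy as np

def lu_nopivot(B):
    """Doolittle LU factorisation without pivoting
    (assumes a well-conditioned basis, for illustration)."""
    n = len(B)
    L, U = np.eye(n), B.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def ftran(L, U, b):
    """FTRAN: solve B x = b via L y = b, then U x = y."""
    return np.linalg.solve(U, np.linalg.solve(L, b))

def btran(L, U, c):
    """BTRAN: solve B^T y = c via U^T z = c, then L^T y = z."""
    return np.linalg.solve(L.T, np.linalg.solve(U.T, c))

B = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
L, U = lu_nopivot(B)                      # factorise once per reinversion
x = ftran(L, U, np.array([1., 0., 2.]))   # e.g. the updated entering column
y = btran(L, U, np.array([0., 1., 0.]))   # e.g. the simplex multipliers
```

In a production solver both solves exploit the sparsity of L, U and the right-hand side; the point here is only the pairing of the plain and transposed systems.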
Experimental investigation of an interior search method within a simple framework
A steepest-gradient method for solving Linear Programming (LP) problems, followed by a procedure for purifying a non-basic solution into an improved extreme point solution, has been embedded within an otherwise simplex-based optimiser. The algorithm is designed to be hybrid in nature and exploits many aspects of sparse matrix and revised simplex technology. The interior search step terminates at a boundary point which is usually non-basic. This is then followed by a series of minor pivotal steps which lead to a basic feasible solution with a superior objective function value. It is concluded that the procedures discussed in this paper are likely to have three possible applications:
(i) improving a non-basic feasible solution to a superior extreme point solution,
(ii) an improved starting point for the revised simplex method, and
(iii) an efficient implementation of the multiple price strategy of the revised simplex method.
On the Simplex Method using an Artificial Basis
The use of an artificial basis for the simplex method was suggested in an early paper by Dantzig. The idea is based on an observation that certain bases, which differ only in a relatively few columns from the true basis, may be easily inverted. Such artificial bases can then be exploited when carrying out simplex iterations. This idea was originally suggested for solving structured linear programming problems, and several approaches, such as Beale's method of pseudo-basic variables, have indeed been presented in the literature.
In this paper, we shall not consider the structure explicitly; rather its exploitation in our case is expected to result directly from the choice of an artificial basis. We shall consider this basis to remain unchanged over a number of simplex iterations. In particular, this basis may be chosen as the true basis which has been most recently reinverted. In such a case our approach yields an interpretation for a basis representation recently proposed by Bisschop and Meeraus who point out very favorable properties regarding the build-up of nonzero elements in the basis representation.
Our approach utilizes an auxiliary basis, which is small relative to the true basis, and whose dimension may change from one iteration to the next. We shall finally develop an updating scheme for a product form representation of the inverse of such an auxiliary basis.
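The product-form update mentioned at the end can be sketched as follows. If column r of a basis B is replaced by an entering column a_q, then B_new = B F, where F is the identity with column r replaced by u = B⁻¹a_q; the stored "eta" vector represents F⁻¹, so B_new⁻¹ = F⁻¹B⁻¹. This is a generic dense sketch of the product form of the inverse, not the authors' auxiliary-basis scheme, and the function names are illustrative:

```python
import numpy as np

def eta_from(u, r):
    """Eta column for the product-form update, built from
    u = B^{-1} a_q (u[r] must be nonzero)."""
    eta = -u / u[r]
    eta[r] = 1.0 / u[r]
    return eta

def apply_eta(eta, r, v):
    """Apply F^{-1} to v, i.e. one factor of the product-form inverse."""
    vr = v[r]
    out = v + eta * vr
    out[r] = eta[r] * vr
    return out

# Replace column r of B by a_q, then solve with the updated inverse.
B = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
a_q = np.array([1., 2., 0.])
r = 1
Binv = np.linalg.inv(B)
eta = eta_from(Binv @ a_q, r)
b = np.array([1., 1., 1.])
x = apply_eta(eta, r, Binv @ b)   # = B_new^{-1} b, without refactorising
```

Successive pivots append further eta vectors, which is exactly why the build-up of nonzeros in such representations matters.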
Faster Geometric Algorithms via Dynamic Determinant Computation
The computation of determinants or their signs is the core procedure in many important geometric algorithms, such as convex hull, volume and point location. As the dimension of the computation space grows, a higher percentage of the total computation time is consumed by these computations. In this paper we study the sequences of determinants that appear in geometric algorithms. The computation of a single determinant is accelerated by using the information from the previous computations in that sequence.

We propose two dynamic determinant algorithms with quadratic arithmetic complexity when employed in convex hull and volume computations, and with linear arithmetic complexity when used in point location problems. We implement the proposed algorithms and perform an extensive experimental analysis. On the one hand, our analysis serves as a performance study of state-of-the-art determinant algorithms and implementations; on the other, we demonstrate that our methods outperform state-of-the-art implementations of determinant and geometric algorithms. Our experimental results include 20× and 78× speed-ups in volume and point location computations in dimensions 6 and 11, respectively.
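A dynamic determinant update of the kind described can be sketched with the matrix determinant lemma and the Sherman-Morrison formula: replacing one column of A updates both det(A) and A⁻¹ in O(n²) arithmetic instead of recomputing from scratch. This is a generic sketch of the underlying linear algebra, not the authors' implementation; the function name and tolerance are illustrative:

```python
import numpy as np

def replace_column(A, Ainv, detA, j, c):
    """Replace column j of A by c, updating det(A) and A^{-1} in O(n^2).
    Matrix determinant lemma: det(A') = det(A) * (A^{-1} c)[j].
    Sherman-Morrison updates the inverse for A' = A + (c - A e_j) e_j^T."""
    w = Ainv @ c
    if abs(w[j]) < 1e-12:
        raise ValueError("updated matrix would be singular")
    e_j = np.zeros(len(A))
    e_j[j] = 1.0
    Ainv_new = Ainv - np.outer(w - e_j, Ainv[j]) / w[j]
    A_new = A.copy()
    A_new[:, j] = c
    return A_new, Ainv_new, detA * w[j]

A = np.array([[2., 1.],
              [1., 3.]])
A1, A1inv, det1 = replace_column(A, np.linalg.inv(A), np.linalg.det(A),
                                 0, np.array([1., 1.]))
```

In a sequence of related determinants (as in incremental convex hull), each test point changes only one column of the orientation matrix, which is what makes this update profitable.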
An investigation of pricing strategies within simplex
The PRICE step within the revised simplex method for LP problems is considered in this report. Established strategies which have proven to be computationally efficient are first reviewed. A method based on the internal rate of return is then described. The implementation of this method and the results obtained by experimental investigation are discussed.
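For context, the baseline against which pricing strategies are judged is classical (Dantzig) pricing: compute the reduced costs of the nonbasic columns and pick the most negative. The sketch below is a textbook dense version for a minimisation problem, not the internal-rate-of-return method of the report; the function name and tolerance are illustrative:

```python
import numpy as np

def dantzig_price(A, c, basis, tol=1e-9):
    """Classical (Dantzig) pricing for a minimisation LP: compute the
    reduced costs d_j = c_j - a_j^T y of the nonbasic columns, where y
    solves B^T y = c_B, and return the index of the most negative one
    (the entering column), or None if the basis is already optimal."""
    y = np.linalg.solve(A[:, basis].T, c[basis])   # simplex multipliers
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    d = {j: c[j] - A[:, j] @ y for j in nonbasic}
    j_enter = min(d, key=d.get)
    return j_enter if d[j_enter] < -tol else None

# min -x0 - 2*x1  s.t.  x0 + s0 = 1, x1 + s1 = 1; slack basis [2, 3].
A = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.]])
c = np.array([-1., -2., 0., 0.])
```

Practical strategies (partial and multiple pricing, Devex and steepest-edge weights) avoid scanning every column and weight the reduced costs, which is what the report's review covers.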
High performance simplex solver
The dual simplex method is frequently the most efficient technique for solving linear programming (LP) problems. This thesis describes an efficient implementation of the sequential dual simplex method and the design and development of two parallel dual simplex solvers.

In serial, many advanced techniques for the (dual) simplex method are implemented, including sparse LU factorization, hyper-sparse linear system solution techniques, efficient approaches to updating LU factors and sophisticated dual simplex pivoting rules. These techniques, some of which are novel, lead to serial performance which is comparable with the best public-domain dual simplex solver, providing a solid foundation for the simplex parallelization.

During the implementation of the sequential dual simplex solver, the study of classic LU factor update techniques leads to the development of three novel update variants. One of them is comparable to the most efficient established approach but is much simpler to implement, and the other two are especially useful for one of the parallel simplex solvers. In addition, the study of the dual simplex pivoting rules identifies and motivates further investigation of how hyper-sparsity may be promoted.

In parallel, two high-performance simplex solvers are designed and developed. One, based on a lesser-known dual pivoting rule called suboptimization, exploits parallelism across multiple iterations (PAMI). The other, based on the regular dual pivoting rule, exploits purely single-iteration parallelism (SIP). The performance of PAMI is comparable to that of a world-leading commercial simplex solver, and SIP is frequently complementary to PAMI, achieving speedup where PAMI results in slowdown.
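Hyper-sparsity refers to the situation where the right-hand side, and hence the solution, of the triangular solves inside FTRAN/BTRAN is mostly zero, so only the columns reachable from the nonzeros of b need be touched. A minimal sketch of such a solve, using the Gilbert-Peierls reachability idea on a unit lower-triangular matrix in compressed-column form, is shown below; the data layout and names are illustrative, not those of the thesis:

```python
def hyper_sparse_lower_solve(Lp, Li, Lx, b):
    """Solve L x = b for unit lower-triangular L in compressed-column
    form (Lp: column pointers, Li: row indices, Lx: values; the unit
    diagonal is not stored), where b is a sparse dict {index: value}.
    Only columns reachable from b's nonzeros are visited."""
    x = dict(b)
    visited = set()
    topo = []                          # DFS postorder of reached columns

    for j0 in b:                       # symbolic phase: pattern of x
        if j0 in visited:
            continue
        visited.add(j0)
        stack = [(j0, Lp[j0])]
        while stack:
            j, p = stack.pop()
            descended = False
            while p < Lp[j + 1]:
                i = Li[p]
                p += 1
                if i not in visited:
                    visited.add(i)
                    stack.append((j, p))      # resume column j later
                    stack.append((i, Lp[i]))
                    descended = True
                    break
            if not descended:
                topo.append(j)

    for j in reversed(topo):           # numeric phase, dependencies first
        xj = x.get(j, 0.0)
        if xj == 0.0:
            continue
        for p in range(Lp[j], Lp[j + 1]):
            x[Li[p]] = x.get(Li[p], 0.0) - Lx[p] * xj
    return x
```

When b has few nonzeros and their reach in L is short, the cost is proportional to the work actually done rather than to the dimension, which is the source of the dramatic gains on hyper-sparse LP bases.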