4,050 research outputs found

    Regularized Decomposition of Stochastic Programs: Algorithmic Techniques and Numerical Results

    A finitely convergent non-simplex method for large-scale structured linear programming problems arising in stochastic programming is presented. The method combines the ideas of the Dantzig-Wolfe decomposition principle and modern nonsmooth optimization methods. Algorithmic techniques taking advantage of properties of stochastic programs are described, and numerical results for large real-world problems are reported.

    The Complexity of the Simplex Method

    The simplex method is a well-studied and widely-used pivoting method for solving linear programs. When Dantzig originally formulated the simplex method, he gave a natural pivot rule that pivots into the basis a variable with the most negative reduced cost. In their seminal work, Klee and Minty showed that this pivot rule takes exponential time in the worst case. We prove two main results on the simplex method. Firstly, we show that it is PSPACE-complete to find the solution that is computed by the simplex method using Dantzig's pivot rule. Secondly, we prove that deciding whether Dantzig's rule ever chooses a specific variable to enter the basis is PSPACE-complete. We use the known connection between Markov decision processes (MDPs) and linear programming, and an equivalence between Dantzig's pivot rule and a natural variant of policy iteration for average-reward MDPs. We construct MDPs and show PSPACE-completeness results for single-switch policy iteration, which in turn imply our main results for the simplex method.
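    A minimal tableau-simplex sketch may make Dantzig's rule concrete. Everything below (the maximization form, the dense-tableau layout, the tolerances, and the toy data) is an illustrative assumption rather than anything taken from the paper; the rule itself is simply the argmin over the reduced-cost row.

```python
import numpy as np

def dantzig_simplex(c, A, b, max_iters=100):
    """Tableau simplex for  max c'x  s.t.  Ax <= b, x >= 0,  b >= 0,
    using Dantzig's rule: enter the variable with the most negative
    reduced cost."""
    m, n = A.shape
    # Initial tableau with slack variables as the starting basis.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                      # objective row (reduced costs)
    basis = list(range(n, n + m))

    for _ in range(max_iters):
        reduced = T[-1, :-1]
        if np.all(reduced >= -1e-12):   # all reduced costs nonnegative: optimal
            break
        j = int(np.argmin(reduced))     # Dantzig's rule: most negative reduced cost
        col = T[:m, j]
        if np.all(col <= 1e-12):
            raise ValueError("LP is unbounded")
        # Ratio test to choose the leaving row.
        ratios = np.full(m, np.inf)
        pos = col > 1e-12
        ratios[pos] = T[:m, -1][pos] / col[pos]
        i = int(np.argmin(ratios))
        # Pivot on (i, j).
        T[i, :] /= T[i, j]
        for r in range(m + 1):
            if r != i:
                T[r, :] -= T[r, j] * T[i, :]
        basis[i] = j

    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Klee-Minty-style blowup only appears on crafted instances; on this tiny
# example Dantzig's rule terminates in two pivots (optimum x = (2, 2)).
x, val = dantzig_simplex(np.array([3.0, 2.0]),
                         np.array([[1.0, 1.0], [1.0, 0.0]]),
                         np.array([4.0, 2.0]))
```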

    A Study of the Properties of Parametric Programming

    Programming is defined as the planning of activities for the sake of optimization. When linear constraints are assumed, together with a linear objective function, the optimization is defined as solving a linear programming problem. Linear programming was originally developed by Dantzig, Wood and others for the U.S. Air Force. Since it was first published in 1947, it has been used widely in industrial as well as military situations. Examples of specific exotic and straightforward applications of the linear programming technique can be found in textbooks, monographs and technical papers in the several fields in which it has been used. The simplex procedure is the most powerful and efficient technique in existence for the solution of linear programming problems. It is assumed that the reader is familiar with the basic simplex procedure and nomenclature. Within the generalized simplex method, there are several specialized techniques that are superior for specific problem areas.

    On a Gradient Projection Method for Linear Complementarity Problems

    The simplex method is the most widely used and efficient procedure for solving linear programming problems. Since G.B. Dantzig first proposed his original simplex method, many improved methods have been developed. Recently, besides these simplex-like procedures, iterative methods have been investigated as approximate solution techniques for large-scale linear programming problems, along with interior point methods. In particular, these iterative methods are attractive from the standpoint of vectorization and parallelization of the computation. In this paper, we propose a scaled gradient projection algorithm which guarantees efficient polynomial convergence, and we also investigate the possibility of its parallel implementation.
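    As a rough illustration of the gradient projection idea, the sketch below applies plain projected gradient to a quadratic-penalty reformulation of an LP. The penalty reformulation, the step-size rule, and the toy data are assumptions made for illustration only; the paper's scaling and its polynomial-convergence guarantee are not reproduced here.

```python
import numpy as np

def projected_gradient_lp(c, A, b, rho=10.0, step=None, iters=5000):
    """Approximate  min c'x  s.t.  Ax = b, x >= 0  by projected gradient
    applied to the penalized objective  c'x + (rho/2)*||Ax - b||^2  over
    the nonnegative orthant (projection = componentwise clipping)."""
    m, n = A.shape
    x = np.zeros(n)
    if step is None:
        # Safe step size from a Lipschitz bound on the penalized gradient.
        step = 1.0 / (rho * np.linalg.norm(A, 2) ** 2 + 1.0)
    for _ in range(iters):
        grad = c + rho * A.T @ (A @ x - b)
        x = np.maximum(x - step * grad, 0.0)   # projection onto x >= 0
    return x

# Tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum near (1, 0)).
x = projected_gradient_lp(np.array([1.0, 2.0]),
                          np.array([[1.0, 1.0]]),
                          np.array([1.0]))
```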

    Mechanism Design via Dantzig-Wolfe Decomposition

    In random allocation rules, typically an optimal fractional point is first calculated by solving a linear program. The calculated point represents a fractional assignment of objects, or more generally packages of objects, to agents. In order to implement an expected assignment, the mechanism designer must decompose the fractional point into integer solutions, each satisfying the underlying constraints. The resulting convex combination can then be viewed as a probability distribution over feasible assignments from which a random assignment can be sampled. This approach has been successfully employed in combinatorial optimization as well as in mechanism design with or without money. In this paper, we show that finding the optimal fractional point and decomposing it into integer solutions can both be done at once. We propose an appropriate linear program which provides the desired solution. We show that the linear program can be solved via Dantzig-Wolfe decomposition. Dantzig-Wolfe decomposition is a direct implementation of the revised simplex method, which is well known to be highly efficient in practice. We also show how to use Benders decomposition as an alternative method to solve the problem. The proposed method can also find a decomposition into integer solutions when the fractional point is already given, perhaps as the outcome of other algorithms rather than of linear programming. The resulting convex decomposition in this case is tight in terms of the number of integer points, according to Carathéodory's theorem.
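    A toy sketch of the convex-decomposition step may help: given a fractional point and a set of feasible integer points, a feasibility LP finds convex weights. In the paper the integer points would instead be priced in lazily by Dantzig-Wolfe column generation; here they are simply enumerated for a tiny instance, and the data and the use of scipy's linprog are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Feasible integer assignments for a tiny instance (illustrative data),
# stored one per column of V.
V = np.array([
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
], dtype=float).T

x_star = np.array([0.5, 0.5, 0.5, 0.5])   # fractional point to decompose

k = V.shape[1]
# Master LP: find weights lambda >= 0 with  V @ lambda = x_star  and
# sum(lambda) = 1, i.e. a convex combination reproducing x_star.
A_eq = np.vstack([V, np.ones((1, k))])
b_eq = np.concatenate([x_star, [1.0]])
res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * k, method="highs")
weights = res.x   # a probability distribution over the integer points
```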

    On the Regularized Decomposition Method for Two Stage Stochastic Linear Problems

    A new approach to the regularized decomposition (RD) algorithm for two-stage stochastic problems is presented. The RD method combines the ideas of the Dantzig-Wolfe decomposition principle and modern nonsmooth optimization methods. A new subproblem solution method using the primal simplex algorithm for linear programming is proposed and then tested on a number of large-scale problems. The new approach makes it possible to use a more general problem formulation and thus allows considerably more freedom when creating the model. The computational results are highly encouraging.
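    The flavor of the scenario subproblems that such decomposition methods solve can be sketched as follows. This is a generic Benders/L-shaped-style cut-generation step under assumed data, not the paper's regularized master problem or its primal-simplex subproblem technique.

```python
import numpy as np
from scipy.optimize import linprog

def scenario_cut(x_hat, q, W, T, h):
    """For a first-stage candidate x_hat, the scenario subproblem is
    Q(x_hat) = min q'y  s.t.  W y = h - T x_hat,  y >= 0.  Solving its dual
    max pi'(h - T x_hat)  s.t.  W' pi <= q  gives an optimality cut
    theta >= pi'h - (pi'T) x  supporting Q at x_hat."""
    rhs = h - T @ x_hat
    dual = linprog(c=-rhs, A_ub=W.T, b_ub=q,
                   bounds=[(None, None)] * W.shape[0], method="highs")
    pi = dual.x
    value = pi @ rhs        # = Q(x_hat) by LP duality
    intercept = pi @ h      # cut:  theta >= intercept - slope @ x
    slope = pi @ T
    return value, intercept, slope

# Tiny single-scenario example with assumed data.
value, intercept, slope = scenario_cut(
    x_hat=np.array([1.0]),
    q=np.array([2.0, 2.0]),
    W=np.array([[1.0, -1.0]]),
    T=np.array([[1.0]]),
    h=np.array([3.0]))
```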

    Cutting plane methods for general integer programming

    Integer programming (IP) problems are difficult to solve due to the integer restrictions imposed on them. A technique for solving these problems is the cutting plane method. In this method, linear constraints are added to the associated linear programming (LP) problem until an integer optimal solution is found. These constraints cut off part of the LP solution space but do not eliminate any feasible integer solution. In this report, algorithms for solving IP due to Gomory and to Dantzig are presented. Two other cutting plane approaches and two extensions to Gomory's algorithm are also discussed. Although these methods are mathematically elegant, they are known to have slow convergence and an explosive storage requirement. As a result, cutting planes are generally not computationally successful.
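    For illustration, a Gomory fractional cut can be read off one row of an optimal simplex tableau as sketched below; the tableau row used is assumed data, not an example taken from the report.

```python
import numpy as np

def gomory_fractional_cut(row, rhs):
    """Derive a Gomory fractional cut from one row of an optimal simplex
    tableau written as  x_B + sum_j row[j] * x_j = rhs  with rhs fractional,
    where the x_j are the nonbasic variables (all integer and nonnegative).
    The cut  sum_j frac(row[j]) * x_j >= frac(rhs)  is satisfied by every
    integer feasible point but violated by the current LP solution, which
    has all nonbasic variables at zero."""
    frac = lambda v: v - np.floor(v)
    return frac(np.asarray(row, dtype=float)), frac(float(rhs))

# Illustrative tableau row (assumed data):  x1 + 0.25*x3 + 1.75*x4 = 2.5
coeffs, cut_rhs = gomory_fractional_cut([0.25, 1.75], 2.5)
# coeffs = [0.25, 0.75], cut_rhs = 0.5  ->  cut:  0.25*x3 + 0.75*x4 >= 0.5
```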

    Linear Programming with Random Requirements

    Linear programming was first developed by George B. Dantzig, Marshall Wood, and associates of the U.S. Air Force in 1947. At that time, the Air Force organized a research group under the title of Project SCOOP (Scientific Computation of Optimum Programs). This project contributed to the development of a general interindustry model based on the Leontief input-output model, the Air Force programming and budgeting problem, and problems involving the relationship between two-person zero-sum games and linear programming. The result was the formal development and application of the linear programming model. The project also developed the simplex computational method for finding the optimum feasible program. Early applications of linear programming were made in the military, in economics, and in the theory of games. During the last decade, however, linear programming applications have been extended to such other fields as management, engineering, and agriculture.

    As the application of linear programming has extended to many other fields, Dantzig (1955), Tintner (1955), Beale (1955), Madansky (1960), and others have been responsible for the formulation and development of stochastic linear programming. The stochastic linear programming problem occurs when some of the coefficients, in the objective function and/or in the constraint system of the linear programming model, are subject to random variation. In the literature, several methods are indicated for formulating the linear programming problem with random requirements so as to arrive at a solution. The intention of this study is to review some of these methods and to compare one with another in terms of the optimum value of the objective function which results from each method. Three methods are considered.

    The first method is to replace the random element with its expected value and solve the resulting linear programming problem (Hadley, 1964).

    The second method is Dantzig's two-stage linear programming problem with a random requirement (Dantzig, 1955). Suppose the following linear programming problem is considered: min. (or max.) C'X subject to AX ≤ b, X ≥ 0, where C and X are n by 1 vectors, b is an m by 1 vector, A is an m by n matrix, and C' is the transpose of C. If the vector b is random and the matrix A is known, then in the first stage a decision is made on X, the random vector b is observed, and AX is compared with b. In the second stage, inaccuracies in the first decision are compensated for by a new decision variable Y with some penalty cost F. The problem then becomes: min. (or max.) E[C'X + F'Y] subject to AX + BY = b, X ≥ 0, Y ≥ 0, where B is an m by 2n matrix with elements ones, minus ones, and zeros, Y is a 2n by 1 vector with elements yi and y-i, and E denotes expectation.

    In the third method, the constraints with random requirements are to satisfy a given probability level. The problem then is to find values of the decision variables which optimize the expected objective function without violating the given probability measure (Charnes and Cooper, 1962).

    This report surveys the literature on basic linear programming and the simplex method of solution, describes random requirements, and illustrates the three methods of solution. Finally, the optimal value of the objective function of each method is compared with the others.
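    A small numerical sketch of the first two methods may help fix ideas: the expected-value approach solves one LP with b replaced by E[b], while the two-stage approach solves the extensive form over scenarios with ± recourse variables. All data below (costs, penalties, scenarios) are assumptions made for illustration, and scipy's linprog stands in for the simplex routine.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny instance with assumed data: one decision x >= 0 with unit cost c,
# one constraint  a*x = b  whose requirement b is random over two scenarios.
c, a = 1.0, 1.0
scenarios = np.array([2.0, 6.0])       # possible values of b
probs = np.array([0.5, 0.5])
f_plus, f_minus = 4.0, 0.5             # penalties for falling short of / exceeding b

# Method 1: replace b by its expectation and solve the deterministic LP.
b_mean = probs @ scenarios
m1 = linprog(c=[c], A_eq=[[a]], b_eq=[b_mean],
             bounds=[(0, None)], method="highs")

# Method 2 (two-stage recourse, extensive form): variables [x, y1+, y1-, y2+, y2-]
# with  a*x + ys+ - ys- = b_s  in each scenario s and an expected penalty cost
# on the recourse variables.
S = len(scenarios)
cost = np.concatenate([[c], np.repeat(probs, 2) * np.tile([f_plus, f_minus], S)])
A_eq = np.zeros((S, 1 + 2 * S))
A_eq[:, 0] = a
for s in range(S):
    A_eq[s, 1 + 2 * s] = 1.0       # ys+ makes up a shortfall
    A_eq[s, 2 + 2 * s] = -1.0      # ys- absorbs a surplus
m2 = linprog(c=cost, A_eq=A_eq, b_eq=scenarios,
             bounds=[(0, None)] * (1 + 2 * S), method="highs")

# m1.x[0] is the expected-value decision; m2.x[0] hedges against both
# scenarios, and m2.fun is the optimal expected total cost.
```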