    Decomposition techniques for large scale stochastic linear programs

    Stochastic linear programming is an effective and often used technique for incorporating uncertainties about future events into decision making processes. Stochastic linear programs tend to be significantly larger than other types of linear programs and generally require sophisticated decomposition solution procedures. Detailed algorithms based upon Dantzig-Wolfe and L-Shaped decomposition are developed and implemented. These algorithms allow for solutions to within an arbitrary tolerance on the gap between the lower and upper bounds on a problem's objective function value. Special procedures and implementation strategies are presented that enable many multi-period stochastic linear programs to be solved with two-stage, instead of nested, decomposition techniques. Consequently, a broad class of large scale problems, with tens of millions of constraints and variables, can be solved on a personal computer. Myopic decomposition algorithms based upon a shortsighted view of the future are also developed. Although unable to guarantee an arbitrary solution tolerance, myopic decomposition algorithms may yield very good solutions in a fraction of the time required by Dantzig-Wolfe/L-Shaped decomposition based algorithms. In addition, derivations are given for statistics, based upon Mahalanobis squared distances, that can be used to provide measures for a random sample's effectiveness in approximating a parent distribution. Results and analyses are provided for the applications of the decomposition procedures and sample effectiveness measures to a multi-period market investment model.
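
    The two-stage scheme described above can be made concrete with a small sketch. The following is a minimal illustration of the L-Shaped method for a two-stage stochastic linear program with finitely many scenarios, assuming relatively complete recourse (so feasibility cuts are never needed); the toy instance at the bottom, the tolerance, the iteration cap, and the lower bound on theta are arbitrary illustrative choices, not details taken from the dissertation.

        import numpy as np
        from scipy.optimize import linprog

        def l_shaped(c, q, W, scenarios, tol=1e-6, theta_lb=-1e6, max_iter=100):
            """min c'x + sum_s p_s Q_s(x), where Q_s(x) = min{q'y : W y = h_s - T_s x, y >= 0}.
            scenarios is a list of (p_s, h_s, T_s) triples; x, y >= 0."""
            n = len(c)
            cut_rows, cut_rhs = [], []            # optimality cuts: E x + theta >= e
            for _ in range(max_iter):
                # Master over (x, theta); cuts are stored as -(E x) - theta <= -e.
                res = linprog(np.append(c, 1.0),
                              A_ub=cut_rows or None, b_ub=cut_rhs or None,
                              bounds=[(0, None)] * n + [(theta_lb, None)],
                              method="highs")
                x, lower = res.x[:n], res.fun     # lower bound on the optimum
                exp_q, E, e = 0.0, np.zeros(n), 0.0
                for p, h, T in scenarios:         # scenario subproblems
                    sub = linprog(q, A_eq=W, b_eq=h - T @ x,
                                  bounds=[(0, None)] * len(q), method="highs")
                    pi = sub.eqlin.marginals      # duals of W y = h - T x
                    exp_q += p * sub.fun
                    E += p * (pi @ T)
                    e += p * (pi @ h)
                upper = c @ x + exp_q             # upper bound from the feasible x
                if upper - lower <= tol:          # gap within the given tolerance
                    return x, upper
                cut_rows.append(np.append(-E, -1.0))
                cut_rhs.append(-e)
            raise RuntimeError("iteration limit reached before closing the gap")

        # Toy instance: pay 1 per unit now, or 2 per unit of unmet demand later;
        # demand is 1 or 3 with equal probability. Optimal value is 3.0.
        c, q, W = np.array([1.0]), np.array([2.0, 0.0]), np.array([[1.0, -1.0]])
        scens = [(0.5, np.array([1.0]), np.array([[1.0]])),
                 (0.5, np.array([3.0]), np.array([[1.0]]))]
        x_opt, value = l_shaped(c, q, W, scens)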

    On the vehicle routing problem with time windows

    MULTIOBJECTIVE OPTIMIZATION MODELS AND SOLUTION METHODS FOR PLANNING LAND DEVELOPMENT USING MINIMUM SPANNING TREES, LAGRANGIAN RELAXATION AND DECOMPOSITION TECHNIQUES

    The land development problem is presented as the optimization of a weighted average of the objectives of three or more stakeholders, subject to developing, within bounds, residential, industrial and commercial areas that meet governmental goals. The work is broken into three main sections. First, a mixed integer formulation of the problem is presented along with an algorithm based on decomposition techniques that has numerically been shown to outperform other solution methods. Second, a quadratic mixed integer programming formulation is presented that includes a compactness measure as applied to land development. Finally, to prevent the proliferation of sprawl, a new measure of compactness that involves the use of the minimum spanning tree is embedded into a mixed integer programming formulation. Despite the exponential number of variables and constraints required to define the minimum spanning tree, this problem was solved using a hybrid algorithm developed in this research.
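
    As a minimal illustration of the spanning-tree compactness idea above (not the dissertation's mixed integer formulation, which embeds the tree inside the model), the sketch below scores a candidate set of developed parcels by the total length of the minimum spanning tree over their centroids, computed with Prim's algorithm; the coordinates are hypothetical.

        import numpy as np

        def mst_length(points):
            """Total edge length of the Euclidean minimum spanning tree over
            an (n, 2) array of parcel centroids, via Prim's algorithm."""
            n = len(points)
            if n < 2:
                return 0.0
            dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
            in_tree = np.zeros(n, dtype=bool)
            in_tree[0] = True
            best = dist[0].copy()        # cheapest edge from the tree to each node
            total = 0.0
            for _ in range(n - 1):
                best[in_tree] = np.inf   # never re-enter the tree
                j = int(np.argmin(best))
                total += best[j]
                in_tree[j] = True
                best = np.minimum(best, dist[j])
            return total

        # A more compact (less sprawling) selection yields a shorter tree:
        compact = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
        sprawl = np.array([[0, 0], [5, 0], [0, 5], [9, 9]], dtype=float)
        assert mst_length(compact) < mst_length(sprawl)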

    Shortest Paths and Vehicle Routing

    Designing the Liver Allocation Hierarchy: Incorporating Equity and Uncertainty

    Liver transplantation is the only available therapy for any acute or chronic condition resulting in irreversible liver dysfunction. The liver allocation system in the U.S. is administered by the United Network for Organ Sharing (UNOS), a scientific and educational nonprofit organization. The main components of the organ procurement and transplant network are Organ Procurement Organizations (OPOs), which are collections of transplant centers responsible for maintaining local waiting lists, harvesting donated organs and carrying out transplants. Currently in the U.S., OPOs are grouped into 11 regions to facilitate organ allocation, and a three-tier mechanism is utilized that aims to reduce organ preservation time and transport distance to maintain organ quality, while giving sicker patients higher priority. Livers are scarce and perishable resources that rapidly lose viability, which makes their transport distance a crucial factor in transplant outcomes. When a liver becomes available, it is matched with patients on the waiting list according to a complex mechanism that gives priority to patients within the harvesting OPO and region. Transplants at the regional level have accounted for more than 50% of all transplants since 2000. This dissertation focuses on the design of regions for the liver allocation hierarchy, and includes optimization models that incorporate geographic equity as well as uncertainty throughout the analysis. We employ multi-objective optimization algorithms that involve solving parametric integer programs to balance two possibly conflicting objectives in the system: maximizing efficiency, as measured by the number of viability-adjusted transplants, and maximizing geographic equity, as measured by the minimum rate of organ flow into individual OPOs from outside of their own local area. Our results show that efficiency improvements of up to 6% or equity gains of about 70% can be achieved, compared to the current performance of the system, by redesigning the regional configuration for the national liver allocation hierarchy. We also introduce a stochastic programming framework to capture the uncertainty of the system by considering scenarios that correspond to different snapshots of the national waiting list, and we maximize the expected benefit from liver transplants under this stochastic view of the system. We explore many algorithmic and computational strategies, including sampling methods, column generation strategies, and branching and integer-solution generation procedures, to aid the solution process of the resulting large-scale integer programs. We also explore an OPO-based extension to our two-stage stochastic programming framework that lends itself to more extensive computational testing. The regional configurations obtained using these models are estimated to increase the expected lifetime gained per transplant operation by up to 7% when compared to the current system. This dissertation also addresses the general question of designing efficient algorithms that combine column and cut generation to solve large-scale two-stage stochastic linear programs. We introduce a flexible method to combine column generation and the L-shaped method for two-stage stochastic linear programming. We explore the performance of various algorithm designs that employ stabilization subroutines for strengthening both column and cut generation to effectively avoid degeneracy. We study two-stage stochastic versions of the cutting stock and multi-commodity network flow problems to analyze the performance of the algorithms in this context.
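
    The parametric, weighted-sum flavor of the multi-objective models above can be sketched as follows: sweep a weight between an efficiency objective and an equity objective and record the trade-off point each weight produces. The two-variable LP and all of its coefficients are hypothetical placeholders for the region-design integer programs, which are far larger.

        import numpy as np
        from scipy.optimize import linprog

        eff = np.array([3.0, 1.0])   # efficiency coefficients (assumed)
        eqy = np.array([1.0, 3.0])   # equity coefficients (assumed)
        frontier = []
        for lam in np.linspace(0.0, 1.0, 11):
            # maximize lam*eff'x + (1-lam)*eqy'x  subject to  x1 + x2 <= 1, x >= 0
            res = linprog(-(lam * eff + (1 - lam) * eqy),
                          A_ub=[[1.0, 1.0]], b_ub=[1.0],
                          bounds=[(0, None)] * 2, method="highs")
            frontier.append((eff @ res.x, eqy @ res.x))
        # As lam crosses 0.5 the optimum jumps between the equity-favoring and
        # efficiency-favoring vertices, tracing the two extreme trade-off points.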

    Exploring the Power of Rescaling

    The goal of our research is a comprehensive exploration of the power of rescaling to improve the efficiency of various algorithms for linear optimization and related problems. Linear optimization and linear feasibility problems arguably yield the fundamental problems of optimization. Advances in solving these problems impact the core of optimization theory, and consequently its practical applications. The development and analysis of solution methods for linear optimization is one of the major topics in optimization research. Although the polynomial-time ellipsoid method has excellent theoretical properties, it turned out to be inefficient in practice. Still today, in spite of the dominance of interior point methods, various algorithms, such as perceptron algorithms, rescaling perceptron algorithms, von Neumann algorithms, and Chubanov's method, and related problems, such as the colorful feasibility problem -- whose complexity status is still undecided -- are studied. Motivated by the successful application of a rescaling principle to the perceptron algorithm, our research aims to explore the power of rescaling on other algorithms as well, and to improve their computational complexity. We focus on algorithms for solving linear feasibility and related problems whose complexity depends on a quantity ρ, a condition number measuring the distance to feasibility or infeasibility of the problem. These algorithms include the von Neumann algorithm and the perceptron algorithm. First, we discuss the close duality relationship between the perceptron and the von Neumann algorithms. This observation allows us to interpret each algorithm as a variant of the other, and to transfer complexity results between them. The discovery of this duality not only provides a profound insight into both algorithms, but also results in new variants of them. Based on this duality relationship, we propose a deterministic rescaling von Neumann algorithm. It computationally outperforms the original von Neumann algorithm. Though its complexity has not been proved yet, we construct a von Neumann example which shows that the rescaling steps cannot keep the quantity ρ increasing monotonically. Showing a monotonic increase of ρ is a common technique used to prove the complexity of rescaling algorithms. Therefore, this von Neumann example shows that another proof method needs to be discovered in order to establish the complexity of the deterministic rescaling von Neumann algorithm. Furthermore, this von Neumann example serves as the foundation of a perceptron example, which verifies that ρ is not always increasing after one rescaling step in the polynomial-time deterministic rescaling perceptron algorithm either. After that, we adapt the idea of Chubanov's method to our rescaling framework and develop a polynomial-time column-wise rescaling von Neumann algorithm. Chubanov recently proposed a simple polynomial-time algorithm for solving homogeneous linear systems with positive variables. The Basic Procedure of Chubanov's method can either find a feasible solution, or identify an upper bound for at least one coordinate of any feasible solution. The column-wise rescaling von Neumann algorithm combines the Basic Procedure with column-wise rescaling to identify zero coordinates in all feasible solutions and remove the corresponding columns from the coefficient matrix. This is the first variant of the von Neumann algorithm with polynomial-time complexity. Furthermore, compared with the original von Neumann algorithm, which returns an approximate solution, this rescaling variant guarantees an exact solution for feasible problems. Finally, we develop the methodology of higher-order rescaling and propose a higher-order perceptron algorithm. We implement the perceptron improvement phase utilizing parallel processors, so in a multi-core environment we may obtain several rescaling vectors without extra wall-clock time. Once we use these rescaling vectors in a single higher-order rescaling step, better rescaling rates may be expected, and thus computational efficiency is improved.
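
    For reference, here is a minimal sketch of the classical von Neumann algorithm discussed above: given a matrix whose columns are normalized to the unit sphere, it seeks a convex combination of the columns with small norm, i.e., an approximate solution of Ax = 0 with e'x = 1, x >= 0. The rescaling variants wrap extra logic around this loop; only the basic step is shown, and the tolerance and iteration cap are arbitrary choices.

        import numpy as np

        def von_neumann(A, eps=1e-4, max_iter=100000):
            """Return weights x >= 0 summing to 1 with ||A x|| <= eps,
            or None if a separating vector certifies infeasibility."""
            A = A / np.linalg.norm(A, axis=0)    # columns on the unit sphere
            n = A.shape[1]
            x = np.zeros(n); x[0] = 1.0          # start at the first column
            b = A @ x                            # current residual
            for _ in range(max_iter):
                if np.linalg.norm(b) <= eps:
                    return x                     # eps-approximate solution
                j = int(np.argmin(A.T @ b))      # column at the widest angle to b
                if A[:, j] @ b > 0:              # every column on b's side, so
                    return None                  # b separates 0 from conv(A)
                d = A[:, j] - b                  # exact line search along [b, a_j]
                lam = -(b @ d) / (d @ d)
                x *= (1.0 - lam); x[j] += lam    # keep x a convex combination
                b = (1.0 - lam) * b + lam * A[:, j]
            return x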

    Operations research software descriptions, vol. 1

    Optimizing the Efficiency of the United States Organ Allocation System through Region Reorganization

    Allocating organs for transplantation has been controversial in the United States for decades. Two main allocation approaches developed in the past are (1) to allocate organs to patients with higher priority at the same locale, and (2) to allocate organs to patients with the greatest medical need regardless of their locations. To balance these two allocation preferences, the U.S. organ transplantation and allocation network has lately implemented a three-tier hierarchical allocation system, dividing the U.S. into 11 regions composed of 59 Organ Procurement Organizations (OPOs). At present, a procured organ is offered first at the local level, and then regionally and nationally. The purpose of allocating organs at the regional level is to increase the likelihood that a donor-recipient match exists, compared to the former allocation approach, and to increase the quality of the match, compared to the latter approach. However, the question of which regional configuration is the most efficient remains unanswered. This dissertation develops several integer programming models to find the most efficient set of regions. Unlike previous efforts, our model addresses efficient region design for the entire hierarchical system given the existing allocation policy. To measure allocation efficiency, we use the intra-regional transplant cardinality. Two estimates are developed in this dissertation. One is a population-based estimate; the other is an estimate based on the situation where there is only one waiting list nationwide. The latter estimate is a refinement of the former in that it captures the effect of national-level allocation and the heterogeneity of clinical and demographic characteristics among donors and patients. To model national-level allocation, we apply a modeling technique similar to spill-and-recapture in the airline fleet assignment problem. A clinically based simulation model is used in this dissertation to estimate several necessary parameters in the analytic model and to verify the optimal regional configuration obtained from the analytic model. The resulting optimal region design problem is a large-scale set-partitioning problem in which there are too many columns to handle explicitly. Given this challenge, we adapt branch and price in this dissertation. We develop a mixed-integer programming pricing problem that is both theoretically and practically hard to solve. To alleviate this computational difficulty, we apply geographic decomposition, solving many smaller-scale pricing problems based on pre-specified subsets of OPOs instead of one large pricing problem. When solving each smaller-scale pricing problem, we also generate multiple "promising" regions that are not necessarily optimal to the pricing problem. In addition, we attempt to develop more efficient solutions for the pricing problem by studying alternative formulations and developing strong valid inequalities. The computational studies in this dissertation use clinical data and show that (1) regional reorganization is beneficial, and (2) our branch-and-price application is effective in solving the optimal region design problem.
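
    The column-generation core of the branch-and-price approach above can be sketched as follows. A restricted master solves the set-partitioning LP relaxation over the regions generated so far, and a pricing step looks for a region with positive reduced cost. Here pricing merely scans a small pre-enumerated pool of candidate regions, standing in for the dissertation's mixed-integer pricing problems; the three-OPO instance and all values are hypothetical.

        import numpy as np
        from scipy.optimize import linprog

        def solve_restricted_master(columns, values, n_opos):
            """LP relaxation of set partitioning over the current region pool:
            pick regions so that every OPO is covered exactly once."""
            A = np.array([[1.0 if opo in col else 0.0 for col in columns]
                          for opo in range(n_opos)])
            res = linprog(-np.asarray(values), A_eq=A, b_eq=np.ones(n_opos),
                          bounds=[(0, None)] * len(columns), method="highs")
            return res, -res.eqlin.marginals     # OPO duals (maximization sign)

        def price_out(candidates, duals):
            """Return the candidate region with the most positive reduced
            cost, or None if no region would improve the master."""
            best, best_rc = None, 1e-9
            for col, val in candidates:
                rc = val - sum(duals[opo] for opo in col)
                if rc > best_rc:
                    best, best_rc = (col, val), rc
            return best

        # Hypothetical pool for 3 OPOs; start the master with singleton regions.
        candidates = [(frozenset(s), v) for s, v in
                      [({0}, 1.0), ({1}, 1.0), ({2}, 1.0),
                       ({0, 1}, 3.0), ({1, 2}, 2.5), ({0, 1, 2}, 3.5)]]
        columns = [frozenset({i}) for i in range(3)]
        values = [1.0, 1.0, 1.0]
        while True:
            res, duals = solve_restricted_master(columns, values, 3)
            entering = price_out(candidates, duals)
            if entering is None or entering[0] in columns:
                break                            # no improving column remains
            columns.append(entering[0]); values.append(entering[1])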

    Aspiration Based Decision Support Systems

    This book focuses on the methodology of decision analysis and support related to the principle of reference point optimization (developed by the editors of this volume and also known variously as aspiration-led decision support, the quasi-satisficing framework of rationality, the DIDAS methodology, etc.). The selection principle applied for this volume was to concentrate on advances in theory and methodology related to this focusing theme, to supplement them with experiences and methodological advances gained through wide applications and tests in one particular application area -- the programming of the development of industrial structures in the chemical industry -- and finally to give a very short description of various software products developed under the contracted study agreement.

    Large-Scale Linear Programming

    During the week of June 2-6, 1980, the System and Decision Sciences Area of the International Institute for Applied Systems Analysis organized a workshop on large-scale linear programming in collaboration with the Systems Optimization Laboratory (SOL) of Stanford University, and co-sponsored by the Mathematical Programming Society (MPS). The participants in the meeting were invited from amongst those who actively contribute to research in large-scale linear programming methodology (including the development of algorithms and software). The first volume of the Proceedings contains five chapters. The first is an historical review by George B. Dantzig of his own and related research on time-staged linear programming problems. Chapter 2 contains five papers which address various techniques for exploiting sparsity and degeneracy in the now standard LU decomposition of the basis used with the simplex algorithm for standard (unstructured) problems. The six papers of Chapter 3 concern variants of the simplex method which take into account, through basis factorization, the specific block-angular structure of constraint matrices generated by dynamic and/or stochastic linear programs. In Chapter 4, five papers address extensions of the original Dantzig-Wolfe procedure for utilizing the structure of planning problems by decomposing the original LP into LP subproblems coordinated by a relatively simple LP master problem of a certain type. Chapter 5 contains four papers which constitute a mini-symposium on the now famous Shor-Khachiyan ellipsoidal method applied to both real and integer linear programs. The first chapter of Volume 2 contains three papers on non-simplex methods for linear programming. The remaining chapters of Volume 2 concern topics of present interest in the field. A bibliography of large-scale linear programming research completes Volume 2.