
    Decomposition Based Search - A theoretical and experimental evaluation

    In this paper we present and evaluate a search strategy called Decomposition Based Search (DBS), which is based on two steps: subproblem generation and subproblem solution. Subproblems are generated through value ranking and domain splitting; subdomains are explored so as to generate, according to the chosen heuristic, the most promising subproblems first. We show that two well-known search strategies, Limited Discrepancy Search (LDS) and Iterative Broadening (IB), can be seen as special cases of DBS. First we present a tuning of DBS that visits the same search nodes as IB but avoids restarts. Then we compare DBS and LDS, both theoretically and computationally, using the same heuristic. We prove that, under realistic assumptions, DBS has a higher probability of success than LDS on a comparable number of nodes. Experiments on a constraint satisfaction problem and an optimization problem show that DBS is indeed very effective compared to LDS. Comment: 16 pages, 8 figures. LIA Technical Report LIA00203, University of Bologna, 200
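
    To make the two-step scheme concrete, here is a minimal sketch in Python of how DBS-style subproblem generation could look, assuming a user-supplied value-scoring heuristic and a complete CSP solver (both are placeholders for exposition, not the authors' implementation):

        from itertools import product

        def rank_and_split(domain, score, k=2):
            # Sort values by heuristic score, then cut the domain into k
            # subdomains so the first subdomain holds the most promising values.
            ranked = sorted(domain, key=score, reverse=True)
            size = -(-len(ranked) // k)  # ceiling division
            return [ranked[i:i + size] for i in range(0, len(ranked), size)]

        def dbs(variables, domains, score, solve):
            # Step 1 (generation): split every domain into ranked subdomains,
            # then order the Cartesian combinations so that combinations built
            # from low-ranked subdomains (larger index sum) come later.
            splits = {v: rank_and_split(domains[v], lambda val: score(v, val))
                      for v in variables}
            combos = sorted(product(*(range(len(splits[v])) for v in variables)),
                            key=sum)
            # Step 2 (solution): solve each subproblem with a complete solver.
            for combo in combos:
                sub = {v: splits[v][i] for v, i in zip(variables, combo)}
                solution = solve(sub)  # any complete CSP solver over 'sub'
                if solution is not None:
                    return solution
            return None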

    A hybrid constraint programming and semidefinite programming approach for the stable set problem

    This work presents a hybrid approach to the maximum stable set problem that combines constraint programming and semidefinite programming. The approach consists of two steps: subproblem generation and subproblem solution. First we rank the variable domain values based on the solution of a semidefinite relaxation. Using this ranking, we generate the most promising subproblems first by exploring a search tree with a limited discrepancy strategy. The subproblems are then solved using a constraint programming solver. To strengthen the semidefinite relaxation, we propose to infer additional constraints from the discrepancy structure. Computational results show that the semidefinite relaxation is very informative, since solutions of good quality are found in the first subproblems or optimality is proven immediately. Comment: 14 pages
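
    As a rough illustration of the ranking step, the sketch below orders 0/1 decisions by a fractional relaxation value and enumerates partial fixings in limited-discrepancy order. The x_relax mapping and the choice to fix only the top-ranked vertices are assumptions made for exposition, not the paper's exact procedure:

        from itertools import combinations

        def rank_by_relaxation(vertices, x_relax):
            # Order vertices by their fractional value in the relaxation:
            # values near 1 suggest membership in a large stable set.
            return sorted(vertices, key=lambda v: x_relax[v], reverse=True)

        def fixings_by_discrepancy(ranked, x_relax, depth, max_disc):
            # Yield partial fixings of the 'depth' most promising vertices in
            # limited-discrepancy order; each discrepancy flips the rounding
            # suggested by the relaxation for one vertex. Vertices left
            # unfixed are decided by the constraint programming solver.
            head = ranked[:depth]
            for d in range(max_disc + 1):
                for flips in combinations(range(depth), d):
                    yield {v: (1 - round(x_relax[v])) if i in flips
                              else round(x_relax[v])
                           for i, v in enumerate(head)}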

    Intermediate integer programming representations using value disjunctions

    We introduce a general technique to create an extended formulation of a mixed-integer program. We classify the integer variables into blocks, each of which generates a finite set of vector values. The extended formulation is constructed by creating a new binary variable for each generated value. Initial experiments show that the extended formulation can have a more compact complete description than the original formulation. We prove that, using this reformulation technique, the facet description decomposes into one "linking polyhedron" per block plus the "aggregated polyhedron", and each of these polyhedra can be analyzed separately. For the case of identical coefficients in a block, we provide a complete description of the linking polyhedron and a polynomial-time separation algorithm. Applied to the knapsack problem with a fixed number of distinct coefficients, this theorem yields a complete description in an extended space with a polynomial number of variables. Comment: 26 pages, 5 figures
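
    For intuition, a miniature of the construction (our notation, not the paper's): if a block of integer variables $(x_1, x_2)$ can only attain the values $v^1, \dots, v^k$, the extended formulation introduces one binary variable $\lambda_j$ per attainable value and links them by

        \sum_{j=1}^{k} \lambda_j = 1, \qquad
        \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \sum_{j=1}^{k} \lambda_j v^j, \qquad
        \lambda_j \in \{0,1\}, \quad j = 1, \dots, k,

    so every constraint on the block becomes a statement about which single value is selected.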

    An optimization framework for solving capacitated multi-level lot-sizing problems with backlogging

    This paper proposes two new mixed integer programming models for capacitated multi-level lot-sizing problems with backlogging, whose linear programming relaxations provide good lower bounds on the optimal solution value. We show that both of these strong formulations yield the same lower bounds. In addition to these theoretical results, we propose a new, effective optimization framework that achieves high-quality solutions in reasonable computational time. Computational results show that the proposed framework is superior to other well-known approaches on several important performance dimensions.
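
    For readers unfamiliar with the problem class, a standard single-item version of capacitated lot-sizing with backlogging (a textbook formulation, not the strong reformulations proposed in the paper) reads

        \min \sum_{t=1}^{T} \left( p_t x_t + f_t y_t + h_t s_t + b_t r_t \right)
        \quad \text{s.t.} \quad
        s_{t-1} - r_{t-1} + x_t = d_t + s_t - r_t, \qquad
        x_t \le C_t y_t, \qquad
        x_t, s_t, r_t \ge 0, \; y_t \in \{0,1\},

    where $x_t$ is production, $y_t$ a setup indicator, $s_t$ inventory, $r_t$ backlog, $d_t$ demand, and $C_t$ capacity in period $t$. The strong formulations in the paper tighten the linear relaxation of multi-level versions of such models.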

    Designing the Liver Allocation Hierarchy: Incorporating Equity and Uncertainty

    Liver transplantation is the only available therapy for any acute or chronic condition resulting in irreversible liver dysfunction. The liver allocation system in the U.S. is administered by the United Network for Organ Sharing (UNOS), a scientific and educational nonprofit organization. The main components of the organ procurement and transplant network are Organ Procurement Organizations (OPOs), collections of transplant centers responsible for maintaining local waiting lists, harvesting donated organs, and carrying out transplants. Currently in the U.S., OPOs are grouped into 11 regions to facilitate organ allocation, and a three-tier mechanism is used that aims to reduce organ preservation time and transport distance, maintaining organ quality while giving sicker patients higher priority. Livers are scarce and perishable resources that rapidly lose viability, which makes transport distance a crucial factor in transplant outcomes. When a liver becomes available, it is matched with patients on the waiting list according to a complex mechanism that gives priority to patients within the harvesting OPO and region. Transplants at the regional level have accounted for more than 50% of all transplants since 2000.

    This dissertation focuses on the design of regions for the liver allocation hierarchy, using optimization models that incorporate geographic equity as well as uncertainty throughout the analysis. We employ multi-objective optimization algorithms that solve parametric integer programs to balance two possibly conflicting objectives: maximizing efficiency, measured by the number of viability-adjusted transplants, and maximizing geographic equity, measured by the minimum rate of organ flow into individual OPOs from outside their own local area. Our results show that, compared with the current performance of the system, redesigning the regional configuration of the national liver allocation hierarchy can achieve efficiency improvements of up to 6% or equity gains of about 70%.

    We also introduce a stochastic programming framework that captures the uncertainty of the system through scenarios corresponding to different snapshots of the national waiting list, and maximizes the expected benefit from liver transplants under this stochastic view. We explore a range of algorithmic and computational strategies, including sampling methods, column generation strategies, and branching and integer-solution generation procedures, to aid the solution of the resulting large-scale integer programs. We also explore an OPO-based extension of our two-stage stochastic programming framework that lends itself to more extensive computational testing. The regional configurations obtained with these models are estimated to increase the expected lifetime gained per transplant operation by up to 7% compared with the current system.

    Finally, this dissertation addresses the general question of designing efficient algorithms that combine column and cut generation to solve large-scale two-stage stochastic linear programs. We introduce a flexible method that combines column generation with the L-shaped method for two-stage stochastic linear programming, and we evaluate algorithm designs that employ stabilization subroutines to strengthen both column and cut generation and effectively avoid degeneracy. We study two-stage stochastic versions of the cutting stock and multi-commodity network flow problems to analyze the performance of these algorithms.
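
    As background for the column-and-cut generation discussion, a bare-bones L-shaped (Benders) loop for a two-stage stochastic LP might look as follows; the solver callables are hypothetical placeholders, and the column generation and stabilization machinery developed in the dissertation is deliberately omitted:

        import numpy as np

        def l_shaped(solve_master, solve_recourse, scenarios, probs, tol=1e-6):
            # solve_master(cuts) -> (x, theta): a first-stage solution and a
            # lower bound on expected recourse cost under the current cuts.
            # solve_recourse(s, x) -> (cost, a, b): the scenario's recourse
            # cost and a dual-based optimality cut theta >= a @ x + b.
            cuts = []
            while True:
                x, theta = solve_master(cuts)
                costs, As, bs = zip(*(solve_recourse(s, x) for s in scenarios))
                expected = float(np.dot(probs, costs))
                if expected - theta <= tol:  # master bound is tight: optimal
                    return x
                # Aggregate the scenario cuts into one expected-value cut.
                a = sum(p * np.asarray(ai) for p, ai in zip(probs, As))
                b = float(np.dot(probs, bs))
                cuts.append((a, b))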

    Separable Convex Optimization with Nested Lower and Upper Constraints

    We study a convex resource allocation problem in which lower and upper bounds are imposed on partial sums of allocations. This model is linked to a large range of applications, including production planning, speed optimization, stratified sampling, support vector machines, portfolio management, and telecommunications. We propose an efficient gradient-free divide-and-conquer algorithm, which uses monotonicity arguments to generate valid bounds from the recursive calls, and eliminate linking constraints based on the information from sub-problems. This algorithm does not need strict convexity or differentiability. It produces an $\epsilon$-approximate solution for the continuous problem in $\mathcal{O}(n \log m \log \frac{nB}{\epsilon})$ time and an integer solution in $\mathcal{O}(n \log m \log B)$ time, where $n$ is the number of decision variables, $m$ is the number of constraints, and $B$ is the resource bound. A complexity of $\mathcal{O}(n \log m)$ is also achieved for the linear and quadratic cases. These are the best complexities known to date for this important problem class. Our experimental analyses confirm the good performance of the method, which produces optimal solutions for problems with up to 1,000,000 variables in a few seconds. Promising applications to the support vector ordinal regression problem are also investigated.
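
    The problem class is easy to experiment with in its simplest form. The sketch below solves the single-constraint special case (no nested partial-sum bounds) by bisection on the common marginal cost; this is a generic textbook approach shown for illustration, not the divide-and-conquer algorithm proposed in the paper:

        import numpy as np

        def allocate(grad_inv, B, lo, hi, iters=60):
            # Minimize sum_i f_i(x_i) s.t. sum_i x_i = B, lo <= x <= hi, with
            # each f_i convex. grad_inv(lam) returns, per coordinate, the x_i
            # solving f_i'(x_i) = lam (the inverse gradient). Bisection finds
            # the marginal cost lam at which the clipped allocations use
            # exactly the budget B.
            lam_lo, lam_hi = -1e12, 1e12
            x = np.clip(grad_inv(0.0), lo, hi)
            for _ in range(iters):
                lam = 0.5 * (lam_lo + lam_hi)
                x = np.clip(grad_inv(lam), lo, hi)  # KKT: project onto bounds
                if x.sum() > B:
                    lam_hi = lam  # allocations too large: lower marginal cost
                else:
                    lam_lo = lam
            return x

        # Example with quadratic costs f_i(x) = 0.5 * w_i * x^2, so the
        # inverse gradient is x_i = lam / w_i (data is made up).
        w = np.array([1.0, 2.0, 4.0])
        print(allocate(lambda lam: lam / w, B=10.0,
                       lo=np.zeros(3), hi=np.full(3, 8.0)))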

    Trying again to fail-first

    For constraint satisfaction problems (CSPs), Haralick and Elliott [1] introduced the Fail-First Principle and defined it in terms of minimizing branch depth. By devising a range of variable ordering heuristics, each in turn trying harder to fail first, Smith and Grant [2] showed that adherence to this strategy does not guarantee a reduction in search effort. The present work builds on Smith and Grant. It benefits from a new framework for characterizing heuristic performance that defines two policies: one concerned with enhancing the likelihood of correctly extending a partial solution, the other with minimizing the effort needed to prove insolubility. The Fail-First Principle can be restated as calling for adherence to the second, fail-first policy while discounting the first, promise policy. Our work corrects some deficiencies in the work of Smith and Grant and confirms their finding that the Fail-First Principle, as originally defined, is insufficient. We then show that adherence to the fail-first policy must be measured in terms of the size of insoluble subtrees, not branch depth. We also show that for soluble problems, both policies must be considered when evaluating heuristic performance; hence, even in its proper form, the Fail-First Principle is insufficient. Finally, we show that the "FF" series of heuristics devised by Smith and Grant is a powerful tool for evaluating heuristic performance, including the subtle relations between heuristic features and adherence to a policy.
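
    To illustrate the kind of heuristics under discussion, here are two classic fail-first variable ordering rules in Python; they stand in for, but are not identical to, the "FF" series of Smith and Grant:

        def dom(var, domains, graph):
            # The classic fail-first heuristic: prefer the variable with the
            # smallest remaining domain.
            return len(domains[var])

        def dom_over_deg(var, domains, graph):
            # Tries harder to fail first: smallest ratio of domain size to
            # degree in the constraint graph.
            return len(domains[var]) / max(1, len(graph[var]))

        def next_variable(unassigned, domains, graph, heuristic=dom_over_deg):
            # Select the next variable to branch on (ties broken arbitrarily).
            return min(unassigned, key=lambda v: heuristic(v, domains, graph))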