10 research outputs found

    Improvement to an existing multi-level capacitated lot sizing problem considering setup carryover, backlogging, and emission control

    This paper presents a multi-level, multi-item, multi-period capacitated lot-sizing problem (MLCLSP). The lot-sizing problem determines production quantities, setup decisions, and inventory levels in each period, fulfilling demand with limited capacity resources and respecting the Bill of Material (BOM) structure, while simultaneously minimizing production, inventory, and machine setup costs. The paper proposes an exact solution approach for the model developed by Chowdhury et al. (2018) [1], which adds backlogging costs, setup carryover, and greenhouse-gas emission control to the model. The approach applies Dantzig-Wolfe (D.W.) decomposition to decompose the multi-level capacitated problem into single-item uncapacitated lot-sizing subproblems. To avoid infeasibilities in the weighted problem (WP), an artificial variable is introduced and the Big-M method is employed in the D.W. decomposition to produce an always-feasible master problem. In addition, Wagner and Whitin's [2] forward recursion algorithm is incorporated in the solution approach, for both end and component items, to provide the minimum-cost production plan. Introducing artificial variables in the D.W. decomposition method is a novel approach to solving the MLCLSP model. Compared to the model of Chowdhury et al. (2018) [1], better performance was achieved in both computational time (reduced by 50%) and optimality gap (reduced by 97.3%).
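
The Wagner-Whitin forward recursion mentioned in the abstract can be sketched for the single-item uncapacitated case. This is a minimal illustration under simple assumptions (constant setup cost, linear holding cost), not the paper's multi-level implementation; all names are illustrative:

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Forward recursion: F[t] = minimum cost to cover demand for periods 1..t.

    The last production run covers periods j..t and is produced in period j,
    so units for period k are held for (k - j) periods.
    """
    T = len(demand)
    INF = float("inf")
    F = [0.0] + [INF] * T  # F[0] = 0: nothing to produce yet
    for t in range(1, T + 1):
        for j in range(1, t + 1):
            run = setup_cost
            for k in range(j, t + 1):
                run += holding_cost * (k - j) * demand[k - 1]
            F[t] = min(F[t], F[j - 1] + run)
    return F[T]
```

For example, with demands `[20, 30]`, setup cost 50, and unit holding cost 1, producing everything in period 1 (cost 50 + 30) beats two setups (cost 100), so the recursion returns 80.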

    An exact algorithm for the Partition Coloring Problem

    We study the Partition Coloring Problem (PCP), a generalization of the Vertex Coloring Problem where the vertex set is partitioned. The PCP asks to select one vertex from each subset of the partition in such a way that the chromatic number of the induced graph is minimum. We propose a new Integer Linear Programming formulation with an exponential number of variables. To solve this formulation to optimality, we design an effective Branch-and-Price algorithm. Good-quality initial solutions are computed via a new metaheuristic algorithm based on adaptive large neighborhood search. Extensive computational experiments on a benchmark set of instances from the literature show that our Branch-and-Price algorithm, combined with the new metaheuristic algorithm, solves several open instances to proven optimality for the first time, and compares favorably with the current state-of-the-art exact algorithm.
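
For intuition, the PCP definition above can be stated as a few lines of brute force for tiny instances (the paper's Branch-and-Price and ALNS are of course what make realistic sizes tractable); the function names and instance encoding here are illustrative:

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Exact chromatic number by brute-force color assignment (tiny graphs only)."""
    for k in range(1, len(vertices) + 1):
        for colors in product(range(k), repeat=len(vertices)):
            assign = dict(zip(vertices, colors))
            if all(assign[u] != assign[v] for u, v in edges):
                return k
    return len(vertices)

def pcp_brute_force(partition, edges):
    """Pick one vertex per partition class so the induced graph needs fewest colors."""
    best = None
    for selection in product(*partition):
        sel = set(selection)
        induced = [(u, v) for u, v in edges if u in sel and v in sel]
        k = chromatic_number(list(selection), induced)
        if best is None or k < best[0]:
            best = (k, selection)
    return best
```

For example, with classes `[[0, 1], [2, 3], [4, 5]]` and a triangle on vertices 0, 2, 4, selecting the alternative vertex from each class induces an edgeless graph, so one color suffices.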

    Local cuts and two-period convex hull closures for big-bucket lot-sizing problems

    Despite the significant attention they have drawn, big-bucket lot-sizing problems remain notoriously difficult to solve. Previous work of Akartunali and Miller (2012) presented computational and theoretical results indicating that what makes these problems difficult is the embedded single-machine, single-level, multi-period submodels. We therefore consider the simplest such submodel, a multi-item, two-period capacitated relaxation. We propose a methodology that can approximate the convex hulls of all such possible relaxations by generating violated valid inequalities. To generate such inequalities, we separate two-period projections of fractional LP solutions from the convex hull of the two-period closure we study. The convex hull representation of the two-period closure is generated dynamically using column generation. Contrary to regular column generation, our method is an outer approximation, and can therefore be used efficiently in a regular branch-and-bound procedure. We present computational results that illustrate how these two-period models can be effective in solving complicated problems.

    Price-and-verify: a new algorithm for recursive circle packing using Dantzig–Wolfe decomposition

    This is the author accepted manuscript; the final version is available from Springer via the DOI in this record. Packing rings into a minimum number of rectangles is an optimization problem which appears naturally in the logistics operations of the tube industry. It encompasses two major difficulties, namely the positioning of rings in rectangles and the recursive packing of rings into other rings. This problem is known as the Recursive Circle Packing Problem (RCPP). We present the first dedicated method for solving the RCPP that provides strong dual bounds, based on an exact Dantzig–Wolfe reformulation of a nonconvex mixed-integer nonlinear programming formulation. The key idea of this reformulation is to break symmetry on each recursion level by enumerating one-level packings, i.e., packings of circles into other circles, and by dynamically generating packings of circles into rectangles. We use column generation techniques to design a “price-and-verify” algorithm that solves this reformulation to global optimality. Extensive computational experiments on a large test set show that our method not only computes tight dual bounds, but often produces primal solutions better than those computed by heuristics from the literature. Funding: Federal Ministry of Education and Research.
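
As a toy instance of the one-level packings the reformulation enumerates, consider the base case of fitting two circles inside a circular container. The closed-form test below is a sketch of the geometric "verify" idea only (the paper handles rectangles, arbitrary counts, and deeper recursion); the function name is illustrative:

```python
def two_circles_fit(R, ra, rb):
    """Two inner circles of radii ra, rb fit inside a container of radius R
    iff they can be placed along a diameter: the centers can be at most
    (R - ra) + (R - rb) apart while staying inside, and must be at least
    ra + rb apart to avoid overlap, which reduces to R >= ra + rb."""
    return ra <= R and rb <= R and ra + rb <= R
```

For instance, circles of radii 1 and 2 fit in a container of radius 3 (tangent along a diameter), while two circles of radius 2 do not.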

    Automatic Dantzig–Wolfe reformulation of mixed integer programs

    Dantzig–Wolfe decomposition (or reformulation) is well known to provide strong dual bounds for specially structured mixed integer programs (MIPs). However, the method is not implemented in any state-of-the-art MIP solver, as it is considered to require structural problem knowledge and tailoring to this structure. We provide a computational proof-of-concept that the reformulation can be automated. That is, we perform a rigorous experimental study, which results in identifying a score to estimate the quality of a decomposition: after building a set of potentially good candidates, we exploit this score to detect which decomposition might be useful for Dantzig–Wolfe reformulation of a MIP. We experiment with general instances from MIPLIB2003 and MIPLIB2010 for which a decomposition method would not be the first choice, and demonstrate that strong dual bounds can be obtained from the automatically reformulated model using column generation. Our findings support the idea that Dantzig–Wolfe reformulation may hold more promise as a general-purpose tool than previously acknowledged by the research community.
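
The idea of scoring a candidate decomposition can be illustrated with a much simpler proxy than the paper's empirically derived score: the fraction of constraints relegated to the border, computed only when the block assignment is consistent. The encoding and scoring rule below are illustrative assumptions, not the paper's method:

```python
def bbd_score(row_supports, row_block):
    """row_supports[i]: set of column indices with nonzeros in constraint i.
    row_block[i]: block id for constraint i, or None if it goes to the border.
    Returns the border fraction (lower is better) if the assignment yields a
    valid bordered block-diagonal form, else None."""
    col_owner = {}
    for supp, blk in zip(row_supports, row_block):
        if blk is None:
            continue  # border rows may touch any column
        for c in supp:
            if col_owner.setdefault(c, blk) != blk:
                return None  # a column links two blocks: not a valid split
    border = sum(1 for b in row_block if b is None)
    return border / len(row_block)
```

For example, three constraints with supports `{0,1}`, `{2,3}`, `{0,2}` form a valid decomposition with the third row in the border (score 1/3), but assigning that row to a block is invalid because it couples the blocks' columns.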

    Random Sampling and Machine Learning to Understand Good Decompositions

    Motivated by its implications in the development of general-purpose solvers for decomposable Mixed Integer Programs (MIPs), we address a fundamental research question: can good decomposition patterns be consistently found by looking only at static properties of MIP input instances? We adopt a data-driven approach, devising a random sampling algorithm, considering a set of generic MIP base instances, and generating a large, balanced, and well-diversified set of decomposition patterns, which we analyze with machine learning tools. The use of both supervised and unsupervised techniques highlights interesting structures of random decompositions, as well as suggesting (under certain conditions) a positive answer to the initial question, while opening perspectives for future research.
    Keywords: Dantzig-Wolfe Decomposition, Machine Learning, Random Sampling

    Split Cuts From Sparse Disjunctions

    Cutting planes are one of the major techniques used in solving Mixed-Integer Linear Programming (MIP) models. Various types of cuts have long been exploited by MIP solvers, leading to state-of-the-art performance in practice. Among them, the class of split cuts, which includes Gomory Mixed Integer (GMI) and Mixed Integer Rounding (MIR) cuts from tableaux, is arguably the most effective class of general cutting planes within a branch-and-cut framework. Sparsity, on the other hand, is a common characteristic of real-world MIP problems, and it is an important part of why the simplex method works so well inside branch-and-cut. A natural question, therefore, is how sparsity can be incorporated into split cuts and how effective split cuts that exploit sparsity are. In this thesis, we evaluate the strength of split cuts that arise from sparse split disjunctions. In particular, we implement an approximate separation routine that separates only split cuts whose split disjunctions are sparse. We also present a straightforward way to exploit sparsity structure that is implicit in the MIP formulation. We run computational experiments and conclude that one promising way to produce good split cuts is to try sparse disjunctions and exploit such structure.
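
For reference, the GMI cut coefficients from a single tableau row can be computed in a few lines. This sketch normalizes the cut to right-hand side 1 and covers only the elementary split derived from one row; the thesis works with general, deliberately sparse split disjunctions, and the names below are illustrative:

```python
import math

def frac(x):
    return x - math.floor(x)

def gmi_cut(b, row, integer_vars):
    """Gomory mixed-integer cut from the simplex tableau row
    x_B + sum_j a_j x_j = b (all nonbasic x_j >= 0, b fractional, x_B integer).
    Returns coefficients c_j of the normalized cut sum_j c_j x_j >= 1."""
    f0 = frac(b)
    cut = {}
    for j, a in row.items():
        if j in integer_vars:
            fj = frac(a)
            cut[j] = fj / f0 if fj <= f0 else (1 - fj) / (1 - f0)
        else:
            cut[j] = a / f0 if a >= 0 else -a / (1 - f0)
    return cut
```

For instance, the row x_B + 0.3 x_1 - 0.4 x_2 = 2.5 with x_1 integer and x_2 continuous yields the cut 0.6 x_1 + 0.8 x_2 >= 1.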

    Data-driven Structure Detection in Optimization: Decomposition, Hub Location, and Brain Connectivity

    Employing data-driven methods to efficiently solve practical and large optimization problems is a recent trend that focuses on identifying patterns and structures in the problem data to help with its solution. In this thesis, we investigate this approach as an alternative way to tackle real-life, large-scale optimization problems which are hard to solve via traditional optimization techniques. We look into three different levels at which data-driven approaches can be used for optimization problems. The first level is the highest level, namely, model structure. Certain classes of mixed-integer programs are known to be efficiently solvable by exploiting special structures embedded in their constraint matrices. One such structure is the bordered block diagonal (BBD) structure that lends itself to Dantzig-Wolfe reformulation (DWR) and branch-and-price. Given a BBD structure for the constraint matrix of a general MIP, several platforms (such as COIN/DIP, SCIP/GCG and SAS/DECOMP) exist that can perform automatic DWR of the problem and solve the MIP using branch-and-price. The challenge of using branch-and-price as a general-purpose solver, however, lies in the requirement that a structure be known a priori. We propose a new algorithm to automatically detect BBD structures inherent in a matrix. We start by introducing a new measure of goodness to capture desired features of BBD structures such as minimal border size, block cohesion, and granularity of the structure. The main building block of the proposed approach is modularity-based community detection, used in lieu of traditional graph/hypergraph partitioning methods to alleviate one major drawback of the existing approaches in the literature: predefining the number of blocks.
When tested on MIPLIB instances using the SAS/DECOMP framework, the proposed algorithm was found to identify structures that, on average, lead to significant improvements in both computation time and optimality gap compared to those detected by the state-of-the-art BBD detection techniques in the literature. The second level is problem type, where problem-specific patterns and characteristics are to be detected and exploited. We investigate the hub location problem (HLP) as an example. The HLP models the problem of selecting a subset of nodes within a given network as hubs, which enjoy economies of scale, and allocating the remaining nodes to the selected hubs. The main challenge of using the HLP in certain promising domains is the inability of current solution approaches to handle large instances (e.g., networks with more than 1000 nodes). In this work, we explore an important pattern in optimal hub networks: spatial separability. We show that at optimal solutions, nodes are typically partitioned into allocation clusters in such a way that the convex hulls of these clusters are disjoint. We exploit this pattern and propose a new data-driven approach that uses the insights generated from the solution of a smaller problem (a low-resolution representation) to find high-quality solutions for large HLPs. The third and lowest level is the instance level, where instance-specific data is explored for patterns that would help solve large problem instances. To this end, we open up a new application of HLPs originating from human brain connectivity networks (BCN) by introducing the largest (998 nodes) and the first three-dimensional dataset in the literature so far. Experiments reveal that the HLP models can successfully reproduce results similar to those in the medical literature related to hub organisation of the brain.
We conclude that, with certain customizations and methods that allow tackling very large instances, HLP models can potentially become an important tool to further investigate the intricate nature of hub organisations in the human brain.
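
A drastically simplified version of the BBD detection step can be sketched in pure Python: fix a candidate border, link the remaining constraints whenever they share a variable, and read blocks off as connected components. This is a lightweight stand-in for the modularity-based community detection the thesis actually proposes (which also chooses the border itself and scores structures); all names and the instance are illustrative:

```python
from collections import defaultdict

def detect_blocks(row_supports, border_rows):
    """row_supports[i]: set of column indices with nonzeros in constraint i.
    border_rows: constraints assigned to the border. Remaining constraints
    sharing a column become adjacent; connected components become blocks."""
    keep = [i for i in range(len(row_supports)) if i not in border_rows]
    by_col = defaultdict(list)
    for i in keep:
        for c in row_supports[i]:
            by_col[c].append(i)
    adj = defaultdict(set)
    for rows in by_col.values():
        for a in rows:
            for b in rows:
                if a != b:
                    adj[a].add(b)
    seen, blocks = set(), []
    for i in keep:
        if i in seen:
            continue
        stack, comp = [i], []
        seen.add(i)
        while stack:  # depth-first search for one component
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        blocks.append(sorted(comp))
    return blocks
```

For example, with supports `[{0,1}, {1,2}, {3,4}, {4,5}, {0,3}]`, sending the last (linking) constraint to the border leaves two blocks, `[0, 1]` and `[2, 3]`.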