3,275 research outputs found

    Bin Packing and Related Problems: General Arc-flow Formulation with Graph Compression

    We present an exact method, based on an arc-flow formulation with side constraints, for solving bin packing and cutting stock problems, including multi-constraint variants, by simply representing all the patterns in a very compact graph. Our method includes a graph compression algorithm that usually reduces the size of the underlying graph substantially without weakening the model. In contrast to our method, which provides strong models, conventional models are usually highly symmetric and provide very weak lower bounds. Our formulation is equivalent to Gilmore and Gomory's, thus providing a very strong linear relaxation. However, instead of using column generation in an iterative process, the method constructs a graph in which every valid packing pattern corresponds to a path from the source to the target node. The same method, without any problem-specific parameterization, was used to solve a large variety of instances from several different cutting and packing problems. In this paper, we deal with vector packing, graph coloring, bin packing, cutting stock, cardinality-constrained bin packing, cutting stock with cutting knife limitation, cutting stock with binary patterns, bin packing with conflicts, and cutting stock with binary patterns and forbidden pairs. We report computational results obtained on many benchmark data sets, all of them showing a large advantage of this formulation over the traditional ones.
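    To make the arc-flow idea concrete, the sketch below builds an uncompressed arc-flow graph for a small one-dimensional bin packing instance: nodes are partial bin loads from 0 up to the bin capacity, an arc (u, u + w) packs one item of size w, and loss arcs route partially filled bins to the sink. The function name and the omission of graph compression and symmetry-breaking arc labels are illustrative assumptions, not the authors' implementation.

        # Minimal sketch of an arc-flow graph for 1D bin packing (no compression).
        # Every source-to-sink path in this graph encodes a valid packing pattern.

        def build_arcflow_graph(capacity, item_sizes):
            """Return item arcs {(u, v): item size} and loss arcs (v, capacity)."""
            sizes = sorted(set(item_sizes))
            arcs = {}
            reachable = {0}               # partial loads reachable from the empty bin
            for u in range(capacity + 1):
                if u not in reachable:
                    continue
                for w in sizes:
                    v = u + w
                    if v <= capacity:
                        arcs[(u, v)] = w  # packing one item of size w raises the load
                        reachable.add(v)
            # loss arcs let a partially filled bin flow to the sink node `capacity`
            loss_arcs = [(v, capacity) for v in sorted(reachable - {0, capacity})]
            return arcs, loss_arcs

        arcs, loss = build_arcflow_graph(capacity=10, item_sizes=[6, 4, 3])
        print(len(arcs), "item arcs,", len(loss), "loss arcs")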

    Spotting Difficult Weakly Correlated Binary Knapsack Problems

    In this paper, we examine the possibility of quickly deciding whether or not an instance of a binary knapsack problem is difficult for branch-and-bound algorithms. We first observe that the distribution of objective function values is smooth and unimodal. We define a measure of the difficulty of solving knapsack problems by branch-and-bound algorithms, and examine the relationship between the degree of correlation between profit and cost values, the skewness of the distribution of objective function values, and the difficulty of solving weakly correlated binary knapsack problems. We see that even though it is unlikely that an exact relationship exists for individual problem instances, some aggregate relationships may be observed. Key words: Binary Knapsack Problems; Skewness; Computational Experiments.
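    The sketch below illustrates the kind of statistic discussed here: it generates a weakly correlated 0-1 knapsack instance and estimates the skewness of the objective-value distribution by sampling random feasible subsets. The instance generator, the sampling proxy, and all parameter choices are assumptions for illustration, not the authors' exact experimental setup or difficulty measure.

        # Illustrative sketch: skewness of sampled objective values for a
        # weakly correlated binary knapsack instance (assumed setup).

        import random
        import statistics

        def weakly_correlated_instance(n, R=1000, spread=100, seed=0):
            rng = random.Random(seed)
            costs = [rng.randint(1, R) for _ in range(n)]
            # profit = cost plus small noise, hence "weakly correlated"
            profits = [max(1, c + rng.randint(-spread, spread)) for c in costs]
            return profits, costs

        def sample_skewness(values):
            mean = statistics.fmean(values)
            sd = statistics.pstdev(values)
            return sum((v - mean) ** 3 for v in values) / (len(values) * sd ** 3)

        def objective_value_skewness(profits, costs, capacity, samples=10000, seed=1):
            rng = random.Random(seed)
            n = len(profits)
            vals = []
            while len(vals) < samples:
                subset = [i for i in range(n) if rng.random() < 0.5]
                if sum(costs[i] for i in subset) <= capacity:      # keep feasible subsets only
                    vals.append(sum(profits[i] for i in subset))
            return sample_skewness(vals)

        p, c = weakly_correlated_instance(n=50)
        print(objective_value_skewness(p, c, capacity=sum(c) // 2))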

    Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints

    We investigate two new optimization problems: minimizing a submodular function subject to a submodular lower bound constraint (submodular cover) and maximizing a submodular function subject to a submodular upper bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning, including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [14, 35], which is inapproximable in the worst case. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that both problems are closely related and that an approximation algorithm for one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, showing that our approximation factors are tight up to log factors. Finally, we empirically demonstrate the performance and good scalability properties of our algorithms.
    Comment: 23 pages. A short version of this appeared in Advances of NIPS-201
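    As a concrete illustration of the constrained formulation, the sketch below runs a generic cost-effectiveness greedy for a submodular-cover-style problem: pick the element with the best marginal coverage per marginal cost until the cover constraint g(S) >= g(V) is met. This is a standard greedy heuristic shown for intuition only; it is not the specific algorithm or guarantee from the paper, and the toy functions are assumptions.

        # Generic greedy sketch for a cover constraint g(S) >= g(V) while keeping f(S) low.

        def greedy_submodular_cover(ground_set, f, g, eps=1e-9):
            """f: cost to keep small, g: coverage to satisfy; both set functions."""
            target = g(ground_set)
            S = set()
            while g(S) < target - eps:
                best, best_ratio = None, float("-inf")
                for e in ground_set - S:
                    gain = g(S | {e}) - g(S)          # marginal coverage
                    cost = f(S | {e}) - f(S)          # marginal cost
                    if gain <= eps:
                        continue
                    ratio = gain / max(cost, eps)
                    if ratio > best_ratio:
                        best, best_ratio = e, ratio
                if best is None:                      # nothing improves coverage; stop
                    break
                S.add(best)
            return S

        # Toy example: cover a small universe with few sets.
        covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d"}}
        g = lambda S: len(set().union(*(covers[e] for e in S)))
        f = lambda S: len(S)                          # modular cost, just for the toy example
        print(greedy_submodular_cover(set(covers), f, g))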

    Solving Medium-Density Subset Sum Problems in Expected Polynomial Time: An Enumeration Approach

    The subset sum problem (SSP) can be briefly stated as: given a target integer $E$ and a set $A$ containing $n$ positive integers $a_j$, find a subset of $A$ summing to $E$. The density $d$ of an SSP instance is defined as the ratio of $n$ to $m$, where $m$ is the logarithm of the largest integer in $A$. Based on the structural and statistical properties of subset sums, we present an improved enumeration scheme for SSP and implement it as a complete and exact algorithm (EnumPlus). The algorithm always equivalently reduces an instance to a low-density one and then solves it by enumeration. Through this approach, we show that it is possible to design a single algorithm that efficiently solves instances of arbitrary density in a uniform way. Furthermore, our algorithm has a considerable performance advantage over previous algorithms. First, it extends the density range in which SSP can be solved in expected polynomial time: it solves SSP in expected $O(n\log{n})$ time when the density $d \geq c\cdot \sqrt{n}/\log{n}$, while the previously best density bound is $d \geq c\cdot n/(\log{n})^{2}$. In addition, the overall expected time and space requirements in the average case are proven to be $O(n^5\log n)$ and $O(n^5)$, respectively. Second, in the worst case, it slightly improves the previously best time complexity of exact algorithms for SSP: the worst-case time complexity of our algorithm is proved to be $O((n-6)2^{n/2}+n)$, while the previously best result is $O(n2^{n/2})$.
    Comment: 11 pages, 1 figure
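    For context, the sketch below implements the classic meet-in-the-middle enumeration for subset sum, which attains roughly the $O(n2^{n/2})$ worst-case behaviour that the paper's EnumPlus algorithm slightly improves on. It is a baseline for comparison, not EnumPlus itself, and the example instance is made up.

        # Baseline sketch: meet-in-the-middle enumeration for subset sum.

        from bisect import bisect_left

        def subset_sums(items):
            """All (sum, mask) pairs over subsets of `items`."""
            sums = [(0, 0)]
            for i, a in enumerate(items):
                sums += [(s + a, m | (1 << i)) for s, m in sums]
            return sums

        def meet_in_the_middle(a, target):
            """Return a subset of `a` summing to `target`, or None."""
            half = len(a) // 2
            left = subset_sums(a[:half])
            right = sorted(subset_sums(a[half:]))          # sorted by sum
            right_sums = [s for s, _ in right]
            for s, mask in left:
                j = bisect_left(right_sums, target - s)    # binary search for the complement
                if j < len(right_sums) and right_sums[j] == target - s:
                    full = mask | (right[j][1] << half)    # combine the two half-masks
                    return [a[i] for i in range(len(a)) if full >> i & 1]
            return None

        print(meet_in_the_middle([15, 22, 14, 26, 32, 9, 16, 8], 53))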

    Algorithm Engineering in Robust Optimization

    Robust optimization is a young and emerging field of research that has received a considerable increase of interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits very well to the field of robust optimization and yields a rewarding new perspective on both the current state of research and open research directions. To this end we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on comparing concepts is only just beginning. What remains a drawback in many papers on robustness is the missing link of feeding the experimental results back into the design.