
    Improving problem reduction for 0-1 Multidimensional Knapsack Problems with valid inequalities

    © 2016 Elsevier Ltd. All rights reserved. This paper investigates a problem reduction heuristic for the Multidimensional Knapsack Problem (MKP). The MKP formulation is first strengthened by Global Lifted Cover Inequalities (GLCI) using a cutting-plane approach; the dynamic core problem heuristic is then applied to find good solutions. The GLCI are described in a general lifting framework and several variants are introduced. A Two-level Core problem Heuristic is also proposed to tackle large instances. Computational experiments were carried out on classic benchmark problems to demonstrate the effectiveness of this new method.
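As background for the cover-inequality machinery this line of work builds on, a classical minimal cover cut for a single 0-1 knapsack constraint can be sketched as follows. This is an illustrative sketch, not the paper's GLCI procedure; the function names and the greedy construction are assumptions made here:

```python
def is_cover(weights, C, b):
    """C is a cover if its items together exceed the knapsack capacity b."""
    return sum(weights[i] for i in C) > b

def minimal_cover(weights, b):
    """Greedily build a cover (heaviest items first), then shrink it until
    removing any single item would destroy the cover property."""
    items = sorted(range(len(weights)), key=lambda i: -weights[i])
    C = []
    for i in items:
        C.append(i)
        if is_cover(weights, C, b):
            break
    # Shrink to minimality: drop items that are not needed for the cover.
    for i in list(C):
        if is_cover(weights, [j for j in C if j != i], b):
            C.remove(i)
    return C

# Knapsack constraint 6*x0 + 5*x1 + 5*x2 + 4*x3 <= 10.
weights, b = [6, 5, 5, 4], 10
C = minimal_cover(weights, b)
# The cover inequality sum_{i in C} x_i <= |C| - 1 is valid for the knapsack.
```

Lifting such an inequality (the "L" in GLCI) then strengthens it by adding the remaining variables with positive coefficients while preserving validity.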

    Higher-order cover cuts from zero–one knapsack constraints augmented by two-sided bounding inequalities

    Extending our work on second-order cover cuts [F. Glover, H.D. Sherali, Second-order cover cuts, Mathematical Programming (2007), doi:10.1007/s10107-007-0098-4], we introduce a new class of higher-order cover cuts that are derived from the implications of a knapsack constraint in concert with supplementary two-sided inequalities that bound the sums of sets of variables. The new cuts can be appreciably stronger than the second-order cuts, which in turn dominate the classical knapsack cover inequalities. The process of generating these cuts makes it possible to sequentially utilize the second-order cuts by embedding them in the systems that define the inequalities from which the higher-order cover cuts are derived. We characterize properties of these cuts, design specialized procedures to generate them, and establish associated dominance relationships. These results are used to devise an algorithm that generates all non-dominated higher-order cover cuts and, in particular, to formulate and solve suitable separation problems for deriving a higher-order cut that deletes a given fractional solution to an underlying continuous relaxation. We also discuss a lifting procedure for further tightening any generated cut, and establish its polynomial-time operation for unit-coefficient cuts. A numerical example is presented that illustrates these procedures and the relative strength of the generated non-redundant, non-dominated higher-order cuts, all of which turn out to be facet-defining for this example. Some preliminary computational results are also presented to demonstrate the efficacy of these cuts in comparison with lifted minimal cover inequalities for the underlying knapsack polytope.
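The separation task mentioned above — finding a cut that deletes a given fractional solution — can be illustrated for the simplest case of classical cover inequalities (not the higher-order cuts of this paper). The greedy heuristic below is an assumption-laden sketch: it builds a cover from the items with the largest fractional values and then tests for violation:

```python
def separate_cover_cut(weights, b, xstar):
    """Heuristic separation: build a cover from items with the largest
    fractional values in xstar, then test whether the cover inequality
    sum_{i in C} x_i <= |C| - 1 is violated by xstar."""
    order = sorted(range(len(weights)), key=lambda i: -xstar[i])
    C, w = [], 0
    for i in order:
        C.append(i)
        w += weights[i]
        if w > b:  # C is now a cover
            violated = sum(xstar[i] for i in C) > len(C) - 1 + 1e-9
            return C, violated
    return None, False  # even all items together fit: no cover exists

# Fractional LP point for the constraint 6*x0 + 5*x1 + 5*x2 + 4*x3 <= 10.
C, violated = separate_cover_cut([6, 5, 5, 4], 10, [0.9, 0.8, 0.1, 0.0])
```

When `violated` is true, the cut `x0 + x1 <= 1` can be added to the relaxation and the LP re-solved, which is the basic cutting-plane loop the paper refines for its stronger cut families.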

    Approximating Geometric Knapsack via L-packings

    We study the two-dimensional geometric knapsack problem (2DK), in which we are given a set of n axis-aligned rectangular items, each with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum-profit subset of items inside the knapsack (without rotating items). The best-known polynomial-time approximation factor for this problem (even in the cardinality case) is (2 + \epsilon) [Jansen and Zhang, SODA 2004]. In this paper, we break the 2-approximation barrier, achieving a polynomial-time (17/9 + \epsilon) < 1.89 approximation, which improves to (558/325 + \epsilon) < 1.72 in the cardinality case. Essentially all prior work on 2DK approximation packs items inside a constant number of rectangular containers, where items inside each container are packed using a simple greedy strategy. We deviate for the first time from this setting: we show that there exists a large-profit solution where items are packed inside a constant number of containers plus one L-shaped region at the boundary of the knapsack, which contains items that are high and narrow and items that are wide and thin. As a second major and the main algorithmic contribution of this paper, we present a PTAS for this case. We believe that this will turn out to be useful in future work on geometric packing problems. We also consider the variant of the problem with rotations (2DKR), where items can be rotated by 90 degrees. In this case too, the best-known polynomial-time approximation factor (even for the cardinality case) is (2 + \epsilon) [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2 + \epsilon)-approximation for 2DKR, which improves to (4/3 + \epsilon) in the cardinality case. (64 pages; full version of a FOCS 2017 paper.)
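The feasibility notion used in 2DK — a non-overlapping packing of axis-aligned rectangles inside a square knapsack — is easy to state in code. The small validator below is illustrative only (rectangle representation and function names are assumptions); it is not part of the paper's algorithms:

```python
def overlaps(r, s):
    """Two axis-aligned rectangles (x, y, w, h) overlap iff they intersect
    with positive area; touching along an edge is allowed in a packing."""
    rx, ry, rw, rh = r
    sx, sy, sw, sh = s
    return rx < sx + sw and sx < rx + rw and ry < sy + sh and sy < ry + rh

def is_feasible_packing(rects, N):
    """Check that every placed rectangle lies inside the N x N knapsack
    and that no two placed rectangles overlap."""
    for (x, y, w, h) in rects:
        if x < 0 or y < 0 or x + w > N or y + h > N:
            return False
    return all(not overlaps(rects[i], rects[j])
               for i in range(len(rects))
               for j in range(i + 1, len(rects)))
```

The hard part of 2DK is of course choosing which items to place and where; structural results such as the L-shaped region above restrict the search to packings that a PTAS can enumerate.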

    Detecting semantic groups in MIP models


    Recoverable Robust Knapsacks: the Discrete Scenario Case

    Admission control problems have been studied extensively in the past. In a typical setting, resources such as bandwidth have to be distributed to the different customers according to their demands, maximizing the profit of the company. Yet in real-world applications those demands deviate, and in order to satisfy the service requirements a robust approach is often chosen, wasting benefits for the company. Our model overcomes this problem by allowing a limited recovery of a previously fixed assignment as soon as the data are known: at most k service promises may be violated and up to l new customers may be served. Applying this approach to the call admission problem on a single link of a telecommunication network leads to a recoverable robust version of the knapsack problem.
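The recovery step described above — dropping at most k previously accepted customers and admitting up to l new ones once the true data are known — can be sketched for a single scenario by brute force. The function name and the tiny instance are assumptions made for illustration, not the paper's model or data:

```python
from itertools import combinations

def best_recovery(profits, weights, cap, S, k, l):
    """Brute-force recovery for one realized scenario: starting from the
    first-stage selection S, drop at most k chosen items and add at most
    l new ones; return the best feasible profit (None if none exists)."""
    n = len(profits)
    outside = [i for i in range(n) if i not in S]
    best = None
    for d in range(min(k, len(S)) + 1):
        for drop in combinations(sorted(S), d):
            for a in range(min(l, len(outside)) + 1):
                for add in combinations(outside, a):
                    T = (set(S) - set(drop)) | set(add)
                    if sum(weights[i] for i in T) <= cap:
                        p = sum(profits[i] for i in T)
                        if best is None or p > best:
                            best = p
    return best

# First-stage choice {0, 1} becomes infeasible under realized weights
# [6, 5, 3] with capacity 8; with k = l = 1 we may drop one item and
# add another to restore feasibility.
value = best_recovery([5, 4, 3], [6, 5, 3], 8, {0, 1}, 1, 1)
```

In the discrete-scenario setting of the paper, the first-stage selection would be chosen so that such a recovery exists (and is profitable) in every scenario.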

    Generating facets for the independence system

    In this paper, we present procedures to obtain facet-defining inequalities for the independence system polytope. These procedures are defined for inequalities which are not necessarily rank inequalities. We illustrate the use of these procedures by deriving strong valid inequalities for the acyclic induced subgraph, triangle-free induced subgraph, bipartite induced subgraph, and knapsack polytopes. Finally, we derive a new family of facet-defining inequalities for the independence system polytope by adding a set of edges to antiwebs. © 2009 Society for Industrial and Applied Mathematics
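On small instances, validity of an inequality for an independence system polytope can be verified by enumerating all independent sets and checking their incidence vectors. The sketch below uses stable sets of a graph as the independence system; it is an illustrative check, not one of the paper's facet-generating procedures:

```python
from itertools import combinations

def stable_sets(n, edges):
    """Yield all stable (independent) sets of a graph on vertices 0..n-1."""
    E = {frozenset(e) for e in edges}
    for r in range(n + 1):
        for S in combinations(range(n), r):
            if all(frozenset((u, v)) not in E
                   for u, v in combinations(S, 2)):
                yield S

def is_valid(coeffs, rhs, n, edges):
    """sum_i coeffs[i] * x_i <= rhs is valid for the independence system
    polytope iff every stable set's incidence vector satisfies it."""
    return all(sum(coeffs[i] for i in S) <= rhs
               for S in stable_sets(n, edges))

# For the triangle on {0, 1, 2}, the rank inequality
# x0 + x1 + x2 <= 1 is valid (and in fact facet-defining).
```

Such brute-force checks are only practical for tiny instances, but they are a useful sanity test when experimenting with new inequality families.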

    Applications of combinatorial optimization arising from large scale surveys

    Many difficult statistical problems arising in censuses or in other large scale surveys have an underlying Combinatorial Optimization structure and can be solved with Combinatorial Optimization techniques. These techniques are often more efficient than the ad hoc solution techniques already developed in the field of Statistics. This thesis considers in detail two relevant cases of such statistical problems, and proposes solution approaches based on Combinatorial Optimization and Graph Theory. The first problem is the delineation of Functional Regions; the second concerns the selection of the scope of a large survey, as briefly described below. The purpose of this work is therefore the innovative application of known techniques to very important and economically relevant practical problems that the "Censuses, Administrative and Statistical Registers Department" (DICA) of the Italian National Institute of Statistics (Istat), where I am a senior researcher, has been dealing with. In several economic, statistical and geographical applications, a territory must be partitioned into Functional Regions. This operation is called Functional Regionalization. Functional Regions are areas that typically exceed administrative boundaries, and they are of interest for the evaluation of the social and economic phenomena under analysis. Functional Regions are not fixed and politically delimited, but are determined only by the interactions among all the localities of a territory. In this thesis, we focus on interactions represented by the daily journey-to-work flows between the localities in which people live and/or work. Functional Regionalization of a territory often turns out to be computationally difficult, because of the size (that is, the number of localities constituting the territory under study) and the nature of the journey-to-work matrix (that is, its sparsity).
In this thesis, we propose an innovative approach to Functional Regionalization based on the solution of graph partition problems over an undirected graph called the transition graph, which is generated from the journey-to-work data. In this approach, the problem is solved by recursively partitioning the transition graph using the min-cut algorithms proposed by Stoer and Wagner and by Brinkmeier. This approach is applied to the determination of the Functional Regions for the Italian administrative regions.

The target population of a statistical survey, also called the scope, is the set of statistical units that should be surveyed. In the case of some large surveys or censuses, the scope cannot be the set of all available units, but must be selected from this set. Surveying each unit has a cost and brings a different portion of the whole information. In this thesis, we focus on the case of the Agricultural Census. In this case, the units are farms, and we want to determine a subset of units yielding the minimum total cost while safeguarding at least a certain portion of the total information, according to the coverage levels assigned by the European regulations. Uncertainty aspects also occur, because the portion of information corresponding to each unit is not perfectly known before surveying it. The basic decision aspect is to establish the inclusion criteria before surveying each unit. We propose to solve the described problem using multidimensional binary knapsack models.
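The scope-selection model described above — minimize total survey cost while each coverage dimension reaches its required level — can be illustrated as a tiny multidimensional binary covering/knapsack model solved by brute force. All names and data below are invented for illustration; a real instance would be solved with an integer programming solver rather than enumeration:

```python
from itertools import product

def select_scope(costs, info, targets):
    """Brute-force scope selection: choose units x in {0,1}^n minimizing
    total cost subject to the coverage constraints
        sum_i info[i][d] * x_i >= targets[d]   for every dimension d,
    a multidimensional binary covering/knapsack model."""
    n, D = len(costs), len(targets)
    best, best_x = None, None
    for x in product((0, 1), repeat=n):
        if all(sum(info[i][d] * x[i] for i in range(n)) >= targets[d]
               for d in range(D)):
            c = sum(costs[i] * x[i] for i in range(n))
            if best is None or c < best:
                best, best_x = c, x
    return best, best_x

# Three units, two coverage dimensions (e.g. land area and livestock):
# unit i has cost costs[i] and contributes info[i][d] to dimension d.
costs = [4, 3, 2]
info = [[5, 1], [2, 4], [3, 3]]
targets = [5, 4]
cost, chosen = select_scope(costs, info, targets)
```

Enumeration is exponential in the number of units; for census-scale instances the same model would be handed to a MIP solver, which is what the knapsack formulation in the thesis is for.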