22 research outputs found

    Directed Steiner Tree and the Lasserre Hierarchy

    The goal of the Directed Steiner Tree problem is to find a minimum-cost tree in a directed graph G=(V,E) that connects all terminals X to a given root r. It is well known that, modulo a logarithmic factor in the approximation guarantee, it suffices to consider acyclic graphs in which the nodes are arranged in L <= log |X| levels. Unfortunately, the natural LP formulation already has an |X|^{1/2} integrality gap for 5 levels. We show that for every L, the O(L)-round Lasserre strengthening of this LP has integrality gap O(L log |X|). This provides a polynomial-time |X|^{epsilon}-approximation and an O(log^3 |X|)-approximation in O(n^{log |X|}) time, matching the best known approximation guarantee, which is obtained by a greedy algorithm of Charikar et al.
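    For orientation, the cut-based LP relaxation usually called the natural formulation can be written as follows; this is a standard textbook form and an assumption about which relaxation the abstract means, not a statement taken from the paper (edges are assumed to be directed away from the root r, and \delta^-(S) denotes the arcs entering S):

        \min \sum_{e \in E} c_e x_e
        \text{s.t. } \sum_{e \in \delta^-(S)} x_e \ge 1 \quad \text{for all } S \subseteq V \setminus \{r\} \text{ with } S \cap X \neq \emptyset,
        \qquad x_e \ge 0 \quad \text{for all } e \in E.

    The integrality gap statement says that an optimal fractional solution of this LP can be cheaper than any integral tree by a factor of order |X|^{1/2} once the level structure has depth 5.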

    Approximation Limits of Linear Programs (Beyond Hierarchies)

    We develop a framework for obtaining approximation limits of polynomial-size linear programs from lower bounds on the nonnegative ranks of suitably defined matrices. This framework yields unconditional impossibility results that apply to any linear program, as opposed to only programs generated by hierarchies. Using our framework, we prove that O(n^{1/2-eps})-approximations for CLIQUE require linear programs of size 2^{n^{\Omega(eps)}}. (This lower bound applies to linear programs using a certain encoding of CLIQUE as a linear optimization problem.) Moreover, we establish a similar result for approximations of semidefinite programs by linear programs. Our main ingredient is a quantitative improvement of Razborov's rectangle corruption lemma for the high-error regime, which gives strong lower bounds on the nonnegative rank of certain perturbations of the unique disjointness matrix.
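    The bridge between LP size and nonnegative rank used by this line of work is Yannakakis' factorization theorem; the following statement is standard background, not a result of the paper. For a polytope P = \{x : Ax \le b\} = \mathrm{conv}\{v_1, \dots, v_k\}, the slack matrix has entries S_{ij} = b_i - A_i v_j \ge 0, and the smallest number of inequalities in any extended formulation of P (its extension complexity) satisfies

        \mathrm{xc}(P) = \mathrm{rank}_+(S),

    where \mathrm{rank}_+ denotes the nonnegative rank, i.e., the smallest r such that S = TU with T and U entrywise nonnegative and inner dimension r. Lower bounds on \mathrm{rank}_+ therefore translate directly into lower bounds on the size of any linear programming formulation.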

    Generating general-purpose cutting planes for mixed-integer programs

    Franz Wesselmann. Paderborn, Univ., Diss., 201

    On the Power and Limitations of Branch and Cut

    The Stabbing Planes proof system [Paul Beame et al., 2018] was introduced to model the reasoning carried out in practical mixed integer programming solvers. As a proof system, it is powerful enough to simulate Cutting Planes and to refute the Tseitin formulas - certain unsatisfiable systems of linear equations mod 2 - which are canonical hard examples for many algebraic proof systems. In a recent (and surprising) result, Dadush and Tiwari [Daniel Dadush and Samarth Tiwari, 2020] showed that these short refutations of the Tseitin formulas can be translated into quasi-polynomial size and depth Cutting Planes proofs, refuting a long-standing conjecture.

    This translation raises several interesting questions. First, can all Stabbing Planes proofs be efficiently simulated by Cutting Planes? This would allow the substantial analysis done on the Cutting Planes system to be lifted to practical mixed integer programming solvers. Second, is the quasi-polynomial depth of these proofs inherent to Cutting Planes?

    In this paper we make progress towards answering both of these questions. First, we show that any Stabbing Planes proof with bounded coefficients (SP*) can be translated into Cutting Planes. As a consequence of the known lower bounds for Cutting Planes, this establishes the first exponential lower bounds on SP*. Using this translation, we extend the result of Dadush and Tiwari to show that Cutting Planes has short refutations of any unsatisfiable system of linear equations over a finite field. Like the Cutting Planes proofs of Dadush and Tiwari, our refutations also incur a quasi-polynomial blow-up in depth, and we conjecture that this is inherent. As a step towards this conjecture, we develop a new geometric technique for proving lower bounds on the depth of Cutting Planes proofs. This allows us to establish the first lower bounds on the depth of Semantic Cutting Planes proofs of the Tseitin formulas.
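    As background (a standard definition, not taken from the paper): given a connected graph G = (V, E) and a labelling \ell : V \to \{0,1\} with \sum_{v \in V} \ell(v) \equiv 1 \pmod 2, the Tseitin formula has one variable x_e per edge and one linear equation per vertex,

        \sum_{e \ni v} x_e \equiv \ell(v) \pmod 2 \quad \text{for every } v \in V.

    Summing all equations makes every edge variable appear exactly twice, so the left-hand side is 0 mod 2 while the right-hand side is 1 mod 2; hence the system is unsatisfiable, which is exactly the property the refutations certify.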

    Robust Design of Single-Commodity Networks

    The results in the present work were obtained in a collaboration with Eduardo Álvarez-Miranda, Valentina Cacchiani, Tim Dorneth, Michael Jünger, Frauke Liers, Andrea Lodi and Tiziano Parriani. The subject of this thesis is a robust network design problem, i.e., a problem of the type "dimension a network such that it has sufficient capacity in all likely scenarios." In our case, we model the network with an undirected graph in which each scenario defines a supply or demand for each node. We say that a flow in the network is feasible for a scenario if it can balance out its supplies and demands. A scenario polytope B defines which scenarios are relevant. The task is now to find integer capacities that minimize the total installation costs while allowing for a feasible flow in each scenario. This problem is called the Single-Commodity Robust Network Design Problem (sRND) and was introduced by Buchheim, Liers and Sanità (INOC 2011). The problem contains the Steiner Tree Problem (given an undirected graph and a terminal set, find a minimum cost subtree that connects all terminals) and is therefore NP-hard. The problem is also a natural extension of minimum cost flows.

    The network design literature treats the case that the scenario polytope B is given as the finite set of its extreme points (finite case) and the case that it is given as the feasible region of finitely many linear inequalities (polyhedral case). Both descriptions are equivalent; however, an efficient transformation is not possible in general. Buchheim, Liers and Sanità (INOC 2011) propose a Branch-and-Cut algorithm for the finite case. In this case, there exists a canonical problem formulation as a mixed integer linear program (MIP). It contains a set of flow variables for every scenario. Buchheim, Liers and Sanità enhance the formulation with general cutting planes that are called target cuts.

    The first part of the dissertation considers the problem variant where every scenario has exactly two terminal nodes. If the underlying network is a complete, unweighted graph, then this problem is the Network Synthesis Problem as defined by Chien (IBM Journal of R&D 1960). There exist polynomial time algorithms by Gomory and Hu (SIAM J. of Appl. Math 1961) and by Kabadi, Yan, Du and Nair (SIAM J. on Discr. Math.) for this special case. However, these algorithms are based on the fact that complete graphs are Hamiltonian. The result of this part is a similar algorithm for hypercube graphs, which are also Hamiltonian, assuming a special distribution of the supplies and demands.

    The second part of the thesis discusses the structure of the polyhedron of feasible sRND solutions. Here, the first result is a new MIP-based capacity formulation for the sRND problem. The size of this formulation is independent of the number of extreme points of B, and it is therefore also suited for the polyhedral case. The formulation uses so-called cut-set inequalities that are known in similar form from other network design problems. By adapting a proof by Mattia (Computational Optimization and Applications 2013), we show that cut-set inequalities induce facets of the sRND polyhedron. To obtain a better linear programming relaxation of the capacity formulation, we interpret certain general mixed integer cuts as 3-partition inequalities and show that these inequalities induce facets as well.

    The capacity formulation has exponential size, and we therefore need a separation algorithm for cut-set inequalities. In the finite case, we reduce the cut-set separation problem to a minimum cut problem that can be solved in polynomial time. In the polyhedral case, however, the separation problem is NP-hard, even if we assume that the scenario polytope is basically a cube. Such a scenario polytope is called a Hose polytope. Nonetheless, we can solve the separation problem in practice: we present an MIP-based separation procedure for the Hose scenario polytope. Additionally, the thesis presents two separation methods for 3-partition inequalities. These methods are independent of the encoding of the scenario polytope. We also present several rounding heuristics. The result is a Branch-and-Cut algorithm for the capacity formulation.

    We analyze the algorithm in the last part of the thesis. There, we show experimentally that the algorithm works in practice, both in the finite and in the polyhedral case. As a reference point, we use a CPLEX implementation of the flow-based formulation and the computational results by Buchheim, Liers and Sanità. Our experiments show that the new Branch-and-Cut algorithm is an improvement over the existing approach. Here, the algorithm excels on problem instances with many scenarios. In particular, we can show that the MIP separation of the cut-set inequalities is practical.
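    To make the finite-case separation step concrete, here is a minimal sketch of the standard reduction to a minimum cut computation; it is an illustration under assumptions (the function and variable names are invented and this is not the thesis's implementation). For a fixed scenario with node balances b and fractional capacities u, a balancing flow exists exactly when every node set S has installed capacity on its cut of at least |b(S)|, and a violated cut can be read off a single max-flow/min-cut computation:

    import networkx as nx

    def violated_cut_set(edges, capacity, balance):
        """Look for a violated cut-set inequality for a single scenario.

        edges    -- iterable of undirected edges (u, v)
        capacity -- dict {(u, v): fractional installed capacity}
        balance  -- dict {node: supply (> 0) or demand (< 0)}, summing to zero

        Returns a node set S with capacity(delta(S)) < |balance(S)|, or None
        if the scenario admits a feasible balancing flow.
        """
        aux = nx.DiGraph()
        # An undirected edge with capacity c becomes two opposite arcs of capacity c.
        for u, v in edges:
            c = capacity[(u, v)]
            aux.add_edge(u, v, capacity=c)
            aux.add_edge(v, u, capacity=c)
        # A super source feeds the supplies, a super sink absorbs the demands.
        total_supply = 0.0
        for node, b in balance.items():
            if b > 0:
                aux.add_edge("source", node, capacity=b)
                total_supply += b
            elif b < 0:
                aux.add_edge(node, "sink", capacity=-b)
        if total_supply == 0:
            return None  # a zero scenario is trivially feasible
        cut_value, (source_side, _) = nx.minimum_cut(aux, "source", "sink")
        if cut_value >= total_supply - 1e-9:
            return None  # all cut-set inequalities hold for this scenario
        # The source side of the minimum cut yields a violated inequality.
        return source_side - {"source"}

    In a Branch-and-Cut loop one would call this routine once per extreme-point scenario of B and, for each returned set S, add the corresponding cut-set inequality requiring the capacity on the edges leaving S to be at least |b(S)|.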

    Integrality and cutting planes in semidefinite programming approaches for combinatorial optimization

    Many real-life decision problems are discrete in nature. To solve such problems as mathematical optimization problems, integrality constraints are commonly incorporated in the model to reflect the choice among finitely many alternatives. At the same time, it is known that semidefinite programming is very suitable for obtaining strong relaxations of combinatorial optimization problems. In this dissertation, we study the interplay between semidefinite programming and integrality, with a special focus on the use of cutting-plane methods. Although the notions of integrality and cutting planes are well studied in linear programming, integer semidefinite programs (ISDPs) have only recently been considered. We show that many combinatorial optimization problems can be modeled as ISDPs. Several theoretical concepts, such as the Chvátal-Gomory closure, total dual integrality and integer Lagrangian duality, are studied for the case of integer semidefinite programming. On the practical side, we introduce an improved branch-and-cut approach for ISDPs and a cutting-plane augmented Lagrangian method for solving semidefinite programs with a large number of cutting planes. Throughout the thesis, we apply our results to a wide range of combinatorial optimization problems, among which the quadratic cycle cover problem, the quadratic traveling salesman problem and the graph partition problem. Our approaches lead to novel, strong and efficient solution strategies for these problems, with the potential to be extended to other problem classes.
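    For reference, a generic integer semidefinite program can be written in the following standard form; the notation is an illustrative assumption rather than a quotation from the thesis:

        \min \; \langle C, X \rangle \quad \text{s.t.} \quad \langle A_j, X \rangle = b_j \;\; (j = 1, \dots, m), \qquad X \succeq 0, \qquad X \in \mathbb{Z}^{n \times n},

    where \langle C, X \rangle = \mathrm{trace}(CX) and X \succeq 0 means that X is symmetric positive semidefinite. Dropping the integrality requirement yields the SDP relaxation to which cutting planes, such as Chvátal-Gomory cuts, can then be added.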

    Exploiting Structures in Mixed-Integer Second-Order Cone Optimization Problems for Branch-and-Conic-Cut Algorithms

    This thesis studies computational approaches for mixed-integer second-order cone optimization (MISOCO) problems. MISOCO models appear in many real-world applications, so MISOCO has gained significant interest in recent years. However, despite recent advancements, there is a gap between the theoretical developments and computational practice. Three chapters of this thesis address three areas of computational methodology for an efficient branch-and-conic-cut (BCC) algorithm to solve MISOCO problems faster in practice. These chapters include a detailed discussion of practical work on adding cuts in a BCC algorithm, novel methodologies for warm-starting second-order cone optimization (SOCO) subproblems, and heuristics for MISOCO problems.

    The first part of this thesis concerns the development of a novel warm-starting method for interior-point methods (IPMs) applied to SOCO problems. The method exploits the Jordan frames of an original instance and solves two auxiliary linear optimization problems. The solutions obtained from these problems are used to identify an ideal initial point of the IPM. Numerical results on public test sets indicate that the warm-start method works well in practice and reduces the number of iterations required to solve related SOCO problems by around 30-40%.

    The second part of this thesis presents novel heuristics for MISOCO problems. These heuristics use the Jordan frames from both continuous relaxations and penalty problems and provide a way of finding feasible solutions for MISOCO problems. Numerical results on conic and quadratic test sets show that the heuristics perform well in terms of finding solutions with a small gap to optimality.

    The last part of this thesis presents the application of disjunctive conic cuts (DCC) and disjunctive cylindrical cuts (DCyC) to asset allocation problems (AAP). To maximize the benefit from these powerful cuts, several decisions regarding the addition of these cuts are inspected in a practical setting. The analysis in this chapter gives insight into how these cuts can be added in case-specific settings.
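    For context, a MISOCO problem in the form usually studied in this literature can be stated as follows; the notation is an illustrative assumption rather than a quotation from the thesis:

        \min \; c^\top x \quad \text{s.t.} \quad Ax = b, \qquad x \in \mathcal{L}^{n_1} \times \dots \times \mathcal{L}^{n_k}, \qquad x_i \in \mathbb{Z} \;\; (i \in I),

    where \mathcal{L}^{n} = \{ (x_1, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n-1} : \|\bar{x}\|_2 \le x_1 \} is the second-order (Lorentz) cone and I indexes the integer-constrained variables. Dropping the integrality constraints yields the SOCO relaxation that is re-solved, and in this thesis warm-started, at the nodes of the branch-and-conic-cut tree.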

    Optimal Global Instruction Scheduling for the Itanium® Processor Architecture

    On the Itanium 2 processor, effective global instruction scheduling is crucial to high performance. At the same time, it poses a challenge to the compiler: this code generation subtask involves strongly interdependent decisions and complex trade-offs that are difficult for heuristics to cope with. We tackle this NP-complete problem with integer linear programming (ILP), a search-based method that yields provably optimal results. This promises faster code as well as insights into the potential of the architecture. Our ILP model comprises global code motion with compensation copies, predication, and Itanium-specific features like control/data speculation.

    In integer linear programming, well-structured models are the key to acceptable solution times. The feasible solutions of an ILP are represented by integer points inside a polytope. If all vertices of this polytope are integral, then the ILP can be solved in polynomial time. We define two subproblems of global scheduling in which some constraint classes are omitted and show that the corresponding two subpolytopes of our ILP model are integral and of polynomial size. This substantiates that the model is highly efficient, which is also confirmed by the reasonable solution times.

    The ILP formulation is extended by further transformations like cyclic code motion, which moves instructions upwards out of a loop, circularly in the opposite direction of the loop back edges. Since the architecture requires instructions to be encoded in fixed-size bundles of three, a bundler is developed that computes bundle sequences of minimal size by means of precomputed results and dynamic programming.

    Experiments have been conducted with a postpass tool that implements the ILP scheduler. It parses assembly procedures generated by Intel's Itanium compiler and reschedules them as a whole. Using this tool, we optimize a selection of hot functions from the SPECint 2000 benchmark. The results show a significant speedup over the original code.
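    The claim that integral vertices make the ILP polynomially solvable rests on a standard fact of polyhedral combinatorics, stated here as background rather than as a contribution of the thesis: if every vertex of the polytope P = \{x \in \mathbb{R}^n : Ax \le b\} is integral, then for every objective c

        \min \{ c^\top x : x \in P \cap \mathbb{Z}^n \} = \min \{ c^\top x : x \in P \},

    because a linear objective over a polytope always attains its minimum at a vertex. The ILP can then be solved by solving its LP relaxation, which is possible in polynomial time.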