
    Polynomial-time Computation of Exact Correlated Equilibrium in Compact Games

    In a landmark paper, Papadimitriou and Roughgarden described a polynomial-time algorithm ("Ellipsoid Against Hope") for computing sample correlated equilibria of concisely-represented games. Recently, Stein, Parrilo and Ozdaglar showed that this algorithm can fail to find an exact correlated equilibrium, but can be easily modified to efficiently compute approximate correlated equilibria. It remained unresolved, however, whether the algorithm could be modified to compute an exact correlated equilibrium. We show that it can, presenting a variant of the Ellipsoid Against Hope algorithm that guarantees polynomial-time identification of an exact correlated equilibrium. Our new algorithm differs from the original primarily in its use of a separation oracle that produces cuts corresponding to pure-strategy profiles. As a result, we no longer face the numerical precision issues encountered by the original approach, and both the resulting algorithm and its analysis are considerably simplified. Our new separation oracle can be understood as a derandomization of Papadimitriou and Roughgarden's original separation oracle via the method of conditional probabilities. Also, the equilibria returned by our algorithm are distributions with polynomial-sized supports, which are simpler (in the sense of being representable in fewer bits) than the mixtures of product distributions produced previously; no tractable algorithm was previously known for identifying such equilibria.
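    A correlated equilibrium is defined by finitely many linear incentive constraints over the distribution of recommended profiles, which is what makes LP machinery such as the ellipsoid method applicable. As a minimal illustration (a feasibility check only, not the Ellipsoid Against Hope algorithm itself; the encoding and names below are assumptions for the example), the following Python sketch verifies that a distribution over pure-strategy profiles is a correlated equilibrium:

        def is_correlated_equilibrium(payoffs, dist, tol=1e-9):
            # payoffs[i][profile] is player i's utility for a pure-strategy
            # profile (a tuple of action indices) and must cover every profile
            # reachable by a unilateral deviation; dist[profile] is the
            # probability with which that profile is recommended.
            n = len(payoffs)
            actions = [sorted({p[i] for p in payoffs[0]}) for i in range(n)]
            for i in range(n):
                for a in actions[i]:
                    for b in actions[i]:
                        if a == b:
                            continue
                        # Expected gain for player i from playing b whenever
                        # the recommendation is a; positive gain violates the
                        # incentive constraint for (i, a, b).
                        gain = sum(prob * (payoffs[i][p[:i] + (b,) + p[i+1:]]
                                           - payoffs[i][p])
                                   for p, prob in dist.items() if p[i] == a)
                        if gain > tol:
                            return False
            return True

        # Game of Chicken (0 = Dare, 1 = Chicken) and the classic correlated
        # equilibrium placing probability 1/3 on each profile except (0, 0).
        u0 = {(0, 0): 0, (0, 1): 7, (1, 0): 2, (1, 1): 6}
        u1 = {(0, 0): 0, (0, 1): 2, (1, 0): 7, (1, 1): 6}
        ce = {(0, 1): 1/3, (1, 0): 1/3, (1, 1): 1/3}
        print(is_correlated_equilibrium([u0, u1], ce))  # True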

    A Positive Semidefinite Approximation of the Symmetric Traveling Salesman Polytope

    For a convex body B in a vector space V, we construct approximations P_k, k = 1, 2, ..., each obtained as the intersection of a cone of positive semidefinite quadratic forms with an affine subspace. We show that P_k is contained in B for each k. When B is the Symmetric Traveling Salesman Polytope T_n on n cities, we show that the scaling of P_k by n/k + O(1/n) contains T_n for k at most n/2. Membership in P_k is computable in time polynomial in n (of degree linear in k). We discuss facets of T_n that lie on the boundary of P_k, and we introduce a new measure on each facet-defining inequality for T_n in terms of the eigenvalues of a quadratic form. Using these eigenvalues, we show that the scaling of P_1 by n^(1/2) has all of the facets of T_n defined by the subtour elimination constraints either in its interior or on its boundary.
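    For reference, the subtour elimination constraints mentioned above are the standard inequalities (with x_e the edge variables and \delta(S) the set of edges with exactly one endpoint in S):

        \sum_{e \in \delta(S)} x_e \ge 2 \quad \text{for all } S \subset \{1, \dots, n\},\ 2 \le |S| \le n - 2.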

    Stable matchings and linear programming

    This paper continues the work of Abeledo and Rothblum, who study nonbipartite stable matching problems from a polyhedral perspective. We establish additional properties of fractional stable matchings and use linear programming to obtain an alternative polynomial algorithm for solving stable matching problems.
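    In the style of Abeledo and Rothblum's polyhedral description (the exact system in the paper may differ in details), a fractional stable matching assigns a value x_{ij} >= 0 to each acceptable pair {i, j} subject to

        \sum_{j} x_{ij} \le 1 \quad \text{for each agent } i,
        x_{ij} + \sum_{k \succ_i j} x_{ik} + \sum_{k \succ_j i} x_{jk} \ge 1 \quad \text{for each acceptable pair } \{i, j\},

    where k \succ_i j means that agent i prefers k to j: the first family says each agent is matched at most once, and the second rules out blocking pairs.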

    Pup Matching: Model Formulations and Solution Approaches

    We model Pup Matching, the logistics problem of matching or pairing semitrailers known as pups to cabs that are able to tow one or two of the pups simultaneously, as an NP-complete version of the Network Loading Problem (NLP). We examine a branch-and-bound solution approach tailored to the NLP formulation through the use of three families of cutting planes and four heuristic procedures. Theoretically, we specify facet-defining conditions for a cut family that we refer to as odd flow inequalities, and we show that each heuristic yields a 2-approximation. Computationally, the cheapest of the four heuristic values achieved an average error of 1.3% among solved test problems randomly generated from realistic data. Branch and bound solved 67% of these problems to optimality. Application of the cutting plane families reduced the average relative gap between upper and lower bounds prior to branching from 18.8% to 6.4%.
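    As a rough sketch of the underlying model (a generic network loading formulation, not necessarily the exact one used in the paper), integer variables y_e count capacity units of size C installed on edge e at cost c_e, and flow variables f^k route each commodity k:

        \min \sum_e c_e y_e \quad \text{s.t.} \quad f^k \text{ satisfies flow conservation for each } k, \qquad \sum_k (f^k_e + f^k_{\bar e}) \le C\, y_e, \qquad y_e \in \mathbb{Z}_{\ge 0}.

    In the Pup Matching application, the cabs' ability to tow at most two pups plays the role of the discrete capacity.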

    Persistency of Linear Programming Relaxations for the Stable Set Problem

    The Nemhauser-Trotter theorem states that the standard linear programming (LP) formulation for the stable set problem has a remarkable property, also known as (weak) persistency: for every optimal LP solution that assigns integer values to some variables, there exists an optimal integer solution in which these variables retain the same values. While the standard LP is defined by only non-negativity and edge constraints, a variety of other LP formulations have been studied, and one may wonder whether any of them has this property as well. We show that any other formulation satisfying mild conditions cannot have the persistency property on all graphs, unless it always coincides with the stable set polytope.
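    Concretely, the standard LP relaxation in question is

        \max \sum_{v \in V} x_v \quad \text{s.t.} \quad x_u + x_v \le 1 \ \text{for every edge } uv \in E, \qquad x_v \ge 0,

    and weak persistency asserts: whenever an optimal solution x^* of this LP has x^*_v \in \{0, 1\} on a set U of vertices, some maximum stable set contains exactly those v \in U with x^*_v = 1.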

    Robust Design of Single-Commodity Networks

    The results in the present work were obtained in collaboration with Eduardo Álvarez-Miranda, Valentina Cacchiani, Tim Dorneth, Michael Jünger, Frauke Liers, Andrea Lodi and Tiziano Parriani. The subject of this thesis is a robust network design problem, i.e., a problem of the type "dimension a network such that it has sufficient capacity in all likely scenarios." In our case, we model the network with an undirected graph in which each scenario defines a supply or demand for each node. We say that a flow in the network is feasible for a scenario if it can balance out its supplies and demands. A scenario polytope B defines which scenarios are relevant. The task is to find integer capacities that minimize the total installation costs while allowing for a feasible flow in each scenario. This problem is called the Single-Commodity Robust Network Design Problem (sRND) and was introduced by Buchheim, Liers and Sanità (INOC 2011). The problem contains the Steiner Tree Problem (given an undirected graph and a terminal set, find a minimum-cost subtree that connects all terminals) and is therefore NP-hard. It is also a natural extension of minimum-cost flows.

    The network design literature treats the case that the scenario polytope B is given as the finite set of its extreme points (finite case) and the case that it is given as the feasible region of finitely many linear inequalities (polyhedral case). Both descriptions are equivalent; however, an efficient transformation is not possible in general. Buchheim, Liers and Sanità (INOC 2011) propose a Branch-and-Cut algorithm for the finite case. In this case, there exists a canonical problem formulation as a mixed integer linear program (MIP) that contains a set of flow variables for every scenario. Buchheim, Liers and Sanità enhance the formulation with general cutting planes called target cuts.

    The first part of the dissertation considers the problem variant where every scenario has exactly two terminal nodes. If the underlying network is a complete, unweighted graph, then this problem is the Network Synthesis Problem as defined by Chien (IBM Journal of R&D 1960). There exist polynomial-time algorithms by Gomory and Hu (SIAM J. of Appl. Math. 1961) and by Kabadi, Yan, Du and Nair (SIAM J. on Discr. Math.) for this special case. However, these algorithms are based on the fact that complete graphs are Hamiltonian. The result of this part is a similar algorithm for hypercube graphs, which are also Hamiltonian, under the assumption of a special distribution of the supplies and demands.

    The second part of the thesis discusses the structure of the polyhedron of feasible sRND solutions. Here, the first result is a new MIP-based capacity formulation for the sRND problem. The size of this formulation is independent of the number of extreme points of B, and it is therefore also suited for the polyhedral case. The formulation uses so-called cut-set inequalities, known in similar form from other network design problems. By adapting a proof by Mattia (Computational Optimization and Applications 2013), we show that cut-set inequalities induce facets of the sRND polyhedron. To obtain a better linear programming relaxation of the capacity formulation, we interpret certain general mixed integer cuts as 3-partition inequalities and show that these inequalities induce facets as well.

    The capacity formulation has exponential size, and we therefore need a separation algorithm for cut-set inequalities. In the finite case, we reduce the cut-set separation problem to a minimum cut problem that can be solved in polynomial time. In the polyhedral case, however, the separation problem is NP-hard, even if we assume that the scenario polytope is essentially a cube; such a scenario polytope is called a Hose polytope. Nonetheless, we can solve the separation problem in practice: we give a MIP-based separation procedure for the Hose scenario polytope. Additionally, the thesis presents two separation methods for 3-partition inequalities that are independent of the encoding of the scenario polytope, as well as several rounding heuristics. The result is a Branch-and-Cut algorithm for the capacity formulation.

    We analyze the algorithm in the last part of the thesis. There, we show experimentally that the algorithm works in practice, both in the finite and in the polyhedral case. As a reference point, we use a CPLEX implementation of the flow-based formulation and the computational results by Buchheim, Liers and Sanità. Our experiments show that the new Branch-and-Cut algorithm is an improvement over the existing approach; it excels in particular on problem instances with many scenarios, and we can show that the MIP separation of the cut-set inequalities is practical.
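    In the notation of this summary (writing u_e for the integer capacity of edge e and \delta(S) for the edges crossing a node set S; both symbols are introduced here for illustration), the cut-set inequalities express that every cut must be able to carry the worst-case net supply it separates:

        \sum_{e \in \delta(S)} u_e \ge \max_{b \in B} \Bigl| \sum_{v \in S} b_v \Bigr| \quad \text{for all } \emptyset \ne S \subsetneq V.

    For a finite scenario set the maximum is taken over the extreme points of B, which is what makes the minimum-cut separation mentioned above possible.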

    Exact Integer Programming Approaches to Sequential Instruction Scheduling and Offset Assignment

    The dissertation at hand presents the main concepts and results derived when studying the optimal solution of two NP-hard compiler optimization problems, namely instruction scheduling and offset assignment, by means of integer programming. It is the outcome of several years of research as an assistant at Michael Jünger's computer science chair in Cologne, with the particular aim to apply exact mathematical optimization techniques to real-world problems arising in the domain of technical computer science. The two problems studied are rather unrelated apart from the fact that they both take place during the machine code generation phase of a compiler and deal with the handling of limited resources. Instruction scheduling is about the assignment of issue clock cycles to instructions in the presence of precedence, latency, and resource constraints such that the total time needed to execute all the instructions is minimized. Offset assignment deals with storage layouts of program variables and the efficient use of address registers for accesses to these variables; the objective is to employ specialized instructions in order to minimize the overhead caused by address computations. While instruction scheduling needs to be carried out by almost every present-day compiler irrespective of the processor architecture, the offset assignment problem occurs mainly in compilers for highly specialized processor designs.

    Instruction scheduling is a well-studied field where several exact and heuristic approaches have been developed and experimentally evaluated in the past. In this thesis, we concentrate on the basic-block instruction scheduling problem for single-issue processors. Basic blocks are program fragments with no side-entrances and side-exits, i.e., every instruction of a basic block needs to be executed before the control flow may leave it and enter another basic block. Single-issue processors are capable of starting the execution of exactly one instruction per clock cycle. A number of techniques to preprocess instances of the basic-block instruction scheduling problem have been proposed in the literature and are, with emphasis on the more recent ones that arose since the year 2000, thoroughly reviewed in this thesis. They finally led to a constraint programming approach in 2006 that was shown to solve about 350,000 instances to optimality, some of which comprised up to about 2,500 instructions. The last attempt to tackle the problem using integer programming, however, dates to a time prior to the publication of the latest preprocessing advances. While successful on a set of instances that impose very restrictive latency constraints, it was shown to be unable to solve hundreds of instances from the aforementioned benchmark set, which also comprises large and varying latencies. In addition, the previous integer programming models were almost all based on so-called time-indexed formulations, where decision variables model an explicit assignment of instructions to clock cycles. In this thesis, a completely different and novel approach is taken, based on the linear ordering problem, a well-studied combinatorial optimization problem. The new models lead to alternative characterizations of the feasible solutions to the basic-block instruction scheduling problem. These facilitate the employment of advanced integer programming methodologies, in particular the design of branch-and-cut algorithms that can handle larger instances.

    The formulations are further extended by additional inequalities that can be used as cutting planes. Combined with the preprocessing routines, which are partially extended and improved as well, the respective solver implementation eventually turned out to be competitive with the constraint programming method. Reaching this point has taken some years, and this thesis presents not only the derived models but also several ideas and byproducts that arose in the meantime and that can help and inspire researchers even if they aim at the application of different solution methodologies.

    The starting point regarding the offset assignment problem was a different one, because exact solution approaches in particular were rather rare prior to the models presented in this thesis. The offset assignment problem arose in the 1990s and is considered in several variants that are of theoretical and practical interest. In the simplest one, a processor is assumed to provide only a single address register and only very restricted possibilities to avoid address computation overhead. However, even this simplest variant, which may serve as a building block for the more complex ones, is already NP-hard and has been studied mainly from a heuristic point of view. The few existing exact solution approaches were not capable of solving moderately sized instances, so the quality of heuristic solutions relative to the optimum was hardly known at all. Again, the inspection of the combinatorial structure of the various problem variants turned out to be the key to designing branch-and-cut implementations that can profit from knowledge about related combinatorial optimization problems. The implementation targeting the simple problem variant was the first capable of optimally solving the majority of about 3,000 instances collected in a standard benchmark set. The method could then be generalized in two steps: first, in a collaboration with Roberto Castañeda Lozano, additional techniques were incorporated into the approach in order to handle multiple address registers; the methods could then be further extended to also deal with more flexible addressing capabilities. In this way, the thesis at hand not only answers the question of how large the address computation overhead can be when using heuristics, but also presents first results that allow one to analyze the impact of the mentioned increased addressing capabilities on the runtime performance and size of real-world programs.
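    For context, the linear ordering problem that the new models build on uses binary variables x_{ij} indicating that element i precedes element j, subject to the classical constraints

        x_{ij} + x_{ji} = 1 \ \text{for all } i \ne j, \qquad x_{ij} + x_{jk} + x_{ki} \le 2 \ \text{for pairwise distinct } i, j, k,

    i.e., each pair is ordered one way and no directed 3-cycle occurs. How this ordering is coupled with the precedence, latency, and resource constraints of basic-block scheduling is specific to the models developed in the thesis.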