
    Representations of Monotone Boolean Functions by Linear Programs

    We introduce the notion of monotone linear-programming circuits (MLP circuits), a model of computation for partial Boolean functions. Using this model, we prove the following results. 1. MLP circuits are superpolynomially stronger than monotone Boolean circuits. 2. MLP circuits are exponentially stronger than monotone span programs. 3. MLP circuits can be used to provide monotone feasibility interpolation theorems for Lovász–Schrijver proof systems, and for mixed Lovász–Schrijver proof systems. 4. The Lovász–Schrijver proof system cannot be polynomially simulated by the cutting planes proof system. This is the first result showing a separation between these two proof systems. Finally, we discuss connections between the problem of proving lower bounds on the size of MLP circuits and the problem of proving lower bounds on extended formulations of polytopes.
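    The objects of study above are monotone Boolean functions. As a small side sketch (not from the paper, and the example functions are illustrative), monotonicity means that flipping an input bit from 0 to 1 can never flip the output from 1 to 0, which for small arity can be checked by brute force:

    ```python
    from itertools import product

    def is_monotone(f, n):
        """Check monotonicity: x <= y bitwise must imply f(x) <= f(y)."""
        points = list(product((0, 1), repeat=n))
        for x in points:
            for y in points:
                if all(a <= b for a, b in zip(x, y)) and f(x) > f(y):
                    return False
        return True

    # Majority of three bits is monotone; parity is not.
    maj3 = lambda x: int(sum(x) >= 2)
    parity3 = lambda x: sum(x) % 2

    print(is_monotone(maj3, 3))     # True
    print(is_monotone(parity3, 3))  # False
    ```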

    Convex Algebraic Geometry Approaches to Graph Coloring and Stable Set Problems

    The objective of a combinatorial optimization problem is to find an element that maximizes a given function defined over a large and possibly high-dimensional finite set. It is often the case that the set is so large that solving the problem by inspecting all the elements is intractable. One approach to circumvent this issue is to exploit the combinatorial structure of the set (and possibly the function) and reformulate the problem into a familiar set-up where known techniques can be used to attack it. Some common solution methods for combinatorial optimization problems involve formulations that make use of Systems of Linear Equations, Linear Programs (LPs), Semidefinite Programs (SDPs), and more generally, Conic and Semi-algebraic Programs. Although generality often implies flexibility and power in the formulations, in practice an increase in sophistication usually implies a higher running time of the algorithms used to solve the problem. Despite this, for some combinatorial problems, it is hard to rule out the applicability of one formulation over the other. One example of this is the Stable Set Problem. A celebrated result of Lovász states that it is possible to solve (to arbitrary accuracy) in polynomial time the Stable Set Problem for perfect graphs. This is achieved by showing that the Stable Set Polytope of a perfect graph is the projection of a slice of a Positive Semidefinite Cone of not too large dimension. Thus, the Stable Set Problem can be solved with the use of a reasonably sized SDP. However, it is unknown whether one can solve the same problem using a reasonably sized LP. In fact, even for simple classes of perfect graphs, such as Bipartite Graphs, we do not know the right order of magnitude of the minimum size of an LP formulation of the problem. Another example is Graph Coloring.
In 2008 Jesús De Loera, Jon Lee, Susan Margulies and Peter Malkin proposed a technique to solve several combinatorial problems, including Graph Coloring Problems, using Systems of Linear Equations. These systems are obtained by reformulating the decision version of the combinatorial problem as a system of polynomial equations. By a theorem of Hilbert, known as Hilbert's Nullstellensatz, the infeasibility of this polynomial system can be determined by solving a (usually large) system of linear equations. The size of this system is an exponential function of a parameter d that we call the degree of the Nullstellensatz Certificate. Computational experiments of De Loera et al. showed that the Nullstellensatz method had potential applications for detecting non-3-colorability of graphs. Even for known hard instances of graph coloring with up to two thousand vertices and tens of thousands of edges the method was useful, and all of these graphs had very small Nullstellensatz Certificates. Although hard non-3-colorable graph examples for the Nullstellensatz approach are known, determining which combinatorial properties make the Nullstellensatz approach effective (or ineffective) is wide open. The objective of this thesis is to deepen our understanding of the power and limitations of these methods, all falling under the umbrella of Convex Algebraic Geometry approaches, for combinatorial problems. We do this by studying the behavior of these approaches for Graph Coloring and Stable Set Problems. First, we study the Nullstellensatz approach for graphs having large girth and chromatic number. We show that every non-k-colorable graph with girth g needs a Nullstellensatz Certificate of degree Ω(g) to detect its non-k-colorability. It is our general belief that the power of the Nullstellensatz method is tied to the interplay between local and global features of the encoding polynomial system.
If a graph is locally k-colorable, but globally non-k-colorable, we suspect that it will be hard for the Nullstellensatz to detect the non-k-colorability of the graph. Our results point in that direction. Finally, we study the Stable Set Problem for d-regular Bipartite Graphs having no C_4, i.e., having no cycle of length four. In 2017 Manuel Aprile et al. showed that the Stable Set Polytope of the incidence graph G_{d-1} of a Finite Projective Plane of order d-1 (hence, d-regular) does not admit an LP formulation with fewer than (ln(d)/d)|E(G_{d-1})| facets. Although we did not manage to improve this lower bound for general d-regular graphs, we show that any 4-regular bipartite graph G having no C_4 does not admit an LP formulation with fewer than |E(G)| facets. In addition, we obtain computational results showing that the |E(G)| lower bound also holds for the Finite Projective Plane G_4, a 5-regular graph. It is our belief that the Aprile et al. bounds can be improved considerably.
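The polynomial encoding behind the Nullstellensatz approach can be made concrete. In the standard encoding of k-coloring, each vertex variable satisfies x_v^k - 1 = 0, so colors are k-th roots of unity, and each edge uv contributes a polynomial that vanishes exactly when x_u ≠ x_v; for k = 3 this edge polynomial is x_u^2 + x_u x_v + x_v^2. The sketch below (illustrative; the tiny graphs and helper names are ours, not the thesis's) verifies by brute force that the system has a common root exactly when the graph is 3-colorable:

```python
import cmath
from itertools import product

def edge_poly(a, b):
    # For cube roots of unity a, b: a^2 + a*b + b^2 = (a^3 - b^3)/(a - b),
    # which is 0 iff a != b (since a^3 = b^3 = 1).
    return a * a + a * b + b * b

def has_common_root(edges, n, k=3):
    """Brute-force: does the k-coloring polynomial system have a solution?"""
    roots = [cmath.exp(2j * cmath.pi * c / k) for c in range(k)]
    for assign in product(roots, repeat=n):
        if all(abs(edge_poly(assign[u], assign[v])) < 1e-9 for u, v in edges):
            return True  # a proper coloring exists
    return False

triangle = [(0, 1), (1, 2), (0, 2)]       # 3-colorable
k4 = triangle + [(0, 3), (1, 3), (2, 3)]  # K4: not 3-colorable

print(has_common_root(triangle, 3))  # True
print(has_common_root(k4, 4))        # False
```

A Nullstellensatz Certificate is then a polynomial combination of these equations witnessing infeasibility; its degree is the parameter d discussed above.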

    Machine scheduling and Lagrangian relaxation


    Contributions to the moment-SOS approach in global polynomial optimization

    Polynomial Optimization is concerned with optimization problems of the form (P): f* = min {f(x) : x in K}, where K is a basic semi-algebraic set in R^n, i.e., K = {x in R^n : gj(x) <= 0}, and f is a real polynomial in the n variables x = (x1, x2, ..., xn). This subdiscipline of optimization emerged in the last decade from the combination of two factors: powerful results from real algebraic geometry and the power of semidefinite programming (which allows the former to be exploited). The result is a general methodology (which we call "moment-SOS") that approximates the global optimum of (P) as closely as desired by solving a hierarchy of convex relaxations. However, each relaxation is a semidefinite program whose size grows with the rank in the hierarchy, so given the current state of semidefinite programming software, the methodology is for now limited to problems (P) of modest size unless symmetries or sparsity are present in the definition of (P). In this thesis we are therefore interested in problems (P) where symmetries and/or structured sparsity are not easy to detect or to exploit, and where only a few (or even no) semidefinite relaxations of the moment-SOS hierarchy can be implemented. The issue we investigate is: how can the moment-SOS methodology still be used to help solve such a problem (P)? We provide two contributions.
    I. In a first contribution we consider MINLP problems on a box B = [xL, xU] of R^n and propose a moment-SOS approach to construct polynomial convex underestimators for the objective function f (if non-convex) and for -gj whenever, in the constraint gj(x) <= 0, the polynomial gj is not concave. Such problems are typically solved with Branch-and-Bound methods, which at each node of the search tree must quickly compute a lower bound on the global optimum, obtained from convex relaxations built with convex underestimators. The novelty over previous work on this topic is that we compute a polynomial convex underestimator p of f that directly minimizes the natural tightness criterion, namely the L1 norm of f - h on B, over all convex polynomials h of fixed degree d; in previous work on computing a convex underestimator L of f, this criterion is not taken into account directly. It turns out that the moment-SOS approach is well suited to this computation, and numerical experiments on a sample of non-trivial examples show that p outperforms L (and the classical alpha-BB underestimators and their variants) not only with respect to the tightness score but also in terms of the lower bounds obtained by minimizing p and L, respectively, on B. Similar improvements also occur when we use the moment-SOS underestimator instead of the alpha-BB one in refinements of the alpha-BB method.
    II. In a second contribution we propose an algorithm that uses an optimal solution of a (slightly modified) semidefinite relaxation in the moment-SOS hierarchy, say the one of rank k, to provide a feasible solution for the initial optimization problem, with no rounding procedure. This idea has already been exploited for some 0/1 combinatorial problems, sometimes with remarkable performance guarantees (e.g., for MAXCUT). We treat the first variable x1 of x = (x1, ..., xn) as a parameter in some bounded interval Y of R. Notice that f* = min {J(y) : y in Y}, where J is the optimal-value function J(y) := inf {f(x) : x in K; x1 = y}; the original n-dimensional problem (P) is thus reduced to an equivalent one-dimensional problem on an interval. Of course, determining J is even harder than solving (P), as one has to determine a function rather than a point in R^n, an infinite-dimensional problem. The idea is instead to approximate J(y) on Y by a univariate polynomial p(y) of fixed degree d; fortunately, computing such a polynomial amounts to solving a semidefinite relaxation associated with the parametric optimization problem. The degree d of p(y) is related to the size of this semidefinite relaxation: the higher the degree, the better the approximation, and one may show that p(y) converges to J(y) in a strong sense on Y as d increases. But the relaxation becomes harder (or impossible) to solve as d increases, so in practice d is fixed to a small value. Once p(y) has been determined, one computes x1* in Y minimizing p(y) on Y, a convex optimization problem that can be solved efficiently. The process is iterated on x2, taking x2 as a parameter in an interval Y_2, and so on, until a point x* in R^n has been computed. As x* is not feasible in general, we then use it as a starting point for a local optimization procedure to obtain a final feasible point in K. When K is convex, the following variant is implemented: after computing x1* as indicated, x2* is computed with x1 fixed at x1*, then x3* with x1 and x2 fixed at x1* and x2* respectively, etc., so that the resulting point x* is feasible, i.e., x* in K. The same variant applies to 0/1 programs for which feasibility is easy to detect, e.g., MAXCUT, k-CLUSTER or 0/1-KNAPSACK problems. Numerical results on many examples are very encouraging and promising.
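The L1 tightness criterion used for the underestimators above can be illustrated numerically. The sketch below (our own toy example, not from the thesis; the functions f, h1, h2 are illustrative and the grid sum only approximates the integral) compares two valid convex underestimators of a non-convex univariate polynomial by their L1 gap on a box:

```python
def l1_gap(f, h, lo, hi, steps=1000):
    """Approximate the L1 norm of f - h on [lo, hi] by a midpoint Riemann sum.
    Assumes h(x) <= f(x) on the box, so |f - h| = f - h."""
    dx = (hi - lo) / steps
    return sum((f(lo + (i + 0.5) * dx) - h(lo + (i + 0.5) * dx)) * dx
               for i in range(steps))

# Non-convex f(x) = x^4 - 2x^2 on [-1, 1]; two convex underestimators:
f = lambda x: x**4 - 2 * x**2
h1 = lambda x: -1.0      # constant at f's minimum value on the box
h2 = lambda x: x**2 - 2  # convex; f - h2 = (x^2 - 1)(x^2 - 2) >= 0 on [-1, 1]

print(l1_gap(f, h1, -1.0, 1.0))  # ~1.067 (= 16/15): the tighter one
print(l1_gap(f, h2, -1.0, 1.0))  # ~2.4: the looser one
```

The moment-SOS contribution described above is precisely a principled way to minimize this gap over all convex polynomials h of fixed degree, rather than comparing ad hoc candidates as done here.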

    Robust Design of Single-Commodity Networks

    The results in the present work were obtained in collaboration with Eduardo Álvarez-Miranda, Valentina Cacchiani, Tim Dorneth, Michael Jünger, Frauke Liers, Andrea Lodi and Tiziano Parriani. The subject of this thesis is a robust network design problem, i.e., a problem of the type “dimension a network such that it has sufficient capacity in all likely scenarios.” In our case, we model the network with an undirected graph in which each scenario defines a supply or demand for each node. We say that a flow in the network is feasible for a scenario if it can balance out its supplies and demands. A scenario polytope B defines which scenarios are relevant. The task is now to find integer capacities that minimize the total installation costs while allowing for a feasible flow in each scenario. This problem is called the Single-Commodity Robust Network Design Problem (sRND) and was introduced by Buchheim, Liers and Sanità (INOC 2011). The problem contains the Steiner Tree Problem (given an undirected graph and a terminal set, find a minimum cost subtree that connects all terminals) and is therefore NP-hard. The problem is also a natural extension of minimum cost flows. The network design literature treats the case that the scenario polytope B is given as the finite set of its extreme points (finite case) and the case that it is given as the feasible region of finitely many linear inequalities (polyhedral case). Both descriptions are equivalent; however, an efficient transformation is not possible in general. Buchheim, Liers and Sanità (INOC 2011) propose a Branch-and-Cut algorithm for the finite case. In this case, there exists a canonical problem formulation as a mixed integer linear program (MIP). It contains a set of flow variables for every scenario. Buchheim, Liers and Sanità enhance the formulation with general cutting planes called target cuts. The first part of the dissertation considers the problem variant where every scenario has exactly two terminal nodes.
If the underlying network is a complete, unweighted graph, then this problem is the Network Synthesis Problem as defined by Chien (IBM Journal of R&D 1960). There exist polynomial time algorithms by Gomory and Hu (SIAM J. of Appl. Math 1961) and by Kabadi, Yan, Du and Nair (SIAM J. on Discr. Math.) for this special case. However, these algorithms are based on the fact that complete graphs are Hamiltonian. The result of this part is a similar algorithm for hypercube graphs, which are also Hamiltonian, under the assumption of a special distribution of the supplies and demands. The second part of the thesis discusses the structure of the polyhedron of feasible sRND solutions. Here, the first result is a new MIP-based capacity formulation for the sRND problem. The size of this formulation is independent of the number of extreme points of B, and therefore it is also suited for the polyhedral case. The formulation uses so-called cut-set inequalities that are known in similar form from other network design problems. By adapting a proof by Mattia (Computational Optimization and Applications 2013), we show that cut-set inequalities induce facets of the sRND polyhedron. To obtain a better linear programming relaxation of the capacity formulation, we interpret certain general mixed integer cuts as 3-partition inequalities and show that these inequalities induce facets as well. The capacity formulation has exponential size, and we therefore need a separation algorithm for cut-set inequalities. In the finite case, we reduce the cut-set separation problem to a minimum cut problem that can be solved in polynomial time. In the polyhedral case, however, the separation problem is NP-hard, even if we assume that the scenario polytope is essentially a cube. Such a scenario polytope is called a Hose polytope. Nonetheless, we can solve the separation problem in practice: we give a MIP-based separation procedure for the Hose scenario polytope.
Additionally, the thesis presents two separation methods for 3-partition inequalities. These methods are independent of the encoding of the scenario polytope. We also present several rounding heuristics. The result is a Branch-and-Cut algorithm for the capacity formulation, which we analyze in the last part of the thesis. There, we show experimentally that the algorithm works in practice, both in the finite and in the polyhedral case. As a reference point, we use a CPLEX implementation of the flow-based formulation and the computational results of Buchheim, Liers and Sanità. Our experiments show that the new Branch-and-Cut algorithm is an improvement over the existing approach; it excels in particular on problem instances with many scenarios. We can also show that the MIP separation of the cut-set inequalities is practical.
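The cut-set inequalities above have a classical single-scenario prototype: by the Gale–Hoffman feasibility condition, an undirected network with edge capacities u admits a flow balancing node supplies/demands b (summing to zero) if and only if every vertex set S satisfies u(δ(S)) ≥ |b(S)|. A hedged brute-force sketch (our toy instance, exponential in the node count, purely illustrative of the cut conditions the separation routine searches over):

```python
from itertools import combinations

def cut_conditions_hold(n, edges, b):
    """edges: dict {(u, v): capacity}; b: node balances summing to zero.
    Check u(delta(S)) >= |b(S)| for every nonempty proper subset S."""
    for r in range(1, n):
        for S in combinations(range(n), r):
            s = set(S)
            cut_cap = sum(c for (u, v), c in edges.items()
                          if (u in s) != (v in s))
            if cut_cap < abs(sum(b[v] for v in s)):
                return False  # a violated cut-set inequality exists
    return True

# Path 0 - 1 - 2 with unit capacities.
edges = {(0, 1): 1, (1, 2): 1}
print(cut_conditions_hold(3, edges, [1, 0, -1]))  # True: 1 unit fits
print(cut_conditions_hold(3, edges, [2, 0, -2]))  # False: cut {0} has capacity 1
```

The thesis's separation problem asks for a violated inequality over all scenarios in B simultaneously, which is what makes the polyhedral case NP-hard.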

    On approximability and LP formulations for multicut and feedback set problems

    Graph cut algorithms are an important tool for solving optimization problems in a variety of areas in computer science. Of particular importance is the min s-t cut problem and an efficient (polynomial time) algorithm for it. Unfortunately, efficient algorithms are not known for several other cut problems. Furthermore, the theory of NP-completeness rules out the existence of efficient algorithms for these problems if the P ≠ NP conjecture is true. For this reason, much of the focus has shifted to the design of approximation algorithms. Over the past 30 years significant progress has been made in understanding the approximability of various graph cut problems. In this thesis we further advance our understanding by closing some of the gaps in the known approximability results. Our results comprise new approximation algorithms as well as new hardness of approximation bounds. For both of these, new linear programming (LP) formulations based on a labeling viewpoint play a crucial role. One of the problems we consider is a generalization of the min s-t cut problem, known as the multicut problem. In a multicut instance, we are given an undirected or directed weighted supply graph and a set of pairs of vertices, which can be encoded as a demand graph. The goal is to remove a minimum weight set of edges from the supply graph such that all the demand pairs are disconnected. We study the effect of the structure of the demand graph on the approximability of multicut. We prove several algorithmic and hardness results which unify previous results and also yield new ones. Our algorithmic result generalizes the constant factor approximations known for the undirected and directed multiway cut problems to a much larger class of demand graphs. Our hardness result proves the optimality of the hitting-set LP for directed graphs. In addition to the results on multicut, we also prove results for multiway cut and another special case of multicut, called linear-3-cut.
Our results exhibit tight approximability bounds in some cases and improve upon the existing bounds in other cases. As a consequence, we also obtain tight approximation results for related problems. Another part of the thesis is focused on feedback set problems. In a subset feedback edge or vertex set instance, we are given an undirected edge- or vertex-weighted graph and a set of terminals. The goal is to find a minimum weight set of edges or vertices which hit all of the cycles that contain some terminal vertex. There is a natural hitting-set LP which has an Ω(log k) integrality gap for k terminals. Constant factor approximation algorithms have been developed using combinatorial techniques. However, the factors are not tight, and the algorithms are sometimes complicated. Since most of the related problems admit optimal approximation algorithms using LP relaxations, the lack of good LP relaxations was seen as a fundamental roadblock towards resolving the approximability of these problems. In this thesis we address this by developing new LP relaxations with constant integrality gaps for subset feedback edge and vertex set problems.
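The min s-t cut problem underlying these generalizations has a direct combinatorial statement: minimize the total weight of edges crossing a vertex bipartition that separates s from t. The sketch below (our toy graph; a real solver would use polynomial-time max-flow, whereas this enumeration is exponential and only for illustration) makes that statement executable:

```python
from itertools import combinations

def min_st_cut(n, edges, s, t):
    """edges: dict {(u, v): weight}. Return the minimum total weight of
    edges crossing a bipartition with s on one side and t on the other,
    by enumerating all vertex sets containing s but not t."""
    others = [v for v in range(n) if v not in (s, t)]
    best = float("inf")
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            side = {s, *extra}
            weight = sum(w for (u, v), w in edges.items()
                         if (u in side) != (v in side))
            best = min(best, weight)
    return best

# Toy graph: two parallel paths from node 0 to node 3.
edges = {(0, 1): 3, (1, 3): 2, (0, 2): 1, (2, 3): 4}
print(min_st_cut(4, edges, 0, 3))  # 3: cut edges (1,3) and (0,2)
```

Multicut generalizes this by requiring every given demand pair to be separated at once, which is what destroys polynomial-time solvability.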

    Semidefinite Programming: methods and algorithms for energy management

    The present thesis aims at exploring the potential of a powerful conic optimization technique, namely Semidefinite Programming (SDP), for addressing difficult problems of energy management, i.e., problems related to balancing electricity and gas supply and demand. We pursue two main objectives. The first consists of using SDP to provide tight relaxations of combinatorial and quadratic problems. A first relaxation, called standard, can be derived in a generic way, but it is generally desirable to reinforce it with cuts, determined either from the structure of the problem or by more systematic methods. These two approaches are implemented on different models of the Nuclear Outages Scheduling Problem, a notoriously difficult combinatorial problem. We conclude this topic by experimenting with the Lasserre hierarchy on this problem, which yields a sequence of SDPs whose optimal values tend to the optimal value of the initial problem. The second objective deals with the use of SDP for the treatment of uncertainty. We investigate an original approach called distributionally robust optimization, which can be seen as a compromise between stochastic and robust optimization and admits approximations in the form of an SDP. We assess the benefits of this approach against classical ones on a supply/demand equilibrium problem under uncertainty. Finally, we propose a scheme for deriving SDP relaxations of MISOCP problems and report promising computational results indicating that the semidefinite relaxation improves significantly on the continuous relaxation, while requiring only a reasonable computational effort. SDP therefore proves to be a promising optimization method that offers great opportunities for innovation in energy management.

    Exponential Lower Bounds and Integrality Gaps for Tree-Like Lovász–Schrijver Procedures
