
    Handelman's hierarchy for the maximum stable set problem.

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a result of Handelman showing that a polynomial positive over a polytope with non-empty interior can be represented as a conic combination of products of the linear constraints defining the polytope. We relate the rank of Handelman's hierarchy to structural properties of graphs. In particular, we show a relation to fractional clique covers, which we use to upper-bound the Handelman rank for perfect graphs and to determine its exact value in the vertex-transitive case. Moreover, we show two upper bounds on the Handelman rank in terms of the (fractional) stability number of the graph and compute the Handelman rank for several classes of graphs, including odd cycles and wheels and their complements. We also point out links to several other linear and semidefinite programming hierarchies.
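    For orientation, the objects in this abstract can be sketched as follows (a hedged sketch using standard formulations from this literature; details may differ from the paper):

```latex
% The stable set number alpha(G) as the maximum of a quadratic square-free
% polynomial over the hypercube [0,1]^n (the maximum is attained at a 0/1 point):
\[
  \alpha(G) \;=\; \max_{x \in [0,1]^n}\; f(x), \qquad
  f(x) \;=\; \sum_{i \in V} x_i \;-\; \sum_{\{i,j\} \in E} x_i x_j .
\]
% Level r of Handelman's LP hierarchy: the smallest lambda admitting a
% conic-combination certificate built from the box constraints,
\[
  \lambda - f(x) \;=\; \sum_{(a,b)\,:\,|a|+|b| \le r} c_{a,b}
  \prod_{i=1}^{n} x_i^{a_i} (1 - x_i)^{b_i}, \qquad c_{a,b} \ge 0 .
\]
% The Handelman rank is then the least r at which this bound equals alpha(G).
```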

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control, and dynamic programming. Recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control, and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages.
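    As a concrete taste of the convexification techniques such a tutorial surveys, here is a minimal sketch of the classical Shor semidefinite relaxation of a nonconvex quadratic problem, written with the cvxpy modeling package (our own toy example, not code from the paper):

```python
import cvxpy as cp
import numpy as np

# Shor SDP relaxation of the nonconvex problem max x^T A x s.t. ||x||^2 <= 1:
# lift xx^T to a matrix variable X, keep the constraints linearly in X, and
# replace the nonconvex condition "X = xx^T" by the convex one "X PSD".
n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                                # symmetric data matrix

X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(A @ X)),
                  [X >> 0, cp.trace(X) <= 1])    # trace(X) <= 1 encodes ||x||^2 <= 1
prob.solve()
print(prob.value)  # an upper bound on the nonconvex optimum; for this problem
                   # it equals max(lambda_max(A), 0), so the relaxation is exact
```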

    The linearization problem of a binary quadratic problem and its applications

    We provide several applications of the linearization problem of a binary quadratic problem. We propose a new lower bounding strategy, called the linearization-based scheme, that is based on a simple certificate for a quadratic function to be non-negative on the feasible set. Each linearization-based bound requires a set of linearizable matrices as an input. We prove that the Generalized Gilmore-Lawler bounding scheme for binary quadratic problems provides linearization-based bounds. Moreover, we show that the bound obtained from the first-level reformulation linearization technique is also a type of linearization-based bound, which enables us to compare the mentioned bounds. However, the strongest linearization-based bound is the one that uses the full characterization of the set of linearizable matrices. Finally, we present a polynomial-time algorithm for the linearization problem of the quadratic shortest path problem on directed acyclic graphs. Our algorithm gives a complete characterization of the set of linearizable matrices for the quadratic shortest path problem.
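    To fix ideas, the central definition and the shape of a linearization-based bound can be sketched as follows (our own hedged notation; the paper's characterizations are more refined):

```latex
% A symmetric matrix A is linearizable for the feasible set F (e.g. the
% characteristic vectors of s-t paths, or more generally F a subset of
% {0,1}^n) if its quadratic form agrees with a linear function on all of F:
\[
  x^{\top} A x \;=\; c^{\top} x \quad \text{for all } x \in F .
\]
% If moreover x^T (Q - A) x >= 0 on F is certified by some simple criterion,
% then, since x^T Q x = x^T (Q - A) x + c^T x on F,
\[
  \min_{x \in F} \; x^{\top} Q x \;\ge\; \min_{x \in F} \; c^{\top} x ,
\]
% a linearization-based bound; the strongest such bound optimizes over the
% full set of linearizable matrices A.
```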

    Improved convergence rates for Lasserre-type hierarchies of upper bounds for box-constrained polynomial optimization

    We consider the problem of minimizing a given multivariate polynomial f over the hypercube [-1,1]^n. An idea, introduced by Lasserre, is to find a probability distribution on the hypercube with a polynomial density function h (of given degree r) that minimizes the expectation of f over the hypercube with respect to this distribution. It is known that, for the Lebesgue measure, one may show an error bound in 1/sqrt(r) if h is a sum-of-squares density, and an error bound in 1/r if h is the density of a beta distribution. In this paper, we exhibit another probability distribution that permits an error bound in 1/r^2 when selecting a density function h with a Schmüdgen-type sum-of-squares decomposition. The convergence rate analysis relies on the theory of polynomial kernels, and in particular on Jackson kernels. We also show that the resulting upper bounds may be computed as generalized eigenvalue problems, as is also the case for sum-of-squares densities.
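    The last claim, that these upper bounds reduce to generalized eigenvalue problems, can be illustrated on a univariate toy instance. The sketch below is our own (the test function f(x) = x and all names are our choices): with an SOS density h(x) = v(x)^T M v(x) in the monomial basis v, the bound is the smallest generalized eigenvalue of a pair of moment matrices (A, B):

```python
import numpy as np
from scipy.linalg import eigh

# Toy instance: minimize f(x) = x over [-1, 1] (true minimum -1).
# min_h E_h[f] over SOS densities h of degree 2r equals the smallest
# generalized eigenvalue of (A, B), where
#   A[i, j] = int_{-1}^{1} f(x) x^(i+j) dx,   B[i, j] = int_{-1}^{1} x^(i+j) dx.

def lebesgue_moment(k):
    """Integral of x^k over [-1, 1] with respect to the Lebesgue measure."""
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

for r in range(1, 7):
    A = np.array([[lebesgue_moment(i + j + 1) for j in range(r + 1)]
                  for i in range(r + 1)])      # moments weighted by f(x) = x
    B = np.array([[lebesgue_moment(i + j) for j in range(r + 1)]
                  for i in range(r + 1)])      # plain Hankel moment matrix
    bound = eigh(A, B, eigvals_only=True)[0]   # smallest generalized eigenvalue
    print(r, bound)                            # decreases toward the minimum -1
```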

    Decomposition Methods for Nonlinear Optimization and Data Mining

    We focus on two central themes in this dissertation. The first is decomposing polytopes and polynomials in ways that allow us to perform nonlinear optimization. We start by explaining important results on decomposing a polytope into special polyhedra. We use these decompositions to develop methods for computing a special class of integrals exactly; namely, we are interested in computing the exact value of integrals of polynomial functions over convex polyhedra. We present prior work and new extensions of the integration algorithms. Every integration method we present requires that the polynomial have a special form. We explore two special polynomial decomposition algorithms that are useful for integrating polynomial functions. Both polynomial decompositions have strengths and weaknesses, and we experiment with how to use them in practice. After developing practical algorithms and efficient software tools for integrating a polynomial over a polytope, we focus on the problem of maximizing a polynomial function over the continuous domain of a polytope. This maximization problem is NP-hard, but we develop approximation methods that run in polynomial time when the dimension is fixed. Moreover, our algorithm for approximating the maximum of a polynomial over a polytope is related to integrating the polynomial over the polytope, and we show how the integration methods can be used for optimization. The second central topic in this dissertation is problems in data science. We first consider a heuristic for mixed-integer linear optimization. We show that many practical mixed-integer linear programs have a special substructure containing set partition constraints. We then describe a data structure for finding feasible zero-one integer solutions to systems of set partition constraints. Finally, we end with an applied project using data science methods in medical research. Comment: Ph.D. thesis of Brandon Dutra.
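    As a minimal illustration of the first theme, exact integration of a polynomial over a polytope, the toy computation below (our own example, using a naive iterated integral via sympy rather than the decomposition algorithms developed in the thesis) integrates x*y over a triangle and returns an exact rational:

```python
from sympy import symbols, integrate

x, y = symbols('x y')

# Integrate the polynomial x*y over the standard triangle
# {(x, y) : x >= 0, y >= 0, x + y <= 1} as an iterated integral.
value = integrate(integrate(x * y, (y, 0, 1 - x)), (x, 0, 1))
print(value)  # 1/24, an exact rational number
```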

    The Recruitment of Spirit-Directed-Healers

    The research sought to identify individual and social factors associated with the recruitment of spirit-directed-healers in Sub-Saharan Africa. This type of traditional healer was defined as a person who: 1) elicits and uses spiritual direction and information in the diagnosis and treatment of illness, 2) attributes the ultimate source of his/her healing power to one or more anthropopsychic spirits, and 3) claims to have been called to healing by those spirits. The spirit-directed-healers' decisions, demands, and remedies are legitimized by their source--the directing supernatural entity. Data gathered through observations and open-ended interviews with four traditional healers who practiced in a single community in Ghana, West Africa, were sorted by subject and informant. Characteristics common to the four informants were considered as factors possibly associated with recruitment into the status of spirit-directed-healer. These were: 1) each began healing in his/her late teens or early twenties, 2) each had practiced conventional alternate occupations, 3) each had received little or no formal education, 4) recall of childhood appeared typical for the time and place, 5) components of the orientation and procreation families were typical, 6) each had had contact during childhood with one or more close relatives who were active in healing, 7) each had lost one or both parents in his/her late teens or early twenties, 8) each manifested efficient, non-psychotic behavior during the time of the observations and interviews, 9) each had received the call to healing during a crisis period when he/she was actually or emotionally separated from significant authority figures, and 10) acceptance of the call resolved the crisis. Cross-cultural comparisons were made to test the general hypothesis that spirit-directed-healer type societies would be significantly associated with: 1) culture norms that encourage long lasting dependence of young adults on family elders for decision direction and 2) a religious system in which the high god is absent or otiose. As a result of the comparisons a predictive model was presented. It is predicted that spirit-directed-healers will be associated with: 1) societies in which the high god is absent or otiose (< .001), 2) norms for post-marital residence which result in proximity of residence between newly married couples and parents or uncle of one or both of the marriage partners (< .01), 3) the presence of ritualized trance (< .05).

    Contributions to the moment-SOS approach in global polynomial optimization

    Polynomial optimization is concerned with optimization problems (P): f* = min { f(x) : x in K }, where K is a basic semi-algebraic set in R^n, i.e., a set defined by finitely many polynomial inequality constraints, K = { x in R^n : g_j(x) <= 0 }, and f is a real polynomial in the variables x = (x_1, ..., x_n). This subdiscipline of optimization emerged in the last decade through the combination of two factors: powerful results from real algebraic geometry and the power of semidefinite programming (which makes it possible to exploit them). The result is a general methodology (which we call "moment-SOS") that approximates the global optimum of (P) as closely as desired by solving a hierarchy of convex relaxations. However, since each relaxation is a semidefinite program whose size grows with the rank in the hierarchy, and given the current state of the art of semidefinite programming solvers, this methodology is for now limited to problems (P) of modest size, unless symmetries or sparsity are present in the definition of (P). In this thesis we are therefore interested in problems (P) where symmetries and/or structured sparsity are not easy to detect or to exploit, and where only a few (or even no) semidefinite relaxations of the moment-SOS hierarchy can be implemented. The issue we investigate is: how can the moment-SOS methodology still be used to help solve such a problem (P)? We provide two contributions, in two different contexts.

    * In a first contribution we consider nonconvex mixed-integer nonlinear programming (MINLP) problems on a box B = [x_L, x_U] of R^n and propose a moment-SOS approach to construct polynomial convex underestimators for the objective function f (if nonconvex) and for -g_j whenever the polynomial g_j in the constraint g_j(x) <= 0 is not concave. Such problems (of relatively large size) are typically solved by branch-and-bound methods, where at each node of the search tree one must quickly compute a lower bound on the global optimum via convex relaxations built from convex underestimators. The novelty with respect to previous works is that we compute a polynomial convex underestimator p of f that directly minimizes the natural tightness criterion, namely the L1 norm of f - h on B, over all convex polynomials h of fixed degree d; in previous works for computing a convex underestimator L of f, this criterion is not taken into account directly. It turns out that the moment-SOS approach is well suited to this task, and numerical experiments on a sample of non-trivial examples show that p outperforms the classical alpha-BB underestimators and their variants, not only with respect to the tightness score but also in terms of the lower bounds obtained by minimizing p (instead of f) on B. Similar improvements also occur when we use the moment-SOS underestimator instead of the alpha-BB one in refinements of the alpha-BB method.
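    In symbols, the tightness criterion of the first contribution can be sketched as follows (a hedged restatement in our own notation):

```latex
% Over all convex polynomials h of fixed degree d that underestimate f on
% the box B, choose the one minimizing the L1 distance to f:
\[
  p \;=\; \arg\min_{h \,\text{convex},\; \deg h \le d,\; h \le f \text{ on } B}
  \; \int_{B} \lvert f - h \rvert \, dx
  \;=\; \arg\min_{h} \; \int_{B} (f - h) \, dx ,
\]
% where the second equality holds because h <= f on B makes |f - h| = f - h,
% so the criterion is linear in h and amenable to the moment-SOS machinery.
```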
    * In a second contribution we propose an algorithm that also uses an optimal solution of a semidefinite relaxation in the moment-SOS hierarchy (in fact a slight modification of it) to provide a feasible solution for the initial optimization problem, but with no rounding procedure. In this context, we treat the first variable x_1 of x = (x_1, ..., x_n) as a parameter in some bounded interval Y of R. Notice that f* = min { J(y) : y in Y }, where J is the function J(y) := inf { f(x) : x in K ; x_1 = y }. That is, one has reduced the original n-dimensional optimization problem (P) to an equivalent one-dimensional optimization problem on an interval. But of course determining the optimal value function J is even more complicated than solving (P), as one has to determine a function (instead of a point in R^n), an infinite-dimensional problem. The idea is to approximate J(y) on Y by a univariate polynomial p(y) of degree d; fortunately, computing such a univariate polynomial is possible by solving a semidefinite relaxation associated with the parametric optimization problem. The degree d of p(y) is related to the size of this semidefinite relaxation: the higher the degree d, the better the approximation of J(y) by p(y), and in fact one may show that p(y) converges to J(y) in a strong sense on Y as d increases. But of course the resulting semidefinite relaxation becomes harder (or impossible) to solve as d increases, and so in practice d is fixed at a small value. Once the univariate polynomial p(y) has been determined, one computes x_1* in Y that minimizes p(y) on Y, a convex optimization problem that can be solved efficiently. The process is iterated to compute x_2* in a similar manner, and so on, until a point x* in R^n has been computed. Finally, as x* is not feasible in general, we use it as a starting point for a local optimization procedure to find a final feasible point x in K. When K is convex, the following variant is implemented: after having computed x_1* as indicated, x_2* is computed with x_1 fixed at the value x_1*, and x_3* is computed with x_1 and x_2 fixed at the values x_1* and x_2* respectively, and so on, so that the resulting point x* is feasible, i.e., x* in K. The same variant applies to 0/1 programs for which feasibility is easy to detect, as for MAXCUT, k-CLUSTER, or 0/1-KNAPSACK problems. Numerical experiments on many examples are encouraging and promising.
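    A hedged sketch of this variable-fixing loop is given below. All names are our own, and one step is deliberately simplified: the univariate value function J(y) is approximated here by sampling plus a least-squares polynomial fit, standing in for the semidefinite relaxation that the thesis actually solves at this step.

```python
import numpy as np

def fix_variables(value_fn, intervals, degree=4, grid=25):
    """Fix x_1, ..., x_n one coordinate at a time by minimizing a degree-d
    polynomial approximation p(y) of the value function on each interval.

    value_fn(prefix) should return min { f(x) : x feasible, x_i = prefix_i
    for the already-fixed coordinates } -- the oracle that the semidefinite
    relaxation replaces in the actual method.
    """
    fixed = []
    for lo, hi in intervals:
        ys = np.linspace(lo, hi, grid)
        js = [value_fn(fixed + [y]) for y in ys]          # sampled J(y)
        p = np.polynomial.Polynomial.fit(ys, js, degree)  # univariate p ~ J
        # Minimize p on [lo, hi]: compare endpoints and real stationary points.
        stationary = [t.real for t in p.deriv().roots()
                      if abs(t.imag) < 1e-9 and lo <= t.real <= hi]
        fixed.append(min(stationary + [lo, hi], key=p))
    return np.array(fixed)

# Toy usage: f(x1, x2) = (x1 - 0.3)^2 + (x2 + 0.5)^2 on the box [-1, 1]^2,
# with the oracle evaluated by brute-force search over the free coordinates.
def value_fn(prefix):
    free = 2 - len(prefix)
    grid = np.linspace(-1, 1, 41)
    pts = (np.array(np.meshgrid(*([grid] * free))).reshape(free, -1).T
           if free else [[]])
    f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2
    return min(f(list(prefix) + list(rest)) for rest in pts)

print(fix_variables(value_fn, [(-1, 1), (-1, 1)]))  # approximately [0.3, -0.5]
```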