    A study on set variable representations in constraint programming

    The work presented in this thesis falls within the context of constraint programming, a paradigm for modelling and solving combinatorial search problems that require finding solutions in the presence of constraints. A large share of these problems is naturally formulated in the language of set variables. Since the domain of such variables can be exponential in the number of elements, an explicit representation is often impractical. Recent studies have therefore focused on finding efficient ways to represent these variables. It is thus customary to represent these domains by means of interval approximations (henceforth, representations), specified by a lower bound and an upper bound under an appropriate ordering relation. The recent evolution of research on constraint programming over sets has clearly shown that combining different representations yields performance that is orders of magnitude better than traditional encoding techniques. Numerous proposals have been made in this direction. These works differ in how consistency is maintained between the different representations and in how constraints are propagated to reduce the search space. Unfortunately, no formal tool exists for comparing these combinations. The main goal of this work is to provide such a tool, in which we precisely define the notion of a combination of representations, bringing out the common aspects that have characterised previous work. In particular, we identify two possible kinds of combination, a strong and a weak one, defining the notions of bound consistency on constraints and synchronisation between representations. Our study offers some interesting insights into existing combinations, highlighting their limits and revealing a few surprises. We also provide a complexity analysis of the synchronisation between minlex, a representation able to propagate lexicographic constraints optimally, and the main existing representations.
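
    To make the interval idea concrete, here is a minimal Python sketch of the classic subset-bounds representation that work in this area builds on; the class and method names are ours, purely for illustration, not the thesis' implementation.

```python
# A set variable's domain is the interval [lb, ub] under the subset
# ordering, standing for every set S with lb <= S <= ub. The interval
# encodes 2**len(ub - lb) candidate sets in O(|ub|) space, which is why
# explicit domain enumeration is avoided.

class SetInterval:
    def __init__(self, lb, ub):
        self.lb = frozenset(lb)          # elements certainly in the set
        self.ub = frozenset(ub)          # elements possibly in the set
        if not self.lb <= self.ub:
            raise ValueError("empty domain: lb must be a subset of ub")

    def require(self, e):
        """Propagate 'e is in the set': grow the lower bound."""
        if e not in self.ub:
            raise ValueError("inconsistent: e was already excluded")
        self.lb = self.lb | {e}

    def exclude(self, e):
        """Propagate 'e is not in the set': shrink the upper bound."""
        if e in self.lb:
            raise ValueError("inconsistent: e is already required")
        self.ub = self.ub - {e}

    def is_fixed(self):
        return self.lb == self.ub        # exactly one set remains

# Example: S with 2 required and {1,2,3,4} possible; excluding 3
# leaves 2**2 = 4 candidate sets.
S = SetInterval(lb={2}, ub={1, 2, 3, 4})
S.exclude(3)
```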

    A Comparison of Lex Bounds for Multiset Variables in Constraint Programming

    Set and multiset variables in constraint programming have typically been represented using subset bounds. However, this is a weak representation that neglects potentially useful information about a set such as its cardinality. For set variables, the length-lex (LL) representation successfully provides information about the length (cardinality) and position in the lexicographic ordering. For multiset variables, where elements can be repeated, we consider richer representations that take into account additional information. We study eight different representations in which we maintain bounds according to one of eight different orderings: length-(co)lex (LL/LC), variety-(co)lex (VL/VC), length-variety-(co)lex (LVL/LVC), and variety-length-(co)lex (VLL/VLC). These representations integrate information about the cardinality, variety (number of distinct elements in the multiset), and position in some total ordering. Theoretical and empirical comparisons of expressiveness and compactness of the eight representations suggest that length-variety-(co)lex (LVL/LVC) and variety-length-(co)lex (VLL/VLC) usually give tighter bounds after constraint propagation. We implement the eight representations and evaluate them against the subset bounds representation with cardinality and variety reasoning. Results demonstrate that they offer significantly better pruning and runtime. (7 pages; Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI-11.)
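
    As a rough illustration of how these orderings differ, the following Python sketch (our names, not the paper's notation) encodes three of them as sort keys over multisets represented as sorted tuples; the co-lex variants would compare reversed tuples instead.

```python
# Each ordering ranks multisets first by some summary statistics,
# breaking ties lexicographically on the sorted element sequence.

def variety(ms):
    return len(set(ms))                 # number of distinct elements

def ll_key(ms):                         # length-lex: cardinality, then lex
    return (len(ms), tuple(sorted(ms)))

def lvl_key(ms):                        # length-variety-lex
    return (len(ms), variety(ms), tuple(sorted(ms)))

def vll_key(ms):                        # variety-length-lex
    return (variety(ms), len(ms), tuple(sorted(ms)))

# The orderings genuinely disagree: LL puts the smaller multiset {1,2}
# first, while VLL prefers {3,3,3}, which has fewer distinct values.
a, b = (3, 3, 3), (1, 2)
assert ll_key(b) < ll_key(a)
assert vll_key(a) < vll_key(b)
```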

    Modelling dynamic programming-based global constraints in constraint programming

    Dynamic Programming (DP) can solve many complex problems in polynomial or pseudo-polynomial time, and it is widely used in Constraint Programming (CP) to implement powerful global constraints. Implementing such constraints is a nontrivial task beyond the capability of most CP users, who must rely on their CP solver to provide an appropriate global constraint library. This also limits the usefulness of generic CP languages, some or all of whose solvers might not provide the required constraints. A technique was recently introduced for directly modelling DP in CP, which provides a way around this problem. However, the technique was never compared with other approaches, and it lacked a clear formalisation. In this paper we formalise the approach and compare it with existing techniques on MiniZinc benchmark problems, including the flow formulation of DP in Integer Programming. We further show how it can be improved by state reduction methods.
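
    The core idea, sketched below in Python under our own naming (not the paper's formalisation), is that a DP is a layered graph of states, so each stage transition can be posted as an ordinary table constraint over (state_i, decision_i, state_{i+1}). The sketch only builds the allowed-tuple tables for a 0/1 knapsack; a CP model would then link stage and state variables with them.

```python
def knapsack_layers(weights, capacity):
    states = {0}                            # reachable weights after 0 items
    tables = []
    for w in weights:
        rows, nxt = [], set()
        for s in states:
            rows.append((s, 0, s))          # skip the item: state unchanged
            nxt.add(s)
            if s + w <= capacity:
                rows.append((s, 1, s + w))  # take the item: add its weight
                nxt.add(s + w)
        tables.append(rows)                 # one table constraint per stage
        states = nxt
    return tables

# Each returned list is the allowed-tuples table for one stage,
# linking (s_i, x_i, s_{i+1}).
tables = knapsack_layers([3, 4, 5], capacity=10)
```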

    Replicable parallel branch and bound search

    Combinatorial branch and bound searches are a common technique for solving global optimisation and decision problems. Their performance often depends on good search order heuristics, refined over decades of algorithms research. Parallel search necessarily deviates from the sequential search order, sometimes dramatically and unpredictably, e.g. by distributing work at random. This can disrupt effective search order heuristics and lead to unexpected and highly variable parallel performance. The variability makes it hard to reason about the parallel performance of combinatorial searches. This paper presents a generic parallel branch and bound skeleton, implemented in Haskell, with replicable parallel performance. The skeleton aims to preserve the search order heuristic by distributing work in an ordered fashion, closely following the sequential search order. We demonstrate the generality of the approach by applying the skeleton to 40 instances of three combinatorial problems: Maximum Clique, 0/1 Knapsack and Travelling Salesperson. The overheads of our Haskell skeleton are reasonable, giving slowdown factors between 1.9 and 6.2 compared with a class-leading, dedicated, and highly optimised C++ Maximum Clique solver. We demonstrate scaling up to 200 cores of a Beowulf cluster, achieving speedups of 100x for several Maximum Clique instances. We demonstrate low variance of parallel performance across all instances of the three combinatorial problems and at all scales up to 200 cores, with median Relative Standard Deviation (RSD) below 2%. Parallel solvers that do not follow the sequential search order exhibit far higher variance, with median RSD exceeding 85% for Knapsack.
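
    The following toy Python sketch (ours, not the paper's Haskell skeleton) illustrates the ordering idea: if every subtree carries its position in the sequential depth-first order as a priority, a worker that always pulls the lowest-priority task deviates from the sequential search order as little as possible.

```python
import heapq

def ordered_bnb(root, branch, bound, is_solution, value):
    """Maximising branch and bound; `bound` upper-bounds any descendant."""
    best = None
    # priority = the path of branching ranks; its lexicographic order is
    # exactly the sequential depth-first (preorder) order of the tree
    queue = [((), root)]
    while queue:
        path, node = heapq.heappop(queue)
        if best is not None and bound(node) <= value(best):
            continue                          # prune against the incumbent
        if is_solution(node) and (best is None or value(node) > value(best)):
            best = node
        for rank, child in enumerate(branch(node)):
            heapq.heappush(queue, (path + (rank,), child))
    return best
```

    Run sequentially, this reproduces depth-first search exactly; the paper's skeleton applies the same ordered-distribution principle across parallel workers, which is what makes its performance replicable.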

    Multi-objective optimization in graphical models

    Many real-life optimization problems are combinatorial, i.e. they concern a choice of the best solution from a finite but exponentially large set of alternatives. Moreover, the solution quality of many of these problems can often be evaluated from several points of view (a.k.a. criteria). In that case, each criterion may be described by a different objective function. Some important and well-known multicriteria scenarios are:
    · In investment optimization one wants to minimize risk and maximize benefit.
    · In travel scheduling one wants to minimize time and cost.
    · In circuit design one wants to minimize circuit area and energy consumption and maximize speed.
    · In knapsack problems one wants to minimize load weight and/or volume and maximize its economic value.
    These examples illustrate that, in many cases, the multiple criteria are incommensurate (i.e., it is difficult or impossible to combine them into a single criterion) and conflicting (i.e., solutions that are good with respect to one criterion are likely to be bad with respect to another). Taking the different criteria into account simultaneously is not trivial, and several notions of optimality have been proposed; a concrete example follows this abstract. Independently of the chosen notion of optimality, computing optimal solutions represents an important current research challenge. Graphical models are a knowledge representation tool widely used in Artificial Intelligence, and they seem especially suitable for combinatorial problems. Roughly, graphical models are graphs in which nodes represent variables and the (lack of) arcs represent conditional independence assumptions. In addition to the graph structure, it is necessary to specify its micro-structure, which tells how particular combinations of instantiations of interdependent variables interact. The graphical model framework provides a unifying way to model a broad spectrum of systems and a collection of general algorithms to solve them efficiently. In this Thesis we integrate multi-objective optimization problems into the graphical model paradigm and study how algorithmic techniques developed in the graphical model context can be extended to multi-objective optimization problems. As we show, multi-objective optimization problems can be formalized as a particular case of graphical models using the semiring-based framework. To the best of our knowledge, this is the first time that graphical models in general, and semiring-based problems in particular, are used to model an optimization problem in which the objective function is partially ordered. Moreover, we show that most of the solving techniques for mono-objective optimization problems can be naturally extended to the multi-objective context. The result of our work is the mathematical formalization of multi-objective optimization problems and the development of a set of multi-objective solving algorithms that have proved efficient on a number of benchmarks.
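
    As a concrete instance of one such notion of optimality, here is a minimal Python sketch of Pareto optimality, the standard choice when criteria are incommensurate; the function names are illustrative.

```python
# A cost vector dominates another if it is no worse on every objective
# and strictly better on at least one (all objectives minimized here).

def dominates(u, v):
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def pareto_frontier(points):
    """Keep exactly the non-dominated cost vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# E.g. for (risk, -benefit) pairs, (2, 5) and (4, 1) are incomparable,
# while (3, 6) is dominated by (2, 5) and drops out.
print(pareto_frontier([(2, 5), (4, 1), (3, 6)]))  # -> [(2, 5), (4, 1)]
```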

    PYCSP3: Modeling Combinatorial Constrained Problems in Python

    In this document, we introduce PyCSP3, a Python library that allows us to write models of combinatorial constrained problems in a simple and declarative way. Currently, with PyCSP3, you can write models of constraint satisfaction and optimization problems. More specifically, you can build CSP (Constraint Satisfaction Problem) and COP (Constraint Optimization Problem) models. Importantly, there is a complete separation between modeling and solving phases: you write a model, you compile it (while providing some data) in order to generate an XCSP3 instance (file), and you solve that problem instance by means of a constraint solver. In this document, you will find all that you need to know about PyCSP3, with more than 40 illustrative models.
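
    To give a flavour of the modeling style, here is a small n-queens model written against PyCSP3's documented VarArray/satisfy interface as we understand it (a standard textbook example, not one taken from this document; consult the library's documentation for the authoritative API). Compiling the script emits an XCSP3 instance file that any XCSP3-aware solver can take as input.

```python
from pycsp3 import *

n = 8
# q[i] = column of the queen placed on row i
q = VarArray(size=n, dom=range(n))

satisfy(
    AllDifferent(q),                           # one queen per column
    AllDifferent(q[i] + i for i in range(n)),  # distinct ascending diagonals
    AllDifferent(q[i] - i for i in range(n))   # distinct descending diagonals
)
```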

    Exploiting Global Constraints for Search and Propagation

    This thesis focuses on Constraint Programming (CP), an emergent paradigm for solving complex combinatorial optimization problems. The main contributions revolve around constraint filtering and search, two key components of CP: on one side, constraint filtering reduces the size of the search space; on the other, search defines how this space is explored. Advances on these topics are crucial to broaden the applicability of CP to real-life problems. Concerning constraint filtering, the contribution is twofold. First, we propose an improvement on an existing algorithm for the relaxed version of a constraint that frequently appears in assignment problems (soft gcc). The proposed algorithm outperforms the previously known one in time complexity, both for the consistency check and for the filtering, as well as in ease of implementation. Second, we introduce a new constraint (in both hard and soft versions) and associated filtering algorithms for a recurrent sub-structure that occurs in assignment problems with heterogeneous resources (hierarchical gcc). We show promising results when compared to an equivalent decomposition based on gcc. Concerning search, we introduce algorithms to count the number of solutions of two important families of constraints: occurrence counting constraints, such as alldifferent, symmetric alldifferent and gcc, and sequencing constraints, such as regular. These algorithms are the building blocks of a new family of search heuristics, called constraint-centered counting-based heuristics. They extract information about the number of solutions the individual constraints admit, to guide search towards parts of the search space that are likely to contain a high number of solutions. Experimental results on eight different problems show impressive performance compared to other generic state-of-the-art heuristics. Finally, we experiment with an already known strong form of constraint filtering that is heuristically guided by the search (quick shaving). This technique gives mixed results when applied blindly to any problem. We introduce a simple yet very effective estimator to dynamically enable or disable quick shaving, with experimentally very promising results.
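
    To give a flavour of the counting machinery: for alldifferent, the exact solution count is the permanent of the variable-value adjacency matrix, which is #P-hard to compute, so heuristics of this kind rely on cheap upper bounds. The sketch below (our code) shows the classical Bregman-Minc bound, one estimate known to be used in counting-based search; the exact estimators in the thesis may differ.

```python
from math import factorial

def bregman_minc_upper_bound(domains):
    """Upper-bound the solution count of alldifferent(x_1..x_n) by
    prod_i (d_i!)**(1/d_i), where d_i = |dom(x_i)| (Bregman-Minc)."""
    ub = 1.0
    for dom in domains:
        d = len(dom)
        if d == 0:
            return 0.0                 # empty domain: no solution at all
        ub *= factorial(d) ** (1.0 / d)
    return ub

# A counting-based branching heuristic can rank (variable, value) pairs
# by the bound of the subproblem after the assignment, preferring
# branches with high estimated solution counts.
print(bregman_minc_upper_bound([{1, 2}, {1, 2, 3}, {2, 3}]))
```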

    The (Coarse) Fine-Grained Structure of NP-Hard SAT and CSP Problems

    We study the fine-grained complexity of NP-complete satisfiability (SAT) problems and constraint satisfaction problems (CSPs) in the context of the strong exponential-time hypothesis (SETH), showing non-trivial lower and upper bounds on the running time. Here, by a non-trivial lower bound for a problem SAT(Gamma) (respectively CSP(Gamma)) with constraint language Gamma, we mean a value c_0 > 1 such that the problem cannot be solved in time O(c^n) for any c < c_0 unless SETH is false, while a non-trivial upper bound is simply an algorithm for the problem running in time O(c^n) for some c < 2. Such lower bounds have proven extremely elusive, and except for cases where c_0 = 2 effectively no such previous bound was known. We achieve this by employing an algebraic framework, studying constraint languages Gamma in terms of their algebraic properties. We uncover a powerful algebraic framework where a mild restriction on the allowed constraints offers a concise algebraic characterization. On the relational side we restrict ourselves to Boolean languages closed under variable negation and partial assignment, called sign-symmetric languages. On the algebraic side this results in a description via partial operations arising from systems of identities, with a close connection to operations resulting in tractable CSPs, such as near-unanimity operations and edge operations. Using this connection we construct improved algorithms for several interesting classes of sign-symmetric languages, and prove explicit lower bounds under SETH. Thus, we find the first example of an NP-complete SAT problem with a non-trivial algorithm which also admits a non-trivial lower bound under SETH. This suggests a dichotomy conjecture with a close connection to the CSP dichotomy theorem: an NP-complete SAT problem admits an improved algorithm if and only if it admits a non-trivial partial invariant of the above form. Funding: Swedish Research Council (VR) [2019-03690].