62 research outputs found

    Binarisation for Valued Constraint Satisfaction Problems

    We study methods for transforming valued constraint satisfaction problems (VCSPs) to binary VCSPs. First, we show that the standard dual encoding preserves many aspects of the algebraic properties that capture the computational complexity of VCSPs. Second, we extend the reduction of CSPs to binary CSPs described by Bulín et al. [Log. Methods Comput. Sci., 11 (2015)] to VCSPs. This reduction establishes that VCSPs over a fixed valued constraint language are polynomial-time equivalent to minimum-cost homomorphism problems over a fixed digraph.
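
    To make the reduction target concrete, below is a minimal brute-force sketch (in Python, with an invented toy digraph, instance, and cost table, none of which come from the paper) of a minimum-cost homomorphism problem over a fixed digraph H: enumerate all maps from V(G) to V(H), keep those that preserve edges, and return one minimising the sum of unary costs.

from itertools import product

def min_cost_hom(G_vertices, G_edges, H_vertices, H_edges, cost):
    """Brute-force MinHOM: find a homomorphism f: G -> H minimising
    the total unary cost sum_v cost[v][f(v)] (toy-sized inputs only)."""
    best = None
    for values in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, values))
        # f must map every edge of G to an edge of H
        if all((f[u], f[v]) in H_edges for (u, v) in G_edges):
            total = sum(cost[v][f[v]] for v in G_vertices)
            if best is None or total < best[0]:
                best = (total, f)
    return best  # None if no homomorphism exists

# Hypothetical toy instance: H is a directed 2-cycle, G a directed path.
H_vertices = ["a", "b"]
H_edges = {("a", "b"), ("b", "a")}
G_vertices = ["x", "y", "z"]
G_edges = [("x", "y"), ("y", "z")]
cost = {"x": {"a": 0, "b": 3}, "y": {"a": 2, "b": 1}, "z": {"a": 0, "b": 5}}

print(min_cost_hom(G_vertices, G_edges, H_vertices, H_edges, cost))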

    Necessary conditions for tractability of valued CSPs

    The connection between constraint languages and clone theory has been a fruitful line of research on the complexity of constraint satisfaction problems. In a recent result, Cohen et al. [SICOMP'13] have characterised a Galois connection between valued constraint languages and so-called weighted clones. In this paper, we study the structure of weighted clones. We extend the results of Creed and Zivny from [CP'11/SICOMP'13] on types of weightings necessarily contained in every nontrivial weighted clone. This result has immediate computational complexity consequences as it provides necessary conditions for tractability of weighted clones and thus valued constraint languages. We demonstrate that some of the necessary conditions are also sufficient for tractability, while others are provably not. Comment: To appear in SIAM Journal on Discrete Mathematics (SIDMA).

    Binarisation via Dualisation for Valued Constraints

    Constraint programming is a natural paradigm for many combinatorial optimisation problems. The complexity of constraint satisfaction for various forms of constraints has been widely studied, both to inform the choice of appropriate algorithms and to understand better the boundary between polynomial-time complexity and NP-hardness. In constraint programming it is well known that any constraint satisfaction problem can be converted to an equivalent binary problem using the so-called dual encoding. Using this standard approach, any fixed collection of constraints, of arbitrary arity, can be converted to an equivalent set of constraints of arity at most two. Here we show that this transformation, although it changes the domain of the constraints, preserves all the relevant algebraic properties that determine the complexity. Moreover, we show that the dual encoding preserves many of the key algorithmic properties of the original instance. We also show that this remains true for more general valued constraint languages, where constraints may assign different cost values to different assignments. Hence, we obtain a simple proof of the fact that to classify the computational complexity of all valued constraint languages it suffices to classify only binary valued constraint languages.
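
    As an illustration of the dual encoding on a tiny invented (crisp) CSP, the sketch below replaces a ternary and a binary constraint by two dual variables whose domains are the allowed tuples, with binary compatibility constraints forcing agreement on shared original variables; it then brute-forces the dual instance and decodes its solutions. The valued generalisation discussed above would additionally carry the original tuple costs as unary costs on the dual domains; the instance and names here are made up for illustration.

from itertools import product

# A hypothetical CSP: variables x, y, z over {0, 1}, with
#   C1 on (x, y, z): allowed tuples of an "exactly one is 1" constraint
#   C2 on (y, z):    allowed tuples of a "not both 1" constraint
constraints = {
    "C1": (("x", "y", "z"), {(1, 0, 0), (0, 1, 0), (0, 0, 1)}),
    "C2": (("y", "z"), {(0, 0), (0, 1), (1, 0)}),
}

# Dual encoding: one variable per original constraint, whose domain is the
# set of allowed tuples; binary compatibility constraints force agreement
# on shared original variables.
def compatible(scope1, t1, scope2, t2):
    shared = set(scope1) & set(scope2)
    return all(t1[scope1.index(v)] == t2[scope2.index(v)] for v in shared)

dual_vars = list(constraints)
dual_domains = {c: sorted(constraints[c][1]) for c in dual_vars}

# Solve the binary (dual) CSP by brute force and decode each solution back
# to an assignment of the original variables.
solutions = set()
for choice in product(*(dual_domains[c] for c in dual_vars)):
    ok = all(
        compatible(constraints[ci][0], choice[i], constraints[cj][0], choice[j])
        for i, ci in enumerate(dual_vars)
        for j, cj in enumerate(dual_vars)
        if i < j
    )
    if ok:
        assignment = {}
        for c, t in zip(dual_vars, choice):
            assignment.update(zip(constraints[c][0], t))
        solutions.add(tuple(sorted(assignment.items())))

print(solutions)  # the satisfying assignments of the original CSP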

    The power of Sherali-Adams relaxations for general-valued CSPs

    We give a precise algebraic characterisation of the power of Sherali-Adams relaxations for solvability of valued constraint satisfaction problems to optimality. The condition is that of bounded width, which has already been shown to capture the power of local consistency methods for decision CSPs and the power of semidefinite programming for robust approximation of CSPs. Our characterisation has several algorithmic and complexity consequences. On the algorithmic side, we show that several novel and many known valued constraint languages are tractable via the third level of the Sherali-Adams relaxation. For the known languages, this is a significantly simpler algorithm than the previously obtained ones. On the complexity side, we obtain a dichotomy theorem for valued constraint languages that can express an injective unary function. This implies a simple proof of the dichotomy theorem for conservative valued constraint languages established by Kolmogorov and Zivny [JACM'13], and also a dichotomy theorem for the exact solvability of Minimum-Solution problems. These are generalisations of Minimum-Ones problems to arbitrary finite domains. Our result improves on several previous classifications by Khanna et al. [SICOMP'00], Jonsson et al. [SICOMP'08], and Uppman [ICALP'13]. Comment: Full version of an ICALP'15 paper (arXiv:1502.05301).
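
    The following is a hedged sketch of the basic linear programming relaxation of a toy VCSP (the lowest level of the kind of hierarchy discussed above, not the third level analysed in the paper), built with scipy.optimize.linprog. The two cost tables and the variable names are invented; the LP has one distribution per constraint scope and per variable, linked by marginalisation equalities, and its optimum is a lower bound on the VCSP optimum.

import numpy as np
from scipy.optimize import linprog

# Hypothetical toy VCSP over domain {0, 1}:
#   minimise f(x, y) + g(y, z)
f = {(0, 0): 0, (0, 1): 2, (1, 0): 2, (1, 1): 1}   # cost table on (x, y)
g = {(0, 0): 1, (0, 1): 0, (1, 0): 3, (1, 1): 0}   # cost table on (y, z)
D = [0, 1]

# LP variables: lam_f[a,b], lam_g[b,c], mu_x[a], mu_y[b], mu_z[c]
idx = {}
for name in ([("lf", a, b) for a in D for b in D] +
             [("lg", b, c) for b in D for c in D] +
             [("mx", a) for a in D] +
             [("my", b) for b in D] +
             [("mz", c) for c in D]):
    idx[name] = len(idx)

obj = np.zeros(len(idx))
for (a, b), cost in f.items():
    obj[idx[("lf", a, b)]] = cost
for (b, c), cost in g.items():
    obj[idx[("lg", b, c)]] = cost

A_eq, b_eq = [], []
def eq(coeffs, rhs):
    row = np.zeros(len(idx))
    for name, coef in coeffs:
        row[idx[name]] = coef
    A_eq.append(row)
    b_eq.append(rhs)

eq([(("mx", a), 1) for a in D], 1)                     # mu_x is a distribution
eq([(("my", b), 1) for b in D], 1)
eq([(("mz", c), 1) for c in D], 1)
for a in D:                                            # marginals of lam_f match mu_x
    eq([(("lf", a, b), 1) for b in D] + [(("mx", a), -1)], 0)
for b in D:                                            # ... and mu_y
    eq([(("lf", a, b), 1) for a in D] + [(("my", b), -1)], 0)
for b in D:                                            # marginals of lam_g match mu_y
    eq([(("lg", b, c), 1) for c in D] + [(("my", b), -1)], 0)
for c in D:                                            # ... and mu_z
    eq([(("lg", b, c), 1) for b in D] + [(("mz", c), -1)], 0)

res = linprog(obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print(res.fun)  # LP lower bound on the optimal VCSP value (0.0 for this instance)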

    Formulations linéaires pour la programmation par contraintes (Linear formulations for constraint programming)

    In this paper, we first present generic techniques for formulating any constraint of a CSP as an Integer Linear Program (ILP). This makes it possible to exploit, within Constraint Programming (CP), many of the algorithmic tools developed by the Operations Research (OR) community, for example in order to build global constraints. We then propose a generic linear model to improve the reduced-cost-based filtering technique [5]. Solving this linear model yields reduced costs that are more informative than those obtained by solving the classical continuous relaxation of an ILP.
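
    As a sketch of the kind of generic linearisation described above (with invented data, not the paper's exact model), a table constraint can be encoded with one 0/1 variable per variable-value pair and one 0/1 variable per allowed tuple, linked by channelling equalities. The check below enumerates all 0/1 assignments of the encoding and confirms that its solutions decode exactly to the allowed tuples.

from itertools import product

# Hypothetical table constraint over variables (x, y) with domain {0, 1, 2}:
scope = ("x", "y")
domain = [0, 1, 2]
allowed = [(0, 1), (1, 2), (2, 0)]

# 0/1 encoding:
#   val[v][a] = 1  iff variable v takes value a      (one per variable-value pair)
#   tup[t]    = 1  iff allowed tuple t is selected   (one per allowed tuple)
# Linear constraints:
#   sum_a val[v][a] = 1                       for every variable v
#   sum_t tup[t]    = 1
#   sum_{t with t[v] = a} tup[t] = val[v][a]  (channelling)
def satisfies_encoding(val, tup):
    if any(sum(val[v].values()) != 1 for v in scope):
        return False
    if sum(tup.values()) != 1:
        return False
    for i, v in enumerate(scope):
        for a in domain:
            if sum(tup[t] for t in allowed if t[i] == a) != val[v][a]:
                return False
    return True

# Enumerate every 0/1 assignment of the encoding and decode the solutions.
solutions = []
n_val, n_tup = len(scope) * len(domain), len(allowed)
for bits in product([0, 1], repeat=n_val + n_tup):
    val = {v: {a: bits[i * len(domain) + j] for j, a in enumerate(domain)}
           for i, v in enumerate(scope)}
    tup = {t: bits[n_val + k] for k, t in enumerate(allowed)}
    if satisfies_encoding(val, tup):
        solutions.append(tuple(a for v in scope for a in domain if val[v][a]))

print(sorted(solutions))  # should equal the allowed tuples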

    Toward a Dichotomy for Approximation of H-Coloring

    Given two (di)graphs G, H and a cost function c: V(G) x V(H) -> Q_{>=0} cup {+infty}, in the minimum cost homomorphism problem, MinHOM(H), we are interested in finding a homomorphism f: V(G) -> V(H) (a.k.a. an H-coloring) that minimizes sum_{v in V(G)} c(v, f(v)). The complexity of exact minimization of this problem is well understood [Pavol Hell and Arash Rafiey, 2012], and the class of digraphs H for which MinHOM(H) is polynomial-time solvable is a small subset of all digraphs. In this paper, we consider the approximation of MinHOM within a constant factor. In terms of digraphs, MinHOM(H) is not approximable if H contains a digraph asteroidal triple (DAT). We take a major step toward a dichotomy classification of approximable cases. We give a dichotomy classification for approximating MinHOM(H) when H is a graph (i.e. a symmetric digraph). For digraphs, we provide constant factor approximation algorithms for two important classes of digraphs, namely bi-arc digraphs (digraphs with a conservative semi-lattice polymorphism or min-ordering) and k-arc digraphs (digraphs with an extended min-ordering). Specifically, we show that:
    - Dichotomy for graphs: MinHOM(H) has a 2|V(H)|-approximation algorithm if the graph H admits a conservative majority polymorphism (i.e. H is a bi-arc graph); otherwise it is inapproximable.
    - MinHOM(H) has a |V(H)|^2-approximation algorithm if H is a bi-arc digraph.
    - MinHOM(H) has a |V(H)|^2-approximation algorithm if H is a k-arc digraph.
    In conclusion, we show the importance of these results and provide insights for achieving a dichotomy classification of approximable cases. Our constant factors depend on the size of H. However, the implementation of our algorithms provides a much better approximation ratio. It remains open to investigate a classification of digraphs H for which MinHOM(H) admits a constant factor approximation algorithm that is independent of |V(H)|.
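
    The digraph classes above are defined via polymorphisms. As a small illustration (on an invented digraph, not one from the paper), the sketch below checks whether coordinate-wise min with respect to a given linear order is a polymorphism of a digraph H, i.e. whether the order is a min-ordering: for all edges (a, b) and (c, d) of H, the pair (min(a, c), min(b, d)) must also be an edge.

from itertools import product

def is_min_ordering(edges, order):
    """Check whether 'order' (vertices listed smallest-first) makes
    coordinate-wise min a polymorphism of the digraph with edge set 'edges'."""
    rank = {v: i for i, v in enumerate(order)}

    def mn(u, v):
        return u if rank[u] <= rank[v] else v

    return all(
        (mn(a, c), mn(b, d)) in edges
        for (a, b), (c, d) in product(edges, repeat=2)
    )

# Hypothetical example: a reflexive path 0 - 1 - 2, viewed as a symmetric
# digraph (every edge in both directions, plus a loop at each vertex).
edges = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (1, 2), (2, 1)}

print(is_min_ordering(edges, [0, 1, 2]))  # True: the path order is a min-ordering
print(is_min_ordering(edges, [2, 0, 1]))  # False: this reordering breaks it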

    Constraint Network Satisfaction for Finite Relation Algebras

    Network satisfaction problems (NSPs) for finite relation algebras are computational decision problems, studied intensively since the 1990s. The major open research challenge in this field is to understand which of these problems are solvable by polynomial-time algorithms. Since there are known examples of undecidable NSPs of finite relation algebras, it is advisable to restrict the scope of such a classification attempt to well-behaved subclasses of relation algebras. The class of relation algebras with a normal representation is such a well-behaved subclass. Many well-known examples of relation algebras, such as the Point Algebra, RCC5, and Allen’s Interval Algebra, admit a normal representation. The great advantage of finite relation algebras with normal representations is that their NSP is essentially the same as a constraint satisfaction problem (CSP). For a relational structure B, the problem CSP(B) is the computational problem of deciding whether a given finite relational structure C has a homomorphism to B. The study of CSPs has a long and rich history, culminating for the time being in the celebrated proofs of the Feder-Vardi dichotomy conjecture. Bulatov and Zhuk independently proved that for every finite structure B the problem CSP(B) is in P or NP-complete. Both proofs rely on the universal-algebraic approach, a powerful theory that connects algebraic properties of structures B with complexity results for the decision problems CSP(B). The CSPs that emerge from NSPs are typically of the form CSP(B) for an infinite structure B and therefore do not fall into the scope of the dichotomy result for finite structures. In this thesis we study NSPs of finite relation algebras with normal representations by the universal-algebraic methods that were developed for the study of finite- and infinite-domain CSPs. We additionally make use of model theory and a Ramsey-type result of Nešetřil and Rödl. Our contributions to the field are divided into three parts. Firstly, we provide two algebraic criteria for NP-hardness of NSPs. Our second result is a complete classification of the complexity of NSPs for symmetric relation algebras with a flexible atom; these problems are in P or NP-complete. Our result is obtained via a decidable condition on the relation algebra which implies polynomial-time tractability of the NSP. As a third contribution we prove that for a large class of NSPs, non-hardness implies that the problems can even be solved by Datalog programs, unless P = NP. This result can be used to strengthen the dichotomy result for NSPs of symmetric relation algebras with a flexible atom: every such problem can be solved by a Datalog program or is NP-complete. Our proof relies equally on known results and new observations in the algebraic analysis of finite structures.
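
    To make the central decision problem concrete: CSP(B) asks whether a given finite structure C maps homomorphically to the template B. The brute-force sketch below tests this for two small invented structures with two binary relations; real templates arising from relation algebras are typically infinite, so this is only meant to illustrate the definition.

from itertools import product

def has_homomorphism(C, B):
    """Decide CSP(B) by brute force: is there a map h from the domain of C to
    the domain of B that preserves every relation in the (shared) signature?"""
    C_dom, C_rels = C
    B_dom, B_rels = B
    for values in product(B_dom, repeat=len(C_dom)):
        h = dict(zip(C_dom, values))
        if all(tuple(h[x] for x in t) in B_rels[name]
               for name, tuples in C_rels.items()
               for t in tuples):
            return True
    return False

# Hypothetical template B: a directed 3-cycle together with an inequality relation.
B = ([0, 1, 2],
     {"E":   {(0, 1), (1, 2), (2, 0)},
      "neq": {(a, b) for a in range(3) for b in range(3) if a != b}})

# Instance C: a directed path of length two whose endpoints must differ.
C = (["u", "v", "w"],
     {"E":   {("u", "v"), ("v", "w")},
      "neq": {("u", "w")}})

print(has_homomorphism(C, B))  # True, e.g. u -> 0, v -> 1, w -> 2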

    Complexity Classifications for the Valued Constraint Satisfaction Problem

    In a valued constraint satisfaction problem (VCSP), the goal is to find an assignment of values to variables that minimizes a given sum of functions. Each function in the sum depends on a subset of variables, takes values which are rational numbers or infinity, and is chosen from a fixed finite set of functions called a constraint language. We study how the computational complexity of this problem depends on the constraint language. We often consider the case where infinite values are disallowed, and refer to such constraint languages as being finite-valued. For such finite-valued constraint languages, the case where variables take two values was classified by Cohen et al., who showed that submodular functions essentially give rise to the only tractable case. Non-submodular functions can be used to express the NP-hard Max Cut problem. We consider the case where the variables can take three values, and identify a new infinite set of functions, called skew bisubmodular functions, which imply tractability. We prove that submodularity with respect to some total order and skew bisubmodularity give rise to the only tractable cases, and in all other cases Max Cut can be expressed. We also show that our characterisation of tractable cases is tight, that is, none of the conditions can be omitted. Thus, our results provide a new dichotomy theorem in constraint satisfaction research. We also negatively answer the question of whether multimorphisms can capture all necessary tractable constraint languages. We then study the VCSP as a homomorphism problem on digraphs. By adapting a proof designed for CSPs we show that each VCSP with a fixed finite constraint language is equivalent to one where the constraint language consists of one {0,infinity}-valued binary function (i.e. a digraph) and one finite-valued unary function. This latter problem is known as the Minimum Cost Homomorphism Problem for digraphs. We also show that our reduction preserves a number of useful algebraic properties of the constraint language. Finally, given a finite-valued constraint language, we consider the case where the variables of our VCSP are allowed to take four values. We prove that 1-defect chain multimorphisms, which are required in the four-element dichotomy of Min CSP, are a special case of more general fractional polymorphisms we call {a,b}-1-defect fractional polymorphisms. We conclude with a conjecture for the four-element case, and some interesting open problems which might lead to a tighter description of tractable finite-valued constraint languages on finite domains of any size.
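
    To make the basic objects concrete, the sketch below brute-forces a tiny invented VCSP (minimise a sum of cost functions, each depending on a subset of the variables, with values that may be infinite) and checks whether a binary cost function on the Boolean domain {0, 1} is submodular, i.e. f(min(x, y)) + f(max(x, y)) <= f(x) + f(y) with min and max taken coordinate-wise. The tables and names are illustrative only.

from itertools import product

INF = float("inf")

def vcsp_optimum(variables, domain, constraints):
    """Brute-force VCSP: constraints is a list of (scope, cost_table) pairs;
    return the minimum total cost over all assignments (may be infinity)."""
    best = INF
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        total = sum(table[tuple(assignment[v] for v in scope)]
                    for scope, table in constraints)
        best = min(best, total)
    return best

def is_submodular(table):
    """Check f(min(x, y)) + f(max(x, y)) <= f(x) + f(y) for a binary cost
    table over the Boolean domain {0, 1}, min and max taken coordinate-wise."""
    points = list(product([0, 1], repeat=2))
    for x, y in product(points, repeat=2):
        meet = tuple(min(a, b) for a, b in zip(x, y))
        join = tuple(max(a, b) for a, b in zip(x, y))
        if table[meet] + table[join] > table[x] + table[y]:
            return False
    return True

# Hypothetical instance: a cut-like binary cost plus a unary preference.
cut = {(0, 0): 0, (1, 1): 0, (0, 1): 1, (1, 0): 1}
prefer_one = {(0,): 2, (1,): 0}
constraints = [(("x", "y"), cut), (("y", "z"), cut), (("x",), prefer_one)]

print(vcsp_optimum(["x", "y", "z"], [0, 1], constraints))  # 0: set x = y = z = 1
print(is_submodular(cut))  # True: this cost function is submodular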

    Automatic texture classification in manufactured paper


    Machine learning applications for the topology prediction of transmembrane beta-barrel proteins

    The research topic for this PhD thesis is the topology prediction of beta-barrel transmembrane proteins. Transmembrane proteins adopt various conformations that are related to the functions they provide. The two most predominant classes are alpha-helix bundles and beta-barrel transmembrane proteins. Alpha-helix proteins are present in larger numbers than beta-barrel transmembrane proteins in structure databases. Therefore, there is a need for computational tools that can predict and detect the structure of beta-barrel transmembrane proteins. Transmembrane proteins are used for active transport across the membrane or for signal transduction. Given the importance of their roles, it is essential to understand the structures of these proteins. Transmembrane proteins are also a significant focus for new drug discovery. Transmembrane beta-barrel proteins play critical roles in the translocation machinery, pore formation, membrane anchoring, and ion exchange. In bioinformatics, many years of research have been spent on the topology prediction of transmembrane alpha-helices. Efforts on TMB (transmembrane beta-barrel) protein topology prediction have been overshadowed, and the prediction accuracy could be improved with further research. Various methodologies have been developed in the past to predict TMB protein topology. Methods available in the literature include turn identification, hydrophobicity profiles, rule-based prediction, HMMs (hidden Markov models), ANNs (artificial neural networks), radial basis function networks, and combinations of these methods. The use of a cascading classifier has never been fully explored. This research presents and evaluates approaches such as ANNs, KNN (K-Nearest Neighbors), SVMs (Support Vector Machines), and a novel approach to TMB topology prediction using a cascading classifier. Computer simulations have been implemented in MATLAB, and the results have been evaluated. Data were collected from various datasets and pre-processed for each machine learning technique. A deep neural network was built with an input layer, hidden layers, and an output layer. Optimisation of the cascading classifier was obtained mainly by optimising each constituent machine learning algorithm and by starting from the parameters that gave the best results for each of them. The cascading classifier results show that the proposed methodology predicts transmembrane beta-barrel protein topologies with high accuracy for randomly selected proteins. Using the cascading classifier approach, the best overall accuracy is 76.3%, with a precision of 0.831 and a recall (probability of detection) of 0.799 for TMB topology prediction. The accuracy of 76.3% is achieved using a two-layer cascading classifier. By constructing and using various machine learning frameworks, systems were developed to analyse TMB topologies with significant robustness. We have presented several experimental findings that may be useful for future research. Using the cascading classifier, we applied a novel approach to the topology prediction of TMB proteins.
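
    The thesis's pipeline was implemented in MATLAB with ANN, KNN and SVM components; purely as an illustration of the cascading idea (with a synthetic dataset, arbitrarily chosen scikit-learn models, and an arbitrary confidence threshold, none of which reflect the thesis's actual configuration), the sketch below lets a first classifier decide the examples it is confident about and passes the rest to a second classifier.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-residue feature vectors and binary labels.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Stage 1: a fast, simple model; stage 2: a heavier model for the hard cases.
stage1 = LogisticRegression(max_iter=1000).fit(X_train, y_train)
stage2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

THRESHOLD = 0.8  # arbitrary confidence cut-off for "stage 1 is sure"

proba = stage1.predict_proba(X_test)
confident = proba.max(axis=1) >= THRESHOLD

pred = np.empty(len(X_test), dtype=int)
pred[confident] = stage1.classes_[proba[confident].argmax(axis=1)]  # stage 1 decides
if (~confident).any():
    pred[~confident] = stage2.predict(X_test[~confident])           # cascade to stage 2

print(f"cascade accuracy: {(pred == y_test).mean():.3f}, "
      f"passed to stage 2: {(~confident).mean():.1%}")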
    • …