
    Parameterized Constraint Satisfaction Problems: a Survey

    We consider constraint satisfaction problems parameterized above or below guaranteed values. One example is MaxSat parameterized above m/2: given a CNF formula F with m clauses, decide whether there is a truth assignment that satisfies at least m/2 + k clauses, where k is the parameter. Among the other problems we deal with are MaxLin2-AA (given a system of linear equations over F_2 in which each equation has a positive integral weight, decide whether there is an assignment to the variables that satisfies equations of total weight at least W/2 + k, where W is the total weight of all equations), Max-r-Lin2-AA (the same as MaxLin2-AA, but each equation has at most r variables, where r is a constant) and Max-r-Sat-AA (given a CNF formula F with m clauses in which each clause has at most r literals, decide whether there is a truth assignment satisfying at least sum_{i=1}^m (1 - 2^{-r_i}) + k clauses, where k is the parameter, r_i is the number of literals in clause i, and r is a constant). We also consider Max-r-CSP-AA, a natural generalization of both Max-r-Lin2-AA and Max-r-Sat-AA, order (or permutation) constraint satisfaction problems parameterized above the average value, and some other problems related to MaxSat. We discuss results, both polynomial kernels and parameterized algorithms, obtained for these problems mainly in the last few years, as well as some open questions.
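    To make the "above average" threshold in Max-r-Sat-AA concrete, here is a minimal brute-force sketch (illustrative only, not taken from the survey): it computes the average number of satisfied clauses, sum_{i=1}^m (1 - 2^{-r_i}), and checks whether some assignment beats that average by at least k.

```python
from itertools import product

def max_r_sat_aa(clauses, k):
    """Brute-force check for Max-r-Sat-AA: is there a truth assignment
    satisfying at least sum_i (1 - 2^{-r_i}) + k clauses?  A clause is a
    list of non-zero ints: literal v means variable v, -v its negation."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    # Average number of satisfied clauses over all assignments.
    average = sum(1 - 2 ** (-len(clause)) for clause in clauses)
    for bits in product([False, True], repeat=len(variables)):
        value = dict(zip(variables, bits))
        satisfied = sum(
            any(value[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        if satisfied >= average + k:
            return True
    return False

# Three 2-literal clauses: average = 3 * (1 - 1/4) = 2.25.
print(max_r_sat_aa([[1, 2], [-1, 3], [2, -3]], k=0.75))  # True: all 3 are satisfiable
```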

    A Constraint-directed Local Search Approach to Nurse Rostering Problems

    In this paper, we investigate the hybridization of constraint programming and local search techniques within a large neighbourhood search scheme for solving highly constrained nurse rostering problems. As identified by the research, a crucial part of the large neighbourhood search is the selection of the fragment (the neighbourhood, i.e. the set of variables) to be relaxed and re-optimized iteratively. The success of the large neighbourhood search depends on how well this neighbourhood matches the problematic part of the solution assignment and on the choice of the neighbourhood size. We investigate three strategies for choosing fragments of different sizes within the large neighbourhood search scheme. The first two strategies are tailored to the properties of the problem. The third strategy is more general, using the cost of soft-constraint violations and their propagation as the indicator for choosing the variables added to the fragment. The three strategies are analyzed and compared on a benchmark nurse rostering problem. Promising results demonstrate the potential for future work on this hybrid approach.
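    A rough sketch of the large neighbourhood search loop described above, with a violation-cost-guided selection in the spirit of the third strategy; all names and hooks are illustrative assumptions, not the paper's implementation.

```python
def large_neighbourhood_search(solution, cost, select_fragment, reoptimize,
                               fragment_size, iterations=100):
    """Generic LNS skeleton: repeatedly relax a fragment of variables and
    re-optimize it, keeping the candidate when the cost does not worsen.
    cost / select_fragment / reoptimize are problem-specific hooks."""
    best, best_cost = dict(solution), cost(solution)
    for _ in range(iterations):
        fragment = select_fragment(best, fragment_size)   # variables to relax
        candidate = reoptimize(best, fragment)            # e.g. a CP solver call
        if cost(candidate) <= best_cost:
            best, best_cost = dict(candidate), cost(candidate)
    return best

def violation_guided_fragment(solution, size, violation_cost):
    """Selection in the spirit of the third strategy: prefer the variables
    carrying the highest soft-constraint violation cost."""
    return sorted(solution, key=lambda v: -violation_cost[v])[:size]

# Toy roster: shifts encoded as ints, cost = number of cells differing from a target.
target = {f"nurse{i}": i % 3 for i in range(6)}
start = {v: 0 for v in target}
toy_cost = lambda s: sum(s[v] != target[v] for v in s)
toy_reopt = lambda s, frag: {**s, **{v: target[v] for v in frag}}
toy_select = lambda s, size: violation_guided_fragment(
    s, size, {v: float(s[v] != target[v]) for v in s})
best = large_neighbourhood_search(start, toy_cost, toy_select, toy_reopt, fragment_size=2)
print(toy_cost(best))  # 0
```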

    Mapping constrained optimization problems to quantum annealing with application to fault diagnosis

    Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping Boolean constraint satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured: hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints and then use a minor-embedding algorithm to generate a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of D-Wave's QA hardware using the local mapping technique is significantly better than with global embedding. We validate the approach by applying D-Wave's hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, at an annealing rate that is 25 times faster than a leading SAT-based sampling method. Further, we apply the decomposition algorithms to find min-cardinality faults for circuits that are up to 5 times larger than can be solved directly on current hardware. Comment: 22 pages, 4 figures.
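    A highly simplified sketch of the locally-structured mapping idea (not D-Wave's actual toolchain): each constraint contributes its own small Ising model over private qubits, and the qubit copies of a shared logical variable are chained with ferromagnetic couplings so that low-energy states keep them consistent.

```python
from collections import defaultdict

def chain_constraints(constraint_models, ferro_strength=2.0):
    """Locally-structured embedding sketch: each constraint supplies a small
    Ising model (h, J) over its own private qubits, plus a map from logical
    variables to the qubits that represent them.  Copies of the same logical
    variable appearing in different constraints are chained together."""
    h, J = defaultdict(float), defaultdict(float)
    copies = defaultdict(list)                  # logical variable -> its qubit copies
    for local_h, local_J, var_to_qubit in constraint_models:
        for qubit, bias in local_h.items():
            h[qubit] += bias
        for pair, coupling in local_J.items():
            J[pair] += coupling
        for var, qubit in var_to_qubit.items():
            copies[var].append(qubit)
    for qubits in copies.values():              # chain the copies together
        for a, b in zip(qubits, qubits[1:]):
            J[(a, b)] += -ferro_strength        # negative J favours aligned spins
    return dict(h), dict(J)

# Two toy single-qubit constraints that share the logical variable "x".
h, J = chain_constraints([({0: 0.5}, {}, {"x": 0}),
                          ({1: -0.5}, {}, {"x": 1})])
print(h, J)   # biases on qubits 0 and 1, plus a ferromagnetic chain coupling (0, 1)
```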

    Particle Swarm Optimization with non-smooth penalty reformulation for a complex portfolio selection problem

    In the classical model for portfolio selection the risk is measured by the variance of returns. It is well known that, if returns are not elliptically distributed, this may cause inaccurate investment decisions. To address this issue, several alternative measures of risk have been proposed. In this contribution we focus on a class of measures that uses information contained in both the lower and the upper tail of the distribution of returns. We consider a nonlinear mixed-integer portfolio selection model which takes into account several constraints used in fund management practice. The latter problem is NP-hard in general, and exact algorithms for its minimization that are both effective and efficient are still sought at present. Thus, to approximately solve this model we apply the Particle Swarm Optimization (PSO) heuristic. Since PSO was originally conceived for unconstrained global optimization problems, we apply it to a novel reformulation of our mixed-integer model in which a standard exact penalty function is introduced. Keywords: portfolio selection, coherent risk measure, fund management constraints, NP-hard mathematical programming problem, PSO, exact penalty method, SP100 index's assets.
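    A minimal continuous sketch of the PSO-plus-exact-penalty idea (the paper's model also has integer variables and richer fund-management constraints; all names and parameter values here are illustrative): the penalized objective adds (1/eps) times the total constraint violation, and plain PSO minimizes it.

```python
import random

def pso_penalty(objective, constraints, dim, eps=0.1, swarm=30, iters=200,
                bounds=(0.0, 1.0), w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization applied to an l1 exact-penalty
    reformulation: minimize objective(x) + (1/eps) * total violation,
    where each constraint g is considered satisfied when g(x) <= 0."""
    lo, hi = bounds
    penalized = lambda x: objective(x) + sum(max(0.0, g(x)) for g in constraints) / eps
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                      # personal bests
    gbest = min(pbest, key=penalized)[:]             # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if penalized(pos[i]) < penalized(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=penalized)[:]
    return gbest

# Toy portfolio: minimize a risk proxy sum(x_i^2) subject to the budget sum(x_i) = 1,
# written as two inequalities.  The optimum puts roughly equal weight on each asset.
weights = pso_penalty(lambda x: sum(v * v for v in x),
                      [lambda x: sum(x) - 1.0, lambda x: 1.0 - sum(x)], dim=4)
print([round(v, 2) for v in weights])
```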

    On the freezing of variables in random constraint satisfaction problems

    The set of solutions of random constraint satisfaction problems (the zero-energy ground states of mean-field diluted spin glasses) undergoes several structural phase transitions as the number of constraints is increased. This set first breaks down into a large number of well separated clusters. At the freezing transition, which is in general distinct from the clustering one, some variables (spins) take the same value in all solutions of a given cluster. In this paper we study the critical behavior around the freezing transition, which appears in the unfrozen phase as the divergence of the sizes of the rearrangements induced in response to the modification of a variable. The formalism is developed for generic constraint satisfaction problems and applied in particular to the random satisfiability of Boolean formulas and to the coloring of random graphs. The computation is first performed in random tree ensembles, for which we underline a connection with percolation models and with the reconstruction problem of information theory. The validity of these results for the original random ensembles is then discussed in the framework of the cavity method. Comment: 32 pages, 7 figures.
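    The notion of a frozen variable can be made concrete on toy instances by exhaustive enumeration, using one common toy definition of clusters (connectivity under single-variable flips). This is only an illustration of the concept; the paper works with the cavity method on large random ensembles, not enumeration.

```python
from itertools import product

def frozen_variables(clauses, n_vars):
    """Enumerate all satisfying assignments of a small CNF, group them into
    clusters connected by single-variable flips, and list, per cluster, the
    variables that take the same value in every solution of that cluster."""
    sat = lambda a: all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
    solutions = [a for a in product([False, True], repeat=n_vars) if sat(a)]
    clusters, remaining = [], set(solutions)
    while remaining:                                   # flood-fill by one-bit flips
        stack, cluster = [remaining.pop()], set()
        while stack:
            a = stack.pop()
            cluster.add(a)
            for i in range(n_vars):
                b = a[:i] + (not a[i],) + a[i + 1:]
                if b in remaining:
                    remaining.discard(b)
                    stack.append(b)
        clusters.append(cluster)
    return [
        {i + 1: next(iter(c))[i] for i in range(n_vars)
         if len({a[i] for a in c}) == 1}
        for c in clusters
    ]

# x1 is frozen to True in every solution of (x1) and (x1 or x2); x2 is not frozen.
print(frozen_variables([[1], [1, 2]], n_vars=2))   # [{1: True}]
```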

    Solving Hard Computational Problems Efficiently: Asymptotic Parametric Complexity 3-Coloring Algorithm

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, one needs to prove a hypothesis about a certain property of an object, a property that can hold only when the object adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution is found, since none can provide a proof that no admissible structure exists. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and runs in polynomial time (i.e., efficiently), though it is parametric. The only requirement is sufficient computational power, which is controlled by the parameter α ∈ ℕ. Nevertheless, it is proved here that the probability of requiring a value of α > k to obtain a solution for a random graph decreases exponentially, P(α > k) ≤ 2^{-(k+1)}, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs, and the experimental results are in accordance with the theoretical expectations. Comment: Working paper.
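    The "present or absent" dichotomy above boils down to either exhibiting a valid 3-coloring certificate or certifying that none exists. The sketch below only illustrates these two outcomes with a certificate checker and plain exhaustive search; it is not the paper's parametric polynomial-time algorithm.

```python
from itertools import product

def is_valid_3_coloring(edges, coloring):
    """Verify a 3-coloring certificate: every vertex gets one of 3 colors and
    no edge has both endpoints the same color."""
    return (all(c in (0, 1, 2) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))

def three_color_brute_force(edges):
    """Exhaustive search, used here only to illustrate the two outcomes:
    either a coloring is returned, or exhaustion shows none exists."""
    vertices = sorted({v for edge in edges for v in edge})
    for colors in product((0, 1, 2), repeat=len(vertices)):
        coloring = dict(zip(vertices, colors))
        if is_valid_3_coloring(edges, coloring):
            return coloring
    return None   # no admissible structure exists

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]   # K4 is not 3-colorable
print(three_color_brute_force(k4))                       # None
print(three_color_brute_force(k4[:-1]) is not None)      # drop one edge: True
```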