161 research outputs found
On the relations between SAT and CSP enumerative algorithms
Abstract: We show the equivalence between the so-called Davis–Putnam procedure (Davis et al., Comm. ACM 5 (1962) 394–397; Davis and Putnam, J. ACM 7 (1960) 201–215) and the Forward Checking of Haralick and Elliott (Artificial Intelligence 14 (1980) 263–313). Both apply the choose-and-propagate paradigm in two different formalisms, namely the propositional calculus and the constraint satisfaction problem (CSP) formalism. They happen to be strictly equivalent as soon as a compatible instantiation order is chosen. This equivalence is shown by considering the resolution of the clausal expression of a CSP by the Davis–Putnam procedure.
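The choose-and-propagate paradigm on the propositional side can be illustrated with a tiny Davis–Putnam-style procedure: unit propagation is the "propagate" step, branching on a literal is the "choose" step. This is a minimal sketch for illustration, not code from the cited papers; clauses are lists of non-zero integers, with a negative integer denoting a negated variable.

```python
def unit_propagate(clauses, assignment):
    """The 'propagate' step: repeatedly assign literals forced by unit clauses."""
    clauses = [list(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        remaining = []
        for clause in clauses:
            # drop literals falsified by the current assignment
            lits = [l for l in clause if -l not in assignment]
            if any(l in assignment for l in lits):
                continue                      # clause already satisfied
            if not lits:
                return None, assignment       # empty clause: conflict
            if len(lits) == 1:
                assignment = assignment | {lits[0]}   # forced literal
                changed = True
            else:
                remaining.append(lits)
        clauses = remaining
    return clauses, assignment

def dpll(clauses, assignment=frozenset()):
    """The 'choose' step: branch on a literal, then propagate; returns a
    satisfying set of literals, or None if unsatisfiable."""
    clauses, assignment = unit_propagate(clauses, assignment)
    if clauses is None:
        return None
    if not clauses:
        return assignment
    lit = clauses[0][0]                       # naive choice heuristic
    for choice in (lit, -lit):
        result = dpll(clauses, assignment | {choice})
        if result is not None:
            return result
    return None
```

Forward Checking performs the analogous pruning on CSP domains after each variable instantiation, which is exactly the correspondence the abstract establishes.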
From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More
We consider questions that arise from the intersection between the areas of
polynomial-time approximation algorithms, subexponential-time algorithms, and
fixed-parameter tractable algorithms. The questions, which have been asked
several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a
non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and
Minimum Dominating Set (DomSet) problems parameterized by the size of the
optimal solution. In particular, letting OPT be the optimum and N be
the size of the input, is there an algorithm that runs in
t(OPT) · poly(N) time and outputs a solution of size f(OPT),
for any functions t and f that are independent of N (for
Clique, we want f(OPT) = ω(1))?
In this paper, we show that both Clique and DomSet admit no non-trivial
FPT-approximation algorithm, i.e., there is no
o(OPT)-FPT-approximation algorithm for Clique and no
F(OPT)-FPT-approximation algorithm for DomSet, for any function F
(e.g., this holds even if F is the Ackermann function). In fact, our results
imply something even stronger: the best way to solve Clique and DomSet, even
approximately, is to essentially enumerate all possibilities. Our results hold
under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which
states that no 2^{o(n)}-time algorithm can distinguish between a satisfiable
3SAT formula and one which is not even (1 − ε)-satisfiable for some
constant ε > 0.
Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for
Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and
Maximum Induced Matching in bipartite graphs. Additionally, we rule out a
k^{o(1)}-FPT-approximation algorithm for Densest k-Subgraph, although this
ratio does not yet match the trivial O(k)-approximation algorithm. Comment: 43 pages. To appear in FOCS'17.
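For reference, the notion of FPT-approximation the abstract repeatedly appeals to can be stated as follows; this is a standard formulation, not text quoted from the paper:

```latex
\textbf{Definition (FPT-approximation, standard formulation).}
An algorithm is an $f(\mathrm{OPT})$-FPT-approximation for a maximization
problem if, on every input of size $N$ with optimum $\mathrm{OPT}$, it runs
in time $t(\mathrm{OPT}) \cdot \mathrm{poly}(N)$ and outputs a solution of
value at least $\mathrm{OPT}/f(\mathrm{OPT})$, where $t$ and $f$ are
computable functions independent of $N$. The ``non-trivial'' requirement for
Clique is $f(\mathrm{OPT}) = o(\mathrm{OPT})$, i.e.\ the output must have
size $\omega(1)$.
```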
10481 Abstracts Collection -- Computational Counting
From November 28 to December 3, 2010, the Dagstuhl Seminar 10481 "Computational Counting" was held in Schloss Dagstuhl -- Leibniz Center for Informatics.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Constraint-based sequence mining using constraint programming
The goal of constraint-based sequence mining is to find sequences of symbols
that are included in a large number of input sequences and that satisfy some
constraints specified by the user. Many constraints have been proposed in the
literature, but a general framework is still missing. We investigate the use of
constraint programming as general framework for this task. We first identify
four categories of constraints that are applicable to sequence mining. We then
propose two constraint programming formulations. The first formulation
introduces a new global constraint called exists-embedding. This formulation is
the most efficient but does not support one type of constraint. To support such
constraints, we develop a second formulation that is more general but incurs
more overhead. Both formulations can use the projected database technique used
in specialised algorithms. Experiments demonstrate the flexibility towards
constraint-based settings and compare the approach to existing methods. Comment: In Integration of AI and OR Techniques in Constraint Programming
(CPAIOR), 2015.
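The relation underlying a constraint like exists-embedding is whether a pattern can be embedded in (i.e., occurs as a subsequence of) an input sequence; a pattern's frequency is the number of database sequences that embed it. The following minimal sketch shows that core test; the function names are illustrative, not the paper's API:

```python
def embeds(pattern, sequence):
    """Return True if `pattern` occurs as a subsequence of `sequence`."""
    it = iter(sequence)
    # `symbol in it` advances the iterator until the symbol is found,
    # so successive matches respect the order of the sequence
    return all(symbol in it for symbol in pattern)

def support(pattern, database):
    """Number of input sequences in which the pattern is embedded."""
    return sum(embeds(pattern, seq) for seq in database)
```

A minimum-frequency constraint then simply requires `support(pattern, database)` to reach a user-chosen threshold; the CP formulations in the paper propagate such conditions during search rather than testing them post hoc.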
Optimizing personal computer configurations with heuristic-based search methods
Given the diversity of personal computer hardware and its limited compatibility, obtaining a (sub-)optimal configuration for a given usage, subject to a budget limit and other possible criteria, can be challenging. In this paper, we first formulate these common configuration problems as discrete optimization problems that can flexibly incorporate or modify users' requirements. We then propose two intelligent optimizers: a simple yet powerful beam search method and a min-conflict heuristic-based micro-genetic algorithm (MGA) to solve this real-life optimization problem. The heuristic-based MGA outperformed the beam search and a branch-and-bound method in most test cases. Furthermore, our work opens up directions for further investigation.
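The beam-search idea can be sketched as follows: choose one part per category so as to maximize a total score subject to a budget, keeping only the best few partial configurations at each step. The categories, parts, and scoring below are illustrative assumptions, not the paper's actual model:

```python
def beam_search(categories, budget, beam_width=3):
    """categories: list of lists of (name, price, score) options;
    returns the best (parts, cost, score) found, or None if nothing fits."""
    beam = [([], 0, 0)]                       # (chosen parts, cost, score)
    for options in categories:
        candidates = []
        for parts, cost, score in beam:
            for name, price, value in options:
                if cost + price <= budget:    # respect the budget constraint
                    candidates.append((parts + [name], cost + price, score + value))
        # keep only the best `beam_width` partial configurations
        candidates.sort(key=lambda c: c[2], reverse=True)
        beam = candidates[:beam_width]
        if not beam:
            return None                       # no affordable configuration
    return max(beam, key=lambda c: c[2])
```

Unlike branch-and-bound, the beam discards all but a fixed number of partial configurations per category, trading completeness for speed; the paper's MGA instead evolves complete configurations with a min-conflict heuristic.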
When the decomposition meets the constraint satisfaction problem
This paper explores the joint use of decomposition methods and parallel computing for solving constraint satisfaction problems, and introduces a framework called Parallel Decomposition for Constraint Satisfaction Problems (PD-CSP). The main idea is that the set of constraints is first clustered using a decomposition algorithm in which highly correlated constraints are grouped together. Next, parallel search over the variables is performed on the produced clusters in a way that is friendly to parallel computing. In particular, for the first step, we propose adaptations of two well-known clustering algorithms (k-means and DBSCAN). For the second step, we develop a GPU-based approach to efficiently explore the clusters. The results of an extensive experimental evaluation show that PD-CSP provides competitive results in terms of accuracy and runtime.
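The decomposition step can be illustrated with plain k-means over numeric feature vectors attached to constraints (e.g., the indices of the variables each constraint touches); the featurisation is an assumption of this sketch, not necessarily PD-CSP's:

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Group feature vectors (tuples of floats) into k clusters."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)           # initial centres: random points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared distance)
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centres[j])))
            clusters[i].append(p)
        # recompute each centre as the mean of its cluster
        centres = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centres[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters
```

Each resulting cluster of "highly correlated" constraints can then be handed to a separate parallel worker, which is the step PD-CSP performs on the GPU.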
Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data
Constraint Programming (CP) has proved an effective paradigm to model and
solve difficult combinatorial satisfaction and optimisation problems from
disparate domains. Many such problems arising from the commercial world are
permeated by data uncertainty. Existing CP approaches that accommodate
uncertainty are less suited to uncertainty arising due to incomplete and
erroneous data, because they do not build reliable models and solutions
guaranteed to address the user's genuine problem as she perceives it. Other
fields such as reliable computation offer combinations of models and associated
methods to handle these types of uncertain data, but lack an expressive
framework characterising the resolution methodology independently of the model.
We present a unifying framework that extends the CP formalism in both model
and solutions, to tackle ill-defined combinatorial problems with incomplete or
erroneous data. The certainty closure framework brings together modelling and
solving methodologies from different fields into the CP paradigm to provide
reliable and efficient approaches for uncertain constraint problems. We
demonstrate the applicability of the framework on a case study in network
diagnosis. We define resolution forms that give generic templates, and their
associated operational semantics, to derive practical solution methods for
reliable solutions. Comment: Revised version.
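The closure idea can be illustrated in miniature: when a data coefficient is only known to lie in a set of possible values, a reliable answer is the set of solutions that are feasible under at least one realisation of the uncertain data. This is a deliberate simplification of the framework, with all names and the enumeration strategy being this sketch's assumptions:

```python
from itertools import product

def certainty_closure(domains, uncertain_params, feasible):
    """domains: dict var -> iterable of values;
    uncertain_params: dict param -> iterable of possible values;
    feasible(assignment, params) -> bool.
    Returns all value tuples supported by some data realisation."""
    variables = list(domains)
    closure = set()
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        # keep the tuple if any realisation of the data makes it feasible
        if any(feasible(assignment, dict(zip(uncertain_params, ps)))
               for ps in product(*uncertain_params.values())):
            closure.add(values)
    return closure
```

Enumerating all realisations is exponential; the resolution forms mentioned in the abstract exist precisely to derive practical methods that avoid this brute force.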
Randomized approximation of the constraint satisfaction problem
Also published in Lecture Notes in Computer Science, Volume 1097, 1996, Pages 76-87. 5th Scandinavian Workshop on Algorithm Theory, SWAT 1996, Reykjavik, Iceland, 3-5 July 1996. https://doi.org/10.1007/3-540-61422-2_122