Flexible constrained sampling with guarantees for pattern mining
Pattern sampling has been proposed as a potential solution to the infamous
pattern explosion. Instead of enumerating all patterns that satisfy the
constraints, individual patterns are sampled proportional to a given quality
measure. Several sampling algorithms have been proposed, but each of them has
its limitations when it comes to 1) flexibility in terms of quality measures
and constraints that can be used, and/or 2) guarantees with respect to sampling
accuracy. We therefore present Flexics, the first flexible pattern sampler that
supports a broad class of quality measures and constraints, while providing
strong guarantees regarding sampling accuracy. To achieve this, we leverage the
perspective on pattern mining as a constraint satisfaction problem and build
upon the latest advances in sampling solutions in SAT as well as existing
pattern mining algorithms. Furthermore, the proposed algorithm is applicable to
a variety of pattern languages, which allows us to introduce and tackle the
novel task of sampling sets of patterns. We introduce and empirically evaluate
two variants of Flexics: 1) a generic variant that addresses the well-known
itemset sampling task and the novel pattern set sampling task as well as a wide
range of expressive constraints within these tasks, and 2) a specialized
variant that exploits existing frequent itemset techniques to achieve
substantial speed-ups. Experiments show that Flexics is both accurate and
efficient, making it a useful tool for pattern-based data exploration.
Comment: Accepted for publication in the Data Mining & Knowledge Discovery
journal (ECML/PKDD 2017 journal track).
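To make the sampling task concrete, here is a minimal sketch of drawing itemsets proportional to a quality measure. It works by brute-force enumeration, which is exactly what samplers like Flexics avoid; it only illustrates the target distribution. All names (`sample_pattern`, `frequency`) are illustrative, not from the paper.

```python
import random
from itertools import combinations

def sample_pattern(transactions, quality, n_samples=1):
    """Draw itemsets proportional to a quality measure by enumerating
    every pattern (exponential cost -- illustration only; Flexics
    achieves the same distribution without enumeration)."""
    items = sorted(set().union(*transactions))
    patterns = [frozenset(c) for r in range(1, len(items) + 1)
                for c in combinations(items, r)]
    weights = [quality(p, transactions) for p in patterns]
    return random.choices(patterns, weights=weights, k=n_samples)

def frequency(pattern, transactions):
    """Support of a pattern: number of transactions containing it."""
    return sum(pattern <= t for t in transactions)

data = [frozenset("abc"), frozenset("ab"), frozenset("ac"), frozenset("b")]
print(sample_pattern(data, frequency, n_samples=3))
```

Frequent patterns are drawn more often, so the sample reflects the quality measure rather than an arbitrary enumeration order.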
Rational Deployment of CSP Heuristics
Heuristics are crucial tools in decreasing search effort in varied fields of
AI. In order to be effective, a heuristic must be efficient to compute, as well
as provide useful information to the search algorithm. However, some well-known
heuristics which do well in reducing backtracking are so heavy that the gain of
deploying them in a search algorithm might be outweighed by their overhead.
We propose a rational metareasoning approach to decide when to deploy
heuristics, using CSP backtracking search as a case study. In particular, a
value of information approach is taken to adaptive deployment of solution-count
estimation heuristics for value ordering. Empirical results show that indeed
the proposed mechanism successfully balances the tradeoff between decreasing
backtracking and heuristic computational overhead, resulting in a significant
overall search time reduction.
Comment: 7 pages, 2 figures, to appear in IJCAI-2011, http://www.ijcai.org
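The deployment rule described above can be sketched as a simple value-of-information test: run the heavy heuristic only when the expected search time it saves exceeds the time it costs to compute. The names and the scalar cost model below are hypothetical simplifications; the paper's actual estimators are more involved.

```python
def should_deploy(cost_without, cost_with, heuristic_cost):
    """Rational metareasoning rule (simplified sketch): deploy the
    solution-count heuristic iff its value of information -- expected
    search time saved by better value ordering -- exceeds its overhead."""
    voi = cost_without - cost_with
    return voi > heuristic_cost

# Easy subproblem: little backtracking expected, so skip the heuristic.
print(should_deploy(cost_without=1.0, cost_with=0.8, heuristic_cost=0.5))
# Hard subproblem: good value ordering saves a lot, so deploy it.
print(should_deploy(cost_without=10.0, cost_with=4.0, heuristic_cost=0.5))
```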
Mapping constrained optimization problems to quantum annealing with application to fault diagnosis
Current quantum annealing (QA) hardware suffers from practical limitations
such as finite temperature, sparse connectivity, small qubit numbers, and
control error. We propose new algorithms for mapping boolean constraint
satisfaction problems (CSPs) onto QA hardware mitigating these limitations. In
particular we develop a new embedding algorithm for mapping a CSP onto a
hardware Ising model with a fixed sparse set of interactions, and propose two
new decomposition algorithms for solving problems too large to map directly
into hardware.
The mapping technique is locally structured: hardware-compatible Ising
models are generated for each problem constraint, and variables appearing in
different constraints are chained together using ferromagnetic couplings. In
contrast, global embedding techniques generate a hardware-independent Ising
model for all the constraints, and then use a minor-embedding algorithm to
produce a hardware-compatible Ising model. We give an example of a class of
CSPs for which the scaling performance of D-Wave's QA hardware using the local
mapping technique is significantly better than global embedding.
We validate the approach by applying D-Wave's hardware to circuit-based
fault diagnosis. For circuits that embed directly, we find that the hardware is
typically able to find all solutions from a min-fault diagnosis set of size N
using 1000N samples, using an annealing rate that is 25 times faster than a
leading SAT-based sampling method. Further, we apply decomposition algorithms
to find min-cardinality faults for circuits that are up to 5 times larger than
can be solved directly on current hardware.
Comment: 22 pages, 4 figures
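The ferromagnetic chaining step can be sketched as follows: per-constraint Ising models are merged, and the physical copies of each logical variable are tied together with negative (ferromagnetic) couplings so that low-energy states keep them aligned. This is an illustrative toy (intra-constraint couplings and chain-strength tuning are omitted), not D-Wave's actual embedding algorithm; the data layout is assumed.

```python
def chain_embeddings(constraint_models, chain_strength=2.0):
    """Merge per-constraint Ising fragments and chain copies of each
    logical variable. Each fragment maps physical qubit -> (logical
    variable, local field bias); couplings within a constraint are
    omitted for brevity. In the Ising energy
    E = sum_i h_i s_i + sum_ij J_ij s_i s_j, a negative J_ij favours
    aligned spins, so chained copies agree in low-energy states."""
    h, J, copies = {}, {}, {}
    for model in constraint_models:
        for qubit, (logical, bias) in model.items():
            h[qubit] = h.get(qubit, 0.0) + bias
            copies.setdefault(logical, []).append(qubit)
    for logical, qubits in copies.items():
        for a, b in zip(qubits, qubits[1:]):  # chain consecutive copies
            J[(a, b)] = -chain_strength
    return h, J

# Variable "x" appears in two constraints, so its two physical copies
# (qubits 0 and 2) get chained.
c1 = {0: ("x", 0.5), 1: ("y", -0.5)}
c2 = {2: ("x", 0.5), 3: ("z", 1.0)}
h, J = chain_embeddings([c1, c2])
print(h, J)
```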
An approximation trichotomy for Boolean #CSP
We give a trichotomy theorem for the complexity of approximately counting the
number of satisfying assignments of a Boolean CSP instance. Such problems are
parameterised by a constraint language specifying the relations that may be
used in constraints. If every relation in the constraint language is affine
then the number of satisfying assignments can be exactly counted in polynomial
time. Otherwise, if every relation in the constraint language is in the
co-clone IM_2 from Post's lattice, then the problem of counting satisfying
assignments is complete with respect to approximation-preserving reductions in
the complexity class #RH\Pi_1. This means that the problem of approximately
counting satisfying assignments of such a CSP instance is equivalent in
complexity to several other known counting problems, including the problem of
approximately counting the number of independent sets in a bipartite graph. For
every other fixed constraint language, the problem is complete for #P with
respect to approximation-preserving reductions, meaning that there is no fully
polynomial randomised approximation scheme for counting satisfying assignments
unless NP=RP.
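The three-way classification stated above can be summarised as a simple dispatch. The function name is illustrative, and deciding whether a constraint language is affine or lies in the co-clone IM_2 is a separate check not shown here.

```python
def approx_counting_complexity(language_is_affine, language_in_IM2):
    """Boolean #CSP approximation trichotomy (sketch): classify the
    complexity of approximately counting satisfying assignments,
    given precomputed properties of the constraint language."""
    if language_is_affine:
        # Affine relations: count exactly in polynomial time.
        return "exact counting in polynomial time"
    if language_in_IM2:
        # Co-clone IM_2: equivalent to counting independent sets
        # in bipartite graphs under AP-reductions.
        return "complete for #RH Pi_1 under AP-reductions"
    # Everything else: as hard as any #P problem to approximate.
    return "complete for #P under AP-reductions (no FPRAS unless NP=RP)"
```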