Efficient enumeration of solutions produced by closure operations
In this paper we address the problem of generating all elements obtained by
the saturation of an initial set by some operations. More precisely, we prove
that we can generate the closure of a Boolean relation (a set of Boolean
vectors) by polymorphisms with polynomial delay. Therefore we can compute
with polynomial delay the closure of a family of sets by any set of "set
operations" (union, intersection, symmetric difference, subsets, supersets,
...). To do so, we study the Membership problem: for a set
of operations F, decide whether an element belongs to the closure
by F of a family of elements. In the Boolean case, we prove that
Membership is in P for any set of Boolean operations F.
When the input vectors are over a domain larger than two
elements, we prove that the generic enumeration method fails, since
Membership is NP-hard for some F. We also study the
problem of generating minimal or maximal elements of closures and prove that
some of them are related to well known enumeration problems such as the
enumeration of the circuits of a matroid or the enumeration of maximal
independent sets of a hypergraph. This article improves on previous works of
the same authors.
Comment: 30 pages, 1 figure. Long version of the article arXiv:1509.05623 of
the same name which appeared in STACS 2016. Final version for the DMTCS journal.
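The saturation the abstract describes can be illustrated with a naive fixpoint loop over a family of sets. This is only a sketch of the object being enumerated, not the polynomial-delay algorithm proved in the paper; the function name `closure` and the driver code are illustrative choices.

```python
from itertools import combinations

def closure(family, ops):
    """Saturate a family of frozensets under the given binary set operations.

    Naive fixpoint iteration: it shows what the closure is, but does NOT
    achieve the polynomial-delay guarantee established in the paper.
    """
    closed = set(family)
    changed = True
    while changed:
        changed = False
        # Snapshot the current closure so we can grow it while iterating.
        for a, b in combinations(list(closed), 2):
            for op in ops:
                c = frozenset(op(a, b))
                if c not in closed:
                    closed.add(c)
                    changed = True
    return closed

# Closing {1,2} and {2,3} under union and intersection adds {1,2,3} and {2}.
result = closure([frozenset({1, 2}), frozenset({2, 3})],
                 [frozenset.union, frozenset.intersection])
```

The point of the polynomial-delay result is precisely that one can avoid materializing `closed` in full before emitting solutions, since the closure can be exponentially larger than the input family.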
Constraint-based sequence mining using constraint programming
The goal of constraint-based sequence mining is to find sequences of symbols
that are included in a large number of input sequences and that satisfy some
constraints specified by the user. Many constraints have been proposed in the
literature, but a general framework is still missing. We investigate the use of
constraint programming as a general framework for this task. We first identify
four categories of constraints that are applicable to sequence mining. We then
propose two constraint programming formulations. The first formulation
introduces a new global constraint called exists-embedding. This formulation is
the most efficient but does not support one type of constraint. To support such
constraints, we develop a second formulation that is more general but incurs
more overhead. Both formulations can use the projected database technique used
in specialised algorithms. Experiments demonstrate the flexibility towards
constraint-based settings and compare the approach to existing methods.
Comment: In Integration of AI and OR Techniques in Constraint Programming
(CPAIOR), 201
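The relation underlying the exists-embedding constraint mentioned in the abstract is subsequence embedding: a pattern is supported by an input sequence if its symbols occur in order, not necessarily contiguously. A minimal sketch of that relation and the resulting support count (the function names `embeds` and `support` are illustrative, not from the paper, and this is only the check, not the CP propagator):

```python
def embeds(pattern, sequence):
    """True iff `pattern` occurs as a (non-contiguous) subsequence of `sequence`."""
    it = iter(sequence)
    # `sym in it` advances the iterator until sym is found, enforcing order.
    return all(sym in it for sym in pattern)

def support(pattern, database):
    """Number of database sequences in which the pattern is embedded."""
    return sum(embeds(pattern, seq) for seq in database)

db = ["abcb", "acbd", "bbca"]
# "ab" is embedded in "abcb" and "acbd" but not "bbca" (no 'b' after the 'a').
sup = support("ab", db)
```

A frequent-sequence miner then searches for all patterns whose support meets a user threshold; the paper's contribution is expressing that search, plus user constraints, inside a constraint-programming model.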
On CNF Conversion for SAT Enumeration
Modern SAT solvers are designed to handle problems expressed in Conjunctive
Normal Form (CNF) so that non-CNF problems must be CNF-ized upfront, typically
by using variants of either Tseitin or Plaisted&Greenbaum transformations. When
passing from solving to enumeration, however, the capability of producing
partial satisfying assignments that are as small as possible becomes crucial,
which raises the question of whether such CNF encodings are also effective for
enumeration. In this paper, we investigate both theoretically and empirically
the effectiveness of CNF conversions for SAT enumeration. On the negative side,
we show that: (i) Tseitin transformation prevents the solver from producing
short partial assignments, thus seriously affecting the effectiveness of
enumeration; (ii) Plaisted&Greenbaum transformation overcomes this problem only
in part. On the positive side, we show that combining Plaisted&Greenbaum
transformation with NNF preprocessing upfront -- which is typically not used in
solving -- can fully overcome the problem and can drastically reduce both the
number of partial assignments and the execution time.
Comment: 14 pages, 12 figures
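The standard Tseitin transformation introduces one auxiliary variable per connective, constrained to be *equivalent* to its subformula; this is why every model must fix all auxiliaries, blocking short partial assignments. A minimal sketch of the standard encoding (the tuple-based formula representation is an illustrative choice; Plaisted&Greenbaum would keep only one implication direction per auxiliary, depending on polarity):

```python
from itertools import count

def tseitin(formula):
    """CNF-ize a formula tree via the standard Tseitin transformation.

    Formulas: ('var', name) | ('not', f) | ('and', f, g) | ('or', f, g).
    Literals are ints, negation is minus. Returns (root_lit, clauses, var_of).
    """
    fresh = count(1)
    var_of = {}
    clauses = []

    def lit(f):
        kind = f[0]
        if kind == 'var':
            if f[1] not in var_of:
                var_of[f[1]] = next(fresh)
            return var_of[f[1]]
        if kind == 'not':
            return -lit(f[1])
        a, b = lit(f[1]), lit(f[2])
        x = next(fresh)                      # auxiliary for this connective
        if kind == 'and':                    # x <-> (a & b), both directions
            clauses.extend([[-x, a], [-x, b], [x, -a, -b]])
        else:                                # x <-> (a | b), both directions
            clauses.extend([[-x, a, b], [x, -a], [x, -b]])
        return x

    root = lit(formula)
    clauses.append([root])                   # assert the whole formula
    return root, clauses, var_of

# (a & b) | c : variables a=1, b=2, c=4; auxiliaries 3 (and) and 5 (or).
root, clauses, var_of = tseitin(('or', ('and', ('var', 'a'), ('var', 'b')),
                                 ('var', 'c')))
```

Because each auxiliary is bi-implied with its subformula, any total model determines every auxiliary, so the solver cannot leave them unassigned when reporting a partial satisfying assignment.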
On CNF Conversion for Disjoint SAT Enumeration
Modern SAT solvers are designed to handle problems expressed in Conjunctive Normal Form (CNF) so that non-CNF problems must be CNF-ized upfront, typically by using variants of either Tseitin or Plaisted and Greenbaum transformations. When passing from solving to enumeration, however, the capability of producing partial satisfying assignments that are as small as possible becomes crucial, which raises the question of whether such CNF encodings are also effective for enumeration.
In this paper, we investigate both theoretically and empirically the effectiveness of CNF conversions for disjoint SAT enumeration. On the negative side, we show that: (i) Tseitin transformation prevents the solver from producing short partial assignments, thus seriously affecting the effectiveness of enumeration; (ii) Plaisted and Greenbaum transformation overcomes this problem only in part. On the positive side, we show that combining Plaisted and Greenbaum transformation with NNF preprocessing upfront - which is typically not used in solving - can fully overcome the problem and can drastically reduce both the number of partial assignments and the execution time.
On The Power of Tree Projections: Structural Tractability of Enumerating CSP Solutions
The problem of deciding whether CSP instances admit solutions has been deeply
studied in the literature, and several structural tractability results have
been derived so far. However, constraint satisfaction comes in practice as a
computation problem where the focus is either on finding one solution, or on
enumerating all solutions, possibly projected to some given set of output
variables. The paper investigates the structural tractability of the problem of
enumerating (possibly projected) solutions, where tractability means here
computable with polynomial delay (WPD), since in general exponentially many
solutions may be computed. A general framework based on the notion of tree
projection of hypergraphs is considered, which generalizes all known
decomposition methods. Tractability results have been obtained both for classes
of structures where output variables are part of their specification, and for
classes of structures where computability WPD must be ensured for any possible
set of output variables. These results are shown to be tight, by exhibiting
dichotomies for classes of structures having bounded arity and where the tree
decomposition method is considered.
Low-complexity dominance-based Sphere Decoder for MIMO Systems
The sphere decoder (SD) is an attractive low-complexity alternative to
maximum likelihood (ML) detection in a variety of communication systems. It is
also employed in multiple-input multiple-output (MIMO) systems where the
computational complexity of the optimum detector grows exponentially with the
number of transmit antennas. We propose an enhanced version of the SD based on
an additional cost function derived from conditions on worst case interference,
that we call dominance conditions. The proposed detector, the king sphere
decoder (KSD), has a computational complexity no larger than that of the
sphere decoder, and numerical simulations show that the complexity reduction
is usually quite significant.
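The baseline the abstract builds on can be sketched as a depth-first tree search over candidate symbol vectors that prunes any branch whose accumulated distance already exceeds the best metric found. This is a minimal sketch of the basic sphere decoder only, assuming a real-valued system y = R s + n with upper-triangular R (e.g. after QR decomposition); it does not include the paper's dominance-based cost function, and all names are illustrative.

```python
def sphere_decode(R, y, constellation):
    """ML detection of s in y = R s + n, R upper-triangular, by sphere search.

    Symbols are fixed from the last coordinate upward (row i of R only
    involves s[i..n-1]); branches whose partial metric already exceeds the
    best complete metric are pruned.
    """
    n = len(y)
    best_metric = [float('inf')]
    best_s = [None]
    s = [0] * n

    def search(i, metric):
        if metric >= best_metric[0]:
            return                        # outside the current sphere: prune
        if i < 0:
            best_metric[0] = metric       # complete candidate, shrink radius
            best_s[0] = s.copy()
            return
        for c in constellation:
            s[i] = c
            r = y[i] - sum(R[i][j] * s[j] for j in range(i, n))
            search(i - 1, metric + r * r)

    search(n - 1, 0.0)
    return best_s[0], best_metric[0]

# 2x2 BPSK example: true s = [1, -1], noiseless y = R s.
R = [[1.0, 0.5], [0.0, 1.0]]
y = [0.5, -1.0]
s_hat, metric = sphere_decode(R, y, [-1, 1])
```

The dominance conditions of the KSD add a second pruning test on top of the radius check, which is why its complexity cannot exceed that of this baseline.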
Search Through Systematic Set Enumeration
In many problem domains, solutions take the form of unordered sets. We present the set-enumeration (SE) tree, a vehicle for representing sets and/or enumerating them in a best-first fashion. We demonstrate its usefulness as the basis for a unifying search-based framework for domains where minimal (maximal) elements of a power set are targeted, where minimal (maximal) partial instantiations of a set of variables are sought, or where a composite decision is not dependent on the order in which its primitive component-decisions are taken. Particular instantiations of SE-tree-based algorithms for some AI problem domains are used to demonstrate the general features of the approach. These algorithms are compared theoretically and empirically with current algorithms.
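The key property of the SE-tree is that each subset appears exactly once: a node may only be extended with items that come after its last element in a fixed total order. A minimal sketch of that enumeration (depth-first here for brevity, whereas the paper emphasizes best-first expansion; the function name `se_tree` is illustrative):

```python
def se_tree(items):
    """Yield every subset of `items` exactly once, in SE-tree (DFS) order.

    A node [x1 < ... < xk] has one child per item larger than xk, so the
    tree's nodes are in bijection with the subsets of `items`.
    """
    items = sorted(items)

    def expand(prefix, start):
        yield prefix
        for i in range(start, len(items)):
            # Extend only with later items, so no subset is generated twice.
            yield from expand(prefix + [items[i]], i + 1)

    yield from expand([], 0)

subsets = list(se_tree([1, 2, 3]))
```

Best-first variants order the open nodes by a domain-specific heuristic instead of recursing depth-first, which is what makes the SE-tree usable as a general search framework rather than just a subset generator.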