An incremental algorithm for generating all minimal models
The task of generating minimal models of a knowledge base is at the computational heart of diagnosis systems like truth maintenance systems, and of nonmonotonic systems like autoepistemic logic, default logic, and disjunctive logic programs. Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes of knowledge bases, Ψ₁, Ψ₂, …, with the following properties: first, Ψ₁ is the class of all Horn knowledge bases; second, if a knowledge base T is in Ψₖ, then T has at most k minimal models, and all of them may be found in time O(lk²), where l is the length of the knowledge base; third, for an arbitrary knowledge base T, we can find the minimum k such that T belongs to Ψₖ in time polynomial in the size of T; and, last, where K is the class of all knowledge bases, it is the case that ⋃_{i=1}^∞ Ψᵢ = K, that is, every knowledge base belongs to some class in the hierarchy. The algorithm is incremental, that is, it is capable of generating one model at a time.
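For the base class Ψ₁ the guarantee is easy to see concretely: a Horn knowledge base has a unique minimal model, obtainable by forward chaining. Below is a minimal Python sketch of that Horn case only (the clause encoding and the naive fixpoint loop are our own illustration, not the paper's algorithm):

```python
# Clauses are (body, head) pairs over atom names; head is None for a
# pure integrity constraint (a clause with no positive literal).

def minimal_model_horn(clauses):
    """Return the unique minimal model (a set of atoms), or None if
    some integrity constraint fires."""
    model = set()
    changed = True
    while changed:                  # naive fixpoint; fine for a sketch
        changed = False
        for body, head in clauses:
            if set(body) <= model:
                if head is None:
                    return None     # constraint violated: no model
                if head not in model:
                    model.add(head)
                    changed = True
    return model

# Example: the fact a, plus a -> b and (a and b) -> c.
print(minimal_model_horn([((), "a"), (("a",), "b"), (("a", "b"), "c")]))
# {'a', 'b', 'c'} (set print order may vary)
```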
Using Answer Set Programming for pattern mining
Serial pattern mining consists of extracting the frequent sequential patterns
from a single sequence of itemsets. This paper explores the ability of a
declarative language, such as Answer Set Programming (ASP), to solve this task
efficiently. We propose several ASP implementations of the frequent sequential
pattern mining task: a non-incremental and an incremental resolution. The
results show that the incremental resolution is more efficient than the
non-incremental one, but both ASP programs are less efficient than dedicated
algorithms. Nonetheless, this approach can be seen as a first step toward a
generic framework for sequential pattern mining with constraints.
Comment: Intelligence Artificielle Fondamentale (2014)
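To make the mining task concrete, here is a hedged Python sketch of a naive, non-ASP serial pattern miner over a sequence of single items rather than itemsets. The support definition used here (the number of starting positions from which the pattern embeds as a subsequence) is one of several found in the literature and may differ from the paper's:

```python
def embeds_from(seq, start, pattern):
    """True iff `pattern` embeds as a subsequence of seq[start:],
    with seq[start] matching the first pattern item."""
    if seq[start] != pattern[0]:
        return False
    i = 1
    for item in seq[start + 1:]:
        if i == len(pattern):
            break
        if item == pattern[i]:
            i += 1
    return i == len(pattern)

def support(seq, pattern):
    return sum(embeds_from(seq, s, pattern) for s in range(len(seq)))

def frequent_patterns(seq, minsup, maxlen=3):
    """Levelwise (Apriori-style) enumeration of frequent patterns."""
    alphabet = sorted(set(seq))
    frequent, level = [], [(a,) for a in alphabet]
    while level:
        level = [p for p in level if support(seq, p) >= minsup]
        frequent += level
        if level and len(level[0]) >= maxlen:
            break
        level = [p + (a,) for p in level for a in alphabet]
    return frequent

print(frequent_patterns("abcabcab", minsup=2))
```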
Efficient enumeration of solutions produced by closure operations
In this paper we address the problem of generating all elements obtained by
the saturation of an initial set by some operations. More precisely, we prove
that we can generate the closure of a boolean relation (a set of boolean
vectors) by polymorphisms with a polynomial delay. Therefore we can compute
with polynomial delay the closure of a family of sets by any set of "set
operations" (union, intersection, symmetric difference, subsets, supersets,
etc.). To do so, we study the Membership_F problem: for a set of operations F,
decide whether an element belongs to the closure by F of a family of elements.
In the boolean case, we prove that Membership_F is in P for any set of boolean
operations F. When the input vectors are over a domain larger than two
elements, we prove that the generic enumeration method fails, since
Membership_F is NP-hard for some F. We also study the
problem of generating minimal or maximal elements of closures and prove that
some of them are related to well known enumeration problems such as the
enumeration of the circuits of a matroid or the enumeration of maximal
independent sets of a hypergraph. This article improves on previous works of
the same authors.
Comment: 30 pages, 1 figure. Long version of the article arXiv:1509.05623 of
the same name, which appeared in STACS 2016. Final version for the DMTCS journal.
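The saturation being enumerated can be illustrated directly. The following Python sketch computes the naive closure of a family of boolean vectors under componentwise operations; it materialises the whole, possibly exponential, closure, unlike the polynomial-delay enumeration the paper develops:

```python
def closure(vectors, ops):
    """vectors: equal-length 0/1 tuples; ops: binary functions on
    {0, 1}, applied componentwise. Returns the saturated set."""
    closed = set(vectors)
    frontier = list(closed)
    while frontier:
        u = frontier.pop()
        for v in list(closed):
            for op in ops:
                w = tuple(op(a, b) for a, b in zip(u, v))
                if w not in closed:
                    closed.add(w)
                    frontier.append(w)
    return closed

AND = lambda a, b: a & b   # intersection on characteristic vectors
OR  = lambda a, b: a | b   # union
XOR = lambda a, b: a ^ b   # symmetric difference

family = {(1, 0, 1), (0, 1, 1)}
print(sorted(closure(family, [AND, OR, XOR])))
```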
Automatic Generation of Minimal Cut Sets
A cut set is a collection of component failure modes that could lead to a
system failure. Cut Set Analysis (CSA) is applied to critical systems to
identify and rank system vulnerabilities at design time. Model checking tools
have been used to automate the generation of minimal cut sets but are generally
based on checking reachability of system failure states. This paper describes a
new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT
Analyser that supports the generation of multiple counterexamples. The approach
enables a broader class of system failures to be analysed, by generalising from
failure state formulae to failure behaviours expressed in LTL. The traditional
approach to CSA using model checking requires the model or system failure to be
modified, usually by hand, to eliminate already-discovered cut sets, and the
model checker to be rerun, at each step. By contrast, the new approach works
incrementally and fully automatically, thereby removing the tedious and
error-prone manual process and resulting in significantly reduced computation
time. This in turn enables larger models to be checked. Two different
strategies for using BT Analyser for CSA are presented. There is generally no
single best strategy for model checking: their relative efficiency depends on
the model and property being analysed. Comparative results are given for the
A320 hydraulics case study in the Behavior Tree modelling language.
Comment: In Proceedings ESSS 2015, arXiv:1506.0325
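The iterate-and-block loop that the traditional approach performs by hand can be sketched abstractly. In the Python below, `fails` stands in for the model checker and is assumed to be a monotone predicate on sets of component failure modes; the names and the brute-force search are illustrative only, not BT Analyser's method:

```python
from itertools import combinations

def minimal_cut_sets(components, fails):
    cuts = []

    def next_candidate():
        # smallest-first search for a failing set avoiding known cuts
        for r in range(len(components) + 1):
            for s in map(frozenset, combinations(components, r)):
                if fails(s) and not any(c <= s for c in cuts):
                    return s
        return None

    while (s := next_candidate()) is not None:
        for x in list(s):           # greedy shrink to a minimal cut set
            if fails(s - {x}):
                s = s - {x}
        cuts.append(s)
    return cuts

# Toy system: fails if both pumps fail, or the single power supply does.
fails = lambda s: {"pump1", "pump2"} <= s or "power" in s
print(minimal_cut_sets(["pump1", "pump2", "power"], fails))
# [frozenset({'power'}), frozenset({'pump1', 'pump2'})]
```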
On the Enumeration of all Minimal Triangulations
We present an algorithm that enumerates all the minimal triangulations of a
graph in incremental polynomial time. Consequently, we get an algorithm for
enumerating all the proper tree decompositions, in incremental polynomial time,
where "proper" means that the tree decomposition cannot be improved by removing
or splitting a bag.
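As background, any elimination ordering of a graph's vertices induces a triangulation (a chordal supergraph), though not necessarily a minimal one; enumerating exactly the minimal triangulations is the hard part. A small Python sketch of the elimination-ordering construction:

```python
def triangulate(n, edges, order):
    """Return the fill-in edges for the given elimination order
    on a graph with vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    fill, eliminated = set(), set()
    for v in order:
        nbrs = adj[v] - eliminated
        for a in nbrs:              # make the neighbourhood a clique
            for b in nbrs:
                if a < b and b not in adj[a]:
                    adj[a].add(b); adj[b].add(a)
                    fill.add((a, b))
        eliminated.add(v)
    return fill

# 4-cycle 0-1-2-3: eliminating 0 first adds the chord (1, 3).
print(triangulate(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [0, 1, 2, 3]))
# {(1, 3)}
```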
Incremental evolution of cellular automata for random number generation
Cellular automata (CA) have been used in pseudorandom number generation for over a decade. Recent studies show that controllable CA (CCA) can generate better random sequences than conventional one-dimensional (1-d) CA and can compete with two-dimensional (2-d) CA. Yet the structural complexity of CCA is higher than that of 1-d programmable CA (PCA). It would be desirable for CCA to attain good randomness quality with the least structural complexity. In this paper, we evolve PCA/CCA to their lowest complexity level using genetic algorithms (GAs). Meanwhile, the randomness quality and output efficiency of the PCA/CCA are also evolved. The evolution process involves two algorithms: a multi-objective genetic algorithm (MOGA) and an algorithm for incremental evolution. A set of PCA/CCA is evolved and compared in randomness, complexity, and efficiency. The results show that, without any spacing, CCA can generate good random number sequences that pass DIEHARD. Moreover, to obtain the same randomness quality, the structural complexity of CCA need be no higher than that of 1-d CA. Furthermore, the methodology developed here could be used to evolve other CA, or serve as a yardstick for comparing different types of CA.
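As a reference point for the primitive being evolved, here is a hedged Python sketch of a fixed 1-d CA used as a pseudorandom bit generator (the classic rule 30, sampling the centre cell); the paper's PCA/CCA differ in having per-cell rules and control signals, and in being evolved rather than fixed:

```python
def ca_step(cells, rule=30):
    """One synchronous update of a 1-d binary CA with periodic
    boundaries; `rule` is the Wolfram rule number."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1
                  | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def random_bits(width=64, nbits=32, seed=1):
    cells = [(seed >> i) & 1 for i in range(width)]
    out = []
    for _ in range(nbits):
        cells = ca_step(cells)
        out.append(cells[width // 2])   # sample the centre cell
    return out

print("".join(map(str, random_bits())))
```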
Minimal generating sets of non-modular invariant rings of finite groups
It is a classical problem to compute a minimal set of invariant polynomials
generating the invariant ring of a finite group as an algebra. We present here
an algorithm for the computation of minimal generating sets in the non-modular
case. Apart from very few explicit computations of Groebner bases, the
algorithm only involves very basic operations, and is thus rather fast.
As a test bed for comparative benchmarks, we use transitive permutation
groups on 7 and 8 variables. In most examples, our algorithm implemented in
Singular works much faster than the one used in Magma, namely by factors
between 50 and 1000. We also compute some further examples on more than 8
variables, including a minimal generating set for the natural action of the
cyclic group of order 11 in characteristic 0 and of order 15 in characteristic
2.
We also apply our algorithm to the computation of irreducible secondary
invariants.
Comment: 14 pages. v3: Timings updated. One example added.
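For readers new to the setting, a tiny sanity check of the notion with sympy, not of the paper's algorithm: for S₂ acting on k[x, y] by swapping the variables (a non-modular case whenever char k ≠ 2), the invariant ring is minimally generated by x + y and xy.

```python
# Verify that the two claimed generators really are invariant under
# the swap action; sympy's simultaneous substitution does the swap.
import sympy as sp

x, y = sp.symbols("x y")
swap = {x: y, y: x}
for g in (x + y, x * y):
    assert sp.simplify(g.subs(swap, simultaneous=True) - g) == 0
print("x + y and x*y are invariant under the swap action")
```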
Chaining Test Cases for Reactive System Testing (extended version)
Testing of synchronous reactive systems is challenging because long input
sequences are often needed to drive them into a state at which a desired
feature can be tested. This is particularly problematic in on-target testing,
where a system is tested in its real-life application environment and the time
required for resetting is high. This paper presents an approach to discovering
a test case chain---a single software execution that covers a group of test
goals and minimises overall test execution time. Our technique targets the
scenario in which test goals for the requirements are given as safety
properties. We give conditions for the existence and minimality of a single
test case chain and minimise the number of test chains if a single test chain
is infeasible. We report experimental results with a prototype tool for C code
generated from Simulink models and compare it to state-of-the-art test suite
generators.
Comment: extended version of the paper published at ICTSS'1
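The chaining idea can be sketched on an explicit state machine: greedily extend a single execution so it visits every test goal instead of restarting from the initial state for each one. The Python below uses plain BFS and a greedy goal order, whereas the paper computes chains for safety-property goals on C code generated from Simulink models:

```python
from collections import deque

def bfs_path(succ, src, dst):
    """Shortest path from src to dst in an explicit transition graph."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u); u = prev[u]
            return path[::-1]
        for v in succ(u):
            if v not in prev:
                prev[v] = u; queue.append(v)
    return None

def test_chain(succ, init, goals):
    chain, state, todo = [init], init, set(goals)
    while todo:
        # greedily pick the goal reachable via the shortest extension
        best = min((bfs_path(succ, state, g) for g in todo),
                   key=lambda p: float("inf") if p is None else len(p))
        if best is None:
            break                       # remaining goals unreachable
        chain += best[1:]
        state = best[-1]
        todo.discard(state)
    return chain

# Toy 6-state machine; goals 2 and 5 get covered by one execution.
succ = lambda s: {0: [1], 1: [2, 3], 2: [4], 3: [5], 4: [3], 5: []}[s]
print(test_chain(succ, 0, [2, 5]))      # [0, 1, 2, 4, 3, 5]
```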
Incremental Recompilation of Knowledge
Approximating a general formula from above and below by Horn formulas (its
Horn envelope and Horn core, respectively) was proposed by Selman and Kautz
(1991, 1996) as a form of ``knowledge compilation,'' supporting rapid
approximate reasoning; on the negative side, this scheme is static in that it
supports no updates, and has certain complexity drawbacks pointed out by
Kavvadias, Papadimitriou and Sideri (1993). On the other hand, the many
frameworks and schemes proposed in the literature for theory update and
revision are plagued by serious complexity-theoretic impediments, even in the
Horn case, as was pointed out by Eiter and Gottlob (1992), and is further
demonstrated in the present paper. More fundamentally, these schemes are not
inductive, in that they may lose in a single update any positive properties of
the represented sets of formulas (small size, Horn structure, etc.). In this
paper we propose a new scheme, incremental recompilation, which combines Horn
approximation and model-based updates; this scheme is inductive and very
efficient, free of the problems facing its constituents. A set of formulas is
represented by an upper and lower Horn approximation. To update, we replace the
upper Horn formula by the Horn envelope of its minimum-change update, and
similarly the lower one by the Horn core of its update; the key fact which
enables this scheme is that Horn envelopes and cores are easy to compute when
the underlying formula is the result of a minimum-change update of a Horn
formula by a clause. We conjecture that efficient algorithms are possible for
more complex updates.
Comment: See http://www.jair.org/ for any accompanying files.
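One well-known structural fact behind Horn envelopes (the sketch below is ours, not the paper's update procedure): the models of the Horn envelope of a formula are exactly the closure of the formula's models under componentwise intersection. In Python:

```python
def horn_envelope_models(models):
    """models: set of equal-length 0/1 tuples. Returns its closure
    under bitwise AND, i.e. the model set of the Horn envelope."""
    closed = set(models)
    frontier = list(closed)
    while frontier:
        u = frontier.pop()
        for v in list(closed):
            w = tuple(a & b for a, b in zip(u, v))
            if w not in closed:
                closed.add(w)
                frontier.append(w)
    return closed

# Models of (x XOR y) are {10, 01}; the Horn envelope also admits 00,
# matching the Horn formula (not x) or (not y).
print(sorted(horn_envelope_models({(1, 0), (0, 1)})))
# [(0, 0), (0, 1), (1, 0)]
```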