Model Counting of Query Expressions: Limitations of Propositional Methods
Query evaluation in tuple-independent probabilistic databases is the problem
of computing the probability of an answer to a query given independent
probabilities of the individual tuples in a database instance. There are two
main approaches to this problem: (1) in `grounded inference' one first obtains
the lineage for the query and database instance as a Boolean formula, then
performs weighted model counting on the lineage (i.e., computes the probability
of the lineage given probabilities of its independent Boolean variables); (2)
in methods known as `lifted inference' or `extensional query evaluation', one
exploits the high-level structure of the query as a first-order formula.
Although it is widely believed that lifted inference is strictly more powerful
than grounded inference on the lineage alone, no formal separation has
previously been shown for query evaluation. In this paper we show such a formal
separation for the first time.
We exhibit a class of queries for which model counting can be done in
polynomial time using extensional query evaluation, whereas the algorithms used
in state-of-the-art exact model counters on their lineages provably require
exponential time. Our lower bounds on the running times of these exact model
counters follow from new exponential size lower bounds on the kinds of d-DNNF
representations of the lineages that these model counters (either explicitly or
implicitly) produce. Though some of these queries have been studied before, no
non-trivial lower bounds on the sizes of these representations for these
queries were previously known.
Comment: To appear in the International Conference on Database Theory (ICDT) 201
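The grounded-inference route described above can be illustrated with a minimal brute-force weighted model counter over a hypothetical lineage formula; the variable names and probabilities here are invented for the example, and real model counters of course avoid this exponential enumeration:

```python
from itertools import product

def wmc(lineage, probs):
    """Brute-force weighted model count: sum, over all satisfying
    assignments, of the product of each variable's weight
    (p if assigned true, 1 - p if assigned false)."""
    vars_ = sorted(probs)
    total = 0.0
    for values in product([False, True], repeat=len(vars_)):
        assignment = dict(zip(vars_, values))
        if lineage(assignment):
            weight = 1.0
            for v in vars_:
                weight *= probs[v] if assignment[v] else 1.0 - probs[v]
            total += weight
    return total

# Hypothetical lineage of a small join query: (x1 AND y1) OR (x2 AND y2),
# where each Boolean variable stands for an independent tuple.
lineage = lambda a: (a["x1"] and a["y1"]) or (a["x2"] and a["y2"])
probs = {"x1": 0.5, "y1": 0.8, "x2": 0.4, "y2": 0.9}
p = wmc(lineage, probs)  # probability that the query answer holds
```

By independence the result agrees with 1 - (1 - 0.5*0.8)(1 - 0.4*0.9) = 0.616, confirming that weighted model counting on the lineage computes the query probability.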
Testing systems of identical components
We consider the problem of sequentially testing the components of a multi-component reliability system in order to determine the state of the system via costly tests. In particular, systems with identical components are considered. The notion of lexicographically large binary decision trees is introduced and a heuristic algorithm based on that notion is proposed. The performance of the heuristic algorithm is demonstrated by computational results for various classes of functions. In particular, in all 200 random cases where the underlying function is a threshold function, the proposed heuristic produces optimal solutions.
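The testing cost model can be sketched for the special case of a k-out-of-n threshold system with identical components; this is a hypothetical instance for illustration, not the paper's heuristic. Testing stops as soon as the system state is determined, i.e. after k successes or n - k + 1 failures:

```python
from functools import lru_cache

def expected_tests(n, k, p):
    """Expected number of unit-cost tests needed to resolve a
    k-out-of-n system whose identical components work independently
    with probability p, testing one component at a time until the
    system state (at least k working) is determined."""
    @lru_cache(maxsize=None)
    def e(successes, failures):
        if successes == k or failures == n - k + 1:
            return 0.0  # outcome is already decided
        return (1.0
                + p * e(successes + 1, failures)
                + (1 - p) * e(successes, failures + 1))
    return e(0, 0)

# 2-out-of-3 system, each component works with probability 0.5
cost = expected_tests(n=3, k=2, p=0.5)
```

For this instance the recursion gives an expected cost of 2.5 tests; with identical components the testing order is immaterial, which is what makes the threshold case a natural benchmark.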
Deontic logic as a study of conditions of rationality in norm-related activities
The program put forward in von Wright's last works defines deontic logic as ``a study of conditions which must be satisfied in rational norm-giving activity'' and thus introduces the perspective of logical pragmatics. In this paper a formal explication of von Wright's program is proposed within a set-theoretic framework and extended to a two-sets model that allows for the separate treatment of obligation-norms and permission-norms. Three translation functions connecting the language of deontic logic with the language of the extended set-theoretic approach are introduced and used to prove the correspondence between the deontic theorems, on one side, and the perfection properties of the norm-set and the ``counter-set'', on the other. In this way the possibility of reinterpreting standard deontic logic as a theory of the perfection properties that ought to be achieved in norm-giving activity is formally established. The extended set-theoretic approach is applied to the problem of the rationality of principles for the completion of normative systems. The paper concludes with a plea for the logical-pragmatics turn envisaged in the late phase of von Wright's work in deontic logic.
Decomposition of select expressions
A select operation that is part of an expression applying to a relational database is decomposed into one or more independent select operations for the purpose of optimising the relational expression. The select expression is treated as a logical expression. From the canonical form of this expression an optimal conjunctive form is obtained, which can be decomposed into separate select operations. These separate selects can then be moved to the most effective place within the relational expression. The method also eliminates redundancy in the original expression. A prototype was used in developing the optimisation method; from this prototype an implementation for use in an actual system has been derived.
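The core identity behind such a decomposition is that a conjunctive selection equals a cascade of independent selections, each of which can then be pushed to its most effective place in the expression. A minimal sketch, with an invented relation and predicates:

```python
def decompose_select(rows, predicates):
    """Apply a conjunctive selection as a cascade of independent
    selects: sigma_{p1 AND p2}(R) = sigma_p1(sigma_p2(R)).
    Each predicate in the cascade could be relocated independently
    within a larger relational expression."""
    for pred in predicates:
        rows = [r for r in rows if pred(r)]
    return rows

# Hypothetical relation and conjunctive condition: age > 30 AND dept = 'R&D'
employees = [
    {"name": "a", "age": 25, "dept": "R&D"},
    {"name": "b", "age": 40, "dept": "R&D"},
    {"name": "c", "age": 45, "dept": "HR"},
]
result = decompose_select(employees, [lambda r: r["age"] > 30,
                                      lambda r: r["dept"] == "R&D"])
```

Only the row with name "b" satisfies both conjuncts, and the same result is obtained for any ordering of the cascaded selects.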
Prime Forms in Possibilistic Logic
Possibilistic logic is a weighted logic used to represent uncertain and inconsistent knowledge. Its semantics is often defined by a possibility distribution, a function from a set of interpretations to a totally ordered scale. In this paper, we consider a new semantic characterization of knowledge bases in possibilistic logic (possibilistic knowledge bases) via a generalized notion of propositional prime implicant, which we call a prioritized prime implicant. We first consider several desirable properties that a prioritized prime implicant should satisfy in order to characterize possibilistic knowledge bases. Examples show that existing generalizations of the prime implicant to possibilistic logic do not satisfy all of these properties. We then provide a novel definition of the prioritized prime implicant as a set of weighted literals that may be inconsistent. We show that prioritized prime implicants satisfy all the desirable properties. Finally, we discuss the problem of computing the prioritized prime implicants of a possibilistic knowledge base.
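For reference, the classical propositional notion that the prioritized prime implicant generalizes can be computed by brute force: an implicant is a consistent set of literals whose truth forces the formula to be true, and it is prime when no proper subset is also an implicant. This sketch illustrates only the classical notion, not the paper's prioritized one:

```python
from itertools import combinations, product

def prime_implicants(f, variables):
    """Enumerate the prime implicants of a Boolean function by brute
    force over all consistent literal sets."""
    literals = [(v, True) for v in variables] + [(v, False) for v in variables]

    def entails(term):
        # The term forces f to be true under every completion
        # of the unassigned variables.
        assigned = dict(term)
        free = [v for v in variables if v not in assigned]
        for values in product([False, True], repeat=len(free)):
            a = dict(assigned)
            a.update(zip(free, values))
            if not f(a):
                return False
        return True

    implicants = []
    for size in range(1, len(variables) + 1):
        for term in combinations(literals, size):
            names = [v for v, _ in term]
            if len(set(names)) < size:
                continue  # skip inconsistent terms (x and NOT x)
            if entails(term):
                implicants.append(frozenset(term))
    # keep only minimal terms: the prime implicants
    return {t for t in implicants if not any(s < t for s in implicants)}

# f(x, y, z) = (x AND y) OR z has prime implicants {x, y} and {z}
f = lambda a: (a["x"] and a["y"]) or a["z"]
primes = prime_implicants(f, ["x", "y", "z"])
```

The paper's prioritized variant additionally attaches weights to the literals and relaxes the consistency requirement, which this classical sketch does not capture.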
Binary decision diagrams for fault tree analysis
This thesis develops a new approach to fault tree analysis, namely the Binary Decision Diagram (BDD) method. Conventional qualitative fault tree analysis techniques such as the "top-down" or "bottom-up" approaches are now so well developed that further refinement is unlikely to yield vast improvements in computational capability. The BDD method, by contrast, offers potential gains in speed and efficiency in determining the minimal cut sets. Further, the nature of the binary decision diagram makes it better suited to Boolean manipulation. The BDD method has been programmed and successfully applied to a number of benchmark fault trees.

The analysis capabilities of the technique have been extended so that all quantitative fault tree top event parameters that can be determined by conventional Kinetic Tree Theory can now be derived directly from the BDD. Parameters such as the top event probability, frequency of occurrence, and expected number of occurrences can be calculated exactly using this method, removing the need for the approximations previously required.

The BDD method thus has proven advantages in both accuracy and efficiency. Initiator/enabler event analysis and importance measures have been incorporated to extend the method into a full analysis procedure.
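The exact top event probability falls out of the BDD by Shannon decomposition at each node: P(node) = p_x * P(high) + (1 - p_x) * P(low), where p_x is the failure probability of the basic event labelling the node. A minimal sketch on an invented fault tree (node layout and event names are assumptions for the example):

```python
def bdd_probability(node, probs, cache=None):
    """Exact top-event probability from a BDD via Shannon
    decomposition; terminals are the integers 0 and 1, internal
    nodes are tuples (event, low_child, high_child)."""
    if cache is None:
        cache = {}
    if node in (0, 1):
        return float(node)
    if node in cache:
        return cache[node]
    var, low, high = node
    p = (probs[var] * bdd_probability(high, probs, cache)
         + (1 - probs[var]) * bdd_probability(low, probs, cache))
    cache[node] = p
    return p

# Hypothetical BDD for the fault tree TOP = a AND (b OR c),
# with variable order a < b < c.
c_node = ("c", 0, 1)
b_node = ("b", c_node, 1)
top = ("a", 0, b_node)
probs = {"a": 0.1, "b": 0.2, "c": 0.3}
p_top = bdd_probability(top, probs)
```

For these numbers the exact answer is 0.1 * (1 - 0.8 * 0.7) = 0.044; the cache makes the traversal linear in the number of distinct BDD nodes, which is the source of the efficiency claim.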