
    Testing systems of identical components

    We consider the problem of sequentially testing the components of a multi-component reliability system in order to determine the state of the system via costly tests. In particular, systems with identical components are considered. The notion of lexicographically large binary decision trees is introduced and a heuristic algorithm based on that notion is proposed. The performance of the heuristic algorithm is demonstrated by computational results for various classes of functions. In particular, in all 200 random cases where the underlying function is a threshold function, the proposed heuristic produces optimal solutions.
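
    A minimal sketch of the setting (not the paper's lexicographic heuristic; the function and names below are illustrative): for a k-out-of-n threshold system with identical components, sequential testing can stop as soon as the results observed so far determine the system state.

        # Illustrative only: sequentially test components of a k-out-of-n (threshold)
        # system, stopping once the system state is determined. The testing order is
        # fixed here; the paper's heuristic chooses it via lexicographically large
        # binary decision trees.
        def test_threshold_system(component_states, k):
            """component_states: hidden True/False states, revealed one costly test at a time."""
            n = len(component_states)
            working = failed = tests = 0
            for state in component_states:
                tests += 1
                if state:
                    working += 1
                else:
                    failed += 1
                if working >= k:          # enough working components: the system is up
                    return True, tests
                if failed > n - k:        # too many failures: the system cannot be up
                    return False, tests
            return working >= k, tests

        # A 2-out-of-3 system; the state is known after only two tests.
        print(test_threshold_system([True, True, False], k=2))   # (True, 2)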

    Model Counting of Query Expressions: Limitations of Propositional Methods

    Query evaluation in tuple-independent probabilistic databases is the problem of computing the probability of an answer to a query given independent probabilities of the individual tuples in a database instance. There are two main approaches to this problem: (1) in `grounded inference' one first obtains the lineage for the query and database instance as a Boolean formula, then performs weighted model counting on the lineage (i.e., computes the probability of the lineage given probabilities of its independent Boolean variables); (2) in methods known as `lifted inference' or `extensional query evaluation', one exploits the high-level structure of the query as a first-order formula. Although it is widely believed that lifted inference is strictly more powerful than grounded inference on the lineage alone, no formal separation has previously been shown for query evaluation. In this paper we show such a formal separation for the first time. We exhibit a class of queries for which model counting can be done in polynomial time using extensional query evaluation, whereas the algorithms used in state-of-the-art exact model counters on their lineages provably require exponential time. Our lower bounds on the running times of these exact model counters follow from new exponential size lower bounds on the kinds of d-DNNF representations of the lineages that these model counters (either explicitly or implicitly) produce. Though some of these queries have been studied before, no non-trivial lower bounds on the sizes of these representations for these queries were previously known. Comment: To appear in International Conference on Database Theory (ICDT) 201
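
    For orientation, here is a hedged brute-force sketch of step (1), grounded inference: computing the probability of a lineage formula from independent tuple probabilities (weighted model counting by enumeration). The toy lineage and names are made up; real exact model counters compile the lineage, e.g. into d-DNNF, rather than enumerating assignments.

        from itertools import product

        # Brute-force weighted model counting of a lineage formula; exponential in the
        # number of tuple variables, shown only to fix the semantics of step (1).
        def lineage_probability(variables, probs, lineage):
            total = 0.0
            for values in product([False, True], repeat=len(variables)):
                a = dict(zip(variables, values))
                weight = 1.0
                for v in variables:
                    weight *= probs[v] if a[v] else 1.0 - probs[v]
                if lineage(a):
                    total += weight
            return total

        # Toy lineage (x1 AND y1) OR (x2 AND y2), e.g. from a join of two probabilistic tables.
        variables = ["x1", "y1", "x2", "y2"]
        probs = {"x1": 0.5, "y1": 0.9, "x2": 0.4, "y2": 0.7}
        lineage = lambda a: (a["x1"] and a["y1"]) or (a["x2"] and a["y2"])
        print(lineage_probability(variables, probs, lineage))    # ~0.604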

    Second Moment Method on k-SAT: a General Framework

    We give a general framework implementing the Second Moment Method on k-SAT and discuss the conditions making the Second Moment Method work in this framework. As applications, we make the Second Moment Method work on Boolean solutions and implicants. We extend this to the distributional model of k-SAT. Comment: 35 pages
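
    The inequality underlying any second moment argument is standard and is stated here only for context; the paper's contribution lies in the general framework and the choice of the random variable X:

        \[
          \Pr[X > 0] \;\ge\; \frac{\big(\mathbb{E}[X]\big)^{2}}{\mathbb{E}\big[X^{2}\big]}
          \qquad \text{for any nonnegative random variable } X .
        \]

    Taking X to be a (possibly weighted) count of satisfying assignments, Boolean solutions, or implicants of a random k-SAT formula, the method certifies satisfiability with positive probability whenever E[X^2] = O((E[X])^2).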

    BOOM - A Heuristic Boolean Minimizer

    This paper presents an algorithm for two-level Boolean minimization (BOOM) based on a new implicant generation paradigm. In contrast to all previous minimization methods, where the implicants are generated bottom-up, the proposed method uses a top-down approach. Thus, instead of increasing the dimensionality of implicants by omitting literals from their terms, the dimension of a term is gradually decreased by adding new literals. The method is advantageous especially for functions with many input variables (up to thousands) and with only a few care terms defined, where other minimization tools are not applicable because of their long runtime. The method has been tested on several different kinds of problems and the results were compared with ESPRESSO.
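
    A hedged sketch of the top-down idea (this is not BOOM's exact heuristic; the greedy literal-selection rule and names below are illustrative): starting from the empty term, which covers the whole input space, literals are added until the term no longer intersects the off-set, at which point it has become an implicant.

        # Grow one implicant top-down: start from the empty term and add literals
        # (consistent with a chosen on-set minterm) until every off-set row is excluded.
        def grow_implicant(on_minterm, off_set):
            term = {}                               # variable index -> required bit
            remaining = list(off_set)
            while remaining:
                # greedily pick the literal that excludes the most remaining off-set rows
                best = max((i for i in range(len(on_minterm)) if i not in term),
                           key=lambda i: sum(row[i] != on_minterm[i] for row in remaining))
                term[best] = on_minterm[best]
                remaining = [row for row in remaining if row[best] == on_minterm[best]]
            return term

        # On-set minterm 110, off-set {010, 011, 100}: the term x0*x1 is found.
        print(grow_implicant((1, 1, 0), [(0, 1, 0), (0, 1, 1), (1, 0, 0)]))   # {0: 1, 1: 1}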

    A Network-Based Deterministic Model for Causal Complexity

    Despite the widespread use of techniques and tools for causal analysis, existing methodologies still fall short because they largely treat causal variables as independent elements, thereby failing to capture the significance of their interactions. The prospect of inferring causal relationships from weaker structural assumptions calls for further research in this area. This study explores the effects of variable interactions in the context of causal analysis and introduces new advancements to this area of research. Specifically, we introduce a new approach to causal complexity that aims to bring the solution set closer to being deterministic by taking into account the underlying patterns embedded within a dataset, in particular the interactions of causal variables. Our model follows the configurational approach and, as such, is able to account for the three major phenomena of conjunctural causation, equifinality, and causal asymmetry.

    Boolean Functions: Theory, Algorithms, and Applications

    This monograph provides the first comprehensive presentation of the theoretical, algorithmic and applied aspects of Boolean functions, i.e., {0,1}-valued functions of a finite number of {0,1}-valued variables. The book focuses on algebraic representations of Boolean functions, especially normal form representations. It presents the fundamental elements of the theory (Boolean equations and satisfiability problems, prime implicants and associated representations, dualization, etc.), an in-depth study of special classes of Boolean functions (quadratic, Horn, shellable, regular, threshold, read-once, etc.), and two fruitful generalizations of the concept of Boolean functions (partially defined and pseudo-Boolean functions). It features a rich bibliography of about one thousand items. Prominent among the disciplines in which Boolean methods play a significant role are propositional logic, combinatorics, graph and hypergraph theory, complexity theory, integer programming, combinatorial optimization, game theory, reliability theory, electrical and computer engineering, artificial intelligence, etc. The book contains applications of Boolean functions in all these areas.
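
    As a tiny, hedged illustration of one of the book's central notions (prime implicants), the following brute-force enumerator, with made-up names, lists the prime implicants of a Boolean function given as a Python predicate; it is exponential and meant only to make the definition concrete.

        from itertools import combinations, product

        # A term (partial assignment) is an implicant of f if f equals 1 on every
        # completion of it; it is prime if dropping any literal destroys that property.
        def is_implicant(f, n, term):
            free = [i for i in range(n) if i not in term]
            for bits in product([0, 1], repeat=len(free)):
                point = [0] * n
                for i, b in zip(free, bits):
                    point[i] = b
                for i, b in term.items():
                    point[i] = b
                if not f(tuple(point)):
                    return False
            return True

        def prime_implicants(f, n):
            primes = []
            for size in range(n + 1):
                for positions in combinations(range(n), size):
                    for values in product([0, 1], repeat=size):
                        term = dict(zip(positions, values))
                        if (is_implicant(f, n, term)
                                and not any(is_implicant(f, n, {k: v for k, v in term.items() if k != j})
                                            for j in term)):
                            primes.append(term)
            return primes

        # Majority of three bits: the prime implicants are x0x1, x0x2, x1x2.
        maj3 = lambda x: x[0] + x[1] + x[2] >= 2
        print(prime_implicants(maj3, 3))   # [{0: 1, 1: 1}, {0: 1, 2: 1}, {1: 1, 2: 1}]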

    Lower Bounds for DeMorgan Circuits of Bounded Negation Width

    We consider Boolean circuits over {or, and, neg} with negations applied only to input variables. To measure the "amount of negation" in such circuits, we introduce the concept of their "negation width". In particular, a circuit computing a monotone Boolean function f(x_1,...,x_n) has negation width w if no nonzero term produced (purely syntactically) by the circuit contains more than w distinct negated variables. Circuits of negation width w = 0 are equivalent to monotone Boolean circuits, while those of negation width w = n have no restrictions. Our motivation is that already circuits of moderate negation width w = n^{epsilon}, for an arbitrarily small constant epsilon > 0, can be even exponentially stronger than monotone circuits. We show that the size of any circuit of negation width w computing f is roughly at least the minimum size of a monotone circuit computing f divided by K = min{w^m, m^w}, where m is the maximum length of a prime implicant of f. We also show that the depth of any circuit of negation width w computing f is roughly at least the minimum depth of a monotone circuit computing f minus log K. Finally, we show that formulas of bounded negation width can be balanced to achieve a logarithmic (in their size) depth without increasing their negation width.
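
    Written out as formulas (paraphrasing the abstract; "roughly" hides correction factors specified in the paper, and the notation size^+/depth^+ for minimum monotone circuit size/depth and size_w/depth_w for the negation-width-w analogues is introduced here for brevity):

        \[
          \mathrm{size}_{w}(f) \;\gtrsim\; \frac{\mathrm{size}^{+}(f)}{K},
          \qquad
          \mathrm{depth}_{w}(f) \;\gtrsim\; \mathrm{depth}^{+}(f) - \log K,
          \qquad
          K = \min\{w^{m},\, m^{w}\},
        \]

    where m is the maximum length of a prime implicant of f.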