
    On the Role of Canonicity in Bottom-up Knowledge Compilation

    We consider the problem of bottom-up compilation of knowledge bases, which is usually predicated on the existence of a polytime function for combining compilations using Boolean operators (usually called an Apply function). While such a polytime Apply function is known to exist for certain languages (e.g., OBDDs) and not to exist for others (e.g., DNNF), its existence for some languages remains unknown. Among the latter is the recently introduced language of Sentential Decision Diagrams (SDDs), for which a polytime Apply function exists for unreduced SDDs, but remains unknown for reduced ones (i.e., canonical SDDs). We resolve this open question in this paper and consider some of its theoretical and practical implications. Some of the findings we report question the common wisdom on the relationship between bottom-up compilation, language canonicity, and the complexity of the Apply function.
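
    To illustrate the Apply-based pattern the abstract refers to, here is a minimal sketch. It does not implement SDDs (or any real compilation language); it uses explicit sets of satisfying assignments as a toy stand-in for the target language, purely to show how clause compilations are combined with a Boolean Apply operator. All function names are illustrative assumptions.

        from itertools import product

        # Toy "compiled" form: the set of satisfying assignments over n variables.
        # Real targets (OBDDs, SDDs) are far more compact; this only shows the pattern.

        def compile_clause(clause, n):
            """Compile one clause, e.g. (1, -3) meaning x1 OR NOT x3, into the toy language."""
            return {a for a in product((0, 1), repeat=n)
                    if any(a[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in clause)}

        def apply_op(repr_a, repr_b, op):
            """Boolean Apply: combine two compiled representations under a connective."""
            if op == "and":
                return repr_a & repr_b
            if op == "or":
                return repr_a | repr_b
            raise ValueError(op)

        def bottom_up_compile(cnf, n):
            """Compile each clause, then fold the pieces together with Apply (conjunction)."""
            compiled = compile_clause(cnf[0], n)
            for clause in cnf[1:]:
                compiled = apply_op(compiled, compile_clause(clause, n), "and")
            return compiled

        # (x1 OR x2) AND (NOT x1 OR x3)
        models = bottom_up_compile([(1, 2), (-1, 3)], n=3)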

    On Representation and Genetic Operators in Evolutionary Algorithms

    The application of evolutionary algorithms (EAs) requires, as a basic design decision, the choice of a suitable representation of the variable space and of appropriate genetic operators. In practice, mainly problem-specific representations with specific genetic operators and miscellaneous extensions can be observed. In this context it is striking that hardly any formal requirements on the genetic operators are stated. In this article we first formalize the representation problem and then propose a package of requirements to guide the design of genetic operators. By defining distance measures on the genotype and phenotype spaces, it becomes possible to integrate problem-specific knowledge into the genetic operators. As an example, we show how this package of requirements can be used to design a genetic programming (GP) system for finding Boolean functions.
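
    As a hedged illustration of the kind of requirement the article has in mind (not the authors' formalization), the sketch below defines a Hamming distance on a bit-string genotype space and a mutation operator constrained to take small steps under that metric; max_step and the function names are assumptions made for this example only.

        import random

        def genotype_distance(a, b):
            """Hamming distance on the genotype space of fixed-length bit strings."""
            return sum(x != y for x, y in zip(a, b))

        def mutate(genotype, max_step=2, rng=random):
            """Mutation constrained by the genotype metric: the offspring lies at most
            max_step away from its parent, so variation stays local and gradual."""
            child = list(genotype)
            for pos in rng.sample(range(len(child)), k=rng.randint(1, max_step)):
                child[pos] ^= 1                  # flip at most max_step distinct bits
            return tuple(child)

        # The requirement being illustrated: small genotype steps should, on average,
        # correspond to small phenotype steps, which a problem-specific phenotype
        # distance lets us verify for a given encoding.
        parent = (0, 1, 1, 0, 1, 0, 0, 1)
        child = mutate(parent)
        assert genotype_distance(parent, child) <= 2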

    On Lower Bounds for Parity Branching Programs

    This thesis is concerned with the complexity of parity branching programs. Superpolynomial lower bounds are proved for several variants, namely for well-structured graph-driven parity branching programs, general graph-driven parity branching programs, and sums of graph-driven parity branching programs.

    Separating Incremental and Non-Incremental Bottom-Up Compilation

    The aim of a compiler is, given a function represented in some language, to generate an equivalent representation in a target language L. In bottom-up (BU) compilation of functions given as CNF formulas, constructing the new representation requires compiling several subformulas in L. The compiler starts by compiling the clauses in L and iteratively constructs representations for new subformulas using an "Apply" operator that performs conjunction in L, until all clauses are combined into one representation. In principle, BU compilation can generate representations for any subformulas and conjoin them in any way. But an attractive strategy from a practical point of view is to augment one main representation - which we call the core - by conjoining the clauses to it one at a time. We refer to this strategy as incremental BU compilation. We prove that, for the known relevant languages L for BU compilation, there is a class of CNF formulas that admit BU compilations to L generating only polynomial-size intermediate representations, while their incremental BU compilations all generate an exponential-size core.
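
    To make the distinction concrete, here is a minimal sketch, not tied to any particular language L and not the paper's construction, contrasting the incremental strategy with one non-incremental alternative; conjoin stands in for the Apply operator.

        from functools import reduce

        def incremental_bu(compiled_clauses, conjoin):
            """Incremental BU compilation: grow a single core, one clause at a time."""
            return reduce(conjoin, compiled_clauses)

        def balanced_bu(compiled_clauses, conjoin):
            """A non-incremental alternative: conjoin the pieces pairwise, tree-fashion."""
            layer = list(compiled_clauses)
            while len(layer) > 1:
                paired = [conjoin(layer[i], layer[i + 1])
                          for i in range(0, len(layer) - 1, 2)]
                if len(layer) % 2:          # carry an unpaired piece to the next layer
                    paired.append(layer[-1])
                layer = paired
            return layer[0]

    With the set-of-models toy representation from the earlier sketch, conjoin is simply set intersection. The paper's separation result says that for certain CNF classes every incremental order forces an exponential-size core, while some non-incremental combination order keeps all intermediate representations polynomial.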

    Truth Table Minimization of Computational Models

    Complexity theory offers a variety of concise computational models for computing Boolean functions - branching programs, circuits, decision trees and ordered binary decision diagrams, to name a few. A natural question that arises in this context with respect to any such model is this: given a function f: {0,1}^n → {0,1}, can we compute the optimal complexity (according to some desired measure) of computing f in the computational model in question? A critical issue regarding this question is how exactly f is given, since a more elaborate description of f allows the algorithm to use more computational resources. Among the possible representations are black-box access to f (as in computational learning theory), a representation of f in the desired computational model, or a representation of f in some other model. One might conjecture that if f is given as its complete truth table (i.e., a list of f's values on each of its 2^n possible inputs), the most elaborate description conceivable, then the optimal complexity in any computational model can be computed efficiently, since the algorithm computing it may run in poly(2^n) time. Several recent studies show that this is far from the truth - some models admit simple and efficient algorithms that yield the desired result, others are believed to be hard, and for some models the problem remains open. In this thesis we discuss the computational complexity of this question for several common types of computational models. We present several new hardness results and efficient algorithms, as well as new proofs and extensions of known theorems, for variants of decision trees, formulas and branching programs.
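
    As a concrete instance of the question for the decision-tree model, the sketch below computes the optimal decision-tree depth of f from its complete truth table by recursing over restrictions; its running time is polynomial in 2^n. This is an illustrative brute-force assumption of my own, not one of the thesis's algorithms.

        from functools import lru_cache
        from itertools import product

        def optimal_dt_depth(truth_table, n):
            """Minimum decision-tree depth of f, given f's full truth table
            (a dict mapping each n-bit tuple to 0 or 1)."""

            def subfunction_values(assignment):
                free = [i for i in range(n) if i not in assignment]
                vals = set()
                for bits in product((0, 1), repeat=len(free)):
                    point = [0] * n
                    for i, v in assignment.items():
                        point[i] = v
                    for i, b in zip(free, bits):
                        point[i] = b
                    vals.add(truth_table[tuple(point)])
                return vals, free

            @lru_cache(maxsize=None)
            def depth(assignment_items):
                assignment = dict(assignment_items)
                vals, free = subfunction_values(assignment)
                if len(vals) == 1:              # constant subfunction: no queries needed
                    return 0
                # query some variable; the worse of the two branches determines the depth
                return min(1 + max(depth(tuple(sorted({**assignment, i: b}.items())))
                                   for b in (0, 1))
                           for i in free)

            return depth(())

        # Example: parity on 3 bits needs depth 3; a function of a single bit needs depth 1.
        n = 3
        parity = {x: sum(x) % 2 for x in product((0, 1), repeat=n)}
        print(optimal_dt_depth(parity, n))      # -> 3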

    Artificial evolution with Binary Decision Diagrams: a study in evolvability in neutral spaces

    This thesis develops a new approach to evolving Binary Decision Diagrams and uses it to study evolvability issues. For reasons that are not yet fully understood, current approaches to artificial evolution fail to exhibit the evolvability so readily exhibited in nature. To apply evolvability to artificial evolution, the field must first understand and characterise it; this will then lead to systems which are much more capable than current ones. An experimental approach is taken. Carefully crafted, controlled experiments elucidate the mechanisms and properties that facilitate evolvability, focusing on the roles of, and interplay between, neutrality, modularity, gradualism, robustness and diversity. Evolvability is found to emerge under gradual evolution as a biased distribution of functionality within the genotype-phenotype map, which serves to direct phenotypic variation. Neutrality facilitates fitness-conserving exploration, alleviating the problem of local optima. Population diversity, in conjunction with neutrality, is shown to facilitate the evolution of evolvability. The search is robust, scalable, and insensitive to the absence of initial diversity. The thesis concludes that gradual evolution in a search space that is free of local optima by way of neutrality can be a viable alternative to problematic evolution on multi-modal landscapes.
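
    The role the abstract assigns to neutrality can be caricatured in a few lines: a (1+1)-style search that accepts fitness-neutral offspring can drift across plateaus instead of stalling. This is a generic bit-string illustration, not the thesis's BDD representation or its experiments; the fitness function and parameters are placeholders.

        import random

        def neutral_hill_climb(fitness, n_bits, steps, rng=random):
            """(1+1)-style search that keeps offspring at least as fit as the parent,
            so fitness-neutral mutations are accepted and the search can drift along
            neutral networks rather than getting stuck."""
            current = tuple(rng.randint(0, 1) for _ in range(n_bits))
            best_fit = fitness(current)
            for _ in range(steps):
                child = list(current)
                child[rng.randrange(n_bits)] ^= 1    # single-bit mutation: gradual change
                child = tuple(child)
                f = fitness(child)
                if f >= best_fit:                    # using ">" would reject neutral moves
                    current, best_fit = child, f
            return current, best_fit

        # Placeholder fitness with a large neutral plateau (illustrative only).
        def plateau_fitness(bits):
            ones = sum(bits)
            return ones if ones >= len(bits) // 2 else len(bits) // 2

        print(neutral_hill_climb(plateau_fitness, n_bits=20, steps=2000))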