479 research outputs found

    A rewriting approach to binary decision diagrams

    Binary decision diagrams (BDDs) provide an established technique for propositional formula manipulation. In this paper, we present the basic BDD theory by means of standard rewriting techniques. Since a BDD is a DAG instead of a tree, we need a notion of shared rewriting and develop the appropriate theory. A rewriting system is presented by which canonical reduced ordered BDDs (ROBDDs) can be obtained, and for which uniqueness of the ROBDD representation is proved. Next, an alternative rewriting system is presented, suitable for actually computing ROBDDs from formulas. For this rewriting system a layerwise strategy is defined, and it is proved that when the classical apply-algorithm is replaced by layerwise rewriting, roughly the same complexity bound is reached as in the classical algorithm. Moreover, a layerwise innermost strategy is defined, and it is proved that the full classical algorithm for computing ROBDDs can be replaced by layerwise innermost rewriting without essentially affecting the complexity. Finally, a lazy strategy is proposed that sometimes performs much better than the traditional algorithm.
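
    As context for the abstract above, here is a minimal Python sketch (an illustration, not the paper's rewriting system) of the two classical ROBDD reduction rules: eliminating a node whose two branches coincide, and sharing structurally identical subgraphs through a unique table. All names are invented for this sketch.

        class ROBDD:
            """Terminals 0 and 1 use node ids 0 and 1; internal nodes get ids >= 2."""
            def __init__(self, num_vars):
                self.num_vars = num_vars
                self.unique = {}  # (var, low, high) -> node id; enforces sharing

            def mk(self, var, low, high):
                if low == high:                    # elimination rule: redundant test
                    return low
                key = (var, low, high)
                if key not in self.unique:         # sharing rule: merge isomorphic nodes
                    self.unique[key] = len(self.unique) + 2
                return self.unique[key]

            def build(self, f, var=0, env=()):
                """Shannon expansion of f (a map from bit tuples to 0/1) in variable order."""
                if var == self.num_vars:
                    return int(f(env))
                low = self.build(f, var + 1, env + (0,))
                high = self.build(f, var + 1, env + (1,))
                return self.mk(var, low, high)

        bdd = ROBDD(3)
        root = bdd.build(lambda e: (e[0] and e[1]) or e[2])
        # Canonicity: any formula for the same function yields the same root id.
        assert root == bdd.build(lambda e: e[2] or (e[1] and e[0]))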

    Positivity of the symmetric group characters is as hard as the polynomial time hierarchy

    We prove that deciding the vanishing of the character of the symmetric group is C_=P-complete. We use this hardness result to prove that the square of the character is not contained in #P, unless the polynomial hierarchy collapses to the second level. This rules out the existence of any (unsigned) combinatorial description for the square of the characters. As a byproduct of our proof we conclude that deciding positivity of the character is PP-complete under many-one reductions, and hence PH-hard under Turing reductions. Comment: 15 pages, 1 figure
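
    For context (a standard fact, not stated in the abstract): character values of the symmetric group do have a signed combinatorial description, the Murnaghan-Nakayama rule, and the result above rules out an unsigned analogue for the squared character unless PH collapses:

        % Murnaghan--Nakayama rule: a signed sum over border-strip tableaux T
        % of shape \lambda and content \mu, where ht(T) is the total height.
        \chi^{\lambda}(\mu) \;=\; \sum_{T} (-1)^{\operatorname{ht}(T)}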

    Truth Table Minimization of Computational Models

    Complexity theory offers a variety of concise computational models for computing Boolean functions - branching programs, circuits, decision trees and ordered binary decision diagrams, to name a few. A natural question that arises in this context with respect to any such model is: given a function f : {0,1}^n → {0,1}, can we compute the optimal complexity of computing f in the model in question (according to some desired measure)? A critical issue regarding this question is how exactly f is given, since a more elaborate description of f allows the algorithm to use more computational resources. Among the possible representations are black-box access to f (as in computational learning theory), a representation of f in the desired computational model, or a representation of f in some other model. One might conjecture that if f is given as its complete truth table (i.e., a list of f's values on each of its 2^n possible inputs), the most elaborate description conceivable, then the optimal complexity in any computational model can be computed efficiently, since the algorithm computing it may run in poly(2^n) time. Several recent studies show that this is far from the truth - some models have efficient and simple algorithms that yield the desired result, others are believed to be hard, and for some models the problem remains open. In this thesis we discuss the computational complexity of this question for several common types of computational models. We present several new hardness results and efficient algorithms, as well as new proofs and extensions of known theorems, for variants of decision trees, formulas and branching programs.
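
    To make the problem concrete, here is a small Python sketch (an illustration, not an algorithm from the thesis) that takes f as a complete truth table and brute-forces the minimum decision-tree depth; its roughly 3^n running time is polynomial in the 2^n-sized input.

        from functools import lru_cache
        from itertools import product

        def min_dt_depth(table, n):
            """table: dict mapping each n-bit tuple to 0 or 1."""
            @lru_cache(maxsize=None)
            def solve(assignment):  # entries are 0, 1, or None (unqueried)
                rows = [x for x in table
                        if all(a is None or x[i] == a for i, a in enumerate(assignment))]
                if len({table[x] for x in rows}) <= 1:
                    return 0        # f is constant on this subcube: a leaf suffices
                best = n
                for i, a in enumerate(assignment):
                    if a is None:   # try querying variable i next
                        d0 = solve(assignment[:i] + (0,) + assignment[i + 1:])
                        d1 = solve(assignment[:i] + (1,) + assignment[i + 1:])
                        best = min(best, 1 + max(d0, d1))
                return best
            return solve((None,) * n)

        n = 3
        f = {x: (x[0] & x[1]) | x[2] for x in product((0, 1), repeat=n)}
        print(min_dt_depth(f, n))   # prints 3: no depth-2 tree computes this f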

    Algebraic Geometry Arising from Discrete Models of Gene Regulatory Networks

    Discrete models of gene regulatory networks have gained popularity in computational systems biology over the last dozen years. However, not all discrete network models reflect the behaviors of real biological systems. In this work, we focus on two model selection methods and the algebraic geometry arising from them. The first model selection method involves biologically relevant functions. We begin by introducing k-canalizing functions, a generalization of nested canalizing functions. We extend results on nested canalizing functions and derive a unique extended monomial form for arbitrary Boolean functions. This gives us a stratification of the set of n-variable Boolean functions by canalizing depth. We obtain closed formulas for the number of n-variable Boolean functions with depth k, which simultaneously generalize the enumeration formulas for canalizing and nested canalizing functions. We characterize the set of k-canalizing functions as an algebraic variety in F_2^{2^n}. Next, we propose a method for the reverse engineering of networks of k-canalizing functions using techniques from computational algebra, based on our parametrization of k-canalizing functions. We also analyze binary decision diagrams of k-canalizing functions. The second model selection method involves computing minimal polynomial models using Gröbner bases. We build up the connection between staircases and Gröbner bases. We provide a necessary and sufficient condition for the ideal I(V) to have a unique reduced Gröbner basis, using the concept of a basic staircase. We also provide a sufficient combinatorial characterization of V ⊂ N_p^n that yields a unique reduced Gröbner basis.
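
    As a concrete companion to the abstract above, here is a Python sketch (the truth-table encoding and greedy layer peeling are illustrative assumptions, not the dissertation's algorithms) that computes the canalizing depth of a Boolean function:

        from itertools import product

        def canalizing_depth(f, n):
            """f: dict from n-bit tuples to 0/1. Counts how many canalizing
            layers peel off, i.e. the largest k such that f is k-canalizing."""
            fixed = {}                 # peeled variables: index -> non-canalizing input
            free = set(range(n))
            depth = 0
            while True:
                cube = [x for x in product((0, 1), repeat=n)
                        if all(x[j] == v for j, v in fixed.items())]
                if len({f[x] for x in cube}) == 1:
                    return depth       # restriction is constant: nothing left to peel
                peeled = False
                for i in list(free):
                    for a in (0, 1):
                        # if x_i = a forces the output, x_i canalizes this restriction
                        if len({f[x] for x in cube if x[i] == a}) == 1:
                            fixed[i] = 1 - a   # descend into the non-canalizing case
                            free.discard(i)
                            depth += 1
                            peeled = True
                            break
                    if peeled:
                        break
                if not peeled:
                    return depth       # no canalizing variable: depth reached

        # A nested canalizing example: AND-OR chains peel completely (depth n).
        g = {x: x[0] & (x[1] | x[2]) for x in product((0, 1), repeat=3)}
        print(canalizing_depth(g, 3))  # prints 3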

    Learning understandable classifier models.

    The topic of this dissertation is the automation of the process of extracting understandable patterns and rules from data. An unprecedented amount of data is available to anyone with a computer connected to the Internet. The disciplines of Data Mining and Machine Learning have emerged over the last two decades to face this challenge, which has led to the development of many tools and methods. These tools often produce models that make very accurate predictions about previously unseen data. However, models built by the most accurate methods are usually hard for humans to understand or interpret. In consequence, they deliver only decisions, without any explanations, and hence do not directly lead to the acquisition of new knowledge. This dissertation contributes to bridging the gap between accurate opaque models and those that are less accurate but more transparent to humans. The dissertation first defines the problem of learning from data. It surveys the state-of-the-art methods for supervised learning of both understandable and opaque models from data, as well as unsupervised methods that detect features present in the data. It describes popular methods of rule extraction from unintelligible models, which rewrite them into an understandable form, and the limitations of rule extraction. A novel definition of understandability, which ties computational complexity to learning, is provided to show that rule extraction is an NP-hard problem. Next, it discusses whether one can expect that even an accurate classifier has learned new knowledge. The survey ends with a presentation of two approaches to building understandable classifiers. On the one hand, understandable models must be able to accurately describe relations in the data. On the other hand, a description of the output of a system in terms of its input often requires the introduction of intermediate concepts, called features. It is therefore crucial to develop methods that describe the data with understandable features and are able to use those features to present the relation that describes the data. Novel contributions of this thesis follow the survey. Two families of rule extraction algorithms are considered. First, a method that can work with any opaque classifier is introduced: artificial training patterns are generated in a mathematically sound way and used to train more accurate understandable models. Subsequently, two novel algorithms that require the opaque model to be a neural network are presented; they rely on access to the network's weights and biases to induce rules encoded as decision diagrams. Finally, the topic of feature extraction is considered. The impact of imposing non-negativity constraints on the weights of a neural network is studied: it is proved that a three-layer network with non-negative weights can shatter any given set of points, and experiments are conducted to assess the accuracy and interpretability of such networks. Then, a novel path-following algorithm that finds robust sparse encodings of data is presented. In summary, this dissertation contributes to improved understandability of classifiers in several tangible and original ways. It introduces three distinct aspects of achieving this goal: infusion of additional patterns from the underlying pattern distribution into rule learners, the derivation of decision diagrams from neural networks, and sparse coding with neural networks with non-negative weights.
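
    To illustrate one idea mentioned above, here is a small NumPy sketch (an assumed setup, not the dissertation's procedure or architecture) of training under non-negative weight constraints by projecting the weights onto [0, ∞) after every gradient step; biases are left unconstrained:

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0.0, 0.0, 0.0, 1.0])            # AND, a monotone target

        W1 = rng.random((2, 8)); b1 = np.zeros(8)     # initialized non-negative
        W2 = rng.random(8);      b2 = 0.0
        lr = 0.5

        for step in range(5000):
            h = np.tanh(X @ W1 + b1)                  # hidden layer, shape (4, 8)
            p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid outputs, shape (4,)
            g = (p - y) / len(y)                      # d(logistic loss)/d(logits)
            gh = np.outer(g, W2) * (1 - h ** 2)       # backprop through tanh
            W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
            W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
            W1 = np.clip(W1, 0.0, None)               # projection step:
            W2 = np.clip(W2, 0.0, None)               # weights stay non-negative

        print(np.round(p, 2))                         # approaches [0, 0, 0, 1]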

    An extensive English language bibliography on graph theory and its applications

    Bibliography on graph theory and its applications.

    Invariants in Low-Dimensional Topology and Knot Theory

    This meeting concentrated on topological invariants in low dimensional topology and knot theory. We include both three- and four-dimensional manifolds in our phrase “low dimensional topology”. The intent of the conference was to understand the reach of knot-theoretic invariants into four dimensions, including results in Khovanov homology, variants of Floer homology, and quandle cohomology, and to understand relationships among categorification, topological quantum field theories, and four-dimensional manifold invariants, in particular Seiberg-Witten invariants.

    Computer Science Logic 2018: CSL 2018, September 4-8, 2018, Birmingham, United Kingdom
