    Some heuristics for generating tree-like FBDD types

    Reduced ordered binary decision diagrams (OBDDs) are the state-of-the-art representation scheme for Boolean functions in Boolean manipulation. Recent results have shown that it is possible to use the more general concept of type-consistent free binary decision diagrams (FBDDs) without giving up most of the useful computational properties of OBDDs, while possibly reducing the space needed for computations. The FBDD types used in this approach generalize OBDD variable orders, and they influence the space requirements just as variable orders do for OBDDs. In this paper we describe some heuristics for deriving tree-like FBDD types. The experimental results demonstrate clearly that the FBDD approach is not only of theoretical interest but also of practical use, even with the simply structured tree-based FBDD types produced by the developed heuristics. (orig.)
    Available from TIB Hannover: RR 1843(94-03) / FIZ - Fachinformationszentrum Karlsruhe / TIB - Technische Informationsbibliothek (SIGLE record, DE, German)
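    The abstract above rests on the distinction between ordered and free BDDs: in an OBDD every root-to-leaf path tests variables in one fixed global order, while an FBDD only requires that no variable is tested twice on any single path, with a shared FBDD type playing the role that a shared variable order plays for OBDDs. The following minimal Python sketch illustrates that distinction; the Node class and the example function are illustrative assumptions, not code from the paper.

        # Minimal decision-diagram sketch (illustrative, not the paper's code).
        class Node:
            """Internal node testing one Boolean variable; terminals are bools."""
            def __init__(self, var, low, high):
                self.var = var    # variable tested at this node
                self.low = low    # successor for var == 0
                self.high = high  # successor for var == 1

        def evaluate(node, assignment):
            # Follow a single root-to-leaf path under the given assignment.
            while isinstance(node, Node):
                node = node.high if assignment[node.var] else node.low
            return node

        # FBDD for f(x1, x2, x3) = x2 XOR x3: the x1 = 0 branch tests x2
        # before x3, the x1 = 1 branch tests x3 before x2. No single global
        # order fits both paths, so the diagram is free but not ordered;
        # still, each variable is read at most once on every path.
        left = Node('x2', Node('x3', False, True), Node('x3', True, False))
        right = Node('x3', Node('x2', False, True), Node('x2', True, False))
        fbdd = Node('x1', left, right)

        assert evaluate(fbdd, {'x1': 0, 'x2': 1, 'x3': 0}) is True
        assert evaluate(fbdd, {'x1': 1, 'x2': 1, 'x3': 1}) is False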

    Learning understandable classifier models.

    The topic of this dissertation is the automation of the process of extracting understandable patterns and rules from data. An unprecedented amount of data is available to anyone with a computer connected to the Internet. The disciplines of Data Mining and Machine Learning have emerged over the last two decades to face this challenge, leading to the development of many tools and methods. These tools often produce models that make very accurate predictions about previously unseen data. However, models built by the most accurate methods are usually hard for humans to understand or interpret. In consequence, they deliver only decisions, without any explanations, and so do not directly lead to the acquisition of new knowledge. This dissertation contributes to bridging the gap between accurate opaque models and less accurate but more transparent ones.

    The dissertation first defines the problem of learning from data. It surveys the state-of-the-art methods for supervised learning of both understandable and opaque models, as well as unsupervised methods that detect features present in the data. It describes popular methods of rule extraction that rewrite unintelligible models into an understandable form, and discusses the limitations of rule extraction. A novel definition of understandability, which ties computational complexity to learning, is provided to show that rule extraction is an NP-hard problem. Next, it discusses whether one can expect that even an accurate classifier has learned new knowledge. The survey ends with a presentation of two approaches to building understandable classifiers. On the one hand, understandable models must be able to accurately describe relations in the data. On the other hand, describing the output of a system in terms of its input often requires the introduction of intermediate concepts, called features. It is therefore crucial to develop methods that describe the data with understandable features and can use those features to express the relation that describes the data.

    Novel contributions of this thesis follow the survey. Two families of rule extraction algorithms are considered. First, a method that can work with any opaque classifier is introduced: artificial training patterns are generated in a mathematically sound way and used to train more accurate understandable models. Subsequently, two novel algorithms that require the opaque model to be a neural network are presented; they rely on access to the network's weights and biases to induce rules encoded as decision diagrams. Finally, the topic of feature extraction is considered. The impact of imposing non-negativity constraints on the weights of a neural network is studied: it is proved that a three-layer network with non-negative weights can shatter any given set of points, and experiments are conducted to assess the accuracy and interpretability of such networks. Then, a novel path-following algorithm that finds robust sparse encodings of data is presented.

    In summary, this dissertation contributes to improved understandability of classifiers in several tangible and original ways. It introduces three distinct aspects of achieving this goal: infusion of additional patterns from the underlying pattern distribution into rule learners, the derivation of decision diagrams from neural networks, and achieving sparse coding with neural networks with non-negative weights.
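    The first family of rule extraction methods above, in which an understandable model is trained on artificially generated patterns labelled by an opaque model, is commonly known as pedagogical rule extraction. The following is a minimal sketch of that idea, assuming a random forest as the opaque model, Gaussian perturbation of the training samples as a stand-in for the dissertation's mathematically sound pattern generator, and a shallow decision tree as the understandable surrogate; none of these concrete choices are taken from the dissertation itself.

        # Pedagogical rule extraction sketch (illustrative assumptions only).
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=300, n_features=4, random_state=0)

        # 1. Train an accurate but opaque model.
        opaque = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        # 2. Generate artificial patterns near the training data (here a simple
        #    Gaussian perturbation stands in for a principled generator).
        rng = np.random.default_rng(0)
        X_art = np.vstack([X + rng.normal(scale=0.1, size=X.shape) for _ in range(5)])

        # 3. Label the artificial patterns with the opaque model and fit an
        #    understandable surrogate on the enlarged set.
        y_art = opaque.predict(X_art)
        surrogate = DecisionTreeClassifier(max_depth=3).fit(X_art, y_art)

        # The surrogate's structure can be printed as human-readable rules.
        print(export_text(surrogate))

    The surrogate's fidelity to the opaque model can then be measured on held-out data, and the tree depth trades accuracy against readability.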