
    On efficient ordered binary decision diagram minimization heuristics based on two-level logic.

    by Chun Gu. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 69-71). Abstract also in Chinese.
    Chapter 1 --- Introduction --- p.3
    Chapter 2 --- Definitions --- p.7
    Chapter 3 --- Some Previous Work on OBDD --- p.13
    Chapter 3.1 --- The Work of Bryant --- p.13
    Chapter 3.2 --- Some Variations of the OBDD --- p.14
    Chapter 3.3 --- Previous Work on Variable Ordering of OBDD --- p.16
    Chapter 3.3.1 --- The FIH Heuristic --- p.16
    Chapter 3.3.2 --- The Dynamic Variable Ordering --- p.17
    Chapter 3.3.3 --- The Interleaving Method --- p.19
    Chapter 4 --- Two Level Logic Function and OBDD --- p.21
    Chapter 5 --- DSCF Algorithm --- p.25
    Chapter 6 --- Thin Boolean Function --- p.33
    Chapter 6.1 --- The Structure and Properties of Thin Boolean Functions --- p.33
    Chapter 6.1.1 --- The Construction of Thin OBDDs --- p.33
    Chapter 6.1.2 --- Properties of Thin Boolean Functions --- p.38
    Chapter 6.1.3 --- Thin Factored Functions --- p.49
    Chapter 6.2 --- The Revised DSCF Algorithm --- p.52
    Chapter 6.3 --- Experimental Results --- p.54
    Chapter 7 --- A Pattern Merging Algorithm --- p.59
    Chapter 7.1 --- Merging of Patterns --- p.60
    Chapter 7.2 --- The Algorithm --- p.62
    Chapter 7.3 --- Experiments and Conclusion --- p.65
    Chapter 8 --- Conclusions --- p.6

    Minimization of lines in reversible circuits

    Reversible computing has been shown, in theory, to be more efficient than conventional computing because of its property of virtually zero power dissipation. A major concern in reversible circuits is the number of circuit lines, or qubits, which are a limited resource. In this thesis we explore the line reduction problem using a decision diagram based synthesis approach and introduce a line reduction algorithm, Minimization of lines using Ordered Kronecker Functional Decision Diagrams (MOKFDD). The algorithm uses a new sub-circuit for the positive Davio node structure in addition to the existing node structures. We also present a shared node ordering for OKFDDs, which combine OBDDs and OFDDs. The experimental results show that the number of circuit lines and the quantum cost can be reduced with our proposed approach.
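    For readers unfamiliar with the diagram types, the following background (standard textbook definitions, not taken from this thesis) shows the node decompositions involved: OBDD nodes use the Shannon expansion, OFDD nodes use Davio expansions, and an OKFDD node may use any of the three on a per-variable basis. Writing f_0 = f restricted to x = 0 and f_1 = f restricted to x = 1:

```latex
% Standard decompositions available at an OKFDD node
% (f_0 = f|_{x=0}, f_1 = f|_{x=1})
\begin{align*}
  \text{Shannon:}        \quad & f = \bar{x}\, f_0 \lor x\, f_1 \\
  \text{positive Davio:} \quad & f = f_0 \oplus x\,(f_0 \oplus f_1) \\
  \text{negative Davio:} \quad & f = f_1 \oplus \bar{x}\,(f_0 \oplus f_1)
\end{align*}
```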

    BDD Minimization for Approximate Computing

    We present Approximate BDD Minimization (ABM) as a problem with applications in approximate computing. Given a BDD representation of a multi-output Boolean function, ABM asks whether there exists another function that has a smaller BDD representation yet stays within a threshold with respect to an error metric. We present operators to derive approximated functions, together with algorithms that compute the error metrics exactly, directly on the BDD representation. An experimental evaluation demonstrates the applicability of the proposed approaches.
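    As background, one standard error metric in approximate computing (not necessarily the exact set used in this paper) is the error rate: the fraction of inputs on which the approximation disagrees with the original function. On BDDs it reduces to a satisfying-assignment count of the XOR of the two functions:

```latex
% Error rate of an approximation \hat{f} of f over n input variables,
% computable on BDDs by SAT-counting the XOR of the two functions
e_{\mathrm{rate}}(f, \hat{f})
  \;=\; \frac{\bigl|\{\, x \in \mathbb{B}^n : f(x) \neq \hat{f}(x) \,\}\bigr|}{2^{n}}
  \;=\; \frac{\#\mathrm{SAT}\,(f \oplus \hat{f})}{2^{n}}
```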

    An n log n Algorithm for Online BDD Refinement

    Binary Decision Diagrams are in widespread use in verification systems for the canonical representation of Boolean functions. A BDD representing a function phi : B^nu -> N can easily be reduced to its canonical form in linear time. In this paper, we consider a natural online BDD refinement problem and show that it can be solved in O(n log n) time if n bounds the size of the BDD and the total size of the update operations. We argue that BDDs in an algebraic framework should be understood as minimal fixed points superimposed on maximal fixed points. We propose a technique of controlled growth of equivalence classes so that the minimal fixed point calculations can be carried out efficiently. Our algorithm is based on a new understanding of the interplay between the splitting and the growing of classes of nodes. We apply our algorithm to show that automata with exponentially large, but implicitly represented, alphabets can be minimized in time O(n log n), where n is the total number of BDD nodes representing the automaton.
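    For context, the O(n log n) bound is in the tradition of Hopcroft-style partition refinement, where the "process the smaller half" rule limits how often a state can change class. The sketch below is the classical algorithm for an explicit-alphabet DFA, offered only as an illustration; the paper's algorithm instead refines classes of BDD nodes and handles implicitly represented alphabets.

```python
from collections import defaultdict

def hopcroft_minimize(states, alphabet, delta, accepting):
    """Hopcroft-style O(n log n) partition refinement for a complete DFA.

    states:    iterable of states
    alphabet:  iterable of symbols (explicit here; the paper's setting
               represents the alphabet implicitly with BDDs)
    delta:     dict mapping (state, symbol) -> state
    accepting: set of accepting states
    Returns the final partition as a list of frozensets of states.
    """
    states = set(states)
    # Initial partition: accepting vs. non-accepting states.
    partition = {frozenset(accepting & states), frozenset(states - accepting)}
    partition.discard(frozenset())
    worklist = set(partition)

    # Inverse transitions: which states reach t on symbol a.
    inv = defaultdict(set)
    for (s, a), t in delta.items():
        inv[(t, a)].add(s)

    while worklist:
        splitter = worklist.pop()
        for a in alphabet:
            # States with an a-transition into the splitter block.
            x = set()
            for t in splitter:
                x |= inv[(t, a)]
            for block in list(partition):
                inter, diff = block & x, block - x
                if inter and diff:
                    # Split the block.
                    partition.remove(block)
                    partition |= {inter, diff}
                    if block in worklist:
                        worklist.remove(block)
                        worklist |= {inter, diff}
                    else:
                        # Only the smaller half needs further processing,
                        # which is what yields the n log n bound.
                        worklist.add(inter if len(inter) <= len(diff) else diff)
    return list(partition)
```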

    Algorithms for regression and classification

    Regression and classification are statistical techniques that may be used to extract rules and patterns from data sets. Analyzing the involved algorithms is interdisciplinary research that offers interesting problems to statisticians and computer scientists alike. The focus of this thesis is on robust regression and classification in genetic association studies. In the context of robust regression, new exact algorithms and results are presented for robust online scale estimation with the estimators Qn and Sn, and for robust linear regression in the plane with the least quartile difference (LQD) estimator. Additionally, an evolutionary computation algorithm is devised for robust regression with different estimators in higher dimensions, including the widely used least median of squares (LMS) and least trimmed squares (LTS). For classification in genetic association studies, this thesis describes a Genetic Programming algorithm that outperforms the standard approaches on the considered data sets. It is able to identify interesting genetic factors not previously found in a data set on sporadic breast cancer and to handle larger data sets than the compared methods. In addition, it is extensible to further application fields.
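    To make one of the objectives concrete: the least median of squares (LMS) estimator minimizes the median of squared residuals rather than their sum, which is what gives it robustness to outliers. The sketch below is a naive elemental-subset search over lines through pairs of data points, purely to illustrate the criterion; it is not one of the exact or evolutionary algorithms developed in the thesis.

```python
import itertools
import statistics

def lms_objective(slope, intercept, xs, ys):
    """Median of squared residuals for the line y = slope * x + intercept."""
    return statistics.median((y - (slope * x + intercept)) ** 2
                             for x, y in zip(xs, ys))

def lms_line_bruteforce(xs, ys):
    """Heuristic LMS fit: score every line through a pair of sample points.

    Illustrates the LMS criterion only; skips vertical candidate lines
    for simplicity.
    """
    best = None
    for (x1, y1), (x2, y2) in itertools.combinations(zip(xs, ys), 2):
        if x1 == x2:
            continue  # vertical line, not representable as slope/intercept
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        score = lms_objective(slope, intercept, xs, ys)
        if best is None or score < best[0]:
            best = (score, slope, intercept)
    return best  # (median squared residual, slope, intercept)
```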

    Logics for digital circuit verification: theory, algorithms, and applications


    Low Power Design Techniques for Digital Logic Circuits.

    With the rapid increase in the density and size of chips and systems, area and power dissipation become critical concerns in Very Large Scale Integrated (VLSI) circuit design. Low power design techniques are essential for today's VLSI industry. The history of symbolic logic and some typical techniques for finite state machine (FSM) logic synthesis are reviewed. State assignment is used to optimize area and power dissipation for FSMs. Two cost functions, targeting area and power, are presented, and a Genetic Algorithm (GA) is used to search for a good state assignment that minimizes them. The algorithm has been implemented in C. On the MCNC benchmarks, the program produces better results, in both area and power, than NOVA, which is integrated into SIS from UC Berkeley, and than other published methods. Flip-flops are the core components of FSMs, and reducing their power dissipation can save significant power in digital systems. Three new kinds of flip-flops are proposed: a differential CMOS single edge-triggered flip-flop with clock gating, a double edge-triggered flip-flop, and multiple-valued flip-flops employing multiple-valued clocks. All circuits are simulated using PSpice. Most researchers have focused on developing low-power techniques for AND/OR or NAND/NOR based circuits; low power techniques for AND/XOR based circuits are still at an early stage of development. To implement a complex function involving many inputs, a decomposition into smaller subfunctions is required so that the subfunctions fit into the primitive elements used in the implementation. A best-polarity XOR gate decomposition technique targeting low power has been developed, based on the Huffman algorithm. Compared to published results, the proposed method shows considerable improvement in power dissipation. Further, Boolean functions can be expressed in Fixed Polarity Reed-Muller (FPRM) forms. Based on polarity transformation, an algorithm has been developed and implemented in C that finds the best polarity for power and area optimization. Benchmark examples with up to 21 inputs, run on a personal computer, are given.
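    As background on the Reed-Muller representation (a minimal sketch of the all-positive-polarity special case, not the polarity-search algorithm implemented in the thesis): an FPRM form expresses a function as an XOR of product terms in which every variable appears with one fixed polarity, and for positive polarity the coefficients follow from the truth table by a simple butterfly transform.

```python
def pprm_coefficients(truth_table):
    """Positive-polarity Reed-Muller coefficients from a 0/1 truth table.

    truth_table[i] is f evaluated on the input whose bits are the binary
    digits of i. The returned list c satisfies
        f(x) = XOR over all i with c[i] = 1 of (product of x_j for set bits j of i),
    i.e. the all-positive-polarity special case of an FPRM form.
    Requires len(truth_table) to be a power of two.
    """
    n_entries = len(truth_table)
    coeffs = list(truth_table)
    step = 1
    while step < n_entries:
        for i in range(n_entries):
            if i & step:
                coeffs[i] ^= coeffs[i ^ step]  # XOR-accumulate along one variable
        step <<= 1
    return coeffs

# Example: f(x1, x0) = x1 OR x0 has truth table [0, 1, 1, 1] and the
# PPRM form x0 XOR x1 XOR x1*x0, i.e. coefficients [0, 1, 1, 1].
print(pprm_coefficients([0, 1, 1, 1]))  # -> [0, 1, 1, 1]
```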

    Learning understandable classifier models.

    The topic of this dissertation is the automation of the process of extracting understandable patterns and rules from data. An unprecedented amount of data is available to anyone with a computer connected to the Internet. The disciplines of Data Mining and Machine Learning have emerged over the last two decades to face this challenge, leading to the development of many tools and methods. These tools often produce models that make very accurate predictions about previously unseen data. However, models built by the most accurate methods are usually hard for humans to understand or interpret. In consequence, they deliver only decisions, without explanations, and so do not directly lead to the acquisition of new knowledge. This dissertation contributes to bridging the gap between accurate opaque models and less accurate but more transparent ones. The dissertation first defines the problem of learning from data. It surveys the state-of-the-art methods for supervised learning of both understandable and opaque models from data, as well as unsupervised methods that detect features present in the data. It describes popular methods of rule extraction that rewrite unintelligible models into an understandable form, and the limitations of rule extraction. A novel definition of understandability, which ties computational complexity to learning, is provided to show that rule extraction is an NP-hard problem. Next comes a discussion of whether one can expect that even an accurate classifier has learned new knowledge. The survey ends with a presentation of two approaches to building understandable classifiers. On the one hand, understandable models must be able to accurately describe relations in the data. On the other hand, describing the output of a system in terms of its input often requires the introduction of intermediate concepts, called features. It is therefore crucial to develop methods that describe the data with understandable features and that can use those features to express the relation that describes the data. Novel contributions of this thesis follow the survey. Two families of rule extraction algorithms are considered. First, a method that can work with any opaque classifier is introduced: artificial training patterns are generated in a mathematically sound way and used to train more accurate understandable models. Subsequently, two novel algorithms that require the opaque model to be a Neural Network are presented; they rely on access to the network's weights and biases to induce rules encoded as Decision Diagrams. Finally, the topic of feature extraction is considered. The impact of imposing non-negativity constraints on the weights of a neural network is studied: it is proved that a three-layer network with non-negative weights can shatter any given set of points, and experiments are conducted to assess the accuracy and interpretability of such networks. Then, a novel path-following algorithm that finds robust sparse encodings of data is presented. In summary, this dissertation contributes to improved understandability of classifiers in several tangible and original ways. It introduces three distinct aspects of achieving this goal: infusion of additional patterns from the underlying pattern distribution into rule learners, the derivation of decision diagrams from neural networks, and sparse coding with neural networks with non-negative weights.
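    A minimal sketch of the first family of contributions in generic form, with hypothetical helper names and a deliberately naive sampling scheme (uniform over the bounding box of the training inputs); the dissertation instead derives artificial patterns from the underlying distribution in a mathematically sound way.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

def distill_to_tree(X, y, n_synthetic=5000, max_depth=4, seed=0):
    """Train an opaque model, then an understandable tree on opaque-labeled data.

    Synthetic inputs are drawn uniformly from the bounding box of X purely
    for illustration; the labels for those inputs come from the opaque model.
    """
    rng = np.random.default_rng(seed)
    opaque = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)

    lo, hi = X.min(axis=0), X.max(axis=0)
    X_syn = rng.uniform(lo, hi, size=(n_synthetic, X.shape[1]))
    y_syn = opaque.predict(X_syn)  # opaque model supplies the labels

    X_aug = np.vstack([X, X_syn])
    y_aug = np.concatenate([y, y_syn])
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    return tree.fit(X_aug, y_aug), opaque
```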