
    Artificial evolution with Binary Decision Diagrams: a study in evolvability in neutral spaces

    This thesis develops a new approach to evolving Binary Decision Diagrams and uses it to study evolvability. For reasons that are not yet fully understood, current approaches to artificial evolution fail to exhibit the evolvability so readily displayed in nature. Before evolvability can be brought to artificial evolution, the field must first understand and characterise it; doing so will lead to systems far more capable than today's. An experimental approach is taken: carefully crafted, controlled experiments elucidate the mechanisms and properties that facilitate evolvability, focusing on the roles of, and interplay between, neutrality, modularity, gradualism, robustness and diversity. Evolvability is found to emerge under gradual evolution as a biased distribution of functionality within the genotype-phenotype map, which serves to direct phenotypic variation. Neutrality facilitates fitness-conserving exploration, eliminating the problem of local optima. Population diversity, in conjunction with neutrality, is shown to facilitate the evolution of evolvability. The search is robust, scalable, and insensitive to the absence of initial diversity. The thesis concludes that gradual evolution in a search space made free of local optima by neutrality can be a viable alternative to problematic evolution on multi-modal landscapes.
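
    The role neutrality plays here can be made concrete with a toy model. The sketch below (Python; entirely illustrative and not the thesis's BDD system — the genome length, the many-to-one map, and all names are invented for the example) runs a (1+1)-style search that accepts fitness-neutral mutations, so the search drifts along a neutral network instead of stalling.

```python
import random

# Toy neutral-evolution model (illustrative only; not the thesis's
# system). The genotype-phenotype map is many-to-one: trailing genome
# bits are never expressed, so mutating them is fitness-neutral.

GENOME_LEN = 32
EXPRESSED = 8  # only the first 8 bits affect the phenotype

def fitness(genome):
    return sum(genome[:EXPRESSED])

def mutate(genome):
    child = genome[:]
    i = random.randrange(GENOME_LEN)
    child[i] ^= 1  # flip one random bit
    return child

random.seed(1)
parent = [0] * GENOME_LEN
for _ in range(10_000):
    child = mutate(parent)
    # Accept improvements AND fitness-neutral moves: the search can
    # drift across the neutral network instead of being stuck.
    if fitness(child) >= fitness(parent):
        parent = child

print("final fitness:", fitness(parent), "of", EXPRESSED)
```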

    JINC - A Multi-Threaded Library for Higher-Order Weighted Decision Diagram Manipulation

    Ordered Binary Decision Diagrams (OBDDs) have been proven to be an efficient data structure for symbolic algorithms. The efficiency of the symbolic methods depends on the underlying OBDD library. Available OBDD libraries are based on the standard concepts and so far differ only in implementation details. This thesis introduces new techniques to increase the run-time and space efficiency of an OBDD library. It introduces the framework of Higher-Order Weighted Decision Diagrams (HOWDDs) to capture the similarities of different OBDD variants. This framework provides the basis for the new variant Toggling Algebraic Decision Diagrams (TADDs), which has been shown to be a space-efficient HOWDD variant for symbolic matrix representation. The HOWDD concept has been used to implement the OBDD library JINC. The thesis also analyzes the use of multi-threading techniques to speed up OBDD manipulation. A new reordering framework applies the advantages of multi-threading to reordering algorithms; an abstraction layer ensures that the original reordering algorithms remain untouched. The challenge arising from a straightforward parallelization is that the computed-tables and the garbage collection are not as efficient as in a single-threaded environment. We resolve this problem by developing a new multi-operand APPLY algorithm that eliminates the creation of temporary nodes during computation and thus reduces the need for caching and garbage collection. The HOWDD framework leads to an efficient library design that has been shown to be more efficient than the established OBDD library CUDD. The HOWDD instance TADD reduces the number of nodes needed by a factor of two compared to ordinary ADDs. The new multi-threading approaches outperform single-threaded ones by several factors; for the new reordering framework, the speed-up almost equals the theoretical optimum. The novel multi-operand APPLY algorithm reduces memory usage for the n-queens problem by a factor of 50, enabling the computation of larger problem instances than the traditional APPLY approach allows. These new approaches improve performance and reduce the memory footprint, which suggests that applications should be reviewed for whether they could benefit from the multi-threading and multi-operand approaches introduced and discussed in this thesis.
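
    The multi-operand APPLY idea can be illustrated in a few lines. The sketch below (Python; a minimal hash-consed BDD with an n-ary AND — not JINC's HOWDD implementation, and all names are invented for the example) recurses over all operands at once, so no temporary intermediate diagrams are built the way a chain of binary APPLY calls would build them.

```python
# Minimal hash-consed BDD with an n-ary AND (illustrative sketch only).
# Nodes are interned tuples (var, low, high); terminals are True/False.

unique = {}   # hash-consing table: structurally equal nodes are shared
cache = {}    # computed-table for apply_and

def mk(var, low, high):
    # Reduction rule plus hash-consing.
    if low == high:
        return low
    node = (var, low, high)
    return unique.setdefault(node, node)

def var(i):
    # BDD for the single variable x_i (smaller index = closer to root).
    return mk(i, False, True)

def apply_and(*ops):
    # n-ary conjunction in ONE recursion over all operands, so no
    # temporary intermediate diagram is ever constructed.
    if any(op is False for op in ops):
        return False
    ops = tuple(op for op in ops if op is not True)
    if not ops:
        return True
    if ops in cache:
        return cache[ops]
    v = min(op[0] for op in ops)  # top variable among all operands
    low = apply_and(*(op[1] if op[0] == v else op for op in ops))
    high = apply_and(*(op[2] if op[0] == v else op for op in ops))
    res = mk(v, low, high)
    cache[ops] = res
    return res

# x0 AND x1 AND x2 built in a single n-ary call.
f = apply_and(var(0), var(1), var(2))
print(f)  # (0, False, (1, False, (2, False, True)))
```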

    Advances in Functional Decomposition: Theory and Applications

    Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition that exploit suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, in particular BDD reordering. The third algorithm also finds disjoint-support decompositions, but is based on a technique that integrates circuit graph analysis and BDD-based decomposition. Combining the two approaches yields an algorithm that is more robust than a purely BDD-based one and improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial-time algorithm for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, using information derived from a BDD representation, and does so without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally, we present two publications that resulted from the many detours we took along the winding path of our research.
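
    As a reminder of what these algorithms test for, the sketch below gives the classic truth-table criterion for a simple disjoint decomposition f(X) = g(h(bound), free): it exists exactly when the decomposition chart has at most two distinct columns (column multiplicity <= 2). This states only the underlying condition over truth tables (Python; names invented for the example); the dissertation's contribution is computing such decompositions efficiently on BDDs and circuit graphs.

```python
from itertools import product

# Ashenhurst test over a truth table: f decomposes as g(h(bound), free)
# iff the decomposition chart has column multiplicity at most 2.

def column_multiplicity(f, n, bound):
    free = [i for i in range(n) if i not in bound]
    columns = set()
    for b in product([0, 1], repeat=len(bound)):
        col = []
        for r in product([0, 1], repeat=len(free)):
            x = [0] * n
            for i, v in zip(bound, b):
                x[i] = v
            for i, v in zip(free, r):
                x[i] = v
            col.append(f(*x))
        columns.add(tuple(col))  # one chart column per bound assignment
    return len(columns)

# Example: f = (a XOR b) AND c decomposes with bound set {a, b},
# since h = a XOR b summarises the bound variables in one signal.
f = lambda a, b, c: (a ^ b) & c
print(column_multiplicity(f, 3, bound=[0, 1]))  # 2 -> decomposable
```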

    Using ordered partial decision diagrams for manufacture test generation

    Because of limited tester time and memory, a primary goal of digital circuit manufacture test generation is to create compact test sets. Test generation programs that use Ordered Binary Decision Diagrams (OBDDs) as their primary functional representation excel at this task. Unfortunately, the use of OBDDs limits these programs to small circuits, because the size of the OBDD representing a function can be exponential in the number of the function's switching variables. Working with such functions can cause OBDD-based programs to exceed acceptable time and memory limits. This research proposes Ordered Partial Decision Diagrams (OPDDs) as the primary functional representation for test generation systems instead. By limiting the number of vertices allowed in a single OPDD, complex functions can be partially represented in order to save time and memory. An OPDD-based test generation system is developed, and techniques that improve its performance are evaluated on a small benchmark circuit. The new system is then demonstrated on larger and more complex circuits than its OBDD-based counterpart can handle.
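
    The partial-representation idea can be sketched directly. The toy builder below (Python; illustrative only, not the thesis's OPDD package, and the budget mechanism is simplified) constructs an ordered diagram by Shannon expansion but collapses a subfunction to an UNKNOWN terminal once a node budget is spent, trading completeness for bounded memory.

```python
# Toy partial-diagram builder: once the node budget is exhausted,
# remaining subfunctions become the UNKNOWN terminal.

UNKNOWN = "?"

class PartialBuilder:
    def __init__(self, budget):
        self.budget = budget
        self.unique = {}  # hash-consing table for internal nodes

    def mk(self, var, low, high):
        if low == high:
            return low
        node = (var, low, high)
        return self.unique.setdefault(node, node)

    def build(self, f, n, var=0, bits=()):
        if len(self.unique) >= self.budget:
            return UNKNOWN          # budget spent: truncate this branch
        if var == n:
            return f(*bits)         # terminal: 0 or 1
        low = self.build(f, n, var + 1, bits + (0,))
        high = self.build(f, n, var + 1, bits + (1,))
        return self.mk(var, low, high)

# 3-input majority with a budget of 2 nodes for subfunctions: the a=0
# cofactor is represented exactly, the a=1 cofactor degrades to "?".
maj = lambda a, b, c: (a & b) | (a & c) | (b & c)
print(PartialBuilder(budget=2).build(maj, 3))
# (0, (1, 0, (2, 0, 1)), '?')
```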

    From Dust to Dawn: Practically Efficient Two-Party Secure Function Evaluation Protocols and their Modular Design

    General two-party Secure Function Evaluation (SFE) allows mutually distrusting parties to jointly and correctly compute any function on their private input data, without revealing the inputs. Properly designed, SFE guarantees to satisfy even the most stringent security requirements, including for interactive computation. Two-party SFE can benefit almost any client-server interaction where privacy is required, such as privacy-preserving credit checking, medical classification, or face recognition. Today, SFE is the subject of an immense amount of research in a variety of directions and is not easy to navigate. In this paper, we systematize the most practically important work from the vast body of research on general SFE. It turns out that the most efficient SFE protocols today are obtained by combining several basic techniques, such as garbled circuits and homomorphic encryption. We limit our detailed discussion to efficient general techniques. In particular, we do not discuss the details of currently practically inefficient techniques, such as fully homomorphic encryption (although we elaborate on its practical relevance), nor do we cover specialized techniques applicable only to small classes of functions. As an important practical contribution, we present a framework in which today's practically most efficient techniques for general SFE can be viewed as building blocks with well-defined interfaces that can be easily combined into a complete efficient solution. Further, our approach naturally lends itself to automated protocol generation (compilation), as evidenced by the implementation of (parts of) our framework in the TASTY SFE compiler (introduced at ACM CCS 2010). In sum, our work is positioned as a comprehensive guide to state-of-the-art SFE, with the additional goal of extracting, systematizing, and unifying the most relevant and promising general techniques from the mass of SFE knowledge. We hope this guide will help developers of SFE libraries and privacy-preserving protocols to select the most efficient SFE components available today.
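
    To make the garbled-circuit building block concrete, here is a toy garbling of a single AND gate (Python; SHA-256 stands in for the key-derivation function, oblivious transfer and point-and-permute are omitted, and this is in no way a secure implementation). The garbler encrypts each output-wire label under a pair of input-wire labels; the evaluator, holding one label per input, can open exactly one row.

```python
import os
import random
import hashlib

# Toy garbled AND gate (NOT secure; SHA-256 as a stand-in KDF).

def H(a, b):
    return hashlib.sha256(a + b).digest()   # 32-byte mask

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Garbler side: one random 16-byte label per wire per truth value.
A = {0: os.urandom(16), 1: os.urandom(16)}   # input wire a
B = {0: os.urandom(16), 1: os.urandom(16)}   # input wire b
C = {0: os.urandom(16), 1: os.urandom(16)}   # output wire

# Each row encrypts the correct output label (zero-padded to 32 bytes
# to match the mask length) under one pair of input labels.
table = [xor(H(A[a], B[b]), C[a & b] + b"\x00" * 16)
         for a in (0, 1) for b in (0, 1)]
random.shuffle(table)  # hide which row corresponds to which inputs

# Evaluator side: holds exactly one label per input wire (obtained via
# OT in a real protocol) and trial-decrypts until the padding checks.
a_bit, b_bit = 1, 1
la, lb = A[a_bit], B[b_bit]
for row in table:
    plain = xor(H(la, lb), row)
    if plain.endswith(b"\x00" * 16):
        out_label = plain[:16]
        break

# The evaluator learned the label encoding a AND b, and nothing else.
print(out_label == C[a_bit & b_bit])  # True
```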

    Computing leakage current distributions and determination of minimum leakage vectors for combinational designs

    Analyzing circuit leakage and minimizing leakage during the standby mode of operation of a circuit are important problems faced during contemporary circuit design. Analysis of the leakage profiles of an implementation would enable a designer to select between several implementations in a leakage-optimal way. Once such an implementation is selected, minimizing leakage during standby operation (by finding the minimum leakage state over all input vector states) allows further power reductions. However, both these problems are NP-hard. Since leakage power is currently approaching about half the total circuit power, these two problems are of prime relevance. This thesis addresses these NP-hard problems. An Algebraic Decision Diagram (ADD) based approach to determine and implicitly represent the leakage value for all input vectors of a combinational circuit is presented. In its exact form, this technique can compute the leakage value of each input vector, storing these leakage values implicitly in an ADD structure. To broaden the applicability of this technique, an approximate version of the algorithm is presented as well. The approximation is done by limiting the total number of discriminant nodes in any ADD. It is experimentally demonstrated that these approximate techniques produce results with quantifiable errors. In particular, it is shown that limiting the number of discriminants to a value between 12 and 16 is practical, allowing for good accuracy and lowered memory utilization. In addition, a heuristic approach to determine the input vector which minimizes leakage for a combinational design is presented. Approximate signal probabilities of internal nodes are used as a guide in finding the minimum leakage vector. Probabilistic heuristics are used to select the next gate to be processed, as well as to select the best state of the selected gate. A fast satisfiability solver is employed to ensure the consistency of the assignments made in this process. Experimental results indicate that this method has very low run-times, with excellent accuracy, compared to existing approaches.
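
    The exact analysis amounts to attaching a leakage value to every input vector. The sketch below (Python; the gate leakage numbers are made up, and a plain dictionary stands in for the ADD that makes the thesis's technique scale) enumerates the vectors of a tiny three-gate NAND netlist, builds the leakage distribution, and picks the minimum leakage vector by inspection.

```python
from itertools import product

# Brute-force leakage distribution for a toy netlist. Per-gate leakage
# depends on the gate's input state; the numbers (nA) are invented.
NAND_LEAK = {(0, 0): 10, (0, 1): 25, (1, 0): 20, (1, 1): 45}

# Netlist: gate name -> (input_a, input_b); listed in topological order.
netlist = {"g1": ("x0", "x1"), "g2": ("x1", "x2"), "g3": ("g1", "g2")}

def simulate(vec):
    sig = {"x0": vec[0], "x1": vec[1], "x2": vec[2]}
    leak = 0
    for g, (a, b) in netlist.items():
        ia, ib = sig[a], sig[b]
        leak += NAND_LEAK[(ia, ib)]   # leakage of this gate's state
        sig[g] = 1 - (ia & ib)        # NAND logic
    return leak

# Leakage value for every input vector (the ADD stores this implicitly).
dist = {vec: simulate(vec) for vec in product([0, 1], repeat=3)}
mlv = min(dist, key=dist.get)
print("leakage distribution:", dist)
print("minimum leakage vector:", mlv, "->", dist[mlv], "nA")
```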

    Model checking: Algorithmic verification and debugging

    Turing Lecture from the winners of the 2007 ACM A.M. Turing Award. In 1981, Edmund M. Clarke and E. Allen Emerson, working in the USA, and Joseph Sifakis, working independently in France, authored seminal papers that founded what has become the highly successful field of model checking. This verification technology provides an algorithmic means of determining whether an abstract model (representing, for example, a hardware or software design) satisfies a formal specification expressed as a temporal logic (TL) formula. Moreover, if the property does not hold, the method identifies a counterexample execution that shows the source of the problem. The progression of model checking to the point where it can be successfully used for complex systems has required the development of sophisticated means of coping with what is known as the state explosion problem. Great strides have been made on this problem over the past 28 years by what is now a very large international research community. As a result, many major hardware and software companies are beginning to use model checking in practice. Examples of its use include the verification of VLSI circuits, communication protocols, software device drivers, real-time embedded systems, and security algorithms. The work of Clarke, Emerson, and Sifakis continues to be central to the success of this research area. Their work over the years has led to the creation of new logics for specification, new verification algorithms, and surprising theoretical results. Model checking tools, created by both academic and industrial teams, have resulted in an entirely novel approach to verification and test case generation. This approach, for example, often enables engineers in the electronics industry to design complex systems with considerable assurance regarding the correctness of their initial designs. Model checking promises to have an even greater impact on the hardware and software industries in the future. (Moshe Y. Vardi, Editor-in-Chief)
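
    The core algorithmic idea is small enough to show explicitly. The sketch below (Python; a toy explicit-state checker over a hand-written transition system, not any tool from the lecture) computes the states satisfying EF error by a backward fixpoint and extracts a counterexample path from the initial state; symbolic model checkers replace these explicit sets with BDDs to fight state explosion.

```python
# Toy explicit-state model checking: does the system satisfy AG !error,
# i.e. is no error state reachable? We compute EF error as a least
# fixpoint and, since state 0 satisfies it, extract a counterexample.

states = {0, 1, 2, 3}
trans = {(0, 1), (1, 2), (2, 2), (0, 3), (3, 3)}
bad = {2}  # states where the proposition "error" holds

def ef(target):
    # Least fixpoint: all states from which `target` is reachable.
    reach = set(target)
    changed = True
    while changed:
        changed = False
        for (s, t) in trans:
            if t in reach and s not in reach:
                reach.add(s)
                changed = True
    return reach

can_fail = ef(bad)
print("states satisfying EF error:", sorted(can_fail))

# Counterexample from initial state 0: walk while staying in can_fail.
path, s = [0], 0
while s not in bad:
    s = next(t for (u, t) in trans if u == s and t in can_fail)
    path.append(s)
print("counterexample:", path)  # [0, 1, 2]
```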

    Discrete Function Representations Utilizing Decision Diagrams and Spectral Techniques

    All discrete function representations become exponential in size in the worst case. Binary decision diagrams have become a common method of representing discrete functions in computer-aided design applications, and for many functions they provide compact representations. This work presents a way to represent large decision diagrams as multiple smaller partial binary decision diagrams. In the Boolean domain, each truth table entry, consisting of a single Boolean value, provides only local information about the function at that point in the Boolean space; partial binary decision diagrams thus lose information for a portion of the Boolean space. If the function is instead represented in the spectral domain, each integer-valued coefficient contains some global information about the function. This work therefore also explores spectral representations of discrete functions, including the implementation of a method for transforming circuits from netlist representations directly into spectral decision diagrams.
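
    The global character of spectral coefficients is easy to demonstrate. The sketch below (Python; a textbook fast Walsh-Hadamard transform, not the netlist-to-spectral-diagram method this work implements) turns a truth table into its Walsh spectrum, where each coefficient measures the function's correlation with one linear function over all inputs at once.

```python
# Fast Walsh-Hadamard transform of a Boolean truth table. Each output
# coefficient is a correlation of f with one linear (XOR) function,
# computed over the whole input space -- global, not local, data.

def walsh_hadamard(tt):
    spec = [1 - 2 * v for v in tt]  # encode 0 -> +1, 1 -> -1
    h = 1
    while h < len(spec):
        # Butterfly stage: combine blocks of size h pairwise.
        for i in range(0, len(spec), 2 * h):
            for j in range(i, i + h):
                a, b = spec[j], spec[j + h]
                spec[j], spec[j + h] = a + b, a - b
        h *= 2
    return spec

# Truth table of XOR(x0, x1) in input order 00, 01, 10, 11: the whole
# spectrum concentrates in the single coefficient for x0 XOR x1.
print(walsh_hadamard([0, 1, 1, 0]))  # [0, 0, 0, 4]
```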