
    Finding false paths for sequential circuits using operations on ROBDDs

    Performance of VLSI is, first of all, its high operation…

    Approximate In-memory computing on ReRAMs

    Computing systems have seen tremendous growth over the past few decades in their capabilities, efficiency, and deployment use cases. This growth has been driven by progress in lithography techniques and by improvements in synthesis tools, architectures, and power management. However, there is a growing disparity between computing power and the demands on modern computing systems. The standard von Neumann architecture has separate data storage and data processing locations, and therefore suffers from a memory-processor communication bottleneck, commonly referred to as the 'memory wall'. The relatively slower progress in memory technology compared with processing units has continued to exacerbate the memory wall problem. As feature sizes in the CMOS logic family shrink further, quantum tunneling effects are becoming more prominent. Simultaneously, chip transistor density is already so high that not all transistors can be powered up at the same time without violating temperature constraints, a phenomenon characterized as dark silicon. Coupled with this, leakage currents also increase with smaller feature sizes, resulting in a breakdown of Dennard scaling. These challenges cannot be met without fundamental changes in current computing paradigms. One viable solution is in-memory computing, where computing and storage are performed alongside each other. A number of emerging memory fabrics such as ReRAMs, STT-RAMs, and PCM RAMs are capable of performing logic in-memory. ReRAMs possess high storage density, extremely low power consumption, and a low cost of fabrication. These advantages are due to the simple nature of their basic constituent elements, which allows nano-scale fabrication. We use flow-based computing on ReRAM crossbars, which exploits the natural sneak paths in those crossbars. Another concurrent development in computing is the maturation of domains that are error resilient while being highly data and power intensive. These include machine learning, pattern recognition, computer vision, image processing, and networking. This shift in the nature of computing workloads has given weight to the idea of approximate computing, in which device efficiency is improved by sacrificing tolerable amounts of accuracy in computation. We present a mathematically rigorous foundation for the synthesis of approximate logic and its mapping to ReRAM crossbars using search-based and graphical methods.
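
    To make the flow-based computing idea concrete, here is a minimal sketch, not the dissertation's tool: junctions of a crossbar are assumed to hold Boolean literals, a junction conducts when its literal is true under the inputs, and the computed function is 1 exactly when a conducting path, sneak paths included, links one side of the crossbar to the other. All names and grid conventions below are assumptions of this illustration.

        # Sketch: evaluate a Boolean function realized by flow on a crossbar.
        # Junction (i, j) holds a literal: 'a', '!b', or '1' (always on).
        from collections import deque

        def crossbar_eval(crossbar, assignment):
            """True iff a conducting path (sneak paths included) connects
            wordline 0 to the last bitline under the given assignment."""
            rows, cols = len(crossbar), len(crossbar[0])

            def conducts(lit):
                if lit == '1':
                    return True
                if lit.startswith('!'):
                    return not assignment[lit[1:]]
                return assignment[lit]

            start, goal = ('r', 0), ('c', cols - 1)
            seen, frontier = {start}, deque([start])
            while frontier:
                kind, k = frontier.popleft()
                for i in range(rows):
                    for j in range(cols):
                        if not conducts(crossbar[i][j]):
                            continue
                        if kind == 'r' and i == k:
                            nxt = ('c', j)   # junction crosses wordline i to bitline j
                        elif kind == 'c' and j == k:
                            nxt = ('r', i)   # and back again, as sneak paths do
                        else:
                            continue
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append(nxt)
            return goal in seen

        # This 2x2 crossbar realizes c OR (a AND b) along its conducting paths.
        xbar = [['a', 'c'],
                ['1', 'b']]
        print(crossbar_eval(xbar, {'a': True, 'b': True, 'c': False}))  # True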

    DPLL+ROBDD Derivation Applied to Inversion of Some Cryptographic Functions

    The paper presents logical derivation algorithms that can be applied to the inversion of polynomially computable discrete functions. The proposed approach is based on the fact that it is possible to organize DPLL derivation on a small subset of the variables appearing in a CNF which encodes the algorithm computing the function. The experimental results showed that arrays of conflict clauses generated by this mode of derivation, as a rule, have efficient ROBDD representations. This fact is the starting point for the development of a hybrid DPLL+ROBDD derivation strategy: derivation techniques for ROBDD representations of conflict databases are the same as in ordinary DPLL (variable assignment and unit propagation). In addition, compact ROBDD representations of the conflict databases can be shared effectively in a distributed computing environment.
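
    For concreteness, the two DPLL operations that the hybrid strategy reuses on conflict databases, variable assignment and unit propagation, can be sketched on a plain clause set as follows. This is a minimal illustration in DIMACS-style integer literals, not the paper's implementation.

        # Sketch of unit propagation: clauses are tuples of nonzero ints,
        # where -3 means "NOT x3"; assignment maps variable ids to booleans.
        def unit_propagate(clauses, assignment):
            """Repeatedly assign the literal of any unit clause.
            Returns (assignment, 'conflict' or 'ok')."""
            changed = True
            while changed:
                changed = False
                for clause in clauses:
                    unassigned, satisfied = [], False
                    for lit in clause:
                        val = assignment.get(abs(lit))
                        if val is None:
                            unassigned.append(lit)
                        elif (lit > 0) == val:
                            satisfied = True
                            break
                    if satisfied:
                        continue
                    if not unassigned:               # every literal is false
                        return assignment, 'conflict'
                    if len(unassigned) == 1:         # unit clause forces a value
                        lit = unassigned[0]
                        assignment[abs(lit)] = lit > 0
                        changed = True
            return assignment, 'ok'

        # (x1 or x2) and (not x1 or x3) and (not x2): deciding x1 forces x3.
        cnf = [(1, 2), (-1, 3), (-2,)]
        print(unit_propagate(cnf, {1: True}))  # ({1: True, 3: True, 2: False}, 'ok')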

    Computing leakage current distributions and determination of minimum leakage vectors for combinational designs

    Analyzing circuit leakage and minimizing leakage during the standby mode of operation of a circuit are important problems faced during contemporary circuit design. Analysis of the leakage profiles of an implementation would enable a designer to select between several implementations in a leakage-optimal way. Once such an implementation is selected, minimizing leakage during standby operation (by finding the minimum leakage state over all input vector states) allows further power reductions. However, both these problems are NP-hard. Since leakage power is currently approaching about half the total circuit power, these two problems are of prime relevance. This thesis addresses these NP-hard problems. An Algebraic Decision Diagram (ADD) based approach to determine and implicitly represent the leakage value for all input vectors of a combinational circuit is presented. In its exact form, this technique can compute the leakage value of each input vector, by storing these leakage values implicitly in an ADD structure. To broaden the applicability of this technique, an approximate version of the algorithm is presented as well. The approximation is done by limiting the total number of discriminant nodes in any ADD. It is experimentally demonstrated that these approximate techniques produce results with quantifiable errors. In particular, it is shown that limiting the number of discriminants to a value between 12 and 16 is practical, allowing for good accuracy and lowered memory utilization. In addition, a heuristic approach to determine the input vector which minimizes leakage for a combinational design is presented. Approximate signal probabilities of internal nodes are used as a guide in finding the minimum leakage vector. Probabilistic heuristics are used to select the next gate to be processed, as well as to select the best state of the selected gate. A fast satisfiability solver is employed to ensure the consistency of the assignments that are made in this process. Experimental results indicate that this method has very low run-times, with excellent accuracy, compared to existing approaches.
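
    For scale, the exact problem can be written down as a brute-force enumeration, as in the sketch below. The per-state NAND leakage numbers are invented for illustration; the ADD approach replaces the explicit vector-to-leakage table with an implicit, shared representation of the same function.

        # Sketch: leakage is state dependent, so each primary-input vector
        # yields a different total leakage; enumerate all vectors and pick
        # the minimum. (The leakage values below are made up.)
        from itertools import product

        NAND_LEAK = {(0, 0): 1.0, (0, 1): 2.1, (1, 0): 1.6, (1, 1): 3.0}

        def nand(a, b):
            return 1 - (a & b)

        def circuit_leakage(a, b, c):
            """Tiny netlist: g1 = NAND(a, b); out = NAND(g1, c)."""
            g1 = nand(a, b)
            return NAND_LEAK[(a, b)] + NAND_LEAK[(g1, c)]

        leak_map = {v: circuit_leakage(*v) for v in product((0, 1), repeat=3)}
        best = min(leak_map, key=leak_map.get)
        print(best, leak_map[best])  # (0, 0, 0) 2.6 -- the minimum leakage vector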

    Learning understandable classifier models.

    The topic of this dissertation is the automation of the process of extracting understandable patterns and rules from data. An unprecedented amount of data is available to anyone with a computer connected to the Internet. The disciplines of Data Mining and Machine Learning have emerged over the last two decades to face this challenge, leading to the development of many tools and methods. These tools often produce models that make very accurate predictions about previously unseen data. However, models built by the most accurate methods are usually hard for humans to understand or interpret. In consequence, they deliver only decisions, without explanations, and hence do not directly lead to the acquisition of new knowledge. This dissertation contributes to bridging the gap between accurate opaque models and those that are less accurate but more transparent to humans. The dissertation first defines the problem of learning from data. It surveys the state-of-the-art methods for supervised learning of both understandable and opaque models from data, as well as unsupervised methods that detect features present in the data. It describes popular methods of rule extraction from unintelligible models, which rewrite them into an understandable form, and the limitations of rule extraction. A novel definition of understandability, which ties computational complexity to learning, is provided to show that rule extraction is an NP-hard problem. Next, it discusses whether one can expect that even an accurate classifier has learned new knowledge. The survey ends with a presentation of two approaches to building understandable classifiers. On the one hand, understandable models must be able to accurately describe relations in the data. On the other hand, a description of the output of a system in terms of its input often requires the introduction of intermediate concepts, called features. Therefore it is crucial to develop methods that describe the data with understandable features and are able to use those features to present the relation that describes the data. Novel contributions of this thesis follow the survey. Two families of rule extraction algorithms are considered. First, a method that can work with any opaque classifier is introduced. Artificial training patterns are generated in a mathematically sound way and used to train more accurate understandable models. Subsequently, two novel algorithms that require the opaque model to be a Neural Network are presented. They rely on access to the network's weights and biases to induce rules encoded as Decision Diagrams. Finally, the topic of feature extraction is considered. The impact of imposing non-negativity constraints on the weights of a neural network is examined. It is proved that a three-layer network with non-negative weights can shatter any given set of points, and experiments are conducted to assess the accuracy and interpretability of such networks. Then, a novel path-following algorithm that finds robust sparse encodings of data is presented. In summary, this dissertation contributes to improved understandability of classifiers in several tangible and original ways. It introduces three distinct aspects of achieving this goal: infusion of additional patterns from the underlying pattern distribution into rule learners, the derivation of decision diagrams from neural networks, and achieving sparse coding with neural networks with non-negative weights.
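
    The first family can be illustrated by the generic "pedagogical" scheme it builds on: label artificial patterns with the opaque model and fit a transparent surrogate to its answers. The scikit-learn models, noise level, and sample counts below are stand-ins chosen for this sketch, not the dissertation's algorithms or its decision-diagram encoding.

        # Sketch: rule extraction by querying an opaque model on artificial
        # patterns and training an understandable surrogate on its answers.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=500, n_features=4, random_state=0)
        opaque = RandomForestClassifier(random_state=0).fit(X, y)

        # Artificial patterns near the data, labeled by the opaque "oracle".
        rng = np.random.default_rng(0)
        X_art = X[rng.integers(len(X), size=2000)] + rng.normal(0, 0.1, (2000, 4))
        y_art = opaque.predict(X_art)

        # An understandable surrogate, printed as explicit rules.
        surrogate = DecisionTreeClassifier(max_depth=3).fit(X_art, y_art)
        print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))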

    Parameterized verification and repair of concurrent systems

    In this thesis, we present novel approaches for model checking, repair, and synthesis of systems that may be parameterized in their number of components. The parameterized model checking problem (PMCP) is in general undecidable, and therefore the focus is on restricted classes of parameterized concurrent systems for which the problem is decidable. Under certain conditions, the problem is decidable for guarded protocols, and for systems that communicate via a token, a pairwise, or a broadcast synchronization. In this thesis we improve existing results for guarded protocols, and we show that the PMCP of guarded protocols and token-passing systems is decidable for specifications in Prompt-LTL, a logic that adds a quantitative aspect to LTL. Furthermore, we present, to our knowledge, the first parameterized repair algorithm. The parameterized repair problem is to find a refinement of a process implementation p such that the concurrent system with an arbitrary number of instances of p is correct. We show how this algorithm can be used on classes of systems that can be represented as well-structured transition systems (WSTS). Additionally, we present two safety synthesis algorithms that take a lazy approach. Given a faulty system, the algorithms first model check the system symbolically; the error traces obtained are then analyzed to synthesize a candidate that has no such traces. Experimental results show that our algorithm solves a number of benchmarks that are intractable for existing tools. Furthermore, we introduce our tool AIGEN for generating random Boolean functions and transition systems in a symbolic representation.
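
    As a point of reference for the parameterized setting, the per-instance baseline that PMCP generalizes can be sketched as a small explicit-state safety check of a token-passing system. The process template below is invented for illustration; note that no finite loop of such checks can settle the property for every number of processes, which is exactly what the parameterized problem asks.

        # Sketch: mutual exclusion in a token ring with a fixed n, checked by
        # explicit-state search. PMCP asks for a verdict valid for all n.
        from collections import deque

        def check_token_ring(n):
            """State: (token position, tuple of local states in {'idle','crit'}).
            A process enters 'crit' only while holding the token; leaving
            'crit' passes the token on. True iff <=1 process is ever in 'crit'."""
            init = (0, ('idle',) * n)
            seen, frontier = {init}, deque([init])
            while frontier:
                tok, locs = frontier.popleft()
                if sum(s == 'crit' for s in locs) > 1:
                    return False                      # safety violation found
                if locs[tok] == 'idle':               # enter critical section
                    succs = [(tok, locs[:tok] + ('crit',) + locs[tok + 1:])]
                else:                                 # leave and pass the token
                    succs = [((tok + 1) % n,
                              locs[:tok] + ('idle',) + locs[tok + 1:])]
                for s in succs:
                    if s not in seen:
                        seen.add(s)
                        frontier.append(s)
            return True

        print(all(check_token_ring(n) for n in range(1, 6)))  # True, but only for n < 6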

    Solving hard industrial combinatorial problems with SAT

    The topic of this thesis is the development of SAT-based techniques and tools for solving industrial combinatorial problems. First, it describes the architecture of state-of-the-art SAT and SMT Solvers based on the classical DPLL procedure. These systems can be used as black boxes for solving combinatorial problems; however, their efficiency can sometimes be increased with slight modifications of the basic algorithm. Therefore, the study and development of techniques for adjusting SAT Solvers to specific combinatorial problems is the first goal of this thesis. SAT Solvers can only deal with propositional logic, so for solving general combinatorial problems two different approaches are possible: reducing the complex constraints to propositional clauses, or enriching the SAT Solver language. The first approach corresponds to encoding the constraints into SAT; the second corresponds to using propagators, the basis of SMT Solvers. Regarding the first approach, in this document we improve the encoding of two of the most important combinatorial constraints: cardinality constraints and pseudo-Boolean constraints. After that, we present a new mixed approach, called lazy decomposition, which combines the advantages of encodings and propagators. The other part of the thesis applies these theoretical improvements to industrial combinatorial problems. We give a method for efficiently scheduling some professional sport leagues with SAT. The results are promising and show that a SAT approach is valid for these problems. However, the chaotic behavior of CDCL-based SAT Solvers due to VSIDS heuristics makes it difficult to obtain similar solutions for two similar problems. This may be inconvenient in real-world problems, since a user expects similar solutions when they make slight modifications to the problem specification. In order to overcome this limitation, we have studied and solved the close solution problem, i.e., the problem of quickly finding a solution close to a given one when a similar problem is considered.
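
    As an example of the first approach, one standard translation of a cardinality constraint into clauses is the sequential-counter encoding due to Sinz; the thesis's improved encodings are not reproduced here. The sketch below assumes DIMACS-style integer literals and 1 <= k < n.

        # Sketch: CNF for "at most k of x1..xn are true" via a sequential
        # counter. s[i][j] is a fresh variable meaning "at least j+1 of the
        # first i+1 inputs are set".
        def at_most_k(xs, k, next_var):
            """Return (clauses, next free variable id); assumes 1 <= k < len(xs)."""
            n = len(xs)
            s = [[next_var + i * k + j for j in range(k)] for i in range(n - 1)]
            clauses = [[-xs[0], s[0][0]]]
            clauses += [[-s[0][j]] for j in range(1, k)]
            for i in range(1, n - 1):
                clauses.append([-xs[i], s[i][0]])
                clauses.append([-s[i - 1][0], s[i][0]])
                for j in range(1, k):
                    clauses.append([-xs[i], -s[i - 1][j - 1], s[i][j]])
                    clauses.append([-s[i - 1][j], s[i][j]])
                clauses.append([-xs[i], -s[i - 1][k - 1]])   # overflow forbidden
            clauses.append([-xs[n - 1], -s[n - 2][k - 1]])
            return clauses, next_var + (n - 1) * k

        # "At most 2 of x1..x4", auxiliary variables numbered from 5 upward.
        cnf, top = at_most_k([1, 2, 3, 4], 2, 5)
        print(len(cnf), "clauses, auxiliaries up to", top - 1)  # 13 clauses, up to 10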