    Symbolic analysis of bounded Petri nets

    This paper presents a symbolic approach for the analysis of bounded Petri nets. The structure and behavior of the Petri net are symbolically modeled using Boolean functions, thus reducing reasoning about Petri nets to Boolean calculation. The set of reachable markings is calculated by symbolically firing the transitions in the Petri net. Highly concurrent systems suffer from the state explosion problem, produced by an exponential increase in the number of reachable states. This state explosion is handled by using Binary Decision Diagrams (BDDs), which are capable of representing large sets of markings with small data structures. Petri nets have the ability to model a large variety of systems and the flexibility to describe causality, concurrency, and conditional relations. The manipulation of the vast state spaces generated by Petri nets enables the efficient analysis of a wide range of problems, e.g., deadlock freeness, liveness, and concurrency. A number of examples are presented in order to show how large reachability sets can be generated, represented, and analyzed with moderate BDD sizes. By using this symbolic framework, properties requiring an exhaustive analysis of the reachability graph can be efficiently verified.
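
    A minimal sketch of the reachability fixed point described above, assuming a 1-safe net: plain Python sets stand in for the BDD-based symbolic encoding the paper actually uses, and the place names and transition format are illustrative.

        # Breadth-first reachability for a 1-safe Petri net. A symbolic
        # implementation would represent these sets of markings as BDDs.

        def fire(marking, pre, post):
            """Fire a transition if enabled; return the successor marking or None."""
            if not pre <= marking:          # every input place must hold a token
                return None
            return (marking - pre) | post   # consume, then produce tokens

        def reachable(m0, transitions):
            """Least fixed point: the set of all markings reachable from m0."""
            seen, frontier = {m0}, {m0}
            while frontier:
                nxt = set()
                for m in frontier:
                    for pre, post in transitions:
                        succ = fire(m, pre, post)
                        if succ is not None and succ not in seen:
                            nxt.add(succ)
                seen |= nxt
                frontier = nxt
            return seen

        # Two places and one transition moving a token from p0 to p1:
        print(reachable(frozenset({"p0"}),
                        [(frozenset({"p0"}), frozenset({"p1"}))]))
        # -> {frozenset({'p0'}), frozenset({'p1'})}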

    Aggressive aggregation

    Among the first steps in a compilation pipeline is the construction of an Intermediate Representation (IR), an in-memory representation of the input program. Any attempt at program optimisation, both in terms of size and running time, has to operate on this structure. There may be one or multiple such IRs; however, most compilers use some form of a Control Flow Graph (CFG) internally. This representation clearly aims at general-purpose programming languages, for which it is well suited and allows for many classical program optimisations. On the other hand, a growing structural difference between the input program and the chosen IR can lose or obfuscate information that can be crucial for effective optimisation. With today’s rise of a multitude of different programming languages, Domain-Specific Languages (DSLs), and computing platforms, the classical machine-oriented IR is reaching its limits and a broader variety of IRs is needed. This realisation yielded, e.g., the Multi-Level Intermediate Representation (MLIR), a compiler framework that facilitates the creation of a wide range of IRs and encourages their reuse among different programming languages and the corresponding compilers. In this modern spirit, this dissertation explores the potential of Algebraic Decision Diagrams (ADDs) as an IR for (domain-specific) program optimisation. The data structure has remained the state of the art for Boolean function representation for more than thirty years and is well known for its optimality in size and depth, i.e., running time. As such, it is ideally suited to represent the corresponding classes of programs in the role of an IR. We will discuss its application in a variety of different program domains, ranging from DSLs to machine-learned programs and even to general-purpose programming languages. Two representatives for DSLs, a graphical and a textual one, prove the adequacy of ADDs for the program optimisation of modelled decision services. The resulting DSLs facilitate experimentation with ADDs and provide valuable insight into their potential and limitations: input programs can be aggregated in a radical fashion, at the risk of occasional exponential growth. With the aggregation of large Random Forests into a single aggregated ADD, we bring this potential to a program domain of practical relevance. The results are impressive: both the running time and the size of the Random Forest program are reduced by multiple orders of magnitude. It turns out that this ADD-based aggregation can be generalised, even to general-purpose programming languages. The resulting method achieves impressive speedups for a seemingly optimal program: the iterative Fibonacci implementation. Altogether, ADDs facilitate effective program optimisation where the input programs allow for a natural transformation to the data structure. In these cases, they have proven to be an extremely powerful tool for the optimisation of a program’s running time and, in some cases, of its size. The exploration of their potential as an IR has only started and deserves attention in future research.
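
    The aggregation at the heart of this work can be pictured with the classic decision-diagram "apply" operation: the sketch below, using an illustrative tuple encoding of ADD nodes, combines two toy decision stumps pointwise, which is the essence of collapsing a Random Forest's votes into one diagram.

        # ADD "apply": combine two diagrams with a binary operation on leaves,
        # memoising subresults and applying the reduction rule low == high.

        Leaf = lambda v: ("leaf", v)
        Node = lambda var, lo, hi: ("node", var, lo, hi)  # var: ordering index

        def apply_add(f, g, op, memo=None):
            memo = {} if memo is None else memo
            key = (id(f), id(g))
            if key in memo:
                return memo[key]
            if f[0] == "leaf" and g[0] == "leaf":
                res = Leaf(op(f[1], g[1]))
            else:
                # branch on the earlier variable of the two roots
                v = min(x[1] for x in (f, g) if x[0] == "node")
                flo, fhi = (f[2], f[3]) if f[0] == "node" and f[1] == v else (f, f)
                glo, ghi = (g[2], g[3]) if g[0] == "node" and g[1] == v else (g, g)
                lo = apply_add(flo, glo, op, memo)
                hi = apply_add(fhi, ghi, op, memo)
                res = lo if lo == hi else Node(v, lo, hi)  # reduction rule
            memo[key] = res
            return res

        # Summing the votes of two decision stumps over variable 0:
        t1 = Node(0, Leaf(0), Leaf(1))
        t2 = Node(0, Leaf(1), Leaf(1))
        print(apply_add(t1, t2, lambda a, b: a + b))
        # -> ('node', 0, ('leaf', 1), ('leaf', 2))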

    TOWARD LOWER COMMUNICATION IN GARBLED CIRCUIT EVALUATION

    Secure Multi-party Computation (SMC) is a classical problem in theoretical security. In an SMC problem, two or more parties must correctly compute a function f on their respective inputs x and y, while preserving the privacy of their inputs and additional security properties. One of the approaches proposed for addressing the SMC problem relies on the design of Garbled Circuits (GCs). In GCs, the function to be computed is represented as a Boolean circuit composed of binary gates. The input and output wires of each gate are masked such that the party evaluating the Garbled Boolean Circuit (GBC) cannot gain any information about the inputs or the intermediate results that appear during the function evaluation. The complexity of today's most efficient GC protocol depends linearly on the size of the Boolean circuit representation of the evaluated function. The total cost and the run-time interaction between parties increase linearly with the number of gates and can be huge for complex GBCs. Interest has therefore grown in the efficiency of this technique and in its applications to computation outsourcing in untrusted environments. A recent result shows that XOR gates in a Boolean circuit have no cost for the secure computation protocol. Therefore, circuits with a reduced number of non-XOR gates are more convenient, and one possible way to reduce the complexity of the computation is to reduce the number of non-XOR gates in the Boolean circuit. Since the main aim of this work is to reduce the number of non-XOR gates, which directly results in fewer interactions between the parties and lower transfer complexity at runtime, we present different approaches for reducing the communication cost of SMC and improving the overall computation time and efficiency of its execution.
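
    Because the free-XOR technique referred to above makes XOR gates cost nothing to garble and transfer, the communication cost of a circuit reduces to its non-XOR gate count. A toy illustration, with an assumed gate-list format; which gate types count as free depends on the concrete scheme.

        # Count the gates that require garbled tables (i.e. communication).
        FREE_GATES = {"XOR", "XNOR", "NOT"}   # assumed free in this sketch

        def gc_cost(circuit):
            """Number of gates whose garbled tables must be transferred."""
            return sum(1 for gate_type, *_ in circuit
                       if gate_type not in FREE_GATES)

        # 1-bit full adder: sum = a ^ b ^ cin, cout = (a & b) | (cin & (a ^ b))
        full_adder = [
            ("XOR", "a",   "b",   "t1"),
            ("XOR", "t1",  "cin", "sum"),
            ("AND", "a",   "b",   "t2"),
            ("AND", "cin", "t1",  "t3"),
            ("OR",  "t2",  "t3",  "cout"),
        ]
        print(gc_cost(full_adder))   # -> 3: only the non-XOR gates cost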

    Automated Synthesis of Memristor Crossbar Networks

    The advancement of semiconductor device technology over the past decades has enabled the design of increasingly complex electrical and computational machines. Electronic design automation (EDA) has played a significant role in the design and implementation of transistor-based machines. However, as transistors move closer toward their physical limits, the speed-up provided by Moore's law will grind to a halt. Once again, we find ourselves on the verge of a paradigm shift in the computational sciences as newer devices pave the way for novel approaches to computing. One such device is the memristor -- a resistor with non-volatile memory. Memristors can be used as junctional switches in crossbar circuits, which comprise intersecting sets of vertical and horizontal nanowires. The major contribution of this dissertation lies in automating the design of such crossbar circuits -- doing a new kind of EDA for a new kind of computational machinery. In general, this dissertation attempts to answer the following questions: (a) How can we synthesize crossbars for computing large Boolean formulas, up to 128-bit? (b) How can we synthesize more compact crossbars for small Boolean formulas, up to 8-bit? (c) For a given loop-free C program doing integer arithmetic, is it possible to synthesize an equivalent crossbar circuit? We present novel solutions to each of the above problems. Our proposed solutions resolve a number of significant bottlenecks in existing research through the use of innovative logic representations and artificial intelligence techniques. For large Boolean formulas (up to 128-bit), we have utilized Reduced Ordered Binary Decision Diagrams (ROBDDs) to automatically synthesize linearly growing crossbar circuits that compute them. This approach to flow-based computing has yielded state-of-the-art results, and it scales to n-bit Boolean formulas. We have also made original contributions by leveraging artificial intelligence for the automatic synthesis of compact crossbar circuits; this method has been extended to crossbar networks with 1D1M (1-diode-1-memristor) switches as well. The resulting circuits satisfy the tight constraints of the Feynman Grand Prize challenge and are able to perform 8-bit binary addition. Finally, we have implemented end-to-end computation with flow-based crossbars, which involves the methodical translation of loop-free C programs into crossbar circuits via automated synthesis. The original contributions described in this dissertation reflect substantial progress in the area of electronic design automation for the synthesis of memristor crossbar networks.
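
    The ROBDD representation behind the linearly growing crossbar synthesis rests on two reduction rules, sketched below; the mapping from the reduced diagram onto crossbar rows and columns is beyond this snippet, and the node encoding is illustrative.

        # The two ROBDD reduction rules: eliminate redundant tests and share
        # structurally identical nodes through a unique table.
        unique = {}                  # (var, low, high) -> node id
        nodes = {0: None, 1: None}   # ids 0 and 1 are the terminal nodes

        def mk(var, low, high):
            """Return the shared node for: if var then high else low."""
            if low == high:              # redundant test: drop the node
                return low
            key = (var, low, high)
            if key not in unique:        # node sharing via the unique table
                node_id = len(nodes)
                nodes[node_id] = key
                unique[key] = node_id
            return unique[key]

        # x0 AND x1, built bottom-up; repeated subfunctions are created once.
        n_x1 = mk(1, 0, 1)       # x1
        n_and = mk(0, 0, n_x1)   # x0 ? x1 : 0
        print(n_and, nodes[n_and])   # -> 3 (0, 0, 2)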

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state, the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase, which grounds and simplifies parameterized predicates, functions, and operators, infers knowledge to minimize the state description length, and detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines, and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
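
    The parallel-plan construction amounts to a single linear pass that assigns each action its earliest start time over the imposed precedence relation; the critical path then determines the makespan. A sketch with illustrative actions, durations, and precedences.

        def schedule(plan, duration, preds):
            """Earliest start times; plan is in a precedence-respecting order."""
            start = {}
            for a in plan:
                start[a] = max((start[p] + duration[p] for p in preds.get(a, ())),
                               default=0.0)
            return start

        plan = ["load", "drive", "unload"]
        duration = {"load": 2.0, "drive": 5.0, "unload": 2.0}
        preds = {"drive": ["load"], "unload": ["drive"]}
        print(schedule(plan, duration, preds))
        # -> {'load': 0.0, 'drive': 2.0, 'unload': 7.0}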

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases, our own comments are added; their purpose is to show where, how, and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily the country of publication), and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification, and comments.

    Artificial evolution with Binary Decision Diagrams: a study in evolvability in neutral spaces

    This thesis develops a new approach to evolving Binary Decision Diagrams and uses it to study evolvability issues. For reasons that are not yet fully understood, current approaches to artificial evolution fail to exhibit the evolvability so readily exhibited in nature. To be able to apply evolvability to artificial evolution, the field must first understand and characterise it; this will then lead to systems that are much more capable than current ones. An experimental approach is taken. Carefully crafted, controlled experiments elucidate the mechanisms and properties that facilitate evolvability, focusing on the roles of, and interplay between, neutrality, modularity, gradualism, robustness, and diversity. Evolvability is found to emerge under gradual evolution as a biased distribution of functionality within the genotype-phenotype map, which serves to direct phenotypic variation. Neutrality facilitates fitness-conserving exploration, completely alleviating the problem of local optima. Population diversity, in conjunction with neutrality, is shown to facilitate the evolution of evolvability. The search is robust, scalable, and insensitive to the absence of initial diversity. The thesis concludes that gradual evolution in a search space that is free of local optima by way of neutrality can be a viable alternative to problematic evolution on multi-modal landscapes.
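
    The role neutrality plays can be seen in the acceptance rule of even the simplest mutation-based search: accepting equal-fitness mutants lets the search drift along neutral networks instead of stalling on a plateau. A toy sketch with an illustrative bitstring genotype and fitness function, not the thesis's BDD encoding.

        import random

        def neutral_hill_climb(fitness, genome, steps=10_000):
            best = fitness(genome)
            for _ in range(steps):
                mutant = genome[:]
                i = random.randrange(len(mutant))
                mutant[i] ^= 1                 # flip a single bit
                f = fitness(mutant)
                if f >= best:                  # ">=" admits neutral moves
                    genome, best = mutant, f
            return genome, best

        ones = lambda g: sum(g)                # toy fitness: number of 1-bits
        print(neutral_hill_climb(ones, [0] * 16))   # usually reaches all-ones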

    Approximate logic synthesis: a survey

    Approximate computing is an emerging paradigm that, by relaxing the requirement for full accuracy, offers benefits in terms of design area and power consumption. This paradigm is particularly attractive in applications where the underlying computation has inherent resilience to small errors. Such applications are abundant in many domains, including machine learning, computer vision, and signal processing. In circuit design, a major challenge is the ability to synthesize approximate circuits automatically, without relying on the manual expertise of designers. In this work, we review methods devised to synthesize approximate circuits, given their exact functionality and an approximability threshold. We summarize strategies for evaluating the error that circuit simplification can induce on the output, which guides synthesis techniques in choosing the circuit transformations that lead to the largest benefit for a given amount of induced error. We then review circuit simplification methods that operate at the gate or Boolean level, including those that leverage classical Boolean synthesis techniques to realize the approximations. We also summarize strategies that take high-level descriptions, such as C or behavioral Verilog, and synthesize approximate circuits from these descriptions.
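
    For small input counts, the error metrics that guide these methods can be evaluated by exhaustive simulation against the exact circuit; a sketch, with both circuits given as plain Boolean functions.

        from itertools import product

        def error_rate(exact, approx, n_inputs):
            """Fraction of input patterns on which the two circuits disagree."""
            patterns = list(product((0, 1), repeat=n_inputs))
            wrong = sum(exact(*x) != approx(*x) for x in patterns)
            return wrong / len(patterns)

        exact  = lambda a, b, c: (a & b) ^ c   # original function
        approx = lambda a, b, c: c             # simplification: drop the AND
        print(error_rate(exact, approx, 3))    # -> 0.25 (wrong iff a = b = 1)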

    Witness generation in existential CTL model checking

    Hardware and software systems are widely used in applications where failure is prohibitively costly or even unacceptable. The main obstacle to making such systems more reliable and capable of more complex and sensitive tasks is our limited ability to design and implement them with a sufficiently high degree of confidence in their correctness under all circumstances. As an automated technique that verifies the system early in the design phase, model checking explores the state space of the system exhaustively and rigorously to determine whether the system satisfies its specifications, detecting fatal errors that may be missed by simulation and testing. One essential advantage of model checking is its capability to generate witnesses and counterexamples. They are simple and straightforward forms of evidence that prove an existential specification or falsify a universal specification. Besides enhancing the credibility of the model checker's conclusion, they either strengthen engineers' confidence in the system or provide hints to reveal potential defects. In this dissertation, we focus on symbolic model checking with specifications expressed in computation tree logic (CTL), which describes branching-time behaviors of the system, and investigate witness generation techniques for the existential fragment of CTL, i.e., ECTL, covering both decision-diagram-based and SAT-based approaches. Since witnesses provide important debugging information and may be inspected by engineers, smaller ones are always preferable to ease their interpretation and understanding. To the best of our knowledge, no existing witness generation technique guarantees minimality for a general ECTL formula with nested existential CTL operators. One contribution of this dissertation is to fill this gap with a minimality guarantee. With the help of the saturation algorithm, our approach computes the minimum witness size for the given ECTL formula in every state, stored in an additive edge-valued multiway decision diagram (EV+MDD), a variant of the well-known binary decision diagram (BDD), and then builds a minimum witness. Though computationally intensive, this has promising applications in reducing engineers' workload. SAT-based model checking, in particular bounded model checking, reduces a model checking problem to a satisfiability problem and leverages a SAT solver to solve it. Another contribution of this dissertation is an improved translation of the bounded semantics of ECTL into propositional formulas. By exploiting the possibility of path reuse, i.e., that a state may build its own witness by reusing its successor's, we can generate a significantly smaller formula, which is often easier for a SAT solver to answer, and thus boost the performance of bounded model checking.
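
    For the simplest case, EF p, a minimum witness is just a shortest path to a state satisfying p; the dissertation generalises this per-state minimum to nested ECTL formulas and stores it symbolically in EV+MDDs. A breadth-first sketch on an explicit graph, with illustrative state names, conveys the idea.

        from collections import deque

        def min_witness_ef(succ, goal_states, start):
            """Shortest path (a minimum EF-witness) from start to a goal state."""
            parent, queue = {start: None}, deque([start])
            while queue:
                s = queue.popleft()
                if s in goal_states:           # reconstruct the witness path
                    path = []
                    while s is not None:
                        path.append(s)
                        s = parent[s]
                    return path[::-1]
                for t in succ.get(s, ()):
                    if t not in parent:
                        parent[t] = s
                        queue.append(t)
            return None                        # EF p does not hold in start

        succ = {"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"]}
        print(min_witness_ef(succ, {"s3"}, "s0"))   # -> ['s0', 's1', 's3']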

    Highly Automated Formal Verification of Arithmetic Circuits

    This dissertation investigates the problems of two distinctive formal verification techniques for verifying large-scale multiplier circuits and proposes two approaches to overcome some of these problems. The first technique is equivalence checking based on recurrence relations, while the second is symbolic computation based on the theory of Gröbner bases. This investigation demonstrates that approaches based on symbolic computation have better scalability and more robustness than state-of-the-art equivalence checking techniques for the verification of arithmetic circuits. Based on this conclusion, the thesis leverages the symbolic computation technique to verify floating-point designs. It proposes a new algebraic equivalence checking technique; in contrast to classical combinational equivalence checking, the proposed technique is capable of checking the equivalence of two circuits that have different architectures of their arithmetic units as well as of their control logic parts, e.g., floating-point multipliers.
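
    The symbolic computation approach models every gate as a polynomial and declares the circuit correct when the specification polynomial reduces to zero modulo the gate polynomials. For a half adder, that reduction amounts to direct substitution, sketched here with SymPy; the full method over larger circuits relies on Gröbner-basis reduction.

        from sympy import symbols, expand

        a, b = symbols("a b")
        s = a + b - 2*a*b          # XOR gate as a polynomial over 0/1 inputs
        c = a * b                  # AND gate as a polynomial

        spec = 2*c + s - (a + b)   # word-level spec: 2*carry + sum == a + b
        print(expand(spec))        # -> 0, so the circuit implements addition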