
    A constraint solver for software engineering : finding models and cores of large relational specifications

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 105-120).
    Relational logic is an attractive candidate for a software description language, because both the design and implementation of software often involve reasoning about relational structures: organizational hierarchies in the problem domain, architectural configurations in the high-level design, or graphs and linked lists in low-level code. Until recently, however, frameworks for solving relational constraints have had limited applicability. Designed to analyze small, hand-crafted models of software systems, current frameworks perform poorly on specifications that are large or that have partially known solutions. This thesis presents an efficient constraint solver for relational logic, with recent applications to design analysis, code checking, test-case generation, and declarative configuration. The solver provides analyses for both satisfiable and unsatisfiable specifications--a finite model finder for the former and a minimal unsatisfiable core extractor for the latter. It works by translating a relational problem to a boolean satisfiability problem; applying an off-the-shelf SAT solver to the resulting formula; and converting the SAT solver's output back to the relational domain. The idea of solving relational problems by reduction to SAT is not new. The core contributions of this work, instead, are new techniques for expanding the capacity and applicability of SAT-based engines. They include: a new interface to SAT that extends relational logic with a mechanism for specifying partial solutions; a new translation algorithm based on sparse matrices and auto-compacting circuits; a new symmetry detection technique that works in the presence of partial solutions; and a new core extraction algorithm that recycles inferences made at the boolean level to speed up core minimization at the specification level.
    by Emina Torlak. Ph.D.
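
    As a rough illustration of the reduction the abstract describes (and not the thesis's Kodkod solver), the sketch below encodes a binary relation over a three-atom universe as boolean variables, turns a simple relational constraint into CNF clauses, finds a satisfying assignment with a brute-force stand-in for an off-the-shelf SAT solver, and decodes the assignment back into relational tuples. The universe, the constraint, and the enumeration-based solving are all illustrative choices.

```python
# Minimal sketch of solving a relational constraint by reduction to SAT:
# encode the relation as boolean variables, state the constraint as CNF,
# find a satisfying assignment, and decode it back into tuples.
from itertools import product

UNIVERSE = ["a", "b", "c"]
N = len(UNIVERSE)

def var(i, j):
    """Boolean variable id for 'tuple (UNIVERSE[i], UNIVERSE[j]) is in r'."""
    return i * N + j + 1

# Constraint "r is irreflexive and every atom has an outgoing edge", as CNF.
clauses = []
for i in range(N):
    clauses.append([-var(i, i)])                   # no (x, x) tuples
    clauses.append([var(i, j) for j in range(N)])  # some (x, y) tuple exists

def satisfies(assignment, cnf):
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in cnf)

# Stand-in for an off-the-shelf SAT solver: enumerate all assignments.
for bits in product([False, True], repeat=N * N):
    assignment = {v + 1: bits[v] for v in range(N * N)}
    if satisfies(assignment, clauses):
        model = [(UNIVERSE[i], UNIVERSE[j])
                 for i in range(N) for j in range(N) if assignment[var(i, j)]]
        print("relational model:", model)
        break
```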

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Heuristics for the refinement of assumptions in generalized reactivity formulae

    Reactive synthesis is concerned with automatically generating implementations from formal specifications. These specifications are typically written in the language of generalized reactivity (GR(1)), a subset of linear temporal logic capable of expressing the most common industrial specification patterns, and describe the requirements about the behavior of a system under assumptions about the environment where the system is to be deployed. Oftentimes no implementation exists which guarantees the required behavior under all possible environments, typically due to missing assumptions (this is usually referred to as unrealizability). To address this issue, new assumptions need to be added to complete the specification, a problem known as assumptions refinement. Since the space of candidate assumptions is intractably large, searching for the best solutions is inherently hard. In particular, new methods are needed to (i) increase the effectiveness of the search procedures, measured as the ratio between the number of solutions found and of refinements explored; and (ii) improve the results' quality, defined as the weakness of the solutions.
    In this thesis we propose a set of heuristics to meet these goals, and a methodology to assess and compare assumptions refinement methods based on quantitative metrics. The heuristics are in the form of algorithms to generate candidate refinements during the search, and quantitative measures to assess the quality of the candidates.
    We first discuss a heuristic method to generate assumptions that target the cause of unrealizability. This is done by selecting candidate refinement formulas based on Craig's interpolation. We provide a formal underpinning of the technique and evaluate it in terms of our new metric of effectiveness, as defined above, whose value is improved with respect to the state of the art. We demonstrate this on a set of popular benchmarks of embedded software.
    We then provide a formal, quantitative characterization of the permissiveness of environment assumptions in the form of a weakness measure. We prove that the partial order induced by this measure is consistent with the one induced by implication. The key advantage of this measure is that it allows for prioritizing candidate solutions, as we show experimentally.
    Lastly, we propose a notion of minimal refinements with respect to the observed counterstrategies. We demonstrate that exploring minimal refinements produces weaker solutions, and reduces the amount of computation needed to explore each refinement. However, this may come at the cost of reducing the effectiveness of the search. To counteract this effect, we propose a hybrid search approach in which both minimal and non-minimal refinements are explored.
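
    The sketch below is a schematic rendering, not the thesis's algorithms, of the kind of search loop the abstract discusses: candidate assumption refinements are explored in best-first order, with a weakness score used to prioritize more permissive refinements. The is_realizable, generate_candidates, and weakness arguments are stubs standing in for a GR(1) synthesizer, interpolation-based candidate generation, and the quantitative weakness measure; the usage at the end is purely hypothetical.

```python
# Schematic best-first search over candidate assumption refinements.
import heapq

def refine_assumptions(spec, is_realizable, generate_candidates, weakness):
    """Return a set of added assumptions that makes `spec` realizable, or None."""
    frontier = [(0.0, 0, frozenset())]   # (negated weakness, tie-break id, assumptions added)
    explored, counter = set(), 0
    while frontier:
        _, _, added = heapq.heappop(frontier)
        if added in explored:
            continue
        explored.add(added)
        if is_realizable(spec, added):
            return added                  # weakest refinement seen so far wins
        for phi in generate_candidates(spec, added):
            refinement = added | {phi}
            if refinement not in explored:
                counter += 1
                # Weaker (more permissive) refinements are dequeued first.
                heapq.heappush(frontier, (-weakness(refinement), counter, refinement))
    return None

# Toy usage with stubbed components: the "specification" becomes realizable
# once the hypothetical assumption "G F env_ready" has been added.
result = refine_assumptions(
    spec="toy GR(1) spec",
    is_realizable=lambda spec, added: "G F env_ready" in added,
    generate_candidates=lambda spec, added: ["G F env_ready", "G !error"],
    weakness=lambda added: -len(added),   # fewer conjuncts treated as weaker
)
print("refinement found:", set(result))
```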

    Loop Transformations for the Optimized Generation of Reconfigurable Hardware

    Current high-level design environments offer little support for implementing data-intensive applications on heterogeneous-memory systems; they focus instead on parallelism. This thesis addresses the memory hierarchy problem through high-level transformations of loop structures. The composition of long transformation sequences by combining shorter subsequences is studied, together with the influence of the order in which transformation steps are applied. Several methods are presented to estimate bounds on Ehrhart quasi-polynomials, which can be used to statically evaluate program properties such as memory usage. Since loop transformations influence not only the data access pattern but also the control complexity, we present a hardware loop controller architecture that supports hardware generation from the polyhedral representation used for loop transformations. The techniques are demonstrated by the semi-automatic generation of an FPGA implementation of an inverse discrete wavelet transform.
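
    To make the role of Ehrhart quasi-polynomials concrete (a textbook-style illustration, not one of the thesis's estimation methods), the sketch below counts the iterations of a small loop nest, i.e. the integer points of a parametric polytope, and checks the count against its closed-form quasi-polynomial, which has period 2 in the parameter n. Counts of this kind are the quantities one would bound statically to evaluate properties such as memory usage.

```python
# Iteration count of the loop nest
#     for i in 0..n: for j with 0 <= 2*j <= i
# i.e. the number of integer points in {(i, j) : 0 <= 2j <= i <= n},
# given exactly by an Ehrhart quasi-polynomial of period 2.

def iteration_count(n):
    """Enumerate the integer points of {(i, j) : 0 <= 2j <= i <= n}."""
    return sum(1 for i in range(n + 1) for j in range(i + 1) if 2 * j <= i)

def ehrhart_quasi_polynomial(n):
    """Closed form: one polynomial per residue class of n mod 2."""
    return (n + 2) ** 2 // 4 if n % 2 == 0 else (n + 1) * (n + 3) // 4

for n in range(10):
    assert iteration_count(n) == ehrhart_quasi_polynomial(n)
print("enumeration matches the quasi-polynomial for n = 0..9")
```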

    Seventh Biennial Report : June 2003 - March 2005


    On The Design Of Low-Complexity High-Speed Arithmetic Circuits In Quantum-Dot Cellular Automata Nanotechnology

    For the last four decades, the implementation of very large-scale integrated systems has largely been based on complementary metal-oxide semiconductor (CMOS) technology. However, this technology has reached its physical limitations. Emerging nanoscale technologies such as quantum-dot cellular automata (QCA), single electron tunneling (SET), and tunneling phase logic (TPL) are major candidates for possible replacements of CMOS. These nanotechnologies use majority and/or minority logic and inverters as circuit primitives. In this dissertation, a comprehensive methodology for majority/minority logic network synthesis is developed. This method is capable of processing any arbitrary multi-output Boolean function to find its equivalent optimal majority logic network, optimizing either the number of gates or the number of levels. The proposed method produces several primary equivalent majority expression networks, from which the most optimized network is generated as the final solution. The results obtained for 15 MCNC benchmark circuits show that when the number of majority gates is the first optimization priority, there is an average reduction of 45.3% in the number of gates and 15.1% in the number of levels. They also show that when the first priority is the number of levels, an average reduction of 23.5% in the number of levels and 43.1% in the number of gates is possible, compared to the majority AND/OR mapping method. These results are better than those obtained from the best existing methods.
    In this dissertation, our approach exploits QCA technology because of its capability to implement high-density, very high-speed switching, and tremendously low-power integrated systems, and because it is more amenable to digital circuit design. In particular, we have developed algorithms for the QCA design of various single- and multi-operation arithmetic arrays. Even though majority/minority gates are the basic units in promising nanotechnologies, an XOR function can be constructed in QCA as a single device. The basic cells of the proposed arrays are developed based on the fundamental logic devices in QCA and a single-layer structure of the three-input XOR function. This process leads to QCA arithmetic circuits with better results in terms of different aspects such as cell count, area, and latency, compared to their best counterparts. The proposed arrays can be formed in a pipelined manner to perform the arithmetic operations for any number of bits, which could be quite valuable when considering the future design of large-scale QCA circuits.
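
    As a small, self-contained illustration of the majority/inverter primitives mentioned in the abstract (and not of the dissertation's synthesis algorithms or QCA layouts), the sketch below builds a one-bit full adder from three majority gates and two inverters, using the classical mapping Cout = M(a, b, cin) and Sum = M(NOT Cout, cin, M(a, b, NOT cin)), and verifies it exhaustively.

```python
# One-bit full adder expressed in majority/inverter logic.
from itertools import product

def maj(a, b, c):
    """Three-input majority gate: true when at least two inputs are true."""
    return (a and b) or (b and c) or (a and c)

def full_adder(a, b, cin):
    cout = maj(a, b, cin)                          # carry is a single majority gate
    s = maj(not cout, cin, maj(a, b, not cin))     # sum: two more majority gates, two inverters
    return int(s), int(cout)

# Exhaustive check against ordinary binary addition.
for a, b, cin in product([0, 1], repeat=3):
    s, cout = full_adder(bool(a), bool(b), bool(cin))
    assert 2 * cout + s == a + b + cin
print("majority-gate full adder verified for all 8 input combinations")
```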

    Human-machine communication for educational systems design
