A Graph Rewriting Approach for Transformational Design of Digital Systems
Transformational design integrates design and verification. It combines "correctness by construction" and design creativity through the use of pre-proven, behaviour-preserving transformations as design steps. The formal aspects of this methodology are hidden in the transformations. A constraint is the availability of a design representation with a compositional formal semantics. Graph representations are useful design representations because they visualise design information. In this paper, graph rewriting theory, as developed over the last twenty years in mathematics, is shown to be a useful basis for a formal framework for transformational design. The semantic aspects of graphs, which are not part of graph rewriting theory, are included through the use of attributed graphs. The attribute algebra used, table algebra, is a relation algebra derived from database theory. The combination of graph rewriting, table algebra and transformational design is new.
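As a toy illustration of a behaviour-preserving rewrite on an attributed graph (a sketch for this summary, not the paper's table-algebra formalism), consider merging common subexpressions in a small dataflow graph:

```python
# A minimal sketch, not the paper's formalism: a tiny attributed dataflow graph
# and one behaviour-preserving rewrite (merging common subexpressions).
# The Node class and the rule are illustrative assumptions.

class Node:
    def __init__(self, op, inputs=()):
        self.op = op                 # attribute: operation name ("in", "add", ...)
        self.inputs = list(inputs)   # edges to operand nodes

def merge_common_subexpressions(nodes):
    """If two nodes apply the same operation to the same operands, redirect
    all uses of the duplicate to a single representative node."""
    seen, replacement = {}, {}
    for n in nodes:
        key = (n.op, tuple(id(i) for i in n.inputs))
        if key in seen:
            replacement[n] = seen[key]       # n is redundant
        else:
            seen[key] = n
    kept = [n for n in nodes if n not in replacement]
    for n in kept:                           # rewrite edges to the representatives
        n.inputs = [replacement.get(i, i) for i in n.inputs]
    return kept

# Example: y1 = a + b and y2 = a + b collapse into a single adder,
# leaving the observable behaviour of the design unchanged.
a, b = Node("in"), Node("in")
y1, y2 = Node("add", (a, b)), Node("add", (a, b))
out = Node("mul", (y1, y2))
print(len(merge_common_subexpressions([a, b, y1, y2, out])))  # 4
```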
Using ACL2 to Verify Loop Pipelining in Behavioral Synthesis
Behavioral synthesis involves compiling an Electronic System-Level (ESL)
design into its Register-Transfer Level (RTL) implementation. Loop pipelining
is one of the most critical and complex transformations employed in behavioral
synthesis. Certifying the loop pipelining algorithm is challenging because
there is a huge semantic gap between the input sequential design and the output
pipelined implementation making it infeasible to verify their equivalence with
automated sequential equivalence checking techniques. We discuss our ongoing
effort using ACL2 to certify the loop pipelining transformation. The completion of
the proof is work in progress. However, some of the insights developed so far
may already be of value to the ACL2 community. In particular, we discuss the
key invariant we formalized, which is very different from that used in most
pipeline proofs. We discuss the need for this invariant, its formalization in
ACL2, and our envisioned proof using the invariant. We also discuss some
trade-offs, challenges, and insights developed in the course of the project.
Comment: In Proceedings ACL2 2014, arXiv:1406.123
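To give a flavour of what the transformation must preserve, here is a hedged sketch (not the ACL2 development itself): a toy two-stage loop body, a schedule that overlaps stages of adjacent iterations, and a brute-force check standing in for the correspondence the paper's invariant captures inductively. All names are illustrative.

```python
# Toy example only: a sequential loop and its overlapped (pipelined) schedule
# must compute the same outputs. The mechanized proof replaces the testing
# below with an invariant over the in-flight pipeline state.

def sequential(a):
    out = []
    for x in a:
        t = x * 2              # stage 1
        out.append(t + 1)      # stage 2
    return out

def pipelined(a):
    out, t_prev = [], None
    for i in range(len(a) + 1):        # one "cycle" per row of the schedule
        if i > 0:
            out.append(t_prev + 1)     # stage 2 of iteration i-1
        if i < len(a):
            t_prev = a[i] * 2          # stage 1 of iteration i, overlapped
    return out

samples = [[], [1], [1, 2], [3, 1, 4], [7, 7, 7, 7]]
assert all(sequential(a) == pipelined(a) for a in samples)
print("toy sequential and pipelined schedules agree on all samples")
```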
Symbolic QED Pre-silicon Verification for Automotive Microcontroller Cores: Industrial Case Study
We present an industrial case study that demonstrates the practicality and
effectiveness of Symbolic Quick Error Detection (Symbolic QED) in detecting
logic design flaws (logic bugs) during pre-silicon verification. Our study
focuses on several microcontroller core designs (~1,800 flip-flops, ~70,000
logic gates) that have been extensively verified using an industrial
verification flow and used for various commercial automotive products. The
results of our study are as follows: 1. Symbolic QED detected all logic bugs in
the designs that were detected by the industrial verification flow (which
includes various flavors of simulation-based verification and formal
verification). 2. Symbolic QED detected additional logic bugs that were not
recorded as detected by the industrial verification flow. (These additional
bugs may have been detected by the industrial verification flow without being recorded.) 3.
Symbolic QED enables significant design productivity improvements: (a) 8X
improved (i.e., reduced) verification effort for a new design (8 person-weeks
for Symbolic QED vs. 17 person-months using the industrial verification flow).
(b) 60X improved verification effort for subsequent designs (2 person-days for
Symbolic QED vs. 4-7 person-months using the industrial verification flow). (c)
Quick bug detection (runtime of 20 seconds or less), together with short
counterexamples (10 or fewer instructions) for quick debug, using Symbolic QED.
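For readers unfamiliar with QED-style checking, the toy sketch below (an assumption-laden illustration, not the Symbolic QED tool) shows the underlying self-consistency idea: duplicate each instruction into shadow registers and flag any divergence as a likely logic bug.

```python
# Illustrative sketch only. The toy processor model and the injected bug are
# assumptions; the real technique combines QED transformations with bounded
# model checking on the RTL design.

NUM_REGS = 16
SHADOW = 8                  # registers 8..15 shadow registers 0..7

def step(regs, instr):
    op, rd, rs1, rs2 = instr
    result = regs[rs1] + regs[rs2] if op == "add" else regs[rs1] - regs[rs2]
    if rd == 3 and op == "add":
        result &= ~1        # injected logic bug: adds targeting r3 lose the low bit
    regs[rd] = result

def qed_run(program, inputs):
    regs = [0] * NUM_REGS
    regs[0], regs[1] = inputs
    regs[SHADOW], regs[SHADOW + 1] = inputs                          # duplicated inputs
    for op, rd, rs1, rs2 in program:
        step(regs, (op, rd, rs1, rs2))                               # original
        step(regs, (op, rd + SHADOW, rs1 + SHADOW, rs2 + SHADOW))    # duplicate
        if regs[rd] != regs[rd + SHADOW]:
            return f"mismatch after ({op} r{rd}, r{rs1}, r{rs2}): likely logic bug"
    return "no mismatch observed"

print(qed_run([("add", 2, 0, 1), ("add", 3, 2, 1)], (5, 6)))
```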
Abstract State Machines 1988-1998: Commented ASM Bibliography
An annotated bibliography of papers which deal with or use Abstract State
Machines (ASMs), as of January 1998.
Comment: Also maintained as a BibTeX file at http://www.eecs.umich.edu/gasm
KARL: A Knowledge-Assisted Retrieval Language
Data classification and storage are tasks typically performed by application specialists. In contrast, information users are primarily non-computer specialists who use information in their decision-making and other activities. Interaction efficiency between such users and the computer is often reduced by machine requirements and the resulting user reluctance to use the system. This thesis examines the problems associated with information retrieval for non-computer-specialist users, and proposes a method for communicating in restricted English that uses knowledge of the entities involved, the relationships between entities, and basic English syntax and semantics to translate user requests into formal queries. The proposed method includes an intelligent dictionary, syntax and semantic verifiers, and a formal query generator. In addition, the proposed system has a learning capability that can improve portability and performance. With the increasing demand for efficient human-machine communication, the significance of this thesis becomes apparent. As human resources become more valuable, software systems that assist in improving the human-machine interface will be needed, and research addressing new solutions will be of utmost importance. This thesis presents an initial design and implementation as a foundation for further research and development in the emerging field of natural language database query systems.
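A minimal sketch of the kind of pipeline the thesis describes (dictionary lookup, simple syntactic and semantic checks, formal query generation); the table names, patterns, and dictionary contents are invented for illustration and are not KARL's.

```python
# Toy restricted-English-to-query translator: dictionary-driven, with a
# syntactic check (regular-expression pattern) and a semantic check (known
# entities and fields). All vocabulary and schema names are assumptions.

import re

DICTIONARY = {
    "employees": ("employee", None),            # entity -> (table, filter)
    "managers":  ("employee", "role = 'manager'"),
    "salary":    ("employee", None),
}

def translate(request):
    """Handles one pattern: 'show the <field> of all <entity>'."""
    m = re.match(r"show the (\w+) of all (\w+)", request.lower())
    if not m:
        raise ValueError("request not in the restricted-English subset")   # syntax check
    field, entity = m.groups()
    if field not in DICTIONARY or entity not in DICTIONARY:
        raise ValueError("unknown entity or field")                        # semantic check
    table, cond = DICTIONARY[entity]
    query = f"SELECT {field} FROM {table}"
    return query + (f" WHERE {cond}" if cond else "")

print(translate("show the salary of all managers"))
# SELECT salary FROM employee WHERE role = 'manager'
```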
Experiments with a Convex Polyhedral Analysis Tool for Logic Programs
Convex polyhedral abstractions of logic programs have been found very useful
in deriving numeric relationships between program arguments in order to prove
program properties and in other areas such as termination and complexity
analysis. We present a tool for constructing polyhedral analyses of
(constraint) logic programs. The aim of the tool is to make available, with a
convenient interface, state-of-the-art techniques for polyhedral analysis such
as delayed widening, narrowing, "widening up-to", and enhanced automatic
selection of widening points. The tool is accessible on the web, permits user
programs to be uploaded and analysed, and is integrated with related program
transformations such as size abstractions and query-answer transformation. We
then report some experiments using the tool, showing how it can be conveniently
used to analyse transition systems arising from models of embedded systems, and
an emulator for a PIC microcontroller which is used for example in wearable
computing systems. We discuss issues including scalability, tradeoffs of
precision and computation time, and other program transformations that can
enhance the results of analysis.
Comment: Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE2007)
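As a rough illustration of the widening machinery such tools rely on (using the simpler interval domain rather than convex polyhedra, and an invented example loop):

```python
# Sketch of delayed widening on intervals for the assumed loop
# x := 0; while x < 100: x := x + 1. Widening extrapolates bounds that keep
# growing so the fixpoint iteration terminates.

INF = float("inf")

def join(a, b):                       # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):                  # drop bounds that are still moving
    lo = old[0] if new[0] >= old[0] else -INF
    hi = old[1] if new[1] <= old[1] else INF
    return (lo, hi)

def transfer(x):
    """Abstract effect of one iteration guarded by x < 100: x := x + 1."""
    lo, hi = x
    if lo >= 100:
        return None                   # guard unsatisfiable: bottom
    return (lo + 1, min(hi, 99) + 1)

def analyse(delay=3):
    x = (0, 0)                        # abstract state at the loop head
    while True:
        body = transfer(x)
        nxt = x if body is None else join(x, body)
        if nxt == x:                  # post-fixpoint reached
            return x
        x = nxt if delay > 0 else widen(x, nxt)   # delayed widening
        delay -= 1

print(analyse())   # (0, inf); a narrowing pass would tighten this to (0, 100)
```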
Validation of the new Hipparcos reduction
Context. A new reduction of the astrometric data as produced by the Hipparcos
mission has been published, claiming accuracies for nearly all stars brighter
than magnitude Hp = 8 to be better, by up to a factor 4, than in the original
catalogue. Aims. The new Hipparcos astrometric catalogue is checked for the
quality of the data and the consistency of the formal errors as well as the
possible presence of error correlations. The differences with the earlier
publication are explained. Methods. The internal errors are followed through
the reduction process, and the external errors are investigated on the basis of
a comparison with radio observations of a small selection of stars, and the
distribution of negative parallaxes. Error correlation levels are investigated
and the reduction by more than a factor 10 as obtained in the new catalogue is
explained. Results. The formal errors on the parallaxes for the new catalogue
are confirmed. The presence of a small amount of additional noise, though
unlikely, cannot be ruled out. Conclusions. The new reduction of the Hipparcos
astrometric data provides an improvement by a factor 2.2 in the total weight
compared to the catalogue published in 1997, and provides much improved data
for a wide range of studies on stellar luminosities and local galactic
kinematics.
Comment: 12 pages, 19 figures, accepted for publication by Astronomy and Astrophysics
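As a side note on the terminology (an addition to this summary, not part of the abstract): taking weight in its usual sense of inverse variance, the quoted factor 2.2 in total weight corresponds to formal errors smaller by roughly a factor 1.5 on average:

```latex
% assuming the standard definition of statistical weight, w \propto 1/\sigma^2
\frac{w_{\mathrm{new}}}{w_{\mathrm{old}}} = 2.2
\quad\Longrightarrow\quad
\frac{\sigma_{\mathrm{new}}}{\sigma_{\mathrm{old}}} = \frac{1}{\sqrt{2.2}} \approx 0.67
```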
Computer-aided verification in mechanism design
In mechanism design, the gold standard solution concepts are dominant
strategy incentive compatibility and Bayesian incentive compatibility. These
solution concepts relieve the (possibly unsophisticated) bidders from the need
to engage in complicated strategizing. While incentive properties are simple to
state, their proofs are specific to the mechanism and can be quite complex.
This raises two concerns. From a practical perspective, checking a complex
proof can be a tedious process, often requiring experts knowledgeable in
mechanism design. Furthermore, from a modeling perspective, if unsophisticated
agents are unconvinced of incentive properties, they may strategize in
unpredictable ways.
To address both concerns, we explore techniques from computer-aided
verification to construct formal proofs of incentive properties. Because formal
proofs can be automatically checked, agents do not need to manually check the
properties, or even understand the proof. To demonstrate, we present the
verification of a sophisticated mechanism: the generic reduction from Bayesian
incentive compatible mechanism design to algorithm design given by Hartline,
Kleinberg, and Malekian. This mechanism presents new challenges for formal
verification, including essential use of randomness from both the execution of
the mechanism and from the prior type distributions. As an immediate
consequence, our work also formalizes Bayesian incentive compatibility for the
entire family of mechanisms derived via this reduction. Finally, as an
intermediate step in our formalization, we provide the first formal
verification of incentive compatibility for the celebrated
Vickrey-Clarke-Groves mechanism.
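As a down-to-earth companion to the formal result, the sketch below brute-forces dominant-strategy incentive compatibility for the second-price (Vickrey) auction on a small discrete grid; the grid and quasi-linear utility model are assumptions of this example, and the paper's machine-checked proofs, including the randomized Hartline-Kleinberg-Malekian reduction, go far beyond such enumeration.

```python
# Brute-force DSIC check for a second-price auction on a small grid of values
# and bids. This is an illustration of the property being verified, not the
# paper's formal development.

from itertools import product

def vickrey(bids):
    """Highest bidder wins and pays the second-highest bid."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def utility(value, bids, i):
    winner, price = vickrey(bids)
    return value - price if winner == i else 0

GRID = range(5)

def is_dsic(n=3):
    """For every value, every profile of the other agents' bids, and every
    deviating bid, truthful bidding must do at least as well."""
    for bids in product(GRID, repeat=n):          # others' bids are arbitrary
        for i in range(n):
            for value in GRID:
                truthful = list(bids); truthful[i] = value
                for deviation in GRID:
                    lie = list(bids); lie[i] = deviation
                    if utility(value, lie, i) > utility(value, truthful, i):
                        return False
    return True

print(is_dsic())   # True: no profitable deviation exists on this grid
```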