
    Partially defined computer instructions and guards

    We extend our earlier work on the theory of computer instructions to consider instructions which are only partially defined. For every such instruction, we assume that it is defined whenever a certain Boolean expression is true; following Dijkstra, we refer to such a Boolean expression as a guard. This is a special case of a more general function on the set of states of a computer, representing an expression in a programming language. Many constructs for instructions generalize to partially defined instructions; in particular, we define the notion of conditional input and output regions, as well as the relevant region of a more general expression. Fundamental theorems about instructions generalize to theorems about guards and about partially defined instructions. We also define the parallel execution of such instructions, which is useful in validating a generalized instruction commutativity criterion.
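    To make the idea concrete, here is a minimal Python sketch (not the paper's formalism) of a partially defined instruction guarded by a Boolean expression over the state; all names in it are invented for illustration.

        def guarded(guard, action):
            """Build a partial instruction: defined only on states where `guard` holds."""
            def instruction(state):
                if not guard(state):
                    raise ValueError("instruction undefined in this state")
                return action(state)
            return instruction

        # The guard is a Boolean expression over the state, in Dijkstra's sense.
        x_even = lambda s: s["x"] % 2 == 0
        halve_x = lambda s: {**s, "x": s["x"] // 2}

        halve_if_even = guarded(x_even, halve_x)
        print(halve_if_even({"x": 8}))    # {'x': 4}
        # halve_if_even({"x": 7}) would raise: the instruction is undefined there.

    A totally defined instruction is then the special case whose guard is identically true.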

    On the Expressive Power of Multiple Heads in CHR

    Constraint Handling Rules (CHR) is a committed-choice declarative language which was originally designed for writing constraint solvers and is nowadays a general-purpose language. CHR programs consist of multi-headed guarded rules which allow constraints to be rewritten into simpler ones until a solved form is reached. Much empirical evidence suggests that multiple heads augment the expressive power of the language; however, so far no formal result in this direction has been proved. In the first part of this paper we analyze the Turing completeness of CHR with respect to the underlying constraint theory. We prove that if the constraint theory is powerful enough, then restricting to single-head rules does not affect the Turing completeness of the language. On the other hand, differently from the multi-headed language, the single-head CHR language is not Turing powerful when the underlying signature (for the constraint theory) does not contain function symbols. In the second part we prove that, no matter which constraint theory is considered, under some reasonable assumptions it is not possible to encode the CHR language (with multi-headed rules) into a single-headed language while preserving the semantics of programs. We also show that, under some stronger assumptions, allowing an increasing number of atoms in the head of a rule augments the expressive power of the language. These results provide a formal proof of the claim that multiple heads augment the expressive power of the CHR language.
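    CHR's own syntax would be the natural choice here, but to keep all examples in one language, the following Python sketch mimics a classic two-headed simplification rule, leq(X,Y), leq(Y,X) <=> X = Y, whose joint match on two constraints is exactly what a single head cannot express directly; the store representation is invented for the sketch.

        def antisymmetry(store):
            # Fire leq(X,Y), leq(Y,X) <=> eq(X,Y) once, if two heads match jointly.
            for c1 in store:
                for c2 in store:
                    if c1 is not c2 and c1[0] == c2[0] == "leq":
                        _, x, y = c1
                        _, y2, x2 = c2
                        if x == x2 and y == y2:   # both heads match together
                            store.remove(c1)
                            store.remove(c2)
                            store.append(("eq", x, y))
                            return True
            return False

        store = [("leq", "a", "b"), ("leq", "b", "a")]
        while antisymmetry(store):                # rewrite until no rule applies
            pass
        print(store)                              # [('eq', 'a', 'b')]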

    Learning a Static Analyzer from Data

    To be practically useful, modern static analyzers must precisely model the effect both of statements in the programming language and of frameworks used by the program under analysis. While important, manually addressing these challenges is difficult for at least two reasons: (i) the effects on the overall analysis can be non-trivial, and (ii) as the size and complexity of modern libraries increase, so does the number of cases the analysis must handle. In this paper we present a new, automated approach for creating static analyzers: instead of manually providing the various inference rules of the analyzer, the key idea is to learn these rules from a dataset of programs. Our method consists of two ingredients: (i) a synthesis algorithm capable of learning a candidate analyzer from a given dataset, and (ii) a counter-example guided learning procedure which generates new programs beyond those in the initial dataset, critical for discovering corner cases and ensuring that the learned analysis generalizes to unseen programs. We implemented and instantiated our approach for the task of learning JavaScript static analysis rules for a subset of points-to analysis and for allocation-site analysis. These are challenging yet important problems that have received significant research attention. We show that our approach is effective: our system automatically discovered practical and useful inference rules for many cases that are tricky to identify manually and are missed by state-of-the-art, manually tuned analyzers.
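    The two-ingredient structure described above can be sketched as a counter-example guided loop; synthesize, find_counterexample, and the round budget below are placeholders, not the paper's actual API.

        def learn_analyzer(dataset, synthesize, find_counterexample, max_rounds=100):
            """Alternate rule synthesis with counter-example search until none is found."""
            for _ in range(max_rounds):
                candidate = synthesize(dataset)        # fit candidate rules to current data
                cex = find_counterexample(candidate)   # a program the candidate mishandles
                if cex is None:
                    return candidate                   # no corner case left to expose
                dataset = dataset + [cex]              # grow the dataset and refine
            raise RuntimeError("no stable analyzer within the round budget")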

    A wide-spectrum language for verification of programs on weak memory models

    Modern processors deploy a variety of weak memory models, which for efficiency reasons may (appear to) execute instructions in an order different from that specified by the program text. The consequences of instruction reordering can be complex and subtle, and can impact correctness. Previous work on the semantics of weak memory models has focussed on the behaviour of assembler-level programs. In this paper we utilise that work to extract some general principles underlying instruction reordering, and apply those principles to a wide-spectrum language encompassing abstract data types as well as low-level assembler code. The goal is to support reasoning about implementations of data structures for modern processors with respect to an abstract specification. Specifically, we define an operational semantics, from which we derive some properties of program refinement, and encode the semantics in the rewriting engine Maude as a model-checking tool. The tool is used to validate the semantics against the behaviour of a set of litmus tests (small assembler programs) run on hardware, and also to model-check implementations of data structures from the literature against their abstract specifications.
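    A standard way to see why reordering matters is the store-buffering litmus test. The brute-force Python enumerator below (an illustration, not the paper's Maude model) shows that the outcome r1 = r2 = 0 is unreachable when program order is kept, but becomes observable once each thread's store may be reordered after its load.

        T1 = [("store", "x", 1), ("load", "y", "r1")]   # thread 1: x := 1; r1 := y
        T2 = [("store", "y", 1), ("load", "x", "r2")]   # thread 2: y := 1; r2 := x

        def interleavings(a, b):
            # All shuffles of the two threads' operations, preserving each thread's order.
            if not a: yield list(b); return
            if not b: yield list(a); return
            for rest in interleavings(a[1:], b): yield [a[0]] + rest
            for rest in interleavings(a, b[1:]): yield [b[0]] + rest

        def run(trace):
            mem, regs = {"x": 0, "y": 0}, {}
            for op, loc, arg in trace:
                if op == "store": mem[loc] = arg
                else: regs[arg] = mem[loc]
            return tuple(sorted(regs.items()))

        def outcomes(t1, t2):
            return {run(tr) for tr in interleavings(t1, t2)}

        forbidden = (("r1", 0), ("r2", 0))
        print(forbidden in outcomes(T1, T2))              # False: impossible under SC
        print(forbidden in outcomes(T1[::-1], T2[::-1]))  # True: store/load reordered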

    Stateman: Using Metafunctions to Manage Large Terms Representing Machine States

    When ACL2 is used to model the operational semantics of computing machines, machine states are typically represented by terms recording the contents of the state components. When models are realistic and are stepped through thousands of machine cycles, these terms can grow quite large, and the cost of simplifying them on each step grows. In this paper we describe an ACL2 book that uses HIDE and metafunctions to facilitate the management of large terms representing such states. Because the metafunctions for each state component updater are solely responsible for creating state expressions (i.e., "writing") and the metafunctions for each state component accessor are solely responsible for extracting values (i.e., "reading") from such state expressions, they can maintain their own normal form, use HIDE to prevent other parts of ACL2 from inspecting them, and use honsing to uniquely represent state expressions. The last feature makes it possible to memoize the metafunctions, which can improve proof performance in some machine models. This paper describes a general-purpose ACL2 book modeling a byte-addressed memory supporting "mixed" reads and writes. By "mixed" we mean that reads need not correspond (in address or number of bytes) with writes. Verified metafunctions simplify such "read-over-write" expressions while hiding the potentially large state expression. A key utility is a function that determines an upper bound on the value of a symbolic arithmetic expression, which plays a role in resolving writes to addresses given by symbolic expressions. We also report on a preliminary experiment with the book, which involves the production of states containing several million function calls.
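    The following Python sketch (a simplification of the book's idea, handling only single-cell reads at concrete addresses rather than mixed multi-byte accesses) shows read-over-write normalization: a read resolves against the innermost matching write and looks past disjoint ones.

        def write(mem, addr, val):
            # A state term: each write wraps the previous memory term.
            return ("write", addr, val, mem)

        def read(mem, addr):
            """Simplify read-over-write: hit the matching write, skip the others."""
            while isinstance(mem, tuple) and mem[0] == "write":
                _, a, v, inner = mem
                if a == addr:
                    return v              # the read resolves to this write's value
                mem = inner               # disjoint address: look past the write
            return ("read0", addr)        # fell through to the initial memory

        m = write(write("m0", 100, 7), 104, 9)
        print(read(m, 100))   # 7, resolved through the unrelated write at 104
        print(read(m, 200))   # ('read0', 200): stays symbolic over the initial memory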

    Using ACL2 to Verify Loop Pipelining in Behavioral Synthesis

    Behavioral synthesis involves compiling an Electronic System-Level (ESL) design into its Register-Transfer Level (RTL) implementation. Loop pipelining is one of the most critical and complex transformations employed in behavioral synthesis. Certifying the loop pipelining algorithm is challenging because there is a huge semantic gap between the input sequential design and the output pipelined implementation, making it infeasible to verify their equivalence with automated sequential equivalence checking techniques. We discuss our ongoing effort using ACL2 to certify the loop pipelining transformation. The completion of the proof is work in progress. However, some of the insights developed so far may already be of value to the ACL2 community. In particular, we discuss the key invariant we formalized, which is very different from that used in most pipeline proofs. We discuss the need for this invariant, its formalization in ACL2, and our envisioned proof using the invariant. We also discuss some trade-offs, challenges, and insights developed in the course of the project.
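    To picture the semantic gap informally, the Python sketch below contrasts a sequential loop with a two-stage pipelined schedule of the same body and checks them on concrete data; this is a testing-level analogue of the equivalence the proof certifies, not the synthesis algorithm itself.

        f = lambda x: x + 1              # stage 1 of the loop body
        g = lambda t: 2 * t              # stage 2 of the loop body

        def sequential(xs):
            # Reference semantics: each iteration finishes before the next begins.
            return [g(f(x)) for x in xs]

        def pipelined(xs):
            # Overlapped schedule: stage 1 of iteration i runs alongside
            # stage 2 of iteration i-1; one extra cycle drains the pipeline.
            ys, t = [], None
            for i in range(len(xs) + 1):
                if t is not None:
                    ys.append(g(t))
                t = f(xs[i]) if i < len(xs) else None
            return ys

        xs = [1, 2, 3, 4]
        assert sequential(xs) == pipelined(xs)   # same results, overlapped schedule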

    Flexible Invariants Through Semantic Collaboration

    Modular reasoning about class invariants is challenging in the presence of dependencies among collaborating objects that need to maintain global consistency. This paper presents semantic collaboration: a novel methodology to specify and reason about class invariants of sequential object-oriented programs, which models dependencies between collaborating objects by semantic means. Combined with a simple ownership mechanism and useful default schemes, semantic collaboration achieves the flexibility necessary to reason about complicated inter-object dependencies, while requiring a limited annotation burden when applied to standard specification patterns. The methodology is implemented in AutoProof, our program verifier for the Eiffel programming language (but it is applicable to any language supporting some form of representation invariants). An evaluation on several challenge problems proposed in the literature demonstrates that it can handle a variety of idiomatic collaboration patterns, and is more widely applicable than existing invariant methodologies.
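    The kind of inter-object dependency at issue can be pictured with a subject/observer pair whose invariant spans both objects; the Python sketch below is purely illustrative, since the paper's specifications live in Eiffel/AutoProof.

        class Subject:
            def __init__(self):
                self.value = 0
                self.observers = []          # objects whose invariants mention us

            def update(self, v):
                self.value = v
                for o in self.observers:     # re-establish dependent invariants
                    o.sync()

        class Observer:
            def __init__(self, subject):
                self.subject = subject
                subject.observers.append(self)
                self.cache = subject.value

            def sync(self):
                self.cache = self.subject.value

            def invariant(self):             # must hold in every visible state
                return self.cache == self.subject.value

        s = Subject()
        o = Observer(s)
        s.update(42)
        assert o.invariant()   # holds only because update() notified the observer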