
    An ACL2 Mechanization of an Axiomatic Framework for Weak Memory

    Proving the correctness of programs written for multiple processors is a challenging problem, due in no small part to the weaker memory guarantees afforded by most modern architectures. In particular, the existence of store buffers means that the programmer can no longer assume that writes to different locations become visible to all processors in the same order. However, all practical architectures do provide a collection of weaker guarantees about memory consistency across processors, which enable the programmer to write provably correct programs in spite of a lack of full sequential consistency. In this work, we present a mechanization in the ACL2 theorem prover of an axiomatic weak memory model (introduced by Alglave et al.). In the process, we provide a new proof of an established theorem involving these axioms.

    Comment: In Proceedings ACL2 2014, arXiv:1406.123
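    To make the store-buffer effect concrete, the following is a minimal sketch of the classic store-buffering (SB) litmus test in C11 (our illustration, not code from the paper). With relaxed atomics, both threads may read 0, an outcome that sequential consistency forbids:

```c
/* SB (store buffering) litmus test: an illustrative C11 sketch, not from
 * the paper. Each thread writes one location, then reads the other. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int x, y;
int r0, r1;

void *thread0(void *arg) {
    atomic_store_explicit(&x, 1, memory_order_relaxed); /* write x */
    r0 = atomic_load_explicit(&y, memory_order_relaxed); /* read y  */
    return NULL;
}

void *thread1(void *arg) {
    atomic_store_explicit(&y, 1, memory_order_relaxed); /* write y */
    r1 = atomic_load_explicit(&x, memory_order_relaxed); /* read x  */
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, thread0, NULL);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* Under sequential consistency at least one thread must see the
     * other's store, so r0 == 0 && r1 == 0 is impossible; store buffers
     * (and relaxed atomics) permit it. */
    printf("r0=%d r1=%d%s\n", r0, r1,
           (r0 == 0 && r1 == 0) ? "  (weak outcome)" : "");
    return 0;
}
```

    Compile with -pthread; the weak outcome typically shows up only occasionally across many runs.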

    A wide-spectrum language for verification of programs on weak memory models

    Modern processors deploy a variety of weak memory models, which for efficiency reasons may (appear to) execute instructions in an order different to that specified by the program text. The consequences of instruction reordering can be complex and subtle, and can complicate ensuring correctness. Previous work on the semantics of weak memory models has focussed on the behaviour of assembler-level programs. In this paper we utilise that work to extract some general principles underlying instruction reordering, and apply those principles to a wide-spectrum language encompassing abstract data types as well as low-level assembler code. The goal is to support reasoning about implementations of data structures for modern processors with respect to an abstract specification. Specifically, we define an operational semantics, from which we derive some properties of program refinement, and encode the semantics in the rewriting engine Maude as a model-checking tool. The tool is used to validate the semantics against the behaviour of a set of litmus tests (small assembler programs) run on hardware, and also to model check implementations of data structures from the literature against their abstract specifications.
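    As an example of the litmus tests mentioned above, here is an illustrative C11 sketch of the message-passing (MP) test, a standard member of such suites (our rendering; the paper itself encodes the semantics in Maude rather than C):

```c
/* MP (message passing) litmus test: an illustrative C11 sketch. Thread 0
 * publishes data and then raises a flag; with relaxed atomics, thread 1
 * may observe the flag and still read stale data. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int data, flag;
int r_flag, r_data;

void *producer(void *arg) {
    atomic_store_explicit(&data, 1, memory_order_relaxed);
    /* memory_order_release here would forbid the weak outcome */
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
    return NULL;
}

void *consumer(void *arg) {
    /* memory_order_acquire here would pair with a release store above */
    r_flag = atomic_load_explicit(&flag, memory_order_relaxed);
    r_data = atomic_load_explicit(&data, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, producer, NULL);
    pthread_create(&t1, NULL, consumer, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    /* flag == 1 with data == 0 is the reordering-induced weak outcome */
    printf("flag=%d data=%d%s\n", r_flag, r_data,
           (r_flag == 1 && r_data == 0) ? "  (weak outcome)" : "");
    return 0;
}
```

    The same flag/data publication pattern underlies many data-structure implementations, which is why such tests matter when checking them against abstract specifications.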

    Semantic Criteria of Correct Formalization

    This paper compares several models of formalization. It articulates criteria of correct formalization and identifies their problems. All of the discussed criteria are so-called “semantic” criteria, which refer to the interpretation of logical formulas. However, as will be shown, different versions of an implicitly applied or explicitly stated criterion of correctness depend on different understandings of “interpretation” in this context.
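    As an illustration of the underlying issue (our example, not one from the paper), consider two non-equivalent candidate formalizations of the sentence “Everyone loves someone”:

```latex
% Illustrative example (not from the paper): candidate formalizations of
% "Everyone loves someone" with different truth conditions.
\[
(1)\quad \forall x\,\exists y\, L(x,y)
\qquad\qquad
(2)\quad \exists y\,\forall x\, L(x,y)
\]
% A semantic criterion judges correctness by comparing each formula's
% truth conditions, under the intended interpretation of L as "loves",
% with those of the original sentence: (1) allows a different beloved
% for each person, while (2) demands one person loved by all.
```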

    CAISL: Simplification Logic for Conditional Attribute Implications

    In this work, we present a sound and complete axiomatic system for conditional attribute implications (CAIs) in Triadic Concept Analysis (TCA). Our approach is strongly based on the Simplification paradigm, which offers a more suitable way for automated reasoning than the one based on Armstrong’s Axioms. We also present an automated method to prove the derivability of a CAI from a set of CAIs.

    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
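    As background on the derivability question (a generic sketch, not the paper's CAISL procedure): for plain, unconditional attribute implications, A → B follows from a set Σ exactly when B is contained in the closure of A under Σ. A minimal C sketch, with attribute sets as bitmasks:

```c
/* Generic background sketch (not the paper's CAISL method): deciding
 * derivability of a plain attribute implication A -> B by computing the
 * closure of A under a set of implications. Attributes are bits. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t lhs, rhs; } Implication;

/* Smallest superset of `set` closed under every implication in `imps`. */
uint32_t closure(uint32_t set, const Implication *imps, int n) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i < n; i++) {
            if ((imps[i].lhs & ~set) == 0 &&   /* lhs contained in set */
                (imps[i].rhs & ~set) != 0) {   /* rhs adds something   */
                set |= imps[i].rhs;
                changed = true;
            }
        }
    }
    return set;
}

/* A -> B is derivable iff B is contained in closure(A). */
bool derivable(uint32_t a, uint32_t b, const Implication *imps, int n) {
    return (b & ~closure(a, imps, n)) == 0;
}

int main(void) {
    enum { a = 1, b = 2, c = 4, d = 8 };       /* attributes as bits */
    Implication imps[] = { { a, b }, { b | c, d } };
    printf("{a,c} -> {d}: %s\n",
           derivable(a | c, d, imps, 2) ? "derivable" : "not derivable");
    return 0;
}
```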

    GPU Concurrency: Weak Behaviours and Programming Assumptions

    Concurrency is pervasive and perplexing, particularly on graphics processing units (GPUs). Current specifications of languages and hardware are inconclusive; thus programmers often rely on folklore assumptions when writing software. To remedy this state of affairs, we conducted a large empirical study of the concurrent behaviour of deployed GPUs. Armed with litmus tests (i.e. short concurrent programs), we questioned the assumptions in programming guides and vendor documentation about the guarantees provided by hardware. We developed a tool to generate thousands of litmus tests and run them under stressful workloads. We observed a litany of previously elusive weak behaviours, and exposed folklore beliefs about GPU programming---often supported by official tutorials---as false. As a way forward, we propose a model of Nvidia GPU hardware, which correctly models every behaviour witnessed in our experiments. The model is a variant of SPARC Relaxed Memory Order (RMO), structured following the GPU concurrency hierarchy.
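    The empirical method can be pictured with a CPU-side analogue (a simplified sketch, not the paper's GPU tool, and omitting its stressing heuristics): run one litmus test many times and tally the outcomes, watching for the weak one:

```c
/* Simplified CPU-side sketch of litmus testing (the paper's tool targets
 * GPUs and adds memory stress): repeat the SB test and tally outcomes. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define RUNS 100000

atomic_int x, y;
int r0, r1;

void *t0_fn(void *arg) {
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r0 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

void *t1_fn(void *arg) {
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void) {
    long histogram[4] = {0};                   /* index = r0 * 2 + r1 */
    for (int i = 0; i < RUNS; i++) {
        atomic_store(&x, 0);
        atomic_store(&y, 0);
        pthread_t t0, t1;
        pthread_create(&t0, NULL, t0_fn, NULL);
        pthread_create(&t1, NULL, t1_fn, NULL);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        histogram[r0 * 2 + r1]++;
    }
    for (int o = 0; o < 4; o++)                /* o == 0 is the weak outcome */
        printf("r0=%d r1=%d : %ld%s\n", o / 2, o % 2, histogram[o],
               o == 0 ? "  (weak: forbidden under SC)" : "");
    return 0;
}
```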

    Axiomatic approach to the cosmological constant

    A theory of the cosmological constant Lambda is currently out of reach. Still, one can start from a set of axioms that describe the most desirable properties a cosmological constant should have. This can be seen in a certain analogy to the Khinchin axioms in information theory, which fix the most desirable properties an information measure should have and that ultimately lead to the Shannon entropy as the fundamental information measure on which statistical mechanics is based. Here we formulate a set of axioms for the cosmological constant in close analogy to the Khinchin axioms, formally replacing the dependency of the information measure on probabilities of events by a dependency of the cosmological constant on the fundamental constants of nature. Evaluating this set of axioms, one finally arrives at a formula for the cosmological constant that is given by Lambda = (G^2/hbar^4) (m_e/alpha_el)^6, where G is the gravitational constant, m_e is the electron mass, and alpha_el is the low energy limit of the fine structure constant. This formula is in perfect agreement with current WMAP data. Our approach gives physical meaning to the Eddington-Dirac large number hypothesis and suggests that the observed value of the cosmological constant is not at all unnatural.

    Comment: 7 pages, no figures. Some further references added.
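    The formula is straightforward to check numerically; the sketch below (our illustration, using standard SI values for the constants) evaluates it and prints roughly 1.4e-52 m^-2, the same order as the value inferred from cosmological observations:

```c
/* Illustrative numerical check of Lambda = (G^2/hbar^4) (m_e/alpha_el)^6
 * from the abstract, in SI units (result in m^-2). */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double G     = 6.67430e-11;  /* gravitational constant, m^3 kg^-1 s^-2 */
    const double hbar  = 1.054572e-34; /* reduced Planck constant, J s */
    const double m_e   = 9.109384e-31; /* electron mass, kg */
    const double alpha = 7.297353e-3;  /* low-energy fine structure constant */

    double lambda = (G * G / pow(hbar, 4.0)) * pow(m_e / alpha, 6.0);

    /* Prints about 1.4e-52 m^-2; observed estimates are of the same order
     * (roughly 1.1e-52 m^-2 from recent cosmological fits). */
    printf("Lambda = %.3e m^-2\n", lambda);
    return 0;
}
```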