6,937 research outputs found

    How to interpret and establish consistency results for semantics of concurrent programming languages

    It is often useful for a language to be given several semantic descriptions: for example, one that serves the needs of the implementor, another that is suitable for specification, and yet another that is used to explain the language to the user. In this case one has to guarantee that the various semantics are 'consistent'. This paper aims to clarify the notion of 'consistency' and to present a general framework and theorems for establishing consistency results.
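
    As a toy illustration of two descriptions of the same language agreeing (a sketch under assumptions of our own, not the paper's framework), the C++ snippet below gives a tiny arithmetic language both a direct evaluator and a small-step reduction semantics, and checks that reducing an expression to a value yields exactly the number the evaluator computes.

```cpp
// Illustrative sketch only: two semantics of a tiny arithmetic language
// and a consistency check that they agree. Not the paper's framework.
#include <cassert>
#include <memory>

struct Expr {
    enum Kind { Num, Add } kind;
    int value;                       // used when kind == Num
    std::shared_ptr<Expr> lhs, rhs;  // used when kind == Add
};

std::shared_ptr<Expr> num(int v) {
    return std::make_shared<Expr>(Expr{Expr::Num, v, nullptr, nullptr});
}
std::shared_ptr<Expr> add(std::shared_ptr<Expr> l, std::shared_ptr<Expr> r) {
    return std::make_shared<Expr>(Expr{Expr::Add, 0, std::move(l), std::move(r)});
}

// "Denotational" semantics: map an expression directly to a number.
int denote(const Expr& e) {
    return e.kind == Expr::Num ? e.value : denote(*e.lhs) + denote(*e.rhs);
}

// "Operational" semantics: one small-step reduction of the leftmost redex.
std::shared_ptr<Expr> step(const std::shared_ptr<Expr>& e) {
    if (e->kind == Expr::Num) return e;                               // values do not step
    if (e->lhs->kind == Expr::Num && e->rhs->kind == Expr::Num)
        return num(e->lhs->value + e->rhs->value);                    // Add n m -> n+m
    if (e->lhs->kind != Expr::Num) return add(step(e->lhs), e->rhs);  // reduce left first
    return add(e->lhs, step(e->rhs));                                 // then right
}

int main() {
    auto e = add(add(num(1), num(2)), num(3));
    int d = denote(*e);
    auto v = e;
    while (v->kind != Expr::Num) v = step(v);  // run the operational semantics to a value
    assert(v->value == d);                     // consistency: both semantics give 6
}
```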

    Bridging the Gap between Programming Languages and Hardware Weak Memory Models

    We develop a new intermediate weak memory model, IMM, as a way of modularizing the proofs of correctness of compilation from concurrent programming languages with weak memory consistency semantics to mainstream multi-core architectures, such as POWER and ARM. We use IMM to prove the correctness of compilation from the promising semantics of Kang et al. to POWER (thereby correcting and improving their result) and ARMv7, as well as to the recently revised ARMv8 model. Our results are mechanized in Coq, and to the best of our knowledge, these are the first machine-verified compilation correctness results for models that are weaker than x86-TSO.
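
    To make concrete the kind of weak-memory behaviour such compilation-correctness proofs must account for, the store-buffering (SB) litmus test below uses C++11 relaxed atomics; under the C/C++11 model, and on POWER and ARM hardware, the outcome r1 == 0 && r2 == 0 is allowed even though it is impossible under sequential consistency. This is a generic textbook example, not code from the IMM development.

```cpp
// Store-buffering (SB) litmus test with relaxed atomics: a generic
// illustration of weak-memory behaviour, not code from the IMM paper.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1 = 0, r2 = 0;

void thread0() {
    x.store(1, std::memory_order_relaxed);   // write x
    r1 = y.load(std::memory_order_relaxed);  // then read y
}

void thread1() {
    y.store(1, std::memory_order_relaxed);   // write y
    r2 = x.load(std::memory_order_relaxed);  // then read x
}

int main() {
    std::thread t0(thread0), t1(thread1);
    t0.join(); t1.join();
    // Under sequential consistency at least one load would see 1.
    // The C/C++11 model (and POWER/ARMv7/ARMv8 hardware) also allows r1 == r2 == 0.
    std::printf("r1=%d r2=%d\n", r1, r2);
}
```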

    Flexible Invariants Through Semantic Collaboration

    Modular reasoning about class invariants is challenging in the presence of dependencies among collaborating objects that need to maintain global consistency. This paper presents semantic collaboration: a novel methodology to specify and reason about class invariants of sequential object-oriented programs, which models dependencies between collaborating objects by semantic means. Combined with a simple ownership mechanism and useful default schemes, semantic collaboration achieves the flexibility necessary to reason about complicated inter-object dependencies while requiring only a limited annotation burden when applied to standard specification patterns. The methodology is implemented in AutoProof, our program verifier for the Eiffel programming language (but it is applicable to any language supporting some form of representation invariants). An evaluation on several challenge problems proposed in the literature demonstrates that it can handle a variety of idiomatic collaboration patterns and is more widely applicable than existing invariant methodologies.
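
    A standard instance of the problem is a pair of classes whose invariants mention each other, as in the observer pattern: the subject's invariant states that every registered observer is up to date, so the invariant is temporarily broken while an update propagates and cannot be checked by looking at one object in isolation. The sketch below only shows that dependency; the class names and explicit invariant() checks are illustrative assumptions, not AutoProof or Eiffel code.

```cpp
// Illustrative sketch of mutually dependent object invariants (observer
// pattern); names and invariant() checks are for illustration only.
#include <cassert>
#include <vector>

class Subject;  // forward declaration

class Observer {
public:
    explicit Observer(Subject& s);
    void refresh(int v) { cache_ = v; }
    bool invariant() const;          // cache_ equals the subject's current value
private:
    Subject& subject_;
    int cache_ = 0;
};

class Subject {
public:
    void attach(Observer* o) { observers_.push_back(o); }
    void set(int v) {
        value_ = v;                                    // observers' invariants are now broken...
        for (Observer* o : observers_) o->refresh(v);  // ...until every one of them is updated
    }
    int value() const { return value_; }
    bool invariant() const {         // all registered observers are up to date
        for (const Observer* o : observers_)
            if (!o->invariant()) return false;
        return true;
    }
private:
    std::vector<Observer*> observers_;
    int value_ = 0;
};

Observer::Observer(Subject& s) : subject_(s) { s.attach(this); }
bool Observer::invariant() const { return cache_ == subject_.value(); }

int main() {
    Subject s;
    Observer a(s), b(s);
    s.set(42);                       // temporarily violates the observers' invariants
    assert(s.invariant() && a.invariant() && b.invariant());
}
```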

    Decentralised Clinical Guidelines Modelling with Lightweight Coordination Calculus

    Background: Clinical protocols and guidelines have been considered a major means of ensuring that cost-effective services are provided at the point of care. Recently, the computerisation of clinical guidelines has attracted extensive research interest, and many languages and frameworks have been developed. Thus far, however, an enactment mechanism to facilitate decentralised guideline execution has been a largely neglected line of research. It is our contention that decentralisation is essential to maintaining a high-performance system in pervasive health care scenarios. In this paper, we propose the use of the Lightweight Coordination Calculus (LCC) as a feasible solution. LCC is a lightweight and executable process calculus that has been used successfully in multi-agent systems, peer-to-peer (p2p) computer networks, etc. In light of an envisaged pervasive health care scenario, LCC, which represents clinical protocols and guidelines as message-based interaction models, allows information exchange among software agents distributed across different departments and/or hospitals. Results: We outlined the syntax and semantics of LCC; proposed a list of refined criteria against which the appropriateness of candidate clinical guideline modelling languages can be evaluated; and presented two LCC interaction models of real-life clinical guidelines. Conclusions: We demonstrated that LCC is particularly useful in modelling clinical guidelines. It specifies the exact partition of a workflow of events or tasks that should be observed by multiple "players", as well as the interactions among these "players". LCC combines the strengths of both process calculi and Horn clauses, a pairing that offers the close resemblance to logic programming together with the flexibility needed for practical implementation.
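
    LCC itself is a Prolog-like notation, so the sketch below is only meant to convey the flavour of a message-based interaction model in which each participant enacts its own role and reacts to incoming messages; the roles, performatives, and the in-memory mailboxes standing in for the peer-to-peer network are assumptions made for illustration.

```cpp
// Toy illustration of a decentralised, message-based interaction model
// (in the spirit of LCC roles, but not LCC syntax): a GP requests a test,
// a lab replies, and each role only reacts to the messages it receives.
#include <iostream>
#include <queue>
#include <string>

struct Message { std::string performative, content; };

// One mailbox per participant stands in for the peer-to-peer network.
std::queue<Message> to_lab, to_gp;

void gp_request() {                  // GP role: ask for a test
    to_lab.push({"request", "blood_test(patient42)"});
}

void lab_role() {                    // lab role: answer pending requests
    while (!to_lab.empty()) {
        Message m = to_lab.front(); to_lab.pop();
        if (m.performative == "request")
            to_gp.push({"result", "normal(" + m.content + ")"});
    }
}

void gp_receive() {                  // GP role: consume the reply
    while (!to_gp.empty()) {
        Message m = to_gp.front(); to_gp.pop();
        if (m.performative == "result")
            std::cout << "GP received: " << m.content << "\n";
    }
}

int main() {
    gp_request();   // each call is one participant acting on its own local state
    lab_role();
    gp_receive();
}
```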

    TriCheck: Memory Model Verification at the Trisection of Software, Hardware, and ISA

    Memory consistency models (MCMs), which govern inter-module interactions in a shared memory system, are a significant, yet often under-appreciated, aspect of system design. MCMs are defined at the various layers of the hardware-software stack, requiring thoroughly verified specifications, compilers, and implementations at the interfaces between layers. Current verification techniques evaluate segments of the system stack in isolation, such as proving compiler mappings from a high-level language (HLL) to an ISA or proving validity of a microarchitectural implementation of an ISA. This paper makes a case for full-stack MCM verification and provides a toolflow, TriCheck, capable of verifying that the HLL, compiler, ISA, and implementation collectively uphold MCM requirements. The work showcases TriCheck's ability to evaluate a proposed ISA MCM in order to ensure that each layer and each mapping is correct and complete. Specifically, we apply TriCheck to the open source RISC-V ISA, seeking to verify accurate, efficient, and legal compilations from C11. We uncover under-specifications and potential inefficiencies in the current RISC-V ISA documentation and identify possible solutions for each. As an example, we find that a RISC-V-compliant microarchitecture allows 144 outcomes forbidden by C11 to be observed out of 1,701 litmus tests examined. Overall, this paper demonstrates the necessity of full-stack verification for detecting MCM-related bugs in the hardware-software stack.
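
    For a flavour of the litmus tests involved, the message-passing (MP) idiom below uses C++ release/acquire atomics. C/C++11 forbids the reader from observing the flag set while the payload is still zero, so a correct compiler mapping and microarchitecture must forbid it as well; checking that this obligation holds across every layer is the kind of cross-stack question described above. This is a generic example, not one of the 1,701 tests from the paper.

```cpp
// Message-passing (MP) litmus test: the C/C++11 model forbids the reader
// from seeing the flag set while data is still 0, so compiled code and
// hardware must forbid it too. Generic example, not from the paper's suite.
#include <atomic>
#include <cstdio>
#include <thread>

int data = 0;
std::atomic<int> flag{0};

void writer() {
    data = 1;                                       // plain write of the payload
    flag.store(1, std::memory_order_release);       // publish it
}

void reader() {
    if (flag.load(std::memory_order_acquire) == 1)  // saw the publication...
        std::printf("data = %d\n", data);           // ...so data must be 1 here
}

int main() {
    std::thread w(writer), r(reader);
    w.join(); r.join();
}
```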

    Intensional and Extensional Semantics of Bounded and Unbounded Nondeterminism

    We give extensional and intensional characterizations of nondeterministic functional programs: as structure-preserving functions between biorders, and as nondeterministic sequential algorithms on ordered concrete data structures which compute them. A fundamental result establishes that the extensional and intensional representations of nondeterministic programs are equivalent, by showing how to construct a unique sequential algorithm which computes a given monotone and stable function, and describing the conditions on sequential algorithms which correspond to continuity with respect to each order. We illustrate by defining may- and must-testing denotational semantics for a sequential functional language with bounded and unbounded choice operators. We prove that these are computationally adequate, despite the non-continuity of the must-testing semantics of unbounded nondeterminism. In the bounded case, we prove that our continuous models are fully abstract with respect to may- and must-testing by identifying a simple universal type, which may also form the basis for models of the untyped lambda-calculus. In the unbounded case we observe that our model contains computable functions which are not denoted by terms, by identifying a further "weak continuity" property of the definable elements, and use this to establish that it is not fully abstract.
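
    A small concrete reading of may- versus must-testing for bounded nondeterminism is to enumerate all resolutions of the choices and ask whether some run passes the test (may) or every run does (must). The program, test, and enumeration below are an illustrative sketch, not the paper's biorder or sequential-algorithm model.

```cpp
// May- vs must-testing of a tiny program with one bounded binary choice:
// one branch "diverges" (modelled as no result), so the program may-passes
// the test but does not must-pass it. Illustrative sketch only.
#include <iostream>
#include <optional>
#include <vector>

// The program: "choose 0 or 1; if 1, return 42; otherwise diverge."
std::optional<int> program(bool choice) {
    if (choice) return 42;
    return std::nullopt;                         // stands in for divergence
}

// The test: the run converges to 42.
bool test(const std::optional<int>& result) { return result && *result == 42; }

int main() {
    std::vector<bool> choices = {false, true};   // all resolutions of the choice
    bool may = false, must = true;
    for (bool c : choices) {
        bool ok = test(program(c));
        may = may || ok;                         // some run passes
        must = must && ok;                       // every run passes
    }
    std::cout << "may-pass: " << may << ", must-pass: " << must << "\n";  // prints 1, 0
}
```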

    Synthesis of behavioral models from scenarios


    Unifying type systems for mobile processes

    We present a unifying framework for type systems for process calculi. The core of the system provides an accurate correspondence between essentially functional processes and linear logic proofs; fragments of this system correspond to previously known connections between proofs and processes. We show how the addition of extra logical axioms can widen the class of typeable processes in exchange for the loss of some computational properties like lock-freeness or termination, allowing us to see various well-studied systems (like i/o types, linearity, control) as instances of a general pattern. This suggests unified methods for extending existing type systems with new features while staying in a well-structured environment, and constitutes a step towards the study of denotational semantics of processes using proof-theoretical methods.
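
    As a loose analogy for what a linearity discipline buys (not the paper's type system), the sketch below encodes a "send exactly once" channel capability as a move-only C++ handle whose send operation consumes it; duplicating the capability is rejected at compile time, and a second use trips a run-time check.

```cpp
// Flavour of a "linear" channel discipline: a send capability that must be
// consumed exactly once, encoded as a move-only handle. This is only an
// analogy for linearity in process type systems, not the paper's system.
#include <cassert>
#include <utility>

class SendOnce {
public:
    explicit SendOnce(int* slot) : slot_(slot) {}
    SendOnce(SendOnce&&) = default;       // the capability can be passed on...
    SendOnce(const SendOnce&) = delete;   // ...but never duplicated
    void send(int v) && {                 // calling send consumes the handle
        assert(slot_ && "capability already used");
        *slot_ = v;
        slot_ = nullptr;
    }
private:
    int* slot_;
};

int main() {
    int mailbox = 0;
    SendOnce cap(&mailbox);
    std::move(cap).send(7);      // exactly one use of the capability
    // std::move(cap).send(8);   // a second use would fire the assert at run time
    assert(mailbox == 7);
}
```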