
    Towards Practical Graph-Based Verification for an Object-Oriented Concurrency Model

    To harness the power of multi-core and distributed platforms, and to make the development of concurrent software more accessible to software engineers, different object-oriented concurrency models such as SCOOP have been proposed. Despite the practical importance of analysing SCOOP programs, there are currently no general verification approaches that operate directly on program code without additional annotations. One reason for this is the multitude of partially conflicting semantic formalisations for SCOOP (either in theory or by-implementation). Here, we propose a simple graph transformation system (GTS) based run-time semantics for SCOOP that grasps the most common features of all known semantics of the language. This run-time model is implemented in the state-of-the-art GTS tool GROOVE, which allows us to simulate, analyse, and verify a subset of SCOOP programs with respect to deadlocks and other behavioural properties. Besides proposing the first approach to verify SCOOP programs by automatic translation to GTS, we also highlight our experiences of applying GTS (and especially GROOVE) for specifying semantics in the form of a run-time model, which should be transferable to GTS models for other concurrent languages and libraries.
    Comment: In Proceedings GaM 2015, arXiv:1504.0244

    Full abstraction for fair testing in CCS

    In previous work with Pous, we defined a semantics for CCS which may both be viewed as an innocent presheaf semantics and as a concurrent game semantics. It is here proved that a behavioural equivalence induced by this semantics on CCS processes is fully abstract for fair testing equivalence. The proof relies on a new algebraic notion called playground, which represents the 'rule of the game'. From any playground, two languages, equipped with labelled transition systems, are derived, as well as a strong, functional bisimulation between them.
    Comment: 15 pages, to appear in CALCO '13. To appear in Lecture Notes in Computer Science (2013).

    Process Algebras

    Process algebras are mathematically rigorous languages with well-defined semantics that permit describing and verifying properties of concurrent communicating systems. They can be seen as models of processes, regarded as agents that act and interact continuously with other similar agents and with their common environment. The agents may be real-world objects (even people), or they may be artifacts, embodied perhaps in computer hardware or software systems. Many different approaches (operational, denotational, algebraic) are taken for describing the meaning of processes; however, the operational approach is the reference one. By relying on so-called Structural Operational Semantics (SOS), labelled transition systems are built and composed using the different operators of the many different process algebras. Behavioral equivalences are used to abstract from unwanted details and identify those systems that react similarly to external experiments.
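    The ingredients named in this abstract can be made concrete in a few lines. The following is a minimal sketch, not drawn from any of the listed papers: a tiny CCS-like calculus whose SOS-style rules generate a labelled transition system over process terms, plus a naive check of strong bisimilarity that terminates because these terms are finite and acyclic. All names (step, bisimilar, the term constructors) are illustrative.

```python
# A tiny CCS-like calculus: terms are nested tuples.
#   ("nil",)            inaction
#   ("act", a, p)       action prefix a.p
#   ("choice", p, q)    nondeterministic choice p + q
#   ("par", p, q)       parallel composition p | q (no synchronisation here)

def step(p):
    """SOS-style rules: enumerate labelled transitions p --a--> p'."""
    if p[0] == "act":
        yield p[1], p[2]
    elif p[0] == "choice":
        yield from step(p[1])
        yield from step(p[2])
    elif p[0] == "par":
        for a, p1 in step(p[1]):
            yield a, ("par", p1, p[2])
        for a, q1 in step(p[2]):
            yield a, ("par", p[1], q1)

def bisimilar(p, q):
    """Strong bisimilarity, checked naively; every transition strictly shrinks the term."""
    ps, qs = list(step(p)), list(step(q))
    forth = all(any(a == b and bisimilar(p1, q1) for b, q1 in qs) for a, p1 in ps)
    back = all(any(a == b and bisimilar(p1, q1) for a, p1 in ps) for b, q1 in qs)
    return forth and back

nil = ("nil",)
# a.0 + a.0 is strongly bisimilar to a.0, but a.0 + b.0 is not.
print(bisimilar(("choice", ("act", "a", nil), ("act", "a", nil)), ("act", "a", nil)))  # True
print(bisimilar(("choice", ("act", "a", nil), ("act", "b", nil)), ("act", "a", nil)))  # False
```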

    A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs

    A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP---a concurrent object-oriented message-passing language---which has seen multiple semantics proposed and implemented over its evolution. We propose a "semantics workbench" with fully and semi-automatic tools for SCOOP, which can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs, and highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness.
    Comment: Accepted for publication in the proceedings of FASE 2016 (to appear).
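    As a hypothetical illustration of the kind of modelling such a workbench builds on (this is not the actual GROOVE rule set), the sketch below represents a program state as a labelled graph and applies one transformation rule atomically: a processor's pending request for a resource becomes a lock in a single rewrite step, but only if no other processor currently holds that resource.

```python
# Program state as a labelled graph: a set of nodes plus (source, label, target) edges.
state = {
    "nodes": {"p1", "p2", "r1"},
    "edges": {("p1", "requests", "r1"), ("p2", "requests", "r1")},
}

def acquire_matches(g):
    """LHS of the rule: a 'requests' edge whose target resource is held by nobody."""
    held = {tgt for (_, lab, tgt) in g["edges"] if lab == "holds"}
    return [(src, tgt) for (src, lab, tgt) in g["edges"]
            if lab == "requests" and tgt not in held]

def apply_acquire(g, match):
    """RHS of the rule, applied atomically: swap 'requests' for 'holds'."""
    src, tgt = match
    edges = (g["edges"] - {(src, "requests", tgt)}) | {(src, "holds", tgt)}
    return {"nodes": set(g["nodes"]), "edges": edges}

# Explore one step of the state space: each match yields a successor state.
for m in acquire_matches(state):
    print(m, "->", sorted(apply_acquire(state, m)["edges"]))
```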

    Prototyping Formal System Models with Active Objects

    We propose active object languages as a development tool for formal system models of distributed systems. In addition to a formalization based on a term rewriting system, we use established software engineering concepts, including software product lines and object orientation, which come with extensive tool support. We illustrate our modeling approach by prototyping a weak memory model. The resulting executable model is modular and has clear interfaces between communicating participants through object-oriented modeling. Relaxations of the basic memory model are expressed as self-contained variants of a software product line. As a modeling language we use the formal active object language ABS, which comes with an extensive tool set. This permits rapid formalization of core ideas, early validity checks in terms of formal invariant proofs, and debugging support by executing test runs. Hence, our approach supports the prototyping of formal system models with early feedback.
    Comment: In Proceedings ICE 2018, arXiv:1810.0205
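    To give a flavour of the modelling style (a minimal Python sketch, not ABS and not the paper's model), the code below implements active objects with per-object message queues and futures; a "register" object plays the role of one memory-model participant, and the in-order scheduling shown corresponds to the strongest, sequentially consistent variant.

```python
from collections import deque

class Future:
    """Placeholder for the result of an asynchronous call."""
    def __init__(self):
        self.value, self.done = None, False

class ActiveObject:
    """Each object owns a message queue; calls are asynchronous and return futures."""
    def __init__(self):
        self.inbox = deque()
    def send(self, method, *args):
        fut = Future()
        self.inbox.append((method, args, fut))
        return fut
    def step(self):
        if self.inbox:
            method, args, fut = self.inbox.popleft()
            fut.value, fut.done = getattr(self, method)(*args), True

class Register(ActiveObject):
    """A shared memory cell: the kind of participant a memory model is built from."""
    def __init__(self):
        super().__init__()
        self.contents = 0
    def write(self, v):
        self.contents = v
    def read(self):
        return self.contents

reg = Register()
f_write, f_read = reg.send("write", 42), reg.send("read")
while not f_read.done:   # a trivial scheduler: process the queue in arrival order
    reg.step()
print(f_read.value)      # 42 under this in-order (sequentially consistent) scheduling
```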

    Adequacy of compositional translations for observational semantics

    We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
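    A toy rendition of the convergence-equivalence idea (an assumption-laden sketch, not the calculi studied in the paper): a source language with erratic choice and divergence is translated compositionally into a target language, and the translation is checked to preserve may- and must-convergence on sample terms, with no quantification over contexts.

```python
# Terms in both languages: ("val", n), ("bot",) for divergence, and a binary choice,
# written ("choice", s, t) in the source and ("amb", s, t) in the target.

def outcomes(term, choice_tag):
    """Possible results of a term; None marks a diverging run."""
    if term[0] == "val":
        return {term[1]}
    if term[0] == "bot":
        return {None}
    assert term[0] == choice_tag
    return outcomes(term[1], choice_tag) | outcomes(term[2], choice_tag)

def translate(term):
    """Compositional: the image of each constructor is built from the images of its subterms."""
    if term[0] == "choice":
        return ("amb", translate(term[1]), translate(term[2]))
    return term

def convergence(outs):
    """The pair of observations (may-converge, must-converge)."""
    return (any(o is not None for o in outs), all(o is not None for o in outs))

t = ("choice", ("val", 1), ("bot",))
print(convergence(outcomes(t, "choice")))           # (True, False)
print(convergence(outcomes(translate(t), "amb")))   # (True, False): observations agree
```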

    Intensional and Extensional Semantics of Bounded and Unbounded Nondeterminism

    We give extensional and intensional characterizations of nondeterministic functional programs: as structure-preserving functions between biorders, and as nondeterministic sequential algorithms on ordered concrete data structures which compute them. A fundamental result establishes that the extensional and intensional representations of nondeterministic programs are equivalent, by showing how to construct a unique sequential algorithm which computes a given monotone and stable function, and by describing the conditions on sequential algorithms which correspond to continuity with respect to each order. We illustrate by defining may- and must-testing denotational semantics for a sequential functional language with bounded and unbounded choice operators. We prove that these are computationally adequate, despite the non-continuity of the must-testing semantics of unbounded nondeterminism. In the bounded case, we prove that our continuous models are fully abstract with respect to may- and must-testing by identifying a simple universal type, which may also form the basis for models of the untyped lambda-calculus. In the unbounded case, we observe that our model contains computable functions which are not denoted by terms, by identifying a further "weak continuity" property of the definable elements, and we use this to establish that it is not fully abstract.

    Quantitative testing semantics for non-interleaving

    This paper presents a non-interleaving denotational semantics for the π-calculus. The basic idea is to define a notion of test where the outcome is not only whether a given process passes a given test, but also in how many different ways it can pass it. More abstractly, the set of possible outcomes for tests forms a semiring, and the set of process interpretations appears as a module over this semiring, in which basic syntactic constructs are affine operators. This notion of test leads to a trace semantics in which traces are partial orders, in the style of Mazurkiewicz traces, extended with readiness information. Our construction has standard may- and must-testing as special cases.
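    To make the counting idea concrete (a hypothetical sketch in the semiring of natural numbers, unrelated to the paper's partial-order traces), the snippet below counts in how many distinct ways a process can pass a test given as a trace of labels; nondeterministic branches are combined by semiring addition, and ordinary may-testing is recovered as "the count is non-zero".

```python
# Processes: ("nil",), ("act", a, p) for action prefix, ("choice", p, q).
# A test is simply a list of labels the process must be able to perform in order.

def ways(proc, test):
    """Number of distinct ways proc passes the test (outcomes in the semiring of naturals)."""
    if not test:
        return 1                 # multiplicative unit: the empty test always succeeds
    if proc[0] == "act":
        return ways(proc[2], test[1:]) if proc[1] == test[0] else 0
    if proc[0] == "choice":
        return ways(proc[1], test) + ways(proc[2], test)   # semiring addition over branches
    return 0                     # nil cannot perform a pending action

nil = ("nil",)
p = ("choice", ("act", "a", nil), ("act", "a", ("act", "b", nil)))
print(ways(p, ["a"]))            # 2: both branches can do 'a'; may-testing holds since 2 > 0
print(ways(p, ["a", "b"]))       # 1: only the right branch continues with 'b'
```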