
    Towards Practical Graph-Based Verification for an Object-Oriented Concurrency Model

    To harness the power of multi-core and distributed platforms, and to make the development of concurrent software more accessible to software engineers, different object-oriented concurrency models such as SCOOP have been proposed. Despite the practical importance of analysing SCOOP programs, there are currently no general verification approaches that operate directly on program code without additional annotations. One reason for this is the multitude of partially conflicting semantic formalisations for SCOOP (either in theory or by implementation). Here, we propose a simple graph transformation system (GTS)-based run-time semantics for SCOOP that captures the most common features of all known semantics of the language. This run-time model is implemented in the state-of-the-art GTS tool GROOVE, which allows us to simulate, analyse, and verify a subset of SCOOP programs with respect to deadlocks and other behavioural properties. Besides proposing the first approach to verify SCOOP programs by automatic translation to GTS, we also highlight our experiences of applying GTS (and especially GROOVE) to specify semantics in the form of a run-time model, which should be transferable to GTS models for other concurrent languages and libraries. Comment: In Proceedings GaM 2015, arXiv:1504.0244
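
The exploration underlying such a verification approach can be sketched, at a very high level, as a breadth-first search over states generated by rule applications. The following is an illustrative sketch only, not the paper's GROOVE model: the encoding (states as wait-for edges between SCOOP processors) and all names (`has_cycle`, `explore`, the processor labels) are invented for illustration.

```python
from collections import deque

# Toy encoding (an assumption, not the paper's GROOVE model): a state is a
# frozenset of wait-for edges (waiter, holder) between SCOOP processors,
# and "rules" are functions mapping a state to its successor states.

def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {waiter: holder}."""
    for start in wait_for:
        seen, node = set(), start
        while node in wait_for:
            if node in seen:
                return True
            seen.add(node)
            node = wait_for[node]
    return False

def explore(initial, rules):
    """BFS over the state graph induced by rule applications; a state with
    no applicable rule and a circular wait is reported as a deadlock."""
    frontier, visited, deadlocks = deque([initial]), {initial}, []
    while frontier:
        state = frontier.popleft()
        successors = [s for rule in rules for s in rule(state)]
        if not successors and has_cycle(dict(state)):
            deadlocks.append(state)
        for s in successors:
            if s not in visited:
                visited.add(s)
                frontier.append(s)
    return deadlocks
```

For example, a state in which two processors each wait for the other, with no rule applicable, is reported as a deadlock, while a single non-circular wait is not. GROOVE performs the analogous exploration over attributed graphs and rule matches rather than plain edge sets.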

    Efficient and Reasonable Object-Oriented Concurrency

    Making threaded programs safe and easy to reason about is one of the chief difficulties in modern programming. This work provides an efficient execution model for SCOOP, a concurrency approach that offers not only data race freedom but also pre/postcondition reasoning guarantees between threads. The extensions we propose modify the underlying semantics to increase the amount of concurrent execution that is possible, to exclude certain classes of deadlocks, and to enable greater performance. These extensions are used as the basis of an efficient runtime and an optimization pass that together improve performance 15x over a baseline implementation. This new implementation of SCOOP is also 2x faster than other well-known safe concurrent languages. The measurements are based on both coordination-intensive and data-manipulation-intensive benchmarks designed to offer a mixture of workloads. Comment: Proceedings of the 10th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE '15). ACM, 201

    Analysis of Sample Acquisition Dynamics Using Discrete Element Method

    The analysis presented in this paper is conducted in the framework of the Ocean Worlds Autonomy Testbed for Exploration Research and Simulation (OceanWATERS) project, currently under development at NASA Ames Research Center. OceanWATERS aims at designing a simulation environment which allows for testing the autonomy of scientific lander missions to the icy moons of our solar system. Mainly focused on reproducing the end effector's interaction with the inherent terrain, this paper introduces a novel discrete element method (DEM)-based approach to determine forces and torques acting on the lander's scoop during the sample acquisition process. An accurate force feedback from the terrain on the scoop is required by fault-detection and autonomous decision-making algorithms to identify when the requested torque on the robotic arm's joints exceeds the maximum available torque. Knowledge of the terrain force feedback significantly helps in evaluating the structural properties of the arm's links and in properly selecting actuators for the joints. Models available in the literature constitute only a partial representation of the dynamics of the interaction. As an example, Balovnev derived an analytical expression for the vertical and horizontal forces acting on a bucket while collecting a sample, as a function of its geometry and velocity, soil parameters, and reached depth. Although the model represents an adequate approximation of the two force components, it ignores the direction orthogonal to the scoop motion and neglects the torque. This work relies on DEM analysis to compensate for the deficiencies and inaccuracies of analytical models, i.e., to provide force and torque 3D vectors, defined in the moving reference (body) frame attached to the scoop, at each instant of the sample collection process.
    Results from the first presented analysis relate to the specific OceanWATERS sampling strategy, which consists of collecting the sample through five consecutive passes with increasing depth, each pass following the same circular-linear-circular trajectory. Data is collected for a specific scoop design interacting with two types of bulk materials that may characterize the surface of icy planetary bodies: snow and ice. Although specifically concerned with the OceanWATERS design, this first analysis provides the expected force trends for similar sampling strategies and makes it possible to deduce phenomenological information about the general scooping process. To further instruct the community on the use of DEM tools as a solution to the sample collection problem, two more analyses have been carried out, mainly focused on reducing the DEM computation time, which increases as particle size decreases. After running a set of identical simulations in which the only changing parameter is the size of the spherical particles, it is observed that, from a given particle size onward, the resulting force trajectories converge to the true trend. It follows that a further decrease in size yields negligible improvements in accuracy while considerably increasing computation time. A final analysis discusses the limitations of approximating bulk material particles that have a complex shape, e.g. ice fragments, with spheres, by comparing the force trends resulting in the two cases for the same simulation scenario.
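
The particle-size convergence study described above amounts to comparing each coarse run's force trajectory against the finest-particle run and picking the coarsest size that still tracks it. A minimal sketch, with invented function names and made-up sample values standing in for the paper's DEM data:

```python
# Invented sample data and names: each run maps a particle radius (e.g. in
# mm) to a sampled force trajectory; the finest radius serves as the
# "true" trend that coarser, cheaper runs are compared against.

def max_rel_deviation(coarse, reference):
    """Largest pointwise relative deviation between two equally sampled
    force trajectories."""
    return max(abs(c - r) / abs(r) for c, r in zip(coarse, reference))

def coarsest_converged(runs, tol):
    """Return the largest particle radius whose force trajectory stays
    within `tol` relative deviation of the finest-particle reference,
    i.e. the cheapest simulation that still tracks the true trend."""
    reference = runs[min(runs)]
    converged = [radius for radius, samples in runs.items()
                 if max_rel_deviation(samples, reference) <= tol]
    return max(converged)
```

With made-up trajectories `{1.0: [10.0, 20.0, 30.0], 2.0: [10.2, 20.5, 29.6], 4.0: [13.0, 26.0, 24.0]}` and a 5% tolerance, the 2.0 run is the coarsest that still follows the reference, so larger particles would distort the trend while smaller ones would only add computation time.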

    SCOOP: A Tool for SymboliC Optimisations Of Probabilistic Processes

    This paper presents SCOOP: a tool that symbolically optimises process-algebraic specifications of probabilistic processes. It takes specifications in the prCRL language (combining data and probabilities), which are first linearised to an intermediate format: the LPPE. On this format, optimisations such as dead-variable reduction and confluence reduction are applied automatically by SCOOP. That way, drastic state space reductions are achieved without ever having to generate the complete state space, as data variables are unfolded only locally. The optimised state spaces are ready to be analysed by, for instance, CADP or PRISM.
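
The idea behind dead-variable reduction can be illustrated in spirit as follows: if a variable is not live at a control location, it can be reset to a canonical value before states are stored, so states that differ only in dead variables collapse into one. The sketch below is not prCRL/LPPE syntax; the state representation and names are assumptions made for illustration.

```python
# Toy state representation (an assumption, not LPPE syntax): a state is a
# control location `pc` plus a variable environment; `live_at` maps each
# location to the variables that are live (may still be read) there.

def canonical(state, live_at, default=0):
    """Reset variables that are dead at the current location to a fixed
    default, so states differing only in dead variables coincide."""
    pc, env = state
    return (pc, frozenset(
        (var, val if var in live_at.get(pc, set()) else default)
        for var, val in env.items()))
```

With `live_at = {0: {"x"}, 1: set()}`, the states `(1, {"x": 5})` and `(1, {"x": 7})` canonicalise to the same state, while at location 0, where `x` is live, they stay distinct; exactly this collapsing is what shrinks the state space during local unfolding.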

    Modelling, reduction and analysis of Markov automata (extended version)

    Markov automata (MA) constitute an expressive continuous-time compositional modelling formalism. They appear as semantic backbones for engineering frameworks including dynamic fault trees, Generalised Stochastic Petri Nets, and AADL. Their expressive power has thus far precluded them from effective analysis by probabilistic (and statistical) model checkers, stochastic game solvers, or analysis tools for Petri net-like formalisms. This paper presents the foundations and underlying algorithms for efficient MA modelling, reduction using static analysis, and most importantly, quantitative analysis. We also discuss implementation pragmatics of supporting tools and present several case studies demonstrating the feasibility and usability of MA in practice.

    A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs

    A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP---a concurrent object-oriented message-passing language---which has seen multiple semantics proposed and implemented over its evolution. We propose a "semantics workbench" with fully and semi-automatic tools for SCOOP that can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs and highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness. Comment: Accepted for publication in the proceedings of FASE 2016 (to appear).

    Analysis of Timed and Long-Run Objectives for Markov Automata

    Markov automata (MAs) extend labelled transition systems with random delays and probabilistic branching. Action-labelled transitions are instantaneous and yield a distribution over states, whereas timed transitions impose a random delay governed by an exponential distribution. MAs are thus a nondeterministic variation of continuous-time Markov chains. MAs are compositional and are used to provide a semantics for engineering frameworks such as (dynamic) fault trees, (generalised) stochastic Petri nets, and the Architecture Analysis & Design Language (AADL). This paper considers the quantitative analysis of MAs. We consider three objectives: expected time, long-run average, and timed (interval) reachability. Expected time objectives focus on determining the minimal (or maximal) expected time to reach a set of states. Long-run objectives determine the fraction of time to be in a set of states when considering an infinite time horizon. Timed reachability objectives are about computing the probability to reach a set of states within a given time interval. This paper presents the foundations and details of the algorithms and their correctness proofs. We report on several case studies conducted using a prototypical tool implementation of the algorithms, driven by the MAPA modelling language for efficiently generating MAs. Comment: arXiv admin note: substantial text overlap with arXiv:1305.705
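
The expected-time objective described above can be sketched with a small value iteration: Markovian states delay with an exponential rate (contributing mean delay 1/E(s) per visit) and branch probabilistically, while action states choose, here minimising, among instantaneous distributions. The toy model and names below are invented for illustration, and a plain value iteration stands in for the paper's exact algorithms.

```python
# Toy MA (invented for illustration): Markovian states delay with an
# exponential exit rate and branch probabilistically; action states pick
# the distribution minimising the expected time to reach a goal state.

def min_expected_time(markovian, actions, goals, iters=2000):
    """markovian: {s: (exit_rate, {succ: prob})};
    actions: {s: [{succ: prob}, ...]} (instantaneous, nondeterministic);
    goals: absorbing goal states. Returns {s: minimal expected time}."""
    states = set(markovian) | set(actions) | set(goals)
    et = {s: 0.0 for s in states}
    for _ in range(iters):  # iterate the Bellman equations to a fixpoint
        for s in states:
            if s in goals:
                continue
            if s in markovian:
                rate, dist = markovian[s]
                et[s] = 1.0 / rate + sum(p * et[t] for t, p in dist.items())
            else:  # action state: minimise over enabled distributions
                et[s] = min(sum(p * et[t] for t, p in d.items())
                            for d in actions[s])
    return et
```

For instance, a Markovian state with exit rate 2.0 whose successor is an action state that can move straight to the goal has minimal expected time equal to the mean delay, 1/2.0 = 0.5; the scheduler's other option (looping back with probability 0.5) is correctly ignored by the minimisation.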