    Towards optimisation of model queries: A parallel execution approach

    The growing size of software models poses significant scalability challenges. Amongst these challenges is the execution time of queries and transformations. In many cases, model management programs are (or can be) expressed as chains and combinations of core fundamental operations. Most of these operations are pure functions, making them amenable to parallelisation, lazy evaluation and short-circuiting. In this paper we show how all three of these optimisations can be combined in the context of Epsilon: an OCL-inspired family of model management languages. We compare our solutions with both interpreted and compiled OCL as well as hand-written Java code. Our experiments show a significant improvement in the performance of queries, especially on large models.
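
    The abstract gives no code, so the following is only a minimal Python sketch, not Epsilon/EOL, of how the three optimisations it names can be combined in a selection-style query: lazy evaluation via generators, parallelism via a thread pool, and short-circuiting via any(). The Element class and is_violating predicate are hypothetical stand-ins for model elements and a pure query predicate.

        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical model elements; stand-ins for Epsilon/EMF model objects.
        class Element:
            def __init__(self, size):
                self.size = size

        def is_violating(element):
            # Pure predicate: side-effect free, so safe to evaluate in parallel.
            return element.size > 100

        def lazy_chunks(iterable, n):
            # Yield fixed-size chunks lazily so the whole model is never materialised.
            chunk = []
            for item in iterable:
                chunk.append(item)
                if len(chunk) == n:
                    yield chunk
                    chunk = []
            if chunk:
                yield chunk

        def exists_violation(elements, workers=4, chunk=1024):
            # exists()-style query: each chunk is evaluated in parallel, and the
            # query short-circuits as soon as one chunk contains a match.
            with ThreadPoolExecutor(max_workers=workers) as pool:
                for block in lazy_chunks(elements, chunk):
                    if any(pool.map(is_violating, block)):
                        return True   # later chunks are never generated or submitted
            return False

        model = (Element(i % 137) for i in range(1_000_000))   # lazy model stream
        print(exists_violation(model))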

    Weaving Rules into [email protected] for Embedded Smart Systems

    Smart systems are characterised by their ability to analyse measured data live and to react to changes according to expert rules. Therefore, such systems exploit appropriate data models together with actions triggered by domain-related conditions. The challenge at hand is that smart systems usually need to process thousands of updates to detect which rules need to be triggered, often even on restricted hardware like a Raspberry Pi. Although various approaches have been investigated to efficiently check conditions on data models, they either assume that the model fits into main memory or rely on high-latency persistent storage systems that severely harm the reactivity of smart systems. To tackle this challenge, we propose a novel composition process, which weaves executable rules into a data model with lazy loading abilities. We quantitatively show, on a smart building case study, that our approach can handle, at low latency, large sets of rules on top of large-scale data models on restricted hardware. Comment: pre-print version, published in the proceedings of the MOMO-17 Workshop.
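
    As a rough, language-agnostic illustration only (not the authors' [email protected] implementation), the Python sketch below attaches executable condition/action rules directly to a model node and loads attributes lazily from a backing store, so a triggered rule only fetches the attributes it actually reads. All names (STORE, LazyNode, the smart-building rule) are hypothetical.

        # Toy backing store standing in for an on-disk or graph store.
        STORE = {("room1", "temperature"): 31.5, ("room1", "window"): "closed"}

        class LazyNode:
            """Model node that fetches attributes from the store on first access."""
            def __init__(self, node_id):
                self.node_id = node_id
                self._cache = {}
                self.rules = []          # rules woven directly into the node

            def get(self, attr):
                if attr not in self._cache:                      # lazy load
                    self._cache[attr] = STORE[(self.node_id, attr)]
                return self._cache[attr]

            def on_update(self, attr, value):
                self._cache[attr] = value
                STORE[(self.node_id, attr)] = value
                for condition, action in self.rules:             # check woven rules
                    if condition(self):
                        action(self)

        room = LazyNode("room1")
        room.rules.append((
            lambda n: n.get("temperature") > 30 and n.get("window") == "closed",
            lambda n: print(f"{n.node_id}: open window"),
        ))
        room.on_update("temperature", 32.0)   # triggers the rule; 'window' is loaded lazily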

    Full contract verification for ATL using symbolic execution

    The Atlas Transformation Language (ATL) is currently one of the most used model transformation languages and has become a de facto standard in model-driven engineering for implementing model transformations. At the same time, it is understood by the community that enhancing methods for exhaustively verifying such transformations allows for a more widespread adoption of model-driven engineering in industry. A variety of proposals for the verification of ATL transformations have arisen in the past few years. However, the majority of these techniques are either based on non-exhaustive testing or on proof methods that require human assistance and/or are not complete. In this paper, we describe our method for statically verifying the declarative subset of ATL model transformations. This verification is performed by translating the transformation (including features like filters, OCL expressions, and lazy rules) into our model transformation language DSLTrans. As we handle only the declarative portion of ATL, and DSLTrans is Turing-incomplete, this reduction in expressivity allows us to use a symbolic-execution approach to generate representations of all possible input models to the transformation. We then verify pre-/post-condition contracts on these representations, which in turn verifies the transformation itself. The technique we present in this paper is exhaustive for the subset of declarative ATL model transformations. This means that if the prover indicates a contract holds on a transformation, then the contract’s pre-/post-condition pair will be true for any input model for that transformation. We demonstrate and explore the applicability of our technique by studying several relatively large and complex ATL model transformations, including a model transformation developed in collaboration with our industrial partner. We also present our ‘slicing’ technique, which selects only those rules in the DSLTrans transformation needed for contract proof, thereby reducing proving time. Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R; Junta de Andalucía P10-TIC-5906; Junta de Andalucía P12-TIC-186.
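
    To make the contract idea concrete, here is a small Python sketch, not DSLTrans or its prover, of checking a pre-/post-condition pair over a finite set of symbolic representations, each standing for a class of input models: a contract holds if every representation satisfying the pre-condition also satisfies the post-condition. The fact-set encoding and the Class-to-Table example are deliberately toy-like assumptions.

        # Each "symbolic state" abstracts one class of input models as a set of
        # facts about source elements and the target elements the rules create.
        symbolic_states = [
            {"src:Class", "tgt:Table"},                     # a Class was transformed
            {"src:Class", "src:abstract", "tgt:Table"},     # abstract Class, still a Table
            {"src:DataType"},                               # no Class in the input
        ]

        def holds(contract, states):
            pre, post = contract
            # Exhaustive over the symbolic states: every state satisfying the
            # pre-condition must also satisfy the post-condition.
            return all(post(s) for s in states if pre(s))

        contract = (
            lambda s: "src:Class" in s,        # pre:  the input model contains a Class
            lambda s: "tgt:Table" in s,        # post: the output model contains a Table
        )
        print(holds(contract, symbolic_states))   # True for this toy rule set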

    A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs

    A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP, a concurrent object-oriented message-passing language, which has seen multiple semantics proposed and implemented over its evolution. We propose a "semantics workbench" with fully and semi-automatic tools for SCOOP that can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs and highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness. Comment: Accepted for publication in the proceedings of FASE 2016 (to appear).
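
    As a generic illustration of the underlying mechanism only (not GROOVE's rule format or the paper's SCOOP semantics), the Python sketch below applies one graph transformation rule atomically, deleting a matched edge pattern and adding its replacement in a single step; this is the kind of step such a workbench composes into an execution model. The lock/wait example is hypothetical.

        # A graph as a set of labelled edges (source, label, target).
        graph = {("p1", "waits_for", "p2"), ("p2", "lock", "res"), ("p3", "idle", "p3")}

        def apply_rule(graph, match, delete, add):
            """Apply one rewrite rule atomically: either the whole 'delete' pattern is
            present and is replaced by 'add', or the graph is left untouched."""
            if match(graph) and delete <= graph:
                return (graph - delete) | add
            return graph

        # Toy rule: when p2 releases the lock, the waiting process p1 acquires it.
        graph = apply_rule(
            graph,
            match=lambda g: ("p2", "lock", "res") in g,
            delete={("p1", "waits_for", "p2"), ("p2", "lock", "res")},
            add={("p1", "lock", "res")},
        )
        print(sorted(graph))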

    Program transformations using temporal logic side conditions

    This paper describes an approach to program optimisation based on transformations, where temporal logic is used to specify side conditions, and strategies are created which expand the repertoire of transformations and provide a suitable level of abstraction. We demonstrate the power of this approach by developing a set of optimisations using our transformation language and showing how the transformations can be converted into a form which makes it easier to apply them, while maintaining trust in the resulting optimising steps. The approach is illustrated through a transformational case study where we apply several optimisations to a small program.
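
    For intuition only, here is a Python sketch (not the paper's transformation language) of a dead-store elimination whose side condition is a simple temporal property over a control-flow graph: the transformation fires only if, on every path leaving the assignment, the variable is redefined before it is used, in the spirit of an "on all paths, no use until a definition" formula. The CFG encoding and node numbering are hypothetical.

        # CFG: node -> (statement, successors). Statements are ('def', x), ('use', x) or ('skip',).
        CFG = {
            0: (("def", "t"), [1]),      # t = ...   (candidate dead store)
            1: (("skip",), [2, 3]),
            2: (("def", "t"), [4]),      # t is redefined on this branch
            3: (("def", "t"), [4]),      # ...and on this one
            4: (("use", "t"), []),
        }

        def side_condition_holds(cfg, node, var):
            """On every path leaving `node`, `var` is redefined before it is used.
            Explores only the nodes reachable without crossing a redefinition."""
            stack, seen = list(cfg[node][1]), set()
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n)
                stmt, succs = cfg[n]
                if stmt == ("use", var):
                    return False          # a use is reachable before any redefinition
                if stmt == ("def", var):
                    continue              # this path is safe beyond the redefinition
                stack.extend(succs)
            return True

        if side_condition_holds(CFG, 0, "t"):
            print("safe to remove the assignment at node 0")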

    WSCDL to WSBPEL: A Case Study of ATL-based Transformation

    The ATLAS Transformation Language (ATL) is a hybrid transformation language that combines declarative and imperative programming elements and provides means to define model transformations. Most transformations using ATL reported in the literature show a simplified use of ATL, and often involve a single transformation. However, in more realistic situations, multiple transformations may be necessary, especially in cases where the original input/output models are not represented in the metametamodeling representation expected by the transformation engine. In this paper, we discuss a model transformation from service choreography (WSCDL) to service orchestration (WSBPEL), which cannot be performed in a single ATL transformation due to the mismatch between the concrete XML syntax of these languages and the metametamodeling representation expected by the ATL transformation engine. This requires auxiliary transformations in which this mismatch is resolved. In principle, the required auxiliary transformations can be implemented using XSLT or a general-purpose programming language like Java. However, in our case study, we evaluate the use of ATL to perform these transformations. We exploit ATL by leveraging ATL's XML injection and XML extraction mechanisms to perform the overall transformation in terms of a transformation chain.
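
    The shape of such a chain can be illustrated with a small Python sketch (plain dictionaries rather than ATL, Ecore, or real WSCDL/WSBPEL documents): an injector lifts the concrete XML syntax into a model representation, the core transformation maps choreography concepts to orchestration concepts, and an extractor serialises the result back to XML. All element names below are simplified placeholders, not the actual WSCDL/WSBPEL schemas.

        import xml.etree.ElementTree as ET

        WSCDL = "<choreography><interaction name='placeOrder'/></choreography>"

        def inject(xml_text):
            # Injection: concrete XML syntax -> in-memory model (here, a dict).
            root = ET.fromstring(xml_text)
            return {"interactions": [i.get("name") for i in root.iter("interaction")]}

        def choreography_to_orchestration(model):
            # Core transformation: each interaction becomes an invoke activity.
            return {"activities": [("invoke", name) for name in model["interactions"]]}

        def extract(model):
            # Extraction: in-memory model -> concrete XML syntax of the target language.
            process = ET.Element("process")
            for kind, name in model["activities"]:
                ET.SubElement(process, kind, {"operation": name})
            return ET.tostring(process, encoding="unicode")

        # The transformation chain: inject -> transform -> extract.
        print(extract(choreography_to_orchestration(inject(WSCDL))))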

    Sequentializing Parameterized Programs

    We exhibit assertion-preserving (reachability-preserving) transformations from parameterized concurrent shared-memory programs, under a k-round scheduling of processes, to sequential programs. The salient feature of the sequential program is that it tracks the local variables of only one thread at any point, and uses only O(k) copies of shared variables (it does not use extra counters, not even one counter to keep track of the number of threads). Sequentialization is achieved using the concept of a linear interface that captures the effect an unbounded block of processes has on the shared state in a k-round schedule. Our transformation utilizes linear interfaces to sequentialize the program, and to ensure the sequential program explores only reachable states and preserves local invariants. Comment: In Proceedings FIT 2012, arXiv:1207.348
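
    As a heavily simplified illustration only (a bounded, fixed-thread variant, not the paper's parameterized construction with linear interfaces), the Python sketch below explores a k-round round-robin schedule sequentially: it keeps k copies of the shared state, guesses the shared value at the start of each round, runs each thread's rounds back to back while tracking only that thread's locals, and keeps only guesses that are consistent at the round boundaries. All names are illustrative.

        from itertools import product

        def sequential_exploration(thread_bodies, k, shared_domain, shared_init):
            """Guess-and-verify simulation of a k-round round-robin schedule that
            tracks one thread's locals at a time and k copies of the shared state."""
            reachable = set()
            for guess in product(shared_domain, repeat=k - 1):
                copies = [shared_init, *guess]          # k copies of the shared state
                starts = list(copies)                   # remember the guessed round starts
                for body in thread_bodies:              # one thread at a time
                    local = 0
                    for r in range(k):                  # all rounds of this thread
                        local, copies[r] = body(r, local, copies[r])
                # Consistency check: the end of round r must match the guessed start of r+1.
                if all(copies[r] == starts[r + 1] for r in range(k - 1)):
                    reachable.add(copies[-1])           # final shared state of a valid run
            return reachable

        # Two identical toy threads, each incrementing the shared counter once per round.
        def worker(round_no, local, shared):
            return local + 1, shared + 1

        print(sequential_exploration([worker, worker], k=2, shared_domain=range(10), shared_init=0))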

    Batch solution of small PDEs with the OPS DSL

    In this paper we discuss the challenges and optimisation opportunities when solving a large number of small, equally sized discretised PDEs on regular grids. We present an extension of the OPS (Oxford Parallel library for Structured meshes) embedded Domain Specific Language, and show how support can be added for solving multiple systems and how OPS makes it easy to deploy a variety of transformations and optimisations. The new capabilities in OPS allow data structure transformations, as well as execution schedule transformations, to be applied automatically to deliver high performance on a variety of hardware platforms. We evaluate our work on an industrially representative finance simulation on Intel CPUs, as well as NVIDIA GPUs.
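
    To illustrate what batching many small, equally sized systems means in practice, here is a short NumPy sketch (not OPS itself) that advances a batch of 1-D heat equations with an explicit finite-difference step; the batch index is the leading array dimension, so a single vectorised update covers all systems, which is the kind of data-layout and schedule choice the paper's transformations automate. Grid sizes and parameters are illustrative.

        import numpy as np

        def heat_step_batch(u, alpha=0.1):
            """One explicit finite-difference step applied to the whole batch at once.
            u has shape (batch, nx); the stencil is vectorised over the batch axis."""
            un = u.copy()
            un[:, 1:-1] = u[:, 1:-1] + alpha * (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2])
            return un

        batch, nx, steps = 10_000, 64, 100        # many small, equally sized systems
        u = np.zeros((batch, nx))
        u[:, nx // 2] = 1.0                       # same initial spike in every system
        for _ in range(steps):
            u = heat_step_batch(u)                # one schedule advances all systems
        print(u.shape, float(u.sum()))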

    Automatically Discovering Hidden Transformation Chaining Constraints

    Model transformations operate on models conforming to precisely defined metamodels. Consequently, it often seems relatively easy to chain them: the output of a transformation may be given as input to a second one if metamodels match. However, this simple rule has some obvious limitations. For instance, a transformation may only use a subset of a metamodel. Therefore, chaining transformations appropriately requires more information. We present here an approach that automatically discovers more detailed information about actual chaining constraints by statically analyzing transformations. The objective is to provide developers who decide to chain transformations with more data on which to base their choices. This approach has been successfully applied to the case of a library of endogenous transformations. They all have the same source and target metamodel but have some hidden chaining constraints. In such a case, the simple metamodel matching rule given above does not provide any useful information.
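
    One way to picture the analysis, purely as a sketch and not the paper's tooling, is to statically extract for each transformation the metamodel element types its rules read and the types present in its output, then flag a chain T1;T2 whenever T2 reads something T1 never leaves in its output. The footprints below are hypothetical.

        # Per-transformation footprints over one shared metamodel:
        # 'reads' = element types the rules match on, 'writes' = types present in the output.
        FOOTPRINTS = {
            "Flatten":   {"reads": {"Package", "Class"}, "writes": {"Class", "Attribute"}},
            "Rename":    {"reads": {"Class"},            "writes": {"Class", "Attribute"}},
            "PkgReport": {"reads": {"Package"},          "writes": {"Report"}},
        }

        def chain_ok(first, second):
            """A hidden chaining constraint exists when the second transformation
            needs element types the first one no longer produces."""
            missing = FOOTPRINTS[second]["reads"] - FOOTPRINTS[first]["writes"]
            return not missing, missing

        for a in FOOTPRINTS:
            for b in FOOTPRINTS:
                if a != b:
                    ok, missing = chain_ok(a, b)
                    if not ok:
                        print(f"{a} ; {b} is constrained: {b} needs {sorted(missing)}")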

    GNA: new framework for statistical data analysis

    We report on the status of GNA, a new framework for fitting large-scale physical models. GNA utilizes the data flow concept, within which a model is represented by a directed acyclic graph. Each node is an operation on an array (matrix multiplication, a derivative or cross-section calculation, etc.). The framework enables the user to create flexible and efficient large-scale lazily evaluated models, handle large numbers of parameters, propagate parameters' uncertainties while taking into account possible correlations between them, fit models, and perform statistical analysis. The main goal of the paper is to give an overview of the main concepts and methods as well as the reasons behind their design. Detailed technical information is to be published in further works. Comment: 9 pages, 3 figures, CHEP 2018, submitted to EPJ Web of Conferences.
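
    The data-flow idea translates naturally into a small Python/NumPy sketch (not GNA's actual API): each node wraps an array-valued operation, results are computed lazily on request and cached, and changing a parameter invalidates only the nodes downstream of it. The node names and the toy spectrum calculation are illustrative assumptions.

        import numpy as np

        class Node:
            """One node of the computational graph: an array-valued operation."""
            def __init__(self, op, *inputs):
                self.op, self.inputs, self._cache, self.dependants = op, inputs, None, []
                for parent in inputs:
                    parent.dependants.append(self)

            def value(self):
                if self._cache is None:                        # lazy evaluation
                    self._cache = self.op(*(p.value() for p in self.inputs))
                return self._cache

            def invalidate(self):                              # taint propagation
                self._cache = None
                for child in self.dependants:
                    child.invalidate()

        # Tiny DAG: predicted spectrum = response_matrix @ (flux * efficiency).
        flux = Node(lambda: np.array([1.0, 2.0, 3.0]))
        eff = Node(lambda: np.array([0.9, 0.8, 0.7]))
        resp = Node(lambda: np.eye(3))
        prod = Node(lambda f, e: f * e, flux, eff)
        pred = Node(lambda m, v: m @ v, resp, prod)

        print(pred.value())          # computed on first request
        eff.op = lambda: np.array([1.0, 1.0, 1.0])
        eff.invalidate()             # only eff, prod and pred are recomputed
        print(pred.value())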