
    Software Challenges For HL-LHC Data Analysis

    The high energy physics community is discussing where investment is needed to prepare software for the HL-LHC and its unprecedented challenges. The ROOT project has been one of the central software players in high energy physics for decades. From its experience and expectations, the ROOT team has distilled a comprehensive set of areas that should see research and development in the context of data analysis software, in order to make the best use of the HL-LHC's physics potential. This work shows what these areas could be, why the ROOT team believes investing in them is needed, which gains are expected, and where related work is ongoing. It can serve as an indication for future research proposals and collaborations.

    Kevoree Modeling Framework (KMF): Efficient modeling techniques for runtime use

    The creation of Domain-Specific Languages (DSLs) counts as one of the main goals in the field of Model-Driven Software Engineering (MDSE). The main purpose of these DSLs is to facilitate the manipulation of domain-specific concepts by providing developers with specific tools for their domain of expertise. A natural approach to creating DSLs is to reuse existing modeling standards and tools. In this area, the Eclipse Modeling Framework (EMF) has rapidly become the de facto standard in MDSE for building DSLs and tools based on generative techniques. However, the use of EMF-generated tools in domains like the Internet of Things (IoT), Cloud Computing, or Models@Runtime runs into several limitations. In this paper, we identify several properties the generated tools must comply with to be usable in domains other than desktop-based software systems. We then challenge EMF on these properties and describe our approach to overcome the limitations. Our approach, implemented in the Kevoree Modeling Framework (KMF), is finally evaluated according to the identified properties and compared to EMF. Comment: ISBN 978-2-87971-131-7; N° TR-SnT-2014-11 (2014)
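    The paper contrasts KMF with EMF's generative tooling. As a point of reference only, the sketch below shows how a tiny metamodel can be assembled programmatically with EMF's Ecore API; the "sensors" package, class, and attribute names are invented for illustration and are not taken from the paper:

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class SensorMetamodel {

    // Builds a minimal, hypothetical Ecore metamodel in memory.
    public static EPackage build() {
        EcoreFactory factory = EcoreFactory.eINSTANCE;

        // Hypothetical "sensors" package, used only for illustration.
        EPackage pkg = factory.createEPackage();
        pkg.setName("sensors");
        pkg.setNsPrefix("sensors");
        pkg.setNsURI("http://example.org/sensors");

        // A single EClass with one string-typed attribute.
        EClass sensor = factory.createEClass();
        sensor.setName("Sensor");

        EAttribute id = factory.createEAttribute();
        id.setName("id");
        id.setEType(EcorePackage.Literals.ESTRING);
        sensor.getEStructuralFeatures().add(id);

        pkg.getEClassifiers().add(sensor);
        return pkg;
    }
}
```

    From such a metamodel, EMF generates Java model code and editors; KMF takes the same kind of input but, as the paper argues, aims at leaner generated artifacts suited to runtime and resource-constrained use.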

    A Framework for Model-Driven Scientific Workflow Engineering

    So-called scientific workflows are an important means, in the context of data-intensive science, for reliable and efficient scientific data processing in distributed computing infrastructures such as Grids. Scientific Workflow Management Systems (SWfMS) help scientists model and run scientific workflows, whereby a domain-specific layer for workflow modeling by a scientist and a technical layer for automated workflow execution can be distinguished. Initially, many SWfMS were developed from scratch using custom workflow technologies and languages, without applying already existing and established business workflow technologies. Among the reasons were different life cycles for scientific and business workflows as well as incompatible interfaces and communication protocols of the respective execution infrastructures. Meanwhile, several business IT infrastructures have evolved into service-oriented architectures (SOAs), for which many Web service standards and technologies have been developed. The Web Services Business Process Execution Language (BPEL), for example, is a well-accepted standard for the implementation and execution of business workflows in SOAs. The SOA architecture pattern has been adopted in scientific IT infrastructures by so-called Service Grids based on existing standards and technologies. Due to this development, BPEL is also suitable for the execution of scientific workflows at the technical layer, which has been elaborated on in many publications and projects. However, BPEL is a workflow language for IT experts and is not originally suited for scientific workflow modeling by a scientist at the domain-specific layer. A domain-specific abstraction of BPEL is therefore required that can be specifically tailored to scientific workflow modeling, together with a corresponding mapping to the technical layer.
    These challenges of domain-specific abstraction and mapping are addressed in this thesis with the help of the Business Process Model and Notation (BPMN) standard and technologies from Model-Driven Software Development (MDSD). The MoDFlow approach for Model-Driven Scientific WorkFlow Engineering is presented to map domain-specific scientific workflow models via a BPMN-based intermediate layer to an executable workflow model. The intermediate layer is specified by MoDFlow.BPMN, a BPMN metamodel subset with custom extensions for the scientific domain. MoDFlow.BPMN2BPEL defines three consecutive transformation steps to map MoDFlow.BPMN to BPEL for workflow execution. Furthermore, the MoDFlow approach describes different methods to utilize and extend MoDFlow.BPMN and MoDFlow.BPMN2BPEL, with a focus on the definition of so-called domain-specific languages (DSLs) for modeling scientific workflows at the domain-specific layer.
    The MoDFlow framework is an implementation of the MoDFlow approach based on the Eclipse Modeling Framework (EMF). It is evaluated in three application scenarios that examine different utilization and extension mechanisms. The first two application scenarios investigate the technical feasibility of the approach and support scientific workflows with parameter sweeps that are executed on a Grid infrastructure. The third application scenario was conducted in collaboration with the PubFlow project, which aims to create an infrastructure for modeling and executing data publication workflows. Based on the Xtext framework, a textual DSL and a corresponding language infrastructure are defined for this purpose that support developers in creating data publication workflows. This scenario aims to illustrate the practicability of the MoDFlow framework. PubFlow currently plans to implement an additional graphical DSL based on the BPMN notation and a corresponding workflow editor for scientists.
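    The core of the approach is a model-to-model mapping from a domain-level workflow description down to an executable representation. The sketch below is not taken from the thesis; it only illustrates, with entirely hypothetical Java types (DomainStep, ExecutableActivity, mapToExecutable) and a placeholder endpoint URL, what one such transformation step from a domain-specific element to a technical-layer element might look like:

```java
import java.util.List;
import java.util.stream.Collectors;

public class WorkflowMapping {

    // Hypothetical domain-level step, e.g. as a scientist would model it in a DSL.
    record DomainStep(String name, String tool, List<String> inputs) {}

    // Hypothetical technical-layer activity, standing in for something like a BPEL invoke.
    record ExecutableActivity(String operation, String serviceEndpoint, List<String> parameters) {}

    // One transformation step: resolve the domain-level tool name to a concrete
    // service endpoint and carry the data dependencies over as parameters.
    static ExecutableActivity mapToExecutable(DomainStep step) {
        String endpoint = "https://grid.example.org/services/" + step.tool(); // placeholder
        return new ExecutableActivity(step.name(), endpoint, List.copyOf(step.inputs()));
    }

    public static void main(String[] args) {
        List<DomainStep> domainModel = List.of(
                new DomainStep("extractTemperature", "netcdf-extract", List.of("cruise42.nc")),
                new DomainStep("plotProfile", "plot-service", List.of("temperature.csv")));

        List<ExecutableActivity> executableModel = domainModel.stream()
                .map(WorkflowMapping::mapToExecutable)
                .collect(Collectors.toList());

        executableModel.forEach(a -> System.out.println(a.operation() + " -> " + a.serviceEndpoint()));
    }
}
```

    In MoDFlow itself this mapping is expressed as model transformations over the MoDFlow.BPMN intermediate layer rather than as hand-written Java; the sketch only conveys the shape of the domain-to-technical translation.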

    Automatically Leveraging MapReduce Frameworks for Data-Intensive Applications

    MapReduce is a popular programming paradigm for developing large-scale, data-intensive computation. Many frameworks that implement this paradigm have recently been developed. To leverage these frameworks, however, developers must become familiar with their APIs and rewrite existing code. Casper is a new tool that automatically translates sequential Java programs into the MapReduce paradigm. Casper identifies potential code fragments to rewrite and translates them in two steps: (1) Casper uses program synthesis to search for a program summary (i.e., a functional specification) of each code fragment. The summary is expressed using a high-level intermediate language resembling the MapReduce paradigm and verified to be semantically equivalent to the original using a theorem prover. (2) Casper generates executable code from the summary, using either the Hadoop, Spark, or Flink API. We evaluated Casper by automatically converting real-world, sequential Java benchmarks to MapReduce. The resulting benchmarks perform up to 48.2x faster than the originals. Comment: 12 pages, plus an additional 4 pages of references and appendix
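    To make the kind of translation concrete, the sketch below shows a sequential Java word-count loop next to a hand-written equivalent using Spark's Java API, the sort of target code a tool like Casper emits. This is an illustrative example, not Casper's actual input or output; the file path and class name are placeholders:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class WordCountExample {

    // Sequential version: the kind of loop a Casper-style tool takes as input.
    static Map<String, Integer> countSequential(List<String> lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.split("\\s+")) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    // MapReduce-style version on Spark: flatMap to words, map to (word, 1), reduce by key.
    static JavaPairRDD<String, Integer> countWithSpark(JavaSparkContext sc, String path) {
        JavaRDD<String> lines = sc.textFile(path);
        return lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);
    }

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCountExample").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            countWithSpark(sc, "input.txt").collect()
                    .forEach(t -> System.out.println(t._1() + ": " + t._2()));
        }
    }
}
```

    Casper's contribution is automating exactly this rewrite: synthesizing a verified summary of the sequential loop and then emitting framework-specific code like the Spark version above.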

    Correct-by-construction implementation of runtime monitors using stepwise refinement

    Runtime verification (RV) is a lightweight technique for verifying traces of computer systems. One challenge in applying RV is to guarantee that the implementation of a runtime monitor correctly detects and signals unexpected events. In this paper, we present a method for deriving correct-by-construction implementations of runtime monitors from high-level specifications using Fiat, a Coq library for stepwise refinement. SMEDL (Scenario-based Meta-Event Definition Language), a domain-specific language for event-driven RV, is chosen as the specification language. We propose an operational semantics for SMEDL, suitable for use in Fiat, that describes the behavior of a monitor in a relational way. Then, by utilizing Fiat's refinement calculus, we transform a declarative monitor specification into an executable runtime monitor with a proof that the behavior of the implementation is strictly a subset of that provided by the specification. Moreover, we define a predicate on the syntactic structure of a monitor definition to ensure termination and determinism. Most of the proof work required to generate monitor code has been automated.
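    As a rough illustration of what an event-driven runtime monitor does, the sketch below is a hand-written Java state machine (not code extracted from a Coq/Fiat development, and not SMEDL) checking a hypothetical property: "acquire" and "release" events must strictly alternate, starting with "acquire":

```java
/**
 * Minimal hand-written runtime monitor for a hypothetical lock protocol:
 * "acquire" and "release" events must strictly alternate, starting with "acquire".
 * This only illustrates the shape of a monitor; it is not derived from SMEDL or Fiat.
 */
public class LockMonitor {

    private enum State { FREE, HELD, ERROR }

    private State state = State.FREE;

    /** Feed one event to the monitor; returns false once a violation has been observed. */
    public boolean onEvent(String event) {
        switch (state) {
            case FREE -> state = event.equals("acquire") ? State.HELD : State.ERROR;
            case HELD -> state = event.equals("release") ? State.FREE : State.ERROR;
            case ERROR -> { /* stay in the error state */ }
        }
        if (state == State.ERROR) {
            System.err.println("Violation at event: " + event);
        }
        return state != State.ERROR;
    }

    public static void main(String[] args) {
        LockMonitor monitor = new LockMonitor();
        for (String e : new String[] {"acquire", "release", "release"}) {
            monitor.onEvent(e); // the third event violates the alternation property
        }
    }
}
```

    The point of the paper is that such monitor code should not be trusted by inspection: it is instead derived by refinement from a relational specification, with a machine-checked proof that its behavior stays within what the specification allows.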

    Middle-out domain-specific aspect languages and their application in agent-based modelling runtime inspection

    Domain-Specific Aspect Languages (DSALs) are a valuable tool for separating cross-cutting concerns, particularly within fields with endemic cross-cutting practices. Agent-Based Modelling (ABM) runtime inspection, which cuts across the core concern of model development, serves as a prime example. Despite their usefulness, DSALs face multiple adoption issues: the literature regarding their development and use is incohesive, coupling to a weave target hinders re-use, and available tooling is immature compared to Domain-Specific Languages (DSLs). We believe these issues can be addressed by furthering DSL middle-out techniques for DSALs.
    We first define the background of what a DSAL is and how it may be used, then move on to how DSL techniques can be used to further DSALs. We develop a middle-out semantic model approach for developing domain-level DSALs with transparent aspect orientation using adaptations of DSL techniques. We have implemented the approach as model-specific DSALs for the in-house framework Animaux, and as a middleware-specific DSAL for agent messages in the JADE framework, which can be specialised to models using extension DSALs. We give illustrative result cases using our implementations to provide a baseline for the user development costs and performance of this approach.
    In conclusion, we believe the adoption of these technologies aids ABM applications, and we encourage future work in similar fields. This thesis has given a base philosophy toward DSLs, a novel approach for the development of middle-out DSALs, and illustrative cases of this approach.
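    To illustrate the cross-cutting concern at stake, the sketch below shows a plain JADE agent in Java whose behaviour interleaves model logic with hand-written inspection logging, the kind of scattered code a middleware-specific DSAL for agent messages would factor out. The agent and its logging are invented for illustration and are not taken from the thesis:

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

/**
 * A minimal JADE agent (illustrative only) in which runtime-inspection logging
 * is tangled with the model logic -- exactly the cross-cutting code a
 * middleware-specific DSAL for agent messages would separate out.
 */
public class InspectedAgent extends Agent {

    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg == null) {
                    block(); // wait until a message arrives
                    return;
                }

                // Cross-cutting inspection concern, hand-woven into the behaviour:
                System.out.println("[inspect] " + myAgent.getLocalName()
                        + " received " + ACLMessage.getPerformative(msg.getPerformative())
                        + " from " + msg.getSender().getLocalName());

                // Core model logic would go here, e.g. updating the agent's
                // state based on msg.getContent().
            }
        });
    }
}
```

    With a DSAL, the logging block would instead be specified once, at the domain level, and woven into every message-handling site by the aspect infrastructure.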

    Towards fast metamodel evolution in LiquidML

    The software industry is applying Model-driven development approaches due to a core set of benefits, such as raising the level of abstraction and reducing coding errors. However, their underlying modeling languages tend to be quite static, making their evolution hard, specifically when the corresponding metamodel does not support primitives and/or functionalities required in specific business domains. This paper presents an extension to the LiquidML language to support fast metamodel evolution by allowing experts to abstract new language concepts from primitives, while supporting automatic tool evolution and zero application downtime. To support our claims, we evaluate the evolutionary capabilities of existing modeling languages and LiquidML in a real-world language extension. Funding: Ministerio de Economía y Competitividad TIN2016-76956-C3-2-R (POLOLAS); Ministerio de Economía y Competitividad TIN2015-71938-RED