
    Towards an Open Set of Real-World Benchmarks for Model Queries and Transformations

    With the growing size and complexity of systems under design, industry needs a new generation of Model-Driven Engineering (MDE) tools, especially for model query and transformation, with proven capability to handle large-scale scenarios. While researchers are proposing several technical solutions in this direction, the community lacks a set of shared scalability benchmarks that would simplify quantitative assessment of advancements and enable cross-evaluation of different proposals. Benchmarks in previous work have been synthesized to stress specific features of model management, lacking both generality and industrial validity. In this paper, we initiate an effort to define a set of shared benchmarks, gathering queries and transformations from real-world MDE case studies. We make these cases available for community evaluation via a public MDE benchmark repository.

    The Train Benchmark: cross-technology performance evaluation of continuous model queries

    In the model-driven development of safety-critical systems (such as automotive, avionics, or railway systems), the well-formedness of models is repeatedly validated in order to detect design flaws as early as possible. In many industrial tools, validation rules are still often implemented by a large amount of imperative model traversal code, which makes those rule implementations complicated and hard to maintain. Additionally, as models are rapidly increasing in size and complexity, efficient execution of validation rules is challenging for currently available tools. Well-formedness constraints can be captured by declarative queries over graph models, while model update operations can be specified as model transformations. This paper presents a benchmark for systematically assessing the scalability of validating and revalidating well-formedness constraints over large graph models. The benchmark defines well-formedness validation scenarios in the railway domain: a metamodel, an instance model generator, a set of well-formedness constraints captured by queries, and fault injection and repair operations (imitating the work of systems engineers by model transformations). The benchmark focuses on the performance of query evaluation, i.e. its execution time and memory consumption, with a particular emphasis on reevaluation. We demonstrate that the benchmark can be adapted to various technologies and query engines, including modeling tools and relational, graph, and semantic databases. The Train Benchmark is available as an open-source project with continuous builds at https://github.com/FTSRG/trainbenchmark.
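    To make the validate / inject fault / revalidate cycle described above concrete, the following is a minimal, self-contained Java sketch. It is not the Train Benchmark's actual metamodel, queries, or measurement harness; the class names, the constraint ("every switch must be monitored by at least one sensor"), and the fault-injection pattern are illustrative assumptions in the spirit of the abstract, showing how initial query evaluation and reevaluation after a model update can be timed separately.

    ```java
    // Illustrative sketch only -- not the actual Train Benchmark code or metamodel.
    // A tiny railway-like instance model is built in plain Java; a well-formedness
    // query is evaluated, a fault is injected via a model update, and the query is
    // re-evaluated, with each phase timed separately.
    import java.util.ArrayList;
    import java.util.List;

    public class WellFormednessSketch {

        // Hypothetical, simplified model elements (the real benchmark uses an EMF metamodel).
        static class Sensor {}
        static class Switch {
            final List<Sensor> sensors = new ArrayList<>();
        }

        // Query: collect all switches violating the hypothetical constraint
        // "every switch must be monitored by at least one sensor".
        static List<Switch> findUnmonitoredSwitches(List<Switch> model) {
            List<Switch> violations = new ArrayList<>();
            for (Switch sw : model) {
                if (sw.sensors.isEmpty()) {
                    violations.add(sw);
                }
            }
            return violations;
        }

        public static void main(String[] args) {
            // Instance model generation (the benchmark uses a parameterized generator).
            List<Switch> model = new ArrayList<>();
            for (int i = 0; i < 100_000; i++) {
                Switch sw = new Switch();
                sw.sensors.add(new Sensor());
                model.add(sw);
            }

            // Initial validation.
            long t0 = System.nanoTime();
            int initialViolations = findUnmonitoredSwitches(model).size();
            long t1 = System.nanoTime();

            // Fault injection: a model update that breaks the constraint for some elements.
            for (int i = 0; i < model.size(); i += 1000) {
                model.get(i).sensors.clear();
            }

            // Revalidation -- the phase the benchmark emphasizes.
            long t2 = System.nanoTime();
            int violationsAfterFault = findUnmonitoredSwitches(model).size();
            long t3 = System.nanoTime();

            System.out.printf("initial check: %d violations in %.2f ms%n",
                    initialViolations, (t1 - t0) / 1e6);
            System.out.printf("recheck after fault injection: %d violations in %.2f ms%n",
                    violationsAfterFault, (t3 - t2) / 1e6);
        }
    }
    ```

    A real query engine of the kind the benchmark targets would evaluate such constraints declaratively (and often incrementally, so that the recheck touches only the elements affected by the update), rather than re-traversing the whole model as this sketch does.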