
    Partial loading of XMI models

    XML Metadata Interchange (XMI) is an OMG-standardised model exchange format, natively supported by the Eclipse Modeling Framework (EMF) and the majority of modelling and model management languages and tools. Whilst XMI is widely supported, the XMI parser provided by EMF is inefficient in cases where models are read-only (such as input models for model querying, model-to-model transformation, etc.), as it always requires loading the entire model into memory. In this paper we present a novel algorithm, and a prototype implementation (SmartSAX), which is capable of partially loading models persisted in XMI. SmartSAX offers improved performance, in terms of loading time and memory footprint, over the default EMF XMI parser. We describe the algorithm in detail, and present benchmarking results that demonstrate the substantial improvements of the prototype implementation over the XMI parser provided by EMF.
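    The general idea behind partial loading can be sketched with a streaming parser that materialises only the elements a query needs and discards the rest as they are parsed. The sketch below uses Python's `xml.etree.ElementTree.iterparse` on a toy XMI-like document; it illustrates the concept only and is not the SmartSAX algorithm, whose element/attribute names here are invented for the example.

    ```python
    # Sketch: streaming partial load of an XMI-like document, keeping only
    # elements of one requested type. Illustrative only, not SmartSAX itself.
    import io
    import xml.etree.ElementTree as ET

    XMI = """<?xml version="1.0"?>
    <model>
      <element type="Task" name="build"/>
      <element type="Task" name="test"/>
      <element type="Resource" name="server"/>
    </model>"""

    def partial_load(stream, wanted_type):
        """Yield attribute dicts for elements of wanted_type only."""
        for event, elem in ET.iterparse(stream, events=("end",)):
            if elem.tag == "element" and elem.get("type") == wanted_type:
                yield dict(elem.attrib)   # copy attributes before clearing
            elem.clear()                  # free memory for unneeded elements

    tasks = list(partial_load(io.StringIO(XMI), "Task"))
    print([t["name"] for t in tasks])  # ['build', 'test']
    ```

    Because `elem.clear()` runs after every element, memory use stays proportional to the retained subset rather than the whole document, which is the property the paper's benchmarks measure.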

    Kevoree Modeling Framework (KMF): Efficient modeling techniques for runtime use

    The creation of Domain-Specific Languages (DSLs) counts as one of the main goals in the field of Model-Driven Software Engineering (MDSE). The main purpose of these DSLs is to facilitate the manipulation of domain-specific concepts, by providing developers with specific tools for their domain of expertise. A natural approach to creating DSLs is to reuse existing modeling standards and tools. In this area, the Eclipse Modeling Framework (EMF) has rapidly become the de facto standard in MDSE for building DSLs and tools based on generative techniques. However, the use of EMF-generated tools in domains like the Internet of Things (IoT), Cloud Computing or Models@Runtime reaches several limitations. In this paper, we identify several properties the generated tools must comply with to be usable in domains other than desktop-based software systems. We then challenge EMF on these properties and describe our approach to overcoming the limitations. Our approach, implemented in the Kevoree Modeling Framework (KMF), is finally evaluated according to the identified properties and compared to EMF. (ISBN 978-2-87971-131-7; N° TR-SnT-2014-11, 2014.)

    Towards Memory-Efficient Validation of Large XMI Models

    Model validation is a common activity in model-driven engineering, where a model is checked against a set of consistency rules (also referred to as constraints) to assess whether it has desirable properties further to those that can be expressed by the metamodel that it conforms to (e.g. to check that all states in a state machine are reachable, or that no classes in an object-oriented model are involved in circular inheritance relationships). Such constraints can be written in general-purpose languages (e.g. Java) or in task-specific validation languages such as the Object Constraint Language (OCL) or the Epsilon Validation Language (EVL). To check a model that is serialised in the OMG-standard XMI format against a set of constraints, the current state of practice requires loading the entire model into memory first. This can be problematic in cases where loading the model into memory requires more memory (heap space) than is available in the host machine, and is sub-optimal when carrying out distributed model validation over a number of machines. In this paper, we present an approach that uses static analysis to split sets of model validation constraints into sub-groups that operate on smaller subsets of the model. Combined with existing XMI partial loading capabilities, the proposed approach makes it possible to check larger XMI-based models on a single machine and to potentially improve efficiency when checking models in a distributed setting.
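    The grouping step can be illustrated as follows: once static analysis has determined which element types each constraint accesses, constraints with the same type footprint are grouped so that each group can run against a partial load of just those types. The constraint names and type sets below are invented for illustration; the real approach derives the type sets from the constraint bodies.

    ```python
    # Sketch: grouping validation constraints by the element types they access,
    # so each group can be checked against a partial model load.
    # Names and type sets are illustrative, not taken from the paper.
    from collections import defaultdict

    # Stand-in for the result of static analysis of constraint bodies:
    # constraint name -> set of metamodel types it reads.
    constraints = {
        "states_reachable":  {"State", "Transition"},
        "no_cyclic_inherit": {"Class"},
        "unique_state_name": {"State"},
    }

    def partition(constraints):
        """Group constraint names by the set of types they need loaded."""
        groups = defaultdict(list)
        for name, types in constraints.items():
            groups[frozenset(types)].append(name)
        return dict(groups)

    groups = partition(constraints)
    # Constraints touching only State can run on a partial load of States alone
    print(groups[frozenset({"State"})])  # ['unique_state_name']
    ```

    Each group then only requires the union of its types in memory, which is what makes the combination with partial XMI loading effective.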

    A framework for domain-specific modeling on graph databases

    Software complexity increases all the time: systems become larger and more complex. Modeling is a central part of software engineering for tackling the challenges of complexity. However, a prominent challenge that model-driven software development faces is the scalability of modeling tools with a growing size of models. Some initiatives have started exploring modeling while storing models in a graph database. In this thesis, we present NMF, a framework to create and edit MDE models in a Neo4j database lifted to the abstraction of the modeling language.

    Improving memory efficiency for processing large-scale models

    Scalability is a main obstacle to applying Model-Driven Engineering to reverse engineering, or to any other activity manipulating large models. Existing solutions to persist and query large models are currently inefficient and strongly tied to memory availability. In this paper, we propose a memory unload strategy for Neo4EMF, a persistence layer built on top of the Eclipse Modeling Framework and based on a Neo4j database backend. Our solution allows us to partially unload a model during the execution of a query by using a periodic dirty-saving mechanism and transparent reloading. Our experiments show that this approach enables querying large models in a restricted amount of memory with acceptable performance.
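    The dirty-saving-and-unload mechanism can be sketched as a bounded cache in front of a persistent store: when the in-memory set exceeds a budget, modified objects are saved and then unloaded, and any unloaded object is transparently reloaded on its next access. In this sketch a plain dict stands in for the Neo4j backend, and all class and method names are illustrative rather than Neo4EMF's API.

    ```python
    # Sketch of a dirty-saving unload strategy (illustrative, not Neo4EMF's API).
    class UnloadingCache:
        def __init__(self, backend, budget):
            self.backend = backend    # persistent store (here: a dict)
            self.budget = budget      # max objects held in memory
            self.loaded = {}          # id -> object currently in memory
            self.dirty = set()        # ids modified since the last save

        def put(self, oid, obj):
            self.loaded[oid] = obj
            self.dirty.add(oid)
            self._maybe_unload()

        def get(self, oid):
            if oid not in self.loaded:          # transparent reload
                self.loaded[oid] = self.backend[oid]
            return self.loaded[oid]

        def _maybe_unload(self):
            if len(self.loaded) <= self.budget:
                return
            for oid in self.dirty:              # save dirty objects first
                self.backend[oid] = self.loaded[oid]
            self.dirty.clear()
            while len(self.loaded) > self.budget:   # then unload oldest
                self.loaded.pop(next(iter(self.loaded)))

    backend = {}
    cache = UnloadingCache(backend, budget=2)
    for i in range(5):
        cache.put(i, f"element-{i}")
    print(cache.get(0))  # element-0: reloaded transparently after unload
    ```

    The key property is that memory stays bounded by the budget while no data is lost, because an object is only evicted after its dirty state has been flushed to the backend.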

    Gradient based hyper-parameter optimisation for well conditioned kriging metamodels

    In this work a two-step approach to efficiently carrying out hyper-parameter optimisation, required for building kriging and gradient-enhanced kriging metamodels, is presented. The suggested approach makes use of an initial line search along the hyper-diagonal of the design space in order to find a suitable starting point for a subsequent gradient-based optimisation algorithm. During the optimisation an upper-bound constraint is imposed on the condition number of the correlation matrix in order to keep it from becoming ill-conditioned. Partial derivatives of both the condensed log-likelihood function and the condition number are obtained using the adjoint method; the latter has been derived in this work. The approach is tested on a number of analytical examples and comparisons are made to other optimisation approaches. Finally the approach is used to construct metamodels for a finite element model of an aircraft wing box comprising 126 thickness design variables and is then compared with a subset of the other optimisation approaches.
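    The two-step structure can be sketched in miniature: first a coarse search along the hyper-diagonal (all hyper-parameters equal, swept over a log scale) to find a starting point, then a gradient-based refinement from that point. The objective below is a simple quadratic stand-in for the condensed log-likelihood, and the sketch omits the condition-number constraint; everything here is illustrative.

    ```python
    # Sketch of the two-step idea: hyper-diagonal line search, then gradient
    # descent. The objective is a stand-in for the condensed log-likelihood.
    DIM = 3  # number of hyper-parameters (one per design variable)

    def objective(theta):
        """Illustrative smooth objective with minimum at theta_i = i + 1."""
        return sum((t - (i + 1)) ** 2 for i, t in enumerate(theta))

    def diagonal_line_search(lo=-3.0, hi=3.0, steps=25):
        """Evaluate along theta = 10**s * (1, ..., 1); return the best point."""
        best = None
        for k in range(steps + 1):
            s = lo + (hi - lo) * k / steps
            theta = [10.0 ** s] * DIM
            if best is None or objective(theta) < objective(best):
                best = theta
        return best

    def gradient_descent(theta, lr=0.1, iters=200):
        """Refine from the line-search result using exact gradients."""
        for _ in range(iters):
            grad = [2 * (t - (i + 1)) for i, t in enumerate(theta)]
            theta = [t - lr * g for t, g in zip(theta, grad)]
        return theta

    start = diagonal_line_search()
    theta_opt = gradient_descent(start)
    print([round(t, 3) for t in theta_opt])  # converges to [1.0, 2.0, 3.0]
    ```

    The point of the initial diagonal sweep is cheapness: it costs one objective evaluation per step regardless of dimension, yet lands the gradient method in a well-behaved region of the hyper-parameter space.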

    Towards efficient comparison of change-based models

    Comparison of large models can be time-consuming, since every element has to be visited, matched, and compared with its respective element in other models. This can result in bottlenecks in collaborative modelling environments, where identifying differences between two versions of a model is desirable. Reducing the comparison process to only the elements that have been modified since a previous known state (e.g., a previous version) could significantly reduce the time required for large model comparison. This paper presents how change-based persistence can be used to localise the comparison of models so that only elements affected by recent changes are compared, substantially reducing comparison and differencing time (by up to 90% in some experiments) compared to state-based model comparison.
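    The localisation idea can be sketched as follows: given a change log recorded since a common ancestor version, the differencing step visits only the elements named in that log instead of the whole model. The model and change representations below are invented for illustration and are not the paper's notation.

    ```python
    # Sketch: localising model comparison using a recorded change log.
    # Representations are illustrative, not the paper's.
    old = {"e1": "A", "e2": "B", "e3": "C"}
    new = {"e1": "A", "e2": "B*", "e4": "D"}

    # Changes recorded since 'old' (e.g. by a change-based persistence layer)
    changes = [("set", "e2"), ("delete", "e3"), ("create", "e4")]

    def localised_diff(old, new, changes):
        """Compare only the elements touched by the change log."""
        diff = []
        for kind, eid in changes:
            if kind == "create":
                diff.append(("added", eid, new[eid]))
            elif kind == "delete":
                diff.append(("removed", eid, old[eid]))
            elif old.get(eid) != new.get(eid):
                diff.append(("changed", eid, old[eid], new[eid]))
        return diff

    print(localised_diff(old, new, changes))
    # e1 is never visited: it appears in no recorded change
    ```

    The saving comes from the untouched elements: in a large model where only a small fraction changed between versions, the comparison cost is proportional to the change log rather than the model size.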

    Short-term fire front spread prediction using inverse modelling and airborne infrared images

    A wildfire forecasting tool capable of estimating the fire perimeter position sufficiently in advance of the actual fire arrival will assist firefighting operations and optimise available resources. However, owing to limited knowledge of fire event characteristics (e.g. fuel distribution and characteristics, weather variability) and the short time available to deliver a forecast, most current models only provide a rough approximation of the forthcoming fire positions and dynamics. The problem can be tackled by coupling data assimilation and inverse modelling techniques. We present an inverse modelling-based algorithm that uses infrared airborne images to forecast short-term wildfire dynamics with a positive lead time. The algorithm is applied to two real-scale mallee-heath shrubland fire experiments, of 9 and 25 ha, successfully forecasting the fire perimeter shape and position in the short term. Forecast dependency on the assimilation windows is explored to prepare the system to meet real scenario constraints. It is envisaged the system will be applied at larger time and space scales. (Peer reviewed; postprint, author's final draft.)

    A research roadmap towards achieving scalability in model driven engineering

    As Model-Driven Engineering (MDE) is increasingly applied to larger and more complex systems, the current generation of modelling and model management technologies is being pushed to its limits in terms of capacity and efficiency. Additional research and development is imperative in order for MDE to remain relevant to industrial practice and to continue delivering its widely recognised productivity, quality, and maintainability benefits. Achieving scalability in modelling and MDE involves being able to construct large models and domain-specific languages in a systematic manner, enabling teams of modellers to construct and refine large models in a collaborative manner, advancing the state of the art in model querying and transformation tools so that they can cope with large models (of the scale of millions of model elements), and providing an infrastructure for efficient storage, indexing and retrieval of large models. This paper attempts to provide a research roadmap for these aspects of scalability in MDE and to outline directions for work in this emerging research area.