
    Turning Models Inside Out

    We present an approach for change-based (as opposed to state-based) model persistence that can facilitate high-performance incremental model processing (e.g. validation, transformation) by minimising the cost of change identification when models evolve. We illustrate a prototype that implements the proposed approach on top of the Eclipse Modelling Framework, and we present a roadmap for further work in this direction.
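The core idea of the abstract above can be sketched in a few lines: instead of persisting the full model state and diffing snapshots, each edit is appended to a change log, so an incremental processor can identify what changed directly. This is an illustrative sketch only; the names (`Change`, `ChangeLog`) are assumptions, not the paper's actual EMF-based API.

```python
from dataclasses import dataclass

@dataclass
class Change:
    kind: str      # "set", "add", or "remove"
    element: str   # id of the model element touched
    feature: str   # attribute or reference name
    value: object  # new (or removed) value

class ChangeLog:
    """Change-based persistence: record edits instead of saving snapshots."""
    def __init__(self):
        self.changes = []

    def record(self, kind, element, feature, value):
        self.changes.append(Change(kind, element, feature, value))

    def affected_elements(self):
        # Incremental processors (validation, transformation) only need
        # to revisit these elements, not re-scan the whole model.
        return {c.element for c in self.changes}

log = ChangeLog()
log.record("set", "Book#1", "title", "MDE in Practice")
log.record("add", "Library#1", "books", "Book#2")
print(sorted(log.affected_elements()))  # → ['Book#1', 'Library#1']
```

The point of the design is that the cost of change identification is proportional to the number of recorded changes, not to the model size.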

    VirtualEMF: a Model Virtualization Tool

    Specification of complex systems involves several heterogeneous and interrelated models. Model composition is a crucial (and complex) modeling activity that allows combining different system perspectives into a single cross-domain view. Current composition solutions fail to fully address the problem, presenting important limitations concerning efficiency, interoperability, and/or synchronization. To cope with these issues, in this demo we introduce VirtualEMF: a model composition tool based on the concept of a virtual model, i.e., a model that does not hold concrete data, but redirects all its model manipulation operations to the set of base models from which it was generated.
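The virtual-model concept described above is essentially a proxy: the composed view stores nothing and forwards every lookup to its base models. The following is a minimal sketch of that redirection idea, assuming hypothetical `Model`/`VirtualModel` classes; it is not VirtualEMF's actual API.

```python
class Model:
    """A base model holding concrete elements."""
    def __init__(self, name, elements):
        self.name = name
        self.elements = dict(elements)

class VirtualModel:
    """Holds no concrete data; redirects operations to its base models."""
    def __init__(self, *bases):
        self.bases = bases  # the models the virtual model composes

    def get(self, element_id):
        # Redirect the manipulation operation to the first base model
        # that owns the element; the virtual model stores nothing itself.
        for base in self.bases:
            if element_id in base.elements:
                return base.elements[element_id]
        raise KeyError(element_id)

design = Model("design", {"ClassA": {"kind": "class"}})
deploy = Model("deployment", {"NodeX": {"kind": "node"}})
view = VirtualModel(design, deploy)
print(view.get("NodeX")["kind"])  # → node
```

Because nothing is duplicated, changes in a base model are immediately visible through the virtual view, which is what makes synchronization cheap.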

    transML: A Family of Languages to Model Model Transformations

    Proceedings of: 13th International Conference on Model Driven Engineering Languages and Systems, MODELS 2010, Oslo, Norway, October 3-8, 2010.
    Model transformation is one of the pillars of Model-Driven Engineering (MDE). The increasing complexity of systems and modelling languages has dramatically raised the complexity and size of model transformations. Even though many transformation languages and tools have been proposed in the last few years, most of them are directed to the implementation phase of transformation development. However, there is a lack of cohesive support for the other phases of transformation development, like requirements, analysis, design and testing. In this paper, we propose a unified family of languages to cover the life-cycle of transformation development. Moreover, following an MDE approach, we provide tools to partially automate the progressive refinement of models between the different phases and the generation of code for specific transformation implementation languages. Work funded by the Spanish Ministry of Science (project TIN2008-02081 and grants JC2009-00015, PR2009-0019), the R&D programme of the Madrid Region (project S2009/TIC-1650), and the European Commission's 7th Framework programme (grants #218575 (INESS), #248864 (MADES)).

    Example-based Validation of Domain-Specific Visual Languages

    This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in SLE 2015: Proceedings of the 2015 ACM SIGPLAN International Conference on Software Language Engineering, http://dx.doi.org/10.1145/2814251.2814256
    The definition of Domain-Specific Languages (DSLs) is a recurrent activity in Model-Driven Engineering. However, their construction is often an ad-hoc process, partly due to the lack of tools enabling a proper engineering of DSLs and encouraging domain experts to play an active role. The focus of this paper is the validation of meta-models for visual DSLs. For this purpose, we propose a language and tool support for describing properties that instances of meta-models should (or should not) meet. Our system then uses a model finder to produce example models, enriched with a graphical concrete syntax, that confirm or refute the assumptions of the meta-model developer. Our language complements metaBest, a framework for the validation and verification of meta-models that includes two other languages for unit testing and specification-based testing of meta-models. A salient feature of our approach is that it fosters interaction with domain experts through the use, processing and creation of informal drawings constructed in editors like yEd or Dia. We assess the usefulness of the approach in the validation of a DSL for house blueprints, with the participation of 26 fourth-year computer science students. Work supported by the Spanish MINECO (TIN2011-24139 and TIN2014-52129-R), the R&D programme of the Madrid Region (S2013/ICE-3006), and the EU commission (FP7-ICT-2013-10, #611125).

    Generic meta-modelling with concepts, templates and mixin layers

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-16145-2_2. Proceedings of the 13th International Conference, MODELS 2010, Oslo, Norway, October 3-8, 2010.
    Meta-modelling is a key technique in Model Driven Engineering, where it is used for language engineering and domain modelling. However, mainstream approaches like the OMG's Meta-Object Facility provide little support for abstraction, modularity, reusability and extendibility of (meta-)models, behaviours and transformations. In order to alleviate this weakness, we bring three elements of generic programming into meta-modelling: concepts, templates and mixin layers. Concepts permit an additional typing for models, enabling the definition of behaviours and transformations independently of meta-models, making specifications reusable. Templates use concepts to express requirements on their generic parameters, and are applicable to models and meta-models. Finally, we define functional layers by means of meta-model mixins which can extend other meta-models. As a proof of concept, we also report on MetaDepth, a multi-level meta-modelling framework that implements these ideas. Work sponsored by the Spanish Ministry of Science, project TIN2008-02081 and mobility grants JC2009-00015 and PR2009-0019, and by the R&D programme of the Community of Madrid, project S2009/TIC-165.
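A rough programming-language analogue of the "concepts" idea above: a concept states the structural requirements a (meta-)model must satisfy, and generic behaviour is written against the concept rather than any concrete meta-model. The sketch below uses Python's `typing.Protocol` purely as an illustration of this decoupling; it is not the paper's MetaDepth notation.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class NamedElement(Protocol):
    """A 'concept': any element exposing a `name` attribute satisfies it."""
    name: str

def rename_all(elements, suffix):
    # Generic behaviour: works for any meta-model whose elements
    # structurally satisfy the NamedElement concept.
    return [e.name + suffix for e in elements if isinstance(e, NamedElement)]

# Two unrelated "meta-models" whose elements both satisfy the concept.
class StateMachineState:
    def __init__(self, name):
        self.name = name

class PetriNetPlace:
    def __init__(self, name):
        self.name = name

print(rename_all([StateMachineState("idle"), PetriNetPlace("p1")], "_v2"))
# → ['idle_v2', 'p1_v2']
```

The behaviour is specified once against the concept and reused across meta-models, which is the reusability gain the abstract describes.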

    Type inference in flexible model-driven engineering using classification algorithms

    Flexible or bottom-up model-driven engineering (MDE) is an emerging approach to domain and systems modelling. Domain experts, who have detailed domain knowledge, typically lack the technical expertise to transfer this knowledge using traditional MDE tools. Flexible MDE approaches tackle this challenge by promoting the use of simple drawing tools to increase the involvement of domain experts in the language definition process. In such approaches, no metamodel is created upfront; instead, the process starts with the definition of example models that will be used to infer the metamodel. Pre-defined metamodels created by MDE experts may miss important concepts of the domain and thus restrict their expressiveness. However, the lack of a metamodel that encodes the semantics of conforming models has some drawbacks, among others that of having models with elements that are unintentionally left untyped. In this paper, we propose the use of classification algorithms to help with the inference of such untyped elements. We evaluate the proposed approach on a number of randomly generated example models from various domains. The correct type prediction varies from 23% to 100%, depending on the domain, the proportion of elements that were left untyped, and the prediction algorithm used.
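The idea behind classification-based type inference can be sketched briefly: learn from the elements that do carry a type in the example models (here using two toy structural features per element) and predict the type of elements left untyped. A 1-nearest-neighbour rule stands in for the classification algorithms the paper actually evaluates; the features and names below are illustrative assumptions.

```python
def predict_type(untyped, typed_examples):
    """Predict a type for an untyped element from typed examples.

    typed_examples: list of ((features), type_name) pairs; the feature
    vector of the nearest typed element decides the predicted type.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, type_name = min(typed_examples, key=lambda ex: dist(ex[0], untyped))
    return type_name

# Feature vector per element: (number of attributes, number of references)
examples = [
    ((3, 0), "Class"),
    ((4, 1), "Class"),
    ((0, 2), "Association"),
    ((1, 3), "Association"),
]
# An untyped element with many attributes and no references is
# predicted to be a Class.
print(predict_type((5, 0), examples))  # → Class
```

As the abstract notes, accuracy in practice depends on the domain, on how many elements are untyped, and on the algorithm chosen.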

    Generic model transformations: Write once, reuse everywhere

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-21732-6_5. Proceedings of the 4th International Conference, ICMT 2011, Zurich, Switzerland, June 27-28, 2011.
    Model transformation is one of the core techniques in Model Driven Engineering. Many transformation languages exist nowadays, but few offer mechanisms directed to the reuse of whole transformations or transformation fragments in different contexts. Taking inspiration from generic programming, in this paper we define model transformation templates. These templates are defined over meta-model concepts which can later be bound to specific meta-models. The binding mechanism is flexible, as it permits mapping concepts and meta-models with certain kinds of structural heterogeneities. The approach is general and can be applied to any model transformation language. In this paper we report on its application to ATL. Work funded by the Spanish Ministry of Science (projects TIN2008-02081 and TIN2009-11555), and the R&D programme of the Madrid Region (project S2009/TIC-1650).
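The binding step described above can be illustrated in miniature: a transformation template refers only to concept roles, and a binding maps each role to a concrete meta-model type before the template runs. The function and role names below are hypothetical, not ATL syntax.

```python
def instantiate_template(element_names, binding):
    """A toy transformation template written against a concept role.

    The template logic refers only to the role "NamedThing"; the binding
    resolves that role to a concrete meta-model type at instantiation time.
    """
    target_type = binding["NamedThing"]
    return [f"{target_type}({name})" for name in element_names]

# Bind the same template to two different meta-models.
print(instantiate_template(["idle", "run"], {"NamedThing": "State"}))
# → ['State(idle)', 'State(run)']
print(instantiate_template(["p1"], {"NamedThing": "Place"}))
# → ['Place(p1)']
```

One template, two bindings: this is the "write once, reuse everywhere" economy the title refers to.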

    TRIM33 switches off Ifnb1 gene transcription during the late phase of macrophage activation

    Despite its importance during viral or bacterial infections, transcriptional regulation of the interferon-β gene (Ifnb1) in activated macrophages is only partially understood. Here we report that TRIM33 deficiency results in high, sustained expression of Ifnb1 at late stages of Toll-like receptor-mediated activation in macrophages but not in fibroblasts. In macrophages, TRIM33 is recruited by PU.1 to a conserved region, the Ifnb1 Control Element (ICE), located 15 kb upstream of the Ifnb1 transcription start site. ICE constitutively interacts with Ifnb1 through a TRIM33-independent chromatin loop. At late phases of lipopolysaccharide activation of macrophages, TRIM33 is bound to ICE, regulates Ifnb1 enhanceosome loading, controls Ifnb1 chromatin structure and represses Ifnb1 gene transcription by preventing recruitment of CBP/p300. These results characterize a previously unknown mechanism of macrophage-specific regulation of Ifnb1 transcription whereby TRIM33 is critical for Ifnb1 gene transcription shutdown.

    Low-code development and model-driven engineering: Two sides of the same coin?

    The last few years have witnessed a significant growth of so-called low-code development platforms (LCDPs), both in gaining traction on the market and in attracting interest from academia. LCDPs are advertised as visual development platforms, typically running on the cloud, reducing the need for manual coding and also targeting non-professional programmers. Since LCDPs share many of the goals and features of model-driven engineering approaches, it is a common point of debate whether low-code is just a new buzzword for model-driven technologies, or whether the two terms refer to genuinely distinct approaches. To contribute to this discussion, in this expert-voice paper we compare and contrast low-code and model-driven approaches, identifying their differences and commonalities, analysing their strong and weak points, and proposing directions for cross-pollination.

    Predicting therapy success and costs using baseline characteristics - An Approach for personalized treatment recommendations

    Background: Different treatment alternatives exist for psychological disorders. Both the clinical and the cost effectiveness of treatment are crucial aspects for policy makers, therapists, and patients, and thus play major roles in healthcare decision-making. At the start of an intervention, it is often not clear which specific individuals benefit most from a particular intervention alternative, or how costs will be distributed on an individual patient level.
    Objective: This study aimed at predicting the individual outcome and costs for patients before the start of an internet-based intervention. Based on these predictions, individualized treatment recommendations can be provided. Thus, we expand the discussion of personalized treatment recommendation.
    Methods: Outcomes and costs were predicted based on baseline data of 350 patients from a two-arm randomized controlled trial that compared treatment as usual and blended therapy for depressive disorders. For this purpose, we evaluated various machine learning techniques, compared the predictive accuracy of these techniques, and identified the features that contributed most to the prediction performance. We then combined these predictions and used an incremental cost-effectiveness ratio to derive individual treatment recommendations before the start of treatment.
    Results: Predicting clinical outcomes and costs is a challenging task that comes with high uncertainty when only baseline information is available. However, we were able to generate predictions that were more accurate than a predefined reference measure in the form of mean outcome and cost values. Questionnaires that include anxiety or depression items, and questions regarding the mobility of individuals and their energy levels, contributed to the prediction performance. We then described how patients can be individually allocated to the most appropriate treatment type. For an incremental cost-effectiveness threshold of 25,000 €/quality-adjusted life year, we demonstrated that our recommendations would have led to slightly worse outcomes (by 1.98%), but at decreased cost (by 5.42%).
    Conclusions: Our results indicate that it is feasible to provide personalized treatment recommendations at baseline and thus allocate patients to the most beneficial treatment type. This could potentially lead to improved decision-making, better outcomes for individuals, and reduced healthcare costs.
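The decision rule in the abstract above can be made concrete: compute the incremental cost-effectiveness ratio (ICER) of blended therapy versus treatment as usual from one patient's predicted cost and outcome (in QALYs), and recommend the alternative given a willingness-to-pay threshold. The 25,000 €/QALY threshold comes from the study; the cost and QALY figures below are invented for illustration, and the function is a sketch of the general ICER rule, not the authors' exact procedure.

```python
def recommend(cost_usual, qaly_usual, cost_blended, qaly_blended,
              threshold=25_000):  # willingness to pay, in €/QALY
    """Recommend 'blended' or 'usual' for one patient via the ICER rule."""
    delta_cost = cost_blended - cost_usual
    delta_qaly = qaly_blended - qaly_usual
    if delta_qaly == 0:
        # Equal effectiveness: pick the cheaper alternative.
        return "blended" if delta_cost < 0 else "usual"
    icer = delta_cost / delta_qaly
    if delta_qaly > 0:
        # Blended gains QALYs: acceptable if each extra QALY costs
        # no more than the threshold.
        return "blended" if icer <= threshold else "usual"
    # Blended loses QALYs: acceptable only if the savings per QALY
    # lost exceed the threshold.
    return "blended" if icer >= threshold else "usual"

print(recommend(cost_usual=4_000, qaly_usual=0.70,
                cost_blended=5_000, qaly_blended=0.76))  # → blended
```

Here the extra 1,000 € buys 0.06 QALYs (ICER ≈ 16,667 €/QALY, below the threshold), so blended therapy is recommended; with a sufficiently higher incremental cost the same rule would flip to treatment as usual.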