
    Evaluation of Kermeta for Solving Graph-based Problems

    Kermeta is a meta-language for specifying the structure and behavior of graphs of interconnected objects called models. In this paper, we show that Kermeta is relatively suitable for solving three graph-based problems. First, Kermeta allows the specification of generic model transformations such as refactorings that we apply to different metamodels, including Ecore, Java, and UML. Second, we demonstrate the extensibility of Kermeta to the formal language Alloy using an inter-language model transformation. Kermeta uses Alloy to generate recommendations for completing partially specified models. Third, we show that the Kermeta compiler achieves better execution time and memory performance compared to similar graph-based approaches on a common case study. The three solutions proposed for these graph-based problems and their evaluation with Kermeta according to the criteria of genericity, extensibility, and performance are the main contribution of the paper. Another contribution is the comparison of these solutions with those proposed by other graph-based tools.
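
    The following is a minimal Java sketch (not Kermeta, whose own syntax differs) of the setting this abstract assumes: a model is a graph of interconnected objects, and a generic operation is written against an abstract element interface so that it can be run on instances of any metamodel. The interface and method names (ModelElement, references, reachableSize) are hypothetical, not taken from the paper.

        // Hypothetical, metamodel-independent view of a model as an object graph.
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        interface ModelElement {
            List<ModelElement> references();   // outgoing edges of the object graph
        }

        final class ModelQueries {
            /** Counts the elements reachable from a root, whatever the concrete metamodel. */
            static int reachableSize(ModelElement root) {
                Set<ModelElement> seen = new HashSet<>();
                Deque<ModelElement> todo = new ArrayDeque<>();
                todo.push(root);
                while (!todo.isEmpty()) {
                    ModelElement e = todo.pop();
                    if (seen.add(e)) {
                        todo.addAll(e.references());
                    }
                }
                return seen.size();
            }
        }

    Any Ecore, Java, or UML model adapted to such an interface could be handed to the same query unchanged, which is the kind of metamodel-independent genericity the paper evaluates.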

    Generic Model Refactorings

    Many modeling languages share some common concepts and principles. For example, Java, MOF, and UML share some aspects of the concepts of classes, methods, attributes, and inheritance. However, model transformations such as refactorings specified for a given language cannot be readily reused for another language because their related metamodels may be structurally different. Our aim is to enable a flexible reuse of model transformations across various metamodels. Thus, in this paper, we present an approach allowing the specification of generic model transformations, in particular refactorings, so that they can be applied to different metamodels. Our approach relies on two mechanisms: (1) an adaptation based mainly on the weaving of aspects; (2) the notion of model typing, an extension of object typing in the model-oriented context. We validated our approach by performing experiments that consisted of specifying three well-known refactorings (Encapsulate Field, Move Method, and Pull Up Method) and applying each of them to three different metamodels (Java, MOF, and UML).
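
    As a rough analogy for the model-typing idea, here is a Java sketch in which a generic pull-up refactoring (analogous to the paper's Pull Up Method) is written against a small "required" metamodel; any concrete metamodel (Java, MOF, UML) whose classes can be adapted to these interfaces could reuse it unchanged. The interfaces MClass and MProperty and the class PullUpFeature are illustrative inventions; Kermeta's actual mechanisms (aspect weaving and model typing) are richer than plain Java interfaces.

        import java.util.List;

        interface MClass {
            String name();
            MClass superClass();              // null when there is none
            List<MProperty> properties();     // assumed to return the live, mutable list
        }

        interface MProperty {
            String name();
        }

        final class PullUpFeature {
            /** Moves a feature from a subclass to its superclass; the refactoring
             *  only relies on concepts shared by the Java, MOF, and UML metamodels. */
            static void pullUp(MClass sub, String featureName) {
                MClass sup = sub.superClass();
                if (sup == null) throw new IllegalStateException("no superclass");
                MProperty feature = sub.properties().stream()
                        .filter(p -> p.name().equals(featureName))
                        .findFirst()
                        .orElseThrow(() -> new IllegalArgumentException("unknown feature: " + featureName));
                sub.properties().remove(feature);
                sup.properties().add(feature);
            }
        }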

    Improving Prolog Programs: Refactoring for Prolog

    Refactoring is an established technique from the OO community to restructure code: it aims at improving software readability, maintainability, and extensibility. Although refactoring is not tied to the OO paradigm in particular, its ideas have not been applied to logic programming until now. This paper applies the ideas of refactoring to Prolog programs. A catalogue is presented listing refactorings classified according to scope. Some of the refactorings have been adapted from the OO paradigm, while others have been specifically designed for Prolog. The discrepancy between intended and operational semantics in Prolog is also addressed by some of the refactorings. In addition, ViPReSS, a semi-automatic refactoring browser, is discussed, and the experience of applying ViPReSS to a large Prolog legacy system is reported. Our main conclusion is that refactoring is not only a viable technique in Prolog but also a rather desirable one.
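
    For readers unfamiliar with the OO catalogue the paper starts from, here is a small illustration (in Java rather than Prolog, and not taken from the paper) of what a behaviour-preserving refactoring looks like: the gross-amount computation is extracted into a named helper (the classic Extract Method), improving readability without changing the result. The class and method names are invented for the example.

        // Before: the gross amount is computed inline.
        final class InvoiceBefore {
            double total(double net, double taxRate, double discount) {
                return net + net * taxRate - discount;
            }
        }

        // After: same computation, with the gross amount extracted into a helper.
        final class InvoiceAfter {
            double total(double net, double taxRate, double discount) {
                return gross(net, taxRate) - discount;
            }

            private double gross(double net, double taxRate) {
                return net + net * taxRate;
            }
        }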

    Full Semantics Preservation in Model Transformation – A Comparison of Proof Techniques

    Model transformation is a prime technique in modern, model-driven software design. One of the most challenging issues is to show that the semantics of the models is not affected by the transformation. So far, there is hardly any research into this issue, in particular in those cases where the source and target languages are different.

    In this paper, we use two different state-of-the-art proof techniques (explicit bisimulation construction versus borrowed contexts) to show bisimilarity preservation of a given model transformation between two simple (self-defined) languages, both of which are equipped with a graph-transformation-based operational semantics. The contrast between these proof techniques is interesting because they are based on different model transformation strategies: triple graph grammars versus in situ transformation. We proceed to compare the proofs and discuss scalability to a more realistic setting.
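
    As background for the term "bisimilarity preservation", here is a generic sketch of the standard definition (the notation below is ours, not the paper's): the graph-transformation semantics of each language induces a labelled transition system, and the transformation preserves semantics when every source model is bisimilar to its translation.

        A relation $R \subseteq S_1 \times S_2$ between the states of two labelled
        transition systems is a bisimulation iff for every $(p, q) \in R$ and label $a$:
        \[
          p \xrightarrow{a} p' \;\Rightarrow\; \exists q'.\ q \xrightarrow{a} q' \wedge (p', q') \in R,
          \qquad
          q \xrightarrow{a} q' \;\Rightarrow\; \exists p'.\ p \xrightarrow{a} p' \wedge (p', q') \in R.
        \]
        A transformation $T$ preserves semantics in this sense when $m \sim T(m)$,
        i.e. $m$ and $T(m)$ are related by some bisimulation, for every source model $m$.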