
    Model driven product line engineering: core asset and process implications

    Reuse is at the heart of major improvements in productivity and quality in software engineering. Both Model-Driven Engineering (MDE) and Software Product Line Engineering (SPLE) are software development paradigms that promote reuse. Specifically, they promote systematic reuse and a departure from craftsmanship towards an industrialization of the software development process. MDE and SPLE have established their benefits separately. Their combination, here called Model-Driven Product Line Engineering (MDPLE), gathers the advantages of both. Nevertheless, this blending requires MDE to be recast in SPLE terms, with implications for both the core assets and the software development process. The challenges are twofold: (i) models become the central core assets from which products are obtained, and (ii) the software development process needs to cater for the changes that SPLE and MDE introduce. This dissertation proposes a solution to the first challenge following a feature-oriented approach, with an emphasis on reuse and early detection of inconsistencies. The second part is dedicated to assembly processes, a clear example of the complexity MDPLE introduces into software development processes. This work advocates a new discipline within the general software development process, the Assembly Plan Management, which raises the abstraction level and increases reuse in such processes. Different case studies illustrate the presented ideas. This work was hosted by the University of the Basque Country (Faculty of Computer Sciences). The author enjoyed a doctoral grant from the Basque Government under the “Researchers Training Program” during the years 2005 to 2009. The work was co-supported by the Spanish Ministry of Education and the European Social Fund under contracts WAPO (TIN2005-05610) and MODELINE (TIN2008-06507-C02-01).
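    As a rough illustration of the feature-oriented idea (a minimal sketch under our own assumptions, not taken from the dissertation; all feature names, fragment names, and constraints are hypothetical), a product model can be derived by composing the model fragments contributed by the selected features, with constraint checks catching inconsistencies early:

```python
# Minimal sketch of feature-oriented product derivation (illustrative only):
# model fragments are the core assets, and a feature selection composes them
# into a product model. All names are hypothetical.

# Each feature maps to the model fragments it contributes.
FEATURE_FRAGMENTS = {
    "base":    ["Core.ecore"],
    "payment": ["Payment.ecore"],
    "loyalty": ["Loyalty.ecore"],
}

# Simple constraint: "loyalty" requires "payment" (early inconsistency check).
REQUIRES = {"loyalty": {"payment"}}

def derive_product(selection):
    """Compose the model fragments of the selected features into a product."""
    for feature in selection:
        missing = REQUIRES.get(feature, set()) - selection
        if missing:  # detect inconsistencies before assembling anything
            raise ValueError(f"{feature} requires {sorted(missing)}")
    fragments = []
    for feature in sorted(selection):
        fragments.extend(FEATURE_FRAGMENTS[feature])
    return fragments

print(derive_product({"base", "payment", "loyalty"}))
```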

    Consistency-by-Construction Techniques for Software Models and Model Transformations

    A model is consistent with given specifications (specs) if and only if all of them hold for the model, i.e., all the specs are true of the model. Constructing consistent models (e.g., programs or artifacts) is vital during software development, especially in Model-Driven Engineering (MDE), where models are employed throughout the software development life cycle (analysis, design, implementation, and testing). Models are usually written in domain-specific modeling languages (DSMLs) to describe a domain problem or a system from different perspectives and at several levels of abstraction. A model is consistent if it conforms to the definition of its DSML (usually given by a meta-model and integrity constraints). Model transformations are an essential technology for manipulating models, e.g., for refactoring and code generation in a (semi-)automated way. They are often expected to have a well-defined behavior in the sense that their resulting models are consistent with respect to a set of constraints. Inconsistent models may compromise the applicability of transformations, making the automation untrustworthy and error-prone. The consistency of models and model transformation results contributes to the quality of the overall modeled system. Although MDE has progressed significantly and has become an accepted best practice in many application domains such as automotive and aerospace, several significant challenges still have to be tackled to realize the MDE vision in industry. Challenges such as handling and resolving inconsistent (e.g., incomplete) models, enabling and enforcing model consistency/correctness during construction, fostering trust in and use of model transformations (e.g., by ensuring that the resulting models are consistent), developing efficient (automated, standardized, and reliable) domain-specific modeling tools, and dealing with large models continually make the need for more research evident. In this thesis, we contribute four automated, interactive techniques for ensuring the consistency of models and model transformation results during the construction process. The first two contributions construct consistent models of a given DSML in an automated and interactive way; the construction can start from a seed model that is potentially inconsistent. Since enhancing a set of transformations to satisfy a set of constraints is a tedious and error-prone task that requires deep knowledge of the theoretical foundations, we present the other two contributions. They ensure model consistency by enhancing the behavior of model transformations through automatically constructed application conditions, which control the applicability of the transformations so that a set of constraints is respected. Moreover, we provide several optimizing strategies. Specifically, we present the following. First, a model repair technique for repairing models in an automated and interactive way: our approach guides the modeler to repair the whole model by resolving all cardinality violations, yielding a desired, consistent model. Second, a model generation technique to efficiently generate large, consistent, and diverse models. Both techniques are DSML-agnostic, i.e., they can deal with any meta-model. We present meta-techniques to instantiate both approaches for a given DSML; namely, we develop meta-tools that automatically generate the corresponding DSML tools (model repair and generation) for a given meta-model. We show the soundness of our techniques and evaluate and discuss features such as scalability. Third, a tool based on a correct-by-construction technique for translating OCL constraints into semantically equivalent graph constraints and integrating them as guaranteeing application conditions into a transformation rule in a fully automated way. A constraint-guaranteeing application condition ensures that a rule applies successfully to a model if and only if the resulting model after the rule application satisfies the constraint. Fourth, an optimizing-by-construction technique for application conditions of transformation rules that need to be constraint-preserving. A constraint-preserving application condition ensures that a rule applies successfully to a consistent model (w.r.t. the constraint) if and only if the resulting model after the rule application still satisfies the constraint. We show the soundness of our techniques, develop them as ready-to-use tools, evaluate their efficiency (complexity and performance), and assess the overall approach. All four techniques are compliant with the Eclipse Modeling Framework (EMF), the de-facto realization of the OMG meta-modeling standard in practice; this ensures the interoperability and interchangeability of the techniques. Our techniques not only improve the quality of the modeled system but also increase software productivity by providing meta-tools that generate DSML tool support and automate these tasks.
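    The two kinds of application conditions can be summarized compactly. The following is our own paraphrase of the definitions above in standard double-pushout notation (not a quotation from the thesis), where applying rule r at match m to model G yields H, ac is the application condition, and c is the constraint:

```latex
% Applying rule r at match m in G yields H; ac is the application condition.
\begin{align*}
\text{constraint-guaranteeing:} &\quad m \models \mathrm{ac} \;\iff\; H \models c\\
\text{constraint-preserving:}   &\quad G \models c \;\Longrightarrow\;
    \bigl(\, m \models \mathrm{ac} \;\iff\; H \models c \,\bigr)
\end{align*}
```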

    Pattern-based refactoring in model-driven engineering

    Model-Driven Engineering (MDE) is a software engineering paradigm that uses models as first-class concepts from which validation, code, tests, and documentation are derived. This paradigm involves various artifacts such as models, meta-models, and model transformation programs. In an industrial context, these artifacts are increasingly complex; in particular, their maintenance is time- and resource-consuming. To reduce the complexity of these artifacts and the cost of their maintenance, many researchers have studied refactoring them to improve their quality. In this thesis, we study refactoring in MDE holistically, through its application to these different artifacts. First, we use specific design patterns, as a form of prior knowledge, applied to model transformations as a vehicle for refactoring. We begin with a phase that detects design patterns in different forms and at different levels of completeness. The detected occurrences form refactoring opportunities that are exploited to reach more desirable and/or more complete forms of these design patterns. When no prior knowledge such as design patterns is available, we propose an approach based on genetic programming that learns transformation rules capable of detecting refactoring opportunities and correcting them. As an alternative to prior knowledge, the approach uses examples of pairs of artifacts before and after refactoring to learn the refactoring rules. We illustrate this approach on model refactoring.
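    To make the example-based idea concrete, here is a heavily simplified sketch (our own illustration, not the thesis's implementation): artifacts are reduced to sets of named "smells", a rule is a detect/fix pair, and a genetic-programming loop scores candidate rules by how many before/after example pairs they explain:

```python
import random

# Illustrative sketch of learning refactoring rules from before/after pairs
# with genetic programming. Artifacts are reduced to sets of "smells" and a
# rule is a (detect, fix) pair; real model refactoring is far richer.

SMELLS = ["god_class", "dup_attrs", "deep_inherit", "unused_assoc"]
EXAMPLES = [  # (before, after) pairs: the refactoring removed some smells
    ({"god_class", "dup_attrs"}, {"dup_attrs"}),
    ({"god_class", "deep_inherit"}, {"deep_inherit"}),
]

def random_rule():
    return (random.choice(SMELLS), random.choice(SMELLS))

def apply_rule(rule, model):
    detect, fix = rule
    return model - {fix} if detect in model else model

def fitness(rule):
    # Count examples the rule explains: applying it to "before" yields "after".
    return sum(apply_rule(rule, before) == after for before, after in EXAMPLES)

def evolve(pop_size=20, generations=30):
    population = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Mutation: perturb one half of each surviving rule.
        children = [
            (random.choice(SMELLS), rule[1]) if random.random() < 0.5
            else (rule[0], random.choice(SMELLS))
            for rule in survivors
        ]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # expected to converge on ("god_class", "god_class")
```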

    Cascade: A meta-language for change, cause and effect

    Live programming brings code to life with immediate and continuous feedback. To enjoy its benefits, programmers need powerful languages and live programming environments for understanding the effects of coding actions and for developing running programs. Unfortunately, the enabling technology that powers such languages is missing. Change, a crucial enabler for explorative coding, omniscient debugging, and version control, is a potential solution. In this position paper, we argue that an explicit representation of change is instrumental for how these languages are built, and that cause-and-effect relationships are vital for more precise feedback. We aim to deliver generic solutions for creating such languages. We introduce Cascade, a meta-language and framework for expressing languages with the interface and feedback mechanisms that drive live programming. Our preliminary results show that Cascade is a promising approach that simplifies the development of language back-ends.
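    As a minimal sketch of what an explicit change representation with cause links could look like (our own illustration, not Cascade's actual API), each change records the change that triggered it, so an effect can be traced back to the coding action that caused it:

```python
from dataclasses import dataclass, field

# Minimal sketch of an explicit change representation with cause links
# (illustrative only; not Cascade's actual API).

@dataclass
class Change:
    target: str            # what was changed, e.g. a variable or AST node
    value: object          # the new value/state
    cause: "Change | None" = None  # the change that triggered this one

@dataclass
class History:
    changes: list = field(default_factory=list)

    def record(self, target, value, cause=None):
        change = Change(target, value, cause)
        self.changes.append(change)
        return change

    def why(self, change):
        """Trace an effect back through its causes (omniscient debugging)."""
        chain = []
        while change is not None:
            chain.append(change)
            change = change.cause
        return chain

history = History()
edit = history.record("x", 2)                   # a coding action
effect = history.record("y", 4, cause=edit)     # feedback derived from it
print([c.target for c in history.why(effect)])  # ['y', 'x']
```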

    Assessing and improving quality of QVTo model transformations

    We investigate quality improvement in QVT Operational Mappings (QVTo) model transformations, one of the languages defined in the OMG standard on model-to-model transformations. Two research questions are addressed. First, how can we assess the quality of QVTo model transformations? Second, how can we develop higher-quality QVTo transformations? To address the first question, we follow a bottom-up approach, starting with a broad exploratory study comprising QVTo expert interviews, a review of existing material, and introspection. We then formalize QVTo transformation quality into a QVTo quality model, which we validate through a survey of a broader group of QVTo developers. We find that although many quality properties recognized as important for QVTo have counterparts in general-purpose languages, a number of them are specific to QVTo or to model transformation languages. To address the second research question, we leverage the quality model to identify developer support tooling for QVTo. We then implement and evaluate one of these tools, a code test coverage tool; in designing it, we also identify code coverage criteria for QVTo model transformations. The primary contributions of this paper are a QVTo quality model relevant to QVTo practitioners and an open-source code coverage tool already usable by QVTo transformation developers. Secondary contributions are a bottom-up approach to building a quality model, a validation approach that leverages developer perceptions to evaluate quality properties, code test coverage criteria for QVTo, and numerous directions for future research and tooling related to QVTo quality.
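    As a hedged illustration of one plausible coverage criterion, mapping coverage (our own simplification, not the paper's actual tool or criteria), a transformation can be instrumented to record which mappings fire during a test run and to report the uncovered ones:

```python
# Sketch of test coverage for a model transformation (illustrative only):
# record which mappings fire during a test run and report the ratio of
# covered mappings, one plausible coverage criterion for QVTo.

class CoverageRecorder:
    def __init__(self, all_mappings):
        self.all_mappings = set(all_mappings)
        self.fired = set()

    def covered(self, name):
        """Decorator that marks a mapping as covered when it executes."""
        def wrap(mapping):
            def instrumented(*args, **kwargs):
                self.fired.add(name)
                return mapping(*args, **kwargs)
            return instrumented
        return wrap

    def report(self):
        missed = sorted(self.all_mappings - self.fired)
        ratio = len(self.fired) / len(self.all_mappings)
        return ratio, missed

recorder = CoverageRecorder(["class2table", "attr2column"])

@recorder.covered("class2table")
def class2table(cls):
    return {"table": cls["name"]}

@recorder.covered("attr2column")
def attr2column(attr):
    return {"column": attr["name"]}

class2table({"name": "Person"})  # the test exercises only one mapping
print(recorder.report())         # (0.5, ['attr2column'])
```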

    Managing Consistency of Business Process Models across Abstraction Levels

    Process models support the transition from business requirements to IT implementations. An organization that adopts process modeling often maintains several co-existing models of the same business process, targeting different abstraction levels and stakeholder perspectives. Maintaining consistency among these models has become a major challenge for such organizations. For instance, propagating changes requires identifying tacit correspondences among the models, which may exist only in the memories of their original creators or may be lost entirely. Although different tools target the specific needs of different roles, we lack appropriate support for checking whether related models maintained by different groups of specialists are still consistent after independent editing. As a result, typical consistency management tasks such as tracing, differencing, comparing, refactoring, merging, conformance checking, change notification, and versioning are frequently done manually, which is time-consuming and error-prone. This thesis presents the Shared Model, a framework designed to improve support for consistency management and impact analysis in process modeling. The framework was designed as the result of a comprehensive industrial study that elicited typical correspondence patterns between Business and IT process models and the meaning of consistency between them. It encompasses three major techniques and contributions: 1) matching heuristics to automatically discover complex correspondence patterns among the models and to maintain traceability among model parts (elements and fragments); 2) a generator of edit operations to compute the difference between process models; 3) a process model synchronizer capable of consistently propagating changes made to either model to its counterpart. We evaluated the Shared Model experimentally. The evaluation shows that the framework can consistently synchronize Business and IT views related by correspondence patterns after non-simultaneous independent editing.
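    A minimal sketch of the difference-and-propagation idea follows (our own simplification: correspondences are a plain ID map here, whereas the Shared Model discovers complex correspondence patterns among elements and fragments):

```python
# Sketch of computing edit operations between two versions of a process
# model and propagating them to a corresponding model (illustrative only).

def diff(old, new):
    """Return edit operations turning `old` into `new` (dicts keyed by ID)."""
    ops = []
    for node_id in new.keys() - old.keys():
        ops.append(("add", node_id, new[node_id]))
    for node_id in old.keys() - new.keys():
        ops.append(("delete", node_id, None))
    for node_id in old.keys() & new.keys():
        if old[node_id] != new[node_id]:
            ops.append(("update", node_id, new[node_id]))
    return ops

def propagate(ops, target, correspondence):
    """Apply edit operations to the counterpart model via a trace map."""
    for op, node_id, payload in ops:
        twin = correspondence.get(node_id, node_id)
        if op == "delete":
            target.pop(twin, None)
        else:
            target[twin] = payload

business_v1 = {"t1": "Receive order", "t2": "Check stock"}
business_v2 = {"t1": "Receive order", "t2": "Check inventory", "t3": "Ship"}
it_model = {"svc1": "Receive order", "svc2": "Check stock"}
trace = {"t1": "svc1", "t2": "svc2"}  # Business-to-IT correspondences

propagate(diff(business_v1, business_v2), it_model, trace)
print(it_model)  # svc2 renamed, new task added; unrelated parts untouched
```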

    Formal Foundations for Information-Preserving Model Synchronization Processes Based on Triple Graph Grammars

    Restoring consistency between different information-sharing artifacts after one of them has been changed is an important problem that arises in several areas of computer science. In this thesis, we provide a solution to the basic model synchronization problem: given a pair of such artifacts (models), one of which has been changed, restore their consistency. Triple graph grammars (TGGs) are an established and suitable formalism for addressing this and related problems. Being based on the algebraic theory of graph transformation and (double-)pushout rewriting, they are especially suited for developing solutions whose properties can be formally proven. Yet although TGG-based approaches are established, many of them do not deal satisfactorily with the problem of information loss. When one model is changed, a synchronization process may lose information that is only present in the second model, because such processes resort to restoring consistency by re-translating (large parts of) the updated model. We introduce a TGG-based approach that supports advanced features of TGGs (attributes and negative constraints), is comprehensively formalized and implemented, and is incremental in the sense that it drastically reduces information loss compared to former approaches. No TGG-based approach with these characteristics has been available so far. The central contribution of this thesis is to develop that approach formally and to prove its essential properties, namely correctness, completeness, and termination. The crucial new idea is the use of repair rules: special rules that directly propagate changes from one model to the other instead of resorting to re-translation. To be able to construct and apply these repair rules, we contribute more fundamentally to the theory of algebraic graph transformation. First, we develop a new kind of sequential rule composition: whereas the conventional composition of rules leads to rules that delete and re-create elements, we can compute rules that preserve such elements instead. Furthermore, the synchronization process we develop technically takes place in the category of partial triple graphs rather than that of ordinary triple graphs. Hence, we have to ensure that the elaborate theory of double-pushout rewriting still applies. We therefore develop a (category-theoretic) construction of new categories from given ones and show that (i) this construction preserves the axioms that are necessary to develop the theory of double-pushout rewriting and (ii) partial triple graphs can be constructed as such a category. Together, these two fundamental contributions enable us to develop our solution to the basic model synchronization problem in a fully formal manner and to prove its central properties.
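    Schematically, the incremental idea can be sketched as follows (our own caricature, not the thesis's formal construction): a change is propagated by a matching repair rule whenever possible, and re-translation, which loses target-only information, remains only a fallback:

```python
# Schematic sketch of incremental synchronization with repair rules
# (illustrative only). A repair rule propagates a change directly;
# re-translation is the information-losing fallback.

def rename_repair(change, target):
    """Repair rule: propagate a rename along the correspondence."""
    if change["kind"] != "rename":
        return False
    target[change["id"]]["name"] = change["new_name"]
    return True  # change propagated; rest of the target model untouched

REPAIR_RULES = [rename_repair]

def retranslate(source):
    # Re-derive the target from scratch: consistent, but any
    # target-only information (the layout below) is dropped.
    return {k: {"name": v["name"]} for k, v in source.items()}

def synchronize(change, source, target):
    if any(rule(change, target) for rule in REPAIR_RULES):
        return target            # incremental: information preserved
    return retranslate(source)   # fallback: consistent but lossy

source = {"n1": {"name": "Order"}}
target = {"n1": {"name": "Order", "layout": (10, 20)}}  # target-only info

change = {"kind": "rename", "id": "n1", "new_name": "PurchaseOrder"}
source["n1"]["name"] = "PurchaseOrder"
print(synchronize(change, source, target))
# {'n1': {'name': 'PurchaseOrder', 'layout': (10, 20)}}: layout preserved
```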
    • 

    corecore