
    Well-Formed and Scalable Invasive Software Composition

    Software components provide essential means to structure and organize software effectively. However, the required component abstractions are frequently not available in a programming language or system, or cannot be combined adequately with each other. Invasive software composition (ISC) is a general approach to software composition that unifies component-like abstractions such as templates, aspects and macros. ISC is based on fragment composition and composes programs and other software artifacts at the level of syntax trees. To this end, a unifying fragment component model is related to the context-free grammar of a language to identify extension and variation points in syntax trees as well as valid component types. Fragment components can then be composed by transformations at the respective extension and variation points, so that the composition result is always valid with respect to the underlying context-free grammar. However, a composition result that conforms to the context-free grammar may still be incorrect: context-sensitive constraints such as type constraints may be violated, so that the program cannot be compiled or interpreted correctly. While a compiler can detect such errors after composition, it is difficult to relate them back to the original transformation step in the composition system, especially for complex compositions with several hundred such steps. To tackle this problem, this thesis proposes well-formed ISC, an extension of ISC that uses reference attribute grammars (RAGs) to specify fragment component models and fragment contracts to guard compositions with context-sensitive constraints. Additionally, well-formed ISC provides composition strategies as a means to configure composition algorithms and to handle interferences between composition steps. Developing ISC systems for rich languages such as programming languages is a complex undertaking. Composition-system developers need to supply or develop adequate language and parser specifications that can be processed by an ISC composition engine. Moreover, the specifications may need to be extended with rules for the intended composition abstractions. Current approaches to ISC require complete grammars to be able to compose fragments in the respective languages; hence, the specifications need to be developed exhaustively before any component model can be supplied. To tackle this problem, this thesis introduces scalable ISC, a variant of ISC that uses island component models to define component models for partially specified languages while still supporting the whole language. Additionally, a scalable workflow for agile composition-system development is proposed that supports developing ISC systems in small increments using modular extensions. All theoretical concepts introduced in this thesis are implemented in the Skeletons and Application Templates framework SkAT, which supports “classic”, well-formed and scalable ISC by leveraging RAGs as its main specification and implementation language. Moreover, several composition systems based on SkAT are discussed, e.g., a well-formed composition system for Java and a C-preprocessor-like macro language. In turn, these composition systems are used as composers in several example applications such as a library of parallel algorithmic skeletons.
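
    A minimal sketch can make the fragment-composition idea concrete. The Java sketch below is purely illustrative and does not mirror the SkAT API: all class and method names are hypothetical, and a string with <<name>> slot markers stands in for a syntax tree. A fragment declares named slots (its variation points), another fragment is bound into a slot, and a fragment contract, here a simple predicate, guards the composition with a context-sensitive check before the fragments are merged.

    import java.util.*;

    /** A syntax-tree fragment, represented here as text with <<name>> slot markers. */
    class Fragment {
        final String syntax;
        final Set<String> slots = new HashSet<>();

        Fragment(String syntax) {
            this.syntax = syntax;
            var m = java.util.regex.Pattern.compile("<<(\\w+)>>").matcher(syntax);
            while (m.find()) slots.add(m.group(1));   // collect declared variation points
        }

        /** One composition step: bind a fragment into a slot, guarded by a contract. */
        Fragment bind(String slot, Fragment value, FragmentContract contract) {
            if (!slots.contains(slot))
                throw new IllegalArgumentException("no such variation point: " + slot);
            if (!contract.holds(this, slot, value))   // context-sensitive check before composing
                throw new IllegalStateException("fragment contract violated at slot " + slot);
            return new Fragment(syntax.replace("<<" + slot + ">>", value.syntax));
        }
    }

    /** A fragment contract: a context-sensitive constraint that guards a composition. */
    interface FragmentContract {
        boolean holds(Fragment target, String slot, Fragment value);
    }

    public class IscSketch {
        public static void main(String[] args) {
            Fragment methodTemplate = new Fragment("int max(int a, int b) { <<body>> }");
            Fragment body = new Fragment("return a > b ? a : b;");

            // Example contract: the fragment bound into a method body must return a value.
            FragmentContract returnsValue = (target, slot, value) -> value.syntax.contains("return");

            Fragment composed = methodTemplate.bind("body", body, returnsValue);
            System.out.println(composed.syntax);      // int max(int a, int b) { return a > b ? a : b; }
        }
    }

    In well-formed ISC the constraint side is specified with reference attribute grammars rather than an ad-hoc predicate, but the control flow is the same: locate the composition point, check the constraint, then transform.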

    A graph-based aspect interference detection approach for UML-based aspect-oriented models

    Aspect-Oriented Modeling (AOM) techniques facilitate the separate modeling of concerns and allow for a more flexible composition of these than traditional modeling techniques. While this improves the understandability of each submodel, automated tool support is required in order to reason about the behavior of the composed system and to detect conflicts among submodels. Current techniques for conflict detection among aspects generally have at least one of the following weaknesses: they require the abstract semantics to be modeled manually for each system, or they derive the system semantics from code, assuming one specific aspect-oriented language. Defining an extra semantics model for verification bears the risk of inconsistencies between the actual and the verified design; verifying only at the implementation level hinders fixing errors in earlier phases. We propose a technique for fully automatic detection of conflicts between aspects at the model level; more specifically, our approach works on UML models with an extension for modeling pointcuts and advice. As a back-end we use a graph-based model checker, for which we have defined an operational semantics of UML diagrams, pointcuts and advice. In order to simulate the system, we automatically derive a graph model from the diagrams. The result is another graph, which represents all possible program executions and which can be verified against a declarative specification of invariants. To demonstrate our approach, we discuss a UML-based AOM model of the "Crisis Management System" and a possible design and evolution scenario. The complexity of the system makes conflicts among composed aspects hard to detect: already in the case of two simulated aspects, the state space contains 623 different states and 9 different execution paths. Nevertheless, if the right pruning methods are used, the state space only grows linearly with the number of aspects; therefore, the automatic analysis scales.
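
    The core loop of such an analysis, exploring all interleavings of advice applications and checking an invariant on every reachable state, can be sketched independently of the graph-based machinery. The Java sketch below is a hypothetical simplification: states are small records rather than graphs derived from UML diagrams, advice are plain functions on states, and the invariant is a simple predicate. It only illustrates the explore-and-check shape of the approach, not the actual tooling used in the paper.

    import java.util.*;
    import java.util.function.*;

    public class InterferenceSketch {

        /** Abstract system state at a single "send" join point. */
        record State(boolean encrypted, String loggedPayload) { }

        /** Explore all states reachable by applying the advice in any order and
         *  collect those that violate the invariant. */
        static Set<State> explore(State initial,
                                  List<UnaryOperator<State>> advice,
                                  Predicate<State> invariant) {
            Set<State> seen = new HashSet<>();
            Deque<State> frontier = new ArrayDeque<>(List.of(initial));
            Set<State> violations = new LinkedHashSet<>();
            while (!frontier.isEmpty()) {
                State s = frontier.pop();
                if (!seen.add(s)) continue;                 // prune revisited states
                if (!invariant.test(s)) violations.add(s);
                for (UnaryOperator<State> a : advice) {     // try every advice in every state
                    State next = a.apply(s);
                    if (!next.equals(s)) frontier.push(next);
                }
            }
            return violations;
        }

        public static void main(String[] args) {
            // Aspect 1: encrypt the payload before sending (idempotent).
            UnaryOperator<State> encrypt = s -> new State(true, s.loggedPayload());
            // Aspect 2: log the payload as it currently is (logs at most once).
            UnaryOperator<State> log = s ->
                s.loggedPayload() == null
                    ? new State(s.encrypted(), s.encrypted() ? "encrypted" : "plain")
                    : s;

            // Invariant: plain-text payloads must never end up in the log.
            Predicate<State> noPlainInLog = s -> !"plain".equals(s.loggedPayload());

            Set<State> bad = explore(new State(false, null),
                                     List.of(encrypt, log), noPlainInLog);
            System.out.println("Invariant violations: " + bad);
        }
    }

    With the two advice above, the interleaving in which logging runs before encryption reaches a state where the plain payload has been logged, which is exactly the kind of interference the invariant check reports.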

    Incorporating Security Behaviour into Business Models Using a Model Driven Approach

    There has, in recent years, been growing interest in Model Driven Engineering (MDE), in which models are the primary design artifacts and transformations are applied to these models to generate refinements leading to usable implementations over specific platforms. There is also interest in factoring out non-functional aspects, such as security, to provide reusable solutions applicable to many different applications. This paper brings these two approaches together, investigating, in particular, the way behaviour from the different sources can be combined and integrated into a single design model. Doing so involves transformations that weave together the constraints from the various aspects and are, as a result, more complex to specify than the linear pipelines of transformations used in most MDE work to date. The approach taken here uses an aspect model as a template for refining particular patterns in the business model, and the transformations are expressed as graph rewriting rules for both the static and the behavioural elements of the models.
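
    To convey how such a weaving transformation looks when phrased as a graph rewrite, the sketch below matches a pattern in a toy business-model graph (service operations that are not yet guarded) and rewrites each match by inserting an authorization check. The graph encoding, node labels and the rule itself are hypothetical and only illustrate the match-then-rewrite shape; the paper's rules operate on UML models for both static and behavioural elements.

    import java.util.*;

    public class WeaveSketch {

        /** A tiny labelled graph: nodes carry labels, edges are label-annotated pairs. */
        static class Graph {
            final Map<String, String> nodeLabels = new LinkedHashMap<>();   // id -> label
            final List<String[]> edges = new ArrayList<>();                 // {src, label, dst}

            void addNode(String id, String label) { nodeLabels.put(id, label); }
            void addEdge(String src, String label, String dst) { edges.add(new String[]{src, label, dst}); }

            boolean hasIncoming(String dst, String edgeLabel) {
                return edges.stream().anyMatch(e -> e[2].equals(dst) && e[1].equals(edgeLabel));
            }
        }

        /** Rewrite rule: for each node labelled "operation" without an incoming
         *  "guardedBy" edge, add an "authCheck" node and a "guardedBy" edge. */
        static void weaveAuthorization(Graph g) {
            List<String> matches = new ArrayList<>();
            for (var e : g.nodeLabels.entrySet()) {                 // pattern matching (left-hand side)
                if (e.getValue().equals("operation") && !g.hasIncoming(e.getKey(), "guardedBy"))
                    matches.add(e.getKey());
            }
            for (String op : matches) {                             // rewriting (right-hand side)
                String check = "auth_" + op;
                g.addNode(check, "authCheck");
                g.addEdge(check, "guardedBy", op);
            }
        }

        public static void main(String[] args) {
            Graph businessModel = new Graph();
            businessModel.addNode("placeOrder", "operation");
            businessModel.addNode("cancelOrder", "operation");
            weaveAuthorization(businessModel);
            businessModel.edges.forEach(e -> System.out.println(e[0] + " --" + e[1] + "--> " + e[2]));
        }
    }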

    3rd International Software Language Engineering Conference (SLE): pre-proceedings, October 12-13, 2010, Eindhoven, the Netherlands

    We are pleased to present the proceedings of the Third International Conference on Software Language Engineering (SLE 2010). The conference will be held in Eindhoven, the Netherlands, during October 12-13, 2010, and will be co-located with the Ninth International Conference on Generative Programming and Component Engineering (GPCE'10) and the Workshop on Feature-Oriented Software Development (FOSD). An important goal of SLE is to integrate the different sub-communities of the software-language-engineering community to foster cross-fertilization and strengthen research overall. The Doctoral Symposium at SLE 2010 contributes towards these goals by providing a forum for both early- and late-stage PhD students to present their research and get detailed feedback and advice from other researchers. The SLE conference series is devoted to a wide range of topics related to artificial languages in software engineering. SLE is an international research forum that brings together researchers and practitioners from both industry and academia to expand the frontiers of software language engineering. SLE's foremost mission is to encourage and organize communication between communities that have traditionally looked at software languages from different, more specialized, and yet complementary perspectives. SLE emphasizes the fundamental notion of languages as opposed to any realization in specific technical spaces. In this context, the term "software language" comprises all sorts of artificial languages used in software development, including general-purpose programming languages, domain-specific languages, modeling and meta-modeling languages, data models, and ontologies. Software language engineering is the application of a systematic, disciplined, quantifiable approach to the development, use, and maintenance of these languages. The SLE conference is concerned with all phases of the lifecycle of software languages; these include the design, implementation, documentation, testing, deployment, evolution, recovery, and retirement of languages. Of special interest are tools, techniques, methods, and formalisms that support these activities. In particular, tools are often based on, or automatically generated from, a formal description of the language. Hence, the treatment of language descriptions as software artifacts, akin to programs, is of particular interest, while noting the special status of language descriptions and the tailored engineering principles and methods for modularization, refactoring, refinement, composition, versioning, co-evolution, and analysis that can be applied to them. The response to the call for papers for SLE 2010 was very enthusiastic. We received 79 full submissions from 108 initial abstract submissions. From these submissions, the Program Committee (PC) selected 25 papers: 17 full papers, five short papers, and two tool demonstration papers, resulting in an acceptance rate of 32%. To ensure the quality of the accepted papers, each submitted paper was reviewed by at least three PC members. Each paper was discussed in detail during the electronic PC meeting. A summary of this discussion was prepared by members of the PC and provided to the authors along with the reviews.

    Towards the flexible reuse of model transformations: A formal approach based on Graph Transformation

    This is the author’s version of a work that was accepted for publication in the Journal of Logical and Algebraic Methods in Programming. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Journal of Logical and Algebraic Methods in Programming 83.5-6 (2014), DOI: 10.1016/j.jlamp.2014.08.005.

    This special issue of the Journal of Logical and Algebraic Methods in Programming (JLAMP) includes full revised versions of selected papers that were presented at the 24th Nordic Workshop on Programming Theory (NWPT 2012). The workshop took place in Bergen, Norway, during 31 October–2 November 2012 and was organized by the Department of Informatics, University of Bergen, and the Bergen University College.

    Model transformations are the heart and soul of Model Driven Engineering (MDE). However, in order to increase the adoption of MDE by industry, techniques for developing model transformations in the large and for raising the quality and productivity of their construction, such as reusability, are still needed. In previous work, we developed a reuse approach for graph transformations based on the definition of concepts, which gather the structural requirements that meta-models must satisfy to qualify for the transformations. Reusable transformations are typed by concepts, becoming transformation templates. Transformation templates are instantiated by binding the concept to a concrete meta-model, inducing a retyping of the transformation for the given meta-model. This paper extends the approach by allowing heterogeneities between the concept and the meta-model, thus increasing the reuse opportunities of transformation templates. Heterogeneities are resolved by using algebraic adapters, which induce both a retyping and an adaptation of the transformation. As an alternative, the adapters can also be employed to induce an adaptation of the meta-model, and in this work we show the conditions under which both approaches to transformation reuse are equivalent.

    We thank the referees for their detailed comments, which helped to greatly improve the paper. This work has been supported by the Spanish Ministry of Economy and Competitiveness with project Go-Lite (TIN2011-24139).
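
    The notions of concept, transformation template, binding and adapter have a rough analogue in everyday generic programming, which the following Java sketch uses purely as an illustration; the paper's formalization is in terms of graph transformation and algebraic adapters, and none of the names below come from it. A template is written once against a concept, and binding it to a concrete meta-model is an adapter that resolves the heterogeneity between the two.

    import java.util.*;

    public class TemplateReuseSketch {

        /** Concept: the structural requirements a meta-model must offer. */
        interface NamedNodeConcept {
            List<String> nodes();              // all node names
            List<String> childrenOf(String n); // containment relation
        }

        /** Transformation template: written once, against the concept only.
         *  Here: flatten the containment hierarchy into "parent.child" names. */
        static List<String> flattenNames(NamedNodeConcept model) {
            List<String> result = new ArrayList<>();
            for (String n : model.nodes())
                for (String c : model.childrenOf(n))
                    result.add(n + "." + c);
            return result;
        }

        /** A concrete meta-model whose structure does not literally match the concept. */
        static class ClassDiagram {
            final Map<String, List<String>> attributesByClass = new LinkedHashMap<>();
            void addClass(String name, String... attrs) {
                attributesByClass.put(name, List.of(attrs));
            }
        }

        /** Adapter (the binding): resolves the heterogeneity between the concept and
         *  the ClassDiagram meta-model, inducing a "retyping" of the template. */
        record ClassDiagramBinding(ClassDiagram cd) implements NamedNodeConcept {
            public List<String> nodes()              { return new ArrayList<>(cd.attributesByClass.keySet()); }
            public List<String> childrenOf(String n) { return cd.attributesByClass.getOrDefault(n, List.of()); }
        }

        public static void main(String[] args) {
            ClassDiagram cd = new ClassDiagram();
            cd.addClass("Order", "id", "total");
            cd.addClass("Customer", "name");
            // Instantiate the template for this meta-model via the binding.
            System.out.println(flattenNames(new ClassDiagramBinding(cd)));  // [Order.id, Order.total, Customer.name]
        }
    }

    Adapting the meta-model instead of the transformation, the alternative discussed in the paper, would correspond to converting the ClassDiagram into a structure that satisfies the concept directly; the conditions under which the two options are equivalent are what the paper establishes formally.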