
    A Framework for Datatype Transformation

    We study one dimension of program evolution, namely the evolution of the datatype declarations in a program. To this end, a suite of basic transformation operators is designed. We cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. Both the object programs that are subject to datatype transformations and the meta programs that encode datatype transformations are functional programs. Comment: Minor revision; now accepted at LDTA 200
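
    As a concrete illustration of the operator idea (a minimal, hypothetical Java sketch; the paper itself works in a functional setting and its actual operator suite differs), a datatype declaration can be represented as data and transformed by small, well-defined operators:

        // Hypothetical sketch: a datatype declaration modelled as data, plus one
        // structure-extending and one structure-preserving operator.
        import java.util.ArrayList;
        import java.util.List;

        public class DatatypeTransform {
            // Roughly: data Shape = Circle(double) | Rect(double, double)
            record Constructor(String name, List<String> fieldTypes) {}
            record Datatype(String name, List<Constructor> constructors) {}

            // Structure-extending adaptation: introduce a new constructor.
            static Datatype addConstructor(Datatype dt, Constructor c) {
                List<Constructor> cs = new ArrayList<>(dt.constructors());
                cs.add(c);
                return new Datatype(dt.name(), List.copyOf(cs));
            }

            // Structure-preserving refactoring: rename a constructor in the declaration
            // (a real operator would also rewrite all use sites in the object program).
            static Datatype renameConstructor(Datatype dt, String from, String to) {
                return new Datatype(dt.name(), dt.constructors().stream()
                    .map(c -> c.name().equals(from) ? new Constructor(to, c.fieldTypes()) : c)
                    .toList());
            }

            public static void main(String[] args) {
                Datatype shape = new Datatype("Shape", List.of(
                    new Constructor("Circle", List.of("double")),
                    new Constructor("Rect", List.of("double", "double"))));
                System.out.println(addConstructor(shape,
                    new Constructor("Triangle", List.of("double", "double", "double"))));
                System.out.println(renameConstructor(shape, "Rect", "Rectangle"));
            }
        }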

    Evaluating the performance of model transformation styles in Maude

    Rule-based programming has been shown to be very successful in many application areas. Two prominent examples are the specification of model transformations in model-driven development approaches and the definition of structured operational semantics of formal languages. General rewriting frameworks such as Maude are flexible enough to allow the programmer to adopt and mix various rule styles. The choice between styles can be biased by the programmer's background: for instance, experts in visual formalisms might prefer graph-rewriting styles, while experts in semantics might prefer structurally inductive rules. This paper evaluates the performance of different rule styles on a significant benchmark taken from the literature on model transformation. Our results show that, depending on the actual transformation being carried out, different rule styles can offer drastically different performance. We point out the situations in which each rule style is beneficial, offering a valuable set of hints for choosing one style over the others.
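
    To give a feel for how rule style shapes the code (a hypothetical Java sketch; the paper's benchmark is written in Maude), the same rewrite, collapsing double negation, can be written either as a structurally inductive traversal or as a generic step-and-repeat rewriting loop:

        // Hypothetical sketch: one rewrite rule, two encoding styles.
        import java.util.Optional;

        public class RuleStyles {
            sealed interface Term permits Lit, Neg {}
            record Lit(int value) implements Term {}
            record Neg(Term body) implements Term {}

            // Style 1: structurally inductive, the rule is woven into the recursion.
            static Term simplifyInductive(Term t) {
                if (t instanceof Neg n && n.body() instanceof Neg inner)
                    return simplifyInductive(inner.body());
                if (t instanceof Neg n)
                    return new Neg(simplifyInductive(n.body()));
                return t;
            }

            // Style 2: generic rewriting, find one redex, apply it, repeat to a fixpoint.
            static Optional<Term> step(Term t) {
                if (t instanceof Neg n && n.body() instanceof Neg inner)
                    return Optional.of(inner.body());
                if (t instanceof Neg n)
                    return step(n.body()).map(Neg::new);
                return Optional.empty();
            }

            static Term simplifyRewriting(Term t) {
                for (Optional<Term> next = step(t); next.isPresent(); next = step(t))
                    t = next.get();
                return t;
            }

            public static void main(String[] args) {
                Term t = new Neg(new Neg(new Neg(new Lit(7))));
                System.out.println(simplifyInductive(t));   // Neg[body=Lit[value=7]]
                System.out.println(simplifyRewriting(t));   // same normal form
            }
        }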

    On Preserving the Behavior in Software Refactoring: A Systematic Mapping Study

    Context: Refactoring is the art of modifying the design of a system without altering its behavior. The idea is to reorganize variables, classes and methods to facilitate their future adaptation and comprehension. As the concept of behavior preservation is fundamental for refactoring, several studies, using formal verification, language transformation and dynamic analysis, have been proposed to monitor the execution of refactoring operations and their impact on the program semantics. However, no existing study examines the available behavior preservation strategies for each refactoring operation. Objective: This paper identifies behavior preservation approaches in the research literature. Method: We conduct a systematic mapping study to capture all existing behavior preservation approaches, which we classify based on several criteria including their methodology, applicability, and degree of automation. Results: The results indicate that several behavior preservation approaches have been proposed in the literature. The approaches range from using formalisms and techniques, to developing automatic refactoring safety tools, to performing manual analysis of the source code. Conclusion: Our taxonomy reveals that behavior preservation is under-researched for some types of refactoring operations. Our classification also indicates that several possible strategies can be combined to better detect violations of the program semantics.

    Collaborative Verification-Driven Engineering of Hybrid Systems

    Hybrid systems, with both discrete and continuous dynamics, are an important model for real-world cyber-physical systems. The key challenge is to ensure their correct functioning w.r.t. safety requirements. Promising techniques to ensure safety seem to be model-driven engineering, to develop hybrid systems in a well-defined and traceable manner, and formal verification, to prove their correctness. Their combination forms the vision of verification-driven engineering. Hybrid systems are often rather complex in that they require expertise from many domains (e.g., robotics, control systems, computer science, software engineering, and mechanical engineering). Moreover, despite the remarkable progress in automating formal verification of hybrid systems, the construction of proofs for complex systems often requires nontrivial human guidance, since hybrid systems verification tools solve undecidable problems. It is thus not uncommon for development and verification teams to consist of many players with diverse expertise. This paper introduces a verification-driven engineering toolset that extends our previous work on hybrid and arithmetic verification with tools for (i) graphical (UML) and textual modeling of hybrid systems, (ii) exchanging and comparing models and proofs, and (iii) managing verification tasks. This toolset makes it easier to tackle large-scale verification tasks.

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is difficult and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together: data classes lack functionality that has typically been absorbed into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
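
    As a rough sketch of what a metric-based heuristic can look like (the metrics and thresholds below are invented for illustration; they are neither the paper's heuristics nor Marinescu's exact detection strategies):

        // Hypothetical metrics and thresholds, purely illustrative.
        import java.util.List;

        public class SmellHeuristics {
            // Per-class measurements a detector might collect from source code.
            record ClassMetrics(String name, int methods, int fields,
                                int accessorsOnly, int foreignDataAccesses, int wmc) {}

            // Data class: mostly fields plus getters/setters, little behaviour of its own.
            static boolean looksLikeDataClass(ClassMetrics m) {
                return m.fields() > 0
                    && m.accessorsOnly() >= m.methods() - 1   // almost every method is an accessor
                    && m.wmc() <= 10;                         // low overall complexity
            }

            // God class: high complexity plus heavy use of other classes' data.
            static boolean looksLikeGodClass(ClassMetrics m) {
                return m.wmc() >= 47                          // illustrative threshold only
                    && m.foreignDataAccesses() > 5;
            }

            public static void main(String[] args) {
                List<ClassMetrics> system = List.of(
                    new ClassMetrics("Order", 6, 5, 5, 0, 6),
                    new ClassMetrics("OrderManager", 40, 12, 3, 9, 80));
                for (ClassMetrics m : system) {
                    System.out.printf("%s: dataClass=%b godClass=%b%n",
                        m.name(), looksLikeDataClass(m), looksLikeGodClass(m));
                }
            }
        }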

    Towards the systematic construction of domain-specific transformation languages

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-09195-2-13. Proceedings of the 10th European Conference, ECMFA 2014, held as part of STAF 2014, York, UK, July 21-25, 2014. General-purpose transformation languages, like ATL or QVT, are the basis for model manipulation in Model-Driven Engineering (MDE). However, as MDE moves to more complex scenarios, there is a need for specialized transformation languages for activities like model merging, migration or aspect weaving, or for specific domains of wide use like UML. Such domain-specific transformation languages (DSTLs) encapsulate transformation knowledge within a language, enabling the reuse of recurrent solutions to transformation problems. Nowadays, many DSTLs are built in an ad-hoc manner, which requires a high development cost to achieve a full-featured implementation. Alternatively, they are realised by an embedding into general-purpose transformation or programming languages like ATL or Java. In this paper, we propose a framework for the systematic creation of DSTLs. First, we look into the characteristics of domain-specific transformation tools, deriving a categorization which is the basis of our framework. Then, we propose a domain-specific language to describe DSTLs, from which we derive a ready-to-run workbench that includes the abstract syntax, concrete syntax and translational semantics of the DSTL. This work has been funded by the Spanish Ministry of Economy and Competitiveness with project “Go Lite” (TIN2011-24139).
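
    For contrast with a generated DSTL workbench, the following hypothetical sketch (all names invented) shows the embedding alternative mentioned above: a migration-specific transformation expressed as a small internal DSL in plain Java:

        // Hypothetical sketch: a migration DSL embedded in a general-purpose language.
        import java.util.*;
        import java.util.function.Consumer;

        public class MigrationDsl {
            // A generic model element: a type name plus a property map.
            record Element(String type, Map<String, Object> props) {}

            static class Migration {
                private final Map<String, Consumer<Map<String, Object>>> edits = new LinkedHashMap<>();
                private final Map<String, String> renames = new HashMap<>();

                Migration renameType(String from, String to) { renames.put(from, to); return this; }
                Migration forType(String type, Consumer<Map<String, Object>> edit) { edits.put(type, edit); return this; }

                List<Element> run(List<Element> model) {
                    List<Element> out = new ArrayList<>();
                    for (Element e : model) {
                        Map<String, Object> props = new HashMap<>(e.props());
                        Consumer<Map<String, Object>> edit = edits.get(e.type());
                        if (edit != null) edit.accept(props);
                        out.add(new Element(renames.getOrDefault(e.type(), e.type()), props));
                    }
                    return out;
                }
            }

            public static void main(String[] args) {
                // Migrate v1 "State" elements to v2: rename the type, default a new property.
                Migration toV2 = new Migration()
                    .renameType("State", "SimpleState")
                    .forType("State", p -> p.putIfAbsent("entryAction", "none"));
                List<Element> v1 = List.of(new Element("State", Map.of("name", "Idle")));
                System.out.println(toV2.run(v1));
            }
        }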

    Extensible modeling with managed data in Java

    Many model-driven development (MDD) tools employ specialized frameworks and modeling languages, and assume that the semantics of models is provided by some form of code generation. As a result, programming against models is cumbersome and does not integrate well with ordinary programming languages and IDEs. In this paper we present MD4J, a modeling approach for embedding metamodels directly in Java, using plain interfaces and annotations. The semantics is provided by data managers that create and manipulate models. This architecture enables two kinds of extensibility. First, the data managers can be changed or extended to obtain a different base semantics of a model; this allows a kind of aspect-oriented programming. Second, the metamodels themselves can be extended with additional fields and methods to modularly enrich a modeling language. We illustrate our approach using the example of state machines, discuss the implementation, and evaluate it with two case studies: the execution of UML activity diagrams and an aspect-oriented refactoring of JHotDraw.
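
    A minimal sketch of the idea as described above (this is not MD4J's actual API; the names and the proxy-based data manager are assumptions): the metamodel is a plain annotated Java interface, and its base semantics comes from a data manager that creates instances:

        // Hypothetical sketch of managed data: interface metamodel + data manager.
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.reflect.Proxy;
        import java.util.HashMap;
        import java.util.Map;

        public class ManagedDataSketch {
            @Retention(RetentionPolicy.RUNTIME) @interface Key {}   // marks identity fields

            // Metamodel of a state-machine state, written as an ordinary interface.
            interface State {
                @Key String name();
                void name(String value);   // setter by convention: one argument
            }

            // A basic data manager: backs every managed object with a field map.
            static class BasicDataManager {
                @SuppressWarnings("unchecked")
                <T> T create(Class<T> schema) {
                    Map<String, Object> fields = new HashMap<>();
                    return (T) Proxy.newProxyInstance(
                        schema.getClassLoader(), new Class<?>[] { schema },
                        (proxy, method, args) -> {
                            if (args == null || args.length == 0)      // getter
                                return fields.get(method.getName());
                            fields.put(method.getName(), args[0]);     // setter
                            return null;
                        });
                }
            }

            public static void main(String[] args) {
                State s = new BasicDataManager().create(State.class);
                s.name("Idle");
                System.out.println(s.name());   // prints "Idle"
            }
        }

    Swapping in a different data manager (for example, one that logs every field update or enforces the @Key fields) changes the base semantics without touching the metamodel, which corresponds to the first kind of extensibility described above.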

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and establishing the basis of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where AAFM is being applied that define the current tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published and which authors and institutions are contributing to the field. We observed that the field's maturity is shown by the increase in the number of journal publications over the years, as well as by the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research. Funding: Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186
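
    To ground what an automated analysis operation means in the small (a toy, hypothetical example, not taken from the mapping study), the sketch below enumerates the valid configurations of a tiny feature model by brute force; real AAFM tooling typically delegates such questions to SAT, CSP or BDD solvers:

        // Hypothetical toy feature model and one classic analysis operation.
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class FeatureModelAnalysis {
            static final List<String> FEATURES = List.of("Root", "GPS", "Camera", "HiRes");

            // Toy constraints: Root is mandatory; HiRes requires Camera; GPS excludes HiRes.
            static boolean isValid(Set<String> cfg) {
                if (!cfg.contains("Root")) return false;
                if (cfg.contains("HiRes") && !cfg.contains("Camera")) return false;
                if (cfg.contains("GPS") && cfg.contains("HiRes")) return false;
                return true;
            }

            public static void main(String[] args) {
                int n = FEATURES.size(), valid = 0;
                for (int mask = 0; mask < (1 << n); mask++) {
                    Set<String> cfg = new HashSet<>();
                    for (int i = 0; i < n; i++)
                        if ((mask & (1 << i)) != 0) cfg.add(FEATURES.get(i));
                    if (isValid(cfg)) { valid++; System.out.println(cfg); }
                }
                // Counting products is one classic AAFM operation; others include
                // detecting dead features or checking that the model is not void.
                System.out.println("valid configurations: " + valid);
            }
        }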

    A Layered Reference Architecture for Metamodels to Tailor Quality Modeling and Analysis
