    A UML/OCL framework for the analysis of graph transformation rules

    In this paper we present an approach for the analysis of graph transformation rules based on an intermediate OCL representation. We translate different rule semantics into OCL, together with the properties of interest (like rule applicability, conflicts or independence). The intermediate representation serves three purposes: (i) it allows the seamless integration of graph transformation rules with the MOF and OCL standards, and enables taking the meta-model and its OCL constraints (i.e. well-formedness rules) into account when verifying the correctness of the rules; (ii) it permits the interoperability of graph transformation concepts with a number of standards-based model-driven development tools; and (iii) it makes available a plethora of OCL tools to actually perform the rule analysis. This approach is especially useful for analysing the operational semantics of Domain Specific Visual Languages. We have automated these ideas by providing designers with tools for the graphical specification and analysis of graph transformation rules, including a back-annotation mechanism that presents the analysis results in terms of the original language notation.
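    The paper works at the level of MOF/OCL; purely as an illustration of the rule-applicability property it analyses, the following Python sketch checks whether a rule's left-hand side can be matched injectively in a host model while a well-formedness predicate holds. The graph encoding, names, and predicate are assumptions made for this sketch, not the paper's OCL translation.

        from itertools import permutations

        # Hypothetical, minimal graph representation: nodes carry a type label,
        # edges are (source, target) pairs.
        class Graph:
            def __init__(self, nodes, edges):
                self.nodes = nodes          # {node_id: type_label}
                self.edges = set(edges)     # {(src_id, tgt_id), ...}

        def find_match(lhs, host):
            """Return one injective, type- and edge-preserving match of the
            rule's left-hand side (lhs) into the host graph, or None."""
            lhs_ids = list(lhs.nodes)
            for image in permutations(host.nodes, len(lhs_ids)):
                mapping = dict(zip(lhs_ids, image))
                if all(lhs.nodes[n] == host.nodes[mapping[n]] for n in lhs_ids) and \
                   all((mapping[s], mapping[t]) in host.edges for s, t in lhs.edges):
                    return mapping
            return None

        def applicable(lhs, host, well_formed):
            """A rule is applicable if its LHS occurs in the host model and the
            match respects the meta-model's well-formedness predicate."""
            match = find_match(lhs, host)
            return match is not None and well_formed(host, match)

        # Usage: a rule whose LHS needs a Class connected to an Attribute.
        lhs = Graph({"c": "Class", "a": "Attribute"}, [("c", "a")])
        host = Graph({"1": "Class", "2": "Attribute", "3": "Class"}, [("1", "2")])
        print(applicable(lhs, host, lambda g, m: True))   # True: the LHS is present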

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object oriented paradigm. Deciding what data and behaviour to form into a class and where to draw the line between its public and private details can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late) and attempting to perform such analysis manually can also be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together – data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems which compares the tool results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
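    As a rough illustration of a metrics-driven heuristic for these two smells, the sketch below flags data classes and god classes from per-class metrics. The metric names follow common usage in the code-smell literature, but the thresholds and rules are assumptions made for illustration, not the plug-in's or Marinescu's actual detection strategies.

        from dataclasses import dataclass

        @dataclass
        class ClassMetrics:
            name: str
            wmc: int              # weighted methods per class (complexity)
            atfd: int             # accesses to foreign (other classes') data
            accessor_ratio: float # share of methods that are plain getters/setters

        def is_data_class(m: ClassMetrics) -> bool:
            # Little behaviour of its own: low complexity, mostly accessors.
            return m.wmc <= 5 and m.accessor_ratio >= 0.7

        def is_god_class(m: ClassMetrics) -> bool:
            # Domineering class: high complexity and heavy use of other classes' data.
            return m.wmc >= 47 and m.atfd >= 5

        candidates = [
            ClassMetrics("Customer", wmc=3, atfd=0, accessor_ratio=0.9),
            ClassMetrics("OrderManager", wmc=60, atfd=12, accessor_ratio=0.1),
        ]
        for c in candidates:
            print(c.name, "data class" if is_data_class(c) else
                          "god class" if is_god_class(c) else "ok")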

    A review of slicing techniques in software engineering

    A program slice is the part of a program that may affect the values computed at some point of interest during its execution. Such a point is known as the slicing criterion, and it is generally identified by a location in the program coupled with a subset of the program's variables. The process by which program slices are computed is called program slicing. Weiser gave the original definition of a program slice in 1979. Since this first definition, many ideas related to program slices have been formulated, along with numerous techniques to compute them. In the meantime, the distinction between static and dynamic slices was also made. Program slicing is now among the most useful techniques for extracting the elements of a program that are related to a particular computation. A large number of variants of program slicing have been analysed, along with algorithms to compute the slices. Model-based slicing splits large software architectures into smaller sub-models during the early stages of the SDLC. Software testing is regarded as an activity to evaluate the functionality and features of a system; it verifies whether the system meets its requirements. A common practice now is to extract sub-models from large models based upon the slicing criteria, and model-based slicing is used to extract the desired part of a model. This survey focuses on slicing techniques across numerous programming paradigms, such as web applications, object-oriented, and component-based software. Owing to the efforts of various researchers, the technique has been extended to numerous other areas, including program debugging, program integration and analysis, software testing and maintenance, reengineering, and reverse engineering. The survey also describes the role of model-based slicing and the various techniques used to compute slices.
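    As a minimal, concrete illustration of the idea (not of any particular technique from the survey), the following Python sketch computes a backward static slice over a hand-built dependence graph; the statements, variables, and dependences are invented for the example.

        # Toy program (invented for illustration):
        #   1: x = input()
        #   2: y = input()
        #   3: a = x + 1
        #   4: b = y * 2
        #   5: print(a)        <- slicing criterion: (a, 5)
        # Each statement maps to the statements it data-depends on.
        deps = {1: set(), 2: set(), 3: {1}, 4: {2}, 5: {3}}

        def backward_slice(criterion):
            """Collect every statement the criterion transitively depends on."""
            slice_, worklist = {criterion}, [criterion]
            while worklist:
                stmt = worklist.pop()
                for dep in deps[stmt]:
                    if dep not in slice_:
                        slice_.add(dep)
                        worklist.append(dep)
            return slice_

        print(sorted(backward_slice(5)))   # [1, 3, 5]: statements 2 and 4 fall outside the slice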

    Workshop proceedings of the 1st workshop on quality in modeling

    Quality assessment and assurance constitute an important part of software engineering. The issues of software quality management are widely researched and approached from multiple perspectives and viewpoints. The introduction of a new paradigm in software development – namely Model Driven Development (MDD) and its variations (e.g., MDA [Model Driven Architecture], MDE [Model Driven Engineering], MBD [Model Based Development], MIC [Model Integrated Computing]) – raises new challenges in software quality management, and as such should be given special attention. In particular, the issues of early quality assessment, based on models at a high abstraction level, and of building (or customizing existing) prediction models for software quality based on model metrics are of central importance for the software engineering community. The workshop is a continuation of a series of workshops on consistency that have taken place at successive annual UML conferences and, more recently, at MDA-FA. The idea behind this workshop is to extend the scope of interest and address a wide spectrum of problems related to MDD. It is also in line with the overall initiative of the shift from UML to MoDELS. The goal of this workshop is to gather researchers and practitioners interested in the emerging issues of quality in the context of MDD. The workshop is intended to provide a premier forum for discussions related to software quality and MDD. The aims of the workshop are: presenting ongoing research related to quality in modeling in the context of MDD, and defining and organizing the issues related to quality in MDD. The format of the workshop consists of two parts: presentation and discussion. The presentation part is aimed at reporting research results related to quality aspects in modeling. Seven papers were selected for presentation out of 16 submissions; the selected papers are included in these proceedings. The discussion part is intended to be a forum for the exchange of ideas related to understanding quality and approaching it in a systematic way.

    VMTL: a language for end-user model transformation

    Consistency-by-Construction Techniques for Software Models and Model Transformations

    A model is consistent with given specifications (specs) if and only if all the specifications hold for the model, i.e., all the specs are satisfied by the model. Constructing consistent models (e.g., programs or artifacts) is vital during software development, especially in Model-Driven Engineering (MDE), where models are employed throughout the life cycle of software development phases (analysis, design, implementation, and testing). Models are usually written using domain-specific modeling languages (DSMLs) and are specified to describe a domain problem or a system from different perspectives and at several levels of abstraction. If a model conforms to the definition of its DSML (usually given by a meta-model and integrity constraints), the model is consistent. Model transformations are an essential technology for manipulating models, including, e.g., refactoring and code generation in a (semi-)automated way. They are often expected to have a well-defined behavior in the sense that their resulting models are consistent with regard to a set of constraints. Inconsistent models may compromise the applicability of transformations, and the automation thus becomes untrustworthy and error-prone. The consistency of models and model transformation results contributes to the quality of the overall modeled system. Although MDE has significantly progressed and become an accepted best practice in many application domains such as automotive and aerospace, there are still several significant challenges that have to be tackled to realize the MDE vision in industry. Challenges such as handling and resolving inconsistent models (e.g., incomplete models), enabling and enforcing model consistency/correctness during construction, fostering trust in and use of model transformations (e.g., by ensuring that the resulting models are consistent), developing efficient (automated, standardized, and reliable) domain-specific modeling tools, and dealing with large models continually make the need for more research evident.
    In this thesis, we contribute four automated, interactive techniques for ensuring the consistency of models and model transformation results during the construction process. The first two contributions construct consistent models of a given DSML in an automated and interactive way; the construction can start from a seed model that is potentially inconsistent. Since enhancing a set of transformations to satisfy a set of constraints is a tedious and error-prone task that requires strong skills in the theoretical foundations, we present the other two contributions: they ensure model consistency by enhancing the behavior of model transformations through automatically constructed application conditions. The resulting application conditions control the applicability of the transformations so that a set of constraints is respected. Moreover, we provide several optimization strategies. Specifically, we present the following. First, a model repair technique for repairing models in an automated and interactive way; our approach guides the modeler to repair the whole model by resolving all cardinality violations and thereby yields a desired, consistent model. Second, a model generation technique to efficiently generate large, consistent, and diverse models. Both techniques are DSML-agnostic, i.e., they can deal with any meta-model. We present meta-techniques to instantiate both approaches for a given DSML; namely, we develop meta-tools that automatically generate the corresponding DSML tools (model repair and generation) for a given meta-model. We present the soundness of our techniques and evaluate and discuss features such as scalability. Third, a tool based on a correct-by-construction technique for translating OCL constraints into semantically equivalent graph constraints and integrating them as guaranteeing application conditions into a transformation rule in a fully automated way. A constraint-guaranteeing application condition ensures that a rule applies successfully to a model if and only if the resulting model after the rule application satisfies the constraint. Fourth, an optimizing-by-construction technique for application conditions of transformation rules that need to be constraint-preserving. A constraint-preserving application condition ensures that a rule applies successfully to a consistent model (w.r.t. the constraint) if and only if the resulting model after the rule application still satisfies the constraint. We show the soundness of our techniques, develop them as ready-to-use tools, evaluate the efficiency (complexity and performance) of the latter two contributions, and assess the overall approach as well. All four techniques are compliant with the Eclipse Modeling Framework (EMF), which is the realization of the OMG standard specification in practice; thus, the interoperability and interchangeability of the techniques are ensured. Our techniques not only improve the quality of the modeled system but also increase software productivity by providing meta-tools for generating the DSML tool support and by automating these tasks.
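    The thesis derives such application conditions statically from OCL/graph constraints; purely to illustrate the property that a constraint-guaranteeing application condition establishes, the following Python sketch checks the constraint dynamically on the would-be result before committing a rule application. The model, rule, and constraint here are invented toys, not the thesis' construction.

        import copy

        def apply_with_guarantee(model, rule, constraint):
            """Apply `rule` to `model` only if the *result* satisfies `constraint`.
            Returns the new model, or None when the rule is not applicable under
            the constraint-guaranteeing condition."""
            candidate = rule(copy.deepcopy(model))
            if candidate is not None and constraint(candidate):
                return candidate
            return None

        # Toy model: a class with a list of attribute names.
        model = {"class": "Person", "attributes": ["name"]}

        def add_attribute(m):                 # a toy "transformation rule"
            m["attributes"].append("name")    # (deliberately) introduces a duplicate
            return m

        def unique_attributes(m):             # a toy consistency constraint
            return len(m["attributes"]) == len(set(m["attributes"]))

        print(apply_with_guarantee(model, add_attribute, unique_attributes))  # None: application blocked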