
    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one that is not. This decision is difficult and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. Such encapsulation problems are often hard to identify within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error-prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together – data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, that automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open-source systems, which compares the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
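Detection strategies of the kind compared in this study combine a handful of class-level metrics against thresholds. A minimal sketch in Python follows; the metric names (WMC, ATFD, TCC) follow Marinescu's terminology, but the thresholds and the accessor_ratio heuristic are illustrative assumptions, not values taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    """Per-class metrics; names follow Marinescu's detection strategies."""
    wmc: int    # weighted methods per class (complexity)
    atfd: int   # accesses to foreign (other classes') data
    tcc: float  # tight class cohesion, in [0, 1]
    accessor_ratio: float  # fraction of methods that are plain getters/setters

# Thresholds below are illustrative, not the ones used in the paper.
def is_god_class(m: ClassMetrics) -> bool:
    # A god class pulls in foreign data, is complex, and poorly cohesive.
    return m.atfd > 5 and m.wmc >= 47 and m.tcc < 1 / 3

def is_data_class(m: ClassMetrics) -> bool:
    # A data class exposes state through accessors and offers little behaviour.
    return m.accessor_ratio > 0.7 and m.wmc < 10

god = ClassMetrics(wmc=60, atfd=12, tcc=0.1, accessor_ratio=0.2)
data = ClassMetrics(wmc=4, atfd=0, tcc=0.5, accessor_ratio=0.9)
print(is_god_class(god), is_data_class(data))  # True True
```

The two predicates illustrate why the classes tend to co-occur: the behaviour missing from a data class is exactly what inflates a god class's WMC and ATFD.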

    Implicit Incremental Model Analyses and Transformations

    When models of a system change, analyses based on them have to be reevaluated for the results to stay meaningful. In many cases, the time to get updated analysis results is critical. This thesis proposes multiple, combinable approaches and a new formalism based on category theory for implicitly incremental model analyses and transformations. The advantages of the implementation are validated using seven case studies, partially drawn from the Transformation Tool Contest (TTC).

    Multi-model Consistency through Transitive Combination of Binary Transformations

    Software systems are often described by a multitude of models, each of which captures different system properties. These models can contain shared information, which leads to redundant descriptions and dependencies between the models. For the system description to be correct, all shared information must be described consistently across models. The evolution of one model can lead to inconsistencies with other models of the same system. It is therefore important to apply a consistency restoration mechanism after changes have been made. Manual consistency restoration is error-prone and time-consuming, so automated consistency restoration is necessary. Many existing approaches use binary transformations to restore consistency between two models, but systems are generally described by more than two models. To achieve consistency preservation for multiple models with binary transformations, these must be combined through transitive execution. In this master's thesis, we investigate the transitive combination of binary transformations and the problems that come with it. We develop a catalogue of six failure potentials that can lead to consistency errors. Knowledge of these failure potentials can inform transformation developers about possible problems when combining transformations. One of the failure potentials arises as a consequence of the topology of the transformation network and the model types used, and can only be avoided by changing the topology. Another failure potential arises when the combined transformations attempt to fulfil mutually contradictory consistency rules. This can only be resolved by adapting the consistency rules.
    Both failure potentials are case-dependent and cannot be resolved without knowing which transformations are combined. In addition, two implementation patterns were designed to prevent two further failure potentials. They can be applied to the individual transformation definitions, independently of which transformations are eventually combined. For the two remaining failure potentials, no general solutions have yet been found. We evaluate the results with a case study consisting of two independently developed binary transformations between a component-based software architecture model, a UML class diagram, and the corresponding Java implementation. All errors found could be assigned to one of the failure potentials, which suggests the completeness of the failure catalogue. The developed implementation patterns resolved all errors assigned to the failure potential they were designed for, which accounted for 70% of all errors found. This shows that the implementation patterns are indeed applicable and can prevent errors.
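The transitive execution studied in the thesis can be illustrated with a toy chain of two binary transformations over three artifacts; all model shapes and function names below are invented for illustration, not taken from the thesis:

```python
# Toy setup: three models share a "name" property. Two binary transformations
# keep (arch, uml) and (uml, java) consistent; transitive execution chains them.

def sync_arch_to_uml(arch: dict, uml: dict) -> None:
    uml["class_name"] = arch["component_name"]

def sync_uml_to_java(uml: dict, java: dict) -> None:
    java["type_name"] = uml["class_name"]

def propagate(arch: dict, uml: dict, java: dict) -> None:
    # Transitive combination: arch -> uml, then uml -> java. There is no
    # direct arch <-> java transformation, so consistency between those two
    # models depends entirely on the chain executing in the right order.
    sync_arch_to_uml(arch, uml)
    sync_uml_to_java(uml, java)

arch = {"component_name": "Billing"}
uml = {"class_name": "Invoice"}
java = {"type_name": "Invoice"}
propagate(arch, uml, java)
print(uml["class_name"], java["type_name"])  # Billing Billing
```

Even this tiny chain hints at the failure potentials the catalogue covers: reordering the two sync calls, or adding a third transformation with a contradictory rule, would leave the models mutually inconsistent.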

    Refactoring OCL annotated UML class diagrams

    Refactoring of UML class diagrams is an emerging research topic, heavily inspired by the refactoring of program code written in object-oriented implementation languages. Current class diagram refactoring techniques concentrate on the diagrammatic part but neglect OCL constraints that might become syntactically incorrect when the underlying class diagram changes. This paper formalizes the most important refactoring rules for class diagrams and classifies them with respect to their impact on attached OCL constraints. For refactoring rules that have an impact on OCL constraints, we formalize the necessary changes to the attached constraints. Our refactoring rules are specified in a graph-grammar-inspired formalism and have been implemented as QVT transformation rules. We finally discuss the problem of syntax preservation for our refactoring rules and show, by using the KeY-system, how it can be resolved.
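The core idea, that a diagram refactoring must co-evolve the attached OCL constraints, can be sketched as follows. The model representation and the regex-based rewrite are deliberate simplifications (real tools operate on the OCL abstract syntax tree, not on constraint strings):

```python
import re

def rename_attribute(model: dict, cls: str, old: str, new: str) -> None:
    """Rename an attribute and co-evolve the OCL constraints attached to cls."""
    attrs = model["classes"][cls]["attributes"]
    attrs[attrs.index(old)] = new
    # Without this step the constraints would become syntactically incorrect:
    # they would still reference the old attribute name.
    pattern = re.compile(rf"\bself\.{re.escape(old)}\b")
    constraints = model["classes"][cls]["ocl"]
    model["classes"][cls]["ocl"] = [pattern.sub(f"self.{new}", c) for c in constraints]

model = {"classes": {"Account": {
    "attributes": ["balance"],
    "ocl": ["inv: self.balance >= 0"],
}}}
rename_attribute(model, "Account", "balance", "amount")
print(model["classes"]["Account"]["ocl"][0])  # inv: self.amount >= 0
```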

    Handling High-Level Model Changes Using Search Based Software Engineering

    Model-Driven Engineering (MDE) considers models as first-class artifacts during the software lifecycle. The number of available tools, techniques, and approaches for MDE is increasing as its use gains traction in driving quality and controlling cost in the evolution of large software systems. Software models, defined as code abstractions, are iteratively refined, restructured, and evolved, for many reasons such as fixing defects in design, reflecting changes in requirements, and modifying a design to enhance existing features. In this work, we focus on four main problems related to the evolution of software models: 1) the detection of applied model changes, 2) the merging of parallel evolved models, 3) the detection of design defects in the merged model, and 4) the recommendation of new changes to fix defects in software models. For the first contribution, an a-posteriori multi-objective change detection approach is proposed for evolved models. The changes are expressed in terms of atomic and composite refactoring operations. The majority of existing approaches detect atomic changes but do not adequately address composite changes, which mask atomic operations in intermediate models. For the second contribution, several approaches exist to construct a merged model by incorporating all non-conflicting operations of the evolved models. Conflicts arise when the application of one operation disables the applicability of another. The essence of the problem is to identify and prioritize conflicting operations based on importance and context – a gap in existing approaches. This work proposes a multi-objective formulation of model merging that aims to maximize the number of successfully applied merged operations. For the third and fourth contributions, the majority of existing work focuses on refactoring at the source-code level and does not exploit the benefits of software design optimization at the model level.
    However, refactoring at the model level is inherently more challenging due to the difficulty of assessing the potential impact on structural and behavioral features of the software system. This requires analysis of class and activity diagrams to appraise overall system quality, feasibility, and inter-diagram consistency. This work focuses on designing, implementing, and evaluating a multi-objective refactoring framework for the detection and fixing of design defects in software models.
    Ph.D. dissertation, Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136077/1/Usman Mansoor Final.pdf
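The merging problem described above (apply as many non-conflicting operations as possible, prioritize the rest) can be approximated with a greedy sketch; the conflict and priority functions here are invented placeholders, not the dissertation's multi-objective formulation:

```python
def merge(ops_a: list, ops_b: list, conflicts, priority) -> list:
    """Merge two parallel change sequences, keeping higher-priority
    operations when two operations conflict."""
    merged = list(ops_a)
    for op in ops_b:
        clash = [o for o in merged if conflicts(o, op)]
        if not clash:
            merged.append(op)  # non-conflicting: always applied
        elif all(priority(op) > priority(o) for o in clash):
            # op wins: drop the conflicting operations it displaces
            merged = [o for o in merged if not conflicts(o, op)] + [op]
    return merged

# Both sequences rename the same class, so the renames conflict.
ops_a = [("rename", "c1", "Order")]
ops_b = [("rename", "c1", "PurchaseOrder"), ("add_attr", "c1", "date")]
conflicts = lambda x, y: x[0] == y[0] == "rename" and x[1] == y[1]
priority = lambda op: len(op[2])  # toy priority: longer name wins
print(merge(ops_a, ops_b, conflicts, priority))
# [('rename', 'c1', 'PurchaseOrder'), ('add_attr', 'c1', 'date')]
```

A true multi-objective formulation would search over subsets of operations rather than resolve conflicts greedily; the sketch only shows the shape of the problem.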

    Evaluating Model Differencing for the Consistency Preservation of State-based Views

    While developers and users of modern software systems usually only need to interact with a specific part of the system at a time, they are hindered by the ever-increasing complexity of the entire system. Views are projections of underlying models and can be employed to abstract from that complexity. When a view is modified, the changes must be propagated back into the underlying model without overriding simultaneous modifications. Hence, the view needs to provide a fine-grained sequence of changes to update the model minimally invasively. Such fine-grained changes are often unavailable for views that integrate with existing workflows and tools. To this end, model differencing approaches can be leveraged to compare two states of a view and derive an estimated change sequence. However, these model differencing approaches are not intended to operate with views, as their correctness is judged solely by comparing the input models. For views, the changes are derived from the view states, but the correctness depends on the underlying model. This work introduces a refined notion of correctness for change sequences in the context of model-view consistency. Furthermore, we evaluate state-of-the-art model differencing regarding model-view consistency. Our results show that model differencing largely performs very well. However, incorrect change sequences were derived for two common refactoring operation types, leading to an incorrect model state. These types can be easily reproduced and are likely to occur in practice. By considering our change sequence properties in the view type design, incorrect change sequences can be detected and semi-automatically repaired to prevent such incorrect model states.
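A state-based differencing step of the kind evaluated here can be sketched as follows. Matching elements by a stable id is an assumption of this sketch; real differencers must guess correspondences with similarity heuristics, which is precisely where incorrect change sequences for refactorings (e.g. a combined move and rename) arise:

```python
def diff(old: dict, new: dict) -> list:
    """Compare two snapshots of a view (element id -> properties) and
    estimate a fine-grained change sequence."""
    changes = []
    for eid in old.keys() - new.keys():
        changes.append(("delete", eid))
    for eid in new.keys() - old.keys():
        changes.append(("create", eid, new[eid]))
    for eid in old.keys() & new.keys():
        if old[eid] != new[eid]:
            changes.append(("update", eid, new[eid]))
    return sorted(changes)

old = {"c1": {"name": "Order"}, "c2": {"name": "Item"}}
new = {"c1": {"name": "PurchaseOrder"}, "c3": {"name": "Customer"}}
print(diff(old, new))
```

Note that if "c2" had actually been renamed to "c3", the estimated sequence (delete plus create) would lose the element's identity, and propagating it would produce an incorrect model state: the failure mode the evaluation targets.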

    Model Transformation: The Heart and Soul of Model-Driven Software Development

    The motivation behind model-driven software development is to move the focus of work from programming to solution modeling. The model-driven approach has the potential to increase development productivity and quality by describing important aspects of a solution with more human-friendly abstractions and by generating common application fragments with templates. For this vision to become reality, software development tools need to automate the many tasks of model construction and transformation, including the construction and transformation of models that can be round-trip engineered into code. In this article, we briefly examine different approaches to model transformation and offer recommendations on the desirable characteristics of a language for describing model transformations. In doing so, we hope to offer a measuring stick for judging the quality of future model transformation technologies.

    Designing Round-Trip Systems by Change Propagation and Model Partitioning

    Software development processes incorporate a variety of different artifacts (e.g., source code, models, and documentation). For multiple reasons, the data contained in these artifacts exposes some degree of redundancy. Ensuring global consistency across artifacts during all stages in the development of software systems is required, because inconsistent artifacts can lead to failures. Consistency can be ensured either by reducing the amount of redundancy or by synchronizing the information that is shared across multiple artifacts. The discipline of software engineering that addresses these problems is called Round-Trip Engineering (RTE). In this thesis we present a conceptual framework for the design of RTE systems. This framework delivers precise definitions for essential terms in the context of RTE and a process that can be used to address new RTE applications. The main idea of the framework is to partition models into parts that require synchronization - skeletons - and parts that do not - clothings. Once such a partitioning is obtained, the relations between the elements of the skeletons determine whether a deterministic RTE system can be built. If not, manual decisions by developers may be required. Based on this conceptual framework, two concrete approaches to RTE are presented. The first one - Backpropagation-based RTE - employs change translation, traceability and synchronization fitness functions to allow for the synchronization of artifacts that are connected by non-injective transformations. The second approach - Role-based Tool Integration - provides means to avoid redundancy. To do so, a novel tool design method that relies on role modeling is presented. Tool integration is then performed by creating role bindings between role models. In addition to the two concrete approaches to RTE, which form the main contributions of this thesis, we investigate the creation of bridges between technical spaces.
    We consider these bridges an essential prerequisite for performing logical synchronization between artifacts. The feasibility of semantic web technologies is also a subject of the thesis, because the specification of synchronization rules was identified as a blocking factor during our problem analysis. The thesis is complemented by an evaluation of all presented RTE approaches in different scenarios. Based on this evaluation, the strengths and weaknesses of the approaches are identified, and the practical feasibility of our approaches is confirmed w.r.t. the presented RTE applications.
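The skeleton/clothing partitioning at the heart of the framework can be sketched with a simple membership predicate; the shared-element kinds below are an illustrative stand-in for the real correspondence rules between a model and its round-trip-engineered code:

```python
# Element kinds assumed (for illustration) to be mirrored in the source code,
# and therefore in need of synchronization.
SHARED_KINDS = {"class", "method"}

def partition(model: list) -> tuple:
    """Split a model into its skeleton (elements requiring synchronization)
    and its clothing (artifact-local elements that never need it)."""
    skeleton = [e for e in model if e["kind"] in SHARED_KINDS]
    clothing = [e for e in model if e["kind"] not in SHARED_KINDS]
    return skeleton, clothing

model = [
    {"kind": "class", "name": "Order"},
    {"kind": "method", "name": "total"},
    {"kind": "layout", "x": 10, "y": 20},  # diagram-only information
]
skeleton, clothing = partition(model)
print(len(skeleton), len(clothing))  # 2 1
```

Only the skeleton participates in round-trip synchronization; whether that synchronization can be deterministic then depends on the relations between skeleton elements, as the thesis argues.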