
    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is difficult and may easily result in poor encapsulation, which can have serious implications for a number of system qualities. Such encapsulation problems are often hard to identify within large software systems until they cause a maintenance problem (by which point it is usually too late), and attempting to perform such analysis manually is tedious and error prone. Two common encapsulation problems that arise from this decomposition process are data classes and god classes. Typically the two occur together: data classes lack functionality that has been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, that automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open source systems, comparing the tool's results with similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
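    The abstract does not spell out the heuristics themselves. As a rough illustration of what a metrics-style detector for data classes and god classes can look like, the sketch below flags classes from precomputed class-level metrics; the metric names (NOM, WMC, ATFD, cohesion), thresholds and class names are illustrative assumptions loosely in the spirit of Marinescu's detection strategies, not the tool described above.

```java
import java.util.List;

// Illustrative, simplified class-level metrics; a real detector would compute these from parsed source.
record ClassMetrics(String name,
                    int numMethods,          // NOM: number of methods
                    int numAccessors,        // getters/setters
                    int weightedMethodCount, // WMC: sum of method complexities
                    int foreignDataAccesses, // ATFD: attributes of other classes used directly
                    double cohesion) {       // e.g. tight class cohesion in [0,1]

    // Hypothetical thresholds, for illustration only.
    boolean looksLikeDataClass() {
        return numMethods > 0
                && numAccessors >= 0.8 * numMethods    // almost only getters/setters
                && weightedMethodCount <= numMethods;  // essentially no real logic
    }

    boolean looksLikeGodClass() {
        return weightedMethodCount > 47      // very complex
                && foreignDataAccesses > 5   // uses many other classes' data
                && cohesion < 0.33;          // low cohesion
    }
}

public class SmellDetectorSketch {
    public static void main(String[] args) {
        List<ClassMetrics> model = List.of(
                new ClassMetrics("Customer", 6, 6, 6, 0, 1.0),
                new ClassMetrics("OrderManager", 40, 2, 120, 9, 0.2));

        for (ClassMetrics c : model) {
            if (c.looksLikeDataClass()) System.out.println(c.name() + ": possible data class");
            if (c.looksLikeGodClass())  System.out.println(c.name() + ": possible god class");
        }
    }
}
```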

    A model-driven approach to broaden the detection of software performance antipatterns at runtime

    Performance antipatterns document bad design practices that have a negative influence on system performance. In our previous work we formalized such antipatterns as logical predicates over four views: (i) the static view, which captures the software elements (e.g. classes, components) and the static relationships among them; (ii) the dynamic view, which represents the interactions (e.g. messages) that occur between the software entities to provide the system functionalities; (iii) the deployment view, which describes the hardware elements (e.g. processing nodes) and the mapping of the software entities onto the hardware platform; (iv) the performance view, which collects specific performance indices. In this paper we present a lightweight infrastructure that is able to detect performance antipatterns at runtime through monitoring. The approach pre-evaluates such predicates: the antipatterns whose static, dynamic and deployment sub-predicates are satisfied by the current system configuration are identified, and the verification of their performance sub-predicates is deferred to runtime. The proposed infrastructure leverages model-driven techniques to generate probes for monitoring the performance sub-predicates and detecting antipatterns at runtime.
    Comment: In Proceedings FESCA 2014, arXiv:1404.043
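    The paper's formalization is not reproduced here, but the split it describes, evaluating the static, dynamic and deployment sub-predicates against the current configuration up front and leaving only the performance sub-predicate for runtime monitoring, can be sketched as follows. All types, fields and thresholds are hypothetical placeholders, not the predicates defined in the paper.

```java
import java.util.function.Predicate;

// Hypothetical, highly simplified stand-ins for the four views.
record SystemConfig(int messagesPerOperation, boolean remoteDeployment) {} // static + dynamic + deployment
record PerfSample(double avgResponseTimeMs) {}                             // performance view

// An illustrative antipattern rule split into sub-predicates.
class AntipatternRule {
    // Checked once, offline, against the current system configuration.
    final Predicate<SystemConfig> designTimePart =
            cfg -> cfg.messagesPerOperation() > 10 && cfg.remoteDeployment();
    // Verified at runtime from monitored performance indices.
    final Predicate<PerfSample> runtimePart =
            s -> s.avgResponseTimeMs() > 500.0;
}

public class RuntimeDetectionSketch {
    public static void main(String[] args) {
        AntipatternRule rule = new AntipatternRule();
        SystemConfig cfg = new SystemConfig(14, true);

        // Pre-filtering: only rules whose design-time part holds are monitored at runtime.
        if (rule.designTimePart.test(cfg)) {
            PerfSample sample = new PerfSample(620.0); // would come from a generated probe
            if (rule.runtimePart.test(sample)) {
                System.out.println("Antipattern instance detected at runtime");
            }
        }
    }
}
```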

    On Preserving the Behavior in Software Refactoring: A Systematic Mapping Study

    Context: Refactoring is the art of modifying the design of a system without altering its behavior. The idea is to reorganize variables, classes and methods to facilitate their future adaptation and comprehension. As the concept of behavior preservation is fundamental to refactoring, several studies, using formal verification, language transformation and dynamic analysis, have been proposed to monitor the execution of refactoring operations and their impact on the program semantics. However, no existing study examines the available behavior preservation strategies for each refactoring operation. Objective: This paper identifies the behavior preservation approaches in the research literature. Method: We conduct a systematic mapping study to capture all existing behavior preservation approaches, which we classify based on several criteria including their methodology, applicability, and degree of automation. Results: The results indicate that several behavior preservation approaches have been proposed in the literature. The approaches range from using formalisms and techniques, through developing automatic refactoring safety tools, to performing manual analysis of the source code. Conclusion: Our taxonomy reveals that there are some types of refactoring operations whose behavior preservation is under-researched. Our classification also indicates that several strategies can be combined to better detect violations of the program semantics.
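    As a concrete, if simplified, illustration of the dynamic-analysis strategy mentioned above, the sketch below runs the same inputs through a routine before and after an extract-method style refactoring and asserts that the observable outputs coincide. The example code and inputs are invented for illustration; they are not taken from the mapping study.

```java
import java.util.List;

public class BehaviorPreservationSketch {

    // Original version: total price computed in a single loop.
    static int totalBefore(List<Integer> prices) {
        int sum = 0;
        for (int p : prices) sum += p;
        return sum;
    }

    // After an "extract method"-style refactoring: same observable behavior intended.
    static int totalAfter(List<Integer> prices) {
        return sumOf(prices);
    }

    private static int sumOf(List<Integer> prices) {
        return prices.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        // Dynamic check: identical outputs on a shared set of sample inputs.
        List<List<Integer>> inputs = List.of(List.<Integer>of(), List.of(5), List.of(1, 2, 3));
        for (List<Integer> in : inputs) {
            if (totalBefore(in) != totalAfter(in)) {
                throw new AssertionError("Behavior changed for input " + in);
            }
        }
        System.out.println("All sampled behaviors preserved");
    }
}
```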

    The Impact Of Design Patterns In Refactoring Technique To Measure Performance Efficiency

    Designing and developing a software application has never been an easy task. The process is often time consuming and requires interaction between several different aspects. It is even harder when re-engineering a legacy system through refactoring, especially when the aim is to achieve standard software quality. Performance is one of the essential quality attributes of software. Many studies in the literature take the premise that design patterns improve the quality of object-oriented software systems, but some studies suggest that the use of design patterns does not always result in appropriate designs. Questions remain about the negative or positive impact of pattern-based refactoring on performance. In practice, refactoring any part or structure of a system may affect other related parts or structures. Applying refactoring together with design patterns may improve software quality by making the system more efficient. Considerable research has been devoted to re-designing systems to improve quality attributes such as maintainability and reliability; less attention has been paid to measuring the impact on the performance efficiency quality factor. The main idea of this thesis is to investigate that impact and demonstrate how design patterns can be used to refactor a legacy software application in terms of performance efficiency. It is therefore beneficial to investigate whether design patterns influence the performance of applications. In the thesis, an enterprise project named SIA (Sistem Informasi Akademik) is designed on the Java EE platform, and some issues related to design patterns are addressed. The selection of design patterns is based on the application context. Three kinds of parameters are measured, following a standard guideline: time behaviour, resource utilization and capacity. Apache JMeter and Java Mission Control are used as supporting tools in the experiments; they are convenient and produce appropriate performance measurement results. The experimental results show that the comparison between the legacy system and the refactored system that implements design patterns indicates an influence on application quality, especially on performance efficiency.
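    The thesis measures a Java EE system under JMeter and Java Mission Control, which cannot be reproduced in a few lines. As a self-contained toy illustration of comparing the time behaviour of a legacy routine against a Strategy-style refactored one, the following sketch may help; the discount example, workload and timings are assumptions for illustration only and say nothing about the SIA system itself.

```java
import java.util.function.IntUnaryOperator;

public class PerformanceComparisonSketch {

    // Legacy style: behaviour chosen by a conditional on every call.
    static int legacyDiscount(String customerType, int price) {
        if ("student".equals(customerType)) return price * 80 / 100;
        if ("member".equals(customerType))  return price * 90 / 100;
        return price;
    }

    // Refactored style: behaviour selected once via a Strategy (here a lambda).
    static int refactoredDiscount(IntUnaryOperator strategy, int price) {
        return strategy.applyAsInt(price);
    }

    static long timeNanos(Runnable workload) {
        long start = System.nanoTime();
        workload.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        IntUnaryOperator studentStrategy = price -> price * 80 / 100;

        long legacy = timeNanos(() -> {
            for (int i = 0; i < 1_000_000; i++) legacyDiscount("student", i);
        });
        long refactored = timeNanos(() -> {
            for (int i = 0; i < 1_000_000; i++) refactoredDiscount(studentStrategy, i);
        });

        // A toy time-behaviour comparison; real measurements need warm-up,
        // repetition, and load generation (e.g. via JMeter) to be meaningful.
        System.out.printf("legacy: %d ns, refactored: %d ns%n", legacy, refactored);
    }
}
```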

    Testing and test-driven development of conceptual schemas

    The traditional focus of Information Systems (IS) quality assurance relies on the evaluation of its implementation. However, the quality of an IS is largely determined in the first stages of its development. Several studies reveal that more than half of the errors that occur during systems development are requirements errors. A requirements error is defined as a mismatch between the requirements specification and stakeholders' needs and expectations. Conceptual modeling is an essential activity in requirements engineering aimed at developing the conceptual schema of an IS. The conceptual schema is the general knowledge that an IS needs to know in order to perform its functions. A conceptual schema specification has semantic quality when it is valid and complete. Validity means that the schema is correct (the knowledge it defines is true for the domain) and relevant (the knowledge it defines is necessary for the system). Completeness means that the conceptual schema includes all relevant knowledge. The validation of a conceptual schema pursues the detection of requirements errors in order to improve its semantic quality, and it remains a critical challenge in requirements engineering. In this work we contribute to this challenge, taking into account that, since conceptual schemas of IS can be specified in executable artifacts, they can be tested. In this context, the main contributions of this Thesis are (1) an approach to test conceptual schemas of information systems, and (2) a novel method for the incremental development of conceptual schemas supported by continuous test-driven validation. As far as we know, this is the first work that proposes and implements an environment for automated testing of UML/OCL conceptual schemas, and the first work that explores the use of test-driven approaches in conceptual modeling. The testing of conceptual schemas may be an important and practical means for their validation: it allows checking correctness and completeness according to stakeholders' needs and expectations, and, in conjunction with the automatic check of basic test adequacy criteria, it also allows analyzing the relevance of the elements defined in the schema. The testing environment we propose requires a specialized language for writing tests of conceptual schemas. We defined the Conceptual Schema Testing Language (CSTL), which may be used to specify automated tests of executable schemas specified in UML/OCL. We also describe a prototype implementation of a test processor that makes the approach feasible in practice. The conceptual schema testing approach supports test-last validation of conceptual schemas, but it also makes sense to test incomplete conceptual schemas while they are developed. This fact lays the groundwork for Test-Driven Conceptual Modeling (TDCM), which is our second main contribution. TDCM is a novel conceptual modeling method based on the main principles of Test-Driven Development (TDD), an extreme programming method in which a software system is developed in short iterations driven by tests. We have applied the method in several case studies, in the context of Design Research, which is the general research framework we adopted.
    Finally, we also describe an integration approach of TDCM into a broad set of software development methodologies, including the Unified Process development methodology, MDD-based approaches, storytest-driven agile methods, and goal- and scenario-oriented requirements engineering methods.
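    CSTL syntax is not shown in the abstract, so the sketch below only illustrates the underlying idea in plain Java: a conceptual schema test instantiates entities of the executable schema and checks that domain constraints, analogous to OCL invariants, accept the states stakeholders expect. The Student/Course fragment and the invariant are hypothetical and are not taken from the Thesis.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical fragment of an executable conceptual schema.
class Student {
    final String name;
    final List<Course> enrolledIn = new ArrayList<>();
    Student(String name) { this.name = name; }

    // Invariant analogous to an OCL constraint: a student takes at most 6 courses.
    boolean invariantMaxEnrolment() { return enrolledIn.size() <= 6; }
}

class Course {
    final String code;
    Course(String code) { this.code = code; }
}

public class ConceptualSchemaTestSketch {
    public static void main(String[] args) {
        // Test case derived from a stakeholder expectation:
        // "a student can enrol in a course, and never in more than six".
        Student alice = new Student("Alice");
        alice.enrolledIn.add(new Course("SE101"));

        if (!alice.invariantMaxEnrolment()) {
            throw new AssertionError("Schema rejects a valid state: requirements error?");
        }
        // A failing check here would point at an invalid or incomplete schema,
        // which is the kind of requirements error the approach aims to expose.
        System.out.println("Conceptual schema test passed");
    }
}
```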

    Handling High-Level Model Changes Using Search Based Software Engineering

    Model-Driven Engineering (MDE) considers models as first-class artifacts during the software lifecycle. The number of available tools, techniques, and approaches for MDE is increasing as its use gains traction in driving quality and controlling cost in the evolution of large software systems. Software models, defined as code abstractions, are iteratively refined, restructured, and evolved for many reasons, such as fixing defects in design, reflecting changes in requirements, and modifying a design to enhance existing features. In this work, we focus on four main problems related to the evolution of software models: 1) the detection of applied model changes, 2) the merging of parallel evolved models, 3) the detection of design defects in the merged model, and 4) the recommendation of new changes to fix defects in software models. For the first contribution, an a-posteriori multi-objective change detection approach is proposed for evolved models. The changes are expressed in terms of atomic and composite refactoring operations; the majority of existing approaches detect atomic changes but do not adequately address composite changes, which mask atomic operations in intermediate models. For the second contribution, several approaches exist to construct a merged model by incorporating all non-conflicting operations of the evolved models. Conflicts arise when the application of one operation disables the applicability of another. The essence of the problem is to identify and prioritize conflicting operations based on importance and context, a gap in existing approaches. This work proposes a multi-objective formulation of model merging that aims to maximize the number of successfully applied merged operations. For the third and fourth contributions, the majority of existing work focuses on refactoring at the source code level and does not exploit the benefits of software design optimization at the model level. However, refactoring at the model level is inherently more challenging because of the difficulty of assessing the potential impact on the structural and behavioral features of the software system; this requires analysis of class and activity diagrams to appraise the overall system quality, feasibility, and inter-diagram consistency. This work focuses on designing, implementing, and evaluating a multi-objective refactoring framework for the detection and fixing of design defects in software models.
    Ph.D. Information Systems Engineering, College of Engineering and Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/136077/1/Usman Mansoor Final.pdf (Dissertation)
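    To give the multi-objective merging idea a concrete shape: a candidate solution can be viewed as an ordered sequence of edit operations, scored by how many operations apply cleanly versus how many are skipped because an earlier operation disabled them. The operation names, conflict sets and objectives below are illustrative assumptions; the dissertation's actual solution encoding and search algorithm are not reproduced here.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A model edit operation together with the operations it disables (illustrative).
record EditOp(String name, Set<String> disables) {}

public class ModelMergeFitnessSketch {

    // Objective 1 (to maximise): operations of the candidate sequence that apply cleanly.
    static int appliedOperations(List<EditOp> candidate) {
        Set<String> disabled = new HashSet<>();
        int applied = 0;
        for (EditOp op : candidate) {
            if (!disabled.contains(op.name())) {
                applied++;
                disabled.addAll(op.disables()); // applying one op may disable others
            }
        }
        return applied;
    }

    // Objective 2 (to minimise): operations skipped because of conflicts.
    static int skippedOperations(List<EditOp> candidate) {
        return candidate.size() - appliedOperations(candidate);
    }

    public static void main(String[] args) {
        List<EditOp> candidate = List.of(
                new EditOp("renameClass A->B", Set.of()),
                new EditOp("pullUpMethod m", Set.of("inlineMethod m")),
                new EditOp("inlineMethod m", Set.of()));

        // A search algorithm (e.g. NSGA-II, common in search-based software engineering)
        // would evolve many such candidate sequences and keep the non-dominated ones.
        System.out.println("applied = " + appliedOperations(candidate)
                + ", skipped = " + skippedOperations(candidate));
    }
}
```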