
    On the Stability of Software Clones: A Genealogy-Based Empirical Study

    Clones are a matter of great concern to the software engineering community because of their dual but contradictory impact on software maintenance. While there is strong empirical evidence of the harmful impact of clones on maintenance, a number of studies have also identified positive sides of code cloning during maintenance. Recently, to help determine if clones are beneficial or not during software maintenance, software researchers have been conducting studies that measure source code stability (the likelihood that code will be modified) of cloned code compared to non-cloned code. If the presence of clones in program artifacts (files, classes, methods, variables) causes the artifacts to be more frequently changed (i.e., cloned code is more unstable than non-cloned code), clones are considered harmful. Unfortunately, existing stability studies have produced contradictory results, and even now there is no concrete answer to the research question "Is cloned or non-cloned code more stable during software maintenance?" The likely reasons behind the contradictory results are that the existing studies were conducted on different sets of subject systems with different experimental setups, involving different clone detection tools and investigating different stability metrics. Also, there are four major types of clones (Type 1: exact; Type 2: syntactically similar; Type 3: with some added, deleted or modified lines; and Type 4: semantically similar), and none of these studies compared the instability of different types of clones. Focusing on these issues, we perform an empirical study implementing seven methodologies that calculate eight stability-related metrics on the same experimental setup to compare the instability of cloned and non-cloned code in the maintenance phase. We investigated the instability of three major types of clones (Type 1, Type 2, and Type 3) from different dimensions. We excluded Type 4 clones from our investigation because the existing clone detection tools cannot detect Type 4 clones well. According to our in-depth investigation of hundreds of revisions of 16 subject systems covering four different programming languages (Java, C, C#, and Python) using two clone detection tools (NiCad and CCFinder), we found that clones generally exhibit higher instability in the maintenance phase compared to non-cloned code. Specifically, Type 1 and Type 3 clones are more unstable and more harmful than Type 2 clones. However, although clones are generally more unstable, they sometimes exhibit higher stability than non-cloned code. We further investigated the effect of clones on another important aspect of stability: method co-changeability (the degree to which methods change together). Intuitively, higher method co-changeability is an indication of higher instability of software systems. We found that clones do not have any negative effect on method co-changeability; rather, cloning can be a possible way of minimizing method co-changeability when clones are likely to evolve independently. Thus, clones have both positive and negative effects on software stability. Our empirical studies demonstrate how we can effectively use the positive sides of clones by minimizing their negative impacts.
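    As a rough illustration of the kind of stability measurement the study describes, the sketch below computes a simple modification-frequency metric from hypothetical per-revision change data; the names and data are invented for illustration, and the paper's eight metrics and seven methodologies are considerably richer than this.

```python
# Illustrative sketch of a modification-frequency style stability metric.
# The revision history and clone regions below are hypothetical; the study's
# actual metrics, tools (NiCad, CCFinder) and methodologies are more involved.

def modification_frequency(changed_lines_per_rev, region):
    """Fraction of revisions in which any line of `region` was changed."""
    start, end = region
    hits = sum(
        1 for changed in changed_lines_per_rev
        if any(start <= line <= end for line in changed)
    )
    return hits / len(changed_lines_per_rev)

# Hypothetical history: the set of changed line numbers in each revision.
history = [{3, 4}, {10}, {4, 11}, set(), {12}]

cloned_region = (3, 6)       # lines reported as a clone (hypothetical)
non_cloned_region = (7, 12)  # remaining lines (hypothetical)

print("cloned     :", modification_frequency(history, cloned_region))
print("non-cloned :", modification_frequency(history, non_cloned_region))
```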

    Three-Dimensional Finite Element Analysis of Composite Laminates Subjected to Transverse Impact

    An interest in low velocity impact problems has been revived with the advent of laminated composite materials and their increasing use in aerospace and other applications. The reason for this new activity is that, despite certain advantages of these materials over more traditional materials, composites are known to be vulnerable to impact. Impacts may occur anywhere during manufacture, normal operations, or maintenance and may induce significant internal damage in the form of matrix cracking, delamination or fibre breakage, which is undetectable by visual inspection and causes significant reductions in the strength and stability of the structure. In the present paper, a three-dimensional finite element and transient dynamic analysis of fibre-reinforced polymer matrix composite laminates (e.g. graphite/epoxy, glass/epoxy, etc.) subjected to transverse foreign object impact is performed. A layered version of the eight-noded isoparametric brick element with incompatible modes is used to model the laminate. The transient dynamic equilibrium equation is integrated step-by-step with respect to time using the Newmark direct time integration method. A non-linear contact law reported in the literature is used to model the local contact behaviour, and the time-varying contact force is calculated based on the relative displacement between impactor and laminate using the Newton-Raphson method. Based on the finite element model, a versatile computer software was developed in C++ using an object-oriented approach. The software can be used to determine several results such as the contact force history, displacement and velocity histories of the impactor, and the time-varying displacements, forces, strains and stresses throughout the laminate. Some example problems are considered to study the effects of impactor velocity and laminate boundary conditions on the impact behaviour of graphite/epoxy composite laminates, and results are presented for the time history of the contact force and the laminate central deflection. The transient dynamic strains and stresses inside the laminate were also calculated for a few cases.
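    As a rough illustration of the time-stepping scheme named above, the following sketch applies Newmark average-acceleration integration to a single-degree-of-freedom system under a short load pulse; the mass, stiffness, and load values are assumed for illustration, and the paper's actual analysis integrates the full 3D layered finite element system with a non-linear contact law solved by Newton-Raphson.

```python
import math

# Minimal sketch: Newmark average-acceleration time integration for a
# single-degree-of-freedom system m*a + k*u = f(t). The values below are
# hypothetical; the paper solves the full 3D finite element system coupled
# with a non-linear contact law via Newton-Raphson.

m, k = 1.0, 4.0e4             # mass [kg] and stiffness [N/m], assumed
beta, gamma = 0.25, 0.5       # Newmark parameters (average acceleration)
dt, n_steps = 1.0e-4, 2000    # time step [s] and number of steps

def load(t):
    # Hypothetical half-sine impact pulse of 1 ms duration.
    return 100.0 * math.sin(math.pi * t / 1.0e-3) if t < 1.0e-3 else 0.0

u, v = 0.0, 0.0                      # initial displacement and velocity
a = (load(0.0) - k * u) / m          # initial acceleration from equilibrium
k_eff = k + m / (beta * dt**2)       # effective stiffness (no damping)

for i in range(n_steps):
    t_next = (i + 1) * dt
    f_eff = load(t_next) + m * (u / (beta * dt**2)
                                + v / (beta * dt)
                                + (0.5 / beta - 1.0) * a)
    u_next = f_eff / k_eff
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    u, a = u_next, a_next

print(f"displacement after {n_steps * dt * 1e3:.1f} ms: {u:.3e} m")
```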

    Test Naming Failures. An Exploratory Study of Bad Naming Practices in Test Code

    Unit tests are a key component of the software development process, helping ensure that a developer's code is functioning as expected. Developers interact with unit tests when trying to understand, maintain, and update code. Good test names are essential for making these various processes easier, which is important considering the substantial costs and effort of software maintenance. Despite this, it has been found that the quality of test code is often lacking, specifically when it comes to test names. When a test fails, its name is often the first thing developers will see when trying to fix the failure; it is therefore important that names are of high quality in order to help with the debugging process. The objective of this work was to find anti-patterns in test method names that may have a negative impact on developer comprehension. To do this, a grounded theory study was conducted on 12 open-source Java and C# GitHub projects. From this dataset, many patterns were discovered to be common throughout the test code. Some of these patterns fit the necessary criteria of anti-patterns that would probably hinder developer comprehension. By avoiding these anti-patterns, it is believed that developers will be able to write better test names that help speed up debugging, as test names will be more comprehensive.
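    The kind of name-based screening the study motivates can be approximated mechanically. The sketch below flags a few hypothetical naming anti-patterns (numbered names, very short names, names without an expected-behaviour part); these patterns are illustrative assumptions, not the anti-patterns the study actually derived.

```python
import re

# Illustrative scan for hypothetical test-naming anti-patterns; the patterns
# below are examples only, not the anti-patterns identified in the study.
ANTI_PATTERNS = {
    "numbered name": re.compile(r"^test_?\d+$", re.IGNORECASE),
    "too short": re.compile(r"^.{1,6}$"),
    "no expected behaviour": re.compile(r"^test[A-Za-z]*$"),  # e.g. 'testLogin'
}

def check_test_name(name):
    """Return the labels of all anti-patterns a test method name matches."""
    return [label for label, pattern in ANTI_PATTERNS.items() if pattern.match(name)]

for name in ["test1", "testLogin", "login_rejects_expired_token"]:
    problems = check_test_name(name)
    print(f"{name:35s} -> {problems if problems else 'ok'}")
```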

    Change-centric improvement of team collaboration

    In software development, teamwork is essential to the successful delivery of a final product. The software industry has historically built software utilizing development teams that share the workplace. Process models, tools, and methodologies have been enhanced to support the development of software in a collocated setting. However, since the dawn of the 21st century, this scenario has begun to change: an increasing number of software companies are adopting global software development to cut costs and speed up the development process. Global software development introduces several challenges for the creation of quality software, from the adaptation of current methods, tools, techniques, etc., to new challenges imposed by the distributed setting, including physical and cultural distance between teams, communication problems, and coordination breakdowns. A particular challenge for distributed teams is the maintenance of a level of collaboration naturally present in collocated teams. Collaboration in this situation naturally drops due to low awareness of the activity of the team. Awareness is intrinsic to a collocated team, being obtained through human interaction such as informal conversation or meetings. For a distributed team, however, geographical distance and a subsequent lack of human interaction negatively impact this awareness. This dissertation focuses on the improvement of collaboration, especially within geographically dispersed teams. Our thesis is that by modeling the evolution of a software system in terms of fine-grained changes, we can produce a detailed history that may be leveraged to help developers collaborate. To validate this claim, we first create a model to accurately represent the evolution of a system as sequences of fine-grained changes. We proceed to build a tool infrastructure able to capture and store fine-grained changes for both immediate and later use. Upon this foundation, we devise and evaluate a number of applications for our work with two distinct goals: 1. To assist developers with real-time information about the activity of the team. These applications aim to improve developers’ awareness of team member activity that can impact their work. We propose visualizations to notify developers of ongoing change activity, as well as a new technique for detecting and informing developers about potential emerging conflicts. 2. To help developers satisfy their needs for information related to the evolution of the software system. These applications aim to exploit the detailed change history generated by our approach in order to help developers find answers to questions arising during their work. To this end, we present two new measurements of code expertise, and a novel approach to replaying past changes according to user-defined criteria. We evaluate the approach and applications by adopting appropriate empirical methods for each case. A total of two case studies, one controlled experiment, and one qualitative user study are reported. The results provide evidence that applications leveraging a fine-grained change history of a software system can effectively help developers collaborate in a distributed setting.
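    One hypothetical way to picture such a fine-grained change history is as an ordered log of small, attributed edit operations that can be scanned for emerging conflicts. The record format and conflict rule sketched below are illustrative assumptions, not the dissertation's actual change model or detection technique.

```python
from dataclasses import dataclass

# Hypothetical fine-grained change record; the dissertation's change model
# and conflict-detection technique are richer than this sketch.
@dataclass
class Change:
    author: str
    entity: str      # e.g. a method or class identifier
    kind: str        # e.g. "add", "modify", "delete"
    timestamp: float # seconds since some reference point

def emerging_conflicts(changes, window=300.0):
    """Pairs of changes by different authors to the same entity within `window` seconds."""
    conflicts = []
    for i, first in enumerate(changes):
        for second in changes[i + 1:]:
            if (first.entity == second.entity and first.author != second.author
                    and abs(first.timestamp - second.timestamp) <= window):
                conflicts.append((first, second))
    return conflicts

log = [
    Change("alice", "Parser.parse", "modify", 100.0),
    Change("bob", "Parser.parse", "modify", 250.0),
    Change("bob", "Lexer.next", "add", 260.0),
]
for first, second in emerging_conflicts(log):
    print(f"potential conflict on {first.entity}: {first.author} vs {second.author}")
```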

    Mechanical and optical studies for an extremely large telescope mid-infrared instrument

    Extremely Large Telescopes are considered worldwide as one of the highest priorities in ground-based astronomy, since they have the potential to vastly advance astrophysical knowledge. ESO is building its own Extremely Large optical and infrared Telescope, the ELT. This new telescope will have a 39 m main mirror and will be the largest optical telescope in the world, able to work at the diffraction limit. METIS, one of the first-light instruments of the ELT, has powerful imaging and spectrographic capabilities at thermal wavelengths. It will allow the investigation of key properties of a wide range of celestial objects. METIS is an extremely complex instrument, weighing almost 11 t and requiring high positioning and steering precision. Here I present METIS' Warm Support Structure. It consists of a seven-leg elevation platform, a hexapod capable of providing METIS with sub-millimetre and arcsecond positioning and steering resolutions, and an access platform where personnel can perform in-situ maintenance activities. The structure weighs less than 5 t and is capable of surviving earthquake conditions with accelerations up to 5 g. The current design is supported by FEM simulations in ANSYS®, and was approved for Phase C. I also study the impact of the Talbot effect on the optics of METIS. This near-field effect reimages high spatial frequencies of the phase into the amplitude, potentially harming the high-contrast imaging (HCI) modes of the instrument. I analyse the phase errors resulting from the surface form errors of optical elements and conclude that they have an impact of less than 3% on the amplitude given the current specifications. Finally, I develop a way of replicating the behaviour of a vortex coronagraph with ray-tracing software. I use this to assess the stray light caused by this kind of coronagraph.
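    For context on the Talbot effect mentioned above, a periodic phase pattern of pitch p illuminated at wavelength λ self-images at the standard Talbot length z_T = 2p²/λ, which is why mid-spatial-frequency surface errors can turn phase into amplitude after a finite propagation distance. The short sketch below just evaluates this textbook formula; the pitch and wavelength are assumed values, not METIS specifications.

```python
# Standard Talbot length z_T = 2 * p**2 / wavelength for a periodic pattern
# of pitch p; the pitch and wavelength below are hypothetical examples,
# not METIS parameters.
def talbot_length(pitch_m, wavelength_m):
    return 2.0 * pitch_m**2 / wavelength_m

pitch = 2.0e-3          # 2 mm spatial period of a surface ripple (assumed)
wavelength = 10.0e-6    # 10 micron, in the thermal-infrared range (assumed)

print(f"Talbot length: {talbot_length(pitch, wavelength):.1f} m")
```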

    Maintenance of Automated Test Suites in Industry: An Empirical study on Visual GUI Testing

    Context: Verification and validation (V&V) activities make up 20 to 50 percent of the total development costs of a software system in practice. Test automation is proposed to lower these V&V costs, but available research only provides limited empirical data from industrial practice about the maintenance costs of automated tests and what factors affect these costs. In particular, these costs and factors are unknown for automated GUI-based testing. Objective: This paper addresses this lack of knowledge through analysis of the costs and factors associated with the maintenance of automated GUI-based tests in industrial practice. Method: An empirical study at two companies, Siemens and Saab, is reported, where interviews about, and empirical work with, Visual GUI Testing are performed to acquire data about the technique's maintenance costs and feasibility. Results: 13 factors are observed that affect maintenance, e.g. tester knowledge/experience and test case complexity. Further, statistical analysis shows that developing new test scripts is costlier than maintenance, but also that frequent maintenance is less costly than infrequent, big-bang maintenance. In addition, a cost model, based on previous work, is presented that estimates the time to positive return on investment (ROI) of test automation compared to manual testing. Conclusions: It is concluded that test automation can lower overall software development costs of a project whilst also having positive effects on software quality. However, maintenance costs can still be considerable, and the less time a company currently spends on manual testing, the more time is required before positive, economic ROI is reached after automation.
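    The break-even idea behind such a cost model can be sketched in a few lines: automation reaches positive ROI once the accumulated saving per test cycle outweighs the initial scripting investment. The linear form and figures below are assumptions for illustration, not the model or data reported in the paper.

```python
# Hypothetical break-even sketch for test automation vs. manual testing.
#   cost_manual(n)    = n * manual_per_cycle
#   cost_automated(n) = development_cost + n * maintenance_per_cycle
# The numbers and the linear form are assumptions, not the paper's model.

def cycles_to_positive_roi(development_cost, maintenance_per_cycle, manual_per_cycle):
    saving_per_cycle = manual_per_cycle - maintenance_per_cycle
    if saving_per_cycle <= 0:
        return None  # automation never breaks even under these assumptions
    cycles = 0
    while cycles * saving_per_cycle <= development_cost:
        cycles += 1
    return cycles

# Example: 400 h to develop scripts, 10 h maintenance vs. 50 h manual per cycle.
print(cycles_to_positive_roi(400, 10, 50), "test cycles to positive ROI")
```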

    Advanced Techniques for Assets Maintenance Management

    16th IFAC Symposium on Information Control Problems in Manufacturing (INCOM 2018), Bergamo, Italy, 11–13 June 2018. Edited by Marco Macchi, László Monostori, and Roberto Pinto. The aim of this paper is to highlight the importance of new and advanced techniques supporting decision making in different business processes for maintenance and assets management, as well as the basic need to adopt a management framework with a clear processes map and the corresponding supporting IT systems. The framework, processes, and systems will be the key fundamental enablers for success and for continuous improvement. The suggested framework will help to define and improve business policies and work procedures for the operation and maintenance of assets along their life cycle. The following sections present some achievements in this direction, and finally propose possible future lines for a research agenda within this field of assets management.