8 research outputs found

    Reproducibility and replicability of software defect prediction studies

    © 2018 Elsevier B.V. Context: Replications are an important part of scientific disciplines. Replications test the credibility of original studies and can separate true results from those that are unreliable. Objective: In this paper we investigate the replication of defect prediction studies and identify the characteristics of replicated studies. We further assess how defect prediction replications are performed and the consistency of replication findings. Method: Our analysis is based on tracking the replication of 208 defect prediction studies identified by a highly cited Systematic Literature Review (SLR) [1]. We identify how often each of these 208 studies has been replicated and determine the type of replication carried out. We record quality, citation counts, publication venue, impact factor, and data availability for all 208 SLR defect prediction papers to see whether any of these factors are associated with the frequency with which the papers are replicated. Results: Only 13 (6%) of the 208 studies have been replicated. Replication seems related to original papers appearing in the IEEE Transactions on Software Engineering (TSE) journal. The number of citations an original paper had was also an indicator of replication. In addition, studies conducted using closed-source data seem to have more replications than those based on open-source data. Where a paper has been replicated, 11 (38%) of the 29 replication studies revealed results that differ from the original study. Conclusion: Very few defect prediction studies are replicated. The lack of replication means that it remains unclear how reliable defect prediction is. We provide practical steps for improving the state of replication.

    Quality Assessment and Prediction in Software Product Lines

    At the heart of product line development is the assumption that, through structured reuse, later products will be of higher quality and require less time and effort to develop and test. This thesis presents empirical results from two case studies aimed at assessing the quality aspect of this claim and exploring fault prediction in the context of software product lines. The first case study examines pre-release faults and change proneness of four products in PolyFlow, a medium-sized, industrial software product line; the second case study analyzes post-release faults using pre-release data over seven releases of four products in Eclipse, a very large, open-source software product line. The goals of our research are (1) to determine the association between various software metrics, as well as their correlation with the number of faults at the component/package level; (2) to characterize the fault and change proneness of components/packages at various levels of reuse; (3) to explore the benefits of the structured reuse found in software product lines; and (4) to evaluate the effectiveness of predictive models, built on a variety of products in a software product line, in making accurate predictions of pre-release software faults (in the case of PolyFlow) and post-release software faults (in the case of Eclipse). The results of both studies confirm, in a software product line setting, the findings of others that faults (both pre- and post-release) are more highly correlated with change metrics than with static code metrics, and are mostly contained in a small set of components/packages. The longitudinal aspect of our research indicates that new products do benefit from the development and testing of previous products. The results also indicate that pre-existing components/packages, including the common components/packages, undergo continuous change but tend to sustain low fault densities. However, this is not always true for newly developed components/packages. Finally, the results show that pre-release faults in the case of PolyFlow and post-release faults in the case of Eclipse can be predicted accurately from pre-release data and, furthermore, that these predictions benefit from information about additional products in the software product line.
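    The finding above, that faults correlate more strongly with change metrics than with static code metrics, can be illustrated with a simple correlation check. This is a minimal sketch on invented numbers, not data from PolyFlow or Eclipse:

```python
# Illustrative only: toy component data, hypothetical metric values.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One entry per component: fault count, a change metric (number of
# revisions), and a static code metric (lines of code).
faults    = [0, 1, 1, 3, 8, 13]
revisions = [1, 2, 3, 5, 9, 15]             # tracks faults closely
loc       = [900, 150, 400, 300, 700, 500]  # weakly related to faults

r_change = pearson(faults, revisions)
r_static = pearson(faults, loc)
print(r_change > r_static)  # change metric correlates more strongly
```

    On this toy data the change metric's correlation is near 1 while the size metric's is much weaker, mirroring the qualitative result reported above.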

    Evidence-based defect assessment and prediction for software product lines

    The systematic reuse provided by software product lines offers opportunities to achieve increased quality and reliability as a product line matures. This has led to a widely accepted assumption that as a product line evolves, its reliability improves. However, empirical investigation of the relationship among change, reuse and reliability in evolving software product lines is lacking. To address this gap, this work investigates: 1) whether reliability, as measured by post-deployment failures, improves as the products and components in a software product line change over time, and 2) whether the stabilizing effect of shared artifacts enables accurate prediction of failure-prone files in the product line. The first part of this work performs defect assessment and investigates defect trends in Eclipse, an open-source software product line. It analyzes the evolution of the product line over time in terms of the total number of defects, the percentage of severe defects, and the relationship between defects and changes. The second part explores prediction of failure-prone files in the Eclipse product line to determine whether prediction improves as the product line evolves over time. In addition, this part investigates the effect of defect and data collection periods on prediction performance. The main contributions of this work include the findings that the majority of files with severe defects are reused files rather than new files, but that common components experience less change than variation components. The work also found that there is a consistent set of metrics which serve as prominent predictors across multiple products and reuse categories over time. Classification of post-release, failure-prone files using change data for the Eclipse product line gives better recall and false-positive rates than classification using static code metrics. The work also found that ongoing change in product lines hinders the ability to predict failure-prone files, and that predicting post-release defects using pre-release change data for the Eclipse case study is difficult. For example, using more data from the past to predict future failure-prone files does not necessarily give better results than using data only from the recent past. The empirical investigation of product line change and defect data leads to an improved understanding of the interplay among change, reuse and reliability as a product line evolves.
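    Recall and false-positive rate, the two measures used above to compare change-based and static-metric classifiers, can be sketched directly from confusion-matrix counts. The counts below are hypothetical, not taken from the Eclipse study:

```python
# Hedged sketch: the two evaluation measures for a failure-prone-file
# classifier. All counts are invented for illustration.

def recall(tp, fn):
    """Fraction of truly failure-prone files the classifier caught."""
    return tp / (tp + fn)

def false_positive_rate(fp, tn):
    """Fraction of clean files wrongly flagged as failure-prone."""
    return fp / (fp + tn)

# Hypothetical confusion counts: true/false positives and negatives.
tp, fn, fp, tn = 40, 10, 15, 135

print(recall(tp, fn))               # 0.8
print(false_positive_rate(fp, tn))  # 0.1
```

    A "better recall and false-positive rate" in the sense used above means the first number goes up while the second goes down.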

    Elaboración de métricas basada en un framework de atributos para líneas de productos

    A product line is a set of related products that share common features as well as points of variability. This enables the mass production of those products and their adaptation to particular requirements. Product line engineering is a production paradigm that supports the mass customization of products; it promotes greater reuse of components, shorter development time and higher final product quality. As in any branch of engineering, measurement plays a key role in product line engineering: it supports better management of processes and of the resources they require, and makes it possible to monitor product quality. Measurement is implemented through metrics, which provide a quantitative measure of the degree to which a system, component or process possesses a given attribute (such as cost, maintainability or complexity). However, neither academia nor industry has a framework for applying metrics to product lines. Moreover, in engineering generally, and in software engineering in particular, there is no consensus on measurement terminology or methodology. This leads to difficulties in both the definition and the validation of metrics, in software engineering and in product line engineering alike. This work proposes an attribute-focused measurement framework for product line engineering and defines a set of metrics based on that framework. The proposed work comprises a systematic mapping of the literature on measurement in product line engineering, a measurement framework centred on a set of attributes drawn from product line engineering, a set of metrics for product lines, and the validation of those metrics through theoretical analysis.
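    As a toy illustration of the kind of attribute-based product-line metric such a framework can define, the sketch below computes a commonality ratio (shared features over total features) for an invented three-product line. The metric name, products and features are assumptions for illustration, not taken from the thesis:

```python
# Hedged sketch of one attribute-based product-line metric.
# Products and feature names are hypothetical.

def commonality_ratio(products):
    """Fraction of all features that appear in every product of the line."""
    all_features = set().union(*products.values())
    common = set.intersection(*map(set, products.values()))
    return len(common) / len(all_features)

line = {
    "ProductA": {"core", "logging", "export"},
    "ProductB": {"core", "logging", "sync"},
    "ProductC": {"core", "logging", "export", "sync"},
}
print(commonality_ratio(line))  # 2 shared features of 4 total -> 0.5
```

    A metric like this quantifies the "degree to which a system possesses a given attribute" (here, commonality) and could then be validated theoretically, as the abstract describes.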

    Explanatory and Causality Analysis in Software Engineering

    Software fault proneness and software development effort are two key areas of software engineering. Improving them will significantly reduce cost and promote good planning and practice in developing and managing software projects. Traditionally, studies of software fault proneness and software development effort focused on analysis and prediction, which help answer questions like 'when' and 'where'. The focus of this dissertation is on explanatory and causality studies that address questions like 'why' and 'how'. First, we applied a case-control study to explain software fault proneness. We found that Bugfixes (Prerelease bugs), Developers, Code Churn, and the Age of a file are the main contributors to Postrelease bugs in some of the open-source projects. In terms of interactions, we found that Bugfixes and Developers jointly reduced the risk of post-release software faults. The explanatory models were tested for prediction, and their performance was comparable to or better than the top-performing classifiers used in related studies. Our results indicate that software project practitioners should pay more attention to the prerelease bug-fixing process and the number of Developers assigned, as well as their interaction. They also need to pay more attention to new files (less than one year old), which contributed significantly more to Postrelease bugs than old files. Second, we built a model that explains and predicts multiple levels of software development effort and measured the effects of several metrics and their interactions using categorical regression models. The final models for the three data sets used were statistically fit, and performance was comparable to related studies. We found that project size, duration, the existence of any type of fault, the use of first- or second-generation programming languages, and team size significantly increased software development effort. On the other hand, the interactions between duration and defective project, and between duration and team size, reduced software development effort. These results suggest that software practitioners should pay extra attention to project duration and the team size assigned to every task, because when these increased from a low to a higher level, they significantly increased software development effort. Third, a structural equation modeling method was applied for causality analysis of software fault proneness. The method combined statistical and regression analysis to find the direct and indirect causes of software faults using the partial least squares path modeling method. We found direct and indirect paths from measurement models that led to software post-release bugs. Specifically, the highest direct effect came from change requests, while changing the code had a minor impact on software faults. The highest impact on code change resulted from change requests (either for bug fixing or refactoring). Interestingly, the indirect impact from code characteristics to software fault proneness was higher than the direct impact. We found similar levels of direct and indirect impact from code characteristics to code change.
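    A case-control study of the kind described above typically reports odds ratios for each exposure. As a hedged sketch, the snippet below computes the odds ratio for one hypothetical exposure (file age under one year); the counts are invented for illustration and are not the dissertation's data:

```python
# Hedged sketch: odds ratio from a 2x2 case-control table.
# Cases = files with post-release bugs; exposure = file < 1 year old.
# All counts are invented.

def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Cross-product ratio of a 2x2 case-control table."""
    return (exposed_cases * unexposed_controls) / \
           (unexposed_cases * exposed_controls)

or_new_file = odds_ratio(30, 20, 10, 40)
print(or_new_file)  # (30*40)/(20*10) = 6.0
```

    An odds ratio above 1 (here 6.0 on toy counts) is how such a study would express "new files contributed significantly more to post-release bugs than old files".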

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has proven highly flexible for incrementally evolving an SPL by letting the products grow, and later pruning the product functionalities deemed useful by refactoring and merging them back into the reusable SPL core-asset base. This thesis aims at supporting the grow-and-prune model in both initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e. analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs in conducting this practice. To address this issue, this thesis elaborates on SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to help SPL engineers with the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Herein, synchronizing both parties through sync paths is required. However, state-of-the-art tools are not tailored to SPL sync paths, and this hinders synchronizing core assets and products. To address this issue, this thesis proposes to leverage existing Version Control Systems (i.e. git/GitHub) to provide sync operations as first-class constructs.
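    Customization analysis as described above, identifying how a product has changed the core assets, can be caricatured as a set comparison between the core-asset base and a product's copy of it. The file names and version strings below are invented; the thesis itself works over git histories and a data warehouse, so this is only a hedged toy sketch:

```python
# Hedged sketch of customization analysis: which core-asset files has a
# product modified, added, or removed? Inputs map file name -> content.
# All names and contents are hypothetical.

def customizations(core_assets, product_files):
    modified = {f for f in core_assets
                if f in product_files and product_files[f] != core_assets[f]}
    added = set(product_files) - set(core_assets)
    removed = set(core_assets) - set(product_files)
    return {"modified": modified, "added": added, "removed": removed}

core = {"auth.c": "v1", "ui.c": "v1", "net.c": "v1"}
product = {"auth.c": "v1+patch", "ui.c": "v1", "extra.c": "v1"}

print(customizations(core, product))
# -> auth.c modified, extra.c added, net.c removed
```

    The "interesting customizations" an engineer would consider porting back to the core-asset base are the modified and added entries.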

    A semantic metadata enrichment software ecosystem (SMESE) : its prototypes for digital libraries, metadata enrichments and assisted literature reviews

    Contribution 1: Initial design of a semantic metadata enrichment ecosystem (SMESE) for digital libraries. The Semantic Metadata Enrichments Software Ecosystem (SMESE V1) for Digital Libraries (DLs) proposed in this paper implements a Software Product Line Engineering (SPLE) process using a metadata-based software architecture approach. It integrates a component-based ecosystem, including metadata harvesting, text and data mining, and machine learning models. SMESE V1 is based on a generic model for standardizing meta-entity metadata and a mapping ontology to support the harvesting of various types of documents and their metadata from the web, databases and linked open data. SMESE V1 supports a dynamic metadata-based configuration model using multiple thesauri. The proposed model defines rules-based crosswalks that create pathways to different sources of data and metadata. Each pathway checks the metadata source structure and performs data and metadata harvesting. SMESE V1 proposes a metadata model with six categories of metadata instead of the four currently proposed in the literature for DLs; this makes it possible to describe content by defined entity, thus increasing usability. In addition, to tackle the issue of varying degrees of depth, the proposed metadata model describes the most elementary aspects of a harvested entity. A mapping ontology model has been prototyped in SMESE V1 to identify specific text segments based on thesauri in order to enrich content metadata with topics and emotions; this mapping ontology also allows interoperability between existing metadata models.
    Contribution 2: Metadata enrichment ecosystem based on topics and interests. The second contribution extends the original SMESE V1 of Contribution 1 with a set of topic- and interest-based content semantic enrichments. The improved prototype, SMESE V3 (see following figure), uses text analysis approaches for sentiment and emotion detection and provides machine learning models to create a semantically enriched repository, thus enabling topic- and interest-based search and discovery. SMESE V3 has been designed to find short descriptions in terms of topics, sentiments and emotions. It allows efficient processing of large collections while keeping the semantic and statistical relationships that are useful for tasks such as: 1. topic detection, 2. content classification, 3. novelty detection, 4. text summarization, 5. similarity detection.
    Contribution 3: Metadata-based scientific assisted literature review. The third contribution proposes an assisted literature review (ALR) prototype, STELLAR V1 (Semantic Topics Ecosystem Learning-based Literature Assisted Review), based on machine learning models and a semantic metadata ecosystem. Its purpose is to identify, rank and recommend relevant papers for a literature review (LR). This third prototype can assist researchers, in an iterative process, in finding, evaluating and annotating relevant papers harvested from different sources and input into the SMESE V3 platform, available at any time. The key elements and concepts of this prototype are: 1. text and data mining, 2. machine learning models, 3. classification models, 4. researcher annotations, 5. semantically enriched metadata. STELLAR V1 helps the researcher to build a list of relevant papers according to a selection of metadata related to the subject of the ALR. The following figure presents the model, the related machine learning models and the metadata ecosystem used to assist the researcher in the task of producing an ALR on a specific topic.
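    The thesaurus-driven enrichment of content metadata with topics and emotions described above can be caricatured with simple keyword matching. SMESE itself uses machine learning models over mapped ontologies, so this is only a hedged toy sketch with invented thesauri:

```python
# Hedged toy sketch: tag a text with topic/emotion labels whose thesaurus
# terms it contains. The thesauri are tiny invented stand-ins, not SMESE data.

THESAURI = {
    "topic:astronomy": {"star", "galaxy", "telescope"},
    "emotion:joy": {"delight", "happy", "wonderful"},
}

def enrich(text):
    """Return the sorted labels whose thesaurus terms appear in the text."""
    words = set(text.lower().split())
    return sorted(label for label, terms in THESAURI.items()
                  if words & terms)

print(enrich("A wonderful view of the galaxy through the telescope"))
# ['emotion:joy', 'topic:astronomy']
```

    The resulting labels would be stored as enriched metadata alongside the harvested record, which is what enables the topic- and interest-based search and discovery described above.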