
    A Study on Architectural Smells Prediction

    Architectural smells can be detrimental to a system's maintainability and evolvability, and they represent a source of architectural debt. It is therefore important to understand how they evolved in the past and to predict their future evolution. In this paper, we evaluate whether the existence of architectural smells in past versions of a project can be used to predict their presence in the future. We analyzed four Java projects across 295 GitHub releases and applied four different supervised learning models for the prediction in a repeated cross-validation setting. We found that historical architectural smell information can be used to predict the presence of architectural smells in the future. Hence, practitioners should carefully monitor the evolution of architectural smells and take preventative actions to avoid introducing them and to stave off their progressive growth.
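
    As a minimal sketch of the kind of setup described above, the Python snippet below evaluates supervised models with repeated cross-validation on historical smell data. The input file, feature names, and choice of classifiers are illustrative assumptions, not the paper's actual configuration.

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

        # Hypothetical dataset: one row per (component, release), with smell
        # indicators from past releases as features and the presence of a
        # smell in the next release as the label.
        data = pd.read_csv("smell_history.csv")
        X = data[["smells_t1", "smells_t2", "smells_t3"]]  # last three releases
        y = data["smell_next_release"]                     # 1 if smell appears next

        # Repeated cross-validation, as in the paper's evaluation setting.
        cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=42)
        for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
            scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
            print(f"{type(model).__name__}: mean F1 = {scores.mean():.3f}")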

    Software evolution prediction using seasonal time analysis: a comparative study

    Prediction models of software change requests are useful for supporting rational and timely resource allocation to the evolution process. In this paper we use a time series forecasting model to predict software maintenance and evolution requests in an open source software project (Eclipse), as an example of projects with seasonal release cycles. We build an ARIMA model based on data collected from Eclipse's change request tracking system since the project's start. A change request may refer to defects found in the software, but also to suggested improvements in the system under scrutiny. Our model includes the identification of seasonal patterns and tendencies, and is validated through the forecast of the change requests' evolution over the next 12 months. The use of seasonal information significantly improves the estimation ability of this model when compared to other ARIMA models found in the literature, and does so over a much longer estimation period. Being able to accurately forecast the evolution of change requests over a fairly long time period is important for enabling adequate process control in maintenance activities, and it facilitates effort estimation and timely resource allocation. The approach presented in this paper is suitable for projects with a relatively long history, as the model building process relies on historical data.
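
    The snippet below is a minimal sketch of a seasonal ARIMA forecast over a twelve-month horizon, in the spirit of the model described above; the input file, column names, and (p, d, q)(P, D, Q)s orders are assumptions for illustration only.

        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Hypothetical input: one row per month with the number of new
        # change requests recorded in the tracking system.
        series = pd.read_csv("change_requests.csv", index_col="month",
                             parse_dates=True)["count"].asfreq("MS")

        # Seasonal ARIMA with a yearly (s = 12) seasonal component.
        model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
        fitted = model.fit(disp=False)

        # Forecast the next 12 months, matching the paper's validation horizon.
        print(fitted.forecast(steps=12))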

    How do developers fix issues and pay back technical debt in the Apache ecosystem?

    During software evolution, technical debt (TD) follows a constant ebb and flow, being incurred and paid back, sometimes in the same day and sometimes ten years later. Several studies in the literature have investigated how technical debt in source code accumulates over time and the consequences of this accumulation for software maintenance. However, to the best of our knowledge, there are no large-scale studies that focus on the types of issues that are fixed and the amount of TD that is paid back during software evolution. In this paper we present the results of a case study in which we analyzed the evolution of fifty-seven Java open-source software projects of the Apache Software Foundation at the temporal granularity of weekly snapshots. In particular, we focus on the amount of technical debt that is paid back and the types of issues that are fixed. The findings reveal that a small subset of all issue types is responsible for the largest percentage of TD repayment; thus, by targeting particular violations, a development team can achieve higher benefits.
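
    As a minimal sketch of the kind of aggregation this study performs, the snippet below sums the technical debt repaid per issue type across snapshots; the input file and column names are illustrative assumptions (static analyzers such as SonarQube export similar per-issue remediation data).

        import pandas as pd

        # Hypothetical input: one row per fixed issue, with the issue type
        # and the remediation effort (TD, in minutes) its fix paid back.
        fixed = pd.read_csv("fixed_issues.csv")  # columns: week, issue_type, td_minutes

        repaid = (fixed.groupby("issue_type")["td_minutes"]
                       .sum()
                       .sort_values(ascending=False))

        # Share of total repayment per issue type: per the findings above,
        # a small subset of types should dominate.
        print((repaid / repaid.sum()).head(10))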

    RePOR: Mimicking humans on refactoring tasks. Are we there yet?

    Refactoring is a maintenance activity that aims to improve design quality while preserving the behavior of a system. Several (semi-)automated approaches have been proposed to support developers in this activity, based on the correction of anti-patterns, which are 'poor' solutions to recurring design problems. However, little quantitative evidence exists about the impact of automatically refactored code on program comprehension, or about the contexts in which automated refactoring can be as effective as manual refactoring. Leveraging RePOR, an automated refactoring approach based on partial order reduction techniques, we performed an empirical study to investigate whether automatically refactored code structure affects the understandability of systems during comprehension tasks. (1) We surveyed 80 developers, asking them to identify from a set of 20 refactoring changes whether they were generated by developers or by a tool, and to rate the refactoring changes according to their design quality; (2) we asked 30 developers to complete code comprehension tasks on 10 systems that were refactored by either a freelancer or an automated refactoring tool. To make the comparison fair, for a subset of refactoring actions that introduce new code entities, only synthetic identifiers were presented to participants. We measured developers' performance using the NASA task load index for their effort, the time they spent performing the tasks, and their percentage of correct answers. Our findings show that, despite current technology limitations, it is reasonable to expect a refactoring tool to match developer code.
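
    As a minimal sketch of how such comprehension data might be analyzed, the snippet below computes an unweighted (raw) NASA-TLX score per participant and compares the two groups with a nonparametric test; the data layout and column names are illustrative assumptions, not the study's actual dataset.

        import pandas as pd
        from scipy.stats import mannwhitneyu

        # Assumed columns: group ("manual" or "tool") plus the six NASA-TLX
        # subscales, each rated on a 0-100 scale.
        tasks = pd.read_csv("comprehension_tasks.csv")
        tlx = ["mental", "physical", "temporal", "performance", "effort",
               "frustration"]
        tasks["rtlx"] = tasks[tlx].mean(axis=1)  # raw TLX: mean of subscales

        manual = tasks.loc[tasks["group"] == "manual", "rtlx"]
        tool = tasks.loc[tasks["group"] == "tool", "rtlx"]
        print(mannwhitneyu(manual, tool, alternative="two-sided"))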

    Evolution of statistical analysis in empirical software engineering research: Current state and steps forward

    Software engineering research is evolving, and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if, and to what degree, empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001–2015. Results from both review steps were used to: i) identify and analyze the predominant practices in ESE (e.g., using the t-test or ANOVA), as well as relevant trends in the usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow, with suggestions on how to apply different statistical methods and guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard for reporting the practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and the practitioner's context.
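
    The snippet below is a minimal sketch of the reporting practice advocated above: pairing a nonparametric test with an effect size so that practical significance can be discussed alongside the p-value. The two samples are placeholder values purely for illustration.

        import numpy as np
        from scipy.stats import mannwhitneyu

        a = np.array([12.1, 13.4, 11.8, 14.0, 12.9])  # e.g. metric under method A
        b = np.array([10.2, 11.1, 10.8, 11.9, 10.5])  # e.g. metric under method B

        stat, p = mannwhitneyu(a, b, alternative="two-sided")

        # Cliff's delta: P(a > b) - P(a < b), ranging over [-1, 1]; a common
        # reading treats |d| >= 0.474 as a large effect (Romano et al.).
        gt = sum(x > y for x in a for y in b)
        lt = sum(x < y for x in a for y in b)
        delta = (gt - lt) / (len(a) * len(b))
        print(f"p = {p:.3f}, Cliff's delta = {delta:.2f}")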