    A Longitudinal Cohort Study on the Retainment of Test-Driven Development

    Background: Test-Driven Development (TDD) is an agile software development practice that is claimed to boost both the external quality of software products and developers' productivity. Aims: We want to study (i) the effects of TDD on the external quality of software products as well as on developers' productivity, and (ii) the retainment of TDD over a period of five months. Method: We conducted a (quantitative) longitudinal cohort study with 30 third-year undergraduate students in Computer Science at the University of Bari in Italy. Results: The use of TDD has a statistically significant effect neither on the external quality of software products nor on the developers' productivity. However, we observed that participants using TDD produced significantly more tests than those applying a non-TDD development process, and that the retainment of TDD is particularly noticeable in the number of tests written. Conclusions: Our results should encourage software companies to adopt TDD, because those who practice TDD tend to write more tests---having more tests can come in handy when testing software systems or localizing faults---and it seems that novice developers retain TDD. Comment: ESEM, October 2018, Oulu, Finland.
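    As a rough illustration of the practice under study, here is a minimal TDD-style sketch in Python (the function and tests are illustrative, not taken from the study). In TDD the tests are written first and run to fail ("red"), then just enough production code is added to make them pass ("green"), followed by refactoring:

    import unittest

    # "Green" step: the minimal production code, written only after the
    # tests below were seen to fail; a "refactor" step would then clean it up.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # "Red" step: in TDD these tests are written first, before
    # is_leap_year exists, and are run to confirm that they fail.
    class TestIsLeapYear(unittest.TestCase):
        def test_divisible_by_four_is_leap(self):
            self.assertTrue(is_leap_year(2024))

        def test_century_is_not_leap(self):
            self.assertFalse(is_leap_year(1900))

        def test_every_fourth_century_is_leap(self):
            self.assertTrue(is_leap_year(2000))

    if __name__ == "__main__":
        unittest.main()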

    Are Delayed Issues Harder to Resolve? Revisiting Cost-to-Fix of Defects throughout the Lifecycle

    Many practitioners and academics believe in a delayed issue effect (DIE); i.e., the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world in the period 2006--2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e., the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the above study and explores reasons for the mismatch between this common rule of thumb and the empirical data. In summary, DIE is not some constant across all projects. Rather, DIE might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result, since it predicts that new development processes that promise to retire more issues faster will not have a guaranteed return on investment (depending on the context where applied), and that a long-held truth in software engineering should not be considered a global truism. Comment: 31 pages. Accepted with minor revisions to Journal of Empirical Software Engineering. Keywords: software economics, phase delay, cost to fix.
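    To make the effect being tested concrete, here is a hedged sketch of how a delayed issue effect could be checked on hypothetical issue records (the schema, phase names, and numbers are illustrative, not the paper's data set): group resolution effort by phase delay and compare medians, where a strong DIE would show median effort rising sharply with delay.

    from statistics import median

    PHASES = ["requirements", "design", "coding", "testing", "release"]

    # Hypothetical issue records: the phase where each issue was introduced,
    # the phase where it was resolved, and the effort (hours) to resolve it.
    issues = [
        {"introduced": "requirements", "resolved": "requirements", "effort": 2.0},
        {"introduced": "requirements", "resolved": "coding", "effort": 3.0},
        {"introduced": "design", "resolved": "testing", "effort": 2.5},
        {"introduced": "coding", "resolved": "coding", "effort": 1.5},
        {"introduced": "coding", "resolved": "release", "effort": 3.5},
    ]

    def phase_delay(issue):
        """Number of phases an issue lingered before being resolved."""
        return PHASES.index(issue["resolved"]) - PHASES.index(issue["introduced"])

    # Bucket resolution efforts by delay; under a strong DIE, the median
    # effort would grow steeply as the delay increases.
    by_delay = {}
    for issue in issues:
        by_delay.setdefault(phase_delay(issue), []).append(issue["effort"])

    for delay in sorted(by_delay):
        print(f"delay={delay} phases: median effort {median(by_delay[delay]):.1f} h")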

    Exploring Organizations' Software Quality Assurance Strategies

    Poor software quality leads to lost profits and even loss of life. U.S. organizations lose billions of dollars annually because of poor software quality. The purpose of this multiple case study was to explore the strategies that quality assurance (QA) leaders in small software development organizations used for successful software quality assurance (SQA) processes. A case study provided the best research design to allow for the exploration of organizational and managerial processes. The target population group was the QA leaders of 3 small software development organizations, located in Saint John, New Brunswick, Canada, that successfully implemented SQA processes. The conceptual framework that grounded this study was total quality management (TQM), established by Deming in 1980. Face-to-face semistructured interviews with 2 QA leaders from each organization, along with documentation including process and training materials, provided all the data for analysis. NVivo software aided a qualitative analysis of all collected data using a process of disassembling the data into common codes, reassembling the data into themes, interpreting their meaning, and drawing conclusions. The resulting major themes were Agile practices, documentation, testing, and lost profits. The results contrasted with the main themes discovered in the literature review, although there was some overlap. The implications for positive social change include the potential to provide QA leaders with strategies to improve SQA processes, thereby allowing for improved profits, contributing to the organizations' longevity in business, and strengthening the local economy.

    Improving Requirements-Test Alignment by Prescribing Practices that Mitigate Communication Gaps

    The communication of requirements within software development is vital for project success. Requirements engineering and testing are two processes that, when aligned, can enable the discovery of issues and misunderstandings earlier rather than later, and avoid costly and time-consuming rework and delays. There are a number of practices that support requirements-test alignment. However, each organisation and project is different, and there is no one-size-fits-all set of practices. The software process improvement method called Gap Finder is designed to increase requirements-test alignment. The method contains two parts: an assessment part and a prescriptive part. It detects potential communication gaps between people and between artefacts (the assessment part), and identifies practices for mitigating these gaps (the prescriptive part). This paper presents the design and formative evaluation of the prescriptive part; an evaluation of the assessment part was published previously. The Gap Finder method was constructed using a design science research approach and is built on the Theory of Distances for Software Engineering, which in turn is grounded in empirical evidence from five case companies. The formative evaluation was performed through a case study in which Gap Finder was applied to an ongoing development project. A qualitative and mixed-method approach was taken in the evaluation, including ethnographically-informed observations. The results show that Gap Finder can detect relevant communication gaps, and seven of the nine prescribed practices were deemed practically relevant for mitigating these gaps. The project team found the method useful, as it supported joint reflection on and improvement of their requirements communication. Our findings demonstrate that an empirically-based theory can be used to improve software development practices, and they provide a foundation for further research on factors that affect requirements communication.

    Improving Recurrent Software Development: A Contextualist Inquiry Into Release Cycle Management

    Software development is increasingly conducted in a recurrent fashion, where the same product or service is continuously being developed for the marketplace. Still, we lack detailed studies of this particular context of software development. Against this backdrop, this dissertation presents an action research study into Software Inc., a large multi-national software provider. The research addressed the challenges the company faced in managing releases and organizing software process improvement (SPI) to help recurrently develop and deliver a specific product, Secure-on-Request, to its customers and the wider marketplace. The initial problem situation was characterized by the recent acquisition of additional software, the complexity of service delivery, new engineering and product management teams, and low software development process maturity. Asking how release management can be organized and improved in the context of recurrent development of software, we drew on Pettigrew's contextualist inquiry to focus on the ongoing interaction between content, context, and process in organizing and improving release cycle practices and outcomes. As a result, the dissertation offers two contributions. Practically, it contributes to the resolution of the problem situation at Software Inc. Theoretically, it introduces a new software engineering discipline, release cycle management (RCM), which focuses on the recurrent delivery of software, includes SPI as an integral part, and is grounded in the specific experiences at Software Inc.

    Does Software Process Improvement Reduce the Severity of Defects? A Longitudinal Field Study


    Software defect prediction using maximal information coefficient and fast correlation-based filter feature selection

    Software quality assurance aims to ensure that the applications being developed are failure-free. Some modern systems are intricate due to the complexity of their information processes. Software fault prediction is an important quality assurance activity, since it is a mechanism that predicts the defect proneness of modules and classifies them, which saves resources, time, and developers' effort. In this study, a model that selects relevant features for use in defect prediction was proposed. The literature review revealed that process metrics, which are based on the historical evolution of source code over time, are better predictors of defects in version control systems. These metrics are extracted from the source-code module and include, for example, the number of additions and deletions in the source code, the number of distinct committers, and the number of modified lines. In this research, defect prediction was conducted using open source software (OSS) of software product line(s) (SPL), hence process metrics were chosen. Data sets used in defect prediction may contain non-significant and redundant attributes that affect the accuracy of machine-learning algorithms. In order to improve the prediction accuracy of classification models, features that are significant in the defect prediction process are utilised. In machine learning, feature selection techniques are applied to identify the relevant data. Feature selection is a pre-processing step that helps to reduce the dimensionality of the data. Feature selection techniques include information-theoretic methods that are based on the entropy concept. This study experimented with the efficiency of feature selection techniques, and it was found that software defect prediction using significant attributes improves prediction accuracy. A novel MICFastCR model was developed, which uses the Maximal Information Coefficient (MIC) to select significant attributes and the Fast Correlation-Based Filter (FCBF) to eliminate redundant attributes. Machine-learning algorithms were then run to predict software defects. The MICFastCR model achieved the highest prediction accuracy as reported by various performance measures.
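    As a hedged sketch of the relevance-then-redundancy pipeline the abstract describes, the following Python example uses scikit-learn's mutual information as a stand-in for MIC and a simplified greedy filter in the spirit of FCBF (FCBF proper ranks by symmetric uncertainty); the data, threshold, and classifier are illustrative assumptions, not the thesis's implementation:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a process-metric data set (e.g. additions,
    # deletions, distinct committers, modified lines per module).
    X, y = make_classification(n_samples=500, n_features=20,
                               n_informative=5, random_state=0)

    # Relevance step (MIC stand-in): score each feature against the defect
    # label and keep features above a small threshold, strongest first.
    relevance = mutual_info_classif(X, y, random_state=0)
    candidates = [i for i in np.argsort(relevance)[::-1] if relevance[i] > 0.01]

    # Redundancy step (FCBF-like): greedily drop any feature that shares
    # more information with an already-selected feature than with the label.
    selected = []
    for i in candidates:
        redundant = any(
            mutual_info_regression(X[:, [j]], X[:, i], random_state=0)[0] > relevance[i]
            for j in selected
        )
        if not redundant:
            selected.append(i)

    # Train a classifier on the reduced feature set and report accuracy.
    model = RandomForestClassifier(random_state=0)
    score = cross_val_score(model, X[:, selected], y, cv=5).mean()
    print(f"features kept: {len(selected)}, cv accuracy: {score:.3f}")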