17,819 research outputs found

    EMPIRICAL CHARACTERIZATION OF SOFTWARE QUALITY

    The research topic focuses on the characterization of software quality considering the main software elements: people, process, and product. Many attributes (size, language, testing techniques, etc.) could plausibly affect software quality. In this thesis we aim to understand, by empirical means, the impact of attributes of the three P's (people, product, process) on software quality. Software quality can be interpreted in many ways, such as customer satisfaction, stability, and defects; in this thesis we adopt 'defect density' as the quality measure. The research therefore focuses on empirical evidence of the impact of the three P's attributes on software defect density. Empirical research methods (systematic literature reviews, case studies, and interviews) are used to collect evidence; each method helps extract empirical evidence about the object under study, and statistical methods are used for data analysis.

    Considering the product attributes, we studied size, language, development mode, age, complexity, module structure, module dependency, and module quality, and their impact on project quality. Considering the process attributes, we studied process maturity and structure and their impact on project quality. Considering the people attributes, we studied experience and capability and their impact on project quality. Moreover, in the process category, we studied the impact of one testing approach, 'exploratory testing', on software quality. Exploratory testing is a widely used software-testing practice that means simultaneous learning, test design, and test execution. We analyzed the weaknesses of exploratory testing and proposed a hybrid testing approach in an attempt to improve quality.

    Concerning the product attributes, we found significant quality differences between open and closed source projects, Java and C projects, and large and small projects. Very small and defect-free modules have an impact on software quality. Different complexity metrics have different impacts on software quality when size is considered. Product complexity as defined in Table 53 has a partial impact on software quality. However, software age and module dependencies are not factors that characterize software quality. Concerning the people attributes, we found that platform experience, application experience, and language and tool experience have a significant impact on software quality. Regarding capability, we found that programmer capability has a partial impact on software quality, whereas analyst capability has no impact. Concerning the process attributes, we found no quality difference between projects developed under CMMI and those that are not; regarding CMMI levels, there is a difference in software quality, particularly between CMMI level 1 and CMMI level 3. Comparing process types, we found that hybrid projects are of better quality than waterfall projects. Process maturity as defined by the SEI CMM has a partial impact on software quality. Concerning exploratory testing, we found that its weaknesses induce testing technical debt; a process combining it with scripted testing is therefore defined in an attempt to reduce that technical debt.

    The findings are useful for both researchers and practitioners to evaluate their projects.
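
    As a point of reference, the thesis's quality measure, defect density, is defects normalized by size, and the reported group comparisons are the kind a non-parametric test supports. A minimal sketch in Python (all figures below are fabricated for illustration; the thesis does not publish this code):

        from scipy.stats import mannwhitneyu  # non-parametric two-group comparison

        def defect_density(defects: int, size_loc: int) -> float:
            """Defects per thousand lines of code (defects/KLOC)."""
            return defects / (size_loc / 1000.0)

        print(defect_density(defects=120, size_loc=85_000))  # 1.41 defects/KLOC

        # Hypothetical defect densities for two groups of projects,
        # e.g. open source vs. closed source.
        open_source = [1.4, 2.1, 0.9, 1.7, 2.5]
        closed_source = [0.6, 1.1, 0.8, 1.3, 0.7]
        stat, p = mannwhitneyu(open_source, closed_source, alternative="two-sided")
        print(f"U={stat:.1f}, p={p:.3f}")  # a small p suggests a real quality gap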

    Implementation of Software Process Improvement Through TSPi in Very Small Enterprises

    This article describes an experience in a very small enterprise related to improving software quality in terms of test and process productivity. A customized process, based on TSPi and derived from the current organizational process, was defined, and the team was trained in it. The pilot project had schedule and budget constraints. The process began by gathering historical data from previous projects in order to build a measurement repository. The project was then launched and metrics were collected. Finally, the results were analyzed and the improvements were verified.
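
    A minimal sketch of the kind of measurement repository the pilot could seed from historical projects, under assumed definitions of the two productivity measures (test cases per test hour; LOC per person-hour). All figures are hypothetical, not taken from the article:

        # Historical projects used to seed the measurement repository.
        historical = [
            {"test_cases": 240, "test_hours": 60, "loc": 9_000, "effort_hours": 450},
            {"test_cases": 180, "test_hours": 52, "loc": 7_500, "effort_hours": 400},
        ]

        def baselines(projects):
            """Return (test productivity, process productivity) baselines."""
            test_prod = sum(p["test_cases"] for p in projects) / sum(p["test_hours"] for p in projects)
            proc_prod = sum(p["loc"] for p in projects) / sum(p["effort_hours"] for p in projects)
            return test_prod, proc_prod

        tp, pp = baselines(historical)
        print(f"test productivity baseline: {tp:.2f} test cases/hour")
        print(f"process productivity baseline: {pp:.2f} LOC/person-hour")
        # The pilot project's own metrics are then compared against these
        # baselines to verify whether the customized process improved them.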

    Using the ISO/IEC 9126 Product Quality Model to Classify Defects: A Controlled Experiment

    Background: Existing software defect classification schemes support multiple tasks, such as root cause analysis and process improvement guidance. However, existing schemes do not assist in assigning defects to a broad range of high-level software goals, such as software quality characteristics like functionality, maintainability, and usability. Aim: We investigate whether a classification based on the ISO/IEC 9126 software product quality model is reliable and useful for linking defects to the quality aspects they impact. Method: Six subjects, divided into two groups with respect to their expertise, classified 78 defects from an industrial web application using the ISO/IEC 9126 main quality characteristics and sub-characteristics, together with a set of proposed extended guidelines. Results: The ISO/IEC 9126 model is reasonably reliable when used to classify defects, even using incomplete defect reports. Reliability and variability are better for the six high-level main characteristics of the model than for the 22 sub-characteristics. Conclusions: The ISO/IEC 9126 software quality model provides a solid foundation for defect classification. Based on the follow-up qualitative analysis performed, we also recommend using more complete defect reports and tailoring the quality model to the context of use.
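
    A minimal sketch of how the reliability of such a classification can be checked between raters using Cohen's kappa (the paper's exact analysis may differ; the ratings below are fabricated for illustration):

        from sklearn.metrics import cohen_kappa_score

        # The six ISO/IEC 9126 main quality characteristics.
        CHARACTERISTICS = ["functionality", "reliability", "usability",
                           "efficiency", "maintainability", "portability"]

        # Fabricated classifications of the same five defects by two raters.
        rater_1 = ["functionality", "usability", "reliability",
                   "functionality", "usability"]
        rater_2 = ["functionality", "usability", "usability",
                   "functionality", "usability"]

        kappa = cohen_kappa_score(rater_1, rater_2, labels=CHARACTERISTICS)
        print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect, ~0 = chance-level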

    Towards Automated Performance Bug Identification in Python

    Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission-critical applications, finance, and real-time systems. In this work we focused on early detection of performance bugs; our software under study was a real-time system used in the advertisement/marketing domain. Goal: Find a simple and easy-to-implement solution for predicting performance bugs. Method: We built several models using four machine learning methods commonly used for defect prediction: C4.5 decision trees, Naïve Bayes, Bayesian networks, and logistic regression. Results: Our empirical results show that a C4.5 model, using lines of code changed, file age, and file size as explanatory variables, can be used to predict performance bugs (recall = 0.73, accuracy = 0.85, precision = 0.96). We show that reducing the number of changes delivered in a commit can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for the prediction of functional bugs can be used for the prediction of performance bugs.
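
    A minimal sketch of the modeling setup, with scikit-learn's CART-based DecisionTreeClassifier standing in for C4.5 (which scikit-learn does not provide); the training and test data are fabricated for illustration:

        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import accuracy_score, precision_score, recall_score

        # Features per commit/file: [lines of code changed, file age (days),
        # file size (LOC)]; label 1 = a performance bug was injected.
        X_train = [[250, 30, 1200], [10, 400, 300], [500, 15, 2500], [5, 900, 150]]
        y_train = [1, 0, 1, 0]

        # CART stands in for C4.5 here; scikit-learn does not ship C4.5.
        model = DecisionTreeClassifier(max_depth=3, random_state=0)
        model.fit(X_train, y_train)

        X_test, y_test = [[300, 20, 1800], [8, 700, 200]], [1, 0]
        y_pred = model.predict(X_test)
        print("recall:", recall_score(y_test, y_pred),
              "accuracy:", accuracy_score(y_test, y_pred),
              "precision:", precision_score(y_test, y_pred))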

    Systems Engineering Leading Indicators Guide, Version 1.0

    The Systems Engineering Leading Indicators guide reflects the initial subset of possible indicators considered to be of the highest priority for evaluating effectiveness before the fact. A leading indicator is a measure for evaluating how effectively a specific activity is applied on a program, in a manner that provides information about impacts that are likely to affect the system performance objectives. A leading indicator may be an individual measure, or a collection of measures, that is predictive of future system performance before that performance is realized. Leading indicators aid leadership in delivering value to customers and end users, while assisting in taking interventions and actions to avoid rework and wasted effort. The guide was initiated as a result of the June 2004 Air Force/LAI Workshop on Systems Engineering for Robustness and supports systems engineering revitalization. Over several years, a group of industry, government, and academic stakeholders worked to define and validate a set of thirteen indicators for evaluating the effectiveness of systems engineering on a program. Released as Version 1.0 in June 2007, the leading indicators provide predictive information for making informed decisions and, where necessary, taking preventative or corrective action during the program in a proactive manner. While the leading indicators appear similar to existing measures and often use the same base information, the difference lies in how the information is gathered, evaluated, interpreted, and used to provide a forward-looking perspective.

    Systems Engineering Leading Indicators Guide, Version 2.0

    The Systems Engineering Leading Indicators Guide editorial team is pleased to announce the release of Version 2.0. Version 2.0 supersedes Version 1.0, which was released in July 2007 and was the result of a project initiated by the Lean Advancement Initiative (LAI) at MIT in cooperation with the International Council on Systems Engineering (INCOSE), Practical Software and Systems Measurement (PSM), and the Systems Engineering Advancement Research Initiative (SEAri) at MIT. A leading indicator is a measure for evaluating how a specific project activity is likely to affect system performance objectives. A leading indicator may be an individual measure, or a collection of measures and associated analysis, that is predictive of future systems engineering performance. Systems engineering performance itself could be an indicator of future project execution and system performance. Leading indicators aid leadership in delivering value to customers and end users and help identify interventions and actions to avoid rework and wasted effort. Conventional measures provide status and historical information; leading indicators draw on trend information to allow for predictive analysis. By analyzing trends, the outcomes of certain activities can be forecast, and trends yield insight into both the entity being measured and potential impacts on other entities. This provides leaders with the data they need to make informed decisions and, where necessary, take preventative or corrective action during the program in a proactive manner. The Version 2.0 guide adds five new leading indicators to the previous 13, for a new total of 18 indicators. The guide addresses feedback from users of the previous version, as well as lessons learned from implementation and industry workshops. The document format has been improved for usability, and several new appendices provide application information and techniques for determining correlations of indicators. Tailoring of the guide for effective use is encouraged. Additional collaborating organizations involved in Version 2.0 include the Naval Air Systems Command (NAVAIR), the US Department of Defense Systems Engineering Research Center (SERC), and the National Defense Industrial Association (NDIA) Systems Engineering Division (SED). Many leading measurement and systems engineering experts from government, industry, and academia volunteered their time to work on this initiative.
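
    A minimal sketch of the trend-based idea behind a leading indicator: fit a trend to periodic measurements and project it forward. The indicator and figures are hypothetical, not taken from the guide:

        import numpy as np

        # Monthly measurements of a hypothetical indicator base measure:
        # requirements churn (requirement changes per month).
        months = np.array([1, 2, 3, 4, 5, 6])
        churn = np.array([4, 6, 9, 13, 18, 25])

        slope, intercept = np.polyfit(months, churn, deg=1)  # linear trend
        forecast_month = 9
        projected = slope * forecast_month + intercept
        print(f"projected churn at month {forecast_month}: {projected:.0f} changes")
        # A steep upward projection is the "leading" signal: it flags likely
        # rework before system performance objectives are actually missed.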

    Are Delayed Issues Harder to Resolve? Revisiting Cost-to-Fix of Defects throughout the Lifecycle

    Many practitioners and academics believe in a delayed issue effect (DIE): the longer an issue lingers in the system, the more effort it requires to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world between 2006 and 2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e., the effort to resolve issues in a later phase was not consistently or substantially greater than when issues were resolved soon after their introduction. This paper documents the study and explores reasons for the mismatch between this common rule of thumb and the empirical data. In summary, DIE is not some constant across all projects; rather, it might be a historical relic that occurs intermittently only in certain kinds of projects. This is a significant result, since it predicts that new development processes promising to retire more issues faster will not have a guaranteed return on investment (depending on the context in which they are applied), and that a long-held truth in software engineering should not be considered a global truism.
    Comment: 31 pages. Accepted with minor revisions to Journal of Empirical Software Engineering. Keywords: software economics, phase delay, cost to fix
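
    A minimal sketch of the paper's core question, under an assumed issue-record format: compare the effort of issues resolved in the phase where they were introduced against issues resolved in later phases. All records are fabricated:

        from statistics import median

        PHASES = ["requirements", "design", "coding", "testing"]

        # Fabricated issue records: (phase introduced, phase resolved, effort in hours).
        issues = [
            ("requirements", "requirements", 2.0),
            ("requirements", "testing", 2.5),
            ("design", "design", 1.5),
            ("design", "testing", 1.8),
        ]

        def delay(introduced, resolved):
            """Number of phases between introduction and resolution."""
            return PHASES.index(resolved) - PHASES.index(introduced)

        same_phase = [e for i, r, e in issues if delay(i, r) == 0]
        delayed = [e for i, r, e in issues if delay(i, r) > 0]

        # DIE predicts the delayed median to be much larger; the paper's
        # data showed no consistent or substantial difference.
        print(f"median effort (same phase): {median(same_phase):.1f}h")
        print(f"median effort (delayed):    {median(delayed):.1f}h")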
