Development and Application of New Quality Model for Software Projects
The IT industry employs a number of models to identify defects in the construction of software projects. In this paper, we present COQUALMO and its limitations, and aim to increase quality without increasing cost and time. The computation time, cost, and effort needed to predict residual defects are very high; this was overcome by developing a new quality model, the software testing defect corrective model (STDCM). The STDCM was used to estimate the number of residual defects remaining in the software product; a few assumptions and the detailed steps of the STDCM are highlighted, and its application to software projects is explored. The implementation of the model is validated using statistical inference, which shows a significant improvement in the quality of the software projects.
An Efficient Cost Estimation Model with Fuzzy Expert System
In this paper we propose a fault-prediction-based, cost-effective analysis of source code: we compute metrics over fault-prone modules and compare them with those of the previous approach, and for each testing approach we compute metrics, pass them to a fuzzy expert system, and compare with the previous version. Our proposed approach gives more efficient results than the conventional approach. The demand for distributed, complex business applications in the enterprise requires fault-free, high-quality application systems, which makes it important in software development to produce quality, fault-free software. It is likewise important to design software that is reliable and easy to maintain, as maintenance involves a considerable amount of human effort, cost, and time during the software life cycle. A software development process performs various activities to minimize faults, such as fault prediction, detection, prevention, and correction. This paper presents a survey of current practices for software fault detection and prevention in software development, and discusses the advantages and limitations of these techniques as they relate to quality product development and maintenance. Until recently, various strategies have been proposed for predicting fault-prone modules on the basis of prediction performance; unfortunately, quality improvement and cost reduction have rarely been assessed. The main motivation here is the improvement of acceptance testing to provide excellent service to customers.
Towards A Software Failure Cost Impact Model for the Customer An Analysis of an Open Source Product
While the financial consequences of software errors on the developer's side have been explored extensively, the costs arising for the end user have been largely neglected. One reason is the difficulty of linking errors in the code with the emergent failure behavior of the software. The problem becomes even more difficult when trying to predict failure probabilities based on models or code metrics. In this paper we take a first step towards a cost prediction model by exploring ways of modeling the financial consequences of already identified software failures. Firefox, a well-known open source product, is used as the test subject. Historically identified failures are modeled using fault trees, and to identify costs, usage profiles are employed to depict the interaction with the system. The presented approach demonstrates that failure costs for an organization using a specific piece of software can be modeled by establishing a relationship between user behavior, software failures, and costs. As future work, an extension with software error prediction techniques as well as an empirical validation of the model is planned.
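The cost-modeling idea sketched above, combining usage profiles with failure probabilities and per-failure costs, can be illustrated with a minimal sketch. This is a hedged illustration of the general expected-cost calculation, not the paper's actual model; the failure modes, rates, and cost figures below are invented for the example.

```python
# Sketch of an expected-failure-cost calculation:
#   expected cost = sum over failure modes of
#     (uses/day of the triggering action) x P(failure | use) x (cost per failure)
# All failure modes, rates, and costs are hypothetical.

failure_modes = [
    # (name, uses per day, failure probability per use, cost per failure in $)
    ("crash_on_startup", 4.0, 0.001, 12.0),
    ("render_error", 50.0, 0.0002, 1.5),
    ("data_loss", 10.0, 0.00005, 400.0),
]

def expected_daily_cost(modes):
    """Sum each failure mode's expected daily cost contribution."""
    return sum(uses * p_fail * cost for _, uses, p_fail, cost in modes)

print(f"expected cost per user-day: ${expected_daily_cost(failure_modes):.4f}")
```

In a fault-tree setting, the per-use failure probability of each mode would itself be derived from the probabilities of the basic events in the tree; here it is simply given.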
Improvements to Test Case Prioritisation considering Efficiency and Effectiveness on Real Faults
Despite the best efforts of programmers and component manufacturers, software does not always work perfectly. In order to guard against this, developers write test suites that execute parts of the code and compare the expected result with the actual result. Over time, test suites become expensive to run for every change, which has led to optimisation techniques such as test case prioritisation.
Test case prioritisation reorders test cases within the test suite with the goal of revealing faults as soon as possible. It has received considerable research attention, which indicates that prioritised test suites can reveal faults faster; however, due to a lack of real fault repositories available for research, prior evaluations have often been conducted on artificial faults. This thesis aims to investigate whether the use of artificial faults represents a threat to the validity of previous studies, and proposes new strategies for test case prioritisation that increase its effectiveness on real faults.
This thesis conducts an empirical evaluation of existing test case prioritisation strategies on real and artificial faults, which establishes that artificial faults provide unreliable results for real faults. The study found that there are four occasions on which a strategy for test case prioritisation would be considered no better than the baseline when using one fault type, but would be considered a significant improvement over the baseline when using the other. Moreover, this evaluation reveals that existing test case prioritisation strategies perform poorly on real faults, with no strategies significantly outperforming the baseline.
Given the need to improve test case prioritisation strategies for real faults, this thesis proceeds to consider other techniques that have been shown to be effective on real faults. One such technique is defect prediction, which estimates the likelihood that a class contains a fault. This thesis proposes a test case prioritisation strategy, called G-Clef, that leverages defect prediction estimates to reorder test suites. While the evaluation of G-Clef indicates that it outperforms existing test case prioritisation strategies, the average predicted location of a faulty class is 13% of all classes in a system, which shows potential for improvement. Finally, this thesis conducts an investigative study of whether sentiments expressed in commit messages could be used to improve the defect prediction element of G-Clef.
Throughout the course of this PhD, I have created Kanonizo, an open-source tool for performing test case prioritisation on Java programs. All of the experiments and strategies used in this thesis were implemented in Kanonizo.
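The reordering idea behind test case prioritisation can be sketched with the classic "additional greedy" heuristic, which repeatedly selects the test that covers the most not-yet-covered code elements. This is a generic illustration under assumed inputs, not Kanonizo's or G-Clef's implementation; the test names and coverage sets are hypothetical.

```python
# "Additional greedy" test case prioritisation: repeatedly pick the test
# adding the most new coverage. Coverage data is hypothetical.

def additional_greedy(coverage):
    """Order tests so uncovered elements are covered as early as possible."""
    remaining = dict(coverage)  # test name -> set of covered elements
    covered, order = set(), []
    while remaining:
        # Choose the test contributing the most new elements
        # (ties broken by test name, taking the lexicographically larger).
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), t))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {
    "t1": {"m1", "m2"},
    "t2": {"m2", "m3", "m4"},
    "t3": {"m5"},
}
print(additional_greedy(tests))  # t2 first: it adds the most new coverage
```

A defect-prediction-informed strategy in the spirit of G-Clef would instead weight elements by their predicted faultiness rather than treating all coverage equally.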
HEDP: A Method for Early Forecasting Software Defects based on Human Error Mechanisms
As the primary cause of software defects, human error is the key to understanding, and perhaps to predicting and avoiding them. Little research has been done to predict defects on the basis of the cognitive errors that cause them. This paper proposes an approach to predicting software defects, so that they may be more easily avoided and/or removed, through knowledge about the cognitive mechanisms of human errors. Our theory is that the main process behind a software defect is that an error-prone scenario triggers human error modes, which psychologists have observed to recur across diverse activities. Software defects can then be predicted by identifying such scenarios, guided by this knowledge of typical error modes. Compared to current “defect prediction models” that provide a relative likelihood that a program module may contain defects, the proposed idea emphasizes predicting the exact location and form of a possible defect. We conducted two case studies to demonstrate and validate this approach, with 55 programmers in a programming competition and 5 analysts serving as the users of the approach. We found it impressive that the approach was able to predict, at the requirement phase, the exact locations and forms of 7 out of the 22 (31.8%) specific types of defects that were found in the code. The defects predicted tended to be common defects: their occurrences constituted 75.7% of the total number of defects in the 55 developed programs; each of them was introduced by at least two persons. The fraction of the defects introduced by a programmer that were predicted was on average (over all programmers) 75%. Furthermore, these predicted defects were highly persistent through the debugging process. If the prediction had been used to successfully prevent these defects, this could have saved 46.2% of the debugging iterations. 
This capability of forecasting the exact locations and forms of possible defects at the early phases of software development suggests the approach could bring substantial benefits to defect prevention and early detection.
HEDF: A Method for Early Forecasting Software Defects Based on Human Error Mechanisms
As the primary cause of software defects, human error is the key to understanding, and perhaps to forecasting and avoiding, defects. Little research has been done to forecast defects on the basis of the cognitive errors that cause them. The existing “defect prediction” models are applied to code once it has been produced; therefore, their “predictions” have little bearing on preventing the defects. This paper proposes an approach, “Human-Error-based Defect Forecast” (HEDF), to forecasting the exact defects at early stages of software development, before the code is produced, through knowledge about the cognitive mechanisms that cause developers’ errors. The approach is based on a model of the human error mechanisms underlying software defects: a defect is caused by an error-prone scenario triggering human error modes, which psychologists have observed to recur across diverse activities. Software defects can then be forecast by identifying such error-prone scenarios in the requirements and/or design documents. We assessed this approach empirically, with 55 programmers in a programming competition and four representative analysts serving as the users of the approach. Impressively, the approach was able to forecast, at the requirement phase, 75.7% of the defects later committed by all of the programmers. When considering just the defect forms, which may manifest as distinct defects even in the same program, the proposed method predicted 31.8% of them. The approach substantially improved the defect forecasting performance of analysts of various levels of expertise, with a minimum improvement of 100% compared to forecasts made without the approach. If the forecast had been used to prevent the defects, it could have saved an estimated 46.2% of the debugging effort and increased the fraction of programmers delivering an acceptable program by 32.6%.
The observed performance of HEDF in forecasting, early at the requirement stage, the exact forms and locations of defects that developers may later introduce into code makes it a promising candidate for defect prevention, worthy of further study.
Availability estimation and management for complex processing systems
“Availability” is the term used in asset-intensive industries such as petrochemicals and hydrocarbon processing to describe the readiness of equipment, systems, or plants to perform their designed functions. It is a measure of a facility’s capability to meet targeted production in a safe working environment. Availability is also vital because it encompasses reliability and maintainability, allowing engineers to manage and operate facilities by focusing on a single performance indicator. These benefits make availability a highly demanded and desirable area of interest and research for both industry and academia.
In this dissertation, new models, approaches, and algorithms are explored to estimate and manage the availability of complex hydrocarbon processing systems. The risk of equipment failure and its effect on availability is vital in the hydrocarbon industry, and is also explored in this research. The importance of availability has encouraged companies to invest effort and resources in developing novel techniques for system availability enhancement. Most work in this area focuses on individual equipment rather than facility- or system-level availability assessment and management. This research focuses on developing new, systematic methods to estimate system availability. Its main focus areas are availability estimation and management through physical asset management, risk-based availability estimation strategies, availability and safety using a failure assessment framework, and availability enhancement using early equipment fault detection and maintenance scheduling optimization.
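The basic quantities behind availability estimation can be illustrated with the standard steady-state definition, A = MTBF / (MTBF + MTTR), together with the series-system rule that every unit must be up for the system to be up. This is a generic sketch of those textbook formulas, not the dissertation's method; the unit figures below are illustrative only.

```python
# Steady-state availability of a unit: A = MTBF / (MTBF + MTTR),
# where MTBF is mean time between failures and MTTR is mean time to repair.
# For units in series (all must work), system availability is the product
# of the unit availabilities. All figures below are hypothetical.

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability of a single unit."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series_availability(units):
    """Availability of a series system: every unit must be up."""
    a = 1.0
    for mtbf, mttr in units:
        a *= availability(mtbf, mttr)
    return a

# A hypothetical three-unit processing train: (MTBF, MTTR) in hours.
train = [(2000, 24), (1500, 48), (5000, 12)]
print(f"system availability: {series_availability(train):.4f}")
```

Note that the series product makes system availability lower than that of its weakest unit, which is why facility-level assessment can differ sharply from equipment-level figures.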
Revisiting supervised and unsupervised models for effort-aware just-in-time defect prediction
Data available at https://doi.org/10.5281/zenodo.1432582
Experiences and Results from Initiating Field Defect Prediction and Product Test Prioritization Efforts at ABB Inc.
Quantitatively-based risk management can reduce the risks associated with field defects for both software producers and software consumers. In this paper, we report experiences and results from initiating risk-management activities at a large systems development organization. The initiated activities aim to improve product testing (system/integration testing), to improve maintenance resource allocation, and to plan for future process improvements. The experiences we report address practical issues not commonly addressed in research studies: how to select an appropriate modeling method for product testing prioritization and process improvement planning, how to evaluate accuracy of predictions across multiple releases in time, and how to conduct analysis with incomplete information. In addition, we report initial empirical results for two systems with 13 and 15 releases. We present prioritization of configurations to guide product testing, field defect predictions within the first year of deployment to aid maintenance resource allocation, and important predictors across both systems to guide process improvement planning. Our results and experiences are steps towards quantitatively-based risk management.