Towards Developing and Analysing Metric-Based Software Defect Severity Prediction Model
In a critical software system, testers must spend an enormous amount of
time and effort maintaining the software because defects occur
continuously. Some of these defects are severe and may adversely affect
the software. To reduce the tester's time and effort, many machine learning
models have been proposed in the literature, which use the documented defect
reports to automatically predict the severity of the defective software
modules. In contrast to the traditional approaches, in this work we propose a
metric-based software defect severity prediction (SDSP) model that uses a
self-training semi-supervised learning approach to classify the severity of the
defective software modules. The approach is constructed on a mixture of
unlabelled and labelled defect severity data. The self-training works on the
basis of a decision tree classifier to assign the pseudo-class labels to the
unlabelled instances. The predictions are promising since the self-training
successfully assigns the suitable class labels to the unlabelled instances.
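The self-training loop described above can be sketched as follows. The paper's base learner is a decision tree over software metrics; to keep this sketch dependency-free, a nearest-centroid classifier on a single hypothetical metric stands in for it, and all data values below are invented for illustration.

```python
# Minimal self-training sketch: confident unlabelled instances receive
# pseudo-labels each round and join the labelled pool, as in the paper's
# approach (which uses a decision tree as the base classifier).

def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, margin=2.0):
    """labeled: list of (metric, severity); unlabeled: list of metric values."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    changed = True
    while changed and unlabeled:
        changed = False
        c0 = centroid([x for x, y in labeled if y == 0])
        c1 = centroid([x for x, y in labeled if y == 1])
        rest = []
        for x in unlabeled:
            d0, d1 = abs(x - c0), abs(x - c1)
            if abs(d0 - d1) >= margin:  # one class is clearly closer
                labeled.append((x, 0 if d0 < d1 else 1))  # pseudo-label
                changed = True
            else:
                rest.append(x)  # too ambiguous, retry next round
        unlabeled = rest
    return labeled, unlabeled

labeled = [(1.0, 0), (2.0, 0), (9.0, 1), (10.0, 1)]  # severity 0/1
unlabeled = [1.5, 9.5, 5.5]
final, leftover = self_train(labeled, unlabeled)
```

Note that the ambiguous instance (5.5) is never pseudo-labelled; holding back low-confidence instances is what keeps self-training from reinforcing its own mistakes.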
While numerous research studies have proposed prediction approaches and
examined the methodological aspects of defect severity prediction models,
the gap in estimating project attributes from the prediction model remains
unresolved. To bridge this gap, we propose five project-specific measures,
namely the Risk-Factor (RF), the Percent of Saved Budget (PSB), the Loss in
the Saved Budget (LSB), the Remaining Service Time (RST), and the Gratuitous
Service Time (GST), to capture project outcomes from the predictions. Like
the traditional measures, these are calculated from the observed confusion
matrix and are used to analyse the impact that the prediction model has on
the software project.
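All five measures are derived from the observed confusion matrix; their exact formulas are defined in the paper and not reproduced here. This sketch only computes the four cells (TP/FP/FN/TN) that such measures start from, on invented severity labels.

```python
# Confusion-matrix cells from actual vs. predicted labels. Measures such
# as RF or PSB would be computed from these counts (formulas per paper).

def confusion_counts(actual, predicted, positive=1):
    pairs = list(zip(actual, predicted))
    tp = sum(a == positive and p == positive for a, p in pairs)
    fp = sum(a != positive and p == positive for a, p in pairs)
    fn = sum(a == positive and p != positive for a, p in pairs)
    tn = sum(a != positive and p != positive for a, p in pairs)
    return tp, fp, fn, tn

# Hypothetical severity labels (1 = severe) for six modules.
actual = [1, 0, 1, 1, 0, 0]
predicted = [1, 0, 0, 1, 1, 0]
tp, fp, fn, tn = confusion_counts(actual, predicted)
```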
An Empirical Study of a Defect-Free Software Quality Measurement Framework
Testing is a strategic step in determining the quality of the generated software so that it is accepted by the end user. During testing, errors are found that may pose a risk of defects in the software. This study establishes a measurement framework to analyse software test metrics for predicting defect risk, consisting of defect density, defect removal, and lines of code. The analysis uses a data set of 53 module samples and a statistical approach based on correlation analysis techniques. Of the three proposed hypotheses, only two were accepted, showing that defect density and defect removal are highly significant for software quality measurement
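The correlation analysis the study applies can be sketched as Pearson's r between a metric and a quality score. The 53-module data set is not reproduced here, so the values below are invented; only the technique is illustrated.

```python
import math

# Pearson correlation coefficient between a hypothetical defect-density
# metric and a quality score (both series are invented for illustration).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

defect_density = [0.8, 1.1, 2.5, 3.0, 4.2]
quality_score = [9.0, 8.5, 6.0, 5.5, 3.0]
r = pearson(defect_density, quality_score)  # strongly negative here
```

A significance test on r (e.g. via a t-statistic) would then decide whether each hypothesis is accepted, which is how the study arrives at 2 of its 3 hypotheses.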
Too Trivial To Test? An Inverse View on Defect Prediction to Identify Methods with Low Fault Risk
Background. Test resources are usually limited and therefore it is often not
possible to completely test an application before a release. To cope with the
problem of scarce resources, development teams can apply defect prediction to
identify fault-prone code regions. However, defect prediction tends to have
low precision in cross-project prediction scenarios.
Aims. We take an inverse view on defect prediction and aim to identify
methods that can be deferred when testing because they contain hardly any
faults due to their code being "trivial". We expect that characteristics of
such methods might be project-independent, so that our approach could improve
cross-project predictions.
Method. We compute code metrics and apply association rule mining to create
rules for identifying methods with low fault risk. We conduct an empirical
study to assess our approach with six Java open-source projects containing
precise fault data at the method level.
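The core quantities in association rule mining are a rule's support and confidence. This sketch evaluates a hypothetical rule "short method with no branching → low fault risk"; the feature names and method data are invented stand-ins for the code metrics the study mines from its six Java projects.

```python
# Support and confidence for association rules over sets of features.

def support(transactions, itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

# Each method is a set of (hypothetical) metric-derived features.
methods = [
    {"short", "no_branches", "low_risk"},
    {"short", "no_branches", "low_risk"},
    {"short", "branches", "low_risk"},
    {"long", "branches"},
]
rule_conf = confidence(methods, {"short", "no_branches"}, {"low_risk"})
```

Rules whose confidence for "low_risk" is high enough would mark methods as deferrable in testing, which is the inverse-prediction idea the paper evaluates.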
Results. Our results show that inverse defect prediction can identify approx.
32-44% of a project's methods as having a low fault risk; on average, these
methods are about six times less likely to contain a fault than other methods. In
cross-project predictions with larger, more diversified training sets,
identified methods are even eleven times less likely to contain a fault.
Conclusions. Inverse defect prediction supports the efficient allocation of
test resources by identifying methods that can be treated with less priority in
testing activities and is well applicable in cross-project prediction
scenarios. Comment: Submitted to PeerJ C