40 research outputs found
An Empirical Study of Classifier Combination on Cross-Project Defect Prediction
Abstract—To help developers better allocate testing and debugging efforts, many software defect prediction techniques have been proposed in the literature. These techniques can be used to predict which classes are more likely to be buggy based on the past history of buggy classes. They work well as long as a sufficient amount of data is available to train a prediction model. However, there is rarely enough training data for new software projects. To deal with this problem, cross-project defect prediction, which transfers a prediction model trained on data from one project to another, has been proposed and is regarded as a new challenge for defect prediction. So far, only a few cross-project defect prediction techniques have been proposed. To advance the state of the art, in this work we investigate 7 composite algorithms, which integrate multiple machine learning classifiers, to improve cross-project defect prediction. To evaluate the performance of the composite algorithms, we perform experiments on 10 open source software systems from the PROMISE repository, which contain a total of 5,305 instances labeled as defective or clean. We compare the composite algorithms with CODEP-Logistic, the latest cross-project defect prediction algorithm proposed by Panichella et al. [1], in terms of two standard evaluation metrics: cost effectiveness and F-measure. Our experimental results show that several algorithms outperform CODEP-Logistic: Max performs best in terms of F-measure, and its average F-measure outperforms that of CODEP-Logistic by 36.88%. Bagging-J48 performs best in terms of cost effectiveness, and its average cost effectiveness outperforms that of CODEP-Logistic by 15.34%.
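The "Max" composite named in the abstract combines base classifiers by taking, for each instance, the maximum defect probability any base classifier assigns. A minimal sketch of that combination rule, using illustrative stand-in scores rather than real trained models (the function names and the 0.5 threshold are assumptions, not from the paper):

```python
# Sketch of the "Max" classifier-combination strategy: each base classifier
# outputs a probability that an instance is defective, and the composite
# score is the element-wise maximum across classifiers.

def max_combine(prob_rows):
    """prob_rows: one probability list per base classifier, each of length
    n_instances. Returns the per-instance maximum (the composite score)."""
    return [max(col) for col in zip(*prob_rows)]

def classify(scores, threshold=0.5):
    """Label an instance defective when its composite score reaches the
    threshold (0.5 is an assumed default, not taken from the paper)."""
    return [s >= threshold for s in scores]

# Toy scores from three hypothetical base classifiers over four instances.
probs = [
    [0.2, 0.7, 0.4, 0.1],  # e.g. logistic regression
    [0.3, 0.6, 0.8, 0.2],  # e.g. decision tree
    [0.1, 0.9, 0.3, 0.4],  # e.g. naive Bayes
]
scores = max_combine(probs)  # [0.3, 0.9, 0.8, 0.4]
labels = classify(scores)    # [False, True, True, False]
```

Taking the maximum makes the composite flag an instance as defective whenever any single base classifier is confident, which trades precision for recall relative to averaging-style combinations.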
High impact bug report identification with imbalanced learning strategies
Supplementary code and data available from GitHub:
https://github.com/goddding/JCST
BOAT: An Experimental Platform for Researchers to Comparatively and Reproducibly Evaluate Bug Localization Techniques
BOAT is available at http://www.vlis.zju.edu.cn/blp.
Detecting similar repositories on GitHub