
    Search based training data selection for cross project defect prediction

    Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross-project defect prediction (CPDP). On the other hand, data quality is an issue to consider in CPDP. Aim: We aim to utilise the Nearest Neighbor (NN) filter, embedded in a genetic algorithm, to generate evolving training datasets for CPDP while accounting for potential noise in defect labels. Method: We propose a new search-based training data (i.e., instance) selection approach for CPDP, called GIS (Genetic Instance Selection), that searches for solutions optimizing a combined measure of F-Measure and GMean on a validation set generated by the NN-filter. The genetic operations consider the similarities in features and address possible noise in assigned defect labels. We use 13 datasets from the PROMISE repository to compare the performance of GIS with benchmark CPDP methods, namely the NN-filter and naive CPDP, as well as with within-project defect prediction (WPDP). Results: Our results show that GIS is significantly better than the NN-filter in terms of F-Measure (p-value ≪ 0.001, Cohen’s d = 0.697) and GMean (p-value ≪ 0.001, Cohen’s d = 0.946). It also outperforms the naive CPDP approach in terms of F-Measure (p-value ≪ 0.001, Cohen’s d = 0.753) and GMean (p-value ≪ 0.001, Cohen’s d = 0.994). In addition, the performance of our approach is better than that of WPDP, again considering F-Measure (p-value ≪ 0.001, Cohen’s d = 0.227) and GMean (p-value ≪ 0.001, Cohen’s d = 0.595) values. Conclusions: We conclude that search-based instance selection is a promising way to tackle CPDP. In particular, the performance comparison with the within-project scenario encourages further investigation of our approach. However, the performance of GIS relies on high recall at the expense of low precision. Using different optimization goals, e.g., targeting high precision, would be a future direction to investigate.
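
    As a rough illustration of the fitness idea described above (scoring a candidate training subset by a combination of F-Measure and GMean on the NN-filter validation set), a minimal Python sketch follows. The classifier choice (Naive Bayes), the equal weighting w=0.5, and the function names are illustrative assumptions, not the paper's exact configuration.

    ```python
    # Hedged sketch of a GIS-style fitness function: train on a candidate
    # instance subset, score on an NN-filtered validation set. Names, the
    # Naive Bayes learner, and the 50/50 weighting are assumptions.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import f1_score, confusion_matrix

    def g_mean(y_true, y_pred):
        # G-mean = sqrt(recall on defective * recall on non-defective)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        recall_pos = tp / (tp + fn) if (tp + fn) else 0.0
        recall_neg = tn / (tn + fp) if (tn + fp) else 0.0
        return np.sqrt(recall_pos * recall_neg)

    def fitness(cand_X, cand_y, val_X, val_y, w=0.5):
        # Combined objective used to rank candidate training subsets.
        pred = GaussianNB().fit(cand_X, cand_y).predict(val_X)
        return w * f1_score(val_y, pred) + (1 - w) * g_mean(val_y, pred)
    ```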

    Connecting Software Metrics across Versions to Predict Defects

    Accurate software defect prediction could help software practitioners allocate test resources to defect-prone modules effectively and efficiently. In recent decades, much effort has been devoted to building accurate defect prediction models, including developing quality defect predictors and modeling techniques. However, currently widely used defect predictors, such as code metrics and process metrics, cannot adequately describe how software modules change over the course of project evolution, which we believe is important for defect prediction. To address this problem, we propose to use the Historical Version Sequence of Metrics (HVSM) across continuous software versions as defect predictors. Furthermore, we leverage a Recurrent Neural Network (RNN), a popular modeling technique, that takes HVSM as input to build defect prediction models. The experimental results show that, in most cases, the proposed HVSM-based RNN model has significantly better effort-aware ranking effectiveness than the commonly used baseline models.
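
    As a rough illustration of the HVSM idea, the sketch below treats a module's metric vectors from consecutive versions as a sequence and feeds them to an LSTM (a common RNN variant) to produce a defect probability. The layer sizes, number of versions and metrics, and class name are assumptions for illustration; the paper's actual architecture and training setup may differ.

    ```python
    # Illustrative HVSM-style model: an LSTM summarizes the per-version
    # metric history of a module for binary defect prediction.
    import torch
    import torch.nn as nn

    class HvsmRnn(nn.Module):
        def __init__(self, n_metrics=20, hidden=32):
            super().__init__()
            self.rnn = nn.LSTM(n_metrics, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):            # x: (batch, n_versions, n_metrics)
            _, (h, _) = self.rnn(x)      # final hidden state summarizes the history
            return torch.sigmoid(self.head(h[-1])).squeeze(-1)

    model = HvsmRnn()
    dummy = torch.randn(8, 5, 20)        # 8 modules, 5 versions, 20 metrics each
    defect_prob = model(dummy)           # per-module defect probability
    ```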

    A Review Of Training Data Selection In Software Defect Prediction

    Publicly available datasets pose a challenge in selecting suitable data to train a defect prediction model for predicting defects in other projects. Using a cross-project training dataset without careful selection degrades defect prediction performance. Consequently, training data selection is an essential step in developing a defect prediction model. This paper synthesizes the state of the art in training data selection methods published from 2009 to 2019. The existing approaches addressing the training data selection issue fall into three groups: nearest-neighbour, cluster-based, and evolutionary methods. According to the results in the literature, cluster-based methods tend to outperform nearest-neighbour methods. On the other hand, research on evolutionary techniques gives promising results but is still scarce. Therefore, the review concludes that training data selection remains an open area for further investigation. We also present research directions within this area.
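
    The nearest-neighbour family discussed in the review can be sketched briefly: for each target-project instance, keep its k closest cross-project training instances (the NN-filter idea). The value k=10 and the function name below are illustrative assumptions, not values prescribed by the review.

    ```python
    # Minimal NN-filter-style training data selection sketch.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def nn_filter(source_X, target_X, k=10):
        nn = NearestNeighbors(n_neighbors=k).fit(source_X)
        _, idx = nn.kneighbors(target_X)      # k neighbours per target row
        return np.unique(idx.ravel())         # de-duplicated indices into source_X
    ```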

    An exploratory study of search based training data selection for cross project defect prediction

    Context: Search-based approaches are gaining attention in cross-project defect prediction (CPDP). The complexity of such approaches and the existence of various design decisions are important issues to consider. Objective: We aim to investigate factors that can affect the performance of search-based selection (SBS) approaches. We study a genetic instance selection approach (GIS) and present an evaluation of design options for search-based CPDP. Method: Using an exploratory approach, data from different model options are gathered and analyzed through ANOVA tests and effect sizes. Results: Both the feature set and validation dataset selection options show small or insignificant impacts on F-measure and precision, unlike the more strongly affected false positive and true negative rates. The size of the training data does not seem to be related to significant changes in F-measure and precision, and the high variability in performance is discouraging evidence for using larger datasets. The fitness function is one of the major factors impacting performance, with a much larger effect than the choice of validation dataset. Finally, while showing slight impacts, data label changes do not appear to be the top contributor to performance. Conclusions: We conclude that exploratory approaches can be effective for making design decisions when constructing search-based CPDP models. The effect of individually tuned learners, their interaction with other influential parameters, and a more in-depth study of quality-affecting factors guided by label changes are directions to investigate.
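
    The analysis described in the Method section can be sketched as a one-way ANOVA over a performance measure grouped by one design option, followed by an eta-squared effect size. The column names and the choice of eta-squared below are illustrative assumptions; the study's exact statistical setup may differ.

    ```python
    # Hedged sketch: one-way ANOVA over F-measure grouped by a design option
    # (e.g. the fitness function), with eta-squared as the effect size.
    import pandas as pd
    from scipy import stats

    def anova_with_eta_squared(df, factor="fitness_function", response="f_measure"):
        groups = [g[response].values for _, g in df.groupby(factor)]
        f_stat, p_value = stats.f_oneway(*groups)
        grand_mean = df[response].mean()
        ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
        ss_total = ((df[response] - grand_mean) ** 2).sum()
        return f_stat, p_value, ss_between / ss_total   # eta-squared
    ```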