Implications of Ceiling Effects in Defect Predictors

By Tim Menzies, Burak Turhan, Ayse Bener, Gregory Gay, Bojan Cukic and Yue Jiang

Abstract

Context: There are many methods that take static code features as input and output a predictor for faulty code modules. These data mining methods have hit a "performance ceiling", i.e., some inherent upper bound on the amount of information that static code features can offer when identifying modules that contain faults.

Objective: We seek an explanation for this ceiling effect. Perhaps static code features have "limited information content", i.e., their information can be quickly and completely discovered by even simple learners.

Method: An initial literature review documents the ceiling effect in other work. Next, using three sub-sampling techniques (under-, over-, and micro-sampling), we look for the lower useful bound on the number of training instances.

Results: Using micro-sampling, we find that as few as 50 instances yield as much information as larger training sets.

Conclusions: We have found much evidence for the limited information hypothesis. Further progress in learning defect predictors may not come from better algorithms; rather, we need to improve the information content of the training data, perhaps with case-based reasoning methods.

Categories and Subject Descriptors: I.5 [Learning]: Machine learning; D.2.8 [Software Engineering]: Product metrics
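The micro-sampling result above can be checked with a short experiment: repeatedly train a simple learner on tiny, balanced samples of increasing size and watch whether performance plateaus. Below is a minimal sketch, assuming scikit-learn's GaussianNB as the "simple learner" and a generic feature matrix X with labels y (1 = defective); the function names, sample sizes, and recall metric are illustrative choices, not the paper's exact experimental rig.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import recall_score

def micro_sample_indices(y, m, rng):
    """Indices of m defective and m non-defective modules, drawn at
    random (assumes the data contains at least m of each class)."""
    defect = rng.choice(np.flatnonzero(y == 1), size=m, replace=False)
    clean = rng.choice(np.flatnonzero(y == 0), size=m, replace=False)
    return np.concatenate([defect, clean])

def micro_sampling_curve(X, y, sizes=(25, 50, 100, 200), trials=20, seed=0):
    """For each training size m, fit a simple learner on a tiny
    balanced sample and test on the held-out remainder. A curve that
    is flat beyond small m suggests limited information content."""
    rng = np.random.default_rng(seed)
    results = {}
    for m in sizes:
        scores = []
        for _ in range(trials):
            train = micro_sample_indices(y, m, rng)
            test = np.setdiff1d(np.arange(len(y)), train)
            clf = GaussianNB().fit(X[train], y[train])
            scores.append(recall_score(y[test], clf.predict(X[test])))
        results[m] = float(np.mean(scores))
    return results
```

If mean recall at m = 25 or 50 roughly matches recall at much larger m, that is the plateau the paper attributes to limited information content rather than to any weakness of the learner.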

Topics: under-sampling, over-sampling, defect prediction
Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.353.114
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://menzies.us/pdf/08ceilin... (external link)