
    Predicting the Generalization Performance of Cross Validatory Model Selection Criteria

    We conduct an average-case analysis of the generalization error rate of holdout testing and n-fold cross-validation "wrappers" for model selection. Unlike previous approaches, we do not rely on worst-case bounds that hold for all possible learning problems. Instead, we study the behavior of a learning algorithm with a cross-validation wrapper on a given problem, taking into account properties of the problem that can be estimated from the sample. The price of this problem-specific treatment (and of the efficiency of our solution) is that we must make some approximations. Experiments show that our analysis nevertheless predicts the behavior of cross-validation wrappers fairly accurately.
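    To make the object of study concrete: a cross-validation "wrapper" in the sense above takes a learning algorithm and a set of candidate models, estimates each candidate's generalization error by averaging held-out error over n folds, and selects the candidate with the lowest estimate. The sketch below is a minimal, self-contained illustration of such a wrapper (not the paper's analysis); the candidate models, fold scheme, and synthetic data are all illustrative assumptions.

    ```python
    import random

    def fit_poly(xs, ys, degree):
        """Fit a degree-0 (constant) or degree-1 (simple linear) model
        by closed-form least squares; returns a prediction function."""
        if degree == 0:
            m = sum(ys) / len(ys)
            return lambda x: m
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        intercept = my - slope * mx
        return lambda x: intercept + slope * x

    def cv_error(xs, ys, degree, n_folds):
        """Mean squared error on held-out points, averaged over n folds."""
        n = len(xs)
        sq_errs = []
        for fold in range(n_folds):
            test_idx = set(range(fold, n, n_folds))
            train_x = [xs[i] for i in range(n) if i not in test_idx]
            train_y = [ys[i] for i in range(n) if i not in test_idx]
            model = fit_poly(train_x, train_y, degree)
            sq_errs += [(model(xs[i]) - ys[i]) ** 2 for i in test_idx]
        return sum(sq_errs) / len(sq_errs)

    def select_model(xs, ys, degrees, n_folds=5):
        """Cross-validation wrapper: pick the candidate with lowest CV error."""
        return min(degrees, key=lambda d: cv_error(xs, ys, d, n_folds))

    # Illustrative data: a linear trend plus Gaussian noise.
    random.seed(0)
    xs = [i / 10 for i in range(50)]
    ys = [2 * x + random.gauss(0, 0.3) for x in xs]
    best = select_model(xs, ys, degrees=[0, 1])
    print(best)  # the wrapper prefers the linear model on this data
    ```

    The paper's contribution is an average-case prediction of how well this selection procedure generalizes on a given problem, rather than the worst-case bounds that hold uniformly over all problems.
    
    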