3 research outputs found

    Some Results Concerning Off-Training-Set and IID Error for the Gibbs and the Bayes Optimal Generalizers

    No full text
    In this paper we analyze the average behavior of the Bayes-optimal and Gibbs learning algorithms. We do this both for off-training-set error and for conventional IID error (for which test sets may overlap with training sets). For the IID case we provide a major extension to one of the better-known results of [7]. We also show that expected IID test set error is a non-increasing function of training set size for either algorithm. On the other hand, as we show, the expected off-training-set error for both learning algorithms can increase with training set size for non-uniform sampling distributions. We characterize what relationship the sampling distribution must have with the prior for such an increase. We show in particular that for uniform sampling distributions and either algorithm, the expected off-training-set error is a non-increasing function of training set size. For uniform sampling distributions, we also characterize the priors for which the expected error of the Bayes-optimal algo..
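
    As an illustration of the two error measures the abstract contrasts, below is a minimal Monte Carlo sketch (not taken from the paper). It estimates expected IID error and expected off-training-set (OTS) error for a Bayes-optimal generalizer on a tiny noise-free boolean problem. The input space X, the uniform prior PRIOR, the non-uniform sampling distribution PI, and all names are illustrative assumptions, not the paper's formal framework; with the uniform prior assumed here the OTS curve stays flat while the IID curve decreases, and the paper characterizes when other prior/sampling-distribution pairs make the OTS curve increase with training set size.

    # Illustrative sketch only: prior, sampling distribution, and names are assumptions.
    import itertools
    import random

    X = [0, 1, 2]                                            # tiny input space
    FUNCS = list(itertools.product([0, 1], repeat=len(X)))   # all boolean targets on X
    PRIOR = [1.0 / len(FUNCS)] * len(FUNCS)                  # assumed uniform prior P(f)
    PI = [0.7, 0.2, 0.1]                                     # assumed non-uniform sampling distribution

    def bayes_optimal_prediction(x, data):
        """Posterior-most-probable label at x given noise-free training data."""
        weight = {0: 0.0, 1: 0.0}
        for f, p in zip(FUNCS, PRIOR):
            if all(f[xi] == yi for xi, yi in data):          # keep targets consistent with the data
                weight[f[x]] += p
        return 0 if weight[0] >= weight[1] else 1

    def expected_errors(m, trials=20000):
        """Average IID and OTS error over targets and size-m training sets."""
        iid_tot, ots_tot, ots_count = 0.0, 0.0, 0
        for _ in range(trials):
            f = random.choices(FUNCS, weights=PRIOR)[0]      # draw a target from the prior
            xs = random.choices(X, weights=PI, k=m)          # draw m training inputs from PI
            data = [(x, f[x]) for x in xs]
            seen = set(xs)
            # IID error: test point drawn from PI, allowed to overlap the training set
            iid_tot += sum(PI[x] * (bayes_optimal_prediction(x, data) != f[x]) for x in X)
            # OTS error: test point restricted (and PI renormalized) to unseen inputs
            unseen = [x for x in X if x not in seen]
            if unseen:
                z = sum(PI[x] for x in unseen)
                ots_tot += sum(PI[x] / z * (bayes_optimal_prediction(x, data) != f[x])
                               for x in unseen)
                ots_count += 1
        return iid_tot / trials, (ots_tot / ots_count) if ots_count else float("nan")

    for m in range(4):
        print(m, expected_errors(m))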
