Off-training-set error for the Gibbs and the Bayes optimal generalizers
In this paper we analyze the average off-training-set behavior of the Bayes-optimal and Gibbs learning algorithms. We do this by exploiting the concept of refinement, which concerns the relationship between probability distributions. For non-uniform sampling distributions, the expected off-training-set error of both learning algorithms can rise with training set size. However, we show in this paper that for uniform sampling and either algorithm, the expected error is a non-increasing function of training set size. For uniform sampling distributions, we also characterize the priors for which the expected error of the Bayes-optimal algorithm stays constant. In addition, we show that when the target function is fixed, expected off-training-set error can increase with training set size if and only if the expected error averaged over all targets decreases with training set size. Our results hold for arbitrary noise and arbitrary loss functions.
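The two algorithms referenced here are standard: the Gibbs learner samples a single hypothesis from the posterior and predicts with it, while the Bayes-optimal learner predicts the posterior-majority label at each input. The following Monte-Carlo sketch in Python illustrates the uniform-sampling claim on a toy Boolean domain; the smoothness prior, the domain size N, and all other parameters are illustrative assumptions, not taken from the paper.

    import itertools
    import math
    import random

    random.seed(0)

    N = 6                                                # |X|: inputs are 0 .. N-1
    FUNCS = list(itertools.product([0, 1], repeat=N))    # all 2^N noise-free targets

    # Illustrative "smoothness" prior: weight exp(-#adjacent label flips).
    WEIGHTS = [math.exp(-sum(f[i] != f[i + 1] for i in range(N - 1))) for f in FUNCS]
    Z = sum(WEIGHTS)
    PRIOR = [w / Z for w in WEIGHTS]

    def posterior(train):
        """Posterior over targets given noise-free (x, y) examples."""
        w = [p if all(f[x] == y for x, y in train) else 0.0
             for f, p in zip(FUNCS, PRIOR)]
        z = sum(w)
        return [wi / z for wi in w]

    def ots_errors(m, trials=4000):
        """Monte-Carlo estimate of expected off-training-set error at size m,
        with training inputs drawn i.i.d. uniform on X."""
        bayes_err = gibbs_err = 0.0
        n = 0
        for _ in range(trials):
            f = random.choices(FUNCS, weights=PRIOR)[0]     # target drawn from prior
            xs = [random.randrange(N) for _ in range(m)]    # uniform sampling
            train = [(x, f[x]) for x in xs]
            off = [x for x in range(N) if x not in xs]
            if not off:
                continue
            n += 1
            post = posterior(train)
            x = random.choice(off)                          # uniform over unseen inputs
            p1 = sum(pi for g, pi in zip(FUNCS, post) if g[x] == 1)
            bayes_err += (1 if p1 > 0.5 else 0) != f[x]     # Bayes-optimal: majority vote
            g = random.choices(FUNCS, weights=post)[0]      # Gibbs: one posterior sample
            gibbs_err += g[x] != f[x]
        return bayes_err / n, gibbs_err / n

    for m in range(5):
        b, g = ots_errors(m)
        print(f"m={m}: Bayes-optimal OTS error {b:.3f}, Gibbs OTS error {g:.3f}")

With this structured prior, both estimated error curves should be (up to Monte-Carlo noise) non-increasing in m, consistent with the uniform-sampling result above. Swapping in the uniform prior over all 2^N targets makes the posterior at every unseen input exactly 50/50, so the Bayes-optimal off-training-set error stays constant at 0.5; that is one simple example of a prior for which the error stays constant.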
Some Results Concerning Off-Training-Set and IID Error for the Gibbs and the Bayes Optimal Generalizers
In this paper we analyze the average behavior of the Bayes-optimal and Gibbs learning algorithms. We do this both for off-training-set error and for conventional IID error (for which test sets may overlap with training sets). For the IID case we provide a major extension to one of the better-known results of [7]. We also show that expected IID test-set error is a non-increasing function of training set size for either algorithm. On the other hand, as we show, the expected off-training-set error of both learning algorithms can increase with training set size for non-uniform sampling distributions. We characterize the relationship that must hold between the sampling distribution and the prior for such an increase to occur. We show in particular that for uniform sampling distributions and either algorithm, the expected off-training-set error is a non-increasing function of training set size. For uniform sampling distributions, we also characterize the priors for which the expected error of the Bayes-optimal algorithm stays constant.
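The contrast drawn in this abstract is in how the test input is chosen: for conventional IID error the test input is drawn from the same sampling distribution as the training inputs, so it may land on a training point; for off-training-set (OTS) error it is restricted to unseen inputs. The self-contained Python sketch below measures both for the Bayes-optimal learner under a non-uniform sampling distribution; the geometric sampling weights and the smoothness prior are illustrative assumptions, not the paper's construction.

    import itertools
    import math
    import random

    random.seed(1)

    N = 6
    FUNCS = list(itertools.product([0, 1], repeat=N))
    WEIGHTS = [math.exp(-sum(f[i] != f[i + 1] for i in range(N - 1))) for f in FUNCS]
    Z = sum(WEIGHTS)
    PRIOR = [w / Z for w in WEIGHTS]

    # Illustrative non-uniform sampling distribution over inputs (geometric decay).
    SAMP = [2.0 ** -(x + 1) for x in range(N)]
    S = sum(SAMP)
    SAMP = [s / S for s in SAMP]

    def bayes_predict(train, x):
        """Bayes-optimal label at x: posterior-majority vote over consistent targets."""
        w = [p if all(f[xi] == yi for xi, yi in train) else 0.0
             for f, p in zip(FUNCS, PRIOR)]
        p1 = sum(wi for f, wi in zip(FUNCS, w) if f[x] == 1)
        return 1 if p1 > sum(w) / 2 else 0

    def iid_and_ots_errors(m, trials=4000):
        iid = ots = 0.0
        n_ots = 0
        for _ in range(trials):
            f = random.choices(FUNCS, weights=PRIOR)[0]
            xs = random.choices(range(N), weights=SAMP, k=m)
            train = [(x, f[x]) for x in xs]
            # IID error: test input drawn from SAMP; overlap with training allowed.
            x = random.choices(range(N), weights=SAMP)[0]
            iid += bayes_predict(train, x) != f[x]
            # OTS error: test input drawn from SAMP restricted to unseen inputs.
            off = [x for x in range(N) if x not in xs]
            if off:
                x = random.choices(off, weights=[SAMP[x] for x in off])[0]
                ots += bayes_predict(train, x) != f[x]
                n_ots += 1
        return iid / trials, ots / max(n_ots, 1)

    for m in range(5):
        i, o = iid_and_ots_errors(m)
        print(f"m={m}: IID error {i:.3f}, OTS error {o:.3f}")

Per the abstract, the IID curve should be non-increasing in m, while the OTS curve need not be monotone under non-uniform sampling; whether it actually rises at a given m depends on the interplay between the prior and the sampling distribution, which is exactly the relationship the paper characterizes.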