An empirical study of the sample size variability of optimal active learning using Gaussian process regression

Optimal active learning refers to a framework in which the learner actively selects the data points to be added to its training set in a statistically optimal way. Under a log-loss criterion, optimal active learning can be implemented in a relatively simple and efficient manner for regression problems using Gaussian processes. To date, however, there has been little attempt to study the experimental behavior and performance of this technique. In this paper, we present a detailed empirical evaluation of optimal active learning with Gaussian processes on seven regression problems from the DELVE repository. In particular, we compare optimal active learning against random query selection and examine the impact of experimental factors such as the size and construction of the sub-datasets used for training and testing the models. We show that these multiple sources of variability can be quite significant, suggesting that more care needs to be taken in the evaluation of active learning algorithms.
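
To make the query-selection step concrete, the sketch below shows one common way the idea is realized: for a Gaussian process regressor with fixed hyperparameters, the log-loss-optimal query is the candidate point with the largest predictive variance (maximum-entropy selection). This is a minimal illustration using scikit-learn's GaussianProcessRegressor on a synthetic 1-D problem; the kernel, data, pool sizes, and loop length are illustrative assumptions, not the paper's experimental setup on the DELVE tasks.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic 1-D regression problem standing in for a DELVE-style task.
def target(x):
    return np.sin(3 * x).ravel() + 0.1 * rng.standard_normal(len(x))

X_pool = rng.uniform(-2, 2, size=(200, 1))   # unlabelled candidate pool
y_pool = target(X_pool)
X_test = np.linspace(-2, 2, 100).reshape(-1, 1)
y_test = np.sin(3 * X_test).ravel()

# Seed training set: a handful of randomly chosen points.
idx = list(rng.choice(len(X_pool), size=5, replace=False))

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)

for step in range(20):
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(X_pool[idx], y_pool[idx])

    # Under log-loss, the most informative query for a GP with fixed
    # hyperparameters is the pool point with maximum predictive variance.
    remaining = [i for i in range(len(X_pool)) if i not in idx]
    _, std = gpr.predict(X_pool[remaining], return_std=True)
    idx.append(remaining[int(np.argmax(std))])

    # Track generalization error as the actively selected set grows.
    rmse = np.sqrt(np.mean((gpr.predict(X_test) - y_test) ** 2))
    print(f"step {step:2d}  train size {len(idx):3d}  test RMSE {rmse:.3f}")
```

A random-query baseline of the kind the paper compares against can be obtained by replacing the variance-based selection with a uniform draw from the remaining pool indices.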