Classification bandits are multi-armed bandit problems in which the task is to
classify a given set of arms as positive or negative according to whether the
rate of arms whose expected reward is at least h is at least w, for given
thresholds h and w. We study a special classification bandit problem in which
the arms correspond to points x in d-dimensional real space and the expected
rewards f(x) are generated according to a Gaussian process prior.
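As a rough formalization of this criterion (a sketch only; the symbol K for the
number of arms is our illustrative notation, not fixed by the text), the set of
arms {x_1, ..., x_K} is classified as

% illustrative notation: K arms, GP prior with mean mu and kernel k
\[
\text{positive} \;\Longleftrightarrow\;
\frac{1}{K}\,\bigl|\{\, i : f(x_i) \ge h \,\}\bigr| \;\ge\; w,
\qquad f \sim \mathcal{GP}(\mu, k),\quad x_i \in \mathbb{R}^d .
\]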
We develop a framework algorithm for the problem that can use various arm
selection policies, and propose policies called FCB and FTSV. We prove a sample
complexity upper bound for FCB that is smaller than the bound for the existing
level set estimation algorithm, which must decide whether f(x) is at least h
for every arm x.
We also propose arm selection policies that depend on an estimated rate of arms
with expected rewards of at least h, and show that they improve empirical
sample complexity. According to our experimental results, the rate-estimation
versions of FCB and FTSV, together with that of the popular active learning
policy that selects the point with the maximum variance, outperform the other
policies on synthetic functions, and the rate-estimation version of FTSV also
performs best on our real-world dataset.
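For concreteness, a minimal sketch of the maximum-variance selection rule
mentioned above is given here, assuming a Gaussian process posterior over the
arms; the GP library, kernel, and data are illustrative choices and are not
taken from the paper.

# Hedged sketch: maximum-variance (uncertainty sampling) arm selection with a
# GP posterior. All numbers, kernels, and library choices below are assumptions
# made for illustration only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
arms = rng.uniform(-1.0, 1.0, size=(50, 2))   # candidate points x in R^d (d = 2 here)

# Observations gathered so far: indices of sampled arms and their noisy rewards.
observed_idx = [0, 10, 20]
rewards = [0.3, -0.1, 0.7]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-2)
gp.fit(arms[observed_idx], rewards)

# Posterior mean and standard deviation at every arm.
mean, std = gp.predict(arms, return_std=True)

# Maximum-variance policy: sample the arm whose posterior variance is largest.
next_arm = int(np.argmax(std))
print("next arm to sample:", next_arm, "posterior std:", std[next_arm])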