Bandit-Based Task Assignment for Heterogeneous Crowdsourcing
We consider a task assignment problem in crowdsourcing, which is aimed at
collecting as many reliable labels as possible within a limited budget. A
challenge in this scenario is how to cope with the diversity of tasks and the
task-dependent reliability of workers; e.g., a worker may be good at
recognizing the names of sports teams, but not familiar with cosmetics
brands. We refer to this practical setting as heterogeneous crowdsourcing. In
this paper, we propose a contextual bandit formulation for task assignment in
heterogeneous crowdsourcing, which is able to deal with the
exploration-exploitation trade-off in worker selection. We also theoretically
investigate regret bounds for the proposed method, and demonstrate its
practical usefulness experimentally.
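For intuition, worker selection under an exploration-exploitation trade-off can be sketched as a LinUCB-style contextual bandit, with task features as the context and workers as the arms. This is a generic illustration rather than the paper's algorithm; all names in it (`n_workers`, `dim`, `alpha`, `select_worker`, `update`) are assumptions.

```python
import numpy as np

# Illustrative LinUCB-style worker selection for heterogeneous crowdsourcing.
# Each worker is an arm; the task's feature vector x is the context.
# Hypothetical sizes and exploration strength; not taken from the paper.
n_workers, dim, alpha = 5, 10, 1.0
A = [np.eye(dim) for _ in range(n_workers)]    # per-worker design matrices
b = [np.zeros(dim) for _ in range(n_workers)]  # per-worker reward sums

def select_worker(x):
    """Pick the worker with the highest upper confidence bound for context x."""
    scores = []
    for k in range(n_workers):
        A_inv = np.linalg.inv(A[k])
        theta = A_inv @ b[k]  # ridge-regression estimate of worker quality
        scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))  # mean + bonus
    return int(np.argmax(scores))

def update(k, x, reward):
    """Update worker k's statistics after observing label quality as a reward."""
    A[k] += np.outer(x, x)
    b[k] += reward * x
```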
Active Learning with Expert Advice
Conventional learning-with-expert-advice methods assume that a learner always
receives the outcome (e.g., the class label) of every incoming training instance
at the end of each trial. In real applications, acquiring the outcome from an
oracle can be costly or time consuming. In this paper, we address a new problem
of active learning with expert advice, where the outcome of an instance is
disclosed only when it is requested by the online learner. Our goal is to learn
an accurate prediction model while asking the oracle as few questions as
possible. To address this challenge, we propose a framework of active
forecasters for online active learning with expert advice, which extends
two regular forecasters, the Exponentially Weighted Average Forecaster
and the Greedy Forecaster, to tackle the task of active learning with expert
advice. We prove that the proposed algorithms satisfy Hannan consistency
under proper assumptions, and validate the efficacy of our technique with an
extensive set of experiments.
Comment: Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI 2013).
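As a rough illustration of the ingredients named above, the following sketch combines an exponentially weighted average forecaster with a simple label-request rule that queries the oracle only when the weighted prediction is close to the decision boundary. The query threshold `delta` and the uncertainty rule are assumptions for illustration, not the paper's criterion.

```python
import numpy as np

def active_ewaf(expert_preds, labels, eta=0.5, delta=0.2):
    """Exponentially weighted average forecaster that requests labels selectively.

    expert_preds: (T, N) array of expert predictions in [0, 1].
    labels:       (T,) array of binary outcomes, revealed only on request.
    """
    T, N = expert_preds.shape
    w = np.ones(N)                            # expert weights
    queries = 0
    for t in range(T):
        p = w @ expert_preds[t] / w.sum()     # weighted average prediction
        if abs(p - 0.5) < delta:              # uncertain: query the oracle
            queries += 1
            loss = np.abs(expert_preds[t] - labels[t])
            w *= np.exp(-eta * loss)          # exponential weight update
    return w, queries
```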
Task Selection for Bandit-Based Task Assignment in Heterogeneous Crowdsourcing
Task selection (picking an appropriate labeling task) and worker selection
(assigning the labeling task to a suitable worker) are two major challenges in
task assignment for crowdsourcing. Recently, worker selection has been
successfully addressed by the bandit-based task assignment (BBTA) method, while
task selection has not been thoroughly investigated yet. In this paper, we
experimentally compare several task selection strategies borrowed from the
active learning literature, and show that the least confidence strategy
significantly improves the performance of task assignment in crowdsourcing.
Comment: arXiv admin note: substantial text overlap with arXiv:1507.0580
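The least confidence strategy itself is simple to state: among the remaining tasks, pick the one whose current aggregated label estimate has the lowest top-class probability. A minimal sketch, assuming label posteriors have already been estimated from the labels collected so far:

```python
import numpy as np

def least_confidence_task(posteriors):
    """posteriors: (n_tasks, n_classes) estimated label probabilities.
    Returns the index of the task whose top label is least certain."""
    confidence = posteriors.max(axis=1)   # probability of the most likely label
    return int(confidence.argmin())

# Example: task 1 has the most uncertain estimate, so it is labeled next.
posteriors = np.array([[0.9, 0.1], [0.55, 0.45], [0.8, 0.2]])
print(least_confidence_task(posteriors))  # -> 1
```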
A Contextual Bandit Bake-off
Contextual bandit algorithms are essential for solving many real-world
interactive machine learning problems. Despite multiple recent successes on
statistically and computationally efficient methods, the practical behavior of
these algorithms is still poorly understood. We leverage the availability of
large numbers of supervised learning datasets to empirically evaluate
contextual bandit algorithms, focusing on practical methods that learn by
relying on optimization oracles from supervised learning. We find that a recent
method (Foster et al., 2018) using optimism under uncertainty works the best
overall. A surprisingly close second is a simple greedy baseline that only
explores implicitly through the diversity of contexts, followed by a variant of
Online Cover (Agarwal et al., 2014) which tends to be more conservative but
robust to problem specification by design. Along the way, we also evaluate
various components of contextual bandit algorithm design such as loss
estimators. Overall, this is a thorough study and review of contextual bandit
methodology.
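One such component, the loss estimator, can be illustrated with the classic inverse propensity score (IPS) estimator, which turns a single bandit observation into an unbiased estimate of the full loss vector. This is a generic sketch of one standard estimator family, not necessarily the exact variants compared in the study.

```python
import numpy as np

def ips_loss_vector(chosen_action, observed_loss, propensity, n_actions):
    """Unbiased estimate of the per-action loss vector from one bandit round."""
    loss_hat = np.zeros(n_actions)
    loss_hat[chosen_action] = observed_loss / propensity  # importance weighting
    return loss_hat

# Example: action 2 was played with probability 0.25 and incurred loss 1.0.
print(ips_loss_vector(2, 1.0, 0.25, n_actions=4))  # -> [0. 0. 4. 0.]
```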