
Towards Anytime Active Learning: Interrupting Experts to Reduce Annotation Costs

By Maria E. Ramirez-Loaiza, Aron Culotta and Mustafa Bilgic

Abstract

Many active learning methods incorporate annotation cost or expert quality into their framework when selecting the best data for annotation. While these methods model expert quality, availability, or expertise, they have no direct influence over any of these factors. We present a novel framework, built upon decision-theoretic active learning, that allows the learner to directly control label quality by allocating a time budget to each annotation. We show that our method improves the efficiency of the active learner through an interruption mechanism that trades off the induced labeling error against the cost of annotation. Our simulation experiments on three document classification tasks show that some interruption is almost always better than none, but that the optimal interruption time varies by dataset.
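The paper's own code is not provided on this page, but the decision-theoretic trade-off the abstract describes can be illustrated with a minimal sketch. The sketch below (in Python with scikit-learn) jointly scores candidate (instance, time budget) pairs: a larger budget yields a more reliable label but costs more. The annotator-accuracy curve label_quality, the candidate budgets, and the weights lam and cost_per_sec are all hypothetical assumptions for illustration, not values from the paper, and a binary task is assumed.

```python
import numpy as np
from sklearn.base import clone

def label_quality(t, tau=30.0):
    """Assumed annotator accuracy curve: accuracy starts at chance (0.5)
    and saturates toward 1.0 as the time budget t (seconds) grows.
    tau is a hypothetical time constant."""
    return 0.5 + 0.5 * (1.0 - np.exp(-t / tau))

def expected_utility(model, X_lab, y_lab, x, t, X_val, y_val,
                     cost_per_sec=1.0, lam=100.0):
    """Decision-theoretic score for annotating instance x with budget t:
    expected validation accuracy after retraining, averaged over the
    learner's current belief about the true label and over whether the
    (possibly interrupted) annotator answers correctly, minus cost."""
    q = label_quality(t)
    p = model.predict_proba(x.reshape(1, -1))[0]  # belief over {0, 1}
    util = 0.0
    for y_true, p_y in enumerate(p):
        # With prob. q the expert returns y_true, else the wrong label.
        for y_obs, p_obs in [(y_true, q), (1 - y_true, 1.0 - q)]:
            m = clone(model).fit(np.vstack([X_lab, x]),
                                 np.append(y_lab, y_obs))
            util += p_y * p_obs * m.score(X_val, y_val)
    return lam * util - cost_per_sec * t

def select_query(model, X_lab, y_lab, X_pool, X_val, y_val,
                 budgets=(5.0, 15.0, 30.0, 60.0)):
    """Choose the instance AND the interruption time together, rather
    than fixing annotation time in advance."""
    scores = [(expected_utility(model, X_lab, y_lab, x, t, X_val, y_val), i, t)
              for i, x in enumerate(X_pool) for t in budgets]
    _, i_best, t_best = max(scores)
    return i_best, t_best
```

The key design point, following the abstract, is that the time budget is a decision variable: the learner may deliberately interrupt the expert early and accept a noisier label when the expected error it induces is cheaper than the annotation time it saves.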

Topics: Categories and Subject Descriptors: I.5.2 [Pattern Recognition]: Design Methodology—Classifier design and evaluation. General Terms: Algorithms, Experimentation, Human Factors, Measurement, Performance. Keywords: Active learning
Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.353.2253
Provided by: CiteSeerX