Comprehensive and Reliable Crowd Assessment Algorithms
Evaluating workers is a critical aspect of any crowdsourcing system. In this
paper, we devise techniques for evaluating workers by finding confidence
intervals on their error rates. Unlike prior work, we focus on
"conciseness"---that is, giving as tight a confidence interval as possible.
Conciseness is of utmost importance because it allows us to be sure that we
have the best guarantee possible on worker error rate. Also unlike prior work,
we provide techniques that work under very general scenarios, such as when not
all workers have attempted every task (a fairly common scenario in practice),
when tasks have non-boolean responses, and when workers have different biases
for positive and negative tasks. We demonstrate conciseness as well as accuracy
of our confidence intervals by testing them on a variety of conditions and
multiple real-world datasets.

Comment: ICDE 201
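As a generic illustration of a confidence interval on a worker's error rate (the Wilson score interval, not the tighter construction the paper proposes), one might compute:

```python
from math import sqrt

def error_rate_interval(errors, attempts, z=1.96):
    """Wilson score interval for a worker's error rate.

    A standard textbook interval, shown only to illustrate the
    quantity being bounded -- the paper's own intervals are
    constructed differently and are designed to be tighter.
    """
    if attempts == 0:
        return (0.0, 1.0)  # no evidence: vacuous interval
    p = errors / attempts
    denom = 1 + z * z / attempts
    center = (p + z * z / (2 * attempts)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / attempts
                                + z * z / (4 * attempts * attempts))
    return (max(0.0, center - margin), min(1.0, center + margin))

# A worker who erred on 12 of 100 attempted tasks:
lo, hi = error_rate_interval(12, 100)
```

A tighter ("more concise") interval would shrink `hi - lo` while still covering the true error rate at the stated confidence level.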
Data Programming: Creating Large Training Sets, Quickly
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming, in which users express weak supervision strategies or domain heuristics as labeling functions: programs that label subsets of the data but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can "denoise" the generated training set, and we establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and we demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and that applying data programming to an LSTM model yields a TAC-KBP score almost 6 F1 points above a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
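A minimal sketch of what "labeling functions" look like in this paradigm. The heuristics and the task below are hypothetical, and the paper's generative-model denoising is replaced here by a simple vote over the functions' outputs, just to show how noisy, conflicting labelers can be combined:

```python
# Label values: labeling functions may also abstain on an example.
ABSTAIN, NEG, POS = 0, -1, 1

def lf_contains_born(x):
    # Hypothetical heuristic: phrases like "born in" suggest a
    # birth-place relation (a positive example).
    return POS if "born in" in x else ABSTAIN

def lf_too_short(x):
    # Hypothetical heuristic: very short sentences rarely express
    # the relation.
    return NEG if len(x.split()) < 3 else ABSTAIN

def lf_has_year(x):
    # Hypothetical heuristic: a four-digit token looks like a year.
    return POS if any(t.isdigit() and len(t) == 4 for t in x.split()) else ABSTAIN

def combine(x, lfs):
    """Resolve noisy, possibly conflicting votes by simple majority.

    The paper instead fits a generative model over the labeling
    functions to estimate their accuracies and produce probabilistic
    training labels; majority vote is only a stand-in.
    """
    score = sum(lf(x) for lf in lfs)  # abstains contribute 0
    if score > 0:
        return POS
    if score < 0:
        return NEG
    return ABSTAIN

lfs = [lf_contains_born, lf_too_short, lf_has_year]
label = combine("Ada Lovelace was born in 1815", lfs)  # -> POS
```

The probabilistic labels produced by the (actual) generative model are then fed into a noise-aware discriminative loss rather than treated as ground truth.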