
    Learning From Noisy Singly-labeled Data

    Supervised learning depends on annotated examples, which are taken to be the ground truth. But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk. Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem). Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples. This raises two fundamental questions: (1) How can we best learn from noisy workers? (2) How should we allocate our labeling budget to maximize the performance of a classifier? We propose a new algorithm for jointly modeling labels and worker quality from noisy crowdsourced data. The alternating minimization proceeds in rounds: it estimates worker quality from disagreement with the current model, then updates the model by optimizing a loss function that accounts for the current estimate of worker quality. Unlike previous approaches, our algorithm can estimate worker quality even with only one annotation per example. We establish a generalization error bound for models learned with our algorithm and show theoretically that when worker quality is above a threshold, it is better to label many examples once than to label fewer examples multiple times. Experiments on both ImageNet (with simulated noisy workers) and MS-COCO (with real crowdsourced labels) confirm our algorithm's benefits.
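    To make the alternating scheme concrete, here is a minimal sketch of the idea described in the abstract, not the authors' exact method: worker quality is re-estimated each round as agreement with the current model's predictions, and the classifier is refit with examples weighted by their worker's estimated quality. The synthetic data, the use of logistic regression, and all variable names are illustrative assumptions.

```python
# Minimal sketch of model/worker-quality alternating minimization.
# Assumptions (not from the paper): binary labels, one noisy label per
# example, synthetic data, and sklearn logistic regression as the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic task: 2000 singly-labeled examples from 20 workers of
# varying (unknown) quality.
n, d, n_workers = 2000, 5, 20
X = rng.normal(size=(n, d))
y_true = (X @ rng.normal(size=d) > 0).astype(int)
quality = rng.uniform(0.55, 0.95, size=n_workers)  # P(label correct)
worker = rng.integers(0, n_workers, size=n)        # one worker per example
flip = rng.random(n) > quality[worker]
y_obs = np.where(flip, 1 - y_true, y_true)         # noisy observed labels

model = LogisticRegression()
weights = np.ones(n)
for _ in range(10):
    # Model update: fit a loss weighted by current quality estimates,
    # down-weighting examples from workers that look unreliable.
    model.fit(X, y_obs, sample_weight=weights)
    pred = model.predict(X)
    # Worker-quality update: a worker's estimated quality is their
    # agreement rate with the current model's predictions.
    agree = (pred == y_obs).astype(float)
    q_hat = np.array([agree[worker == w].mean() if (worker == w).any() else 0.5
                      for w in range(n_workers)])
    weights = q_hat[worker]

print("corr(estimated, true worker quality):",
      np.corrcoef(q_hat, quality)[0, 1])
```

    Note that the quality estimates here come entirely from disagreement with the model, so the scheme works even with a single annotation per example, which is the regime the paper targets.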

    Social computation: Fundamental limits and efficient algorithms

    Social computing systems bring enormous value to society by harnessing the data generated by the members of a community. Though each individual reveals only a little information through their online traces, collectively this information yields significant insight into societal preferences that can be used to design better systems. Challenging societal problems can be solved using the collective power of a crowd, in which each individual contributes only limited knowledge through a specifically designed online platform. General approaches exist for designing such platforms, aggregating the collected data, and using it for downstream tasks, but they are typically sub-optimal and inefficient. In this work, we investigate several social computing problems and provide efficient algorithms for solving them. The work studies several topics: (a) designing efficient algorithms for aggregating preferences from partially observed traces of online activities, and characterizing the fundamental trade-off between computational complexity and statistical efficiency; (b) characterizing the fundamental trade-off between budget and accuracy of aggregated answers in crowdsourcing systems, and designing efficient algorithms for training supervised learning models on crowdsourced answers; (c) designing efficient algorithms for estimating fundamental spectral properties of partially observed data, such as a movie-rating matrix in a recommendation system or the connections in a large network.
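    As one illustration of topic (c), here is a minimal sketch, not taken from the thesis, of a standard way to estimate spectral properties of a partially observed matrix: if entries are revealed independently with a known probability p, rescaling the observed entries by 1/p gives an unbiased estimate of the full matrix, and its top singular values approximate the truth for low-rank data. The matrix sizes, rank, p, and names are illustrative assumptions.

```python
# Minimal sketch: spectral estimation from partially observed data.
# Assumption (not from the thesis): each entry of a low-rank "rating"
# matrix is observed independently with known probability p.
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth low-rank matrix: 300 users x 200 items, rank 5.
U = rng.normal(size=(300, 5))
V = rng.normal(size=(200, 5))
M = U @ V.T

p = 0.2                             # observation probability
mask = rng.random(M.shape) < p      # which entries we actually see
M_obs = np.where(mask, M, 0.0) / p  # rescaled: E[M_obs] = M

k = 5
true_sv = np.linalg.svd(M, compute_uv=False)[:k]
est_sv = np.linalg.svd(M_obs, compute_uv=False)[:k]
print("true top singular values:     ", np.round(true_sv, 1))
print("estimated from 20% of entries:", np.round(est_sv, 1))
```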