4 research outputs found

    Large-scale Markov decision problems with KL control cost and its application to crowdsourcing

    We study average and total cost Markov decision problems with large state spaces. Since the computational and statistical costs of finding the optimal policy scale with the size of the state space, we focus on searching for near-optimality within a low-dimensional family of policies. In particular, we show that for problems with a Kullback-Leibler divergence cost function, policy optimization reduces to a convex optimization problem, which we solve approximately using a stochastic subgradient algorithm. We show that the performance of the resulting policy is close to the best in the low-dimensional family. We demonstrate the efficacy of our approach on an important crowdsourcing application: budget allocation in crowd labeling.
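    The abstract's reduction turns policy search into a convex program attacked with stochastic subgradient steps. As a minimal sketch of that generic template (not the paper's actual objective, policy parameterization, or subgradient oracle), the following code runs projected stochastic subgradient descent with averaged iterates; the oracle `sample_subgrad`, the step-size schedule, and the l2-ball constraint are illustrative assumptions.

    ```python
    import numpy as np

    def stochastic_subgradient(sample_subgrad, theta0, n_steps=1000, radius=10.0):
        """Projected stochastic subgradient descent on a convex objective.

        sample_subgrad(theta) should return an unbiased (sub)gradient
        estimate -- in a setting like the paper's, computed from a sampled
        trajectory of the controlled chain; here it is left abstract.
        Returns the running average of the iterates, which is the quantity
        the standard O(1/sqrt(T)) convergence guarantee applies to.
        """
        theta = np.asarray(theta0, dtype=float)
        avg = np.zeros_like(theta)
        for t in range(1, n_steps + 1):
            g = sample_subgrad(theta)
            theta = theta - g / np.sqrt(t)   # diminishing step size 1/sqrt(t)
            norm = np.linalg.norm(theta)
            if norm > radius:                # project back onto the l2 ball
                theta *= radius / norm
            avg += (theta - avg) / t         # running average of iterates
        return avg
    ```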

    A Statistical Analysis of the Aggregation of Crowdsourced Labels

    Crowdsourcing, due to its inexpensive and timely nature, has become a popular method of collecting data that is difficult for computers to generate. We focus on using this method of human computation to gather labels for classification tasks, to be used for machine learning. However, data gathered this way may be of varying quality, ranging from spam to perfect. We aim to maintain the cost-effective property of crowdsourcing while also obtaining quality results. Towards a solution, we have multiple workers label the same problem instance and aggregate the responses into one label afterwards. We study which aggregation method to use and what guarantees we can provide on its estimates. Different crowdsourcing models call for different techniques; we outline and organize various directions taken in the literature, and focus on the Dawid-Skene model. In this setting, each instance has a true label, workers are independent, and the performance of each individual is assumed to be uniform over all instances, in the sense that she has an inherent skill that governs the probability with which she labels correctly. Her skill is unknown to us. Aggregation methods aim to find the true label of each task based solely on the labels the workers reported. We measure the performance of these methods by the probability with which their estimates match the true label. In practice, a popular procedure is to run the EM algorithm to find estimates of the skills and labels. However, this method is not directly guaranteed to perform well under our measure. We collect and evaluate theoretical results that bound the error of various aggregation methods, including specific variants of EM. Finally, we prove a guarantee on the error suffered by the maximum likelihood estimator, the global optimum of the function that EM aims to numerically optimize.
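    For concreteness, here is a hedged sketch of EM under the simplest ("one-coin") variant of the Dawid-Skene model with binary labels, where each worker's skill is a single accuracy parameter. The majority-vote initialization, fixed iteration count, and the assumption that every worker labels every task are simplifications for illustration, not the specific EM variants analyzed in the thesis.

    ```python
    import numpy as np

    def dawid_skene_one_coin(labels, n_iters=50):
        """EM for the one-coin Dawid-Skene model with binary labels in {0, 1}.

        labels: (n_workers, n_tasks) array of reported labels; assumes every
        worker labels every task (a simplification for illustration).
        Returns (posterior probability that each task's true label is 1,
        estimated per-worker skills).
        """
        # Initialize the posterior over true labels with soft majority vote.
        q = labels.mean(axis=0)  # P(true label = 1) for each task

        for _ in range(n_iters):
            # M-step: a worker's skill is her expected agreement rate
            # with the current soft labels.
            agree = labels * q + (1 - labels) * (1 - q)
            skills = agree.mean(axis=1).clip(1e-6, 1 - 1e-6)

            # E-step: posterior over each task's true label given skills,
            # accumulated in log-odds form to avoid numerical underflow.
            log_odds = (labels * np.log(skills / (1 - skills))[:, None]
                        + (1 - labels) * np.log((1 - skills) / skills)[:, None]
                        ).sum(axis=0)
            q = 1.0 / (1.0 + np.exp(-log_odds))

        return q, skills
    ```

    Thresholding the returned posterior at `q > 0.5` yields the aggregated labels, recovering the EM-based aggregation pipeline the abstract describes; the maximum likelihood estimator discussed in the abstract is the global optimum of the likelihood that these iterations ascend locally.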