    Can the Crowd be Controlled?: A Case Study on Crowd Sourcing and Automatic Validation of Completed Tasks based on User Modeling

    Abstract
    Annotation is an essential step in the development cycle of many Natural Language Processing (NLP) systems. Lately, crowdsourcing has been employed to facilitate large-scale annotation at a reduced cost. Unfortunately, verifying the quality of the submitted annotations is a daunting task. Existing approaches address this problem either through sampling or redundancy, but both come at a cost. Based on the observation that crowdsourcing workers return to do tasks they have done previously, this paper proposes a novel framework for the automatic validation of crowdsourced tasks. A case study based on sentiment analysis is presented to elucidate the framework and its feasibility. The results suggest that validation of crowdsourced tasks can be automated to a certain extent.

    Keywords: Crowdsourcing, Evaluation, User-modelling

    Annotation is an unavoidable task for developing NLP systems. Large scale annotation projects such as …

    The contributions of this work are:
    1. We present a framework for automatically verifying a crowdsourced task. This can save the time and effort spent validating submitted work. Moreover, using this framework, a set of reliable workers can be selected a priori for a future task of a similar nature.
    2. Our results suggest that making the task easier expedites task completion more than increasing the monetary incentive associated with the task.
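    The abstract describes validating submissions from returning workers via a user model rather than sampling or redundancy. The sketch below illustrates one plausible reading of that idea, not the paper's actual method: each returning worker's agreement with embedded control (gold) items is tracked, and new submissions are auto-accepted once the worker's historical reliability passes a threshold. All names and the 0.8 threshold are assumptions introduced here for illustration.

    from collections import defaultdict

    class WorkerModel:
        """Hypothetical user model: per-worker accuracy on control items."""

        def __init__(self, auto_accept_threshold=0.8):
            self.correct = defaultdict(int)   # control items answered correctly
            self.total = defaultdict(int)     # control items attempted
            self.threshold = auto_accept_threshold

        def record_gold_answer(self, worker_id, was_correct):
            # Update the model whenever a worker answers an embedded control item.
            self.total[worker_id] += 1
            if was_correct:
                self.correct[worker_id] += 1

        def reliability(self, worker_id):
            # Fraction of control items answered correctly; 0.0 for unseen workers.
            if self.total[worker_id] == 0:
                return 0.0
            return self.correct[worker_id] / self.total[worker_id]

        def validate(self, worker_id, submission):
            # Auto-accept work from trusted returning workers; flag the rest
            # for manual or redundancy-based review.
            if self.reliability(worker_id) >= self.threshold:
                return ("accepted", submission)
            return ("needs_review", submission)

    # Example: a worker who did well on earlier sentiment-annotation control
    # items is auto-accepted on a later task of a similar nature.
    model = WorkerModel()
    for outcome in [True, True, True, False, True]:
        model.record_gold_answer("worker_42", outcome)
    print(model.validate("worker_42", {"item": "great phone!", "label": "positive"}))

    Under this reading, only flagged submissions would incur sampling or redundancy costs, which is consistent with the claim that validation can be automated "to a certain extent".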