3 research outputs found

    SLUA: Towards semantic linking of users with actions in crowdsourcing

    Recent advances in web technologies allow people to help solve complex problems by performing online tasks in return for money, learning, or fun. At present, human contribution is limited to the tasks defined on individual crowdsourcing platforms. Furthermore, there is a lack of tools and technologies that support the matching of tasks with appropriate users across multiple systems. A more explicit capture of the semantics of crowdsourcing tasks could enable the design and development of matchmaking services between users and tasks. The paper presents the SLUA ontology, which models users and tasks in crowdsourcing systems in terms of the relevant actions, capabilities, and rewards. This model describes different types of human tasks that help in solving complex problems using crowds. The paper provides examples of describing users and tasks in some real-world systems using the SLUA ontology.
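    A minimal sketch of what such a semantic description might look like, written in Python with rdflib. The slua: namespace URI, the class names (Task, User) and the property names (requiresCapability, hasCapability, offersReward) are assumptions for illustration only, not the published SLUA vocabulary; the point is that matchmaking between users and tasks reduces to a query over shared capabilities.

```python
# Illustrative sketch: describing a crowdsourcing task and a user with an
# ontology, in the spirit of SLUA. Namespace, classes, and properties are
# hypothetical, not the published SLUA vocabulary.
from rdflib import Graph, Namespace, Literal, RDF

SLUA = Namespace("http://example.org/slua#")   # hypothetical namespace
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("slua", SLUA)

# A task that requires a capability and offers a reward
g.add((EX.task42, RDF.type, SLUA.Task))
g.add((EX.task42, SLUA.requiresCapability, EX.frenchTranslation))
g.add((EX.task42, SLUA.offersReward, EX.smallPayment))
g.add((EX.smallPayment, SLUA.amount, Literal(0.50)))

# A user who holds the matching capability
g.add((EX.alice, RDF.type, SLUA.User))
g.add((EX.alice, SLUA.hasCapability, EX.frenchTranslation))

# Matchmaking reduces to a join on capabilities
query = """
SELECT ?user ?task WHERE {
  ?task a slua:Task ; slua:requiresCapability ?cap .
  ?user a slua:User ; slua:hasCapability ?cap .
}
"""
for user, task in g.query(query):
    print(f"{user} can work on {task}")
```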

    Effects of Expertise Assessment on the Quality of Task Routing in Human Computation

    Human computation systems are characterized by the use of human workers to solve computationally difficult problems. Expertise profiling involves the assessment and representation of a worker’s expertise in order to route human computation tasks to appropriate workers. This paper studies the relationship between the assessment workload placed on workers and the quality of task routing. Three expertise assessment approaches were compared in a user study with two different groups of human workers. The first approach asks workers to provide a self-assessment of their knowledge. The second approach measures the knowledge of workers through their performance on tasks with known responses. We propose a third approach that combines self-assessment and task-assessment. The results suggest that the self-assessment approach requires the least assessment workload from workers during expertise profiling, while the task-assessment approach achieved the highest response rate and accuracy. The proposed combined approach requires less assessment workload while achieving response rate and accuracy similar to those of the task-assessment approach.
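    A hedged sketch of how such a combined profiling step might be realized: a worker's self-assessment gates which topics are verified with known-answer (gold) tasks, and the verified accuracy becomes the expertise estimate used for routing. The function names, rating scale, and thresholding are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch of a combined expertise-profiling step (illustrative assumptions,
# not the paper's exact protocol): self-assessment decides which topics are
# verified with known-answer tasks; verified accuracy drives task routing.

def profile_worker(self_assessment, gold_tasks, answers, threshold=3):
    """self_assessment: {topic: rating on a 1-5 scale}
    gold_tasks: {topic: [(task_id, correct_answer), ...]}
    answers: {task_id: worker_answer}
    Returns {topic: estimated expertise in [0, 1]}."""
    expertise = {}
    for topic, rating in self_assessment.items():
        if rating < threshold:
            # Low self-rating: skip verification, keep a low prior.
            expertise[topic] = rating / 5.0 * 0.5
            continue
        gold = gold_tasks.get(topic, [])
        if not gold:
            expertise[topic] = rating / 5.0
            continue
        correct = sum(1 for task_id, truth in gold
                      if answers.get(task_id) == truth)
        expertise[topic] = correct / len(gold)
    return expertise

def route(task_topic, worker_profiles):
    """Pick the worker with the highest estimated expertise for a topic."""
    return max(worker_profiles,
               key=lambda w: worker_profiles[w].get(task_topic, 0.0))
```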

    A Capability Requirements Approach for Predicting Worker Performance in Crowdsourcing

    Assigning heterogeneous tasks to workers is an important challenge for crowdsourcing platforms. Current approaches to task assignment have primarily focused on content-based approaches, qualifications, or work history. We propose an alternative and complementary approach that focuses on the capabilities workers employ to perform tasks. First, we model various tasks according to the human capabilities required to perform them. Second, we capture capability traces from crowd workers’ performance on existing tasks. Third, we use these capability traces to predict the performance of workers on new tasks and make task routing decisions. We evaluate the effectiveness of our approach on three different tasks: fact verification, image comparison, and information extraction. The results demonstrate that we can predict workers’ performance based on worker capabilities. We also highlight limitations and extensions of the proposed approach.
    Keywords: microtask, taxonomy, crowdsourcing, performance
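    A minimal sketch of the three-step idea under stated assumptions: tasks are annotated with required-capability weights, a worker's capability trace is accumulated from scored work on past tasks, and performance on a new task is predicted from the overlap between the two. The capability names, task weights, and averaging scheme are illustrative, not the paper's model.

```python
# Sketch of capability-based performance prediction (illustrative
# assumptions, not the paper's exact model).
from collections import defaultdict

# Step 1: tasks modeled by the human capabilities they require (weights).
TASKS = {
    "fact_verification":      {"reading": 0.5, "searching": 0.3, "judgement": 0.2},
    "image_comparison":       {"visual": 0.7, "judgement": 0.3},
    "information_extraction": {"reading": 0.6, "writing": 0.4},
}

def update_trace(trace, task, score):
    """Step 2: fold a scored task (score in [0, 1]) into a capability trace."""
    for cap, weight in TASKS[task].items():
        total, weight_sum = trace[cap]
        trace[cap] = (total + weight * score, weight_sum + weight)
    return trace

def predict(trace, new_task):
    """Step 3: predict performance on a new task from the capability trace."""
    estimate, weight_sum = 0.0, 0.0
    for cap, weight in TASKS[new_task].items():
        total, seen = trace.get(cap, (0.0, 0.0))
        cap_score = total / seen if seen else 0.5   # unseen capability: neutral prior
        estimate += weight * cap_score
        weight_sum += weight
    return estimate / weight_sum

# Example: a worker who did well on fact verification, poorly on image comparison.
trace = defaultdict(lambda: (0.0, 0.0))
update_trace(trace, "fact_verification", 0.9)
update_trace(trace, "image_comparison", 0.4)
print(predict(trace, "information_extraction"))
```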