CrowdED: Guideline for Optimal Crowdsourcing Experimental Design
Crowdsourcing involves creating HITs (Human Intelligence Tasks), submitting them to a crowdsourcing platform, and providing a monetary reward for each HIT. One advantage of crowdsourcing is that the tasks can be highly parallelized: the work is performed by a large number of workers in a decentralized setting. The design also offers a means to cross-check the accuracy of the answers by assigning each task to more than one person and relying on majority consensus, and to reward workers according to their performance and productivity. Since each worker is paid per task, costs can increase significantly, irrespective of the overall accuracy of the results. Thus, an important question that arises when designing such crowdsourcing tasks is how many workers to employ and how many tasks to assign to each worker when dealing with large amounts of tasks. The main research question we aim to answer is: 'Can we a priori estimate an optimal assignment of workers and tasks to obtain maximum accuracy on all tasks?'. To this end, we introduce CrowdED, a two-staged statistical guideline for optimal crowdsourcing experimental design. We describe the algorithm and present preliminary results and discussion. We implement the algorithm in Python and make it openly available on GitHub, and we provide a Jupyter Notebook and an R Shiny app for users to re-use, interact with, and apply in their own crowdsourcing experiments.
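The redundancy-versus-cost trade-off described above can be sketched in a few lines. This is only an illustration of majority consensus and per-HIT pricing, not CrowdED's actual estimation algorithm; the function names and the reward figure are hypothetical.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant answers for one task by majority consensus.
    Ties are broken arbitrarily by Counter ordering."""
    return Counter(answers).most_common(1)[0][0]

def total_cost(n_tasks, workers_per_task, reward_per_hit):
    """Cost grows linearly with redundancy, regardless of result accuracy."""
    return n_tasks * workers_per_task * reward_per_hit

# Three workers answer the same HIT; consensus picks the repeated label.
print(majority_vote(["cat", "dog", "cat"]))  # cat
# 100 tasks, 3-way redundancy, $0.05 per HIT.
print(total_cost(100, 3, 0.05))  # 15.0
```

The point of the cost function is exactly the tension the abstract raises: adding workers per task improves the consensus check but scales the bill linearly, which is why an a priori estimate of the assignment is valuable.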
ConStance: Modeling Annotation Contexts to Improve Stance Classification
Manual annotations are a prerequisite for many applications of machine
learning. However, weaknesses in the annotation process itself are easy to
overlook. In particular, scholars often choose what information to give to
annotators without examining these decisions empirically. For subjective tasks
such as sentiment analysis, sarcasm, and stance detection, such choices can
impact results. Here, for the task of political stance detection on Twitter, we
show that providing too little context can result in noisy and uncertain
annotations, whereas providing too strong a context may cause it to outweigh
other signals. To characterize and reduce these biases, we develop ConStance, a
general model for reasoning about annotations across information conditions.
Given conflicting labels produced by multiple annotators seeing the same
instances with different contexts, ConStance simultaneously estimates gold
standard labels and also learns a classifier for new instances. We show that
the classifier learned by ConStance outperforms a variety of baselines at
predicting political stance, while the model's interpretable parameters shed
light on the effects of each context.
Comment: To appear at EMNLP 201
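The core setting above, reconciling conflicting labels from annotators who saw the same instance under different context conditions, can be illustrated with a simple weighted vote. This is not ConStance's probabilistic model: the context names and reliability weights below are hypothetical placeholders for the parameters ConStance would learn.

```python
from collections import defaultdict

# Hypothetical reliability weight per context condition (stand-ins for
# learned parameters): richer context is assumed more reliable here.
CONTEXT_WEIGHT = {"tweet_only": 0.5, "tweet_plus_thread": 1.0}

def weighted_consensus(annotations):
    """Estimate a gold label from (context, label) pairs by weighted vote."""
    scores = defaultdict(float)
    for context, label in annotations:
        scores[label] += CONTEXT_WEIGHT[context]
    return max(scores, key=scores.get)

# Two low-context annotators disagree; one high-context annotation tips
# the estimate toward "pro".
annotations = [("tweet_only", "anti"),
               ("tweet_only", "pro"),
               ("tweet_plus_thread", "pro")]
print(weighted_consensus(annotations))  # pro
```

An unweighted majority vote would treat all three annotations equally; weighting by context condition captures, in miniature, why modeling the information each annotator saw changes the estimated gold label.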