Crowdsourcing has emerged as a new method for obtaining annotations to train machine learning models. While many variants of this process exist, they differ largely in how they motivate subjects to contribute and in the scale of their applications. To date, however, no study has helped a practitioner decide what form an annotation application should take to best reach its objectives within the constraints of a project. We first provide a faceted analysis of existing crowdsourcing annotation applications. We then use this analysis to offer recommendations on how practitioners can take advantage of crowdsourcing, and discuss our view of potential opportunities in this area.