
    Fairness and Transparency in Crowdsourcing

    Despite the success of crowdsourcing, the question of ethics has not yet been addressed in its entirety. Existing efforts have studied fairness in worker compensation and in helping requesters detect malevolent workers. In this paper, we propose fairness axioms that generalize existing work and pave the way to studying fairness in task assignment, task completion, and worker compensation. Transparency, on the other hand, has been addressed through the development of plug-ins and forums to track workers' performance and rate requesters. As with fairness, we define transparency axioms and advocate addressing transparency in a holistic manner by providing declarative specifications. We also discuss how fairness and transparency could be enforced and evaluated in a crowdsourcing platform.

    Towards an integrated crowdsourcing definition

    Crowdsourcing is a relatively recent concept that encompasses many practices. This diversity leads to a blurring of the limits of crowdsourcing, which may be identified with virtually any type of internet-based collaborative activity, such as co-creation or user innovation. Varying definitions of crowdsourcing exist, and therefore some authors present certain specific examples of crowdsourcing as paradigmatic, while others present the same examples as the opposite. In this article, existing definitions of crowdsourcing are analysed to extract common elements and to establish the basic characteristics of any crowdsourcing initiative. Based on these existing definitions, an exhaustive and consistent definition of crowdsourcing is presented and contrasted in 11 cases. (Estellés-Arolas, E., & González-Ladrón-de-Guevara, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200. doi:10.1177/0165551512437638)

    Conceptualisation of "crowdsourcing" term in management sciences

    Background. Crowdsourcing is a relatively new concept, yet it has been attracting increasing interest among researchers. This interest stems from its potential: it enables improving business processes, creating open innovations, building competitive advantage, accessing the experience, information, skills, and work of the crowd, solving problems, managing crises, expanding an organisation's existing activities and offer, building the organisation's image, improving communication with its environment, and optimising the costs of its activities. However, although crowdsourcing is one of the emerging directions of research within management sciences, its exploration is hindered by a peculiar difficulty, which may result from incoherence in the conceptualisation or explication of the term. Research aims. The aim of this article is to conceptualise crowdsourcing within management sciences, building on existing research efforts. The article presents a proposal for conceptualising the notion of crowdsourcing, including its levels. Methodology. To specify, evaluate, and identify the existing state of knowledge on crowdsourcing, a systematic literature review was conducted. It allowed the results of similar research to be identified, selected, and critically analysed, and on that basis the earlier findings of other researchers were extended. The largest full-text databases, i.e. EBSCO, Elsevier/Springer, Emerald, ProQuest, Scopus, and ISI Web of Science, which include the majority of journals on strategic management, were analysed. To establish the state of knowledge and existing findings, the Polish databases BazEkon and CEON were also reviewed. In total, 54 publications from the English-language databases and 41 from the Polish-language databases, covering the period 2006-2017, were analysed. Key findings. The review of the scientific output revealed incoherence in the conceptualisation of the term crowdsourcing. The approaches proposed in the existing literature are inadequate and do not allow for a full understanding of crowdsourcing.

    Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms

    Paid crowdsourcing platforms suffer from low-quality work and unfair rejections, but paradoxically, most workers and requesters have high reputation scores. These inflated scores, which make high-quality work and workers difficult to find, stem from social pressure to avoid giving negative feedback. We introduce Boomerang, a reputation system for crowdsourcing that elicits more accurate feedback by rebounding the consequences of feedback directly back onto the person who gave it. With Boomerang, requesters find that their highly-rated workers gain earliest access to their future tasks, and workers find tasks from their highly-rated requesters at the top of their task feed. Field experiments verify that Boomerang causes both workers and requesters to provide feedback that is more closely aligned with their private opinions. Inspired by a game-theoretic notion of incentive-compatibility, Boomerang opens opportunities for interaction design to incentivize honest reporting over strategic dishonesty.
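    To make the rebounding idea concrete, the sketch below (a minimal illustration under assumed data structures and field names, not the authors' implementation) orders a worker's task feed by the rating that worker previously gave each requester, so the worker's own feedback directly shapes what they see first.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    requester: str
    reward: float

@dataclass
class Worker:
    # Ratings (1-5) this worker previously gave each requester; field name is an assumption.
    requester_ratings: dict = field(default_factory=dict)

def boomerang_feed(worker: Worker, tasks: list, default_rating: float = 3.0) -> list:
    """Order a worker's task feed by the rating the worker gave each requester.

    Tasks from requesters the worker rated highly float to the top, so the
    consequences of the worker's own feedback rebound onto their own feed.
    """
    return sorted(
        tasks,
        key=lambda t: worker.requester_ratings.get(t.requester, default_rating),
        reverse=True,
    )

# Example: the worker rated requester "A" 5 and "B" 2, so A's task ranks first.
worker = Worker(requester_ratings={"A": 5.0, "B": 2.0})
tasks = [Task("Tag images", "B", 0.10), Task("Transcribe audio", "A", 0.50)]
for task in boomerang_feed(worker, tasks):
    print(task.requester, task.title)
```

    The same idea applies symmetrically on the requester side, e.g., granting earlier access to new tasks to the workers that requester rated highly.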

    The Four Pillars of Crowdsourcing: A Reference Model

    Crowdsourcing is an emerging business model where tasks are accomplished by the general public: the crowd. Crowdsourcing has been used in a variety of disciplines, including information systems development, marketing and operationalization. It has been shown to be a successful model in recommendation systems, multimedia design and evaluation, database design, and search engine evaluation. Despite the increasing academic and industrial interest in crowdsourcing, there is still a high degree of diversity in the interpretation and the application of the concept. This paper analyses the literature and deduces a taxonomy of crowdsourcing. The taxonomy represents the different configurations of crowdsourcing along its four main pillars: the crowdsourcer, the crowd, the crowdsourced task and the crowdsourcing platform. This taxonomy can serve researchers and developers as a reference model for stating concretely and precisely their particular interpretation and configuration of crowdsourcing.

    Multilingual Lexicography with a Focus on Less-Resourced Languages: Data Mining, Expert Input, Crowdsourcing, and Gamification

    This paper looks at the challenges that the Kamusi Project faces in acquiring open lexical data for less-resourced languages (LRLs), of a range, depth, and quality that can be useful within Human Language Technology (HLT). These challenges include accessing and reforming existing lexicons into interoperable data, recruiting language specialists and citizen linguists, and obtaining large volumes of quality input from the crowd. We introduce our crowdsourcing model, specifically (1) motivating participation using a “play to pay” system, games, social rewards, and material prizes; (2) steering the crowd to contribute structured and reliable data via targeted questions; and (3) evaluating participants’ input through crowd validation and statistical analysis to ensure that only trustworthy material is incorporated into Kamusi’s master database. We discuss the mobile application Kamusi has developed for crowd participation, which elicits high-quality structured data directly from each language’s speakers through narrow questions that can be answered with a minimum of time and effort. Through the integration of existing lexicons, expert input, and innovative methods of acquiring knowledge from the crowd, an accurate and reliable multilingual dictionary with a focus on LRLs will grow and become available as a free public resource.
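    As a rough illustration of the crowd-validation step described above (an assumed threshold scheme, not Kamusi's actual pipeline), an elicited answer might only be promoted to the master database once enough independent contributors agree:

```python
from collections import Counter

def validate_answers(answers, min_votes: int = 3, min_agreement: float = 0.7):
    """Promote a crowd-elicited answer only when it has enough independent support.

    answers: responses from different contributors to the same targeted question.
    Returns the winning answer, or None if the crowd has not (yet) converged.
    Thresholds are illustrative assumptions, not values from the paper.
    """
    if len(answers) < min_votes:
        return None
    counts = Counter(a.strip().lower() for a in answers)
    answer, votes = counts.most_common(1)[0]
    return answer if votes / len(answers) >= min_agreement else None

# Example: four of five contributors agree on the same form, so it is accepted.
print(validate_answers(["nyumba", "nyumba", "Nyumba", "nyumbani", "nyumba"]))
```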

    The motivations and experiences of the on-demand mobile workforce

    On-demand mobile workforce applications match physical-world tasks with willing workers. These systems offer to help conserve resources, streamline courses of action, and increase market efficiency for micro- and mid-level tasks, from verifying the existence of a pothole to walking a neighbor's dog. This study reports on the motivations and experiences of individuals who regularly complete physical-world tasks posted in on-demand mobile workforce marketplaces. Data collection included semi-structured interviews with members (workers) of two different services. The analysis revealed the main drivers for participating in an on-demand mobile workforce, including desires for monetary compensation and for control over schedules and task selection. We also reveal the main reasons for task selection, which involve situational factors, convenient physical locations, and task requester profile information. Finally, we discuss the key characteristics of the most worthwhile tasks and offer implications for novel crowdsourcing systems for physical-world tasks.

    A data-driven analysis of workers' earnings on Amazon Mechanical Turk

    A growing number of people are working as part of online crowd work, which is often thought to be low-wage work. However, we know little about the actual wage distribution in practice and what causes low or high earnings in this setting. We recorded 2,676 workers performing 3.8 million tasks on Amazon Mechanical Turk. Our task-level analysis revealed that workers earned a median hourly wage of only about 2 USD/h, and only 4% earned more than 7.25 USD/h. While the average requester pays more than 11 USD/h, lower-paying requesters post much more work. Our wage calculations are influenced by how unpaid work is accounted for, e.g., time spent searching for tasks, working on tasks that are rejected, and working on tasks that are ultimately not submitted. We further explore the characteristics of tasks and working patterns that yield higher hourly wages. Our analysis informs platform design and worker tools to create a more positive future for crowd work.
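    To illustrate the accounting issue, the following sketch (hypothetical session figures, not the authors' dataset or code) shows how the effective hourly wage shifts depending on whether unpaid time, such as searching for tasks and working on rejected work, is included in the denominator.

```python
def hourly_wage(paid_usd: float, paid_seconds: float, unpaid_seconds: float = 0.0,
                include_unpaid: bool = True) -> float:
    """Effective hourly wage in USD/h.

    paid_usd: rewards actually received (rejected or unsubmitted work earns nothing).
    paid_seconds: time spent on tasks that were paid.
    unpaid_seconds: time spent searching or on rejected/unsubmitted tasks.
    """
    seconds = paid_seconds + (unpaid_seconds if include_unpaid else 0.0)
    return 3600.0 * paid_usd / seconds if seconds else 0.0

# Hypothetical worker session: 1.80 USD earned over 30 minutes of paid work,
# plus 20 minutes of searching and rejected work.
print(round(hourly_wage(1.80, 30 * 60, include_unpaid=False), 2))  # 3.6 USD/h, unpaid time ignored
print(round(hourly_wage(1.80, 30 * 60, 20 * 60), 2))               # 2.16 USD/h, unpaid time counted
```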