22,303 research outputs found
Towards an integrated crowdsourcing definition
Crowdsourcing is a relatively recent concept that encompasses many practices. This diversity blurs the limits of crowdsourcing, which may be identified with virtually any type of internet-based collaborative activity, such as co-creation or user innovation. Because definitions of crowdsourcing vary, some authors present certain specific examples of crowdsourcing as paradigmatic, while others present the same examples as the opposite. In this article, existing definitions of crowdsourcing are analysed to extract common elements and to establish the basic characteristics of any crowdsourcing initiative. Based on these existing definitions, an exhaustive and consistent definition of crowdsourcing is presented and contrasted against 11 cases. Estellés-Arolas, E.; González-Ladrón-de-Guevara, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200. doi:10.1177/0165551512437638
Recommendation systems and crowdsourcing: a good wedding for enabling innovation? Results from technology affordances and constraints theory
Recommendation systems have come a long way since their first appearance on e-commerce platforms. Since then, evolved recommendation systems have been successfully integrated into social networks. Now it is time to test their usability and replicate their success in exciting new areas of web-enabled phenomena. One of these is crowdsourcing. Research in the Information Systems (IS) field is investigating the need, benefits and challenges of linking the two phenomena. To date, empirical works have only highlighted the need to implement these techniques for task assignment in crowdsourced distributed work platforms, and the benefits derived for contributors and firms. We review the variety of tasks that can be crowdsourced through these platforms and theoretically evaluate the efficiency of using recommendation systems to recommend tasks in creative crowdsourcing platforms. Adopting Technology Affordances and Constraints Theory, an emerging perspective in the IS literature for understanding technology use and its consequences, we anticipate the tensions that this implementation can generate.
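The task-recommendation idea described above can be sketched as a toy content-based recommender that scores tasks by the overlap between a contributor's skill tags and each task's tags. This is an illustrative simplification of our own; the function names, tag scheme, and Jaccard scoring are assumptions, not the paper's model.

```python
# Toy content-based task recommender for a crowdsourcing platform.
# All names here (recommend_tasks, the tag sets) are illustrative assumptions.
def recommend_tasks(worker_skills, tasks, top_n=2):
    """Rank tasks by Jaccard similarity between skill and tag sets."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    scored = sorted(tasks, key=lambda t: jaccard(worker_skills, t["tags"]),
                    reverse=True)
    return [t["name"] for t in scored[:top_n]]

tasks = [{"name": "logo design", "tags": {"design", "branding"}},
         {"name": "data entry", "tags": {"typing"}},
         {"name": "ad copy", "tags": {"writing", "branding"}}]
print(recommend_tasks({"design", "writing"}, tasks))
# -> ['logo design', 'ad copy']
```

A production system would of course learn from past contributions rather than rely on hand-tagged skills, but the matching step has this general shape.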
Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by a
single worker. Tasks might have dependencies among each other. In this study,
we propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. Using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how they are related to distributed systems and other
areas of knowledge. Comment: 3 figures, 1 table
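The dependency and assignment challenges mentioned above can be pictured as scheduling over a small DAG of tasks: only tasks whose dependencies are complete may be handed to idle workers. This is a minimal sketch under our own assumptions; the class and function names are not from the paper.

```python
# Hypothetical sketch: a human computation application as tasks with
# dependencies, where only "ready" tasks can be assigned to workers.
class Task:
    def __init__(self, name, deps=()):
        self.name = name
        self.deps = set(deps)   # names of tasks that must finish first
        self.done = False

def assign_ready_tasks(tasks, workers):
    """Pair each idle worker with a task whose dependencies are all done."""
    by_name = {t.name: t for t in tasks}
    ready = [t for t in tasks
             if not t.done and all(by_name[d].done for d in t.deps)]
    return list(zip(workers, ready))  # (worker, task) assignments

tasks = [Task("transcribe"), Task("verify", deps=["transcribe"])]
print([(w, t.name) for w, t in assign_ready_tasks(tasks, ["alice", "bob"])])
# -> [('alice', 'transcribe')]  ("verify" waits on its dependency)
```

Human aspects enter precisely here: unlike machine processors, a worker may abandon or botch "transcribe", which is why the paper's dimensions include fault prevention and tolerance alongside dependency management.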
Empirical Methodology for Crowdsourcing Ground Truth
The process of gathering ground truth data through human annotation is a
major bottleneck in the use of information extraction methods for populating
the Semantic Web. Crowdsourcing-based approaches are gaining popularity in the
attempt to solve the issues related to volume of data and lack of annotators.
Typically these practices use inter-annotator agreement as a measure of
quality. However, in many domains, such as event detection, there is ambiguity
in the data, as well as a multitude of perspectives of the information
examples. We present an empirically derived methodology for efficiently
gathering ground truth data in a diverse set of use cases covering a variety
of domains and annotation tasks. Central to our approach is the use of
CrowdTruth metrics that capture inter-annotator disagreement. We show that
measuring disagreement is essential for acquiring a high quality ground truth.
We achieve this by comparing the quality of the data aggregated with CrowdTruth
metrics with majority vote, over a set of diverse crowdsourcing tasks: Medical
Relation Extraction, Twitter Event Identification, News Event Extraction and
Sound Interpretation. We also show that an increased number of crowd workers
leads to growth and stabilization in the quality of annotations, going against
the usual practice of employing a small number of annotators. Comment: in publication at the Semantic Web Journal
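The contrast drawn above between majority vote and disagreement-aware aggregation can be illustrated with a few lines of code. The actual CrowdTruth metrics are considerably richer (vector-based, with worker and unit scores); the simple "clarity" score below is our own naming and a deliberate simplification.

```python
# Majority vote discards disagreement; a per-unit agreement score keeps it,
# flagging ambiguous units. Simplified stand-in for CrowdTruth-style metrics.
from collections import Counter

def majority_vote(annotations):
    """Winner-takes-all label, discarding all disagreement information."""
    return Counter(annotations).most_common(1)[0][0]

def label_clarity(annotations):
    """Fraction of annotators agreeing with the majority label: low values
    mark ambiguous units that majority vote would silently mislabel."""
    counts = Counter(annotations)
    return counts.most_common(1)[0][1] / len(annotations)

votes = ["event", "event", "no-event", "event", "no-event"]
print(majority_vote(votes))   # -> event
print(label_clarity(votes))   # -> 0.6, an ambiguous unit worth inspecting
```

With more workers per unit, such a score stabilizes, which is consistent with the paper's observation that larger crowds improve and stabilize annotation quality.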
FEMwiki: crowdsourcing semantic taxonomy and wiki input to domain experts while keeping editorial control: Mission Possible!
Highly specialized professional communities of practice (CoP) inevitably need to operate across geographically dispersed areas - members frequently need to interact and share professional content. Crowdsourcing using wiki platforms provides a novel way for a professional community to share ideas and collaborate on content creation, curation, maintenance and sharing. This is the aim of the Field Epidemiological Manual wiki (FEMwiki) project, enabling online collaborative content sharing and interaction for field epidemiologists around a growing training wiki resource. However, while user contributions are the driving force for content creation, any medical information resource needs to keep editorial control and quality assurance. This requirement is typically in conflict with community-driven Web 2.0 content creation. To maximize the opportunities for the network of epidemiologists actively editing the wiki content while keeping quality and editorial control, a novel structure was developed to encourage crowdsourcing - support for dual versioning of each wiki page, enabling maintenance of expert-reviewed pages in parallel with user-updated versions, with clear navigation between the related versions. Secondly, the training wiki content needs to be organized in a semantically-enhanced taxonomical navigation structure enabling domain experts to find information easily on a growing site. This also provides an ideal opportunity for crowdsourcing. We developed a user-editable collaborative interface crowdsourcing live taxonomy maintenance to the community of field epidemiologists by embedding the taxonomy in the training wiki platform and generating the semantic navigation hierarchy on the fly. Launched in 2010, FEMwiki is a real-world service supporting field epidemiologists in Europe and worldwide.
The crowdsourcing success was evaluated by assessing the number and type of changes made by the professional network of epidemiologists over several months. The evaluation demonstrated that crowdsourcing encourages users to edit existing content and create new content, and also leads to expansion of the domain taxonomy.
Crime applications and social machines: crowdsourcing sensitive data
The authors explore some issues with the United Kingdom (U.K.) crime reporting and recording systems, which currently produce Open Crime Data. The availability of Open Crime Data seems to create a potential data ecosystem which would encourage crowdsourcing, or the creation of social machines, in order to counter some of these issues. While such solutions are enticing, we suggest that the theoretical solution in fact brings to light fairly compelling problems, which highlight some limitations of crowdsourcing as a means of addressing Berners-Lee's "social constraint". The authors present a thought experiment - a Gedankenexperiment - in order to explore the implications, both good and bad, of a social machine in such a sensitive space, and suggest a Web Science perspective to pick apart the ramifications of this thought experiment as a theoretical approach to the characterisation of social machines.
Engineering Crowdsourced Stream Processing Systems
A crowdsourced stream processing system (CSP) is a system that incorporates
crowdsourced tasks in the processing of a data stream. This can be seen as
enabling crowdsourcing work to be applied on a sample of large-scale data at
high speed, or equivalently, enabling stream processing to employ human
intelligence. It also leads to a substantial expansion of the capabilities of
data processing systems. Engineering a CSP system requires the combination of
human and machine computation elements. From a general systems theory
perspective, this means taking into account inherited as well as emerging
properties from both these elements. In this paper, we position CSP systems
within a broader taxonomy, outline a series of design principles and evaluation
metrics, present an extensible framework for their design, and describe several
design patterns. We showcase the capabilities of CSP systems by performing a
case study that applies our proposed framework to the design and analysis of a
real system (AIDR) that classifies social media messages during time-critical
crisis events. Results show that compared to a pure stream processing system,
AIDR can achieve a higher data classification accuracy, while compared to a
pure crowdsourcing solution, the system makes better use of human workers by
requiring much less manual work effort.
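The hybrid behaviour described above - machine classification where confidence is high, crowd work where it is not - can be sketched as a simple routing step on each stream item. The threshold, function names, and toy classifier below are illustrative assumptions of ours, not AIDR's actual API.

```python
# Minimal sketch of the machine/crowd routing idea behind a CSP system:
# confident machine predictions pass through; uncertain items go to workers.
def route(message, classify, threshold=0.8):
    """Return ("machine", label) or ("crowd", message) depending on
    classifier confidence."""
    label, confidence = classify(message)
    if confidence >= threshold:
        return ("machine", label)
    return ("crowd", message)

# Toy classifier: confident only for messages mentioning "earthquake".
def toy_classify(msg):
    return ("relevant", 0.95) if "earthquake" in msg else ("irrelevant", 0.5)

print(route("earthquake hits city", toy_classify))  # -> ('machine', 'relevant')
print(route("nice weather today", toy_classify))    # -> routed to the crowd
```

Tuning the threshold trades accuracy against crowd workload, which mirrors the paper's finding that a hybrid system beats a pure stream processor on accuracy while needing far less manual effort than pure crowdsourcing.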