
    A U.S. Research Roadmap for Human Computation

    The Web has made it possible to harness human cognition en masse to achieve new capabilities. Some of these successes are well known; for example, Wikipedia has become the go-to place for basic information on all things; Duolingo engages millions of people in real-life translation of text, while simultaneously teaching them to speak foreign languages; and fold.it has enabled public-driven scientific discoveries by recasting complex biomedical challenges into popular online puzzle games. These and other early successes hint at the tremendous potential for future crowd-powered capabilities for the benefit of health, education, science, and society. In the process, a new field called Human Computation has emerged to better understand, replicate, and improve upon these successes through scientific research. Human Computation refers to the science that underlies online crowd-powered systems and was the topic of a recent visioning activity in which a representative cross-section of researchers, industry practitioners, visionaries, funding agency representatives, and policy makers came together to understand what makes crowd-powered systems successful. Teams of experts considered past, present, and future human computation systems to explore which kinds of crowd-powered systems have the greatest potential for societal impact and which kinds of research will best enable the efficient development of new crowd-powered systems to achieve this impact. This report summarizes the products and findings of those activities as well as the unconventional process and activities employed by the workshop, which were informed by human computation research. Comment: 32 pages, 25 figures, Workshop report from the CRA-sponsored Human Computation Roadmap Summit: P. Michelucci, L. Shanley, J. Dickinson, and H. Hirsh, A U.S. Research Roadmap for Human Computation, Computing Community Consortium Technical Report, 201

    Collective Creativity: Where we are and where we might go

    Creativity is individual, and it is social. The social aspects of creativity have become of increasing interest as systems have emerged that mobilize large numbers of people to engage in creative tasks. We examine research related to collective intelligence and differentiate work on collective creativity from other collective activities by analyzing systems with respect to the tasks that are performed and the outputs that result. Three types of systems are discussed: games, contests and networks. We conclude by suggesting how systems that generate collective creativity can be improved and how new systems might be constructed. Comment: Presented at Collective Intelligence conference, 2012 (arXiv:1204.2991)

    Beyond AMT: An Analysis of Crowd Work Platforms

    While Amazon's Mechanical Turk (AMT) helped launch the paid crowd work industry eight years ago, many new vendors now offer a range of alternative models. Despite this, little crowd work research has explored other platforms. Such near-exclusive focus risks letting AMT's particular vagaries and limitations overly shape our understanding of crowd work and the research questions and directions being pursued. To address this, we present a cross-platform content analysis of seven crowd work platforms. We begin by reviewing how AMT assumptions and limitations have influenced prior research. Next, we formulate key criteria for characterizing and differentiating crowd work platforms. Our analysis of platforms contrasts them with AMT, informing both methodology of use and directions for future research. Our cross-platform analysis represents the only such study by researchers for researchers, intended to further enrich the diversity of research on crowd work and accelerate progress
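    To make the idea of cross-platform criteria concrete, the following Python sketch records a few hypothetical characteristics per platform so that platforms can be contrasted along the same dimensions; the fields and the second platform are illustrative assumptions, not the paper's actual rubric.

        # Hypothetical criteria for contrasting crowd work platforms; the fields and the
        # second platform are illustrative assumptions, not the paper's actual rubric.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class PlatformProfile:
            name: str
            worker_pool: str              # e.g. "open" vs. "curated"
            task_types: List[str]         # kinds of work the platform supports
            quality_control: str          # e.g. "redundancy", "gold questions", "managed review"
            has_api: bool                 # programmatic task posting

        profiles = [
            PlatformProfile("AMT", "open", ["microtasks"], "redundancy", True),
            PlatformProfile("ExamplePlatform", "curated", ["writing", "transcription"], "managed review", False),
        ]

        # Group platforms by quality-control model to contrast them along one dimension.
        by_qc = {}
        for p in profiles:
            by_qc.setdefault(p.quality_control, []).append(p.name)
        print(by_qc)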

    Collaborative Interactive Learning -- A clarification of terms and a differentiation from other research fields

    The field of collaborative interactive learning (CIL) aims at developing and investigating the technological foundations for a new generation of smart systems that support humans in their everyday life. While the concept of CIL has already been carved out in detail (including the fields of dedicated CIL and opportunistic CIL) and many research objectives have been stated, there is still the need to clarify some terms such as information, knowledge, and experience in the context of CIL and to differentiate CIL from recent and ongoing research in related fields such as active learning, collaborative learning, and others. Both aspects are addressed in this paper

    Crowdsourcing for Bioinformatics

    Motivation: Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base construction and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, while in others we need just a few with very rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Results: Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types including volunteer labor, games with a purpose, microtask markets, and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions. Comment: Revie
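    As a rough illustration of such a matching guide, the Python sketch below maps a problem's difficulty and incentive structure onto the system types named in the abstract; the decision rules are assumptions made for illustration, not the paper's recommendations.

        # Illustrative only: the decision rules below are assumptions, not the paper's guide.
        def suggest_crowdsourcing_system(difficulty, paid, engaging_as_game=False):
            """Map rough problem characteristics onto the system types named in the abstract."""
            if difficulty == "high":
                return "open innovation contest"      # high-difficulty "megatask"
            # large-volume "microtasks"
            if paid:
                return "microtask market"
            return "game with a purpose" if engaging_as_game else "volunteer labor"

        print(suggest_crowdsourcing_system(difficulty="low", paid=False))  # volunteer labor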

    Efficient Crowd Exploration of Large Networks: The Case of Causal Attribution

    Accurately and efficiently crowdsourcing complex, open-ended tasks can be difficult, as crowd participants tend to favor short, repetitive "microtasks". We study the crowdsourcing of large networks where the crowd provides the network topology via microtasks. Crowds can explore many types of social and information networks, but we focus on the network of causal attributions, an important network that signifies cause-and-effect relationships. We conduct experiments on Amazon Mechanical Turk (AMT) testing how workers propose and validate individual causal relationships and introduce a method for independent crowd workers to explore large networks. The core of the method, Iterative Pathway Refinement, is a theoretically principled mechanism for efficient exploration via microtasks. We evaluate the method using synthetic networks and apply it on AMT to extract a large-scale causal attribution network, then investigate the structure of this network as well as the activity patterns and efficiency of the workers who constructed this network. Worker interactions reveal important characteristics of causal perception, and the network data they generate can improve our understanding of causality and causal inference. Comment: 25 pages, 14 figures, in CSCW'1
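    A minimal Python sketch of the general propose-then-validate loop behind this kind of crowd exploration is shown below; the worker functions are stand-ins for microtasks, and the loop only gestures at the paper's Iterative Pathway Refinement mechanism rather than reproducing it.

        # Simplified propose-then-validate loop; the worker functions are stand-ins for
        # microtasks and the loop only gestures at the paper's Iterative Pathway Refinement.
        import random

        def propose_causes(effect):
            """Stand-in for a microtask asking workers to name causes of `effect`."""
            return [f"cause_of_{effect}_{i}" for i in range(2)]

        def validated(cause, effect, n_votes=3, threshold=2):
            """Stand-in for a validation microtask: majority vote over n_votes workers."""
            votes = sum(random.random() < 0.7 for _ in range(n_votes))
            return votes >= threshold

        def explore(seed, max_rounds=2):
            edges, frontier = set(), [seed]
            for _ in range(max_rounds):
                next_frontier = []
                for effect in frontier:
                    for cause in propose_causes(effect):
                        if validated(cause, effect):
                            edges.add((cause, effect))
                            next_frontier.append(cause)  # keep refining the pathway upstream
                frontier = next_frontier
            return edges

        print(explore("obesity"))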

    A Survey on Data Collection for Machine Learning: a Big Data -- AI Integration Perspective

    Data collection is a major bottleneck in machine learning and an active research topic in multiple communities. There are two main reasons why data collection has recently become a critical issue. First, as machine learning is becoming more widely used, we are seeing new applications that do not necessarily have enough labeled data. Second, unlike traditional machine learning, deep learning techniques automatically generate features, which saves feature engineering costs, but in return may require larger amounts of labeled data. Interestingly, recent research in data collection comes not only from the machine learning, natural language, and computer vision communities, but also from the data management community due to the importance of handling large amounts of data. In this survey, we perform a comprehensive study of data collection from a data management point of view. Data collection largely consists of data acquisition, data labeling, and improvement of existing data or models. We provide a research landscape of these operations, provide guidelines on which technique to use when, and identify interesting research challenges. The integration of machine learning and data management for data collection is part of a larger trend of Big Data and Artificial Intelligence (AI) integration and opens many opportunities for new research. Comment: 20 page
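    The survey's three operations can be pictured as a rough decision helper; the Python sketch below is an assumption made for illustration, not the survey's actual guidelines, and the step descriptions are only examples of techniques in each category.

        # Rough decision helper over the survey's three operations; the branching
        # logic is an assumption made for illustration, not the survey's guidelines.
        def data_collection_plan(enough_data, enough_labels, model_is_weak):
            steps = []
            if not enough_data:
                steps.append("data acquisition (discover, augment, or generate datasets)")
            if not enough_labels:
                steps.append("data labeling (crowdsourcing, weak supervision, active learning)")
            if model_is_weak and enough_data and enough_labels:
                steps.append("improve existing data or models (cleaning, relabeling)")
            return steps

        print(data_collection_plan(enough_data=True, enough_labels=False, model_is_weak=True))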

    Toward a System Building Agenda for Data Integration

    In this paper we argue that the data management community should devote far more effort to building data integration (DI) systems, in order to truly advance the field. Toward this goal, we make three contributions. First, we draw on our recent industrial experience to discuss the limitations of current DI systems. Second, we propose an agenda to build a new kind of DI system to address these limitations. These systems guide users through the DI workflow, step by step. They provide tools to address the "pain points" of the steps, and these tools are built on top of the Python data science and Big Data ecosystem (PyData). We discuss how to foster an ecosystem of such tools within PyData, then use it to build DI systems for collaborative/cloud/crowd/lay user settings. Finally, we discuss ongoing work at Wisconsin, which suggests that these DI systems are highly promising and that building them raises many interesting research challenges.
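    To make the step-by-step workflow idea concrete, here is a minimal pandas sketch of one common DI step: blocking followed by a crude entity matcher. It is not the Wisconsin tooling the paper refers to; the blocking rule, similarity measure, threshold, and example records are illustrative assumptions.

        # Minimal pandas sketch of one DI step: blocking on zip code, then a crude
        # token-overlap matcher. The rule and threshold are illustrative assumptions.
        import pandas as pd

        a = pd.DataFrame({"id": [1, 2], "name": ["Acme Corp", "Globex"], "zip": ["53703", "10001"]})
        b = pd.DataFrame({"id": [10, 11], "name": ["ACME Corporation", "Initech"], "zip": ["53703", "94105"]})

        # Blocking: only compare record pairs that share a zip code, avoiding the full cross product.
        candidates = a.merge(b, on="zip", suffixes=("_a", "_b"))

        def name_similarity(x, y):
            """Crude Jaccard similarity over lower-cased tokens; a real tool would learn a matcher."""
            xs, ys = set(x.lower().split()), set(y.lower().split())
            return len(xs & ys) / max(len(xs | ys), 1)

        candidates["sim"] = [name_similarity(r["name_a"], r["name_b"]) for _, r in candidates.iterrows()]
        matches = candidates[candidates["sim"] >= 0.3]
        print(matches[["id_a", "id_b", "sim"]])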

    TurKPF: TurKontrol as a Particle Filter

    TurKontrol, an algorithm presented in (Dai et al. 2010), uses a POMDP to model and control an iterative workflow for crowdsourced work. Here, TurKontrol is re-implemented as "TurKPF," which uses a Particle Filter to reduce computation time and memory usage. Most importantly, in our experimental environment with default parameter settings, the action is chosen nearly instantaneously. Through a series of experiments we see that TurKPF and TurKontrol perform similarly. Comment: 8 pages, 6 figures, formula appendi
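    For intuition, the sketch below shows a generic bootstrap particle filter tracking the latent quality of a crowdsourced artifact from noisy up/down votes. The observation model and parameters are assumptions; this is not the TurKPF algorithm itself, only the general idea of replacing an exact POMDP belief update with a set of particles.

        # Generic bootstrap particle filter over an artifact's latent quality in [0, 1],
        # updated from noisy worker votes; the observation model is an assumption and this
        # is not the TurKPF algorithm itself, only the belief-tracking idea behind it.
        import random

        N = 1000
        particles = [random.random() for _ in range(N)]   # hypotheses about latent quality
        weights = [1.0 / N] * N

        def vote_likelihood(vote_up, quality, worker_accuracy=0.8):
            """Probability of an up-vote given latent quality (assumed observation model)."""
            p_up = worker_accuracy * quality + (1 - worker_accuracy) * (1 - quality)
            return p_up if vote_up else 1 - p_up

        def update(vote_up):
            global particles, weights
            weights = [w * vote_likelihood(vote_up, q) for w, q in zip(weights, particles)]
            total = sum(weights)
            weights = [w / total for w in weights]
            # Resample to avoid weight degeneracy, then reset to uniform weights.
            particles = random.choices(particles, weights=weights, k=N)
            weights = [1.0 / N] * N

        for vote in [True, True, False, True]:            # a few simulated worker votes
            update(vote)
        print(sum(particles) / N)                         # posterior mean quality estimate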

    Atelier: Repurposing Expert Crowdsourcing Tasks as Micro-internships

    Expert crowdsourcing marketplaces have untapped potential to empower workers' career and skill development. Currently, many workers cannot afford to invest the time and sacrifice the earnings required to learn a new skill, and a lack of experience makes it difficult to get job offers even if they do. In this paper, we seek to lower the threshold to skill development by repurposing existing tasks on the marketplace as mentored, paid, real-world work experiences, which we refer to as micro-internships. We instantiate this idea in Atelier, a micro-internship platform that connects crowd interns with crowd mentors. Atelier guides mentor-intern pairs to break down expert crowdsourcing tasks into milestones, review intermediate output, and problem-solve together. We conducted a field experiment comparing Atelier's mentorship model to a non-mentored alternative on a real-world programming crowdsourcing task, finding that Atelier helped interns maintain forward progress and absorb best practices. Comment: CHI 201
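    As a small illustration of the milestone decomposition described above, the following Python sketch models a mentor-intern pair working through reviewable milestones; the fields and the example task are hypothetical, not Atelier's actual data model.

        # Hypothetical milestone structure for a mentor-intern pair; the fields and the
        # example task are assumptions, not Atelier's actual data model.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Milestone:
            description: str
            intern_output: str = ""
            mentor_feedback: str = ""
            approved: bool = False

        @dataclass
        class MicroInternship:
            task: str
            milestones: List[Milestone] = field(default_factory=list)

            def next_milestone(self) -> Optional[Milestone]:
                return next((m for m in self.milestones if not m.approved), None)

        job = MicroInternship("Fix a pagination bug in a client web app", [
            Milestone("Reproduce the bug and write a failing test"),
            Milestone("Implement and verify the fix"),
            Milestone("Refactor and document the change"),
        ])
        print(job.next_milestone().description)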