6,048 research outputs found

    Estimating Software Task Effort in Crowds

    A key task during software maintenance is the refinement and elaboration of emerging software issues, such as feature implementations and bug resolutions. It includes the annotation of software tasks with additional information, such as criticality, assignee, and estimated cost of resolution. This paper reports on a first study investigating the feasibility of using crowd workers, supplied with limited information about an issue and its project, to provide comparably accurate estimates using planning poker. The paper describes our adaptation of planning poker to crowdsourcing and our initial trials. The results demonstrate the feasibility and potential efficiency of using crowds to deliver estimates. We also review the additional benefit that asking crowds for an estimate brings, in terms of further elaboration of the details of an issue. Finally, we outline our plans for a more extensive evaluation of planning poker in crowds.
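    The abstract does not spell out how the crowd's card picks are combined. As a minimal sketch, one round of crowd planning poker could be aggregated with a median snapped to the nearest card; the deck, the median rule, and the convergence check below are assumptions for illustration, not the paper's method:

```python
from statistics import median

# A common planning-poker deck (story points); the paper's actual deck is unknown.
DECK = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_deck(value):
    """Snap a raw aggregate estimate to the nearest card in the deck."""
    return min(DECK, key=lambda card: abs(card - value))

def aggregate_round(estimates, spread_ratio=2.0):
    """Aggregate one round of crowd estimates.

    Returns (consensus, converged): the median snapped to a card, and
    whether the round's spread is tight enough to stop re-estimating.
    """
    consensus = snap_to_deck(median(estimates))
    lo, hi = min(estimates), max(estimates)
    if lo == hi:
        converged = True
    elif lo == 0:
        converged = False
    else:
        converged = hi / lo <= spread_ratio
    return consensus, converged
```

    In a real deployment the non-converged case would trigger another estimation round, ideally after workers see each other's rationales, which is where the issue-elaboration side benefit described above would come from.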

    Markerless Motion Capture in the Crowd

    This work uses crowdsourcing to obtain motion capture data from video recordings. The data is obtained by information workers who click repeatedly to indicate body configurations in the frames of a video, resulting in a model of 2D structure over time. We discuss techniques to optimize the tracking task and strategies for maximizing accuracy and efficiency. We show visualizations of a variety of motions captured with our pipeline, then apply reconstruction techniques to derive 3D structure.
    Comment: Presented at the Collective Intelligence conference, 2012 (arXiv:1204.2991).
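    The abstract does not say how redundant clicks from different workers are fused. A plausible minimal approach (the function names are illustrative, not the paper's) is a coordinate-wise median per joint per frame, which is robust to an occasional stray click:

```python
from statistics import median

def fuse_clicks(clicks):
    """Fuse several workers' clicks for one joint in one video frame.

    clicks: list of (x, y) pixel positions from independent workers.
    The coordinate-wise median discards a stray mis-click that a
    plain mean would average into the result.
    """
    xs, ys = zip(*clicks)
    return (median(xs), median(ys))

def track_joint(frames):
    """frames: per-frame lists of worker clicks -> fused 2D trajectory."""
    return [fuse_clicks(clicks) for clicks in frames]
```

    The fused 2D trajectories per joint are exactly the "model of 2D structure over time" that downstream 3D reconstruction would consume.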

    Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

    Crowd-powered conversational assistants have been shown to be more robust than automated systems, but this robustness comes at the cost of higher response latency and monetary cost. A promising direction is to combine the two approaches for high-quality, low-latency, and low-cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to innovate further on the underlying automated components in the context of a deployed open-domain dialog system.
    Comment: 10 pages. To appear in the Proceedings of the Conference on Human Factors in Computing Systems 2018 (CHI'18).
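    The abstract does not give Evorus's approval model, so the sketch below is a simplified stand-in rather than the system's actual learner: blend a per-source prior (how often a given chatbot's answers were accepted historically) with Laplace-smoothed live crowd votes, and auto-approve above a threshold. The formula, weights, and threshold are all assumptions.

```python
def approval_score(source_prior, upvotes, downvotes, smoothing=1.0):
    """Blend a per-source acceptance prior with live crowd votes.

    Laplace smoothing keeps the vote rate sane when few votes exist.
    """
    vote_rate = (upvotes + smoothing) / (upvotes + downvotes + 2 * smoothing)
    return 0.5 * source_prior + 0.5 * vote_rate

def auto_approve(source_prior, upvotes, downvotes, threshold=0.7):
    """Approve a candidate response without waiting for full crowd review."""
    return approval_score(source_prior, upvotes, downvotes) >= threshold
```

    The appeal of this shape of rule is that automation grows gradually: a new chatbot starts with a weak prior and needs crowd votes, while a consistently accepted one earns faster automatic approval.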

    Harnessing Crowds: Mapping the Genome of Collective Intelligence

    Over the past decade, the rise of the Internet has enabled the emergence of surprising new forms of collective intelligence. Examples include Google, Wikipedia, Threadless, and many others. To take advantage of the possibilities these new systems represent, it is necessary to go beyond just seeing them as a fuzzy collection of “cool” ideas. What is needed is a deeper understanding of how these systems work. This article offers a new framework to help provide that understanding. It identifies the underlying building blocks—to use a biological metaphor, the “genes”—at the heart of collective intelligence systems. These genes are defined by the answers to two pairs of key questions:
    – Who is performing the task? Why are they doing it?
    – What is being accomplished? How is it being done?
    The paper goes on to list the genes of collective intelligence—the possible answers to these key questions—and shows how combinations of genes comprise a “genome” that characterizes each collective intelligence system. In addition, the paper describes the conditions under which each gene is useful and the possibilities for combining and re-combining these genes to harness crowds effectively. Using this framework, managers can systematically consider many possible combinations of genes as they seek to develop new collective intelligence systems. (University of Maryland)

    Crowdsourcing in Computer Vision

    Computer vision systems require large amounts of manually annotated data to properly learn challenging visual concepts. Crowdsourcing platforms offer an inexpensive method to capture human knowledge and understanding for a vast number of visual perception tasks. In this survey, we describe the types of annotations computer vision researchers have collected using crowdsourcing, and how they have ensured that this data is of high quality while annotation effort is minimized. We begin by discussing data collection on both classic (e.g., object recognition) and recent (e.g., visual story-telling) vision tasks. We then summarize key design decisions for creating effective data collection interfaces and workflows, and present strategies for intelligently selecting the most important data instances to annotate. Finally, we conclude with some thoughts on the future of crowdsourcing in computer vision.
    Comment: A 69-page meta-review of the field; Foundations and Trends in Computer Graphics and Vision, 201
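    The simplest of the quality-control strategies such surveys cover is redundant labeling with majority vote, where the agreement fraction doubles as a cheap per-item confidence signal. A minimal sketch (not any specific system's implementation):

```python
from collections import Counter

def majority_label(labels):
    """Aggregate redundant crowd labels for one image or item.

    Returns (label, agreement): the most common label and the fraction
    of workers who chose it -- a cheap per-item quality signal that can
    be used to route low-agreement items to more workers or to experts.
    """
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)
```

    More sophisticated schemes weight each vote by an estimated per-worker accuracy, but plain majority vote is the usual baseline they are measured against.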

    Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems

    Crowdsourcing systems commonly face the problem of aggregating multiple judgments provided by potentially unreliable workers. In addition, several aspects of the design of efficient crowdsourcing processes, such as setting workers' bonuses, fair prices, and task time limits, require knowledge of the likely duration of the task at hand. Bringing this together, in this work we introduce a new time-sensitive Bayesian aggregation method that simultaneously estimates a task's duration and obtains reliable aggregations of crowdsourced judgments. Our method, called BCCTime, builds on the key insight that the time taken by a worker to perform a task is an important indicator of the likely quality of the produced judgment. To capture this, BCCTime uses latent variables to represent the uncertainty about the workers' completion times, the tasks' durations, and the workers' accuracy. To relate the quality of a judgment to the time a worker spends on a task, our model assumes that each task is completed within a latent time window within which all workers with a propensity to genuinely attempt the labelling task (i.e., no spammers) are expected to submit their judgments. In contrast, workers with a lower propensity to valid labeling, such as spammers, bots, or lazy labelers, are assumed to perform tasks considerably faster or slower than the time required by normal workers. Specifically, we use efficient message-passing Bayesian inference to learn approximate posterior probabilities of (i) the confusion matrix of each worker, (ii) the propensity to valid labeling of each worker, (iii) the unbiased duration of each task, and (iv) the true label of each task. Using two real-world public datasets for entity linking tasks, we show that BCCTime produces up to 11% more accurate classifications and up to 100% more informative estimates of a task's duration compared to state-of-the-art methods.
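    BCCTime itself is a Bayesian message-passing model; as a crude non-Bayesian proxy for its key insight (anomalously fast or slow completion times signal low-quality work), one could flag judgments whose log-time falls far outside a robust window. The cutoff and the MAD-based scale below are assumptions for illustration, not the paper's model:

```python
import math
from statistics import median

def flag_suspect_times(times_sec, z_cut=2.5):
    """Flag judgments with anomalous completion times.

    Works in log-time (task durations are heavily right-skewed),
    estimates a robust center and scale with the median and MAD, and
    flags judgments far outside that window: likely spammers or bots
    (too fast) or distracted workers (too slow).
    """
    logs = [math.log(t) for t in times_sec]
    center = median(logs)
    mad = median(abs(x - center) for x in logs) or 1e-9
    scale = 1.4826 * mad  # MAD -> stddev equivalent under normality
    return [abs(x - center) / scale > z_cut for x in logs]
```

    Unlike this point estimate, the actual model keeps full posterior uncertainty over the window and jointly infers worker accuracy, which is what lets time evidence improve the aggregated labels themselves.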

    All the protestors fit to count: using geospatial affordances to estimate protest event size

    Protest events are a hallmark of social movement tactics. Large crowds in public spaces send a clear message to those in authority. Consequently, estimating crowd size is important for clarifying how much support a particular movement has been able to garner. This matters to policymakers and to the construction of public opinion alike. Efforts to accurately estimate crowd size are plagued with issues: the cost of renting aircraft (if done by air), the challenge of visibility and securing building access (if done from rooftops), and issues related to perspective and scale (if done on the ground). Airborne camera platforms like drones, balloons, and kites are geospatial affordances that open new opportunities to better estimate crowd size. In this article we adapt traditional aerial imaging techniques for deployment on an “unmanned aerial vehicle” (UAV, popularly drone) and apply the method to small (1,000) and large (30,000+) events. Ethical guidelines related to drone safety are advanced, questions related to privacy are raised, and we conclude with a discussion of what standards should guide new technologies if they are to be used for the public good.
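    The arithmetic behind any density-based count from aerial imagery is simple: sample grid cells ("quadrats") from the image, compute a mean density, and scale by the event's occupied footprint, in the spirit of the classic Jacobs method. The sketch below is illustrative; all names and parameters are assumptions, not the article's procedure:

```python
def estimate_crowd_size(quadrat_counts, quadrat_area_m2, occupied_area_m2):
    """Estimate total crowd size from sampled aerial-image quadrats.

    quadrat_counts: head counts in randomly sampled grid cells, each
    covering quadrat_area_m2 on the ground (derived from the camera's
    altitude and field of view).
    """
    # Mean density (people per square metre) over the sampled cells.
    density = sum(quadrat_counts) / (len(quadrat_counts) * quadrat_area_m2)
    # Scale up to the full occupied footprint of the event.
    return density * occupied_area_m2
```

    The hard parts in practice are exactly what the article targets: getting imagery whose ground resolution supports reliable per-quadrat counts, and measuring the occupied area without the perspective distortions of ground-level photos.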