
    Evaluating Design Solutions Using Crowds

    Crowds can be used to generate and evaluate design solutions. To increase a crowdsourcing system’s effectiveness, we propose and compare two evaluation methods, one using five-point Likert scale rating and the other prediction voting. Our results indicate that although the two evaluation methods correlate, they have different goals: whereas prediction voting focuses evaluators on identifying the very best solutions, rating focuses evaluators on the entire range of solutions. Thus, prediction voting is appropriate when there are many poor-quality solutions that need to be filtered out, whereas rating is suited to situations where all ideas are reasonable and distinctions need to be made across all solutions. The crowd prefers participating in prediction voting. The results have pragmatic implications, suggesting that evaluation methods should be assigned in relation to the distribution of quality present at each stage of crowdsourcing.
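
    As an illustration of how the two methods behave, here is a minimal sketch (not the authors' implementation) that simulates both evaluation schemes on the same set of solutions and checks how strongly they correlate; the quality distribution, noise levels, and top-3 voting rule are illustrative assumptions.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_solutions, n_evaluators = 50, 20
    quality = rng.uniform(0, 1, n_solutions)           # latent solution quality

    # Five-point Likert rating: every evaluator rates every solution,
    # spreading signal across the entire quality range.
    noise = rng.normal(0, 0.7, (n_evaluators, n_solutions))
    ratings = np.clip(np.rint(1 + 4 * quality + noise), 1, 5)
    mean_rating = ratings.mean(axis=0)

    # Prediction voting: each evaluator votes only for the few solutions
    # they predict will win, concentrating signal at the top.
    votes = np.zeros(n_solutions)
    for _ in range(n_evaluators):
        perceived = quality + rng.normal(0, 0.15, n_solutions)
        votes[np.argsort(perceived)[-3:]] += 1         # top-3 votes each

    rho, _ = spearmanr(mean_rating, votes)
    print(f"Spearman correlation between the two methods: {rho:.2f}")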

    Crowd-powered positive psychological interventions

    Recent advances in crowdsourcing have led to new forms of assistive technologies, commonly referred to as crowd-powered devices. To best serve the user, these technologies crowdsource human intelligence as needed, when automated methods alone are insufficient. In this paper, we provide an overview of how these systems work and how they can be used to enhance technological interventions for positive psychology. As a specific example, we describe previous work that crowdsources positive reappraisals, providing users with timely and personalized suggestions for ways to reconstrue stressful thoughts and situations. We then describe how this approach could be extended for use with other positive psychological interventions. Finally, we outline future directions for crowd-powered positive psychological interventions.
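
    The core crowd-powered pattern described above (automate when possible, escalate to the crowd when confidence is low) can be sketched as follows; the function names, placeholder outputs, and confidence threshold are hypothetical, not an actual system's API.

    def automated_reappraisal(thought: str) -> tuple[str, float]:
        """Placeholder automated model: returns (suggestion, confidence)."""
        return "Consider what this situation might teach you.", 0.4

    def crowdsource_reappraisal(thought: str) -> str:
        """Placeholder for routing the task to human crowd workers."""
        return "A friend might see this setback as temporary, not permanent."

    def get_reappraisal(thought: str, min_confidence: float = 0.7) -> str:
        # Try the automated method first; fall back to the crowd only
        # when the automated answer is not confident enough.
        suggestion, confidence = automated_reappraisal(thought)
        if confidence >= min_confidence:
            return suggestion
        return crowdsource_reappraisal(thought)

    print(get_reappraisal("I ruined the presentation; I always mess up."))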

    What You Know and What You Don't Know: A Discussion of Knowledge Intensity and Support Architectures in Improving Crowdsourcing Creativity

    Building on the componential theory of creativity, we studied how crowdsourcing creativity support architectures and task knowledge intensity levels affect the crowd’s creativity. In an online experiment, we found that remixing prompts greater crowd creativity than external stimuli, and that using either architecture makes the crowd more creative overall. The crowd is also more creative when solving low-knowledge-intensity tasks than when solving high-knowledge-intensity tasks. Interestingly, regardless of a task’s knowledge intensity level, crowdsourcing support architectures have a significant impact on the crowd’s creativity. Our paper thus contributes to the crowdsourcing literature on promoting crowd creativity and offers practical implications for solving societal challenges, especially large-scale problems.
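
    The experimental design implied above is a 2x2 factorial (support architecture x knowledge intensity). A minimal sketch of how such data could be analyzed with a two-way ANOVA follows; the simulated scores, effect sizes, and column names are assumptions, not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(1)
    rows = []
    for arch in ["remixing", "external_stimuli"]:
        for intensity in ["low", "high"]:
            # Assumed effects: remixing and low intensity raise creativity.
            base = 3.0 + (0.5 if arch == "remixing" else 0.0) \
                       + (0.4 if intensity == "low" else 0.0)
            rows += [{"architecture": arch, "intensity": intensity,
                      "creativity": base + rng.normal(0, 1)}
                     for _ in range(40)]
    df = pd.DataFrame(rows)

    # Two-way ANOVA: main effects of each factor plus their interaction.
    model = ols("creativity ~ C(architecture) * C(intensity)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))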

    Mosaic: Designing Online Creative Communities for Sharing Works-in-Progress

    Online creative communities allow creators to share their work with a large audience, maximizing opportunities to showcase their work and connect with fans and peers. However, sharing in-progress work can be technically and socially challenging in environments designed for sharing completed pieces. We propose an online creative community where sharing process, rather than showcasing outcomes, is the main method of sharing creative work. Based on this, we present Mosaic, an online community where illustrators share work-in-progress snapshots showing how an artwork was completed from start to finish. In an online deployment and observational study, artists used Mosaic as a vehicle for reflecting on how they can improve their own creative process, developed a social norm of detailed feedback, and became less apprehensive about sharing early versions of their artwork. Through Mosaic, we argue that communities oriented around sharing creative process can create a collaborative environment that is beneficial for creative growth.
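
    A process-first community like the one described implies a data model in which a project is a timeline of snapshots rather than a single finished image; a minimal sketch follows, with all field names being illustrative assumptions rather than Mosaic's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Snapshot:
        image_path: str
        note: str                      # creator's comment on this stage
        created_at: datetime

    @dataclass
    class Project:
        title: str
        artist: str
        snapshots: list[Snapshot] = field(default_factory=list)

        def add_snapshot(self, image_path: str, note: str) -> None:
            # Each upload appends to the process timeline, start to finish.
            self.snapshots.append(Snapshot(image_path, note, datetime.now()))

    project = Project("Ink study", "artist42")
    project.add_snapshot("01_sketch.png", "Rough gesture lines")
    project.add_snapshot("02_lines.png", "Cleaned-up line art")
    print([s.note for s in project.snapshots])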

    CrowDEA: Multi-view Idea Prioritization with Crowds

    Given a set of ideas collected from crowds in response to an open-ended question, how can we organize and prioritize them to determine the preferred ones, based on preference comparisons by crowd evaluators? Because there are diverse latent criteria for the value of an idea, multiple ideas can be considered "the best". In addition, evaluators can have different preference criteria, and their comparison results often disagree. In this paper, we propose an analysis method for obtaining a subset of ideas, which we call frontier ideas, that are the best in terms of at least one latent evaluation criterion. We propose an approach, called CrowDEA, which estimates the embeddings of the ideas in the multiple-criteria preference space, the best viewpoint for each idea, and the preference criterion for each evaluator, to obtain a set of frontier ideas. Experimental results using real datasets containing numerous ideas or designs demonstrate that the proposed approach can effectively prioritize ideas from multiple viewpoints, thereby detecting frontier ideas. The idea embeddings learned by the proposed approach provide a visualization that facilitates observation of the frontier ideas. In addition, the proposed approach prioritizes ideas from a wider variety of viewpoints, whereas the baselines tend to rely on the same viewpoints; it can also handle various viewpoints and prioritize ideas in situations where only a limited number of evaluators or labels are available.
    Comment: Accepted in HCOMP 2020.
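
    The notion of a frontier idea can be made concrete with a small sketch: with ideas embedded in a multi-criteria preference space, an idea is on the frontier if it scores highest under at least one evaluation direction. The random embeddings and sampled directions below are simulated stand-ins; this is not the CrowDEA estimation procedure itself, which learns the embeddings from evaluators' pairwise comparisons.

    import numpy as np

    rng = np.random.default_rng(2)
    n_ideas, dim = 30, 2
    ideas = rng.uniform(0, 1, (n_ideas, dim))     # idea embeddings

    # Sample candidate evaluation criteria as unit directions in the space.
    angles = np.linspace(0, np.pi / 2, 100)
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)

    # Score every idea under every direction; an idea that is the top
    # scorer for at least one direction is a frontier idea.
    scores = ideas @ directions.T                 # shape (n_ideas, n_dirs)
    frontier = np.unique(scores.argmax(axis=0))
    print("Frontier idea indices:", frontier)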