
    Mosaic: Designing Online Creative Communities for Sharing Works-in-Progress

    Full text link
    Online creative communities allow creators to share their work with a large audience, maximizing opportunities to showcase their work and connect with fans and peers. However, sharing in-progress work can be technically and socially challenging in environments designed for sharing completed pieces. We propose an online creative community where sharing process, rather than showcasing outcomes, is the main method of sharing creative work. Based on this idea, we present Mosaic, an online community where illustrators share work-in-progress snapshots showing how an artwork was completed from start to finish. In an online deployment and observational study, artists used Mosaic as a vehicle for reflecting on how they can improve their own creative process, developed a social norm of detailed feedback, and became less apprehensive about sharing early versions of artwork. Through Mosaic, we argue that communities oriented around sharing the creative process can create a collaborative environment that is beneficial for creative growth.

    Leveraging Crowdsourcing Data For Deep Active Learning - An Application: Learning Intents in Alexa

    Full text link
    This paper presents a generic Bayesian framework that enables any deep learning model to actively learn from targeted crowds. Our framework inherits from recent advances in Bayesian deep learning, and extends existing work by considering the targeted crowdsourcing approach, where multiple annotators with unknown expertise contribute an uncontrolled (often limited) amount of annotations. Our framework leverages the low-rank structure in annotations to learn individual annotator expertise, which then helps to infer the true labels from noisy and sparse annotations. It provides a unified Bayesian model that simultaneously infers the true labels and trains the deep learning model to reach optimal learning efficacy. Finally, our framework exploits the uncertainty of the deep learning model during prediction, together with the annotators' estimated expertise, to minimize the number of annotations and annotators required to optimally train the deep learning model. We evaluate the effectiveness of our framework on intent classification in Alexa (Amazon's personal assistant), using both synthetic and real-world datasets. Experiments show that our framework can accurately learn annotator expertise, infer true labels, and effectively reduce the amount of annotation needed for model training compared to state-of-the-art approaches. We further discuss the potential of our proposed framework for bridging machine learning and crowdsourcing towards improved human-in-the-loop systems.
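    To make the above concrete, here is a deliberately simplified stand-in for the framework's core loop, assuming binary labels and a symmetric annotator-accuracy noise model (the paper's actual model is a Bayesian deep network with low-rank annotator structure, which is not reproduced here). The sketch jointly estimates annotator expertise and true labels from a sparse annotation matrix with EM-style updates in the spirit of Dawid-Skene, then selects the next items to annotate by label-posterior uncertainty; all function names are illustrative.

        # Hypothetical, simplified sketch -- not the paper's actual model.
        import numpy as np

        def infer_labels_and_expertise(A, n_iters=50):
            """A: (n_items, n_annotators) array with entries in {0, 1, np.nan}."""
            observed = ~np.isnan(A)
            accuracy = np.full(A.shape[1], 0.8)  # initial guess: annotators beat chance
            p = np.full(A.shape[0], 0.5)
            for _ in range(n_iters):
                # E-step: posterior P(true label = 1), weighting each vote by
                # the annotator's estimated log-odds of being correct.
                log_odds = np.zeros(A.shape[0])
                for j in range(A.shape[1]):
                    seen = observed[:, j]
                    w = np.log(accuracy[j] / (1.0 - accuracy[j]))
                    log_odds[seen] += w * (2 * A[seen, j] - 1)
                p = 1.0 / (1.0 + np.exp(-log_odds))
                # M-step: re-estimate each annotator's accuracy against the
                # current soft labels, clipped to keep the weights finite.
                for j in range(A.shape[1]):
                    seen = observed[:, j]
                    if seen.any():
                        agree = A[seen, j] * p[seen] + (1 - A[seen, j]) * (1 - p[seen])
                        accuracy[j] = np.clip(agree.mean(), 0.51, 0.99)
            return p, accuracy

        def next_items_to_annotate(p, k=5):
            # Active selection: label posteriors nearest 0.5 are most uncertain.
            return np.argsort(-(1.0 - np.abs(2 * p - 1)))[:k]

    In the full framework, the deep model's predictive uncertainty would also feed into this selection step, closing the loop between model training and annotation collection.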

    Active learning in annotating micro-blogs dealing with e-reputation

    Full text link
    Elections unleash strong political views on Twitter, but what do people really think about politics? Opinion and trend mining on micro-blogs dealing with politics has recently attracted researchers in several fields, including Information Retrieval and Machine Learning (ML). Since the performance of ML and Natural Language Processing (NLP) approaches is limited by the amount and quality of available data, one promising alternative for some tasks is the automatic propagation of expert annotations. This paper develops an active learning process for automatically annotating French-language tweets that deal with the image (i.e., representation, web reputation) of politicians. Our main focus is the methodology followed to build an original annotated dataset expressing opinion about two French politicians over time. We therefore review state-of-the-art NLP-based ML algorithms that automatically annotate tweets, using a manual initiation step as bootstrap. The paper addresses key active learning issues that arise when building a large annotated dataset in the presence of noise, which is introduced by human annotators, the abundance of data, and the label distribution across data and entities. In turn, we show that Twitter characteristics such as the author's name or hashtags can serve as bearing points, not only improving automatic systems for Opinion Mining (OM) and Topic Classification but also reducing noise in human annotations. However, a later thorough analysis shows that reducing noise might induce the loss of crucial information.
    Comment: Journal of Interdisciplinary Methodologies and Issues in Science - Vol 3 - Contextualisation digitale - 201
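    As a rough illustration of the bootstrapped annotation loop described above, the following sketch assumes a scikit-learn pipeline; the TF-IDF features, logistic regression classifier, batch size, and the ask_human callback are illustrative stand-ins for the paper's NLP-based algorithms and its expert annotation step.

        # Minimal uncertainty-sampling loop: start from a small manually
        # annotated seed set, then repeatedly query the tweets the current
        # model is least certain about.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        def active_annotation_loop(seed_texts, seed_labels, pool_texts,
                                   ask_human, n_rounds=10, batch_size=20):
            """ask_human(texts) -> labels stands in for the human annotators;
            in practice this step is where annotation noise enters."""
            texts, labels = list(seed_texts), list(seed_labels)
            pool = list(pool_texts)
            vectorizer = TfidfVectorizer(max_features=20000)
            clf = None
            for _ in range(n_rounds):
                # Twitter cues (hashtags, author names) could be appended to
                # each tweet as extra tokens before vectorizing.
                X = vectorizer.fit_transform(texts)
                clf = LogisticRegression(max_iter=1000).fit(X, labels)
                probs = clf.predict_proba(vectorizer.transform(pool))
                # Uncertainty sampling: lowest top-class probability first.
                query = set(np.argsort(probs.max(axis=1))[:batch_size])
                queried = [pool[i] for i in query]
                texts += queried
                labels += list(ask_human(queried))
                pool = [t for i, t in enumerate(pool) if i not in query]
            return clf, vectorizer

    The abstract's closing caveat applies directly here: filtering ask_human's outputs too aggressively to reduce noise can also discard crucial minority labels.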

    Broadening community engagement in clinical research: Designing and assessing a pilot crowdsourcing project to obtain community feedback on an HIV clinical trial.

    Get PDF
    BACKGROUND/AIMS: Community engagement is widely acknowledged as an important step in clinical trials. One underexplored method of engagement in clinical trials is crowdsourcing, which involves having community members attempt to solve a problem and then publicly sharing innovative solutions. We designed and conducted a pilot using a crowdsourcing approach to obtain community feedback on an HIV clinical trial, called the Acceptability of Combined Community Engagement Strategies Study. In this work, we describe and assess the study's crowdsourcing activities in order to examine the potential of crowdsourcing as a clinical trial community engagement strategy.
    METHODS: The crowdsourcing engagement activities of the Acceptability of Combined Community Engagement Strategies Study were conducted in the context of a phase 1 HIV antibody trial (ClinicalTrials.gov identifier: NCT03803605). We designed a series of crowdsourcing activities to collect feedback on three aspects of this clinical trial: the informed consent process, the experience of participating in the trial, and fairness/reciprocity in HIV clinical trials. All crowdsourcing activities were open to members of the general public 18 years of age or older, and participation was solicited from the local community. A group discussion was held with representatives of the clinical trial team to obtain feedback on the utility of crowdsourcing as a community engagement strategy for informing future clinical trials.
    RESULTS: Crowdsourcing activities used innovative tools and a combination of in-person and online participation opportunities to engage community members in the clinical trial feedback process. Community feedback on informed consent was collected by transforming the clinical trial's informed consent form into a series of interactive video modules, which were screened at an open public discussion. Feedback on the experience of trial participation involved designing three fictional vignettes, which were then transformed into animated videos and screened at an open public discussion. Finally, feedback on fairness/reciprocity in HIV clinical trials was collected through a crowdsourcing idea contest with online and in-person submission opportunities. Our public discussion events were attended by 38 participants in total; our idea contest received 43 submissions (27 in-person, 16 online). Facebook and Twitter metrics demonstrated substantial engagement in the project. The clinical team found crowdsourcing primarily useful for enhancing informed consent and trial recruitment.
    CONCLUSION: There is sufficient lay community interest in open calls for feedback on the design and conduct of clinical trials, making crowdsourcing both a novel and feasible engagement strategy. Clinical trial researchers are encouraged to consider crowdsourcing as a way to inform trial processes from a community perspective.

    Improving User Involvement Through Live Collaborative Creation

    Full text link
    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts, while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration benefits users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact, and creators receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers of entry to live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally available only in real-time processes. In this work, I showed that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd