21 research outputs found

    What Motivates People to Trust 'AI' Systems?

    Companies, organizations, and governments across the world are eager to employ so-called 'AI' (artificial intelligence) technology in a broad range of different products and systems. The promise of this cause célèbre is that the technologies offer increased automation, efficiency, and productivity - meanwhile, critics sound warnings of illusions of objectivity, pollution of our information ecosystems, and reproduction of biases and discriminatory outcomes. This paper explores patterns of motivation in the general population for trusting (or distrusting) 'AI' systems. Based on a survey with more than 450 respondents from more than 30 different countries (and about 3000 open-text answers), this paper presents a qualitative analysis of current opinions and thoughts about 'AI' technology, focusing on reasons for trusting such systems. The different reasons are synthesized into four rationales (lines of reasoning): the Human favoritism rationale, the Black box rationale, the OPSEC rationale, and the 'Wicked world, tame computers' rationale. These rationales provide insights into human motivation for trusting 'AI' which could be relevant for developers and designers of such systems, as well as for scholars developing measures of trust in technological systems.

    An IDR Framework of Opportunities and Barriers between HCI and NLP


    Aiki - Turning Online Procrastination into Microlearning


    Summon a Demon and Bind it: A Grounded Theory of LLM Red Teaming in the Wild

    Engaging in the deliberate generation of abnormal outputs from large language models (LLMs) by attacking them is a novel human activity. This paper presents a thorough exposition of how and why people perform such attacks. Using a formal qualitative methodology, we interviewed dozens of practitioners from a broad range of backgrounds, all contributors to this novel work of attempting to cause LLMs to fail. We relate this activity to its practitioners' motivations and goals, the strategies and techniques they deploy, and the crucial role the community plays. As a result, this paper presents a grounded theory of how and why people attack large language models: LLM red teaming in the wild.

    Evaluating Academic Reading Support Tools: Developing the aRSX-Questionnaire


    Designing Participatory AI: Creative Professionals’ Worries and Expectations about Generative AI

    Generative AI, i.e., the group of technologies that automatically generate visual or written content based on text prompts, has undergone a leap in complexity and become widely available within just a few years. Such technologies potentially introduce a massive disruption to creative fields. This paper presents the results of a qualitative survey (N = 23) investigating how creative professionals think about generative AI. The results show that the advancement of these AI models prompts important reflections on what defines creativity and how creatives imagine using AI to support their workflows. Based on these reflections, we discuss how we might design participatory AI in the domain of creative expertise with the goal of empowering creative professionals in their present and future coexistence with AI.

    Annotating Online Misogyny


    Programming under the influence: On the effect of Heat, Noise, and Alcohol on Novice programmers

    When humans are exposed to environmental and physical stressors, cognitive performance is degraded. Even though several studies have examined the effect of various stressors individually, there are limited studies comparing the impact of different types. This study examined the effects of heat, noise, and alcohol on cognitive performance during two programming tasks to quantify the impact of stressors on novice programmers. The experiment enrolled N=100 university student volunteers for a between-subjects experiment. Participants were randomly assigned to one of four conditions (M=25): a room at 38 °C (100 °F), a room with conversational noise around 80 dBA, a blood alcohol content of 1.0‰, or a base condition. Two programming tasks were administered: one analysis task (reading programs) and one synthesis task (writing programs), taking about half an hour to complete in total. Short-term exposure to heat does not appear to significantly affect either reading or writing programs; conversational noise significantly impairs analytical tasks but not synthesis tasks; while alcohol significantly worsens performance in both analytical and synthesis tasks. To provide a tangible summary for decision-makers able to influence conditions for novice programmers, an approximated comparison is provided, which "translates" the negative cognitive effects of heat, noise, and alcohol into one another.