5 research outputs found

    Supporting Accurate Interpretation of Self-Administered Medical Test Results for Mobile Health: Assessment of Design, Demographics, and Health Condition

    Background: Technological advances in personal informatics allow people to track their own health in a variety of ways, representing a dramatic change in individuals’ control of their own wellness. However, research regarding patient interpretation of traditional medical tests highlights the risks in making complex medical data available to a general audience. Objective: This study aimed to explore how people interpret medical test results, examined in the context of a mobile blood testing system developed to enable self-care and health management. Methods: In a preliminary investigation and main study, we presented 27 and 303 adults, respectively, with hypothetical results from several blood tests via one of several mobile interface designs: a number representing the raw measurement of the tested biomarker, natural language text indicating whether the biomarker’s level was low or high, or a one-dimensional chart illustrating this level along a low-healthy axis. We measured respondents’ correctness in evaluating these results and their confidence in their interpretations. Participants also told us about any follow-up actions they would take based on the result and how they envisioned, generally, using our proposed personal health system. Results: We find that a majority of participants (242/328, 73.8%) were accurate in their interpretations of their diagnostic results. However, 135 of 328 participants (41.1%) expressed uncertainty and confusion about their ability to correctly interpret these results. We also find that demographics and interface design can impact interpretation accuracy, including false confidence, which we define as a respondent having above-average confidence despite interpreting a result inaccurately. Specifically, participants who saw a natural language design were the least likely (421.47 times, P=.02) to exhibit false confidence, and women who saw a graph design were less likely (8.67 times, P=.04) to have false confidence. On the other hand, false confidence was more likely among participants who self-identified as Asian (25.30 times, P=.02), white (13.99 times, P=.01), and Hispanic (6.19 times, P=.04). Finally, with the natural language design, participants who were more educated were, for each one-unit increase in education level, more likely (3.06 times, P=.02) to have false confidence. Conclusions: Our findings illustrate both the promises and challenges of interpreting medical data outside of a clinical setting and suggest instances where personal informatics may be inappropriate. In surfacing these tensions, we outline concrete interface design strategies that are more sensitive to users’ capabilities and conditions.
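
    The abstract's operational definition of false confidence (above-average confidence paired with an inaccurate interpretation) can be made concrete with a short sketch. The snippet below is illustrative only: the column names, the 1-5 confidence scale, and the tiny example table are assumptions, not the study's actual instrument or data.

        # Sketch: flag "false confidence" = above-average confidence despite an inaccurate interpretation.
        # Column names, the 1-5 scale, and the example rows are hypothetical, not the study's variables.
        import pandas as pd

        responses = pd.DataFrame({
            "design":     ["number", "text", "graph", "text", "number", "graph"],
            "correct":    [True, True, False, False, True, False],
            "confidence": [4, 5, 5, 2, 3, 4],  # self-rated confidence, e.g., 1-5
        })

        mean_confidence = responses["confidence"].mean()
        responses["false_confidence"] = (~responses["correct"]) & (responses["confidence"] > mean_confidence)

        # Share of falsely confident respondents under each interface design
        print(responses.groupby("design")["false_confidence"].mean())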

    Vero: A Method for Remotely Studying Human-AI Collaboration

    Despite the recognized need in the IS community to prepare for a future of human-AI collaboration, the technical skills necessary to develop and deploy AI systems are considerable, making such research difficult to perform without specialized knowledge. To make human-AI collaboration research more accessible, we developed a novel experimental method that combines a video conferencing platform, controlled content, and Wizard of Oz methods to simulate a group interaction with an AI teammate. Through a case study, we demonstrate the flexibility and ease of deployment of this approach. We also provide evidence that the method creates a highly believable experience of interacting with an AI agent. By detailing this method, we hope that multidisciplinary researchers can replicate it to more easily answer questions that will inform the design and development of future human-AI collaboration technologies.
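
    The Wizard of Oz component the abstract describes, in which a hidden human operator stands in for the AI teammate, can be sketched in a few lines. The console loop, agent label, and artificial response delay below are assumptions for illustration; they are not the Vero implementation, which runs over a video conferencing platform.

        # Sketch: a minimal Wizard-of-Oz relay in which a hidden operator types the replies
        # that participants see attributed to an "AI teammate". Names and delays are hypothetical.
        import random
        import time

        AGENT_NAME = "AI teammate"  # hypothetical on-screen label for the simulated agent

        def wizard_turn(participant_message: str) -> str:
            """The hidden operator reads the participant's message and types the agent's reply."""
            print(f"[wizard console] participant said: {participant_message!r}")
            reply = input("[wizard console] type the agent's reply: ")
            time.sleep(random.uniform(1.0, 3.0))  # small delay so the reply feels machine-generated
            return reply

        if __name__ == "__main__":
            message = input("participant> ")
            print(f"{AGENT_NAME}> {wizard_turn(message)}")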

    AI-Mediated Communication: Effects on Language and Interpersonal Perceptions

    This dissertation investigates the potential promises and perils of AI-mediated communication (AI-MC), a subset of computer-mediated communication (CMC) where communication is augmented or generated by AI. While users are accustomed to some existing implementations of AI-MC, such as spell check and grammar correction, new systems involve a much greater degree of intervention, such as smart replies in messaging and email. Smart replies offer algorithmically generated suggested responses based on the conversation content. Despite the fact that smart replies are directly aimed at shaping the production of messages, we do not know how they are influencing conversational and interpersonal dynamics. To avoid unexpected social consequences, this dissertation focuses on examining the effects that smart replies have on human interactions. I argue that while smart replies increase efficiency by allowing users to respond to messages more quickly, they also alter language and impact receivers’ interpersonal perceptions. Our work shows that smart replies differ from conversation content across various linguistic measures of sentiment. Specifically, commercially available smart replies offer a disproportionate amount of positive sentiment compared to everyday conversation, and, as a result, our communication contains more positive sentiment when mediated by AI than it would have without smart replies. We also find that the presence of smart replies serves to increase trust between communicators and that, when interactions are unsuccessful, the AI can act like a moral crumple zone by taking on responsibility that would otherwise have been assigned to the other human communicator. Similarly, we find that while actual smart reply use leads to improved interpersonal perceptions, the idea of sending these impersonal smart reply messages in everyday conversation is perceived negatively. In other words, even though using smart replies makes language more positive and can improve interpersonal perceptions, the idea among users that smart replies are inherently negative has persisted throughout the studies presented in this dissertation. Overall, I argue that while AI-MC in the form of smart replies can save time, improve communication efficiency, and enhance interpersonal perceptions, users should be cautioned that these benefits are coupled with altered language and a potential loss of personal expression. Additionally, it seems that user perceptions about smart reply use are quite negative, although this does not match the reality of using smart replies. Taking these promises and perils into account, our work suggests that a more promising role for smart replies could be as a mediator that works to recognize when a conversation is going awry and effect the necessary reparative actions between communicators.
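
    The dissertation's claim that smart replies skew more positive than everyday conversation rests on comparing sentiment across the two kinds of text. The sketch below shows one way such a comparison could be run, using NLTK's VADER analyzer on made-up example messages; the strings and the choice of VADER are assumptions for illustration, not the dissertation's corpus or measures.

        # Sketch: compare average sentiment of smart-reply suggestions vs. ordinary messages.
        # Example strings and the use of VADER are illustrative assumptions only.
        import nltk
        from nltk.sentiment import SentimentIntensityAnalyzer

        nltk.download("vader_lexicon", quiet=True)
        sia = SentimentIntensityAnalyzer()

        smart_replies = ["Sounds great!", "Thanks so much!", "Awesome, see you then!"]
        conversation = ["I can't make it tonight.", "The meeting ran long again.", "Okay."]

        def mean_compound(texts):
            # Average VADER compound score: -1 (most negative) to +1 (most positive)
            return sum(sia.polarity_scores(t)["compound"] for t in texts) / len(texts)

        print("smart replies:", mean_compound(smart_replies))
        print("conversation: ", mean_compound(conversation))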

    Artificial intelligence in communication impacts language and social relationships

    Artificial intelligence (AI) is already widely used in daily communication, but despite concerns about AI’s negative effects on society, the social consequences of using it to communicate remain largely unexplored. We investigate the social consequences of one of the most pervasive AI applications, algorithmic response suggestions (“smart replies”), which are used to send billions of messages each day. Two randomized experiments provide evidence that these types of algorithmic recommender systems change how people interact with and perceive one another in both pro-social and anti-social ways. We find that using algorithmic responses changes language and social relationships. More specifically, it increases communication speed and the use of positive emotional language, and conversation partners evaluate each other as closer and more cooperative. However, consistent with common assumptions about the adverse effects of AI, people are evaluated more negatively if they are suspected to be using algorithmic responses. Thus, even though AI can increase the speed of communication and improve interpersonal perceptions, the prevailing anti-social connotations of AI undermine these potential benefits if used overtly.
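
    A two-condition randomized experiment like the one described would typically compare outcome measures (such as response speed or perceived closeness) between participants who had algorithmic response suggestions and those who did not. The sketch below illustrates that kind of comparison on simulated data; the numbers and the choice of a Welch t-test are assumptions, not the paper's analysis.

        # Sketch: compare a hypothetical seconds-per-reply outcome between conditions.
        # Simulated data and the Welch t-test are illustrative assumptions only.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        smart_reply_condition = rng.normal(loc=18.0, scale=5.0, size=100)  # with suggestions
        control_condition = rng.normal(loc=24.0, scale=5.0, size=100)      # without suggestions

        t_stat, p_value = stats.ttest_ind(smart_reply_condition, control_condition, equal_var=False)
        print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")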