    Assessment of adoption, usability, and trustability of conversational agents in the diagnosis, treatment, and therapy of individuals with mental illness

    INTRODUCTION: Conversational agents are of great interest in the field of mental health and are often presented in the news as a solution to the limited number of clinicians per patient. Until very recently, little research had been conducted with patients who have mental health conditions; most studies involved only healthy controls. Little is known about whether people with mental health conditions would want to use conversational agents, or how comfortable they would feel hearing results from a chatbot that they would normally hear from a clinician. OBJECTIVES: We asked patients with mental health conditions to have a chatbot read a results document to them and then tell us how they found the experience. To our knowledge, this is one of the earliest studies to consider actual patient perspectives on conversational agents for mental health, and it informs whether this avenue of research is worth pursuing in the future. Our specific aims are, first and foremost, to determine the usability of such conversational agent tools; second, to determine their likely adoption among individuals with mental health disorders; and third, to determine whether those using them would develop a sense of trust in the agent. METHODS: We designed and implemented a conversational agent specific to mental health tracking, along with a supporting scale able to measure its efficacy in the selected domains of Adoption, Usability, and Trust. These domains were selected based on the phases of interaction patients would have with a conversational agent and were adapted for simplicity of measurement. Patients were briefly introduced to the technology, our particular conversational agent, and a demo before using it themselves and then completing the survey with the supporting scale. RESULTS: With a mean score of 3.27 and standard deviation of 0.99 in the Adoption domain, subjects typically felt less than content with adoption but believed that the conversational agent could become commonplace without complicated technical hurdles. With a mean score of 3.4 and standard deviation of 0.93 in the Usability domain, subjects tended to feel more content with the usability of the conversational agent. With a mean score of 2.65 and standard deviation of 0.95 in the Trust domain, subjects felt least content with trusting the conversational agent. CONCLUSIONS: In summary, though conversational agents are now readily accessible and relatively easy to use, there is a bridge to be crossed before patients are willing to trust a conversational agent over speaking directly with a clinician in mental health settings. With increased attention, clinic adoption, and patient experience, however, we feel that conversational agents could be readily adopted for simple or routine tasks and for requesting information that would otherwise require time, cost, and effort to acquire. The field is still young, and with advances in digital technologies and artificial intelligence, capturing the essence of natural language conversation could transform this currently simple tool with limited use cases into a powerful one for the digital clinician.
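
    As a rough illustration of how per-domain scores like those reported above could be derived from item-level survey responses, the sketch below averages Likert-style items into Adoption, Usability, and Trust scores and summarises them across respondents. The item groupings, the 1-5 response range, and the sample data are assumptions for illustration only, not the authors' actual instrument or data.

```python
# Hypothetical sketch: aggregating Likert-style survey items into the three
# domains described above (Adoption, Usability, Trust). Item groupings,
# the 1-5 scale, and the sample responses are assumed, not the study's data.
from statistics import mean, stdev

# Each respondent's answers, keyed by domain (values on a 1-5 Likert scale).
responses = [
    {"adoption": [4, 3, 3], "usability": [4, 4, 3], "trust": [3, 2, 3]},
    {"adoption": [3, 4, 3], "usability": [3, 4, 4], "trust": [2, 3, 2]},
    {"adoption": [3, 3, 4], "usability": [3, 3, 4], "trust": [3, 3, 2]},
]

def domain_scores(responses, domain):
    """Average each respondent's items for one domain, then summarise."""
    per_subject = [mean(r[domain]) for r in responses]
    return mean(per_subject), stdev(per_subject)

for domain in ("adoption", "usability", "trust"):
    m, sd = domain_scores(responses, domain)
    print(f"{domain:>9}: mean={m:.2f}, SD={sd:.2f}")
```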

    Public perceptions of diabetes, healthy living and conversational agents in Singapore: a needs assessment

    Background: The incidence of chronic diseases such as type 2 diabetes is on the rise in countries worldwide, including Singapore. Health professional-delivered healthy lifestyle interventions have been shown to prevent type 2 diabetes, yet ongoing personalised guidance from health professionals is not feasible or affordable at the population level. Novel digital interventions delivered using mobile technology, such as conversational agents, are a potential alternative for delivering behavioural interventions for healthy lifestyle change to the public. Objective: We explored Singaporeans' perceptions of and experience with healthy living, diabetes and mobile health interventions (apps and conversational agents). This survey was done to help inform the design and development of a conversational agent focusing on healthy lifestyle change. Methods: This qualitative study was conducted in August and September 2019. Twenty participants were recruited from relevant healthy living Facebook pages and groups. Semi-structured interviews were conducted in person or over the telephone using an interview guide. Interviews were transcribed and analysed in parallel by two researchers using Burnard's method, a structured approach for thematic content analysis. Results: The collected data were organised into four main themes: (1) use of conversational agents, (2) ubiquity of smartphone applications, (3) understanding of diabetes and (4) barriers and facilitators to healthy living in Singapore. Most participants used health-related mobile applications as well as conversational agents unrelated to healthcare. They provided diverse suggestions for future conversational agent-delivered interventions. Participants also highlighted several knowledge gaps in relation to diabetes and healthy living. In terms of barriers to healthy living, frequent dining out, high stress levels, lack of work-life balance and dearth of free time to engage in physical activity were mentioned. In contrast, discipline, pre-planning and sticking to a routine were important for enabling a healthy lifestyle. Conclusions: Participants in our study commonly used mobile health interventions and provided important insights into their knowledge gaps and needs in relation to healthy lifestyle behaviour change. Future digital interventions like conversational agents focusing on healthy lifestyle and diabetes prevention should aim to address the barriers highlighted in our study and motivate individuals to adopt habits for healthy living.

    A smartphone-based health care chatbot to promote self-management of chronic pain (SELMA): pilot randomized controlled trial

    Background: Ongoing pain is one of the most common diseases and has major physical, psychological, social, and economic impacts. A mobile health intervention utilizing a fully automated text-based health care chatbot (TBHC) may offer an innovative way not only to deliver coping strategies and psychoeducation for pain management but also to build a working alliance between a participant and the TBHC. Objective: The objectives of this study are twofold: (1) to describe the design and implementation of the chatbot painSELfMAnagement (SELMA), a 2-month smartphone-based cognitive behavior therapy (CBT) TBHC intervention for pain self-management in patients with ongoing or cyclic pain, and (2) to present findings from a pilot randomized controlled trial, in which effectiveness, influence of intention to change behavior, pain duration, working alliance, acceptance, and adherence were evaluated. Methods: Participants were recruited online and in collaboration with pain experts, and were randomized to interact with SELMA for 8 weeks either every day or every other day concerning CBT-based pain management (n=59), or weekly concerning content not related to pain management (n=43). Pain-related impairment (primary outcome), general well-being, pain intensity, and the bond scale of working alliance were measured at baseline and postintervention. Intention to change behavior and pain duration were measured at baseline only, and acceptance was assessed postintervention via self-reporting instruments. Adherence was assessed via usage data. Results: From May 2018 to August 2018, 311 adults downloaded the SELMA app, 102 of whom consented to participate and met the inclusion criteria. The average age of the women (88/102, 86.4%) and men (14/102, 13.6%) participating was 43.7 (SD 12.7) years. Baseline group comparisons showed no differences in any demographic or clinical variable. The intervention group reported no significant change in pain-related impairment (P=.68) compared to the control group postintervention. The intention to change behavior was positively related to pain-related impairment (P=.01) and pain intensity (P=.01). Working alliance with the TBHC SELMA was comparable to that obtained in guided internet therapies with human coaches. Participants enjoyed using the app, perceiving it as useful and easy to use. Participants in the intervention group replied with an average answer ratio of 0.71 (SD 0.20) to 200 (SD 58.45) conversations initiated by SELMA. Participants' comments revealed an appreciation of the empathic and responsible interaction with the TBHC SELMA. A main criticism was that there was no option for patients to enter free-text comments of their own. Conclusions: SELMA is feasible, as revealed mainly by positive feedback and valuable suggestions for future revisions. For example, the participants' intention to change behavior or a more homogeneous sample (eg, with a specific type of chronic pain) should be considered in further tailoring of SELMA.
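
    The adherence figures above (an average answer ratio to chatbot-initiated conversations) imply a simple per-participant calculation from usage logs. The sketch below shows one possible way to compute such a ratio; the log structure and sample values are hypothetical and are not taken from the SELMA dataset.

```python
# Minimal sketch of an answer-ratio adherence metric computed from chatbot
# usage logs. The log format (one boolean per chatbot-initiated conversation,
# indicating whether the participant replied) is assumed for illustration.
from statistics import mean, stdev

usage_logs = {
    # participant_id -> did the participant reply to each initiated conversation?
    "p01": [True, True, False, True, True],
    "p02": [True, False, False, True],
    "p03": [True, True, True, False, True, True],
}

def answer_ratio(replies):
    """Fraction of chatbot-initiated conversations the participant answered."""
    return sum(replies) / len(replies)

ratios = [answer_ratio(replies) for replies in usage_logs.values()]
counts = [len(replies) for replies in usage_logs.values()]

print(f"mean answer ratio: {mean(ratios):.2f} (SD {stdev(ratios):.2f})")
print(f"mean conversations initiated: {mean(counts):.1f} (SD {stdev(counts):.1f})")
```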

    Ethical Challenges in Data-Driven Dialogue Systems

    The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems. Comment: In submission to the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society.

    An End-to-End Conversational Style Matching Agent

    We present an end-to-end voice-based conversational agent that is able to engage in naturalistic multi-turn dialogue and align with the interlocutor's conversational style. The system uses a series of deep neural network components for speech recognition, dialogue generation, prosodic analysis and speech synthesis to generate language and prosodic expression with qualities that match those of the user. We conducted a user study (N=30) in which participants talked with the agent for 15 to 20 minutes, resulting in over 8 hours of natural interaction data. Users with high consideration conversational styles reported the agent to be more trustworthy when it matched their conversational style, whereas users with high involvement conversational styles were indifferent. Finally, we provide design guidelines for multi-turn dialogue interactions using conversational style adaptation.
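
    The pipeline described above (speech recognition, prosodic analysis, dialogue generation and speech synthesis, with the user's style fed through to generation and synthesis) can be sketched as a simple component composition. All class names and interfaces below are placeholders assumed for illustration, not the authors' implementation.

```python
# Schematic sketch of a style-matching voice agent pipeline: transcribe the
# user's turn, estimate their prosodic/conversational style, generate a reply
# conditioned on that style, and synthesize speech adapted toward it.
# Component interfaces here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class StyleFeatures:
    speaking_rate: float   # e.g. syllables per second
    pitch_mean_hz: float
    energy: float

class StyleMatchingAgent:
    def __init__(self, asr, prosody_analyzer, dialogue_model, tts):
        self.asr = asr                          # speech recognition component
        self.prosody_analyzer = prosody_analyzer
        self.dialogue_model = dialogue_model    # dialogue generation component
        self.tts = tts                          # speech synthesis component

    def respond(self, user_audio: bytes) -> bytes:
        # 1. Transcribe the user's turn.
        text = self.asr.transcribe(user_audio)
        # 2. Estimate the user's conversational/prosodic style.
        style: StyleFeatures = self.prosody_analyzer.analyze(user_audio)
        # 3. Generate a reply conditioned on the transcript and the style.
        reply_text = self.dialogue_model.generate(text, style)
        # 4. Synthesize the reply with prosody adapted toward the user's style.
        return self.tts.synthesize(reply_text, style)
```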