Benefits and Harms of Large Language Models in Digital Mental Health
The past decade has been transformative for mental health research and
practice. The ability to harness large repositories of data, whether from
electronic health records (EHR), mobile devices, or social media, has revealed
a potential for valuable insights into patient experiences, promising early,
proactive interventions, as well as personalized treatment plans. Recent
developments in generative artificial intelligence, particularly large language
models (LLMs), show promise in leading digital mental health to uncharted
territory. Patients are arriving at doctors' appointments with information
sourced from chatbots, state-of-the-art LLMs are being incorporated in medical
software and EHR systems, and chatbots from an ever-increasing number of
startups promise to serve as AI companions, friends, and partners. This article
presents contemporary perspectives on the opportunities and risks posed by LLMs
in the design, development, and implementation of digital mental health tools.
We adopt an ecological framework and draw on the affordances offered by LLMs to
discuss four application areas: care-seeking behaviors from individuals in
need of care, community care provision, institutional and medical care
provision, and larger care ecologies at the societal level. We engage in a
thoughtful consideration of whether and how LLM-based technologies could or
should be employed for enhancing mental health. The benefits and harms our
article surfaces could serve to help shape future research, advocacy, and
regulatory efforts focused on creating more responsible, user-friendly,
equitable, and secure LLM-based tools for mental health treatment and
intervention.
The Typing Cure: Experiences with Large Language Model Chatbots for Mental Health Support
People experiencing severe distress increasingly use Large Language Model
(LLM) chatbots as mental health support tools. Discussions on social media have
described how engagements were lifesaving for some, but evidence suggests that
general-purpose LLM chatbots also have notable risks that could endanger the
welfare of users if not designed responsibly. In this study, we investigate the
lived experiences of people who have used LLM chatbots for mental health
support. We build on interviews with 21 individuals from globally diverse
backgrounds to analyze how users create unique support roles for their
chatbots, fill in gaps in everyday care, and navigate associated cultural
limitations when seeking support from chatbots. We ground our analysis in
psychotherapy literature around effective support, and introduce the concept of
therapeutic alignment, or aligning AI with therapeutic values for mental health
contexts. Our study offers recommendations for how designers can approach the
ethical and effective use of LLM chatbots and other AI mental health support
tools in mental health care.
Comment: The first two authors contributed equally to this work; typos corrected.
The future of care work: towards a radical politics of care in CSCW research and practice
Computer-Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI) have long studied how technology can support material and relational aspects of care work, typically in clinical healthcare settings. More recently, we see increasing recognition of care work such as informal healthcare provision, child and elderly care, organizing and advocacy, domestic work, and service work. However, the COVID-19 pandemic has underscored long-present tensions between the deep necessity and simultaneous devaluation of our care infrastructures. This highlights the need to attend to the broader social, political, and economic systems that shape care work and the emerging technologies being used in care work. This leads us to ask several critical questions: What counts as care work and why? How is care work (de)valued, (un)supported, or coerced under capitalism and to what end? What narratives drive the push for technology in care work and whom does it benefit? How does care work resist or build resilience against and within oppressive systems? And how can we as researchers advocate for and with care and caregivers? In this one-day workshop, we will bring together researchers from academia, industry, and community-based organizations to reflect on these questions and extend conversations on the future of technology for care work.
Digital Innovations for Global Mental Health: Opportunities for Data Science, Task Sharing, and Early Intervention
Purpose
Globally, individuals living with mental disorders are more likely to have access to a mobile phone than mental health care. In this commentary, we highlight opportunities for expanding access to and use of digital technologies to advance research and intervention in mental health, with emphasis on the potential impact in lower resource settings.
Recent findings
Drawing from empirical evidence, largely from higher income settings, we considered three emerging areas where digital technology will potentially play a prominent role: supporting methods in data science to further our understanding of mental health and inform interventions, task sharing for building workforce capacity by training and supervising non-specialist health workers, and facilitating new opportunities for early intervention for young people in lower resource settings. Challenges were identified related to inequities in access, threats of bias in big data analyses, risks to users, and need for user involvement to support engagement and sustained use of digital interventions.
Summary
For digital technology to achieve its potential to transform the ways we detect, treat, and prevent mental disorders, there is a clear need for continued research involving multiple stakeholders, and rigorous studies showing that these technologies can successfully drive measurable improvements in mental health outcomes.
Subtle CSCW traits: tensions around identity formation and online activism in the Asian diaspora
The COVID-19 pandemic has been uniquely challenging for the Asian diaspora. The virus has directly devastated Asian communities around the world, most notably across India. Its indirect effects have also been crushing: violent hate crimes against elders, the dissolution of once-thriving businesses, and the trauma of pandemic-enforced disconnect from transnational family networks have all weighed heavily on Asian people. Publicly grappling with these difficulties, through hashtags and GoFundMes across social media, has raised awareness of the issues that Asian people have dealt with long before COVID. But doing so amidst isolation has illuminated a need for space to build relationships, confront intra- and inter-community biases, and envision a more hopeful future. This workshop looks to create that space. By convening social computing researchers with ties to Asian diaspora identities, we aim to foster discussion of how social platforms enable identity formation and online activism unique to the Asian diasporic experience. We will consider what it means to be an Asian diaspora researcher, challenge CSCW's notion of what it means to be Asian, and explore how Asianness can work in alliance with other marginalized identities to ultimately concretize a research agenda for CSCW to more meaningfully engage with Asian diaspora experiences.