The past decade has been transformative for mental health research and
practice. The ability to harness large repositories of data, whether from
electronic health records (EHR), mobile devices, or social media, has revealed
the potential for valuable insights into patient experiences, promising early,
proactive interventions as well as personalized treatment plans. Recent
developments in generative artificial intelligence, particularly large language
models (LLMs), show promise in leading digital mental health to uncharted
territory. Patients are arriving at doctors' appointments with information
sourced from chatbots, state-of-the-art LLMs are being incorporated into medical
software and EHR systems, and chatbots from an ever-increasing number of
startups promise to serve as AI companions, friends, and partners. This article
presents contemporary perspectives on the opportunities and risks posed by LLMs
in the design, development, and implementation of digital mental health tools.
We adopt an ecological framework and draw on the affordances offered by LLMs to
discuss four application areas -- care-seeking behaviors of individuals in
need of care, community care provision, institutional and medical care
provision, and larger care ecologies at the societal level. We engage in a
thoughtful consideration of whether and how LLM-based technologies could or
should be employed to enhance mental health. The benefits and harms our article
surfaces could help shape future research, advocacy, and regulatory efforts
focused on creating more responsible, user-friendly, equitable, and secure
LLM-based tools for mental health treatment and intervention.