9 research outputs found

    Attitudes Toward the Adoption of 2 Artificial Intelligence-Enabled Mental Health Tools Among Prospective Psychotherapists: Cross-sectional Study

    BACKGROUND: Despite growing efforts to develop user-friendly artificial intelligence (AI) applications for clinical care, their adoption remains limited because of barriers at the individual, organizational, and system levels. There is limited research on the intention to use AI systems in mental health care. OBJECTIVE: This study aimed to address this gap by examining the predictors of psychology students' and early practitioners' intention to use 2 specific AI-enabled mental health tools based on the Unified Theory of Acceptance and Use of Technology. METHODS: This cross-sectional study included 206 psychology students and psychotherapists in training to examine the predictors of their intention to use 2 AI-enabled mental health care tools. The first tool provides feedback to the psychotherapist on their adherence to motivational interviewing techniques. The second tool uses patient voice samples to derive mood scores that the therapists may use for treatment decisions. Participants were presented with graphic depictions of the tools' functioning mechanisms before the variables of the extended Unified Theory of Acceptance and Use of Technology were measured. In total, 2 structural equation models (1 for each tool) were specified, which included direct and mediated paths for predicting tool use intentions. RESULTS: Perceived usefulness and social influence had a positive effect on the intention to use the feedback tool (P<.001) and the treatment recommendation tool (perceived usefulness, P=.01; social influence, P<.001). However, trust was unrelated to use intentions for both tools. Moreover, perceived ease of use was unrelated (feedback tool) and even negatively related (treatment recommendation tool) to use intentions when considering all predictors (P=.004). In addition, there was a positive relationship between cognitive technology readiness and the intention to use the feedback tool (P=.02) and a negative relationship between AI anxiety and the intention to use both the feedback tool (P=.001) and the treatment recommendation tool (P<.001). CONCLUSIONS: The results shed light on the general and tool-dependent drivers of AI technology adoption in mental health care. Future research may explore the technological and user group characteristics that influence the adoption of AI-enabled tools in mental health care.

    Attitudes Toward the Adoption of Two AI-Enabled Mental Health Tools Among Prospective Psychotherapists: A Cross-Sectional Study

    No full text
    Background: This study investigates individual-level predictors of the intention to use an AI-enabled feedback tool and a treatment recommendation tool in mental healthcare. Objective: To use an extended Unified Theory of Acceptance and Use of Technology (UTAUT) model to gain insight into the predictors of technology usage intentions for two specific AI-enabled mental healthcare tools. Methods: A cross-sectional study included 206 psychology students and psychotherapists in training to examine the predictors of their intention to use two AI-enabled mental healthcare tools. The first tool provides feedback to the psychotherapist on their adherence to motivational interviewing techniques. The second tool uses patient voice samples to derive mood scores that the therapist may use for treatment decisions. Participants were presented with graphic depictions of the tools’ functioning mechanisms before the variables of the extended UTAUT were measured. Two structural equation models (one for each tool) were specified that included direct and mediated paths predicting tool usage intentions. Results: Perceived usefulness and social influence had a positive effect on the intention to use both tools. However, trust was unrelated to usage intention for both tools. Moreover, perceived ease of use was unrelated (feedback tool) and even negatively related (treatment recommendation tool) to usage intentions when considering all predictors. In addition, a positive relationship between cognitive technology readiness and the intention to use the feedback tool, and a negative relationship between AI anxiety and usage intentions, were observed. Conclusions: The results shed light on general and tool-dependent drivers of AI technology adoption in mental healthcare. Future research may explore technological and user group characteristics that influence the adoption of AI-enabled tools in mental healthcare.
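    To make the analytic setup more concrete, the following is a minimal sketch of how a structural equation model with direct and mediated paths could be specified in Python, assuming the semopy package and simulated data; the variable names, path structure, and effect sizes are illustrative assumptions, not the study's actual constructs, measures, or results.

# Illustrative sketch only: a simplified UTAUT-style path model with
# direct and mediated paths, fit to simulated data via semopy.
# All variable names and coefficients are assumptions for demonstration.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(42)
n = 206  # sample size matching the study; the data themselves are synthetic

ease_of_use = rng.normal(size=n)
social_influence = rng.normal(size=n)
ai_anxiety = rng.normal(size=n)
# Assume perceived ease of use partly drives perceived usefulness
# (the mediated path), which in turn drives usage intention.
usefulness = 0.5 * ease_of_use + rng.normal(scale=0.8, size=n)
intention = (0.6 * usefulness + 0.3 * social_influence
             - 0.3 * ai_anxiety + rng.normal(scale=0.7, size=n))

data = pd.DataFrame({
    "ease_of_use": ease_of_use,
    "social_influence": social_influence,
    "ai_anxiety": ai_anxiety,
    "usefulness": usefulness,
    "intention": intention,
})

# lavaan-style description: usefulness mediates ease of use, while all
# predictors also have direct paths to intention.
desc = """
usefulness ~ ease_of_use
intention ~ usefulness + ease_of_use + social_influence + ai_anxiety
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values

    The study's actual models presumably treat the UTAUT constructs as latent variables measured with multi-item scales, so this observed-variable path model is only a simplified analogue.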

    AI-Enabled Clinical Decision Support Tools for Mental Healthcare: A Product Review

    No full text
    This review seeks to promote transparency in the availability of regulated AI-enabled Clinical Decision Support Systems (AI-CDSSs) for mental healthcare. From an initial pool of 240 potential products, only seven fulfilled the inclusion criteria. The products can be categorized into three major areas: Autism Spectrum Disorder (ASD) diagnosis in children through behavioral data; multifaceted disorder diagnosis via conversational data; and depression medication prescription derived from clinical and genetic data. The exploration of unregulated products reveals the intricate challenges of AI-CDSS regulation. Although we found five scientific articles evaluating the devices’ performance and external validity, the average completeness of reporting, indicated by a 52 percent adherence to the CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) checklist, was modest, signaling room for improvement in reporting quality.

    A Tutorial on Tailored Simulation-Based Power Analysis for Experimental Designs with Generalized Linear Mixed Models

    No full text
    When planning experimental research, determining an appropriate sample size and using suitable statistical models are crucial for robust and informative results. However, the recent replication crisis underlines the need for more rigorous statistical methodology and well-powered designs. Generalized linear mixed models (GLMMs) offer a flexible statistical framework to analyze experimental data with complex (e.g., dependent and hierarchical) data structures. Yet, available methods and software for a priori power analyses for GLMMs are often limited to specific designs, while data simulation approaches offer more flexibility. Based on a practical case study, the current tutorial equips researchers with a step-by-step guide and corresponding code for conducting tailored a priori power analyses to determine appropriate sample sizes with GLMMs. Finally, we give an outlook on the increasing importance of simulation-based power analysis in experimental research.
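    As a rough illustration of the simulation-based approach the tutorial describes (simulate many data sets under assumed parameter values, fit the model to each, and count how often the effect of interest is detected), here is a minimal Python sketch using statsmodels and a linear mixed model as the simplest GLMM case. The design, effect size, variance components, and function names are invented for demonstration and are not taken from the tutorial's case study or code.

# Minimal sketch of a simulation-based a priori power analysis for a
# mixed model with random intercepts. All parameter values are assumptions
# chosen for illustration, not the tutorial's case study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def simulate_and_test(n_subjects, n_trials, effect=0.4,
                      sd_subject=0.5, sd_residual=1.0, alpha=0.05):
    """Simulate one data set and return True if the effect is detected."""
    subject_intercepts = rng.normal(0.0, sd_subject, n_subjects)
    rows = []
    for s in range(n_subjects):
        for condition in (0, 1):  # within-subject factor: control vs. treatment
            y = (effect * condition + subject_intercepts[s]
                 + rng.normal(0.0, sd_residual, n_trials))
            rows.append(pd.DataFrame(
                {"subject": s, "condition": condition, "y": y}))
    data = pd.concat(rows, ignore_index=True)
    # Random-intercept model; convergence warnings may appear for small samples.
    fit = smf.mixedlm("y ~ condition", data, groups=data["subject"]).fit()
    return fit.pvalues["condition"] < alpha

def estimated_power(n_subjects, n_trials, n_sims=200):
    """Proportion of simulated data sets in which the effect is significant."""
    hits = sum(simulate_and_test(n_subjects, n_trials) for _ in range(n_sims))
    return hits / n_sims

for n in (20, 40, 60):
    print(f"{n} subjects: estimated power ~ {estimated_power(n, n_trials=10):.2f}")

    The sample size is then chosen as the smallest value at which the estimated power reaches the desired level (commonly 80%); the same loop generalizes to non-Gaussian outcomes by swapping in an appropriate GLMM fitting routine.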

    Advancing mental health care with AI-enabled precision psychiatry tools: A patent review

    No full text
    The review provides an overview of patents on AI-enabled precision psychiatry tools published between 2015 and mid-October 2022. Multiple analytic approaches, such as graphic network analysis and topic modeling, are used to analyze the scope, content, and trends of the retained patents. The included tools aim to provide accurate diagnoses according to established psychometric criteria, predict the response to specific treatment approaches, suggest optimal treatments, and make prognoses regarding disorder courses without intervention. About one-third of the tools recommend treatment options or include treatment administration related to digital therapeutics, pharmacotherapy, and electrotherapy. Data sources used to make predictions include behavioral data collected through mobile devices, neuroimaging, and electronic health records. The complexity of the technology combinations used in the included devices increased through 2021. The topics extracted from the patent data illuminate current trends and potential future developments in AI-enabled precision psychiatry. The most impactful patents and associated available products reveal relevant commercialization possibilities and likely future developments. Overall, the review highlights the potential of adopting AI-enabled precision psychiatry tools in practice.
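    As a self-contained illustration of one of the analytic approaches mentioned (topic modeling), the sketch below fits a small latent Dirichlet allocation model to a few invented patent-style snippets with scikit-learn; the documents, topic count, and preprocessing are arbitrary assumptions and do not reflect the review's actual corpus or pipeline.

# Toy topic-modeling sketch on invented patent-style text using scikit-learn.
# None of the snippets are real patent data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "system for predicting treatment response from neuroimaging data",
    "mobile device collecting behavioral data for depression diagnosis",
    "method for selecting medication based on genetic markers",
    "voice analysis apparatus estimating mood from speech samples",
    "electronic health record mining for psychiatric prognosis",
]

# Bag-of-words representation of the (toy) patent texts.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)

# Fit a small LDA model; the number of topics is an arbitrary choice here.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the highest-weighted words per topic to characterize the themes.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")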