
    Digitized thought records: A practitioner-focused review of cognitive restructuring apps

    Mental health (MH) apps can be used as adjunctive tools in traditional face-to-face therapy to help implement components of evidence-based treatments. However, practitioners interested in using MH apps face a variety of challenges, including knowing which apps are appropriate to use. Although some resources are available to help practitioners identify apps, granular analyses of how faithfully specific clinical skills are represented in apps are lacking. This study aimed to review and analyse MH apps containing a core component of cognitive behaviour therapy (CBT): cognitive restructuring (CR). A keyword search for apps providing CR functionality on the Apple App Store and Google Play Store yielded 246 apps after removal of duplicates, which was further reduced to 15 apps following verification of a CR component and application of other inclusion/exclusion criteria. Apps were coded on their inclusion of core elements of CR and on general app features, including app content, interoperability/data sharing, professional involvement, ethics, and data safeguards. They were also rated on user experience as assessed by the Mobile App Rating Scale (MARS). Although a majority of the CR apps include most core CR elements, they vary considerably with respect to more granular sub-elements of CR (e.g. rating the intensity of emotions), other general app features, and user experience (average MARS = 3.53, range 2.30 to 4.58). Specific apps that fared best on CR fidelity and user experience are highlighted, and implications of the findings for clinicians, researchers, and app developers are discussed. Key learning aims: (1) To identify no-cost mobile health apps that practitioners can adopt to facilitate cognitive restructuring. (2) To review how well the core elements of cognitive restructuring are represented in these apps. (3) To characterize these apps with respect to their user experience and additional features. (4) To provide examples of high-quality apps that represent cognitive restructuring with fidelity and facilitate its clinical implementation.

    A Multisite Preregistered Paradigmatic Test of the Ego-Depletion Effect

    We conducted a preregistered multilaboratory project (k = 36; N = 3,531) to assess the size and robustness of ego-depletion effects using a novel replication method, termed the paradigmatic replication approach. Each laboratory implemented one of two procedures intended to manipulate self-control and tested performance on a subsequent measure of self-control. Confirmatory tests found a nonsignificant result (d = 0.06). Confirmatory Bayesian meta-analyses using an informed-prior hypothesis (δ = 0.30, SD = 0.15) found that the data were 4 times more likely under the null than the alternative hypothesis. Hence, preregistered analyses did not find evidence for a depletion effect. Exploratory analyses on the full sample (i.e., ignoring exclusion criteria) found a statistically significant effect (d = 0.08); Bayesian analyses showed that the data were about equally likely under the null and informed-prior hypotheses. Exploratory moderator tests suggested that the depletion effect was larger for participants who reported more fatigue but was not moderated by trait self-control, willpower beliefs, or action orientation.
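    The Bayes-factor comparison described in this abstract can be sketched with a toy calculation: under a normal likelihood for the meta-analytic effect, integrating over the informed normal prior N(0.30, 0.15²) yields a closed-form marginal likelihood, so the Bayes factor reduces to a ratio of two normal densities. The standard error below is an assumed illustrative value derived from the total sample size, not a figure reported by the paper, so the resulting number only approximates the reported "4 times more likely under the null".

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Observed confirmatory meta-analytic effect (d = 0.06) from the abstract.
    d_obs = 0.06

    # Illustrative standard error (NOT from the paper): a two-group comparison
    # with N = 3,531 total participants gives se of roughly sqrt(4 / N).
    se = sqrt(4 / 3531)

    # Marginal likelihood under the null hypothesis (delta = 0).
    m0 = NormalDist(mu=0.0, sigma=se).pdf(d_obs)

    # Marginal likelihood under the informed prior delta ~ N(0.30, 0.15^2):
    # a normal likelihood integrated over a normal prior is again normal,
    # with variance se^2 + 0.15^2.
    m1 = NormalDist(mu=0.30, sigma=sqrt(se**2 + 0.15**2)).pdf(d_obs)

    # Bayes factor in favour of the null over the informed alternative.
    bf01 = m0 / m1
    print(f"BF01 = {bf01:.2f}")
    ```

    With these assumed inputs the ratio favours the null by a factor of about 3, in the same direction and rough magnitude as the paper's reported result.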

    A process for reviewing mental health apps: Using the One Mind PsyberGuide Credibility Rating System

    OBJECTIVE: Given the increasing number of publicly available mental health apps, independent advice is needed to guide adoption. This paper discusses the challenges and opportunities of current mental health app rating systems and describes the refinement process of one prominent system, the One Mind PsyberGuide Credibility Rating Scale (PGCRS). METHODS: PGCRS Version 1 was developed in 2013 and deployed for 7 years, during which time a number of limitations were identified. Version 2 was created through multiple stages, including a review of evaluation guidelines and consumer research, input from scientific experts, testing, and evaluation of face validity. We then re-reviewed 161 mental health apps using the updated rating scale, investigated the reliability and discrepancy of initial scores, and updated ratings on the One Mind PsyberGuide public app guide. RESULTS: Reliabilities across the scale's 9 items ranged from −0.10 to 1.00, demonstrating that some characteristics of apps are more difficult to rate consistently. The average overall score of the 161 reviewed mental health apps was 2.51/5.00 (range 0.33–5.00). Ratings were not strongly correlated with app store star ratings, suggesting that credibility scores provide different information from what is contained in star ratings. CONCLUSION: PGCRS summarizes and weights available information in 4 domains: intervention specificity, consumer ratings, research, and development. Final scores are created through an iterative process of initial rating and consensus review. The process of updating this rating scale and integrating it into a procedure for evaluating apps demonstrates one method for determining app quality.
