
    Developing Digital Tools for Remote Clinical Research: How to Evaluate the Validity and Practicality of Active Assessments in Field Settings

    The ability of remote research tools to collect granular, high-frequency data on symptoms and digital biomarkers is an important strength: it circumvents many limitations of traditional clinical trials and improves the capture of clinically relevant data. This approach allows researchers to establish more robust baselines and derive novel phenotypes, improving precision in diagnosis and accuracy in outcomes. The process of developing these tools, however, is complex, because data need to be collected at a frequency that is meaningful but not burdensome for the participant or patient. Furthermore, traditional techniques, which rely on fixed conditions to validate assessments, may be inappropriate for validating tools that are designed to capture data under flexible conditions. This paper discusses the process for determining whether a digital assessment is suitable for remote research and offers suggestions on how to validate these novel tools.

    Prediction of mental effort derived from an automated vocal biomarker using machine learning in a large-scale remote sample

    Introduction: Biomarkers of mental effort may help to identify subtle cognitive impairments in the absence of task performance deficits. Here, we aim to detect mental effort on a verbal task using automated voice analysis and machine learning.

    Methods: Audio data from the digit span backwards task were recorded and scored with automated speech recognition using the online platform NeuroVocalix™, yielding usable data from 2,764 healthy adults (1,022 male, 1,742 female; mean age 31.4 years). Acoustic features were aggregated across each trial and normalized within each subject. Cognitive load was dichotomized for each trial by categorizing trials at >0.6 of each participant's maximum span as "high load." Data were divided into training (60%), test (20%), and validation (20%) datasets, each containing different participants. Training and test data were used for model building and hyperparameter tuning. Five classification models (Logistic Regression, Naive Bayes, Support Vector Machine, Random Forest, and Gradient Boosting) were trained to predict cognitive load ("high" vs. "low") from acoustic features. Analyses were limited to correct responses. The model was evaluated on the validation dataset, across all span lengths and within the subset of trials with a four-digit span. Classifier discriminant power was examined with receiver operating characteristic (ROC) analysis.

    Results: Participants reached a mean span of 6.34 out of 8 items (SD = 1.38). The Gradient Boosting classifier provided the best-performing model on test data (AUC = 0.98) and showed excellent discriminant power for cognitive load on the validation dataset, both across all span lengths (AUC = 0.99) and for four-digit-only utterances (AUC = 0.95).

    Discussion: A sensitive biomarker of mental effort can be derived from vocal acoustic features in remotely administered verbal cognitive tests. The use case of this biomarker for improving the sensitivity of cognitive tests to subtle pathology now needs to be examined.
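    The pipeline summarized in this abstract (within-subject feature normalization, load dichotomization at >0.6 of maximum span, a participant-wise 60/20/20 split, and a gradient boosting classifier evaluated by ROC AUC) maps onto a standard scikit-learn workflow. Below is a minimal sketch of that workflow; the input file acoustic_features.csv, its column names, and the feat_ prefix are hypothetical stand-ins, since the NeuroVocalix feature set is not described in the abstract.

```python
# Minimal sketch of the cognitive-load pipeline described in the abstract.
# Assumed (not from the paper): a CSV with one row per correct trial and
# columns "participant", "span", "max_span", plus acoustic features "feat_*".
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("acoustic_features.csv")  # hypothetical input file
feature_cols = [c for c in df.columns if c.startswith("feat_")]

# Normalize acoustic features within each participant.
df[feature_cols] = df.groupby("participant")[feature_cols].transform(
    lambda x: (x - x.mean()) / x.std()
)

# Dichotomize cognitive load: trials above 0.6 of a participant's
# maximum span count as "high load".
df["high_load"] = (df["span"] > 0.6 * df["max_span"]).astype(int)

# 60/20/20 split with disjoint participants in each partition.
outer = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, rest_idx = next(outer.split(df, groups=df["participant"]))
train, rest = df.iloc[train_idx], df.iloc[rest_idx]
inner = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
test_idx, val_idx = next(inner.split(rest, groups=rest["participant"]))
test, val = rest.iloc[test_idx], rest.iloc[val_idx]

# Gradient boosting was the best of the five classifiers compared.
clf = GradientBoostingClassifier().fit(train[feature_cols], train["high_load"])
for name, part in (("test", test), ("validation", val)):
    probs = clf.predict_proba(part[feature_cols])[:, 1]
    print(f"{name} AUC = {roc_auc_score(part['high_load'], probs):.2f}")
```

    Grouped splitting, rather than a random row-level split, is what ensures the reported AUCs reflect generalization to unseen speakers rather than to unseen trials from familiar speakers.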

    PsyCog: A computerised mini battery for assessing cognition in psychosis

    Despite the functional impact of cognitive deficit in people with psychosis, objective cognitive assessment is not typically part of routine clinical care. This is partly due to the length of traditional assessments and the need for a highly trained administrator. Brief, automated, computerised assessments could help to address this issue. We present data from an evaluation of PsyCog, a computerised, non-verbal mini battery of cognitive tests. Healthy Control (HC) (N = 135), Clinical High Risk (CHR) (N = 233), and First Episode Psychosis (FEP) (N = 301) participants from a multi-centre prospective study were assessed at baseline, 6 months, and 12 months. PsyCog was used to assess cognitive performance at baseline and at up to two follow-up timepoints. Mean total testing time was 35.95 min (SD = 2.87). Relative to HCs, effect sizes of performance impairments were medium to large in FEP patients (composite score G = 1.21, subtest range = 0.52-0.88) and small to medium in CHR patients (composite score G = 0.59, subtest range = 0.18-0.49). Site effects were minimal, and test-retest reliability of the PsyCog composite was good (ICC = 0.82-0.89), though some practice effects and differences in data completion between groups were found. The present implementation of PsyCog shows it to be a useful tool for assessing cognitive function in people with psychosis. Computerised cognitive assessments have the potential to facilitate the evaluation of cognition in psychosis in both research and clinical care, though caution should still be taken in terms of implementation and study design.
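    For readers unfamiliar with the two headline statistics above, the sketch below shows how a control-referenced effect size (Glass's delta, assuming that is what "G" denotes here) and a test-retest intraclass correlation can be computed. The scores are simulated placeholders, not PsyCog data, and the pingouin package is an assumed dependency.

```python
# Sketch of a control-referenced effect size and a test-retest ICC.
# All scores below are simulated; group sizes mirror the abstract (HC = 135,
# FEP = 301) but the means and SDs are arbitrary illustration values.
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available; provides intraclass_corr

def glass_delta(controls: np.ndarray, patients: np.ndarray) -> float:
    """Group difference scaled by the control group's standard deviation."""
    return (controls.mean() - patients.mean()) / controls.std(ddof=1)

rng = np.random.default_rng(0)
hc = rng.normal(0.0, 1.0, 135)    # simulated Healthy Control composite scores
fep = rng.normal(-1.2, 1.0, 301)  # simulated First Episode Psychosis scores
print(f"Glass's delta (FEP vs. HC) = {glass_delta(hc, fep):.2f}")

# Test-retest reliability: long-format scores, two sessions per subject.
long = pd.DataFrame({
    "subject": np.repeat(np.arange(135), 2),
    "session": np.tile(["baseline", "6m"], 135),
    "score": rng.normal(0.0, 1.0, 270),
})
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="session", ratings="score")
print(icc[["Type", "ICC"]])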

    Description of the Method for Evaluating Digital Endpoints in Alzheimer Disease Study: Protocol for an Exploratory, Cross-sectional Study

    BACKGROUND: More sensitive and less burdensome efficacy end points are urgently needed to improve the effectiveness of clinical drug development for Alzheimer disease (AD). Although conventional end points lack sensitivity, digital technologies hold promise for amplifying the detection of treatment signals and capturing cognitive anomalies at earlier disease stages. Using digital technologies and combining several test modalities allows for the collection of richer information about cognitive and functional status, which is not ascertainable via conventional paper-and-pencil tests.

    OBJECTIVE: This study aimed to assess the psychometric properties, operational feasibility, and patient acceptance of 10 promising technologies that are to be used as efficacy end points to measure cognition in future clinical drug trials.

    METHODS: The Method for Evaluating Digital Endpoints in Alzheimer Disease study is an exploratory, cross-sectional, noninterventional study that will evaluate the ability of 10 digital technologies to accurately classify participants into 4 cohorts according to the severity of cognitive impairment and dementia. Moreover, this study will assess the psychometric properties of each of the tested digital technologies, including the acceptable range (to assess ceiling and floor effects), concurrent validity (correlating digital outcome measures with traditional paper-and-pencil tests in AD), test-retest reliability, and responsiveness (sensitivity to change in a mild cognitive challenge model). This study included 50 eligible male and female participants (aged between 60 and 80 years), of whom 13 (26%) were amyloid-negative, cognitively healthy participants (controls); 12 (24%) were amyloid-positive, cognitively healthy participants (presymptomatic); 13 (26%) had mild cognitive impairment (predementia); and 12 (24%) had mild AD (mild dementia). The study involved 4 in-clinic visits. During the initial visit, all participants completed all conventional paper-and-pencil assessments. During the following 3 visits, the participants underwent a series of novel digital assessments.

    RESULTS: Participant recruitment and data collection began in June 2020 and continued until June 2021; data collection therefore took place during the COVID-19 pandemic. Data were successfully collected from all digital technologies to evaluate statistical and operational performance and patient acceptance. This paper reports the baseline demographics and characteristics of the study population as well as the study's progress during the pandemic.

    CONCLUSIONS: This study was designed to generate feasibility insights and validation data to help advance novel digital technologies in clinical drug development. The findings will help guide future methods for assessing novel digital technologies and inform clinical drug trials in early AD, with the aim of enhancing clinical end point strategies with digital technologies.

    INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/35442
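    Two of the psychometric checks named in this protocol, floor/ceiling effects and concurrent validity against paper-and-pencil tests, reduce to simple computations once scores are collected. The sketch below illustrates both; the scores, the 0-30 scale range, and the ~15% flagging threshold are illustrative assumptions rather than values from the study.

```python
# Sketch of two psychometric checks from the protocol: floor/ceiling
# effects and concurrent validity. All data are simulated placeholders;
# the ~15% flag threshold is a common convention, not a study parameter.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
digital = rng.integers(0, 31, 50).astype(float)    # hypothetical digital scores, 0-30
reference = 0.8 * digital + rng.normal(0, 3, 50)   # hypothetical paper-and-pencil scores

# Floor/ceiling effects: fraction of participants at the scale's extremes.
floor = (digital == 0).mean()
ceiling = (digital == 30).mean()
print(f"floor {floor:.0%}, ceiling {ceiling:.0%} (flag if either exceeds ~15%)")

# Concurrent validity: correlation between digital and conventional scores.
r, p = pearsonr(digital, reference)
print(f"concurrent validity r = {r:.2f} (p = {p:.3g})")
```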

    Reverse Engineering of Digital Measures: Inviting Patients to the Conversation

    Background: Digital measures offer an unparalleled opportunity to create a more holistic picture of how people who are patients behave in their real-world environments, thereby establishing a better connection between patients, caregivers, and the clinical evidence used to drive drug development and disease management. Reaching this vision will require a new level of co-creation between the stakeholders who design, develop, use, and make decisions using evidence from digital measures.

    Summary: In September 2022, "Reverse Engineering of Digital Measures," the second in a series of meetings hosted by the Swiss Federal Institute of Technology in Zürich and the Foundation for the National Institutes of Health Biomarkers Consortium and sponsored by the Wellcome Trust, was held in Zurich, Switzerland. A broad range of stakeholders shared their experience across four case studies to examine how patient centricity is essential in shaping the development and validation of digital evidence generation tools.

    Key Messages: In this paper, we discuss progress and the remaining barriers to widespread use of digital measures for evidence generation in clinical development and care delivery. We also present key discussion points and takeaways to continue the discourse and to provide a basis for dissemination and outreach to the wider community and other stakeholders. The work presented here offers a blueprint for how and why the patient voice can be thoughtfully integrated into digital measure development and shows that continued multistakeholder engagement is critical for further progress.