4 research outputs found

    Validation of the Remote Automated ki:e Speech Biomarker for Cognition in Mild Cognitive Impairment: Verification and Validation following DiME V3 Framework

    INTRODUCTION: Progressive cognitive decline is the cardinal behavioral symptom in most dementia-causing diseases, such as Alzheimer's disease. As most well-established measures of cognition may not fit tomorrow's decentralized, remote clinical trials, digital cognitive assessments will gain importance. We present the evaluation of a novel digital speech biomarker for cognition (SB-C) following the Digital Medicine Society's V3 framework: verification, analytical validation, and clinical validation. METHODS: Evaluation was done in two independent clinical samples: the Dutch DeepSpA dataset (N = 69 subjective cognitive impairment [SCI], N = 52 mild cognitive impairment [MCI], and N = 13 dementia) and the Scottish SPeAk dataset (N = 25 healthy controls). For validation, two anchor scores were used: the Mini-Mental State Examination (MMSE) and the Clinical Dementia Rating (CDR) scale. RESULTS: Verification: the SB-C could be reliably extracted for both languages using an automatic speech processing pipeline. Analytical validation: in both languages, the SB-C was strongly correlated with MMSE scores. Clinical validation: the SB-C differed significantly between clinical groups (including MCI and dementia), was strongly correlated with the CDR, and could track clinically meaningful decline. CONCLUSION: Our results suggest that the ki:e SB-C is an objective, scalable, and reliable indicator of cognitive decline, fit for purpose as a remote assessment in early dementia clinical trials.
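    The analytical and clinical validation steps described above correspond to standard statistical checks. Below is a minimal Python sketch of those checks, assuming a per-participant table with hypothetical columns "sb_c" (the biomarker score), "mmse", "cdr", and "group"; the actual ki:e analyses may differ.

```python
# Minimal sketch of the validation analyses, not the ki:e pipeline itself.
# File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("deepspa_scores.csv")  # hypothetical per-participant table

# Analytical validation: correlation of the biomarker with the MMSE anchor.
r, p = stats.pearsonr(df["sb_c"], df["mmse"])
print(f"SB-C vs. MMSE: r = {r:.2f}, p = {p:.3g}")

# Clinical validation (1): do SB-C scores differ between the SCI, MCI, and
# dementia groups? One-way ANOVA across the diagnostic groups.
groups = [g["sb_c"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"Group difference: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Clinical validation (2): association with the ordinal CDR anchor
# (Spearman rank correlation is the usual choice for ordinal scales).
rho, p_rho = stats.spearmanr(df["sb_c"], df["cdr"])
print(f"SB-C vs. CDR: rho = {rho:.2f}, p = {p_rho:.3g}")
```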

    Artificial Intelligence-empowered recruitment for clinical trials

    BACKGROUND: Recruitment for clinical drug trials in Alzheimer's disease (AD) is a lengthy process with an ultimately high screening failure rate, as current research focuses on prodromal stages. We evaluate the option of pre-screening potential participants for AD trials via an automated phone-based assessment using speech analysis. METHOD: 140 participants (SCI, MCI, ADD) were recruited at the memory clinic in Maastricht as part of the MUMC+ study. They underwent an in-person baseline assessment (0M) at the clinic, where cognitive assessments were performed and speech was recorded using the Delta platform. For the follow-up assessment (6M), participants were contacted via telephone by a trained research nurse; cognitive assessments were performed on the phone, and speech was again recorded. Speech from each assessment point was analysed automatically to extract relevant features. Machine learning models predicting disease status, Clinical Dementia Rating (CDR) scale scores, and Mini-Mental State Examination (MMSE) scores were trained and evaluated on these data. RESULT: Models based on speech features extracted from the phone assessment predicted disease status with an AUC of 0.93 ± 0.06, the CDR score with a mean absolute error (MAE) of 1.9 ± 0.8, and the MMSE score with an MAE of 2.3 ± 1.1. Adding longitudinal data from the baseline assessments increased accuracy across all models. CONCLUSION: Automated pre-screening through speech analysis could be an effective tool to increase the efficiency and effectiveness of recruitment for AD drug trials.
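    The modelling step described above can be sketched as follows. This is a hedged illustration only: the file, feature columns, and model family are assumptions (the abstract does not specify them), and the status label is binarized for simplicity, whereas the study used SCI/MCI/ADD groups.

```python
# Sketch of the modelling step: speech features predicting disease status
# (AUC) and test scores (MAE) under cross-validation. All names hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("phone_speech_features.csv")   # hypothetical file
X = df.drop(columns=["group", "cdr", "mmse"])   # speech features only
y_status = (df["group"] != "SCI").astype(int)   # binarized status label

# Disease status: mean +/- SD of cross-validated ROC AUC.
auc = cross_val_score(RandomForestClassifier(random_state=0),
                      X, y_status, cv=5, scoring="roc_auc")
print(f"Status AUC: {auc.mean():.2f} ± {auc.std():.2f}")

# CDR and MMSE: cross-validated mean absolute error (sklearn returns the
# negated error, hence the minus sign).
for target in ["cdr", "mmse"]:
    mae = -cross_val_score(RandomForestRegressor(random_state=0),
                           X, df[target], cv=5,
                           scoring="neg_mean_absolute_error")
    print(f"{target.upper()} MAE: {mae.mean():.1f} ± {mae.std():.1f}")
```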

    Validation of an Automated Speech Analysis of Cognitive Tasks within a Semiautomated Phone Assessment

    Introduction: We studied the accuracy of automatic speech recognition (ASR) software by comparing ASR scores with manual scores from a verbal learning test (VLT) and a semantic verbal fluency (SVF) task in a semiautomated phone assessment in a memory clinic population. Furthermore, we examined the differentiating value of these tests between participants with subjective cognitive decline (SCD) and mild cognitive impairment (MCI), and investigated whether the automatically calculated speech and linguistic features had added value over the commonly used total scores. Methods: We included 94 participants from the memory clinic of the Maastricht University Medical Center+ (SCD N = 56 and MCI N = 38). A test leader guided each participant through the semiautomated phone assessment. The VLT and SVF were audio recorded and processed via a mobile application, and the recall count and speech and linguistic features were extracted automatically. Diagnostic groups were classified by training machine learning classifiers to differentiate SCD and MCI participants. Results: The intraclass correlation for inter-rater reliability between the manual and ASR total word counts was 0.89 (95% CI 0.09-0.97) for the VLT immediate recall, 0.94 (95% CI 0.68-0.98) for the VLT delayed recall, and 0.93 (95% CI 0.56-0.97) for the SVF. The full model including the total word count and the speech and linguistic features had an area under the curve of 0.81 and 0.77 for the VLT immediate and delayed recall, respectively, and 0.61 for the SVF. Conclusion: There was high agreement between the ASR and manual scores, keeping the broad confidence intervals in mind. The phone-based VLT was able to differentiate between SCD and MCI and may offer opportunities for clinical trial screening.
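    The inter-rater reliability analysis treats the manual rater and the ASR system as two raters scoring the same participants. A minimal sketch using the pingouin package is shown below; the file and column names are hypothetical, and the abstract does not state which ICC form was used (ICC(2,1), absolute agreement, is a common choice).

```python
# Sketch of the manual-vs-ASR reliability check. Names are hypothetical.
import pandas as pd
import pingouin as pg

# Hypothetical wide table: one row per participant, one column per rater.
wide = pd.read_csv("vlt_word_counts.csv")

# Reshape to long format: one rating per row, with a "rater" column
# distinguishing the manual score from the ASR-derived score.
long = wide.melt(id_vars="participant",
                 value_vars=["manual_count", "asr_count"],
                 var_name="rater", value_name="count")

# pingouin reports all six ICC forms with 95% confidence intervals.
icc = pg.intraclass_corr(data=long, targets="participant",
                         raters="rater", ratings="count")
print(icc[["Type", "ICC", "CI95%"]])
```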

    Screening for Mild Cognitive Impairment Using a Machine Learning Classifier and the Remote Speech Biomarker for Cognition: Evidence from Two Clinically Relevant Cohorts

    BACKGROUND: Modern prodromal Alzheimer's disease (AD) clinical trials may extend outreach to the general population, causing high screen-out rates and thereby increasing study time and costs. Screening tools that cost-effectively detect mild cognitive impairment (MCI) at scale are therefore needed. OBJECTIVE: Develop a screening algorithm that can differentiate between healthy and MCI participants in different clinically relevant populations. METHODS: Two screening algorithms based on the remote ki:e speech biomarker for cognition (ki:e SB-C) were designed on a Dutch memory clinic cohort (N = 121) and a Swedish birth cohort (N = 404). MCI classification was evaluated for each algorithm on its training cohort as well as on the unrelated validation cohort. RESULTS: The algorithms achieved an AUC of 0.73 and 0.77 in their respective training cohorts and an AUC of 0.81 in the unseen validation cohort. CONCLUSION: The results indicate that a ki:e SB-C-based algorithm robustly detects MCI across cohorts and languages, which has the potential to make current trials more efficient and improve future primary health care.
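    The cross-cohort design (fit on one cohort, evaluate on the unrelated one) can be sketched as follows. Feature and file names are hypothetical, the proprietary SB-C feature extraction is not shown, and the model family is an assumption.

```python
# Sketch of train-on-one-cohort, validate-on-the-other evaluation.
# All names hypothetical; not the ki:e implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

train = pd.read_csv("memory_clinic_cohort.csv")  # hypothetical training cohort
test = pd.read_csv("birth_cohort.csv")           # hypothetical validation cohort

# Assume precomputed biomarker features share a common column prefix.
features = [c for c in train.columns if c.startswith("sbc_")]

# Standardize features, then fit a simple linear classifier on cohort 1.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(train[features], train["mci"])

# Generalization check: AUC on the entirely unseen second cohort.
auc = roc_auc_score(test["mci"], model.predict_proba(test[features])[:, 1])
print(f"Validation cohort AUC: {auc:.2f}")
```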