4,848 research outputs found

    Dementia detection using automatic analysis of conversations

    Neurodegenerative disorders, such as dementia, can affect a person's speech, language and, as a consequence, conversational interaction capabilities. A recent study, aimed at improving dementia detection accuracy, investigated the use of conversation analysis (CA) of interviews between patients and neurologists as a means to differentiate between patients with progressive neurodegenerative memory disorder (ND) and those with (non-progressive) functional memory disorders (FMD). However, manual CA is expensive and difficult to scale up for routine clinical use. In this paper, we present an automatic classification system using an intelligent virtual agent (IVA). In particular, using two parallel corpora of neurologist-led and IVA-led interactions respectively, we show that using acoustic, lexical and CA-inspired features enables ND/FMD classification rates of 90.0% for the neurologist-patient conversations and 90.9% for the IVA-patient conversations. Analysis of the differentiating potential of individual features shows that some differences exist between the IVA-led and human-led conversations, for example in the average turn length of patients.
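
    A minimal sketch of how CA-inspired turn-taking features such as the patients' average turn length could be computed from diarised segments. The segment format and feature names are illustrative assumptions, not the authors' published code.

```python
# Hypothetical CA-inspired turn-taking features over diarised segments.
# The (speaker, start_sec, end_sec) input format is an assumption.

def turn_features(segments):
    """Compute simple turn-taking statistics from (speaker, start, end) tuples."""
    patient = [end - start for spk, start, end in segments if spk == "patient"]
    other = [end - start for spk, start, end in segments if spk != "patient"]
    total = sum(patient) + sum(other) + 1e-9
    return {
        "patient_mean_turn_len": sum(patient) / len(patient) if patient else 0.0,
        "patient_turn_count": len(patient),
        "patient_talk_ratio": sum(patient) / total,
    }

if __name__ == "__main__":
    demo = [("interviewer", 0.0, 3.2), ("patient", 3.5, 9.8),
            ("interviewer", 10.0, 12.1), ("patient", 12.4, 14.0)]
    print(turn_features(demo))
```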

    Detecting early signs of dementia in conversation

    Dementia can affect a person's speech, language and conversational interaction capabilities. The early diagnosis of dementia is of great clinical importance. Recent studies using the qualitative methodology of Conversation Analysis (CA) demonstrated that communication problems may be picked up during conversations between patients and neurologists, and that this can be used to differentiate between patients with Neurodegenerative Disorders (ND) and those with non-progressive Functional Memory Disorder (FMD). However, conducting manual CA is expensive and difficult to scale up for routine clinical use. This study introduces an automatic approach for processing such conversations which can help in identifying the early signs of dementia and distinguishing them from other clinical categories (FMD, Mild Cognitive Impairment (MCI) and Healthy Control (HC)). The dementia detection system starts with a speaker diarisation module to segment an input audio file (determining who talks when). The segmented files are then passed to an automatic speech recogniser (ASR) to transcribe the utterances of each speaker. Next, the feature extraction unit extracts a number of features (CA-inspired, acoustic, lexical and word-vector) from the transcripts and audio files. Finally, a classifier is trained on the features to determine the clinical category of the input conversation. Moreover, we investigate replacing the neurologist in the conversation with an Intelligent Virtual Agent (IVA) asking similar questions. We show that, despite differences between the IVA-led and the neurologist-led conversations, the results achieved by the IVA are as good as those gained by the neurologists. Furthermore, the IVA can be used to administer more standard cognitive tests, such as verbal fluency tests, and produce automatic scores, which can then boost the performance of the classifier. The final blind evaluation of the system shows that the classifier can identify early signs of dementia with an acceptable level of accuracy and robustness (considering both sensitivity and specificity).
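
    The four-stage pipeline described here (diarisation, ASR, feature extraction, classification) can be sketched as below. The stage functions are hypothetical stubs standing in for a diarisation toolkit and an ASR engine; none of this is the study's actual code.

```python
# Skeleton of a diarisation -> ASR -> features -> classifier pipeline.
# diarise() and transcribe() are stubs; a real system would call a speaker
# diarisation toolkit and an automatic speech recogniser at these points.

def diarise(audio_path):
    # Stub: return (speaker, start_sec, end_sec) segments ("who talks when").
    return [("interviewer", 0.0, 4.0), ("patient", 4.3, 11.0)]

def transcribe(audio_path, segments):
    # Stub: return one transcript per speaker from the segmented audio.
    return {"interviewer": "what did you do yesterday",
            "patient": "I I went to the the shop I think"}

def extract_features(segments, transcripts):
    # Toy CA-inspired and lexical features; the study uses a far richer set
    # (acoustic and word-vector features as well).
    patient_time = sum(e - s for spk, s, e in segments if spk == "patient")
    words = transcripts["patient"].split()
    type_token_ratio = len(set(words)) / max(len(words), 1)
    return [patient_time, len(words), type_token_ratio]

audio = "conversation.wav"          # hypothetical input file
segs = diarise(audio)
features = extract_features(segs, transcribe(audio, segs))
print(features)                     # would be fed to a trained classifier
```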

    An avatar-based system for identifying individuals likely to develop dementia

    This paper presents work on developing an automatic dementia screening test based on patients’ ability to interact and communicate, a highly cognitively demanding process in which early signs of dementia can often be detected. Such a test would help general practitioners with no specialist knowledge make better diagnostic decisions, as current tests lack specificity and sensitivity. We investigate the feasibility of basing the test on conversations between a ‘talking head’ (avatar) and a patient, and we present a system for analysing such conversations for signs of dementia in the patient’s speech and language. Previously we proposed a semi-automatic system that transcribed conversations between patients and neurologists and extracted conversation analysis style features in order to differentiate between patients with progressive neurodegenerative dementia (ND) and functional memory disorders (FMD); determining who talks when in the conversations was performed manually. In this study, we investigate a fully automatic system including speaker diarisation and the use of additional acoustic and lexical features. Initial results from a pilot study are presented which show that the avatar conversations can successfully classify ND/FMD with around 91% accuracy, in line with previous results for conversations led by a neurologist.
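
    One way the "who talks when" (speaker diarisation) step could be implemented with an off-the-shelf toolkit is sketched below, using pyannote.audio as an assumed stand-in; the paper's own diarisation module is not published, and the model name and token requirement follow pyannote's public 2.x API.

```python
# Hedged sketch: speaker diarisation with pyannote.audio (an assumed
# stand-in, not the system described in the paper). The pretrained
# pipeline requires a Hugging Face access token.
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                    use_auth_token="HF_TOKEN")  # hypothetical token
diarisation = pipeline("avatar_conversation.wav")  # hypothetical file

for turn, _, speaker in diarisation.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```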

    Diagnosing people with dementia using automatic conversation analysis

    A recent study using Conversation Analysis (CA) has demonstrated that communication problems may be picked up during conversations between patients and neurologists, and that this can be used to differentiate between patients with progressive neurodegenerative dementia (ND) and those with non-progressive functional memory disorders (FMD). This paper presents a novel automatic method for transcribing such conversations and extracting CA-style features. A range of acoustic, syntactic, semantic and visual features were automatically extracted and used to train a set of classifiers. In a proof-of-principle style study, using data recorded during real neurologist-patient consultations, we demonstrate that automatically extracting CA-style features gives a classification accuracy of 95% when using verbatim transcripts. Replacing those transcripts with automatic speech recognition transcripts, we obtain a classification accuracy of 79%, which improves to 90% when feature selection is applied. This is a first and encouraging step towards replacing inaccurate, potentially stressful cognitive tests with a test based on monitoring conversational capabilities that could be conducted in, for example, the privacy of the patient’s own home.
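
    The jump from 79% to 90% with feature selection suggests a standard select-then-classify design; a generic scikit-learn sketch of that step follows. The selector, classifier and placeholder data are assumptions, not the paper's configuration.

```python
# Generic feature-selection-then-classify sketch (assumed configuration).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 40))        # placeholder feature matrix (ASR-derived)
y = rng.integers(0, 2, size=30)      # placeholder ND/FMD labels

pipe = make_pipeline(SelectKBest(f_classif, k=10),
                     LogisticRegression(max_iter=1000))
print("mean CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```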

    A Method for Analysis of Patient Speech in Dialogue for Dementia Detection

    We present an approach to automatic detection of Alzheimer's type dementia based on characteristics of spontaneous spoken language dialogue, consisting of interviews recorded in natural settings. The proposed method employs additive logistic regression (a machine learning boosting method) on content-free features extracted from dialogical interaction to build a predictive model. The model training data consisted of 21 dialogues between patients with Alzheimer's and interviewers, and 17 dialogues between patients with other health conditions and interviewers. Features analysed included speech rate, turn-taking patterns and other speech parameters. Despite relying solely on content-free features, our method obtains an overall accuracy of 86.5%, a result comparable to those of state-of-the-art methods that employ more complex lexical, syntactic and semantic features. While further investigation is needed, the fact that we were able to obtain promising results using only features that can be easily extracted from spontaneous dialogues suggests the possibility of designing non-invasive and low-cost mental health monitoring tools for use at scale. (Comment: 8 pages; Resources and ProcessIng of linguistic, paralinguistic and extra-linguistic Data from people with various forms of cognitive impairment, LREC 201)
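
    Additive logistic regression is the model family behind LogitBoost; scikit-learn ships no LogitBoost, so the sketch below uses gradient boosting with logistic loss as a close stand-in, over content-free features of the kind the abstract names (speech rate, turn-taking statistics). All data here are toy placeholders.

```python
# Content-free dialogue features + a boosting classifier. Gradient boosting
# with logistic loss stands in for LogitBoost (additive logistic regression);
# features and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def dialogue_features(turns, patient_word_counts):
    """turns: (speaker, duration_sec) pairs; word counts are per patient turn."""
    pt = [d for spk, d in turns if spk == "patient"]
    speech_rate = sum(patient_word_counts) / max(sum(pt), 1e-9)  # words/sec
    return [speech_rate, len(pt), float(np.mean(pt)), float(np.std(pt))]

print(dialogue_features([("patient", 6.3), ("interviewer", 2.0),
                         ("patient", 4.1)], [14, 9]))

# Toy stand-in for the 21 Alzheimer's + 17 control dialogues.
rng = np.random.default_rng(1)
X = rng.normal(size=(38, 4))
y = np.array([1] * 21 + [0] * 17)
clf = GradientBoostingClassifier()   # logistic (log-loss) objective by default
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```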

    Deep Learning-based Cognitive Impairment Diseases Prediction and Assistance using Multimodal Data

    In this project, we propose a mobile robot-based system capable of analyzing data from elderly people and patients with cognitive impairment diseases, such as aphasia or dementia. The project entails the deployment of two primary tasks that will be performed by the robot. The first task is the detection of these diseases in their early stages to initiate professional treatment, thereby improving the patient's quality of life. The other task focuses on automatic emotion detection, particularly during interactions with other people, in this case clinicians. Additionally, the project aims to examine how the combination of different modalities, such as audio or text, can influence the model's results. Extensive research has been conducted on various dementia and aphasia datasets, as well as the implemented tasks. For this purpose, we utilized the DementiaBank and AphasiaBank datasets, which contain multimodal data in different formats, including video, audio and audio transcriptions. We employed diverse models for the prediction task, including Convolutional Neural Networks for audio classification, Transformers for text classification, and a multimodal model combining both approaches. These models were evaluated on a separate test set, and the best results were achieved using the text modality, which reached 90.36% accuracy in detecting dementia. Additionally, we conducted a detailed analysis of the available data to interpret the obtained results and assess the model's explainability. The pipeline for automatic emotion recognition was evaluated by manually reviewing the initial frames of one hundred randomly selected video samples from the dataset. This pipeline was also employed to recognize emotions in both healthy individuals and those with aphasia. The study revealed that individuals with aphasia express different emotional moods from healthy individuals when listening to someone's speech, primarily due to their difficulties in understanding and expressing speech, which negatively impacts their mood. Analyzing their emotional state can facilitate improved interactions by avoiding conversations that may have a negative impact on their mood, thus providing better assistance.
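
    A minimal PyTorch sketch of the kind of audio+text fusion model the project describes: a small CNN over audio features and a Transformer encoder over token IDs, with the two embeddings concatenated before classification. All sizes and the fusion-by-concatenation choice are assumptions for illustration.

```python
# Late-fusion audio (CNN) + text (Transformer) classifier; an illustrative
# sketch, not the project's actual architecture or hyperparameters.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, n_mels=64, vocab=5000, d_model=128, n_classes=2):
        super().__init__()
        self.audio = nn.Sequential(              # CNN over a mel-spectrogram
            nn.Conv1d(n_mels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.text = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(64 + d_model, n_classes)

    def forward(self, mel, tokens):
        a = self.audio(mel)                            # (batch, 64)
        t = self.text(self.embed(tokens)).mean(dim=1)  # (batch, d_model)
        return self.head(torch.cat([a, t], dim=1))     # fuse, then classify

model = LateFusion()
logits = model(torch.randn(2, 64, 300), torch.randint(0, 5000, (2, 40)))
print(logits.shape)  # torch.Size([2, 2])
```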

    Toward the Automation of Diagnostic Conversation Analysis in Patients with Memory Complaints.

    BACKGROUND: The early diagnosis of dementia is of great clinical and social importance. A recent study using the qualitative methodology of conversation analysis (CA) demonstrated that language and communication problems are evident during interactions between patients and neurologists, and that interactional observations can be used to differentiate between cognitive difficulties due to neurodegenerative disorders (ND) or functional memory disorders (FMD). OBJECTIVE: This study explores whether the differential diagnostic analysis of doctor-patient interactions in a memory clinic can be automated. METHODS: Verbatim transcripts of conversations between neurologists and patients initially presenting with memory problems to a specialist clinic were produced manually (15 with FMD, and 15 with ND). A range of automatically detectable features focusing on acoustic, lexical, semantic and visual information contained in the transcripts were defined, aiming to replicate the diagnostic qualitative observations. The features were used to train a set of five machine learning classifiers to distinguish between ND and FMD. RESULTS: The mean rate of correct classification between ND and FMD was 93%, ranging from 97% by the Perceptron classifier to 90% by the Random Forest classifier. Using only the ten best features, the mean correct classification score increased to 95%. CONCLUSION: This pilot study provides proof-of-principle that a machine learning approach to analyzing transcripts of interactions between neurologists and patients describing memory problems can distinguish people with neurodegenerative dementia from people with FMD.
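
    The evaluation design (the same feature set scored across several classifiers, then again restricted to the ten best features) can be reproduced generically in scikit-learn as below. Perceptron and Random Forest mirror classifiers named in the abstract; the remaining choices and the placeholder data are assumptions.

```python
# Compare classifiers on full vs. top-10 feature sets (illustrative data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 25))        # placeholder: 15 FMD + 15 ND transcripts
y = np.array([0] * 15 + [1] * 15)

for clf in (Perceptron(max_iter=1000), RandomForestClassifier(), SVC()):
    full = cross_val_score(clf, X, y, cv=5).mean()
    top10 = cross_val_score(
        make_pipeline(SelectKBest(f_classif, k=10), clf), X, y, cv=5).mean()
    print(f"{type(clf).__name__}: full={full:.2f}, top10={top10:.2f}")
```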

    A longitudinal observational study of home-based conversations for detecting early dementia: protocol for the CUBOId TV task

    INTRODUCTION: Limitations in effective dementia therapies mean that early diagnosis and monitoring are critical for disease management, but current clinical tools are impractical and/or unreliable, and disregard short-term symptom variability. Behavioural biomarkers of cognitive decline, such as speech, sleep and activity patterns, can manifest prodromal pathological changes. They can be continuously measured at home with smart sensing technologies, and permit interpersonal interactions to be leveraged for optimising diagnostic and prognostic performance. Here we describe the ContinUous behavioural Biomarkers Of cognitive Impairment (CUBOId) study, which explores the feasibility of multimodal data fusion for in-home monitoring of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). The report focuses on a subset of CUBOId participants who perform a novel speech task, the ‘TV task’, designed to track changes in ecologically valid conversations with disease progression. METHODS AND ANALYSIS: CUBOId is a longitudinal observational study. Participants have diagnoses of MCI or AD, and controls are their live-in partners with no such diagnosis. Multimodal activity data were passively acquired from wearables and in-home fixed sensors over timespans of 8–25 months. At two time points participants completed the TV task over 5 days by recording audio of their conversations as they watched a favourite TV programme, with further testing to be completed after removal of the sensor installations. Behavioural testing is supported by neuropsychological assessment for deriving ground truths on cognitive status. Deep learning will be used to generate fused multimodal activity-speech embeddings for optimisation of diagnostic and predictive performance from speech alone. ETHICS AND DISSEMINATION: CUBOId was approved by an NHS Research Ethics Committee (Wales REC; ref: 18/WA/0158) and is sponsored by the University of Bristol. It is supported by the National Institute for Health Research Clinical Research Network West of England. Results will be reported at conferences and in peer-reviewed scientific journals.