1,849 research outputs found

    Detecting Alzheimer's Disease Using Interactional and Acoustic Features from Spontaneous Speech

    Alzheimer’s Disease (AD) is a form of dementia that manifests as cognitive decline, including impairments of memory and language and changes in behaviour. Speech data have proven valuable for inferring cognitive status, are used in many health assessment tasks, and can be easily elicited in natural settings. Much work focuses on analysis using linguistic features; here, we focus on non-linguistic features and their use in distinguishing AD patients from similar-age non-AD patients with other health conditions in the Carolinas Conversation Collection (CCC) dataset. We used two types of features: patterns of interaction, including pausing behaviour and floor control, and acoustic features, including pitch, amplitude, energy, and cepstral coefficients. Fusion of the two kinds of features, combined with feature selection, obtains very promising results: classification accuracy of 90% using standard models such as support vector machines and logistic regression. We also obtain promising results using interactional features alone (87% accuracy), which can be easily extracted from natural conversations in daily life and thus have the potential for future implementation as a noninvasive method for AD diagnosis and monitoring.
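    A minimal sketch of the kind of setup this abstract describes, assuming scikit-learn and synthetic data: interactional and acoustic feature blocks are concatenated (early fusion), a univariate filter performs feature selection, and a linear SVM is scored with cross-validation. Feature names, dimensions and the number of selected features are illustrative assumptions, not details from the paper.

```python
# Sketch: feature-level fusion + feature selection + SVM (synthetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 80                                        # hypothetical number of speakers
X_interactional = rng.normal(size=(n, 20))    # e.g. pause counts, floor-control stats
X_acoustic = rng.normal(size=(n, 60))         # e.g. pitch, energy, cepstral summaries
y = rng.integers(0, 2, size=n)                # AD vs non-AD labels

X_fused = np.hstack([X_interactional, X_acoustic])   # early (feature-level) fusion

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=30),             # keep the 30 most discriminative features
    SVC(kernel="linear", C=1.0),
)
print(cross_val_score(clf, X_fused, y, cv=5).mean())
```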

    Alzheimer’s Dementia Recognition Through Spontaneous Speech


    Automatic Detection of Dementia and related Affective Disorders through Processing of Speech and Language

    In 2019, dementia became a trillion-dollar disorder. Alzheimer’s disease (AD) is a type of dementia in which the main observable symptom is a decline in cognitive functions, notably memory, as well as language and problem-solving. Experts agree that early detection is crucial to effectively develop and apply interventions and treatments, underlining the need for effective and pervasive assessment and screening tools. The goal of this thesis is to explore how computational techniques can be used to process speech and language samples produced by patients suffering from dementia or related affective disorders, with the aim of automatically detecting them in large populations using machine learning models. A strong focus is laid on the detection of early-stage dementia (MCI), as most clinical trials today focus on intervention at this level. To this end, novel automatic and semi-automatic analysis schemes for a speech-based cognitive task, i.e., verbal fluency, are explored and evaluated as an appropriate screening task. Due to a lack of available patient data in most languages, world-first multilingual approaches to detecting dementia are introduced in this thesis. Results are encouraging, and clear benefits become visible on a small French dataset. Lastly, the task of detecting those people with dementia who also suffer from an affective disorder called apathy is explored. Since they are more likely to convert to a later stage of dementia faster, it is crucial to identify them. These are the first experiments that consider this task using solely speech and language as inputs. Results are again encouraging, both when using speech alone and when using language data elicited with emotional questions. Overall, strong results encourage further research in establishing speech-based biomarkers for early detection and monitoring of these disorders to better patients’ lives.
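    As an illustration of the kind of automatic verbal fluency scoring the thesis explores, the sketch below computes simple fluency metrics (unique correct productions, repetitions, mean inter-word gap) from hypothetical (word, onset-time) pairs; the inputs and metric choices are assumptions, not the thesis's scheme.

```python
# Sketch: simple verbal fluency metrics from timed word productions.
from typing import List, Tuple

def fluency_metrics(productions: List[Tuple[str, float]]) -> dict:
    words = [w.lower() for w, _ in productions]
    onsets = [t for _, t in productions]
    unique = []
    repetitions = 0
    for w in words:
        if w in unique:
            repetitions += 1           # same word produced again
        else:
            unique.append(w)
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    return {
        "n_correct": len(unique),
        "n_repetitions": repetitions,
        "mean_inter_word_gap_s": sum(gaps) / len(gaps) if gaps else 0.0,
    }

# Invented example: animal-fluency productions with onset times in seconds.
print(fluency_metrics([("cat", 1.2), ("dog", 2.0), ("horse", 4.8), ("dog", 9.5)]))
```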

    Linguistic- and Acoustic-based Automatic Dementia Detection using Deep Learning Methods

    Dementia can affect a person's speech and language abilities, even in the early stages. Dementia is incurable, but early detection can enable treatment that can slow cognitive decline and maintain mental function. Therefore, early diagnosis of dementia is of great importance. However, current dementia detection procedures in clinical practice are expensive, invasive, and sometimes inaccurate. In comparison, computational tools based on the automatic analysis of spoken language have the potential to be applied as a cheap, easy-to-use, and objective clinical assistance tool for dementia detection. In recent years, several studies have shown promise in this area. However, most studies focus heavily on the machine learning aspects and, as a consequence, often lack sufficient incorporation of clinical knowledge. Many studies also concentrate on clinically less relevant tasks such as the distinction between healthy controls (HC) and people with AD, which is relatively easy and therefore less interesting both in terms of the machine learning and the clinical application. The studies in this thesis concentrate on automatically identifying signs of neurodegenerative dementia in the early stages and distinguishing them from other clinical diagnostic categories related to memory problems: functional memory disorder (FMD), mild cognitive impairment (MCI), and healthy controls (HC). A key focus when designing the proposed systems has been to better consider (and incorporate) currently used clinical knowledge and to bear in mind how these machine-learning-based systems could be translated for use in real clinical settings. Firstly, a state-of-the-art end-to-end system is constructed for extracting linguistic information from automatically transcribed spontaneous speech. The system's architecture is based on hierarchical principles, thereby mimicking those used in clinical practice, where information at word, sentence and paragraph level is used when extracting information for diagnosis. Secondly, hand-crafted features are designed that are based on clinical knowledge of the importance of pausing and rhythm. These are successfully combined with features extracted from the end-to-end system. Thirdly, different classification tasks are explored, each set up to represent the types of diagnostic decision-making relevant in clinical practice. Finally, experiments are conducted to explore how to better deal with the known problem of confounding and overlapping effects of age and cognitive decline on speech and language. A multi-task system is constructed that takes age into account while predicting cognitive decline. The studies use the publicly available DementiaBank dataset as well as the IVA dataset, which has been collected by our collaborators at the Royal Hallamshire Hospital, UK. In conclusion, this thesis proposes multiple methods of using speech and language information for dementia detection with state-of-the-art deep learning technologies, confirming the automatic system's potential for dementia detection.
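    The following sketch illustrates the multi-task idea mentioned above, predicting cognitive status while jointly regressing age from a shared representation; it is not the thesis's architecture, and the feature dimension, layer sizes and loss weighting are assumed for illustration.

```python
# Sketch: multi-task model that predicts cognitive status and age jointly,
# so that age-related variation in the shared representation is modelled explicitly.
import torch
import torch.nn as nn

class MultiTaskDementiaModel(nn.Module):
    def __init__(self, n_features: int = 128, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        # Shared encoder over utterance-level features.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(0.3)
        )
        self.cognitive_head = nn.Linear(hidden, n_classes)  # e.g. ND vs FMD/HC
        self.age_head = nn.Linear(hidden, 1)                # auxiliary age regression

    def forward(self, x):
        h = self.encoder(x)
        return self.cognitive_head(h), self.age_head(h).squeeze(-1)

def multitask_loss(logits, age_pred, labels, ages, age_weight=0.3):
    # Classification loss plus a down-weighted age-regression loss.
    # labels: class indices (LongTensor); ages: float tensor of chronological ages.
    cls = nn.functional.cross_entropy(logits, labels)
    reg = nn.functional.mse_loss(age_pred, ages)
    return cls + age_weight * reg
```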

    Predictive cognition in dementia: the case of music

    The clinical complexity and pathological diversity of neurodegenerative diseases impose immense challenges for diagnosis and the design of rational interventions. To address these challenges, there is a need to identify new paradigms and biomarkers that capture shared pathophysiological processes and can be applied across a range of diseases. One core paradigm of brain function is predictive coding: the processes by which the brain establishes predictions and uses them to minimise prediction errors, represented as the difference between predictions and actual sensory inputs. The processes involved in handling unexpected events and responding appropriately are vulnerable in common dementias but difficult to characterise. In my PhD work, I have exploited key properties of music – its universality, ecological relevance and structural regularity – to model and assess predictive cognition in patients representing major syndromes of frontotemporal dementia – non-fluent variant primary progressive aphasia (nfvPPA), semantic variant primary progressive aphasia (svPPA) and behavioural-variant FTD (bvFTD) – and Alzheimer’s disease, relative to healthy older individuals. In my first experiment, I presented patients with well-known melodies containing no deviants or one of three types of deviant: acoustic (white-noise burst), syntactic (key-violating pitch change) or semantic (key-preserving pitch change). I assessed accuracy in detecting melodic deviants, together with simultaneously recorded pupillary responses to these deviants. I used voxel-based morphometry to define neuroanatomical substrates for the behavioural and autonomic processing of these different types of deviants, and identified a posterior temporo-parietal network for detection of basic acoustic deviants and a more anterior fronto-temporo-striatal network for detection of syntactic pitch deviants. In my second chapter, I investigated the ability of patients to track the statistical structure of the same musical stimuli, using a computational model of the information dynamics of music to calculate the information content of deviants (unexpectedness) and the entropy of melodies (uncertainty). I related these information-theoretic metrics to performance in detecting deviants and to ‘evoked’ and ‘integrative’ pupil reactivity to deviants and melodies respectively, and found neuroanatomical correlates in bilateral dorsal and ventral striatum, hippocampus, superior temporal gyri, right temporal pole and left inferior frontal gyrus. Together, chapters 3 and 4 revealed new hypotheses about the way FTD and AD pathologies disrupt the integration of prediction errors with predictions: a retained ability of AD patients to detect deviants at all levels of the hierarchy, with preserved autonomic sensitivity to the information-theoretic properties of musical stimuli; a generalised impairment of surprise detection and statistical tracking of musical information at both the cognitive and autonomic levels in svPPA patients, reflecting a diminished precision of predictions; the exact mirror profile in nfvPPA patients, with an abnormally high rate of false alarms and up-regulated pupillary reactivity to deviants, interpreted as over-precise or inflexible predictions accompanied by normal cognitive and autonomic probabilistic tracking of information; and impaired behavioural and autonomic reactivity to unexpected events with retained reactivity to environmental uncertainty in bvFTD patients.
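    For reference, the two information-theoretic quantities used here can be written down directly: the information content of an observed event is its surprisal under the model's predictive distribution, and the entropy of that distribution measures the melody's momentary uncertainty. The toy sketch below is in the spirit of IDyOM-style information dynamics, not its implementation, and uses an invented distribution over the next note.

```python
# Sketch: surprisal (information content) and entropy of a next-note prediction.
import math

def information_content(p_next: dict, event: str) -> float:
    """Surprisal of the observed event in bits: IC = -log2 P(event)."""
    return -math.log2(p_next[event])

def entropy(p_next: dict) -> float:
    """Uncertainty of the prediction in bits: H = -sum p * log2 p."""
    return -sum(p * math.log2(p) for p in p_next.values() if p > 0)

# Hypothetical predictive distribution over the next note.
p_next = {"tonic": 0.6, "third": 0.25, "out_of_key": 0.15}
print(information_content(p_next, "out_of_key"))  # a surprising (deviant) note
print(entropy(p_next))
```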
Chapters 5 and 6 assessed the status of reward prediction error processing and updating via actions in bvFTD. I created pleasant and aversive musical stimuli by manipulating chord progressions and used a classic reinforcement-learning paradigm in which participants were asked to choose the visual cue with the highest probability of obtaining a musical ‘reward’. bvFTD patients showed reduced sensitivity to the consequences of an action and a lower learning rate in response to aversive stimuli compared with reward. These results correlated with neuroanatomical substrates in ventral and dorsal attention networks, dorsal striatum, parahippocampal gyrus and temporo-parietal junction. Deficits were governed by the level of environmental uncertainty, with normal learning dynamics in a structured and binarized environment but exacerbated deficits in noisier environments. Impaired choice accuracy in noisy environments correlated with measures of ritualistic and compulsive behavioural changes, and abnormally reduced learning dynamics correlated with behavioural changes related to empathy and theory of mind. Together, these experiments represent the most comprehensive attempt to date to define the way neurodegenerative pathologies disrupt the perceptual, behavioural and physiological encoding of unexpected events in predictive coding terms.
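    A compact sketch of the class of reinforcement-learning model implied by these results: a delta-rule learner with separate learning rates for rewarding and aversive outcomes, so that a lower aversive learning rate can be expressed directly. Parameter values and the trial sequence are illustrative, not fitted to patient data.

```python
# Sketch: delta-rule value learning with asymmetric learning rates.
def update_values(values, choice, outcome, lr_reward=0.3, lr_aversive=0.1):
    """One trial: the prediction error drives the value update for the chosen cue."""
    prediction_error = outcome - values[choice]
    lr = lr_reward if outcome >= 0 else lr_aversive   # asymmetric learning rates
    values[choice] += lr * prediction_error
    return values, prediction_error

values = {"cue_A": 0.0, "cue_B": 0.0}
trials = [("cue_A", +1.0), ("cue_A", -1.0), ("cue_B", +1.0)]  # (choice, outcome)
for choice, outcome in trials:
    values, pe = update_values(values, choice, outcome)
    print(choice, round(values[choice], 3), round(pe, 3))
```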

    Detecting early signs of dementia in conversation

    Dementia can affect a person's speech, language and conversational interaction capabilities. The early diagnosis of dementia is of great clinical importance. Recent studies using the qualitative methodology of Conversation Analysis (CA) demonstrated that communication problems may be picked up during conversations between patients and neurologists, and that this can be used to differentiate between patients with neurodegenerative disorders (ND) and those with non-progressive functional memory disorder (FMD). However, conducting manual CA is expensive and difficult to scale up for routine clinical use. This study introduces an automatic approach for processing such conversations which can help in identifying the early signs of dementia and distinguishing them from the other clinical categories: FMD, mild cognitive impairment (MCI), and healthy controls (HC). The dementia detection system starts with a speaker diarisation module to segment an input audio file (determining who talks when). The segmented files are then passed to an automatic speech recogniser (ASR) to transcribe the utterances of each speaker. Next, the feature extraction unit extracts a number of features (CA-inspired, acoustic, lexical and word-vector) from the transcripts and audio files. Finally, a classifier is trained on the features to determine the clinical category of the input conversation. Moreover, we investigate replacing the role of a neurologist in the conversation with an Intelligent Virtual Agent (IVA) asking similar questions. We show that despite differences between the IVA-led and the neurologist-led conversations, the results achieved by the IVA are as good as those gained by the neurologists. Furthermore, the IVA can be used for administering more standard cognitive tests, such as verbal fluency tests, and to produce automatic scores, which can then boost the performance of the classifier. The final blind evaluation of the system shows that the classifier can identify early signs of dementia with an acceptable level of accuracy and robustness (considering both sensitivity and specificity).
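    A structural sketch of the described pipeline (diarisation, ASR, feature extraction, classification). The stage functions below are hypothetical placeholders standing in for whichever diariser, recogniser and feature set a concrete system would use; only the way the stages compose is taken from the abstract.

```python
# Sketch: how the pipeline stages compose; each stage is a hypothetical placeholder.
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    speaker: str      # e.g. "patient" or "interviewer"/"IVA"
    start: float
    end: float
    transcript: str = ""

def diarise(audio_path: str) -> List[Turn]:
    """Hypothetical: segment audio into who-spoke-when turns."""
    raise NotImplementedError

def transcribe(audio_path: str, turns: List[Turn]) -> List[Turn]:
    """Hypothetical: run ASR on each turn and fill in transcripts."""
    raise NotImplementedError

def extract_features(turns: List[Turn]) -> List[float]:
    """Hypothetical: CA-inspired, acoustic, lexical and word-vector features."""
    raise NotImplementedError

def classify(features: List[float]) -> str:
    """Hypothetical: trained classifier over {ND, FMD, MCI, HC}."""
    raise NotImplementedError

def predict_category(audio_path: str) -> str:
    turns = transcribe(audio_path, diarise(audio_path))
    return classify(extract_features(turns))
```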

    Voice analysis for neurological disorder recognition – a systematic review and perspective on emerging trends

    Quantifying neurological disorders from voice is a rapidly growing field of research and holds promise for unobtrusive and large-scale disorder monitoring. The data recording setup and data analysis pipelines are both crucial for effectively obtaining relevant information from participants. Therefore, we performed a systematic review to provide a high-level overview of practices across various neurological disorders and highlight emerging trends. PRISMA-based literature searches were conducted through PubMed, Web of Science, and IEEE Xplore to identify publications in which original (i.e., newly recorded) datasets were collected. Disorders of interest were psychiatric as well as neurodegenerative disorders, such as bipolar disorder, depression, and stress, as well as amyotrophic lateral sclerosis, Alzheimer's disease, and Parkinson's disease, and speech impairments (aphasia, dysarthria, and dysphonia). Of the 43 retrieved studies, Parkinson's disease is represented most prominently, with 19 discovered datasets. Free speech and read speech tasks are most commonly used across disorders. Besides popular feature extraction toolkits, many studies utilise custom-built feature sets. Correlations of acoustic features with psychiatric and neurodegenerative disorders are presented. In terms of analysis, statistical testing of individual features for significance is commonly used, as are predictive modelling approaches, especially with support vector machines and a small number of artificial neural networks. An emerging trend and recommendation for future studies is to collect data in everyday life to facilitate longitudinal data collection and to capture the behaviour of participants more naturally. Another emerging trend is to record additional modalities alongside voice, which can potentially increase analytical performance.
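    The two analysis styles the review reports as most common can be sketched briefly: per-feature significance testing between groups and cross-validated predictive modelling with an SVM. The data below are synthetic placeholders; in practice the feature matrix would hold acoustic descriptors (e.g. eGeMAPS-style summaries) and the labels the diagnostic groups.

```python
# Sketch: per-feature significance tests plus an SVM baseline (synthetic data).
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))        # 60 speakers, 10 acoustic features
y = rng.integers(0, 2, size=60)      # patient vs control

# Statistical analysis: test each feature separately between the two groups.
for j in range(X.shape[1]):
    stat, p = mannwhitneyu(X[y == 0, j], X[y == 1, j])
    print(f"feature {j}: U={stat:.1f}, p={p:.3f}")

# Predictive modelling: cross-validated SVM over the full feature set.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```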