
    An avatar-based system for identifying individuals likely to develop dementia

    This paper presents work on developing an automatic dementia screening test based on patients' ability to interact and communicate, a highly cognitively demanding process in which early signs of dementia can often be detected. Such a test would help general practitioners with no specialist knowledge make better diagnostic decisions, as current tests lack specificity and sensitivity. We investigate the feasibility of basing the test on conversations between a 'talking head' (avatar) and a patient, and we present a system for analysing such conversations for signs of dementia in the patient's speech and language. Previously, we proposed a semi-automatic system that transcribed conversations between patients and neurologists and extracted conversation-analysis-style features in order to differentiate between patients with progressive neurodegenerative dementia (ND) and functional memory disorder (FMD); determining who talks when in the conversations was performed manually. In this study, we investigate a fully automatic system, including speaker diarisation and the use of additional acoustic and lexical features. Initial results from a pilot study show that the avatar conversations can be used to classify ND/FMD with around 91% accuracy, in line with previous results for conversations led by a neurologist.
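    The pipeline this abstract describes, diarised speaker turns feeding conversation-analysis-style timing features into a classifier, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the speaker labels, segment timings, and feature names are all assumptions.

    ```python
    # Sketch of turn-taking features over diarised segments (illustrative only).
    # A real pipeline would take (speaker, start, end) segments from a
    # speaker-diarisation system rather than hand-written values.

    def turn_features(segments):
        """segments: time-ordered list of (speaker, start_s, end_s).
        Returns simple timing features for the 'patient' speaker."""
        latencies, patient_turns = [], []
        total = {"patient": 0.0, "avatar": 0.0}
        prev_speaker, prev_end = None, None
        for speaker, start, end in segments:
            total[speaker] += end - start
            if speaker == "patient":
                patient_turns.append(end - start)
                if prev_speaker == "avatar":
                    # pause between the avatar's question and the answer
                    latencies.append(start - prev_end)
            prev_speaker, prev_end = speaker, end
        return {
            "mean_response_latency": sum(latencies) / len(latencies),
            "mean_patient_turn_len": sum(patient_turns) / len(patient_turns),
            "patient_talk_ratio": total["patient"] / sum(total.values()),
        }

    # Toy diarised conversation: avatar question, patient answer, repeated.
    segs = [
        ("avatar", 0.0, 3.0),
        ("patient", 4.5, 6.0),   # 1.5 s latency, 1.5 s turn
        ("avatar", 6.5, 9.0),
        ("patient", 9.5, 14.0),  # 0.5 s latency, 4.5 s turn
    ]
    print(turn_features(segs)["mean_response_latency"])  # 1.0
    ```

    Features like these would then be fed to an ordinary binary classifier; the abstract's point is that once diarisation is automatic, the whole chain runs without manual turn annotation.
    
    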

    Detecting Alzheimer's Disease by estimating attention and elicitation path through the alignment of spoken picture descriptions with the picture prompt.

    Cognitive decline is a sign of Alzheimer's disease (AD), and there is evidence that tracking a person's eye movement, using eye tracking devices, can be used for the automatic identification of early signs of cognitive decline. However, such devices are expensive and may not be easy to use for people with cognitive problems. In this paper, we present a new way of capturing similar visual features, by using the speech of people describing the Cookie Theft picture, a common cognitive testing task, to identify regions in the picture prompt that will have caught the speaker's attention and elicited their speech. After aligning the automatically recognised words with different regions of the picture prompt, we extract information inspired by eye tracking metrics, such as the coordinates of areas of interest (AOIs), time spent in an AOI, time to reach an AOI, and the number of AOI visits. Using the DementiaBank dataset, we train a binary classifier (AD vs. healthy control) using 10-fold cross-validation and achieve an 80% F1-score using the timing information from the forced alignments of the automatic speech recogniser (ASR); the same approach achieved around 72% using the timing information from the ASR outputs directly.
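    The core idea, deriving eye-tracking-style metrics (time to reach an AOI, dwell time, visit counts) from word-level alignment timings, can be sketched as below. The word-to-region lexicon, function names, and timings are all hypothetical; the paper's actual alignment of words to picture regions is not reproduced here.

    ```python
    # Sketch: eye-tracking-inspired AOI metrics from forced-alignment word
    # timings (illustrative only). Each recognised word is mapped to a
    # region (AOI) of the Cookie Theft picture.

    WORD_TO_AOI = {  # toy lexicon; a real mapping would cover every region
        "boy": "stool", "stool": "stool",
        "cookie": "jar", "jar": "jar",
        "water": "sink", "sink": "sink",
    }

    def aoi_metrics(aligned_words):
        """aligned_words: list of (word, start_s, end_s) from forced alignment.
        Returns per-AOI time of first mention, dwell time, and visit count."""
        metrics = {}
        last_aoi = None
        for word, start, end in aligned_words:
            aoi = WORD_TO_AOI.get(word)
            if aoi is None:
                continue  # word does not refer to any picture region
            m = metrics.setdefault(aoi, {"first": start, "dwell": 0.0, "visits": 0})
            m["dwell"] += end - start
            if aoi != last_aoi:
                m["visits"] += 1  # a new 'visit' when attention shifts region
            last_aoi = aoi
        return metrics

    words = [("the", 0.0, 0.2), ("boy", 0.2, 0.6), ("takes", 0.6, 1.0),
             ("a", 1.0, 1.1), ("cookie", 1.1, 1.6), ("from", 1.6, 1.8),
             ("the", 1.8, 1.9), ("jar", 1.9, 2.4), ("water", 3.0, 3.5)]
    m = aoi_metrics(words)
    print(m["jar"]["visits"])  # 1: "cookie" and "jar" form one continuous visit
    ```

    Per-AOI values like these can be flattened into a fixed-length feature vector for the AD vs. healthy-control classifier; the abstract's comparison is between computing them from forced-alignment timings versus raw ASR output timings.
    
    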

    Developing an intelligent virtual agent to stratify people with cognitive complaints: A comparison of human-patient and intelligent virtual agent-patient interaction

    Previous work on interactions in the memory clinic has shown that conversation analysis can be used to differentiate neurodegenerative dementia from functional memory disorder. Based on this work, a screening system was developed that uses a computerised 'talking head' (intelligent virtual agent) and a combination of automatic speech recognition and conversation-analysis-informed programming. This system can reliably differentiate patients with functional memory disorder from those with neurodegenerative dementia by analysing the way they respond to questions from either a human doctor or the intelligent virtual agent. However, much of this computerised analysis has relied on simplistic, nonlinguistic phonetic features, such as the length of pauses between talk by the two parties. To gain confidence in automation of the stratification procedure, this paper investigates whether the patients' responses to questions asked by the intelligent virtual agent are qualitatively similar to those given in response to a doctor. All the participants in this study have a clear functional memory disorder or neurodegenerative dementia diagnosis. Analyses of patients' responses to the intelligent virtual agent showed similar, diagnostically relevant sequential features to those found in responses to doctors' questions. However, since the intelligent virtual agent's questions are invariant, its use results in more consistent responses across people, regardless of diagnosis, which facilitates automatic speech recognition and makes it easier for a machine to learn patterns. Our analysis also shows why doctors do not always ask the same question in exactly the same way to different patients. This sensitivity and adaptation to the nuances of conversation may be interactionally helpful; for instance, altering a question may make it easier for patients to understand.
    While we demonstrate that some of what is said in such interactions is bound to be constructed collaboratively between doctor and patient, doctors could consider ensuring that certain, particularly important and/or relevant questions are asked in as invariant a form as possible, to be better able to identify diagnostically relevant differences in patients' responses.