1,583 research outputs found

    Affective Computing for Late-Life Mood and Cognitive Disorders

    Affective computing (also referred to as artificial emotion intelligence or emotion AI) is the study and development of systems and devices that can recognize, interpret, process, and simulate emotion or other affective phenomena. With the rapid growth in the aging population around the world, affective computing has immense potential to benefit the treatment and care of late-life mood and cognitive disorders. For late-life depression, affective computing, ranging from vocal biomarkers to facial expressions to social media behavioral analysis, can be used to address inadequacies of current screening and diagnostic approaches, mitigate loneliness and isolation, provide more personalized treatment approaches, and detect risk of suicide. Similarly, for Alzheimer's disease, eye movement analysis, vocal biomarkers, and driving behavior can provide objective biomarkers for early identification and monitoring, allow a more comprehensive understanding of daily life and disease fluctuations, and facilitate an understanding of behavioral and psychological symptoms such as agitation. To optimize the utility of affective computing while mitigating potential risks, ethical and responsible development of affective computing applications for late-life mood and cognitive disorders is needed.

    AI and Non AI Assessments for Dementia

    Current progress in the artificial intelligence domain has led to the development of various types of AI-powered dementia assessments, which can be employed to identify patients at the early stage of dementia and could revolutionize dementia care settings. It is essential that the medical community be aware of the various AI assessments and choose them considering their degrees of validity, efficiency, practicality, reliability, and accuracy for the early identification of patients with dementia (PwD). On the other hand, AI developers should be informed about the various non-AI assessments as well as recently developed AI assessments. Thus, this paper, written to be readable by both clinicians and AI engineers, fills a gap in the literature by explaining the existing solutions for the recognition of dementia to clinicians, and the techniques used and the most widespread dementia datasets to AI engineers. It reviews papers on AI and non-AI assessments for dementia to provide valuable information about various dementia assessments for both the AI and medical communities. The discussion and conclusion highlight the most prominent research directions and the maturity of existing solutions.

    Speech-Based Analysis of Major Depressive Disorder: Focusing on Acoustic Changes in Continuous Speech

    Doctoral dissertation -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Digital Information Convergence major), February 2023. Advisor: Kyogu Lee. Major depressive disorder (commonly referred to as depression) is a common disorder that affects 3.8% of the world's population. Depression stems from various causes, such as genetics, aging, social factors, and abnormalities in the neurotransmitter system; thus, early detection and monitoring are essential. The human voice is considered a representative biomarker for observing depression, and several studies have accordingly developed automatic depression diagnosis systems based on speech. However, constructing a speech corpus is challenging, most studies focus on adults under 60 years of age, and medical hypotheses grounded in psychiatrists' clinical findings are insufficient, which limits the evolution of these systems into medical diagnostic tools. Moreover, the effect of antipsychotic medication on speech characteristics during the treatment phase is overlooked. This thesis therefore studies a speech-based automatic depression diagnosis system at the semantic (sentence) level. First, to analyze depression among the elderly, whose emotional changes are not adequately reflected in speech characteristics, it developed mood-inducing sentences to build an elderly depression speech corpus, confirmed through sentence-level observation the effect of emotional sentence production and emotional transfer in the elderly depression group, and designed an automatic depression diagnosis system for the elderly. Second, it constructed an extrapyramidal symptom speech corpus to investigate extrapyramidal symptoms, a typical side effect of antipsychotic overdose, and found a strong correlation between antipsychotic dose and speech characteristics. The study paves the way for a comprehensive examination of automatic diagnosis systems for depression. The thesis is organized into six chapters: Introduction; Theoretical Background; Developing Sentences for a New Depressed Speech Corpus; Screening Depression in the Elderly; Correlation Analysis of Antipsychotic Dose and Speech Characteristics; and Conclusions and Future Work.
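
    The pipeline outlined in the abstract (sentence-level acoustic feature extraction, feature selection, classification) can be illustrated roughly as in the sketch below; the specific features (MFCC statistics, pitch, energy), the SVM classifier, and the placeholder names wav_paths / labels are assumptions for illustration, not the exact system described in the thesis.

```python
import numpy as np
import librosa
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def sentence_features(path, sr=16000):
    """Summarise one recorded sentence with a small acoustic feature vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral envelope
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)      # pitch contour
    f0 = f0[~np.isnan(f0)]
    pitch = [f0.mean(), f0.std()] if f0.size else [0.0, 0.0]
    rms = librosa.feature.rms(y=y)                            # loudness proxy
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           pitch, [rms.mean(), rms.std()]])

# Feature selection + classifier, mirroring the feature-selection and
# classification stages mentioned in the thesis outline.
screen = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC())

# wav_paths / labels are placeholders for the corpus recordings and
# clinician-assigned depression labels (not available here):
# X = np.vstack([sentence_features(p) for p in wav_paths])
# print(cross_val_score(screen, X, labels, cv=5).mean())
```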

    Machine Learning and Deep Learning Approaches for Brain Disease Diagnosis : Principles and Recent Advances

    This work was supported in part by the National Research Foundation of Korea Grant funded by the Korean Government (Ministry of Science and ICT) under Grant NRF 2020R1A2B5B02002478, and in part by Sejong University through its Faculty Research Program under Grant 20212023. Peer reviewed. Publisher PDF.

    Automatic Speech Classifier for Mild Cognitive Impairment and Early Dementia

    The World Health Organization estimates that 50 million people are currently living with dementia worldwide, and this figure will almost triple by 2050. Current pharmacological treatments are only symptomatic, and drugs or other therapies are ineffective in slowing down or curing the neurodegenerative process underlying dementia. Therefore, early detection of cognitive decline is of the utmost importance for responding promptly and delivering preventive interventions. Recently, researchers have shown that speech alterations might be one of the earliest signs of cognitive deficit, observable well in advance of other cognitive deficits becoming manifest. In this article, we propose a fully automated method able to classify subjects' audio recordings according to the progression level of the pathology. In particular, we trained a specific type of artificial neural network, called an autoencoder, using a visual representation of the subjects' audio signal, that is, the spectrogram. Moreover, we used a data augmentation approach to overcome the need for the large amount of annotated data usually required during the training phase, which represents one of the major obstacles in deep learning. We evaluated the proposed method using a dataset of 288 audio files from 96 subjects: 48 healthy controls and 48 cognitively impaired participants. The proposed method obtained good classification results compared to state-of-the-art neuropsychological screening tests and, with an accuracy of 90.57%, outperformed methods based on manual transcription and annotation of speech. Bertini, Flavio; Allevi, Davide; Lutero, Gianluca; Montesi, Danilo; Calzà, Laura.
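
    A rough sketch of the kind of spectrogram autoencoder described above is given below; the 128x128 input size, the layer dimensions, and the use of the latent code (or reconstruction error) for downstream classification are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SpecAutoencoder(nn.Module):
    """Convolutional autoencoder over log-spectrogram patches (1 x 128 x 128 assumed)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16 x 64 x 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32 x 32 x 32
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 32 * 32), nn.ReLU(),
            nn.Unflatten(1, (32, 32, 32)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SpecAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(8, 1, 128, 128)           # stand-in for a batch of spectrograms
recon, latent = model(batch)
loss = nn.functional.mse_loss(recon, batch)  # reconstruction objective
optimizer.zero_grad(); loss.backward(); optimizer.step()
# In a real setup, `latent` (or the reconstruction error) would feed a small
# classifier separating healthy controls from cognitively impaired speakers,
# and augmented spectrograms (e.g. time/frequency masking) would enlarge the
# training set, as the abstract describes.
```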

    Remote data collection speech analysis and prediction of the identification of Alzheimer’s disease biomarkers in people at risk for Alzheimer’s disease dementia: the Speech on the Phone Assessment (SPeAk) prospective observational study protocol

    Introduction: Identifying cost-effective, non-invasive biomarkers of Alzheimer's disease (AD) is a clinical and research priority. Speech data are easy to collect, and studies suggest they can identify those with AD. We do not know whether speech features can predict AD biomarkers in a preclinical population. Methods and analysis: The Speech on the Phone Assessment (SPeAk) study is a prospective observational study. SPeAk recruits participants aged 50 years and over who have previously completed studies with AD biomarker collection. Participants complete a baseline telephone assessment, including spontaneous speech and cognitive tests. A 3-month visit will repeat the cognitive tests with a conversational artificial intelligence bot. Participants complete acceptability questionnaires after each visit. Participants are randomised to receive their cognitive test results either after each visit or only after they have completed the study. We will combine SPeAk data with AD biomarker data collected in a previous study and analyse for correlations between extracted speech features and AD biomarkers. The outcome of this analysis will inform the development of an algorithm for predicting AD risk from speech features. Ethics and dissemination: This study has been approved by the Edinburgh Medical School Research Ethics Committee (REC reference 20-EMREC-007). All participants will provide informed consent before completing any study-related procedures and must have capacity to consent to participate in this study. Participants may find that the tests, or receiving their scores, cause anxiety or stress. Previous exposure to similar tests may make this more familiar and reduce this anxiety. The study information will include signposting in case of distress. Study results will be disseminated to study participants, presented at conferences, and published in a peer-reviewed journal. No study participants will be identifiable in the study results.
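
    The planned analysis, correlating extracted speech features with previously collected AD biomarkers, could look roughly like the sketch below; the feature names, biomarker names, and synthetic stand-in values are illustrative assumptions, not SPeAk data.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Stand-in table; in the study this would merge per-participant speech features
# extracted from the phone recordings with previously collected AD biomarkers.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "participant_id": range(40),
    "speech_rate": rng.normal(3.5, 0.5, 40),      # assumed feature (words/s)
    "pause_ratio": rng.uniform(0.1, 0.4, 40),     # assumed feature
    "amyloid_beta_42": rng.normal(800, 150, 40),  # assumed biomarker (pg/mL)
    "p_tau": rng.normal(20, 5, 40),               # assumed biomarker
})

speech_cols = ["speech_rate", "pause_ratio"]
biomarker_cols = ["amyloid_beta_42", "p_tau"]

rows = []
for f in speech_cols:
    for b in biomarker_cols:
        rho, p = spearmanr(df[f], df[b], nan_policy="omit")
        rows.append({"feature": f, "biomarker": b, "rho": round(rho, 3), "p": round(p, 3)})
print(pd.DataFrame(rows).sort_values("p"))
```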

    Alzheimer’s Dementia Recognition Through Spontaneous Speech


    Automatic Detection of Dementia and related Affective Disorders through Processing of Speech and Language

    In 2019, dementia became a trillion-dollar disorder. Alzheimer's disease (AD) is a type of dementia in which the main observable symptom is a decline in cognitive functions, notably memory, as well as language and problem-solving. Experts agree that early detection is crucial to effectively develop and apply interventions and treatments, underlining the need for effective and pervasive assessment and screening tools. The goal of this thesis is to explore how computational techniques can be used to process speech and language samples produced by patients suffering from dementia or related affective disorders, with the aim of automatically detecting them in large populations using machine learning models. A strong focus is laid on the detection of early-stage dementia (MCI), as most clinical trials today focus on intervention at this level. To this end, novel automatic and semi-automatic analysis schemes for a speech-based cognitive task, verbal fluency, are explored and evaluated as an appropriate screening task. Due to a lack of available patient data in most languages, world-first multilingual approaches to detecting dementia are introduced in this thesis; results are encouraging, and clear benefits become visible on a small French dataset. Lastly, the task of detecting those people with dementia who also suffer from an affective disorder called apathy is explored. Since they are more likely to convert to later stages of dementia faster, it is crucial to identify them. These are the first experiments that consider this task using solely speech and language as inputs. Results are again encouraging, both using only speech and using language data elicited with emotional questions. Overall, the strong results encourage further research into establishing speech-based biomarkers for early detection and monitoring of these disorders to better patients' lives.
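
    As an illustration of the kind of automatic verbal fluency analysis the thesis evaluates as a screening task, the sketch below scores a timed semantic fluency recording from an ASR / forced-alignment transcript; the toy animal lexicon, the 30-second bins, and the input format are assumptions for demonstration only.

```python
from collections import Counter

ANIMALS = {"dog", "cat", "horse", "lion", "tiger", "cow", "sheep", "elephant"}  # toy lexicon

def fluency_scores(tokens):
    """tokens: list of (word, start_time_seconds) pairs from ASR / forced alignment."""
    valid = [(w.lower(), t) for w, t in tokens if w.lower() in ANIMALS]
    counts = Counter(w for w, _ in valid)
    unique = len(counts)                              # distinct valid responses
    repetitions = sum(c - 1 for c in counts.values()) # repeated responses
    # productivity per 30-second bin, a common way to capture the early/late drop-off
    bins = Counter(int(t // 30) for _, t in valid)
    return {"unique": unique, "repetitions": repetitions,
            "per_bin": [bins.get(0, 0), bins.get(1, 0)]}

print(fluency_scores([("dog", 2.1), ("cat", 5.0), ("dog", 14.2), ("lion", 41.0)]))
# -> {'unique': 3, 'repetitions': 1, 'per_bin': [3, 1]}
```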