    Multi-head attention-based long short-term memory for depression detection from speech.

    Depression is a mental disorder that threatens the health and normal life of people. Hence, it is essential to provide an effective way to detect depression. However, research on depression detection has mainly focused on utilizing parallel features from audio, video, and text for performance enhancement, rather than making full use of the information inherent in speech. To focus on the more emotionally salient regions of depressed speech, we propose a multi-head time-dimension attention-based long short-term memory (LSTM) model. We first extract frame-level features to preserve the original temporal relationships of a speech sequence and then analyze how these features differ between depressed and healthy speech. We then study the performance of various features and use a modified feature set as the input of the LSTM layer. Instead of using the output of a traditional LSTM directly, multi-head time-dimension attention is employed to extract the time information most relevant to depression detection by projecting the output into different subspaces. The experimental results show that the proposed model improves on the LSTM baseline by 2.3% and 10.3% on the Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) and the Multi-modal Open Dataset for Mental-disorder Analysis (MODMA) corpus, respectively.
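The multi-head time-dimension attention described in this abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the learned-query pooling, random projections, and head count are assumptions made for the sake of a self-contained example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_time_attention(H, num_heads=4, seed=0):
    """Pool LSTM outputs H of shape (T, d) over the time dimension
    with multi-head scaled dot-product attention, so that each head
    attends to time steps in its own subspace of size d // num_heads."""
    T, d = H.shape
    dk = d // num_heads
    rng = np.random.default_rng(seed)
    heads = []
    for _ in range(num_heads):
        # Random stand-ins for learned projection matrices.
        Wq, Wk, Wv = (rng.standard_normal((d, dk)) / np.sqrt(d) for _ in range(3))
        q = (H @ Wq).mean(axis=0, keepdims=True)   # (1, dk) summary query
        K, V = H @ Wk, H @ Wv                      # (T, dk) keys and values
        weights = softmax(q @ K.T / np.sqrt(dk))   # (1, T) attention over time
        heads.append(weights @ V)                  # (1, dk) context per head
    return np.concatenate(heads, axis=1).ravel()   # (d,) pooled utterance vector
```

The pooled vector would then feed a classification layer; in the paper the projections are trained jointly with the LSTM rather than drawn at random.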

    Automatic Detection of Depression in Speech Using Ensemble Convolutional Neural Networks

    This paper proposes a speech-based method for automatic depression classification. The system is based on ensemble learning for Convolutional Neural Networks (CNNs) and is evaluated using the data and experimental protocol provided in the Depression Classification Sub-Challenge (DCC) at the 2016 Audio–Visual Emotion Challenge (AVEC-2016). In the pre-processing phase, speech files are represented as sequences of log-spectrograms and randomly sampled to balance positive and negative samples. For the classification task itself, an architecture based on one-dimensional CNNs is first built for this task. Then, several of these CNN-based models are trained with different initializations, and their individual predictions are fused using an ensemble averaging algorithm and combined per speaker to obtain a final decision. The proposed ensemble system achieves satisfactory results on the DCC at AVEC-2016 in comparison with a reference system based on Support Vector Machines and hand-crafted features, with a CNN+LSTM-based system called DepAudioNet, and with a single CNN-based classifier. This research was partly funded by Spanish Government grant TEC2017-84395-P.
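The two fusion steps in this abstract (averaging across model initializations, then combining segment scores per speaker) can be sketched as below. The mean-then-threshold rule is a plausible reading of "ensemble averaging"; the paper's exact decision rule may differ.

```python
import numpy as np

def ensemble_speaker_decision(probs_per_model, speaker_ids, threshold=0.5):
    """probs_per_model: (M, N) array of per-segment depression
    probabilities from M independently initialized CNNs.
    speaker_ids: length-N sequence mapping each segment to a speaker.
    Returns a {speaker: 0 or 1} decision dictionary."""
    avg = np.asarray(probs_per_model).mean(axis=0)          # fuse the M models
    decisions = {}
    for spk in set(speaker_ids):
        mask = [s == spk for s in speaker_ids]
        decisions[spk] = int(avg[mask].mean() > threshold)  # fuse per speaker
    return decisions
```

Averaging before thresholding lets segments with confident scores outweigh ambiguous ones, which is the usual motivation for per-speaker fusion.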

    Classifying Dementia in the Presence of Depression: A Cross-Corpus Study

    Automated dementia screening enables early detection and intervention, reducing costs to healthcare systems and increasing quality of life for those affected. Depression shares symptoms with dementia, adding complexity to diagnosis. The research focus so far has been on binary classification of dementia (DEM) versus healthy controls (HC) using speech from picture description tests from a single dataset. In this work, we apply established baseline systems to discriminate cognitive impairment in speech from the semantic Verbal Fluency Test and the Boston Naming Test, using text, audio, and emotion embeddings in a 3-class classification problem (HC vs. MCI vs. DEM). We perform cross-corpus and mixed-corpus experiments on two independently recorded German datasets to investigate generalization to larger populations and different recording conditions. In a detailed error analysis, we look at depression as a secondary diagnosis to understand what our classifiers actually learn. Comment: Accepted at INTERSPEECH 202
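The cross-corpus protocol in this abstract (train on one dataset, test on the other, in both directions) can be sketched as below. The nearest-centroid classifier is a deliberately simple stand-in for the paper's baseline systems, and the corpus names are placeholders.

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean embedding; a stand-in for a trained baseline."""
    return {c: X[np.asarray(y) == c].mean(axis=0) for c in set(y)}

def predict(centroids, X):
    labels = list(centroids)
    C = np.stack([centroids[c] for c in labels])
    dists = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return [labels[i] for i in dists.argmin(axis=1)]

def cross_corpus_eval(corpus_a, corpus_b):
    """Train on corpus A and test on B, then the reverse.
    Each corpus is a (features, labels) pair with 3-class labels
    such as HC / MCI / DEM."""
    results = {}
    for (tr_name, (Xtr, ytr)), (te_name, (Xte, yte)) in [
            (("A", corpus_a), ("B", corpus_b)),
            (("B", corpus_b), ("A", corpus_a))]:
        pred = predict(fit_centroids(Xtr, ytr), Xte)
        acc = float(np.mean([p == t for p, t in zip(pred, yte)]))
        results[f"train{tr_name}->test{te_name}"] = acc
    return results
```

A drop in accuracy between the mixed-corpus and cross-corpus settings is what would indicate poor generalization across recording conditions.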

    Towards an artificial therapy assistant: Measuring excessive stress from speech

    The measurement of (excessive) stress is still a challenging endeavor. Most tools rely on either introspection or expert opinion and are therefore often less reliable or a burden on the patient. An objective method could relieve these problems and, consequently, assist diagnostics. Speech is considered an excellent candidate for an objective, unobtrusive measure of emotion. True stress was successfully induced using two storytelling sessions performed by 25 patients suffering from a stress disorder. While reading either a happy or a sad story, patients reported their stress levels using the Subjective Unit of Distress (SUD). A linear regression model consisting of the high-frequency energy, pitch, and zero crossings of the speech signal was able to explain 70% of the variance in the subjectively reported stress. The results demonstrate the feasibility of objectively measuring stress from speech. As such, the foundation is laid for an Artificial Therapeutic Agent capable of assisting therapists through an objective measurement of experienced stress.
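The linear model in this abstract can be sketched as below: a zero-crossing feature extracted from speech frames and an ordinary least-squares fit of SUD scores on the three acoustic features. The exact feature extraction and regression setup of the study are not reproduced here; this is only a sketch of the modeling idea.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent samples in a speech frame whose sign changes."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def fit_sud_model(features, sud):
    """Least-squares fit of SUD scores on acoustic features
    (columns: high-frequency energy, pitch, zero-crossing rate)."""
    X = np.column_stack([features, np.ones(len(features))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, sud, rcond=None)
    return coef

def r_squared(features, sud, coef):
    """Fraction of variance in the SUD scores explained by the model;
    the study reports a value of about 0.70."""
    X = np.column_stack([features, np.ones(len(features))])
    resid = np.asarray(sud) - X @ coef
    return 1.0 - resid.var() / np.asarray(sud).var()
```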