12 research outputs found

    Mental Depression Deduction Using Modified Regression Model to Prevent Suicidal Attempt

    This study proposes an association-based multilevel linear regression approach that predicts the prevalence of depression from mental-health data, with the aim of supporting early intervention against suicide attempts. Several statistical methods were evaluated alongside it: Linear Regression (LR), Multilevel Linear Regression (MLR), Naïve Bayes, and Decision Tree (DT). The predictions of these baseline algorithms vary significantly, particularly in accuracy, and their performance degrades when predicting mental depression from the available characteristics. To address this, the mental-health data were fed into a trained model, and the prediction accuracy of the association-based multilevel linear regression technique was evaluated against the other statistical methods. Compared with the traditional methods, the proposed technique achieves a substantially higher accuracy of almost 99%.
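The kind of baseline comparison the abstract describes can be sketched with off-the-shelf classifiers. The data below are synthetic stand-ins, not the study's, and the paper's association-based multilevel linear regression is not reproduced here; this only illustrates comparing classifier accuracy.

```python
# Minimal sketch: comparing baseline classifiers by accuracy on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for mental-health screening features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
# Fit each model and score it on the held-out split.
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

Reporting accuracy on a held-out split, as here, is the usual way such method comparisons are made.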

    Ruh sağlığı hastalıkları tanısında LIWC ve makine öğrenimi yaklaşımlarının incelenmesi

    Machine learning (ML) methods are becoming increasingly popular in data analysis, and in mental healthcare they support the diagnosis of mental disorders. Pennebaker developed a dictionary-based text analysis program, Linguistic Inquiry and Word Count (LIWC), which is also used in mental health diagnosis. In this study, ML and LIWC research in the field of mental disorder diagnosis was examined. The objective is to examine how combining ML and LIWC can detect mental disorders, with a focus on comparative research. For this purpose, relevant publications in Google Scholar, SAGE Journals, Web of Science, Scopus, EBSCO, and PubMed were examined, and studies utilizing both methods in mental health diagnosis were reviewed to establish an overview of the literature. A table summarizing 15 articles on integrating ML and LIWC for mental disorder identification was compiled. Subsequently, the working principles of ML and LIWC were examined, and research conducted in the field of mental disorder diagnosis was reviewed. Further research, particularly studies integrating or comparing these two methods, is needed to better understand ML and LIWC in mental disorder detection.
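The core mechanism of a dictionary-based program like LIWC is counting how many words in a text fall into predefined categories and reporting the proportions. The categories and word lists below are invented for illustration; they are not LIWC's proprietary dictionaries.

```python
# Minimal sketch of dictionary-based text analysis in the spirit of LIWC:
# count words falling into hand-built categories and report proportions.
import re

# Illustrative categories only; LIWC's actual dictionaries are far larger.
CATEGORIES = {
    "negative_emotion": {"sad", "hopeless", "worthless", "tired"},
    "first_person": {"i", "me", "my", "myself"},
}

def category_proportions(text):
    """Return, per category, the fraction of tokens that match its word list."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

props = category_proportions("I feel sad and tired, and I blame myself.")
```

Such per-category proportions are the features that the reviewed studies then feed into ML classifiers.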

    Plausibility of a Neural Network Classifier-Based Neuroprosthesis for Depression Detection via Laughter Records

    The present work explores the diagnostic performance for depression of neural network classifiers analyzing the sound structure of laughter recorded from clinical patients and healthy controls. The main methodological novelty is that simple sound variables of laughter are used as inputs, instead of the electrophysiological signals, local field potentials (LFPs), or spoken-language utterances that are the usual protocols to date. In the present study, involving 934 laughs from 30 patients and 20 controls, four different neural network models were tested for sensitivity analysis and additionally trained for depression detection. Elementary sound variables were extracted from the records: timing, fundamental frequency mean, the first three formants, average power, and the Shannon-Wiener entropy. Two of the neural networks show a diagnostic discrimination capability of 93.02% and 91.15%, respectively, while the third and fourth achieve 87.96% and 82.40%. Remarkably, entropy turns out to be a fundamental variable for distinguishing between patients and controls, a factor essential to understanding the deep neurocognitive relationship between laughter and depression. In biomedical terms, this neural network classifier-based neuroprosthesis opens up the possibility of applying the same methodology to other mental-health and neuropsychiatric pathologies. Indeed, exploring the application of laughter to the early detection and prognosis of Alzheimer's and Parkinson's disease would be an enticing possibility, from both the biomedical and the computational points of view.
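Among the listed descriptors, entropy is the one the study singles out. One common way to compute a Shannon entropy for an audio record is over a histogram of its amplitude values; the binning choice below (16 bins) is ours, not the paper's.

```python
# Sketch: Shannon entropy of a signal's amplitude distribution, one of the
# laughter descriptors the study lists. Bin count is an arbitrary choice here.
import numpy as np

def amplitude_entropy(signal, bins=16):
    """Shannon entropy (bits) of the signal's amplitude histogram."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = rng.uniform(-1, 1, 10_000)            # broad amplitude spread
tone = np.sin(np.linspace(0, 100, 10_000))   # amplitudes piled near the extremes
```

A signal with amplitudes spread evenly yields a higher entropy than a pure tone, whose amplitude distribution concentrates near its extremes; it is this kind of contrast the classifier can exploit.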

    Automatic Identification of Emotional Information in Spanish TV Debates and Human-Machine Interactions

    Automatic emotion detection is a very attractive field of research that can help build more natural human–machine interaction systems. However, several issues arise when real scenarios are considered, such as the tendency toward neutrality, which makes it difficult to obtain balanced datasets, or the lack of standards for the annotation of emotional categories. Moreover, the intrinsic subjectivity of emotional information increases the difficulty of obtaining valuable data to train machine learning-based algorithms. In this work, two different real scenarios were tackled: human–human interactions in TV debates and human–machine interactions with a virtual agent. For comparison purposes, an analysis of the emotional information was conducted in both, and a profiling of the speakers associated with each task was carried out. Furthermore, different classification experiments show that deep learning approaches can be useful for detecting speakers' emotional information, mainly for arousal, valence, and dominance levels, reaching a 0.7 F1-score. The research presented in this paper was conducted as part of the AMIC and EMPATHIC projects, which received funding from the Spanish Ministry of Science under grants TIN2017-85854-C4-3-R and PDC2021-120846-C43 and from the European Union's Horizon 2020 research and innovation program under grant agreement No. 769872. The first author also received a PhD scholarship from the University of the Basque Country UPV/EHU, PIF17/310.
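An F1-score like the reported 0.7 is typically a macro average over discretised levels of each emotional dimension. The labels below are invented three-level arousal annotations, used only to show the computation.

```python
# Sketch: macro F1 over discretised levels of an emotional dimension.
# Labels are invented; they stand in for arousal annotations.
from sklearn.metrics import f1_score

y_true = ["low", "mid", "high", "mid", "low", "high", "mid", "low"]
y_pred = ["low", "mid", "mid",  "mid", "high", "high", "mid", "low"]

# Macro averaging weights each class equally, which matters when
# neutral/low-arousal labels dominate, as the abstract notes.
macro_f1 = f1_score(y_true, y_pred, average="macro")
```

Because class imbalance toward neutrality is one of the issues the paper raises, macro averaging (rather than accuracy) is the natural reporting choice.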

    Artificial Intelligence for Suicide Assessment using Audiovisual Cues: A Review

    Death by suicide is the seventh leading cause of death worldwide. Recent advancements in Artificial Intelligence (AI), specifically AI applications in image and voice processing, have created a promising opportunity to revolutionize suicide risk assessment. Subsequently, we have witnessed a fast-growing body of research that applies AI to extract audiovisual non-verbal cues for mental illness assessment. However, the majority of recent works focus on depression, despite the evident differences between depression and suicidal behavior in both symptoms and non-verbal cues. This paper reviews recent works that study suicide ideation and suicide behavior detection through audiovisual feature analysis, mainly the analysis of suicidal voice/speech acoustic features and suicidal visual cues. Automatic suicide assessment is a promising research direction that is still in its early stages; accordingly, there is a lack of large datasets that could be used to train the machine learning and deep learning models proven effective in similar tasks. Comment: Manuscript submitted to Artificial Intelligence Reviews (2022).

    Acoustic features of voice in adults suffering from depression

    In order to examine the differences between people suffering from depression (EG, N=18), healthy controls (CG1, N=24), and people with a diagnosed psychogenic voice disorder (CG2, N=9), nine acoustic features of voice were assessed among the total of 51 participants using the MDVP software programme ("Kay Elemetrics" Corp., model 4300). The nine acoustic parameters were analysed on the basis of sustained phonation of the vowel /a/. The results revealed that the mean values of all acoustic parameters differed in the EG compared to both CG1 and CG2, as follows: the parameters indicating frequency variability (Jitt, PPQ), amplitude variability (Shim, vAm, APQ), and noise and tremor (NHR, VTI) were higher, while only fundamental frequency (F0) and the soft phonation index (SPI) were lower (F0 compared to CG1, and SPI compared to CG1 and CG2). Only the PPQ parameter was not significant. vAm and APQ had the highest discriminant value for depression. The acoustic features of voice analysed in this study on sustained vowel phonation thus differed between, and discriminated, the EG and both CG1 and CG2. In voice analysis, the parameters vAm and APQ could potentially serve as markers indicative of depression. The results point to the importance of the voice, that is, of its acoustic indicators, in recognizing depression. Parameters from the domain of voice intensity variation could be particularly helpful in creating a programme for the automatic recognition of depression.
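The frequency- and amplitude-variability parameters the study relies on (jitter, shimmer) are defined from per-cycle measurements of the sustained vowel. The sketch below uses common "local" definitions with invented values; it does not reproduce MDVP's exact computation.

```python
# Sketch: local jitter (cycle-to-cycle period variability) and local shimmer
# (cycle-to-cycle amplitude variability) from per-cycle measurements.
# Input values are invented, not MDVP output.
import numpy as np

def local_jitter(periods):
    """Mean absolute difference of consecutive periods / mean period, in %."""
    periods = np.asarray(periods, dtype=float)
    return 100 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Mean absolute difference of consecutive peak amplitudes / mean, in %."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

jitt = local_jitter([5.0, 5.1, 4.9, 5.0, 5.05])   # glottal periods in ms
shim = local_shimmer([1.0, 0.95, 1.05, 1.0])       # peak amplitudes
```

Higher values of these ratios correspond to a less stable voice source, which is the direction of difference the study reports for the depression group.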


    A study on artificial intelligence-based clinical decision support system to evaluate depression and suicide risk using voice and text

    Doctoral dissertation, Department of Medicine, College of Medicine, Seoul National University Graduate School, February 2022 (advisor: Yong-Min Ahn). Introduction: The incidence of depression and suicide continues to increase worldwide, and the resulting socio-economic loss is enormous. However, because diagnosis must rely on the patient's subjective answers, effective intervention is difficult when the patient under-reports symptoms or when an in-depth interview cannot be conducted. The voice and the words used by participants in interviews have long been recognised as clinically significant in psychiatry, but traditionally they could only inform diagnosis through clinicians' accumulated experience. As technologies for extracting voice indicators and transcribing spoken words have developed, differences in voice and in word use according to depression and suicide risk are being revealed, and machine learning applied to medicine can detect these subtle differences and support decision-making. Previous studies, however, are limited by insufficient numbers of subjects and insufficient consideration of clinical factors such as medication. This study therefore aims to overcome these limitations and build an AI-based clinical decision support system that evaluates depression and suicide risk from the subject's voice and the words used during an interview. Method: A patient group complaining of depressive symptoms and a normal control group were recruited, and the Mini International Neuropsychiatric Interview (MINI) was administered to all subjects and recorded.
    After extracting only the sections uttered by the subject from the recorded interview files, voice indicators and text data were extracted from each section. In Study I, the initial data were used to compare the normal, mild depression, and major depression groups. In Study II, the final baseline evaluation data were used to identify voice indicators that distinguish normal from depressed subjects and to construct a diagnostic algorithm using text. In Study III, an algorithm for identifying the high-suicide-risk group within the depression group was established, with high risk defined through Beck's Scale for Suicide Ideation and the suicide module of the MINI. Results: In Study I, seven voice and speech indicators were extracted that distinguish the 33 normal, 26 mild depression, and 34 major depression subjects; among the models built on these features, the multilayer perceptron performed best. In Study II, the speech features and text data of 83 normal and 83 depressed subjects were analysed (of 105 controls and 85 patients recruited, 22 were excluded for a psychiatric history and 2 for missing self-report questionnaires); the machine learning algorithms based on voice and on text achieved areas under the curve of 0.806 and 0.905, respectively. In Study III, the 83 subjects in the depression group were classified by both Beck's Scale for Suicide Ideation and the MINI. For the voice-based algorithm predicting the high-suicide-risk group, however, the maximum sensitivity was 0.535, and the best performance was a mean accuracy of 0.495, obtained with logistic regression.
    The text-based algorithm for predicting suicide risk likewise reached an area under the curve of only 0.632, but an ensemble model integrating text data with sociodemographic information achieved an area under the curve of 0.800, confirming its diagnostic usefulness. Conclusion: This study established algorithms for diagnosing depression and suicide risk by extracting voice and speech features and text data from the subject's utterances in a structured interview. Both data types performed well in diagnosing depression but were insufficient for diagnosing suicide risk. Although the text data are limited by having been obtained through structured interviews, when integrated with sociodemographic information they outperformed sociodemographic information alone, demonstrating practical value. This is the first study in South Korea to demonstrate the objective diagnostic value of voice and text data, a step toward digital objective diagnostic tools in psychiatry. Further research on data from more diverse regions and environments is needed. Part of this work has been published as Shin, D., Cho, W. I., Park, C., Rhee, S. J., Kim, M. J., Lee, H., ... & Ahn, Y. M. (2021). Detection of Minor and Major Depression through Voice as a Biomarker Using Machine Learning. Journal of Clinical Medicine, 10(14), 3046. Keywords: depression, suicide risk, voice, text analysis, machine learning, clinical decision support system. Student Number: 2018-25300.
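The thesis reports its diagnostic performance as areas under the ROC curve (e.g., 0.806 for voice, 0.905 for text). The computation can be sketched as follows; the labels and scores below are invented, not the thesis data.

```python
# Sketch: area under the ROC curve for a binary diagnostic classifier.
# Labels (1 = depressed) and predicted scores are invented examples.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 0, 1, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.3]

# AUC equals the probability that a randomly chosen positive case
# receives a higher score than a randomly chosen negative case.
auc = roc_auc_score(y_true, y_score)
```

An AUC of 0.5 is chance level, which is why the thesis judges its suicide-risk models (AUC 0.632 for text alone) insufficient while accepting the 0.800 ensemble.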

    Analysis and automatic identification of spontaneous emotions in speech from human-human and human-machine communication

    383 p. This research focuses mainly on improving our understanding of human-human and human-machine interactions by analysing participants' emotional status. For this purpose, we have developed and enhanced Speech Emotion Recognition (SER) systems for both kinds of interaction in real-life scenarios, with explicit emphasis on the Spanish language. In this framework, we have conducted an in-depth analysis of how humans express emotions through speech when communicating with other persons or with machines in actual situations. Thus, we have analysed and studied the way emotional information is expressed in a variety of true-to-life environments, a crucial aspect for the development of SER systems. This study aimed at a comprehensive understanding of the challenge we wanted to address: identifying emotional information in speech using machine learning technologies. Neural networks have been demonstrated to be adequate tools for identifying events in speech and language. Most of the experiments aimed to make local comparisons between specific aspects, so the experimental conditions were tailored to each particular analysis. The experiments across the different articles (from P1 to P19) are hardly comparable, owing to our continuous learning about the difficult task of identifying emotions in speech. To enable a fair comparison, additional unpublished results, obtained under identical and rigorous conditions, are presented in the Appendix. This general comparison offers an overview of the advantages and disadvantages of the different methodologies for the automatic recognition of emotions in speech.