
    Conversational Agents for depression screening: a systematic review

    Objective: This work explores the advances in conversational agents aimed at the detection of mental health disorders, specifically the screening of depression. The focus is on agents based on voice interaction, but other approaches are also covered, such as text-based interaction and embodied avatars. Methods: PRISMA was selected as the systematic methodology for the analysis of existing literature, which was retrieved from Scopus, PubMed, IEEE Xplore, APA PsycINFO, Cochrane, and Web of Science. Relevant research addresses the detection of depression using conversational agents, and the selection criteria include effectiveness, usability, personalization, and psychometric properties. Results: Of the 993 references initially retrieved, 36 were finally included in our work. The analysis of these studies allowed us to identify 30 conversational agents that claim to detect depression, either specifically or in combination with other disorders such as anxiety or stress disorders. As a general approach, screening was implemented in the conversational agents taking standardized or psychometrically validated clinical tests as a reference; these tests were also used as a gold standard for validation. The implementation of questionnaires such as the Patient Health Questionnaire or the Beck Depression Inventory, used in 65% of the articles analyzed, stands out. Conclusions: Intelligent conversational agents allow screening to be administered to different types of profiles, such as patients (33% of relevant proposals) and caregivers (11%), although in many cases a target profile is not clearly defined (66% of solutions analyzed). This study found 30 standalone conversational agents, but some proposals combine several approaches for richer data acquisition. The interaction implemented in most relevant conversational agents is text-based, although the evolution is clearly towards voice integration, which in turn enhances their psychometric characteristics, as voice interaction is perceived as more natural and less invasive.
    Agencia Estatal de Investigación | Ref. PID2020-115137RB-I0
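    Since most of the surveyed agents anchor their screening on validated questionnaires such as the PHQ, the scoring logic behind the dialogue is simple. The following is a minimal sketch, not taken from any of the reviewed systems, of how an agent backend might score PHQ-9 responses collected turn by turn; the function name and dialogue layer are hypothetical, while the 0-3 item scale and the 5/10/15/20 severity cut-offs are the standard published PHQ-9 conventions.

```python
# Minimal sketch: PHQ-9 scoring as a conversational-agent backend.
# The dialogue layer is hypothetical; the 0-3 item scale and the
# 5/10/15/20 severity cut-offs are the standard PHQ-9 conventions.

PHQ9_SEVERITY = [  # (minimum total score, severity label)
    (20, "severe"),
    (15, "moderately severe"),
    (10, "moderate"),
    (5, "mild"),
    (0, "minimal or none"),
]

def score_phq9(answers: list[int]) -> tuple[int, str]:
    """Sum nine 0-3 item responses and map the total to a severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
    total = sum(answers)
    label = next(lbl for cutoff, lbl in PHQ9_SEVERITY if total >= cutoff)
    return total, label

if __name__ == "__main__":
    # Example: responses gathered by the dialogue manager, one per item.
    total, label = score_phq9([1, 2, 1, 0, 2, 1, 1, 0, 0])
    print(f"PHQ-9 total = {total} -> {label}")  # PHQ-9 total = 8 -> mild
```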

    Monitoring the effects of therapeutic interventions in depression through self-assessments

    The treatment of major psychiatric disorders is an arduous and thorny path for the patients concerned, characterized by polypharmacy, massive adverse side effects, modest prospects of success, and constantly declining response rates. All the more important is the early detection of psychiatric disorders prior to the development of clinically relevant symptoms, so that people can benefit from early interventions. A well-proven approach to monitoring mental health relies on voice analysis. This method has been successfully used with psychiatric patients to ‘objectively’ document the progress of improvement or the onset of relapse. Studies with psychiatric patients over 2-4 weeks demonstrated that daily voice assessments have a notable therapeutic effect in themselves. Daily voice assessments therefore appear to be a low-threshold form of therapeutic means that may be realized through self-assessments. To evaluate the performance and reliability of this approach, we carried out a longitudinal study of 82 university students in 3 different countries, with daily assessments over 2 weeks. The sample included 41 males (mean age 24.2±3.83 years) and 41 females (mean age 21.6±2.05 years). Unlike other research in the field, this study was not concerned with the classification of individuals in terms of diagnostic categories. The focus lay on the monitoring aspect and on the extent to which the effects of therapeutic interventions or of behavioural changes are visible in the results of self-assessment voice analyses. The test persons showed remarkably good adherence to the daily voice analysis scheme. The accumulated data were of generally high quality: sufficiently high signal levels, a very limited number of movement artifacts, and little to no interfering background noise. The method was sufficiently sensitive to detect: i) habituation effects when test persons became used to the daily procedure; and ii) short-term fluctuations that exceeded prespecified thresholds and reached significance. Results are directly interpretable and provide information about what is going well, what is going less well, and where there is a need for action. The proposed self-assessment approach was found to be well suited to serve as a health-monitoring tool for subjects with an elevated vulnerability to psychiatric disorders or to stress-induced mental health problems. Daily voice assessments are in fact a low-threshold form of therapeutic means that can be realized through self-assessments, requires only little effort, can be carried out in the test person’s own home, and has the potential to strengthen resilience and to induce positive behavioural changes.
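    The abstract does not specify the voice features or the thresholding rule behind the fluctuation detection, so the following is only a minimal sketch under assumed conventions: a daily scalar voice score is compared against a rolling personal baseline, and days whose deviation exceeds a prespecified z-score threshold are flagged. The window length, threshold, and scores are illustrative, not the study's protocol.

```python
# Minimal sketch (assumptions, not the study's published method):
# flag days on which a daily voice score deviates from a subject's
# rolling baseline by more than a prespecified threshold.
import statistics

def flag_fluctuations(daily_scores: list[float],
                      window: int = 7,
                      z_threshold: float = 2.0) -> list[int]:
    """Return indices of days whose score deviates from the rolling mean
    of the preceding `window` days by more than `z_threshold` SDs."""
    flagged = []
    for day in range(window, len(daily_scores)):
        baseline = daily_scores[day - window:day]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        if sigma > 0 and abs(daily_scores[day] - mu) / sigma > z_threshold:
            flagged.append(day)
    return flagged

# Example: two weeks of daily scores with one abrupt change on day 10.
scores = [0.51, 0.49, 0.52, 0.50, 0.48, 0.53, 0.50,
          0.49, 0.51, 0.50, 0.78, 0.52, 0.49, 0.50]
print(flag_fluctuations(scores))  # [10]
```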

    Addressing Variability in Speech when Recognizing Emotion and Mood In-the-Wild

    Bipolar disorder is a chronic mental illness, affecting 4% of Americans, that is characterized by periodic mood changes ranging from severe depression to extreme compulsive highs. Both mania and depression profoundly impact the behavior of affected individuals, resulting in potentially devastating personal and social consequences. Bipolar disorder is managed clinically with regular interactions with care providers, who assess mood, energy levels, and the form and content of speech. Recent work has proposed smartphones for automatically monitoring mood using speech. Much of the early work in speech-centered mood detection has been done in the laboratory or clinic and is not reflective of the variability found in real-world conversations and conditions. Outside of these settings, automatic mood detection is hard, as the recordings include environmental noise, differences in recording devices, and variations in subject speaking patterns. Without addressing these issues, it is difficult to move towards a passive mobile health system. My research works to address this variability present in speech so that such a system can be created, allowing for interventions to mitigate the life-changing effects of mood transitions. However, detecting mood directly from speech is difficult, as mood varies over the course of days or weeks, while speech fluctuates rapidly. To address this, my thesis explores how an intermediate step can be used to aid in this prediction. For example, one of the major symptoms of bipolar disorder is emotion dysregulation: changes in the way emotions are perceived and a lack of inhibition in their expression. My work has supported the relationship between automatically extracted emotion estimates and mood. Because of this, my thesis explores how to mitigate the variability found when detecting emotion from speech. The remainder of my thesis is focused on applying these emotion-based features, as well as features based on language content, to real-world applications. This dissertation is divided into the following parts:
    Part I: I address the direct classification of mood from speech. This is accomplished by addressing variability due to recording device using preprocessing and multi-task learning. I then show how both subject-specific and population-general information can be combined to significantly improve mood detection.
    Part II: I explore the automatic detection of emotion from speech and how to control for the other factors of variability present in the speech signal. I use progressive networks as a method to augment emotion with other paralinguistic data, including gender and speaker, as well as other datasets. Additionally, I introduce a novel domain generalization method for cross-corpus detection.
    Part III: I demonstrate real-world applications of speech mood monitoring using everyday conversations. I show how the previously introduced generalized model can predict emotion from the speech of individuals with suicidal ideation, demonstrating its effectiveness across domains. Furthermore, I use these predictions to distinguish individuals with suicidal thoughts from healthy controls. Lastly, I introduce a novel framework for intervention detection in individuals with bipolar disorder. I then create a natural speech mood monitoring system based on features derived from measures of emotion and automatic speech recognition (ASR) transcripts and show effective intervention detection.
    I conclude this dissertation with the following future directions: (1) extending my emotion generalization system to include multiple modalities and factors of variability; (2) expanding natural speech mood monitoring by including more devices, exploring other data besides speech, and investigating mood rating causality.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/153461/1/gideonjn_1.pd
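    Part I names multi-task learning over recording devices without specifying an architecture, so the sketch below is a generic, assumed instance of the idea rather than the dissertation's model: a shared speech encoder feeds a mood head and an auxiliary recording-device head, so the encoder learns mood-relevant representations while explicitly accounting for device identity. All layer sizes, names, and the auxiliary loss weight are hypothetical.

```python
# Hypothetical sketch of multi-task mood detection: a shared encoder
# with a mood head and an auxiliary recording-device head. A generic
# illustration, not the dissertation's actual architecture.
import torch
import torch.nn as nn

class MultiTaskMoodNet(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_moods=2, n_devices=3):
        super().__init__()
        # Shared encoder over frame-level acoustic features (e.g. MFCCs).
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.mood_head = nn.Linear(hidden, n_moods)      # primary task
        self.device_head = nn.Linear(hidden, n_devices)  # auxiliary task

    def forward(self, x):                 # x: (batch, time, n_features)
        _, h = self.encoder(x)            # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.mood_head(h), self.device_head(h)

model = MultiTaskMoodNet()
x = torch.randn(8, 100, 40)               # dummy batch of feature sequences
mood_logits, device_logits = model(x)
loss = (nn.functional.cross_entropy(mood_logits, torch.randint(0, 2, (8,)))
        + 0.3 * nn.functional.cross_entropy(device_logits, torch.randint(0, 3, (8,))))
loss.backward()                            # joint update of the shared encoder
```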

    Analysis of information and mathematical support for the recognition of human affective states

    The article presents an analytical review of research in the field of affective computing. This research direction is a component of artificial intelligence, and it studies methods, algorithms and systems for analyzing human affective states during interactions with other people, computer systems or robots. In the field of data mining, affect is understood as the manifestation of psychological reactions to a triggering event, which can occur in both the short and the long term and can vary in intensity. Affects in this field are divided into four types: affective emotions, basic emotions, sentiment and affective disorders. The manifestation of affective states is reflected in verbal data and in non-verbal characteristics of behavior: the acoustic and linguistic characteristics of speech, and a person’s facial expressions, gestures and postures. The review provides a comparative analysis of the existing infoware for the automatic recognition of human affective states, using emotions, sentiment, aggression and depression as examples. The few Russian-language affective databases are still significantly inferior in volume and quality to electronic resources in other world languages. This makes it necessary to consider a wide range of additional approaches, methods and algorithms applicable under limited amounts of training and test data, and sets the task of developing new approaches to data augmentation, transfer learning and the adaptation of foreign-language resources. The article describes methods for analyzing unimodal visual, acoustic and linguistic information, as well as multimodal approaches to affective state recognition. A multimodal approach to the automatic analysis of affective states makes it possible to increase recognition accuracy relative to unimodal solutions. The review notes a trend in modern research: neural network methods are gradually replacing classical deterministic methods owing to better recognition quality and their ability to process large amounts of data quickly. The advantage of multitask hierarchical approaches is the ability to extract new types of knowledge, including knowledge about the influence, correlation and interaction of several affective states on each other, which potentially leads to improved recognition quality. Potential requirements for affective state analysis systems under development and the main directions of further research are given.
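    The review argues that multimodal approaches outperform unimodal ones but the abstract gives no concrete fusion scheme, so below is a minimal late-fusion sketch under assumed conventions: per-modality class probabilities are combined by a weighted average and the highest-scoring class wins. The class labels, probabilities, and modality weights are illustrative only.

```python
# Minimal late-fusion sketch (illustrative, not a method from the review):
# combine per-modality class probabilities for an affective state by a
# weighted average, then pick the highest-scoring class.
import numpy as np

CLASSES = ["neutral", "happy", "sad", "angry"]

def late_fusion(probs_by_modality: dict[str, np.ndarray],
                weights: dict[str, float]) -> str:
    """probs_by_modality maps a modality name to a class-probability
    vector; weights reflect per-modality reliability (hypothetical)."""
    fused = sum(weights[m] * p for m, p in probs_by_modality.items())
    fused /= sum(weights[m] for m in probs_by_modality)
    return CLASSES[int(np.argmax(fused))]

# Example: acoustic and linguistic classifiers disagree on confidence;
# fusion resolves the final label.
probs = {
    "audio": np.array([0.10, 0.20, 0.60, 0.10]),
    "text":  np.array([0.15, 0.15, 0.45, 0.25]),
}
print(late_fusion(probs, weights={"audio": 0.6, "text": 0.4}))  # sad
```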

    Speech function in persons with Parkinson's disease: effects of environment, task and treatment

    Parkinson’s Disease (PD) is a degenerative neurological disease affecting aspects of movement, including speech. Persons with PD are reported to have better speech functioning in the clinical setting than in the home setting, but this has not been quantified. New methodologies in ambulatory measures of speech are emerging that allow investigation of non-clinical settings. The following questions are addressed: Is speech different between environments in PD and in healthy controls? Can clinical tasks predict speech behaviors in the home? Can treatment be proven effective by measures taken in the home? What can we glean from methods of measuring speech function in the home? The experiment included 13 persons with PD and 12 healthy controls, studied in the clinical and home environments; 7 of those 13 persons with PD participated in a treatment study. Major findings included: Spontaneous speech intelligibility, not intensity, was the differentiating factor between persons with PD and healthy controls. Intelligibility and intensity were not related. Both groups presented with higher sentence intensity in the home environment. Spontaneous speech intelligibility in the clinic was related to spontaneous speech intelligibility in the home. The Sentence Intelligibility Test emerged as the best predictor of spontaneous speech intelligibility in the home. Differences between pilot treatment groups measured in the home on intensity and intelligibility were not large enough to make a clinical trial feasible. Individual differences may account for many of these results; for example, more severely impaired patients may have yielded different data. Drawing conclusions about the home environment from measures taken outside the home should be done with care. Ambulatory measures of speech are a viable option for studying speech function in non-clinical settings, and the technology is advancing. Further investigation is needed to develop methodologies and normative values for speech in the home.
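    The study contrasts intensity and intelligibility: intelligibility requires listener judgments or a trained model, but intensity is directly computable from the recording. Below is a minimal sketch, assuming a mono floating-point waveform, of the windowed RMS level in dB commonly used as a speech-intensity measure; the 50 ms window and full-scale reference are illustrative choices, not the study's protocol.

```python
# Minimal sketch of a speech-intensity measure: windowed RMS level in dB.
# The 50 ms window and full-scale reference are illustrative choices,
# not the protocol used in the study.
import numpy as np

def rms_db(signal: np.ndarray, sample_rate: int, window_s: float = 0.05):
    """Return per-window RMS level in dB relative to full scale (dBFS)."""
    hop = int(window_s * sample_rate)
    n_windows = len(signal) // hop
    levels = []
    for i in range(n_windows):
        frame = signal[i * hop:(i + 1) * hop]
        rms = np.sqrt(np.mean(frame ** 2))
        levels.append(20 * np.log10(max(rms, 1e-10)))  # avoid log of zero
    return np.array(levels)

# Example: a 1 s, 440 Hz tone at half amplitude sits at about -9 dBFS.
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
print(rms_db(tone, sr).mean().round(1))  # ≈ -9.0
```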

    Deep learning-based automatic analysis of social interactions from wearable data for healthcare applications

    PhD Thesis. Social interactions of people with Late Life Depression (LLD) could be an objective measure of social functioning, given the association between LLD and poor social functioning. The utilisation of wearable computing technologies is a relatively new approach within the healthcare and well-being application sectors. Recently, the design and development of wearable technologies and systems for health and well-being monitoring have attracted the attention of both the clinical and scientific communities, mainly because current clinical practice relies on typically rather sporadic behaviour assessments administered in artificial settings. As a result, it does not provide a realistic impression of a patient’s condition and thus does not lead to sufficient diagnosis and care. Wearable behaviour monitors, however, have the potential for continuous, objective assessment of behaviour and wider social interactions, thereby capturing naturalistic data without any constraints on the place of recording or the typical limitations of lab-setting research. Such data from naturalistic ambient environments would facilitate automated transmission and analysis, allowing for a more timely and accurate assessment of depressive symptoms. In response to this artificial-setting issue, this thesis focuses on the analysis and assessment of different aspects of social interactions in naturalistic environments using deep learning algorithms, which could lead to improvements in both diagnosis and treatment. The advantages of using deep learning are that there is no need for hand-crafted feature engineering, so the raw data can be used with minimal pre-processing compared to classical machine learning approaches, and that it is scalable and able to generalise. The main dataset used in this thesis was recorded by a wrist-worn device designed at Newcastle University. This device has multiple sensors, including a microphone, a tri-axial accelerometer, a light sensor and a proximity sensor. In this thesis, only the microphone and tri-axial accelerometer are used for the social interaction analysis; the other sensors are not used, since they need more calibration from the users, who in this scenario are elderly people with depression, making their use infeasible. Novel deep learning models are proposed to automatically analyse two aspects of social interactions: verbal interactions/acoustic communications and physical activities/movement patterns. Verbal interactions include the total quantity of speech, who is talking to whom and when, and how much the wearer engaged in the conversations. The physical activity analysis includes activity recognition, the quantity of each activity, and sleep patterns. This thesis is composed of three main stages: two discuss the acoustic analysis and the third describes the movement pattern analysis. The acoustic analysis starts with speech detection, in which each segment of the recording is categorised as speech or non-speech. This segment classification is achieved by a novel deep learning model that leverages bi-directional Long Short-Term Memory with gated activation units combined with Maxout Networks, as well as a combination of two optimisers.
    After detecting speech segments from the audio data, the next stage detects how much the wearer engaged in any conversation throughout these speech events, based on detecting the wearer of the device with a variant of the previous model that combines a convolutional autoencoder with bi-directional Long Short-Term Memory. The system then detects the spoken parts of the main speaker/wearer and thereby the conversational turn-taking, though it only covers turn-taking between the wearer and other speakers, not between every pair of speakers in the conversation. This stage did not take into account the semantics of the speech, due to the ethical constraints of the main dataset (the Depression dataset), which made it impossible to listen to the data or obtain any information about its contents; this is left for future work. Stage 3 involves the physical activity analysis, i.e. inferring elementary physical activities and movement patterns. These elementary patterns include sedentary actions, walking, mixed activities, cycling and using vehicles, as well as sleep patterns. The predictive model used is based on Random Forests and Hidden Markov Models. At every stage, the methods presented in this thesis have been compared to the state of the art in processing audio and accelerometer data, respectively, to thoroughly assess their contribution. These stages are followed by a thorough analysis of the interplay between acoustic interaction, physical movement patterns and the key clinical variables of depression, building on the outcomes of the previous stages. The main reason for not using deep learning in this final stage, unlike the previous stages, is that the main dataset (the Depression dataset) did not have any annotations for speech or activity, due to the ethical constraints mentioned above. Furthermore, the training dataset (the Discussion dataset) did not have any annotations for the accelerometer data, as the data were recorded freely and no camera was attached to the device to make later annotation possible.
    Newton-Mosharafa Fund and the Mission Sector and Cultural Affairs, Ministry of Higher Education in Egypt
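    The thesis pairs a bi-directional LSTM with gated activations and Maxout Networks for speech detection, but the abstract does not give the topology, so the following is a deliberately simplified, assumed sketch of the core idea only: a bi-directional LSTM over acoustic frames followed by a per-segment speech/non-speech decision. The Maxout layers and dual-optimiser training scheme are omitted, and all sizes and names are hypothetical.

```python
# Simplified sketch of a BLSTM speech/non-speech segment classifier.
# Shows only the core idea from the thesis (a bi-directional LSTM over
# acoustic frames); the gated-activation/Maxout details and the
# two-optimiser training scheme are omitted, and all sizes are assumed.
import torch
import torch.nn as nn

class SpeechDetector(nn.Module):
    def __init__(self, n_features=40, hidden=64):
        super().__init__()
        self.blstm = nn.LSTM(n_features, hidden, batch_first=True,
                             bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 2)  # speech vs non-speech

    def forward(self, x):               # x: (batch, frames, n_features)
        out, _ = self.blstm(x)          # (batch, frames, 2*hidden)
        pooled = out.mean(dim=1)        # one summary vector per segment
        return self.classifier(pooled)  # segment-level logits

detector = SpeechDetector()
segment = torch.randn(4, 200, 40)       # 4 dummy segments of frame features
logits = detector(segment)
print(logits.argmax(dim=1))             # predicted class per segment
```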