456 research outputs found

    Multimodal Emotion Recognition among Couples from Lab Settings to Daily Life using Smartwatches

    Couples generally manage chronic diseases together, and the management takes an emotional toll on both patients and their romantic partners. Consequently, recognizing the emotions of each partner in daily life could provide insight into their emotional well-being in chronic disease management. Partners' emotions are currently inferred in the lab and in daily life using self-reports, which are not practical for continuous emotion assessment, or observer reports, which are manual, time-intensive, and costly. There exists no comprehensive overview of works on emotion recognition among couples. Furthermore, approaches to emotion recognition among couples have (1) focused on English-speaking couples in the U.S., (2) used data collected in the lab, and (3) performed recognition using observer ratings rather than partners' self-reported / subjective emotions. In the body of work contained in this thesis (8 papers - 5 published and 3 currently under review in various journals), we fill the current literature gap on couples' emotion recognition, develop emotion recognition systems using 161 hours of data from a total of 1,051 individuals, and make contributions toward taking couples' emotion recognition from the lab, which is the status quo, to daily life. This thesis contributes toward building automated emotion recognition systems that would eventually enable partners to monitor their emotions in daily life and enable the delivery of interventions to improve their emotional well-being. Comment: PhD Thesis, 2022 - ETH Zurich

    A cross-sectional study to assess pragmatic strengths and weaknesses in healthy ageing

    BACKGROUND: Ageing refers to the natural and physiological changes that individuals experience over the years. This process also involves modifications in communicative-pragmatics, namely the ability to convey meanings in social contexts and to interact with other people using various expressive means, such as the linguistic, extralinguistic and paralinguistic aspects of communication. Very few studies have provided a complete assessment of communicative-pragmatic performance in healthy ageing. METHODS: The aim of this study was to comprehensively assess communicative-pragmatic ability in three samples of 20 healthy adults each (N = 60), each sample belonging to a different age range (20–40, 65–75, 76–86 years old), and to compare their performance in order to observe any potential changes in their ability to communicate. We also explored the potential role of education and sex in the communicative-pragmatic abilities observed. The three age groups were evaluated with a between-subjects design by means of the Assessment Battery for Communication (ABaCo), a validated assessment tool comprising five scales: linguistic, extralinguistic, paralinguistic, contextual and conversational. RESULTS: The results indicated that the pragmatic ability assessed by the ABaCo is poorer in older participants than in younger ones (main effect of age group: F(2,56) = 9.097; p < .001). Specifically, significant differences were detected in tasks on the extralinguistic, paralinguistic and contextual scales. Whereas the data highlighted a significant role of education (F(1,56) = 4.713; p = .034), no sex-related differences were detected. CONCLUSIONS: Our results suggest that the ageing process may also affect communicative-pragmatic ability, and a comprehensive assessment of the components of this ability may help to better identify the difficulties often experienced by older individuals in their daily life activities.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12877-022-03304-z

    Proceedings of the LREC 2018 Special Speech Sessions

    LREC 2018 Special Speech Sessions "Speech Resources Collection in Real-World Situations"; Phoenix Seagaia Conference Center, Miyazaki; 2018-05-0

    Analysis and automatic identification of spontaneous emotions in speech from human-human and human-machine communication

    383 p. This research mainly focuses on improving our understanding of human-human and human-machine interactions by analysing participants' emotional status. For this purpose, we have developed and enhanced Speech Emotion Recognition (SER) systems for both interaction types in real-life scenarios, with explicit emphasis on the Spanish language. In this framework, we have conducted an in-depth analysis of how humans express emotions using speech when communicating with other persons or machines in actual situations. Thus, we have analysed and studied the way in which emotional information is expressed in a variety of true-to-life environments, which is a crucial aspect for the development of SER systems. This study aimed to comprehensively understand the challenge we wanted to address: identifying emotional information in speech using machine learning technologies. Neural networks have been demonstrated to be adequate tools for identifying events in speech and language. Most of the experiments aimed to make local comparisons between specific aspects; thus, the experimental conditions were tailored to each particular analysis. The experiments across the different articles (from P1 to P19) are hardly comparable due to our continuous learning in dealing with the difficult task of identifying emotions in speech. In order to make a fair comparison, additional unpublished results are presented in the Appendix. These experiments were carried out under identical and rigorous conditions. This general comparison offers an overview of the advantages and disadvantages of the different methodologies for the automatic recognition of emotions in speech
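    The identification task described above can be illustrated with a toy sketch: hand-crafted acoustic features (frame energy and zero-crossing rate, two classic low-level descriptors that correlate loosely with arousal) feeding a nearest-centroid classifier. This is a hypothetical, minimal example and not the neural-network pipeline developed in the thesis; the synthetic "clips", the labels, and all parameter values are invented for the demonstration.

```python
import math

def frame_features(signal, frame_len=160):
    """Mean log energy and mean zero-crossing rate over fixed-length frames."""
    energies, zcrs = [], []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / frame_len
        energies.append(math.log(energy + 1e-10))
        zcrs.append(zcr)
    return (sum(energies) / len(energies), sum(zcrs) / len(zcrs))

def synth(freq, amp, sr=8000, dur=0.5):
    """Synthetic sine-wave 'clip' standing in for a real speech recording."""
    return [amp * math.sin(2 * math.pi * freq * t / sr) for t in range(int(sr * dur))]

# Toy "corpus": high-arousal clips are louder and higher-pitched.
train = [(synth(400, 0.9), "aroused"), (synth(120, 0.2), "calm"),
         (synth(350, 0.8), "aroused"), (synth(100, 0.3), "calm")]

# One centroid per emotion label in the 2-D feature space.
centroids = {}
for label in {"aroused", "calm"}:
    feats = [frame_features(sig) for sig, lab in train if lab == label]
    centroids[label] = tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def classify(signal):
    """Assign the label whose centroid is closest in feature space."""
    f = frame_features(signal)
    return min(centroids, key=lambda lab: math.dist(f, centroids[lab]))

print(classify(synth(380, 0.85)))  # → aroused
```

    In a real SER system the two hand-picked features would be replaced by a learned representation (e.g. a neural network over spectrogram frames), but the overall shape — extract features per clip, then map the feature vector to an emotion label — is the same.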

    Accountable, Explainable Artificial Intelligence Incorporation Framework for a Real-Time Affective State Assessment Module

    The rapid growth of artificial intelligence (AI) and machine learning (ML) solutions has seen them adopted across various industries. However, concern over 'black-box' approaches has led to increasing demand for high accuracy, transparency, accountability, and explainability in AI/ML approaches. This work contributes an accountable, explainable AI (AXAI) framework for delineating and assessing AI systems. The framework has been incorporated into the development of a real-time, multimodal affective state assessment system

    Advanced Content and Interface Personalization through Conversational Behavior and Affective Embodied Conversational Agents

    Conversation is becoming one of the key interaction modes in HMI. As a result, conversational agents (CAs) have become an important tool in various everyday scenarios. From Apple and Microsoft to Amazon, Google, and Facebook, all have adopted their own variations of CAs. These CAs range from chatbots and 2D, cartoon-like talking heads to fully articulated embodied conversational agents performing interaction in various contexts. Recent studies in the field of face-to-face conversation show that the most natural way to implement interaction is through synchronized verbal and co-verbal signals (gestures and expressions). Namely, co-verbal behavior represents a major source of discourse cohesion: it regulates communicative relationships and may support or even replace its verbal counterparts. It effectively retains the semantics of the information and gives a certain degree of clarity to the discourse. In this chapter, we present a model for the generation and realization of more natural machine-generated output

    Acoustic Features of Different Types of Laughter in North Sami Conversational Speech

    Peer reviewed
