496 research outputs found
A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis
Expressed sentiment and emotions are two crucial factors in human multimodal language. This paper describes a Transformer-based joint-encoding (TBJE) for the tasks of Emotion Recognition and Sentiment Analysis. In addition to using the Transformer architecture, our approach relies on a modular co-attention and a glimpse layer to jointly encode one or more modalities. The proposed solution was also submitted to the ACL20: Second Grand-Challenge on Multimodal Language to be evaluated on the CMU-MOSEI dataset. The code to replicate the presented experiments is open-source: https://github.com/jbdel/MOSEI_UMONS
Comment: Winner of the ACL20: Second Grand-Challenge on Multimodal Language
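As a rough illustration of the cross-modal attention idea behind the modular co-attention described above (this is a generic numpy sketch, not the authors' TBJE implementation; all names and dimensions are hypothetical), tokens of one modality can attend over tokens of another:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(queries, context, d_k):
    """Scaled dot-product cross-attention: tokens of one modality
    (queries) attend over tokens of another modality (context)."""
    scores = queries @ context.T / np.sqrt(d_k)   # (n_q, n_c)
    weights = softmax(scores, axis=-1)            # each row sums to 1
    return weights @ context                      # (n_q, d_k)

rng = np.random.default_rng(0)
text_tokens = rng.standard_normal((5, 16))    # 5 linguistic tokens, dim 16
audio_tokens = rng.standard_normal((8, 16))   # 8 acoustic frames, dim 16

attended = co_attention(text_tokens, audio_tokens, d_k=16)
assert attended.shape == (5, 16)
```

In a full joint-encoding, a block like this would be applied in both directions (text attends to audio and vice versa) before the per-modality summaries are pooled for classification.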
How to live with one's emotions when one claims to have none? A look at the emotional experiences of highly alexithymic individuals in a clinical context
At the heart of psychotherapy lies a process in which psychologists and clients work together to alleviate experiences of suffering.
To achieve this, psychologists orient their interventions on emotional experiences at the root of suffering. However, this can be particularly challenging when working with clients who struggle to identify and describe their emotions, namely highly alexithymic clients. To better understand how to elaborate on emotions with these clients, this thesis aims to expand on the deficit and intrapsychic models of alexithymia by examining the personal and relational dimensions of the emotional experiences of highly alexithymic individuals in psychotherapy. In the first article that examines the personal dimension of alexithymia, how highly alexithymic individuals construct meaning from their emotional experiences is explored in order to offer a comprehensive understanding of present (as opposed to absent or negative) characteristics of such experiences. Based on the analysis of thirteen interviews with seven participants, we suggest an interpretative model that describes experiential markers (a state of disconnection and an undifferentiated state, recognized through bodily cues) and external markers (rational and relational landmarks). Through the use of markers, highly alexithymic individuals mitigate the confusion they feel about their emotions. Moreover, these markers can be utilized simultaneously to label their emotions, identify their difficulties and give meaning to their emotional experiences. In the second article that explores the relational dimension of alexithymia, interactions between highly alexithymic clients and therapists when discussing emotions in therapy are described within an interactionist perspective. Based on the analysis of three psychotherapy sessions alongside individual interviews exploring the clients' perspectives on their psychotherapy process, we suggest that psychologists alternate between a posture of open exploration and of active guidance. 
Clients tend to adopt a passive posture in response to open-ended, exploratory interventions from psychotherapists, leading to misaligned interactions (i.e., interactions marked by pauses, delays, etc.). In response, psychotherapists adopt an active guiding posture that repairs and maintains aligned interactions. This guidance is provided by explicitly explaining psychological knowledge, framing the clients' difficulties from a psychological perspective, sustaining the interaction, and steering the conversation toward concrete aspects of emotions. Clients adopted their psychologists' way of framing their difficulties and perceived this posture as helpful. These results deepen our understanding of how highly alexithymic individuals make sense of their emotional experiences and how discussing emotions is negotiated in psychotherapy with these clients. More broadly, they provide insights into the negotiation processes that occur in psychotherapy and into different ways of discussing emotions with highly alexithymic clients.
Temporal patterns of phytoplankton assemblages, size spectra and diversity during the wane of a Phaeocystis globosa spring bloom in hydrologically contrasted coastal waters
Multimodal Attentive Fusion Network for audio-visual event recognition
Event classification is inherently sequential and multimodal. Therefore, deep neural models need to dynamically focus on the most relevant time window and/or modality of a video. In this study, we propose the Multimodal Attentive Fusion Network (MAFnet), an architecture that dynamically fuses visual and audio information for event recognition. Inspired by prior studies in neuroscience, we couple the two modalities at several levels of the visual and audio paths. Furthermore, the network dynamically highlights the modality that is most relevant for classifying events in a given time window. Experimental results on the AVE (Audio-Visual Event), UCF51, and Kinetics-Sounds datasets show that the approach effectively improves accuracy in audio-visual event classification. Code is available at: https://github.com/numediart/MAFne
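The per-window modality weighting described in the abstract can be sketched as a softmax attention over the two feature streams. This is a minimal generic illustration, not MAFnet's actual architecture; every name and dimension below is hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_fusion(video_feats, audio_feats, w_score):
    """Score each modality at each time window, then fuse the two
    streams with softmax weights, so the most informative modality
    dominates at each moment."""
    stacked = np.stack([video_feats, audio_feats], axis=1)  # (T, 2, d)
    scores = stacked @ w_score                              # (T, 2) scalar score per modality
    weights = softmax(scores, axis=1)                       # per-window modality weights
    return (weights[..., None] * stacked).sum(axis=1)       # (T, d) fused features

rng = np.random.default_rng(1)
T, d = 10, 32
video = rng.standard_normal((T, d))
audio = rng.standard_normal((T, d))
w = rng.standard_normal(d) * 0.1

fused = attentive_fusion(video, audio, w)
assert fused.shape == (T, d)
```

Because the weights sum to one over the modality axis, the fused features interpolate between the two streams window by window rather than concatenating them.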
SECL-UMONS Database for Sound Event Classification and Localization
We introduce the SECL-UMons dataset for sound event classification and localization in office environments. The multichannel dataset comprises 11 event classes recorded at several realistic positions in two different rooms, with two types of sequences according to the number of events per sequence: 2662 single-label sequences and 2724 multi-label sequences, for a total of 5.24 hours. The database is publicly available to support algorithm development and to provide common ground for comparing different techniques. The DCASE 2019 challenge baseline (SELDnet), a convolutional recurrent neural network, is used to generate benchmark scores for the new dataset. We also slightly modify the model to provide a benchmark score for real-time classification and localization on the new dataset. Funding: Computer vision and audition based on bioinspired and deep neural networks for representation learning - Fédération Wallonie-Bruxelles
Audio-Visual Fusion And Conditioning With Neural Networks For Event Recognition
Video event recognition based on audio and visual modalities is an open research problem. The mainstream literature on video event recognition focuses on the visual modality and does not take into account the relevant information present in the audio modality. We study several fusion architectures for the audio-visual recognition of video events. We first build classical fusion architectures using concatenation, addition, or Multimodal Compact Bilinear pooling (MCB). We then create connections between the visual and audio processing paths with Feature-Wise Linear Modulation (FiLM) layers: for instance, the information present in the audio modality is exploited to change the visual classification behaviour. We find that multimodal event classification always outperforms unimodal classification, whatever the fusion or conditioning method used, and that classification accuracy based on one modality improves when we add modulation by the other modality through FiLM layers. Funding: work jointly funded by UMONS and the Université de Sherbrooke, as part of a doctoral thesis entitled « Apprentissage et perception multimodale d'objets et application en analyse de scène » (multimodal learning and perception of objects, applied to scene analysis) - Other Belgian public sources
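The FiLM conditioning mentioned above can be sketched in a few lines: the conditioning modality produces a per-channel scale (gamma) and shift (beta) that modulate the other modality's features. This is a generic illustration of feature-wise linear modulation, not the paper's exact model; all names and dimensions here are hypothetical:

```python
import numpy as np

def film(visual, audio, w_gamma, w_beta):
    """FiLM conditioning: audio features produce per-channel scale
    (gamma) and shift (beta) parameters that modulate the visual
    feature map feature-wise."""
    gamma = audio @ w_gamma            # (batch, channels)
    beta = audio @ w_beta              # (batch, channels)
    # broadcast the modulation over the spatial positions of the visual map
    return gamma[:, None, :] * visual + beta[:, None, :]

rng = np.random.default_rng(2)
batch, positions, channels, audio_dim = 2, 49, 64, 128
visual = rng.standard_normal((batch, positions, channels))
audio = rng.standard_normal((batch, audio_dim))
w_gamma = rng.standard_normal((audio_dim, channels)) * 0.01
w_beta = rng.standard_normal((audio_dim, channels)) * 0.01

modulated = film(visual, audio, w_gamma, w_beta)
assert modulated.shape == visual.shape
```

In practice gamma and beta would come from a small learned network over the audio features, and the modulated map would feed the rest of the visual classification path.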
How to genetically increase fillet yield in fish: Relevant genetic parameters and methods to predict genetic gain
A first demonstration of realized selection response for fillet yield in fish, in rainbow trout Oncorhynchus mykiss
A first demonstration of realized selection response for fillet yield in fish, in rainbow trout Oncorhynchus mykiss. 13th International Symposium for Genetics in Aquaculture (ISGA XIII)
- …
