
    Films, Affective Computing and Aesthetic Experience: Identifying Emotional and Aesthetic Highlights from Multimodal Signals in a Social Setting.

    In recent years, affective computing has been strengthening its ties with the humanities, exploring and building understanding of people’s responses to specific artistic multimedia stimuli. “Aesthetic experience” is acknowledged to be the subjective part of artistic exposure, namely, the inner affective state of a person exposed to an artistic object. In this work, we describe ongoing research activities for studying the aesthetic experience of people exposed to movie artistic stimuli. To do so, this work focuses on the definition of emotional and aesthetic highlights in movies and studies people’s responses to them, in a social setting, using physiological and behavioral signals. To examine the suitability of multimodal signals for detecting highlights, we initially evaluate a supervised highlight detection system. Further, to provide insight into the reactions of people, in a social setting, during emotional and aesthetic highlights, we study two unsupervised systems. These systems are able to (a) measure the distance among the captured signals of multiple people using the dynamic time warping algorithm and (b) create a reaction profile for a group of people that is indicative of whether that group reacts at a given time. The results indicate that the proposed systems are suitable for detecting highlights in movies and capturing some form of social interaction across different movie genres. Moreover, similar social interactions can be observed during exposure to emotional highlights and some types of aesthetic highlights, such as those corresponding to technical or lighting choices of the director. The use of electrodermal activity measurements yields better performance than acceleration measurements, whereas fusion of the two modalities does not appear to be beneficial in the majority of cases.
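
    As a rough illustration of the pairwise-distance idea mentioned above, the sketch below implements classic dynamic time warping between two spectators’ signals; the signal names and data are invented for illustration and do not reproduce the authors’ pipeline.

        import numpy as np

        def dtw_distance(x, y):
            """Classic dynamic time warping distance between two 1-D signals."""
            n, m = len(x), len(y)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(x[i - 1] - y[j - 1])
                    # Extend the cheapest of the three admissible warping steps.
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]

        # Hypothetical electrodermal activity traces for two spectators.
        rng = np.random.default_rng(0)
        eda_a = np.cumsum(rng.normal(size=200))
        eda_b = np.cumsum(rng.normal(size=200))
        print(dtw_distance(eda_a, eda_b))  # small distance = similar reactions

    A small distance between two spectators’ traces would indicate similar reaction dynamics even when the reactions are slightly shifted in time, which is what makes warping-based measures attractive here.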

    Aesthetic Highlight Detection in Movies Based on Synchronization of Spectators’ Reactions.

    Detection of aesthetic highlights is a challenge for understanding the affective processes taking place during movie watching. In this paper we study spectators’ responses to movie aesthetic stimuli in a social context. Moreover, we aim to uncover the emotional component of aesthetic highlights in movies. Our assumption is that synchronized physiological and behavioral reactions among spectators occur during these highlights because: (i) the aesthetic choices of filmmakers are made to elicit specific emotional reactions (e.g. special effects, empathy and compassion toward a character, etc.) and (ii) watching a movie together causes spectators’ affective reactions to be synchronized through emotional contagion. We compare different approaches to estimating synchronization among multiple spectators’ signals, such as pairwise, group and overall synchronization measures, to detect aesthetic highlights in movies. The results show that an unsupervised architecture relying on synchronization measures is able to capture different properties of spectators’ synchronization and detect aesthetic highlights from both spectators’ electrodermal and acceleration signals. We find that pairwise synchronization measures perform the most accurately regardless of highlight category and movie genre. Moreover, we observe that electrodermal signals have more discriminative power than acceleration signals for highlight detection.
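
    One plausible reading of a pairwise synchronization measure is the mean windowed Pearson correlation over all spectator pairs; the sketch below illustrates that idea under assumed window, hop and sampling parameters, not the paper’s exact estimator.

        import numpy as np
        from itertools import combinations

        def pairwise_sync(signals, win=128, hop=64):
            """Mean absolute Pearson correlation over all spectator pairs,
            computed in sliding windows. `signals` is (n_spectators, n_samples)."""
            n, T = signals.shape
            scores = []
            for start in range(0, T - win + 1, hop):
                seg = signals[:, start:start + win]
                corrs = [abs(np.corrcoef(seg[i], seg[j])[0, 1])
                         for i, j in combinations(range(n), 2)]
                scores.append(np.mean(corrs))
            return np.array(scores)  # one synchronization score per window

        # Hypothetical data: 5 spectators, 10 s of 32 Hz electrodermal samples.
        rng = np.random.default_rng(1)
        eda = np.cumsum(rng.normal(size=(5, 320)), axis=1)
        sync = pairwise_sync(eda)
        # Windows whose score exceeds a threshold could be flagged as highlights.
        highlights = sync > sync.mean() + sync.std()

    In this reading, a highlight detector is simply a threshold on the synchronization score, which matches the unsupervised character of the architecture described above.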

    Multimodal Affect and Aesthetic Experience

    The term “aesthetic experience” refers to the inner state of a person exposed to the form and content of artistic objects. Exploring certain aesthetic values of artistic objects, as well as interpreting the aesthetic experience of people exposed to art, can contribute towards understanding (a) art and (b) people’s affective reactions to artwork. Focusing on different types of artistic content, such as movies, music, urban art and other artwork, the goal of this workshop is to enhance interdisciplinary collaboration between affective computing and aesthetics researchers.

    Recognizing Induced Emotions of Movie Audiences: Are Induced and Perceived Emotions the Same?

    Predicting the emotional response of movie audiences to affective movie content is a challenging task in affective computing. Previous work has focused on using audiovisual movie content to predict movie induced emotions. However, the relationship between the audience’s perceptions of the affective movie content (perceived emotions) and the emotions evoked in the audience (induced emotions) remains unexplored. In this work, we address the relationship between perceived and induced emotions in movies, and identify features and modelling approaches effective for predicting movie induced emotions. First, we extend the LIRIS-ACCEDE database by annotating perceived emotions in a crowd-sourced manner, and find that perceived and induced emotions are not always consistent. Second, we show that dialogue events and aesthetic highlights are effective predictors of movie induced emotions. In addition to movie based features, we also study physiological and behavioural measurements of audiences. Our experiments show that induced emotion recognition can benefit from including temporal context and from including multimodal information. Our study bridges the gap between affective content analysis and induced emotion prediction.
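
    A minimal sketch of decision-level (late) multimodal fusion, one common way to combine modality-specific predictors of the kind the abstract describes; the feature sets, dimensions and equal weighting are illustrative assumptions, not the paper’s architecture.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical per-modality features for N movie segments.
        rng = np.random.default_rng(2)
        N = 400
        X_content = rng.normal(size=(N, 16))   # audiovisual content features
        X_physio = rng.normal(size=(N, 8))     # audience physiological features
        y = rng.integers(0, 2, size=N)         # induced-emotion labels (binary)

        # Train one classifier per modality, then average their probabilities.
        clf_content = LogisticRegression(max_iter=1000).fit(X_content, y)
        clf_physio = LogisticRegression(max_iter=1000).fit(X_physio, y)
        p = 0.5 * (clf_content.predict_proba(X_content)[:, 1]
                   + clf_physio.predict_proba(X_physio)[:, 1])
        y_hat = (p > 0.5).astype(int)

    In practice the classifiers would of course be evaluated on held-out movies rather than the training segments; the sketch only shows how late fusion combines modality-specific scores.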

    Towards an Effective Arousal Detection System for Virtual Reality

    Immersive technologies offer the potential to drive engagement and create exciting experiences. A better understanding of the emotional state of the user within immersive experiences can assist in healthcare interventions and the evaluation of entertainment technologies. This work describes a feasibility study exploring the effect of affective video content on heart-rate recordings for Virtual Reality applications. A low-cost reflected-mode photoplethysmographic sensor and an electrocardiographic chest-belt sensor were attached to a novel non-invasive wearable interface specially designed for this study. Eleven participants’ responses were analysed, and heart-rate metrics were used for arousal classification. The reported results demonstrate that fusing the physiological signals yields a significant performance improvement, and hence show the feasibility of our new approach.
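
    To make the reported heart-rate metrics concrete, the sketch below derives a few standard heart-rate-variability features (mean heart rate, SDNN, RMSSD) from hypothetical inter-beat intervals and concatenates the PPG- and ECG-derived features, a simple feature-level form of the fusion the abstract reports as beneficial; all values and names are assumptions.

        import numpy as np

        def hr_features(rr_ms):
            """Standard heart-rate variability features from RR intervals (ms)."""
            rr = np.asarray(rr_ms, dtype=float)
            mean_hr = 60000.0 / rr.mean()               # beats per minute
            sdnn = rr.std(ddof=1)                       # overall variability
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # beat-to-beat variability
            return np.array([mean_hr, sdnn, rmssd])

        # Hypothetical RR series from the two sensors for one participant.
        rr_ppg = [812, 798, 805, 840, 822, 799, 810]
        rr_ecg = [810, 800, 806, 838, 824, 801, 808]

        # Feature-level fusion: concatenate per-sensor features for the classifier.
        fused = np.concatenate([hr_features(rr_ppg), hr_features(rr_ecg)])

    The fused vector would then feed an arousal classifier of the reader’s choice; the actual sensors, features and classifier used in the study are described in the paper itself.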

    Using Movies to Probe the Neurobiology of Anxiety

    Over the past century, research has helped us build a fundamental understanding of the neurobiological underpinnings of anxiety. Specifically, anxiety engages a broad range of cortico-subcortical neural circuitry. Core to this is a ‘defensive response network’, which includes an amygdala-prefrontal circuit that is hypothesized to drive attentional amplification of threat-relevant stimuli in the environment. To help prepare the body for defensive behaviors in response to threat, anxiety also engages peripheral physiological systems. However, our theoretical frameworks of the neurobiology of anxiety are built mostly on the foundations of tightly-controlled experiments, such as task-based fMRI. Whether these findings generalize to more naturalistic settings is unknown. To address this shortcoming, movie-watching paradigms offer an effective tool at the intersection of tightly controlled and entirely naturalistic experiments. In particular, suspenseful movies present a novel and effective means to induce and study anxiety. In this thesis, I demonstrate the potential of movie-watching paradigms in the study of how trait and state anxiety impact the ‘defensive response network’ in the brain, as well as peripheral physiology. The key findings reveal that trait anxiety is associated with differing amygdala-prefrontal responses to suspenseful movies; specific trait anxiety symptoms are linked to altered states of anxiety during suspenseful movies; and states of anxiety during movies impact brain-body communication. Notably, my results frequently diverged from those of conventional task-based experiments. Taken together, the insights gathered from this thesis underscore the utility of movie-watching paradigms for a more nuanced understanding of how anxiety impacts the brain and peripheral physiology. These outcomes provide compelling evidence that further integration of naturalistic methods will be beneficial in the study of the neurobiology of anxiety.

    Recognizing Induced Emotions of Movie Audiences From Multimodal Information

    Recognizing emotional reactions of movie audiences to affective movie content is a challenging task in affective computing. Previous research on induced emotion recognition has mainly focused on using audio-visual movie content. Nevertheless, the relationship between perceptions of the affective movie content (perceived emotions) and the emotions evoked in the audience (induced emotions) has remained unexplored. In this work, we studied the relationship between the perceived and induced emotions of movie audiences. Moreover, we investigated multimodal modelling approaches to predict movie induced emotions from movie content based features, as well as from the physiological and behavioral reactions of movie audiences. To analyse induced and perceived emotions, we first extended an existing database for movie affect analysis by annotating perceived emotions in a crowd-sourced manner. We found that perceived and induced emotions are not always consistent with each other. In addition, we showed that perceived emotions, movie dialogues, and aesthetic highlights are discriminative for movie induced emotion recognition, alongside spectators’ physiological and behavioral reactions. Our experiments also revealed that induced emotion recognition can benefit from including temporal information and performing multimodal fusion. Finally, our work investigated in depth the gap between affective content analysis and induced emotion recognition by gaining insight into the relationships between aesthetic highlights, induced emotions, and perceived emotions.
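
    As one way to picture how temporal information can help, the sketch below stacks each segment’s feature vector with those of its neighbours before classification; the context width and feature dimensions are illustrative assumptions, not the paper’s model.

        import numpy as np

        def add_temporal_context(X, k=2):
            """Concatenate each row of X with its k preceding and k following
            rows, padding at the boundaries by repeating the edge rows."""
            padded = np.concatenate([np.repeat(X[:1], k, axis=0),
                                     X,
                                     np.repeat(X[-1:], k, axis=0)], axis=0)
            return np.concatenate([padded[i:i + len(X)]
                                   for i in range(2 * k + 1)], axis=1)

        # Hypothetical per-segment features: 100 segments x 8 dimensions.
        X = np.random.default_rng(3).normal(size=(100, 8))
        X_ctx = add_temporal_context(X, k=2)  # now 100 x 40, with context

    Feeding such context-augmented features to any static classifier is a simple baseline for temporal modelling; sequence models would be a more expressive alternative.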

    Automatic Emotion Recognition: Quantifying Dynamics and Structure in Human Behavior.

    Emotion is a central part of human interaction, one that has a huge influence on its overall tone and outcome. Today's human-centered interactive technology can greatly benefit from automatic emotion recognition, as the extracted affective information can be used to measure, transmit, and respond to user needs. However, developing such systems is challenging due to the complexity of emotional expressions and their dynamics, in terms of the inherent multimodality between audio and visual expressions as well as the mixed factors of modulation that arise when a person speaks. To overcome these challenges, this thesis presents data-driven approaches that can quantify the underlying dynamics in audio-visual affective behavior. The first set of studies lays the foundation and central motivation of this thesis. We discover that it is crucial to model complex non-linear interactions between audio and visual emotion expressions, and that dynamic emotion patterns can be used in emotion recognition. Next, the understanding of the complex characteristics of emotion from the first set of studies leads us to examine multiple sources of modulation in audio-visual affective behavior. Specifically, we focus on how speech modulates facial displays of emotion. We develop a framework that uses speech signals, which alter the temporal dynamics of individual facial regions, to temporally segment and classify facial displays of emotion. Finally, we present methods to discover regions of emotionally salient events in given audio-visual data. We demonstrate that different modalities, such as the upper face, lower face, and speech, express emotion with different timings and time scales, varying for each emotion type. We further extend this idea to another aspect of human behavior: human action events in videos. We show how transition patterns between events can be used for automatically segmenting and classifying action events. Our experimental results on audio-visual datasets show that the proposed systems not only improve performance, but also provide descriptions of how affective behaviors change over time. We conclude this dissertation with future directions that will innovate three main research topics: machine adaptation for personalized technology, human-human interaction assistant systems, and human-centered multimedia content analysis.

    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133459/1/yelinkim_1.pd