    Time-delay neural network for continuous emotional dimension prediction from facial expression sequences

    "(c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."Automatic continuous affective state prediction from naturalistic facial expression is a very challenging research topic but very important in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to classify each individual video frame, while in the second stage, a Time-Delay Neural Network (TDNN) is proposed to model the temporal relationships between consecutive predictions. The two-stage approach separates the emotional state dynamics modeling from an individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames and allows the network to more easily exploit the slow changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN to take into account previously classified frames significantly improves the overall performance of continuous emotional state estimation in naturalistic facial expressions. The proposed approach has won the affect recognition sub-challenge of the third international Audio/Visual Emotion Recognition Challenge (AVEC2013)1

    Human Amygdala in Sensory and Attentional Unawareness: Neural Pathways and Behavioural Outcomes

    One of the neural structures most often implicated in the processing of emotional signals in the absence of visual awareness is the amygdala. In this chapter, we review current evidence from human neuroscience, in healthy participants and brain-damaged patients, on the role of the amygdala during non-conscious (visual) perception of emotional stimuli. There is as yet no consensus on the limits and conditions that affect the extent of the amygdala's response without focused attention or awareness. We propose to distinguish between attentional unawareness, a condition wherein the stimulus is potentially accessible to visual awareness but fails to enter it because attention is diverted, and sensory unawareness, in which the stimulus fails to enter awareness because its normal processing in the visual cortex is suppressed. Within this conceptual framework, some apparently contradictory findings gain new coherence and converge on the role of the amygdala in supporting different types of non-conscious emotion processing. Amygdala responses in the absence of awareness are linked to different functional mechanisms and are driven by more complex neural networks than commonly assumed. Acknowledging this complexity can help foster new studies on amygdala functions without awareness and their impact on human behaviour.

    Amygdala response to emotional stimuli without awareness: Facts and interpretations

    Over the past two decades, evidence has accumulated that the human amygdala exerts some of its functions even when the observer is not aware of the content, or even the presence, of the triggering emotional stimulus. Nevertheless, there is as yet no consensus on the limits and conditions that affect the extent of the amygdala's response without focused attention or awareness. Here we review past and recent studies on this subject, examining the neuroimaging literature on healthy participants as well as brain-damaged patients, and we comment on their strengths and limits. We propose a theoretical distinction between processes involved in attentional unawareness, wherein the stimulus is potentially accessible to visual awareness but fails to enter it because attention is diverted, and in sensory unawareness, wherein the stimulus fails to enter awareness because its normal processing in the visual cortex is suppressed. We argue that this distinction, along with data sampling amygdala responses at high temporal resolution, helps to appreciate the multiplicity of functional and anatomical mechanisms centered on the amygdala and supporting its role in non-conscious emotion processing. Separate but interacting networks relay visual information to the amygdala, exploiting different computational properties of subcortical and cortical routes and thereby supporting amygdala functions at different stages of emotion processing. This view reconciles some apparent contradictions in the literature, as well as seemingly contrasting proposals such as the dual-stage and dual-route models. We conclude that the evidence for amygdala responses without awareness is solid, although these responses originate from different functional mechanisms and are driven by more complex neural networks than commonly assumed. Acknowledging the complexity of such mechanisms can foster new insights into the varieties of amygdala functions without awareness and their impact on human behavior.

    Social and Affective Neuroscience of Everyday Human Interaction

    This Open Access book presents the current state-of-the-art knowledge on social and affective neuroscience based on empirical findings. The volume is divided into several sections, first guiding the reader through important theoretical topics within affective neuroscience, social neuroscience and moral emotions, and clinical neuroscience. Each chapter addresses everyday social interactions from a different angle, taking the reader on a diverse journey. The last section of the book is methodological in nature: basic information is presented for readers new to common neuroscience methodologies, alongside advanced input that deepens the understanding and usability of these methods in social and affective neuroscience for more experienced readers.

    Improving the accuracy of automatic facial expression recognition in speaking subjects with deep learning

    When automatic facial expression recognition is applied to video sequences of speaking subjects, the recognition accuracy has been noted to be lower than with video sequences of still subjects. This effect, known as the speaking effect, arises during spontaneous conversations: alongside affective expressions, the speech articulation process influences facial configurations. In this work we ask whether cues related to the articulation process, in addition to facial features, would increase emotion recognition accuracy when provided as input to a deep neural network model. We develop two neural networks that classify facial expressions in speaking subjects from the RAVDESS dataset: a spatio-temporal CNN and a GRU-cell RNN. They are first trained on facial features only, and afterwards on both facial features and articulation-related cues extracted from a model trained for lip reading, while also varying the number of consecutive frames provided as input. We show that with DNNs the addition of articulation-related features increases classification accuracy by up to 12%, the increase being greater when more consecutive frames are provided as input to the model.
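
    As a rough illustration of the feature-fusion setup described above (a hedged sketch: feature dimensions, the lip-reading embedding size and layer widths are assumptions, not the paper's values), a GRU-based classifier can consume facial features concatenated frame by frame with articulation embeddings:

        import torch
        import torch.nn as nn

        class FusionGRU(nn.Module):
            def __init__(self, face_dim=136, artic_dim=256, hidden=128, n_classes=8):
                super().__init__()
                self.gru = nn.GRU(face_dim + artic_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)  # 8 RAVDESS emotion classes

            def forward(self, face_feats, artic_feats):
                # Fuse facial features with articulation cues frame by frame.
                x = torch.cat([face_feats, artic_feats], dim=-1)
                _, h = self.gru(x)                 # final hidden state: (1, B, hidden)
                return self.head(h.squeeze(0))     # (B, n_classes) logits

        model = FusionGRU()
        face = torch.randn(2, 30, 136)    # 30 consecutive frames of facial features
        artic = torch.randn(2, 30, 256)   # per-frame lip-reading model embeddings
        print(model(face, artic).shape)   # torch.Size([2, 8])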

    Synch-Graph: multisensory emotion recognition through neural synchrony via graph convolutional networks

    Human emotions are essentially multisensory: emotional states are conveyed through multiple modalities such as facial expression, body language, and non-verbal and verbal signals. Multimodal or multisensory learning is therefore crucial for recognising emotions and interpreting social signals. Existing multisensory emotion recognition approaches focus on extracting features from each modality while ignoring the importance of constant interaction and co-learning between modalities. In this paper, we present a novel bio-inspired approach based on neural synchrony in audio-visual multisensory integration in the brain, named Synch-Graph. We model multisensory interaction using spiking neural networks (SNN) and explore the use of Graph Convolutional Networks (GCN) to represent and learn neural synchrony patterns. We hypothesise that modelling interactions between modalities will improve the accuracy of emotion recognition. We have evaluated Synch-Graph on two state-of-the-art datasets and achieved overall accuracies of 98.3% and 96.82%, which are significantly higher than existing techniques.
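
    To make the GCN side of this concrete, here is a minimal, self-contained sketch (assumptions throughout: node features, the synchrony measure and all sizes are illustrative, and the paper's SNN-based synchrony extraction is not reproduced): nodes stand for audio/visual channels, pairwise synchrony scores define edge weights, and one graph-convolution layer propagates features along those edges.

        import torch
        import torch.nn as nn

        def normalized_adjacency(sync):
            # sync: (N, N) pairwise synchrony scores between channels.
            a = sync + torch.eye(sync.size(0))      # add self-loops
            d = a.sum(1).clamp(min=1e-6).pow(-0.5)
            return d[:, None] * a * d[None, :]      # D^-1/2 (A+I) D^-1/2

        class GCNLayer(nn.Module):
            def __init__(self, in_dim, out_dim):
                super().__init__()
                self.lin = nn.Linear(in_dim, out_dim)

            def forward(self, x, a_hat):
                # Propagate node features along synchrony-weighted edges.
                return torch.relu(a_hat @ self.lin(x))

        # Toy graph: 16 channels, 32-dim activity features, 7 emotion classes.
        sync = torch.rand(16, 16)
        sync = (sync + sync.T) / 2                  # make synchrony symmetric
        x = torch.randn(16, 32)
        gcn, readout = GCNLayer(32, 64), nn.Linear(64, 7)
        logits = readout(gcn(x, normalized_adjacency(sync)).mean(0))
        print(logits.shape)                         # torch.Size([7])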

    Autism, Movement, Time and Thought: E-Motion Mis-Sight and Other Temporo-Spatial Processing Disorders in Autism

    In this chapter we propose a new approach to autism, termed E-Motion mis-sight and other temporo-spatial processing disorders. According to this view, subjects with autism spectrum disorders (ASD) present, from the beginning of their lives, varying degrees of disability, delay and deviance in perceiving and integrating environmental sensory events online, and in producing real-time sensory-motor coupling and adequate verbal and non-verbal outputs. In other words, the environmental world moves and changes too fast for persons with ASD. In the first section, we present biographical self-reports, clinical considerations and neuropsychological arguments that open windows onto the peculiar visual and visuo-motor world of autistic persons. In the second section, we review the available experimental results supporting physical and biological motion integration disorders in the autistic population, and present a first synthesis of our approach. In the third section, we review results demonstrating other temporo-spatial processing disorders in ASD and suggest possible underlying neurobiological mechanisms for our E-Motion mis-sight and temporo-spatial processing disorders hypothesis of autism, based on putative multi-system temporal desynchronization and functional disconnectivity. We think that this approach, which is compatible with the major contemporary theories of ASD, may have new implications for the comprehension, and new applications for the rehabilitation, of these disorders. In the last section, we propose psychological and philosophical perspectives of our approach concerning the integration of movement and time in thought, and mind-brain relationships in autism in particular and in human beings in general.