    Facial Expressions of Sentence Comprehension

    Understanding facial expressions allows access to a person's intentional and affective states. Drawing on findings from psychology and neuroscience that link physical behaviors of the face to emotional states, this paper studies sentence comprehension as shown by facial expressions. In our experiments, participants took part in a roughly 30-minute computer-mediated task in which they answered either "true" or "false" to knowledge-based questions and were immediately given feedback of "correct" or "incorrect". Their faces, recorded during the task using the Kinect v2 device, were later used to identify the level of comprehension shown by their expressions. To achieve this, SVM and Random Forest classifiers were employed on facial appearance information extracted with a spatiotemporal local descriptor named LPQ-TOP. Results on online sentence comprehension show that facial dynamics are promising for understanding cognitive states of the mind.
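
    The classification stage described above can be sketched briefly. The following is a minimal illustration, assuming the LPQ-TOP descriptors have already been extracted from the Kinect recordings into fixed-length feature vectors (one per clip); the array shapes, labels, and hyperparameters below are placeholders, not the authors' actual configuration.

```python
# Minimal sketch of the classification stage: LPQ-TOP feature vectors
# fed to SVM and Random Forest classifiers. All data and parameters
# here are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 768))        # placeholder LPQ-TOP histograms, one row per clip
y = rng.integers(0, 2, size=200)  # 0 = not comprehended, 1 = comprehended (placeholder)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
forest = RandomForestClassifier(n_estimators=300, random_state=0)

for name, clf in [("SVM", svm), ("Random Forest", forest)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```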

    Discourse comprehension and simulation of positive emotions

    Recent research has suggested that emotional sentences are understood by constructing an emotion simulation of the events being described. The present study investigates whether emotion simulation is also involved in online and offline comprehension of larger language segments such as discourse. Participants read a target text describing positive events while their facial postures were manipulated to be either congruent (matching condition) or incongruent (mismatching condition) with the emotional valence of the text. In addition, a control condition was included in which participants read the text naturally (without a manipulation of facial posture). The influence of emotion simulation on discourse understanding was assessed by online (self-paced reading times) and offline (verbatim and inference questions) measures of comprehension. The major result was that participants read the target text describing positive emotional events faster when their bodily systems were prepared for the processing of positive emotions (matching condition) than when they were unprepared (control condition) or prevented from positive emotional processing (mismatching condition). Simulation of positive emotions did not have a significant impact on offline explicit and implicit discourse comprehension. This pattern of results suggests that emotion simulation affects online comprehension but may not have any effect on offline discourse processing.

    Effects of emotional facial expressions and depicted actions on situated language processing across the lifespan

    Münster K. Effects of emotional facial expressions and depicted actions on situated language processing across the lifespan. Bielefeld: Universität Bielefeld; 2016. Language processing does not happen in isolation but is often embedded in a rich non-linguistic visual and social context. Yet, although many psycholinguistic studies have investigated the close interplay between language and the visual context, the role of social aspects and listener characteristics in real-time language processing remains largely elusive. The present thesis aims at closing this gap. Taking into account the extant literature on the incrementality of language processing, the close interplay between visual and linguistic context, and the relevance and effect of social aspects for language comprehension, we argue for the necessity to extend investigations of the influence of social information and listener characteristics on real-time language processing. Crucially, we moreover argue for the inclusion of social information and listener characteristics in real-time language processing accounts. To date, extant accounts of language comprehension remain elusive about the influence of social cues and listener characteristics on real-time language processing. Yet a more comprehensive approach that takes these aspects into account is highly desirable, given that psycholinguistics aims at describing how language processing happens in real time in the mind of the comprehender. In six eye-tracking studies, this thesis hence investigated the effect of two distinct visual contextual cues on real-time language processing and thematic role assignment in emotionally valenced non-canonical German sentences. We used emotional facial expressions of a speaker as a visual social cue and depicted actions as a visual contextual cue that is directly mediated by the linguistic input. Crucially, we also investigated the effect of the age of the listener as one type of listener characteristic by testing children and older and younger adults. In our studies, participants were primed with a positive emotional facial expression (vs. a non-emotional / negative expression). Following this, they inspected a target scene depicting two potential agents either performing or not performing an action towards a patient. This scene was accompanied by a related positively valenced German Object-Verb-Adverb-Subject sentence (e.g., "The ladybug (accusative object, patient) tickles happily the cat (nominative subject, agent)"). Anticipatory eye movements to the agent of the action, i.e., the sentential subject in sentence-end position (vs. a distractor agent), were measured in order to investigate whether, to what extent, and how rapidly positive emotional facial expressions and depicted actions can facilitate thematic role assignment in children and older and younger adults. Moreover, given the complex nature of emotional facial expressions, we also investigated whether the naturalness of the emotional face has an influence on the integration of this social cue into real-time sentence processing. We hence used a schematic depiction of an emotional face, i.e., a happy smiley, in half of the studies and a natural human emotional face in the remaining studies. Our results showed that all age groups could reliably use the depicted actions as a cue to facilitate sentence processing and to assign thematic roles even before the target agent had been mentioned.
    Crucially, only our adult listener groups could also use the emotional facial expression for real-time sentence processing. When the natural human facial expression instead of the schematic smiley was used to portray the positive emotion, the use of the social cue was even stronger. Nevertheless, our results also suggested that the depicted action is a stronger cue than the social cue, i.e., the emotional facial expression, for both adult age groups. Children, on the other hand, do not yet seem able to also use emotional facial expressions as visual social cues for language comprehension. Interestingly, we also found time-course differences regarding the integration of the two cues into real-time sentence comprehension. Compared to younger adults, both older adults and children were delayed by one word region in their visual cue effects. Our online data are further supported by accuracy results. All age groups answered comprehension questions about 'who is doing what to whom' more accurately when an action was depicted (vs. not depicted). However, only younger adults made use of the emotional cue in answering the comprehension questions, although to a lesser extent than they used depicted actions. In conclusion, our findings suggest for the first time that different non-linguistic cues, i.e., more direct referential cues such as depicted actions and more indirect social cues such as emotional facial expressions, are integrated into situated language processing to different degrees. Crucially, the time course and strength of the integration of these cues vary as a function of age. Hence, our findings support our argument for the inclusion of social cues and listener characteristics in real-time language processing accounts. Based on our own results, we therefore outline at the end of this thesis how an account of real-time language comprehension that already takes the influence of visual context such as depicted actions into account (but fails to include social aspects and listener characteristics) can be enriched to also include the effects of emotional facial expressions and listener characteristics such as age.

    Auditory smiles trigger unconscious facial imitation

    Smiles, produced by the bilateral contraction of the zygomatic major muscles, are one of the most powerful expressions of positive affect and affiliation and also one of the earliest to develop [1]. The perception-action loop responsible for the fast and spontaneous imitation of a smile is considered a core component of social cognition [2]. In humans, social interaction is overwhelmingly vocal, and the visual cues of a smiling face co-occur with audible articulatory changes in the speaking voice [3]. Yet remarkably little is known about how such 'auditory smiles' are processed and reacted to. We have developed a voice transformation technique that selectively simulates the spectral signature of phonation with stretched lips, and we report here how we used this technique to study facial reactions to smiled and non-smiled spoken sentences, finding that listeners' zygomatic muscles tracked auditory smile gestures even when listeners did not consciously detect them.
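
    The paper's transformation is not described here in enough detail to reproduce, but the underlying acoustic intuition is that stretching the lips shortens the vocal tract and shifts spectral energy (notably the formants) slightly upward. The toy sketch below illustrates that intuition with a crude spectral warp; it is emphatically not the authors' technique, and the file paths and warp factor are invented for illustration.

```python
# Toy illustration of a "smile" voice transform: shift spectral energy
# slightly upward in frequency, a crude stand-in for the formant shift
# caused by lip stretching. NOT the authors' actual transformation.
import numpy as np
import librosa
import soundfile as sf

def naive_smile_transform(path_in, path_out, warp=1.06):
    y, sr = librosa.load(path_in, sr=None)   # hypothetical input file
    S = librosa.stft(y)
    mag, phase = np.abs(S), np.angle(S)
    n_bins = mag.shape[0]
    # New bin k takes its magnitude from lower bin k / warp, so energy
    # migrates upward in frequency by the warp factor.
    src = np.arange(n_bins) / warp
    warped = np.stack([np.interp(src, np.arange(n_bins), frame)
                       for frame in mag.T]).T
    y_out = librosa.istft(warped * np.exp(1j * phase))
    sf.write(path_out, y_out, sr)

naive_smile_transform("sentence.wav", "sentence_smiled.wav")
```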

    Cognitive biases in body dysmorphic disorder


    Cognitive Ability and Cognitive Style in the Comprehension and Expression of Emotion

    This research investigated the ability to comprehend and express affect by non-verbal means. Darwin (1872) suggested that the non-verbal communication of emotion had a biological basis. Hughlings-Jackson (1879) emphasized the relationship between cerebral functioning and the ability to communicate affect. Neuropsychological research suggests that a major neuronal network located in the right hemisphere supports non-verbal communication. The evidence indicates the localization of expressive skills in the anterior cortex and comprehension skills in the posterior cortex. In the present study, 31 behavioral tasks comprised a scale to assess the ability to communicate affect by non-verbal means. The tasks assessed the comprehension and/or expression of affect in facial expressions, drawings of faces, intonations of neutral sentences, and non-verbal vocal sounds. Six basic emotions were used: happiness, sadness, fear, surprise, anger, and disgust. The scale was administered to 20 male and 25 female college students. Measures of internal consistency and reliability were calculated. Interrelationships between the tasks were analyzed, as were relationships between the scale and demographic, personality, and intellectual variables. The scale was analyzed via factor analysis, and factor scores were calculated and correlated with demographic, personality, and intellectual variables. The scale was found to be internally consistent and the scoring reliable. Performances on the 31 tasks were highly interrelated, which suggested a general ability to communicate emotion by non-verbal means. This general non-verbal ability appeared similar in many ways to the ability to use language to communicate. Comprehension skills appeared more fundamental than expression. The more highly developed skills in non-verbal communication were found for subjects at high levels of intellectual functioning. Communication of affect by non-verbal means appeared to be a skill that can be studied with cognitive research methods.
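
    As an aside, the internal-consistency analysis mentioned above is commonly computed as Cronbach's alpha over the participants-by-tasks score matrix. The sketch below shows the standard formula on placeholder data (45 students by 31 tasks, matching the study's dimensions); it is illustrative only, not a reanalysis of the study's data.

```python
# Cronbach's alpha: a standard internal-consistency measure for a
# multi-task scale. Data below are random placeholders.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = participants, columns = tasks."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-task variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
data = rng.normal(size=(45, 31))   # 45 students x 31 tasks (placeholder)
print(f"alpha = {cronbach_alpha(data):.2f}")
```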