How Do Static and Dynamic Emotional Faces Prime Incremental Semantic Interpretation?: Comparing Older and Younger Adults
Münster K, Carminati MN, Knoeferle P. How Do Static and Dynamic Emotional Faces Prime Incremental Semantic Interpretation?: Comparing Older and Younger Adults. In: Proceedings of the 36th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2014: 2675-2680
Visual gender cues elicit agent expectations: different mismatches in situated language comprehension
Abstract: Previous research has shown that visual cues (depicted events) can have a strong effect on language comprehension and guide attention more than stereotypical thematic role knowledge (the 'depicted / recent event preference'). We examined to what extent this finding generalizes to another visual cue (the gender of an agent's hands) and to what extent it is modulated by picture-sentence incongruence. Participants inspected videos of hands performing an action and then listened to non-canonical German OVS sentences while we monitored their eye gaze to the faces of two potential subjects/agents (one male and one female). In Experiment 1, the sentential verb phrase matched (vs. mismatched) the video action, and in Experiment 2, the sentential subject matched (vs. mismatched) the gender of the agent's hands in the video. Additionally, both experiments manipulated gender stereotypicality congruence (i.e. whether the gender stereotypicality of the described actions matched or mismatched the gender of the hands in the video). Participants overall preferred to inspect the target agent face (i.e. the face whose gender matched that of the hands seen in the previous video), suggesting that the depicted event preference observed in previous studies generalizes to visual gender cues. Stereotypicality match did not seem to modulate this gaze behavior. However, when there was a mismatch between the sentence and the previous video, participants tended to look away from the target face (post-verbally for action-verb mismatches and at the final subject region for hand gender-subject gender mismatches), suggesting that outright picture-sentence incongruence can modulate the preference to inspect the face whose gender matched that of the hands seen in the previous video.
Visual gender cues elicit agent expectations: different mismatches in situated language comprehension
Rodriguez A, Burigo M, Knoeferle P. Visual gender cues elicit agent expectations: different mismatches in situated language comprehension. In: Airenti G, Bara BG, Sandini G, eds. Proceedings of the EuroAsianPacific Joint Conference on Cognitive Science (EAPCogSci 2015). CEUR Workshop Proceedings. Vol 1419. Aachen; 2015: 234-239
Emotional processing of ironic vs. literal criticism in autistic and non-autistic adults: Evidence from eye-tracking
Typically developing (TD) adults are able to keep track of story characters' emotional states online while reading. Filik et al. (2017) showed that initially, participants expected the victim to be more hurt by ironic comments than by literal ones, but later considered them less hurtful; ironic comments were regarded as more amusing. We examined these processes in autistic adults, since previous research has demonstrated socio-emotional difficulties among autistic people, which may lead to problems processing irony and its related emotional processes despite an intact ability to integrate language in context. We recorded eye movements from autistic and non-autistic adults while they read narratives in which a character (the victim) was criticised in either an ironic or a literal manner by another character (the protagonist). A target sentence then either described the victim as feeling hurt/amused by the comment, or the protagonist as having intended to hurt/amuse the victim by making the comment. Results from the non-autistic adults broadly replicated the key findings from Filik et al. (2017), supporting the two-stage account. Importantly, the autistic adults did not show comparable two-stage processing of ironic language; they did not differentiate between the emotional responses for victims or protagonists following ironic vs. literal criticism. These findings suggest that autistic people experience a specific difficulty taking into account other people's communicative intentions (i.e. inferring their mental state) to appropriately anticipate emotional responses to an ironic comment. We discuss how these difficulties might link to atypical socio-emotional processing in autism, and to the ability to maintain successful real-life social interactions.
The Effects of Social Context and Perspective on Language Processing: Evidence from Autism Spectrum Disorder
This thesis aimed to provide new insights into the role of perspective and non-linguistic context in language processing among autistic and typically developing (TD) adults. The mental simulation account and the one-step model state that language is mentally simulated and interpreted in context, suggesting that these processes are activated online while linguistic input is processed. Little is known about whether the same processes are activated in autism. In seven experiments (four fully pre-registered), I used offline and online measures (e.g. EEG, eye-tracking) to investigate how social factors, such as perspective, the speaker's voice, the emotional states of the characters, and the topic of conversation, influence language comprehension in both lab and real-life settings, in autistic and TD adults. Based on the weak central coherence (WCC) and complex information processing disorder (CIPD) theories, it was expected that autistic adults would struggle to integrate the social context with language, or at least show some subtle delays in the time course of these anticipation/integration processes. First, I failed to replicate previous findings of enhanced processing for personalized language, suggesting that this process depends on individual preferences in perspective-taking and on task demands. Furthermore, I found that, contrary to the WCC, autistic individuals had an intact ability to integrate social context online while extracting meaning from language. There were subtle differences in the time course and strength of these processes between autistic and TD adults under high cognitive load. The findings are in line with the CIPD hypothesis, showing that online language processes are disrupted as task demands increase, which consequently affects the quality of social interactions. Future research should further investigate how these subtle differences impact everyday social communication abilities in autism.
Effects of emotional facial expressions and depicted actions on situated language processing across the lifespan
Münster K. Effects of emotional facial expressions and depicted actions on situated language processing across the lifespan. Bielefeld: Universität Bielefeld; 2016.

Language processing does not happen in isolation, but is often embedded in a rich non-linguistic visual and social context. Yet, although many psycholinguistic studies have investigated the close interplay between language and the visual context, the role of social aspects and listener characteristics in real-time language processing remains largely elusive. The present thesis aims at closing this gap.
Taking into account the extant literature on the incrementality of language processing, the close interplay between visual and linguistic context, and the relevance and effects of social aspects for language comprehension, we argue for the necessity of extending investigations of the influence of social information and listener characteristics on real-time language processing. Crucially, we moreover argue for the inclusion of social information and listener characteristics in real-time language processing accounts. To date, extant accounts of language comprehension remain elusive about the influence of social cues and listener characteristics on real-time language processing. Yet a more comprehensive approach that takes these aspects into account is highly desirable, given that psycholinguistics aims at describing how language processing happens in real time in the mind of the comprehender.
In six eye-tracking studies, this thesis hence investigated the effect of two distinct visual contextual cues on real-time language processing and thematic role assignment in emotionally valenced non-canonical German sentences. We used emotional facial expressions of a speaker as a visual social cue, and depicted actions as a visual contextual cue that is directly mediated by the linguistic input. Crucially, we also investigated the effect of listener age as one type of listener characteristic by testing children as well as older and younger adults.
In our studies, participants were primed with a positive emotional facial expression (vs. a non-emotional / negative expression). Following this, they inspected a target scene depicting two potential agents either performing or not performing an action towards a patient. This scene was accompanied by a related, positively valenced German Object-Verb-Adverb-Subject sentence (e.g., The ladybug (accusative object, patient) tickles happily the cat (nominative subject, agent)). Anticipatory eye movements to the agent of the action, i.e., the sentential subject in sentence-final position (vs. a distractor agent), were measured in order to investigate whether, to what extent, and how rapidly positive emotional facial expressions and depicted actions can facilitate thematic role assignment in children and in older and younger adults.
Moreover, given the complex nature of emotional facial expressions, we also investigated whether the naturalness of the emotional face influences the integration of this social cue into real-time sentence processing. We hence used a schematic depiction of an emotional face, i.e., a happy smiley, in half of the studies and a natural human emotional face in the remaining studies.
Our results showed that all age groups could reliably use the depicted actions as a cue to facilitate sentence processing and to assign thematic roles even before the target agent had been mentioned. Crucially, only our adult listener groups could also use the emotional facial expression for real-time sentence processing. When the natural human facial expression instead of the schematic smiley was used to portray the positive emotion, the use of the social cue was even stronger. Nevertheless, our results also suggested that the depicted action is a stronger cue than the social cue, i.e., the emotional facial expression, for both adult age groups. Children, on the other hand, do not yet seem able to use emotional facial expressions as visual social cues for language comprehension. Interestingly, we also found time-course differences in the integration of the two cues into real-time sentence comprehension: compared to younger adults, both older adults and children were delayed by one word region in their visual cue effects.
Our online data are further supported by accuracy results. All age groups answered comprehension questions about 'who is doing what to whom' more accurately when an action was depicted (vs. not depicted). However, only younger adults made use of the emotional cue when answering the comprehension questions, although to a lesser extent than they used depicted actions.
In conclusion, our findings suggest for the first time that different non-linguistic cues, i.e., more direct referential cues such as depicted actions and more indirect social cues such as emotional facial expressions, are integrated into situated language processing to different degrees. Crucially, the time course and strength of the integration of these cues vary as a function of age.
Hence our findings support our argument for the inclusion of social cues and listener characteristics in real-time language processing accounts. Based on our own results, we have therefore outlined at the end of this thesis how an account of real-time language comprehension that already takes the influence of visual context such as depicted actions into account (but fails to include social aspects and listener characteristics) can be enriched to also include the effects of emotional facial expressions and of listener characteristics such as age.