
    Attentional Bias to Facial Expressions of Different Emotions – A Cross-Cultural Comparison of ≠Akhoe Hai||om and German Children and Adolescents

    The attentional bias to negative information enables humans to quickly identify and respond appropriately to potentially threatening situations. Because of its adaptive function, this enhanced sensitivity to negative information is expected to be a universal trait, shared by all humans regardless of their cultural background. However, existing research focuses almost exclusively on humans from Western industrialized societies, who are not representative of the human species. We therefore compared humans from two distinct cultural contexts: adolescents and children from Germany, a Western industrialized society, and from the ≠Akhoe Hai||om, semi-nomadic hunter-gatherers in Namibia. We predicted that both groups would show an attentional bias toward negative facial expressions as compared to neutral or positive faces. We used eye-tracking to measure fixation durations on facial expressions depicting different emotions, including negative (fear, anger), positive (happy), and neutral faces. Both the Germans and the ≠Akhoe Hai||om gazed longer at fearful faces but more briefly at angry faces, challenging the notion of a general bias toward negative emotions. For happy faces, fixation durations varied between the two groups, suggesting more flexibility in the response to positive emotions. Our findings emphasize the need to place research on emotion perception in an evolutionary, cross-cultural comparative framework that considers the adaptive significance of specific emotions, rather than merely differentiating between positive and negative information, and that enables systematic comparisons across participants from diverse cultural backgrounds.

    From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information have been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic, factors “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints, and interference with EEG assessment limit its applicability, particularly in infants and when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development.

    Context updating during sentence comprehension: The effect of aboutness topic

    To communicate efficiently, speakers typically link their utterances to the discourse environment and adapt their utterances to the listener's discourse representation. Information structure describes how linguistic information is packaged within a discourse to optimize information transfer. The present study investigates the nature and time course of the effect of context (i.e., aboutness topic vs. neutral context) on the comprehension of German declarative sentences with either subject-before-object (SO) or object-before-subject (OS) word order, using offline comprehensibility judgments and online event-related potentials (ERPs). Comprehensibility judgments revealed that the topic context selectively facilitated comprehension of stories containing OS (i.e., non-canonical) sentences. In the ERPs, the topic context effect was reflected in a less pronounced late positivity at the sentence-initial object. In line with the Syntax-Discourse Model, we argue that these context-induced effects are attributable to reduced processing costs for updating the current discourse model. The results support recent neurocognitive models of discourse processing.

    Visual attention-capture cue in depicted scenes fails to modulate online sentence processing

    Everyday communication is enriched by the visual environment that listeners concomitantly link to the linguistic input. It is still unclear if and when visual cues are integrated into the mental meaning representation of the communicative setting. In our earlier findings, the integration of a linguistic cue (i.e., topic-hood of a discourse referent) reduced discourse updating costs of the mental representation, as indicated by reduced sentence-initial processing costs for the non-canonical word order in German. In the present study, we tried to replicate our earlier findings by replacing the linguistic cue with a visual attention-capture cue presented below the threshold of perception in order to direct participants' attention to a depicted referent. While this type of cue has previously been shown to modulate word order preferences in sentence production, we found no effects on sentence comprehension. We discuss possible theory-based reasons for the null effect of the implicit visual cue as well as methodological caveats and issues that should be considered in future research on multimodal meaning integration.

    Common Ground Information Affects Reference Resolution

    One of the most important social cognitive skills in humans is the ability to “put oneself in someone else's shoes,” that is, to take another person's perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant by the speaker (speaker's meaning). To successfully decode the speaker's meaning, the listener has to take into account which information he/she and the speaker share in their common ground (CG). Here, we further investigated competing accounts of when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts predict that CG information is considered immediately and would hence expect no costs of CG integration. Late integration accounts predict a rather late and effortful integration of CG information during the parsing process that might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%). Yet, slower RTs and enhanced late positivities in the ERPs showed that CG integration had its costs. Moreover, eye-tracking data indicated an early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in those trials in which CG information had to be considered. Our data therefore support accounts that posit an early anticipation of referents in CG but a rather late and effortful integration when conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing, and we discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.

    Show your hands — Are you really clever? Reasoning, gesture production, and intelligence

    This publication is freely accessible with permission of the rights owner due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation), respectively. This study investigates the relationship between reasoning and gesture production in individuals differing in fluid and crystallized intelligence. It combines measures of speed and accuracy of processing geometric analogies with analyses of the spontaneous hand gestures that accompanied young adults' subsequent explanations of how they solved the geometric analogy task. Individuals with superior fluid intelligence processed the analogies more efficiently than participants with average fluid intelligence. Additionally, they accompanied their subsequent explanations with more gestures expressing movement in a non-egocentric perspective. Furthermore, gesturing (but not speaking) about the most relevant aspect of the task was related to higher fluid intelligence. Within the gestures-as-simulated-action framework, the results suggest that individuals with superior fluid intelligence engage more in mental simulation during visual imagery than those with average fluid intelligence. The findings stress the relationship between gesture production and general cognition, such as fluid intelligence, rather than its relationship to language. The role of gesture production in thinking and learning processes is discussed.

    Altered function of ventral striatum during reward-based decision making in old age

    Normal aging is associated with a decline in different cognitive domains and local structural atrophy, as well as with decreases in dopamine concentration and receptor density. To date, it is largely unknown how these reductions in dopaminergic neurotransmission affect the human brain regions responsible for reward-based decision making in older adults. Using a learning criterion in a probabilistic object reversal task, we found a learning stage by age interaction in the dorsolateral prefrontal cortex (dlPFC) during decision making. While young adults recruited the dlPFC in an early stage of learning reward associations, older adults recruited the dlPFC when reward associations had already been learned. Furthermore, we found a reduced change in ventral striatal BOLD signal in older as compared to younger adults in response to high-probability rewards. Our data are in line with behavioral evidence that older adults show altered stimulus–reward learning and support the view of an altered fronto-striatal interaction during reward-based decision making in old age, which contributes to prolonged learning of reward associations.

    Human aging compromises attentional control of auditory perception


    Children's learning of non-adjacent dependencies using a web-based computer game setting

    Infants show impressive speech decoding abilities and detect acoustic regularities that highlight the syntactic relations of a language, often coded via non-adjacent dependencies (NADs, e.g., is singing). It has been claimed that infants learn NADs implicitly and associatively through passive listening, and that there is a shift from effortless associative learning to a more controlled learning of NADs after the age of 2 years, potentially driven by the maturation of the prefrontal cortex. To investigate whether older children are able to learn NADs, Lammertink et al. (2019) recently developed a word-monitoring serial reaction time (SRT) task and showed that 6–11-year-old children learned the NADs, as their reaction times (RTs) increased when they were presented with violated NADs. In the current study, we adapted their experimental paradigm and tested NAD learning in a younger group of 52 children aged 4–8 years in a remote, web-based, game-like setting (whack-a-mole). Children were exposed to Italian phrases containing NADs and had to monitor the occurrence of a target syllable, which was the second element of the NAD. After exposure, children completed a “Stem Completion” task in which they were presented with the first element of the NAD and had to choose the second element of the NAD to complete the stimuli. Our findings show that, despite large variability in the data, children aged 4–8 years are sensitive to NADs: they showed the expected differences in RTs in the SRT task and could transfer the NAD rule to the Stem Completion task. We discuss these results with respect to the development of NAD learning in childhood and the practical impact and limitations of collecting such data in a web-based setting.

    Linguistic and non-linguistic non-adjacent dependency learning in early development

    Non-adjacent dependencies (NADs) are important building blocks for language, and extracting them from the input is a fundamental part of language acquisition. Prior event-related potential (ERP) studies revealed changes in the neural signature of NAD learning between infancy and adulthood, suggesting a developmental shift in the learning route for NADs. The present study aimed to specify which brain regions are involved in this developmental shift and whether this shift extends to NAD learning in the non-linguistic domain. In two experiments, 2- and 3-year-old German-learning children were familiarized with either Italian sentences or tone sequences containing NADs and subsequently tested with NAD violations while functional near-infrared spectroscopy (fNIRS) data were recorded. Results showed increased hemodynamic responses related to the detection of linguistic NAD violations in the left temporal, inferior frontal, and parietal regions in 2-year-old children, but not in 3-year-old children. A different developmental trajectory was found for non-linguistic NADs, where 3-year-old, but not 2-year-old, children showed evidence for the detection of non-linguistic NAD violations. These results confirm a developmental shift in the NAD learning route and point to distinct mechanisms underlying NAD learning in the linguistic and the non-linguistic domain.