Empathy matters: ERP evidence for inter-individual differences in social language processing
When an adult claims he cannot sleep without his teddy bear, people tend to react with surprise. Language interpretation is thus influenced by social context, such as who the speaker is. The present study reveals inter-individual differences in brain reactivity to social aspects of language. Whereas women showed brain reactivity when stereotype-based inferences about a speaker conflicted with the content of the message, men did not. This sex difference in social information processing can be explained by a specific cognitive trait: one's ability to empathize. Individuals who empathize to a greater degree showed larger N400 effects (as well as a larger increase in γ-band power) in response to socially relevant information. These results indicate that individuals with high empathizing skills are able to rapidly integrate information about the speaker with the content of the message, as they make use of voice-based inferences about the speaker to process language in a top-down manner. In contrast, individuals with lower empathizing skills did not use information about social stereotypes in implicit sentence comprehension, but rather took a more bottom-up approach to the processing of these social-pragmatic sentences.
Contextual influences on spoken-word processing: An electrophysiological approach
The aim of this thesis was to gain more insight into spoken-word comprehension and the influence of sentence-contextual information on these processes using ERPs. By manipulating critical words in semantically constraining sentences, either semantically or syntactically, and examining the consequences in the electrophysiological signal (e.g., the elicitation of ERP components such as the N400, N200, LAN, and P600), three questions were tackled: (I) At which moment is context information used in the spoken-word recognition process? (II) What is the temporal relationship between lexical selection and the integration of the meaning of a spoken word into a higher-order representation of the preceding sentence? (III) What is the time course of the processing of different sources of linguistic information obtained from the context, such as phonological, semantic, and syntactic information, during spoken-word comprehension? From the results of this thesis it can be concluded that sentential context already exerts an influence on spoken-word processing at approximately 200 ms after word onset. In addition, semantic integration is attempted before a spoken word can be selected on the basis of the acoustic signal, i.e., before lexical selection is completed. Finally, knowledge of the syntactic category of a word is not needed before semantic integration can take place. These findings were therefore interpreted as evidence for an account of cascaded spoken-word processing in which contextual information is used optimally during spoken-word identification: semantic and syntactic processing take place in parallel after bottom-up activation of a set of candidates, and lexical integration proceeds with the limited number of candidates that still match the acoustic input.
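To make the cascaded account concrete, the toy sketch below (in Python, with a hypothetical three-word lexicon and made-up context-fit scores; not the thesis's actual model) illustrates how contextual evaluation can proceed in parallel over all candidates that still match the incoming acoustic input, before a unique word has been selected.

```python
# Illustrative sketch only: a toy "cascaded" word recogniser in which
# semantic fit is evaluated for every candidate still compatible with the
# acoustic input, rather than waiting for lexical selection to complete.
# The lexicon, the context-fit scores, and the letter-by-letter "phonemes"
# are hypothetical toy data.

LEXICON = {"captain": 0.9, "capital": 0.2, "candle": 0.1}  # word -> toy fit with sentence context

def cohort(prefix):
    """Candidates whose onset still matches the acoustic input heard so far."""
    return [w for w in LEXICON if w.startswith(prefix)]

def process(segments):
    heard = ""
    for seg in segments:                        # incremental acoustic input
        heard += seg
        candidates = cohort(heard)
        # Cascaded step: context fit is assessed for *all* remaining
        # candidates in parallel, before a unique word is isolated.
        fits = {w: LEXICON[w] for w in candidates}
        print(f"after '{heard}': candidates={candidates}, context fit={fits}")
        if len(candidates) == 1:                # isolation point: unique candidate
            return candidates[0]
    return None

process(list("capt"))
```

In this sketch the point at which only one candidate remains corresponds to the isolation point; on the cascaded account, contextual information has already been brought to bear before that point.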
Feed a Cold, Starve a Fever?
An English old wives' tale advises us to "feed a cold and starve a fever." Here we report that nutritional status modulates the T helper 1 (Th1)/Th2 balance of activated T cells in human volunteers. Food intake resulted in increased levels of gamma interferon production, whereas food deprivation stimulated interleukin-4 release.
The cascaded nature of lexical selection and integration in auditory sentence processing.
An event-related brain potential experiment was carried out to investigate the temporal relationship between lexical selection and semantic integration in auditory sentence processing. Participants were presented with spoken sentences that ended with a word that was either semantically congruent or anomalous. Information about the moment at which a sentence-final word could be uniquely identified, its isolation point (IP), was compared with the onset of the elicited N400 congruity effect, which reflects semantic integration processing. The results revealed that the onset of the N400 effect occurred prior to the IP of the sentence-final words. Moreover, whether the IP was early or late did not affect the onset of the N400. These findings indicate that lexical selection and semantic integration are cascading processes: semantic integration processing can start before the acoustic information allows the selection of a unique candidate, and appears to be attempted in parallel for multiple candidates that are still compatible with the bottom-up acoustic input.
Unification of speaker and meaning in language comprehension: an fMRI study.
When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information: certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap between the brain regions involved in the unification of speaker-related information and those used for the unification of semantic and world-knowledge information [the inferior frontal gyrus bilaterally (BA 45/47) and the left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend existing knowledge about the role of the inferior frontal cortex as a crucial component for unification during language comprehension.
Towards neurophysiological assessment of phonemic discrimination: Context effects of the mismatch negativity
Objective - This study focuses on the optimal paradigm for the simultaneous assessment of auditory and phonemic discrimination in clinical populations. We investigated (a) whether pitch and phonemic deviants presented together in one sequence are able to elicit mismatch negativities (MMNs) in healthy adults and (b) whether the MMN elicited by a change in pitch is modulated by the presence of the phonemic deviants.
Methods - Standard stimuli [i] were intermixed either with small, medium, or large pitch deviants, or with pitch deviants of the same magnitudes together with small and large phonemic deviants ([y] and [u], respectively).
Results - When pitch and phonemic deviants were presented together, only the large pitch and phonemic contrasts elicited significant MMNs. When only pitch deviants were presented, the medium and large pitch contrasts elicited significant MMNs. The MMNs in response to the medium and large pitch contrasts were of similar magnitude across the two contexts.
Conclusions - Pitch and phonemic deviants can be tested together provided the pitch contrast is relatively large.
Significance - A combined neurophysiological test of phonemic and pitch discrimination, as measured by the MMN, is a time-efficient tool that may provide valuable information about the underlying cause of poorly specified phonemic representations in clinical populations.
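As a loose illustration of how such a combined pitch/phonemic oddball block might be assembled, the sketch below generates a stimulus sequence with one frequent standard and several rare deviant types in a single block. The deviant labels mirror the abstract; the 15% deviant probability, the equiprobable deviant types, and the rule forbidding consecutive deviants are assumptions for illustration, not the study's actual parameters.

```python
# Illustrative sketch only: a multi-deviant oddball sequence of the kind
# used in MMN paradigms, with a frequent standard [i] and several rare
# deviants presented in one block. Probabilities and constraints are assumed.
import random

STANDARD = "i"
DEVIANTS = ["pitch_small", "pitch_medium", "pitch_large", "phon_y", "phon_u"]
P_DEVIANT = 0.15          # overall deviant probability (assumed)

def make_sequence(n_trials, seed=0):
    rng = random.Random(seed)
    seq, prev_was_deviant = [], False
    for _ in range(n_trials):
        if not prev_was_deviant and rng.random() < P_DEVIANT:
            seq.append(rng.choice(DEVIANTS))   # rare deviant, type chosen equiprobably
            prev_was_deviant = True
        else:
            seq.append(STANDARD)               # frequent standard [i]
            prev_was_deviant = False
    return seq

trials = make_sequence(600)
print({t: trials.count(t) for t in set(trials)})  # rough per-stimulus counts
```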