396 research outputs found
One, two, or many mechanisms? The brain's processing of complex words
The heated debate over whether there is a single mechanism or two mechanisms for morphology has diverted valuable research energy away from more critical questions about the neural computations involved in the comprehension and production of morphologically complex forms. Cognitive neuroscience data implicate many brain areas. All extant models, whether they rely on a connectionist network or espouse two mechanisms, are too underspecified to explain why more than a few brain areas differ in their activity during the processing of regular and irregular forms. No one doubts that the brain treats regular and irregular words differently, but the brain data indicate that a simplistic account will not do. It is time to search for the critical factors free from theoretical blinders.
Differences in the processing of anaphoric reference between closely related languages: neurophysiological evidence
Contains fulltext: 67857.pdf (Open Access), 1 p.
How Are ‘Barack Obama’ and ‘President Elect’ Differentially Stored in the Brain? An ERP Investigation on the Processing of Proper and Common Noun Pairs
BACKGROUND: One of the most debated issues in the cognitive neuroscience of language is whether distinct semantic domains are differentially represented in the brain. Clinical studies have described several anomic dissociations with no clear neuroanatomical correlate. Neuroimaging studies have shown that memory retrieval is more demanding for proper than for common nouns, in that the former are purely arbitrary referential expressions. In this study, a semantic relatedness paradigm was devised to investigate the neural processing of proper and common nouns. METHODOLOGY/PRINCIPAL FINDINGS: 780 words (arranged in pairs of Italian nouns/adjectives and the first/last names of well-known persons) were presented. Half of the pairs were semantically related ("Woody Allen" or "social security"), while the others were not ("Sigmund Parodi" or "judicial cream"). All items were balanced for length, frequency, familiarity and semantic relatedness. Participants decided whether the two items in a pair were semantically related. RT and N400 data suggest that the task was more demanding for common nouns. The LORETA neural generators for the related-unrelated contrast (for proper names) included the left fusiform gyrus, right medial temporal gyrus, limbic and parahippocampal regions, and inferior parietal and inferior frontal areas, which are thought to be involved in the conjoined processing of a familiar face with the relevant episodic information. Semantic access to person names was more emotional and sensorily vivid than access to common nouns. CONCLUSIONS/SIGNIFICANCE: When memory retrieval is not required, proper name access (knowledge of conspecifics) is not more demanding. The neural generators of the N400 to unrelated items (unknown persons and things) did not differ as a function of lexical class, suggesting that proper and common nouns are not treated as belonging to different grammatical classes.
Interactions between mood and the structure of semantic memory: event-related potentials evidence
Recent evidence suggests that affect modulates cognitive processes and, in particular, that induced mood affects how semantic memory is used on-line. We used event-related potentials (ERPs) to examine affective modulation of semantic information processing under three different moods: neutral, positive and negative. Fifteen subjects read 324 pairs of sentences after a mood induction procedure using 30 pictures of neutral, 30 pictures of positive and 30 pictures of negative valence; 108 sentences were read in each mood induction condition. Sentences ended with three word types: expected words, within-category violations, and between-category violations. N400 amplitude was measured for the three word types under each mood induction condition. Under neutral mood, a congruency effect (more negative N400 amplitude for unexpected relative to expected endings) and a category effect (more negative N400 amplitude for between- than for within-category violations) were observed. Results also showed differences in N400 amplitude for both within- and between-category violations as a function of mood: while positive mood tended to facilitate the integration of unexpected but related items, negative mood made their integration as difficult as that of unexpected and unrelated items. These findings suggest a differential impact of mood on access to long-term semantic memory during sentence comprehension. The authors would like to thank all the participants of the study, as well as Jenna Mezin and Elizabeth Thompson for their help with data collection. This work was supported by a Doctoral Grant from Fundação para a Ciência e a Tecnologia, Portugal (SFRH/BD/35882/2007 to A. P. P.) and by the National Institute of Mental Health (RO1 MH 040799 to R. W. M.; RO3 MH 078036 to M. A. N.).
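The N400 amplitudes compared in this abstract are conventionally measured as the mean voltage of the averaged ERP inside a post-stimulus window. A minimal sketch, assuming a typical 300–500 ms window and a synthetic waveform (the function name, window, and simulated ERP are illustrative assumptions, not taken from the study):

```python
import numpy as np

def n400_mean_amplitude(erp, times, window=(0.3, 0.5)):
    """Mean ERP amplitude in a post-stimulus time window.

    erp    : 1-D array of averaged voltages, one value per time point
    times  : 1-D array of time points in seconds, same length as erp
    window : (start, end) in seconds; 300-500 ms is a common N400 convention
    """
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].mean())

# Toy averaged ERP: a negative-going deflection peaking near 400 ms,
# standing in for an N400 to an unexpected sentence ending.
times = np.linspace(-0.1, 0.8, 901)  # -100 ms to 800 ms in 1 ms steps
erp = -5.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))

amp = n400_mean_amplitude(erp, times)  # negative, as expected for an N400
```

A "more negative N400" for one condition then simply means its windowed mean is lower than another condition's.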
Age-related delay in information accrual for faces: Evidence from a parametric, single-trial EEG approach
Background: In this study, we quantified age-related changes in the time-course of face processing
by means of an innovative single-trial ERP approach. Unlike analyses used in previous studies, our
approach does not rely on peak measurements and can provide a more sensitive measure of
processing delays. Young and old adults (mean ages 22 and 70 years) performed a non-speeded
discrimination task between two faces. The phase spectrum of these faces was manipulated
parametrically to create pictures that ranged between pure noise (0% phase information) and the
undistorted signal (100% phase information), with five intermediate steps.
Results: Behavioural 75% correct thresholds were on average lower, and maximum accuracy was
higher, in younger than older observers. ERPs from each subject were entered into a single-trial
general linear regression model to identify variations in neural activity statistically associated with
changes in image structure. The earliest age-related ERP differences occurred in the time window
of the N170. Older observers had a significantly stronger N170 in response to noise, but this age
difference decreased with increasing phase information. Overall, manipulating image phase
information had a greater effect on ERPs from younger observers, which was quantified using a
hierarchical modelling approach. Importantly, visual activity was modulated by the same stimulus
parameters in younger and older subjects. The fit of the model, indexed by R2, was computed at
multiple post-stimulus time points. The time-course of the R2 function showed a significantly slower
processing in older observers starting around 120 ms after stimulus onset. This age-related delay
increased over time to reach a maximum around 190 ms, at which latency younger observers had
a lead of around 50 ms over older observers.
Conclusion: Using a component-free ERP analysis that provides a precise timing of the visual
system sensitivity to image structure, the current study demonstrates that older observers
accumulate face information more slowly than younger subjects. Additionally, the N170 appears to
be less face-sensitive in older observers.
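The single-trial approach described above can be sketched as an ordinary least-squares regression of voltage onto phase information at every post-stimulus time point, with the model fit (R²) tracked over time. All names, array shapes, and the simulated data below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def r2_timecourse(eeg, phase_info):
    """R^2 of a single-trial linear regression at each time point.

    eeg        : (n_trials, n_times) single-trial voltages
    phase_info : (n_trials,) percent phase information shown on each trial
    """
    # Design matrix: intercept + phase information
    X = np.column_stack([np.ones_like(phase_info), phase_info])
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)  # fit all time points at once
    pred = X @ beta
    ss_res = ((eeg - pred) ** 2).sum(axis=0)
    ss_tot = ((eeg - eeg.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n_trials, n_times = 200, 50
phase = rng.choice([0.0, 20.0, 40.0, 60.0, 80.0, 100.0], n_trials)
# Simulated data: sensitivity to phase information grows at later time points
signal = np.outer(phase / 100.0, np.linspace(0.0, 1.0, n_times))
eeg = signal + rng.normal(0.0, 1.0, (n_trials, n_times))

r2 = r2_timecourse(eeg, phase)  # rises across time in this toy example
```

A processing delay between age groups could then be estimated by comparing, per group, the latency at which the R² time course rises or peaks.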
The effects of stereo disparity on the behavioural and electrophysiological correlates of audio-visual motion in depth.
Motion is represented by low-level signals, such as size expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated, facilitating detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual motion in depth in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1, participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., the difference between incongruent and congruent conditions) was larger for visual 3D cues than for 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135–160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140–200 ms, 220–280 ms, and 350–500 ms after stimulus onset.
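The congruency effect defined in this abstract is just the incongruent-minus-congruent difference, computed separately per depth-cue condition. A minimal sketch with made-up mean reaction times (the numbers are illustrative placeholders, not the study's data):

```python
# Hypothetical mean reaction times in ms, one per condition;
# values chosen only to mimic the reported pattern (larger 3D effect).
rt = {
    ("2D", "congruent"): 520.0, ("2D", "incongruent"): 545.0,
    ("3D", "congruent"): 510.0, ("3D", "incongruent"): 560.0,
}

def congruency_effect(rt, depth_cue):
    """Incongruent minus congruent mean RT for one depth-cue condition."""
    return rt[(depth_cue, "incongruent")] - rt[(depth_cue, "congruent")]

effect_2d = congruency_effect(rt, "2D")  # 25 ms
effect_3d = congruency_effect(rt, "3D")  # 50 ms
```

With these placeholder values the 3D congruency effect exceeds the 2D one, mirroring the pattern the abstract reports.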