
    Parametric study of EEG sensitivity to phase noise during face processing

    Background: The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. Results: Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the phase-noise sensitivity function observed in response to faces. Conclusion: Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
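    The linear phase interpolation named above can be illustrated with a short sketch: the amplitude spectrum of the image is held fixed while its phase spectrum is blended with random phase. This is a minimal, hypothetical rendering of the idea, not the authors' code; the actual stimulus-generation procedure may handle phase wrapping and greyscale conversion differently.

```python
import numpy as np

def phase_interpolated_stimulus(image, noise_level, seed=None):
    """Blend an image's phase spectrum with random phase while keeping
    its amplitude spectrum fixed (hypothetical sketch of the linear
    phase interpolation described in the abstract)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    # noise_level = 0.0 -> original phase; 1.0 -> pure phase noise.
    mixed_phase = (1.0 - noise_level) * phase + noise_level * random_phase
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))

# 11 noise levels from 0% to 100% in 10% increments, as in the study.
levels = np.linspace(0.0, 1.0, 11)
```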

    Age-related delay in information accrual for faces: Evidence from a parametric, single-trial EEG approach

    Background: In this study, we quantified age-related changes in the time course of face processing by means of an innovative single-trial ERP approach. Unlike analyses used in previous studies, our approach does not rely on peak measurements and can provide a more sensitive measure of processing delays. Young and old adults (mean ages 22 and 70 years) performed a non-speeded discrimination task between two faces. The phase spectrum of these faces was manipulated parametrically to create pictures that ranged between pure noise (0% phase information) and the undistorted signal (100% phase information), with five intermediate steps. Results: Behavioural 75% correct thresholds were on average lower, and maximum accuracy was higher, in younger than older observers. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The earliest age-related ERP differences occurred in the time window of the N170. Older observers had a significantly stronger N170 in response to noise, but this age difference decreased with increasing phase information. Overall, manipulating image phase information had a greater effect on ERPs from younger observers, which was quantified using a hierarchical modelling approach. Importantly, visual activity was modulated by the same stimulus parameters in younger and older subjects. The fit of the model, indexed by R², was computed at multiple post-stimulus time points. The time course of the R² function showed significantly slower processing in older observers, starting around 120 ms after stimulus onset. This age-related delay increased over time to reach a maximum around 190 ms, at which point younger observers led older observers by around 50 ms. Conclusion: Using a component-free ERP analysis that provides precise timing of the visual system's sensitivity to image structure, the current study demonstrates that older observers accumulate face information more slowly than younger subjects. Additionally, the N170 appears to be less face-sensitive in older observers.
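    The single-trial regression and R² time course described above might look roughly like the following sketch, in which single-trial voltages at one electrode are regressed on the amount of phase information at every post-stimulus time point. Variable names and array layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def r2_timecourse(phase_levels, erps):
    """Per-time-point R^2 from regressing single-trial EEG amplitude on
    phase information (illustrative sketch, not the authors' code).

    phase_levels : (n_trials,) phase information per trial, 0..1
    erps         : (n_trials, n_times) single-trial voltages, one electrode
    """
    X = np.column_stack([np.ones_like(phase_levels), phase_levels])
    # Ordinary least squares fitted jointly at every time point.
    beta, *_ = np.linalg.lstsq(X, erps, rcond=None)
    residuals = erps - X @ beta
    ss_res = (residuals ** 2).sum(axis=0)
    ss_tot = ((erps - erps.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot  # R^2 as a function of time
```

    An age-related delay can then be read off by comparing when the younger and older groups' R² functions rise, for example via cross-correlation of the two curves.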

    Infants segment words from songs - an EEG study

    Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech familiarization. Comparisons between the stimuli of the present study and those of a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and to its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
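    The familiarity effect reported above amounts to a paired comparison of mean ERP amplitude between the first and final occurrences of the target words. A minimal sketch, assuming per-infant average epochs and a pre-specified analysis window (both assumptions, not details taken from the study):

```python
import numpy as np
from scipy import stats

def familiarity_effect(first_epochs, final_epochs, window):
    """Paired comparison of mean ERP amplitude between first and final
    target occurrences (illustrative sketch; the window indices and
    array layout are assumptions, not the study's code).

    first_epochs, final_epochs : (n_infants, n_times) per-infant averages
    window : slice over the time samples of the analysis window
    """
    first_mean = first_epochs[:, window].mean(axis=1)
    final_mean = final_epochs[:, window].mean(axis=1)
    # A positive difference matches the positive familiarity effect reported.
    return stats.ttest_rel(final_mean, first_mean)
```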

    Early EEG correlates of word frequency and contextual predictability in reading

    Previous research into written language comprehension has been equivocal as to whether word frequency and contextual predictability effects share an early time course of processing. Target word frequency (low, high) and its predictability from prior context (low, high) were manipulated across two-sentence passages. Context sentences were presented in full, followed by word-by-word presentation (300 ms SOA) of the target sentences. ERPs were analysed across left-to-right and anterior-to-posterior regions of interest within intervals from 50 to 550 ms post-stimulus. The onset of significant predictability effects (50–80 ms) preceded that of frequency effects (P1, 80–120 ms), while both main effects were generally sustained through the N400 (350–550 ms). Critically, the frequency-by-predictability interaction became significant in the P1 and was sustained through the N400, although the specific configuration of effects differed across components. This pattern of findings supports an early, chronometric locus of contextual predictability in recognising words during reading.
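    The frequency-by-predictability interaction reported above can be tested, for a given time window and region of interest, as a one-sample test on the difference of predictability effects across frequency levels, which is equivalent to the interaction term of a 2 × 2 repeated-measures ANOVA. A minimal sketch, with an assumed array layout:

```python
import numpy as np
from scipy import stats

def interaction_contrast(erp):
    """2 x 2 frequency-by-predictability interaction on mean amplitudes
    within one time window and region of interest (illustrative sketch;
    the array layout is an assumption).

    erp : (n_subjects, 2, 2) mean amplitude, with axes
          (subject, frequency: low/high, predictability: low/high)
    """
    # Interaction = difference of predictability effects across frequency.
    contrast = (erp[:, 0, 1] - erp[:, 0, 0]) - (erp[:, 1, 1] - erp[:, 1, 0])
    return stats.ttest_1samp(contrast, 0.0)
```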

    A visual M170 effect of morphological complexity

    Recent masked priming studies on visual word recognition have suggested that morphological decomposition is performed prelexically, purely on the basis of the orthographic properties of the word form. Given this, one might expect morphological complexity to modulate early visual evoked activity in electromagnetic measures. We investigated the neural bases of morphological decomposition with magnetoencephalography (MEG). In two experiments, we manipulated morphological complexity in single-word lexical decision without priming, once using suffixed words and once using prefixed words. We found that morphologically complex forms elicit larger amplitudes in the M170, the same component that has been implicated in letterstring and face effects in previous MEG studies. Although letterstring effects have been reported to be left-lateralized, we found a right-lateralized effect of morphological complexity, suggesting that both hemispheres may be involved in the early analysis of word forms.

    How are visual words represented? Insights from EEG-based visual word decoding, feature derivation and image reconstruction

    Investigations into the neural basis of reading have shed light on the cortical locus and the functional role of visual-orthographic processing. Yet the fine-grained structure of the neural representations subserving reading remains to be clarified. Here, we capitalize on the spatiotemporal structure of electroencephalography (EEG) data to examine whether and how EEG patterns can serve to decode and reconstruct the internal representation of visually presented words in healthy adults. Our results show that word classification and image reconstruction were accurate well above chance, and that their temporal profile exhibited an early onset, soon after 100 ms, peaking around 170 ms. Further, the reconstruction results were well explained by a combination of visual-orthographic word properties. Last, systematic individual differences in orthographic representations were detected across participants. Collectively, our results establish the feasibility of EEG-based word decoding and image reconstruction. More generally, they help to elucidate the specific features, dynamics, and neurocomputational principles underlying word recognition. This research was supported by the Natural Sciences and Engineering Research Council of Canada (to A.N., A.C.H.L., and B.C.A.).
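    Time-resolved decoding of the kind described above is commonly implemented by training a classifier independently at each time point and tracking its cross-validated accuracy; a sketch along those lines follows. The classifier choice (linear discriminant analysis) and the data layout are assumptions, not necessarily what the authors used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def decoding_timecourse(epochs, labels, cv=5):
    """Time-resolved word decoding from EEG (illustrative sketch of the
    general approach, not the authors' pipeline).

    epochs : (n_trials, n_channels, n_times) EEG data
    labels : (n_trials,) word identity per trial
    """
    n_times = epochs.shape[-1]
    scores = np.empty(n_times)
    for t in range(n_times):
        # Fit and score a fresh classifier on the channel pattern at time t.
        clf = LinearDiscriminantAnalysis()
        scores[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
    return scores  # decoding accuracy as a function of time
```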