8 research outputs found

    A two-stage model of orientation integration for Battenberg-modulated micropatterns

    The visual system pools information from local samples to calculate textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical) with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ("noise check") or were blank ("blank check"). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would pool responses only from potential signal elements. Humans did this for medium check sizes, and for large check sizes when a signal was presented in the fovea. For small check sizes, however, pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise check and blank check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage.
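
    The two-stage combination can be sketched in code. The following is a minimal numpy/scipy illustration, not the authors' fitted model: filter responses at the two candidate orientations are perturbed by internal noise at each location, summed over a fixed local window (the mandatory first stage), and then pooled over a task-dependent set of locations (the second stage). The window size, noise level, pooling mask, and function names are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_stage_pooling(filter_resp_h, filter_resp_v, relevant_mask=None,
                      local_area=3, internal_noise_sd=0.3, seed=0):
    """Decide the global orientation from noisy local filter responses.

    filter_resp_h, filter_resp_v : 2-D arrays (e.g. 29 x 29) of responses of
        filters tuned to the two candidate orientations.
    relevant_mask : boolean array marking the locations the task-dependent
        second stage pools over (all locations if None).
    local_area, internal_noise_sd : placeholder values, not fitted parameters.
    """
    rng = np.random.default_rng(seed)

    # Internal noise limits each local measurement before any pooling.
    noisy_h = filter_resp_h + rng.normal(0.0, internal_noise_sd, filter_resp_h.shape)
    noisy_v = filter_resp_v + rng.normal(0.0, internal_noise_sd, filter_resp_v.shape)

    # Stage 1: mandatory integration over a fixed local area (box sum).
    pooled_h = uniform_filter(noisy_h, size=local_area, mode="constant") * local_area**2
    pooled_v = uniform_filter(noisy_v, size=local_area, mode="constant") * local_area**2

    # Stage 2: task-dependent combination over (ideally) the relevant locations.
    if relevant_mask is None:
        relevant_mask = np.ones(pooled_h.shape, dtype=bool)
    evidence_h = pooled_h[relevant_mask].sum()
    evidence_v = pooled_v[relevant_mask].sum()

    # Report the orientation with the larger pooled evidence.
    return "horizontal" if evidence_h > evidence_v else "vertical"
```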

    Healthy Aging Delays Scalp EEG Sensitivity to Noise in a Face Discrimination Task

    We used a single-trial ERP approach to quantify age-related changes in the time-course of noise sensitivity. A total of 62 healthy adults, aged between 19 and 98, performed a non-speeded discrimination task between two faces. Stimulus information was controlled by parametrically manipulating the phase spectrum of these faces. Behavioral 75% correct thresholds increased with age. This result may be explained by lower signal-to-noise ratios in older brains. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time-course of the R2 function showed significantly delayed noise sensitivity in older observers. This age effect is reliable, as demonstrated by test–retest in 24 subjects, and started about 120 ms after stimulus onset. Our analyses also suggest a qualitative change from a young to an older pattern of brain activity at around 47 ± 4 years old.
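
    The per-timepoint R2 measure can be illustrated with a short regression sketch. This is a generic ordinary-least-squares version of the single-trial approach described above, not the authors' exact pipeline; the array names and the single phase-information regressor are assumptions.

```python
import numpy as np

def r2_timecourse(erp_trials, phase_info):
    """R^2 of a single-trial linear regression at every post-stimulus time point.

    erp_trials : array (n_trials, n_timepoints) of single-trial EEG amplitudes
        at one electrode.
    phase_info : array (n_trials,) with the phase-information level shown on
        each trial (the regressor of interest).
    """
    # Design matrix: intercept plus the phase-information regressor.
    X = np.column_stack([np.ones_like(phase_info), phase_info])

    # Ordinary least squares solved for all time points at once.
    beta, *_ = np.linalg.lstsq(X, erp_trials, rcond=None)
    fitted = X @ beta

    # R^2 per time point: 1 - residual variance / total variance.
    ss_res = ((erp_trials - fitted) ** 2).sum(axis=0)
    ss_tot = ((erp_trials - erp_trials.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot
```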

    Age-related delay in information accrual for faces: Evidence from a parametric, single-trial EEG approach

    Background: In this study, we quantified age-related changes in the time-course of face processing by means of an innovative single-trial ERP approach. Unlike analyses used in previous studies, our approach does not rely on peak measurements and can provide a more sensitive measure of processing delays. Young and old adults (mean ages 22 and 70 years) performed a non-speeded discrimination task between two faces. The phase spectrum of these faces was manipulated parametrically to create pictures that ranged between pure noise (0% phase information) and the undistorted signal (100% phase information), with five intermediate steps. Results: Behavioural 75% correct thresholds were on average lower, and maximum accuracy was higher, in younger than older observers. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The earliest age-related ERP differences occurred in the time window of the N170. Older observers had a significantly stronger N170 in response to noise, but this age difference decreased with increasing phase information. Overall, manipulating image phase information had a greater effect on ERPs from younger observers, which was quantified using a hierarchical modelling approach. Importantly, visual activity was modulated by the same stimulus parameters in younger and older subjects. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time-course of the R2 function showed significantly slower processing in older observers starting around 120 ms after stimulus onset. This age-related delay increased over time to reach a maximum around 190 ms, at which latency younger observers had a lead of around 50 ms over older observers. Conclusion: Using a component-free ERP analysis that provides precise timing of the visual system's sensitivity to image structure, the current study demonstrates that older observers accumulate face information more slowly than younger subjects. Additionally, the N170 appears to be less face-sensitive in older observers.
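
    The parametric phase manipulation described here is commonly implemented by interpolating between an image's phase spectrum and a random phase spectrum while keeping the amplitude spectrum fixed. The sketch below shows one such implementation as an assumption; it is not necessarily the exact recipe used in the study.

```python
import numpy as np

def phase_scramble(image, phase_coherence, seed=0):
    """Create a stimulus with a given proportion of phase information.

    image : 2-D grayscale face image.
    phase_coherence : 0.0 (pure noise) to 1.0 (undistorted signal).
    The amplitude spectrum is preserved; the phase spectrum is linearly
    interpolated between the original and a random phase field.
    """
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)

    # Random phases drawn uniformly over the full cycle.
    random_phase = rng.uniform(-np.pi, np.pi, image.shape)

    # Weighted mix of original and random phase, amplitude kept intact.
    mixed_phase = phase_coherence * phase + (1.0 - phase_coherence) * random_phase
    scrambled = np.fft.ifft2(amplitude * np.exp(1j * mixed_phase))
    return np.real(scrambled)
```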

    Learning to recognize letters in the periphery: Effects of repeated exposure, letter frequency, and letter complexity


    Single-trial EEG dynamics of object and face visual processing

    There has been extensive work using early event-related potentials (ERPs) to study visual object processing. ERP analyses focus traditionally on mean amplitude differences, with the implicit assumption that all of the neuronal activity of interest is evoked by the stimulus in a time-locked manner from trial to trial. However, several recent studies have suggested that visual ERP components might be explained to a large extent by the partial phase resetting of ongoing activity in restricted frequency bands. Here we apply that approach to the neural processing of visual objects. We examine the single-trial dynamics of the EEG signal elicited by the presentation of noise textures, houses and faces. We show that the brain response to those stimuli is best explained by an amplitude increase that is maximal in the 5- to 15-Hz frequency band. The results also indicate the presence of a substantial increase in phase coherence in the same frequency band. However, analyses of residual activity, after subtracting the mean from single trials, show that this increase in phase coherence is not due to phase resetting per se, but rather to the presence of the ERP + noise in each trial. In keeping with this idea, a simulation demonstrates that a purely evoked model of the ERP produces quantitatively very similar results. Finally, the stronger response to faces compared to other objects (the ‘N170 face effect’) can be explained by a pure modulation of amplitude centered in the 5- to 15-Hz band.
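
    The amplitude and phase-coherence measures, and the residual analysis used to separate evoked activity from genuine phase resetting, can be sketched as follows. This is a generic band-pass/Hilbert implementation with assumed filter settings, not the authors' actual analysis code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_amplitude_and_itc(trials, sfreq, band=(5.0, 15.0)):
    """Band-limited amplitude and inter-trial phase coherence (ITC).

    trials : array (n_trials, n_times) of single-trial EEG at one electrode.
    sfreq  : sampling rate in Hz.
    Returns the mean amplitude envelope, the ITC across trials, and the ITC
    recomputed on residuals after subtracting the mean ERP from every trial
    (the comparison used to test the phase-resetting account).
    """
    # Band-pass filter in the 5-15 Hz range (4th-order Butterworth).
    b, a = butter(4, np.array(band) / (sfreq / 2.0), btype="bandpass")
    filtered = filtfilt(b, a, trials, axis=1)
    analytic = hilbert(filtered, axis=1)

    amplitude = np.abs(analytic).mean(axis=0)
    itc = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=0))

    # Residual analysis: remove the average ERP from each trial first.
    residual = trials - trials.mean(axis=0, keepdims=True)
    res_analytic = hilbert(filtfilt(b, a, residual, axis=1), axis=1)
    itc_residual = np.abs(np.exp(1j * np.angle(res_analytic)).mean(axis=0))

    return amplitude, itc, itc_residual
```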

    Spatial scaling factors explain eccentricity effects on face ERPs
