107 research outputs found

    Parametric study of EEG sensitivity to phase noise during face processing

    Background: The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. Results: Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. Sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the sensitivity-to-phase-noise function observed in response to faces. Conclusion: Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
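The linear phase interpolation used to build such stimuli can be sketched in a few lines. This is a minimal illustration, assuming a simple weighted blend between each image's Fourier phase and a uniformly random phase field while the amplitude spectrum is held fixed; the function name and the handling of conjugate symmetry (here crudely resolved by taking the real part) are assumptions, and the authors' exact implementation may differ.

```python
import numpy as np

def phase_noise_image(img, noise_level, rng=None):
    """Blend an image's Fourier phase with random phase while keeping
    the amplitude spectrum fixed. noise_level is in [0, 1], where 0
    returns the original image and 1 yields pure phase noise."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(img)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    # Linear interpolation between the original and the random phase
    mixed_phase = (1 - noise_level) * phase + noise_level * random_phase
    # Taking the real part discards the small imaginary residue left
    # because the mixed spectrum is no longer conjugate-symmetric
    return np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)).real
```

At noise_level=0 the original image is recovered; intermediate levels progressively scramble phase while the second-order statistics set by the amplitude spectrum stay approximately constant.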

    Age-related delay in information accrual for faces: Evidence from a parametric, single-trial EEG approach

    Background: In this study, we quantified age-related changes in the time course of face processing by means of an innovative single-trial ERP approach. Unlike analyses used in previous studies, our approach does not rely on peak measurements and can provide a more sensitive measure of processing delays. Young and old adults (mean ages 22 and 70 years) performed a non-speeded discrimination task between two faces. The phase spectrum of these faces was manipulated parametrically to create pictures that ranged between pure noise (0% phase information) and the undistorted signal (100% phase information), with five intermediate steps. Results: Behavioural 75% correct thresholds were on average lower, and maximum accuracy was higher, in younger than in older observers. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The earliest age-related ERP differences occurred in the time window of the N170. Older observers had a significantly stronger N170 in response to noise, but this age difference decreased with increasing phase information. Overall, manipulating image phase information had a greater effect on ERPs from younger observers, which was quantified using a hierarchical modelling approach. Importantly, visual activity was modulated by the same stimulus parameters in younger and older subjects. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time course of the R2 function showed significantly slower processing in older observers, starting around 120 ms after stimulus onset. This age-related delay increased over time to reach a maximum around 190 ms, at which latency younger observers had around a 50 ms lead over older observers. Conclusion: Using a component-free ERP analysis that provides precise timing of the visual system's sensitivity to image structure, the current study demonstrates that older observers accumulate face information more slowly than younger subjects. Additionally, the N170 appears to be less face-sensitive in older observers.
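The single-trial regression and its R2 time course can be illustrated with a toy model. This is a minimal sketch, assuming one regressor (percent phase information) fit by ordinary least squares independently at every post-stimulus time point; the function and variable names are hypothetical, and the paper's hierarchical model is considerably richer.

```python
import numpy as np

def r2_timecourse(erp, phase_info):
    """Fit amplitude ~ intercept + phase_info at each time point.

    erp:        (n_trials, n_timepoints) single-trial amplitudes
    phase_info: (n_trials,) percent phase information per trial
    Returns the R^2 of the fit at every time point."""
    X = np.column_stack([np.ones_like(phase_info), phase_info])
    beta, *_ = np.linalg.lstsq(X, erp, rcond=None)  # (2, n_timepoints)
    residual = erp - X @ beta
    ss_res = (residual ** 2).sum(axis=0)
    ss_tot = ((erp - erp.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot
```

Comparing when the resulting R2 curve rises and peaks across age groups is what yields a processing-delay estimate of the kind described above.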

    Electrophysiological evidence for an early processing of human voices

    Background: Previous electrophysiological studies have identified a "voice specific response" (VSR) peaking around 320 ms after stimulus onset, a latency markedly longer than the 70 ms needed to discriminate living from non-living sound sources and the 150-200 ms needed for the processing of voice paralinguistic qualities. In the present study, we investigated whether an early electrophysiological difference between voice and non-voice stimuli could be observed. Results: ERPs were recorded from 32 healthy volunteers who listened to 200 ms long stimuli from three sound categories (voices, bird songs and environmental sounds) whilst performing a pure-tone detection task. ERP analyses revealed voice/non-voice amplitude differences emerging as early as 164 ms post stimulus onset and peaking around 200 ms on fronto-temporal (positivity) and occipital (negativity) electrodes. Conclusion: Our electrophysiological results suggest a rapid brain discrimination of sounds of voice, termed the "fronto-temporal positivity to voices" (FTPV), at latencies comparable to the well-known face-preferential N170.

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) Over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) Concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.
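The classification image idea can be shown in its simplest behavioural form: average the noise fields that accompanied each response category and subtract. This generic reverse-correlation sketch is only an analogue of the technique described above (which was applied jointly to behavioural and EEG data); the function name is hypothetical.

```python
import numpy as np

def classification_image(noise_fields, responses):
    """Generic reverse correlation: subtract the mean noise field for one
    response category from the mean for the other. Pixels with large
    positive or negative values drove the observer's categorisation."""
    noise_fields = np.asarray(noise_fields, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    return (noise_fields[responses].mean(axis=0)
            - noise_fields[~responses].mean(axis=0))
```

Applied at successive time points, such maps reveal which image regions are associated with the neural or behavioural response at each moment.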

    Longer fixation duration while viewing face images

    The spatio-temporal properties of saccadic eye movements can be influenced by the cognitive demand and the characteristics of the observed scene. Probably due to its crucial role in social communication, it is argued that face perception may involve different cognitive processes compared with non-face object or scene perception. In this study, we investigated whether and how face and natural scene images can influence the patterns of visuomotor activity. We recorded monkeys’ saccadic eye movements as they freely viewed monkey face and natural scene images. The face and natural scene images attracted a similar number of fixations, but viewing of faces was accompanied by longer fixations compared with natural scenes. These longer fixations were dependent on the context of facial features. The duration of fixations directed at facial contours decreased when the face images were scrambled, and increased at the later stage of normal face viewing. The results suggest that face and natural scene images can generate different patterns of visuomotor activity. The extra fixation duration on faces may be correlated with the detailed analysis of facial features.

    Large-scale replication study reveals a limit on probabilistic prediction in language comprehension

    Do people routinely pre-activate the meaning and even the phonological form of upcoming words? The most acclaimed evidence for phonological prediction comes from a 2005 Nature Neuroscience publication by DeLong, Urbach and Kutas, who observed a graded modulation of electrical brain potentials (N400) to nouns and preceding articles by the probability that people use a word to continue the sentence fragment (‘cloze’). In our direct replication study spanning 9 laboratories (N=334), pre-registered replication analyses and exploratory Bayes factor analyses successfully replicated the noun results but, crucially, not the article results. Pre-registered single-trial analyses also yielded a statistically significant effect for the nouns but not the articles. Exploratory Bayesian single-trial analyses showed that the article effect may be non-zero but is likely far smaller than originally reported and too small to observe without very large sample sizes. Our results do not support the view that readers routinely pre-activate the phonological form of predictable words.

    The role of configurality in the Thatcher illusion: an ERP study.

    The Thatcher illusion (Thompson in Perception, 9, 483-484, 1980) is often explained as resulting from recognising a distortion of configural information when 'Thatcherised' faces are upright but not when inverted. However, recent behavioural studies suggest that there is an absence of perceptual configurality in upright Thatcherised faces (Donnelly et al. in Attention, Perception & Psychophysics, 74, 1475-1487, 2012) and both perceptual and decisional sources of configurality in behavioural tasks with Thatcherised stimuli (Mestry, Menneer et al. in Frontiers in Psychology, 3, 456, 2012). To examine sources linked to the behavioural experience of the illusion, we studied inversion and Thatcherisation of faces (comparing across conditions in which no features, the eyes, the mouth, or both features were Thatcherised) on a set of event-related potential (ERP) components. Effects of inversion were found at the N170, P2 and P3b. Effects of eye condition were restricted to the N170 generated in the right hemisphere. Critically, an interaction of orientation and eye Thatcherisation was found for the P3b amplitude. Results from an individual with acquired prosopagnosia, who can discriminate Thatcherised from typical faces but cannot categorise them or perceive the illusion (Mestry, Donnelly et al. in Neuropsychologia, 50, 3410-3418, 2012), only differed from typical participants at the P3b component. Findings suggest the P3b links most directly to the experience of the illusion. Overall, the study showed evidence consistent with both perceptual and decisional sources and the need to consider both in relation to configurality.

    Age-related changes in global motion coherence: conflicting haemodynamic and perceptual responses

    Our aim was to use both behavioural and neuroimaging data to identify indicators of perceptual decline in motion processing. We employed a global motion coherence task and functional Near Infrared Spectroscopy (fNIRS). Healthy adults (n = 72, aged 18-85) were recruited into the following groups: young (n = 28, mean age = 28), middle-aged (n = 22, mean age = 50), and older adults (n = 23, mean age = 70). Participants were assessed on their motion coherence thresholds at three different speeds using a psychophysical design. As expected, we report age group differences in motion processing, as demonstrated by higher motion coherence thresholds in older adults. Crucially, we add correlational data showing that global motion perception declines linearly as a function of age. The associated fNIRS recordings provide a clear physiological correlate of global motion perception. The crux of this study lies in the robust linear correlation between age and haemodynamic response for both measures of oxygenation. We hypothesise that there is an increase in neural recruitment, necessitating an increase in metabolic need and blood flow, which presents as a higher oxygenated haemoglobin response. We report age-related changes in motion perception, with poorer behavioural performance (higher motion coherence thresholds) associated with an increased haemodynamic response.
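A motion coherence threshold of the kind reported here is usually read off a fitted psychometric function. Below is a minimal sketch, assuming a Weibull function with a hypothetical 50% guess rate and 2% lapse rate; the abstract does not specify the study's actual fitting procedure, so the function names and parameters are assumptions.

```python
import numpy as np

def weibull(coherence, alpha, beta, gamma=0.5, lam=0.02):
    """Weibull psychometric function: proportion correct as a function of
    motion coherence. alpha sets the threshold location, beta the slope,
    gamma the guess rate, and lam the lapse rate."""
    return gamma + (1 - gamma - lam) * (1 - np.exp(-(coherence / alpha) ** beta))

def threshold_at(p, alpha, beta, gamma=0.5, lam=0.02):
    """Invert the Weibull to find the coherence giving proportion correct p."""
    return alpha * (-np.log(1 - (p - gamma) / (1 - gamma - lam))) ** (1 / beta)
```

With fitted parameters alpha and beta in hand, threshold_at(0.75, alpha, beta) gives the coherence level at which an observer is 75% correct; higher values indicate poorer motion sensitivity, the pattern reported above for older adults.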

    A Preference for Contralateral Stimuli in Human Object- and Face-Selective Cortex

    Visual input from the left and right visual fields is processed predominantly in the contralateral hemisphere. Here we investigated whether this preference for contralateral over ipsilateral stimuli is also found in high-level visual areas that are important for the recognition of objects and faces. Human subjects were scanned with functional magnetic resonance imaging (fMRI) while they viewed and attended faces, objects, scenes, and scrambled images in the left or right visual field. With our stimulation protocol, primary visual cortex responded only to contralateral stimuli. The contralateral preference was smaller in object- and face-selective regions, and it was smallest in the fusiform gyrus. Nevertheless, each region showed a significant preference for contralateral stimuli. These results indicate that sensitivity to stimulus position is present even in high-level ventral visual cortex.

    Ecological expected utility and the mythical neural code

    Neural spikes are an evolutionarily ancient innovation that remains nature’s unique mechanism for rapid, long-distance information transfer. It is now known that neural spikes subserve a wide variety of functions and essentially all of the basic questions about the communication role of spikes have been answered. Current efforts focus on the neural communication of probabilities and utility values involved in decision making. Significant progress is being made, but many framing issues remain. One basic problem is that the metaphor of a neural code suggests a communication network rather than a recurrent computational system like the real brain. We propose studying the various manifestations of neural spike signaling as adaptations that optimize a utility function called ecological expected utility.