
    The rapid emergence of stimulus specific perceptual learning

    Is stimulus-specific perceptual learning the result of extended practice, or does it emerge early in the time course of learning? We examined this issue by manipulating the amount of practice given on a face identification task on Day 1 and altering the familiarity of the stimuli on Day 2. We found that a small number of trials was sufficient to produce stimulus-specific perceptual learning of faces: on Day 2, response accuracy decreased by the same amount for novel stimuli regardless of whether observers had completed 105 or 840 practice trials on Day 1. Current models of learning assume early procedural improvements followed by late stimulus-specific gains. Our results show instead that stimulus-specific and procedural improvements are distributed throughout the time course of learning.

    Effects of aging on identifying emotions conveyed by point-light walkers

    M.G. was supported by EC FP7 HBP (grant 604102), PITN-GA-011-290011 (ABC), FP7-ICT-2013-10/611909 (KOROIBOT), grants GI 305/4-1 and KA 1258/15-1, and BMBF FKZ 01GQ1002A. K.S.P. was supported by a BBSRC New Investigator Grant. A.B.S. and P.J.B. were supported by an operating grant (528206) from the Canadian Institutes of Health Research. The authors also thank Donna Waxman for her valuable help in data collection for all experiments described here.

    Parametric study of EEG sensitivity to phase noise during face processing

    Background: The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. Results: Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the phase noise sensitivity observed in response to faces. Conclusion: Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
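    The phase manipulation described above can be illustrated with a short sketch. The following Python/NumPy code is a simplified, hypothetical implementation of weighted phase interpolation with a fixed amplitude spectrum, not the authors' stimulus-generation code; the function name and the (naive) handling of phase wrapping are assumptions for illustration only.

```python
import numpy as np

def phase_noise_image(image, noise_level, rng=None):
    """Blend the phase spectrum of `image` with random phase while keeping
    its amplitude spectrum fixed. noise_level = 0.0 returns the original
    image; 1.0 returns pure phase noise.

    Simplified sketch: real stimulus-generation code typically handles
    phase wrapping and luminance normalization more carefully.
    """
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    # Weighted (linear) interpolation between original and random phase.
    new_phase = (1.0 - noise_level) * phase + noise_level * random_phase
    blended = amplitude * np.exp(1j * new_phase)
    return np.real(np.fft.ifft2(blended))

# Example: generate the 11 noise levels mentioned in the abstract (0% to 100%).
# face = ...  # 2-D luminance array
# stimuli = [phase_noise_image(face, w / 100) for w in range(0, 101, 10)]
```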

    BayesFit: A tool for modeling psychophysical data using Bayesian inference

    BayesFit is a module for Python that allows users to fit models to psychophysical data using Bayesian inference. The module aims to make it easier to develop probabilistic models for psychophysical data in Python by providing a simple API that streamlines the process of defining psychophysical models, obtaining fits, extracting outputs, and visualizing fitted models. Our software implementation uses numerical integration as the primary tool to fit models, which avoids the complications that arise when using Markov Chain Monte Carlo (MCMC) methods [1]. The source code for BayesFit is available at https://github.com/slugocm/bayesfit and API documentation at http://www.slugocm.ca/bayesfit/. The module is extensible, and because most of its functions rely primarily on NumPy [2], it can be maintained as newer versions of Python are released, ensuring researchers always have a tool available to ease the process of fitting models to psychophysical data.
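    To illustrate the grid-based numerical-integration approach mentioned above (as opposed to MCMC), here is a minimal, self-contained Python sketch with made-up 2AFC data. It does not use BayesFit's actual API; the psychometric function, priors, and parameter ranges are illustrative assumptions.

```python
import numpy as np
from scipy.special import expit  # logistic function

# Toy 2AFC data: stimulus intensity, number correct, number of trials.
intensity = np.array([0.02, 0.04, 0.06, 0.08, 0.10, 0.12])
n_correct = np.array([11, 12, 15, 18, 19, 20])
n_trials  = np.full_like(n_correct, 20)

def psychometric(x, alpha, beta, gamma=0.5, lam=0.02):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1 - gamma - lam) * expit(beta * (x - alpha))

# Parameter grid (threshold alpha, slope beta) with flat priors.
alphas = np.linspace(0.01, 0.15, 201)
betas  = np.linspace(5, 200, 201)
A, B = np.meshgrid(alphas, betas, indexing="ij")

# Binomial log-likelihood summed over intensities, evaluated on the grid.
log_like = np.zeros_like(A)
for x, k, n in zip(intensity, n_correct, n_trials):
    p = psychometric(x, A, B)
    log_like += k * np.log(p) + (n - k) * np.log(1 - p)

# Normalize to a posterior by numerical integration (summation) over the grid.
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

alpha_hat = (A * posterior).sum()   # posterior mean threshold
beta_hat  = (B * posterior).sum()   # posterior mean slope
print(f"threshold ~ {alpha_hat:.3f}, slope ~ {beta_hat:.1f}")
```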

    How Prevalent Is Object-Based Attention?

    Previous research suggests that visual attention can be allocated to locations in space (space-based attention) and to objects (object-based attention). The cueing effects associated with space-based attention tend to be large and are found consistently across experiments. Object-based attention effects, however, are small and found less consistently across experiments. In three experiments, we addressed the possibility that the variability in object-based attention effects across studies reflects a low incidence of such effects at the level of individual subjects. Experiment 1 measured space-based and object-based cueing effects for horizontal and vertical rectangles in 60 subjects, comparing commonly used target detection and discrimination tasks. In Experiment 2, we ran another 120 subjects in a target discrimination task in which rectangle orientation varied between subjects. Using parametric statistical methods, we found object-based effects only for horizontal rectangles. Bootstrapping methods were used to measure effects in individual subjects. Significant space-based cueing effects were found in nearly all subjects in both experiments, across tasks and rectangle orientations. However, only a small number of subjects exhibited significant object-based cueing effects. Experiment 3 measured only object-based attention effects using another common paradigm and, again using bootstrapping, we found only a small number of subjects that exhibited significant object-based cueing effects. Our results show that object-based effects are more prevalent for horizontal rectangles, which is in accordance with the theory that attention may be allocated more easily along the horizontal meridian. The fact that so few individuals exhibit a significant object-based cueing effect likely explains why previous studies of this effect have yielded inconsistent results. The results from the current study highlight the importance of considering individual subject data in addition to commonly used statistical methods.
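    As a rough illustration of the individual-subject bootstrap analysis mentioned above, the following Python sketch computes a percentile-bootstrap confidence interval for one subject's cueing effect. The variable names, the 95% interval, and the resampling scheme are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def bootstrap_cueing_effect(rt_invalid, rt_valid, n_boot=10000, seed=0):
    """Percentile-bootstrap CI for one subject's cueing effect
    (mean invalid-cue RT minus mean valid-cue RT, in ms).

    The effect counts as significant for this subject if the CI excludes zero.
    """
    rng = np.random.default_rng(seed)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        inv = rng.choice(rt_invalid, size=rt_invalid.size, replace=True)
        val = rng.choice(rt_valid, size=rt_valid.size, replace=True)
        effects[i] = inv.mean() - val.mean()
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return rt_invalid.mean() - rt_valid.mean(), (lo, hi)

# Hypothetical usage with one subject's reaction times (ms):
# effect, ci = bootstrap_cueing_effect(rts_invalid_same_object, rts_valid)
```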

    Age-related delay in information accrual for faces: Evidence from a parametric, single-trial EEG approach

    Background: In this study, we quantified age-related changes in the time course of face processing by means of an innovative single-trial ERP approach. Unlike analyses used in previous studies, our approach does not rely on peak measurements and can provide a more sensitive measure of processing delays. Young and old adults (mean ages 22 and 70 years) performed a non-speeded discrimination task between two faces. The phase spectrum of these faces was manipulated parametrically to create pictures that ranged between pure noise (0% phase information) and the undistorted signal (100% phase information), with five intermediate steps. Results: Behavioural 75% correct thresholds were on average lower, and maximum accuracy was higher, in younger than older observers. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The earliest age-related ERP differences occurred in the time window of the N170. Older observers had a significantly stronger N170 in response to noise, but this age difference decreased with increasing phase information. Overall, manipulating image phase information had a greater effect on ERPs from younger observers, which was quantified using a hierarchical modelling approach. Importantly, visual activity was modulated by the same stimulus parameters in younger and older subjects. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time course of the R2 function showed significantly slower processing in older observers, starting around 120 ms after stimulus onset. This age-related delay increased over time to reach a maximum around 190 ms, at which point younger observers led older observers by about 50 ms. Conclusion: Using a component-free ERP analysis that provides a precise timing of the visual system's sensitivity to image structure, the current study demonstrates that older observers accumulate face information more slowly than younger subjects. Additionally, the N170 appears to be less face-sensitive in older observers.
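    The single-trial regression approach and R2 time course described above can be sketched as follows. This simplified Python example regresses one electrode's single-trial amplitudes on the percentage of phase information, independently at each time point; the authors' actual model may include additional regressors, so treat the details as assumptions.

```python
import numpy as np

def r2_timecourse(eeg, phase_info):
    """R^2 of a linear regression of single-trial EEG amplitude on stimulus
    phase information, computed independently at every time point.

    eeg        : array, shape (n_trials, n_timepoints), e.g. one electrode
    phase_info : array, shape (n_trials,), percent phase information per trial
    """
    X = np.column_stack([np.ones(len(phase_info)), phase_info])  # intercept + slope
    n_trials, n_time = eeg.shape
    r2 = np.empty(n_time)
    for t in range(n_time):
        y = eeg[:, t]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
        ss_res = np.sum((y - X @ beta) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2[t] = 1.0 - ss_res / ss_tot
    return r2

# Comparing the resulting R^2 curves between age groups (for example, the
# latency at which R^2 first exceeds a fixed criterion) is one way to
# quantify a processing delay.
```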

    Healthy Aging Delays Scalp EEG Sensitivity to Noise in a Face Discrimination Task

    We used a single-trial ERP approach to quantify age-related changes in the time course of noise sensitivity. A total of 62 healthy adults, aged between 19 and 98, performed a non-speeded discrimination task between two faces. Stimulus information was controlled by parametrically manipulating the phase spectrum of these faces. Behavioral 75% correct thresholds increased with age. This result may be explained by lower signal-to-noise ratios in older brains. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The fit of the model, indexed by R2, was computed at multiple post-stimulus time points. The time course of the R2 function showed significantly delayed noise sensitivity in older observers. This age effect is reliable, as demonstrated by test–retest in 24 subjects, and started about 120 ms after stimulus onset. Our analyses also suggest a qualitative change from a young to an older pattern of brain activity at around 47 ± 4 years of age.

    Spontaneous blinking and brain health in aging: Large-scale evaluation of blink-related oscillations across the lifespan

    Blink-related oscillations (BROs) are newly discovered neurophysiological brainwave responses associated with spontaneous blinking, and they represent environmental monitoring and awareness processes as the brain evaluates new visual information appearing after eye re-opening. BRO responses have been demonstrated in healthy young adults across multiple task states and are modulated by both task and environmental factors, but little is known about this phenomenon in aging. To address this, we undertook the first large-scale evaluation of BRO responses in healthy aging using the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) repository, which contains magnetoencephalography (MEG) data from a large sample (N = 457) of healthy adults across a broad age range (18–88 years) during the performance of a simple target detection task. The results showed that BRO responses were present in all age groups, and the associated effects exhibited significant age-related modulations, comprising an increase in sensor-level global field power (GFP) and in source-level theta and alpha spectral power within the bilateral precuneus. Additionally, the extent of cortical activations showed an inverted-U relationship with age, consistent with neurocompensation in aging. Crucially, these age-related differences were not observed in behavioral measures of task performance such as reaction time and accuracy, suggesting that blink-related neural responses during the target detection task are more sensitive in capturing aging-related changes in brain function than behavioral measures alone. Together, these results suggest that BRO responses are not only present throughout the adult lifespan, but their effects can also capture brain function changes in healthy aging, thus providing a simple yet powerful avenue for evaluating brain health in aging.
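    As a minimal illustration of the sensor-level global field power (GFP) measure mentioned above, the following Python snippet computes GFP as the spatial standard deviation across sensors at each time point. Epoching around blink onset, baseline correction, and all other preprocessing are omitted and assumed to have been done elsewhere.

```python
import numpy as np

def global_field_power(data):
    """Global field power (GFP): the spatial standard deviation across
    sensors at each time point.

    data : array, shape (n_sensors, n_timepoints), e.g. blink-locked MEG
           data averaged across blink epochs for one subject.
    """
    return data.std(axis=0)

# Example: gfp = global_field_power(blink_locked_average)  # shape (n_timepoints,)
```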

    Perception of Biological Motion in Schizophrenia and Healthy Individuals: A Behavioral and fMRI Study

    Background: Anomalous visual perception is a common feature of schizophrenia, plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests an impairment of biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify the neural concomitants of those deficits. Methodology/Findings: In Experiment 1, we measured the ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion, in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia