
    Effects of aging on face identification and holistic face processing

    Abstract: Several studies have shown that face identification accuracy is lower in older than younger adults. This effect of aging might be due to age differences in holistic processing, which is thought to be an important component of human face processing. Currently, however, there is conflicting evidence as to whether holistic face processing is impaired in older adults. The current study therefore re-examined this issue by measuring response accuracy in a 1-of-4 face identification task and the composite face effect (CFE), a common index of holistic processing, in older adults. Consistent with previous reports, we found that face identification accuracy was lower in older adults than in younger adults tested in the same task. We also found a significant CFE in older adults that was similar in magnitude to the CFE measured in younger subjects with the same task. Finally, we found that there was a significant positive correlation between the CFE and face identification accuracy. This last result differs from the results obtained in a previous study that used the same tasks and found no evidence of an association between the CFE and face identification accuracy in younger adults. Furthermore, the age difference was found with subtraction-, regression-, and ratio-based estimates of the CFE. The current findings are consistent with previous claims that older adults rely more heavily on holistic processing to identify objects in conditions of limited processing resources.
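
    For readers unfamiliar with these indices, the sketch below shows one common way subtraction-, regression-, and ratio-based CFE estimates can be computed from per-subject accuracies in aligned and misaligned composite conditions. It is a minimal illustration with made-up numbers, not the analysis code from the study, and the exact conditions entering the computation (e.g., congruent versus incongruent trials) may differ from those used here.

        import numpy as np

        # Hypothetical per-subject accuracies in the two composite conditions
        # (values are illustrative, not data from the study).
        acc_misaligned = np.array([0.82, 0.75, 0.90, 0.68, 0.79])
        acc_aligned    = np.array([0.70, 0.66, 0.81, 0.60, 0.74])

        # Subtraction-based CFE: raw accuracy difference.
        cfe_subtraction = acc_misaligned - acc_aligned

        # Ratio-based CFE: difference scaled by misaligned performance.
        cfe_ratio = (acc_misaligned - acc_aligned) / acc_misaligned

        # Regression-based CFE: regress aligned accuracy on misaligned accuracy
        # and take (sign-flipped) residuals, so a larger value indicates a
        # stronger holistic interference effect after removing shared variance.
        slope, intercept = np.polyfit(acc_misaligned, acc_aligned, 1)
        cfe_regression = (slope * acc_misaligned + intercept) - acc_aligned

        print(cfe_subtraction, cfe_ratio, cfe_regression)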

    Differences in discrimination of eye and mouth displacement in autism spectrum disorders

    Abstract: Individuals with Autism Spectrum Disorders (ASD) have been found to have impairments in some face recognition tasks [e.g., Boucher, J., & Lewis, V. (1992). Unfamiliar face recognition in relatively able autistic children. Journal of Child Psychology and Psychiatry, 33, 843–859.], and it has been suggested that this impairment occurs because these individuals do not spontaneously attend to the eyes [e.g., Pelphrey, K. A., Sasson, N. J., Reznick, J. S., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual scanning of faces in autism. Journal of Autism and Developmental Disorders, 32, 249–261.], or attend selectively to the mouth [e.g., Langdell, T. (1978). Recognition of faces: An approach to the study of autism. Journal of Child Psychology and Psychiatry and Allied Disciplines, 19, 255–268; Joseph, R. M., & Tanaka, J. (2003). Holistic and part-based face recognition in children with autism. Journal of Child Psychology and Psychiatry, 44, 529–542.]. Here, we test whether the eyes or the mouth are attended to preferentially by 16 males with ASD and 19 matched controls. Participants discriminated small spatial displacements of the eyes and the mouth. If the mouth region were attended to preferentially by individuals with ASD, we would expect ASD observers to be better at detecting subtle changes in mouth than eye displacements, relative to controls. Further, following Barton [Barton, J. J. S., Keenan, J. P., & Bass, T. (2001). Discrimination of spatial relations and features in faces: Effects of inversion and viewing duration. British Journal of Psychology, 92, 527–549.], we would expect to see differences in inversion effects as a function of feature manipulation between ASD and control groups. We found that individuals with ASD performed significantly differently than controls for the eye, but not the mouth, trials. However, we found no difference in inversion effects between the two groups of observers. Furthermore, we found evidence of distinct subclasses of individuals with ASD: those who performed normally, and those who were impaired. These results suggest that typical individuals are better able to make use of information in the eyes than some individuals with ASD, but that there is no clear autism “advantage” in the use of information in the mouth region.

    The effects of aging on orientation discrimination

    Abstract: The current experiments measured orientation discrimination thresholds in younger (mean age ≈ 23 years) and older (mean age ≈ 66 years) subjects. In Experiment 1, the contrast needed to discriminate Gabor patterns (0.75, 1.5, and 3 c/deg) that differed in orientation by 12 deg was measured for different levels of external noise. At all three spatial frequencies, discrimination thresholds were significantly higher in older than younger subjects when external noise was low, but not when external noise was high. In Experiment 2, orientation discrimination thresholds were measured as a function of stimulus contrast: the orientation difference was varied while contrast was held fixed at each level. The resulting threshold-vs-contrast curves had very similar shapes in the two age groups, although the curve obtained from older subjects was shifted to slightly higher contrasts. At contrasts greater than 0.05, thresholds in both older and younger subjects were approximately constant at 0.5 deg. The results from Experiments 1 and 2 suggest that age differences in orientation discrimination are due solely to differences in equivalent input noise. Using the same methods as Experiment 1, Experiment 3 measured thresholds in 6 younger observers as a function of external noise and retinal illuminance. Although reducing retinal illumination increased equivalent input noise, the effect was much smaller than the age difference found in Experiment 1. Therefore, it is unlikely that age differences in orientation discrimination were due solely to differences in retinal illumination. Our findings are consistent with recent physiological experiments that have found elevated spontaneous activity and reduced orientation tuning in visual cortical neurons of senescent cats (Hua, T., Li, X., He, L., Zhou, Y., Wang, Y., & Leventhal, A. G. (2006). Functional degradation of visual cortical cells in old cats. Neurobiology of Aging, 27(1), 155–162) and monkeys (Yu, S., Wang, Y., Li, X., Zhou, Y., & Leventhal, A. G. (2006). Functional degradation of visual cortex in senescent rhesus monkeys. Neuroscience, 140(3), 1023–1029; Leventhal, A. G., Wang, Y., Pu, M., Zhou, Y., & Ma, Y. (2003). GABA and its agonists improved visual cortical function in senescent monkeys. Science, 300(5620), 812–815).
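
    The equivalent-input-noise interpretation can be made concrete with the standard linear-amplifier model, in which squared contrast thresholds grow linearly with external noise power. The sketch below shows how an equivalent-noise estimate might be obtained by fitting that model; the numbers are illustrative placeholders, not data from these experiments.

        import numpy as np
        from scipy.optimize import curve_fit

        # Linear-amplifier model of threshold vs. external noise:
        #   c_th**2 = k * (N_ext + N_eq),
        # where N_eq is the observer's equivalent input noise and k is related
        # to calculation efficiency.
        def lam_model(N_ext, k, N_eq):
            return np.sqrt(k * (N_ext + N_eq))

        N_ext = np.array([0.0, 1e-5, 1e-4, 1e-3, 1e-2])    # external noise power
        c_th  = np.array([0.02, 0.021, 0.025, 0.05, 0.15])  # measured contrast thresholds

        params, _ = curve_fit(lam_model, N_ext, c_th, p0=[1.0, 1e-4])
        k_hat, N_eq_hat = params
        print(f"efficiency parameter k = {k_hat:.3g}, equivalent input noise = {N_eq_hat:.3g}")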

    Effects of aging on identifying emotions conveyed by point-light walkers

    M.G. was supported by EC FP7 HBP (grant 604102), PITN-GA-011-290011 (ABC), FP7-ICT-2013-10/611909 (KOROIBOT), GI 305/4-1, KA 1258/15-1, and BMBF FKZ 01GQ1002A. K.S.P. was supported by a BBSRC New Investigator Grant. A.B.S. and P.J.B. were supported by an operating grant (528206) from the Canadian Institutes of Health Research. The authors also thank Donna Waxman for her valuable help in data collection for all experiments described here. Peer reviewed. Postprint.

    The rapid emergence of stimulus specific perceptual learning

    Is stimulus specific perceptual learning the result of extended practice or does it emerge early in the time course of learning? We examined this issue by manipulating the amount of practice given on a face identification task on Day 1, and altering the familiarity of stimuli on Day 2. We found that a small number of trials was sufficient to produce stimulus specific perceptual learning of faces: on Day 2, response accuracy decreased by the same amount for novel stimuli regardless of whether observers practiced 105 or 840 trials on Day 1. Current models of learning assume early procedural improvements followed by late stimulus specific gains. Our results show that stimulus specific and procedural improvements are distributed throughout the time course of learning.

    BayesFit: A tool for modeling psychophysical data using Bayesian inference

    BayesFit is a module for Python that allows users to fit models to psychophysical data using Bayesian inference. The module aims to make it easier to develop probabilistic models for psychophysical data in Python by providing users with a simple API that streamlines the process of defining psychophysical models, obtaining fits, extracting outputs, and visualizing fitted models. Our software implementation uses numerical integration as the primary tool to fit models, which avoids the complications that arise in using Markov Chain Monte Carlo (MCMC) methods [1]. The source code for BayesFit is available at https://github.com/slugocm/bayesfit and API documentation at http://www.slugocm.ca/bayesfit/. The module is extensible, and because many of its functions rely primarily on NumPy [2], it can continue to be used as newer versions of Python are developed, ensuring that researchers always have a tool available to ease the process of fitting models to psychophysical data.
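
    As a rough illustration of the numerical-integration approach described above (this is not the BayesFit API; the function names, parameter grids, and data below are assumptions made for the example), a posterior over the parameters of a cumulative-Gaussian psychometric function can be evaluated on a grid and then marginalized:

        import numpy as np
        from scipy.stats import norm

        # Illustrative 2AFC data: stimulus intensities, trials, and correct responses.
        intensities = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
        n_trials    = np.array([40, 40, 40, 40, 40])
        n_correct   = np.array([22, 25, 31, 37, 40])

        def psychometric(x, alpha, beta, gamma=0.5, lapse=0.01):
            # Cumulative-Gaussian psychometric function with guess rate gamma
            # and lapse rate lapse.
            return gamma + (1 - gamma - lapse) * norm.cdf(x, loc=alpha, scale=beta)

        alphas = np.linspace(0.005, 0.2, 200)   # candidate thresholds
        betas  = np.linspace(0.005, 0.2, 200)   # candidate slopes (sd of the cdf)
        A, B = np.meshgrid(alphas, betas, indexing="ij")

        log_post = np.zeros_like(A)
        for x, n, k in zip(intensities, n_trials, n_correct):
            p = psychometric(x, A, B)
            log_post += k * np.log(p) + (n - k) * np.log(1 - p)   # binomial log-likelihood
        log_post -= log_post.max()              # avoid underflow before exponentiating
        post = np.exp(log_post)
        post /= post.sum()                      # flat prior over the grid

        threshold_estimate = (post.sum(axis=1) * alphas).sum()   # posterior mean of alpha
        print(f"posterior mean threshold ~ {threshold_estimate:.3f}")

    Evaluating the posterior on a dense grid sidesteps MCMC sampling and its convergence diagnostics entirely, which appears to be the simplification the description above alludes to.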

    Inversion Leads to Quantitative, Not Qualitative, Changes in Face Processing

    Abstract: Humans are remarkably adept at recognizing objects across a wide range of views. A notable exception to this general rule is that turning a face upside down makes it particularly difficult to recognize [1–3]. This striking effect has prompted speculation that inversion qualitatively changes the way faces are processed. Researchers commonly assume that configural cues strongly influence the recognition of upright, but not inverted, faces [3–5]. Indeed, the assumption is so well accepted that the inversion effect itself has been taken as a hallmark of qualitative processing differences [6]. Here, we took a novel approach to understand the inversion effect. We used response classification [7–10] to obtain a direct view of the perceptual strategies underlying face discrimination and to determine whether orientation effects can be explained by differential contributions of nonlinear processes. Inversion significantly impaired performance in our face discrimination task. However, surprisingly, observers utilized similar, local regions of faces for discrimination in both upright and inverted face conditions, and the relative contributions of nonlinear mechanisms to performance were similar across orientations. Our results suggest that upright and inverted face processing differ quantitatively, not qualitatively; information is extracted more efficiently from upright faces, perhaps as a by-product of orientation-dependent expertise.
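
    As background on the response-classification technique mentioned above, the sketch below shows the core computation of a classification image: averaging the trial-by-trial noise fields conditioned on the stimulus shown and the observer's response. Everything here (trial counts, noise fields, simulated responses) is a placeholder, not the stimuli or analysis pipeline from this study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, h, w = 2000, 64, 64
        noise = rng.normal(size=(n_trials, h, w))     # noise field added on each trial
        signal = rng.integers(0, 2, n_trials)         # which face was shown (0 or 1)
        response = signal.copy()                      # simulate a mostly-correct observer
        flip = rng.random(n_trials) < 0.25            # 25% simulated errors
        response[flip] = 1 - response[flip]

        # Classification image: for each stimulus, subtract the mean noise on
        # "face 0" responses from the mean noise on "face 1" responses, then sum
        # across stimuli. Pixels far from zero are those that drove responses.
        ci = np.zeros((h, w))
        for s in (0, 1):
            sel = signal == s
            ci += noise[sel & (response == 1)].mean(axis=0) \
                - noise[sel & (response == 0)].mean(axis=0)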

    How Prevalent Is Object-Based Attention?

    Previous research suggests that visual attention can be allocated to locations in space (space-based attention) and to objects (object-based attention). The cueing effects associated with space-based attention tend to be large and are found consistently across experiments. Object-based attention effects, however, are small and found less consistently across experiments. In three experiments we address the possibility that variability in object-based attention effects across studies reflects a low incidence of such effects at the level of individual subjects. Experiment 1 measured space-based and object-based cueing effects for horizontal and vertical rectangles in 60 subjects, comparing commonly used target detection and discrimination tasks. In Experiment 2 we ran another 120 subjects in a target discrimination task in which rectangle orientation varied between subjects. Using parametric statistical methods, we found object-based effects only for horizontal rectangles. Bootstrapping methods were used to measure effects in individual subjects. Significant space-based cueing effects were found in nearly all subjects in both experiments, across tasks and rectangle orientations. However, only a small number of subjects exhibited significant object-based cueing effects. Experiment 3 measured only object-based attention effects using another common paradigm and again, using bootstrapping, we found that only a small number of subjects exhibited significant object-based cueing effects. Our results show that object-based effects are more prevalent for horizontal rectangles, which is in accordance with the theory that attention may be allocated more easily along the horizontal meridian. The fact that so few individuals exhibit a significant object-based cueing effect may explain why previous studies of this effect have yielded inconsistent results. The results from the current study highlight the importance of considering individual subject data in addition to commonly used statistical methods.
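
    For concreteness, a per-subject bootstrap test of a cueing effect might look like the sketch below; the exact resampling scheme and dependent measure used in the study may differ, and the reaction times here are simulated placeholders.

        import numpy as np

        rng = np.random.default_rng(1)
        rt_invalid = rng.normal(520, 60, 80)   # RTs on invalid-cue trials (ms)
        rt_valid   = rng.normal(500, 60, 80)   # RTs on valid-cue trials (ms)

        def bootstrap_ci(a, b, n_boot=10000, rng=rng):
            """95% bootstrap CI for the mean difference a - b (resampling within condition)."""
            diffs = np.empty(n_boot)
            for i in range(n_boot):
                diffs[i] = rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
            return np.percentile(diffs, [2.5, 97.5])

        lo, hi = bootstrap_ci(rt_invalid, rt_valid)
        # The cueing effect is deemed significant for this subject if the CI excludes 0.
        print(f"cueing effect 95% CI: [{lo:.1f}, {hi:.1f}] ms, significant: {lo > 0 or hi < 0}")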

    Healthy Aging Delays Scalp EEG Sensitivity to Noise in a Face Discrimination Task

    We used a single-trial ERP approach to quantify age-related changes in the time course of noise sensitivity. A total of 62 healthy adults, aged between 19 and 98, performed a non-speeded discrimination task between two faces. Stimulus information was controlled by parametrically manipulating the phase spectrum of these faces. Behavioral 75% correct thresholds increased with age. This result may be explained by lower signal-to-noise ratios in older brains. ERPs from each subject were entered into a single-trial general linear regression model to identify variations in neural activity statistically associated with changes in image structure. The fit of the model, indexed by R², was computed at multiple post-stimulus time points. The time course of the R² function showed significantly delayed noise sensitivity in older observers. This age effect is reliable, as demonstrated by test–retest in 24 subjects, and started about 120 ms after stimulus onset. Our analyses also suggest a qualitative change from a young to an older pattern of brain activity at around 47 ± 4 years old.
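
    The single-trial regression logic can be sketched as follows: at each post-stimulus time point, single-trial EEG amplitudes are regressed on the trial-by-trial stimulus information, and the model fit (R²) is recorded, giving a time course from which an onset of noise sensitivity can be read off. The data and onset criterion below are simulated placeholders, not the recordings or criteria from the study.

        import numpy as np

        rng = np.random.default_rng(2)
        n_trials, n_times = 400, 300
        coherence = rng.uniform(0, 1, n_trials)        # stimulus information per trial
        eeg = rng.normal(size=(n_trials, n_times))     # simulated single-trial amplitudes
        eeg[:, 120:200] += 2.0 * coherence[:, None]    # inject an effect in a late window

        # Regress amplitude on coherence at every time point and record R².
        X = np.column_stack([np.ones(n_trials), coherence])
        r2 = np.empty(n_times)
        for t in range(n_times):
            beta, *_ = np.linalg.lstsq(X, eeg[:, t], rcond=None)
            resid = eeg[:, t] - X @ beta
            r2[t] = 1 - resid.var() / eeg[:, t].var()

        # Crude onset estimate: first sample where R² exceeds an arbitrary cutoff;
        # converting this index to milliseconds requires the sampling rate.
        onset_sample = np.argmax(r2 > 0.1)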