117 research outputs found

    Bistable Gestalts reduce activity in the whole of V1, not just the retinotopically predicted parts

    Activity in the primary visual cortex is reduced when certain stimuli can be perceptually organized as a unified Gestalt. This reduction could offer important insights into the nature of feedback computations within the human visual system; however, the properties of this response reduction have not yet been investigated in detail. Here we replicate this reduced V1 response, but find that the modulation in V1 (and V2) by the perceived organization of the input is not specific to the retinotopic location at which the sensory input from that stimulus is represented. Instead, we find a response modulation that is equally evident across the primary visual cortex. Thus, in contradiction to some models of hierarchical predictive coding, the perception of an organized Gestalt causes a broad feedback effect that does not act specifically on the part of the retinotopic map representing the sensory input.

    Size-sensitive perceptual representations underlie visual and haptic object recognition.

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes, and were either the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching, and there was no interaction between the cost of size changes and the direction of transfer. Together, the unimodal and crossmodal matching results suggest that the same size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations.

    Hearing It Again and Again: On-Line Subcortical Plasticity in Humans

    Background: Human brainstem activity is sensitive to local sound statistics, as reflected in an enhanced response in repetitive compared to pseudo-random stimulus conditions [1]. Here we probed the short-term time course of this enhancement using a paradigm that assessed how local sound statistics (i.e., repetition within a five-note melody) interact with more global statistics (i.e., repetition of the melody). Methodology/Principal Findings: To test the hypothesis that subcortical repetition enhancement builds over time, we recorded auditory brainstem responses in young adults to a five-note melody containing a repeated note, and monitored how the response changed over the course of 1.5 hrs. By comparing response amplitudes over time, we found a robust time-dependent enhancement to the locally repeating note that was superimposed on a weaker enhancement of the globally repeating pattern. Conclusions/Significance: We provide the first demonstration of on-line subcortical plasticity in humans. This complements previous findings that experience-dependent subcortical plasticity can occur on a number of time scales, including life-long experience with music and language, and short-term auditory training. Our results suggest that the incoming stimulus stream is constantly being monitored, even when the stimulus is physically invariant and attention is directed elsewhere, to augment the neural response to the most statistically salient features of the ongoing stimulus stream.

    A Conceptual Framework of Computations in Mid-Level Vision

    The goal of visual processing is to extract information necessary for a variety of tasks, such as grasping objects, navigating in scenes, and recognizing them. While ultimately these tasks might be carried out by separate processing pathways, they nonetheless share a common root in the early and intermediate visual areas. What representations should these areas develop in order to facilitate all of these higher-level tasks? Several distinct ideas have received empirical support in the literature so far: (i) boundary feature detection, such as edge, corner, and curved segment extraction; (ii) second-order feature detection, such as differences in orientation or phase; (iii) computation of summary statistics, that is, correlations across features. Here we provide a novel synthesis of these ideas into a single framework. We start by specifying the goal of mid-level processing as the construction of surface-based representations. To support it, we propose three basic computations: (i) computation of feature similarity across local neighborhoods; (ii) pooling of highly similar features; and (iii) inference of new, more complex features. These computations are carried out hierarchically over increasingly larger receptive fields and refined via recurrent processes when necessary.
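The three proposed computations can be illustrated with a toy sketch. This is not the authors' implementation; the orientation map, the cosine similarity measure, and the 0.9 threshold are all assumptions chosen purely for illustration.

```python
import numpy as np

def local_similarity(patch, center):
    """(i) Similarity of each neighbor to the centre feature.
    For orientations, cos(2 * difference) is 1.0 for identical angles."""
    return np.cos(2 * (patch - center))

def pool_similar(patch, center, threshold=0.9):
    """(ii) Pool only those neighbors whose similarity exceeds a threshold."""
    sim = local_similarity(patch, center)
    return patch[sim > threshold]

def infer_feature(pooled):
    """(iii) Infer a new, more complex feature: here, the pooled mean orientation."""
    return float(np.mean(pooled))

# Toy local neighborhood of orientations (radians); the last is dissimilar.
orientations = np.array([0.0, 0.05, 0.1, 1.5])
pooled = pool_similar(orientations, center=0.0)
print(infer_feature(pooled))  # mean of the three similar orientations
```

The dissimilar element (1.5 rad) is excluded at step (ii), so the inferred feature at step (iii) summarizes only the locally coherent evidence, mirroring the idea of grouping similar features into a surface-based representation.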

    The representation of perceived shape similarity and its role for category learning in monkeys: A modeling study

    Categorization models often assume an intermediate stimulus representation by units implementing "distance functions", that is, units that are activated according to the distance or similarity among stimuli. Here we show that a popular example of these models, ALCOVE, is able to account for the performance of monkeys during category learning when it takes the perceived similarity among stimuli into account. Similar results were obtained with a slightly different model (ITCOVE) that included experimentally measured tuning curves of neurons in inferior temporal (IT) cortex. These results show the intimate link between category learning and perceived similarity as represented in IT cortex.
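The "distance function" units at the core of ALCOVE can be sketched briefly. The equation follows the standard ALCOVE hidden-unit activation (an exponential decay with attention-weighted distance from a stored exemplar); the specific stimulus values and parameter settings below are invented for illustration only.

```python
import numpy as np

def alcove_activation(x, exemplar, alpha, c=1.0, q=1.0, r=1.0):
    """ALCOVE-style hidden unit: exp(-c * (sum_i alpha_i * |x_i - h_i|**q)**(r/q)).
    x        -- input stimulus in psychological space
    exemplar -- the stored exemplar the unit is centred on
    alpha    -- attention weights over stimulus dimensions
    c        -- specificity (steepness of the similarity gradient)"""
    d = np.sum(alpha * np.abs(x - exemplar) ** q) ** (r / q)
    return np.exp(-c * d)

stimulus = np.array([0.2, 0.8])
stored = np.array([0.2, 0.8])      # identical exemplar
attention = np.array([0.5, 0.5])
print(alcove_activation(stimulus, stored, attention))  # 1.0 at zero distance
```

Activation falls off smoothly as a stimulus moves away from the stored exemplar, which is what lets such models inherit a perceived-similarity structure: substituting a psychologically scaled (rather than physical) stimulus space changes the distances, and hence the predicted category-learning performance.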

    The Part Task of the Part-Spacing Paradigm Is Not a Pure Measurement of Part-Based Information of Faces

    Faces are arguably one of the most important object categories encountered by human observers, yet they present one of the most difficult challenges to both the human and artificial visual systems. A variety of experimental paradigms have been developed to study how faces are represented and recognized, among which is the part-spacing paradigm. This paradigm is presumed to characterize the processing of both the featural and configural information of faces, and it has become increasingly popular for testing hypotheses on face specificity and in the diagnosis of face perception in cognitive disorders. In two experiments we questioned the validity of the part task of this paradigm by showing that, in this task, measuring pure information about face parts is confounded by the effect of face configuration on the perception of those parts. First, we eliminated or reduced contributions from face configuration by either rearranging face parts into a non-face configuration or by removing the low spatial frequencies of face images. We found that face parts were no longer sensitive to inversion, suggesting that the previously reported inversion effect observed in the part task was in fact due to the presence of face configuration. Second, self-reported prosopagnosic patients who were selectively impaired in the holistic processing of faces failed to detect part changes when face configurations were presented. When face configurations were scrambled, however, their performance was as good as that of normal controls. In sum, consistent evidence from testing both normal and prosopagnosic subjects suggests that the part task of the part-spacing paradigm is not an appropriate task either for measuring how face parts alone are processed or for providing a valid contrast to the spacing task. Therefore, conclusions from previous studies using the part-spacing paradigm may need re-evaluation with proper paradigms.

    Repeated Stimulus Exposure Alters the Way Sound Is Encoded in the Human Brain

    Auditory training programs are being developed to remediate various types of communication disorders. Biological changes have been shown to coincide with improved perception following auditory training, so there is interest in determining whether these changes represent biological markers of auditory learning. Here we examine the role of stimulus exposure and listening tasks, in the absence of training, in the modulation of evoked brain activity. Twenty adults were divided into two groups and exposed to two similar-sounding speech syllables during four electrophysiological recording sessions (24 hours, one week, and up to one year later). In between each session, members of one group were asked to identify each stimulus. Both groups showed enhanced neural activity from session to session, in the same P2 latency range previously identified as being responsive to auditory training. The enhancement effect was most pronounced over temporal-occipital scalp regions and largest for the group who participated in the identification task. The effects were rapid and long-lasting, with enhanced synchronous activity persisting months after the last auditory experience. Physiological changes did not coincide with perceptual changes, so the results are interpreted to mean that stimulus exposure, with or without being paired with an identification task, alters the way sound is processed in the brain. The cumulative effect likely involves auditory memory; however, in the absence of training, the observed physiological changes are insufficient to produce changes in learned behavior.

    The representation of subordinate shape similarity in human occipitotemporal cortex

    We investigated the coding of subordinate shape similarity in human object-selective cortex in two event-related functional magnetic resonance adaptation (fMR-A) experiments. Previous studies using faces have concluded that there is narrow tuning of the neuronal populations selective to each face, and that tuning is relative to the expected "average" face (norm-based encoding). Here we investigated these issues using outlines of animals and tools occupying particular positions on different morphing sequences per category. In a first experiment, we inferred the width of neural tuning to exemplars by examining whether the release from adaptation asymptotes with increasing shape changes between two stimuli. In a second experiment, we compared the response to central and extreme positions in shape space, while controlling for the number of presentations of each unique stimulus, to study whether the expected "average" category exemplar plays a role. The current fMR-A results show that a small change in exemplar shape produces a large release from adaptation, but only for outline shape changes of animals and not for man-made tools. Furthermore, our results suggest that central and extreme positions were not treated differently. Together, these results suggest narrow tuning in object-selective cortex for individual exemplars from natural object categories, consistent with an exemplar-based encoding principle.

    Fine-Scale Spatial Organization of Face and Object Selectivity in the Temporal Lobe: Do Functional Magnetic Resonance Imaging, Optical Imaging, and Electrophysiology Agree?

    The spatial organization of the brain's object and face representations in the temporal lobe is critical for understanding high-level vision and cognition but is poorly understood. Recently, exciting progress has been made using advanced imaging and physiology methods in humans and nonhuman primates, and the combination of such methods may be particularly powerful. Studies applying these methods help us to understand how neuronal activity, optical imaging, and functional magnetic resonance imaging signals are related within the temporal lobe, and to uncover the fine-grained and large-scale spatial organization of object and face representations in the primate brain.

    The Vividness of Happiness in Dynamic Facial Displays of Emotion

    Rapid identification of facial expressions can profoundly affect social interactions, yet most research to date has focused on static rather than dynamic expressions. In four experiments, we show that when a non-expressive face becomes expressive, happiness is detected more rapidly than anger. When the change occurs peripheral to the focus of attention, however, dynamic anger is better detected when it appears in the left visual field (LVF), whereas dynamic happiness is better detected in the right visual field (RVF), consistent with hemispheric differences in the processing of approach- and avoidance-relevant stimuli. The central advantage for happiness is nevertheless the more robust effect, persisting even when information of either high or low spatial frequency is eliminated. Indeed, a survey of past research on visual search for emotional expressions finds better support for a happiness detection advantage, and the explanation may lie in the coevolution of the signal and the receiver.