
    Local and global limits on visual processing in schizophrenia.

    Schizophrenia has been linked to impaired performance on a range of visual processing tasks (e.g. coherent motion detection and contour detection). It has been proposed that this is due to a general inability to integrate visual information at a global level. To test this theory, we assessed the performance of people with schizophrenia on a battery of tasks designed to probe voluntary averaging in different visual domains. Twenty-three outpatients with schizophrenia (mean age: 40±8 years; 3 female) and 20 age-matched control participants (mean age: 39±9 years; 3 female) performed a motion coherence task and three equivalent noise (averaging) tasks, the latter allowing independent quantification of local and global limits on visual processing of motion, orientation and size. All performance measures were indistinguishable between the two groups (ps>0.05, one-way ANCOVAs), with one exception: participants with schizophrenia pooled fewer estimates of local orientation than controls when estimating average orientation (p = 0.01, one-way ANCOVA). These data do not support the notion of a generalised visual integration deficit in schizophrenia. Instead, they suggest that distinct visual dimensions are differentially affected in schizophrenia, with a specific impairment in the integration of visual orientation information.
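    The equivalent noise (averaging) tasks mentioned above are typically analysed with a two-parameter model in which the observed threshold depends on local (internal) noise and the effective number of samples pooled. Below is a minimal sketch of that fit, assuming the standard formulation; the threshold values are illustrative, not data from the study.

```python
# Sketch of an equivalent-noise fit: threshold^2 = (sigma_int^2 + sigma_ext^2) / n_samp.
# sigma_int = local (internal) noise, n_samp = effective number of samples pooled.
import numpy as np
from scipy.optimize import curve_fit

def equivalent_noise(sigma_ext, sigma_int, n_samp):
    """Predicted averaging threshold as a function of external noise level."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samp)

# Hypothetical external-noise levels and measured thresholds (e.g. degrees of
# orientation); real values would come from the psychophysical staircases.
sigma_ext = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
thresholds = np.array([1.1, 1.2, 1.4, 2.0, 3.4, 6.3])

(sigma_int, n_samp), _ = curve_fit(equivalent_noise, sigma_ext, thresholds, p0=[1.0, 4.0])
print(f"local (internal) noise: {sigma_int:.2f}")
print(f"global samples pooled:  {n_samp:.1f}")
```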

    A robust contour detection operator with combined push-pull inhibition and surround suppression

    Contour detection is a salient operation in many computer vision applications as it extracts features that are important for distinguishing objects in scenes. It is believed to be a primary role of simple cells in the visual cortex of the mammalian brain. Many such cells receive push-pull inhibition or surround suppression. We propose a computational model that exhibits a combination of these two phenomena. It is based on two existing models, which have been proven to be very effective for contour detection. In particular, we introduce a brain-inspired contour operator that combines push-pull and surround inhibition. It turns out that this combination results in a more effective contour detector, which suppresses texture while keeping the strongest responses to lines and edges, when compared to existing models. The proposed model consists of a Combination of Receptive Fields (CORF) model with push-pull inhibition, extended with surround suppression. We demonstrate the effectiveness of the proposed approach on the RuG and Berkeley benchmark data sets of 40 and 500 images, respectively. The proposed push-pull CORF operator with surround suppression outperforms the one without suppression with high statistical significance.
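    The sketch below is a simplified, hypothetical illustration of the two mechanisms the operator combines, push-pull inhibition and isotropic surround suppression, applied to plain Gabor edge responses. It is not the authors' CORF implementation; all kernels and parameters are assumptions chosen only to show the structure of the computation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import convolve2d

def odd_gabor(size=15, sigma=2.5, lam=6.0, theta=0.0):
    """Odd-symmetric Gabor kernel acting as a simple edge detector."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / lam)

def ring_kernel(inner=3.0, outer=8.0, size=33):
    """Annular surround: half-wave rectified difference of Gaussians, unit sum."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = lambda s: np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
    ring = np.maximum(g(outer) - g(inner), 0)
    return ring / ring.sum()

def contour_response(image, theta=0.0, alpha=1.0, beta=1.0):
    k = odd_gabor(theta=theta)
    push = np.maximum(convolve2d(image, k, mode="same", boundary="symm"), 0)
    # "Pull": pooled response to the opposite contrast polarity in a small
    # neighbourhood; strong for noise and texture, weak for clean contours.
    pull = gaussian_filter(
        np.maximum(convolve2d(image, -k, mode="same", boundary="symm"), 0), sigma=2
    )
    r = np.maximum(push - alpha * pull, 0)
    # Isotropic surround suppression: subtract the average response in an
    # annulus around each pixel, removing texture while sparing isolated edges.
    surround = convolve2d(r, ring_kernel(), mode="same", boundary="symm")
    return np.maximum(r - beta * surround, 0)
```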

    Contour Detection by Surround Inhibition in the Circular Harmonic Functions Domain


    Neural responses to dynamic adaptation reveal the dissociation between the processing of the shape of contours and textures

    This research was supported by a Leverhulme Trust grant (RPG-2016-056) awarded to Elena Gheorghiu (PI) and Jasna Martinovic (co-PI). The C code used to generate the contours, written in conjunction with routines from the VISAGE graphics library (Cambridge Research System), was modified from code originally written by Frederick A. A. Kingdom. We would like to thank Frederick Kingdom for helping with the development of the original C code.

    Improved Contour Detection by Non-Classical Receptive Field Inhibition


    Texture Segregation By Visual Cortex: Perceptual Grouping, Attention, and Learning

    A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes such as discontinuities in orientation and texture flow curvature, as well as to relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. The object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark classification rates vary from 95.1% to 98.6% with attention, and from 90.6% to 93.2% without attention. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
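    As a rough illustration of the front end described above, the sketch below builds a multiple-scale oriented (Gabor) filterbank and stacks rectified responses into per-pixel texture features. The grouping, filling-in, attention, and dART classification stages of the model are not reproduced, and all scales, wavelengths, and names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor(size, sigma, lam, theta):
    """Even-symmetric Gabor kernel for one scale and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def texture_features(image, scales=(2.0, 4.0, 8.0), n_orient=4):
    """Stack of rectified oriented-filter responses, one channel per
    (scale, orientation) pair; each pixel's channel vector is the feature
    a texture classifier would receive."""
    channels = []
    for sigma in scales:
        for i in range(n_orient):
            theta = i * np.pi / n_orient
            k = gabor(size=int(6 * sigma) | 1, sigma=sigma, lam=3 * sigma, theta=theta)
            channels.append(np.abs(convolve2d(image, k, mode="same", boundary="symm")))
    return np.stack(channels, axis=-1)
```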

    Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics

    Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal recognition in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches, and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays, and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lend statistical support to the theory that V1 computes visual salience.
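    Below is a schematic sketch of the core idea, under the simplifying assumption that the generalized normalization can be caricatured by a single gating weight: the surround enters the divisive normalization pool only to the extent that center and surround are statistically dependent. In the full model that weight is inferred from natural image statistics; here it is a free parameter named p_dep, and all values are illustrative.

```python
import numpy as np

def normalized_response(center, surround, p_dep, sigma=0.1):
    """Center filter response divided by a normalization pool that includes
    the surround with weight p_dep (0 = segmented context, 1 = fully grouped)."""
    center = np.asarray(center, dtype=float)
    surround = np.asarray(surround, dtype=float)
    pool = np.sqrt(sigma**2 + center**2 + p_dep * np.sum(surround**2))
    return center / pool

# The same center response is suppressed more when the surround is judged
# statistically dependent on it (homogeneous context) than when it is not.
center = 1.0
surround = np.array([1.0, 1.0, 1.0, 1.0])
print(normalized_response(center, surround, p_dep=1.0))  # strong surround suppression
print(normalized_response(center, surround, p_dep=0.0))  # surround excluded from pool
```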

    Predictive coding as a model of the V1 saliency map hypothesis

    The predictive coding/biased competition (PC/BC) model is a specific implementation of predictive coding theory that has previously been shown to provide a detailed account of the response properties of orientation tuned cells in primary visual cortex (V1). Here it is shown that the same model can successfully simulate psychophysical data relating to the saliency of unique items in search arrays, of contours embedded in random texture, and of borders between textured regions. This model thus provides a possible implementation of the hypothesis that V1 generates a bottom-up saliency map. However, PC/BC is very different from previous models of visual salience, in that it proposes that saliency results from the failure of an internal model of simple elementary image components to accurately predict the visual input. Saliency can therefore be interpreted as a mechanism by which prediction errors attract attention in an attempt to improve the accuracy of the brain’s internal representation of the world.
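    A toy sketch of the PC/BC-DIM update rules on which the model is based (divisive prediction error, multiplicative update of the prediction units) is given below. The dictionary W and input x are illustrative; the point is that an input component the learned dictionary cannot reconstruct retains a large residual error, which is what the saliency interpretation identifies with salience.

```python
import numpy as np

def pcbc(x, W, n_iter=50, eps1=1e-6, eps2=1e-4):
    """Iterate the PC/BC-DIM equations.
    x : input vector (m,); W : weights (n prediction units x m inputs)."""
    V = W.T / np.maximum(W.max(axis=1), eps2)   # feedback (reconstruction) weights
    y = np.zeros(W.shape[0])                    # prediction-unit activations
    for _ in range(n_iter):
        e = x / (eps2 + V @ y)                  # divisive prediction error
        y = (eps1 + y) * (W @ e)                # multiplicative update of predictions
    return y, e

# Toy dictionary of two "elementary image components" and an input that mixes
# them plus an unexpected element; the unpredicted channel keeps a large error.
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
x = np.array([1.0, 1.0, 1.0, 1.0])              # last input channel is unpredicted
y, e = pcbc(x, W)
print("prediction units:", np.round(y, 3))
print("residual errors: ", np.round(e, 3))
```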