
    Border-ownership-dependent tilt aftereffect in incomplete figures

    A recent physiological finding of neural coding for border ownership (BO), which defines the direction of a figure with respect to its border, has provided a possible basis for figure-ground segregation. To explore the neural mechanisms underlying BO, we used psychophysical measurements of the BO-dependent tilt aftereffect (BO-TAE) to investigate which stimulus configurations activate BO circuitry. Specifically, we examined the robustness of the border-ownership signal by determining whether the BO-TAE is observed when gestalt factors are broken. The results showed significant BO-TAEs even when a global shape was not explicitly given due to the ambiguity of the contour, suggesting a contour-independent mechanism for BO coding. This paper was published in the Journal of the Optical Society of America A and is made available as an electronic reprint with the permission of OSA. The paper can be found at the following URL on the OSA website: http://www.opticsinfobase.org/abstract.cfm?URI=josaa-24-1-18. Systematic or multiple reproduction or distribution to multiple locations via electronic or other means is prohibited and is subject to penalties under law.

    Neural models of inter-cortical networks in the primate visual system for navigation, attention, path perception, and static and kinetic figure-ground perception

    Vision provides the primary means by which many animals distinguish foreground objects from their background and coordinate locomotion through complex environments. The present thesis focuses on mechanisms within the visual system that afford figure-ground segregation and self-motion perception. These processes are modeled as emergent outcomes of dynamical interactions among neural populations in several brain areas. This dissertation specifies and simulates how border-ownership signals emerge in cortex, and how the medial superior temporal area (MSTd) represents path of travel and heading, in the presence of independently moving objects (IMOs). Neurons in visual cortex that signal border-ownership, the perception that a border belongs to a figure and not its background, have been identified but the underlying mechanisms have been unclear. A model is presented that demonstrates that inter-areal interactions across model visual areas V1-V2-V4 afford border-ownership signals similar to those reported in electrophysiology for visual displays containing figures defined by luminance contrast. Competition between model neurons with different receptive field sizes is crucial for reconciling the occlusion of one object by another. The model is extended to determine border-ownership when object borders are kinetically-defined, and to detect the location and size of shapes, despite the curvature of their boundary contours. Navigation in the real world requires humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature. In primates, MSTd has been implicated in heading perception. A model of V1, medial temporal area (MT), and MSTd is developed herein that demonstrates how MSTd neurons can simultaneously encode path curvature and heading. Human judgments of heading are accurate in rigid environments, but are biased in the presence of IMOs. 
The model presented here explains the bias through recurrent connectivity in MSTd and avoids the use of differential motion detectors, which, although used in existing models to discount the motion of an IMO relative to its background, are not biologically plausible. Reported modulation of the MSTd population due to attention is explained through competitive dynamics between subpopulations responding to bottom-up and top-down signals.
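A minimal sketch of the core idea, that the owner side of a border is decided by contextual evidence around it, can be written in a few lines. The luminance heuristic and the function name below are illustrative assumptions; the thesis implements ownership through recurrent V1-V2-V4 interactions and competition across receptive-field sizes, not this shortcut:

```python
import numpy as np

def border_ownership(image, x, half_width=3):
    """Toy border-ownership assignment for a vertical border at column x:
    the side with more figure evidence (here, simply higher mean luminance)
    is taken to own the border. Illustrative heuristic only; the actual
    model uses recurrent inter-areal V1-V2-V4 interactions."""
    left = image[:, max(0, x - half_width):x].mean()
    right = image[:, x + 1:x + 1 + half_width].mean()
    return "left" if left > right else "right"

# A bright square (figure) on a dark background: its right-hand border
# should be owned by the figure lying to the left of that border.
img = np.zeros((10, 10))
img[2:8, 2:6] = 1.0              # figure region, columns 2-5
print(border_ownership(img, 6))  # border at column 6 -> "left"
```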

    Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single-neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
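The gain-field idea behind predictive remapping can be sketched as a coordinate shift driven by the outflow eye-movement command: the system anticipates each feature's post-saccadic retinal position before the eyes land. The plain vector arithmetic below is a hypothetical simplification of what 3D ARTSCAN computes with learned gain fields:

```python
def predictive_remap(feature_positions, saccade_vector):
    """Sketch of predictive remapping: a corollary discharge of the
    eye-movement command shifts each retinotopic feature coordinate by
    the negative of the saccade vector, anticipating where the feature
    will land on the retina after the eyes move. The bare vector shift
    is an illustrative stand-in for gain-field updating."""
    dx, dy = saccade_vector
    return [(x - dx, y - dy) for x, y in feature_positions]

# A feature at retinal position (5, 0); a 3-degree rightward saccade
# predicts the feature will land 3 degrees further left on the retina.
print(predictive_remap([(5, 0)], (3, 0)))  # [(2, 0)]
```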

    Extraction of Surface-Related Features in a Recurrent Model of V1-V2 Interactions

    Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D leads partially to occlusions of surfaces, depending on their position in depth and on viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as a cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces. In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and, at the same time, to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit so that they can interact to generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection to demonstrate that our approach outperforms simple feedforward computation-based approaches. A model is thus proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency.
Unlike previous proposals, which treat localized junction configurations as 2D image features, we link them to mechanisms of apparent surface segregation. As a consequence, we demonstrate how junctions can change their perceptual representation depending on the scene context and the spatial configuration of boundary fragments.
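As a rough feedforward illustration of how a T-junction might be read out from oriented boundary responses, one can flag pixels where a contour of one orientation passes through while an orthogonal contour terminates. This is a hypothetical stand-in for comparison, not the recurrent V1-V2 grouping mechanism the model proposes:

```python
import numpy as np

def t_junction_candidates(h_edges, v_edges):
    """Toy T-junction detector on binary oriented edge maps: a pixel is
    flagged when a horizontal contour passes through it (edge present on
    both sides) while a vertical contour terminates there (edge present
    on only one vertical side). Purely feedforward and illustrative."""
    rows, cols = h_edges.shape
    candidates = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            through = h_edges[r, c - 1] and h_edges[r, c] and h_edges[r, c + 1]
            terminates = v_edges[r - 1, c] != v_edges[r + 1, c]
            if through and terminates:
                candidates.append((r, c))
    return candidates

# An occluding horizontal edge with a vertical edge ending on it from below:
h = np.zeros((5, 5), dtype=bool); h[2, :] = True
v = np.zeros((5, 5), dtype=bool); v[3:, 2] = True
print(t_junction_candidates(h, v))  # [(2, 2)]
```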

    Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision

    To appear in CVIU. Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models that were primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer-vision-task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation, and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.

    Multiple components of surround modulation in primary visual cortex: Multiple neural circuits with multiple functions?

    The responses of neurons in primary visual cortex (V1) to stimulation of their receptive field (RF) are modulated by stimuli in the RF surround. This modulation is suppressive when the stimuli in the RF and surround are of similar orientation, but less suppressive or facilitatory when they are cross-oriented. Similarly, in human vision surround stimuli selectively suppress the perceived contrast of a central stimulus. Although the properties of surround modulation have been thoroughly characterized in many species, cortical areas, and sensory modalities, its role in perception remains unknown. Here we argue that surround modulation in V1 consists of multiple components having different spatio-temporal and tuning properties, generated by different neural circuits and serving different visual functions. One component arises from LGN afferents; it is fast, untuned for orientation, and spatially restricted to the surround region nearest to the RF (the near-surround), and its function is to normalize V1 cell responses to local contrast. Intra-V1 horizontal connections contribute a slower, narrowly orientation-tuned component to near-surround modulation, whose function is to increase the coding efficiency of natural images in a manner that leads to the extraction of object boundaries. The third component is generated by top-down feedback connections to V1; it is fast, broadly orientation-tuned, and extends into the far-surround, and its function is to enhance the salience of behaviorally relevant visual features. Far- and near-surround modulation thus act as parallel mechanisms: the former quickly detects and guides saccades/attention to salient visual scene locations, while the latter segments object boundaries in the scene.
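The first, untuned near-surround component is described as normalizing responses to local contrast; a standard divisive-normalization sketch makes the idea concrete. The parameter values and the exact pooling rule below are illustrative assumptions, not fits to the physiology reviewed here:

```python
def normalized_response(center_contrast, surround_contrast,
                        c50=0.2, n=2.0, w_surround=1.0):
    """Divisive-normalization sketch of the fast, orientation-untuned
    near-surround component: contrast in the surround is added to the
    normalization pool, suppressing the response to the center stimulus.
    c50, n, and w_surround are hypothetical illustrative parameters."""
    drive = center_contrast ** n
    pool = c50 ** n + center_contrast ** n + w_surround * surround_contrast ** n
    return drive / pool

# Same center stimulus, with and without a surround grating:
print(normalized_response(0.5, 0.0))  # no surround: larger response
print(normalized_response(0.5, 0.5))  # surround contrast enlarges the pool
```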

    Neural dynamics of invariant object recognition: relative disparity, binocular fusion, and predictive eye movements

    How does the visual cortex learn invariant object categories as an observer scans a depthful scene? Two neural processes that contribute to this ability are modeled in this thesis. The first model clarifies how an object is represented in depth. Cortical area V1 computes absolute disparity, which is the horizontal difference in retinal location of an image in the left and right foveas. Many cells in cortical area V2 compute relative disparity, which is the difference in absolute disparity of two visible features. Relative, but not absolute, disparity is unaffected by the distance of visual stimuli from an observer, and by vergence eye movements. A laminar cortical model of V2 that includes shunting lateral inhibition of disparity-sensitive layer 4 cells causes a peak shift in cell responses that transforms absolute disparity from V1 into relative disparity in V2. The second model simulates how the brain maintains stable percepts of a 3D scene during binocular movements. The visual cortex initiates the formation of a 3D boundary and surface representation by binocularly fusing corresponding features from the left and right retinotopic images. However, after each saccadic eye movement, every scenic feature projects to a different combination of retinal positions than before the saccade. Yet the 3D representation, resulting from the prior fusion, is stable through the post-saccadic re-fusion. One key to stability is predictive remapping: the system anticipates the new retinal positions of features entailed by eye movements by using gain fields that are updated by eye movement commands. 
The 3D ARTSCAN model developed here simulates how perceptual, attentional, and cognitive interactions across different brain regions within the What and Where visual processing streams interact to coordinate predictive remapping, stable 3D boundary and surface perception, spatial attention, and the learning of object categories that are invariant to changes in an object's retinal projections. Such invariant learning helps the system to avoid treating each new view of the same object as a distinct object to be learned. The thesis thereby shows how a process that enables invariant object category learning can be extended to also enable stable 3D scene perception.
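The invariance of relative disparity described above can be checked with a minimal numeric sketch. Representing each feature by its (left-eye, right-eye) horizontal positions and modeling vergence as a common shift are simplifying assumptions; the model itself obtains relative disparity through shunting lateral inhibition in laminar V2 circuits:

```python
def absolute_disparity(left_x, right_x):
    """Horizontal difference in the retinal positions of a feature
    in the left and right eyes (the quantity computed in V1)."""
    return left_x - right_x

def relative_disparity(feature_a, feature_b):
    """Difference of the absolute disparities of two visible features
    (the quantity computed by many V2 cells). Each feature is a
    (left_x, right_x) pair; a vergence eye movement adds the same
    offset to every absolute disparity and therefore cancels here."""
    return absolute_disparity(*feature_a) - absolute_disparity(*feature_b)

# Two features, before and after a vergence movement that shifts all
# left-eye positions by +1 (so every absolute disparity changes by +1):
a, b = (3.0, 1.0), (5.0, 4.0)
a_verged, b_verged = (4.0, 1.0), (6.0, 4.0)
print(relative_disparity(a, b))                # 1.0
print(relative_disparity(a_verged, b_verged))  # unchanged: 1.0
```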

    Anatomy and Physiology of Macaque Visual Cortical Areas V1, V2, and V5/MT : Bases for Biologically Realistic Models

    The cerebral cortex of primates encompasses multiple anatomically and physiologically distinct areas processing visual information. Areas V1, V2, and V5/MT are conserved across mammals and are central for visual behavior. To facilitate the generation of biologically accurate computational models of primate early visual processing, here we provide an overview of over 350 published studies of these three areas in the genus Macaca, whose visual system provides the closest model for human vision. The literature reports 14 anatomical connection types from the lateral geniculate nucleus of the thalamus to V1, having distinct layers of origin or termination, and 194 connection types between V1, V2, and V5, forming multiple parallel and interacting visual processing streams. Moreover, within V1, there are reports of 286 and 120 types of intrinsic excitatory and inhibitory connections, respectively. Physiologically, tuning of neuronal responses to 11 types of visual stimulus parameters has been consistently reported. Overall, the optimal spatial frequency (SF) of constituent neurons decreases with cortical hierarchy. Moreover, V5 neurons are distinct from neurons in other areas in their higher direction selectivity, higher contrast sensitivity, higher temporal frequency tuning, and wider SF bandwidth. We also discuss currently unavailable data that could be useful for biologically accurate models.

    Dynamic and Integrative Properties of the Primary Visual Cortex

    The ability to derive meaning from complex, ambiguous sensory input requires the integration of information over both space and time, as well as cognitive mechanisms to dynamically shape that integration. We have studied these processes in the primary visual cortex (V1), where neurons have been proposed to integrate visual inputs along a geometric pattern known as the association field (AF). We first used cortical reorganization as a model to investigate the role that a specific network of V1 connections, the long-range horizontal connections, might play in temporal and spatial integration across the AF. When retinal lesions ablate sensory information from portions of the visual field, V1 undergoes a process of reorganization mediated by compensatory changes in the network of horizontal collaterals. The reorganization accompanies the brain's amazing ability to perceptually "fill in", or "see", the lost visual input. We developed a computational model to simulate cortical reorganization and perceptual fill-in mediated by a plexus of horizontal connections that encode the AF. The model reproduces the major features of the perceptual fill-in reported by human subjects with retinal lesions, and it suggests that V1 neurons, empowered by their horizontal connections, underlie both perceptual fill-in and normal integrative mechanisms that are crucial to our visual perception. These results motivated the second prong of our work, which was to experimentally study the normal integration of information in V1. Since psychophysical and physiological studies suggest that spatial interactions in V1 may be under cognitive control, we investigated the integrative properties of V1 neurons under different cognitive states. We performed extracellular recordings from single V1 neurons in macaques that were trained to perform a delayed-match-to-sample contour detection task.
We found that the ability of V1 neurons to summate visual inputs from beyond the classical receptive field (cRF) imbues them with selectivity for complex contour shapes, and that neuronal shape selectivity in V1 changed dynamically according to the shapes monkeys were cued to detect. Over the population, V1 encoded subsets of the AF, predicted by the computational model, that shifted as a function of the monkeys' expectations. These results support the major conclusions of the theoretical work; even more, they reveal a sophisticated mode of form processing, whereby the selectivity of the whole network in V1 is reshaped by cognitive state.

    Figure-ground responsive fields of monkey V4 neurons estimated from natural image patches

    Neurons in visual area V4 modulate their responses depending on the figure-ground (FG) organization in natural images containing a variety of shapes and textures. To clarify whether the responses depend on the extents of the figure and ground regions in and around the classical receptive fields (CRFs) of the neurons, we estimated the spatial extent of local figure and ground regions that evoked FG-dependent responses (RF-FGs) in natural images and their variants. Specifically, we applied the framework of spike-triggered averaging (STA) to the combinations of neural responses and human-marked segmentation images (FG labels) that represent the extents of the figure and ground regions in the corresponding natural image stimuli. FG labels were weighted by the spike counts in response to the corresponding stimuli and averaged. The bias due to the nonuniformity of FG labels was compensated for by subtracting the ensemble average of FG labels from the weighted average. Approximately 50% of the neurons showed effective RF-FGs, and a large number exhibited structures that were similar to those observed in virtual neurons with ideal FG-dependent responses. The structures of the RF-FGs exhibited a subregion responsive to a preferred side (figure or ground) around the CRF center and a subregion responsive to a non-preferred side in the surroundings. The extents of the subregions responsive to figure were smaller than those responsive to ground, in agreement with the Gestalt rule. We also estimated RF-FGs by an adaptive filtering (AF) method, which does not require spherical symmetry (whiteness) in the stimuli. RF-FGs estimated by AF and STA exhibited similar structures, supporting the veridicality of the proposed STA. To estimate the contribution of nonlinear processing in addition to linear processing, we estimated nonlinear RF-FGs based on the framework of spike-triggered covariance (STC).
The analyses of the models based on STA and STC showed a non-negligible contribution of nonlinearity, suggesting spatial variance of the FG regions. The results lead to an understanding of the neural responses that underlie the segregation of figures and the construction of surfaces in intermediate-level visual areas.
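The bias-corrected spike-triggered average described above (weight each figure-ground label image by its evoked spike count, average, then subtract the unweighted ensemble mean of the labels) can be sketched directly. The array shapes, the +1/-1 label convention, and all names here are assumptions for illustration:

```python
import numpy as np

def estimate_rf_fg(fg_labels, spike_counts):
    """Spike-triggered average of figure-ground labels with the bias
    correction described in the abstract: weight each FG label image by
    its spike count, normalize by the total spike count, then subtract
    the unweighted ensemble average of the labels to compensate for
    their nonuniformity. Shapes and sign convention are illustrative."""
    fg_labels = np.asarray(fg_labels, dtype=float)       # (n_stimuli, h, w); +1 figure, -1 ground
    spike_counts = np.asarray(spike_counts, dtype=float)  # (n_stimuli,)
    weighted = (spike_counts[:, None, None] * fg_labels).sum(axis=0) / spike_counts.sum()
    return weighted - fg_labels.mean(axis=0)

# Two 1x2 label maps; the neuron fires more when the left pixel is figure,
# so the estimated RF-FG should be positive (figure-preferring) on the left.
labels = [np.array([[1.0, -1.0]]), np.array([[-1.0, 1.0]])]
spikes = [9, 1]
print(estimate_rf_fg(labels, spikes))
```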