437 research outputs found

    Dynamic and Integrative Properties of the Primary Visual Cortex

    Get PDF
    The ability to derive meaning from complex, ambiguous sensory input requires the integration of information over both space and time, as well as cognitive mechanisms to dynamically shape that integration. We have studied these processes in the primary visual cortex (V1), where neurons have been proposed to integrate visual inputs along a geometric pattern known as the association field (AF). We first used cortical reorganization as a model to investigate the role that a specific network of V1 connections, the long-range horizontal connections, might play in temporal and spatial integration across the AF. When retinal lesions ablate sensory information from portions of the visual field, V1 undergoes a process of reorganization mediated by compensatory changes in the network of horizontal collaterals. The reorganization accompanies the brain’s amazing ability to perceptually “fill in”, or “see”, the lost visual input. We developed a computational model to simulate cortical reorganization and perceptual fill-in mediated by a plexus of horizontal connections that encode the AF. The model reproduces the major features of the perceptual fill-in reported by human subjects with retinal lesions, and it suggests that V1 neurons, empowered by their horizontal connections, underlie both perceptual fill-in and normal integrative mechanisms that are crucial to our visual perception. These results motivated the second prong of our work, which was to experimentally study the normal integration of information in V1. Since psychophysical and physiological studies suggest that spatial interactions in V1 may be under cognitive control, we investigated the integrative properties of V1 neurons under different cognitive states. We performed extracellular recordings from single V1 neurons in macaques that were trained to perform a delayed-match-to-sample contour detection task.
We found that the ability of V1 neurons to summate visual inputs from beyond the classical receptive field (cRF) imbues them with selectivity for complex contour shapes, and that neuronal shape selectivity in V1 changed dynamically according to the shapes monkeys were cued to detect. Over the population, V1 encoded subsets of the AF, predicted by the computational model, that shifted as a function of the monkeys’ expectations. These results support the major conclusions of the theoretical work; moreover, they reveal a sophisticated mode of form processing, whereby the selectivity of the whole network in V1 is reshaped by cognitive state.
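    The association field described above is commonly modeled as a connectivity kernel that favors nearby, collinear, similarly oriented edge elements. The sketch below illustrates that idea only; the functional form, parameter names, and values are assumptions for illustration, not the authors' model.

```python
import math

def association_field_weight(dx, dy, theta_i, theta_j,
                             sigma_d=2.0, sigma_theta=0.5):
    """Illustrative association-field weight between two oriented edge
    elements (parameters are assumed, not from the source).

    The weight is largest when element j lies along the axis of element
    i's orientation (collinearity) and shares its orientation, and it
    decays with distance.
    """
    d = math.hypot(dx, dy)
    if d == 0:
        return 0.0
    # Direction of the line joining the two elements.
    phi = math.atan2(dy, dx)
    # How far element j sits off element i's orientation axis.
    off_axis = abs(math.sin(phi - theta_i))
    # Orientation difference between the two elements.
    dtheta = abs(math.sin(theta_i - theta_j))
    return (math.exp(-d ** 2 / (2 * sigma_d ** 2))
            * math.exp(-off_axis ** 2 / (2 * sigma_theta ** 2))
            * math.exp(-dtheta ** 2 / (2 * sigma_theta ** 2)))
```

    Under this kernel a collinear pair of horizontal elements receives a higher weight than an orthogonal or off-axis pair, which is the qualitative property a contour-integrating plexus of horizontal connections needs.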

    Top-Down Control of Lateral Interactions in Visual Cortex

    Get PDF
    V1 neurons are capable of integrating information over a large area of visual field. Their responses to local features are dependent on the global characteristics of contours and surfaces that extend well beyond their receptive fields. These contextual influences in V1 are subject to cognitive influences of attention, perceptual task and expectation. Previously it has been shown that the response properties of V1 neurons change to carry more information about behaviorally relevant stimulus features (Li et al. 2004). We hypothesized that top-down modulation of effective connectivity within V1 underlies the behaviorally dependent modulations of contextual interactions in V1. To test this idea, we used a chronically implanted multi-electrode array in awake primates and studied the mechanisms of top-down control of contextual interactions in V1. We used a behavioral paradigm in which the animals performed two different perceptual tasks on the same stimulus and studied task-dependent changes in connectivity between V1 sites that encode the stimulus. We found that V1 interactions, both spiking and LFP interactions, showed significant task-dependent changes. The direction of the task-dependent changes observed in LFP interactions, measured by coherence between LFP signals, depended on the perceptual strategy used by the animal. A bisection task involving perceptual grouping of parallel lines increased LFP coherence, while a vernier task involving segregation of collinear lines decreased LFP coherence. Likewise, grouping of collinear lines to detect a contour resulted in increased LFP interactions. Since noise correlations can affect the coding accuracy of a cortical network, we investigated how top-down processes of attention and perceptual task affect V1 noise correlations. We were able to study the noise correlation dynamics that were due to attentional shift separately from the changes due to the perceptual task being performed at the attended location.
Top-down influences reduced V1 noise correlations to a greater extent when the animal performed a discrimination task at the recorded locations than when the animal merely shifted its attention to that location. The reduction in noise correlation during the perceptual task was accompanied by a significant increase in the information carried about the stimulus (calculated as Fisher information). Our analysis was also able to determine the degree to which the task-dependent change in information was due to the alteration in neuronal tuning compared to changes in correlated activity. Interestingly, the largest effects on information were seen between stimuli that were the most difficult to discriminate.
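    The two quantities the abstract leans on, pairwise noise correlation and linear Fisher information, can be sketched for a neuron pair as follows. This is a textbook-style illustration under standard definitions, not the authors' analysis pipeline; the helper names and the two-neuron restriction are assumptions.

```python
import math

def noise_correlation(r1, r2):
    """Pearson correlation of trial-to-trial spike counts of two neurons
    to repeated presentations of the same stimulus (a sketch; real
    analyses typically z-score within stimulus conditions first)."""
    n = len(r1)
    m1, m2 = sum(r1) / n, sum(r2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(r1, r2)) / (n - 1)
    s1 = math.sqrt(sum((a - m1) ** 2 for a in r1) / (n - 1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in r2) / (n - 1))
    return cov / (s1 * s2)

def linear_fisher_info_2d(df, cov):
    """Linear Fisher information I = f'(s)^T C^{-1} f'(s) for a neuron
    pair: df holds the two tuning-curve slopes at stimulus s, cov is the
    2x2 noise covariance matrix (inverted in closed form)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    g = (inv[0][0] * df[0] + inv[0][1] * df[1],
         inv[1][0] * df[0] + inv[1][1] * df[1])
    return df[0] * g[0] + df[1] * g[1]
```

    For two neurons with identical slopes and unit variance, positive noise correlation rho gives I = 2/(1 + rho), so reducing the correlation raises the information, consistent with the result described above.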

    The application of computational modeling to data visualization

    Get PDF
    Researchers have argued that perceptual issues are important in determining what makes an effective visualization, but generally only provide descriptive guidelines for transforming perceptual theory into practical designs. In order to bridge the gap between theory and practice in a more rigorous way, a computational model of the primary visual cortex is used to explore the perception of data visualizations. A method is presented for automatically evaluating and optimizing data visualizations for an analytical task using a computational model of human vision. The method relies on a neural network simulation of early perceptual processing in the retina and visual cortex. The neural activity resulting from viewing an information visualization is simulated and evaluated to produce metrics of visualization effectiveness for analytical tasks. Visualization optimization is achieved by applying these effectiveness metrics as the utility function in a hill-climbing algorithm. This method is applied to the evaluation and optimization of two visualization types: 2D flow visualizations and node-link graph visualizations. The computational perceptual model is applied to various visual representations of flow fields evaluated using the advection task of Laidlaw et al. The predictive power of the model is examined by comparing its performance to that of human subjects on the advection task using four flow visualization types. The results show the same overall pattern for humans and the model. In both cases, the best performance was obtained from visualizations containing aligned visual edges. Flow visualization optimization is done using both streaklet-based and pixel-based visualization parameterizations. An emergent property of the streaklet-based optimization is head-to-tail streaklet alignment, while the pixel-based parameterization produces a LIC-like result.
The model is also applied to node-link graph diagram visualizations for a node connectivity task using two-layer node-link diagrams. The model evaluation of node-link graph visualizations correlates with human performance, in terms of both accuracy and response time. Node-link graph visualizations are optimized using the perceptual model. The optimized node-link diagrams exhibit the aesthetic properties associated with good node-link diagram design, such as straight edges, minimal edge crossings, and maximal crossing angles, and yield empirically better performance on the node connectivity task.
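    The optimization loop described above, using the perceptual-model effectiveness metric as the utility function in a hill climber, has the following generic shape. The `utility` and `perturb` callables stand in for the model-derived metric and the visualization-specific move (e.g. nudging a streaklet or a node position); both, and the toy usage, are assumptions for illustration.

```python
import random

def hill_climb(params, utility, perturb, iters=1000, seed=0):
    """Generic hill climbing: repeatedly perturb the current best
    parameterization and keep the change only if utility improves."""
    rng = random.Random(seed)
    best, best_u = params, utility(params)
    for _ in range(iters):
        cand = perturb(best, rng)
        u = utility(cand)
        if u > best_u:
            best, best_u = cand, u
    return best, best_u

# Toy usage (assumed example): recover the maximum of a 1-D utility.
best, best_u = hill_climb(
    0.0,
    utility=lambda p: -(p - 3.0) ** 2,
    perturb=lambda p, rng: p + rng.uniform(-0.5, 0.5),
)
```

    Because only improving moves are accepted, the loop converges to a local optimum; in the thesis this locality is what makes the emergent structure (head-to-tail streaklet alignment, LIC-like textures) informative about the utility landscape rather than guaranteed globally optimal.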

    A survey of visual preprocessing and shape representation techniques

    Get PDF
    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

    How Does the Cerebral Cortex Work? Development, Learning, Attention, and 3D Vision by Laminar Circuits of Visual Cortex

    Full text link
    A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. Here it is proposed how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work. Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)

    The integration of bottom-up and top-down signals in human perception in health and disease

    Get PDF
    To extract a meaningful visual experience from the information falling on the retina, the visual system must integrate signals from multiple levels. Bottom-up signals provide input relating to local features while top-down signals provide contextual feedback and reflect internal states of the organism. In this thesis I will explore the nature and neural basis of this integration in two key areas. I will examine perceptual filling-in of artificial scotomas to investigate the bottom-up signals causing changes in perception when filling-in takes place. I will then examine how this perceptual filling-in is modified by top-down signals reflecting attention and working memory. I will also investigate hemianopic completion, an unusual form of filling-in, which may reflect a breakdown in top-down feedback from higher visual areas. The second part of the thesis will explore a different form of top-down control of visual processing. While the effects of cognitive mechanisms such as attention on visual processing are well-characterised, other types of top-down signal such as reward outcome are less well explored. I will therefore study whether signals relating to reward can influence visual processing. To address these questions, I will employ a range of methodologies including functional MRI, magnetoencephalography and behavioural testing in healthy participants and patients with cortical damage. I will demonstrate that perceptual filling-in of artificial scotomas is largely a bottom-up process but that higher cognitive functions can modulate the phenomenon. I will also show that reward modulates activity in higher visual areas in the absence of concurrent visual stimulation and that receiving reward leads to enhanced activity in primary visual cortex on the next trial. 
These findings reveal that integration occurs across multiple levels even for processes rooted in early retinotopic regions, and that higher cognitive processes such as reward can influence the earliest stages of cortical visual processing.

    Change blindness: eradication of gestalt strategies

    Get PDF
    Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Contour integration: Psychophysical, neurophysiological, and computational perspectives

    Get PDF
    One of the important roles of our visual system is to detect and segregate objects. Neurons in the early visual system extract local image features from the visual scene. To combine these features into separate, global objects, the visual system must perform some kind of grouping operation. One such operation is contour integration. Contours form the outlines of objects, and are the first step in shape perception. We discuss the mechanism of contour integration from psychophysical, neurophysiological, and computational perspectives.

    Perceptual Learning, Long-Range Horizontal Connections And Top-Down Influences In Primary Visual Cortex

    Get PDF
    The earliest cortical stage of visual processing, the primary visual cortex, has long been seen as a static preprocessor that finds local edges and their orientation like a linear filter bank, and passes this information on to downstream visual areas. This view has been challenged in recent years since the discovery of contextual influences, that is, interactions between the responses of neurons that encode non-overlapping adjacent areas of visual space, and their anatomical substrate, long-range horizontal connections. These contextual interactions have been shown in awake behaving primates to be modulated depending on the task the animals are performing. A first set of electrophysiological experiments has shown with the help of information theory that when an animal performed one of two tasks on the same visual display, the contextual modulations of the task-relevant parts of the visual display contained more information about the stimulus position than when the same elements were task-irrelevant. A second set of experiments on contour integration was analyzed with ROC analysis to show that an ideal observer could predict the presence of an embedded contour from the spike count of a single neuron on a single trial as well as the animal’s behavioral performance. A final set of experiments showed that prior to learning the same contour integration task, the responses did not contain any information about the stimulus position, that the information in the response increased in parallel with the animal’s performance during learning, and that the enhanced response after learning disappeared during anesthesia, but was only weakened when the animal performed an irrelevant task in a different part of visual space. Finally, a neural network is presented that allows gating of long-range horizontal connections by top-down feedback. The stability and the dynamic behavior of the network have been established with phase-plane analysis.
Large-scale simulations have been performed to confirm the stability and to show the enhanced contour integration of realistic stimuli as a function of feedback gain. The model fits the electrophysiological contour integration experiments quantitatively.
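    The ROC analysis mentioned above reduces, for a single neuron, to computing the area under the ROC curve from the two spike-count distributions (contour present vs. absent): the AUC equals the probability that a randomly drawn contour-trial count exceeds a randomly drawn no-contour count, with ties counted half. A minimal re-implementation of that standard definition, not the authors' code:

```python
def roc_auc(counts_contour, counts_no_contour):
    """AUC for an ideal observer judging, from a single trial's spike
    count, whether a contour was embedded in the stimulus. Computed by
    exhaustive pair comparison (Mann-Whitney formulation)."""
    n_pairs = len(counts_contour) * len(counts_no_contour)
    score = 0.0
    for c in counts_contour:
        for n in counts_no_contour:
            if c > n:
                score += 1.0     # contour trial wins the pair
            elif c == n:
                score += 0.5     # ties split evenly
    return score / n_pairs
```

    An AUC of 0.5 means the spike count carries no information about contour presence; an AUC near 1.0 means a single-trial, single-neuron readout matches what the abstract reports, prediction on par with the animal's behavioral performance.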

    Stereoscopic Surface Interpolation from Illusory Contours

    Get PDF
    Stereoscopic Kanizsa figures are an example of stereoscopic interpolation of an illusory surface. In such stimuli, luminance-defined disparity signals exist only along the edges of inducing elements, but observers reliably perceive a coherent surface that extends across the central region in depth. The aim of this series of experiments was to understand the nature of the disparity signal that underlies the perception of illusory stereoscopic surfaces. I systematically assessed the accuracy and precision of suprathreshold depth percepts using a collection of Kanizsa figures with a wide range of 2D and 3D properties. For comparison, I assessed similar perceptually equated figures with luminance-defined surfaces, with and without inducing elements. A cue combination analysis revealed that observers rely on ordinal depth cues in conjunction with stereopsis when making depth judgements. Thus, 2D properties (e.g. occlusion features and luminance relationships) contribute rich information about 3D surface structure by influencing perceived depth from binocular disparity.
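    Cue combination analyses of the kind described above are usually framed against the standard minimum-variance benchmark, in which each cue's depth estimate is weighted by its reliability (inverse variance). A sketch of that textbook rule, offered only as context for the analysis, the values are illustrative assumptions:

```python
def combine_cues(estimates, variances):
    """Minimum-variance linear cue combination: weight each depth
    estimate by its inverse variance and renormalize. Returns the
    combined estimate and its (reduced) variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_var = 1.0 / total  # never exceeds the best single cue
    return combined, combined_var
```

    A key property is that the combined variance is lower than that of any individual cue, which is why adding even a coarse ordinal cue (e.g. occlusion) can sharpen depth judgements driven primarily by binocular disparity.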