
    Construction of direction selectivity in V1: from simple to complex cells

    Despite detailed knowledge about the anatomy and physiology of the primary visual cortex (V1), the immense number of feed-forward and recurrent connections onto a given V1 neuron makes it difficult to understand how the physiological details relate to a given neuron's functional properties. Here, we focus on a well-known functional property of many V1 complex cells: phase-invariant direction selectivity (DS). While the energy model explains its construction at the conceptual level, it remains unclear how the mathematical operations described in this model are implemented by cortical circuits. To understand how the DS of complex cells is constructed in cortex, we apply a nonlinear modeling framework to extracellular data from macaque V1. We use a modification of spike-triggered covariance (STC) analysis to identify multiple biologically plausible "spatiotemporal features" that either excite or suppress a cell. We demonstrate that these features more accurately represent the true inputs to the neuron, and that the resulting nonlinear model compactly describes how these inputs are combined to produce the cell's functional properties. In a population of 59 neurons, we find that both simple and complex V1 cells are selective for combinations of excitatory and suppressive motion features. Because the strength of DS and the simple/complex classification are well predicted by our models, we can use simulations with inputs matching thalamic and simple cells to assess how individual model components contribute to these measures. Our results unify experimental observations regarding the construction of DS from thalamic feed-forward inputs to V1: based on the differences between excitatory and inhibitory inputs, they suggest a connectivity diagram for simple and complex cells that sheds light on the mechanism underlying the DS of cortical cells. More generally, they illustrate how the stage-wise nonlinear combination of multiple features gives rise to the processing of more abstract visual information.
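    As a concrete illustration of the analysis this abstract builds on, the sketch below implements standard spike-triggered covariance on simulated white-noise responses of an energy-model-style toy cell; the data, filter shapes, and parameters are illustrative assumptions, not taken from the study. Eigenvectors of the spike-triggered covariance with eigenvalues above or below those of the raw stimulus covariance correspond to excitatory or suppressive features, respectively.

```python
# Minimal STC sketch on a toy energy-model cell (illustrative, not the paper's data).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_dims = 20000, 16                  # stimulus frames x spatiotemporal dims
X = rng.standard_normal((n_samples, n_dims))   # white-noise stimulus ensemble

# Hypothetical quadrature filter pair driving an energy-model-like cell
f1 = np.sin(np.linspace(0, np.pi, n_dims))
f2 = np.cos(np.linspace(0, np.pi, n_dims))
drive = (X @ f1) ** 2 + (X @ f2) ** 2
spikes = rng.poisson(0.05 * drive)             # spike counts per frame

# Spike-triggered average and covariance
n_spk = spikes.sum()
sta = (spikes @ X) / n_spk
Xc = X - sta                                   # remove the STA before covariance
stc = (Xc * spikes[:, None]).T @ Xc / n_spk

# Compare against the raw stimulus covariance: large positive (negative)
# eigenvalues mark excitatory (suppressive) feature dimensions.
eigvals, eigvecs = np.linalg.eigh(stc - np.cov(X, rowvar=False))
order = np.argsort(np.abs(eigvals))[::-1]
features = eigvecs[:, order[:2]]               # strongest recovered features
print("top eigenvalues:", eigvals[order[:2]].round(2))
```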

    Modeling the Possible Influences of Eye Movements on the Refinement of Cortical Direction Selectivity

    The second-order statistics of neural activity were examined in a model of the cat LGN and V1 during free viewing of natural images. In the model, the specific patterns of thalamocortical activity required for a Hebbian maturation of direction-selective cells in V1 were found during the periods of visual fixation, when small eye movements occurred, but not when natural images were examined in the absence of fixational eye movements. In addition, simulations of stroboscopic rearing that replicated the abnormal pattern of eye movements observed in kittens chronically exposed to stroboscopic illumination produced results consistent with the reported loss of direction selectivity and preservation of orientation selectivity. These results suggest the involvement of the oculomotor activity of visual fixation in the maturation of cortical direction selectivity.
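    To make the link between Hebbian maturation and second-order statistics concrete, here is a minimal sketch (with invented, illustrative input statistics) of an Oja-stabilized Hebbian rule: the learned weights depend only on the covariance of the presynaptic activity, which is why the correlations induced by fixational eye movements matter for this class of model.

```python
# Oja-stabilized Hebbian learning: weights converge to the dominant
# eigenvector of the input covariance (a sketch with assumed statistics).
import numpy as np

rng = np.random.default_rng(1)
T, n_pre, eta = 20000, 8, 1e-3

# "LGN" activity with a shared component along one axis, standing in for
# the correlations that fixational eye movements induce in the model.
axis = np.sin(np.linspace(0, np.pi, n_pre))
lgn = rng.standard_normal((T, n_pre)) + rng.standard_normal((T, 1)) * axis

w = 0.01 * rng.standard_normal(n_pre)
for x in lgn:
    y = w @ x                      # linear postsynaptic response
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian growth plus decay

# The weights align with the dominant correlation axis of the input.
cos = abs(w @ axis) / (np.linalg.norm(w) * np.linalg.norm(axis))
print(f"alignment with input correlation axis: {cos:.3f}")
```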

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs that are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
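    One standard idea behind MSTd-style heading computation, which models of this kind elaborate, is matching the observed flow field against a bank of radial-expansion templates tuned to different foci of expansion (FOE). The sketch below is an illustrative template-matching toy under that assumption, not the paper's full circuit.

```python
# Heading from optic flow via template matching (illustrative toy).
import numpy as np

rng = np.random.default_rng(2)

# Flow-field sample points on a grid (degrees of visual angle).
xs, ys = np.meshgrid(np.linspace(-20, 20, 21), np.linspace(-20, 20, 21))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)

def radial_flow(foe):
    """Unit flow vectors radiating outward from a focus of expansion."""
    v = pts - foe
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)

# Observed flow for a true heading (FOE) at (6, 0) degrees, plus noise.
observed = radial_flow(np.array([6.0, 0.0])) + 0.3 * rng.standard_normal(pts.shape)

# Each "MSTd unit" correlates the observed flow with its template; the
# preferred FOE of the best-responding unit is the heading estimate.
foes = [np.array([fx, fy], float)
        for fx in range(-15, 16, 3) for fy in range(-15, 16, 3)]
responses = [np.sum(observed * radial_flow(f)) for f in foes]
print("estimated heading:", foes[int(np.argmax(responses))])
```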

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
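    The attractor/repeller steering description can be made concrete with heading dynamics in the spirit of Fajen-and-Warren-style steering models: the heading angle is pulled toward the goal's bearing and pushed away from obstacle bearings, with repulsion decaying with angular and physical distance. All gains and decay terms below are illustrative assumptions, not ViSTARS parameters.

```python
# Goal-as-attractor, obstacle-as-repeller steering dynamics (illustrative sketch).
import numpy as np

def step(pos, phi, goal, obstacles, dt=0.05, speed=1.0, k_g=3.0, k_o=4.0, c=2.0):
    """One Euler step of heading dynamics: the goal attracts, obstacles repel."""
    def bearing(p):
        d = p - pos
        raw = np.arctan2(d[1], d[0]) - phi
        return np.arctan2(np.sin(raw), np.cos(raw))      # wrap to (-pi, pi]

    dphi = k_g * bearing(goal)                           # turn toward the goal
    for ob in obstacles:                                 # turn away from obstacles,
        b, dist = bearing(ob), np.linalg.norm(ob - pos)  # weakly when far away
        dphi -= k_o * np.sign(b) * np.exp(-c * abs(b)) * np.exp(-dist)
    phi += dt * dphi
    return pos + dt * speed * np.array([np.cos(phi), np.sin(phi)]), phi

pos, phi = np.array([0.0, 0.0]), 0.0
goal, obstacles = np.array([10.0, 0.0]), [np.array([5.0, 0.3])]
for _ in range(400):
    pos, phi = step(pos, phi, goal, obstacles)
    if np.linalg.norm(goal - pos) < 0.2:                 # close enough to the goal
        break
print("final position:", pos.round(2))  # the path bends around the obstacle
```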

    Spatiotemporal dynamics of feature-based attention spread: evidence from combined electroencephalographic and magnetoencephalographic recordings

    Attentional selection on the basis of nonspatial stimulus features induces a sensory gain enhancement by increasing the firing rate of individual neurons tuned to the attended feature, while responses of neurons tuned to opposite feature values are suppressed. Here we recorded event-related potentials (ERPs) and magnetic fields (ERMFs) in human observers to investigate the underlying neural correlates of feature-based attention at the population level. During the task, subjects attended to a moving transparent surface presented in the left visual field, while task-irrelevant probe stimuli executing brief movements in varying directions were presented in the opposite visual field. ERP and ERMF amplitudes elicited by the unattended task-irrelevant probes were modulated as a function of the similarity between their movement direction and the task-relevant movement direction in the attended visual field. These activity modulations, reflecting globally enhanced processing of the attended feature, were observed to start no earlier than 200 ms post-stimulus and were localized to the motion-sensitive area hMT. The current results indicate that feature-based attention operates in a global manner but needs time to spread, and they provide strong support for the feature-similarity gain model.
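    The feature-similarity gain model the data support can be stated compactly: a neuron's attentional gain scales with the similarity between its preferred feature value and the attended value, enhancing similarly tuned neurons and suppressing oppositely tuned ones. The sketch below is a minimal illustration with an assumed cosine similarity term and von Mises tuning; the functional form and parameters are illustrative, not from the study.

```python
# Feature-similarity gain: attentional gain scales with tuning similarity.
import numpy as np

def fsg_response(preferred_deg, attended_deg, stimulus_deg,
                 base_gain=1.0, beta=0.3, kappa=3.0):
    """Response of a direction-tuned unit under feature-based attention."""
    def tuning(theta, pref):
        # von Mises direction tuning, peak response 1 at the preferred direction
        return np.exp(kappa * (np.cos(np.deg2rad(theta - pref)) - 1.0))

    # Gain grows with similarity between attended and preferred direction
    # (+1 identical, -1 opposite), so oppositely tuned units are suppressed.
    similarity = np.cos(np.deg2rad(attended_deg - preferred_deg))
    return (base_gain + beta * similarity) * tuning(stimulus_deg, preferred_deg)

# Probes moving in the attended direction (0 deg) vs. the opposite direction:
print(fsg_response(0, 0, 0))      # unit tuned to the attended direction: gain 1.3
print(fsg_response(180, 0, 180))  # unit tuned to the opposite direction: gain 0.7
```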

    Efficient MaxCount and threshold operators of moving objects

    Calculating operators of continuously moving objects presents some unique challenges, especially when the operators involve aggregation or the concept of congestion, which occurs when the number of moving objects in a changing or dynamic query space exceeds some threshold value. This paper presents the following six d-dimensional moving-object operators: (1) MaxCount (or MinCount), which finds the maximum (or minimum) number of moving objects simultaneously present in the dynamic query space at any time during the query time interval; (2) CountRange, which finds a count of point objects whose trajectories intersect the dynamic query space during the query time interval; (3) ThresholdRange, which finds the set of time intervals during which the dynamic query space is congested; (4) ThresholdSum, which finds the total length of all the time intervals during which the dynamic query space is congested; (5) ThresholdCount, which finds the number of disjoint time intervals during which the dynamic query space is congested; and (6) ThresholdAverage, which finds the average length of all the time intervals during which the dynamic query space is congested. For each operator, separate algorithms are given to compute either estimated or precise values. Experimental results from more than 7,500 queries indicate that the estimation algorithms produce fast, efficient results with errors under 5%.
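    To illustrate the flavor of these operators, the sketch below computes an exact MaxCount for the simplest case: 1-D point objects in linear motion through a static query interval. Each object's residence interval is computed analytically, then entry/exit events are swept in time order. The paper's operators generalize this to d dimensions and dynamic (moving) query spaces; this simplification is ours.

```python
# Exact MaxCount for 1-D linearly moving points and a static query interval.
from typing import List, Optional, Tuple

def residence(x0: float, v: float, lo: float, hi: float,
              t0: float, t1: float) -> Optional[Tuple[float, float]]:
    """Time interval within [t0, t1] during which x0 + v*t lies in [lo, hi]."""
    if v == 0:
        return (t0, t1) if lo <= x0 <= hi else None
    a, b = sorted(((lo - x0) / v, (hi - x0) / v))
    a, b = max(a, t0), min(b, t1)
    return (a, b) if a < b else None

def max_count(objs: List[Tuple[float, float]],
              lo: float, hi: float, t0: float, t1: float) -> int:
    """Maximum number of objects simultaneously inside [lo, hi] during [t0, t1]."""
    events = []
    for x0, v in objs:
        iv = residence(x0, v, lo, hi, t0, t1)
        if iv:
            events += [(iv[0], +1), (iv[1], -1)]
    best = cur = 0
    for _, delta in sorted(events):   # at equal times, exits (-1) sort first
        cur += delta
        best = max(best, cur)
    return best

# Three objects (initial position, velocity) crossing [0, 10] during t in [0, 10]:
print(max_count([(-5.0, 1.0), (15.0, -1.0), (2.0, 0.0)], 0, 10, 0, 10))  # -> 3
```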