
    Rapid mapping of visual receptive fields by filtered back-projection: application to multi-neuronal electrophysiology and imaging

    Neurons in the visual system vary widely in the spatiotemporal properties of their receptive fields (RFs), and understanding these variations is key to elucidating how visual information is processed. We present a new approach for mapping RFs based on filtered back-projection (FBP), an algorithm used for tomographic reconstruction. To estimate RFs, a series of bars was flashed across the retina at pseudo-random positions and at a minimum of five orientations. We apply this method to retinal neurons and show that it can accurately recover the spatial RF and impulse response of ganglion cells recorded on a multi-electrode array. We also demonstrate its utility for in vivo imaging by mapping the RFs of an array of bipolar cell synapses expressing a genetically encoded Ca2+ indicator. We find that FBP offers several advantages over the commonly used spike-triggered average (STA): (i) the ON and OFF components of an RF can be separated; (ii) the impulse response can be reconstructed at sample rates of 125 Hz rather than at the refresh rate of a monitor; (iii) FBP reveals response properties of neurons that are not evident using the STA, including those that display orientation selectivity or fire at low mean spike rates; and (iv) the FBP method is fast, allowing the RFs of all the bipolar cell synaptic terminals in a field of view to be reconstructed in under 4 min. Use of FBP will benefit investigations of the visual system that employ electrophysiology or optical reporters to measure activity across populations of neurons.
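
    As a rough illustration of the reconstruction step, the sketch below simulates the core idea in Python: responses to bars at several orientations form a sinogram, and filtered back-projection recovers the 2D receptive field. It is a toy under stated assumptions (a Gaussian ON-centre RF, noiseless responses, scikit-image's radon/iradon standing in for the authors' implementation), not the published pipeline.

```python
# Toy sketch: recover a 2D receptive field (RF) from bar responses by
# filtered back-projection. The Gaussian RF, the noiseless projections
# and the use of scikit-image are assumptions for illustration only.
import numpy as np
from skimage.transform import radon, iradon

size = 64
y, x = np.mgrid[:size, :size] - size // 2
true_rf = np.exp(-(x**2 + y**2) / (2 * 6.0**2))  # hypothetical ON-centre RF

# Five orientations, the minimum used in the stimulus protocol above.
angles = np.linspace(0.0, 180.0, 5, endpoint=False)

# Each sinogram column plays the role of the responses to bars flashed
# at every position along one orientation.
sinogram = radon(true_rf, theta=angles, circle=False)

# Filtered back-projection (default ramp filter) recovers the RF map.
rf_estimate = iradon(sinogram, theta=angles, circle=False, output_size=size)

print('correlation with true RF:',
      np.corrcoef(true_rf.ravel(), rf_estimate.ravel())[0, 1])
```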

    Deformable kernels for early vision

    Early vision algorithms often have a first stage of linear filtering that 'extracts' information from the image at multiple scales of resolution and multiple orientations. A common difficulty in the design and implementation of such schemes is that one feels compelled to discretize the space of scales and orientations coarsely in order to reduce computation and storage costs. A technique is presented that allows: 1) computing the best approximation of a given family of filters using linear combinations of a small number of 'basis' functions; and 2) describing all finite-dimensional families, i.e., the families of filters for which an error-free finite-dimensional representation is possible. The technique is based on the singular value decomposition and may be applied to generating filters in arbitrary dimensions and subject to arbitrary deformations. The relevant functional-analysis results are reviewed and precise conditions for the decomposition to be feasible are stated. Experimental results demonstrate the applicability of the technique to generating multi-orientation, multi-scale 2D edge-detection kernels. Implementation issues are also discussed.
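
    The construction lends itself to a compact numerical sketch. The toy below (my own example, not the paper's code: the Gabor family, its size and the number of retained components are all assumptions) stacks a densely sampled family of oriented filters into a matrix, takes its singular value decomposition, and keeps the leading singular vectors as the 'basis' filters from which any orientation is synthesised by linear combination.

```python
# Toy sketch: approximate a continuum of oriented filters with a few
# SVD-derived basis kernels. Family and parameters are illustrative.
import numpy as np

size, sigma, freq = 21, 3.0, 0.25
y, x = np.mgrid[:size, :size] - size // 2
envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))

def oriented_filter(theta):
    """Odd-phase Gabor at orientation theta (one family member)."""
    u = x * np.cos(theta) + y * np.sin(theta)
    return envelope * np.sin(2 * np.pi * freq * u)

thetas = np.linspace(0.0, np.pi, 64, endpoint=False)
family = np.stack([oriented_filter(t).ravel() for t in thetas])

# Rows of Vt are the basis kernels; U * S gives the interpolation
# coefficients that synthesise each orientation from them.
U, S, Vt = np.linalg.svd(family, full_matrices=False)

k = 4                                   # number of basis filters kept
approx = (U[:, :k] * S[:k]) @ Vt[:k]
err = np.linalg.norm(family - approx) / np.linalg.norm(family)
print(f'{k} basis filters -> relative L2 error {err:.3f}')
```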

    A computational and psychophysical study of motion induced distortions of perceived location.

    In this thesis I begin by extending previous psychophysical research on the effects of visual motion on spatial localisation. In particular, I measured the perceived spatial shift of briefly presented static objects adjacent to a moving stimulus. The timing of the presentation of the static objects relative to the nearby motion proved crucial. I also found that this motion-induced spatial displacement decreased with increasing distance of the static objects from the motion, suggesting a local effect of motion. The induced perceptual shift could also be reduced by introducing transient stimuli (flickering dots) into the background of the display. The next stage was to construct a computational model providing a mechanism that could facilitate such shifts in position. To motivate our combined model of motion computation and spatial representation, we considered what functions could be attributed to V1 cells on the basis of their contrast sensitivity functions. I found that functions based on sums of derivative-of-Gaussian operators could provide good fits to previously reported V1 data. This characterisation of V1 cells as derivative-of-Gaussian filters applied to an image was used to build a spatial representation in which position is encoded in the weighting of the filter outputs, rather than in a one-to-one isomorphic representation of the scene. This image representation can also be used, together with temporal derivatives, to compute motion under the Multi-Channel Gradient Model scheme (Johnston et al., 1992). I demonstrate how this framework can incorporate motion signals to produce "in place" shifts of visual location. Finally, a combined model of motion and spatial location is outlined and evaluated against the psychophysical data.
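
    The gradient-model ingredient can be illustrated with a short simulation. The sketch below is a drastic simplification of the Multi-Channel Gradient Model (one spatial dimension, a single drifting sinusoid, arbitrarily chosen parameters), not the thesis implementation: derivative-of-Gaussian filters measure the spatial and temporal gradients of a translating pattern, and velocity is recovered from the gradient constraint v = -I_t / I_x.

```python
# Toy sketch: recover velocity from Gaussian-derivative measurements of
# spatial and temporal gradients. All parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

true_v = 1.5                                  # pixels per frame (assumed)
xs = np.arange(256)
frames = np.stack([np.sin(2 * np.pi * (xs - true_v * t) / 32)
                   for t in range(64)])       # movie: time x space

# Identical 2D Gaussian smoothing, differentiated along one axis each,
# so both measurements are derivatives of the same blurred movie.
Ix = gaussian_filter(frames, sigma=2.0, order=(0, 1))   # d/dx
It = gaussian_filter(frames, sigma=2.0, order=(1, 0))   # d/dt

# Least-squares velocity over the interior (away from filter borders):
# v = -sum(Ix * It) / sum(Ix**2).
core = (slice(10, -10), slice(10, -10))
v_hat = -np.sum(Ix[core] * It[core]) / np.sum(Ix[core] ** 2)
print(f'recovered velocity: {v_hat:.2f} (true {true_v})')
```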

    Adaptation of the Retina to Stimulus Correlations

    Visual scenes in the natural world are highly correlated. To encode such an environment efficiently with a limited dynamic range, the retina ought to reduce correlations so as to maximize information; on the other hand, some redundancy is needed to combat the effects of noise. Here we ask how the degree of redundancy in the retinal output depends on the stimulus ensemble. We find that the retinal output preserves correlations in a spatially correlated stimulus but adaptively reduces changes in spatio-temporal input correlations. The latter effect can be explained by stimulus-dependent changes in receptive fields. We also find evidence that horizontal cells in the outer retina enhance changes in output correlations. GABAergic amacrine cells in the inner retina also enhance differences in correlation, albeit to a lesser degree, while glycinergic amacrine cells have little effect on output correlation. These results suggest that the early visual system can adapt to stimulus correlations so as to balance the competing demands of redundancy and noise.
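
    For readers unfamiliar with the quantity being adapted, the sketch below shows how an output correlation of the kind at stake is typically measured: as the Pearson correlation of binned spike counts for a pair of cells. The simulated data (Poisson counts driven by a shared rate fluctuation) are an assumption for illustration; the study's recordings and analysis pipeline are not reproduced here.

```python
# Toy sketch: pairwise output correlation as the Pearson correlation of
# binned spike counts. The shared-drive Poisson model is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_bins = 2000
shared = rng.gamma(2.0, 1.0, n_bins)     # common slow input fluctuation
cell_a = rng.poisson(2.0 * shared)       # binned spike counts, cell A
cell_b = rng.poisson(1.5 * shared)       # binned spike counts, cell B

r = np.corrcoef(cell_a, cell_b)[0, 1]
print(f'output correlation r = {r:.2f}')
```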

    How Is a Moving Target Continuously Tracked Behind Occluding Cover?

    Office of Naval Research (N00014-95-1-0657, N00014-95-1-0409)

    Saccade learning with concurrent cortical and subcortical basal ganglia loops

    The basal ganglia form a central structure involved in multiple cortical and subcortical loops, some of which are believed to be responsible for saccade target selection. We study here how the very specific structural relationships among these saccadic loops affect the ability to learn spatial and feature-based tasks. We propose a model of saccade generation with reinforcement-learning capabilities, based on our previous basal ganglia and superior colliculus models and structured around the interactions of two parallel cortico-basal loops and one tecto-basal loop. The two cortical loops deal separately with spatial and non-spatial information to select targets concurrently; the subcortical loop makes the final target selection leading to the production of the saccade. These loops may work in concert or disturb one another with respect to reward maximization. The interactions between the loops and their learning capabilities are tested on different saccade tasks. The results show that the model correctly learns basic target selection based on different criteria (spatial or not). Moreover, the model reproduces and explains training-dependent express saccades toward targets selected by a spatial criterion. Finally, the model predicts that in the absence of prefrontal control, the spatial loop should dominate.
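
    The loop arrangement can be caricatured in a few lines of code. The sketch below is a schematic toy, not the published model: two parallel 'cortical' loops score candidate targets on spatial and feature criteria, a final 'subcortical' stage combines their votes and selects a target, and a reward signal gradually shifts trust toward the loop whose preference earns reward (here, a spatial criterion). The scoring, update rule, and all parameters are assumptions.

```python
# Schematic toy of two concurrent selection loops with a final
# subcortical winner-take-all stage and reward-driven reweighting.
import numpy as np

rng = np.random.default_rng(1)
w = np.array([0.5, 0.5])                 # trust: [spatial, feature] loop
alpha = 0.05                             # learning rate (assumed)

for trial in range(500):
    spatial_score = rng.random(3)        # spatial loop's target scores
    feature_score = rng.random(3)        # feature loop's target scores
    rewarded = np.argmax(spatial_score)  # the task rewards a spatial rule

    # Subcortical stage: weighted combination, winner-take-all choice.
    choice = np.argmax(w[0] * spatial_score + w[1] * feature_score)

    # Credit assignment: a loop that voted for a rewarded choice gains
    # trust; a loop that voted for an unrewarded one loses it.
    reward = 1.0 if choice == rewarded else 0.0
    votes = np.array([np.argmax(spatial_score) == choice,
                      np.argmax(feature_score) == choice], float)
    w = np.clip(w + alpha * (reward - 0.5) * (votes - w), 0.0, 1.0)
    w /= w.sum()

print(f'final loop weights (spatial, feature): {np.round(w, 2)}')
```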