Unraveling neural coding of dynamic natural visual scenes via convolutional recurrent neural networks
Traditional models for retinal system identification analyze neural responses to artificial stimuli using models built from predefined components. Such model designs are limited by prior knowledge, and the artificial stimuli are too simple compared with the stimuli the retina actually processes. To fill this gap with an explainable model that reveals how a population of neurons works together to encode larger-field natural scenes, here we used a deep-learning model to identify the computational elements of the retinal circuit that contribute to learning the dynamics of natural scenes. Experimental results verify that the recurrent connection plays a key role in encoding complex dynamic visual scenes while learning biological computational underpinnings of the retinal circuit. In addition, the proposed models reveal both the shapes and the locations of the spatiotemporal receptive fields of ganglion cells.
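The convolutional-recurrent idea in this abstract can be sketched in miniature. The toy model below (all names, shapes, and nonlinearities are illustrative assumptions, not the authors' implementation) filters each video frame with a spatial kernel, integrates the filtered frames through a recurrent state, and reads out a rectified firing rate per frame:

```python
import numpy as np

def conv_recurrent_response(frames, kernel, w_rec, w_out):
    """Toy convolutional-recurrent encoder: each frame is filtered
    spatially ("valid" cross-correlation), a recurrent state integrates
    the filtered frames over time, and a rectified linear readout
    produces one firing rate per frame."""
    kh, kw = kernel.shape
    out_h = frames.shape[1] - kh + 1
    out_w = frames.shape[2] - kw + 1
    state = np.zeros((out_h, out_w))
    rates = []
    for frame in frames:
        # valid 2-D cross-correlation of the frame with the kernel
        filt = np.array([[np.sum(frame[i:i + kh, j:j + kw] * kernel)
                          for j in range(out_w)]
                         for i in range(out_h)])
        state = np.tanh(filt + w_rec * state)  # recurrent update
        rates.append(max(0.0, float(np.sum(w_out * state))))  # ReLU readout
    return np.array(rates)
```

Dropping the `w_rec * state` term reduces this to a purely feedforward model, which is the comparison the abstract's "recurrent connection plays a key role" claim rests on.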
Neural system identification for large populations separating "what" and "where"
Neuroscientists classify neurons into different types that perform similar
computations at different locations in the visual field. Traditional methods
for neural system identification do not capitalize on this separation of 'what'
and 'where'. Learning deep convolutional feature spaces that are shared among
many neurons provides an exciting path forward, but the architectural design
needs to account for data limitations: While new experimental techniques enable
recordings from thousands of neurons, experimental time is limited so that one
can sample only a small fraction of each neuron's response space. Here, we show
that a major bottleneck for fitting convolutional neural networks (CNNs) to
neural data is the estimation of the individual receptive field locations, a
problem whose surface has only been scratched thus far. We propose a CNN
architecture with a sparse readout layer factorizing the spatial (where) and
feature (what) dimensions. Our network scales well to thousands of neurons and
short recordings and can be trained end-to-end. We evaluate this architecture
on ground-truth data to explore the challenges and limitations of CNN-based
system identification. Moreover, we show that our network model outperforms
current state-of-the-art system identification models of mouse primary visual
cortex.
Comment: NIPS 201
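The "what"/"where" factorization can be illustrated in a few lines of NumPy (hypothetical names; this is not the paper's code). Each neuron's readout over a feature tensor is the outer product of a spatial mask ("where") and a feature-weight vector ("what"), so the parameter count per neuron grows as pixels plus channels rather than pixels times channels:

```python
import numpy as np

def factorized_readout(features, spatial_mask, feature_weights):
    """Readout with separated factors: pool the feature tensor
    (channels, height, width) with a per-neuron spatial mask, then
    weight the pooled channels.  The implicit full readout weight is
    the outer product mask x weights."""
    # 'where': collapse the spatial dimensions with the mask -> (channels,)
    pooled = np.tensordot(features, spatial_mask, axes=([1, 2], [0, 1]))
    # 'what': weight the pooled feature channels
    return float(feature_weights @ pooled)
```

The factorized readout gives the same response as a dense per-neuron weight tensor equal to the outer product of the two factors, which is why sparsity and locality constraints can be placed on the spatial factor alone.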
Closed-loop estimation of retinal network sensitivity reveals signature of efficient coding
According to the theory of efficient coding, sensory systems are adapted to
represent natural scenes with high fidelity and at minimal metabolic cost.
Testing this hypothesis for sensory structures performing non-linear
computations on high dimensional stimuli is still an open challenge. Here we
develop a method to characterize the sensitivity of the retinal network to
perturbations of a stimulus. Using closed-loop experiments, we explore
selectively the space of possible perturbations around a given stimulus. We
then show that the response of the retinal population to these small
perturbations can be described by a local linear model. Using this model, we
computed the sensitivity of the neural response to arbitrary temporal
perturbations of the stimulus, and found a peak in the sensitivity as a
function of the frequency of the perturbations. Based on a minimal theory of
sensory processing, we argue that this peak is set to maximize information
transmission. Our approach is relevant to testing the efficient coding
hypothesis locally in any context where no reliable encoding model is known.
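Under the local linear model described above, the frequency-resolved sensitivity reduces to the norm of the model's response to a unit-norm sinusoidal perturbation at each frequency. A minimal sketch (the matrix `L`, the sampling grid, and the function name are illustrative assumptions):

```python
import numpy as np

def sensitivity_vs_frequency(L, freqs, t):
    """For a local linear model r = r0 + L @ ds, the sensitivity to a
    unit-norm sinusoidal perturbation at frequency f is ||L @ s_f||,
    evaluated here on a grid of test frequencies."""
    sens = []
    for f in freqs:
        s = np.sin(2 * np.pi * f * t)
        s = s / np.linalg.norm(s)  # unit-norm perturbation
        sens.append(np.linalg.norm(L @ s))
    return np.array(sens)
```

A peak in the returned curve at some nonzero frequency is the kind of signature the closed-loop experiments probe for.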
Delineation of line patterns in images using B-COSFIRE filters
Delineation of line patterns in images is a basic step required in various
applications such as blood vessel detection in medical images, segmentation of
rivers or roads in aerial images, detection of cracks in walls or pavements,
etc. In this paper we present trainable B-COSFIRE filters, which model
certain neurons in the primary visual cortex (area V1), and apply them to the
delineation of line patterns in different kinds of images. B-COSFIRE filters
are trainable as their selectivity is determined in an automatic configuration
process given a prototype pattern of interest. They are configurable to detect
any preferred line structure (e.g. segments, corners, or cross-overs) and are
therefore usable for automatic data representation learning. We carried out experiments
on two data sets, namely a line-network data set from INRIA and a data set of
retinal fundus images named IOSTAR. The results that we achieved confirm the
robustness of the proposed approach and its effectiveness in the delineation of
line structures in different kinds of images.
Comment: International Work Conference on Bioinspired Intelligence, July
10-13, 201
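The configure-then-detect idea behind such trainable filters can be shown in a stripped-down sketch (function names are hypothetical, and the actual B-COSFIRE operator works on blurred difference-of-Gaussians responses rather than raw pixels): the configuration step records the positions where a prototype pattern responds strongly, and the detection step combines image values at those offsets with a geometric mean:

```python
import numpy as np

def configure_offsets(prototype, thresh=0.5):
    """Configuration: record the positions (relative to the center)
    where the prototype pattern exceeds a fraction of its maximum."""
    cy, cx = prototype.shape[0] // 2, prototype.shape[1] // 2
    ys, xs = np.nonzero(prototype > thresh * prototype.max())
    return [(y - cy, x - cx) for y, x in zip(ys, xs)]

def cosfire_like_response(image, offsets):
    """Detection: geometric mean of the image values at the configured
    offsets around each pixel (values outside the image count as 0)."""
    h, w = image.shape
    resp = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            vals = [image[y + dy, x + dx]
                    if 0 <= y + dy < h and 0 <= x + dx < w else 0.0
                    for dy, dx in offsets]
            resp[y, x] = float(np.prod(vals)) ** (1.0 / len(vals))
    return resp
```

The geometric mean acts as a soft AND: the response is high only where all configured parts of the pattern are present, which is what makes the filter selective for the prototype's shape.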
From receptive profiles to a metric model of V1
In this work we show how to construct connectivity kernels induced by the
receptive profiles of simple cells of the primary visual cortex (V1). These
kernels are directly defined by the shape of such profiles: this provides a
metric model for the functional architecture of V1, whose global geometry is
determined by the reciprocal interactions between local elements. Our
construction adapts to any bank of filters chosen to represent a set of
receptive profiles, since it does not require any structure on the
parameterization of the family. The connectivity kernel that we define carries
a geometrical structure consistent with the well-known properties of long-range
horizontal connections in V1, and it is compatible with the perceptual rules
synthesized by the concept of association field. These characteristics are
still present when the kernel is constructed from a bank of filters arising
from an unsupervised learning algorithm.
Comment: 25 pages, 18 figures. Added acknowledgement
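One simple way to realize a kernel that depends only on the receptive profiles themselves, with no structure assumed on their parameterization, is a similarity measure between flattened filters. The cosine-similarity kernel below is an illustrative stand-in for the paper's metric construction, not its actual definition:

```python
import numpy as np

def connectivity_kernel(filter_bank):
    """Kernel induced by a bank of receptive profiles: the affinity of
    two cells is the cosine similarity of their flattened profiles,
    so any filter bank works regardless of how it is parameterized."""
    F = np.array([np.ravel(f) for f in filter_bank], dtype=float)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)  # unit-norm rows
    return F @ F.T  # symmetric kernel with unit diagonal
```

Because the kernel reads only the profile shapes, the same construction applies whether the bank comes from a parametric model (e.g. Gabor-like profiles) or from an unsupervised learning algorithm, as the abstract notes.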