Nonlinear Hebbian learning as a unifying principle in receptive field formation
The development of sensory receptive fields has been modeled in the past by a
variety of approaches, including normative models such as sparse coding or
independent component analysis and bottom-up models such as spike-timing-dependent
plasticity or the Bienenstock-Cooper-Munro model of synaptic
plasticity. Here we show that this variety of approaches can all be
unified into a single common principle, namely Nonlinear Hebbian Learning. When
Nonlinear Hebbian Learning is applied to natural images, receptive field shapes
are strongly constrained by the input statistics and preprocessing, but
exhibit only modest variation across different choices of nonlinearities in
neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse
network activity is necessary for the development of localized receptive
fields. The analysis of alternative sensory modalities, such as auditory models
or V2 development, leads to the same conclusions. In all examples, receptive
fields can be predicted a priori by reformulating an abstract model as
nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural
statistics can account for many aspects of receptive field formation across
models and sensory modalities.
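A minimal numerical sketch of a nonlinear Hebbian rule of the kind the abstract refers to (the tanh nonlinearity, toy 2-D inputs, and learning rate are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D inputs with most variance along the first axis.
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
g = np.tanh                      # one of many nonlinearities that behave alike

for x in X:
    y = w @ x                    # postsynaptic activity
    w += eta * g(y) * x          # nonlinear Hebbian update: dw ~ g(y) * x
    w /= np.linalg.norm(w)       # normalization keeps the weights bounded

# The learned weight vector aligns with the high-variance input direction.
print(w)
```

With these anisotropic inputs the weight vector aligns with the high-variance direction, and the outcome changes little across reasonable choices of g, which is the abstract's central point.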
Optimal measurement of visual motion across spatial and temporal scales
Sensory systems use limited resources to mediate the perception of a great
variety of objects and events. Here a normative framework is presented for
exploring how the problem of efficient allocation of resources can be solved in
visual perception. Starting with a basic property of every measurement,
captured by Gabor's uncertainty relation about the location and frequency
content of signals, prescriptions are developed for optimal allocation of
sensors for reliable perception of visual motion. This study reveals that a
large-scale characteristic of human vision (the spatiotemporal contrast
sensitivity function) is similar to the optimal prescription, and it suggests
that some previously puzzling phenomena of visual sensitivity, adaptation, and
perceptual organization have simple principled explanations.
Comment: 28 pages, 10 figures, 2 appendices; in press in Favorskaya MN and Jain LC (Eds), Computer Vision in Advanced Control Systems using Conventional and Intelligent Paradigms, Intelligent Systems Reference Library, Springer-Verlag, Berlin
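Gabor's uncertainty relation mentioned above can be checked numerically: for a Gaussian window the duration-bandwidth product attains the lower bound 1/(4π) (the window width and sampling grid below are arbitrary choices):

```python
import numpy as np

# Gabor's uncertainty relation: delta_t * delta_f >= 1/(4*pi), with equality
# for a Gaussian window.
sigma, dt = 0.01, 1e-5
t = np.arange(-0.2, 0.2, dt)
g = np.exp(-t**2 / (2 * sigma**2))

def spread(x, p):
    p = p / p.sum()              # treat |signal|^2 as a probability density
    m = (x * p).sum()
    return np.sqrt(((x - m)**2 * p).sum())

delta_t = spread(t, g**2)

G = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(len(t), dt))
delta_f = spread(f, np.abs(G)**2)

print(delta_t * delta_f, 1 / (4 * np.pi))   # both ~ 0.0796
```

Any non-Gaussian window gives a strictly larger product, which is why Gabor functions are the natural starting point for an optimal-allocation argument.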
Invariance of visual operations at the level of receptive fields
Receptive field profiles registered by cell recordings have shown that
mammalian vision has developed receptive fields tuned to different sizes and
orientations in the image domain as well as to different image velocities in
space-time. This article presents a theoretical model by which families of
idealized receptive field profiles can be derived mathematically from a small
set of basic assumptions that correspond to structural properties of the
environment. The article also presents a theory for how basic invariance
properties to variations in scale, viewing direction and relative motion can be
obtained from the output of such receptive fields, using complementary
selection mechanisms that operate over the output of families of receptive
fields tuned to different parameters. Thereby, the theory shows how basic
invariance properties of a visual system can be obtained already at the level
of receptive fields, and we can explain the different shapes of receptive field
profiles found in biological vision from a requirement that the visual system
should be invariant to the natural types of image transformations that occur in
its environment.
Comment: 40 pages, 17 figures
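The scale-selection flavour of such invariance mechanisms can be illustrated with a scale-normalized Gaussian derivative (a generic scale-space sketch, not the article's full model): for a 1-D Gaussian blob of variance t0, the scale-normalized second-derivative response at the blob centre peaks at t = 2·t0, so the selected scale follows the size of the image structure.

```python
import numpy as np

# Scale selection with a scale-normalized Gaussian derivative: for a 1-D
# Gaussian blob of variance t0, the response t * |L_xx| at the blob centre
# peaks at scale t = 2 * t0.
t0 = 4.0
x = np.arange(-100, 101, dtype=float)
blob = np.exp(-x**2 / (2 * t0))

def gauss_xx(sigma):
    # Sampled second derivative of a 1-D Gaussian kernel.
    u = np.arange(-int(6 * sigma) - 1, int(6 * sigma) + 2, dtype=float)
    g = np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return (u**2 / sigma**4 - 1 / sigma**2) * g

scales = np.linspace(1.0, 20.0, 200)        # scale parameter t = sigma^2
responses = [t * abs(np.convolve(blob, gauss_xx(np.sqrt(t)), mode='same')[100])
             for t in scales]
t_hat = scales[int(np.argmax(responses))]
print(t_hat)                                 # ~ 8.0, i.e. 2 * t0
```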
Do retinal ganglion cells project natural scenes to their principal subspace and whiten them?
Several theories of early sensory processing suggest that it whitens sensory
stimuli. Here, we test three key predictions of the whitening theory using
recordings from 152 ganglion cells in salamander retina responding to natural
movies. We confirm the previous finding that firing rates of ganglion cells are
less correlated than natural scenes, although significant correlations
remain. We show that while the power spectrum of ganglion cells decays less
steeply than that of natural scenes, it is not completely flattened. Finally,
we find evidence that only the top principal components of the visual stimulus
are transmitted.
Comment: 2016 Asilomar Conference on Signals, Systems and Computers
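The operation tested in the abstract, projection onto the top principal components with each component rescaled to unit variance, can be sketched on toy data (the dimensions and sample counts below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy correlated "scene" data: n samples in d dimensions.
n, d, k = 2000, 8, 3             # samples, input dimension, retained components
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))
X -= X.mean(axis=0)

# Project onto the top-k principal subspace and whiten those components.
C = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues in ascending order
W = eigvecs[:, -k:] / np.sqrt(eigvals[-k:])  # scale each PC to unit variance
Y = X @ W

# The outputs are decorrelated with unit variance (identity covariance).
print(np.round(Y.T @ Y / n, 6))
```

The abstract's finding corresponds to the retina performing only part of this computation: the principal-subspace projection is supported by the data, while the variance equalization is incomplete.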
Evidence accumulation in a Laplace domain decision space
Evidence accumulation models of simple decision-making have long assumed that
the brain estimates a scalar decision variable corresponding to the
log-likelihood ratio of the two alternatives. Typical neural implementations of
this algorithmic cognitive model assume that large numbers of neurons are each
noisy exemplars of the scalar decision variable. Here we propose a neural
implementation of the diffusion model in which many neurons construct and
maintain the Laplace transform of the distance to each of the decision bounds.
As in classic findings from brain regions including LIP, the firing rate of
neurons coding for the Laplace transform of net accumulated evidence grows to a
bound during random dot motion tasks. However, rather than noisy exemplars of a
single mean value, this approach makes the novel prediction that firing rates
grow to the bound exponentially; across neurons there should be a distribution
of different growth rates. A second set of neurons records an approximate
inversion of the Laplace transform; these neurons directly estimate net accumulated
evidence. In analogy to time cells and place cells observed in the hippocampus
and other brain regions, the neurons in this second set have receptive fields
along a "decision axis." This finding is consistent with recent findings from
rodent recordings. This theoretical approach places simple evidence
accumulation models in the same mathematical language as recent proposals for
representing time and space in cognitive models for memory.
Comment: Revised for CB
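A toy rendering of this coding scheme (parameter values and the exponential rate code are illustrative assumptions, not the authors' implementation): a drift-diffusion variable approaches a bound while units with different rate constants s carry exp(-s · distance-to-bound), so every unit grows to the bound but at its own exponential speed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Drift-diffusion of net evidence x toward a bound, plus units whose rates
# encode exp(-s * (bound - x)) for a spectrum of rate constants s.
bound, drift, noise, dt = 1.0, 0.5, 0.3, 0.01
x, xs = 0.0, []
while x < bound:
    x += drift * dt + noise * np.sqrt(dt) * rng.normal()
    xs.append(min(x, bound))
xs = np.array(xs)

s_values = np.array([1.0, 2.0, 4.0])          # distribution of rate constants
rates = np.exp(-np.outer(s_values, bound - xs))

# Every unit reaches the same value at the bound, but grows at its own
# exponential speed; across units there is a distribution of growth rates.
print(rates[:, -1])                            # [1. 1. 1.] at the bound
```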
Time-causal and time-recursive spatio-temporal receptive fields
We present an improved model and theory for time-causal and time-recursive
spatio-temporal receptive fields, based on a combination of Gaussian receptive
fields over the spatial domain and first-order integrators or equivalently
truncated exponential filters coupled in cascade over the temporal domain.
Compared to previous spatio-temporal scale-space formulations in terms of
non-enhancement of local extrema or scale invariance, these receptive fields
are based on different scale-space axiomatics over time by ensuring
non-creation of new local extrema or zero-crossings with increasing temporal
scale. Specifically, extensions are presented about (i) parameterizing the
intermediate temporal scale levels, (ii) analysing the resulting temporal
dynamics, (iii) transferring the theory to a discrete implementation, (iv)
computing scale-normalized spatio-temporal derivative expressions for
spatio-temporal feature detection and (v) computational modelling of receptive
fields in the lateral geniculate nucleus (LGN) and the primary visual cortex
(V1) in biological vision.
We show that by distributing the intermediate temporal scale levels according
to a logarithmic distribution, we obtain much faster temporal response
properties (shorter temporal delays) compared to a uniform distribution.
Specifically, these kernels converge very rapidly to a limit kernel possessing
true self-similar scale-invariant properties over temporal scales, thereby
allowing for true scale invariance over variations in the temporal scale,
although the underlying temporal scale-space representation is based on a
discretized temporal scale parameter.
We show how scale-normalized temporal derivatives can be defined for these
time-causal scale-space kernels and how the composed theory can be used for
computing basic types of scale-normalized spatio-temporal derivative
expressions in a computationally efficient manner.
Comment: 39 pages, 12 figures, 5 tables; in Journal of Mathematical Imaging and Vision, published online Dec 201
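The temporal primitive described above, a cascade of first-order integrators (truncated exponential filters) with logarithmically distributed time constants, can be sketched as follows (the time constants and step size are illustrative; the known property that the mean delay of the cascade equals the sum of the time constants serves as a check):

```python
import numpy as np

# Cascade of first-order integrators (truncated exponential filters) with
# geometrically distributed time constants.
def integrate(signal, mu, dt):
    # First-order integrator: dL/dt = (f - L) / mu, explicit Euler step.
    out, L = np.empty_like(signal), 0.0
    for i, f in enumerate(signal):
        L += dt * (f - L) / mu
        out[i] = L
    return out

dt = 0.001
t = np.arange(0.0, 4.0, dt)
kernel = np.zeros_like(t)
kernel[0] = 1.0 / dt                          # unit impulse

mus = 0.02 * 2.0 ** np.arange(5)              # logarithmic distribution
for mu in mus:
    kernel = integrate(kernel, mu, dt)

# The composed time-causal kernel has mean temporal delay = sum of the mu's.
delay = np.sum(t * kernel) / np.sum(kernel)
print(delay, mus.sum())                       # both ~ 0.62 s
```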
Phase synchrony facilitates binding and segmentation of natural images in a coupled neural oscillator network
Synchronization has been suggested as a mechanism of binding distributed feature representations facilitating segmentation of visual stimuli. Here we investigate this concept based on unsupervised learning using natural visual stimuli. We simulate dual-variable neural oscillators with separate activation and phase variables. The binding of a set of neurons is coded by synchronized phase variables. The network of tangential synchronizing connections learned from the induced activations exhibits small-world properties and allows binding even over larger distances. We evaluate the resulting dynamic phase maps using segmentation masks labeled by human experts. Our simulation results show a continuously increasing phase synchrony between neurons within the labeled segmentation masks. The evaluation of the network dynamics shows that the synchrony between network nodes establishes a relational coding of the natural image inputs. This demonstrates that the concept of binding by synchrony is applicable in the context of unsupervised learning using natural visual stimuli.
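A generic Kuramoto-style sketch of binding by phase synchrony (the coupling structure, gain, and frequencies are assumptions, not the paper's learned small-world network): oscillators coupled within a "segment" phase-lock, while uncoupled groups need not share a common phase:

```python
import numpy as np

rng = np.random.default_rng(0)

# Kuramoto-style phase oscillators: units within the same "segment" are
# coupled, units in different segments are not.
n, dt, K = 10, 0.01, 2.0
labels = np.array([0] * 5 + [1] * 5)          # two stimulus segments
A = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(A, 0.0)

omega = rng.normal(1.0, 0.05, size=n)         # intrinsic frequencies
theta = rng.uniform(0.0, 2 * np.pi, size=n)   # phase variables

for _ in range(5000):
    pull = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + (K / n) * pull)

def sync(mask):
    # Phase-locking value within a group (1 = perfect synchrony).
    return abs(np.exp(1j * theta[mask]).mean())

print(sync(labels == 0), sync(labels == 1))   # both near 1
```

Within-group phase-locking values approach 1, mirroring the increasing phase synchrony the paper reports inside human-labeled segmentation masks.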
The Effects of the Quantification of Faculty Productivity: Perspectives from the Design Science Research Community
In recent years, efforts to assess faculty research productivity have focused more on the measurable quantification of academic outcomes. For benchmarking academic performance, researchers have developed different ranking and rating lists that define so-called high-quality research. While many scholars in information systems (IS) consider lists such as the Senior Scholars' basket (SSB) to provide good guidance, others who belong to less-mainstream groups in the IS discipline could perceive these lists as constraining. Thus, we analyzed the perceived impact of the SSB on IS academics working in design science research (DSR) and, in particular, how it has affected their research behavior. We found the DSR community felt a strong normative influence from the SSB. We conducted a content analysis of the SSB and found evidence that some of its journals have come to accept DSR more. We note the emergence of papers in the SSB that outline the role of theory in DSR and describe DSR methodologies, which indicates that the DSR community has rallied to describe what to expect from a DSR manuscript to the broader IS community and to guide the DSR community on how to organize papers for publication in the SSB.
Advancing models of the visual system using biologically plausible unsupervised spiking neural networks
Spikes are thought to provide a fundamental unit of computation in the nervous system. The retina is known to use the relative timing of spikes to encode visual input, whereas primary visual cortex (V1) exhibits sparse and irregular spiking activity, but what do these different spiking patterns represent about sensory stimuli? To address this question, I set out to model the retina and V1 using a biologically-realistic spiking neural network (SNN), exploring the idea that temporal prediction underlies the sensory transformation of natural inputs.
Firstly, I trained a recurrently-connected SNN of excitatory and inhibitory units to predict the sensory future in natural movies under metabolic-like constraints. This network exhibited V1-like spike statistics, simple and complex cell-like tuning, and - advancing prior studies - key physiological and tuning differences between excitatory and inhibitory neurons.
Secondly, I modified this spiking network to model the retina and explore its role in visual processing. I found that the model optimized for efficient prediction captures retina-like receptive fields and - in contrast to previous studies - various retinal phenomena, such as latency coding, response omissions, and motion-tuning properties. Notably, the temporal prediction model also more accurately predicts retinal ganglion cell responses to natural images and movies across various animal species.
Lastly, I developed a new method to accelerate the simulation and training of SNNs, obtaining a 10-50 times speedup, with performance on a par with the standard training approach on supervised classification benchmarks and for fitting electrophysiological recordings of cortical neurons.
The retina and V1 models lay the foundation for developing normative models of increasing biological realism and link sensory processing to spiking activity, suggesting that temporal prediction is an underlying function of visual processing. This is complemented by a new approach to drastically accelerate computational research using SNNs.
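The basic building block of such spiking networks, a leaky integrate-and-fire unit, can be sketched generically (parameter values are illustrative; this is not the thesis model):

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) unit, the generic building block of
# spiking neural networks.
def lif(current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for I in current:
        v += dt / tau * (I - v)        # leaky integration of input current
        if v >= v_th:                  # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = lif(np.full(1000, 1.5))       # 1 s of constant suprathreshold input
print(spikes.sum())                    # regular firing, roughly 45 spikes
```

Training networks of such units with surrogate gradients is the computational bottleneck that the acceleration method described in the abstract addresses.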