44,792 research outputs found
How Does Our Visual System Achieve Shift and Size Invariance?
The question of shift and size invariance in the primate
visual system is discussed. After a short review of the relevant neurobiology and psychophysics, a more detailed analysis of computational models is given. The two main types of networks considered are the dynamic routing circuit model and invariant feature networks, such as the neocognitron. Some specific open questions in the context of these models are raised, and possible solutions are discussed.
A feedback model of perceptual learning and categorisation
Top-down (feedback) influences are known to have significant effects on visual information processing. Such influences are also likely to affect perceptual learning. This article employs a computational model of the cortical region interactions underlying visual perception to investigate possible influences of top-down information on learning. The results suggest that feedback could bias the way in which perceptual stimuli are categorised and could also facilitate the learning of subordinate-level representations suitable for object identification and perceptual expertise.
A feedback model of visual attention
Feedback connections are a prominent feature of cortical anatomy and are likely
to have a significant functional role in neural information processing. We present
a neural network model of cortical feedback that successfully simulates
neurophysiological data associated with attention. In this domain our model can
be considered a more detailed, and biologically plausible, implementation of the
biased competition model of attention. However, our model is more general as it
can also explain a variety of other top-down processes in vision, such as
figure/ground segmentation and contextual cueing. This model thus suggests that
a common mechanism, involving cortical feedback pathways, is responsible for a
range of phenomena and provides a unified account of currently disparate areas
of research.
Distributed Hypothesis Testing, Attention Shifts and Transmitter Dynamics During the Self-Organization of Brain Recognition Codes
BP (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-90-00530); Air Force Office of Scientific Research (90-0175, 90-0128); Army Research Office (DAAL-03-88-K0088)
Attention Mechanisms for Object Recognition with Event-Based Cameras
Event-based cameras are neuromorphic sensors capable of efficiently encoding
visual information in the form of sparse sequences of events. Being
biologically inspired, they are commonly used to exploit some of the
computational and power consumption benefits of biological vision. In this
paper we focus on a specific feature of vision: visual attention. We propose
two attentive models for event-based vision: an algorithm that tracks event
activity within the field of view to locate regions of interest, and a
fully-differentiable attention procedure based on the DRAW neural model. We
highlight the strengths and weaknesses of the proposed methods on four
datasets, the Shifted N-MNIST, Shifted MNIST-DVS, CIFAR10-DVS and N-Caltech101
collections, using the Phased LSTM recognition network as a baseline reference
model, obtaining improvements in terms of both translation and scale invariance.
Comment: WACV2019 camera-ready submission
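The activity-tracking attention described in this abstract could, in spirit, be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `locate_roi`, the sensor resolution, window size, and decay factor are all assumptions made for the example.

```python
import numpy as np

def locate_roi(events, sensor_shape=(64, 64), window=16, decay=0.9):
    """Accumulate events into a leaky spatial activity map and return the
    top-left corner of the densest window x window region of interest.
    events: iterable of (x, y, timestamp, polarity) tuples."""
    activity = np.zeros(sensor_shape)
    for x, y, _t, _p in events:
        activity *= decay              # older events fade away
        activity[y, x] += 1.0          # each new event reinforces its pixel
    # Sum activity over every window x window patch via 2-D cumulative sums.
    c = np.pad(activity.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    sums = (c[window:, window:] - c[:-window, window:]
            - c[window:, :-window] + c[:-window, :-window])
    iy, ix = np.unravel_index(np.argmax(sums), sums.shape)
    return ix, iy  # top-left corner of the region of interest

# Example: a burst of events clustered near (40, 20) attracts the ROI.
rng = np.random.default_rng(0)
evts = [(40 + rng.integers(-3, 4), 20 + rng.integers(-3, 4), t, 1)
        for t in range(200)]
x, y = locate_roi(evts)
```

Any window that misses the event cluster sums to zero, so the returned region must overlap the cluster; the exponential decay keeps the map responsive to the most recent activity, which is the usual motivation for leaky accumulation with event-based sensors.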