Learning long-range spatial dependencies with horizontal gated-recurrent units
Progress in deep learning has spawned great successes in many engineering
applications. As a prime example, convolutional neural networks, a type of
feedforward neural networks, are now approaching -- and sometimes even
surpassing -- human accuracy on a variety of visual recognition tasks. Here,
however, we show that these neural networks and their recent extensions
struggle in recognition tasks where co-dependent visual features must be
detected over long spatial ranges. We introduce the horizontal gated-recurrent
unit (hGRU) to learn intrinsic horizontal connections -- both within and across
feature columns. We demonstrate that a single hGRU layer matches or outperforms
all tested feedforward hierarchical baselines including state-of-the-art
architectures which have orders of magnitude more free parameters. We further
discuss the biological plausibility of the hGRU in comparison to anatomical
data from the visual cortex as well as human behavioral data on a classic
contour detection task.
Comment: Published at NeurIPS 2018
https://papers.nips.cc/paper/7300-learning-long-range-spatial-dependencies-with-horizontal-gated-recurrent-unit
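The published hGRU interleaves separate horizontal inhibition and excitation stages with gates learned end-to-end; the sketch below is only a simplified, hypothetical illustration of the core idea (not the paper's architecture): a hidden state at each spatial location is updated through a gate, with "horizontal" input gathered from neighbouring units by a small convolution, here with random weights.

```python
import numpy as np

def corr2d_same(x, k):
    """'Same'-padded 2-D cross-correlation, single channel (naive loop)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hgru_step(h, x, w_gate, w_horiz):
    """One recurrent step: a gate mixes the previous state with a
    candidate driven by feedforward input x plus horizontal input
    collected from neighbouring units via convolution."""
    horiz = corr2d_same(h, w_horiz)       # horizontal connections
    g = sigmoid(corr2d_same(h, w_gate))   # update gate in (0, 1)
    return (1.0 - g) * h + g * np.tanh(x + horiz)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))         # feedforward drive
h = np.zeros((16, 16))                    # hidden state
w_gate = rng.standard_normal((3, 3)) * 0.1
w_horiz = rng.standard_normal((3, 3)) * 0.1
for _ in range(8):                        # unroll the recurrence in time
    h = hgru_step(h, x, w_gate, w_horiz)
```

Because each step is a convex combination of the previous state and a tanh-bounded candidate, the state stays in [-1, 1]; repeated steps let distant locations influence one another despite the small 3x3 kernel, which is the long-range mechanism the abstract describes.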
A geometric model of multi-scale orientation preference maps via Gabor functions
In this paper we present a new model for the generation of orientation
preference maps in the primary visual cortex (V1), considering both orientation
and scale features. First we undertake to model the functional architecture of
V1 by interpreting it as a principal fiber bundle over the 2-dimensional
retinal plane by introducing intrinsic variables orientation and scale. The
intrinsic variables constitute a fiber on each point of the retinal plane and
the set of receptive profiles of simple cells is located on the fiber. Each
receptive profile on the fiber is mathematically interpreted as a rotated Gabor
function derived from an uncertainty principle. The visual stimulus is lifted
in a 4-dimensional space, characterized by coordinate variables, position,
orientation and scale, through a linear filtering of the stimulus with Gabor
functions. Orientation preference maps are then obtained by mapping the
orientation value found from the lifting of a noise stimulus onto the
2-dimensional retinal plane. This corresponds to a Bargmann transform in the
reducible representation of the group. A comparison will be provided with a
previous model based on the Bargmann transform in the irreducible
representation of the group, outlining that the new model is more
physiologically motivated. Then we present
simulation results related to the construction of the orientation preference
map by using Gabor filters with different scales and compare those results to
the relevant neurophysiological findings in the literature.
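The lifting described above can be sketched numerically: filter a noise stimulus with rotated Gabor functions over a discrete set of orientations, then map back to each retinal point the orientation of maximal response. The sketch below is a minimal single-scale version; the parameter values and the `orientation_map` helper are illustrative choices, not taken from the paper.

```python
import numpy as np

def gabor(size, theta, sigma, freq):
    """Rotated Gabor receptive profile (real part)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)   # rotated coordinate
    env = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def orientation_map(stimulus, thetas, sigma=3.0, freq=0.2, size=15):
    """Lift the stimulus over orientations by Gabor filtering, then
    assign each retinal point the orientation of maximal |response|."""
    pad = size // 2
    sp = np.pad(stimulus, pad)
    responses = []
    for th in thetas:
        k = gabor(size, th, sigma, freq)
        r = np.zeros_like(stimulus, dtype=float)
        for i in range(stimulus.shape[0]):           # naive correlation
            for j in range(stimulus.shape[1]):
                r[i, j] = np.sum(sp[i:i + size, j:j + size] * k)
        responses.append(np.abs(r))
    responses = np.stack(responses)                  # (n_thetas, H, W)
    return thetas[np.argmax(responses, axis=0)]      # back to retinal plane

rng = np.random.default_rng(1)
noise = rng.standard_normal((32, 32))                # noise stimulus
thetas = np.linspace(0.0, np.pi, 8, endpoint=False)
opm = orientation_map(noise, thetas)
```

Running this with Gabor banks at several values of `sigma` and `freq`, as the paper does with different scales, yields a family of maps whose structure can be compared across scale.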
From receptive profiles to a metric model of V1
In this work we show how to construct connectivity kernels induced by the receptive profiles of simple cells of the primary visual cortex (V1). These kernels are directly defined by the shape of such profiles: this provides a metric model for the functional architecture of V1, whose global geometry is determined by the reciprocal interactions between local elements. Our construction adapts to any bank of filters chosen to represent a set of receptive profiles, since it does not require any structure on the parameterization of the family. The connectivity kernel that we define carries a geometrical structure consistent with the well-known properties of long-range horizontal connections in V1, and it is compatible with the perceptual rules synthesized by the concept of association field. These characteristics are still present when the kernel is constructed from a bank of filters arising from an unsupervised learning algorithm.
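One simple way a kernel can be "directly defined by the shape of such profiles" is as the normalised inner product between filters in the bank, which requires no structure on the parameterization. The sketch below illustrates that idea on a Gabor bank; the inner-product choice is an assumption for illustration, not necessarily the construction used in the paper.

```python
import numpy as np

def gabor(size, theta, sigma=3.0, freq=0.2):
    """Rotated Gabor receptive profile (real part)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    env = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def connectivity_kernel(filters):
    """K[i, j] = normalised inner product between receptive profiles
    i and j; only the filter shapes enter, not their parameters."""
    flat = np.stack([f.ravel() for f in filters])
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    return flat @ flat.T

# Any filter bank works here, e.g. one learned without supervision;
# a parametric Gabor bank is used only for concreteness.
thetas = np.linspace(0.0, np.pi, 12, endpoint=False)
bank = [gabor(15, th) for th in thetas]
K = connectivity_kernel(bank)
```

By construction the kernel is symmetric with unit diagonal, and nearby orientations receive larger weights, qualitatively matching the orientation-specificity of long-range horizontal connections.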
A Neural Model of Motion Processing and Visual Navigation by Cortical Area MST
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually-guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals, and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves, and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
Defense Research Projects Agency (N00014-92-J-4015); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100, N0014-94-I-0597); Air Force Office of Scientific Research (F49620-92-J-0334)
Dynamic Construction of Stimulus Values in the Ventromedial Prefrontal Cortex
Signals representing the value assigned to stimuli at the time of choice have been repeatedly observed in ventromedial prefrontal cortex (vmPFC). Yet it remains unknown how these value representations are computed from sensory and memory representations in more posterior brain regions. We used electroencephalography (EEG) while subjects evaluated appetitive and aversive food items to study how event-related responses modulated by stimulus value evolve over time. We found that value-related activity shifted from posterior to anterior, and from parietal to central to frontal sensors, across three major time windows after stimulus onset: 150–250 ms, 400–550 ms, and 700–800 ms. Exploratory localization of the EEG signal revealed a shifting network of activity moving from sensory and memory structures to areas associated with value coding, with stimulus value activity localized to vmPFC only from 400 ms onwards. Consistent with these results, functional connectivity analyses also showed a causal flow of information from temporal cortex to vmPFC. Thus, although value signals are present as early as 150 ms after stimulus onset, the value signals in vmPFC appear relatively late in the choice process, and seem to reflect the integration of incoming information from sensory and memory related regions.