The constitution of visual perceptual units in the functional architecture of V1
The scope of this paper is to consider a mean field neural model which takes
into account the functional neurogeometry of the visual cortex, modelled as a
group of rotations and translations. The model generalizes well-known results
of Bressloff and Cowan which, in the absence of input, account for
hallucination patterns. The main result of our study consists in showing that,
in the presence of a visual input, the eigenmodes of the linearized operator
which become stable represent the perceptual units present in the image. The
result is strictly related to dimensionality reduction and clustering problems.
Local and global gestalt laws: A neurally based spectral approach
A mathematical model of figure-ground articulation is presented, taking into
account both local and global gestalt laws. The model is compatible with the
functional architecture of the primary visual cortex (V1). Particularly the
local gestalt law of good continuity is described by means of suitable
connectivity kernels, that are derived from Lie group theory and are neurally
implemented in long range connectivity in V1. Different kernels are compatible
with the geometric structure of cortical connectivity and they are derived as
the fundamental solutions of the Fokker Planck, the Sub-Riemannian Laplacian
and the isotropic Laplacian equations. The kernels are used to construct
matrices of connectivity among the features present in a visual stimulus.
Global gestalt constraints are then introduced in terms of spectral analysis of
the connectivity matrix, showing that this processing can be cortically
implemented in V1 by mean field neural equations. This analysis performs
grouping of local features and individuates perceptual units with the highest
saliency. Numerical simulations are performed and results are obtained by
applying the technique to a number of stimuli.
Comment: submitted to Neural Computation
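As a toy illustration of the spectral grouping step described above, consider the following minimal Python sketch. A simple Gaussian affinity in position and orientation stands in here for the Fokker-Planck and sub-Riemannian kernels of the paper, and the leading eigenvector of the resulting connectivity matrix individuates the most salient perceptual unit. All parameter values and the membership threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def affinity(features, sigma_pos=1.0, sigma_theta=0.3):
    """Toy connectivity matrix among local features (x, y, theta).

    A Gaussian in position and orientation difference stands in for the
    Fokker-Planck / sub-Riemannian connectivity kernels of the model.
    """
    n = len(features)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dp = np.hypot(features[i][0] - features[j][0],
                          features[i][1] - features[j][1])
            dt = features[i][2] - features[j][2]
            dt = np.arctan2(np.sin(dt), np.cos(dt))  # wrap angle difference
            A[i, j] = np.exp(-dp**2 / (2 * sigma_pos**2)
                             - dt**2 / (2 * sigma_theta**2))
    return A

def salient_group(features):
    """Leading eigenvector of the affinity matrix marks the most salient
    perceptual unit: large entries correspond to grouped features."""
    A = affinity(features)
    w, v = np.linalg.eigh(A)           # eigenvalues in ascending order
    lead = np.abs(v[:, -1])            # eigenvector of largest eigenvalue
    return lead > lead.max() / 10      # crude illustrative threshold

# A smooth contour of co-aligned features plus one isolated outlier:
contour = [(float(k), 0.0, 0.0) for k in range(5)]
outlier = [(20.0, 20.0, 1.2)]
mask = salient_group(contour + outlier)
```

The five aligned features are grouped together by the leading eigenvector, while the isolated outlier is excluded; in the paper this role is played by the cortically implementable mean field dynamics rather than an explicit eigendecomposition.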
Cortical spatio-temporal dimensionality reduction for visual grouping
The visual systems of many mammals, including humans, are able to integrate
the geometric information of visual stimuli and to perform cognitive tasks
already at the first stages of the cortical processing. This is thought to be
the result of a combination of mechanisms, which include feature extraction at
the single-cell level and geometric processing by means of cell connectivity. We
present a geometric model of such connectivities in the space of detected
features associated to spatio-temporal visual stimuli, and show how they can be
used to obtain low-level object segmentation. The main idea is that of defining
a spectral clustering procedure with anisotropic affinities over datasets
consisting of embeddings of the visual stimuli into higher dimensional spaces.
The neural plausibility of the proposed arguments will be discussed.
Geometry and dimensionality reduction of feature spaces in primary visual cortex
Some geometric properties of the wavelet analysis performed by visual neurons
are discussed and compared with experimental data. In particular, several
relationships between the cortical morphologies and the parametric dependencies
of extracted features are formalized and considered from a harmonic analysis
point of view.
A geometric model of multi-scale orientation preference maps via Gabor functions
In this paper we present a new model for the generation of orientation
preference maps in the primary visual cortex (V1), considering both orientation
and scale features. First we model the functional architecture of V1 by
interpreting it as a principal fiber bundle over the 2-dimensional retinal
plane, introducing the intrinsic variables of orientation and scale. The
intrinsic variables constitute a fiber on each point of the retinal plane and
the set of receptive profiles of simple cells is located on the fiber. Each
receptive profile on the fiber is mathematically interpreted as a rotated Gabor
function derived from an uncertainty principle. The visual stimulus is lifted
into a 4-dimensional space, characterized by the coordinate variables of
position, orientation and scale, through a linear filtering of the stimulus
with Gabor functions. Orientation preference maps are then obtained by mapping the
functions. Orientation preference maps are then obtained by mapping the
orientation value found from the lifting of a noise stimulus onto the
2-dimensional retinal plane. This corresponds to a Bargmann transform in the
reducible representation of the group. A comparison will be provided with a
previous model based on the Bargmann transform in the irreducible
representation of the group, showing that the new model is more
physiologically motivated. Then we present
simulation results related to the construction of the orientation preference
map by using Gabor filters with different scales and compare those results to
the relevant neurophysiological findings in the literature.
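To make the lifting procedure concrete, here is a minimal Python sketch (not the paper's implementation): a bank of rotated Gabor filters is applied to a noise stimulus, and each retinal position is assigned the orientation of maximal response, yielding a toy orientation preference map. Filter size, sigma and frequency are illustrative assumptions.

```python
import numpy as np

def gabor(size, theta, sigma=2.0, freq=0.25):
    """Rotated Gabor receptive profile (even/cosine part), on a
    size x size grid; parameter values are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def orientation_map(image, n_orient=8, size=9):
    """Toy orientation preference map: each position gets the orientation
    whose Gabor filter gives the strongest (absolute) response."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    responses = []
    for th in thetas:
        g = gabor(size, th)
        # naive 'valid' 2-D correlation (avoids a SciPy dependency)
        h, w = image.shape
        out = np.zeros((h - size + 1, w - size + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + size, j:j + size] * g)
        responses.append(np.abs(out))
    return thetas[np.argmax(np.stack(responses), axis=0)]

rng = np.random.default_rng(0)
noise = rng.standard_normal((32, 32))     # noise stimulus, as in the paper
omap = orientation_map(noise)             # map of preferred orientations
```

The argmax over filter responses is a crude stand-in for reading off the orientation value from the lifted stimulus; the paper's construction via the Bargmann transform is considerably richer.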
The Self-Organization of Speech Sounds
The speech code is a vehicle of language: it defines
a set of forms used by a community to carry information.
Such a code is necessary to support the linguistic
interactions that allow humans to communicate.
How then may a speech code be formed prior to the
existence of linguistic interactions?
Moreover, the human speech code is discrete and compositional,
shared by all the individuals of a community but different
across communities, and phoneme inventories are characterized by
statistical regularities. How can a speech code with these properties form?
We try to approach these questions in the paper,
using the "methodology of the artificial". We
build a society of artificial agents, and detail a mechanism that
shows the formation of a discrete speech code without pre-supposing
the existence of linguistic capacities or of coordinated interactions.
The mechanism is based on a low-level model of
sensory-motor interactions. We show that the integration of certain very
simple and non language-specific neural devices
leads to the formation of a speech code that
has properties similar to the human speech code.
This result relies on the self-organizing properties of a generic
coupling between perception and production
within agents, and on the interactions between agents.
The artificial system helps us to develop better intuitions on how speech
might have appeared, by showing how self-organization
might have helped natural selection to find speech.
Emergence of Lie Symmetries in Functional Architectures Learned by CNNs
In this paper we study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images. Our architecture is built in such a way as to mimic some properties of the early stages of biological visual systems. In particular, it contains a pre-filtering step l(0) defined in analogy with the Lateral Geniculate Nucleus (LGN). Moreover, the first convolutional layer is equipped with lateral connections defined as a propagation driven by a learned connectivity kernel, in analogy with the horizontal connectivity of the primary visual cortex (V1). We first show that the l(0) filter evolves during training to reach a radially symmetric pattern well approximated by a Laplacian of Gaussian (LoG), which is a well-known model of the receptive profiles of LGN cells. In line with previous works on CNNs, the learned convolutional filters in the first layer can be approximated by Gabor functions, in agreement with well-established models for the receptive profiles of V1 simple cells. Here, we focus on the geometric properties of the learned lateral connectivity kernel of this layer, showing the emergence of orientation selectivity with respect to the tuning of the learned filters. We also examine the short-range connectivity and association fields induced by this connectivity kernel, and show qualitative and quantitative comparisons with known group-based models of V1 horizontal connections. These geometric properties arise spontaneously during the training of the CNN architecture, analogously to the emergence of symmetries in visual systems thanks to brain plasticity driven by external stimuli.
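For reference, the Laplacian-of-Gaussian profile that the pre-filtering step is reported to converge to can be written down directly. The sketch below (with an illustrative sigma, not a value from the paper) builds the radially symmetric, zero-mean center-surround kernel.

```python
import numpy as np

def log_kernel(size, sigma=1.4):
    """2-D Laplacian of Gaussian on a size x size grid, up to an overall
    normalization: (r^2 - 2*sigma^2)/sigma^4 * exp(-r^2 / (2*sigma^2)).
    sigma is an illustrative choice, not the paper's fitted value."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()    # enforce zero mean, as for a band-pass profile

k = log_kernel(7)
```

The kernel has a negative center and positive surround and is symmetric under reflection and transposition, which is the radial symmetry the learned l(0) filter is reported to develop.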