Local/global analysis of the stationary solutions of some neural field equations
Neural or cortical fields are continuous assemblies of mesoscopic models,
also called neural masses, of neural populations that are fundamental in the
modeling of macroscopic parts of the brain. Neural fields are described by
nonlinear integro-differential equations. The solutions of these equations
represent the state of activity of these populations when submitted to inputs
from neighbouring brain areas. Understanding the properties of these solutions
is essential in advancing our understanding of the brain. In this paper we
study the dependency of the stationary solutions of the neural fields equations
with respect to the stiffness of the nonlinearity and the contrast of the
external inputs. This is done by using degree theory and bifurcation theory in
the context of functional, in particular infinite dimensional, spaces. The
joint use of these two theories allows us to make new detailed predictions
about the global and local behaviours of the solutions. We also provide a
generic finite dimensional approximation of these equations which allows us to
study two models in great detail. The first model is a neural mass model of a
cortical hypercolumn of orientation sensitive neurons, the ring model. The
second model is a general neural field model where the spatial connectivity
is described by heterogeneous Gaussian-like functions. Comment: 38 pages, 9 figures
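For orientation, a generic Amari-type form of such a neural field equation (a standard sketch, not necessarily the paper's exact formulation), with the parameter lambda playing the role of the nonlinearity stiffness and I_ext that of the external input, reads:

```latex
\frac{\partial V}{\partial t}(x,t) = -V(x,t)
  + \int_{\Omega} w(x,y)\, S\!\bigl(\lambda V(y,t)\bigr)\, \mathrm{d}y
  + I_{\mathrm{ext}}(x).
```

The stationary solutions studied here are the time-independent fields satisfying the fixed-point equation V(x) = \int_{\Omega} w(x,y) S(\lambda V(y)) dy + I_ext(x), to which degree theory and bifurcation theory can then be applied.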
Illusions in the Ring Model of visual orientation selectivity
The Ring Model of orientation tuning is a dynamical model of a hypercolumn of
visual area V1 in the human neocortex that has been designed to account for the
experimentally observed orientation tuning curves by local, i.e.,
cortico-cortical computations. The tuning curves are stationary, i.e. time
independent, solutions of this dynamical model. One important assumption
underlying the Ring Model is that the LGN input to V1 is weakly tuned to the
retinal orientation and that it is the local computations in V1 that sharpen
this tuning. Because the equations that describe the Ring Model have built-in
equivariance properties in the synaptic weight distribution with respect to a
particular group acting on the retinal orientation of the stimulus, the model
in effect encodes an infinite number of tuning curves that are arbitrarily
translated with respect to each other. By using the Orbit Space Reduction
technique we rewrite the model equations in canonical form as functions of
polynomials that are invariant with respect to the action of this group. This
allows us to combine equivariant bifurcation theory with an efficient numerical
continuation method in order to compute the tuning curves predicted by the Ring
Model. Surprisingly some of these tuning curves are not tuned to the stimulus.
We interpret them as neural illusions and show numerically how they can be
induced by simple dynamical stimuli. These neural illusions are important
biological predictions of the model. If they could be observed experimentally
this would be a strong point in favour of the Ring Model. We also show how our
theoretical analysis makes it simple to specify the ranges of the model
parameters by comparing the model predictions with published experimental
observations. Comment: 33 pages, 12 figures
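To make the ring model concrete, here is a minimal numerical sketch of how its tuning curves can be computed by direct time integration. All parameter values, the sigmoid, and the names below are illustrative choices, not those of the paper:

```python
import numpy as np

# Minimal sketch of ring-model tuning curves by direct time integration.
# All parameter values and the sigmoid S are hypothetical.
N = 128                                   # orientation columns on the ring
theta = np.linspace(-np.pi / 2, np.pi / 2, N, endpoint=False)
dtheta = np.pi / N
J0, J2 = -0.5, 3.0                        # uniform inhibition + modulated excitation

def S(u):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-u))

# Equivariant connectivity: depends only on 2*(theta_i - theta_j), which is
# what makes the model encode arbitrarily translated tuning curves.
W = J0 + J2 * np.cos(2.0 * (theta[:, None] - theta[None, :]))

def tuning_curve(theta0, contrast=1.0, eps=0.3, dt=0.05, steps=2000):
    """Integrate du/dt = -u + (W @ S(u)) * dtheta/pi + I to a steady state."""
    I = contrast * (1.0 - eps + eps * np.cos(2.0 * (theta - theta0)))
    u = np.zeros(N)
    for _ in range(steps):
        u += dt * (-u + (W @ S(u)) * dtheta / np.pi + I)
    return u

u = tuning_curve(theta0=0.0)
print("tuning curve peaks at theta =", theta[np.argmax(u)])
```

The weakly tuned input (eps small) is sharpened by the recurrent connectivity, and by equivariance the peak of the steady state follows the stimulus orientation theta0.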
Bifurcation of hyperbolic planforms
Motivated by a model for the perception of textures by the visual cortex in
primates, we analyse the bifurcation of periodic patterns for nonlinear
equations describing the state of a system defined on the space of structure
tensors, when these equations are further invariant with respect to the
isometries of this space. We show that the problem reduces to a bifurcation
problem in the hyperbolic plane D (Poincaré disc). We make use of the concept
of periodic lattice in D to further reduce the problem to one on a compact
Riemann surface D/T, where T is a cocompact, torsion-free Fuchsian group. The
knowledge of the symmetry group of this surface allows us to carry out the
machinery of equivariant bifurcation theory. Solutions which generically
bifurcate are called "H-planforms", by analogy with the "planforms" introduced
for pattern formation in Euclidean space. This concept is applied to the case
of an octagonal periodic pattern, where we are able to classify all possible
H-planforms satisfying the hypotheses of the Equivariant Branching Lemma. These
patterns are however not straightforward to compute, even numerically, and in
the last section we describe a method for computation illustrated with a
selection of images of octagonal H-planforms. Comment: 26 pages, 11 figures
Large Deviations for Nonlocal Stochastic Neural Fields
We study the effect of additive noise on integro-differential neural field
equations. In particular, we analyze an Amari-type model driven by a Q-Wiener
process and focus on noise-induced transitions and escape. We argue that
proving a sharp Kramers' law for neural fields poses substantial difficulties
but that one may transfer techniques from stochastic partial differential
equations to establish a large deviation principle (LDP). Then we demonstrate
that an efficient finite-dimensional approximation of the stochastic neural
field equation can be achieved using a Galerkin method and that the resulting
finite-dimensional rate function for the LDP can have a multi-scale structure
in certain cases. These results form the starting point for an efficient
practical computation of the LDP. Our approach also provides the technical
basis for further rigorous study of noise-induced transitions in neural fields
based on Galerkin approximations. Comment: 29 pages
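As an illustration of the Galerkin strategy mentioned above, the following sketch truncates an Amari-type stochastic field on the circle to a finite number of Fourier modes and integrates the resulting finite-dimensional system with Euler-Maruyama. The kernel, mode variances, and all constants are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch only: an Amari-type field on the circle with additive
# Q-Wiener noise,  du = (-u + w * S(u)) dt + dW^Q  (w * : circular
# convolution), truncated to N Fourier modes (Galerkin) and integrated
# with Euler-Maruyama.
N = 32                                       # spatial grid / mode count
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
w = 1.0 + 2.0 * np.cos(x)                    # toy connectivity kernel
q = 0.05 / (1.0 + np.arange(N // 2)) ** 2    # decaying mode variances (trace-class Q)

def S(u):
    return np.tanh(u)

def q_noise(dt):
    """One Q-Wiener increment: independent Brownian modes, variances q[k]."""
    inc = np.zeros(N)
    for k in range(N // 2):
        xi, eta = rng.standard_normal(2)
        inc += np.sqrt(q[k] * dt) * (xi * np.cos(k * x) + eta * np.sin(k * x))
    return inc

def simulate(T=2.0, dt=1e-3):
    """Euler-Maruyama on the Galerkin-truncated stochastic field."""
    u = np.zeros(N)
    for _ in range(int(T / dt)):
        conv = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(S(u)))) / N
        u = u + dt * (-u + conv) + q_noise(dt)
    return u

u = simulate()
print(np.max(np.abs(u)))
```

The decaying variances q[k] make the driving noise trace-class, the standing assumption behind a Q-Wiener process.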
Towards a bio-inspired evaluation methodology for motion estimation models
Offering a proper evaluation methodology is essential to continued progress in modelling neural mechanisms in vision and visual information processing. Currently, the evaluation of motion estimation models lacks a proper methodology for comparing their performance against the visual system. Here, we set the basis for such a new benchmark methodology, based on human visual performance as measured in psychophysics, ocular following and neurobiology. This benchmark will enable comparisons between different kinds of models, but it will also challenge current motion estimation models and better characterize their properties with respect to visual cortex performance. To do so, we propose a database of image sequences taken from the neuroscience and psychophysics literature. In this article, we focus on two aspects of motion estimation: the dynamics of motion integration and the respective influence of 1D versus 2D cues. Since motion models may rely on different kinds of motion representations and scales, we define two general readouts based on a global motion estimation. These readouts, namely eye movements and perceived motion, will serve as a reference for comparing simulated and experimental data. We evaluate the performance of several models on this data to establish the current state of the art. The models chosen for comparison have very different properties and internal mechanisms, such as feedforward normalisation of V1 and MT processing, and recurrent feedback. As a whole, we provide here the basis for a valuable evaluation methodology to unravel the fundamental mechanisms of the visual cortex in motion perception. Our database is freely available on the web, together with scoring instructions and results, at http://www-sop.inria.fr/neuromathcomp/software/motionpsychobench
Stochastic neural field equations: A rigorous footing
We extend the theory of neural fields, which has been developed in a
deterministic framework, by considering the influence of spatio-temporal noise. The
outstanding problem that we here address is the development of a theory that
gives rigorous meaning to stochastic neural field equations, and conditions
ensuring that they are well-posed. Previous investigations in the field of
computational and mathematical neuroscience have been numerical for the most
part. Such questions have been considered for a long time in the theory of
stochastic partial differential equations, where at least two different
approaches have been developed, each having its advantages and disadvantages.
It turns out that both approaches have also been used in computational and
mathematical neuroscience, but with much less emphasis on the underlying
theory. We present a review of two existing theories and show how they can be
used to put the theory of stochastic neural fields on a rigorous footing. We
also provide general conditions on the parameters of the stochastic neural
field equations under which we guarantee that these equations are well-posed.
In so doing we relate each approach to previous work in computational and
mathematical neuroscience. We hope this will provide a reference that will pave
the way for future studies (both theoretical and applied) of these equations,
where basic questions of existence and uniqueness will no longer be a cause for
concern.
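In the Hilbert-space language used by one of the two approaches, such an equation is typically written as a stochastic evolution equation (a generic sketch; the precise operators depend on the chosen framework):

```latex
\mathrm{d}U(t) = \bigl[-U(t) + F(U(t))\bigr]\,\mathrm{d}t + B\,\mathrm{d}W(t),
\qquad U(0) = U_0,
```

where F(U)(x) = \int w(x,y) S(U(y)) dy is the nonlocal nonlinearity, W is a Q-Wiener (or cylindrical Wiener) process, and well-posedness typically follows from Lipschitz continuity of F together with suitable regularity of the noise operator B.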
Laws of large numbers and Langevin approximations for stochastic neural field equations
In this study we consider limit theorems for microscopic stochastic models of
neural fields. We show that the Wilson-Cowan equation can be obtained as the
limit in probability on compacts for a sequence of microscopic models as the
number of neuron populations distributed in space and the number of neurons per
population tend to infinity, although the latter divergence is not necessary.
This result also allows us to obtain limits for qualitatively different stochastic
convergence concepts, e.g., convergence in the mean. Further, we present a
central limit theorem for the martingale part of the microscopic models which,
suitably rescaled, converges to a centered Gaussian process with independent
increments. These two results provide the basis for presenting the neural field
Langevin equation, a stochastic differential equation taking values in a
Hilbert space, which is the infinite-dimensional analogue of the Chemical
Langevin Equation in the present setting. On a technical level we apply
recently developed law of large numbers and central limit theorems for
piecewise deterministic processes taking values in Hilbert spaces to a master
equation formulation of stochastic neuronal network models. These theorems are
valid for processes taking values in Hilbert spaces and by this are able to
incorporate spatial structures of the underlying model. Comment: 38 pages
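The flavour of such a law of large numbers can be seen in a toy version (a hypothetical single-population jump process, not the paper's model): as the number of neurons n grows, the active fraction k/n converges to the solution of a Wilson-Cowan-type ODE.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch with hypothetical rates: a single population of n neurons,
# k of them active.  Activation occurs at rate (n - k) * S(w*k/n + I),
# deactivation at rate k.  As n -> infinity the active fraction k/n
# converges in probability to the solution of the Wilson-Cowan-type ODE
#     a' = -a + (1 - a) * S(w*a + I).
def S(u):
    return 1.0 / (1.0 + np.exp(-u))

def gillespie(n, T=10.0, w=2.0, I=-0.5):
    """Simulate the jump process up to time T; return the final fraction k/n."""
    k, t = 0, 0.0
    while t < T:
        up = (n - k) * S(w * k / n + I)
        down = float(k)
        total = up + down
        t += rng.exponential(1.0 / total)
        k += 1 if rng.random() < up / total else -1
    return k / n

def ode_limit(T=10.0, dt=1e-3, w=2.0, I=-0.5):
    """Euler integration of the limiting ODE from a(0) = 0."""
    a = 0.0
    for _ in range(int(T / dt)):
        a += dt * (-a + (1.0 - a) * S(w * a + I))
    return a

diff = abs(gillespie(n=10000) - ode_limit())
print(diff)
```

The martingale fluctuations around this limit are of order 1/sqrt(n), which is the scale at which the central limit theorem and the Langevin approximation operate.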
Bifurcation analysis applied to a model of motion integration with a multistable stimulus
A computational study into the motion perception dynamics of a multistable psychophysics stimulus is presented. A diagonally drifting grating viewed through a square aperture can be perceived as moving in the actual grating direction or in line with the aperture edges (horizontally or vertically). The different percepts are the product of an interplay between ambiguous contour cues and specific terminator cues. We present a dynamical model of motion integration that performs direction selection for such a stimulus and link the different percepts to coexisting steady states of the underlying equations. We apply the powerful tools of bifurcation analysis and numerical continuation to study the changes to the model's solution structure under the variation of parameters. Indeed, we apply these tools in a systematic way, taking into account biological and mathematical constraints, in order to fix the model parameters. A region of parameter space is identified for which the model reproduces the qualitative behaviour observed in experiments. The temporal dynamics of motion integration are studied within this region; specifically, the effect of varying the stimulus gain is studied, which allows qualitative predictions to be made.
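The link between percepts and coexisting steady states can be caricatured on a scalar toy equation (entirely hypothetical, far simpler than the motion-integration model): Newton's method from a sweep of initial guesses recovers the coexisting steady states at a fixed parameter value, the basic building block of a continuation code.

```python
import numpy as np

# Toy caricature (hypothetical scalar model): steady states of
#     u' = -u + S(lam * (u - 0.5)),   S the logistic sigmoid.
# For stiff enough lam the equation is multistable; Newton's method from a
# sweep of initial guesses finds all coexisting steady states, which a full
# continuation code would then track as lam varies.
def S(v):
    return 1.0 / (1.0 + np.exp(-v))

def newton(u, lam, tol=1e-12, itmax=100):
    """Newton iteration for f(u) = -u + S(lam*(u - 0.5)) = 0."""
    for _ in range(itmax):
        e = np.exp(-lam * (u - 0.5))
        f = -u + 1.0 / (1.0 + e)
        df = -1.0 + lam * e / (1.0 + e) ** 2
        step = f / df
        u -= step
        if abs(step) < tol:
            break
    return u

lam = 10.0
roots = sorted({round(newton(g, lam), 8) for g in np.linspace(0.0, 1.0, 21)})
print(roots)   # two stable states (the percepts) plus an unstable separator
```

A full continuation code would additionally track each root as lam varies and detect the folds where states appear and disappear, which is how the region of multistability in parameter space is mapped out.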