Power spectrum and diffusion of the Amari neural field
We study the power spectrum of a space-time dependent neural field which
describes the average membrane potential of neurons in a single layer. This
neural field is modelled by a dissipative integro-differential equation, the
so-called Amari equation. By considering a small perturbation with respect to a
stationary and uniform configuration of the neural field we derive a linearized
equation which is solved for a generic external stimulus by using the Fourier
transform into the wavevector-frequency domain, finding an analytical formula for
the power spectrum of the neural field. In addition, after proving that for
large wavelengths the linearized Amari equation is equivalent to a diffusion
equation which admits space-time dependent analytical solutions, we take into
account the nonlinearity of the Amari equation. We find that for large
wavelengths a weak nonlinearity in the Amari equation gives rise to a
reaction-diffusion equation which can be formally derived from a neural action
functional by introducing a dual neural field. For some initial conditions, we
discuss analytical solutions of this reaction-diffusion equation.
Comment: 8 pages, 2 figures; improved version with inclusion of the
reaction-diffusion equation and the dual neural field. To be published in the
open access journal Symmetry.
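The linearization step can be made concrete. A minimal numpy sketch of the resulting analytical formula, assuming an illustrative normalized Gaussian coupling kernel, a gain of 0.5 at the uniform state, and a spatio-temporally white stimulus (all three are assumptions of this sketch, not choices taken from the paper):

```python
import numpy as np

# Linearized Amari equation (illustrative form):
#   du/dt = -u + gamma * (w conv u)(x, t) + I(x, t),
# with gamma = f'(u0) the gain at the uniform state u0 and w a coupling kernel.
# Fourier transforming in x and t gives
#   u_hat(k, om) = I_hat(k, om) / (1 - gamma * w_hat(k) - i om),
# so for a unit-amplitude white stimulus the power spectrum is
#   S(k, om) = 1 / ((1 - gamma * w_hat(k))**2 + om**2).

def power_spectrum(k, om, gamma=0.5, sigma=1.0):
    """Power spectrum of the linearized field for white input.

    Assumes a normalized Gaussian coupling kernel, whose Fourier
    transform is w_hat(k) = exp(-sigma**2 * k**2 / 2).
    """
    w_hat = np.exp(-0.5 * (sigma * k) ** 2)
    return 1.0 / ((1.0 - gamma * w_hat) ** 2 + om ** 2)

k = np.linspace(-5, 5, 201)
omega = np.linspace(-5, 5, 201)
K, OM = np.meshgrid(k, omega)
S = power_spectrum(K, OM)

# The spectrum peaks at k = 0, omega = 0: long wavelengths dominate,
# consistent with the diffusive behaviour in the large-wavelength limit.
i, j = np.unravel_index(np.argmax(S), S.shape)
print(K[i, j], OM[i, j], S.max())
```

With gamma = 0.5, the peak value is 1/(1 - 0.5)^2 = 4 at the origin of the wavevector-frequency plane.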
Image generation with shortest path diffusion
The field of image generation has made significant progress thanks to the
introduction of Diffusion Models, which learn to progressively reverse a given
image corruption. Recently, a few studies introduced alternative ways of
corrupting images in Diffusion Models, with an emphasis on blurring. However,
these studies are purely empirical, and it remains unclear what the optimal
procedure for corrupting an image is. In this work, we hypothesize that the
optimal procedure minimizes the length of the path taken when corrupting an
image towards a given final state. We propose the Fisher metric for the path
length, measured in the space of probability distributions. We compute the
shortest path according to this metric, and we show that it corresponds to a
combination of image sharpening, rather than blurring, and noise deblurring.
While the corruption was chosen arbitrarily in previous work, our Shortest Path
Diffusion (SPD) uniquely determines the entire spatiotemporal structure of the
corruption. We show that SPD improves on strong baselines without any
hyperparameter tuning, and outperforms all previous Diffusion Models based on
image blurring. Furthermore, any small deviation from the shortest path leads
to worse performance, suggesting that SPD provides the optimal procedure to
corrupt images. Our work sheds new light on observations made in recent works
and provides a new approach to improve diffusion models on images and other
types of data.
Comment: AD and SF contributed equally.
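The idea of measuring a corruption path in the space of probability distributions can be illustrated in a toy setting. A minimal sketch, not the paper's SPD construction: treat each corrupted state as a univariate Gaussian N(mu_t, sigma_t^2), use the Fisher-Rao metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2 for this family, and compare the discretized lengths of two schedules sharing the same endpoints:

```python
import numpy as np

def fisher_rao_length(mu, sigma):
    """Discretized Fisher-Rao length of a curve of Gaussians N(mu_t, sigma_t^2).

    For univariate Gaussians the Fisher information metric is
        ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2.
    """
    dmu = np.diff(mu)
    dsig = np.diff(sigma)
    sig_mid = 0.5 * (sigma[1:] + sigma[:-1])
    return np.sum(np.sqrt(dmu**2 + 2.0 * dsig**2) / sig_mid)

t = np.linspace(0.0, 1.0, 2001)

# Two corruption schedules with the same endpoints:
# N(1, 0.1^2) at t = 0  ->  N(0, 1) at t = 1 (signal decays, noise grows).
mu_lin = 1.0 - t                 # schedule 1: linear in both parameters
sig_lin = 0.1 + 0.9 * t
mu_curve = (1.0 - t) ** 2        # schedule 2: a detour, same endpoints
sig_curve = 0.1 + 0.9 * t ** 2

L1 = fisher_rao_length(mu_lin, sig_lin)
L2 = fisher_rao_length(mu_curve, sig_curve)
print(L1, L2)
```

The detour moves the mean quickly while the variance is still small, which is expensive under this metric, so it is the longer path: different corruption schedules between the same two distributions have genuinely different lengths, which is what makes a shortest path well defined.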
Noise-induced behaviors in neural mean field dynamics
The collective behavior of cortical neurons is strongly affected by the
presence of noise at the level of individual cells. In order to study these
phenomena in large-scale assemblies of neurons, we consider networks of
firing-rate neurons with linear intrinsic dynamics and nonlinear coupling,
belonging to a few types of cell populations and receiving noisy currents.
Asymptotic equations as the number of neurons tends to infinity (mean field
equations) are rigorously derived based on a probabilistic approach. These
equations are implicit in the probability distribution of the solutions, which
generally makes their direct analysis difficult. However, in our case, the
solutions are Gaussian, and their moments satisfy a closed system of nonlinear
ordinary differential equations (ODEs), which are much easier to study than the
original stochastic network equations, and the statistics of the empirical
process uniformly converge towards the solutions of these ODEs. Based on this
description, we analytically and numerically study the influence of noise on
the collective behaviors, and compare these asymptotic regimes to simulations
of the network. We observe that the mean field equations provide an accurate
description of the solutions of the network equations for networks as small as
a few hundred neurons. In particular, we observe that the level of
noise in the system qualitatively modifies its collective behavior, producing
for instance synchronized oscillations of the whole network, desynchronization
of oscillating regimes, and stabilization or destabilization of stationary
solutions. These results shed new light on the role of noise in shaping the
collective dynamics of neurons, and give us clues for understanding similar
phenomena observed in biological networks.
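The Gaussian moment closure can be illustrated on a caricature of such a network: all-to-all coupled units with linear intrinsic dynamics, an erf coupling nonlinearity, additive noise, and a constant input (these model choices are illustrative, not the paper's exact equations). The closure rests on the Gaussian identity E[erf(X)] = erf(m / sqrt(1 + 2v)) for X ~ N(m, v):

```python
import math

import numpy as np

# Network: dx_i = (-x_i + (J/N) sum_j erf(x_j) + I0) dt + sigma dW_i.
# In the mean-field limit the solutions are Gaussian, with moments obeying
#   dm/dt = -m + J erf(m / sqrt(1 + 2 v)) + I0,   dv/dt = -2 v + sigma**2.

rng = np.random.default_rng(1)
verf = np.vectorize(math.erf)        # elementwise erf without scipy
N, J, I0, sigma = 2000, 0.8, 0.5, 0.3
dt, steps = 0.01, 500

x = np.zeros(N)                      # network state (Euler-Maruyama)
m, v = 0.0, 0.0                      # mean-field moments (Euler)
for _ in range(steps):
    x += dt * (-x + J * verf(x).mean() + I0) \
         + sigma * math.sqrt(dt) * rng.standard_normal(N)
    m += dt * (-m + J * math.erf(m / math.sqrt(1 + 2 * v)) + I0)
    v += dt * (-2 * v + sigma**2)

# Empirical network moments track the moment ODEs even for moderate N.
print(abs(x.mean() - m), abs(x.var() - v))
```

The variance equation decouples from the mean here because the coupling term becomes deterministic in the limit; its stationary value is sigma^2 / 2, as for an Ornstein-Uhlenbeck process.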
Visualizing probabilistic models: Intensive Principal Component Analysis
Unsupervised learning makes manifest the underlying structure of data without
curated training and specific problem definitions. However, the inference of
relationships between data points is frustrated by the `curse of
dimensionality' in high dimensions. Inspired by replica theory from statistical
mechanics, we consider replicas of the system to tune the dimensionality and
take the limit as the number of replicas goes to zero. The result is the
intensive embedding, which is not only isometric (preserving local distances)
but allows global structure to be more transparently visualized. We develop the
Intensive Principal Component Analysis (InPCA) and demonstrate clear
improvements in visualizations of the Ising model of magnetic spins, a neural
network, and the dark energy cold dark matter ({\Lambda}CDM) model as applied
to the Cosmic Microwave Background.
Comment: 6 pages, 5 figures.
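The embedding pipeline can be sketched on a toy family of models. A minimal sketch, assuming a Bhattacharyya-based intensive distance D_ij = -log BC(p_i, p_j) between coin-flip (Bernoulli) models, followed by the classical multidimensional-scaling step (double-centering and eigendecomposition); this illustrates the pipeline only, not the paper's zero-replica derivation:

```python
import numpy as np

# Toy model manifold: n independent coin flips with bias theta.
# The Bhattacharyya coefficient between two biases factorizes over flips,
#   BC = (sqrt(t1*t2) + sqrt((1 - t1)*(1 - t2)))**n,
# and D = -log BC grows linearly in n, i.e. it is extensive; dividing by n
# (or taking n per-flip) makes it intensive.
n = 20
thetas = np.linspace(0.05, 0.95, 30)
t1, t2 = np.meshgrid(thetas, thetas)
bc = (np.sqrt(t1 * t2) + np.sqrt((1 - t1) * (1 - t2))) ** n
D = -np.log(bc)                      # pairwise distance matrix, zero diagonal

# Classical MDS: double-center the distance matrix and eigendecompose.
m = len(thetas)
Jc = np.eye(m) - np.ones((m, m)) / m
Wc = -0.5 * Jc @ D @ Jc
evals, evecs = np.linalg.eigh(Wc)
order = np.argsort(-np.abs(evals))   # rank components by magnitude; negative
evals, evecs = evals[order], evecs[:, order]  # eigenvalues flag non-Euclidean
coords = evecs[:, :2] * np.sqrt(np.abs(evals[:2]))  # 2D visualization coords

print(coords.shape)
```

Negative eigenvalues of the centered matrix are the signature of the Minkowski-like geometry of the intensive embedding, which is why magnitudes rather than raw eigenvalues are used to rank components.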
Linear response for spiking neuronal networks with unbounded memory
We establish a general linear response relation for spiking neuronal
networks, based on chains with unbounded memory. This relation allows us to
predict the influence of weak-amplitude, time-dependent external stimuli on
spatio-temporal spike correlations, from the spontaneous statistics (without
stimulus) in a general context where the memory in spike dynamics can extend
arbitrarily far in the past. Using this approach, we show how linear response
is explicitly related to neuronal dynamics with an example, the gIF model,
introduced by M. Rudolph and A. Destexhe. This example illustrates the
collective effect of the stimuli, intrinsic neuronal dynamics, and network
connectivity on spike statistics. We illustrate our results with numerical
simulations.
Comment: 60 pages, 8 figures.
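The flavour of such a linear response relation can be shown in a far simpler setting than the gIF model: an Ornstein-Uhlenbeck caricature, where the response kernel R(t) = exp(-t/tau) is proportional to the spontaneous autocorrelation C(t) = (sigma^2 tau / 2) exp(-|t|/tau), so the effect of a weak stimulus is predicted from stimulus-free statistics (everything below is this toy model, not the paper's spiking framework):

```python
import numpy as np

# Toy linear response: dx = (-x/tau + eps*h(t)) dt + sigma dW.
# From the spontaneous autocorrelation C(t) = (sigma^2 tau / 2) exp(-|t|/tau),
# the response kernel is R(t) = (2 / (sigma^2 tau)) C(t) = exp(-t/tau), t >= 0,
# so a step stimulus h(t) = 1_{t > t0} gives
#   delta<x>(t) = eps * tau * (1 - exp(-(t - t0)/tau)).

rng = np.random.default_rng(0)
tau, sigma, eps, t0 = 1.0, 0.5, 0.2, 2.0
dt, T, N = 0.01, 8.0, 5000
steps = int(T / dt)
times = np.arange(steps) * dt

x_spont = np.zeros(N)                # ensemble without stimulus
x_stim = np.zeros(N)                 # ensemble with weak step stimulus
mean_diff = np.empty(steps)
for i, t in enumerate(times):
    x_spont += -x_spont / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    h = 1.0 if t > t0 else 0.0
    x_stim += (-x_stim / tau + eps * h) * dt \
              + sigma * np.sqrt(dt) * rng.standard_normal(N)
    mean_diff[i] = x_stim.mean() - x_spont.mean()

pred = eps * tau * (1.0 - np.exp(-np.clip(times - t0, 0.0, None) / tau))
err = np.max(np.abs(mean_diff - pred))
print(err)
```

The measured ensemble response matches the convolution prediction to within finite-ensemble fluctuations, which is the basic content of a linear response (fluctuation-dissipation) relation.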
Independent Component Analysis of Spatiotemporal Chaos
Two types of spatiotemporal chaos exhibited by ensembles of coupled nonlinear
oscillators are analyzed using independent component analysis (ICA). For
diffusively coupled complex Ginzburg-Landau oscillators that exhibit smooth
amplitude patterns, ICA extracts localized one-humped basis vectors that
reflect the characteristic hole structures of the system, and for nonlocally
coupled complex Ginzburg-Landau oscillators with fractal amplitude patterns,
ICA extracts localized basis vectors with characteristic gap structures.
Statistics of the decomposed signals also provide insight into the complex
dynamics of the spatiotemporal chaos.
Comment: 5 pages, 6 figures; JPSJ Vol 74, No.
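The decomposition machinery can be sketched with a self-contained FastICA (tanh contrast, symmetric decorrelation) applied to two mixed synthetic signals; this illustrates only the ICA step, not the Ginzburg-Landau simulations of the paper:

```python
import numpy as np

# Two synthetic sources, linearly mixed, then un-mixed by FastICA.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
s1 = np.sign(np.sin(3 * t))          # square wave (sub-Gaussian source)
s2 = np.sin(7 * t)                   # sinusoid
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = A @ S                            # observed mixtures, shape (2, T)

# Center and whiten the observations.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc

# Symmetric FastICA fixed-point iterations with g = tanh.
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                       # symmetric decorrelation of the rows

Y = W @ Z                            # recovered components (up to sign/order)
match = [max(abs(np.corrcoef(Y[i], s)[0, 1]) for i in range(2))
         for s in (s1, s2)]
print(match)
```

ICA recovers sources only up to sign and permutation, which is why the check above takes the best absolute correlation per source; the same ambiguity applies when interpreting basis vectors extracted from spatiotemporal chaos.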
Canonical Cortical Field Theories
We characterise the dynamics of neuronal activity, in terms of field theory,
using neural units placed on a 2D-lattice modelling the cortical surface. The
electrical activity of neuronal units was analysed with the aim of deriving a
neural field model with a simple functional form that is still able to predict or
reproduce empirical findings. Each neural unit was modelled using a neural mass
and the accompanying field theory was derived in the continuum limit. The field
theory comprised coupled (real) Klein-Gordon fields, where predictions of the
model fall within the range of experimental findings. These predictions
included the frequency spectrum of electric activity measured from the cortex,
which was derived using an equipartition of energy over eigenfunctions of the
neural fields. Moreover, the neural field model was invariant, within a set of
parameters, to the dynamical system used to model each neuronal mass.
Specifically, topologically equivalent dynamical systems resulted in the same
neural field model when connected in a lattice; indicating that the fields
derived could be read as a canonical cortical field theory. We specifically
investigated non-dispersive fields that provide a structure for the coding (or
representation) of afferent information. Further elaboration of the ensuing
neural field theory, including the effect of dispersive forces, could be of
importance in the understanding of the cortical processing of information.
Comment: 19 pages, 1 figure.
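The field content can be sketched numerically. A minimal sketch, assuming a 1D periodic lattice (the paper works on a 2D cortical lattice) carrying a real Klein-Gordon field integrated by leapfrog; the dispersion relation omega_k^2 = m^2 + c^2 k^2 means that equipartition of energy over eigenmodes yields mode power scaling as 1/omega_k^2, which is the kind of frequency spectrum the abstract refers to. Here we just check that the discretized field conserves energy:

```python
import numpy as np

# 1D periodic lattice Klein-Gordon field:
#   d^2 phi/dt^2 = c^2 d^2 phi/dx^2 - m^2 phi,   omega_k^2 = m^2 + c^2 k^2.
Nx, L, c, m = 128, 2 * np.pi, 1.0, 1.0
dx = L / Nx
x = np.arange(Nx) * dx
dt, steps = 0.01, 2000

phi = np.sin(x) + 0.5 * np.sin(2 * x) + 0.2 * np.cos(3 * x)  # smooth data
pi = np.zeros(Nx)                                            # momentum

def laplacian(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def energy(phi, pi):
    grad = (np.roll(phi, -1) - phi) / dx
    return np.sum(0.5 * pi**2 + 0.5 * c**2 * grad**2
                  + 0.5 * m**2 * phi**2) * dx

E0 = energy(phi, pi)
for _ in range(steps):               # leapfrog (velocity-Verlet) integrator
    pi += 0.5 * dt * (c**2 * laplacian(phi) - m**2 * phi)
    phi += dt * pi
    pi += 0.5 * dt * (c**2 * laplacian(phi) - m**2 * phi)

drift = abs(energy(phi, pi) - E0) / E0
print(drift)
```

Because the field is linear, the lattice modes decouple exactly, which is what makes the equipartition argument for the spectrum tractable.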
Mathematical aspects of localized activity in neural field models
Neural field models assume the form of integral and integro-differential equations, and describe non-linear interactions between neuron populations. Such models reduce the dimensionality and complexity of the microscopic neural-network dynamics and allow for mathematical treatment, efficient simulation and intuitive understanding. Since the seminal studies by Wilson and Cowan (1973) and Amari (1977), neural field models have been used to describe phenomena like persistent neuronal activity, waves and pattern formation in the cortex. In the present thesis we focus on mathematical aspects of localized activity described by stationary solutions of a neural field model, so-called bumps.
While neural field models represent a considerable simplification of the neural dynamics in a large network, they are often studied under further simplifying assumptions, e.g., approximating the firing-rate function with a unit step function.
In some cases these assumptions may not change essential features of the model, but in other cases they may cause some properties of the model to vary significantly or even break down. The work presented in the thesis aims at studying properties of bump solutions in one- and two-population models when the common simplifications are relaxed.
Numerical approaches used in mathematical neuroscience sometimes lack mathematical justification, which may lead to numerical instabilities, ill-conditioning or even divergence. Moreover, there are methods which have not been used in the neuroscience community but might be beneficial. We have initiated work in this direction by studying the advantages and disadvantages of a wavelet-Galerkin algorithm applied to a simplified framework of a one-population neural field model. We also focus on rigorous justification of iteration methods for constructing bumps.
We use the theory of monotone operators in ordered Banach spaces, the theory of Sobolev spaces on unbounded domains, degree theory, and other functional-analytic methods, which are still not well developed in neuroscience, to analyse the models.
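The classical Heaviside construction that the thesis relaxes can be made explicit. With a unit-step firing rate, a stationary bump of half-width a in Amari's one-population model exists precisely when W(2a) = h, where W(x) is the integral of the coupling kernel from 0 to x and h is the firing threshold. A minimal sketch, assuming the illustrative lateral-inhibition kernel w(x) = (1 - |x|) exp(-|x|), for which W(z) = z exp(-z) in closed form:

```python
import numpy as np

# Amari (1977), Heaviside firing rate f(u) = H(u - h): a stationary bump of
# half-width a satisfies W(2a) = h, with W(x) = int_0^x w(y) dy.
# For the kernel w(x) = (1 - |x|) exp(-|x|) one gets W(z) = z exp(-z),
# whose maximum is W(1) = 1/e, so two bump widths exist for every h < 1/e.

def W(z):
    return z * np.exp(-z)

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

h = 0.25                          # threshold, below W(1) = 1/e
g = lambda z: W(z) - h
z_narrow = bisect(g, 1e-9, 1.0)   # W is increasing on [0, 1]
z_broad = bisect(g, 1.0, 20.0)    # and decreasing on [1, infinity)
a_narrow, a_broad = z_narrow / 2, z_broad / 2
print(a_narrow, a_broad)
```

In Amari's analysis the narrow bump is unstable and the broad one stable; it is exactly this pairing, and its fate under smooth firing-rate functions, that motivates the fixed-point iteration schemes studied in the thesis.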
Neural Field Models: A mathematical overview and unifying framework
Rhythmic electrical activity in the brain emerges from regular non-trivial
interactions between millions of neurons. Neurons are intricate cellular
structures that transmit excitatory (or inhibitory) signals to other neurons,
often non-locally, depending on the graded input from other neurons. Often this
requires extensive detail to model mathematically, which poses several issues
in modelling large systems beyond clusters of neurons, such as the whole brain.
Modelling large populations of neurons as interconnected constituent
single-neuron models leads to an accumulation of complexity: the resulting
simulations, however realistic, do not permit mathematical tractability and
obscure the primary interactions required for emergent electrodynamical
patterns in brain rhythms. A statistical mechanics
approach with non-local interactions may circumvent these issues while
maintaining mathematical tractability. Neural field theory is a
population-level approach to modelling large sections of neural tissue based on
these principles. Herein we provide a review of key stages of the history and
development of neural field theory and contemporary uses of this branch of
mathematical neuroscience. We elucidate a mathematical framework in which
neural field models can be derived, highlighting the many significant inherited
assumptions that exist in the current literature, so that their validity may be
considered in light of further developments in both mathematical and
experimental neuroscience.
Comment: 55 pages, 10 figures, 2 tables.
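The population-level models surveyed in such reviews typically take the rate-based form du/dt = -u + w conv f(u) + I. A minimal sketch of such a field on a periodic 1D domain, with an illustrative lateral-inhibition (Mexican-hat) kernel and sigmoidal rate function (both are assumptions of this sketch, not taken from the review), using FFT-based convolution:

```python
import numpy as np

# Generic rate-based neural field (Amari / Wilson-Cowan type):
#   du/dt = -u + (w conv f(u))(x) + I(x),   periodic 1D domain.
Nx, L = 256, 20.0
dx = L / Nx
x = np.arange(Nx) * dx - L / 2

def f(u, beta=5.0, theta=0.5):
    """Sigmoidal firing-rate function (illustrative parameters)."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

# Mexican-hat coupling: short-range excitation, longer-range inhibition.
w = 1.2 * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) \
    - 0.6 * np.exp(-x**2 / 18) / np.sqrt(18 * np.pi)
w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx  # spectrum for circular conv.

I = 0.5 * np.exp(-x**2)          # localized external input
u = np.zeros(Nx)
dt, steps = 0.05, 400
for _ in range(steps):           # forward Euler in time
    conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f(u))))
    u += dt * (-u + conv + I)

print(float(u.max()), float(u.min()))
```

With this kernel the localized input carves out a bump of elevated activity on a low-activity background, one of the basic phenomena (alongside waves and patterns) that neural field theory was developed to capture.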