The complexity of dynamics in small neural circuits
Mean-field theory is a powerful tool for studying large neural networks.
However, when the system is composed of a few neurons, macroscopic differences
between the mean-field approximation and the real behavior of the network can
arise. Here we introduce a study of the dynamics of a small firing-rate network
with excitatory and inhibitory populations, in terms of local and global
bifurcations of the neural activity. Our approach is analytically tractable in
many respects, and sheds new light on the finite-size effects of the system. In
particular, we focus on the formation of multiple branching solutions of the
neural equations through spontaneous symmetry-breaking, since this phenomenon
increases considerably the complexity of the dynamical behavior of the network.
For these reasons, branching points may reveal important mechanisms through
which neurons interact and process information, which are not accounted for by
the mean-field approximation.
Comment: 34 pages, 11 figures. Supplementary materials added, colors of figures 8 and 9 fixed, results unchanged.
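A two-population excitatory/inhibitory firing-rate model of the kind studied above can be sketched numerically. This is a minimal illustrative sketch: the sigmoidal gain function, the coupling weights, and all parameter values below are assumptions for demonstration, not the ones analyzed in the paper.

```python
import numpy as np

def sigmoid(x):
    """Logistic gain function mapping input current to firing rate."""
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan(E, I, params):
    """Right-hand side of a two-population (E/I) firing-rate model."""
    wEE, wEI, wIE, wII, hE, hI, tau = params
    dE = (-E + sigmoid(wEE * E - wEI * I + hE)) / tau
    dI = (-I + sigmoid(wIE * E - wII * I + hI)) / tau
    return dE, dI

def simulate(params, E0=0.1, I0=0.1, dt=0.01, steps=5000):
    """Forward-Euler integration; returns the two activity trajectories."""
    E, I = E0, I0
    Es, Is = [], []
    for _ in range(steps):
        dE, dI = wilson_cowan(E, I, params)
        E += dt * dE
        I += dt * dI
        Es.append(E)
        Is.append(I)
    return np.array(Es), np.array(Is)

# Illustrative parameter values (wEE, wEI, wIE, wII, hE, hI, tau); not from the paper.
params = (10.0, 8.0, 12.0, 3.0, -2.0, -4.0, 1.0)
Es, Is = simulate(params)
```

Sweeping the weights or inputs in such a sketch and recording where the long-time behavior changes qualitatively is the numerical counterpart of the bifurcation analysis described in the abstract.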
Dynamics of neural systems with discrete and distributed time delays
In real-world systems, interactions between elements do not happen instantaneously, due to the time
required for a signal to propagate, reaction times of individual elements, and so forth. Moreover,
time delays are normally not constant but vary over time. It is therefore vital to introduce
time delays in any realistic model of neural networks. In order to analyze the fundamental
properties of neural networks with time-delayed connections, we consider a system of two coupled
two-dimensional nonlinear delay differential equations. This model represents a neural network,
where one subsystem receives a delayed input from another subsystem. An exciting feature of the
model under consideration is the combination of both discrete and distributed delays, where distributed
time delays represent the neural feedback between the two subsystems, and the discrete
delays describe the neural interaction within each of the two subsystems. Stability properties are
investigated for different commonly used distribution kernels, and the results are compared to the
corresponding results on stability for networks with no distributed delays. It is shown how approximations
of the boundary of the stability region of a trivial equilibrium can be obtained analytically
for the cases of delta, uniform, and weak gamma delay distributions. Numerical techniques are used
to investigate stability properties of the fully nonlinear system, and they fully confirm all analytical
findings.
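A delay differential equation of the general kind described above can be integrated with a simple history buffer. The sketch below is a minimal scalar toy, not the paper's two coupled two-dimensional subsystems: the tanh nonlinearity and coefficients are illustrative assumptions, and only a discrete delay is shown (a distributed delay would replace the single delayed sample with a kernel-weighted sum over the history).

```python
import numpy as np

def simulate_dde(tau_delay=2.0, dt=0.01, t_max=50.0):
    """Euler integration of two coupled units, where the second unit is
    driven by a delayed copy of the first (discrete delay tau_delay)."""
    n = int(t_max / dt)
    d = int(tau_delay / dt)                    # delay expressed in time steps
    x = np.zeros(n)
    y = np.zeros(n)
    x[0], y[0] = 0.5, 0.0                      # initial state
    for k in range(n - 1):
        # Constant history x(t) = x[0] is assumed for t < 0.
        x_del = x[k - d] if k >= d else x[0]
        dx = -x[k] + np.tanh(2.0 * y[k])
        dy = -y[k] + np.tanh(1.5 * x_del)
        x[k + 1] = x[k] + dt * dx
        y[k + 1] = y[k] + dt * dy
    return x, y

x, y = simulate_dde()
```

Repeating such a simulation while increasing the delay, and checking whether the equilibrium gives way to sustained oscillations, is the numerical analogue of tracing the stability boundary discussed in the abstract.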
Conditions for wave trains in spiking neural networks
Spatiotemporal patterns such as traveling waves are frequently observed in
recordings of neural activity. The mechanisms underlying the generation of such
patterns are largely unknown. Previous studies have investigated the existence
and uniqueness of different types of waves or bumps of activity using
neural-field models, phenomenological coarse-grained descriptions of
neural-network dynamics. But it remains unclear how these insights can be
transferred to more biologically realistic networks of spiking neurons, where
individual neurons fire irregularly. Here, we employ mean-field theory to
reduce a microscopic model of leaky integrate-and-fire (LIF) neurons with
distance-dependent connectivity to an effective neural-field model. In contrast
to existing phenomenological descriptions, the dynamics in this neural-field
model depends on the mean and the variance in the synaptic input, both
determining the amplitude and the temporal structure of the resulting effective
coupling kernel. For the neural-field model we employ linear stability analysis
to derive conditions for the existence of spatial and temporal oscillations and
wave trains, that is, temporally and spatially periodic traveling waves. We
first prove that wave trains cannot occur in a single homogeneous population of
neurons, irrespective of the form of distance dependence of the connection
probability. Compatible with the architecture of cortical neural networks, wave
trains emerge in two-population networks of excitatory and inhibitory neurons
as a combination of delay-induced temporal oscillations and spatial
oscillations due to distance-dependent connectivity profiles. Finally, we
demonstrate quantitative agreement between predictions of the analytically
tractable neural-field model and numerical simulations of both networks of
nonlinear rate-based units and networks of LIF neurons.
Comment: 36 pages, 8 figures, 4 tables.
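In neural-field models, the generic condition for spatial oscillations is that the Fourier transform of the connectivity kernel peaks at a nonzero wavenumber. The sketch below checks this for a difference-of-Gaussians ("Mexican hat") profile; the profile and its parameters are illustrative assumptions, not the effective coupling kernel derived in the paper.

```python
import numpy as np

def kernel_ft(k, a_e=1.0, s_e=1.0, a_i=0.8, s_i=2.0):
    """Fourier transform of a difference-of-Gaussians connectivity profile
    w(x) = a_e*exp(-x^2/(2 s_e^2)) - a_i*exp(-x^2/(2 s_i^2)).
    Each Gaussian transforms to a*s*sqrt(2*pi)*exp(-(s*k)^2/2)."""
    return (a_e * s_e * np.sqrt(2 * np.pi) * np.exp(-0.5 * (s_e * k) ** 2)
            - a_i * s_i * np.sqrt(2 * np.pi) * np.exp(-0.5 * (s_i * k) ** 2))

k = np.linspace(0.0, 5.0, 1001)
w_hat = kernel_ft(k)
k_star = k[np.argmax(w_hat)]   # wavenumber with maximal gain
```

With these (assumed) parameters the transform is negative at k = 0 but peaks at a positive value for a finite wavenumber k_star, the signature of a pattern-forming spatial instability with wavelength 2*pi/k_star.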
Bogdanov–Takens and triple zero bifurcations in general differential systems with m delays
This paper mainly concerns the derivation of the normal forms of the Bogdanov–Takens (BT) and triple zero bifurcations for differential systems with m discrete delays. Feasible algorithms for determining the existence of the corresponding bifurcations of the system at the origin are given. By using center manifold reduction and normal form theory, the coefficient formulas of the normal forms are derived, and some examples are presented to illustrate our main results.
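For reference, one common form of the truncated Bogdanov–Takens normal form (as given, e.g., in Kuznetsov's textbook treatment; the paper's m-delay coefficient formulas reduce the delayed system to this planar form on the center manifold) is

\[
\begin{aligned}
\dot{\eta}_1 &= \eta_2,\\
\dot{\eta}_2 &= \beta_1 + \beta_2 \eta_1 + \eta_1^2 + s\,\eta_1 \eta_2, \qquad s = \pm 1,
\end{aligned}
\]

where \(\beta_1, \beta_2\) are the unfolding parameters; the triple zero case requires a three-dimensional analogue with a third unfolding parameter.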
Computational study of resting state network dynamics
The aim of this thesis is to show, through a simulation with The Virtual Brain software, the most important properties of brain dynamics during the resting state, that is, when one is not engaged in any specific task and is not exposed to any particular stimulus. We begin by explaining what the resting state is through a brief historical review of its discovery, then survey some experimental methods used in the analysis of brain activity, before highlighting the difference between structural and functional connectivity. Next, we briefly summarize the concepts of dynamical systems theory, which is indispensable for understanding a complex system such as the brain. In the following chapter, taking a 'bottom-up' approach, the main structures of the nervous system are described from a biological point of view, from the neuron to the cerebral cortex. All of this is also explained from the perspective of dynamical systems, illustrating the pioneering Hodgkin-Huxley model and then the concept of population dynamics. After this preliminary part, we turn to the details of the simulation. First, more information is given about The Virtual Brain software, the resting-state network model used in the simulation is defined, and the 'connectome' employed is described. Subsequently, the results of the analysis carried out on the resulting data are presented, showing that criticality and noise play a key role in the emergence of this background activity of the brain. These results are then compared with the most important recent research in this field, which confirms the findings of our work. Finally, we briefly discuss the consequences that a full understanding of the resting-state phenomenon, and the possibility of virtualizing brain activity, would have in the medical and clinical fields.