How single neuron properties shape chaotic dynamics and signal transmission in random neural networks
While most models of randomly connected networks assume nodes with simple
dynamics, nodes in realistic highly connected networks, such as neurons in the
brain, exhibit intrinsic dynamics over multiple timescales. We analyze how the
dynamical properties of nodes (such as single neurons) and recurrent
connections interact to shape the effective dynamics in large randomly
connected networks. A novel dynamical mean-field theory for strongly connected
networks of multi-dimensional rate units shows that the power spectrum of the
network activity in the chaotic phase emerges from a nonlinear sharpening of
the frequency response function of single units. For the case of
two-dimensional rate units with strong adaptation, we find that the network
exhibits a state of "resonant chaos", characterized by robust, narrow-band
stochastic oscillations. The coherence of stochastic oscillations is maximal at
the onset of chaos and their correlation time scales with the adaptation
timescale of single units. Surprisingly, the resonance frequency can be
predicted from the properties of isolated units, even in the presence of
heterogeneity in the adaptation parameters. In the presence of these
internally-generated chaotic fluctuations, the transmission of weak,
low-frequency signals is strongly enhanced by adaptation, whereas signal
transmission is not influenced by adaptation in the non-chaotic regime. Our
theoretical framework can be applied to other mechanisms at the level of single
nodes, such as synaptic filtering, refractoriness or spike synchronization.
These results advance our understanding of the interaction between the dynamics
of single units and recurrent connectivity, which is a fundamental step toward
the description of biologically realistic network models in the brain, or, more
generally, networks of other physical or man-made complex dynamical units.
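The network class described above can be illustrated with a minimal simulation sketch. This is not the paper's exact model: the parameter values (coupling gain, adaptation timescale and strength) are hypothetical, chosen only to place a random rate network with a slow adaptation variable in a fluctuating regime.

```python
import numpy as np

# Illustrative sketch (not the authors' exact equations): a randomly
# connected network of two-dimensional rate units, each carrying a slow
# adaptation variable, integrated with explicit Euler steps.
rng = np.random.default_rng(0)
N = 200          # number of units
g = 2.0          # coupling gain (hypothetical, above the onset of chaos)
tau_a = 10.0     # adaptation timescale (hypothetical)
beta = 1.0       # adaptation strength (hypothetical)
dt = 0.05
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random connectivity

x = rng.normal(0.0, 1.0, N)   # fast "voltage" variable
a = np.zeros(N)               # slow adaptation variable
trace = []
for _ in range(4000):
    r = np.tanh(x)                        # firing-rate nonlinearity
    x += dt * (-x - beta * a + J @ r)     # recurrent drive minus adaptation
    a += dt * (x - a) / tau_a             # adaptation tracks activity slowly
    trace.append(r[0])

trace = np.asarray(trace)
# In the chaotic regime the single-unit rate fluctuates irregularly; the
# power spectrum of such traces is what the dynamical mean-field theory
# described above predicts from the single-unit frequency response.
```

The adaptation timescale `tau_a` is the knob that, per the abstract, sets the correlation time of the stochastic oscillations in the resonant-chaos state.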
A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements
The computational properties of neural systems are often thought to be
implemented in terms of their network dynamics. Hence, recovering the system
dynamics from experimentally observed neuronal time series, like multiple
single-unit (MSU) recordings or neuroimaging data, is an important step toward
understanding its computations. Ideally, one would not only seek a state space
representation of the dynamics, but would wish to have access to its governing
equations for in-depth analysis. Recurrent neural networks (RNNs) are a
computationally powerful and dynamically universal formal framework which has
been extensively studied from both the computational and the dynamical systems
perspective. Here we develop a semi-analytical maximum-likelihood estimation
scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of
state space models, which accounts for noise in both the underlying latent
dynamics and the observation process. The Expectation-Maximization algorithm is
used to infer the latent state distribution, through a global Laplace
approximation, and the PLRNN parameters iteratively. After validating the
procedure on toy examples, the approach is applied to MSU recordings from the
rodent anterior cingulate cortex obtained during performance of a classical
working memory task, delayed alternation. A model with 5 states turned out to
be sufficient to capture the essential computational dynamics underlying task
performance, including stimulus-selective delay activity. The estimated models
were rarely multi-stable, but rather were tuned to exhibit slow dynamics in the
vicinity of a bifurcation point. In summary, the present work advances a
semi-analytical (thus reasonably fast) maximum-likelihood estimation framework
for PLRNNs that may enable recovery of the relevant dynamics underlying observed
neuronal time series and directly link them to computational properties.
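The state space structure described above can be sketched generatively. The sketch below is illustrative only: it shows a piecewise-linear latent update (linear part plus a ReLU term) with Gaussian noise in both the latent dynamics and the observations, with hypothetical parameter values chosen solely to keep the dynamics stable; the paper's EM procedure would estimate such parameters from data.

```python
import numpy as np

# Minimal generative sketch of a piecewise-linear RNN (PLRNN) state space
# model: noisy piecewise-linear latent dynamics observed through a noisy
# linear readout. All parameter values here are hypothetical.
rng = np.random.default_rng(1)
M, Nobs, T = 5, 20, 100                      # 5 latent states, 20 channels

A = np.diag(rng.uniform(0.5, 0.9, M))        # linear (diagonal) part
W = 0.1 * rng.normal(size=(M, M))            # piecewise (ReLU-coupled) part
np.fill_diagonal(W, 0.0)
h = rng.normal(scale=0.1, size=M)            # bias term
B = rng.normal(size=(Nobs, M))               # observation matrix

z = np.zeros(M)
Z, X = [], []
for _ in range(T):
    # latent transition: linear term + piecewise-linear term + process noise
    z = A @ z + W @ np.maximum(z, 0.0) + h + 0.05 * rng.normal(size=M)
    x = B @ z + 0.1 * rng.normal(size=Nobs)  # noisy observation
    Z.append(z)
    X.append(x)

Z, X = np.asarray(Z), np.asarray(X)
# EM with a global Laplace approximation (as in the abstract) would infer
# A, W, h, B and the latent trajectory Z from the observations X alone.
```

Because the latent map is piecewise linear, its fixed points and stability can be analyzed region by region, which is what makes the recovered equations amenable to the in-depth dynamical analysis the abstract calls for.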
Modeling of complex-valued Wiener systems using B-spline neural network
In this brief, a new complex-valued B-spline neural network is introduced to model the complex-valued Wiener system using observational input/output data. The complex-valued nonlinear static function in the Wiener system is represented using the tensor product of two univariate B-spline neural networks, one over the real and one over the imaginary part of the system input. Following a simple least-squares parameter initialization scheme, the Gauss-Newton algorithm is applied for parameter estimation, incorporating the De Boor algorithm for both the B-spline curve and the first-order derivative recursions. Numerical examples, including a nonlinear high-power amplifier model in communication systems, demonstrate the efficacy of the proposed approach.
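The Wiener-model structure above can be sketched as a linear dynamic block followed by a tensor-product B-spline static nonlinearity. This is a hedged illustration, not the brief's implementation: the FIR filter, knot grid, spline degree, and weights are all hypothetical, and SciPy's `BSpline.basis_element` stands in for a hand-rolled De Boor recursion.

```python
import numpy as np
from scipy.interpolate import BSpline

# Sketch of the Wiener-model structure: linear filter, then a static
# nonlinearity built as a tensor product of two univariate B-spline bases,
# one over the real and one over the imaginary part of the filtered signal.
def bspline_basis(knots, degree, x):
    """Evaluate all B-spline basis functions of `degree` on `knots` at x."""
    n = len(knots) - degree - 1
    return np.array([BSpline.basis_element(knots[i:i + degree + 2],
                                           extrapolate=False)(x)
                     for i in range(n)])       # shape (n, len(x))

rng = np.random.default_rng(2)
u = rng.normal(size=64) + 1j * rng.normal(size=64)  # complex-valued input
h = np.array([1.0, 0.5, 0.25])                      # hypothetical FIR part
v = np.convolve(u, h, mode="same")                  # linear dynamic block

knots, deg = np.linspace(-4, 4, 10), 3
Br = np.nan_to_num(bspline_basis(knots, deg, v.real))  # basis over Re(v)
Bi = np.nan_to_num(bspline_basis(knots, deg, v.imag))  # basis over Im(v)

# Tensor-product static nonlinearity:
#   y_t = sum_{j,k} W[j,k] * Br_j(Re v_t) * Bi_k(Im v_t)
W = rng.normal(size=(Br.shape[0], Bi.shape[0]))     # hypothetical weights
y = np.einsum("jk,jt,kt->t", W, Br, Bi)             # one output component
```

In the brief's setting, the weights `W` (and the filter) would be fitted to measured input/output data by least-squares initialization followed by Gauss-Newton refinement, rather than drawn at random as here.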
Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression
We study the spatiotemporal dynamics of a two-dimensional excitatory neuronal network with synaptic depression. Coupling between populations of neurons is taken to be nonlocal, while depression is taken to be local and presynaptic. We show that the network supports a wide range of spatially structured oscillations, which are suggestive of phenomena seen in cortical slice experiments and in vivo. The particular form of the oscillations depends on initial conditions and the level of background noise. Given an initial, spatially localized stimulus, activity evolves to a spatially localized oscillating core that periodically emits target waves. Low levels of noise can spontaneously generate several pockets of oscillatory activity that interact via their target patterns. Periodic activity in space can also organize into spiral waves, provided that there is some source of rotational symmetry breaking due to external stimuli or noise. In the high gain limit, no oscillatory behavior exists, but a transient stimulus can lead to a single, outward propagating target wave.
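The model class above can be illustrated with a compact one-dimensional analogue (the paper itself treats two spatial dimensions): an excitatory rate field with nonlocal coupling whose presynaptic output is scaled by a local depression variable. All parameter values are illustrative.

```python
import numpy as np

# 1-D sketch of an excitatory neural field with presynaptic synaptic
# depression: nonlocal Gaussian coupling W, local depression variable q.
N, dt = 128, 0.1
x = np.linspace(-np.pi, np.pi, N)
W = np.exp(-np.subtract.outer(x, x) ** 2 / 0.5)   # nonlocal coupling kernel
W *= 2.0 / W.sum(axis=1, keepdims=True)           # normalize row sums

u = np.exp(-x ** 2 / 0.05)   # spatially localized initial stimulus
q = np.ones(N)               # synaptic resources (depression variable)
tau_q, beta = 20.0, 0.5      # recovery timescale, depletion rate

def f(u):
    """Sigmoidal firing-rate function (illustrative gain and threshold)."""
    return 1.0 / (1.0 + np.exp(-(u - 0.2) / 0.05))

for _ in range(2000):
    r = f(u)
    u += dt * (-u + W @ (q * r))                  # depression is presynaptic:
                                                  # it scales r before coupling
    q += dt * ((1.0 - q) / tau_q - beta * q * r)  # deplete with use, recover
```

The interplay of slow resource recovery (`tau_q`) and activity-dependent depletion is what, in the two-dimensional setting of the paper, lets a localized core oscillate and emit target waves.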
Exponential multistability of memristive Cohen-Grossberg neural networks with stochastic parameter perturbations
© 2020 Elsevier Ltd. All rights reserved. This manuscript is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Licence (http://creativecommons.org/licenses/by-nc-nd/4.0/). Because instability is easily induced by parameter disturbances in network systems, this paper investigates the multistability of memristive Cohen-Grossberg neural networks (MCGNNs) under stochastic parameter perturbations. It is demonstrated that stable equilibrium points of MCGNNs can be flexibly located in the odd-sequence or even-sequence regions. Some sufficient conditions are derived to ensure the exponential multistability of MCGNNs under parameter perturbations. It is found that there exist at least (w+2)^l (or (w+1)^l) exponentially stable equilibrium points in the odd-sequence (or even-sequence) regions. Two numerical examples are given to verify the correctness and effectiveness of the obtained results. Peer reviewed.
- …