Exponential multistability of memristive Cohen-Grossberg neural networks with stochastic parameter perturbations
Since instability is easily induced by parameter disturbances in network systems, this paper investigates the multistability of memristive Cohen-Grossberg neural networks (MCGNNs) under stochastic parameter perturbations. It is demonstrated that the stable equilibrium points of MCGNNs can be flexibly located in the odd-sequence or even-sequence regions. Some sufficient conditions are derived to ensure the exponential multistability of MCGNNs under parameter perturbations. It is found that there exist at least $(w+2)^l$ (or $(w+1)^l$) exponentially stable equilibrium points in the odd-sequence (or even-sequence) regions. Two numerical examples are given to verify the correctness and effectiveness of the obtained results.
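For concreteness, a worked instance of the equilibrium count follows. It assumes the superscript reading $(w+2)^l$ reconstructed above; the values w = 2 and l = 3 are illustrative, not taken from the paper.

```latex
% Worked instance with illustrative values w = 2, l = 3:
% the odd-sequence bound gives
\[
  (w+2)^{l} = (2+2)^{3} = 64
\]
% exponentially stable equilibrium points, while the even-sequence
% bound gives $(w+1)^{l} = 3^{3} = 27$.
```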
Monostability and multistability of genetic regulatory networks with different types of regulation functions
Monostability and multistability are two important topics in synthetic biology and systems biology. In this paper, both monostability and multistability are analyzed in a unified framework by applying control theory and mathematical tools. Genetic regulatory networks (GRNs) with multiple time-varying delays and different types of regulation functions are considered. By putting forward a general sector-like regulation function and utilizing up-to-date techniques, a novel Lyapunov–Krasovskii functional is introduced to achieve delay dependence and thereby less conservatism. A new condition is then proposed for the general stability of a GRN in the form of linear matrix inequalities (LMIs) that depend on the upper and lower bounds of the delays. The general stability conditions are applicable to several frequently used regulation functions, and the existing results for monostability of GRNs are shown to be special cases of the main results. Five examples are employed to illustrate the applicability and usefulness of the developed theoretical results. This work was supported in part by the Biotechnology and Biological Sciences Research Council (BBSRC) of the U.K. under Grant BB/C506264/1, the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60504008 and 60804028, the Program for New Century Excellent Talents in Universities of China, and the Alexander von Humboldt Foundation of Germany.
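The "general sector-like regulation function" can be made concrete with a standard example. The Hill form below is a common choice in the GRN literature, not necessarily the exact class used in the paper; h and k are generic symbols.

```latex
% A common sector-like regulation function: the Hill form, which is
% monotone with bounded slope and hence satisfies a sector condition
% with some constant k > 0.
\[
  f(x) = \frac{x^{h}}{1 + x^{h}} \quad (x \ge 0,\ h \ge 1), \qquad
  0 \le \frac{f(a) - f(b)}{a - b} \le k \quad \text{for all } a \ne b.
\]
```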
Multi-almost periodicity and invariant basins of general neural networks under almost periodic stimuli
In this paper, we investigate the convergence dynamics of almost periodic encoded patterns of general neural networks (GNNs) subjected to external almost periodic stimuli, including almost periodic delays. Invariant regions are established for the existence of almost periodic encoded patterns under two classes of activation functions. By employing the property of an M-cone and inequality techniques, attracting basins are estimated and some criteria are derived for the networks to converge exponentially toward almost periodic encoded patterns. The obtained results are new; they extend and generalize the corresponding results in the previous literature. Comment: 28 pages, 4 figures
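As a brief gloss on "converge exponentially toward almost periodic encoded patterns", the estimate below states the property in generic notation; the symbols M, ε, and x*(t) are illustrative and not the paper's own.

```latex
% Exponential convergence toward an almost periodic pattern x*(t):
% every trajectory x(t) starting in an attracting basin satisfies
\[
  \|x(t) - x^{*}(t)\| \le M\, e^{-\varepsilon t}, \qquad t \ge 0,
\]
% for some constants M >= 1 and epsilon > 0.
```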
Stability and synchronization of discrete-time neural networks with switching parameters and time-varying delays
A State Space Approach for Piecewise-Linear Recurrent Neural Networks for Reconstructing Nonlinear Dynamics from Neural Measurements
The computational properties of neural systems are often thought to be implemented in terms of their network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit (MSU) recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a state space representation of the dynamics but would also wish to have access to its governing equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework that has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, the approach is applied to MSU recordings from the rodent anterior cingulate cortex obtained during performance of a classical working memory task, delayed alternation. A model with five states turned out to be sufficient to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multistable but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (and thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of the relevant dynamics underlying observed neuronal time series and directly link them to computational properties.
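To make the model class concrete, here is a minimal generative sketch of a PLRNN state space model, assuming the common latent update z_t = A z_{t-1} + W φ(z_{t-1}) + h + noise with φ = max(0, ·). All parameter names, dimensions, and noise scales are illustrative, and the EM-based inference itself is not shown.

```python
# Minimal generative sketch of a piecewise-linear RNN (PLRNN) state space
# model. The form of the latent update and all parameters are illustrative
# assumptions, not the paper's exact specification.
import numpy as np

rng = np.random.default_rng(0)

M, N, T = 5, 20, 200                    # latent states, observed units, time steps
A = np.diag(rng.uniform(0.5, 0.9, M))   # diagonal linear "memory" terms
W = 0.1 * rng.standard_normal((M, M))   # off-diagonal piecewise-linear coupling
np.fill_diagonal(W, 0.0)
h = 0.1 * rng.standard_normal(M)        # bias
B = rng.standard_normal((N, M))         # observation (factor-loading) matrix

z = np.zeros((T, M))                    # latent trajectory
x = np.zeros((T, N))                    # observed time series
for t in range(1, T):
    relu = np.maximum(z[t - 1], 0.0)    # piecewise-linear nonlinearity
    z[t] = A @ z[t - 1] + W @ relu + h + 0.05 * rng.standard_normal(M)
    x[t] = B @ np.maximum(z[t], 0.0) + 0.1 * rng.standard_normal(N)
```

In the estimation setting of the abstract, only x would be observed; the Expectation-Maximization procedure would then infer the latent trajectory z together with the parameters A, W, h, and B.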
State-Dependent Computation Using Coupled Recurrent Networks
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
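As a toy illustration of the mechanism described above (not the authors' exact construction), the sketch below couples two clipped-linear sWTA maps with weak cross-connections so that a transiently selected state persists after the input is withdrawn. All parameter values and the clipped-linear activation are illustrative choices.

```python
# Toy sketch: two coupled soft winner-take-all (sWTA) maps holding a state
# after the selecting input is removed. Parameters are illustrative.
import numpy as np

K = 3                                  # number of embeddable states (units per map)
alpha, beta, gamma = 1.6, 0.6, 0.4     # self-excitation, global inhibition, cross-coupling
dt, T = 0.1, 400

x1 = np.zeros(K)                       # map 1 activities
x2 = np.zeros(K)                       # map 2 activities

def step(x, other, inp):
    # Rectified drive: local excitation + cross-map input - global inhibition.
    drive = alpha * x + gamma * other - beta * x.sum() + inp
    # Euler step with activities clipped to [0, 1] (saturating rate units).
    return np.clip(x + dt * (-x + np.maximum(drive, 0.0)), 0.0, 1.0)

for t in range(T):
    inp = np.zeros(K)
    if t < 100:
        inp[1] = 0.5                   # transient input selecting state 1
    x1, x2 = step(x1, x2, inp), step(x2, x1, inp)

print(np.argmax(x1))                   # state 1 persists after input withdrawal
```

The cross-coupling gamma plays the role of the abstract's recurrent cross-connections: with it, the winner pair's mutual excitation keeps the loop gain above unity at saturation, so the selected state remains expressed once the input is gone.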