Anti-periodic solution for fuzzy Cohen–Grossberg neural networks with time-varying and distributed delays
In this paper, by using a continuation theorem of coincidence degree theory and a differential inequality, we establish sufficient conditions ensuring the existence and global exponential stability of anti-periodic solutions for a class of fuzzy Cohen–Grossberg neural networks with time-varying and distributed delays. In addition, we present an illustrative example to show the feasibility of the obtained results.
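For orientation, a commonly studied form of a fuzzy Cohen–Grossberg neural network with time-varying and distributed delays is sketched below; the notation is illustrative, and the paper's precise model may differ in its fuzzy terms and delay kernels.
\[
\dot{x}_i(t) = -a_i\bigl(x_i(t)\bigr)\Bigl[\, b_i\bigl(x_i(t)\bigr)
- \sum_{j=1}^{n} c_{ij}\, f_j\bigl(x_j(t-\tau_{ij}(t))\bigr)
- \bigwedge_{j=1}^{n} \alpha_{ij} \int_{0}^{\infty} K_{ij}(s)\, f_j\bigl(x_j(t-s)\bigr)\,ds
- \bigvee_{j=1}^{n} \beta_{ij} \int_{0}^{\infty} K_{ij}(s)\, f_j\bigl(x_j(t-s)\bigr)\,ds
+ I_i(t) \Bigr],
\]
where the \(a_i\) are amplification functions, the \(b_i\) behaved functions, the \(f_j\) activation functions, the \(\tau_{ij}(t)\) time-varying delays, the \(K_{ij}\) distributed-delay kernels, and \(\bigwedge\), \(\bigvee\) denote the fuzzy AND and fuzzy OR operations. An anti-periodic solution of period \(T\) satisfies \(x(t+T) = -x(t)\) for all \(t\).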
Exponential Lag Synchronization of Cohen-Grossberg Neural Networks with Discrete and Distributed Delays on Time Scales
In this article, we investigate exponential lag synchronization results for Cohen-Grossberg neural networks (C-GNNs) with discrete and distributed delays on an arbitrary time domain by applying feedback control. We formulate the problem using time scales theory so that the results apply to any uniform or non-uniform time domain. We also provide a comparison showing that the obtained results unify and generalize the existing ones. Our main tools are the unified matrix-measure theory and the Halanay inequality. In the last section, we provide two simulated examples for different time domains to show the effectiveness and generality of the obtained analytical results.
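As background, the classical continuous-time Halanay inequality, which the time-scales results generalize, can be stated as follows; the constants and the time-scale formulation used in the article may differ.
\[
\text{If } \dot{v}(t) \le -a\, v(t) + b \sup_{t-\tau \le s \le t} v(s) \text{ for } t \ge t_0 \text{ with } a > b > 0,
\quad\text{then}\quad
v(t) \le \Bigl(\sup_{t_0-\tau \le s \le t_0} v(s)\Bigr) e^{-\lambda (t - t_0)},
\]
where \(\lambda > 0\) is the unique positive root of \(\lambda = a - b\, e^{\lambda \tau}\). In synchronization analysis, \(v(t)\) typically measures the norm of the error between drive and response trajectories, so the estimate yields exponential decay of the lag-synchronization error.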
Complete Stability of Neural Networks With Extended Memristors
The article considers a large class of delayed neural networks (NNs) with extended memristors obeying the Stanford model. This is a widely used and popular model that accurately describes the switching dynamics of real nonvolatile memristor devices implemented in nanotechnology. The article studies, via the Lyapunov method, complete stability (CS), i.e., convergence of trajectories in the presence of multiple equilibrium points (EPs), for delayed NNs with Stanford memristors. The obtained conditions for CS are robust with respect to variations of the interconnections and hold for any value of the concentrated delay. Moreover, they can be checked either numerically, via a linear matrix inequality (LMI), or analytically, via the concept of Lyapunov diagonally stable (LDS) matrices. The conditions ensure that at the end of the transient the capacitor voltages and the NN power vanish, which in turn leads to advantages in terms of power consumption. Notwithstanding this, the nonvolatile memristors retain the result of the computation in accordance with the in-memory computing principle. The results are verified and illustrated via numerical simulations. From a methodological viewpoint, the article faces new challenges in proving CS since, due to the presence of nonvolatile memristors, the NNs possess a continuum of nonisolated EPs. Also, for physical reasons, the memristor state variables are constrained to lie in given intervals, so the dynamics of the NNs need to be modeled via a class of differential inclusions named differential variational inequalities.
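As a rough illustration of the analytic route mentioned above, the following Python sketch checks whether a candidate positive diagonal matrix P certifies that a given interconnection matrix A is Lyapunov diagonally stable, i.e., that P A + A^T P is negative definite. The matrices here are made-up examples and are not taken from the article.

```python
import numpy as np

def is_lds_certificate(A, p_diag, tol=1e-9):
    """Check whether P = diag(p_diag), with positive entries, certifies
    Lyapunov diagonal stability of A: P A + A^T P negative definite."""
    P = np.diag(p_diag)
    S = P @ A + A.T @ P              # symmetric by construction
    eigvals = np.linalg.eigvalsh(S)  # eigenvalues of the symmetric matrix S
    return np.all(p_diag > 0) and np.all(eigvals < -tol)

# Illustrative 2x2 interconnection matrix and candidate P (not from the article).
A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
p = np.array([1.0, 1.5])

print(is_lds_certificate(A, p))  # True: P A + A^T P is negative definite
```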
Periodic solutions for a Cauchy problem on time scales
Abstract: This paper first shows, by using the Laplace transform on time scales, that there does not exist a nonzero periodic solution of a nonhomogeneous Cauchy problem. Secondly, two new Gronwall inequalities, which play an important role in the qualitative analysis of differential and integral equations, are established. Thirdly, by employing the contraction mapping principle, existence and uniqueness results for weighted S-asymptotically ω-periodic solutions of a nonlinear Cauchy problem on time scales are obtained in an asymptotically periodic function space. Finally, some examples are presented to illustrate some of the results described here.
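For context, the classical continuous-time Gronwall inequality that the time-scales versions extend reads as follows; the exact hypotheses of the inequalities established in the paper may differ.
\[
\text{If } u(t) \le \alpha + \int_{t_0}^{t} \beta(s)\, u(s)\, ds \text{ for } t \ge t_0,
\text{ with } \alpha \ge 0 \text{ and } \beta \ge 0 \text{ continuous, then }
u(t) \le \alpha \exp\!\Bigl(\int_{t_0}^{t} \beta(s)\, ds\Bigr).
\]
On a time scale \(\mathbb{T}\), the Riemann integral is replaced by the \(\Delta\)-integral and the exponential by the time-scale exponential function \(e_{\beta}(t, t_0)\).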
Differentiating noise and modulators in artificial neural networks
Research in Computational Neural Networks is currently taking place at many different levels, from coarse-grain symbolic models to fine-grain representations of neurons and cell processes. One feature the different approaches share is that they are all in relative infancy. Thus, most research concentrates on gross aspects of neural communication and methods of computational simulation.
Recently, some clues have been found which point to more subtle mechanisms underlying the information processing capability of neural 'nodes'. These clues are the improvement in network operation produced by the injection of random noise, and the neurobiological finding that neuropeptides may act as slower signal-transmission channels between neurons.
This study concerns the difference between random noise injection and the directed, low-level activity injections postulated to be produced by neuromodulators such as neuropeptides. The findings of this study are that random noise does, indeed, enhance the operation of coarse-grain neural models, and that a 'neuropeptidergic' analogue also enhances operation, but to a different extent and probably through a different mechanism. Further testing of a medium-grain computer model gives some indication of how neuropeptidergic modulation might affect real neurons, by extending the time-course of the neuron's activation. This appears to be a mechanism similar to that postulated for the coarse-grain 'neuropeptidergic' simulation model.
Given these findings, is it possible that signal transmission in real nervous systems employs these mechanisms? If so, it may be that a process of concurrent propagation through different signal channels also occurs in real nervous systems, making the nervous system much more complex than current models allow.
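A minimal sketch of the two kinds of injection discussed above is given below: zero-mean random noise added to a unit's net input, and a slowly decaying activity trace standing in for a slower 'neuropeptidergic' channel that extends the activation time-course. The architecture, noise level, and activation function are illustrative assumptions and not the study's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_unit(x, w, b, noise_std=0.1):
    """Single logistic unit whose net input is perturbed by zero-mean
    Gaussian noise, a crude stand-in for random-noise injection."""
    net = np.dot(w, x) + b + rng.normal(0.0, noise_std)
    return 1.0 / (1.0 + np.exp(-net))

def modulated_unit(x, w, b, trace, decay=0.9, gain=0.2):
    """Unit with a slowly decaying activity trace added to its net input,
    a crude stand-in for a slower modulatory channel."""
    net = np.dot(w, x) + b + gain * trace
    out = 1.0 / (1.0 + np.exp(-net))
    trace = decay * trace + out        # trace persists across time steps
    return out, trace

x = np.array([0.2, 0.7])
w = np.array([1.5, -0.8])
print(noisy_unit(x, w, b=0.1))
out, trace = modulated_unit(x, w, b=0.1, trace=0.0)
print(out, trace)
```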