
    Mean Square Exponential Stability of Stochastic Cohen-Grossberg Neural Networks with Unbounded Distributed Delays

This paper addresses the mean square exponential stability of stochastic Cohen-Grossberg neural networks (SCGNN), whose state variables are described by stochastic nonlinear integrodifferential equations. With the help of a Lyapunov function, stochastic analysis, and inequality techniques, some novel sufficient conditions for the mean square exponential stability of SCGNN are given. Furthermore, we establish sufficient conditions for checking the exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays.
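For orientation, systems of this kind are commonly written (a generic sketch with standard notation; the paper's exact coefficients and kernels may differ) as

\[ dx_i(t) = -a_i(x_i(t))\Big[ b_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t)) - \sum_{j=1}^{n} d_{ij} \int_{-\infty}^{t} K_{ij}(t-s)\, f_j(x_j(s))\, ds \Big] dt + \sum_{j=1}^{n} \sigma_{ij}(x_j(t))\, dw_j(t), \]

where the a_i are amplification functions, the b_i are well-behaved rate functions, the f_j are activations, the kernels K_{ij} encode the unbounded distributed delays, and the w_j are Brownian motions. Mean square exponential stability of the equilibrium then means that there exist constants M \ge 1 and \lambda > 0 such that

\[ \mathbb{E}\,\|x(t;\xi)\|^{2} \le M\, e^{-\lambda t} \sup_{s \le 0} \mathbb{E}\,\|\xi(s)\|^{2}, \qquad t \ge 0, \]

for every initial function \xi.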

Stability analysis of impulsive stochastic Cohen–Grossberg neural networks with mixed time delays

This is the post-print version of the article. The official published version can be obtained from the link. Copyright 2008 Elsevier Ltd. In this paper, the problem of stability analysis for a class of impulsive stochastic Cohen–Grossberg neural networks with mixed delays is considered. The mixed time delays comprise both time-varying and infinite distributed delays. By employing a combination of M-matrix theory and stochastic analysis techniques, a sufficient condition is obtained to ensure the existence, uniqueness, and exponential p-stability of the equilibrium point for the addressed impulsive stochastic Cohen–Grossberg neural network with mixed delays. The proposed method, which does not make use of a Lyapunov functional, is shown to be simple yet effective for analyzing the stability of impulsive or stochastic neural networks with variable and/or distributed delays. We then extend our main results to the case where the parameters contain interval uncertainties. Moreover, the exponential convergence rate index, which depends on the system parameters, is estimated. An example is given to show the effectiveness of the obtained results. This work was supported by the Natural Science Foundation of CQ CSTC under Grant 2007BB0430, the Scientific Research Fund of Chongqing Municipal Education Commission under Grant KJ070401, an International Joint Project sponsored by the Royal Society of the UK and the National Natural Science Foundation of China, and the Alexander von Humboldt Foundation of Germany.
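Two of the ingredients named above can be recalled in a couple of lines (a sketch, not the paper's exact statement). Exponential p-stability of the equilibrium x^* asks for constants M \ge 1 and \lambda > 0 with

\[ \mathbb{E}\,\|x(t) - x^{*}\|^{p} \le M\, e^{-\lambda t} \sup_{s \le 0} \mathbb{E}\,\|\xi(s) - x^{*}\|^{p}, \qquad p \ge 1, \]

and the M-matrix test that replaces a Lyapunov construction is purely algebraic: a square matrix A = (a_{ij}) with a_{ij} \le 0 for i \ne j is a nonsingular M-matrix if and only if all of its leading principal minors are positive, in which case A^{-1} \ge 0 entrywise.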

    Robustness analysis of Cohen-Grossberg neural network with piecewise constant argument and stochastic disturbances

Robustness of neural networks has been a hot topic in recent years. This paper studies the robustness of the global exponential stability of Cohen-Grossberg neural networks with a piecewise constant argument and stochastic disturbances, and asks whether such networks can maintain global exponential stability under the perturbation of the piecewise constant argument and of the stochastic disturbances. By using stochastic analysis theory and inequality techniques, the admissible interval length of the piecewise constant argument and the upper bound of the noise intensity are derived by solving transcendental equations. Finally, we offer several examples to illustrate the efficacy of the findings.
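A piecewise constant argument freezes part of the state on a time grid; a minimal deterministic sketch (with generic coefficients, not the paper's) is

\[ \dot{x}_i(t) = -a_i(x_i(t))\Big[ b_i(x_i(t)) - \sum_{j} c_{ij} f_j(x_j(t)) - \sum_{j} d_{ij} g_j\big(x_j(\gamma(t))\big) \Big], \qquad \gamma(t) = t_k \ \text{ for } t \in [t_k, t_{k+1}). \]

The robustness question is then how large the grid spacing \sup_k (t_{k+1} - t_k) and the noise intensity can be made before global exponential stability is lost; the admissible bounds appear as solutions of transcendental equations.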

    Exponential Stability of Cohen-Grossberg Neural Networks with Impulse Time Window

This paper concerns the problem of exponential stability for a class of Cohen-Grossberg neural networks with an impulse time window and time-varying delays. The impulsive effects we consider can occur stochastically within a definite time window, and the impulsive controllers can be nonlinear and may even depend on the states of all the neurons. Hence, the impulses here are more general and more widely applicable. By utilizing Lyapunov functional theory, inequality techniques, and analysis methods, we obtain some novel and effective exponential stability criteria for Cohen-Grossberg neural networks. These results generalize several previously known results, and numerical simulations are given to show the effectiveness of the derived results.
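The phrase "impulse time window" can be illustrated as follows (a sketch; the paper's exact formulation may differ): the k-th impulse instant t_k is not fixed in advance but may occur anywhere inside a prescribed window,

\[ x(t_k^{+}) = x(t_k^{-}) + I_k\big(x(t_k^{-})\big), \qquad t_k \in \big[kT,\ (k+1)T\big), \]

with possibly nonlinear, state-dependent jump maps I_k, which is what makes these impulses more general than impulses at fixed instants.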

    Restrictions and Stability of Time-Delayed Dynamical Networks

This paper deals with the global stability of time-delayed dynamical networks. We show that for a time-delayed dynamical network with non-distributed delays, the network and the corresponding non-delayed network are both either globally stable or unstable. We demonstrate that this may not be the case if the network's delays are distributed. The main tool in our analysis is a new procedure of dynamical network restrictions. This procedure is useful in that it allows for improved estimates of a dynamical network's global stability. Moreover, it is a computationally simpler and much more effective means of analyzing the stability of dynamical networks than the procedure of isospectral network expansions introduced in [Isospectral graph transformations, spectral equivalence, and global stability of dynamical networks. Nonlinearity, 25 (2012) 211-254]. The effectiveness of our approach is illustrated by applications to various classes of Cohen-Grossberg neural networks. Comment: 32 pages, 9 figures
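The distinction drawn in the abstract can be made concrete with a typical form of such a network (a sketch, not the paper's exact model):

\[ \dot{x}_i(t) = -x_i(t) + \sum_{j} W_{ij}\, f_j\big(x_j(t - \tau_{ij})\big), \]

where the delays are non-distributed when each interaction uses a single fixed lag \tau_{ij}. The first result says that global stability of this system and of its undelayed counterpart (all \tau_{ij} = 0) stand or fall together, whereas replacing the fixed lags by distributed (integral) delays can break this equivalence.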

    Stability in N-Layer recurrent neural networks

Starting with the theory developed by Hopfield, Cohen-Grossberg and Kosko, the study of associative memories is extended to N-layer recurrent neural networks. The stability of different multilayer networks is demonstrated under specified bounding hypotheses. The analysis involves theorems for the additive as well as the multiplicative models for continuous and discrete N-layer networks. These demonstrations are based on continuous and discrete Liapunov theory. The thesis develops autoassociative and heteroassociative memories. It points out the link between all recurrent networks of this type. The discrete case is analyzed using the threshold signal function as the activation function. A general approach for studying the stability and convergence of multilayer recurrent networks is developed.
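The prototype for these Liapunov arguments is Kosko's discrete BAM energy for the two-layer case, recalled here as a sketch of what the thesis generalizes to N layers:

\[ E(a, b) = -\, a^{\mathsf{T}} M b = -\sum_{i} \sum_{j} a_i\, m_{ij}\, b_j, \]

which never increases under threshold updates of either layer, so the network settles into a local minimum of E; the N-layer theorems extend this idea under the stated bounding hypotheses.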

    The Synthesis of Arbitrary Stable Dynamics in Non-linear Neural Networks II: Feedback and Universality

We wish to construct a realization theory of stable neural networks and use this theory to model the variety of stable dynamics apparent in natural data. Such a theory should have numerous applications to constructing specific artificial neural networks with desired dynamical behavior. The networks used in this theory should have well understood dynamics yet be as diverse as possible to capture natural diversity. In this article, I describe a parameterized family of higher order, gradient-like neural networks which have known arbitrary equilibria with unstable manifolds of known, specified dimension. Moreover, any system with hyperbolic dynamics is conjugate to one of these systems in a neighborhood of the equilibrium points. Prior work on how to synthesize attractors using dynamical systems theory, optimization, or direct parametric fits to known stable systems is either non-constructive, lacks generality, or has unspecified attracting equilibria. More specifically, we construct a parameterized family of gradient-like neural networks with a simple feedback rule which will generate equilibrium points with a set of unstable manifolds of specified dimension. Strict Lyapunov functions and nested periodic orbits are obtained for these systems and used as a method of synthesis to generate a large family of systems with the same local dynamics. This work is applied to show how one can interpolate finite sets of data on nested periodic orbits. Air Force Office of Scientific Research (90-0128)
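Two of the terms used here are worth unpacking (a standard fact, not specific to this paper): in a gradient system \dot{x} = -\nabla V(x), the potential V is itself a strict Lyapunov function, since along trajectories

\[ \dot{V} = \nabla V(x) \cdot \dot{x} = -\,\|\nabla V(x)\|^{2} \le 0, \]

with equality exactly at equilibria; and at a hyperbolic equilibrium the unstable manifold has dimension equal to the number of negative eigenvalues of the Hessian \nabla^{2} V, which is the sense in which equilibria with unstable manifolds of specified dimension can be prescribed.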

    The hippocampus and cerebellum in adaptively timed learning, recognition, and movement

The concepts of declarative memory and procedural memory have been used to distinguish two basic types of learning. A neural network model suggests how such memory processes work together as recognition learning, reinforcement learning, and sensory-motor learning take place during adaptive behaviors. To coordinate these processes, the hippocampal formation and cerebellum each contain circuits that learn to adaptively time their outputs. Within the model, hippocampal timing helps to maintain attention on motivationally salient goal objects during variable task-related delays, and cerebellar timing controls the release of conditioned responses. This property is part of the model's description of how cognitive-emotional interactions focus attention on motivationally valued cues, and how this process breaks down due to hippocampal ablation. The model suggests that the hippocampal mechanisms that help to rapidly draw attention to salient cues could prematurely release motor commands if the release of these commands were not adaptively timed by the cerebellum. The model hippocampal system modulates cortical recognition learning without actually encoding the representational information that the cortex encodes. These properties avoid the difficulties faced by several models that propose a direct hippocampal role in recognition learning. Learning within the model hippocampal system controls adaptive timing and spatial orientation. Model properties hereby clarify how hippocampal ablations cause amnesic symptoms and difficulties with tasks that combine task delays, novelty detection, and attention towards goal objects amid distractions. When these model recognition, reinforcement, sensory-motor, and timing processes work together, they suggest how the brain can accomplish conditioning of multiple sensory events to delayed rewards, as during serial compound conditioning. Air Force Office of Scientific Research (F49620-92-J-0225, F49620-86-C-0037, 90-0128); Advanced Research Projects Agency (ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309, N00014-92-J-1904); National Institute of Mental Health (MH-42900)

    Neural networks as nonlinear dynamical systems

Recurrent Neural Networks (RNNs) represent an important class of bio-inspired learning machines belonging to the field of Artificial Intelligence. Due to the cyclic interconnections between the artificial neurons and to the nonlinear activation functions, RNNs are nonlinear dynamical systems. From the point of view of the field of Dynamical Systems, a specific feature of RNNs is that their state space may contain multiple equilibria, not necessarily all stable. Thus, the usual local concepts of stability are not sufficient for an adequate description. Accordingly, the analysis has to be done within both the framework of Stability theory and the framework of the Qualitative theory of systems with several equilibria. The presentation first focuses on the main structure and features of the human brain, those which have been taken into account for deriving the artificial simulators of its functions. The second part presents the basics of linear and nonlinear dynamical systems, including the main concepts of stability – both for local equilibria and for the global behavior of the system – as well as the powerful tool of Lyapunov-like methods used for systems' analysis. In the third part, different models of RNNs are considered (Hopfield, competitive Cohen-Grossberg, Bidirectional Associative Memory, Cellular Neural Networks, K-Winner-Takes-All networks) and discussed within the framework of Dynamical Systems. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
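The global-stability results discussed in the third part descend from the classical Cohen-Grossberg Lyapunov function, recalled here as a sketch for the symmetric case c_{ij} = c_{ji}: for the system

\[ \dot{x}_i = a_i(x_i)\Big[ b_i(x_i) - \sum_{j=1}^{n} c_{ij}\, f_j(x_j) \Big], \]

the function

\[ V(x) = \frac{1}{2} \sum_{i,j} c_{ij}\, f_i(x_i)\, f_j(x_j) - \sum_{i} \int_{0}^{x_i} b_i(s)\, f_i'(s)\, ds \]

satisfies

\[ \dot{V} = -\sum_{i} a_i(x_i)\, f_i'(x_i) \Big[ b_i(x_i) - \sum_{j} c_{ij}\, f_j(x_j) \Big]^{2} \le 0 \]

whenever a_i \ge 0 and f_i' \ge 0; when moreover a_i > 0 and f_i' > 0, LaSalle's invariance principle sends every bounded trajectory to the set of equilibria.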