261 research outputs found
A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays
This is the post-print version of the article. The official published version can be obtained from the link below. Copyright 2007 Elsevier Ltd. In this Letter, the existence and stability of periodic solutions are investigated for a class of general discrete-time recurrent neural networks with time-varying delays. For the neural networks under study, a generalized activation function is considered, and the traditional assumptions of boundedness, monotonicity and differentiability on the activation functions are removed. By employing the free-weighting matrix method, an appropriate Lyapunov-Krasovskii functional is constructed and several sufficient conditions are established to ensure the existence, uniqueness and global exponential stability of the periodic solution for the addressed neural network. The conditions depend on both the lower and upper bounds of the time-varying delays. Furthermore, the conditions are expressed in terms of linear matrix inequalities (LMIs), which can be checked numerically using the LMI toolbox in MATLAB. Two simulation examples are given to show the effectiveness and reduced conservatism of the proposed criteria. This work was supported in part by the National Natural Science Foundation of China under Grant 50608072, an International Joint Project sponsored by the Royal Society of the UK and the National Natural Science Foundation of China, and the Alexander von Humboldt Foundation of Germany.
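The delay-dependent behaviour the abstract describes can be illustrated with a small simulation. The sketch below is not the paper's LMI criterion: it simply iterates a discrete-time recurrent network with a time-varying delay and checks that it settles to a fixed point (a trivially periodic solution). All matrices, the delay pattern, and the weight scaling are made-up illustrative values.

```python
import numpy as np

# Minimal sketch, not the paper's LMI criterion: iterate a discrete-time
# recurrent network with a time-varying delay tau(k) in {1, 2, 3} and check
# that it settles to a unique fixed point when the weights are small enough.
# All matrices and the delay pattern below are illustrative values.
rng = np.random.default_rng(0)
n = 3
A = np.diag([0.5, 0.4, 0.6])             # self-feedback matrix (Schur stable)
W0 = 0.05 * rng.standard_normal((n, n))  # instantaneous connection weights
W1 = 0.05 * rng.standard_normal((n, n))  # delayed connection weights
b = np.array([0.2, -0.1, 0.3])           # constant external input
f = np.tanh                              # activation function

tau_max = 3
hist = [rng.standard_normal(n) for _ in range(tau_max + 1)]  # initial history

for k in range(200):
    tau = 1 + (k % tau_max)              # a simple time-varying delay
    x, x_del = hist[-1], hist[-1 - tau]
    hist.append(A @ x + W0 @ f(x) + W1 @ f(x_del) + b)

drift = np.linalg.norm(hist[-1] - hist[-2])  # one-step change at the end
print(f"final one-step drift: {drift:.2e}")
```

Because the self-feedback is Schur stable and the connection weights are small, the iteration is a contraction and the drift shrinks geometrically regardless of how the delay varies.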
Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays
Copyright [2006] IEEE. This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Brunel University's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it. In this letter, the global asymptotic stability analysis problem is considered for a class of stochastic Cohen-Grossberg neural networks with mixed time delays, which consist of both discrete and distributed time delays. Based on a Lyapunov-Krasovskii functional and stochastic stability analysis theory, a linear matrix inequality (LMI) approach is developed to derive several sufficient conditions guaranteeing the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the addressed stochastic Cohen-Grossberg neural networks with mixed delays are globally asymptotically stable in the mean square if two LMIs are feasible, where the feasibility of the LMIs can be readily checked by the MATLAB LMI toolbox. It is also pointed out that the main results include several existing results as special cases. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.
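Mean-square convergence of the kind the abstract establishes can be probed numerically. The sketch below is a simplified stand-in for the Cohen-Grossberg model (constant amplification function and a single discrete delay rather than mixed delays), integrated with Euler-Maruyama; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Rough Euler-Maruyama sketch of mean-square convergence for a stochastic
# delayed network  dx = [-a*x + W*tanh(x(t - tau))] dt + sig*x dW.
# Simplified stand-in for the Cohen-Grossberg model; illustrative values only.
rng = np.random.default_rng(2)
n, a, tau, dt, sig = 2, 2.0, 0.3, 0.001, 0.3
W = np.array([[0.3, -0.2],
              [0.1, 0.4]])               # delayed connection weights
lag, T, trials = int(tau / dt), int(5.0 / dt), 50

ms = 0.0                                 # Monte Carlo estimate of E|x(5)|^2
for _ in range(trials):
    x = np.zeros((T + 1, n))
    x[: lag + 1] = rng.standard_normal(n)        # constant history on [-tau, 0]
    for t in range(lag, T):
        dWt = rng.standard_normal(n) * np.sqrt(dt)  # Brownian increment
        x[t + 1] = (x[t] + dt * (-a * x[t] + W @ np.tanh(x[t - lag]))
                    + sig * x[t] * dWt)
    ms += np.sum(x[-1] ** 2) / trials

print(f"Monte Carlo estimate of E|x(5)|^2: {ms:.2e}")
```

Since the decay rate dominates both the coupling strength and the noise intensity here, the second moment contracts toward zero, which is the behaviour a feasible pair of LMIs would certify.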
Robust stability for stochastic Hopfield neural networks with time delays
This is the post-print version of the article. The official published version can be obtained from the link below. Copyright 2006 Elsevier Ltd. In this paper, the asymptotic stability analysis problem is considered for a class of uncertain stochastic neural networks with time delays and parameter uncertainties. The delays are time-invariant, and the norm-bounded uncertainties enter all of the network parameters. The aim of this paper is to establish easily verifiable conditions under which the delayed neural network is robustly asymptotically stable in the mean square for all admissible parameter uncertainties. By employing a Lyapunov-Krasovskii functional and conducting stochastic analysis, a linear matrix inequality (LMI) approach is developed to derive the stability criteria. The proposed criteria can be checked readily using standard numerical packages, and no tuning of parameters is required. Examples are provided to demonstrate the effectiveness and applicability of the proposed criteria. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
Design of exponential state estimators for neural networks with mixed time delays
This is the post-print version of the article. The official published version can be obtained from the link below. Copyright 2007 Elsevier Ltd. In this Letter, the state estimation problem is dealt with for a class of recurrent neural networks (RNNs) with mixed discrete and distributed delays. The activation functions are assumed to be neither monotonic, nor differentiable, nor bounded. We aim at designing a state estimator to estimate the neuron states, through available output measurements, such that the dynamics of the estimation error is globally exponentially stable in the presence of the mixed time delays. By using a Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions guaranteeing the existence of the state estimators. We show that both the existence conditions and the explicit expression of the desired estimator can be characterized in terms of the solution to an LMI. A simulation example is exploited to show the usefulness of the derived LMI-based stability conditions. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, the Alexander von Humboldt Foundation of Germany, the Natural Science Foundation of Jiangsu Education Committee of China under Grants 05KJB110154 and BK2006064, and the National Natural Science Foundation of China under Grants 10471119 and 10671172.
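The estimator structure the abstract describes (reconstructing neuron states from output measurements) can be sketched with a Luenberger-type observer. The example below is illustrative only: it uses a discrete-time network with a single discrete delay, and the observer gain L is picked by hand rather than from the paper's LMI conditions. All numbers are made up.

```python
import numpy as np

# Illustrative sketch only: a discrete-time Luenberger-type state estimator
# for a delayed recurrent network. The gain L is hand-tuned, not LMI-derived.
rng = np.random.default_rng(1)
n, m, tau = 3, 2, 2
A = np.diag([0.6, 0.5, 0.7])             # self-feedback matrix (Schur stable)
W = 0.05 * rng.standard_normal((n, n))   # delayed connection weights
C = rng.standard_normal((m, n))          # output (measurement) matrix
b = np.array([0.1, 0.0, -0.2])           # constant input
L = 0.1 * A @ np.linalg.pinv(C)          # hand-tuned observer gain
f = np.tanh

x_h = [rng.standard_normal(n) for _ in range(tau + 1)]   # true state history
xe_h = [np.zeros(n) for _ in range(tau + 1)]             # estimate history

for k in range(300):
    y = C @ x_h[-1]                                      # available output
    x_h.append(A @ x_h[-1] + W @ f(x_h[-1 - tau]) + b)
    xe_h.append(A @ xe_h[-1] + W @ f(xe_h[-1 - tau]) + b
                + L @ (y - C @ xe_h[-1]))                # output injection

err = np.linalg.norm(x_h[-1] - xe_h[-1])
print(f"estimation error after 300 steps: {err:.2e}")
```

The estimator copies the network dynamics and adds an output-injection correction; with a contracting error dynamics the estimate converges to the true neuron state despite the delay.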
Boundedness and stability for Cohen-Grossberg neural network with time-varying delays
In this paper, a model is considered to describe the dynamics of a Cohen-Grossberg neural network with variable coefficients and time-varying delays. Uniform ultimate boundedness and uniform boundedness are studied for the model by utilizing the Hardy inequality. By combining the Halanay inequality with the Lyapunov functional method, some new sufficient conditions are derived for the model to be globally exponentially stable. The activation functions are not assumed to be differentiable or strictly increasing. Moreover, no assumption on the symmetry of the connection matrices is necessary. These criteria are important in signal processing and the design of networks.
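The Halanay inequality invoked in this abstract has a simple scalar form: if v'(t) <= -a*v(t) + b*sup over [t-tau, t] of v(s) with a > b > 0, then v decays exponentially. The sketch below integrates the worst case (equality) with forward Euler; the constants are illustrative.

```python
# Scalar illustration of the Halanay inequality: with a > b > 0 the
# delay-differential comparison v' = -a*v + b * sup_{[t-tau, t]} v
# decays exponentially. Constants below are illustrative.
a, b, tau, dt = 2.0, 1.0, 0.5, 0.001
steps_tau = int(tau / dt)
v = [1.0] * (steps_tau + 1)            # constant initial history on [-tau, 0]

for _ in range(int(10.0 / dt)):        # forward Euler on [0, 10]
    recent_sup = max(v[-steps_tau - 1:])   # sup of v over the last tau units
    v.append(v[-1] + dt * (-a * v[-1] + b * recent_sup))

print(f"v(10) = {v[-1]:.3e}")          # decays since a > b
```

The decay rate is the unique positive root of lambda = a - b*exp(lambda*tau), which is how Halanay-based criteria yield explicit exponential convergence rates.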
Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays
This is the post-print version of the article. The official published version can be obtained from the link below. Copyright 2006 Elsevier Ltd. This Letter is concerned with the global asymptotic stability analysis problem for a class of uncertain stochastic Hopfield neural networks with discrete and distributed time delays. By utilizing a Lyapunov-Krasovskii functional, using the well-known S-procedure and conducting stochastic analysis, we show that the addressed neural networks are robustly globally asymptotically stable if a convex optimization problem is feasible. The stability criteria are derived in terms of linear matrix inequalities (LMIs), which can be effectively solved by standard numerical packages. The main results are also extended to the multiple-time-delay case. Two numerical examples are given to demonstrate the usefulness of the proposed global stability condition. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
State estimation for delayed neural networks
Copyright [2005] IEEE. This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Brunel University's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it. In this letter, the state estimation problem is studied for neural networks with time-varying delays. The interconnection matrix and the activation functions are assumed to be norm-bounded. The problem addressed is to estimate the neuron states, through available output measurements, such that for all admissible time delays, the dynamics of the estimation error is globally exponentially stable. An effective linear matrix inequality approach is developed to solve the neuron state estimation problem. In particular, we derive conditions for the existence of the desired estimators for the delayed neural networks. We also parameterize the explicit expression of the set of desired estimators in terms of linear matrix inequalities (LMIs). Finally, it is shown that the main results can be easily extended to cope with the traditional stability analysis problem for delayed neural networks. Numerical examples are included to illustrate the applicability of the proposed design method.
Non-Euclidean Contraction Analysis of Continuous-Time Neural Networks
Critical questions in dynamical neuroscience and machine learning are related
to the study of continuous-time neural networks and their stability,
robustness, and computational efficiency. These properties can be
simultaneously established via a contraction analysis.
This paper develops a comprehensive non-Euclidean contraction theory for
continuous-time neural networks. First, for non-Euclidean
logarithmic norms, we establish quasiconvexity with
respect to positive diagonal weights and closed-form worst-case expressions
over certain matrix polytopes. Second, for locally Lipschitz maps (e.g.,
arising as activation functions), we show that their one-sided Lipschitz
constant equals the essential supremum of the logarithmic norm of their
Jacobian. Third and finally, we apply these general results to classes of
continuous-time neural networks, including Hopfield, firing rate, Persidskii,
Lur'e and other models. For each model, we compute the optimal contraction rate
and corresponding weighted non-Euclidean norm via a linear program or, in some
special cases, via a Hurwitz condition on the Metzler majorant of the synaptic
matrix. Our non-Euclidean analysis also establishes absolute, connective, and
total contraction properties.
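The non-Euclidean quantities named in this abstract are straightforward to compute for small matrices. The sketch below (the matrix A is an arbitrary example, not one from the paper) evaluates the logarithmic norm induced by the infinity norm, a diagonally weighted variant, and the Hurwitz test on the Metzler majorant:

```python
import numpy as np

def mu_inf(A):
    """Logarithmic norm induced by the infinity norm:
    max_i (a_ii + sum_{j != i} |a_ij|)."""
    A = np.asarray(A, dtype=float)
    off = np.abs(A) - np.diag(np.abs(np.diag(A)))   # off-diagonal magnitudes
    return float(np.max(np.diag(A) + off.sum(axis=1)))

def mu_inf_weighted(A, eta):
    """mu_inf of diag(eta) A diag(eta)^{-1} for a positive weight vector."""
    D = np.diag(eta)
    return mu_inf(D @ A @ np.linalg.inv(D))

def metzler_majorant(A):
    """Keep the diagonal; replace off-diagonal entries by absolute values."""
    M = np.abs(np.asarray(A, dtype=float))
    np.fill_diagonal(M, np.diag(A))
    return M

A = np.array([[-3.0, 1.0, -0.5],
              [0.5, -2.0, 1.0],
              [-1.0, 0.5, -4.0]])
print("mu_inf(A) =", mu_inf(A))                        # -0.5 for this A
print("weighted, eta=[1,2,1]:", mu_inf_weighted(A, [1.0, 2.0, 1.0]))
M = metzler_majorant(A)
hurwitz = bool(np.max(np.linalg.eigvals(M).real) < 0)  # majorant Hurwitz?
print("Metzler majorant Hurwitz:", hurwitz)
```

A negative logarithmic norm certifies contraction at that rate in the corresponding norm; searching over the diagonal weights eta is what the paper's linear programs do, and here the strictly diagonally dominant Metzler majorant is Hurwitz by Gershgorin's theorem.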
Exponential Lag Synchronization of Cohen-Grossberg Neural Networks with Discrete and Distributed Delays on Time Scales
In this article, we investigate exponential lag synchronization results for
the Cohen-Grossberg neural networks (C-GNNs) with discrete and distributed
delays on an arbitrary time domain by applying feedback control. We formulate
the problem by using the time scales theory so that the results can be applied
to any uniform or non-uniform time domain. We also provide a comparison showing
that the obtained results unify and generalize the existing ones. Our main
tools are the unified matrix-measure theory and the Halanay inequality. In the
last section, we provide two simulated examples on different time domains to
illustrate the effectiveness and generality of the obtained analytical results.
Comment: 20 pages, 18 figures