261 research outputs found

    Robust stability for stochastic Hopfield neural networks with time delays

    This is the post print version of the article. The official published version can be obtained from the link below - Copyright 2006 Elsevier Ltd. In this paper, the asymptotic stability analysis problem is considered for a class of uncertain stochastic neural networks with time delays and parameter uncertainties. The delays are time-invariant, and the norm-bounded uncertainties enter all of the network parameters. The aim of this paper is to establish easily verifiable conditions under which the delayed neural network is robustly asymptotically stable in the mean square for all admissible parameter uncertainties. By employing a Lyapunov–Krasovskii functional and conducting stochastic analysis, a linear matrix inequality (LMI) approach is developed to derive the stability criteria. The proposed criteria can be checked readily using standard numerical packages, and no tuning of parameters is required. Examples are provided to demonstrate the effectiveness and applicability of the proposed criteria. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
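    As a concrete illustration of this kind of LMI-based test, the sketch below checks a standard delay-independent Lyapunov–Krasovskii criterion for a small linearized delayed network. The matrices A and Ad are invented for illustration (not taken from the paper), and the simplest candidate P = Q = I happens to certify stability here; in general one searches over P and Q with an SDP solver.

```python
import numpy as np

# Hypothetical 2-neuron linearized delayed network x'(t) = A x(t) + Ad x(t - tau)
# (matrices chosen for illustration only)
A = np.array([[-4.0, 0.5], [0.3, -5.0]])   # self-feedback part (Hurwitz)
Ad = np.array([[0.8, -0.2], [0.1, 0.6]])   # delayed interconnection

def lmi_certificate(A, Ad, P, Q):
    """Delay-independent Lyapunov-Krasovskii test: with V(x_t) = x'Px
    + integral of x'Qx over [t-tau, t], asymptotic stability holds for
    every delay if the block matrix M below is negative definite."""
    M = np.block([[A.T @ P + P @ A + Q, P @ Ad],
                  [Ad.T @ P, -Q]])
    return bool(np.max(np.linalg.eigvalsh(M)) < 0)

n = A.shape[0]
stable = lmi_certificate(A, Ad, np.eye(n), np.eye(n))
print(stable)  # True: P = Q = I already certifies stability here
```

    For networks where a fixed guess fails, the same block matrix becomes the constraint of a semidefinite feasibility program over P and Q, which is the "standard numerical packages" step the abstract refers to.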

    Design of exponential state estimators for neural networks with mixed time delays

    This is the post print version of the article. The official published version can be obtained from the link below - Copyright 2007 Elsevier Ltd. In this Letter, the state estimation problem is dealt with for a class of recurrent neural networks (RNNs) with mixed discrete and distributed delays. The activation functions are assumed to be neither monotonic, nor differentiable, nor bounded. We aim at designing a state estimator to estimate the neuron states, through available output measurements, such that the dynamics of the estimation error is globally exponentially stable in the presence of mixed time delays. By using a Lyapunov–Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions guaranteeing the existence of the state estimators. We show that both the existence conditions and the explicit expression of the desired estimator can be characterized in terms of the solution to an LMI. A simulation example is exploited to show the usefulness of the derived LMI-based stability conditions. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, the Alexander von Humboldt Foundation of Germany, the Natural Science Foundation of Jiangsu Education Committee of China under Grants 05KJB110154 and BK2006064, and the National Natural Science Foundation of China under Grants 10471119 and 10671172.

    Boundedness and stability for Cohen–Grossberg neural network with time-varying delays

    In this paper, a model is considered to describe the dynamics of Cohen–Grossberg neural network with variable coefficients and time-varying delays. Uniformly ultimate boundedness and uniform boundedness are studied for the model by utilizing the Hardy inequality. Combining with the Halanay inequality and the Lyapunov functional method, some new sufficient conditions are derived for the model to be globally exponentially stable. The activation functions are not assumed to be differentiable or strictly increasing. Moreover, no assumption on the symmetry of the connection matrices is necessary. These criteria are important in signal processing and the design of networks.
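    The Halanay inequality mentioned above is the standard route from a differential inequality with delay to an explicit exponential decay rate. A minimal sketch, with illustrative constants not taken from the paper, computes that rate by bisection on the scalar characteristic equation.

```python
import math

def halanay_rate(a, b, tau, tol=1e-12):
    """Halanay inequality: if v'(t) <= -a v(t) + b sup_{s in [t-tau, t]} v(s)
    with a > b >= 0, then v(t) <= (sup of initial history) * exp(-lam * t),
    where lam > 0 is the unique root of lam = a - b * exp(lam * tau)."""
    if b == 0.0:
        return a                      # no delayed term: plain exponential decay
    g = lambda lam: a - b * math.exp(lam * tau) - lam
    lo, hi = 0.0, a                   # g(0) = a - b > 0 and g(a) = -b e^{a tau} < 0
    while hi - lo > tol:              # bisection on the monotone function g
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative constants: instantaneous decay a dominates delayed growth b
lam = halanay_rate(a=2.0, b=0.5, tau=1.0)
print(round(lam, 4))  # a value strictly between 0 and a = 2
```

    Note that the rate depends on the delay tau through the exponential term, so larger delays yield slower certified decay even when a and b are fixed.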

    Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays

    This is the post print version of the article. The official published version can be obtained from the link below - Copyright 2006 Elsevier Ltd. This Letter is concerned with the global asymptotic stability analysis problem for a class of uncertain stochastic Hopfield neural networks with discrete and distributed time-delays. By utilizing a Lyapunov–Krasovskii functional, using the well-known S-procedure and conducting stochastic analysis, we show that the addressed neural networks are robustly, globally, asymptotically stable if a convex optimization problem is feasible. Then, the stability criteria are derived in terms of linear matrix inequalities (LMIs), which can be effectively solved by some standard numerical packages. The main results are also extended to the multiple time-delay case. Two numerical examples are given to demonstrate the usefulness of the proposed global stability condition. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.

    Non-Euclidean Contraction Analysis of Continuous-Time Neural Networks

    Critical questions in dynamical neuroscience and machine learning are related to the study of continuous-time neural networks and their stability, robustness, and computational efficiency. These properties can be simultaneously established via a contraction analysis. This paper develops a comprehensive non-Euclidean contraction theory for continuous-time neural networks. First, for non-Euclidean ℓ1/ℓ∞ logarithmic norms, we establish quasiconvexity with respect to positive diagonal weights and closed-form worst-case expressions over certain matrix polytopes. Second, for locally Lipschitz maps (e.g., arising as activation functions), we show that their one-sided Lipschitz constant equals the essential supremum of the logarithmic norm of their Jacobian. Third and finally, we apply these general results to classes of continuous-time neural networks, including Hopfield, firing rate, Persidskii, Lur'e and other models. For each model, we compute the optimal contraction rate and corresponding weighted non-Euclidean norm via a linear program or, in some special cases, via a Hurwitz condition on the Metzler majorant of the synaptic matrix. Our non-Euclidean analysis also establishes absolute, connective, and total contraction properties.
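    A minimal numerical sketch of the ℓ∞ side of this machinery, with an invented synaptic matrix W: for a Hopfield-type model x' = -x + W tanh(x), tanh has Lipschitz constant 1, so the ℓ∞ logarithmic norm of the Metzler majorant of -I + |W| bounds the contraction rate, and Hurwitzness of that majorant can be read off its eigenvalues.

```python
import numpy as np

def log_norm_inf(A):
    """l-infinity logarithmic norm: mu_inf(A) = max_i (a_ii + sum_{j != i} |a_ij|)."""
    off = np.abs(A) - np.diag(np.abs(np.diag(A)))  # off-diagonal magnitudes
    return np.max(np.diag(A) + off.sum(axis=1))

def metzler_majorant(A):
    """Entrywise |a_ij| off the diagonal, with the diagonal a_ii kept."""
    M = np.abs(A)
    np.fill_diagonal(M, np.diag(A))
    return M

# Hypothetical synaptic matrix (illustration only, not from the paper)
W = np.array([[0.2, -0.3], [0.4, 0.1]])
J = -np.eye(2) + metzler_majorant(W)   # majorant of the Jacobian bound
rate = log_norm_inf(J)                 # negative value => contraction in l-infinity
hurwitz = bool(np.max(np.linalg.eigvals(J).real) < 0)
print(rate, hurwitz)  # rate is -0.5 here, and the majorant is Hurwitz
```

    The weighted versions in the paper replace the plain ℓ∞ norm with a diagonally weighted one, turning the search for the best rate into the linear program the abstract mentions.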

    Exponential Lag Synchronization of Cohen-Grossberg Neural Networks with Discrete and Distributed Delays on Time Scales

    In this article, we investigate exponential lag synchronization results for Cohen-Grossberg neural networks (C-GNNs) with discrete and distributed delays on an arbitrary time domain by applying feedback control. We formulate the problem using the time scales theory so that the results can be applied to any uniform or non-uniform time domain. We also provide a comparison showing that the obtained results unify and generalize the existing ones. Mainly, we use the unified matrix-measure theory and the Halanay inequality to establish these results. In the last section, we provide two simulated examples for different time domains to show the effectiveness and generality of the obtained analytical results. Comment: 20 pages, 18 figures
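    To illustrate the continuous-time special case of the time-scales setting, the sketch below simulates a scalar drive-response pair with a discrete delay under linear feedback control, so that the response y(t) tracks the lagged drive x(t - sigma). All constants and the forward-Euler scheme are invented for illustration and are only a rough stand-in for the paper's analysis.

```python
import math

# Drive:    x' = -a x + w tanh(x(t - tau))
# Response: y' = -a y + w tanh(y(t - tau)) - k (y(t) - x(t - sigma))
# Illustrative constants; the feedback gain k drives the lag error to zero.
a, w, tau, sigma, k = 1.0, 0.8, 0.5, 0.3, 5.0
dt, T = 0.001, 20.0
steps = int(T / dt)
d_tau, d_sig = int(tau / dt), int(sigma / dt)

x = [0.9] * (d_tau + 1)                    # constant initial history of the drive
y = [-0.4] * (max(d_tau, d_sig) + 1)       # mismatched initial history of the response

for _ in range(steps):
    xd, yd = x[-1 - d_tau], y[-1 - d_tau]  # tau-delayed states
    xs = x[-1 - d_sig]                     # sigma-lagged drive state
    x.append(x[-1] + dt * (-a * x[-1] + w * math.tanh(xd)))
    y.append(y[-1] + dt * (-a * y[-1] + w * math.tanh(yd) - k * (y[-1] - xs)))

err = abs(y[-1] - x[-1 - d_sig])           # lag-synchronization error at time T
print(f"lag error at T: {err:.2e}")        # near zero for this gain
```

    On a general time scale the Euler step dt is replaced by the graininess of the domain, which is exactly where the unified matrix-measure and Halanay machinery of the paper enters.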