    New delay-dependent stability criteria for recurrent neural networks with time-varying delays

    Dimirovski, Georgi M.
    This work is concerned with the delay-dependent stability problem for recurrent neural networks with time-varying delays. A new, improved delay-dependent stability criterion, expressed in terms of linear matrix inequalities, is derived by constructing a dedicated Lyapunov-Krasovskii functional using the Wirtinger inequality and a convex combination approach. Moreover, a further improved delay-dependent stability criterion is established by means of a new partitioning method for the bounding conditions on the activation function, together with certain new activation function conditions. Finally, these novel results are applied to an illustrative example from the literature, and their effectiveness is shown via comparison with recent existing results.
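    The delay-dependent stability notion in the abstract above can be illustrated numerically. The sketch below (not from the paper; the matrices C, W, the delay tau, and the initial state are made-up values) simulates a Hopfield-type delayed network x'(t) = -C x(t) + W tanh(x(t - tau)) by forward Euler and checks that the trajectory settles at the origin for the chosen delay.

```python
import numpy as np

# Hypothetical example: forward-Euler simulation of a delayed neural network
# x'(t) = -C x(t) + W tanh(x(t - tau)).  All numbers are illustrative, not
# taken from the cited paper.
def simulate(C, W, tau, x0, dt=1e-3, T=20.0):
    steps = int(T / dt)
    d = max(int(tau / dt), 1)                 # delay expressed in time steps
    # constant initial history x(t) = x0 for t in [-tau, 0]
    hist = np.tile(np.asarray(x0, dtype=float), (d + steps + 1, 1))
    for k in range(d, d + steps):
        x, x_delayed = hist[k], hist[k - d]
        hist[k + 1] = x + dt * (-C @ x + W @ np.tanh(x_delayed))
    return hist[d:]                           # trajectory on [0, T]

C = np.diag([1.5, 1.2])                       # made-up self-feedback gains
W = np.array([[0.3, -0.2],
              [0.1,  0.4]])                   # made-up interconnection weights
traj = simulate(C, W, tau=0.5, x0=[1.0, -1.0])
print(np.linalg.norm(traj[-1]))               # small: the state has settled
```

    Here the weight matrix is weak enough that stability holds for this delay; the LMI criteria in the paper certify such behaviour analytically instead of by simulation.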

    Delay-dependent criterion for exponential stability analysis of neural networks with time-varying delays

    This note investigates the problem of exponential stability of neural networks with time-varying delays. To derive a less conservative stability condition, a novel augmented Lyapunov-Krasovskii functional (LKF) which includes triple- and quadruple-integral terms is employed. In order to reduce the complexity of the stability test, the convex combination method is utilized to derive an improved delay-dependent stability criterion in the form of linear matrix inequalities (LMIs). The superiority of the proposed approach is demonstrated by two comparative examples.
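    A Lyapunov-based stability test can be sketched in code, though only for the much simpler delay-free case: below, the classical Lyapunov equation A^T P + P A = -Q is solved and P is verified to be positive definite. The example matrix A is made up; the paper's actual criterion is an LMI built from an augmented LKF with integral terms, which this simple test merely hints at.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative sketch only: check the delay-free Lyapunov condition
# A^T P + P A = -Q with Q > 0 for a made-up Hurwitz matrix A.
A = np.array([[-2.0, 0.5],
              [ 0.3, -1.5]])
Q = np.eye(2)
# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so passing A^T and -Q yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh(P)
print(eigs.min())                      # positive: P is positive definite,
                                       # so x'(t) = A x(t) is stable
```

    Delay-dependent criteria such as the one in this note replace the single matrix P with a functional involving integrals of the state history, which is what makes the resulting LMIs larger but less conservative.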

    Stability and dissipativity analysis of static neural networks with time delay

    This paper is concerned with the problems of stability and dissipativity analysis for static neural networks (NNs) with time delay. Some improved delay-dependent stability criteria are established for static NNs with time-varying or time-invariant delay using the delay-partitioning technique. Based on these criteria, several delay-dependent sufficient conditions are given to guarantee the dissipativity of static NNs with time delay. All the results given in this paper depend not only on the time delay but also on the number of delay partitions. Some examples are given to illustrate the effectiveness and reduced conservatism of the proposed results.

    State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays

    Copyright © 2012 Springer Verlag.
    This paper is concerned with the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The parameters of the neural networks under consideration switch over time subject to a Markov chain. The networks involve both a discrete time-varying delay and a mode-dependent distributed time-delay characterized by upper and lower boundaries dependent on the Markov chain. By constructing novel Lyapunov-Krasovskii functionals, sufficient conditions are first established to guarantee the exponential stability in mean square for the addressed discrete-time neural networks with Markovian jumping parameters and mixed time-delays. Then, the state estimation problem is addressed for the same neural network, where the goal is to design a state estimator such that the estimation error approaches zero exponentially in mean square. The derived conditions for both the stability and the existence of desired estimators are expressed in the form of matrix inequalities that can be solved by the semi-definite programming method. A numerical simulation example is exploited to demonstrate the usefulness of the main results obtained.
    This work was supported in part by the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60774073 and 61074129, and the Natural Science Foundation of Jiangsu Province of China under Grant BK2010313.
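    The state-estimation goal described above can be sketched in its simplest form: a Luenberger-style estimator for a non-switching discrete-time neural network x[k+1] = A x[k] + W tanh(x[k]) with output y[k] = C x[k]. The Markov jumps and distributed delays of the paper are omitted here, and A, W, C, and the injection gain L are made-up values chosen so that the error contracts.

```python
import numpy as np

# Hedged sketch, not the paper's design: estimator
#   xh[k+1] = A xh[k] + W tanh(xh[k]) + L (y[k] - C xh[k])
# driven by the measured output y[k] = C x[k].
A = np.diag([0.5, 0.4])                       # made-up state matrix
W = np.array([[0.1, -0.05],
              [0.05, 0.1]])                   # made-up weight matrix
C = np.array([[1.0, 0.0]])                    # only the first state is measured
L = np.array([[0.3], [0.1]])                  # made-up estimator gain

x = np.array([1.0, -1.0])                     # true state
xh = np.zeros(2)                              # estimate starts at the origin
for k in range(200):
    y = C @ x                                 # measurement at step k
    x = A @ x + W @ np.tanh(x)                # plant update
    xh = A @ xh + W @ np.tanh(xh) + (L @ (y - C @ xh)).ravel()
err = np.linalg.norm(x - xh)
print(err)                                    # estimation error has decayed
```

    The matrix-inequality conditions in the paper certify the existence of such a gain L for the full jumping, delayed model, where a simple eigenvalue check like the one implicit here no longer suffices.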

    Stability Analysis for Delayed Neural Networks Considering Both Conservativeness and Complexity


    Combined Convex Technique on Delay-Distribution-Dependent Stability for Delayed Neural Networks

    Using the Lyapunov-Krasovskii functional approach together with an improved delay-partitioning idea, a novel sufficient condition is derived to guarantee that a class of delayed neural networks is asymptotically stable in the mean-square sense, where the probabilistic variable delay and both limits of its variation can be measured. By combining the reciprocal convex technique with the standard convex technique, the criterion is presented via LMIs, and its solvability depends heavily on the sizes of both the time-delay range and its variation; it can become much less conservative than existing criteria by thinning the delay intervals. Finally, four numerical examples demonstrate that this idea reduces conservatism more effectively than some earlier reported ones.

    Dissipativity analysis of stochastic fuzzy neural networks with randomly occurring uncertainties using delay dividing approach

    This paper focuses on the problem of delay-dependent robust dissipativity analysis for a class of stochastic fuzzy neural networks with time-varying delay. The randomly occurring uncertainties under consideration are assumed to follow certain mutually uncorrelated Bernoulli-distributed white noise sequences. Based on the Itô differential formula, Lyapunov stability theory, and linear matrix inequality techniques, several novel sufficient conditions are derived using a delay-partitioning approach to ensure the dissipativity of neural networks with or without time-varying parametric uncertainties. It is shown, by comparison with existing approaches, that the delay-partitioning projection approach can largely reduce the conservatism of the stability results. Numerical examples are constructed to show the effectiveness of the theoretical results.