4 research outputs found

    Finite-time Stability, Dissipativity and Passivity Analysis of Discrete-time Neural Networks with Time-varying Delays

    A neural network with time-varying delay describes the dynamic properties of a neural cell through neural functional and neural delay differential equations, whose derivative terms depend on both the current and the past state. The objective of this paper is the analysis of discrete-time neural networks with time-varying delays. A delay-dependent condition is provided to ensure that the considered networks are finite-time stable, dissipative, and passive. Using a new Lyapunov-Krasovskii functional together with the free-weighting matrix approach and a linear matrix inequality (LMI) technique, we construct a novel, improved sufficient criterion for finite-time stability, dissipativity, and passivity of discrete-time neural networks with time-varying delays. An effective LMI condition is derived from an appropriately chosen Lyapunov functional. Finally, we demonstrate the effectiveness of the new finite-time stability, dissipativity, and passivity criteria, expressed in the form of LMIs.
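The abstracts in this set repeatedly reduce stability analysis to LMI feasibility. As a minimal illustration (not the paper's actual criterion), the classical delay-independent stability LMI for the delayed system x(k+1) = A x(k) + B x(k−d) can be checked numerically once candidate matrices P, Q are fixed; all matrices below are made-up examples, and in practice P and Q would be decision variables found by an SDP solver.

```python
import numpy as np

def delay_lmi_holds(A, B, P, Q, tol=1e-9):
    """Check the delay-independent stability LMI for
    x(k+1) = A x(k) + B x(k-d): the symmetric block matrix
        [[A'PA - P + Q, A'PB],
         [B'PA,         B'PB - Q]]
    must be negative definite (all eigenvalues < 0)."""
    M = np.block([
        [A.T @ P @ A - P + Q, A.T @ P @ B],
        [B.T @ P @ A,         B.T @ P @ B - Q],
    ])
    return bool(np.max(np.linalg.eigvalsh(M)) < -tol)

# Hypothetical stable example system and candidate Lyapunov matrices.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[0.1, 0.0], [0.05, 0.1]])
P = np.eye(2)
Q = 0.2 * np.eye(2)
print(delay_lmi_holds(A, B, P, Q))  # True for this stable example
```

Fixing P and Q turns the semidefinite feasibility problem into a plain eigenvalue check, which is enough to verify a criterion reported in a paper.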

    Finite-time passivity for neutral-type neural networks with time-varying delays – via auxiliary function-based integral inequalities

    In this paper, we investigate the problem of finite-time boundedness and finite-time passivity for neural networks with time-varying delays. Triple, quadruple, and quintuple integral terms carrying the delay information are introduced in the new Lyapunov–Krasovskii functional (LKF). Based on the auxiliary function-based integral inequality, the Wirtinger integral inequality, and Jensen's inequality, several sufficient conditions are derived. Finally, numerical examples are provided to verify the effectiveness of the proposed criteria, and the results are compared with existing ones.
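One of the bounding tools named above, Jensen's inequality, has a simple discrete (summation) form that is easy to sanity-check numerically: for any positive definite Q and vectors v_1, …, v_n, (Σ v_i)ᵀ Q (Σ v_i) ≤ n Σ v_iᵀ Q v_i. A small sketch with randomly generated, purely illustrative data:

```python
import numpy as np

def jensen_gap(Q, vs):
    """Return n * sum_i v_i' Q v_i  -  (sum_i v_i)' Q (sum_i v_i),
    which the discrete Jensen inequality guarantees is >= 0 for Q > 0."""
    n = len(vs)
    s = np.sum(vs, axis=0)
    rhs = n * sum(float(v @ Q @ v) for v in vs)
    lhs = float(s @ Q @ s)
    return rhs - lhs

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = M.T @ M + np.eye(3)                  # positive definite by construction
vs = [rng.standard_normal(3) for _ in range(5)]
print(jensen_gap(Q, vs) >= 0)            # True: the gap is nonnegative
```

In delay-system proofs this inequality is what lets a quadratic form of a sum of past states be bounded by a sum of quadratic forms, tightening the derived LMI conditions.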

    Dissipativity analysis of stochastic fuzzy neural networks with randomly occurring uncertainties using delay dividing approach

    This paper focuses on the problem of delay-dependent robust dissipativity analysis for a class of stochastic fuzzy neural networks with time-varying delay. The randomly occurring uncertainties under consideration are assumed to follow certain mutually uncorrelated Bernoulli-distributed white noise sequences. Based on Itô's differential formula, Lyapunov stability theory, and linear matrix inequalities techniques, several novel sufficient conditions are derived using the delay partitioning approach to ensure the dissipativity of neural networks with or without time-varying parametric uncertainties. It is shown, by comparing with existing approaches, that the delay-partitioning projection approach can largely reduce the conservatism of the stability results. Numerical examples are constructed to show the effectiveness of the theoretical results.
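The randomly occurring uncertainties described here can be pictured with a toy simulation (the matrices and switching probability below are illustrative, not from the paper): the uncertain term ΔA enters the dynamics only when an i.i.d. Bernoulli variable δ(k) equals 1.

```python
import numpy as np

def simulate(A, dA, p, x0, steps, rng):
    """Simulate x(k+1) = (A + delta_k * dA) x(k), where delta_k are
    i.i.d. Bernoulli(p) indicators switching the uncertainty on/off."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        delta = rng.random() < p              # Bernoulli(p) draw
        x = (A + (dA if delta else 0.0)) @ x
        traj.append(x.copy())
    return np.array(traj)

rng = np.random.default_rng(1)
A = 0.5 * np.eye(2)                           # nominal stable dynamics
dA = 0.05 * np.ones((2, 2))                   # small illustrative uncertainty
traj = simulate(A, dA, p=0.3, x0=[1.0, -1.0], steps=50, rng=rng)
print(np.linalg.norm(traj[-1]) < np.linalg.norm(traj[0]))  # True: state contracts
```

Because both A and A + ΔA are Schur stable here, the trajectory contracts along every switching pattern; the paper's LMI conditions address the harder case where only mean-square (stochastic) dissipativity can be guaranteed.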

    Piecewise Convex Technique for the Stability Analysis of Delayed Neural Network

    On the basis of the fact that the neuron activation function is sector bounded, this paper transforms the original delayed neural network under study into a linear uncertain system. Combined with a delay partitioning technique, and using a convex combination of the decomposed time delays and positive matrices, a novel Lyapunov function is constructed to derive new, less conservative stability criteria. The benefit of this method is that it utilizes more information on the slopes of the activations and on the time delays. To illustrate the effectiveness of the newly established stability criteria, one numerical example and an application example are given and compared with some recent results.
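The sector-bounded assumption above is easy to illustrate: tanh, a common activation, lies in the sector [0, 1], i.e. 0 ≤ tanh(s)/s ≤ 1 for all s ≠ 0, which is what lets the nonlinearity be embedded into a linear uncertain system f(s) = Δ(s)·s with Δ(s) ∈ [0, 1]. A quick numerical check (the grid and the choice of activation are illustrative):

```python
import numpy as np

def in_sector(f, lo, hi, grid):
    """Check that f lies in the sector [lo, hi]:
    lo <= f(s)/s <= hi for every nonzero s in grid."""
    s = grid[grid != 0.0]
    ratio = f(s) / s
    return bool(np.all(ratio >= lo - 1e-12) and np.all(ratio <= hi + 1e-12))

grid = np.linspace(-10.0, 10.0, 2001)
print(in_sector(np.tanh, 0.0, 1.0, grid))  # True: tanh is in the sector [0, 1]
```

Once the activation is rewritten as an uncertain gain Δ(s) ∈ [0, 1], stability of the delayed network reduces to robust stability of a linear system over that uncertainty set, which is the form the paper's convex-combination argument operates on.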