
    Combined Convex Technique on Delay-Distribution-Dependent Stability for Delayed Neural Networks

    Combining the Lyapunov-Krasovskii functional approach with an improved delay-partitioning idea, a novel sufficient condition is derived that guarantees a class of delayed neural networks to be asymptotically stable in the mean-square sense, in which the probabilistic variable delay and both delay variation limits can be measured. By combining the reciprocal convex technique with the convex one, the criterion is presented via LMIs, and its solvability depends heavily on the sizes of both the time-delay range and its variations; it can become much less conservative than existing results by thinning the delay intervals. Finally, four numerical examples demonstrate that the proposed idea reduces conservatism more effectively than some earlier reported ones.
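    The abstract above concerns mean-square asymptotic stability of delayed neural networks. As a minimal illustrative sketch (not the paper's model or criterion; the matrices, delay, and step size below are hypothetical), one can simulate a two-neuron continuous-time network with a constant delay and observe the state contracting toward the origin:

```python
import numpy as np

# Hypothetical two-neuron delayed network (assumed values, for illustration):
#   x'(t) = -A x(t) + W tanh(x(t - tau)),  integrated with forward Euler.
A = np.diag([2.0, 2.0])            # positive self-decay rates
W = np.array([[0.5, -0.3],
              [0.2,  0.4]])        # small interconnection weights
tau, dt, T = 0.5, 0.001, 20.0      # delay, step size, horizon
d = int(tau / dt)                  # delay expressed in steps

steps = int(T / dt)
x = np.zeros((steps + d + 1, 2))
x[:d + 1] = 1.0                    # constant initial history on [-tau, 0]

for k in range(d, steps + d):
    delayed = np.tanh(x[k - d])
    x[k + 1] = x[k] + dt * (-A @ x[k] + W @ delayed)

print(np.linalg.norm(x[-1]))       # trajectory contracts toward the origin
```

    Here the decay rates dominate the interconnection gains (tanh is 1-Lipschitz), so the zero equilibrium attracts regardless of the delay; the LMI criteria discussed above formalize exactly this kind of trade-off.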

    New delay-dependent stability criteria for recurrent neural networks with time-varying delays

    Dimirovski, Georgi M.
    This work is concerned with the delay-dependent stability problem for recurrent neural networks with time-varying delays. A new, improved delay-dependent stability criterion, expressed in terms of linear matrix inequalities, is derived by constructing a dedicated Lyapunov-Krasovskii functional that exploits the Wirtinger inequality and a convex combination approach. Moreover, a further improved delay-dependent stability criterion is established by means of a new partitioning method for the bounding conditions on the activation function, together with certain new activation function conditions. Finally, these results are applied to an illustrative example from the literature, and their effectiveness is shown by comparison with recent existing ones.
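    The building block behind Lyapunov-Krasovskii constructions like the one above is the ordinary Lyapunov equation for the delay-free part. As a minimal sketch (the Hurwitz matrix A and weight Q below are assumed example values, not from the paper), one can compute the quadratic term V(x) = xᵀPx numerically:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# For a Hurwitz matrix A, solve A^T P + P A = -Q for P and verify P > 0.
# scipy's solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so passing A.T and -Q yields the standard Lyapunov equation.
A = np.array([[-3.0,  1.0],
              [ 0.0, -2.0]])       # example Hurwitz matrix (assumption)
Q = np.eye(2)

P = solve_continuous_lyapunov(A.T, -Q)

print(np.linalg.eigvalsh(P))       # all positive: V(x) = x^T P x decays along trajectories
```

    An LKF for the delayed case augments such a quadratic term with integral terms over the delay interval; the Wirtinger inequality then bounds those integrals tightly.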

    A delay-dividing approach to robust stability of uncertain stochastic complex-valued Hopfield delayed neural networks

    In scientific disciplines and other engineering applications, most systems involve uncertainties, since uncertain parameters are unavoidable when modeling physical systems. In view of this, it is important to investigate dynamical systems with uncertain parameters. In the present study, a delay-dividing approach is devised to study the robust stability of uncertain neural networks. Specifically, the uncertain stochastic complex-valued Hopfield neural network (USCVHNN) with time delay is investigated, where the uncertainties of the system parameters are norm-bounded. Based on the Lyapunov approach and the homeomorphism principle, sufficient conditions for the global asymptotic stability of the USCVHNN are derived. To perform this derivation, we divide the complex-valued neural network (CVNN) into two parts, real and imaginary, using the delay-dividing approach. All criteria are expressed via linear matrix inequalities (LMIs). Two examples confirm the usefulness of the proposed delay-dividing approach for the USCVHNN model.
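    The real/imaginary decomposition mentioned above rests on a standard identity: the complex linear map z ↦ Wz is equivalent to a real linear map on the stacked vector [Re z; Im z]. A minimal sketch (random matrices, purely illustrative):

```python
import numpy as np

# The complex map z -> W z equals a real 2n x 2n map on [Re z; Im z]:
# W z = (Wr x - Wi y) + i (Wi x + Wr y)  for  W = Wr + i Wi,  z = x + i y.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)

Wr, Wi = W.real, W.imag
M = np.block([[Wr, -Wi],
              [Wi,  Wr]])          # real block representation of W

v = np.concatenate([z.real, z.imag])
w = M @ v                          # real-valued computation

ref = W @ z                        # complex-valued reference
print(np.allclose(w, np.concatenate([ref.real, ref.imag])))
```

    This equivalence is what lets the paper state real-valued LMI conditions for a complex-valued Hopfield network.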

    STABILITY, FINITE-TIME STABILITY AND PASSIVITY CRITERIA FOR DISCRETE-TIME DELAYED NEURAL NETWORKS

    In this paper, we study the problems of stability, finite-time stability, and passivity for discrete-time neural networks (DNNs) with variable delays. For the stability analysis, an augmented Lyapunov-Krasovskii functional (LKF) with single and double summation terms and several augmented vectors is proposed by decomposing the time-delay interval into two non-equidistant subintervals. Then, using the Wirtinger-based inequality together with the reciprocally and extended reciprocally convex combination lemmas, tight estimates for the sum terms in the forward difference of the LKF are obtained. To relax the existing results, several zero equalities are introduced, and stability criteria are proposed in terms of linear matrix inequalities (LMIs). The main objective of the finite-time stability and passivity analysis is to effectively evaluate the finite-time passivity conditions for DNNs. To this end, some weighted summation inequalities are proposed for a finite-sum term appearing in the forward difference of the LKF, which helps ensure that the considered delayed DNN is passive. The derived passivity criteria are also presented in terms of LMIs. Numerical examples illustrate the proposed methodology.
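    For intuition on the discrete-time model class above, here is a minimal simulation sketch (not the paper's criterion; the matrices and delay are assumed example values) of a delayed DNN whose linear part is Schur stable and whose activation couplings are weak, so the forward difference of any reasonable LKF is negative:

```python
import numpy as np

# Hypothetical discrete-time delayed network (assumed values):
#   x(k+1) = A x(k) + W0 tanh(x(k)) + W1 tanh(x(k - d))
A  = np.diag([0.3, 0.4])           # Schur-stable linear part
W0 = np.array([[ 0.1, -0.05],
               [ 0.05, 0.1 ]])
W1 = np.array([[ 0.05, 0.02],
               [-0.02, 0.05]])
d, steps = 4, 200                  # time delay and horizon

x = np.zeros((steps + d + 1, 2))
x[:d + 1] = 1.0                    # initial history

for k in range(d, steps + d):
    x[k + 1] = A @ x[k] + W0 @ np.tanh(x[k]) + W1 @ np.tanh(x[k - d])

print(np.linalg.norm(x[-1]))       # small: the zero equilibrium attracts
```

    Because the spectral norms of A, W0, and W1 sum to well below one and tanh is 1-Lipschitz, the state contracts geometrically; the LMI-based criteria above certify such behavior without simulation.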

    Stability of Stochastic Discrete-Time Neural Networks with Discrete Delays and the Leakage Delay

    This paper investigates the stability of stochastic discrete-time neural networks (NNs) with discrete time-varying delays and a leakage delay. By partitioning the time-varying and leakage delays in the discrete-time system, we construct a novel Lyapunov-Krasovskii functional based on stability theory. Furthermore, sufficient conditions are derived to guarantee the global asymptotic stability of the equilibrium point. A numerical example demonstrates the effectiveness and applicability of the proposed method.