1,882 research outputs found

    Stability and dissipativity analysis of static neural networks with time delay

    This paper is concerned with the problems of stability and dissipativity analysis for static neural networks (NNs) with time delay. Improved delay-dependent stability criteria are established for static NNs with time-varying or time-invariant delay using the delay-partitioning technique. Based on these criteria, several delay-dependent sufficient conditions are given to guarantee the dissipativity of static NNs with time delay. All of the results depend not only on the time delay but also on the number of delay partitions. Examples are given to illustrate the effectiveness and reduced conservatism of the proposed results.
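    The delay-partitioning technique referred to above can be sketched as follows (generic notation of ours, not necessarily the paper's): the delay interval is divided into m equal segments of length δ = τ/m, and the Lyapunov-Krasovskii functional carries one integral term per segment, e.g.

```latex
% Delay interval [-\tau, 0] split into m segments of length \delta = \tau/m
V(x_t) = x^{\mathsf T}(t)\, P\, x(t)
  + \sum_{i=1}^{m} \int_{t - i\delta}^{\,t - (i-1)\delta} x^{\mathsf T}(s)\, Q_i\, x(s)\, \mathrm{d}s,
  \qquad \delta = \tau/m .
```

    Increasing m enlarges the set of feasible (P, Q_i) and thus tends to reduce conservatism, at the cost of larger LMIs, which is why the criteria depend on the number of partitions.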

    State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays

    © 2012 Springer Verlag. This paper is concerned with the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The parameters of the neural networks under consideration switch over time subject to a Markov chain. The networks involve both a discrete time-varying delay and a mode-dependent distributed delay whose upper and lower bounds depend on the Markov chain. By constructing novel Lyapunov-Krasovskii functionals, sufficient conditions are first established to guarantee exponential stability in mean square for the addressed discrete-time neural networks with Markovian jumping parameters and mixed time-delays. Then, the state estimation problem is addressed for the same neural network, where the goal is to design a state estimator such that the estimation error approaches zero exponentially in mean square. The derived conditions for both stability and the existence of the desired estimators are expressed as matrix inequalities that can be solved by semidefinite programming. A numerical simulation example demonstrates the usefulness of the main results. This work was supported in part by the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60774073 and 61074129, and the Natural Science Foundation of Jiangsu Province of China under Grant BK2010313.
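    The conditions in this line of work are LMIs solvable by semidefinite programming. As a minimal illustration of the underlying Lyapunov machinery (a single delay-free mode with matrices and names of our own choosing, not the paper's construction), one can solve the discrete Lyapunov equation directly and check the resulting stability certificate:

```python
import numpy as np

def discrete_lyapunov(A, Q):
    """Solve A^T P A - P = -Q by vectorization:
    vec(A^T P A) = (A^T kron A^T) vec(P), so (I - kron) vec(P) = vec(Q)."""
    n = A.shape[0]
    K = np.kron(A.T, A.T)
    vec_p = np.linalg.solve(np.eye(n * n) - K, Q.flatten())
    return vec_p.reshape(n, n)

# Example mode: Schur-stable (spectral radius < 1)
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
Q = np.eye(2)
P = discrete_lyapunov(A, Q)

# P > 0 and A^T P A - P < 0 together certify exponential
# stability of x(k+1) = A x(k)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
residual = A.T @ P @ A - P
assert np.all(np.linalg.eigvalsh((residual + residual.T) / 2) < 0)
```

    In the Markovian-jump, mixed-delay setting of the paper, a family of such inequalities (one coupled set per mode) must hold simultaneously, which is why a semidefinite-programming solver rather than a direct linear solve is needed.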

    Stability and synchronization of discrete-time neural networks with switching parameters and time-varying delays


    Integral partitioning approach to stability analysis and stabilization of distributed time delay systems

    In this paper, the problems of delay-dependent stability analysis and stabilization are investigated for linear continuous-time systems with distributed delay. By introducing an integral partitioning technique, a new form of Lyapunov-Krasovskii functional (LKF) is constructed, and improved distributed-delay-dependent stability conditions are established in terms of linear matrix inequalities (LMIs). Based on these criteria, a design algorithm for a state feedback controller is proposed. The results developed in this paper are less conservative than existing ones in the literature, as illustrated by several examples. © 2011 IFAC. The 18th World Congress of the International Federation of Automatic Control (IFAC 2011), Milano, Italy, 28 August-2 September 2011. In Proceedings of the 18th IFAC World Congress, 2011, v. 18 pt. 1, p. 5094–509
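    In the integral-partitioning approach (again sketched in our own notation, not necessarily the paper's exact functional), it is the integration interval of the distributed-delay term that is split: for delay bound d and m partitions, the LKF includes double-integral terms of the form

```latex
V_3(x_t) = \sum_{i=1}^{m} \frac{d}{m}
  \int_{-\frac{id}{m}}^{-\frac{(i-1)d}{m}} \int_{t+\theta}^{t}
  \dot{x}^{\mathsf T}(s)\, R_i\, \dot{x}(s)\, \mathrm{d}s\, \mathrm{d}\theta .
```

    Assigning a separate weight R_i to each sub-interval, rather than a single R over the whole interval, is what yields the less conservative conditions.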

    Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network

    Because of their effectiveness in broad practical applications, LSTM networks have received a wealth of coverage in scientific journals, technical blogs, and implementation guides. However, in most articles, the inference formulas for the LSTM network and its parent, RNN, are stated axiomatically, while the training formulas are omitted altogether. In addition, the technique of "unrolling" an RNN is routinely presented without justification throughout the literature. The goal of this paper is to explain the essential RNN and LSTM fundamentals in a single document. Drawing from concepts in signal processing, we formally derive the canonical RNN formulation from differential equations. We then propose and prove a precise statement, which yields the RNN unrolling technique. We also review the difficulties with training the standard RNN and address them by transforming the RNN into the "Vanilla LSTM" network through a series of logical arguments. We provide all equations pertaining to the LSTM system together with detailed descriptions of its constituent entities. Albeit unconventional, our choice of notation and our method for presenting the LSTM system emphasize ease of understanding. As part of the analysis, we identify new opportunities to enrich the LSTM system and incorporate these extensions into the Vanilla LSTM network, producing the most general LSTM variant to date. The target reader has already been exposed to RNNs and LSTM networks through numerous available resources and is open to an alternative pedagogical approach. A Machine Learning practitioner seeking guidance for implementing our new augmented LSTM model in software for experimentation and research will find the insights and derivations in this tutorial valuable as well.Comment: 43 pages, 10 figures, 78 references
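    The Vanilla LSTM equations the paper derives are the standard ones; a minimal single-step forward pass (our own variable names and gate ordering, omitting the paper's proposed extensions and all training formulas) can be sketched as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One Vanilla LSTM step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked matrices: input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c = f * c_prev + i * g     # cell-state update
    h = o * np.tanh(c)         # hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4
x = rng.standard_normal(D)
h, c = lstm_step(x, np.zeros(H), np.zeros(H),
                 rng.standard_normal((4 * H, D)),
                 rng.standard_normal((4 * H, H)),
                 np.zeros(4 * H))
assert h.shape == (H,) and np.all(np.abs(h) < 1.0)
```

    "Unrolling" then amounts to iterating `lstm_step` over a sequence while threading `(h, c)` through; the paper's contribution is to justify that iteration formally rather than take it as given.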

    Stability Analysis of Stochastic Markovian Jump Neural Networks with Different Time Scales and Randomly Occurred Nonlinearities Based on Delay-Partitioning Projection Approach

    In this paper, the mean-square asymptotic stability of stochastic Markovian jump neural networks with different time scales and randomly occurring nonlinearities is investigated. Using a linear matrix inequality (LMI) approach and a delay-partitioning projection technique, delay-dependent stability criteria are derived for the considered neural networks, for cases with or without information on the delay rates, via new Lyapunov-Krasovskii functionals. We also show that the finer the delay partitioning, the greater the reduction in conservatism. An example with simulation results is given to show the effectiveness of the proposed approach.

    Improved Results on H∞ State Estimation of Static Neural Networks with Time Delay

    This paper studies the problem of H∞ state estimation for a class of delayed static neural networks. The purpose is to design a delay-dependent state estimator such that the dynamics of the error system are globally exponentially stable and a prescribed H∞ performance is guaranteed. Some improved delay-dependent conditions are established by constructing augmented Lyapunov-Krasovskii functionals (LKFs). The desired estimator gain matrix can be characterized in terms of the solution to linear matrix inequalities (LMIs). Numerical examples illustrate the effectiveness of the proposed method compared with some existing results.
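    For reference, a prescribed H∞ performance level γ has the standard meaning here (generic notation of ours; e is the estimation error and w the noise input): under zero initial conditions,

```latex
\int_{0}^{\infty} e^{\mathsf T}(t)\, e(t)\, \mathrm{d}t
  \;\le\; \gamma^{2} \int_{0}^{\infty} w^{\mathsf T}(t)\, w(t)\, \mathrm{d}t ,
```

    i.e. the L2 gain from disturbance to estimation error is at most γ, so a smaller feasible γ means a tighter estimator.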

    Robust Stabilization and H∞ Control

    This paper is concerned with the problem of robust stabilization and H∞ control for a class of uncertain neural networks. For the robust stabilization problem, sufficient conditions are derived based on the quadratic convex combination property together with Lyapunov stability theory. The feedback controller we design ensures the robust stability of uncertain neural networks with mixed time delays. We further design a robust H∞ controller which guarantees the robust stability of the uncertain neural networks with a given H∞ performance level. The delay-dependent criteria are derived in terms of linear matrix inequalities (LMIs). Finally, numerical examples are provided to show the effectiveness of the obtained results.