
    State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays

    This paper is concerned with the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The parameters of the neural networks under consideration switch over time subject to a Markov chain. The networks involve both a discrete time-varying delay and a mode-dependent distributed time-delay whose upper and lower bounds depend on the Markov chain. By constructing novel Lyapunov-Krasovskii functionals, sufficient conditions are first established to guarantee exponential stability in mean square for the addressed discrete-time neural networks with Markovian jumping parameters and mixed time-delays. The state estimation problem is then addressed for the same class of networks, where the goal is to design a state estimator such that the estimation error approaches zero exponentially in mean square. The derived conditions for both stability and the existence of the desired estimators are expressed as matrix inequalities that can be solved by semi-definite programming. A numerical simulation example demonstrates the usefulness of the main results. This work was supported in part by the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60774073 and 61074129, and the Natural Science Foundation of Jiangsu Province of China under Grant BK2010313.
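    As a rough illustration of this setting, the following Python sketch simulates a small discrete-time network whose matrices and distributed-delay bounds switch with a two-state Markov chain. All matrices, transition probabilities, and delay bounds are illustrative assumptions, not values from the paper.

```python
# Hypothetical example: x(k+1) = A_r x(k) + B_r f(x(k)) + C_r * (average of
# f(x(k-m)) over the mode-dependent window [d_lo_r, d_hi_r]), with the mode r
# driven by a Markov chain. All values below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1], [0.2, 0.8]])           # assumed mode transition matrix
A = [0.5 * np.eye(2), 0.3 * np.eye(2)]           # mode-dependent state matrices
B = [0.20 * np.ones((2, 2)), 0.10 * np.ones((2, 2))]
C = [0.05 * np.eye(2), 0.08 * np.eye(2)]
d_lo, d_hi = [1, 2], [3, 5]                      # mode-dependent delay bounds
f = np.tanh                                      # sector-bounded activation

K, d_max = 200, max(d_hi)
x = np.zeros((K + d_max + 1, 2))
x[: d_max + 1] = rng.normal(0.0, 1.0, (d_max + 1, 2))   # initial history
r = 0
for k in range(d_max, K + d_max):
    r = rng.choice(2, p=P[r])                    # Markov mode switch
    window = range(d_lo[r], d_hi[r] + 1)
    dist = sum(f(x[k - m]) for m in window) / len(window)
    x[k + 1] = A[r] @ x[k] + B[r] @ f(x[k]) + C[r] @ dist
print("final state:", x[-1])                     # decays toward 0 if stable
```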

    Asymptotic stability for neural networks with mixed time-delays: The discrete-time case

    This paper is concerned with the stability analysis problem for a new class of discrete-time recurrent neural networks with mixed time-delays. The mixed time-delays, consisting of both discrete and distributed time-delays, are addressed for the first time in the asymptotic stability analysis of discrete-time neural networks. The activation functions are not required to be differentiable or strictly monotonic. The existence of the equilibrium point is first proved under mild conditions. By constructing a new Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the discrete-time neural networks to be globally asymptotically stable. As an extension, we further consider the stability analysis problem for the same class of neural networks but with state-dependent stochastic disturbances. All the conditions obtained are expressed in terms of LMIs whose feasibility can be easily checked using the numerically efficient Matlab LMI Toolbox. A simulation example is presented to show the usefulness of the derived LMI-based stability condition. This work was supported in part by the Biotechnology and Biological Sciences Research Council (BBSRC) of the UK under Grants BB/C506264/1 and 100/EGM17735, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grants GR/S27658/01 and EP/C524586/1, an International Joint Project sponsored by the Royal Society of the UK, the Natural Science Foundation of Jiangsu Province of China under Grant BK2007075, the National Natural Science Foundation of China under Grant 60774073, and the Alexander von Humboldt Foundation of Germany.
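    To make the LMI workflow concrete, the sketch below checks a much simpler Lyapunov-type LMI for the delay-free linear part x(k+1) = A x(k), with cvxpy playing the role of the Matlab LMI Toolbox mentioned above. The system matrix and the LMI itself are simplified placeholders, not the paper's delay-dependent conditions.

```python
# Hypothetical example: certify x(k+1) = A x(k) stable by finding P > 0 with
# A' P A - P < 0. The paper's LMIs carry additional delay-dependent terms;
# this only demonstrates the feasibility-checking workflow.
import cvxpy as cp
import numpy as np

A = np.array([[0.5, 0.2], [-0.1, 0.6]])         # assumed system matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                                      # strictness margin
constraints = [
    P >> eps * np.eye(n),                       # P positive definite
    A.T @ P @ A - P << -eps * np.eye(n),        # Lyapunov LMI
]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("LMI feasible (hence stable):", problem.status == cp.OPTIMAL)
```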

    A Broad Class of Discrete-Time Hypercomplex-Valued Hopfield Neural Networks

    In this paper, we address the stability of a broad class of discrete-time hypercomplex-valued Hopfield-type neural networks. To ensure that the neural networks belonging to this class always settle down at a stationary state, we introduce novel hypercomplex number systems referred to as real-part associative hypercomplex number systems. Real-part associative hypercomplex number systems generalize the well-known Cayley-Dickson algebras and real Clifford algebras and include the systems of real numbers, complex numbers, dual numbers, hyperbolic numbers, quaternions, tessarines, and octonions as particular instances. Apart from the novel hypercomplex number systems, we introduce a family of hypercomplex-valued activation functions called $\mathcal{B}$-projection functions. Broadly speaking, a $\mathcal{B}$-projection function projects the activation potential onto the set of all possible states of a hypercomplex-valued neuron. Using the theory presented in this paper, we confirm the stability analysis of several discrete-time hypercomplex-valued Hopfield-type neural networks from the literature. Moreover, we introduce and provide the stability analysis of a general class of Hopfield-type neural networks on Cayley-Dickson algebras.
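    One concrete member of this class is a complex-valued Hopfield network (complex numbers being among the listed instances) with the activation u -> u/|u|, which projects the activation potential onto the unit circle and is a simple example of a projection-style activation. The weights below are illustrative assumptions.

```python
# Hypothetical example: complex-valued Hopfield network with the projection
# activation u -> u/|u|. Hermitian weights with zero diagonal are assumed.
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W = (W + W.conj().T) / 2.0                      # Hermitian weight matrix
np.fill_diagonal(W, 0.0)

x = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))  # states on the unit circle
for _ in range(50):                             # asynchronous update sweeps
    for i in range(n):
        u = W[i] @ x                            # activation potential of neuron i
        if abs(u) > 1e-12:
            x[i] = u / abs(u)                   # project onto the unit circle
print("settled phases:", np.round(np.angle(x), 3))
```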

    Finite-time Stability, Dissipativity and Passivity Analysis of Discrete-time Neural Networks with Time-varying Delays

    Time-varying delays in neural networks arise from the dynamic properties of neural cells and are described through neural functional and neural delay differential equations, whose derivative terms couple the current and past states. The objective of this paper is to study discrete-time neural networks with time-varying delays. A delay-dependent condition is provided to ensure that the considered networks are finite-time stable, dissipative, and passive. Using a new Lyapunov-Krasovskii functional together with the free-weighting matrix approach and a linear matrix inequality (LMI) technique, we construct a novel sufficient criterion for the finite-time stability, dissipativity, and passivity of discrete-time neural networks with time-varying delays. An effective LMI-based approach is derived from an appropriately chosen Lyapunov functional. Finally, we present the resulting finite-time stability, dissipativity, and passivity criteria in the form of linear matrix inequalities (LMIs).
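    As a simulation-level illustration of the finite-time stability notion (not the paper's LMI certificate), the sketch below checks whether an assumed delayed network keeps x(k)'Rx(k) below a bound c2 over a finite horizon N whenever x(0)'Rx(0) <= c1. All matrices and constants are assumptions.

```python
# Hypothetical example: simulate an assumed delayed network and test whether
# x(0)'Rx(0) <= c1 keeps x(k)'Rx(k) < c2 for all k <= N (the finite-time
# stability notion); the paper certifies this via LMIs, not simulation.
import numpy as np

A = np.array([[0.6, 0.1], [0.0, 0.7]])          # assumed matrices
Ad = np.array([[0.10, 0.00], [0.05, 0.10]])
d, N, c1, c2 = 3, 50, 1.0, 5.0                  # max delay, horizon, bounds
R = np.eye(2)                                   # weighting matrix

rng = np.random.default_rng(2)
x0 = rng.normal(size=2)
x0 *= np.sqrt(c1 / (x0 @ R @ x0))               # normalize so x0'Rx0 = c1
hist = [x0.copy()] * (d + 1)                    # constant initial history
within_bound = True
for k in range(N):
    delay = 1 + k % d                           # time-varying delay in [1, d]
    x_new = A @ hist[-1] + Ad @ np.tanh(hist[-1 - delay])
    hist.append(x_new)
    within_bound &= (x_new @ R @ x_new) < c2
print("x(k)'Rx(k) stayed below c2 over the horizon:", bool(within_bound))
```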

    Chaos and Asymptotical Stability in Discrete-time Neural Networks

    This paper aims to prove theoretically, by applying Marotto's Theorem, that both transiently chaotic neural networks (TCNN) and discrete-time recurrent neural networks (DRNN) have chaotic structure. A significant property of TCNN and DRNN is that they have only one fixed point when the absolute values of the self-feedback connection weights in TCNN and the difference time in DRNN are sufficiently large. We show that this unique fixed point can actually evolve into a snap-back repeller, which generates chaotic structure, if several conditions are satisfied. On the other hand, by using Lyapunov functions, we also derive sufficient conditions for the asymptotical stability of symmetrical versions of both TCNN and DRNN, under which they asymptotically converge to a fixed point. Furthermore, generic bifurcations are also considered in this paper. Since TCNN and DRNN are not special but simple and general models, the obtained theoretical results hold for a wide class of discrete-time neural networks. To better demonstrate the theoretical results of this paper, several numerical simulations are provided as illustrating examples. Comment: This paper will be published in Physica D. Figures should be requested from the first author.
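    A single-neuron sketch in the spirit of the TCNN dynamics discussed above: a decaying self-feedback weight first produces irregular transients and then lets the neuron settle to a fixed point. The parameter values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical example: a single transiently chaotic neuron. The self-feedback
# weight z is annealed toward zero; the output wanders irregularly while z is
# large and settles once z has decayed. Parameters are assumptions.
import numpy as np

def sigmoid(y, eps=0.004):
    return 1.0 / (1.0 + np.exp(-np.clip(y / eps, -60.0, 60.0)))

k_damp, beta, I0 = 0.9, 0.005, 0.65             # damping, annealing rate, bias
z, y = 0.08, 0.1                                # self-feedback weight, state
trace = []
for t in range(600):
    x = sigmoid(y)
    y = k_damp * y - z * (x - I0)               # large z -> irregular dynamics
    z = (1.0 - beta) * z                        # anneal the self-feedback
    trace.append(x)
print("early outputs:", np.round(trace[:5], 3))
print("late (settled) output:", round(trace[-1], 3))
```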

    Dynamics of neural systems with discrete and distributed time delays

    In real-world systems, interactions between elements do not happen instantaneously, due to the time required for a signal to propagate, reaction times of individual elements, and so forth. Moreover, time delays are normally nonconstant and may vary with time. This means that it is vital to introduce time delays in any realistic model of neural networks. In order to analyze the fundamental properties of neural networks with time-delayed connections, we consider a system of two coupled two-dimensional nonlinear delay differential equations. This model represents a neural network, where one subsystem receives a delayed input from another subsystem. An exciting feature of the model under consideration is the combination of both discrete and distributed delays, where distributed time delays represent the neural feedback between the two subsystems, and the discrete delays describe the neural interaction within each of the two subsystems. Stability properties are investigated for different commonly used distribution kernels, and the results are compared to the corresponding results on stability for networks with no distributed delays. It is shown how approximations of the boundary of the stability region of a trivial equilibrium can be obtained analytically for the cases of delta, uniform, and weak gamma delay distributions. Numerical techniques are used to investigate stability properties of the fully nonlinear system, and they fully confirm all analytical findings.
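    The following Euler-discretized sketch mimics this model class with two scalar subsystems (standing in for the paper's two-dimensional ones), each with a discrete internal delay and coupled through a uniformly distributed delay, i.e. the uniform-kernel case. All gains and delay values are assumptions.

```python
# Hypothetical example: two coupled scalar units, each with a discrete internal
# delay tau, exchanging signals through a uniformly distributed delay over
# [tau1, tau2]; integrated with a simple Euler scheme. All values are assumed.
import numpy as np

h, T = 0.01, 40.0                               # step size, end time
tau, tau1, tau2 = 1.0, 0.5, 1.5                 # discrete and distributed delays
n_tau, n1, n2 = int(tau / h), int(tau1 / h), int(tau2 / h)
steps, buf = int(T / h), n2 + 1                 # history buffer length

u = np.full(steps + buf, 0.1)                   # constant initial histories
v = np.full(steps + buf, -0.1)
a, b, c = -1.0, 0.4, 0.3                        # decay, self-delay, coupling
for k in range(buf - 1, steps + buf - 1):
    dist_v = np.mean(np.tanh(v[k - n2 : k - n1 + 1]))  # uniform-kernel average
    dist_u = np.mean(np.tanh(u[k - n2 : k - n1 + 1]))
    u[k + 1] = u[k] + h * (a * u[k] + b * np.tanh(u[k - n_tau]) + c * dist_v)
    v[k + 1] = v[k] + h * (a * v[k] + b * np.tanh(v[k - n_tau]) + c * dist_u)
print("u(T), v(T):", u[-1], v[-1])              # both approach 0 (stable case)
```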

    Global exponential stability of impulsive discrete-time neural networks with time-varying delays

    This paper studies the problem of global exponential stability and exponential convergence rate for a class of impulsive discrete-time neural networks with time-varying delays. Firstly, by means of the Lyapunov stability theory, some inequality analysis techniques and a discrete-time Halanay-type inequality technique, sufficient conditions for ensuring global exponential stability of discrete-time neural networks are derived, and the estimated exponential convergence rate is provided as well. The obtained results are then applied to derive global exponential stability criteria and the exponential convergence rate of impulsive discrete-time neural networks with time-varying delays. Finally, numerical examples are provided to illustrate the effectiveness and usefulness of the obtained criteria.
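    A minimal simulation sketch of the impulsive setting: between impulse instants the delayed network evolves freely, and at periodic instants the state is multiplied by an impulse gain. The exponential decay observed numerically is what the paper's Halanay-type criteria certify analytically. All parameters are assumptions.

```python
# Hypothetical example: a delayed 2-D network with periodic destabilizing
# impulses x -> J x every K_imp steps; the state norm still decays, which is
# the behaviour Halanay-type criteria guarantee. Values are assumed.
import numpy as np

A = np.diag([0.5, 0.6])                         # assumed network matrix
B = np.array([[0.10, 0.05], [0.00, 0.10]])      # delayed-connection matrix
J = 1.2 * np.eye(2)                             # impulse gain (> 1, destabilizing)
d, K_imp, N = 2, 10, 100                        # max delay, impulse period, steps

x = [np.array([1.0, -1.0])] * (d + 1)           # constant initial history
for k in range(N):
    delay = 1 + k % d                           # time-varying delay in [1, d]
    x_new = A @ x[-1] + B @ np.tanh(x[-1 - delay])
    if (k + 1) % K_imp == 0:
        x_new = J @ x_new                       # impulsive jump
    x.append(x_new)
norms = [float(np.linalg.norm(v)) for v in x]
print("initial vs final norm:", norms[d], norms[-1])  # exponential decay
```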