
    Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays

    Copyright 2009 IEEE. In this paper, we introduce a new class of discrete-time neural networks (DNNs) with Markovian jumping parameters as well as mode-dependent mixed time delays (both discrete and distributed time delays). Specifically, the parameters of the DNNs switch from one mode to another at different times according to a Markov chain, and the mixed time delays consist of both discrete and distributed delays that depend on the Markovian jumping mode. We first deal with the stability analysis problem for the addressed neural networks. A special inequality is developed to account for the mixed time delays in the discrete-time setting, and a novel Lyapunov-Krasovskii functional is put forward to reflect the mode-dependent time delays. Sufficient conditions that guarantee stochastic stability are established in terms of linear matrix inequalities (LMIs). We then turn to the synchronization problem for an array of identical coupled Markovian jumping neural networks with mixed mode-dependent time delays. By utilizing the Lyapunov stability theory and the Kronecker product, it is shown that the addressed synchronization problem is solvable if several LMIs are feasible. Hence, in contrast to the commonly used matrix norm theories (such as the M-matrix method), a unified LMI approach is developed to solve both the stability analysis and the synchronization problems for the class of neural networks under investigation, where the LMIs can be easily solved by using the available Matlab LMI toolbox. Two numerical examples are presented to illustrate the usefulness and effectiveness of the main results obtained.
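
    As a rough illustration of the kind of LMI feasibility test described above, the following Python sketch (using cvxpy rather than the Matlab LMI toolbox mentioned in the abstract) checks mean-square stability of a simplified discrete-time Markov jump linear system without delays; the system matrices and transition probabilities are made-up placeholders, not the paper's mode-dependent delay conditions.

```python
# A minimal sketch, not the paper's conditions: mean-square stability of a simplified
# discrete-time Markov jump linear system x(k+1) = A_i x(k), checked by searching for
# mode-dependent Lyapunov matrices P_i > 0 with A_i' (sum_j pi_ij P_j) A_i - P_i < 0.
# The delays and the Lyapunov-Krasovskii functional of the paper are omitted.
import numpy as np
import cvxpy as cp

n = 2                                              # state dimension
A = [np.array([[0.5, 0.1],                         # mode-1 system matrix (placeholder)
               [0.0, 0.6]]),
     np.array([[0.7, 0.2],                         # mode-2 system matrix (placeholder)
               [0.1, 0.4]])]
Pi = np.array([[0.8, 0.2],                         # transition probabilities pi_ij
               [0.3, 0.7]])

P = [cp.Variable((n, n), symmetric=True) for _ in A]
eps = 1e-6                                         # small margin for strict inequalities
constraints = []
for i, Ai in enumerate(A):
    EP = sum(Pi[i, j] * P[j] for j in range(len(A)))   # expected Lyapunov matrix after a jump
    constraints += [P[i] >> eps * np.eye(n),
                    Ai.T @ EP @ Ai - P[i] << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)     # pure feasibility problem
prob.solve()
print("LMIs feasible (mean-square stable):", prob.status == cp.OPTIMAL)
```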

    Exponential stability of delayed recurrent neural networks with Markovian jumping parameters

    Copyright 2006 Elsevier Ltd. In this Letter, the global exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with time delays and Markovian jumping parameters. The jumping parameters are generated by a continuous-time, discrete-state homogeneous Markov process with a finite state space. The purpose of the problem addressed is to derive easy-to-test conditions under which the dynamics of the neural network is stochastically exponentially stable in the mean square, independent of the time delay. By employing a new Lyapunov–Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions; the global exponential stability in the mean square for the delayed RNNs can therefore be easily checked by utilizing the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
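
    Purely as a hedged illustration of the setting (not the Letter's LMI conditions), the sketch below simulates a small delayed recurrent network whose parameters jump between two modes according to a continuous-time Markov chain, so that the mean-square decay certified by such conditions can be observed numerically; all matrices, switching rates, and the delay are placeholders.

```python
# A hedged illustration only: Monte-Carlo simulation of a small delayed RNN whose
# parameters jump between two modes according to a continuous-time Markov chain, to
# observe the mean-square decay that LMI conditions of this kind certify. The matrices,
# switching rates, and delay below are made-up placeholders, not taken from the Letter.
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 0.01, 10.0, 0.5                       # step size, horizon, time delay
steps, d = int(T / dt), int(tau / dt)

C = [np.diag([2.0, 2.5]), np.diag([3.0, 2.0])]     # mode-dependent self-feedback matrices
A = [0.3 * rng.standard_normal((2, 2)) for _ in range(2)]   # connection weights per mode
B = [0.3 * rng.standard_normal((2, 2)) for _ in range(2)]   # delayed connection weights
q = [1.0, 2.0]                                     # mode-leaving rates of the Markov chain

f = np.tanh                                        # Lipschitz activation function
x = np.zeros((steps + 1, 2))
x[: d + 1] = 1.0                                   # constant initial history on [-tau, 0]
mode = 0
for k in range(d, steps):
    if rng.random() < q[mode] * dt:                # switch mode with probability ~ q_i * dt
        mode = 1 - mode
    dx = -C[mode] @ x[k] + A[mode] @ f(x[k]) + B[mode] @ f(x[k - d])
    x[k + 1] = x[k] + dt * dx                      # explicit Euler step
print("||x||^2 at t=0 and t=T:", np.sum(x[d] ** 2), np.sum(x[-1] ** 2))
```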

    Tree Echo State Networks

    In this paper we present the Tree Echo State Network (TreeESN) model, which generalizes the Reservoir Computing paradigm to tree structured data. TreeESNs exploit an untrained generalized recursive reservoir, exhibiting extreme efficiency for learning in structured domains. In addition, we highlight other characteristics of the approach. First, we discuss the Markovian characterization of reservoir dynamics, extended to the case of tree domains, that is implied by the contractive setting of the TreeESN state transition function. Second, we study two types of state mapping functions that map the tree structured state of a TreeESN into a fixed-size feature representation for classification or regression tasks. The critical role of the relation between the choice of the state mapping function and the Markovian characterization of the task is analyzed and experimentally investigated on both artificial and real-world tasks. Finally, experimental results on benchmark and real-world tasks show that the TreeESN approach, in spite of its efficiency, can achieve results comparable with state-of-the-art, although more complex, neural and kernel-based models for tree structured data.
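
    A minimal sketch of the TreeESN idea is given below, under simplifying assumptions: a fixed (untrained) recursive reservoir is applied bottom-up over a tree, and either the root state or the mean state over all nodes serves as the fixed-size representation for a readout. The tree encoding, sizes, and scaling heuristic are illustrative choices, not the paper's exact formulation.

```python
# A minimal sketch of the TreeESN idea under simplifying assumptions: a fixed (untrained)
# recursive reservoir applied bottom-up over a tree, followed by a state mapping (root
# state or mean over all node states) that yields a fixed-size vector for a readout.
# The tree encoding, sizes, and the scaling heuristic are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 3, 50

W_in = 0.5 * rng.uniform(-1, 1, (n_res, n_in))     # input-to-reservoir weights (fixed)
W_hat = rng.uniform(-1, 1, (n_res, n_res))         # recursive reservoir weights (fixed)
W_hat *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_hat)))   # ESN-style rescaling heuristic;
# the paper's contractive condition for tree domains is stricter than this.

def tree_state(node):
    """node = (label_vector, [child_nodes]); returns (node_state, all_states_in_subtree)."""
    label, children = node
    child_results = [tree_state(c) for c in children]
    child_sum = sum((s for s, _ in child_results), np.zeros(n_res))
    state = np.tanh(W_in @ label + W_hat @ child_sum)
    all_states = [state]
    for _, child_all in child_results:
        all_states.extend(child_all)
    return state, all_states

leaf = lambda v: (np.asarray(v, float), [])        # helper to build leaf nodes
tree = (np.array([1.0, 0.0, 0.0]),                 # toy tree: a root with two leaves
        [leaf([0.0, 1.0, 0.0]), leaf([0.0, 0.0, 1.0])])

root_state, states = tree_state(tree)
mean_state = np.mean(states, axis=0)               # mean state mapping (alternative: root state)
print(root_state.shape, mean_state.shape)          # both are fixed-size vectors for the readout
```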

    State estimation for discrete-time neural networks with Markov-mode-dependent lower and upper bounds on the distributed delays

    Copyright 2012 Springer-Verlag. This paper is concerned with the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters and mixed time delays. The parameters of the neural networks under consideration switch over time subject to a Markov chain. The networks involve both a discrete time-varying delay and a mode-dependent distributed time delay characterized by upper and lower bounds that depend on the Markov chain. By constructing novel Lyapunov-Krasovskii functionals, sufficient conditions are first established to guarantee the exponential stability in mean square for the addressed discrete-time neural networks with Markovian jumping parameters and mixed time delays. Then, the state estimation problem is addressed for the same neural networks, where the goal is to design a state estimator such that the estimation error approaches zero exponentially in mean square. The derived conditions for both the stability and the existence of the desired estimators are expressed in the form of matrix inequalities that can be solved by semidefinite programming. A numerical simulation example is exploited to demonstrate the usefulness of the main results obtained. This work was supported in part by the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 60774073 and 61074129, and the Natural Science Foundation of Jiangsu Province of China under Grant BK2010313.
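
    To make the estimation setup concrete, here is a simplified, hypothetical sketch: a Luenberger-type estimator for a delay-free discrete-time neural network, with the gain chosen by hand rather than obtained from the matrix-inequality conditions of the paper; all matrices are illustrative placeholders.

```python
# A simplified, hypothetical illustration of the estimation setup (not the paper's design):
# a Luenberger-type estimator x_hat(k+1) = A x_hat(k) + B f(x_hat(k)) + L (y(k) - C x_hat(k))
# for a delay-free discrete-time neural network, with the gain L chosen by hand rather than
# from matrix-inequality conditions. All matrices below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)
A = np.diag([0.6, 0.5])                            # state matrix
B = 0.2 * rng.standard_normal((2, 2))              # nonlinear connection weights
C = np.array([[1.0, 0.0]])                         # measurement matrix (only x_1 is observed)
L = np.array([[0.5], [0.1]])                       # estimator gain (ad hoc choice)
f = np.tanh                                        # activation function

x = np.array([1.0, -1.0])                          # true initial state
x_hat = np.zeros(2)                                # estimator starts from zero
for k in range(50):
    y = C @ x                                      # measurement available to the estimator
    x_next = A @ x + B @ f(x)                      # true network update
    x_hat = A @ x_hat + B @ f(x_hat) + L @ (y - C @ x_hat)
    x = x_next
print("final estimation error:", np.linalg.norm(x - x_hat))
```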

    Synchronization of coupled neutral-type neural networks with jumping-mode-dependent discrete and unbounded distributed delays

    Copyright 2013 IEEE. In this paper, the synchronization problem is studied for an array of N identical delayed neutral-type neural networks with Markovian jumping parameters. The coupled networks involve both mode-dependent discrete time delays and mode-dependent unbounded distributed time delays. All the network parameters, including the coupling matrix, are also dependent on the Markovian jumping mode. By introducing novel Lyapunov-Krasovskii functionals and using some analytical techniques, sufficient conditions are derived to guarantee that the coupled networks are asymptotically synchronized in mean square. The derived sufficient conditions are closely related to the discrete time delays, the distributed time delays, the mode transition probabilities, and the coupling structure of the networks. The obtained criteria are given in terms of matrix inequalities that can be efficiently solved by semidefinite programming. Numerical simulations are presented to further demonstrate the effectiveness of the proposed approach. This work was supported in part by the Royal Society of the U.K., the National Natural Science Foundation of China under Grants 61074129, 61174136 and 61134009, and the Natural Science Foundation of Jiangsu Province of China under Grants BK2010313 and BK2011598.
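
    The following illustrative sketch (not the paper's model) simulates N identical discrete-time networks coupled through a matrix with zero row sums and monitors a simple synchronization error; the mode jumps, neutral terms, and distributed delays treated in the paper are omitted.

```python
# An illustrative sketch only (not the paper's model): N identical discrete-time networks
# coupled through a matrix G with zero row sums, with a simple synchronization error
# monitored over time. Mode jumps, neutral terms, and distributed delays are omitted,
# and all values below are placeholders.
import numpy as np

rng = np.random.default_rng(3)
N, n, c = 4, 2, 0.15                               # nodes, node dimension, coupling strength
A = np.array([[0.7, 0.1],                          # node dynamics matrix
              [0.0, 0.8]])
W = 0.1 * rng.standard_normal((n, n))              # nonlinear connection weights
G = np.ones((N, N)) - N * np.eye(N)                # coupling matrix with zero row sums
f = np.tanh

x = rng.standard_normal((N, n))                    # each node starts from a different state
for k in range(200):
    coupling = c * G @ x                           # diffusive coupling among the nodes
    x = np.array([A @ xi + W @ f(xi) for xi in x]) + coupling
sync_err = np.max(np.linalg.norm(x - x.mean(axis=0), axis=1))
print("max deviation from the average node state:", sync_err)
```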

    Exponential synchronization of complex networks with Markovian jump and mixed delays

    Copyright 2008 Elsevier Ltd. In this Letter, we investigate the exponential synchronization problem for an array of N linearly coupled complex networks with Markovian jump and mixed time delays. The complex network consists of m modes, and the network switches from one mode to another according to a Markovian chain with known transition probabilities. The mixed time delays are composed of discrete and distributed delays, both of which are mode-dependent. The nonlinearities embedded in the complex networks are assumed to satisfy a sector condition that is more general than the commonly used Lipschitz condition. By making use of the Kronecker product and stochastic analysis tools, we propose a novel Lyapunov–Krasovskii functional suitable for handling distributed delays and then show that the addressed synchronization problem is solvable if a set of linear matrix inequalities (LMIs) is feasible. Therefore, a unified LMI approach is developed to establish sufficient conditions for the coupled complex network to be globally exponentially synchronized in the mean square. Note that the LMIs can be easily solved by using the Matlab LMI toolbox and no tuning of parameters is required. A simulation example is provided to demonstrate the usefulness of the main results obtained. This work was supported in part by the Biotechnology and Biological Sciences Research Council (BBSRC) of the UK under Grants BB/C506264/1 and 100/EGM17735, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grants GR/S27658/01 and EP/C524586/1, an International Joint Project sponsored by the Royal Society of the UK, the Natural Science Foundation of Jiangsu Province of China under Grant BK2007075, the National Natural Science Foundation of China under Grant 60774073, and the Alexander von Humboldt Foundation of Germany.
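
    As a small aside on the Kronecker-product formulation mentioned above, the sketch below stacks the node states of a coupled network and checks that writing the linear part of the coupled dynamics as (I_N ⊗ A)x + c (G ⊗ Γ)x matches the node-by-node computation; the matrices are illustrative, and delays and mode jumps are omitted.

```python
# A small illustration of the Kronecker-product formulation: stack the states of N coupled
# nodes and write the linear part of the coupled dynamics as (I_N kron A) x + c (G kron Gamma) x,
# then verify that it matches the node-by-node computation. Matrices are placeholders;
# the delays and Markovian jumps of the Letter are omitted.
import numpy as np

N, n, c = 3, 2, 0.2
A = np.array([[0.0, 1.0], [-1.0, -0.5]])           # node drift matrix
Gamma = np.eye(n)                                  # inner coupling matrix
G = np.array([[-2.0, 1.0, 1.0],                    # outer coupling matrix (zero row sums)
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])

rng = np.random.default_rng(4)
X = rng.standard_normal((N, n))                    # one state vector per node
x = X.reshape(-1)                                  # stacked state, node-major order

# Stacked (Kronecker) form of the linear right-hand side
rhs_kron = (np.kron(np.eye(N), A) + c * np.kron(G, Gamma)) @ x

# Equivalent node-wise form: A x_i + c * sum_j G_ij Gamma x_j
rhs_nodes = np.array([A @ X[i] + c * sum(G[i, j] * (Gamma @ X[j]) for j in range(N))
                      for i in range(N)]).reshape(-1)

print("the two forms agree:", np.allclose(rhs_kron, rhs_nodes))
```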

    Deep Randomized Neural Networks

    Randomized Neural Networks explore the behavior of neural systems in which the majority of connections are fixed, either in a stochastic or in a deterministic fashion. Typical examples of such systems are multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights inherently characterizes the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is undoubtedly a striking advantage with respect to fully trained architectures. Moreover, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and theoretically, making it possible to analyze intrinsic properties of neural architectures (e.g., before training of the hidden layers' connections). In recent years, the study of Randomized Neural Networks has been extended towards deep architectures, opening new research directions for the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains. This chapter surveys the major aspects regarding the design and analysis of Randomized Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to the case of recurrent systems (i.e., Reservoir Computing networks). For both, we focus specifically on recent results in the domain of deep randomized systems, and (for recurrent models) on their application to structured domains.
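
    A minimal sketch of the feed-forward randomized model mentioned above (in the Random Vector Functional Link style) is given below, assuming a toy regression task: a fixed random hidden layer plus direct input links, with only the readout trained in closed form by ridge regression; the sizes, scalings, and task are illustrative assumptions, not taken from the chapter.

```python
# A minimal sketch of a Random Vector Functional Link style model: a fixed random hidden
# layer and direct input-output links, with only the readout trained in closed form by
# ridge regression. Sizes, scalings, and the toy task below are illustrative choices.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hidden, n_samples = 5, 200, 400

X = rng.standard_normal((n_samples, n_in))         # toy inputs
y = np.sin(X.sum(axis=1))                          # toy regression target

W = rng.uniform(-1, 1, (n_in, n_hidden))           # fixed random input weights (untrained)
b = rng.uniform(-1, 1, n_hidden)                   # fixed random biases (untrained)
H = np.tanh(X @ W + b)                             # untrained hidden representation
D = np.hstack([H, X])                              # add direct input-output links (RVFL)

lam = 1e-3                                         # ridge regularization strength
W_out = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)   # trained readout

y_hat = D @ W_out
print("training MSE of the random-feature model:", np.mean((y - y_hat) ** 2))
```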
