
    State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: The discrete-time case

    Copyright 2009 IEEE.
    This paper is concerned with the problem of state estimation for a class of discrete-time coupled uncertain stochastic complex networks with missing measurements and time-varying delays. The parameter uncertainties are assumed to be norm-bounded and enter into both the network state and the network output. The stochastic Brownian motions affect not only the coupling term of the network but also the overall network dynamics. The nonlinear terms, which satisfy the usual Lipschitz conditions, appear in both the state and measurement equations. Through available output measurements described by a binary switching sequence that obeys a conditional probability distribution, we aim to design a state estimator for the network states such that, for all admissible parameter uncertainties and time-varying delays, the dynamics of the estimation error are guaranteed to be globally exponentially stable in the mean square. By employing the Lyapunov functional method combined with a stochastic analysis approach, several delay-dependent criteria are established that ensure the existence of the desired estimator gains, and the explicit expression of these gains is characterized in terms of the solution to certain linear matrix inequalities (LMIs). Two numerical examples are exploited to illustrate the effectiveness of the proposed estimator design schemes.
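    The missing-measurement model admits a compact simulation sketch. In the following toy example, all matrices, gains, and noise levels are invented for illustration (the paper derives its gains from LMIs rather than assuming them); it shows a Luenberger-style estimator running against a measurement channel gated by a Bernoulli switching sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state linear network; A, C, K, and the arrival
# probability p are assumptions, not values from the paper.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])          # stable state matrix
C = np.array([[1.0, 0.0]])          # output matrix
K = np.array([[0.3], [0.1]])        # estimator gain (assumed given)
p = 0.8                             # probability a measurement arrives

x = np.array([1.0, -1.0])           # true state
xhat = np.zeros(2)                  # estimate
errs = []
for _ in range(200):
    gamma = rng.random() < p        # Bernoulli missing-measurement switch
    y = gamma * (C @ x) + 0.01 * rng.standard_normal(1)
    xhat = A @ xhat + K @ (y - C @ xhat)        # estimator update
    x = A @ x + 0.01 * rng.standard_normal(2)   # network with process noise
    errs.append(float(np.linalg.norm(x - xhat)))
```

    Even with measurements arriving only 80% of the time, the estimation error settles near the noise floor; this is the qualitative behaviour that the paper's mean-square exponential stability guarantee formalizes.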

    Robust stability for stochastic Hopfield neural networks with time delays

    Copyright 2006 Elsevier Ltd.
    In this paper, the asymptotic stability analysis problem is considered for a class of uncertain stochastic neural networks with time delays and parameter uncertainties. The delays are time-invariant, and the norm-bounded uncertainties enter into all the network parameters. The aim of this paper is to establish easily verifiable conditions under which the delayed neural network is robustly asymptotically stable in the mean square for all admissible parameter uncertainties. By employing a Lyapunov–Krasovskii functional and conducting a stochastic analysis, a linear matrix inequality (LMI) approach is developed to derive the stability criteria. The proposed criteria can be checked readily using standard numerical packages, and no tuning of parameters is required. Examples are provided to demonstrate the effectiveness and applicability of the proposed criteria. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
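    LMI criteria of this type reduce stability checking to a numerically tractable feasibility test. As a minimal sketch of the delay-free special case (not the paper's delay-dependent LMI, and with a made-up linearized network matrix), one can verify the classical Lyapunov condition with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative linearized network matrix; the values are invented.
A = np.array([[-2.0, 0.5],
              [ 0.3, -1.5]])

# Seek P = P^T > 0 with A^T P + P A = -Q for a chosen Q > 0; the existence
# of such a P certifies asymptotic stability of x' = A x.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q
stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
```

    In the paper, a semidefinite-programming solver plays the role of this closed-form solve, searching over P together with additional slack matrices that handle the delays and the norm-bounded uncertainties.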

    Synchronization in an array of linearly stochastically coupled networks with time delays

    Copyright 2007 Elsevier Ltd.
    In this paper, the complete synchronization problem is investigated in an array of linearly stochastically coupled identical networks with time delays. The stochastic coupling term, which reflects a more realistic dynamical behavior of coupled systems in practice, is introduced to model the coupled system, and the influence of stochastic noise on the array of coupled delayed neural networks is studied thoroughly. Based on a simple adaptive feedback control scheme and some stochastic analysis techniques, several sufficient conditions are developed to guarantee synchronization in an array of linearly stochastically coupled neural networks with time delays. Finally, an illustrative example with numerical simulations is exploited to show the effectiveness of the theoretical results. This work was jointly supported by the National Natural Science Foundation of China under Grant 60574043, the Royal Society of the United Kingdom, the Natural Science Foundation of Jiangsu Province of China under Grant BK2006093, and an International Joint Project funded by the NSFC and the Royal Society of the United Kingdom.
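    The adaptive feedback idea can be sketched in a few lines. The scalar toy example below is purely illustrative: the node dynamics f, adaptation rate, and initial conditions are invented, and the stochastic coupling noise is omitted. It shows the core mechanism, in which the feedback gain k grows with the squared synchronization error until the response system locks onto the drive system:

```python
import numpy as np

f = lambda s: -s + 2.0 * np.tanh(s)   # illustrative bistable node dynamics
gamma, dt = 1.0, 0.01                 # adaptation rate, Euler step
x, y, k = 1.0, -1.0, 0.0              # drive state, response state, gain
for _ in range(3000):
    e = y - x                         # synchronization error
    x += dt * f(x)                    # drive system
    y += dt * (f(y) - k * e)          # response system with feedback
    k += dt * gamma * e * e           # adaptive gain update
```

    Once k exceeds the largest expansion rate of f, the error contracts exponentially; this is why a simple monotone gain update suffices and why only mild sufficient conditions are needed in the analysis.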

    Design of non-fragile state estimators for discrete time-delayed neural networks with parameter uncertainties

    This paper is concerned with the problem of designing a non-fragile state estimator for a class of uncertain discrete-time neural networks with time delays. The norm-bounded parameter uncertainties enter into all the system matrices, and the network output is of a general type that contains both linear and nonlinear parts. The additive variation of the estimator gain, which reflects possible implementation errors of the neuron state estimator, is taken into account. The aim of the addressed problem is to design a state estimator such that the estimation performance is non-fragile against the gain variations and robust against the parameter uncertainties. Sufficient conditions are presented to guarantee the existence of the desired non-fragile state estimators by using Lyapunov stability theory, and the explicit expression of the desired estimators is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is given to demonstrate the effectiveness of the proposed design approach.

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe to formulate models of such systems as coupled sets of nonlinear differential equations and compile them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case, and the precision of conventional computation in the latter case.
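    The Delay Network's core component is a small linear system whose state optimally compresses the recent history of its input. A minimal NumPy sketch of the underlying state-space is below; the matrices follow the published form, while the order, window length, and Euler discretization step are arbitrary illustrative choices:

```python
import numpy as np

def ldn_matrices(d, theta):
    # Delay Network state-space: theta * m'(t) = A m(t) + B u(t),
    # where m holds d Legendre coefficients of the last theta seconds of u.
    A = np.zeros((d, d))
    B = np.zeros(d)
    for i in range(d):
        B[i] = (2 * i + 1) * (-1) ** i
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1))
    return A / theta, B / theta

d, theta, dt = 4, 0.5, 1e-3
A, B = ldn_matrices(d, theta)
m = np.zeros(d)
for _ in range(int(2.0 / dt)):      # drive with a unit step for 2 seconds
    m += dt * (A @ m + B * 1.0)     # forward-Euler integration
```

    A quick sanity check: at equilibrium under a constant unit input, row 0 of A forces the coefficients to sum to the input value, i.e. sum(m) tends to 1, confirming that the memory represents a constant window.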

    Modelling and Contractivity of Neural-Synaptic Networks with Hebbian Learning

    This paper is concerned with the modelling and analysis of two of the most commonly used recurrent neural network models (i.e., the Hopfield neural network and the firing-rate neural network) with dynamic recurrent connections undergoing Hebbian learning rules. To capture the synaptic sparsity of neural circuits, we propose a low-dimensional formulation. We then characterize certain key dynamical properties. First, we give biologically inspired forward invariance results. Then, we give sufficient conditions for the non-Euclidean contractivity of the models. Our contraction analysis leads to stability and robustness of time-varying trajectories, for networks with both excitatory and inhibitory synapses governed by both Hebbian and anti-Hebbian rules. For each model, we propose a contractivity test based upon biologically meaningful quantities, e.g., the neural and synaptic decay rates, the maximum in-degree, and the maximum synaptic strength. We also show that the models satisfy Dale's Principle. Finally, we illustrate the effectiveness of our results via a numerical example. Comment: 24 pages, 4 figures.

    Architecture optimization, training convergence and network estimation robustness of a fully connected recurrent neural network

    Recurrent neural networks (RNNs) have developed rapidly in recent years. Applications of RNNs can be found in system identification, optimization, image processing, pattern recognition, classification, clustering, memory association, etc. In this study, an optimized RNN is proposed to model nonlinear dynamical systems. A fully connected RNN is developed first, modified from a fully forward connected neural network (FFCNN) by accommodating recurrent connections among its hidden neurons. In addition, a destructive structure optimization algorithm is applied, and the extended Kalman filter (EKF) is adopted as the network's training algorithm. These two algorithms work together seamlessly to generate the optimized RNN. The enhancement of the modeling performance of the optimized network comes from three parts: 1) its prototype, the FFCNN, has advantages over the multilayer perceptron network (MLP), the most widely used network, in terms of modeling accuracy and generalization ability; 2) the recurrent connections in the RNN make it more capable of modeling nonlinear dynamical systems; and 3) the structure optimization algorithm further improves the RNN's modeling performance in generalization ability and robustness. Performance studies of the proposed network focus on training convergence and robustness. For the training convergence study, the Lyapunov method is used to adapt some training parameters to guarantee convergence, while the maximum likelihood method is used to estimate other parameters to accelerate training. In addition, a robustness analysis is conducted to develop a robustness measure that accounts for uncertainty propagation through the RNN via the unscented transform. Two case studies, the modeling of a benchmark nonlinear dynamical system and of tool wear progression in hard turning, are carried out to verify the developments in this dissertation.
The work detailed in this dissertation focuses on the creation of: (1) a new method to prove and guarantee the training convergence of RNNs, and (2) a new method to quantify the robustness of RNNs using uncertainty propagation analysis. With the proposed study, RNNs and related algorithms are developed to model nonlinear dynamical systems, which can benefit future modeling applications, such as condition monitoring studies, in terms of robustness and accuracy.
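    The unscented transform underlying such a robustness measure can be sketched independently of the network details. The implementation below is the standard sigma-point formulation (not necessarily the exact variant used in the dissertation); the map f stands in for one pass through the trained RNN:

```python
import numpy as np

def unscented_transform(f, mu, Sigma, alpha=1e-1, beta=2.0, kappa=0.0):
    # Propagate the mean and covariance of x ~ N(mu, Sigma) through y = f(x)
    # using 2n+1 sigma points with the standard scaled weights.
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * Sigma)   # matrix square root
    pts = [mu] + [mu + S[:, i] for i in range(n)] \
               + [mu - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(p) for p in pts])
    mean = wm @ ys
    diff = ys - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov

# Example: uncertainty through a saturating nonlinearity.
mean, cov = unscented_transform(np.tanh, np.zeros(2), 0.1 * np.eye(2))
```

    For a linear map the transform is exact, which gives a convenient correctness check; for an RNN pass, the output covariance quantifies how input uncertainty inflates or contracts through the network.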