904 research outputs found

    Synchronization in an array of linearly stochastically coupled networks with time delays

    In this paper, the complete synchronization problem is investigated in an array of linearly stochastically coupled identical networks with time delays. The stochastic coupling term, which reflects a more realistic dynamical behavior of coupled systems in practice, is introduced to model the coupled system, and the influence of stochastic noise on the array of coupled delayed neural networks is studied thoroughly. Based on a simple adaptive feedback control scheme and some stochastic analysis techniques, several sufficient conditions are developed to guarantee synchronization in an array of linearly stochastically coupled neural networks with time delays. Finally, an illustrative example with numerical simulations is given to show the effectiveness of the theoretical results. This work was jointly supported by the National Natural Science Foundation of China under Grant 60574043, the Royal Society of the United Kingdom, the Natural Science Foundation of Jiangsu Province of China under Grant BK2006093, and an International Joint Project funded by the NSFC and the Royal Society of the United Kingdom.
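
    The abstract does not reproduce the model equations. A representative formulation for this class of systems (a standard form assumed here for illustration; the paper's exact notation may differ) is

        dx_i(t) = [ -C x_i(t) + A f(x_i(t)) + B f(x_i(t - \tau)) + \sum_{j=1}^{N} G_{ij} \Gamma x_j(t) + u_i(t) ] dt
                  + \sigma_i(t, e_i(t), e_i(t - \tau)) \, d\omega(t),   i = 1, ..., N,

    with synchronization error e_i(t) = x_i(t) - s(t), where s(t) is the state of an isolated node and \omega(t) is a Brownian motion. The simple adaptive feedback control scheme mentioned above is typically of the form

        u_i(t) = -\varepsilon_i(t) e_i(t),   \dot{\varepsilon}_i(t) = k_i e_i(t)^T e_i(t),   k_i > 0,

    so that each feedback gain \varepsilon_i(t) grows with the local synchronization error until the array synchronizes.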

    Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module

    The goal of this deliverable is two-fold: (1) to present and compare different approaches towards learning and encoding movements using dynamical systems that have been developed by the AMARSi partners (in the past and during the first 6 months of the project), and (2) to analyze their suitability to be used as adaptive modules, i.e. as building blocks for the complete architecture that will be developed in the project. The document presents a total of eight approaches, in two groups: modules for discrete movements (i.e. movements with a clear goal where the movement stops) and for rhythmic movements (i.e. movements which exhibit periodicity). The basic formulation of each approach is presented together with some illustrative simulation results. Key characteristics, such as the type of dynamical behavior, the learning algorithm, generalization properties, and stability analysis, are then discussed for each approach. We then make a comparative analysis of the different approaches by comparing these characteristics and discussing their suitability for the AMARSi project.
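
    The deliverable itself covers eight approaches; as a concrete illustration of the discrete/rhythmic split described above, one widely used dynamical-systems formulation for movement encoding is the dynamic movement primitive (DMP), sketched below in Python. This sketch is an assumption made for illustration and is not taken from the deliverable.

        import numpy as np

        def discrete_canonical(alpha_x=4.0, dt=0.01, T=1.0):
            """Phase variable for a discrete movement: decays to zero (movement stops)."""
            x, xs = 1.0, []
            for _ in range(int(T / dt)):
                x += -alpha_x * x * dt
                xs.append(x)
            return np.array(xs)

        def rhythmic_canonical(omega=2.0 * np.pi, dt=0.01, T=1.0):
            """Phase variable for a rhythmic movement: advances periodically."""
            phi, phis = 0.0, []
            for _ in range(int(T / dt)):
                phi = (phi + omega * dt) % (2.0 * np.pi)
                phis.append(phi)
            return np.array(phis)

        def transformation_system(goal, y0, forcing, alpha=25.0, beta=6.25, dt=0.01):
            """Second-order attractor toward `goal`, shaped by a learned forcing term."""
            y, z, ys = y0, 0.0, []
            for f in forcing:
                z += (alpha * (beta * (goal - y) - z) + f) * dt
                y += z * dt
                ys.append(y)
            return np.array(ys)

    In this family of models, the learned forcing term distinguishes one movement from another, while the canonical system determines whether it is replayed once (discrete) or periodically (rhythmic).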

    ARCHITECTURE OPTIMIZATION, TRAINING CONVERGENCE AND NETWORK ESTIMATION ROBUSTNESS OF A FULLY CONNECTED RECURRENT NEURAL NETWORK

    Recurrent neural networks (RNN) have been rapidly developed in recent years. Applications of RNN can be found in system identification, optimization, image processing, pattern recognition, classification, clustering, memory association, etc. In this study, an optimized RNN is proposed to model nonlinear dynamical systems. A fully connected RNN is developed first; it is modified from a fully forward connected neural network (FFCNN) by accommodating recurrent connections among its hidden neurons. In addition, a destructive structure optimization algorithm is applied and the extended Kalman filter (EKF) is adopted as the network's training algorithm. These two algorithms work together seamlessly to generate the optimized RNN. The enhancement of the modeling performance of the optimized network comes from three parts: 1) its prototype, the FFCNN, has advantages over the multilayer perceptron (MLP), the most widely used network, in terms of modeling accuracy and generalization ability; 2) the recurrent connections in the RNN make it more capable of modeling nonlinear dynamical systems; and 3) the structure optimization algorithm further improves the RNN's modeling performance in generalization ability and robustness. Performance studies of the proposed network focus on training convergence and robustness. For the training convergence study, the Lyapunov method is used to adapt some training parameters to guarantee convergence, while the maximum likelihood method is used to estimate other parameters to accelerate the training process. In addition, a robustness analysis is conducted to develop a robustness measure that accounts for uncertainty propagation through the RNN via the unscented transform. Two case studies, the modeling of a benchmark nonlinear dynamical system and of tool wear progression in hard turning, are carried out to validate the developments in this dissertation. The work detailed in this dissertation focuses on the creation of: (1) a new method to prove and guarantee the training convergence of the RNN, and (2) a new method to quantify the robustness of the RNN using uncertainty propagation analysis. With the proposed study, the RNN and related algorithms are developed to model nonlinear dynamical systems, which can benefit future modeling applications, such as condition monitoring studies, in terms of robustness and accuracy.
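
    The abstract names the EKF as the training algorithm but gives no equations. Below is a minimal sketch of EKF-style weight estimation for a generic parameterized model, using a finite-difference Jacobian and the standard EKF update; this is an assumed textbook formulation, not the dissertation's exact algorithm.

        import numpy as np

        def ekf_train_step(w, P, x, y_target, predict, Q=1e-4, R=1e-2, eps=1e-6):
            """One EKF update that treats the weight vector `w` as the state to estimate.

            predict(w, x) -> scalar model output; P is the weight covariance,
            Q and R are process- and measurement-noise variances.
            """
            n = w.size
            y_hat = predict(w, x)
            H = np.zeros((1, n))
            for i in range(n):                 # finite-difference Jacobian d(output)/d(weights)
                dw = np.zeros(n)
                dw[i] = eps
                H[0, i] = (predict(w + dw, x) - y_hat) / eps
            P = P + Q * np.eye(n)              # inflate covariance with process noise
            S = H @ P @ H.T + R                # innovation covariance (1x1)
            K = P @ H.T / S                    # Kalman gain, shape (n, 1)
            w = w + (K * (y_target - y_hat)).ravel()
            P = (np.eye(n) - K @ H) @ P
            return w, P

    In a training loop, this step is applied once per training sample, with the covariance P carrying information across samples.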

    Global Robust Exponential Stability and Periodic Solutions for Interval Cohen-Grossberg Neural Networks with Mixed Delays

    A class of interval Cohen-Grossberg neural networks with time-varying delays and infinite distributed delays is investigated. By employing H-matrix and M-matrix theory, homeomorphism techniques, the Lyapunov functional method, and the linear matrix inequality approach, sufficient conditions are established for the existence, uniqueness, and global robust exponential stability of the equilibrium point and the periodic solution of the neural networks. Our results improve some previously published ones. Finally, numerical examples are given to illustrate the feasibility of the theoretical results and, further, to exhibit a characteristic sequence of bifurcations leading to chaotic dynamics, which implies that the system admits rich and complex dynamics.
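
    For reference, a standard formulation of the model class studied here (assumed for illustration; the paper's notation may differ) is the interval Cohen-Grossberg network with time-varying and infinite distributed delays

        \dot{x}_i(t) = -a_i(x_i(t)) [ b_i(x_i(t)) - \sum_{j=1}^{n} c_{ij} f_j(x_j(t))
                        - \sum_{j=1}^{n} d_{ij} g_j(x_j(t - \tau_{ij}(t)))
                        - \sum_{j=1}^{n} e_{ij} \int_{-\infty}^{t} k_{ij}(t - s) h_j(x_j(s)) ds - I_i ],

    where the connection weights are only known to lie in intervals, c_{ij} \in [\underline{c}_{ij}, \overline{c}_{ij}], d_{ij} \in [\underline{d}_{ij}, \overline{d}_{ij}], e_{ij} \in [\underline{e}_{ij}, \overline{e}_{ij}], so robust stability results must hold for every admissible choice of parameters in these intervals.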

    Learning Universal Computations with Spikes

    Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the prior building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows learning of even difficult benchmark tasks, such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
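
    The learning rules for outputs are not specified in the abstract; one common choice for training a linear readout of a recurrent (spiking) network online is recursive least squares, as in FORCE learning. The sketch below operates on a generic filtered-spike state vector r and is an illustrative assumption rather than the paper's exact rule.

        import numpy as np

        class RLSReadout:
            """Recursive-least-squares (FORCE-style) training of a linear readout w.

            `r` is the network state seen by the readout (e.g. filtered spike trains),
            `target` is the desired output at the same time step.
            """
            def __init__(self, n_units, alpha=1.0):
                self.w = np.zeros(n_units)
                self.P = np.eye(n_units) / alpha   # running estimate of the inverse correlation matrix

            def step(self, r, target):
                z = self.w @ r                     # current readout output
                Pr = self.P @ r
                k = Pr / (1.0 + r @ Pr)            # gain vector
                self.w += (target - z) * k         # error-driven weight update
                self.P -= np.outer(k, Pr)          # rank-1 update of P
                return z

    The same update can in principle be applied to recurrent weights by treating each neuron's recurrent input as a readout of the rest of the network.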

    Neuronal Models of Motor Sequence Learning in the Songbird

    Communication of complex content is an important ability in our everyday life. For communication to be possible, several requirements need to be met. The individual communicated to has to learn to associate a certain meaning with a given sound. In the brain, this sound is represented as a spatio-temporal pattern of spikes, which thus has to be associated with a different spike pattern representing its meaning. In this thesis, models for associative learning in spiking neurons are introduced in chapters 6 and 7. There, a new biologically plausible learning mechanism is proposed, in which a property of the neuronal dynamics - the hyperpolarization of a neuron after each spike it produces - is coupled with a homeostatic plasticity mechanism that acts to balance inputs into the neuron. In chapter 6, the mechanism used is a version of spike-timing-dependent plasticity (STDP), a property that has been observed experimentally: the direction and amplitude of synaptic change depend on the precise timing of pre- and postsynaptic spiking activity. This mechanism is applied to associative learning of output spikes in response to purely spatial spiking patterns. In chapter 7, a new learning rule is introduced, which is derived from the objective of a balanced membrane potential. This learning rule is shown to be equivalent to a version of STDP and is applied to associative learning of precisely timed output spikes in response to spatio-temporal input patterns.

    The individual communicating has to learn to reproduce certain sounds (which can be associated with a given meaning). To that end, a memory of the sound sequence has to be formed. Since sound sequences are represented as sequences of activation patterns in the brain, learning a given sequence of spike patterns is an interesting problem for theoretical considerations. Here, it is shown that the biologically plausible learning mechanism introduced for associative learning enables recurrently coupled networks of spiking neurons to learn to reproduce given sequences of spikes. These results are presented in chapter 9.

    Finally, the communicator has to translate the sensory memory into motor actions that serve to reproduce the target sound. This process is investigated in the framework of inverse model learning, where the learner learns to invert the action-perception cycle by mapping perceptions back onto the actions that caused them. Two different setups for inverse model learning are investigated. In chapter 5, a simple setup for inverse model learning is coupled with the learning algorithm used for perceptron learning in chapter 6, and it is shown that models of the sound generation and perception process, which are non-linear and non-local in time, can be inverted if the width of the distribution of time delays of self-generated inputs caused by an individual motor spike is not too large. This limitation is mitigated by the model introduced in chapter 8. Both of these models have experimentally testable consequences, namely a dip in the autocorrelation function of the spike times in the motor population at the duration of the loop delay, i.e. the time it takes for a motor activation to cause a sound (and thus a sensory activation) plus the time that this sensory activation takes to be looped back to the motor population. Furthermore, both models predict neurons that are active during sound generation and during passive playback of the sound, with a time delay equal to the loop delay. Finally, the inverse model presented in chapter 8 additionally predicts mirror neurons without a time delay. Both types of mirror neurons have been observed in the songbird [GKGH14, PPNM08], a popular animal model for vocal imitation learning.
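
    The pair-based STDP window referred to above can be written in a standard form (assumed here for illustration; the precise rule used in the thesis may differ):

        \Delta w = \begin{cases}
            A_+ \, e^{-(t_{post} - t_{pre})/\tau_+}, & t_{post} > t_{pre} \quad (potentiation) \\
            -A_- \, e^{-(t_{pre} - t_{post})/\tau_-}, & t_{post} < t_{pre} \quad (depression)
        \end{cases}

    so that a presynaptic spike shortly preceding a postsynaptic spike strengthens the synapse, while the reverse order weakens it.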