
    Chaos and Asymptotical Stability in Discrete-time Neural Networks

    Applying Marotto's theorem, this paper proves that both transiently chaotic neural networks (TCNN) and discrete-time recurrent neural networks (DRNN) have chaotic structure. A notable property of TCNN and DRNN is that each has only one fixed point when the absolute values of the self-feedback connection weights in TCNN and of the difference time in DRNN are sufficiently large. We show that, if several conditions are satisfied, this unique fixed point can evolve into a snap-back repeller, which generates chaotic structure. On the other hand, using Lyapunov functions, we also derive sufficient conditions for asymptotic stability of symmetric versions of both TCNN and DRNN, under which they converge asymptotically to a fixed point. Generic bifurcations are also considered. Since TCNN and DRNN are simple and general rather than special models, the theoretical results hold for a wide class of discrete-time neural networks. Several numerical simulations are provided to illustrate the theoretical results.
    Comment: This paper will be published in Physica D. Figures should be requested from the first author.
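    As a concrete illustration of the kind of map analyzed here, the sketch below iterates a single-neuron transiently chaotic map: with a constant, sufficiently large self-feedback weight z the orbit wanders chaotically, while a slowly decaying z lets the state settle onto a fixed point. This is a minimal sketch only; the parameter values (k, eps, I0, z0, beta) are illustrative assumptions, not taken from the paper.

```python
# Minimal single-neuron sketch of a transiently chaotic map:
#   x(t)   = 1 / (1 + exp(-y(t)/eps))      (neuron output)
#   y(t+1) = k*y(t) - z(t)*(x(t) - I0)     (state with self-feedback z)
#   z(t+1) = (1 - beta)*z(t)               (self-feedback decay)
# All parameter values below are illustrative assumptions.
import numpy as np

def tcnn_orbit(z0, beta, steps=300, k=0.9, eps=1/250, I0=0.65, y0=0.283):
    y, z, ys = y0, z0, []
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-y / eps))
        y = k * y - z * (x - I0)
        z = (1.0 - beta) * z
        ys.append(y)
    return np.array(ys)

chaotic  = tcnn_orbit(z0=0.08, beta=0.0)   # constant large self-feedback: chaotic orbit
settling = tcnn_orbit(z0=0.08, beta=0.02)  # decaying self-feedback: settles to a fixed point
print("last chaotic states: ", np.round(chaotic[-4:], 4))
print("last settling states:", np.round(settling[-4:], 4))
```

    With beta = 0 the self-feedback stays large and the orbit never settles; with beta > 0 the map gradually deforms toward a contraction, mirroring the transition from chaotic structure to asymptotic stability discussed in the abstract.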

    Design of exponential state estimators for neural networks with mixed time delays

    In this Letter, the state estimation problem is addressed for a class of recurrent neural networks (RNNs) with mixed discrete and distributed delays. The activation functions are assumed to be neither monotonic, nor differentiable, nor bounded. We aim to design a state estimator that reconstructs the neuron states from the available output measurements such that the dynamics of the estimation error is globally exponentially stable in the presence of the mixed time delays. Using a Lyapunov–Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions guaranteeing the existence of such state estimators. We show that both the existence conditions and the explicit expression of the desired estimator can be characterized in terms of the solution to an LMI. A simulation example illustrates the usefulness of the derived LMI-based stability conditions.
    This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, the Alexander von Humboldt Foundation of Germany, the Natural Science Foundation of Jiangsu Education Committee of China under Grants 05KJB110154 and BK2006064, and the National Natural Science Foundation of China under Grants 10471119 and 10671172.
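    The LMI recipe can be sketched on a deliberately simplified problem. The snippet below uses cvxpy to compute a Luenberger-type estimator gain K for a hypothetical delay-free linear system x(t+1) = A x(t), y(t) = C x(t): with the substitution Y = P K, a Schur complement turns the quadratic stability condition on the error dynamics e(t+1) = (A - K C) e(t) into an LMI in (P, Y). The matrices A and C are made-up examples, and the Letter's actual LMI additionally carries terms for the discrete and distributed delays and for the activation functions; this is a sketch of the method's skeleton, not the paper's condition.

```python
# Sketch: LMI-based estimator gain design with cvxpy (delay-free simplification).
# By a Schur complement, feasibility of
#   [[P, P@A - Y@C], [(P@A - Y@C).T, P]] >> 0   with P >> 0
# is equivalent to (A - K@C).T @ P @ (A - K@C) - P < 0 for K = inv(P) @ Y.
# A and C are hypothetical example matrices, not from the Letter.
import cvxpy as cp
import numpy as np

A = np.array([[0.8, 0.3],
              [-0.2, 0.9]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, C.shape[0]))
M = P @ A - Y @ C
lmi = cp.bmat([[P, M],
               [M.T, P]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(n), lmi >> 1e-6 * np.eye(2 * n)])
prob.solve()

K = np.linalg.solve(P.value, Y.value)   # estimator gain K = inv(P) @ Y
print("estimator gain K:", K.ravel())
print("|eig(A - K C)|:", np.abs(np.linalg.eigvals(A - K @ C)))
```

    Parameterizing the gain as Y = P K is the standard trick that makes the condition linear in the unknowns, which is what allows both the existence condition and the estimator itself to be read off from one LMI solution, as the abstract describes.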

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contribute in a fundamental manner to brain function. Progress in understanding and exploiting the computational power of recurrent dynamics has been slow for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned, through a supervised learning rule, to generate locally stable neural activity patterns that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
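    A minimal sketch of the idea, under assumptions not taken from the paper: record an "innate" trajectory from a chaotic random rate network (gain g > 1), then apply FORCE-style recursive least squares to the recurrent weights so that, from perturbed initial conditions, the activity is pulled back onto that trajectory. The network size, gain, and training schedule below are illustrative, and the paper's exact learning rule and architecture differ in detail.

```python
# Sketch: chaotic random recurrent rate network + FORCE-style RLS on the
# recurrent weights to stabilize an "innate" target trajectory.
# All sizes and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, tau, T = 200, 1.5, 1.0, 10.0, 400
W = g * rng.standard_normal((N, N)) / np.sqrt(N)    # g > 1: chaotic spontaneous dynamics

def run(W, x0):
    """Simulate the rate network, returning unit activities over T steps."""
    x, rs = x0.copy(), []
    for _ in range(T):
        r = np.tanh(x)
        x = x + dt / tau * (-x + W @ r)
        rs.append(r)
    return np.array(rs)

x0 = rng.standard_normal(N)
target = run(W, x0)                                 # the network's innate trajectory

perturb = lambda: x0 + 0.01 * rng.standard_normal(N)
before = np.abs(run(W, perturb()) - target).mean()  # chaos amplifies the perturbation

P = np.eye(N)                                       # shared RLS inverse-correlation matrix
for _ in range(10):                                 # training passes from perturbed starts
    x = perturb()
    for t in range(T):
        r = np.tanh(x)
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        W -= np.outer(r - target[t], k)             # nudge every unit toward the target
        x = x + dt / tau * (-x + W @ r)

after = np.abs(run(W, perturb()) - target).mean()
print(f"mean deviation from innate trajectory: before {before:.3f}, after {after:.3f}")
```

    In this toy setting the deviation from the recorded trajectory typically shrinks after training even though the starting state is perturbed, mirroring the locally stable yet complex regime, robust to noise, that the abstract describes.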