
    Complex oscillations in the delayed FitzHugh-Nagumo equation

    Motivated by the dynamics of neuronal responses, we analyze the FitzHugh-Nagumo slow-fast system with delayed self-coupling. This system provides a canonical example of a canard explosion for sufficiently small delays. Beyond this regime, delays significantly enrich the dynamics, leading to mixed-mode oscillations, bursting, and chaos. These behaviors emerge from a delay-induced subcritical Bogdanov-Takens instability arising at the fold points of the S-shaped critical manifold. Underlying the transition from canard-induced to delay-induced dynamics is an abrupt switch in the nature of the Hopf bifurcation.
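    For orientation, one commonly studied form of the delayed FitzHugh-Nagumo system is sketched below in the standard slow-fast scaling; the coupling term and parameter names are assumptions for illustration, and the paper's exact formulation may differ:

        \epsilon \dot{v}(t) = v(t) - v(t)^3/3 - w(t) + c \, v(t - \tau),
        \dot{w}(t) = v(t) + a - b \, w(t),

    with 0 < \epsilon \ll 1 separating the fast voltage-like variable v from the slow recovery variable w, coupling strength c, and delay \tau. The S-shaped critical manifold of the abstract is the cubic v-nullcline of the fast subsystem; its two fold points are where canard trajectories originate for small \tau and where the delay-induced Bogdanov-Takens instability arises for larger \tau.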

    The complexity of dynamics in small neural circuits

    Mean-field theory is a powerful tool for studying large neural networks. However, when the system is composed of only a few neurons, macroscopic differences between the mean-field approximation and the real behavior of the network can arise. Here we study the dynamics of a small firing-rate network with excitatory and inhibitory populations, in terms of local and global bifurcations of the neural activity. Our approach is analytically tractable in many respects and sheds new light on the finite-size effects of the system. In particular, we focus on the formation of multiple branching solutions of the neural equations through spontaneous symmetry-breaking, since this phenomenon considerably increases the complexity of the dynamical behavior of the network. For these reasons, branching points may reveal important mechanisms through which neurons interact and process information that are not accounted for by the mean-field approximation.
    Comment: 34 pages, 11 figures. Supplementary materials added; colors of figures 8 and 9 fixed; results unchanged.
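    To make the setting concrete, here is a minimal sketch of a two-population (excitatory/inhibitory) firing-rate network of the general kind the abstract describes, integrated with forward Euler. All names and parameter values are illustrative assumptions, not taken from the paper.

        import numpy as np

        def sigmoid(x):
            """Sigmoidal gain function mapping input current to firing rate."""
            return 1.0 / (1.0 + np.exp(-x))

        # Hypothetical coupling strengths and drives (not from the paper).
        w_ee, w_ei = 12.0, 10.0   # weights onto the excitatory population
        w_ie, w_ii = 10.0, 2.0    # weights onto the inhibitory population
        h_e, h_i = -3.0, -4.0     # constant external inputs
        tau = 10.0                # relaxation time constant

        def simulate(E0, I0, dt=0.1, steps=5000):
            """Euler-integrate the two-population rate equations."""
            E, I = E0, I0
            for _ in range(steps):
                dE = (-E + sigmoid(w_ee * E - w_ei * I + h_e)) / tau
                dI = (-I + sigmoid(w_ie * E - w_ii * I + h_i)) / tau
                E, I = E + dt * dE, I + dt * dI
            return E, I

        # After a branching (symmetry-breaking) bifurcation, different initial
        # conditions can settle onto different coexisting solution branches.
        print(simulate(0.1, 0.1), simulate(0.9, 0.1))

    Scanning a coupling such as w_ee for points where new solution branches appear is the numerical counterpart of the bifurcation analysis the paper carries out analytically.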

    Center Manifold Dynamics in Randomly Coupled Oscillators and in Cochlea

    In dynamical systems theory, a fixed point of the activity is called nonhyperbolic if the linearization of the system around the fixed point has at least one eigenvalue with zero real part. The center manifold existence theorem guarantees the local existence of an invariant manifold of the activity, known as a center manifold, around nonhyperbolic fixed points. A growing number of theoretical and experimental studies suggest that neural systems utilize dynamics on center manifolds to display complex, nonlinear behavior and to flexibly adapt to wide-ranging sensory input parameters. In this thesis, I will present two lines of research exploring nonhyperbolicity in neural dynamics.
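    A standard textbook example (not taken from the thesis) shows why the reduction to the center manifold is decisive. Consider

        \dot{x} = x y, \qquad \dot{y} = -y - x^2.

    The Jacobian at the origin has eigenvalues 0 and -1, so the fixed point is nonhyperbolic and linearization is inconclusive. Writing the center manifold as y = h(x) and solving the invariance condition h'(x) \, x \, h(x) = -h(x) - x^2 order by order gives h(x) = -x^2 + O(x^4), so the reduced dynamics on the center manifold are

        \dot{x} = x h(x) = -x^3 + O(x^5),

    and the origin is locally asymptotically stable, a conclusion the linear part alone cannot provide.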

    Interpreting multi-stable behaviour in input-driven recurrent neural networks

    Recurrent neural networks (RNNs) are computational models inspired by the brain. Although RNNs are state-of-the-art machine learning models for challenging tasks such as speech recognition, handwriting recognition, and language translation, they are plagued by the so-called vanishing/exploding gradient issue, which prevents training them to learn long-term dependencies in sequential data. Moreover, these models suffer from an interpretability problem, known as the "black-box issue" of RNNs. We attempt to open the black box by developing a mechanistic interpretation of the errors occurring during computation. We do this from a dynamical systems theory perspective, specifically building on the notion of Excitable Network Attractors. Our methodology is effective at least for those tasks where a number of attractors, and a switching pattern between them, must be learned.

    RNNs can be seen as very high-dimensional nonlinear dynamical systems driven by external inputs. Yet when RNNs are investigated analytically, the literature often neglects this input-driven property or replaces it with tight constraints on the inputs, constraints that do not match how RNNs are used in practice. To bridge this gap, we frame RNN dynamics driven by generic input sequences in the context of nonautonomous dynamical systems theory. This leads us to examine a fundamental principle established for RNNs, the echo state property (ESP). In particular, we argue that input-driven RNNs can be reliable computational models even without satisfying the classical ESP formulation. We prove a form of input-driven fixed-point theorem and exploit it to (i) demonstrate the existence and uniqueness of a globally attracting solution for strongly (in amplitude) input-driven RNNs, (ii) deduce the existence of multiple responses to certain input signals, which can be reliably exploited for computational purposes, and (iii) study the stability of attracting solutions with respect to input sequences. Finally, we highlight the active role of the input in determining qualitative changes in the RNN dynamics, e.g. the number of stable responses, in contrast to the commonly studied qualitative changes due to variations of model parameters.
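    The echo state property can be probed numerically. The sketch below is an illustration under stated assumptions, using the common spectral-radius heuristic rather than the theorem proved in the thesis: it drives two different initial states with the same input sequence and checks whether they converge.

        import numpy as np

        rng = np.random.default_rng(seed=0)
        n = 100  # reservoir size

        # Random recurrent weights rescaled to spectral radius 0.9, a common
        # heuristic; note this is neither necessary nor strictly sufficient
        # for the ESP of a tanh network.
        W = rng.standard_normal((n, n))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
        W_in = rng.standard_normal((n, 1))

        def step(x, u):
            """One update of a standard tanh echo state network."""
            return np.tanh(W @ x + W_in @ u)

        inputs = rng.standard_normal((500, 1))
        x_a = rng.standard_normal(n)  # two arbitrary initial states
        x_b = rng.standard_normal(n)
        for u in inputs:
            x_a, x_b = step(x_a, u), step(x_b, u)

        # A vanishing gap means the state has forgotten its initial condition,
        # which is the informal content of the echo state property.
        print("state gap after 500 steps:", np.linalg.norm(x_a - x_b))

    The abstract's point runs in the other direction: even when such a uniform condition fails, sufficiently strong inputs can themselves induce a unique attracting response, and certain inputs can sustain multiple coexisting responses that are usable for computation.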

    Controlling oscillatory behaviour of a two neuron recurrent neural network using inputs

    Haschke R, Steil JJ, Ritter H. Controlling oscillatory behaviour of a two neuron recurrent neural network using inputs. In: Dorffner G, Bischof H, Hornik K, eds. Artificial Neural Networks - ICANN 2001. Lecture Notes in Computer Science. Vol 2130. Springer; 2001: 1109-1114.
    We derive analytical expressions for codim-1 bifurcations of a fully connected, additive two-neuron network with sigmoidal activations, where the two external inputs are regarded as bifurcation parameters. The obtained Neimark-Sacker bifurcation curve encloses a region of input space with stable oscillatory behaviour, in which it is possible to control the oscillation frequency by adjusting the inputs.
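    A minimal numerical counterpart of this setup iterates the discrete-time additive two-neuron map x_{t+1} = tanh(W x_t + u) over a grid of inputs. The weight matrix below is an illustrative assumption, not the one analyzed in the paper.

        import numpy as np

        # Rotation-like coupling: complex Jacobian eigenvalues at the fixed
        # point, the setting in which a Neimark-Sacker bifurcation can occur.
        W = np.array([[1.0, -1.8],
                      [1.8,  1.0]])

        def oscillation_amplitude(u, n_transient=1000, n_keep=300):
            """Iterate past a transient and measure attractor amplitude."""
            x = np.zeros(2)
            for _ in range(n_transient):
                x = np.tanh(W @ x + u)
            samples = np.empty(n_keep)
            for k in range(n_keep):
                x = np.tanh(W @ x + u)
                samples[k] = x[0]
            return samples.std()  # ~0 for a fixed point, > 0 for oscillation

        # Scanning one input while holding the other fixed traces where the
        # oscillatory region of input space begins and ends.
        for u1 in np.linspace(-3.0, 3.0, 13):
            amp = oscillation_amplitude(np.array([u1, 0.0]))
            print(f"u1 = {u1:+.2f}  amplitude = {amp:.3f}")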