8 research outputs found

    Input-to-State Representation in linear reservoirs dynamics

    Reservoir computing is a popular approach to designing recurrent neural networks, owing to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical studies by a large community of researchers with backgrounds spanning from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves favourable results, as verified by practitioners.
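    To make the controllability-matrix idea concrete, here is a minimal sketch (illustrative only, not the authors' code; the reservoir size, spectral radius, and weight choices are assumptions) comparing the rank of the Kalman controllability matrix [w_in, W w_in, ..., W^{N-1} w_in] for a random and a cyclic (ring) linear reservoir with state update x_{t+1} = W x_t + w_in u_t:

```python
import numpy as np

def controllability_rank(W, w_in):
    """Rank of the Kalman controllability matrix [w_in, W w_in, ..., W^{N-1} w_in]."""
    n = W.shape[0]
    cols = [w_in]
    for _ in range(n - 1):
        cols.append(W @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols))

rng = np.random.default_rng(0)
n = 50

# Random linear reservoir, rescaled to spectral radius 0.9
W_rand = rng.standard_normal((n, n))
W_rand *= 0.9 / max(abs(np.linalg.eigvals(W_rand)))

# Cyclic (ring) reservoir: each unit feeds the next with uniform weight 0.9
W_cyc = 0.9 * np.roll(np.eye(n), 1, axis=0)

w_in = rng.standard_normal(n)

print("random topology rank:", controllability_rank(W_rand, w_in))
print("cyclic topology rank:", controllability_rank(W_cyc, w_in))
```

    The (numerical) rank printed for each topology serves as the input-independent memory proxy discussed in the abstract.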

    Frequency modulation of large oscillatory neural networks

    Dynamical systems which generate periodic signals are of interest as models of biological central pattern generators and in a number of robotic applications. A basic functionality required in both biological modelling and robotics is frequency modulation. This leads to the question of whether there are generic mechanisms to control the frequency of neural oscillators. Here we describe why this objective is of a different nature, and more difficult to achieve, than modulating other oscillation characteristics (such as amplitude, offset, or signal shape). We propose a generic way to solve this task which makes use of a simple linear controller. It rests on the insight that there is a bidirectional dependency between the frequency of an oscillation and geometric properties of the neural oscillator's phase portrait. By controlling the geometry of the neural state orbits, it is possible to control the frequency, on the condition that the state space can be shaped such that the orbit can be pushed easily to any frequency.
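    As a toy illustration of this geometric idea (not the paper's model; the oscillator, radius-frequency coupling, and controller gain below are assumptions), consider an oscillator in polar coordinates whose angular velocity depends on the orbit radius; a simple linear controller that shapes the radius then modulates the observed frequency:

```python
import numpy as np

def simulate(r_target, omega0=1.0, b=2.0, gain=5.0, dt=1e-3, T=50.0):
    """Toy oscillator in polar coordinates: a linear controller drives the orbit
    radius r towards r_target, and the angular velocity depends on r
    (theta' = omega0 + b * r), so shaping the orbit geometry sets the frequency."""
    steps = int(T / dt)
    r, theta = 1.0, 0.0
    for _ in range(steps):
        r += dt * gain * (r_target - r)    # linear controller on the orbit radius
        theta += dt * (omega0 + b * r)     # radius-dependent angular velocity
    return theta / T / (2 * np.pi)         # mean frequency (cycles per time unit)

for rt in (0.5, 1.0, 2.0):
    print(f"target radius {rt}: observed frequency ~ {simulate(rt):.3f}")
```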

    Multiplex visibility graphs to investigate recurrent neural network dynamics

    Source at https://doi.org/10.1038/srep44037. A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
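    A minimal sketch of the horizontal-visibility construction (illustrative only; the ESN size, spectral radius, and the mean-degree summary are assumptions, not necessarily the statistics used in the paper): each neuron's activation series becomes one layer of the multiplex, represented here by its horizontal visibility graph degree sequence:

```python
import numpy as np

def horizontal_visibility_degrees(x):
    """Degree sequence of the horizontal visibility graph of a time series x:
    samples i < j are linked iff x[k] < min(x[i], x[j]) for all i < k < j."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if np.all(x[i + 1:j] < min(x[i], x[j])):
                deg[i] += 1
                deg[j] += 1
            elif x[j] >= x[i]:
                break  # x[j] blocks visibility of anything beyond j from i
    return deg

# Drive a small ESN with random input and collect neuron activations
rng = np.random.default_rng(1)
n, T = 20, 200
W = rng.standard_normal((n, n))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))   # spectral radius ~ 0.95
w_in = rng.standard_normal(n)
u = rng.standard_normal(T)

x = np.zeros(n)
states = np.empty((T, n))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# One visibility-graph "layer" per neuron; summarize each layer by its mean degree
layer_mean_degree = [horizontal_visibility_degrees(states[:, i]).mean() for i in range(n)]
print("multiplex layer mean degrees:", np.round(layer_mean_degree, 2))
```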

    Interpreting recurrent neural networks behaviour via excitable network attractors

    Introduction: Machine learning provides fundamental tools both for scientific research and for the development of technologies with significant impact on society. It provides methods that facilitate the discovery of regularities in data and that give predictions without explicit knowledge of the rules governing a system. However, a price is paid for exploiting such flexibility: machine learning methods are typically black boxes where it is difficult to fully understand what the machine is doing or how it is operating. This poses constraints on the applicability and explainability of such methods. Methods: Our research aims to open the black box of recurrent neural networks, an important family of neural networks used for processing sequential data. We propose a novel methodology that provides a mechanistic interpretation of their behaviour when solving a computational task. Our methodology uses mathematical constructs called excitable network attractors, which are invariant sets in phase space composed of stable attractors and excitable connections between them. Results and Discussion: As the behaviour of recurrent neural networks depends both on training and on inputs to the system, we introduce an algorithm to extract network attractors directly from the trajectory of a neural network while it solves tasks. Simulations conducted on a controlled benchmark task confirm the relevance of these attractors for interpreting the behaviour of recurrent neural networks, at least for tasks that involve learning a finite number of stable states and transitions between them.
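    One simplified way to realize such an extraction step is sketched below (this is not the authors' algorithm; the speed and merging thresholds, the greedy clustering, and the bistable toy map are all assumptions): states where the network's update map barely moves the state are grouped into candidate attractors, and the trajectory's jumps between groups give the excitable connections:

```python
import numpy as np

def extract_network_attractor(states, update, speed_tol=1e-2, merge_tol=0.5):
    """Approximate a network attractor from an RNN trajectory:
    keep states whose update speed ||F(x) - x|| is below speed_tol,
    greedily merge nearby slow states into candidate attractors,
    then record transitions between attractors along the trajectory."""
    speeds = np.linalg.norm(np.array([update(x) for x in states]) - states, axis=1)
    slow = states[speeds < speed_tol]

    centers = []                                  # candidate stable attractors
    for s in slow:
        if not any(np.linalg.norm(s - c) < merge_tol for c in centers):
            centers.append(s)

    # label every state by its nearest attractor and collect transitions (edges)
    labels = [int(np.argmin([np.linalg.norm(x - c) for c in centers])) for x in states]
    edges = {(a, b) for a, b in zip(labels, labels[1:]) if a != b}
    return centers, edges

# Tiny demo: a 1-D bistable map switched between its two stable states by input pulses
def F(x, u=0.0):
    return np.tanh(3.0 * x) + u

x, traj = np.array([0.1]), []
for t in range(2000):
    pulse = -2.0 if t % 500 == 250 else (2.0 if t % 500 == 0 and t > 0 else 0.0)
    x = F(x, pulse)
    traj.append(x.copy())

centers, edges = extract_network_attractor(np.array(traj), F)
print("attractors:", [np.round(c, 2) for c in centers], "transitions:", edges)
```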

    At the Edge of Chaos: How Cerebellar Granular Layer Network Dynamics Can Provide the Basis for Temporal Filters.

    Models of the cerebellar microcircuit often assume that input signals from the mossy fibers are expanded and recoded to provide a foundation from which the Purkinje cells can synthesize output filters to implement specific input-signal transformations. Details of this process are, however, unclear. While previous work has shown that recurrent granule cell inhibition could in principle generate a wide variety of random outputs suitable for coding signal onsets, the more general application to temporally varying signals has yet to be demonstrated. Here we show for the first time that a mechanism very similar to reservoir computing enables random neuronal networks in the granule cell layer to provide the necessary signal separation and extension from which Purkinje cells could construct basis filters of various time constants. The main requirement for this is that the network operates in a state of criticality close to the edge of random chaotic behavior. We further show that the granular layer's lack of the recurrent excitation commonly required in traditional reservoir networks can be circumvented by considering other inherent granular layer features such as inverted input signals or mGluR2 inhibition of Golgi cells. Other properties that facilitate filter construction are direct mossy fiber excitation of Golgi cells, variability of synaptic weights or input signals, and output feedback via the nucleocortical pathway. Our findings are well supported by previous experimental and theoretical work and will help to bridge the gap between system-level models and detailed models of the granular layer network.
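    A generic reservoir-computing sketch of the filter-construction idea (not the paper's detailed granular-layer model; the network size, spectral radius, input pulse, and ridge regularizer are assumptions): a random recurrent "granule" population tuned close to the edge of stability expands a mossy-fiber-like pulse, and a linear readout standing in for a Purkinje cell is fit to exponential filters of different time constants:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, dt = 200, 2000, 1.0

# Random recurrent "granular layer" tuned near the edge of stability
W = rng.standard_normal((n, n)) / np.sqrt(n)
W *= 0.98 / max(abs(np.linalg.eigvals(W)))   # spectral radius ~ 0.98
w_in = rng.standard_normal(n)

# Mossy-fiber-like input: a brief pulse
u = np.zeros(T)
u[100:110] = 1.0

x = np.zeros(n)
X = np.empty((T, n))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

def exp_filter(u, tau):
    """Target: the input passed through a first-order filter with time constant tau."""
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = y[t - 1] + (dt / tau) * (u[t - 1] - y[t - 1])
    return y

for tau in (20.0, 100.0, 400.0):
    y = exp_filter(u, tau)
    # Ridge-regression readout, a crude stand-in for a Purkinje cell
    w_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n), X.T @ y)
    err = np.sqrt(np.mean((X @ w_out - y) ** 2))
    print(f"tau = {tau:5.0f}: readout RMSE = {err:.4f}")
```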

    Interpreting multi-stable behaviour in input-driven recurrent neural networks

    Recurrent neural networks (RNNs) are computational models inspired by the brain. Although RNNs stand out as state-of-the-art machine learning models for solving challenging tasks such as speech recognition, handwriting recognition, and language translation, they are plagued by the so-called vanishing/exploding gradient issue. This prevents us from training RNNs with the aim of learning long-term dependencies in sequential data. Moreover, a problem of interpretability affects these models, known as the "black-box issue" of RNNs. We attempt to open the black box by developing a mechanistic interpretation of errors occurring during the computation. We do this from a dynamical systems theory perspective, specifically building on the notion of Excitable Network Attractors. Our methodology is effective at least for those tasks where a number of attractors and a switching pattern between them must be learned. RNNs can be seen as large nonlinear dynamical systems driven by external inputs. When it comes to analytically investigating RNNs, the input-driven property is often neglected in the literature or dropped in favour of tight constraints on the input driving the dynamics, which do not match the reality of RNN applications. Trying to bridge this gap, we frame RNN dynamics driven by generic input sequences in the context of nonautonomous dynamical systems theory. This brings us to enquire deeply into a fundamental principle established for RNNs known as the echo state property (ESP). In particular, we argue that input-driven RNNs can be reliable computational models even without satisfying the classical ESP formulation. We prove a sort of input-driven fixed point theorem and exploit it to (i) demonstrate the existence and uniqueness of a global attracting solution for strongly (in amplitude) input-driven RNNs, (ii) deduce the existence of multiple responses for certain input signals, which can be reliably exploited for computational purposes, and (iii) study the stability of attracting solutions w.r.t. input sequences. Finally, we highlight the active role of the input in determining qualitative changes in the RNN dynamics, e.g. the number of stable responses, in contrast to the commonly known qualitative changes due to variations of model parameters.
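    A small numerical illustration of claim (i), that strong input amplitudes force a unique attracting response (the network size, spectral radius above one, and amplitudes are assumptions for the demo, not values from the thesis): two copies of the same tanh RNN, started from different states and driven by the same input, are compared after a transient:

```python
import numpy as np

def state_distance(amplitude, rho=1.3, n=100, T=500, seed=4):
    """Drive two copies of the same tanh RNN, x_{t+1} = tanh(W x_t + w_in u_t),
    from different initial states with the same input scaled by `amplitude`,
    and return their final state distance."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    W *= rho / max(abs(np.linalg.eigvals(W)))   # spectral radius above 1
    w_in = rng.standard_normal(n)
    u = amplitude * rng.standard_normal(T)

    xa = rng.uniform(-1, 1, n)
    xb = rng.uniform(-1, 1, n)
    for t in range(T):
        xa = np.tanh(W @ xa + w_in * u[t])
        xb = np.tanh(W @ xb + w_in * u[t])
    return np.linalg.norm(xa - xb)

for amp in (0.0, 0.1, 5.0):
    print(f"input amplitude {amp:4.1f}: final distance = {state_distance(amp):.2e}")
```

    With weak or no input the trajectories generally stay apart (the classical ESP need not hold for this spectral radius), whereas a strong-amplitude input collapses them onto a single input-determined response.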

    Regularization and stability in reservoir networks with output feedback

    Reinhart F, Steil JJ. Regularization and stability in reservoir networks with output feedback. Neurocomputing. 2012;90:96-105.