202 research outputs found

    Concentric ESN: Assessing the Effect of Modularity in Cycle Reservoirs

    The paper introduces the concentric Echo State Network, an approach to designing reservoir topologies that aims to bridge the gap between deterministically constructed simple cycle models and deep reservoir computing approaches. We show how to modularize the reservoir into simple unidirectional and concentric cycles with pairwise bidirectional jump connections between adjacent loops. We provide a preliminary experimental assessment showing that concentric reservoirs yield superior predictive accuracy and memory capacity with respect to single-cycle reservoirs and deep reservoir models.
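    As a rough illustration (not code from the paper), the sketch below builds a concentric reservoir weight matrix in NumPy: several unidirectional cycles with bidirectional jump connections between adjacent loops. The function name, the rule that jumps link corresponding units of adjacent loops, and the weight values cycle_w and jump_w are all assumptions for the example.

```python
import numpy as np

def concentric_reservoir(n_loops=3, loop_size=50, cycle_w=0.9, jump_w=0.3):
    """Hypothetical sketch of a concentric reservoir: n_loops unidirectional
    cycles of loop_size units each; corresponding units of adjacent loops are
    linked by bidirectional jump connections (an assumed placement rule)."""
    N = n_loops * loop_size
    W = np.zeros((N, N))
    for k in range(n_loops):
        base = k * loop_size
        # unidirectional cycle inside loop k: unit i feeds unit i+1 (mod loop_size)
        for i in range(loop_size):
            W[base + (i + 1) % loop_size, base + i] = cycle_w
        # pairwise bidirectional jumps to the adjacent (outer) loop
        if k + 1 < n_loops:
            nxt = (k + 1) * loop_size
            for i in range(loop_size):
                W[nxt + i, base + i] = jump_w
                W[base + i, nxt + i] = jump_w
    return W
```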

    Architectural designs of Echo State Network

    This thesis systematically investigates the reservoir construction of the Echo State Network (ESN). It proposes two very simple deterministic ESN organisations: the Simple Cycle Reservoir (SCR) and the Cycle Reservoir with Jumps (CRJ). The SCR is sufficient to obtain performance comparable to that of the classical ESN, while the CRJ significantly outperforms the classical ESN. The thesis also studies and discusses three reservoir characterisations - short-term memory capacity (MC), the eigen-spectrum of the reservoir weight matrix, and the Lyapunov exponent - and their relation to ESN performance. It also designs and utilises an ensemble of ESNs with diverse reservoirs whose collective readout is obtained through Negative Correlation Learning (NCL) of an ensemble of Multi-Layer Perceptrons (MLPs), where each individual MLP realises the readout from a single ESN. Finally, the thesis investigates the relation between two quantitative measures characterising short-term memory in input-driven dynamical systems, namely the short-term memory capacity (MC) and the Fisher memory curve (FMC).
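    Because SCR and CRJ are fully deterministic constructions, they are easy to reproduce; a minimal NumPy sketch follows. The weight values rc and rj and the jump length are free hyperparameters to be tuned on validation data, not values taken from the thesis.

```python
import numpy as np

def scr(n, rc=0.5):
    """Simple Cycle Reservoir: every unit in a single cycle, all cycle
    connections sharing one weight rc."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = rc
    return W

def crj(n, rc=0.5, rj=0.3, jump=7):
    """Cycle Reservoir with Jumps: the SCR cycle plus bidirectional jumps of
    a fixed length around the cycle, all sharing one weight rj."""
    W = scr(n, rc)
    i = 0
    while i < n:
        j = (i + jump) % n  # the last jump may wrap around the cycle
        W[j, i] = rj
        W[i, j] = rj
        i += jump
    return W
```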

    Reservoir Memory Machines as Neural Computers

    Differentiable neural computers extend artificial neural networks with an explicit memory without interference, thus enabling the model to perform classic computation tasks such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently, namely an echo state network with an explicit memory without interference. This extension enables echo state networks to recognize all regular languages, including those that contractive echo state networks provably cannot recognize. Further, we demonstrate experimentally that our model performs comparably to its fully-trained deep version on several typical benchmark tasks for differentiable neural computers.
    Comment: In print at the special issue 'New Frontiers in Extremely Efficient Reservoir Computing' of IEEE TNNLS.
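    The paper's actual write/read mechanism is learned from data; the sketch below only illustrates the general idea of an ESN coupled to an interference-free memory. The write_gate and read_gate controllers are hypothetical caller-supplied functions standing in for the learned heads, and the FIFO slot discipline is an assumption for the example.

```python
import numpy as np

def esn_with_memory(U, W_in, W, write_gate, read_gate, slots=10):
    """Illustrative only: an echo state network whose states can be copied
    verbatim into separate memory slots and read back unchanged, so stored
    content never interferes with the ongoing reservoir dynamics."""
    n = W.shape[0]
    h = np.zeros(n)
    M = np.zeros((slots, n))      # each slot written at most once: no interference
    w_ptr, r_ptr = 0, 0
    feats = []
    for u in U:
        h = np.tanh(W_in @ u + W @ h)            # standard ESN state update
        if write_gate(h) and w_ptr < slots:
            M[w_ptr] = h                         # store an exact copy of the state
            w_ptr += 1
        if read_gate(h) and r_ptr < w_ptr:
            r = M[r_ptr]                         # read back, first-in first-out
            r_ptr += 1
        else:
            r = np.zeros(n)
        feats.append(np.concatenate([h, r]))     # readout sees state + memory read
    return np.array(feats)
```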

    Reservoir Topology in Deep Echo State Networks

    Deep Echo State Networks (DeepESNs) recently extended the applicability of Reservoir Computing (RC) methods towards the field of deep learning. In this paper we study the impact of constrained reservoir topologies in the architectural design of deep reservoirs, through numerical experiments on several RC benchmarks. The major outcome of our investigation is to show the remarkable effect, in terms of predictive performance gain, achieved by the synergy between a deep reservoir construction and a structured organization of the recurrent units in each layer. Our results also indicate that a particularly advantageous architectural setting is obtained for DeepESNs where reservoir units are structured according to a permutation recurrent matrix.
    Comment: Preprint of the paper published in the proceedings of ICANN 2019.
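    A minimal sketch of the permutation-structured deep reservoir idea follows: each layer's recurrent matrix is a random permutation matrix scaled to a target spectral radius, and layers are driven bottom-up. The hyperparameters rho and leak are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_reservoir(n, rho=0.9):
    """Layer recurrent matrix structured as a random permutation matrix,
    scaled to spectral radius rho (a permutation matrix has spectral radius 1)."""
    return rho * np.eye(n)[rng.permutation(n)]

def deep_esn_step(u, states, W_ins, Ws, leak=0.5):
    """One time step of a leaky DeepESN stack: layer 0 is driven by the
    external input, each higher layer by the state of the layer below."""
    x = u
    new_states = []
    for h, W_in, W in zip(states, W_ins, Ws):
        h = (1 - leak) * h + leak * np.tanh(W_in @ x + W @ h)
        new_states.append(h)
        x = h  # this layer's state is the next layer's input
    return new_states
```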

    The role of structure and complexity on Reservoir Computing quality

    We explore the effect of structure and connection complexity on the dynamical behaviour of Reservoir Computers (RC). At present, considerable effort is devoted to designing and hand-crafting physical reservoir computers. Both structure and physical complexity are often pivotal to task performance; however, assessing their overall importance is challenging. Using a recently proposed framework, we evaluate and compare the dynamical freedom (referred to as quality) of neural network structures, as an analogy for physical systems. The results quantify how structure affects the range of behaviours exhibited by these networks. They highlight that the high quality reached by more complex structures is often also achievable in simpler structures of greater network size. Alternatively, quality is often improved in smaller networks by adding greater connection complexity. This work demonstrates the benefits of using abstract behaviour representations, rather than evaluation through benchmark tasks, to assess the quality of computing substrates, as the latter typically has biases and often provides little insight into the complete computing quality of physical systems.
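    Linear short-term memory capacity is one of the standard behaviour measures used in such quality/dynamical-freedom frameworks. The sketch below shows how it can be estimated from collected reservoir states; it is an illustrative implementation, not the paper's code, and uses a plain least-squares readout for brevity.

```python
import numpy as np

def memory_capacity(states, inputs, max_delay=50, washout=100):
    """Linear short-term memory capacity: MC_k is the squared correlation
    between the input delayed by k steps and its best linear reconstruction
    from the reservoir states; MC is the sum of MC_k over all delays."""
    assert washout >= max_delay, "need enough washout to form delayed targets"
    X = states[washout:]                             # (T - washout, n) state matrix
    mc = 0.0
    for k in range(1, max_delay + 1):
        y = inputs[washout - k:len(inputs) - k]      # target u(t - k), aligned with X
        w, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares linear readout
        c = np.corrcoef(y, X @ w)[0, 1]
        mc += 0.0 if np.isnan(c) else c ** 2
    return mc
```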