277 research outputs found

    The Asymptotic Performance of Linear Echo State Neural Networks

    In this article, a study of the mean-square error (MSE) performance of linear echo-state neural networks is performed, both for training and testing tasks. Considering the realistic setting of noise present at the network nodes, we derive deterministic equivalents for the aforementioned MSE in the limit where the number of input data T and the network size n both grow large. Then, specializing the network connectivity matrix to specific random settings, we further obtain simple formulas that provide new insights into the performance of such networks.
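    The derivations in the article are analytical; as a rough empirical counterpart, the sketch below estimates the training and testing MSE of a linear echo state network with Gaussian noise injected at the nodes, on a simple delay-recall task. The network size, noise level, task, spectral-radius rescaling, and ridge regularization are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch (not the paper's derivation): empirical training/testing MSE of a
# linear echo state network with additive Gaussian noise at the network nodes.
# All parameter values and the delay-recall task are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, T, T_test = 200, 2000, 1000          # network size and numbers of input samples
noise_std, delay = 0.01, 5              # node-noise level and memory-task delay

# Random connectivity, rescaled to a target spectral radius (a common heuristic)
W = rng.standard_normal((n, n)) / np.sqrt(n)
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(n)

def run(u):
    """Drive the linear reservoir with input u; Gaussian noise enters every node."""
    X = np.zeros((len(u), n))
    x = np.zeros(n)
    for t, u_t in enumerate(u):
        x = W @ x + w_in * u_t + noise_std * rng.standard_normal(n)
        X[t] = x
    return X

u_train, u_test = rng.standard_normal(T), rng.standard_normal(T_test)
X_train, X_test = run(u_train), run(u_test)
y_train = np.roll(u_train, delay)       # target: recall the input from `delay` steps ago
y_test = np.roll(u_test, delay)         # (wrap-around at the start is ignored in this sketch)

# Ridge-regression readout (the regularization value is an illustrative choice)
reg = 1e-6
w_out = np.linalg.solve(X_train.T @ X_train + reg * np.eye(n), X_train.T @ y_train)

mse_train = np.mean((X_train @ w_out - y_train) ** 2)
mse_test = np.mean((X_test @ w_out - y_test) ** 2)
print(f"training MSE = {mse_train:.4f}, testing MSE = {mse_test:.4f}")
```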

    Input-to-State Representation in linear reservoirs dynamics

    Reservoir computing is a popular approach to designing recurrent neural networks, owing to its training simplicity and approximation performance. The recurrent part of these networks is not trained (e.g., via gradient descent), making them appealing for analytical studies by a large community of researchers with backgrounds spanning from dynamical systems to neuroscience. However, even in the simple linear case, the working principle of these networks is not fully understood and their design is usually driven by heuristics. A novel analysis of the dynamics of such networks is proposed, which allows the investigator to express the state evolution using the controllability matrix. Such a matrix encodes salient characteristics of the network dynamics; in particular, its rank represents an input-independent measure of the memory capacity of the network. Using the proposed approach, it is possible to compare different reservoir architectures and explain why a cyclic topology achieves favourable results, as verified by practitioners.
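    To make the central quantity concrete, the sketch below assembles the controllability matrix [w, Ww, W²w, ...] of a linear reservoir x_{t+1} = W x_t + w u_t and reports its numerical rank, i.e. the input-independent memory measure described above, for a dense random coupling and for a simple cyclic (ring) topology. The sizes and weight values are illustrative choices, not the authors'.

```python
# Minimal sketch, not the authors' code: controllability matrix of a linear reservoir
# x_{t+1} = W x_t + w u_t and its numerical rank, for a random dense reservoir and a
# simple cycle (ring) topology. All parameter choices are illustrative.
import numpy as np

def controllability_rank(W, w):
    """Stack [w, Ww, W^2 w, ...] column-wise and return the numerical rank."""
    n = W.shape[0]
    cols, v = [], w.copy()
    for _ in range(n):
        cols.append(v)
        v = W @ v
    C = np.stack(cols, axis=1)          # n x n controllability matrix
    return np.linalg.matrix_rank(C)

rng = np.random.default_rng(0)
n = 50

# Dense random reservoir, rescaled to spectral radius 0.9
W_rand = rng.standard_normal((n, n)) / np.sqrt(n)
W_rand *= 0.9 / max(abs(np.linalg.eigvals(W_rand)))

# Cyclic reservoir: a single ring of identical weights
W_cycle = 0.9 * np.roll(np.eye(n), 1, axis=0)

w_in = rng.standard_normal(n)           # shared input vector
print("rank, random topology:", controllability_rank(W_rand, w_in))
print("rank, cyclic topology:", controllability_rank(W_cycle, w_in))
```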

    Decoherent time-dependent transport beyond the Landauer-Büttiker formulation: a quantum-drift alternative to quantum jumps

    We present a model for decoherence in time-dependent transport. It boils down to a wave function that undergoes a smooth stochastic drift of the phase in a local basis, the Quantum Drift (QD) model. This drift is nothing but a local energy fluctuation. Unlike Quantum Jump (QJ) models, no jumps are present in the density, as the evolution is unitary. As a first application, we address transport through a resonant state |0⟩ that undergoes decoherence. We show the equivalence with decoherent steady-state transport in the presence of a Büttiker voltage probe. In order to test the dynamics, we consider two many-spin systems with a local energy fluctuation. A two-spin system is reduced to a two-level system (TLS) that oscillates between |0⟩ ≡ |↑↓⟩ and |1⟩ ≡ |↓↑⟩. We show that the QD model recovers not only the exponential damping of the oscillations in the low-perturbation regime, but also the non-trivial bifurcation of the damping rates at a critical point, i.e. the quantum dynamical phase transition. We also address the spin-wave-like dynamics of local polarization in a spin chain. The QD average solution has about half the dispersion with respect to the mean dynamics compared with QJ. By evaluating the Loschmidt Echo (LE), we find that the pure states |0⟩ and |1⟩ are quite robust against local decoherence. In contrast, the LE, and hence coherence, decays faster when the system is in a superposition state. Because of its simple implementation, the method is well suited to assess decoherent transport problems as well as to include decoherence in both one-body and many-body dynamics.
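    A rough numerical sketch of the Quantum Drift idea, under explicitly assumed simplifications: the local energy fluctuation is modelled as Gaussian noise added to the energy splitting of the two-level system at each step, every realization evolves unitarily (no jumps), and the damping of the |0⟩ ↔ |1⟩ oscillations appears only after averaging trajectories. Parameter values and the discretization are illustrative, not taken from the paper.

```python
# Minimal sketch under stated assumptions, not the authors' implementation: a TLS with
# |0> = |up,down>, |1> = |down,up> coupled by an exchange term V, where the local energy
# fluctuation is modelled as Gaussian noise on the splitting at every time step.
import numpy as np

rng = np.random.default_rng(0)
V = 1.0                                   # exchange coupling between |0> and |1>
dt, n_steps, n_real = 0.02, 1000, 100     # time step, trajectory length, realizations (hbar = 1)
sigma_eps = 0.5                           # assumed strength of the local energy fluctuation

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # couples |0> and |1>
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # local energy splitting

def evolve_once():
    """One unitary trajectory: P0(t) under a stochastically drifting energy splitting."""
    psi = np.array([1.0, 0.0], dtype=complex)    # start in |0>
    p0 = np.empty(n_steps)
    for t in range(n_steps):
        eps = sigma_eps * rng.standard_normal()  # instantaneous local energy fluctuation
        H = V * sx + 0.5 * eps * sz
        evals, evecs = np.linalg.eigh(H)         # exact 2x2 step: psi <- exp(-i H dt) psi
        psi = evecs @ (np.exp(-1j * evals * dt) * (evecs.conj().T @ psi))
        p0[t] = abs(psi[0]) ** 2
    return p0

# Each trajectory stays unitary; averaging them damps the |0> <-> |1> oscillations.
p0_avg = np.mean([evolve_once() for _ in range(n_real)], axis=0)
print("average P0 at early / late times:", p0_avg[10], p0_avg[-1])
```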

    A Random Matrix Approach to Echo-State Neural Networks

    Recurrent neural networks, especially in their linear version, have provided many qualitative insights into their performance under different configurations. This article provides, through a novel random matrix framework, the quantitative counterpart of these performance results, specifically in the case of echo-state networks. Beyond mere insights, our approach conveys a deeper understanding of the core mechanism at play in both training and testing.

    Dynamical systems as temporal feature spaces

    Parameterized state space models in the form of recurrent networks are often used in machine learning to learn from data streams exhibiting temporal dependencies. To break the black-box nature of such models it is important to understand the dynamical features of the input driving time series that are formed in the state space. We propose a framework for rigorous analysis of such state representations in vanishing-memory state space models such as echo state networks (ESNs). In particular, we consider the state space as a temporal feature space and the readout mapping from the state space as a kernel machine operating in that feature space. We show that: (1) the usual ESN strategy of randomly generating the input-to-state as well as the state coupling leads to shallow-memory time series representations, corresponding to a cross-correlation operator with fast exponentially decaying coefficients; (2) imposing symmetry on the dynamic coupling yields a constrained dynamic kernel matching the input time series with straightforward exponentially decaying motifs or exponentially decaying motifs of the highest frequency; (3) a simple-cycle high-dimensional reservoir topology specified by only two free parameters can implement deep-memory dynamic kernels with a rich variety of matching motifs. We quantify the richness of the feature representations imposed by dynamic kernels and demonstrate that, for the dynamic kernel associated with the cycle reservoir topology, the kernel richness undergoes a phase transition close to the edge of stability.
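    To make the "reservoir state as temporal feature space" reading concrete, the sketch below treats the state reached after driving a linear reservoir with a finite time series as a feature vector phi(u) and evaluates the induced dynamic kernel K(u, v) = phi(u) · phi(v), once for a fully random coupling and once for a simple cycle topology. Reading only the final state, together with all sizes and weights, is an illustrative assumption rather than the paper's exact construction.

```python
# Minimal sketch, assuming a linear reservoir whose final state is the temporal feature
# vector phi(u); the dynamic kernel is then K(u, v) = phi(u) . phi(v). Parameter values
# and topologies are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 50                           # reservoir size and time-series length

def make_features(W, w_in):
    """Return phi(u): the reservoir state after reading the whole series u."""
    def phi(u):
        x = np.zeros(n)
        for u_t in u:
            x = W @ x + w_in * u_t
        return x
    return phi

def dynamic_kernel(phi, u, v):
    return phi(u) @ phi(v)

w_in = rng.standard_normal(n)

# Fully random coupling (the usual ESN recipe), spectral radius 0.9
W_rand = rng.standard_normal((n, n)) / np.sqrt(n)
W_rand *= 0.9 / max(abs(np.linalg.eigvals(W_rand)))

# Simple cycle reservoir: one ring weight is the only coupling parameter
W_cycle = 0.9 * np.roll(np.eye(n), 1, axis=0)

u, v = rng.standard_normal(T), rng.standard_normal(T)
for name, W in [("random", W_rand), ("cycle", W_cycle)]:
    phi = make_features(W, w_in)
    print(f"{name:6s} K(u,v) = {dynamic_kernel(phi, u, v):+.4f}")
```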