
    Hierarchical Temporal Representation in Linear Reservoir Computing

    Recently, studies on deep Reservoir Computing (RC) have highlighted the role of layering in deep recurrent neural networks (RNNs). In this paper, the use of linear recurrent units allows us to provide further evidence of the intrinsic hierarchical temporal representation in deep RNNs, through frequency analysis applied to the state signals. The potential of our approach is assessed on the class of Multiple Superimposed Oscillator tasks. Furthermore, our investigation provides useful insights that open a discussion on the main aspects characterizing the deep learning framework in the temporal domain.
    Comment: This is a pre-print of the paper submitted to the 27th Italian Workshop on Neural Networks, WIRN 2017
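
    As a concrete illustration of the frequency-analysis idea, the sketch below drives a linear reservoir (no tanh nonlinearity, matching the paper's linear recurrent units) with a Multiple Superimposed Oscillator input and inspects the spectrum of the resulting state signals via the FFT. All sizes, frequencies, and scaling values are our own illustrative assumptions, not the paper's settings.

        # Minimal sketch: FFT of linear reservoir state signals under MSO input.
        import numpy as np

        rng = np.random.default_rng(0)
        n_units, n_steps = 100, 4096

        # MSO-style input: a sum of sinusoids (frequencies are assumptions).
        t = np.arange(n_steps)
        u = sum(np.sin(f * t) for f in (0.2, 0.311, 0.42, 0.51))

        # Random reservoir, rescaled to spectral radius 0.9 for stability.
        W = rng.uniform(-1, 1, (n_units, n_units))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))
        w_in = rng.uniform(-0.1, 0.1, n_units)

        # Linear state update: x(t+1) = W x(t) + w_in u(t)  (no nonlinearity).
        x = np.zeros(n_units)
        states = np.empty((n_steps, n_units))
        for k in range(n_steps):
            x = W @ x + w_in * u[k]
            states[k] = x

        # Magnitude spectrum of each unit's state signal.
        spectra = np.abs(np.fft.rfft(states, axis=0))
        print(spectra.shape)  # (n_steps // 2 + 1, n_units)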

    Time Series Clustering with Deep Reservoir Computing

    This paper proposes a method for clustering time series, based on the ability of deep Reservoir Computing networks to grasp the dynamical structure of the series presented as input. A standard clustering algorithm, such as k-means, is applied to the network states rather than to the input series themselves. Clustering is thus embedded into the network's dynamical evolution, since a clustering result is obtained at every time step, which in turn serves as the initialisation for the next step. We empirically assess the performance of deep reservoir systems in time series clustering on benchmark datasets, considering the influence of crucial hyperparameters. Experimentation with the proposed model shows enhanced clustering quality, measured by the silhouette coefficient, compared to both static clustering of the data and dynamic clustering with a shallow network.
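
    A minimal sketch of the scheme the abstract describes: run a reservoir over a batch of series and, at every time step, cluster the current states with k-means, warm-starting from the previous step's centroids. A single reservoir layer and toy data stand in for the deep architecture; all dimensions and hyperparameters are assumptions.

        # Dynamic clustering of reservoir states with warm-started k-means.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        n_series, n_steps, n_units, n_clusters = 50, 200, 100, 3

        # Toy input: a batch of random univariate series (placeholder data).
        U = rng.standard_normal((n_series, n_steps))

        # Shared reservoir (one layer here; the paper uses deep reservoirs).
        W = rng.uniform(-1, 1, (n_units, n_units))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))
        w_in = rng.uniform(-0.1, 0.1, n_units)

        X = np.zeros((n_series, n_units))
        centroids = None
        for k in range(n_steps):
            X = np.tanh(X @ W.T + np.outer(U[:, k], w_in))
            init = centroids if centroids is not None else "k-means++"
            n_init = 1 if centroids is not None else 10
            km = KMeans(n_clusters=n_clusters, init=init, n_init=n_init).fit(X)
            centroids = km.cluster_centers_  # warm start for the next step

        print(km.labels_)  # final cluster assignment per series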

    Pure Samples of Quark and Gluon Jets at the LHC

    Having pure samples of quark and gluon jets would greatly facilitate the study of jet properties and substructure, with many potential Standard Model and new physics applications. To this end, we consider multijet and jets+X samples to determine the purity that can be achieved by simple kinematic cuts while leaving reasonable production cross sections. We find, for example, that at the 7 TeV LHC, the pp → γ + 2 jets sample can provide 98% pure quark jets with 200 GeV of transverse momentum and a cross section of 5 pb. To get 10 pb of 200 GeV jets with 90% gluon purity, the pp → 3 jets sample can be used. The b + 2 jets sample is also useful for gluons, but only if the b-tagging is very efficient.
    Comment: 19 pages, 16 figures; v2: a section on formally defining quark and gluon jets has been added
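
    As a toy illustration of the kind of kinematic selection the abstract refers to (not the authors' analysis code), the sketch below keeps γ + 2 jets events whose leading jet passes a transverse-momentum threshold. The event record layout, field names, and cut values are all hypothetical.

        # Hypothetical event selection with a simple kinematic cut.
        def select_photon_2jet(events, jet_pt_min=200.0):
            """Keep events with one photon and >= 2 jets, leading jet pT above cut (GeV)."""
            selected = []
            for ev in events:
                jets = sorted(ev["jets_pt"], reverse=True)
                if ev["n_photons"] == 1 and len(jets) >= 2 and jets[0] >= jet_pt_min:
                    selected.append(ev)
            return selected

        # Usage on toy events:
        events = [
            {"n_photons": 1, "jets_pt": [250.0, 80.0]},
            {"n_photons": 1, "jets_pt": [120.0, 60.0]},
        ]
        print(len(select_photon_2jet(events)))  # 1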

    Reservoir Topology in Deep Echo State Networks

    Deep Echo State Networks (DeepESNs) have recently extended the applicability of Reservoir Computing (RC) methods towards the field of deep learning. In this paper we study the impact of constrained reservoir topologies on the architectural design of deep reservoirs, through numerical experiments on several RC benchmarks. The major outcome of our investigation is the remarkable predictive performance gain achieved by the synergy between a deep reservoir construction and a structured organization of the recurrent units in each layer. Our results also indicate that a particularly advantageous architectural setting is obtained for DeepESNs whose reservoir units are structured according to a permutation recurrent matrix.
    Comment: Preprint of the paper published in the proceedings of ICANN 2019
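
    A minimal sketch of one plausible construction of the permutation topology mentioned above (details are our assumption): the recurrent matrix is a random permutation matrix scaled to the desired spectral radius, so each unit feeds exactly one other unit.

        # Reservoir with permutation-structured recurrent matrix.
        import numpy as np

        def permutation_reservoir(n_units, spectral_radius=0.9, seed=0):
            rng = np.random.default_rng(seed)
            perm = rng.permutation(n_units)
            W = np.zeros((n_units, n_units))
            W[np.arange(n_units), perm] = 1.0  # one nonzero entry per row
            # A permutation matrix is orthogonal (spectral radius 1),
            # so the desired radius is obtained by direct scaling.
            return spectral_radius * W

        W = permutation_reservoir(100)
        print(max(abs(np.linalg.eigvals(W))))  # ~0.9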

    Deep Tree Transductions - A Short Survey

    The paper surveys recent extensions of Long Short-Term Memory networks that handle tree structures, from the perspective of learning non-trivial forms of isomorphic structured transductions. It provides a discussion of modern TreeLSTM models, showing the effect of the bias induced by the direction of tree processing. An empirical analysis is performed on real-world benchmarks, highlighting how no single model is adequate to effectively approach all transduction problems.
    Comment: To appear in the Proceedings of the 2019 INNS Big Data and Deep Learning conference (INNSBDDL 2019). arXiv admin note: text overlap with arXiv:1809.0909
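
    For context, the sketch below implements a compact Child-Sum TreeLSTM cell (Tai et al., 2015), representative of the model family the survey discusses, processing a tree bottom-up from leaves to root. Dimensions and initialisation are illustrative assumptions.

        # Child-Sum TreeLSTM cell, bottom-up tree processing (illustrative).
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        class ChildSumTreeLSTM:
            def __init__(self, in_dim, mem_dim, seed=0):
                rng = np.random.default_rng(seed)
                s = 0.1
                # Input->hidden and hidden->hidden weights per gate (i, f, o, u).
                self.W = {g: rng.uniform(-s, s, (mem_dim, in_dim)) for g in "ifou"}
                self.U = {g: rng.uniform(-s, s, (mem_dim, mem_dim)) for g in "ifou"}
                self.b = {g: np.zeros(mem_dim) for g in "ifou"}

            def node(self, x, children):
                """children: list of (h, c) pairs from already-processed subtrees."""
                h_sum = sum((h for h, _ in children), np.zeros_like(self.b["i"]))
                i = sigmoid(self.W["i"] @ x + self.U["i"] @ h_sum + self.b["i"])
                o = sigmoid(self.W["o"] @ x + self.U["o"] @ h_sum + self.b["o"])
                u = np.tanh(self.W["u"] @ x + self.U["u"] @ h_sum + self.b["u"])
                # One forget gate per child, conditioned on that child's state.
                c = i * u
                for h_k, c_k in children:
                    f_k = sigmoid(self.W["f"] @ x + self.U["f"] @ h_k + self.b["f"])
                    c += f_k * c_k
                return o * np.tanh(c), c  # (h, c) for the parent node

        cell = ChildSumTreeLSTM(in_dim=8, mem_dim=16)
        leaf = cell.node(np.ones(8), [])           # leaves have no children
        root_h, root_c = cell.node(np.ones(8), [leaf, leaf])
        print(root_h.shape)  # (16,)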

    Richness of Deep Echo State Network Dynamics

    Reservoir Computing (RC) is a popular methodology for the efficient design of Recurrent Neural Networks (RNNs). Recently, the advantages of the RC approach have been extended to the context of multi-layered RNNs with the introduction of the Deep Echo State Network (DeepESN) model. In this paper, we study the quality of state dynamics in progressively higher layers of DeepESNs, using tools from information theory and numerical analysis. Our experimental results on RC benchmark datasets reveal the fundamental role played by the strength of inter-reservoir connections in progressively enriching the representations developed in higher layers. Our analysis also gives interesting insights into the possibility of effectively exploiting training algorithms based on stochastic gradient descent in the RC field.
    Comment: Preprint of the paper accepted at IWANN 2019
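
    A hedged sketch of one way to quantify the "richness" of layer dynamics in a stacked reservoir (our choice of measure, not necessarily the paper's): the effective rank of each layer's state matrix, computed from the entropy of its normalised singular values, with each layer fed by the states of the one below. All sizes and scalings are assumptions.

        # Effective rank of state dynamics per layer in a stacked reservoir.
        import numpy as np

        def effective_rank(states):
            """Roy & Vetterli's effective rank: exp of the entropy of the
            normalised singular-value distribution of the state matrix."""
            s = np.linalg.svd(states, compute_uv=False)
            p = s / s.sum()
            p = p[p > 0]
            return float(np.exp(-(p * np.log(p)).sum()))

        rng = np.random.default_rng(2)
        n_units, n_steps, n_layers, rho = 100, 1000, 3, 0.9
        u = np.sin(0.2 * np.arange(n_steps))  # toy driving signal

        layer_input = u[:, None]  # layer 1 is driven by the external input
        for layer in range(n_layers):
            W = rng.uniform(-1, 1, (n_units, n_units))
            W *= rho / max(abs(np.linalg.eigvals(W)))
            W_in = rng.uniform(-0.1, 0.1, (n_units, layer_input.shape[1]))
            x = np.zeros(n_units)
            states = np.empty((n_steps, n_units))
            for k in range(n_steps):
                x = np.tanh(W @ x + W_in @ layer_input[k])
                states[k] = x
            print(f"layer {layer + 1}: effective rank = {effective_rank(states):.1f}")
            layer_input = states  # each layer feeds the next (DeepESN stacking)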