
    Echo State Networks with Self-Normalizing Activations on the Hyper-Sphere

    Among the various architectures of Recurrent Neural Networks, Echo State Networks (ESNs) emerged due to their simple and inexpensive training procedure. These networks are known to be sensitive to the setting of hyper-parameters, which critically affect their behaviour. Results show that their performance is usually maximized in a narrow region of hyper-parameter space called the edge of chaos. Finding such a region requires searching hyper-parameter space in a sensible way: hyper-parameter configurations marginally outside this region might yield networks exhibiting fully developed chaos, hence producing unreliable computations. The performance gain due to optimizing hyper-parameters can be studied by considering the memory-nonlinearity trade-off, i.e., the fact that increasing the nonlinear behaviour of the network degrades its ability to remember past inputs, and vice versa. In this paper, we propose a model of ESNs that eliminates critical dependence on hyper-parameters, resulting in networks that provably cannot enter a chaotic regime and, at the same time, exhibit nonlinear behaviour in phase space characterised by a large memory of past inputs, comparable to that of linear networks. Our contribution is supported by experiments corroborating our theoretical findings, showing that the proposed model displays dynamics rich enough to approximate many common nonlinear systems used for benchmarking.
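    To make the self-normalization idea concrete, here is a minimal sketch of an ESN whose state is projected back onto the unit hyper-sphere after every update, so the state norm can never grow. The reservoir size, weight scalings and input signal are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        # Minimal sketch (assumed details): an echo state network whose state is
        # renormalized onto the unit hyper-sphere after every update, so the
        # state norm stays fixed and trajectories cannot blow up.
        rng = np.random.default_rng(0)
        N_RES, N_IN = 200, 1                                      # toy sizes
        W = rng.standard_normal((N_RES, N_RES)) / np.sqrt(N_RES)  # reservoir weights
        W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))              # input weights

        def step(x, u):
            """One reservoir update followed by projection onto the hyper-sphere."""
            z = np.tanh(W @ x + W_in @ u)
            return z / np.linalg.norm(z)  # self-normalization: ||x|| = 1 always

        x = rng.standard_normal(N_RES)
        x /= np.linalg.norm(x)
        for t in range(100):
            u = np.array([np.sin(0.1 * t)])  # toy input signal
            x = step(x, u)

    Since the state is confined to the hyper-sphere, the usual careful tuning of the spectral radius loses its critical role. This is only one simple way to realize the bounded-dynamics property the abstract claims; the paper's actual construction may differ.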

    Fluctuations between high- and low-modularity topology in time-resolved functional connectivity

    Modularity is an important topological attribute of functional brain networks. Recent studies have reported that the modularity of functional networks varies not only across individuals, relating to demographics and cognitive performance, but also within individuals, co-occurring with fluctuations in network properties of functional connectivity estimated over short time intervals. However, the characteristics of these time-resolved functional networks during periods of high and low modularity have remained largely unexplored. In this study, we investigate spatiotemporal properties of time-resolved networks in the high- and low-modularity periods during rest, with a particular focus on their spatial connectivity patterns, temporal homogeneity and test-retest reliability. We show that the spatial connectivity patterns of time-resolved networks in the high- and low-modularity periods are characterized by increased and decreased dissociation of the default mode network module from task-positive network modules, respectively. We also find that instances of time-resolved functional connectivity sampled within the high (low) modularity period are relatively homogeneous (heterogeneous) over time, indicating that during the low-modularity period the default mode network interacts with other networks in a variable manner. We confirm that the occurrence of the high- and low-modularity periods varies across individuals with moderate inter-session test-retest reliability, and that it correlates with previously reported individual differences in the modularity of functional connectivity estimated over longer timescales. Our findings illustrate how time-resolved functional networks are spatiotemporally organized during periods of high and low modularity, allowing one to trace individual differences in long-timescale modularity to the variable occurrence of network configurations at shorter timescales.
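    As a concrete illustration of the kind of analysis described above, the following sketch computes sliding-window functional connectivity and a per-window Newman modularity score. The window length, correlation threshold, community-detection algorithm and the toy stand-in for BOLD data are assumptions for illustration, not the authors' pipeline.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(0)
        T, N = 600, 20                       # time points and regions (toy sizes)
        bold = rng.standard_normal((T, N))   # stand-in for regional BOLD signals

        WIN, STEP = 60, 10                   # window length and stride (assumed)
        modularity_ts = []
        for start in range(0, T - WIN + 1, STEP):
            window = bold[start:start + WIN]
            fc = np.corrcoef(window.T)       # windowed connectivity matrix
            np.fill_diagonal(fc, 0.0)
            fc[fc < 0.2] = 0.0               # keep only strong positive edges
            G = nx.from_numpy_array(fc)
            comms = nx.community.greedy_modularity_communities(G, weight='weight')
            q = nx.community.modularity(G, comms, weight='weight')
            modularity_ts.append(q)

        # High- vs. low-modularity periods can then be defined, for example,
        # by splitting modularity_ts at its median.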

    Multiplex visibility graphs to investigate recurrent neural network dynamics

    Source at https://doi.org/10.1038/srep44037.
    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call records show the effectiveness of the proposed methods.
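    For concreteness, below is a minimal sketch of the construction just described: one horizontal visibility graph per reservoir neuron, with the layers collected into a plain list standing in for the multiplex. The O(n^2) construction and the toy activation matrix are choices made for clarity, not the authors' implementation.

        import numpy as np
        import networkx as nx

        def horizontal_visibility_graph(series):
            """Link two time points iff every sample between them is lower than both."""
            n = len(series)
            G = nx.Graph()
            G.add_nodes_from(range(n))
            for i in range(n):
                tallest = -np.inf            # highest sample seen between i and j
                for j in range(i + 1, n):
                    if tallest < min(series[i], series[j]):
                        G.add_edge(i, j)     # nothing in between blocks the view
                    tallest = max(tallest, series[j])
                    if tallest >= series[i]:
                        break                # later points cannot see past this one
            return G

        # One layer per reservoir neuron: states has shape (time, neurons).
        rng = np.random.default_rng(0)
        states = rng.standard_normal((300, 50))  # toy stand-in for ESN activations
        multiplex = [horizontal_visibility_graph(states[:, k])
                     for k in range(states.shape[1])]

    Layer-wise statistics of the multiplex, such as degree distributions, can then be compared across hyperparameter settings in the spirit of the abstract.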