    Benchmarking Learning Efficiency in Deep Reservoir Computing

    It is common to evaluate the performance of a machine learning model by measuring its predictive power on a test dataset. This approach favors complicated models that can smoothly fit complex functions and generalize well from training data points. However, the speed and data efficiency of the learning process, although essential components of intelligence, are rarely reported or compared across candidate models. In this paper, we introduce a benchmark of increasingly difficult tasks together with a data efficiency metric that measures how quickly machine learning models learn from training data. We compare the learning speed of established sequential supervised models, such as RNNs, LSTMs, and Transformers, with less well-known alternative models based on reservoir computing. Solving the proposed tasks effectively requires a wide range of computational primitives, such as memory and the ability to compute Boolean functions. Surprisingly, we observe that reservoir computing systems, which rely on dynamically evolving feature maps, learn faster than fully supervised methods trained with stochastic gradient optimization, while achieving comparable accuracy scores. The code, benchmark, trained models, and results needed to reproduce our experiments are available at https://github.com/hugcis/benchmark_learning_efficiency/
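
    The reservoir computing systems compared in this abstract are typically echo state networks: a fixed random recurrent network whose states serve as features for a linear readout, so only the readout is trained, often in closed form. Below is a minimal NumPy sketch of that general idea; the dimensions, spectral-radius rescaling, ridge regression readout, and the delayed-copy toy task are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 200                       # assumed sizes

    # Fixed random input and reservoir weights (never trained).
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

    def run_reservoir(inputs):
        """Collect reservoir states for an input sequence of shape (T, n_in)."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = np.tanh(W_in @ u + W @ x)      # dynamically evolving feature map
            states.append(x.copy())
        return np.array(states)

    def fit_readout(states, targets, ridge=1e-6):
        """Train only the linear readout, in closed form (ridge regression)."""
        A = states.T @ states + ridge * np.eye(n_res)
        return np.linalg.solve(A, states.T @ targets)

    # Toy usage: learn to reproduce a delayed copy of a random signal,
    # a task that requires memory.
    u = rng.uniform(-1, 1, (1000, n_in))
    y = np.roll(u, 5, axis=0)                  # target: input delayed by 5 steps
    S = run_reservoir(u)
    W_out = fit_readout(S[100:], y[100:])      # discard a warm-up transient
    print("train MSE:", np.mean((S[100:] @ W_out - y[100:]) ** 2))

    Because training reduces to one linear solve rather than many gradient steps, such a model can reach a usable readout after seeing comparatively little data, which is the kind of learning-speed advantage the abstract reports.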

    Algorithmic statistics: forty years later

    Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. Once this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If yes, how often do such bad ("non-stochastic") data appear "in real life"? Another, more technical motivation comes from algorithmic information theory. In this theory a notion of complexity of a finite object (the amount of information in the object) is introduced; it assigns to every object a number, called its algorithmic complexity (or Kolmogorov complexity). Algorithmic statistics provides a more fine-grained classification: for each finite object a curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve. In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information theory (Kolmogorov complexity). Comment: Missing proofs added.
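
    The per-object "curve" mentioned in the abstract is, in one of its equivalent forms, the Kolmogorov structure function. As a hedged reminder of the standard textbook definition (the survey itself works with more careful versions, up to logarithmic precision; C denotes plain Kolmogorov complexity):

    % Structure function of a string x: the log-size of the smallest
    % "model" (finite set S containing x) of complexity at most k.
    h_x(k) = \min \{\, \log_2 |S| \;:\; x \in S,\ C(S) \le k \,\}

    % x is a typical element of a good model S when the two-part
    % description is about as short as the best one-part description:
    C(S) + \log_2 |S| \approx C(x)

    Data for which no low-complexity set S achieves this balance are the "non-stochastic" objects whose existence and frequency the abstract asks about.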

    Predictability: a way to characterize Complexity

    Different aspects of the predictability problem in dynamical systems are reviewed. The deep relation among Lyapunov exponents, Kolmogorov-Sinai entropy, Shannon entropy, and algorithmic complexity is discussed. In particular, we emphasize how a characterization of the unpredictability of a system gives a measure of its complexity. Adopting this point of view, we review some developments in the characterization of the predictability of systems showing different kinds of complexity: from low-dimensional systems to high-dimensional ones with spatio-temporal chaos, and to fully developed turbulence. Special attention is devoted to finite-time and finite-resolution effects on predictability, which can be accounted for with suitable generalizations of the standard indicators. The problems that arise in systems with intrinsic randomness are discussed, with emphasis on the important problems of distinguishing chaos from noise and of modeling the system. The characterization of irregular behavior in systems with discrete phase space is also considered. Comment: 142 LaTeX pages, 41 included EPS figures, submitted to Physics Reports. Related information at http://axtnt2.phys.uniroma1.i
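
    The link between unpredictability and Lyapunov exponents invoked in the abstract can be made concrete for a one-dimensional map: the exponent is the time average of log|f'(x)| along an orbit, i.e. the mean exponential growth rate of infinitesimal perturbations. A minimal sketch for the logistic map follows; the parameter values and iteration counts are illustrative assumptions.

    import numpy as np

    def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_iter=100_000):
        """Largest Lyapunov exponent of x -> r*x*(1-x), estimated as the
        time average of log|f'(x)| = log|r*(1 - 2x)| along an orbit."""
        x = x0
        for _ in range(n_transient):       # discard the initial transient
            x = r * x * (1 - x)
        total = 0.0
        for _ in range(n_iter):
            total += np.log(abs(r * (1 - 2 * x)))
            x = r * x * (1 - x)
        return total / n_iter

    print(lyapunov_logistic(4.0))   # fully chaotic case: lambda = ln 2 ~ 0.693
    print(lyapunov_logistic(3.2))   # stable 2-cycle: negative exponent

    A positive exponent means initial errors grow roughly like e^(lambda*t), so the prediction horizon scales as (1/lambda)*ln(Delta/delta) for initial uncertainty delta and tolerance Delta; this is the sense in which a characterization of unpredictability measures complexity.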