Optoelectronic Reservoir Computing
Reservoir computing is a recently introduced, highly efficient bio-inspired
approach for processing time-dependent data. The basic scheme of reservoir
computing consists of a nonlinear recurrent dynamical system coupled to a
single input layer and a single output layer. Within these constraints many
implementations are possible. Here we report an opto-electronic implementation
of reservoir computing based on a recently proposed architecture consisting of
a single nonlinear node and a delay line. Our implementation is sufficiently
fast for real-time information processing. We illustrate its performance on
tasks of practical importance such as nonlinear channel equalization and speech
recognition, and obtain results comparable to state-of-the-art digital
implementations.
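The single-nonlinear-node-plus-delay-line architecture described above can be sketched in simulation: a fixed random mask multiplexes each input sample across "virtual" nodes along the delay line, and the nonlinear node updates them once per input step. This is a minimal illustrative sketch, not the authors' opto-electronic implementation; the virtual-node count and the gains `alpha` and `beta` are assumed values.

```python
import numpy as np

def delay_reservoir(u, n_virtual=50, alpha=0.9, beta=0.5, rng=None):
    """Time-multiplexed reservoir sketch: one nonlinear node plus a delay
    line holding n_virtual 'virtual' nodes (hypothetical parameters)."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.uniform(-1.0, 1.0, n_virtual)   # fixed random input mask
    x = np.zeros(n_virtual)                    # current delay-line contents
    states = np.empty((len(u), n_virtual))
    for t, ut in enumerate(u):
        # each virtual node combines its delayed neighbour with the masked input
        x = np.tanh(alpha * np.roll(x, 1) + beta * mask * ut)
        states[t] = x
    return states
```

The rows of `states` would then feed a trained linear readout, the only adaptive part of the scheme.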
Online Training of an Opto-Electronic Reservoir Computer
Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals. Its analog implementations equal and sometimes outperform digital algorithms on a series of benchmark tasks. Their performance can be increased by switching from offline to online training methods. Here we present the first online-trained opto-electronic reservoir computer. The system is tested on a channel equalisation task, with the training algorithm executed by an FPGA chip. We report performance close to previous implementations and demonstrate the benefits of online training on a non-stationary task that could not be easily solved using offline methods.
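Online training of a linear readout, of the kind executed here on an FPGA, is commonly done with a simple per-sample gradient rule such as least mean squares (LMS). The sketch below is a generic LMS readout trainer, not the paper's FPGA algorithm; the learning rate `mu` is an assumed value.

```python
import numpy as np

def lms_readout(states, target, mu=0.05):
    """Train a linear readout sample-by-sample with the LMS rule.
    states: (T, N) reservoir states; target: (T,) desired outputs."""
    w = np.zeros(states.shape[1])
    sq_errors = []
    for x, d in zip(states, target):
        y = w @ x                # current readout output
        e = d - y                # instantaneous error
        w += mu * e * x          # LMS weight update
        sq_errors.append(e * e)
    return w, np.array(sq_errors)
```

Because the weights are updated as each sample arrives, the readout can track a non-stationary task, which is the benefit the paper demonstrates over offline training.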
Time-shift selection for reservoir computing using a rank-revealing QR algorithm
Reservoir computing, a recurrent neural network paradigm in which only the
output layer is trained, has demonstrated remarkable performance on tasks such
as prediction and control of nonlinear systems. Recently, it was demonstrated
that adding time-shifts to the signals generated by a reservoir can provide
large improvements in performance accuracy. In this work, we present a
technique to choose the time-shifts by maximizing the rank of the reservoir
matrix using a rank-revealing QR algorithm. This technique, which is not task
dependent, does not require a model of the system, and therefore is directly
applicable to analog hardware reservoir computers. We demonstrate our
time-shift selection technique on two types of reservoir computer: one based
on an opto-electronic oscillator and one based on a traditional recurrent
neural network with a nonlinear activation function. We find that our
technique provides improved accuracy over random time-shift selection in
essentially all cases.
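The idea of choosing time-shifts by maximizing the rank of the reservoir matrix can be illustrated with SciPy's pivoted QR, which orders columns by how much new rank each contributes. The helper below is a hypothetical sketch of the approach, not the authors' code; the function name and candidate-shift grid are illustrative.

```python
import numpy as np
from scipy.linalg import qr

def select_time_shifts(states, candidate_shifts, k):
    """Pick k (shift, node) pairs whose shifted reservoir signals best
    increase the rank of the state matrix, via rank-revealing pivoted QR."""
    T, N = states.shape
    max_s = max(candidate_shifts)
    cols, labels = [], []
    for s in candidate_shifts:
        # align every shifted copy to a common time window of length T - max_s
        cols.append(states[max_s - s : T - s])
        labels.extend((s, j) for j in range(N))
    M = np.hstack(cols)
    # pivoted QR orders columns so the leading ones are most independent
    _, _, piv = qr(M, mode="economic", pivoting=True)
    return [labels[p] for p in piv[:k]]
```

Because the selection uses only the measured state matrix, it needs no model of the underlying system, matching the task-independence claimed in the abstract.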
Minimal approach to neuro-inspired information processing
© 2015 Soriano, Brunner, Escalona-Morán, Mirasso and Fischer. To learn and mimic how the brain processes information has been a major research challenge for decades. Despite the efforts, little is known about how we encode, maintain and retrieve information. One hypothesis assumes that transient states are generated in our intricate network of neurons when the brain is stimulated by a sensory input. Based on this idea, powerful computational schemes have been developed. These schemes, known as machine-learning techniques, include artificial neural networks, support vector machines and reservoir computing, among others. In this paper, we concentrate on the reservoir computing (RC) technique using delay-coupled systems. Unlike traditional RC, where the information is processed in large recurrent networks of interconnected artificial neurons, we choose a minimal design, implemented via a simple nonlinear dynamical system subject to a self-feedback loop with delay. This design is not intended to represent an actual brain circuit, but aims at finding the minimum ingredients that allow the development of an efficient information processor. This simple scheme not only allows us to address fundamental questions but also permits simple hardware implementations. By reducing the neuro-inspired reservoir computing approach to its bare essentials, we find that nonlinear transient responses of the simple dynamical system enable the processing of information with excellent performance and at unprecedented speed. We specifically explore different hardware implementations and, by that, we learn about the role of nonlinearity, noise, system responses, connectivity structure, and the quality of projection onto the required high-dimensional state space.
Besides the relevance for the understanding of basic mechanisms, this scheme opens direct technological opportunities that could not be addressed with previous approaches. The authors acknowledge support by MINECO (Spain) under Projects TEC2012-36335 (TRIPHOP) and FIS2012-30634 (Intense@cosyp), FEDER and Govern de les Illes Balears via the program Grups Competitius. The work of MS was supported by the Conselleria d'Educació, Cultura i Universitats del Govern de les Illes Balears and the European Social Fund.
Emerging opportunities and challenges for the future of reservoir computing
Reservoir computing originates in the early 2000s, the core idea being to utilize dynamical systems as reservoirs (nonlinear generalizations of standard bases) to adaptively learn spatiotemporal features and hidden patterns in complex time series. Shown to have the potential of achieving higher-precision prediction in chaotic systems, those pioneering works led to a great amount of interest and follow-ups in the community of nonlinear dynamics and complex systems. To unlock the full capabilities of reservoir computing towards a fast, lightweight, and significantly more interpretable learning framework for temporal dynamical systems, substantially more research is needed. This Perspective intends to elucidate the parallel progress of mathematical theory, algorithm design and experimental realizations of reservoir computing, and identify emerging opportunities as well as existing challenges for large-scale industrial adoption of reservoir computing, together with a few ideas and viewpoints on how some of those challenges might be resolved with joint efforts by academic and industrial researchers across multiple disciplines.
PAM-4 transmission at 1550 nm using photonic reservoir computing post-processing
The efficacy of data decoding in contemporary ultrafast fiber transmission systems is greatly determined by the capabilities of the signal processing tools that are used. The received signal must not exceed a certain level of complexity, beyond which the applied signal processing solutions become insufficient or slow. Moreover, achieving the required signal-to-noise ratio (SNR) of the received signal can be challenging, especially when adopting modulation formats with multi-level encoding. Lately, photonic reservoir computing (RC), a hardware machine learning technique with recurrent connectivity, has been proposed as a post-processing tool that deals with deterministic distortions from fiber transmission. Here, we show that RC post-processing is remarkably efficient for multilevel encoding and for very high launched optical peak powers, up to 14 dBm, for fiber transmission. Higher power levels provide the desired high SNR values at the receiver end, at the expense of a complex nonlinear transformation of the transmission signal. Our demonstration evaluates a direct fiber communication link with 4-level pulse amplitude modulation (PAM-4) encoding and direct detection, without including optical amplification, dispersion compensation, pulse shaping or other digital signal processing (DSP) techniques. By applying RC post-processing to the distorted signal, we numerically estimate fiber transmission distances of 27 km at 56 Gb/s and of 5.5 km at 112 Gb/s data encoding rates, while fulfilling the hard-decision forward error correction (HD-FEC) bit-error-rate (BER) limit for data recovery. In an equivalent experimental demonstration of our photonic reservoir, the achieved distances are 21 and 4.6 km, respectively.
A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning
Reservoir computing (RC), first applied to temporal signal processing, is a
recurrent neural network in which neurons are randomly connected. Once
initialized, the connection strengths remain unchanged. Such a simple structure
turns RC into a non-linear dynamical system that maps low-dimensional inputs
into a high-dimensional space. The model's rich dynamics, linear separability,
and memory capacity then enable a simple linear readout to generate adequate
responses for various applications. RC spans areas far beyond machine learning,
since it has been shown that the complex dynamics can be realized in various
physical hardware implementations and biological devices. This yields greater
flexibility and shorter computation time. Moreover, the neuronal responses
triggered by the model's dynamics shed light on understanding brain mechanisms
that also exploit similar dynamical processes. While the literature on RC is
vast and fragmented, here we conduct a unified review of RC's recent
developments from machine learning to physics, biology, and neuroscience. We
first review the early RC models, and then survey the state-of-the-art models
and their applications. We further introduce studies on modeling the brain's
mechanisms by RC. Finally, we offer new perspectives on RC development,
including reservoir design, the unification of coding frameworks, physical RC
implementations, and the interaction between RC, cognitive neuroscience, and
evolution.
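The structure this survey describes, a fixed randomly connected recurrent network whose only trained part is a linear readout, reduces to a few lines. The echo state network sketch below uses assumed hyperparameters (reservoir size, spectral radius `rho`, ridge strength) and trains the readout with ridge regression; it illustrates the paradigm rather than any specific model from the survey.

```python
import numpy as np

def esn_fit(u, y, n_res=100, rho=0.9, ridge=1e-6, seed=0):
    """Minimal echo state network: random fixed recurrent weights; only the
    linear readout is trained. u: (T,) input; y: (T,) target."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))   # set spectral radius to rho
    x = np.zeros(n_res)
    X = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut)          # reservoir update, never trained
        X[t] = x
    # ridge-regression readout: the only trained part of the model
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return W_out, X
```

The random connections, once initialized, remain unchanged, exactly as the abstract states; all adaptation is in `W_out`.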
Photonic machine learning implementation for signal recovery in optical communications
Machine learning techniques have proven very efficient in assorted classification tasks. Nevertheless, processing time-dependent high-speed signals can turn into an extremely challenging task, especially when these signals have been nonlinearly distorted. Recently, analogue hardware concepts using nonlinear transient responses have been gaining significant interest for fast information processing. Here, we introduce a simplified photonic reservoir computing scheme for data classification of severely distorted optical communication signals after extended fibre transmission. To this end, we convert the direct bit detection process into a pattern recognition problem. Using an experimental implementation of our photonic reservoir computer, we demonstrate an improvement in bit-error-rate of two orders of magnitude, compared to directly classifying the transmitted signal. This improvement corresponds to an extension of the communication range by over 75%. While we do not yet reach full real-time post-processing at telecom rates, we discuss how future designs might close the gap.
- …