14 research outputs found

    Photonic Delay Systems as Machine Learning Implementations

    Get PDF
    Nonlinear photonic delay systems present interesting implementation platforms for machine learning models. They can be extremely fast, offer a high degree of parallelism and potentially consume far less power than digital processors. So far they have been successfully employed for signal processing using the Reservoir Computing paradigm. In this paper we show that their range of applicability can be greatly extended if we use gradient descent with backpropagation through time on a model of the system to optimize the input encoding of such systems. We perform physical experiments demonstrating that the obtained input encodings work well in reality, and we show that optimized systems perform significantly better than the common Reservoir Computing approach. The results presented here demonstrate that common gradient descent techniques from machine learning may well be applicable to physical neuro-inspired analog computers.
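
    A minimal sketch of this idea, assuming a highly simplified discrete-time model of the delay system (each virtual node coupled only to its own delayed value) and a toy target task; the constants, model, task and manual-gradient training loop below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, ETA, GAMMA, LR = 50, 0.5, 0.05, 0.1   # virtual nodes, feedback gain, input gain, step size

# toy task: reproduce the squared input of the previous time step
u = rng.uniform(0, 1, 200)
d = np.concatenate([[0.0], u[:-1] ** 2])

mask = rng.normal(size=N)     # input-encoding mask to be optimized
w = np.zeros(N)               # linear readout weights

for step in range(200):
    # forward pass through the (model of the) delay system
    x = np.zeros((len(u) + 1, N))
    for k, u_k in enumerate(u):
        x[k + 1] = np.tanh(ETA * x[k] + GAMMA * mask * u_k)
    y = x[1:] @ w
    err = 2.0 * (y - d) / len(u)          # dL/dy for the mean-squared error

    # backpropagation through time: accumulate gradients w.r.t. mask and readout
    g_mask, g_w, g_x = np.zeros(N), np.zeros(N), np.zeros(N)
    for k in range(len(u) - 1, -1, -1):
        g_xk = err[k] * w + g_x           # from the readout and from step k+1
        g_a = g_xk * (1.0 - x[k + 1] ** 2)
        g_mask += g_a * GAMMA * u[k]
        g_w += err[k] * x[k + 1]
        g_x = ETA * g_a                   # passed on to step k-1
    mask -= LR * g_mask
    w -= LR * g_w
```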

    Basic protocols in quantum reinforcement learning with superconducting circuits

    Get PDF
    Superconducting circuit technologies have recently achieved quantum protocols involving closed feedback loops. Quantum artificial intelligence and quantum machine learning are emerging fields within quantum technologies which may enable quantum devices to acquire information from the outer world and improve themselves via a learning process. Here we propose the implementation of basic protocols in quantum reinforcement learning with superconducting circuits employing feedback-loop control. We introduce diverse scenarios for proof-of-principle experiments with state-of-the-art superconducting circuit technologies and analyze their feasibility in the presence of imperfections. The field of quantum artificial intelligence implemented with superconducting circuits paves the way for enhanced quantum control and quantum computation protocols.
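
    The kind of reward-driven feedback loop involved can be illustrated with a purely classical toy simulation: a single-qubit "agent" is rotated towards an unknown "environment" state, with single-shot measurement outcomes acting as rewards. This is only an illustration of measurement feedback, not the circuit-level protocol proposed in the paper; all angles and update rules are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_env = rng.uniform(0, np.pi)   # unknown "environment" state R_y(theta_env)|0>
theta_agent = 0.0                   # agent qubit starts in |0>
delta = np.pi / 2                   # current exploration range

for trial in range(200):
    # propose a random extra rotation of the agent within the exploration range
    proposal = theta_agent + rng.uniform(-delta, delta)
    # overlap of the proposed agent state with the environment state
    p_match = np.cos((proposal - theta_env) / 2.0) ** 2
    reward = rng.random() < p_match        # single-shot measurement outcome
    if reward:
        theta_agent = proposal             # keep the rotation, narrow exploration
        delta *= 0.9
    else:
        delta = min(delta / 0.9, np.pi)    # widen exploration after a failure

print(f"final overlap: {np.cos((theta_agent - theta_env) / 2.0) ** 2:.3f}")
```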

    High performance photonic reservoir computer based on a coherently driven passive cavity

    Full text link
    Reservoir computing is a recent bio-inspired approach for processing time-dependent signals. It has enabled a breakthrough in analog information processing, with several experiments, both electronic and optical, demonstrating state-of-the-art performance for hard tasks such as speech recognition, time series prediction and nonlinear channel equalization. A proof-of-principle experiment using a linear optical circuit on a photonic chip to process digital signals was recently reported. Here we present a photonic implementation of a reservoir computer based on a coherently driven passive fiber cavity processing analog signals. Our experiment achieves error rates as low as or lower than previous experiments on a wide variety of tasks, with lower power consumption. Furthermore, the analytical model describing our experiment is also of interest, as it constitutes a very simple high-performance reservoir computer algorithm. The present experiment, given its good performance, low energy consumption and conceptual simplicity, confirms the great potential of photonic reservoir computing for information processing applications ranging from artificial intelligence to telecommunications.
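
    A self-contained sketch of the kind of algorithm this suggests, assuming a linear ring of virtual nodes standing in for the passive cavity, quadratic (power) detection as the only nonlinearity, and a ridge-regression readout; all parameters and the one-step-prediction task are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, ALPHA, BETA = 100, 0.9, 0.1        # virtual nodes, cavity attenuation, input coupling
mask = rng.uniform(-1, 1, N)          # random input mask

def run_reservoir(u):
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        # purely linear update: attenuated circulation plus masked input
        x = ALPHA * np.roll(x, 1) + BETA * mask * u_t
        states[t] = x ** 2            # quadratic detection of the optical power
    return states

# toy benchmark: one-step-ahead prediction of a noisy narrowband signal
u = np.sin(0.2 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)
X, y = run_reservoir(u[:-1]), u[1:]

w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)   # ridge readout
print("NMSE:", np.mean((X @ w - y) ** 2) / np.var(y))
```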

    Training Passive Photonic Reservoirs with Integrated Optical Readout

    Full text link
    As Moore's law comes to an end, neuromorphic approaches to computing are on the rise. One of these, passive photonic reservoir computing, is a strong candidate for computing at high bitrates (> 10 Gbps) and with low energy consumption. Currently though, both benefits are limited by the necessity to perform training and readout operations in the electrical domain. Thus, efforts are currently underway in the photonic community to design an integrated optical readout, which allows all operations to be performed in the optical domain. In addition to the technological challenge of designing such a readout, new algorithms have to be designed in order to train it. Foremost, suitable algorithms need to be able to deal with the fact that the actual on-chip reservoir states are not directly observable. In this work, we investigate several options for such a training algorithm and propose a solution in which the complex states of the reservoir can be observed by appropriately setting the readout weights while iterating over a predefined input sequence. We perform numerical simulations in order to compare our method with an ideal baseline requiring full observability, as well as with an established black-box optimization approach (CMA-ES).
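
    A sketch of the black-box baseline mentioned (CMA-ES), assuming a toy complex-valued reservoir whose internal states are treated as unmeasurable so that only the detected power of the optically weighted sum is available to the optimizer; the reservoir stand-in, XOR-style task and use of the pycma package are illustrative assumptions:

```python
import numpy as np
import cma   # pip install cma

rng = np.random.default_rng(2)
N, T = 16, 1000
u = rng.integers(0, 2, T).astype(float)          # random bit stream
y_target = np.roll(u, 1) != u                    # toy task: XOR of consecutive bits

# stand-in for the photonic chip: complex states, not observable in hardware
W = 0.5 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(N)
w_in = rng.normal(size=N)
states = np.zeros((T, N), dtype=complex)
x = np.zeros(N, dtype=complex)
for t in range(T):
    x = W @ x + w_in * u[t]
    states[t] = x

def error_rate(weights):
    # only the detected power of the optically weighted sum is measurable
    field = states @ (weights[:N] + 1j * weights[N:])
    decisions = np.abs(field) ** 2 > np.median(np.abs(field) ** 2)
    return float(np.mean(decisions != y_target))

es = cma.CMAEvolutionStrategy(np.zeros(2 * N), 0.5)
es.optimize(error_rate, iterations=50)
print("best error rate:", es.result.fbest)
```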

    Minimal approach to neuro-inspired information processing

    Get PDF
    To learn and mimic how the brain processes information has been a major research challenge for decades. Despite the efforts, little is known about how we encode, maintain and retrieve information. One hypothesis assumes that transient states are generated in our intricate network of neurons when the brain is stimulated by a sensory input. Based on this idea, powerful computational schemes have been developed. These schemes, known as machine-learning techniques, include artificial neural networks, support vector machines and reservoir computing, among others. In this paper, we concentrate on the reservoir computing (RC) technique using delay-coupled systems. Unlike traditional RC, where the information is processed in large recurrent networks of interconnected artificial neurons, we choose a minimal design, implemented via a simple nonlinear dynamical system subject to a self-feedback loop with delay. This design is not intended to represent an actual brain circuit, but aims at finding the minimum ingredients that allow developing an efficient information processor. This simple scheme not only allows us to address fundamental questions but also permits simple hardware implementations. By reducing the neuro-inspired reservoir computing approach to its bare essentials, we find that nonlinear transient responses of the simple dynamical system enable the processing of information with excellent performance and at unprecedented speed. We specifically explore different hardware implementations and, by that, we learn about the role of nonlinearity, noise, system responses, connectivity structure, and the quality of projection onto the required high-dimensional state space. Besides the relevance for the understanding of basic mechanisms, this scheme opens direct technological opportunities that could not be addressed with previous approaches.
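
    The minimal design described here fits in a few lines; the sketch below uses the common simplification in which each virtual node of the delay loop is updated independently from its own delayed value, with a binary input mask and a ridge-regression readout. The parameter values and the toy target are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, ETA, GAMMA = 200, 0.8, 0.3          # virtual nodes, feedback gain, input gain
mask = rng.choice([-0.1, 0.1], N)      # binary input mask

def delay_reservoir(u):
    """Single nonlinear node with delayed feedback, time-multiplexed into N virtual nodes."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for k, u_k in enumerate(u):
        x = np.tanh(ETA * x + GAMMA * mask * u_k)
        states[k] = x
    return states

u = rng.uniform(-0.8, 0.8, 3000)
y = np.roll(u, 2) ** 2                 # toy target: squared input from two steps back
X = delay_reservoir(u)
w = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ y)   # ridge readout
print("NMSE:", np.mean((X @ w - y) ** 2) / np.var(y))
```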

    Improving time series recognition and prediction with networks and ensembles of passive photonic reservoirs

    Get PDF
    As the performance gains of traditional von Neumann computing diminish, new approaches to computing need to be found. A promising approach for low-power computing at high bitrates is integrated photonic reservoir computing. In the past, though, the feasible reservoir size and computational power of integrated photonic reservoirs have been limited by hardware constraints. An alternative solution to building larger reservoirs is the combination of several small reservoirs to match or exceed the performance of a single bigger one. This paper summarizes our efforts to increase the available computational power by combining multiple reservoirs into a single computing architecture. We investigate several possible combination techniques and evaluate their performance using the classic XOR and header recognition tasks as well as the well-known Santa Fe chaotic laser prediction task. Our findings suggest that a new paradigm of feeding a reservoir's output into the readout structure of the next one shows consistently good results for various tasks, as well as for both electrical and optical readouts and coupling schemes.
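
    One possible reading of that chaining paradigm, sketched with two small echo state networks standing in for the photonic reservoirs: the trained readout output of the first reservoir is appended as an extra feature to the states of the second before its own readout is trained. Reservoir sizes, spectral radius and the toy task are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def make_esn(n, rho=0.9):
    W = rng.normal(size=(n, n))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius rho
    return W, rng.uniform(-1, 1, n)

def run_esn(W, w_in, u):
    x, states = np.zeros(len(w_in)), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x)
    return np.array(states)

def ridge(X, y, lam=1e-6):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

u = rng.uniform(-0.5, 0.5, 4000)
y = np.sin(np.pi * np.roll(u, 1))      # toy nonlinear memory task

W1, w_in1 = make_esn(30)
X1 = run_esn(W1, w_in1, u)
y1 = X1 @ ridge(X1, y)                 # output of the first reservoir's readout

W2, w_in2 = make_esn(30)
X2 = run_esn(W2, w_in2, u)
X2_aug = np.column_stack([X2, y1])     # feed the first output into the second readout
w2 = ridge(X2_aug, y)
print("chained NMSE:", np.mean((X2_aug @ w2 - y) ** 2) / np.var(y))
```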

    A training algorithm for networks of high-variability reservoirs

    Get PDF
    Physical reservoir computing approaches have gained increased attention in recent years due to their potential for low-energy, high-performance computing. Despite recent successes, there are bounds to what one can achieve simply by making physical reservoirs larger. Therefore, we argue that a switch from single-reservoir computing to multi-reservoir and even deep physical reservoir computing is desirable. Given that error backpropagation cannot be used directly to train a large class of multi-reservoir systems, we propose an alternative framework that combines the power of backpropagation with the speed and simplicity of classic training algorithms. In this work we report the findings of an experiment conducted to evaluate the general feasibility of our approach. We train a network of three Echo State Networks to perform the well-known NARMA-10 task, where we use intermediate targets derived through backpropagation. Our results indicate that our proposed method is well suited to train multi-reservoir systems in an efficient way.
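
    The NARMA-10 benchmark named here is defined by a standard recurrence, reproduced in the sketch below (the input range follows the usual convention; the series length and train/test split are assumptions). Deriving the intermediate targets for each sub-reservoir via backpropagation is the paper's contribution and is not reproduced here:

```python
import numpy as np

def narma10(T, seed=0):
    """Generate the standard NARMA-10 input/target series."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0, 0.5, T)         # i.i.d. input in [0, 0.5]
    y = np.zeros(T)
    for t in range(9, T - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    # note: for some input realizations the recurrence diverges; it is then regenerated
    return u, y

u, y = narma10(6000)
u_train, y_train, u_test, y_test = u[:5000], y[:5000], u[5000:], y[5000:]
```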