Photonic Delay Systems as Machine Learning Implementations
Nonlinear photonic delay systems present interesting implementation platforms
for machine learning models. They can be extremely fast, offer great degrees of
parallelism and potentially consume far less power than digital processors. So
far they have been successfully employed for signal processing using the
Reservoir Computing paradigm. In this paper we show that their range of
applicability can be greatly extended if we use gradient descent with
backpropagation through time on a model of the system to optimize the input
encoding of such systems. We perform physical experiments that demonstrate that
the obtained input encodings work well in reality, and we show that optimized
systems perform significantly better than the common Reservoir Computing
approach. The results presented here demonstrate that common gradient descent
techniques from machine learning may well be applicable on physical
neuro-inspired analog computers.
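The training idea described in this abstract, simulating a differentiable model of the delay system, backpropagating the task error through time, and using the gradient to optimize the input encoding, can be sketched in a few lines. The toy model below (leak-free virtual nodes, a tanh nonlinearity, a fixed random readout, and all parameter values) is an illustrative assumption, not the authors' actual system model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 200                    # virtual nodes, time steps (assumed sizes)
a = 0.8                           # feedback strength (assumed value)
u = rng.standard_normal(T)        # input signal
d = np.roll(u, 3)                 # toy target: the input delayed by 3 steps
m = 0.1 * rng.standard_normal(N)  # input encoding (mask), to be optimized
w = 0.1 * rng.standard_normal(N)  # fixed linear readout for this sketch

def simulate(m):
    """Unrolled toy delay-system model: x[t+1] = tanh(a*x[t] + m*u[t])."""
    x = np.zeros((T + 1, N))
    for t in range(T):
        x[t + 1] = np.tanh(a * x[t] + m * u[t])
    return x

def loss_and_grad(m):
    x = simulate(m)
    y = x[1:] @ w                 # readout at every time step
    e = y - d
    loss = 0.5 * np.mean(e ** 2)
    # backpropagation through time: accumulate dloss/dm backwards
    g = np.zeros(N)
    delta = np.zeros(N)           # dloss/ds at step t+1
    for t in range(T - 1, -1, -1):
        dx = w * e[t] / T + a * delta        # dloss/dx[t+1]
        delta = dx * (1.0 - x[t + 1] ** 2)   # through the tanh
        g += delta * u[t]                    # s[t] = a*x[t] + m*u[t]
    return loss, g

losses = []
for step in range(200):           # plain gradient descent on the mask
    loss, g = loss_and_grad(m)
    losses.append(loss)
    m -= 0.5 * g
```

The same unrolled-simulation gradient transfers to richer models (virtual-node coupling, measured device responses); only the `simulate` function changes.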
Nanophotonic reservoir computing with photonic crystal cavities to generate periodic patterns
Reservoir computing (RC) is a technique in machine learning inspired by neural systems. RC has been used successfully to solve complex problems such as signal classification and signal generation. These systems are mainly implemented in software, which limits their speed and power efficiency. Several optical and optoelectronic implementations have been demonstrated, in which the system has signals with an amplitude and phase. It has been shown that these enrich the dynamics of the system, which is beneficial for performance. In this paper, we introduce a novel optical architecture based on nanophotonic crystal cavities. This allows us to integrate many neurons on one chip which, compared with other photonic solutions, most closely resembles a classical neural network. Furthermore, the components are passive, which simplifies the design and reduces the power consumption. To assess the performance of this network, we train a photonic network to generate periodic patterns, using an alternative online learning rule called first-order reduced and corrected error. For this, we first train a classical hyperbolic tangent reservoir, but then we vary some of its properties to incorporate typical aspects of a photonic reservoir, such as the use of continuous-time versus discrete-time signals and the use of complex-valued versus real-valued signals. Then, the nanophotonic reservoir is simulated and we explore the role of relevant parameters such as the topology, the phases between the resonators, the number of nodes that are biased and the delay between the resonators. It is important that these parameters are chosen such that no strong self-oscillations occur. Finally, our results show that for a signal generation task a complex-valued, continuous-time nanophotonic reservoir outperforms a classical (i.e., discrete-time, real-valued) leaky hyperbolic tangent reservoir (normalized root-mean-square error NRMSE = 0.030 versus NRMSE = 0.127).
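The online learning rule mentioned here (first-order reduced and corrected error) trains the readout weights with recursive least squares while the network runs with its own output fed back. A minimal sketch with a generic rate reservoir (the recurrent weights, feedback scheme, and all parameter values are assumptions for illustration; the photonic-specific aspects are not modelled):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 2000
W = 1.2 * rng.standard_normal((N, N)) / np.sqrt(N)  # recurrent weights (assumed scaling)
wfb = rng.uniform(-1.0, 1.0, N)   # feedback weights from the generated output
w = np.zeros(N)                   # readout weights, trained online
P = np.eye(N)                     # running inverse-correlation estimate (RLS)
target = np.sin(2 * np.pi * np.arange(T) / 50)      # periodic pattern to generate

x = 0.1 * rng.standard_normal(N)
z, dt = 0.0, 0.1
err_before, err_after = [], []
for t in range(T):
    # leaky rate dynamics driven by the network's own output z
    x = (1 - dt) * x + dt * (W @ np.tanh(x) + wfb * z)
    r = np.tanh(x)
    z = w @ r                     # output fed back on the next step
    e = z - target[t]
    err_before.append(abs(e))
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= e * k                    # recursive-least-squares weight update
    err_after.append(abs(w @ r - target[t]))
```

Each update shrinks the instantaneous output error by the factor 1/(1 + rᵀPr), which is what keeps the fed-back output close to the target while the weights converge.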
Advances in photonic reservoir computing on an integrated platform
Reservoir computing is a recent approach from the fields of machine learning and artificial neural networks to solve a broad class of complex classification and recognition problems, such as speech and image recognition. As is typical for methods from these fields, it involves systems that are trained on examples instead of using an algorithmic approach. It originated as a new training technique for recurrent neural networks in which the network is split into a reservoir that does the 'computation' and a simple readout function. This technique has performed among the state of the art. So far, implementations have been mainly software based, but a hardware implementation offers the promise of being low-power and fast. We previously demonstrated with simulations that a network of coupled semiconductor optical amplifiers could also be used for this purpose on a simple classification task. This paper discusses two new developments. First, using an amplifier reservoir on an isolated digit recognition task, we identified the delay between the nodes as the most important design parameter and show that, when optimized and combined with coherence, it even yields better results than classical hyperbolic tangent reservoirs. Second, we discuss recent advances in photonic reservoir computing with the use of resonator structures such as photonic crystal cavities and ring resonators. Using a network of resonators, feedback of the output to the network, and an appropriate learning rule, periodic signals can be generated in the optical domain. With the right parameters, these resonant structures can also exhibit spiking behaviour.
Optical signal processing with a network of semiconductor optical amplifiers in the context of photonic reservoir computing
Photonic reservoir computing is a hardware implementation of the concept of reservoir computing, which comes from the field of machine learning and artificial neural networks. This concept is very useful for solving all kinds of classification and recognition problems, such as time-series prediction and speech and image recognition. Reservoir computing often competes with the state of the art. Dedicated photonic hardware would offer advantages in speed and power consumption. We show that a network of coupled semiconductor optical amplifiers can be used as a reservoir by applying it to a benchmark isolated-word recognition task. The results are comparable to existing software implementations, and fabrication tolerances can actually improve the robustness.
Training Passive Photonic Reservoirs with Integrated Optical Readout
As Moore's law comes to an end, neuromorphic approaches to computing are on
the rise. One of these, passive photonic reservoir computing, is a strong
candidate for computing at high bitrates (> 10 Gbps) and with low energy
consumption. Currently though, both benefits are limited by the necessity to
perform training and readout operations in the electrical domain. Thus, efforts
are currently underway in the photonic community to design an integrated
optical readout, which allows all operations to be performed in the optical domain.
In addition to the technological challenge of designing such a readout, new
algorithms have to be designed in order to train it. Foremost, suitable
algorithms need to be able to deal with the fact that the actual on-chip
reservoir states are not directly observable. In this work, we investigate
several options for such a training algorithm and propose a solution in which
the complex states of the reservoir can be observed by appropriately setting
the readout weights, while iterating over a predefined input sequence. We
perform numerical simulations in order to compare our method with an ideal
baseline requiring full observability as well as with an established black-box
optimization approach (CMA-ES).
Comment: Accepted for publication in IEEE Transactions on Neural Networks and
Learning Systems (TNNLS-2017-P-8539.R1), copyright 2018 IEEE. This research
was funded by the EU Horizon 2020 PHRESCO Grant (Grant No. 688579) and the
BELSPO IAP P7-35 program Photonics@be. 11 pages, 9 figures.
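The key obstacle this abstract names, that the on-chip states are not directly observable, can be illustrated with a toy calculation. If the readout hardware only reports the optical power of a weighted sum of the complex node states, the states can still be reconstructed (up to one unobservable global phase) from power measurements taken with a few deliberately chosen weight settings. This is an illustrative reconstruction under assumed conditions (an ideal |w·x|² detector, freely settable complex weights), not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
# hidden complex reservoir states; only readout power is observable
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def power(wt):
    """Simulated photodetector after the weighted combiner: |sum_i w_i x_i|^2."""
    return abs(np.dot(wt, x)) ** 2

def e(i):
    v = np.zeros(n, dtype=complex)
    v[i] = 1.0
    return v

p = np.array([power(e(i)) for i in range(n)])   # node powers |x_i|^2
xr = np.zeros(n, dtype=complex)
xr[0] = np.sqrt(p[0])             # the global phase is unobservable; fix it
for i in range(1, n):
    re = (power(e(0) + e(i)) - p[0] - p[i]) / 2        # Re(x_0 * conj(x_i))
    im = (power(e(0) + 1j * e(i)) - p[0] - p[i]) / 2   # Im(x_0 * conj(x_i))
    xr[i] = np.conj(re + 1j * im) / xr[0]
# xr now equals x up to a single global phase factor
```

The identities |a+b|² = |a|²+|b|²+2Re(a·conj(b)) and |a+ib|² = |a|²+|b|²+2Im(a·conj(b)) supply the real and imaginary cross terms, which is why three weight settings per node suffice.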
Delayed Dynamical Systems: Networks, Chimeras and Reservoir Computing
We present a systematic approach to reveal the correspondence between time
delay dynamics and networks of coupled oscillators. After early demonstrations
of the usefulness of spatio-temporal representations of time-delay system
dynamics, extensive research on optoelectronic feedback loops has revealed
their immense potential for realizing complex system dynamics such as chimeras
in rings of coupled oscillators and applications to reservoir computing.
Delayed dynamical systems have been enriched in recent years through the
application of digital signal processing techniques. Very recently, we have
shown that one can significantly extend the capabilities and implement
networks with arbitrary topologies through the use of field programmable gate
arrays (FPGAs). This architecture allows the design of appropriate filters and
multiple time delays which greatly extend the possibilities for exploring
synchronization patterns in arbitrary topological networks. This has enabled us
to explore complex dynamics on networks with nodes that can be perfectly
identical, introduce parameter heterogeneities and multiple time delays, as
well as change network topologies to control the formation and evolution of
patterns of synchrony.
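The correspondence between a delay system and a network of coupled oscillators can be made concrete with the usual space-time representation: slicing the time axis into segments of one delay length turns each position within the delay line into a "virtual node". A minimal discrete-time sketch (the map, its parameters, and the drive are assumptions for illustration):

```python
import numpy as np

tau, rounds = 50, 40          # delay length and number of delay rounds (assumed)
a, eta = 0.9, 0.5             # feedback and input strength (assumed)
rng = np.random.default_rng(3)
s = rng.standard_normal(rounds * tau)       # external drive
x = np.zeros(rounds * tau)
for t in range(rounds * tau):
    fb = x[t - tau] if t >= tau else 0.0    # feedback delayed by one round
    x[t] = np.tanh(a * fb + eta * s[t])
# space-time representation: row k = delay round k, column i = virtual node i
grid = x.reshape(rounds, tau)
```

Read column-wise, each virtual node i evolves as its own driven map, grid[k+1, i] = tanh(a * grid[k, i] + eta * s[(k+1)*tau + i]); in this simplest map the columns are uncoupled, and it is exactly the filters and multiple delays mentioned above (e.g. in the FPGA implementation) that introduce coupling between virtual nodes and hence arbitrary network topologies.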
Photonic reservoir computing: a new approach to optical information processing
Despite ever-increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology coming from the field of machine learning and neural networks that has been successfully used in several pattern classification problems, like speech and image recognition. The implementations have so far been in software, limiting their speed and power efficiency. Photonics could be an excellent platform for a hardware implementation of this concept because of its inherent parallelism and unique nonlinear behaviour. We propose using a network of coupled semiconductor optical amplifiers (SOAs) and show in simulation that it can be used as a reservoir by comparing it on a benchmark speech recognition task to conventional software implementations. In spite of several differences, it performs as well as or better than conventional implementations. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. We also address the role phase plays in reservoir performance.
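The division of labour common to all the entries above, a fixed nonlinear "reservoir" plus a trained linear readout, is what makes hardware implementations such as the SOA network attractive: only the readout is adapted. A minimal software sketch with an abstract tanh reservoir and a ridge-regression readout (all sizes, scalings, and the toy task are assumed; the SOA physics is not modelled):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 50, 500
Win = rng.uniform(-0.5, 0.5, N)            # fixed random input weights
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9
u = rng.standard_normal(T)
d = np.roll(u, 2)             # toy task: recall the input from 2 steps ago

X = np.zeros((T, N))          # collected reservoir states
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + Win * u[t])
    X[t] = x

lam = 1e-6                    # ridge regularization
w = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ d)
nrmse = np.sqrt(np.mean((X @ w - d) ** 2)) / np.std(d)
```

Because training reduces to one linear solve over recorded states, the same recipe applies whether the states come from a simulation or from measurements of a physical device.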