Training Passive Photonic Reservoirs with Integrated Optical Readout
As Moore's law comes to an end, neuromorphic approaches to computing are on
the rise. One of these, passive photonic reservoir computing, is a strong
candidate for computing at high bitrates (> 10 Gbps) and with low energy
consumption. Currently though, both benefits are limited by the necessity to
perform training and readout operations in the electrical domain. Thus, efforts
are currently underway in the photonic community to design an integrated
optical readout, which allows all operations to be performed in the optical domain.
In addition to the technological challenge of designing such a readout, new
algorithms have to be designed in order to train it. Foremost, suitable
algorithms need to be able to deal with the fact that the actual on-chip
reservoir states are not directly observable. In this work, we investigate
several options for such a training algorithm and propose a solution in which
the complex states of the reservoir can be observed by appropriately setting
the readout weights, while iterating over a predefined input sequence. We
perform numerical simulations in order to compare our method with an ideal
baseline requiring full observability as well as with an established black-box
optimization approach (CMA-ES).
Comment: Accepted for publication in IEEE Transactions on Neural Networks and
Learning Systems (TNNLS-2017-P-8539.R1), copyright 2018 IEEE. This research
was funded by the EU Horizon 2020 PHRESCO Grant (Grant No. 688579) and the
BELSPO IAP P7-35 program Photonics@be. 11 pages, 9 figures.
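The state-observation trick described in the abstract can be sketched as follows. The readout produces a weighted sum of the (hidden) complex node states, so setting the weights to a one-hot vector and replaying the same predefined input sequence exposes one node's state at the output; the function name and the black-box signature `reservoir_output_fn(weights, input_seq)` are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def observe_states(reservoir_output_fn, n_nodes, input_seq):
    """Recover per-node reservoir states through the readout (toy sketch).

    `reservoir_output_fn(w, input_seq)` is a hypothetical black box that
    returns the readout signal y[t] = sum_i w_i * x_i[t]; the states x_i
    themselves are not directly observable.  Setting w to the one-hot
    vector e_k and iterating over the same input sequence makes the
    output equal to the state of node k.
    """
    T = len(input_seq)
    states = np.zeros((T, n_nodes), dtype=complex)
    for k in range(n_nodes):
        w = np.zeros(n_nodes)
        w[k] = 1.0                       # select node k only
        states[:, k] = reservoir_output_fn(w, input_seq)
    return states
```

Once the full state matrix is recovered this way, the readout weights can be trained offline exactly as in the fully observable baseline.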
Anchor Pruning for Object Detection
This paper proposes anchor pruning for object detection in one-stage
anchor-based detectors. While pruning techniques are widely used to reduce the
computational cost of convolutional neural networks, they tend to focus on
optimizing the backbone networks where often most computations are. In this
work we demonstrate an additional pruning technique, specifically for object
detection: anchor pruning. With more efficient backbone networks and a growing
trend of deploying object detectors on embedded systems where post-processing
steps such as non-maximum suppression can be a bottleneck, the impact of the
anchors used in the detection head is becoming increasingly important. In
this work, we show that many anchors in the object detection head can be
removed without any loss in accuracy. With additional retraining, anchor
pruning can even lead to improved accuracy. Extensive experiments on SSD and MS
COCO show that the detection head can be made up to 44% more efficient while
simultaneously increasing accuracy. Further experiments on RetinaNet and PASCAL
VOC show the general effectiveness of our approach. We also introduce
`overanchorized' models that can be used together with anchor pruning to
eliminate hyperparameters related to the initial shape of anchors.
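The core idea of removing anchors that cost no accuracy can be sketched as a greedy loop. This is a simplified illustration, not the paper's actual algorithm: `evaluate(anchors)` is a hypothetical callback returning detection accuracy (e.g. mAP) for a given anchor set, and retraining between steps is omitted.

```python
def prune_anchors(anchors, evaluate, tolerance=0.0):
    """Greedily drop anchors whose removal costs at most `tolerance` accuracy.

    `anchors` is a list of anchor shapes (e.g. (width, height) tuples) used
    by a one-stage detection head; `evaluate` is an assumed accuracy oracle.
    """
    baseline = evaluate(anchors)
    kept = list(anchors)
    for a in list(kept):
        trial = [x for x in kept if x != a]
        if trial and evaluate(trial) >= baseline - tolerance:
            kept = trial                 # removal is (near-)free: prune it
            baseline = evaluate(kept)
    return kept
```

Each pruned anchor removes its share of head computation and of the boxes fed into non-maximum suppression, which is where the efficiency gain on embedded systems comes from.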
A training algorithm for networks of high-variability reservoirs
Physical reservoir computing approaches have gained increased attention in recent years due to their potential for low-energy high-performance computing. Despite recent successes, there are bounds to what one can achieve simply by making physical reservoirs larger. Therefore, we argue that a switch from single-reservoir computing to multi-reservoir and even deep physical reservoir computing is desirable. Given that error backpropagation cannot be used directly to train a large class of multi-reservoir systems, we propose an alternative framework that combines the power of backpropagation with the speed and simplicity of classic training algorithms. In this work, we report our findings from an experiment conducted to evaluate the general feasibility of our approach. We train a network of 3 Echo State Networks to perform the well-known NARMA-10 task, where we use intermediate targets derived through backpropagation. Our results indicate that our proposed method is well-suited to train multi-reservoir systems in an efficient way.
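The notion of a backpropagation-derived intermediate target can be illustrated with a deliberately simplified model: approximate the downstream stage by a fixed linear map V (an assumption made purely for this sketch), take one gradient step on the squared error with respect to the intermediate signal z, and use the result as the target for the upstream reservoir's readout, which is then fitted with a classic method such as ridge regression.

```python
import numpy as np

def intermediate_target(z, y_target, V, eta=0.05):
    """One-gradient-step intermediate target (toy stand-in for backprop).

    With the downstream stage linearized as y = V @ z, the squared error
    L = ||V z - y*||^2 has gradient dL/dz = 2 V^T (V z - y*).  Stepping z
    against this gradient yields a target z* for the upstream readout.
    """
    grad = 2.0 * V.T @ (V @ z - y_target)
    return z - eta * grad
```

For a small enough step size `eta`, the downstream error at `z*` is no larger than at `z`, which is what makes the derived target useful for training the upstream stage.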
A multiple-input strategy to efficient integrated photonic reservoir computing
Photonic reservoir computing has evolved into a viable contender for the next generation of analog computing platforms as industry looks beyond standard transistor-based computing architectures. Integrated photonic reservoir computing, particularly on the silicon-on-insulator platform, presents a CMOS-compatible, wide bandwidth, parallel platform for implementation of optical reservoirs. A number of demonstrations of the applicability of this platform for processing optical telecommunication signals have been made in the recent past. In this work, we take it a stage further by performing an architectural search for designs that yield the best performance while maintaining power efficiency. We present numerical simulations for an optical circuit model of a 16-node integrated photonic reservoir with the input signal injected in combinations of 2, 4, and 8 nodes, or into all 16 nodes. The reservoir is composed of a network of passive photonic integrated circuit components with the required nonlinearity introduced at the readout point with a photodetector. The resulting error performance on the temporal XOR task for these multiple input cases is compared with that of the typical case of input to a single node. We additionally introduce for the first time in our simulations a realistic model of a photodetector. Based on this, we carry out a full power-level exploration for each of the above input strategies. Multiple-input reservoirs achieve better performance and power efficiency than single-input reservoirs. For the same input power level, multiple-input reservoirs yield lower error rates. The best multiple-input reservoir designs can achieve the error rates of single-input ones with at least two orders of magnitude less total input power. These results can be generally attributed to the increase in richness of the reservoir dynamics and the fact that signals stay longer within the reservoir. 
If we account for all loss and noise contributions, the minimum input power for error-free performance for the optimal design is found to be in the range of approximately 1 mW.
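The multiple-input strategy can be sketched as an input-injection mask: the same optical signal is split over k of the 16 nodes while the total injected power is held fixed, so each branch carries power P/k and a field amplitude proportional to sqrt(P/k). Equal splitting and the particular node choice are simplifying assumptions for illustration.

```python
import numpy as np

def input_mask(n_nodes, inject_nodes, total_power=1.0):
    """Field-amplitude injection mask for an n-node photonic reservoir.

    Splits a fixed total optical power equally over the selected nodes;
    amplitudes scale as sqrt(power) so the summed |amplitude|^2 over all
    nodes equals `total_power` regardless of how many nodes are driven.
    """
    k = len(inject_nodes)
    amp = np.sqrt(total_power / k)
    mask = np.zeros(n_nodes, dtype=complex)
    mask[list(inject_nodes)] = amp
    return mask
```

Comparing masks with k = 1, 2, 4, 8, and 16 at the same `total_power` mirrors the power-level exploration described above: richer dynamics per injected watt, not more injected watts.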
On-chip passive photonic reservoir computing with integrated optical readout
Photonic reservoir computing is a recent bio-inspired paradigm for signal processing. Despite first successes, the paradigm still faces challenges. We address some of these challenges and introduce our approaches to solve them. In detail, we discuss how integrated reservoirs can be scaled up by injecting multiple copies of the input. Further, we introduce a new hardware-friendly training method for integrated optical readouts.
Improving time series recognition and prediction with networks and ensembles of passive photonic reservoirs
As the performance increase of traditional von Neumann computing attenuates, new approaches to computing need to be found. A promising approach for low-power computing at high bitrates is integrated photonic reservoir computing. In the past though, the feasible reservoir size and computational power of integrated photonic reservoirs have been limited by hardware constraints. An alternative solution to building larger reservoirs is the combination of several small reservoirs to match or exceed the performance of a single bigger one. This paper summarizes our efforts to increase the available computational power by combining multiple reservoirs into a single computing architecture. We investigate several possible combination techniques and evaluate their performance using the classic XOR and header recognition tasks as well as the well-known Santa Fe chaotic laser prediction task. Our findings suggest that a new paradigm of feeding a reservoir's output into the readout structure of the next one shows consistently good results for various tasks as well as for both electrical and optical readouts and coupling schemes.
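The cascading paradigm described above can be sketched with two toy Echo State Networks: the first reservoir's trained readout output is appended as an extra feature to the second reservoir's readout. The network sizes, the ridge-regression readout, and the simple delayed-recall task used below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def esn_states(u, n=50, rho=0.9, seed=0):
    """Run a toy Echo State Network and collect its state trajectory."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    w_in = rng.standard_normal(n)
    x, X = np.zeros(n), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        X.append(x.copy())
    return np.asarray(X)

def ridge_fit(F, y, alpha=1e-6):
    """Ridge-regression readout weights for feature matrix F."""
    return np.linalg.solve(F.T @ F + alpha * np.eye(F.shape[1]), F.T @ y)

def chained_readout(u, y_target):
    """Feed reservoir 1's readout output into reservoir 2's readout.

    Both reservoirs see the raw input; the second readout additionally
    receives the first readout's output as one extra feature, mirroring
    the cascading idea described in the abstract.
    """
    X1 = esn_states(u, seed=0)
    y1 = X1 @ ridge_fit(X1, y_target)
    X2 = esn_states(u, seed=1)
    F2 = np.hstack([X2, y1[:, None]])   # reservoir-2 states + stage-1 output
    return F2 @ ridge_fit(F2, y_target)
```

The appeal of this scheme is that each stage is trained with a cheap closed-form fit, while the cascade still lets later readouts refine the earlier ones' estimates.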
Silicon photonics for neuromorphic information processing
We present our latest results on silicon photonics neuromorphic information processing based, among others, on techniques like reservoir computing. We will discuss aspects like scalability, novel architectures for enhanced power efficiency, as well as all-optical readout. Additionally, we will touch upon new machine learning techniques to operate these integrated readouts. Finally, we will show how these systems can be used for high-speed low-power information processing for applications like recognition of biological cells.