Toward bio-inspired information processing with networks of nano-scale switching elements
Unconventional computing explores multi-scale platforms connecting molecular-scale devices into networks for the development of scalable neuromorphic architectures, often based on new materials and components with new functionalities. We review work investigating the functionalities of locally connected networks of different types of switching elements as computational substrates. In particular, we discuss reservoir computing with networks of nonlinear nanoscale components. In the usual neuromorphic paradigms, the network synaptic weights are adjusted as a result of a training/learning process. In reservoir computing, the nonlinear network acts as a dynamical system that mixes and spreads the input signals over a large state space, and only a readout layer is trained. We illustrate the most important concepts with a few examples, featuring memristor networks with time-dependent and history-dependent resistances.
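The history-dependent resistance mentioned above can be sketched with a simple memristive element whose internal state integrates the applied current. This is a generic HP-style model for illustration only; the review does not prescribe a specific device model, and the parameter values below are assumptions chosen to make the state sweep its range within one drive cycle.

```python
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # limiting resistances (illustrative values)
MU = 5e4                    # state-update rate (illustrative assumption)

def simulate(v, dt=1e-3):
    """Drive the memristor with a voltage sequence v; return the resistance trace."""
    w, trace = 0.5, []       # internal state w in [0, 1]
    for v_t in v:
        R = R_ON * w + R_OFF * (1.0 - w)        # resistance set by the state w
        i = v_t / R                              # Ohm's law at this instant
        w = float(np.clip(w + MU * i * dt, 0.0, 1.0))  # state integrates the current
        trace.append(R)
    return np.array(trace)

t = np.linspace(0, 1, 1000)
R_trace = simulate(np.sin(2 * np.pi * t))        # sinusoidal drive, one period
```

Because the state `w` accumulates the drive history, the same instantaneous voltage can correspond to different resistances, which is the memory effect that such networks exploit as a computational resource.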
Advances in Biologically Inspired Reservoir Computing
The interplay between randomness and optimization has always been a major theme in the design of neural networks [3]. In the last 15 years, the success of reservoir computing (RC) has shown that, in many scenarios, the algebraic structure of the recurrent component is far more important than the precise fine-tuning of its weights. As long as the recurrent part of the network possesses a form of fading memory of the input, and its activations are sufficiently heterogeneous, the dynamics of the neurons are enough to efficiently process many spatio-temporal signals. Even though it is feasible today to fully optimize deep recurrent networks, their implementation still requires a great deal of experience and practice, not to mention vast computational resources, limiting their applicability in simpler architectures (e.g., embedded systems) or in areas where time is of key importance (e.g., online systems). Not surprisingly, then, RC remains a powerful tool for quickly solving dynamical problems, and it has become invaluable for modeling and analysis in neuroscience.
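The RC recipe described above (a fixed random recurrent network with fading memory, plus a trained linear readout) can be sketched as a minimal echo state network. The network size, spectral radius, and toy delay task below are illustrative assumptions, not taken from any of the reviewed works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; scaling the spectral radius below 1 is a common
# way to obtain the fading-memory property.
n_res, n_in = 200, 1
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def run_reservoir(u):
    """Drive the reservoir with input sequence u of shape (T, n_in); return states (T, n_res)."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)   # nonlinear mixing of the input history
        states.append(x.copy())
    return np.array(states)

# Toy task: recall the input from two steps ago (requires short-term memory).
T = 1000
u = rng.uniform(-1, 1, (T, 1))
X = run_reservoir(u)
y = np.roll(u[:, 0], 2)

# Only the linear readout is trained (ridge regression); W and W_in stay fixed.
washout = 50
A, b = X[washout:], y[washout:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)
mse = np.mean((A @ w_out - b) ** 2)
```

Training reduces to a single linear least-squares solve, which is what makes RC so much cheaper to set up than fully optimized recurrent networks.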
2022 roadmap on neuromorphic computing and engineering
Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that exchange data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, i.e., 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or to deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to store and process large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and to provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own views of the current state and future challenges in each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
Photonic neuromorphic information processing and reservoir computing
Photonic neuromorphic computing is attracting tremendous research interest, catalyzed in no small part by the rise of deep learning in many applications. In this paper, we review some of the exciting work that has been going on in this area and then focus on one particular technology, namely photonic reservoir computing.
Ultrafast single-channel machine vision based on neuro-inspired photonic computing
High-speed machine vision is increasingly important in both scientific and technological applications. Neuro-inspired photonic computing is a promising approach to speeding up machine vision with ultralow latency. However, the processing rate is fundamentally limited by the low frame rate of image sensors, which typically operate at tens of hertz. Here, we propose an image-sensor-free machine vision framework that optically processes real-world visual information through only a single input channel, based on a random temporal encoding technique. This approach allows compressive acquisition of visual information with a single channel at gigahertz rates, outperforming conventional approaches, and enables direct photonic processing using a photonic reservoir computer in the time domain. We experimentally demonstrate that the proposed approach is capable of high-speed image recognition and anomaly detection, and furthermore that it can be used for high-speed imaging. The proposed approach is multipurpose and can be extended to a wide range of applications, including tracking, controlling, and capturing sub-nanosecond phenomena.
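The compressive single-channel acquisition described above can be illustrated with a generic random-mask encoding: each measurement modulates the whole scene with a random pattern and sums it onto one detector, producing a short 1-D time series. This is a compressive-sensing analogue for illustration only; the mask type, pixel count, and measurement count are assumptions, not the authors' optical setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pixels, n_samples = 28 * 28, 64        # 784-pixel scene, 64 measurements (illustrative)
masks = rng.choice([0.0, 1.0], size=(n_samples, n_pixels))  # random binary exposure masks

def encode(image):
    """Compress a flattened image into a single-channel time series,
    one scalar detector reading per mask exposure."""
    return masks @ image

scene = rng.uniform(0, 1, n_pixels)      # stand-in for a real-world scene
trace = encode(scene)                    # 64 samples instead of 784 pixels
```

The resulting time series is far shorter than the pixel count, and downstream recognition would operate on `trace` directly (in the paper, via a photonic reservoir computer) rather than on a reconstructed image.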