
    Toward bio-inspired information processing with networks of nano-scale switching elements

    Unconventional computing explores multi-scale platforms connecting molecular-scale devices into networks for the development of scalable neuromorphic architectures, often based on new materials and components with new functionalities. We review some work investigating the functionalities of locally connected networks of different types of switching elements as computational substrates. In particular, we discuss reservoir computing with networks of nonlinear nanoscale components. In the usual neuromorphic paradigms, the network synaptic weights are adjusted as a result of a training/learning process. In reservoir computing, the nonlinear network acts as a dynamical system mixing and spreading the input signals over a large state space, and only a readout layer is trained. We illustrate the most important concepts with a few examples, featuring memristor networks with time-dependent and history-dependent resistances.
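    As a minimal illustration of the reservoir computing scheme described in this abstract, the Python sketch below uses an ordinary random echo state network as a stand-in for a memristor network (all sizes and parameters are illustrative assumptions): a fixed nonlinear network is driven by an input signal, and only a linear readout is trained, here by ridge regression.

```python
# Minimal reservoir-computing sketch (illustrative only): a fixed random
# nonlinear network stands in for the physical substrate; only the linear
# readout is trained, here by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000                      # reservoir size, number of time steps
u = np.sin(0.1 * np.arange(T + 1))    # toy input signal
target = u[1:]                        # task: one-step-ahead prediction

W_in = rng.uniform(-0.5, 0.5, N)      # fixed (untrained) input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                    # drive the fixed nonlinear dynamics
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

ridge = 1e-6                          # train only the readout (ridge regression)
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N), states.T @ target)
prediction = states @ W_out
print("train MSE:", np.mean((prediction - target) ** 2))
```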

    Parametrization of stochastic inputs using generative adversarial networks with application in geology

    We investigate artificial neural networks as a parametrization tool for stochastic inputs in numerical simulations. We address parametrization from the point of view of emulating the data generating process, instead of explicitly constructing a parametric form to preserve predefined statistics of the data. This is done by training a neural network to generate samples from the data distribution using a recent deep learning technique called generative adversarial networks. By emulating the data generating process, the relevant statistics of the data are replicated. The method is assessed in subsurface flow problems, where effective parametrization of underground properties such as permeability is important due to the high dimensionality and presence of high spatial correlations. We experiment with realizations of binary channelized subsurface permeability and perform uncertainty quantification and parameter estimation. Results show that the parametrization using generative adversarial networks is very effective in preserving visual realism as well as high-order statistics of the flow responses, while achieving a dimensionality reduction of two orders of magnitude.
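    The PyTorch sketch below illustrates the general idea; the architecture, sizes, and training loop are simplified assumptions for illustration, not the networks used in the paper. A generator maps a low-dimensional latent vector to a high-dimensional field sample and is trained adversarially against a discriminator, so the latent vector becomes the low-dimensional parametrization of the field.

```python
# Illustrative GAN sketch (assumed architecture, not the paper's): a generator
# maps a low-dimensional latent vector z to a high-dimensional field sample,
# which is how the parametrization / dimensionality reduction works.
import torch
import torch.nn as nn

latent_dim, field_dim = 30, 64 * 64   # assumed sizes for illustration

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, field_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(field_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial update; real_batch holds flattened training realizations."""
    b = real_batch.size(0)
    z = torch.randn(b, latent_dim)
    fake = G(z)

    # Discriminator step: distinguish real from generated samples
    d_loss = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# After training, a 30-value latent vector parametrizes a 4096-value field,
# i.e. a dimensionality reduction of roughly two orders of magnitude.
```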

    Fiber echo state network analogue for high-bandwidth dual-quadrature signal processing

    All-optical platforms for recurrent neural networks can offer higher computational speed and energy efficiency. To produce a major advance over currently available digital signal processing methods, the new system would need to have high bandwidth and operate on both signal quadratures (power and phase). Here we propose a fiber echo state network analogue (FESNA), the first optical technology that provides both high bandwidth (beyond previous limits) and dual-quadrature signal processing. We demonstrate the applicability of the designed system for prediction tasks and for the mitigation of distortions in optical communication systems with multilevel dual-quadrature encoded signals.
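    As a rough illustration of what dual-quadrature processing means for a reservoir, the sketch below uses a software echo state network rather than the proposed fiber system, with all parameters assumed: the real and imaginary parts of a complex baseband signal are fed into the reservoir as two input channels, and a readout is trained to predict both quadratures of the next sample.

```python
# Sketch of dual-quadrature handling (illustrative assumption, not the FESNA
# design): the two quadratures of a complex signal are stacked as a
# two-channel reservoir input so amplitude and phase are processed jointly.
import numpy as np

rng = np.random.default_rng(1)
T, N = 2000, 300
t = np.arange(T + 1)
field = np.exp(1j * 0.05 * t) * (1 + 0.3 * np.sin(0.02 * t))  # toy optical field

u = np.stack([field.real, field.imag], axis=1)   # dual-quadrature input, shape (T+1, 2)
W_in = rng.uniform(-0.5, 0.5, (N, 2))
W = rng.normal(0, 1, (N, N))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))        # keep the reservoir stable

x = np.zeros(N)
states = np.zeros((T, N))
for k in range(T):
    x = np.tanh(W @ x + W_in @ u[k])
    states[k] = x

# One linear readout per quadrature, trained to predict the next complex sample
Y = u[1:]                                        # targets: next-step I and Q
W_out = np.linalg.lstsq(states, Y, rcond=None)[0]
pred = states @ W_out                            # predicted (I, Q) pairs
print("NMSE:", np.mean((pred - Y) ** 2) / np.var(Y))
```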

    Reservoir Computing in Materio: An Evaluation of Configuration through Evolution

    Recent work has shown that computational substrates made from carbon nanotube/polymer mixtures can form trainable reservoir computers. This new reservoir computing platform uses computer-based evolutionary algorithms to optimise a set of electrical control signals that induce reservoir properties within the substrate. In the training process, evolution decides the values of analogue control signals (voltages) and the locations of inputs and outputs on the substrate that improve the performance of the subsequently trained reservoir readout. Here, we evaluate the performance of evolutionary search compared to randomly assigned electrical configurations. The substrate is trained and evaluated on time-series prediction using the Santa Fe laser-generated competition data (dataset A). In addition to the main investigation, we introduce two new features closely linked to the traditional reservoir computing architecture: an evolvable input-weighting mechanism and a reservoir time-scaling parameter. The experimental results show that evolved configurations across all four test substrates consistently produce reservoirs with greater performance than randomly configured reservoirs. The results also show that applying both input-weighting and time-scaling simultaneously can provide additional tuning to the task, improving performance. For one material, the evolved reservoir is shown to outperform, for this task, all other hardware-based reservoir computers found in the literature. The same material also outperforms a simple evolved simulated echo state network of the same size. The performance of this material is reported to be consistent both after long time periods and after reconfiguration to other tasks.
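    A minimal sketch of the configuration-through-evolution idea is given below, assuming a simple (1+4) evolution strategy and a toy surrogate in place of the physical substrate; the encoding (control voltages, input weights, time-scale) and all numbers are hypothetical.

```python
# Minimal sketch of evolving a hardware configuration (hypothetical stand-in
# for the carbon-nanotube substrate): an evolution strategy mutates control
# voltages, input weights and a time-scale, and the fitness is the error of a
# reservoir readout trained under that configuration.
import numpy as np

rng = np.random.default_rng(2)

def readout_error(config):
    """Placeholder fitness: in the real setup this would drive the substrate
    with `config`, train a linear readout, and return its prediction error.
    Here a toy quadratic surrogate is used purely for illustration."""
    target = np.linspace(-1, 1, config.size)
    return float(np.mean((config - target) ** 2))

dim = 12                              # e.g. 8 control voltages + 3 input weights + 1 time-scale
parent = rng.uniform(-1, 1, dim)
parent_fit = readout_error(parent)

for gen in range(200):                # (1+4) evolution strategy
    children = parent + 0.1 * rng.normal(size=(4, dim))
    fits = [readout_error(c) for c in children]
    best = int(np.argmin(fits))
    if fits[best] <= parent_fit:      # accept improvements (and ties, to drift on plateaus)
        parent, parent_fit = children[best], fits[best]

print("best configuration error:", parent_fit)
```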

    Delayed Dynamical Systems: Networks, Chimeras and Reservoir Computing

    We present a systematic approach to reveal the correspondence between time-delay dynamics and networks of coupled oscillators. After early demonstrations of the usefulness of spatio-temporal representations of time-delay system dynamics, extensive research on optoelectronic feedback loops has revealed their immense potential for realizing complex system dynamics such as chimeras in rings of coupled oscillators, and for applications to reservoir computing. Delayed dynamical systems have been enriched in recent years through the application of digital signal processing techniques. Very recently, we have shown that one can significantly extend the capabilities and implement networks with arbitrary topologies through the use of field programmable gate arrays (FPGAs). This architecture allows the design of appropriate filters and multiple time delays, which greatly extends the possibilities for exploring synchronization patterns in arbitrary topological networks. This has enabled us to explore complex dynamics on networks with nodes that can be perfectly identical, introduce parameter heterogeneities and multiple time delays, as well as change network topologies to control the formation and evolution of patterns of synchrony.
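    The correspondence between a single delayed node and a network of coupled oscillators can be sketched with the usual time-multiplexing construction, shown below with assumed parameters: the delay line is divided into N virtual nodes, and the node's inertia couples neighbouring virtual nodes into a ring.

```python
# Sketch of the delay-system / network correspondence (illustrative parameters):
# a single nonlinear node with delayed feedback, time-multiplexed into
# N_virtual "virtual nodes" spaced along the delay line, behaves like a ring
# of coupled oscillators and can serve as a reservoir.
import numpy as np

rng = np.random.default_rng(3)
N_virtual = 50                         # virtual nodes per delay interval
steps = 400                            # number of delay intervals simulated
eta, gamma = 0.4, 0.05                 # feedback and input scaling (assumed)

u = rng.uniform(-1, 1, steps)          # scalar input, one value per delay interval
mask = rng.choice([-1.0, 1.0], N_virtual)   # input mask distributing u over virtual nodes

x = np.zeros(N_virtual)                # state of one delay interval
states = np.zeros((steps, N_virtual))
for k in range(steps):
    prev = x.copy()                    # values one delay time in the past
    for i in range(N_virtual):
        drive = np.tanh(eta * prev[i] + gamma * mask[i] * u[k])
        neighbour = x[i - 1] if i > 0 else prev[-1]   # inertia couples neighbours into a ring
        x[i] = 0.6 * neighbour + 0.4 * drive
    states[k] = x
# `states` can now be used like the state matrix of a spatially extended network reservoir.
```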

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations per second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.