    Quantum materials for energy-efficient neuromorphic computing

    Neuromorphic computing approaches become increasingly important as we address future needs for efficiently processing massive amounts of data. The unique attributes of quantum materials can help address these needs by enabling new energy-efficient device concepts that implement neuromorphic ideas at the hardware level. In particular, strong correlations give rise to highly non-linear responses, such as conductive phase transitions that can be harnessed for short- and long-term plasticity. Similarly, magnetization dynamics are strongly non-linear and can be utilized for data classification. This paper discusses select examples of these approaches and provides a perspective on the current opportunities and challenges for assembling quantum-material-based devices with neuromorphic functionalities into larger emergent complex network systems.
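    As a toy illustration of how a conductive phase transition could serve as short-term plasticity (a generic sketch, not a model from the paper), the following Python snippet treats a volatile threshold-switching "synapse" whose conductance jumps abruptly when a voltage pulse exceeds a hypothetical threshold and then relaxes back toward the insulating state; all parameter names and values are illustrative assumptions.

```python
# Hedged sketch: a volatile Mott-like "synapse". Conductance switches
# non-linearly past a voltage threshold (insulator-to-metal transition)
# and relaxes back, giving a short-term-plasticity-like response.
# All parameters are illustrative, not taken from the paper.
import numpy as np

def mott_synapse(pulses, g_off=1e-6, g_on=1e-3, v_th=1.0, tau=5.0, dt=1.0):
    """Return the conductance trace for a train of voltage pulses."""
    g = g_off
    trace = []
    for v in pulses:
        if v > v_th:                  # abrupt, non-linear phase transition
            g = g_on
        g = g_off + (g - g_off) * np.exp(-dt / tau)   # volatile relaxation
        trace.append(g)
    return np.array(trace)

# Closely spaced pulses keep the device conductive (facilitation);
# sparse pulses let it relax back (forgetting).
dense = mott_synapse(np.tile([1.2, 0.0, 0.0], 5))
sparse = mott_synapse(np.tile([1.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 5))
print(dense[-1], sparse[-1])          # dense drive ends at higher conductance
```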

    Energy-Efficient Ferroelectric Field-Effect Transistor-Based Oscillators for Neuromorphic System Design

    Neuromorphic or bioinspired computational platforms, as an alternative to von Neumann architectures, have benefitted from the excellent features of emerging technologies to emulate the behavior of the biological brain in an accurate and energy-efficient way. Integrability with CMOS technology and low power consumption make the ferroelectric field-effect transistor (FEFET) an attractive candidate for such paradigms, particularly for image processing. In this article, we use the FEFET device to build energy-efficient oscillatory neurons as the main components of neural networks for image-processing applications, especially edge detection. Based on our simulation results, we estimate a significant energy-efficiency gain over other technologies, roughly a 5-120× reduction depending on the design.
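    For intuition on how oscillatory neurons can perform edge detection (a hedged sketch, not the authors' FEFET circuit), the snippet below lets each pixel set an oscillator's frequency; after a fixed settling time, a large phase mismatch between neighboring oscillators marks an edge. The frequencies, settling time, and threshold are invented for illustration.

```python
# Hedged sketch: phase-based edge detection with pixel-driven oscillators.
# All constants are illustrative assumptions.
import numpy as np

def oscillator_edges(img, f0=1e3, gain=60.0, t=5e-3, phase_th=np.pi / 4):
    phase = 2 * np.pi * (f0 + gain * img) * t    # accumulated phase per pixel
    # wrapped phase differences with right and lower neighbors
    dx = np.angle(np.exp(1j * (phase[:, 1:] - phase[:, :-1])))
    dy = np.angle(np.exp(1j * (phase[1:, :] - phase[:-1, :])))
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, :-1] |= np.abs(dx) > phase_th
    edges[:-1, :] |= np.abs(dy) > phase_th
    return edges

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                 # vertical step edge
print(oscillator_edges(img).astype(int))         # marks the step boundary
```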

    A Mixed-Signal Oscillatory Neural Network for Scalable Analog Computations in Phase Domain

    Digital electronics based on the von Neumann architecture are reaching their limits for solving large-scale problems, essentially due to memory fetching. Instead, recent efforts to bring the memory near the computation have enabled highly parallel computations at low energy cost. The Oscillatory Neural Network (ONN) is one example of an in-memory analog computing paradigm consisting of coupled oscillating neurons. When implemented in hardware, ONNs naturally perform gradient descent on an energy landscape, which makes them particularly suited for solving optimization problems. Although the ONN's computational capability and its link with the Ising model have been known for decades, implementing a large-scale ONN remains difficult. Beyond the oscillators' variations, there are still design challenges such as having compact, programmable synapses and a modular architecture for solving large problem instances. In this paper, we propose a mixed-signal architecture named Saturated Kuramoto ONN (SKONN) that leverages both the analog and digital domains for efficient ONN hardware implementation. SKONN computes in the analog phase domain while propagating the information digitally to facilitate scaling up the ONN size. SKONN's separation between computation and propagation enhances robustness and enables a feed-forward phase propagation that is showcased for the first time. Moreover, the SKONN architecture leads to unique binarizing dynamics that are particularly suitable for solving NP-hard combinatorial optimization problems such as finding the Weighted Max-cut of a graph. We find that SKONN's accuracy is as good as that of the Goemans-Williamson 0.878-approximation algorithm for Max-cut, whereas SKONN's computation time only grows logarithmically. We report on Weighted Max-cut experiments using a 9-neuron SKONN proof-of-concept on a PCB. Finally, we present a low-power 16-neuron SKONN integrated circuit and illustrate SKONN's feed-forward ability while computing the XOR function.
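    The gradient-descent picture can be made concrete with a generic Kuramoto model plus second-harmonic injection, the textbook ONN route to Max-cut (a software sketch of the general principle, not of SKONN's mixed-signal circuit): couplings equal to the edge weights push connected oscillators toward anti-phase, and a sin(2θ) term binarizes the phases to {0, π}, so the sign of cos(θ) reads out the partition. All constants are illustrative, and like any such heuristic the dynamics may settle in a local optimum.

```python
# Hedged sketch: Kuramoto ONN with second-harmonic injection for Max-cut.
import numpy as np

def onn_maxcut(W, K_s=1.0, dt=0.05, steps=2000, seed=0):
    """Relax coupled phases; read the cut from the sign of cos(theta)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        diff = theta[:, None] - theta[None, :]   # diff[i, j] = theta_i - theta_j
        # edge terms push connected oscillators toward anti-phase;
        # the sin(2*theta) injection binarizes phases to {0, pi}
        theta += dt * ((W * np.sin(diff)).sum(axis=1) - K_s * np.sin(2 * theta))
    spins = np.sign(np.cos(theta))
    cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n)
              if spins[i] != spins[j])
    return spins, cut

# 6-node unweighted ring: the optimal cut value is 6 (alternating partition)
W = np.zeros((6, 6))
for i in range(6):
    W[i, (i + 1) % 6] = W[(i + 1) % 6, i] = 1.0
print(onn_maxcut(W))
```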

    Perspective on unconventional computing using magnetic skyrmions

    Learning and pattern recognition inevitably require memory of previous events, a feature that conventional CMOS hardware needs to simulate artificially. Dynamical systems naturally provide the memory, complexity, and non-linearity needed for a plethora of different unconventional computing approaches. In this perspective article, we focus on the unconventional computing concept of reservoir computing and provide an overview of key physical reservoir works reported to date. We focus on the promising platform of magnetic structures and, in particular, skyrmions, which potentially allow for low-power applications. Moreover, we discuss skyrmion-based implementations of Brownian computing, which has recently been combined with reservoir computing. This computing paradigm leverages the thermal fluctuations present in many skyrmion systems. Finally, we provide an outlook on the most important challenges in this field.
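    As background on the reservoir computing paradigm the article surveys, here is a minimal software echo-state sketch (generic, not skyrmion-specific): a fixed random recurrent network supplies the non-linearity and fading memory, and only a linear readout is trained, here on a simple delayed-recall task. Network size, spectral radius, and the task are assumptions for illustration.

```python
# Hedged sketch: echo-state reservoir with a trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
N, T, delay = 200, 2000, 5
W_in = rng.uniform(-0.5, 0.5, N)                 # fixed input weights
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # spectral radius < 1

u = rng.uniform(-1, 1, T)                        # random input stream
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])             # reservoir update
    states[t] = x

# ridge-regression readout: recall the input from `delay` steps ago
X, y = states[delay:], u[:-delay]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
print("recall correlation:", np.corrcoef(X @ w_out, y)[0, 1])
```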

    A perspective on physical reservoir computing with nanomagnetic devices

    Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price: the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is to develop low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic devices. Here, we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm that is well suited to spintronic devices since they naturally provide the required non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.
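    Spintronic reservoirs are often realized with a single physical node that is time-multiplexed, so rather than repeating an echo-state example, the sketch below shows that scheme: one non-linear node plus a fixed random input mask emulates many "virtual" neurons in time, and only the linear readout is trained. The leaky-tanh node is an illustrative stand-in for real device dynamics, and all parameters are assumptions.

```python
# Hedged sketch: time-multiplexed single-node reservoir (virtual neurons).
import numpy as np

rng = np.random.default_rng(2)
T, Nv = 1500, 50                       # input steps, virtual nodes per step
mask = rng.choice([-1.0, 1.0], Nv)     # fixed random input mask
u = rng.uniform(0.0, 0.5, T)           # scalar input stream

a = 0.0
states = np.empty((T, Nv))
for t in range(T):
    for k in range(Nv):
        # leaky non-linear node driven by the masked input (device stand-in)
        a = 0.7 * a + 0.3 * np.tanh(2.0 * mask[k] * u[t] + 0.5 * a)
        states[t, k] = a

# linear readout trained on a toy target: the square of the previous input
X, y = states[1:], u[:-1] ** 2
w = np.linalg.lstsq(X, y, rcond=None)[0]
print("NMSE:", np.mean((X @ w - y) ** 2) / np.var(y))
```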

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own view of the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.