
    Memristor models for machine learning

    In the quest for alternatives to traditional CMOS, it has been suggested that the efficiency and power consumption of digital computing can be improved by matching the numerical precision to the application: many applications do not need the high precision that is used today. In particular, large gains in area and power efficiency could be achieved by dedicated analog realizations of approximate computing engines. In this work, we explore the use of memristor networks for analog approximate computation, based on a machine learning framework called reservoir computing. Most experimental investigations of memristor dynamics focus on their nonvolatile behavior, so the volatility present in the developed technologies is usually unwanted and is not included in simulation models. In reservoir computing, by contrast, volatility is not only desirable but necessary. We therefore propose two ways to incorporate it into memristor simulation models: the first is an extension of Strukov's model and the second is an equivalent Wiener model approximation. We analyze and compare the dynamical properties of these models and discuss their implications for the memory and the nonlinear processing capacity of memristor networks. Our results indicate that device variability, which increasingly causes problems in traditional computer design, is an asset in the context of reservoir computing. We conclude that, although both models could lead to useful memristor-based reservoir computing systems, their computational performance will differ. Therefore, experimental modeling research is required for the development of accurate volatile memristor models. Comment: 4 figures, no tables. Submitted to Neural Computation.
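
    A minimal sketch of how volatility can be added to Strukov's memristor model is shown below, assuming a simple exponential relaxation of the state variable toward a rest value; the decay form and all parameter values are illustrative assumptions, not necessarily the extension used by the authors.

```python
import numpy as np

# Strukov's model: R(w) = R_ON * w/D + R_OFF * (1 - w/D), dw/dt ~ i(t).
# Volatility is added here as a decay of w toward a rest state w0 with time
# constant TAU (an illustrative assumption, not the paper's exact extension).
R_ON, R_OFF = 100.0, 16e3   # low / high resistance (ohm)
D = 10e-9                   # device thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 V^-1 s^-1)
TAU = 1e-3                  # volatility time constant (s), assumed

def simulate(v, dt, w0=0.1 * D):
    """Return the current response of the volatile device to a voltage trace v."""
    w, i_out = w0, []
    for vk in v:
        r = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # instantaneous resistance
        i = vk / r
        dw = MU_V * (R_ON / D) * i - (w - w0) / TAU  # drift term + volatile decay
        w = float(np.clip(w + dw * dt, 0.0, D))
        i_out.append(i)
    return np.array(i_out)

t = np.arange(0.0, 0.02, 1e-6)
response = simulate(np.sin(2 * np.pi * 100.0 * t), dt=1e-6)
```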

    Toward bio-inspired information processing with networks of nano-scale switching elements

    Unconventional computing explores multi-scale platforms that connect molecular-scale devices into networks for the development of scalable neuromorphic architectures, often based on new materials and components with new functionalities. We review work investigating the functionality of locally connected networks of different types of switching elements as computational substrates. In particular, we discuss reservoir computing with networks of nonlinear nanoscale components. In the usual neuromorphic paradigms, the network's synaptic weights are adjusted as a result of a training/learning process. In reservoir computing, the nonlinear network acts as a dynamical system that mixes and spreads the input signals over a large state space, and only a readout layer is trained. We illustrate the most important concepts with a few examples, featuring memristor networks with time-dependent and history-dependent resistances.
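
    The "only a readout layer is trained" step can be made concrete with a short sketch: reservoir states (here a random placeholder standing in for, e.g., sampled node voltages of a memristor network) are mapped to a target by a linear readout fitted with ridge regression. Dimensions and the regularisation value are illustrative assumptions.

```python
import numpy as np

# Reservoir computing readout: only the linear map W_out is trained.
# X stands in for observed reservoir states (e.g. node voltages over time);
# in a real system it would come from the physical network, not from a RNG.
rng = np.random.default_rng(0)
T, n_states = 1000, 50
X = rng.standard_normal((T, n_states))   # placeholder reservoir states
y = rng.standard_normal(T)               # placeholder target signal

lam = 1e-4                               # ridge regularisation (assumed)
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_states), X.T @ y)
y_hat = X @ W_out                        # trained readout's prediction
```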

    Hierarchical Composition of Memristive Networks for Real-Time Computing

    Advances in materials science have led to physical instantiations of self-assembled networks of memristive devices and demonstrations of their computational capability through reservoir computing. Reservoir computing is an approach that takes advantage of collective system dynamics for real-time computing. A dynamical system, called a reservoir, is excited with a time-varying signal and observations of its states are used to reconstruct a desired output signal. However, such a monolithic assembly limits the computational power due to signal interdependency and the resulting correlated readouts. Here, we introduce an approach that hierarchically composes a set of interconnected memristive networks into a larger reservoir. We use signal amplification and restoration to reduce reservoir state correlation, which improves feature extraction from the input signals. Using the same number of output signals, such a hierarchical composition of heterogeneous small networks outperforms monolithic memristive networks by at least 20% on waveform generation tasks. On the NARMA-10 task, we reduce the error by up to a factor of 2 compared to homogeneous reservoirs with sigmoidal neurons, whereas single memristive networks are unable to produce the correct result. Hierarchical composition is key for solving more complex tasks with such novel nano-scale hardware.
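
    For reference, the NARMA-10 benchmark mentioned above is the standard tenth-order nonlinear auto-regressive task; a short generator is sketched below (standard formulation, not code from the paper).

```python
import numpy as np

# NARMA-10: y(t+1) = 0.3*y(t) + 0.05*y(t)*sum(y(t-9..t)) + 1.5*u(t-9)*u(t) + 0.1,
# with inputs u(t) drawn uniformly from [0, 0.5].
def narma10(length, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, size=length)
    y = np.zeros(length)
    for t in range(9, length - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

u, y = narma10(2000)   # input / target pair for evaluating a reservoir
```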

    Memristor-based Reservoir Computing

    In today's nanoscale era, scaling down to even smaller feature sizes poses significant challenges for device fabrication and for circuit and system design and integration. On the other hand, nanoscale technology has also led to novel materials and devices with unique properties. The memristor is one such emergent nanoscale device: it exhibits non-linear current-voltage characteristics and has an inherent memory property, i.e., its current state depends on the past. Both the non-linearity and the memory property of memristors have the potential to enable solving spatial and temporal pattern recognition tasks in radically different ways from traditional binary transistor-based technology. The goal of this thesis is to explore the use of memristors in a novel computing paradigm called Reservoir Computing (RC). RC belongs to the class of artificial recurrent neural networks (RNN), but it differs architecturally from traditional RNN techniques in that the pre-processor (i.e., the reservoir) is made up of random, recurrently connected non-linear elements. Learning is implemented only at the readout (i.e., the output) layer, which reduces the learning complexity significantly. To the best of our knowledge, memristors have never been used as reservoir components. We use pattern recognition and classification tasks as benchmark problems. Real-world applications associated with these tasks include process control, speech recognition, and signal processing. We have built a software framework, RCspice (Reservoir Computing Simulation Program with Integrated Circuit Emphasis), for this purpose. The framework allows us to create random memristor networks, to simulate and evaluate them in Ngspice, and to train the readout layer by means of Genetic Algorithms (GA). We have explored reservoir-related parameters, such as the network connectivity and the reservoir size, along with the GA parameters. Our results show that we are able to efficiently and robustly classify time-series patterns using memristor-based dynamical reservoirs. This presents an important step towards computing with memristor-based nanoscale systems.
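
    A toy sketch of a GA-trained readout of the kind described above is given below. In RCspice the reservoir states come from Ngspice simulations of the memristor network; here a random matrix stands in, and the population size, mutation scale, and generation count are illustrative assumptions.

```python
import numpy as np

# Toy genetic-algorithm training of a linear readout. X is a stand-in for
# reservoir observations produced by a circuit simulation; all GA
# hyper-parameters are illustrative assumptions.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))            # placeholder reservoir states
y = np.sin(np.linspace(0.0, 8 * np.pi, 500))  # placeholder target signal

def fitness(w):
    return -np.mean((X @ w - y) ** 2)         # negative MSE of the readout

pop = rng.standard_normal((40, X.shape[1]))   # initial population of weight vectors
for _ in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=30)].copy() # clone parents
    children += 0.1 * rng.standard_normal(children.shape)   # mutate
    pop = np.vstack([parents, children])

w_best = max(pop, key=fitness)                # best readout weights found
```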

    Memristors for the Curious Outsiders

    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a 2-terminal passive component with a dynamic resistance that depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide nonpractitioners in the field of memristive circuits and their connection to machine learning and neural computation. Comment: Perspective paper for MDPI Technologies; 43 pages.
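
    The definition used above, a two-terminal element whose resistance depends on an internal parameter, corresponds to the standard current-controlled memristive-system form, written here in generic notation rather than the paper's:

```latex
% Current-controlled memristive system: the resistance depends on a state x
% that evolves with the current flowing through the device.
\begin{aligned}
  v(t) &= R\bigl(x(t)\bigr)\, i(t), \\
  \dot{x}(t) &= f\bigl(x(t),\, i(t)\bigr).
\end{aligned}
```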

    Design Considerations for Training Memristor Crossbars Used in Spiking Neural Networks

    CMOS/memristor integrated architectures have been shown to be powerful for realizing energy-efficient learning machines. These architectures have recently been demonstrated in reservoir computing networks, which have reduced training complexity and resource utilization. In reservoir computing, the training time is curtailed because the hidden-layer weights are randomly initialized and remain constant during training. The CMOS/memristor variability can be exploited to generate these random weights and reduce the area overhead. Recent studies have shown that CMOS/memristor crossbars are ideal for on-device learning machines, including reservoir computing networks. An exemplary CMOS/memristor crossbar-based on-device accelerator, Ziksa, was demonstrated on several of these learning networks. While the crossbars themselves are generally area- and energy-efficient, the peripheral circuitry that controls the read/write logic of the crossbars is extremely power hungry. This work focuses on improving the Ziksa accelerator's peripheral circuitry for a spiking reservoir network. The optimized training circuitry for Ziksa includes transmission gates, a control unit, and a current amplifier, and is demonstrated within a layer of spiking neurons for training and neuron behavior. All the analog circuits are validated using the Cadence 45 nm GPDK on 2x4 and 1x4 crossbars. For a 32x32 crossbar, the area and power of the peripheral circuitry are ∼2,800 µm^2 and ∼3.685 mW, respectively, demonstrating the overall efficacy of the proposed circuits.
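
    An idealised sketch of the crossbar read operation such accelerators rely on is given below: with voltages applied to the rows, each column current is a conductance-weighted sum (Ohm's and Kirchhoff's laws). The 32x32 size matches the example above, but the conductance and voltage ranges are illustrative, and the peripheral read/write circuitry discussed in the abstract is not modelled.

```python
import numpy as np

# Idealised crossbar read: column current I_j = sum_i G[i, j] * V[i], i.e.
# one analog multiply-accumulate per column. Peripheral circuitry (drivers,
# sense amplifiers, control logic) is deliberately omitted here.
rng = np.random.default_rng(2)
G = rng.uniform(1e-6, 1e-4, size=(32, 32))   # memristor conductances (S), assumed range
V = rng.uniform(0.0, 0.2, size=32)           # row read voltages (V), assumed range

I = V @ G                                    # 32 column currents (A)
```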

    Computational Capacity and Energy Consumption of Complex Resistive Switch Networks

    Resistive switches are a class of emerging nanoelectronic devices that exhibit a wide variety of switching characteristics closely resembling the behavior of biological synapses. Assembled into random networks, such resistive switches produce emergent behaviors far more complex than those of individual devices. This was previously demonstrated in simulations that exploit information processing within these random networks to solve tasks requiring nonlinear computation as well as memory. Physical assemblies of such networks manifest complex spatial structures and basic processing capabilities often related to biologically inspired computing. We model and simulate random resistive switch networks and analyze their computational capacities. We provide a detailed discussion of the relevant design parameters and establish the link to physical assemblies by relating the modeling parameters to physical parameters. More globally connected networks and increased network switching activity are means to increase the computational capacity linearly, at the expense of exponentially growing energy consumption. We discuss a new modular approach that exhibits higher computational capacities, with energy consumption growing linearly with the number of networks used. The results show how to optimize the trade-off between computational capacity and energy efficiency and are relevant for the design and fabrication of novel computing architectures that harness random assemblies of emerging nanodevices.
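
    The scaling trade-off described above can be caricatured numerically; the constants below are purely illustrative and only reproduce the qualitative trends (linear capacity growth versus exponential or linear energy growth).

```python
import numpy as np

# Qualitative caricature of the reported scaling: driving a single network
# harder grows capacity roughly linearly but energy roughly exponentially,
# while the modular approach grows both linearly with the number of networks.
# All constants are arbitrary illustrative values.
activity = np.arange(1, 9)              # switching activity of one monolithic network
cap_mono = 10 * activity                # capacity ~ linear in activity
energy_mono = 5.0 * np.exp(activity)    # energy ~ exponential in activity

modules = np.arange(1, 9)               # number of small networks in the modular design
cap_mod = 10 * modules                  # capacity ~ linear in module count
energy_mod = 5.0 * np.e * modules       # energy ~ linear in module count
```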