
    Spiking ink drop spread clustering algorithm and its memristor crossbar conceptual hardware design

    In this study, a novel neuro-fuzzy clustering algorithm is proposed based on spiking neural network and ink drop spread (IDS) concepts. The proposed structure is a one-layer artificial neural network with leaky integrate-and-fire (LIF) neurons, and it implements the IDS algorithm as a fuzzy concept. Each training datum fires the corresponding input neuron and its neighboring neurons. A synchronous time-coding algorithm manages the firing times of the input and output neurons. For a given input, one or several output neurons of the network fire; the network's confidence in an output is defined as the relative delay of that firing time with respect to the synchronous pulse. A memristor crossbar-based design is proposed for hardware implementation of the algorithm. Simulation results corroborate that the proposed algorithm can serve as a neuro-fuzzy clustering and vector quantization algorithm.
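    As a loose illustration of the mechanism described above, the sketch below simulates a single leaky integrate-and-fire neuron and converts its first-spike delay within a synchronous window into a confidence value. All parameter values (time constant, threshold, window length, drive) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def lif_first_spike(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
        """Return the time of the neuron's first spike, or None if it never fires."""
        v = v_rest
        for step, i_in in enumerate(input_current):
            # Leaky integration: dv/dt = (-(v - v_rest) + i_in) / tau
            v += dt * (-(v - v_rest) + i_in) / tau
            if v >= v_thresh:
                return step * dt
        return None

    T_sync = 50e-3                    # synchronous coding window (illustrative)
    drive = np.full(50, 1.5)          # constant input current over the window
    t_fire = lif_first_spike(drive)
    # Confidence as relative delay: earlier spikes map to higher confidence
    confidence = None if t_fire is None else 1.0 - t_fire / T_sync
    print(t_fire, confidence)         # ~0.021 s, ~0.58 for this drive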

    Hardware Considerations for Signal Processing Systems: A Step Toward the Unconventional.

    As we progress into the future, signal processing algorithms are becoming more computationally intensive and power hungry, while the desire for mobile products and low-power devices is also increasing. An integrated ASIC solution is one of the primary ways chip developers can improve performance and add functionality while keeping the power budget low. This work discusses ASIC hardware for both conventional and unconventional signal processing systems, and how integration, error resilience, emerging devices, and new algorithms can be leveraged by signal processing systems to further improve performance and enable new applications. Specifically, this work presents three case studies: 1) a conventional and highly parallel mixed-signal cross-correlator ASIC for a weather satellite performing real-time synthetic aperture imaging, 2) an unconventional native stochastic computing architecture enabled by memristors, and 3) two unconventional sparse neural network ASICs for feature extraction and object classification. As improvements from technology scaling alone slow down, and the demand for energy-efficient mobile electronics increases, such optimization techniques at the device, circuit, and system level will become more critical to advancing signal processing capabilities in the future.

    PhD thesis, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116685/1/knagphil_1.pd
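    The stochastic computing architecture of case study 2 is only named, not detailed, in the abstract; the sketch below shows the textbook stochastic-computing primitive it builds on, in which values in [0, 1] are encoded as random bitstreams and a single AND gate performs multiplication. Stream length and operand values here are arbitrary choices, not drawn from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    def to_bitstream(p, n_bits=10_000):
        """Encode a probability p in [0, 1] as a Bernoulli bitstream."""
        return rng.random(n_bits) < p

    a, b = 0.6, 0.3
    product_stream = to_bitstream(a) & to_bitstream(b)  # AND gate = multiply
    print(product_stream.mean())  # ~0.18, i.e. a * b, up to sampling noise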

    In-Memory Computing by Using Nano-ionic Memristive Devices

    As CMOS approaches the scaling limits predicted by Moore's law, and as the performance disparity between processing units and memory grows, the search continues for a suitable alternative to the conventional technology. The recently discovered two-terminal element, the memristor, is believed to be one of the most promising candidates for future very-large-scale integrated systems. This thesis comprises two main parts: (Part I) modeling memristor devices, and (Part II) memristive computing. The first part is presented in one chapter and the second part contains five chapters. The basics and fundamentals of memristor functionality and memristive computing are presented in the introduction chapter. The two main parts are summarized as follows:

    Part I: Modeling – This part presents an accurate model based on the charge transport mechanisms of nano-ionic memristor devices. The main current mechanisms in metal/insulator/metal (MIM) structures are assessed, a physics-based model is proposed, and a SPICE model is presented and tested against four different fabricated devices. An accuracy comparison of various models is carried out for a fabricated Ag/TiO2/ITO device, and the functionality of the model is tested for various input signals.

    Part II: Memristive computing – Memristive computing means utilizing memristors to perform computational tasks. This part of the thesis is divided into neuromorphic, analog, and digital computing schemes with memristor devices.

    – Neuromorphic computing: Two chapters of this thesis concern biologically inspired memristive neural networks using an STDP-based learning mechanism. Memristive implementations of two well-known spiking neuron models, Hodgkin-Huxley and Morris-Lecar, are assessed and utilized in the proposed memristive network, in which the synaptic connections are also memristor devices. Unsupervised pattern classification tasks are performed to verify the correct functionality of the system.

    – Analog computing: The memristor has an analog memory property, as it can be programmed to different memristance values. A novel memristive analog adder is designed using the Continuous Valued Number System (CVNS) scheme; its circuit comprises addition and modulo blocks. The proposed analog adder design is explained and its functionality is tested for various numbers. It is shown that the CVNS scheme is compatible with memristive design and that the environment resolution can be adjusted by the memristance ratio of the memristor devices.

    – Digital computing: Two chapters are dedicated to digital computing. In the first, IMPLY-based memristive logic is developed to implement a 4:2 compressor circuit. In the second, a novel resistive logic scheme is presented over a novel mirrored memristive crossbar platform. Different logic gates are designed with the proposed memristive logic method, and Cadence simulations are provided to prove the functionality of the logic. The logic implementation over mirrored memristive crossbars is also assessed.
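    The IMPLY primitive behind the digital-computing chapters is standard enough to sketch behaviorally: material implication q' = (NOT p) OR q is computed in place on a memristor, and together with a device reset (FALSE) it composes into NAND and hence any Boolean function. The truth-table demonstration below is an abstraction of that logic, not a circuit-level model of the thesis's 4:2 compressor.

    def imply(p, q):
        """Material implication: the result overwrites the q device's state."""
        return (not p) or q

    def nand(p, q):
        # NAND(p, q) = p IMPLY (q IMPLY 0), using an auxiliary device reset to 0
        s = imply(q, False)   # s = NOT q
        return imply(p, s)    # (NOT p) OR (NOT q) = NAND(p, q)

    for p in (False, True):
        for q in (False, True):
            print(p, q, imply(p, q), nand(p, q))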

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which the neurons are randomly connected. Once initialized, the connection strengths remain unchanged. This simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that its complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation times. Moreover, the neuronal responses triggered by the model's dynamics shed light on brain mechanisms that exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms with RC. Finally, we offer new perspectives on RC development, including reservoir design, unification of coding frameworks, physical RC implementations, and the interaction between RC, cognitive neuroscience, and evolution.

    Comment: 51 pages, 19 figures, IEEE Access
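    The RC recipe the survey describes (fixed random recurrent weights, a driven nonlinear state, a trained linear readout) can be condensed into a minimal echo state network. Reservoir size, spectral radius, and the toy one-step-ahead sine prediction task below are illustrative choices, not taken from the survey.

    import numpy as np

    rng = np.random.default_rng(42)
    n_res, n_in = 200, 1

    # Fixed random weights; rescale the recurrent matrix to spectral radius 0.9
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))

    def run_reservoir(u):
        """Drive the reservoir with input sequence u and collect its states."""
        x = np.zeros(n_res)
        states = []
        for u_t in u:
            x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
            states.append(x.copy())
        return np.array(states)

    # Toy task: one-step-ahead prediction of a sine wave
    t = np.arange(0, 60, 0.1)
    u, y = np.sin(t[:-1]), np.sin(t[1:])
    X = run_reservoir(u)
    ridge = 1e-6  # only the linear readout is trained, via ridge regression
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    print(np.mean((X @ W_out - y) ** 2))  # small training MSE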

    Vector Symbolic Finite State Machines in Attractor Neural Networks

    Hopfield attractor networks are robust distributed models of human memory. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random bipolar vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show that the capacity of the model, in terms of the maximum size of the implementable FSM, is linear in the size of the attractor network. We show that the model is robust to imprecise and noisy weights, making it a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs may exist as a distributed computational primitive in biological neural networks.
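    A minimal sketch of the paper's ingredients: random high-dimensional bipolar vectors stored as Hopfield attractors via the Hebbian outer-product rule, with dynamics that clean up a noisy cue. The FSM construction rules themselves (binding stimuli to state transitions) are the paper's contribution and are not reproduced here; sizes and noise level below are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    n, n_states = 256, 5

    # High-dimensional random bipolar vectors, one per FSM state
    patterns = rng.choice([-1, 1], size=(n_states, n))

    # Hebbian outer-product storage with zero self-connections
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)

    def recall(cue, steps=10):
        """Synchronous attractor dynamics: x <- sign(W x), ties broken to +1."""
        x = cue.copy()
        for _ in range(steps):
            x = np.where(W @ x >= 0, 1, -1)
        return x

    # Corrupt 20% of one stored state's bits, then let the dynamics clean it up
    noisy = patterns[0].copy()
    flip = rng.choice(n, size=n // 5, replace=False)
    noisy[flip] *= -1
    print(np.array_equal(recall(noisy), patterns[0]))  # expected: True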