
    Computer arithmetic based on the Continuous Valued Number System


    In-Memory Computing by Using Nano-ionic Memristive Devices

    As CMOS scaling approaches the limits predicted by Moore's law, and with the growing performance gap between processing units and memory, the search continues for a suitable alternative to conventional technology. The recently discovered two-terminal element, the memristor, is believed to be one of the most promising candidates for future very large scale integrated systems. This thesis comprises two main parts: (Part I) modeling memristor devices and (Part II) memristive computing. The first part is presented in one chapter and the second part contains five chapters; the basics of memristor functionality and memristive computing are covered in the introduction chapter. Part I, Modeling, presents an accurate model based on the charge-transport mechanisms of nanoionic memristor devices. The main current mechanisms in metal/insulator/metal (MIM) structures are assessed, a physics-based model is proposed, and a SPICE model is presented and tested against four different fabricated devices. An accuracy comparison is performed among various models for a fabricated Ag/TiO2/ITO device, and the functionality of the model is tested for various input signals. Part II, Memristive computing, concerns using memristors to perform computational tasks and is divided into neuromorphic, analog, and digital computing schemes with memristor devices. Neuromorphic computing: two chapters address biologically inspired memristive neural networks using an STDP-based learning mechanism. Memristive implementations of two well-known spiking neuron models, Hodgkin-Huxley and Morris-Lecar, are assessed and used in the proposed memristive network, in which the synaptic connections are also memristor devices; unsupervised pattern classification tasks confirm correct operation of the system. Analog computing: the memristor has an analog memory property, as it can be programmed to different memristance values. A novel memristive analog adder is designed using the Continuous Valued Number System (CVNS) scheme, with a circuit comprising addition and modulo blocks. The proposed adder design is explained and its functionality is tested for various numbers; it is shown that the CVNS scheme is compatible with memristive design and that the environment resolution can be adjusted by the memristance ratio of the memristor devices. Digital computing: two chapters are dedicated to digital computing. The first develops IMPLY-based logic with memristors to implement a 4:2 compressor circuit. The second proposes a novel resistive logic method over a novel mirrored memristive crossbar platform; different logic gates are designed with the proposed memristive logic method, Cadence simulations verify the logic, and logic implementation over mirrored memristive crossbars is also assessed.
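
    The thesis's own charge-transport model is device-specific, but the general idea of behavioral memristor modeling that a SPICE model captures can be sketched with the well-known HP linear ion drift model. The Python sketch below is an illustration only; the resistance bounds, device thickness, and mobility values are assumed for demonstration and are not the parameters fitted to the Ag/TiO2/ITO device.

```python
# Illustrative sketch: HP linear ion drift memristor model (not the thesis's
# charge-transport model). All parameter values are assumed for demonstration.
import numpy as np

R_ON, R_OFF = 100.0, 16e3     # low/high resistance states (ohms), assumed
D = 10e-9                     # device thickness (m), assumed
MU_V = 1e-14                  # dopant mobility (m^2 s^-1 V^-1), assumed

def simulate(v_in, dt, w0=0.5):
    """Integrate the doped-region width w under an applied voltage waveform."""
    w = w0 * D
    current = np.zeros_like(v_in)
    for k, v in enumerate(v_in):
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # instantaneous memristance
        i = v / m
        w += MU_V * (R_ON / D) * i * dt              # linear ion drift
        w = min(max(w, 0.0), D)                      # hard window on the state
        current[k] = i
    return current

# Example: a sinusoidal drive produces the characteristic pinched hysteresis loop.
t = np.linspace(0, 2e-3, 2000)
v = 1.0 * np.sin(2 * np.pi * 1e3 * t)
i = simulate(v, dt=t[1] - t[0])
```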

    Mixed-Signal Neural Network Implementation with Programmable Neuron

    This thesis introduces implementations of the mixed-signal building blocks of an artificial neural network, namely the neuron and the synaptic multiplier. It also investigates the nonlinear dynamic behavior of a single artificial neuron and presents a Distributed Arithmetic (DA)-based Finite Impulse Response (FIR) filter. All of the introduced structures are designed and custom laid out.
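
    The DA-based FIR filter above is realized as a custom mixed-signal layout; as a purely behavioral reference, the distributed-arithmetic idea of replacing multipliers with a bit-serial look-up of precomputed coefficient sums can be sketched in software. The coefficients, tap count, and 8-bit two's-complement word length below are assumptions chosen for illustration.

```python
# Illustrative software sketch of distributed-arithmetic (DA) FIR filtering.
# Coefficients, tap count, and the 8-bit two's-complement inputs are assumed
# for demonstration; the thesis implements this in mixed-signal hardware.
from itertools import product

COEFFS = [3, -1, 4, 2]          # assumed integer filter coefficients
N = len(COEFFS)
B = 8                           # assumed input word length (two's complement)

# Precompute the DA look-up table: one partial sum for every combination of
# same-significance bits taken across the N input samples.
LUT = {bits: sum(c for c, b in zip(COEFFS, bits) if b)
       for bits in product((0, 1), repeat=N)}

def da_fir(samples):
    """Compute sum(c_k * x_k) bit-serially from the precomputed LUT."""
    assert len(samples) == N
    bits = [[(x >> b) & 1 for b in range(B)] for x in samples]
    acc = 0
    for b in range(B - 1):                                    # LSB up to bit B-2
        acc += LUT[tuple(col[b] for col in bits)] << b
    acc -= LUT[tuple(col[B - 1] for col in bits)] << (B - 1)  # sign-bit weight
    return acc

# Matches the direct inner product for in-range two's-complement inputs.
x = [10, -3, 7, -128]
assert da_fir([v & 0xFF for v in x]) == sum(c * v for c, v in zip(COEFFS, x))
```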

    Mixed-Signal VLSI Implementation of CVNS Artificial Neural Networks

    In this work, a mixed-signal implementation of a Continuous Valued Number System (CVNS) neural network is proposed. The proposed network resolves the limited signal-processing precision of mixed-signal neural networks. This is realized by the CVNS addition, CVNS multiplication, and CVNS sigmoid function evaluation algorithms proposed in this dissertation, which provide accurate results in low-resolution environments. In addition, an area-efficient, low-sensitivity CVNS Madaline is proposed; it is more robust to input and weight errors than previously developed structures while consuming less area. Furthermore, a new approximation scheme for the hyperbolic tangent activation function is proposed; using it results in efficient digital ASIC neural network implementations in terms of area, delay, and power consumption.
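
    The dissertation's specific hyperbolic tangent approximation scheme is not reproduced here; as a generic point of reference, a piecewise-linear approximation illustrates the kind of area/accuracy trade-off digital ASIC activation functions make. The segment boundaries, slopes, and offsets below are assumed values obtained by simple chord interpolation, not the proposed scheme.

```python
# Generic piecewise-linear tanh approximation (illustration only; not the
# approximation scheme proposed in the dissertation). Each segment is the
# chord of tanh over its interval, so all values below are assumptions.
import math

# (upper bound of |x|, slope, offset): tanh(x) ~ sign(x) * (slope*|x| + offset)
SEGMENTS = [
    (0.5, 0.9242, 0.0),
    (1.5, 0.4430, 0.2406),
    (2.5, 0.0815, 0.7829),
    (float("inf"), 0.0, 0.9866),   # saturate near +/-1 for large inputs
]

def tanh_pwl(x):
    sign, ax = (1.0 if x >= 0 else -1.0), abs(x)
    for bound, slope, offset in SEGMENTS:
        if ax <= bound:
            return sign * (slope * ax + offset)

# Rough accuracy check over [-4, 4]; with these coarse segments the
# worst-case absolute error stays below 0.1.
max_err = max(abs(tanh_pwl(i / 100) - math.tanh(i / 100)) for i in range(-400, 401))
assert max_err < 0.1
```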

    A Prototype CVNS Distributed Neural Network

    Artificial neural networks are widely used in many applications such as signal processing, classification, and control. However, their practical implementation is challenged by the number of inputs, storing the weights, and realizing the activation function. In this work, Continuous Valued Number System (CVNS) distributed neural networks (DNNs) are proposed, providing the network with a self-scaling property. This property helps the network cope spontaneously with different numbers of inputs: the proposed CVNS DNN can change the dynamic range of the activation function according to the number of inputs, ensuring proper functionality for the network. In addition, multi-valued CVNS DRAMs are proposed to store the weights as CVNS digits. These memories can store up to 16 levels, equal to 4 bits, in each storage cell, and they use error-correction codes to detect and correct errors in the stored values. A synapse-neuron module is proposed to decrease the design cost; it contains both the synapse and the neuron along with the relevant components. In these modules, the activation function is realized through analog circuits, which are far more compact than digital look-up tables while remaining quite accurate. Furthermore, the redundancy between CVNS digits, together with the distributed structure of the neuron, makes the proposal stable against process variations and reduces the noise-to-signal ratio.
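
    As a behavioral illustration of storing multi-bit weights across 16-level (4-bit) cells, the sketch below splits an 8-bit weight into two cell levels plus a toy check digit. The digit split and the parity check are assumptions for demonstration only; they are not the CVNS digit encoding or the error-correction code used in this work.

```python
# Behavioral sketch of multi-level weight storage: each cell holds one of 16
# levels (4 bits). The 4-bit digit split and the single parity digit are
# illustrative assumptions, not the CVNS encoding or ECC from the thesis.

LEVELS_PER_CELL = 16            # 4 bits per storage cell

def store_weight(w8):
    """Split an unsigned 8-bit weight into two 16-level cells plus a check digit."""
    assert 0 <= w8 < 256
    hi, lo = w8 >> 4, w8 & 0xF
    parity = (hi ^ lo) & 0xF     # toy check digit (error detection only)
    return [hi, lo, parity]

def read_weight(cells):
    """Reassemble the weight, flagging a mismatch between data and check digit."""
    hi, lo, parity = cells
    if ((hi ^ lo) & 0xF) != parity:
        raise ValueError("storage error detected")
    return (hi << 4) | lo

assert read_weight(store_weight(0xB7)) == 0xB7
```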

    A Mixed-Signal Feed-Forward Neural Network Architecture Using A High-Resolution Multiplying D/A Conversion Method

    Artificial Neural Networks (ANNs) are parallel processors capable of learning from a set of sample data using a specific learning rule. Such systems are commonly used in applications where the human brain may surpass conventional computers, such as image processing, speech/character recognition, intelligent control, and robotics, to name a few. In this thesis, a mixed-signal neural network architecture is proposed that employs a high-resolution Multiplying Digital-to-Analog Converter (MDAC) designed using Delta Sigma Modulation (DSM). To reduce chip area, multiplexing is used in addition to analog implementation of the arithmetic operations. This work employs a new method for filtering the high bit-rate signals using the neurons' nonlinear transfer function already present in the network. As a result, a configuration of a few MOS transistors replaces the large resistors otherwise required to implement the low-pass filter in the network. This configuration noticeably decreases the chip area and also makes multiplexing feasible for hardware implementation.
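
    The MDAC above is built around Delta Sigma Modulation; as a behavioral reference, a first-order delta-sigma modulator turns a slowly varying input into a 1-bit stream whose average tracks that input, which is the property the subsequent low-pass filtering exploits. The stream length and test input in the sketch are assumed for illustration.

```python
# Behavioral sketch of a first-order delta-sigma modulator: the running average
# of the 1-bit output stream tracks the input. Stream length and test input are
# assumed; the thesis uses a hardware DSM inside a multiplying DAC.
import numpy as np

def dsm_first_order(x):
    """x: samples in [-1, 1]; returns a +/-1 bit-stream of the same length."""
    integ = 0.0
    y = np.empty(len(x))
    for k, v in enumerate(x):
        integ += v - (y[k - 1] if k else 0.0)   # delta: input minus fed-back bit
        y[k] = 1.0 if integ >= 0.0 else -1.0    # 1-bit quantizer
    return y

# A constant input of 0.25 yields a bit-stream whose mean approaches 0.25.
stream = dsm_first_order(np.full(4096, 0.25))
assert abs(stream.mean() - 0.25) < 0.01
```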

    Deep Liquid State Machines with Neural Plasticity and On-Device Learning

    The Liquid State Machine (LSM) is a recurrent spiking neural network designed for efficient processing of spatio-temporal streams of information. LSMs have several built-in features such as robustness, fast training and inference, generalizability, continual learning (no catastrophic forgetting), and energy efficiency. These features make LSMs an ideal network for deploying intelligence on-device. In general, single LSMs are unable to solve complex real-world tasks. Recent literature has shown the emergence of hierarchical architectures to support temporal information processing over different time scales. However, these approaches typically do not investigate the optimum topology for communication between layers in the hierarchical network, or they assume prior knowledge about the target problem and are not generalizable. In this thesis, a deep Liquid State Machine (deep-LSM) network architecture is proposed. The deep-LSM uses staggered reservoirs to process temporal information on multiple timescales. A key feature of this network is that neural plasticity and attention are embedded in the topology to bolster its performance on complex spatio-temporal tasks. An advantage of the deep-LSM is that it exploits the random projection native to the LSM as well as local plasticity mechanisms to optimize the data transfer between sequential layers. Both random projections and local plasticity mechanisms are well suited to on-device learning due to their low computational complexity and the absence of backpropagated error. The deep-LSM is deployed on a custom learning architecture with memristors to study the feasibility of on-device learning. The performance of the deep-LSM is demonstrated on speech recognition and seizure detection applications.
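
    The fixed-reservoir/trained-readout split that gives LSMs their fast training can be illustrated with a simplified rate-based (echo-state-style) reservoir rather than a spiking liquid. In the sketch below, the reservoir size, spectral radius, ridge penalty, and the one-step prediction task are all assumed for demonstration and are not taken from the thesis.

```python
# Minimal rate-based reservoir sketch (a non-spiking stand-in for the liquid,
# used only to illustrate the fixed-reservoir / trained-readout split; sizes,
# spectral radius, ridge penalty, and task are assumed values).
import numpy as np

rng = np.random.default_rng(0)
N_RES, N_IN = 200, 1

W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0, 1, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # scale spectral radius to 0.9

def run_reservoir(u):
    """Drive the fixed random reservoir with input sequence u, collect states."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
        states[t] = x
    return states

# Only the linear readout is trained (ridge regression), which is what makes
# reservoir training fast compared with backpropagation through time.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
target = np.roll(u, -1)                          # toy task: one-step prediction
S = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N_RES), S.T @ target)
pred = S @ W_out
```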

    A Study of Techniques and Mechanisms of Vagus Nerve Stimulation for Treatment of Inflammation

    Vagus nerve stimulation (VNS) has been on the forefront of inflammatory disorder research for the better part of the last three decades and has yielded many promising results. There remains, however, much debate about the actual biological mechanisms of such treatments, as well as questions about inconsistencies in the methods used in many research efforts. In this work, I identify shortcomings in past VNS methods and submit new developments and findings that can progress the research community towards more selective and relevant VNS research and treatments. In Aim 1, I present the most recent advancements in the capabilities of our fully implantable Bionode stimulation device platform for use in VNS studies, including stimulation circuitry, device packaging, and stimulation cuff design. In Aim 2, I characterize the inflammatory cytokine response of rats to intraperitoneally injected endotoxin utilizing new data analysis methods and demonstrate the modulatory effects of VNS applied by the Bionode stimulator to subdiaphragmatic branches of the left vagus nerve in an acute study. In Aim 3, using fully implanted Bionode devices, I expose a previously unidentified effect of chronically cuffing the left cervical vagus nerve: suppression of efferent Fluorogold transport and unintended attenuation of the physiological effects of VNS. Finally, in accordance with the findings from Aims 1, 2, and 3, I present results from new and promising techniques we have explored for future use of VNS in inflammation studies.

    Challenging the Known. 16th Annual Research Week: Event Proceedings

    Presentations of completed and ongoing research activity conducted by graduate students, undergraduate students, and faculty at the University of the Incarnate Word. Includes poster, podium, visual arts, interactive demo, and creative and performing arts presentations. Coordinated and presented by the Office of Research and Graduate Studies.

    Naval Research Program 2021 Annual Report

    The Naval Postgraduate School (NPS) Naval Research Program (NRP) is funded by the Chief of Naval Operations (CNO) and supports research projects for the Navy and Marine Corps. The NPS NRP serves as a launch-point for new initiatives which posture naval forces to meet current and future operational warfighter challenges. NRP research projects are led by individual research teams that conduct research and through which NPS expertise is developed and maintained. The primary mechanism for obtaining NPS NRP support is through participation at NPS Naval Research Working Group (NRWG) meetings that bring together fleet topic sponsors, NPS faculty members, and students to discuss potential research topics and initiatives. Approved for public release; distribution is unlimited.