
    In-Memory Computing by Using Nano-ionic Memristive Devices

    As CMOS scaling approaches the limits predicted by Moore's law, and with the growing disparity between processing and memory performance, the search continues for a suitable alternative to conventional technology. The recently discovered two-terminal element, the memristor, is believed to be one of the most promising candidates for future very large scale integrated systems. This thesis comprises two main parts: (Part I) modeling memristor devices and (Part II) memristive computing. The first part is presented in one chapter and the second part contains five chapters. The fundamentals of memristor functionality and memristive computing are presented in the introduction chapter. A brief outline of the two parts is as follows:
    Part I: Modeling. This part presents an accurate model based on the charge transport mechanisms of nano-ionic memristor devices. The main current mechanisms in metal/insulator/metal (MIM) structures are assessed, a physics-based model is proposed, and a SPICE model is presented and tested against four different fabricated devices. An accuracy comparison among various models is carried out for a fabricated Ag/TiO2/ITO device, and the functionality of the model is tested for various input signals.
    Part II: Memristive computing. Memristive computing means using memristors to perform computational tasks. This part of the thesis is divided into neuromorphic, analog, and digital computing schemes with memristor devices.
    – Neuromorphic computing: Two chapters concern biologically inspired memristive neural networks using an STDP-based learning mechanism. Memristive implementations of two well-known spiking neuron models, Hodgkin-Huxley and Morris-Lecar, are assessed and utilized in the proposed memristive network, in which the synaptic connections are also memristor devices. Unsupervised pattern classification tasks are performed to verify the correct functionality of the system.
    – Analog computing: The memristor has an analog memory property, as it can be programmed to different memristance values. A novel memristive analog adder is designed using the Continuous Valued Number System (CVNS) scheme; its circuit comprises addition and modulo blocks. The proposed analog adder design is explained and its functionality is tested for various numbers. It is shown that the CVNS scheme is compatible with memristive design and that the environment resolution can be adjusted by the memristance ratio of the memristor devices.
    – Digital computing: Two chapters are dedicated to digital computing. The first develops an IMPLY-based memristive logic to implement a 4:2 compressor circuit. The second presents a novel resistive logic scheme built on a novel mirrored memristive crossbar platform. Different logic gates are designed with the proposed memristive logic method, and Cadence simulations are provided to verify the functionality of the logic. Logic implementation over mirrored memristive crossbars is also assessed.
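    To make the memristor's state-dependent resistance concrete, the sketch below simulates a generic linear ion-drift memristor with a Joglekar-style window function. This is not the thesis's charge-transport model or SPICE implementation; the device parameters are illustrative assumptions.

```python
import numpy as np

# Generic linear ion-drift memristor sketch (illustrative parameters only).
R_ON, R_OFF = 100.0, 16e3      # low/high resistance states (ohms)
D = 10e-9                      # oxide thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)
P = 10                         # Joglekar window exponent

def simulate(v_of_t, dt, x0=0.1):
    """Integrate the normalized state x = w/D under an applied voltage waveform."""
    x, i_out = x0, []
    for v in v_of_t:
        m = R_ON * x + R_OFF * (1.0 - x)           # memristance
        i = v / m
        window = 1.0 - (2.0 * x - 1.0) ** (2 * P)  # suppress drift at the boundaries
        x += dt * MU_V * R_ON / D**2 * i * window
        x = min(max(x, 0.0), 1.0)
        i_out.append(i)
    return np.array(i_out)

# A sinusoidal drive produces the characteristic pinched hysteresis loop in (v, i).
t = np.linspace(0, 2e-3, 20000)
v = 1.2 * np.sin(2 * np.pi * 1e3 * t)
i = simulate(v, t[1] - t[0])
```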

    Data assimilation for conductance-based neuronal models

    This dissertation illustrates the use of data assimilation algorithms to estimate unobserved variables and unknown parameters of conductance-based neuronal models. Modern data assimilation (DA) techniques are widely used in climate science and weather prediction, but have only recently begun to be applied in neuroscience. The two main classes of DA techniques are sequential methods and variational methods. Throughout this work, twin experiments, in which the data are synthetically generated from output of the model, are used to validate the use of these techniques for conductance-based models when only the voltage trace is observed. In Chapter 1, these techniques are described in detail and the estimation problem for conductance-based neuron models is derived. In Chapter 2, these techniques are applied to a minimal conductance-based model, the Morris-Lecar model. This model exhibits qualitatively different types of neuronal excitability due to changes in the underlying bifurcation structure, and it is shown that the DA methods can identify parameter sets that produce the correct bifurcation structure even with initial parameter guesses that correspond to a different excitability regime. This demonstrates the ability of DA techniques to perform nonlinear state and parameter estimation, and introduces the geometric structure of inferred models as a novel qualitative measure of estimation success. Chapter 3 extends the ideas of variational data assimilation to include a control term that relaxes the problem further, a process referred to as nudging in the geoscience community. The nudged 4D-Var is applied to twin experiments from a more complex, Hodgkin-Huxley-type two-compartment model for various time-sampling strategies. This controlled 4D-Var with nonuniform time sampling is then applied to voltage traces from current-clamp recordings of suprachiasmatic nucleus neurons in diurnal rodents to improve our understanding of the driving forces in circadian (~24 h) rhythms of electrical activity. In Chapter 4, the complementary strengths of 4D-Var and the unscented Kalman filter (UKF) are leveraged to create a two-stage algorithm that uses 4D-Var to estimate fast-timescale parameters and the UKF for slow-timescale parameters. This coupled approach is applied to data from a conductance-based model of neuronal bursting with distinct slow and fast timescales present in the dynamics. In Chapter 5, the ideas of identifiability and sensitivity are introduced, and the Morris-Lecar model and a subset of its parameters are shown to be identifiable through the use of numerical techniques. Chapter 6 frames the selection of stimulus waveforms to inject into neurons during patch-clamp recordings as an optimal experimental design problem, and results on the optimal stimulus waveforms for improving the identifiability of parameters of a Hodgkin-Huxley-type model are presented. Chapter 7 shows a preliminary application of data assimilation to voltage-clamp, rather than current-clamp, data and expands on voltage-clamp principles to formulate a reduced assimilation problem driven by the observed voltage. Concluding thoughts are given in Chapter 8.
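    The twin-experiment setup starts from a known model that generates the "observed" voltage trace. The sketch below generates such synthetic data from the Morris-Lecar model using a standard published parameter set (Hopf regime); these values and the noise level are assumptions, not necessarily those used in the dissertation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Morris-Lecar model with a standard Hopf-regime parameter set (illustrative).
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0     # uF/cm^2, mS/cm^2
E_L, E_Ca, E_K = -60.0, 120.0, -84.0        # mV
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

def morris_lecar(t, y, I_app=90.0):
    V, w = y
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
    dV = (I_app - g_L * (V - E_L) - g_Ca * m_inf * (V - E_Ca)
          - g_K * w * (V - E_K)) / C
    dw = phi * (w_inf - w) / tau_w
    return [dV, dw]

# Only the (noisy) voltage trace would be observed by the assimilation scheme;
# the gating variable w and the parameters are what DA must recover.
sol = solve_ivp(morris_lecar, (0, 500), [-40.0, 0.0], max_step=0.1)
V_observed = sol.y[0] + np.random.normal(0, 1.0, sol.y[0].size)
```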

    From memory to processing : a reaction-diffusion approach to neuromorphic computing

    The goal of this research is to bridge the gap between the physiological brain and mathematically based neuromorphic computing models. The reaction-diffusion method was chosen because it naturally exhibits properties, such as propagation of excitation, that are seen in the brain but not in current neuromorphic computing models. A reaction-diffusion memory unit was created to demonstrate the key memory functions of sensitization, habituation, and dishabituation, while a reaction-diffusion brain module was established to perform the specific processing task of single-digit binary addition. The results from both approaches were consistent with existing literature describing physiological memory and processing in the human brain.
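    The propagation of excitation that motivates this approach can be illustrated with a minimal one-dimensional reaction-diffusion simulation using FitzHugh-Nagumo kinetics; this is a generic sketch, not the specific memory unit or brain module developed in the thesis, and the grid and kinetic parameters are assumptions.

```python
import numpy as np

# 1-D reaction-diffusion sketch with FitzHugh-Nagumo kinetics (illustrative).
N, dx, dt = 400, 0.5, 0.01
D, a, b, eps = 1.0, 0.7, 0.8, 0.08

u = np.full(N, -1.2)          # activator (membrane-like variable), near rest
v = np.full(N, -0.625)        # recovery variable, near rest
u[:10] = 1.5                  # local stimulus at the left boundary

for _ in range(15000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0                      # crude no-flux treatment of the ends
    du = u - u**3 / 3 - v + D * lap
    dv = eps * (u + a - b * v)
    u += dt * du
    v += dt * dv
# u now contains an excitation wave propagating rightward across the domain.
```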

    A Survey of Spiking Neural Network Accelerator on FPGA

    Because they can implement customized topologies, FPGAs are increasingly used to deploy spiking neural networks (SNNs) in both embedded and high-performance applications. In this paper, we survey state-of-the-art SNN implementations and their applications on FPGA. We collect the recent widely used spiking neuron models, network structures, and signal encoding formats, followed by an enumeration of related hardware design schemes for FPGA-based SNN implementations. Compared with previous surveys, this manuscript enumerates the application instances that applied the above-mentioned technical schemes in recent research. Based on that, we discuss the actual acceleration potential of implementing SNNs on FPGA. Finally, upcoming trends are identified to give a guideline for further advancement in related subjects.
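    A common building block in such designs is a spiking neuron update expressed in integer (fixed-point) arithmetic, the style that maps naturally onto FPGA fabric. The sketch below shows a leaky integrate-and-fire update in this style; the threshold, leak shift, and synaptic weights are illustrative assumptions, not values taken from the surveyed works.

```python
# Leaky integrate-and-fire (LIF) update in integer arithmetic (illustrative values).
V_TH, V_RESET, LEAK_SHIFT = 1 << 13, 0, 4   # fixed-point threshold, reset, leak shift

def lif_step(v, weighted_spikes):
    """One timestep: leak via arithmetic shift, integrate inputs, fire, reset."""
    v -= v >> LEAK_SHIFT          # leak: v *= (1 - 2^-LEAK_SHIFT), no multiplier needed
    v += sum(weighted_spikes)     # accumulate incoming synaptic contributions
    spike = v >= V_TH
    if spike:
        v = V_RESET
    return v, spike

v, fired = 0, []
for t in range(200):
    v, s = lif_step(v, [600, 450])   # toy constant synaptic drive
    fired.append(s)                  # regular output spikes once v reaches V_TH
```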

    Shape Representation in Primate Visual Area 4 and Inferotemporal Cortex

    The representation of contour shape is an essential component of object recognition, but the cortical mechanisms underlying it are incompletely understood, leaving it a fundamental open question in neuroscience. Such an understanding would be useful theoretically as well as in developing computer vision and Brain-Computer Interface applications. We ask two fundamental questions: how is contour shape represented in cortex, and how can neural models and computer vision algorithms more closely approximate this representation? We begin by analyzing the statistics of contour curvature variation and develop a measure of salience based upon the arc length over which curvature remains within a constrained range. We create a population of V4-like cells – responsive to a particular local contour conformation located at a specific position on an object’s boundary – and demonstrate high recognition accuracies classifying handwritten digits in the MNIST database and objects in the MPEG-7 Shape Silhouette database. We compare the performance of the cells to the “shape-context” representation (Belongie et al., 2002) and achieve roughly comparable recognition accuracies using a small test set. We analyze the relative contributions of various feature sensitivities to recognition accuracy and robustness to noise. Local curvature appears to be the most informative for shape recognition. We create a population of IT-like cells, which integrate specific information about the 2-D boundary shapes of multiple contour fragments, and evaluate its performance on a set of real images as a function of the V4 cell inputs. We determine the sub-population of cells that are most effective at identifying a particular category. We classify based upon cell population response and obtain very good results. We use the Morris-Lecar neuronal model to more realistically illustrate the previously explored shape representation pathway from V4 to IT. We demonstrate recognition using spatiotemporal patterns within a winnerless competition network with FitzHugh-Nagumo model neurons. Finally, we use the Izhikevich neuronal model to produce an enhanced response in IT, correlated with recognition, via gamma synchronization in V4. Our results support the hypothesis that the response properties of V4 and IT cells, as well as our computer models of them, function as robust shape descriptors in the object recognition process.
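    The local curvature features that V4-like cells are tuned to can be estimated directly from a sampled boundary. The sketch below computes discrete curvature along a closed 2-D contour; it is a generic discrete-geometry illustration, not the dissertation's salience measure or cell model.

```python
import numpy as np

def contour_curvature(points):
    """Discrete curvature of a closed contour given as an (N, 2) array of samples."""
    d1 = (np.roll(points, -1, axis=0) - np.roll(points, 1, axis=0)) / 2.0   # first derivative
    d2 = np.roll(points, -1, axis=0) - 2 * points + np.roll(points, 1, axis=0)  # second derivative
    num = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]   # z-component of the cross product
    den = (d1[:, 0]**2 + d1[:, 1]**2) ** 1.5
    return num / np.maximum(den, 1e-12)

# Sanity check: a circle of radius 5 gives curvature close to 1/5 everywhere.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([5 * np.cos(theta), 5 * np.sin(theta)], axis=1)
kappa = contour_curvature(circle)
```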

    Biophysical mechanisms of frequency-dependence and its neuromodulation in neurons in oscillatory networks

    In response to oscillatory input, many isolated neurons exhibit a preferred frequency response in their voltage amplitude and phase shift. Membrane potential resonance (MPR), a maximum amplitude in a neuron’s input impedance at a non-zero frequency, captures the essential subthreshold properties of a neuron, which may provide a coordinating mechanism for organizing the activity of oscillatory neuronal networks around a given frequency. In the pyloric central pattern generator network of the crab Cancer borealis, for example, the pacemaker-group pyloric dilator (PD) neurons show MPR at a frequency that is correlated with the network frequency. This dissertation uses the crab pyloric CPG to examine how, in one neuron type, interactions of ionic currents, even when expressed at different levels, can produce consistent MPR properties, how MPR properties are modified by neuromodulators, and how such modifications may lead to distinct functional effects at different network frequencies. In the first part of this dissertation it is demonstrated that, despite the extensive variability of individual ionic currents in a neuron type such as PD, these currents can generate a consistent impedance profile as a function of input frequency and therefore result in stable MPR properties. Correlated changes in ionic current parameters are associated with the dependence of MPR on the membrane potential range. Synaptic inputs or neuromodulators that shift the membrane potential range can modify the interaction of multiple resonant currents and therefore shift the MPR frequency. Neuromodulators change the properties of voltage-dependent ionic currents. Since ionic current interactions are nonlinear, the modulation of excitability and the impedance profile may depend on all ionic current types expressed by the neuron. MPR is generated by the interaction of positive and negative feedback effects due to fast amplifying and slower resonant currents. Neuromodulators can modify existing MPR properties to generate antiresonance (a minimum amplitude response). In the second part of this dissertation, it is shown that the neuropeptide proctolin produces antiresonance in the follower lateral pyloric (LP) neuron, but not in the PD neuron. This finding is inconsistent with the known influences of proctolin. However, a novel proctolin-activated ionic current is shown to produce the antiresonance. Using linear models, antiresonance is then demonstrated to amplify MPR in synaptic partner neurons, indicating a potential function in the pyloric network. Neuromodulator actions are state dependent, in that they may depend on the prior activity history of the network. It is shown that state-dependence may arise in part from the time-dependence of an inactivating inward current targeted by the neuromodulator proctolin. Due to the kinetics of inactivation, this current advances the burst phase and increases the duty cycle of the neuron, but mainly at higher network frequencies. These results demonstrate that the effect of neuromodulators on MPR in individual neuron types depends on the nonlinear interaction of modulator-activated and other ionic currents as well as the activation of currents with frequency-dependent properties. Consequently, the action of neuromodulators on the output of oscillatory networks may depend on the frequency of oscillations and be predictable from the MPR properties of the network neurons.
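    The basic phenomenon of MPR can be reproduced with a linearized membrane model containing a fast leak and a slower resonant current, for which the impedance profile has a closed form. The sketch below computes |Z(f)| and its peak for such a model; the conductances and time constant are illustrative assumptions, not measurements from PD or LP neurons.

```python
import numpy as np

# Linearized membrane with leak plus a slower resonant current (illustrative values).
C = 1.0          # capacitance (nF)
g_L = 0.05       # leak conductance (uS)
g_r = 0.3        # resonant-current conductance (uS)
tau_r = 100.0    # resonant-current time constant (ms)

f = np.linspace(0.1, 20.0, 1000)          # input frequency (Hz)
w = 2 * np.pi * f / 1000.0                # angular frequency (rad/ms)

# Impedance: Z(w) = 1 / (g_L + i*w*C + g_r / (1 + i*w*tau_r)), in MOhm.
Z = 1.0 / (g_L + 1j * w * C + g_r / (1.0 + 1j * w * tau_r))

f_res = f[np.argmax(np.abs(Z))]           # MPR frequency: peak of the impedance amplitude
phase = np.angle(Z)                       # phase-shift profile across frequency
```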

    Nonlinear Dynamics of Neural Circuits


    Optimal Control and Synchronization of Dynamic Ensemble Systems

    Ensemble control involves the manipulation of an uncountably infinite collection of structurally identical or similar dynamical systems, which are indexed by a parameter set, by applying a common control without using feedback. This subject is motivated by compelling problems in quantum control, sensorless robotic manipulation, and neural engineering, which involve ensembles of linear, bilinear, or nonlinear oscillating systems, for which analytical control laws are infeasible or absent. The focus of this dissertation is on novel analytical paradigms and constructive control design methods for practical ensemble control problems. The first result is a computational method based on the singular value decomposition (SVD) for the synthesis of minimum-norm ensemble controls for time-varying linear systems. This method is extended to iterative techniques to accommodate bounds on the control amplitude, and to synthesize ensemble controls for bilinear systems. Example ensemble systems include harmonic oscillators, quantum transport, and quantum spin transfers on the Bloch system. To move towards the control of complex ensembles of nonlinear oscillators, which occur in neuroscience, circadian biology, electrochemistry, and many other fields, ideas from synchronization engineering are incorporated. The focus is placed on the phenomenon of entrainment, which refers to the dynamic synchronization of an oscillating system to a periodic input. Phase coordinate transformation, formal averaging, and the calculus of variations are used to derive minimum energy and minimum mean time controls that entrain ensembles of non-interacting oscillators to a harmonic or subharmonic target frequency. In addition, a novel technique for taking advantage of nonlinearity and heterogeneity to establish desired dynamical structures in collections of inhomogeneous rhythmic systems is derived.
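    The entrainment setting can be illustrated with a phase-reduced ensemble of non-interacting oscillators driven by one common periodic input. The sketch below uses an assumed sinusoidal phase response curve and a simple harmonic forcing waveform, not the minimum-energy or minimum-time controls derived in the dissertation; all parameter values are illustrative.

```python
import numpy as np

# Ensemble of phase oscillators entrained by a common periodic input (illustrative).
rng = np.random.default_rng(0)
N = 200
omega = 1.0 + 0.05 * rng.standard_normal(N)   # heterogeneous natural frequencies (rad/s)
Omega = 1.0                                   # target (harmonic) entrainment frequency
theta = rng.uniform(0, 2 * np.pi, N)          # oscillator phases
psi = 0.0                                     # phase of the forcing signal

def Z(phase):                                 # assumed phase response curve
    return np.sin(phase)

dt, amp = 0.01, 0.2
for _ in range(100000):
    u = amp * np.cos(psi)                     # common open-loop control input
    theta += dt * (omega + Z(theta) * u)      # phase-reduced ensemble dynamics
    psi += dt * Omega

# After transients, most oscillators within the locking range phase-lock to the input:
# the circular spread of theta - psi becomes small.
spread = np.std(np.angle(np.exp(1j * (theta - psi))))
```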

    Robust Reservoir Computing Approaches for Predicting Cardiac Electrical Dynamics

    Computational modeling of cardiac electrophysiological signaling is of vital importance in understanding, preventing, and treating life-threatening arrhythmias. Traditionally, mathematical models incorporating physical principles have been used to study cardiac dynamical systems and can generate mechanistic insights, but their predictions are often quantitatively inaccurate due to model complexity, the lack of observability in the system, and variability within individuals and across the population. In contrast, machine-learning techniques can learn directly from training data, which in this context are time series of observed state variables, without prior knowledge of the system dynamics. The reservoir computing framework, a learning paradigm derived from recurrent neural network concepts and most commonly realized as an echo state network (ESN), offers a streamlined training process and holds promise to deliver more accurate predictions than mechanistic models. Accordingly, this research aims to develop robust ESN-based forecasting approaches for nonlinear cardiac electrodynamics, and thus presents the first application of machine learning, and of deep learning in particular, for modeling the complex electrical dynamics of cardiac cells and tissue. To accomplish this goal, we completed a set of three projects. (i) We compared the performance of available mainstream techniques for prediction with that of the baseline ESN approach along with several new ESN variants we proposed, including a physics-informed hybrid ESN. (ii) We proposed a novel integrated approach, the autoencoder echo state network (AE-ESN), that can accurately forecast the long-term future dynamics of cardiac electrical activity. This technique takes advantage of the best characteristics of both gated recurrent neural networks and ESNs by integrating a long short-term memory (LSTM) autoencoder into the ESN framework to improve reliability and robustness. (iii) We extended the long-term prediction of cardiac electrodynamics from a single cardiac cell to the tissue level, where, in addition to the temporal information, the data includes spatial dimensions and diffusive coupling. Building on the main design idea of the AE-ESN, a convolutional autoencoder was equipped with an ESN to create the Conv-ESN technique, which can process the spatiotemporal data and effectively capture the temporal dependencies between samples of data. Using these techniques, we forecast cardiac electrodynamics for a variety of datasets obtained in both in silico and in vitro experiments. We found that the proposed integrated approaches provide robust and computationally efficient techniques that can successfully predict the dynamics of electrical activity in cardiac cells and tissue with higher prediction accuracy than mainstream deep-learning approaches commonly used for predicting temporal data. On the application side, our approaches provide accurate forecasts over clinically useful time periods that could allow prediction of electrical problems with sufficient time for intervention and thus may support new types of treatments for some kinds of heart disease.
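    The streamlined training that makes ESNs attractive comes from fitting only a linear readout on top of a fixed random reservoir. The sketch below shows this baseline recipe with a ridge-regression readout on a toy periodic signal; it illustrates the general reservoir-computing idea rather than the AE-ESN or Conv-ESN architectures developed in this work, and the sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Baseline echo state network with a ridge-regression readout (illustrative sizes).
rng = np.random.default_rng(1)
n_in, n_res, leak, rho, ridge = 1, 300, 0.3, 0.9, 1e-6

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))     # rescale to the desired spectral radius

def run_reservoir(u_seq):
    x, states = np.zeros(n_res), []
    for u in u_seq:
        pre = W_in @ np.atleast_1d(u) + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)     # leaky reservoir update
        states.append(x.copy())
    return np.array(states)

# Toy training signal; only the linear readout W_out is trained.
t = np.arange(5000)
u = np.sin(2 * np.pi * t / 100) ** 7                # periodic, spike-like waveform
y = u[1:]                                           # one-step-ahead target
X = run_reservoir(u[:-1])
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
y_hat = X @ W_out                                   # one-step-ahead forecast
```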