
    Generalized reconfigurable memristive dynamical system (MDS) for neuromorphic applications

    This study presents (i) a novel general cellular mapping scheme for two-dimensional neuromorphic dynamical systems such as bio-inspired neuron models, and (ii) an efficient mixed analog-digital circuit, which can be conveniently implemented on a hybrid memristor-crossbar/CMOS platform, for hardware implementation of the scheme. This approach employs 4n memristors and no switches to implement an n-cell system, compared with the 2n² memristors and 2n switches of a Cellular Memristive Dynamical System (CMDS). Moreover, this approach allows for dynamical variables with both analog and one-hot digital values, opening a wide range of choices for interconnection and networking schemes. Dynamical response analyses show that this circuit exhibits various responses based on the underlying bifurcation scenarios, which determine the main characteristics of the neuromorphic dynamical systems. Due to the high programmability of the circuit, it can be applied to a variety of learning systems, real-time applications, and analytically indescribable dynamical systems. We simulate the FitzHugh-Nagumo (FHN), Adaptive Exponential (AdEx) integrate-and-fire, and Izhikevich neuron models on our platform, and investigate the dynamical behaviors of these circuits as case studies. Moreover, error analysis shows that our approach is suitably accurate. We also develop a simple hardware prototype for experimental demonstration of our approach. (European Union H2020 ECOMODE project under grant agreement 604102; European Union HBP project under grant number FP7-ICT-2013-FET-F-60410)
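
    As a rough illustration of the kind of dynamics being mapped onto the memristive cells, the following is a minimal forward-Euler simulation of the FitzHugh-Nagumo model, one of the paper's three case studies; the parameter values are illustrative assumptions, not the paper's settings.

        import numpy as np

        def fhn(I=0.5, eps=0.08, a=0.7, b=0.8, dt=0.01, steps=50000):
            """Forward-Euler integration of the FHN equations
            dv/dt = v - v^3/3 - w + I,  dw/dt = eps*(v + a - b*w).
            Spikes appear as relaxation oscillations of v."""
            v, w = -1.0, -0.5
            trace = np.empty(steps)
            for t in range(steps):
                dv = v - v**3 / 3.0 - w + I
                dw = eps * (v + a - b * w)
                v += dt * dv
                w += dt * dw
                trace[t] = v
            return trace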

    Hardware design of LIF with Latency neuron model with memristive STDP synapses

    In this paper, the hardware implementation of a neuromorphic system is presented. The system is composed of a Leaky Integrate-and-Fire with Latency (LIFL) neuron and a Spike-Timing Dependent Plasticity (STDP) synapse. The LIFL neuron model allows more information to be encoded than the common Integrate-and-Fire models typically considered for neuromorphic implementations. In our system, the LIFL neuron is implemented using CMOS circuits, while a memristor is used for the implementation of the STDP synapse. A description of the entire circuit is provided. Finally, the capabilities of the proposed architecture are evaluated by simulating a motif composed of three neurons and two synapses. The simulation results confirm the validity of the proposed system and its suitability for the design of more complex spiking neural networks.
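
    For orientation, a pair-based STDP rule of the kind such memristive synapses typically realize can be sketched as follows; the amplitudes and time constant are illustrative assumptions, not the values used in the paper.

        import math

        def stdp_dw(t_pre, t_post, A_plus=0.01, A_minus=0.012, tau=20.0):
            """Weight change for one pre/post spike pair (times in ms)."""
            dt = t_post - t_pre
            if dt > 0:       # pre fires before post: potentiation
                return A_plus * math.exp(-dt / tau)
            if dt < 0:       # post fires before pre: depression
                return -A_minus * math.exp(dt / tau)
            return 0.0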

    Quantized Neural Networks and Neuromorphic Computing for Embedded Systems

    Deep learning techniques have achieved great success in areas such as computer vision, speech recognition, and natural language processing. The breakthroughs made by deep learning techniques are changing every aspect of our lives. However, deep learning techniques have not realized their full potential in embedded systems such as mobile devices and vehicles, because the high performance of deep learning comes at the cost of high computational resources and energy consumption. It is therefore very challenging to deploy deep learning models in embedded systems, since such systems have very limited computational resources and strict power constraints. Extensive research on deploying deep learning techniques in embedded systems has been conducted, and considerable progress has been made. In this book chapter, we introduce two approaches. The first is model compression, one of the most popular approaches proposed in recent years. The second is neuromorphic computing, a novel computing paradigm that mimics the human brain.
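
    As a concrete example of the model-compression direction, a common technique is uniform weight quantization; the symmetric 8-bit scheme below is our assumption for illustration, not the chapter's specific method.

        import numpy as np

        def quantize(weights, bits=8):
            """Symmetric uniform quantization of a float array to signed ints."""
            qmax = 2 ** (bits - 1) - 1
            scale = np.max(np.abs(weights)) / qmax
            q = np.round(weights / scale).astype(np.int8)
            return q, scale  # approximate reconstruction: q * scale

        w = np.random.randn(4, 4).astype(np.float32)
        q, scale = quantize(w)
        print(np.max(np.abs(w - q * scale)))  # error bounded by ~scale/2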

    Stochastic IMT (insulator-metal-transition) neurons: An interplay of thermal and threshold noise at bifurcation

    Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Electronic implementation of such stochastic networks is currently limited to adding algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks, we need experimental evidence of devices with measurable and controllable stochasticity, complemented by the development of reliable statistical models of the observed stochasticity. The current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the current article, in which we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise-linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using Vanadium Dioxide (VO₂) based IMT neurons, which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources, thermal noise and threshold fluctuations, which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary, resulting in transfer curves that closely match experiments. As one of the first comprehensive studies of stochastic neuron hardware and its statistical properties, this article should enable efficient implementation of a large class of neuro-mimetic networks and algorithms.
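
    The modeling idea can be sketched numerically: an Ornstein-Uhlenbeck membrane variable crossing a fluctuating threshold produces stochastic spikes whose rate traces out a sigmoid-like transfer curve. All constants below are illustrative assumptions, not fitted device parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        def firing_prob(mu, theta0=1.0, tau=10.0, sigma=0.2, sigma_th=0.05,
                        dt=0.1, steps=20000):
            """Fraction of time steps on which the OU variable crosses threshold."""
            v, spikes = 0.0, 0
            for _ in range(steps):
                # OU drift toward input mu, plus thermal noise
                v += dt * (mu - v) / tau + sigma * np.sqrt(dt) * rng.standard_normal()
                # threshold fluctuation (the "fluctuating boundary")
                theta = theta0 + sigma_th * rng.standard_normal()
                if v >= theta:
                    spikes += 1
                    v = 0.0  # reset after spike
            return spikes / steps

        # Sweeping mu traces out a sigmoid-like transfer curve:
        for mu in (0.6, 0.8, 1.0, 1.2):
            print(mu, firing_prob(mu))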

    Developing a spiking neural model of Long Short-Term Memory architectures

    Current advances in Deep Learning have shown significant improvements in common Machine Learning applications such as image, speech, and text recognition. In particular, in order to process time series, deep Neural Networks (NNs) with Long Short-Term Memory (LSTM) units are widely used in sequence recognition problems to store recent information and use it for future predictions. Ongoing research on Neural Networks and Machine Learning can greatly improve the efficiency of data analysis in many applications in Physics, especially when large data sets are involved. However, whenever acquisition and processing of data at different time resolutions is required, a synchronization problem arises in which the same piece of information is processed multiple times, and the efficiency advantage of NNs, which lack a natural notion of time, disappears. Spiking Neural Networks (SNNs) are the next generation of NNs; they allow efficient information coding and processing by means of spikes, i.e. binary pulses propagating between neurons. In this way, information can be encoded in time, and communication is activated only when the input to the neurons changes, yielding higher efficiency. In the present work, analog neurons are used for training and are then substituted with spiking neurons to perform the tasks. The aim of this project is to find a transfer function which allows a simple and accurate switch between analog and spiking neurons, and then to show that the resulting network performs well on different tasks. First, an analytical transfer function for more biologically plausible values of some neuronal parameters is derived and tested. Subsequently, the stochastic nature of biological neurons is added to the neuronal model. A new transfer function is then approximated by studying the stochastic behavior of the artificial neurons, allowing a simplified description of the gates and the input cell in the LSTM units. The stochastic LSTM networks are then tested on Sequence Prediction and T-Maze, typical memory-involving Machine Learning tasks, showing that almost all the resulting spiking networks correctly compute the original tasks. The main conclusion of this project is that, by means of a neuronal model comprising a stochastic description of the neuron, it is possible to obtain an accurate mapping from analog to spiking memory networks, which gives good results on Machine Learning tasks.

    Spiking neurons communicate with each other by means of a telegraph-like mechanism: a message is encoded by a neuron in binary events, called spikes, that are sent to another neuron, which decodes the incoming signal by means of the same coding originally used by the sending neuron. The problem addressed in this project was then: is it possible to make a group of such neurons remember things in the short term, but for long enough that they can solve tasks that require memory? Imagine you are driving to work along a road you have never taken before, and your task is to turn right at the next traffic light. The memory tasks we wanted the neural networks to learn and solve are of this sort, and no spiking networks existed that could do this.
    With regard to this goal, the approach we opted for was to train a network of standard artificial neurons and then, once the network had learned to perform the task, switch the standard neurons with our modeled spiking neurons. To do this there are some constraints; in particular, the two types of neurons (standard and spiking) have to encode signals in the same way, meaning that they need the same coding policy. In this project, I had to find an adequate coding policy for the spiking neurons, give the same policy to a network of standard neurons, and test this substitution. It turned out that, after the standard networks had learned the tasks and their units were switched with spiking ones, the spiking neurons were indeed able to remember short-term information (such as looking for a traffic light before turning right) and to perform well in such memory tasks, allowing useful computation over time. One of the scientific fields in need of improvement is, in fact, signal processing over time. Nowadays, most detection instruments collect signal during a time window, meaning that the signal collected in a small time range is considered as a whole instead of being detected in continuous time. In the first case, a buffer of the history of the time windows (the information gathered before meeting the traffic light) is stored, while when information is processed in continuous time, only the relevant information (the time at which the traffic light is encountered) is needed. Being able to classify signals as soon as they are detected is a characteristic of asynchronous detection, examples of which are our sight and hearing. The brain is, in fact, one of the most efficient and powerful systems in existence. So why not study a computation method inspired by the brain? Spiking neurons are exactly that: artificial units performing brain-like computation. Hence, these neurons potentially offer efficient computation and an advantageous method for continuous-time signal processing, which will hopefully be implemented in many research fields in the future.
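
    The core mapping idea, using a spiking neuron's steady-state rate as the analog activation so that trained analog units can later be replaced by spiking ones, can be sketched with a deterministic LIF transfer function; the project's actual transfer function additionally models stochasticity, and the parameters below are illustrative assumptions.

        import numpy as np

        def lif_rate(I, tau=0.02, theta=1.0, t_ref=0.002):
            """Steady-state LIF firing rate (Hz) for constant input current I.
            Time to threshold from reset 0 is tau * ln(I / (I - theta));
            sub-threshold inputs (I <= theta) never spike."""
            I = np.asarray(I, dtype=float)
            rate = np.zeros_like(I)
            supra = I > theta
            t_spike = tau * np.log(I[supra] / (I[supra] - theta))
            rate[supra] = 1.0 / (t_ref + t_spike)
            return rate

        # Using this curve as the activation during analog training keeps the
        # analog and spiking units input-output compatible:
        print(lif_rate([0.5, 1.1, 2.0, 5.0]))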