15 research outputs found

    Two Dimensional Dynamic Synapse With Programmable Spatio-Temporal Dynamics For Neuromorphic Computing

    In today’s era of big data, a new computing paradigm beyond the von Neumann architecture is needed to process large-scale datasets efficiently. In response to this need, the field of neuromorphic computing has recently emerged. Inspired by the brain, neuromorphic approaches handle complex tasks with far better efficiency than conventional computers. This is because, unlike modern computers that compute with digital ‘0’s and ‘1’s, biological neural networks exhibit analog changes in synaptic connections during decision-making and learning. However, existing approaches that use digital complementary metal-oxide-semiconductor (CMOS) devices to emulate gradual/analog behaviors in neural networks are energy intensive and unsustainable; furthermore, emerging memristor devices still face challenges such as non-linearity and large write noise. Here, we propose a novel artificial synaptic device: an electrochemical dynamic synapse based on two-dimensional (2D) materials. The synaptic weight (channel conductance) of these dynamic synapses can be tuned via both a long-term doping effect from electrochemical intercalation and a short-term doping effect from ionic gating, thereby demonstrating programmable spatio-temporal dynamics, an essential feature for implementing spiking neural networks (SNNs). The electrical conductance of the channel is reversibly modulated by the concentration of Li ions between the layers of the 2D materials. This fundamentally different mechanism allows us to achieve good energy efficiency, a large number of programmable states (5000 non-volatile states), good endurance and retention, and a linear and symmetric resistance response. We demonstrate essential synaptic functions such as excitatory and inhibitory responses, short-term and long-term plasticity, paired-pulse facilitation (PPF), spike-timing-dependent plasticity (STDP), and spike-rate-dependent plasticity (SRDP), with good repeatability. Our scaling study suggests that this simple 2D synapse is scalable in terms of switching energy and speed. This work can lead to low-power hardware implementations of neural networks for neuromorphic computing.
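The STDP behavior mentioned above can be illustrated with a minimal sketch of the standard pair-based STDP rule: the weight change depends on the time difference between pre- and post-synaptic spikes. The amplitudes and time constants below are illustrative placeholders, not values measured from the device.

```python
import math

# Pair-based STDP: potentiate when the pre-synaptic spike precedes the
# post-synaptic spike, depress when it follows. Parameters are illustrative.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_delta_w(t_pre, t_post):
    """Return the synaptic weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post before pre -> depression (LTD)
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```

The exponential decay captures the key property the abstract relies on: spike pairs that are closer in time produce larger conductance changes.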

    Memristor: Modeling, Simulation and Usage in Neuromorphic Computation

    The memristor, the fourth passive circuit element, has attracted increasing attention from many areas since the first physical device was reported in 2008. Its distinctive ability to record the historic profile of the voltage/current through itself creates great potential for future circuit design. Inspired by its high scalability, ultra-low power consumption, and functional similarity to biological synapses, using memristors to build high-density, power-efficient neuromorphic circuits has become one of the most promising, and also most challenging, applications. The challenges fall into three levels: device level, circuit level, and application level. At the device level, we studied different memristor models and process variations, then developed three independent variation models to describe the variation and stochastic behavior of TiO2 memristors. These models can be extended to other memristor types, while remaining compact enough for large-scale circuit simulation. At the circuit level, motivated by the large scale and unique requirements of memristor-based neuromorphic circuits, we designed a circuit simulator for efficient simulation of memristor cross-point arrays. Our simulator is 4~5 orders of magnitude faster than traditional SPICE simulators. Both linear and nonlinear memristor cross-point arrays are studied, for level-based and spike-based neuromorphic circuits, respectively. At the application level, we first designed several compact memristor-based neuromorphic components, including a ``macro cell'' for efficient, high-definition weight storage, a memristor-based stochastic neuron, and a memristor-based spatio-temporal synapse. We then studied three typical neural network models and their hardware realization on memristor-based neuromorphic circuits: the Brain-State-in-a-Box (BSB) model as a level-based neural network, and the STDP/ReSuMe models as spiking neural networks for temporal learning. Our results demonstrate the high resilience to variation and the ultra-low power consumption of memristor-based circuits. In this thesis, we have presented a complete and detailed analysis of memristor-based neuromorphic circuit design from the device level to the application level. At each level, both theoretical analysis and experimental verification are applied to ensure the completeness and accuracy of the work.
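As background for the device-level modeling discussed above, the widely used linear ion-drift model of a TiO2 memristor can be sketched in a few lines. This is the textbook HP-style model, not the thesis's own variation models, and all parameter values are illustrative placeholders.

```python
# Minimal linear ion-drift model of a TiO2 memristor (HP-style model).
# Parameter values are illustrative, not the thesis's fitted values.
R_ON, R_OFF = 100.0, 16_000.0   # fully doped / undoped resistance (ohms)
D = 10e-9                        # device thickness (m)
MU_V = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, dt, x0=0.1):
    """Integrate the state x (doped-region fraction, 0..1) under a voltage
    waveform; return the memristance trace over time."""
    x, trace = x0, []
    for v in voltage:
        m = R_ON * x + R_OFF * (1.0 - x)     # doped and undoped regions in series
        i = v / m                            # Ohm's law
        x += MU_V * R_ON / D**2 * i * dt     # linear ion drift moves the boundary
        x = min(max(x, 0.0), 1.0)            # hard bounds (no window function)
        trace.append(m)
    return trace
```

A positive voltage grows the doped region and lowers the resistance; a negative voltage reverses it, which is the history-dependent behavior the abstract refers to.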

    The Evolution, Analysis, and Design of Minimal Spiking Neural Networks for Temporal Pattern Recognition

    All sensory stimuli are temporal in structure. How a pattern of action potentials encodes the information received from sensory stimuli is an important research question in neuroscience. Although it is clear that information is carried by the number or the timing of spikes, information processing in the nervous system remains poorly understood. The desire to understand information processing in the animal brain led to the development of spiking neural networks (SNNs), and understanding information processing in SNNs may in turn give insight into the animal brain. One way to understand the mechanisms that enable SNNs to perform a computational task is to associate the structural connectivity of the network with the corresponding functional behaviour. This work demonstrates the structure-function mapping of spiking networks evolved (or handcrafted) for recognising temporal patterns. The SNNs are composed of simple yet biologically meaningful adaptive exponential integrate-and-fire (AdEx) neurons. The computational task can be described as identifying a subsequence of three signals (say ABC) in a random input stream of signals ("ABBBCCBABABCBBCAC"). The topology and connection weights of the networks are optimised using a genetic algorithm such that the network output spikes only for the correct input pattern and remains silent otherwise. The fitness function rewards the network output for spiking after receiving the correct pattern and penalises spikes elsewhere. To analyse the effect of noise, two types of noise are introduced during evolution: (i) random fluctuations of the membrane potential of neurons at every network step, and (ii) random variations of the duration of the silent interval between input signals. Evolution in the presence of noise produced networks that were robust to perturbation of neuronal parameters. Moreover, the networks also developed a form of memory, enabling them to maintain network states in the absence of input activity. It has been demonstrated that the network states of an evolved network have a one-to-one correspondence with the states of a finite-state transducer (FST), a model of computation for time-structured data. The analysis of networks indicated that the task of recognition is accomplished by transitions between network states. Evolution may overproduce synaptic connections; pruning these superfluous connections revealed pronounced structural similarities among individuals obtained from different independent runs. Moreover, the analysis of the pruned networks highlighted that memory is a property of self-excitation in the network: neurons with self-excitatory loops (also called autapses) can sustain spiking activity indefinitely in the absence of input activity. To recognise a pattern of length n, a network requires n+1 network states, where n states are maintained actively with autapses and the penultimate state is maintained passively by the absence of activity in the network. The roles of other connections in the network are also identified. Of particular interest, three interneurons are found to have specialised roles: (i) the lock neuron is always active, preventing the output from spiking until it is released by the penultimate signal in the correct pattern, allowing the output neuron to spike for the correct last signal; (ii) the switch neuron is responsible for switching the network between the inter-signal states and the start state; and (iii) the accept neuron produces spikes in the output neuron when the network receives the last correct input, and also signals the switch neuron to return the network to the start state. Understanding how information is processed in the evolved networks led to handcrafting network topologies for recognising longer patterns. The proposed rules can extend network topologies to recognise temporal patterns up to length six. To validate the handcrafted topology, a genetic algorithm is used to optimise its connection weights. It has been observed that the maximum number of active neurons representing a state in the network increases with the pattern length; therefore, the suggested rules can handcraft network topologies only up to length six. Handcrafting network topologies that represent a network state with a fixed number of active neurons requires further investigation.
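The fitness criterion described above (reward output spikes that follow the target subsequence, penalise spikes elsewhere) can be sketched as follows. The function name, the reward/penalty weights, and the representation of output spikes as stream indices are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of the GA fitness evaluation: score output spikes against the
# positions in the input stream where the target pattern just completed.
TARGET = "ABC"  # the subsequence the network must recognise

def fitness(input_stream, output_spikes, reward=1.0, penalty=1.0):
    """input_stream: string of signals, e.g. "ABBBCCBABABCBBCAC".
    output_spikes: set of stream indices at which the output neuron fired."""
    score = 0.0
    for i in output_spikes:
        # Does the window ending at position i spell out the target pattern?
        window = input_stream[max(0, i - len(TARGET) + 1):i + 1]
        score += reward if window == TARGET else -penalty
    return score
```

A network that spikes exactly once per completed occurrence of the pattern maximises this score; any spurious spike reduces it, which drives evolution toward the selective behaviour the abstract describes.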

    Cognitive Learning and Memory Systems Using Spiking Neural Networks

    Ph.D. (Doctor of Philosophy)

    Stochastic-Based Computing with Emerging Spin-Based Device Technologies

    In this dissertation, analog and emerging device physics is explored to provide a technology platform for designing new bio-inspired systems and novel architectures. As CMOS approaches the nanoscale, it nears its physical limits in feature size, and device characteristics will pose severe challenges to constructing robust digital circuitry. Unlike transistor defects due to fabrication imperfection, quantum-related switching uncertainties will seriously increase susceptibility to noise, rendering traditional logic design techniques inadequate. The trend of current research is therefore to create non-Boolean, high-level computational models and map them directly onto the unique operational properties of new, power-efficient, nanoscale devices. The focus of this research is two-fold. 1) Investigation of the physical hysteretic switching behavior of domain wall devices. We analyze the behavior of domain wall devices and identify hysteresis over a range of currents. We propose a Domain-Wall-Motion-based (DWM) NCL circuit that achieves approximately 30x and 8x improvements in energy efficiency and chip layout area, respectively, over its equivalent CMOS design, while maintaining similar delay performance for a one-bit full adder. 2) Investigation of the physical stochastic switching behavior of Magnetic Tunnel Junction (MTJ) devices. By analyzing the stochastic switching behavior of MTJs, we propose an innovative stochastic-based architecture for implementing artificial neural networks (S-ANN) with both MTJ and DWM devices, which enables efficient computing at ultra-low voltage. For a well-known pattern recognition task, our mixed-model HSPICE simulation results show that a 34-neuron S-ANN implementation, when compared with its deterministic ANN counterparts implemented in digital and analog CMOS circuits, achieves more than 1.5~2 orders of magnitude lower energy consumption and 2~2.5 orders of magnitude less hidden-layer chip area.
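The core idea of stochastic-based computing with MTJs can be sketched abstractly: the junction switches with a probability that grows with the input current, so averaging many binary switching events yields an effective analog activation. The sigmoid form and the parameters `i_crit` and `beta` below are illustrative assumptions, not the dissertation's device model.

```python
import math
import random

# Stochastic MTJ "neuron": switching probability rises with input current.
def mtj_fire(current, i_crit=1.0, beta=4.0, rng=random.random):
    """Return True if the MTJ switches (the neuron 'fires') on this trial."""
    p_switch = 1.0 / (1.0 + math.exp(-beta * (current - i_crit)))
    return rng() < p_switch

def activation(current, trials=10_000):
    """Estimate the effective analog activation by averaging many binary
    switching events -- the statistical encoding behind an S-ANN."""
    return sum(mtj_fire(current) for _ in range(trials)) / trials
```

Because each trial is a single low-voltage switching event, the analog quantity emerges from statistics rather than from a precise analog circuit, which is what allows operation at ultra-low voltage.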

    Learning to Behave: Internalising Knowledge


    Visual attention in primates and for machines - neuronal mechanisms

    Visual attention is an important cognitive concept in the daily life of humans, but it is still not fully understood; because of this, it is also rarely utilized in computer vision systems. Understanding visual attention is challenging, as it has many seemingly different aspects at both the neuronal and the behavioral level, which makes it very hard to give a uniform explanation that can account for all of them. To tackle this problem, this thesis aims to identify a common set of neuronal mechanisms that underlie both neuronal and behavioral aspects. The mechanisms are simulated by neuro-computational models, resulting in a single modeling approach that explains a wide range of phenomena at once. The chosen aspects are multiple neurophysiological effects, real-world object localization, and a visual masking paradigm (object substitution masking, OSM). In each of the considered fields, the work also advances the current state of the art, to better understand that aspect of attention itself. The three chosen aspects show that the approach can account for crucial neurophysiological, functional, and behavioral properties, so the mechanisms might constitute the general neuronal substrate of visual attention in the cortex. As an outlook, this work provides computer vision with a deeper understanding of attention and a concrete prototype for incorporating this crucial aspect of human perception in future systems. According to this research, attention could be very useful for such systems, since in the brain it provides a task-specific optimization of the visual system; this aspect of human perception is largely missing from current, powerful computer vision systems, and integrating it could markedly improve their performance. Contents: 1. General introduction; 2. The state of the art in modeling visual attention; 3. Microcircuit model of attention; 4. Object localization with a model of visual attention; 5. Object substitution masking; 6. General conclusion.

    MOCAST 2021

    The 10th International Conference on Modern Circuit and System Technologies on Electronics and Communications (MOCAST 2021) took place in Thessaloniki, Greece, from July 5th to July 7th, 2021. The MOCAST technical program covers all aspects of circuit and system technologies, from modeling to design, verification, implementation, and application. This Special Issue presents extended versions of top-ranking papers from the conference. The topics of MOCAST include: analog/RF and mixed-signal circuits; digital circuits and systems design; nonlinear circuits and systems; device and circuit modeling; high-performance embedded systems; systems and applications; sensors and systems; machine learning and AI applications; communication and network systems; power management; imagers, MEMS, medical, and displays; radiation front ends (nuclear and space applications); and education in circuits, systems, and communications.

    Forum Bildverarbeitung 2020

    Image processing plays a key role in fast and contact-free data acquisition in many technical areas, e.g., in quality control or robotics. These conference proceedings of the “Forum Bildverarbeitung”, which took place on 26.-27.11.2020 in Karlsruhe as a joint event of the Karlsruhe Institute of Technology and the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation, contain the articles of the contributions.

    A Closed-Loop Bidirectional Brain-Machine Interface System For Freely Behaving Animals

    A brain-machine interface (BMI) creates an artificial pathway between the brain and the external world. Research on and applications of BMIs have received enormous attention in the scientific community, as well as from the public, over the past decade. However, most BMI research relies on experiments with tethered or sedated animals and rack-mount equipment, which significantly restricts the experimental methods and paradigms. Moreover, most research to date has focused on neural signal recording or decoding in an open-loop manner. Although the use of a closed-loop, wireless BMI is critical to the success of a wide range of neuroscience research, it is an approach yet to be widely used, with electronics design being one of the major bottlenecks. The key goal of this research is to address the design challenges of a closed-loop, bidirectional BMI by providing innovative solutions from the neuron-electronics interface up to the system level. Circuit design innovations are proposed in the neural recording front-end, the neural feature extraction module, and the neural stimulator. Practical design issues of the bidirectional neural interface, the closed-loop controller, and the overall system integration are carefully studied and discussed. To the best of our knowledge, this work presents the first reported portable system to provide all required hardware for a closed-loop sensorimotor neural interface, the first wireless sensory encoding experiment conducted in freely swimming animals, and the first bidirectional study of hippocampal field potentials in freely behaving animals from sedation to sleep. This thesis gives a comprehensive survey of bidirectional BMI designs, reviews the key design trade-offs in neural recorders and stimulators, and summarizes the neural features and mechanisms required for successful closed-loop operation. The circuit and system design details are presented together with bench testing and animal experimental results. The methods, circuit techniques, system topology, and experimental paradigms proposed in this work can be used in a wide range of relevant neurophysiology research and neuroprosthetic development, especially in experiments using freely behaving animals.