Experience-driven formation of parts-based representations in a model of layered visual memory
Growing neuropsychological and neurophysiological evidence suggests that the
visual cortex uses parts-based representations to encode, store and retrieve
relevant objects. In such a scheme, objects are represented as a set of
spatially distributed local features, or parts, arranged in stereotypical
fashion. To encode the local appearance and to represent the relations between
the constituent parts, there has to be an appropriate memory structure formed
by previous experience with visual objects. Here, we propose a model of how a
hierarchical memory structure supporting efficient storage and rapid recall of
parts-based representations can be established by an experience-driven process
of self-organization. The process is based on the collaboration of slow
bidirectional synaptic plasticity and homeostatic unit activity regulation,
both running on top of fast activity dynamics with winner-take-all
character modulated by an oscillatory rhythm. These neural mechanisms lay
the basis for cooperation and competition between the distributed units and
their synaptic connections. Choosing human face recognition as a test task, we
show that, under the condition of open-ended, unsupervised incremental
learning, the system is able to form memory traces for individual faces in a
parts-based fashion. On a lower memory layer the synaptic structure is
developed to represent local facial features and their interrelations, while
the identities of different persons are captured explicitly on a higher layer.
An additional property of the resulting representations is the sparseness of
both the activity during the recall and the synaptic patterns comprising the
memory traces.
Comment: 34 pages, 12 Figures, 1 Table, published in Frontiers in
Computational Neuroscience (Special Issue on Complex Systems Science and
Brain Dynamics),
http://www.frontiersin.org/neuroscience/computationalneuroscience/paper/10.3389/neuro.10/015.2009
Dynamic Power Management for Neuromorphic Many-Core Systems
This work presents a dynamic power management architecture for neuromorphic
many-core systems such as SpiNNaker. A fast dynamic voltage and frequency
scaling (DVFS) technique is presented which allows the processing elements (PE)
to change their supply voltage and clock frequency individually and
autonomously within less than 100 ns. This is employed by the neuromorphic
simulation software flow, which defines the performance level (PL) of the PE
based on the actual workload within each simulation cycle. A test chip in 28 nm
SLP CMOS technology has been implemented. It includes 4 PEs which can be scaled
from 0.7 V to 1.0 V with frequencies from 125 MHz to 500 MHz at three distinct
PLs. By measurement of three neuromorphic benchmarks it is shown that the total
PE power consumption can be reduced by 75%, with 80% baseline power reduction
and a 50% reduction of energy per neuron and synapse computation, all while
maintaining temporary peak system performance to achieve biological real-time
operation of the system. A numerical model of this power management scheme is
derived which allows DVFS architecture exploration for neuromorphics. The
proposed technique is to be used for the second-generation SpiNNaker
neuromorphic many-core system.
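The per-cycle performance-level (PL) selection described above can be sketched as a simple lookup: three discrete voltage/frequency pairs, with the software picking the lowest level that can handle the predicted workload of the coming simulation cycle. The PL table below uses the voltage and frequency ranges quoted in the abstract; the workload capacity constant is a hypothetical placeholder, not a measured SpiNNaker figure:

```python
# Sketch of workload-driven performance-level selection for DVFS-capable
# processing elements. The three PLs span the 0.7-1.0 V / 125-500 MHz
# ranges from the abstract; `capacity_per_mhz` is illustrative.

PLS = [  # (name, volts, MHz) — three distinct performance levels
    ("PL0", 0.70, 125),
    ("PL1", 0.85, 250),
    ("PL2", 1.00, 500),
]

def select_pl(events_this_cycle, capacity_per_mhz=10):
    """Pick the lowest PL whose frequency can process the cycle's events."""
    for name, volts, mhz in PLS:
        if events_this_cycle <= mhz * capacity_per_mhz:
            return name, volts, mhz
    return PLS[-1]  # saturate at peak performance for real-time operation

# Light cycles run at the low-power level; bursts scale up to the peak.
assert select_pl(1000)[0] == "PL0"
assert select_pl(2000)[0] == "PL1"
assert select_pl(4000)[0] == "PL2"
```

Because the lowest sufficient level is chosen each cycle, baseline power drops during quiet cycles while peak performance remains available for real-time bursts — the trade-off the measurements in the abstract quantify.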
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured and changing
environments, constantly facing novel challenges. Therefore, continuous online
adaptation for lifelong learning and sample-efficient mechanisms for adapting
to changes in the environment, the constraints, the tasks, or the robot
itself are crucial. In this work, we propose a novel framework for
probabilistic online motion planning with online adaptation based on a
bio-inspired stochastic recurrent neural network. By using learning signals
that mimic the intrinsic motivation signal of cognitive dissonance, together
with a mental replay strategy to intensify experiences, the stochastic
recurrent network can learn from a few physical interactions and adapt to
novel environments within seconds. We evaluate our online planning and adaptation
framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is
shown by learning unknown workspace constraints sample-efficiently from few
physical interactions while following given waypoints.
Comment: accepted in Neural Networks
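The two mechanisms named in the abstract can be caricatured in a few lines: an intrinsic-motivation signal modeled as prediction error (the "cognitive dissonance" between expected and observed outcome), and mental replay, which re-presents stored experiences to extract more learning from few physical interactions. The scalar predictor and all constants below are illustrative stand-ins, not the paper's stochastic recurrent network:

```python
import random

# Sketch of intrinsic motivation + mental replay for sample-efficient
# online adaptation. The "network" is reduced to a scalar expectation;
# learning rate and replay count are hypothetical.

class OnlineAdapter:
    def __init__(self, lr=0.2, replays=10):
        self.prediction = 0.0     # stand-in for the network's expectation
        self.memory = []          # stored outcomes available for replay
        self.lr = lr
        self.replays = replays

    def observe(self, outcome):
        """One physical interaction: learn, store, then mentally replay."""
        dissonance = outcome - self.prediction     # intrinsic learning signal
        self.prediction += self.lr * dissonance
        self.memory.append(outcome)
        for _ in range(self.replays):              # intensify the experience
            sample = random.choice(self.memory)
            self.prediction += self.lr * (sample - self.prediction)

random.seed(0)
agent = OnlineAdapter()
for _ in range(3):          # only three "physical" interactions
    agent.observe(5.0)
assert abs(agent.prediction - 5.0) < 0.01
```

The point of the sketch is the sample count: replaying each stored experience lets the adapter converge after three real interactions, where plain online updates at the same learning rate would still be far from the target.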
Low-Power Synaptic Device Using a Positive-Feedback Field-Effect Transistor
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: Byung-Gook Park.
The neuromorphic system, which promises to solve the complex recognition problems that are a weakness of large von Neumann computing architectures while consuming energy efficiently, is being studied in many fields, and some systems have reached commercialization. A neuromorphic system consists of synapse-mimicking devices and neuron circuits, where the synapse-mimicking devices are responsible for signal transmission and memory. Synapses occupy the largest part of the whole neuromorphic system, so most of the system's power consumption occurs in the synapse portion, and a low-power implementation is therefore essential. For this reason, devices specialized for low-power operation, such as the tunneling field-effect transistor (TFET), the negative-capacitance field-effect transistor (NCFET), the ferroelectric field-effect transistor (FeFET), and the feedback field-effect transistor (FBFET), are being studied. Among these, the feedback field-effect transistor, which can reuse today's mature complementary metal-oxide-semiconductor (CMOS) process as-is, is very advantageous for mass production of neuromorphic systems, which require the neuron circuits to be fabricated at the same time. This dissertation proposes a synaptic device based on the feedback field-effect transistor that stores synaptic weights in a charge-trap layer by Fowler-Nordheim tunneling, the programming scheme used in NAND flash memory structures. The low-power characteristics and operating method of the device were validated with technology computer-aided design (TCAD) simulations, and devices were fabricated with the CMOS process of the Inter-university Semiconductor Research Center (ISRC) at Seoul National University and verified by electrical characterization.
The neuromorphic system has been widely used and commercialized in many fields in recent years due to its potential for complex problem solving and low energy consumption. The basic elements of this neuromorphic system are the synapse and the
neuron circuit; synapse research focuses on emerging electronic devices such as resistive-change memory (RRAM), phase-change memory (PCRAM), magnetoresistive random-access memory (MRAM), and FET-based devices.
The synapse is responsible for the memory function of the neuromorphic system, that is, the quantization of the current sum with specific weight values, and the neuron is responsible for integrating the signals that have passed through the synapses and transmitting information to the next synapse. Since the synapse elements make up the largest portion of the whole system, they consume most of its power, so a low-power implementation is essential for the synapse device. In order to reduce power consumption, it is necessary to lower the off-current leakage and to operate at low voltage. To overcome the limitations of MOSFETs in terms of ION/IOFF ratio, subthreshold swing, and power consumption, various devices such as the tunneling field-effect transistor (TFET), negative-capacitance field-effect transistor (NCFET), ferroelectric field-effect transistor (FeFET), and feedback field-effect transistor (FBFET) have been studied.
Another important factor in synapse devices is cost. The deep learning technology behind AlphaGo is also an expensive system. As the past coexistence of supercomputers and personal computers shows, the development of low-cost chips that individuals can use is, in the end, inevitable. A CMOS-compatible process is therefore required, because the neuron circuit must be fabricated at the same time, which helps ensure mass producibility. FET-based devices are CMOS-process compatible and thus suitable for a mass-production environment.
A positive-feedback field-effect transistor (FBFET) offers very low subthreshold current, steep subthreshold swing (SS), and a high ION/IOFF ratio at low operating voltage. We propose a synaptic device based on a positive FBFET with a storage layer.
In the simulation study, the operating method for weight modulation of the synaptic device is established, and electrical measurements confirm the accumulated-charge change under each program and erase condition. The synaptic transistor presented in this dissertation is thus a candidate device for low-power neuromorphic systems.
Contents:
1 Introduction
1.1 Limitation of von Neumann Architecture Computing
1.2 Biological Synapse
1.3 Spiking Neural Network (SNN)
1.4 Requirements of Synaptic Devices
1.5 Advantages of the Feedback Field-Effect Transistor (FBFET)
1.6 Outline of the Dissertation
2 Positive Feedback FET with Storage Layer
2.1 Normal Operation Principle of the FBFET
2.2 Operation Mechanism by Drain Input Pulse
2.3 Weight Modulation Mechanism
2.4 TCAD Simulation Results for Weighted Sum
2.5 TCAD Simulation Results for Program and Erase
2.6 Array Structure and Inhibition Scheme
3 Fabrication and Measurement
3.1 Fabrication Process of the FBFET Synapse
3.2 Measurement Results
3.3 Hysteresis Reduction
3.4 Temperature Compensation Method
4 Modeling and High-Level Simulation
4.1 Compact Modeling for SPICE
4.2 SPICE Simulation for VMM
5 Conclusion
5.1 Review of Overall Work
5.2 Future Work
Abstract (In Korean)
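The weighted-sum operation that the synaptic array performs (the subject of the "Weighted Sum" and "SPICE Simulation for VMM" sections) is a vector-matrix multiply realized physically as current summation: each output line collects I_j = Σ_i G[i][j] · V[i], where the stored charge sets the effective conductance G. The sketch below shows only this arithmetic; the conductance and voltage values are illustrative, not measured FBFET characteristics:

```python
import numpy as np

# Crossbar weighted sum (vector-matrix multiply, VMM): Ohm's law gives
# each cell current G[i][j] * V[i], and Kirchhoff's current law sums a
# column into one output current. Values are illustrative placeholders.

def weighted_sum(V_in, G):
    """Column currents of a crossbar: I = V_in @ G."""
    return V_in @ G

# 3 input rows, 2 output columns; conductances in microsiemens.
G = np.array([[1.0, 0.5],
              [2.0, 1.5],
              [0.0, 3.0]])
V = np.array([0.1, 0.2, 0.1])        # read voltages in volts

I = weighted_sum(V, G)               # column currents in microamps
assert np.allclose(I, [0.5, 0.65])
```

This is why synaptic-device nonidealities (hysteresis, temperature drift) matter at the system level: any error in G directly perturbs every multiply-accumulate the array performs.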
Inference And Learning In Spiking Neural Networks For Neuromorphic Systems
Neuromorphic computing is a computing field that takes inspiration from the biological and physical characteristics of the neocortex to motivate a new paradigm of highly parallel and distributed computing that meets the demands of the ever-increasing scale and computational complexity of machine intelligence, especially in energy-limited systems such as edge devices, the Internet of Things (IoT), and cyber-physical systems (CPS). The spiking neural network (SNN) is often studied together with neuromorphic computing as the underlying computational model. Similar to the biological neural system, an SNN is an inherently dynamic and stateful network: its state and output depend not only on the current input but also on the input history. Another distinct property of SNNs is that information is represented, transmitted, and processed as discrete spike events, also referred to as action potentials. All processing happens in the neurons, so the computation itself is massively distributed and parallel. This enables low-power information transmission and processing.
However, it is inefficient to implement SNNs on a traditional von Neumann architecture due to the performance gap between memory and processor. This has led to the advent of energy-efficient large-scale neuromorphic hardware such as IBM's TrueNorth and Intel's Loihi, which enables low-power implementation of large-scale neural networks for real-time applications. Although spiking networks have theoretically been shown to have Turing-equivalent computing power, it remains a challenge to train deep SNNs: the threshold functions that generate spikes are discontinuous, so they do not have derivatives and cannot directly use gradient-based optimization algorithms for training. The biologically plausible learning mechanism of spike-timing-dependent plasticity (STDP) and its variants is local in synapse and time, but is unstable during training and makes multi-layer SNNs difficult to train.
To better exploit the energy-saving features such as spike domain representation and stochastic computing provided by SNNs in neuromorphic hardware, and to address the hardware limitations such as limited data precision and neuron fan-in/fan-out constraints, it is necessary to re-design a neural network including its structure and computing. Our work focuses on low-level (activations, weights) and high-level (alternative learning algorithms) redesign techniques to enable inference and learning with SNNs in neuromorphic hardware.
First, we focused on transforming a trained artificial neural network (ANN) into a form suitable for neuromorphic hardware implementation. Here, we tackle transforming Long Short-Term Memory (LSTM), a version of the recurrent neural network (RNN) that includes recurrent connectivity to enable learning of long temporal patterns. This is a specifically difficult challenge due to the inherent natures of RNNs and SNNs: the recurrent connectivity in RNNs induces temporal dynamics that require synchronicity, especially with the added complexity of LSTMs, whereas SNNs are asynchronous in nature. In addition, the constraints of the neuromorphic hardware posed a massive challenge for this realization. Thus, in this work, we invented a store-and-release circuit using integrate-and-fire neurons which allows the synchronization, and then developed modules using that circuit to replicate various parts of the LSTM. These modules enabled implementation of LSTMs with spiking neurons on IBM's TrueNorth Neurosynaptic processor. This is the first work to realize such LSTM networks using spiking neurons and implement them on neuromorphic hardware. It opens avenues for the use of neuromorphic hardware in applications involving temporal patterns.
Moving on from mapping a pretrained ANN, we work on training networks on the neuromorphic hardware. Here, we first looked at the biologically plausible learning algorithm called STDP, a Hebbian rule for learning without supervision. Simplified computational interpretations of STDP are either unstable or complex, such that they are costly to implement in hardware. Thus, in this work, we proposed a stable version of STDP and applied intentional approximations for low-cost hardware implementation, called the Quantized 2-Power Shift (Q2PS) rule. With this version, we performed both unsupervised learning for feature extraction and supervised learning for classification in a multilayer SNN, achieving comparable or better accuracy on the MNIST dataset compared to manually labelled two-layered networks.
Next, we approached training multilayer SNNs on neuromorphic hardware with backpropagation, the gradient-based optimization algorithm that forms the backbone of deep neural networks (DNNs). Although STDP is biologically plausible, it is not as robust for learning deep networks as backpropagation is for DNNs. However, backpropagation is not biologically plausible and is not suitable to be applied directly to SNNs, nor can it be implemented on neuromorphic hardware. Thus, in the first part of this work, we devise a set of approximations to transform backpropagation to the spike domain such that it is suitable for SNNs. After applying these approximations, we adapted the connectivity and weight-update rule in backpropagation to enable learning based solely on locally available information, so that it resembles a rate-based STDP algorithm. We call this Error-Modulated STDP (EMSTDP). In the next part of this work, we implemented EMSTDP on Intel's Loihi neuromorphic chip to realize online in-hardware supervised learning of deep SNNs. This is the first realization of a fully spike-based approximation of the backpropagation algorithm implemented on a neuromorphic processor, and a first step towards building an autonomous machine that learns continuously from its environment and experiences.
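The two SNN properties the abstract leans on — statefulness and the discontinuous spike threshold — can be shown with a minimal leaky integrate-and-fire (LIF) neuron. The leak, threshold, and reset values below are generic textbook choices, not parameters from TrueNorth or Loihi:

```python
# Sketch of a leaky integrate-and-fire neuron: membrane state carries
# history across time steps, and the spike function is a discontinuous
# threshold (the reason gradient-based training needs spike-domain
# approximations). Parameters are illustrative.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Run an LIF neuron over an input-current sequence; return its spike train."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current               # integrate with leak (state persists)
        spike = 1 if v >= threshold else 0   # discontinuous threshold: no derivative
        if spike:
            v = 0.0                          # reset after the action potential
        spikes.append(spike)
    return spikes

# The same instantaneous input yields spike or silence depending on the
# accumulated history — the statefulness noted in the text.
assert lif_run([0.4, 0.4, 0.4, 0.4]) == [0, 0, 1, 0]
```

The `1 if v >= threshold else 0` step is exactly the non-differentiable point: spike-domain training schemes such as the EMSTDP approximation described above replace its gradient with locally available rate information rather than differentiating it directly.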