The Yin-Yang dataset
The Yin-Yang dataset was developed for research on biologically plausible error backpropagation and deep learning in spiking neural networks. It serves as an alternative to classic deep learning datasets, especially in early-stage prototyping scenarios for both network models and hardware platforms, for which it provides several advantages. First, it is smaller and therefore faster to learn, making it better suited for small-scale exploratory studies in both software simulations and hardware prototypes. Second, it exhibits a very clear gap between the accuracies achievable with shallow versus deep neural networks. Third, it is easily transferable between spatial and temporal input domains, making it interesting for different types of classification scenarios.
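A minimal generator in the spirit of this dataset can be sketched as follows. The geometry (a figure in the unit square with two lobes and two small dots, giving three classes) is an approximation inferred from the description above, not the reference implementation:

```python
import numpy as np

def yin_yang_class(x, y, r_big=0.5, r_small=0.1):
    """Classify a point into yin (0), yang (1), or dot (2); None outside the
    figure. Approximate geometry, not the published reference implementation."""
    cx, cy = x - 0.5, y - 0.5                # centre the figure at (0.5, 0.5)
    if np.hypot(cx, cy) > r_big:
        return None                          # outside the big circle
    d_up = np.hypot(cx, cy - r_big / 2)      # distance to upper lobe centre
    d_dn = np.hypot(cx, cy + r_big / 2)      # distance to lower lobe centre
    if d_up < r_small or d_dn < r_small:
        return 2                             # one of the two small dots
    if d_up < r_big / 2:
        return 0                             # upper half-disc belongs to yin
    if d_dn < r_big / 2:
        return 1                             # lower half-disc belongs to yang
    return 0 if cx < 0 else 1                # remaining halves split left/right

def sample_yin_yang(n, seed=0):
    """Rejection-sample n labelled points from the figure."""
    rng = np.random.default_rng(seed)
    pts, labels = [], []
    while len(pts) < n:
        x, y = rng.uniform(0.0, 1.0, size=2)
        c = yin_yang_class(x, y)
        if c is not None:
            pts.append((x, y))
            labels.append(c)
    return np.array(pts), np.array(labels)
```

The same coordinates can feed a spatial classifier directly or be encoded as spike times, which is what makes the dataset transferable between input domains.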
Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection
Recent advances in Voice Activity Detection (VAD) are driven by artificial neural networks, in particular Recurrent Neural Networks (RNNs); however, using a VAD system in battery-operated devices requires further power efficiency. This can be achieved by neuromorphic hardware, which enables Spiking Neural Networks (SNNs) to perform inference at very low energy consumption. Spiking networks are characterized by their ability to process information efficiently, in a sparse cascade of binary events in time called spikes. However, a big performance gap separates artificial from spiking networks, mostly due to a lack of powerful SNN training algorithms. To overcome this problem we exploit an SNN model that can be recast into an RNN-like model and trained with known deep learning techniques. We describe an SNN training procedure that achieves low spiking activity, and pruning algorithms that remove 85% of the network connections with no performance loss. The model achieves state-of-the-art performance at a fraction of the power consumption compared to other methods.
Comment: 5 pages, 2 figures, 2 tables
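The recasting of an SNN into an RNN-like model mentioned above can be illustrated with a discrete-time Leaky Integrate-and-Fire cell; this is a generic sketch under assumed decay and threshold parameters, not the paper's exact model:

```python
import numpy as np

def lif_rnn_step(v, x, w, alpha=0.9, v_th=1.0):
    """One discrete-time LIF update written as an RNN-style cell: the membrane
    potential v is the hidden state, the binary spike vector s is the output."""
    v = alpha * v + x @ w                 # leaky integration of weighted input
    s = (v >= v_th).astype(v.dtype)       # spike where the threshold is crossed
    v = v - s * v_th                      # soft reset by threshold subtraction
    return v, s
```

In training, the hard threshold is typically replaced by a surrogate gradient so the unrolled cell can be optimized like any RNN.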
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities of SNNs and DNNs with respect to adversarial noise. We then propose a novel black-box attack methodology, i.e., one requiring no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for a given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN with the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs with respect to adversarial examples. Our work opens new avenues of research into the robustness of SNNs, considering their similarities to the human brain's functionality.
Comment: Accepted for publication at the 2020 International Joint Conference on Neural Networks (IJCNN)
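The general shape of a greedy, query-only black-box attack can be sketched as follows. This is a simplified illustration of the attack family, not the paper's specific heuristic; the `predict` callable (returning class probabilities) is a hypothetical stand-in for the target model:

```python
import numpy as np

def greedy_blackbox_attack(predict, image, true_label, eps=0.1,
                           max_queries=500, seed=0):
    """Query-only greedy attack: repeatedly nudge single pixels in whichever
    direction most reduces the model's confidence in the true class. Only the
    output of `predict` is used, never model internals or gradients."""
    rng = np.random.default_rng(seed)
    adv = image.copy()
    flat = adv.ravel()                    # view sharing memory with adv
    for _ in range(max_queries):
        probs = predict(adv)
        if probs.argmax() != true_label:
            return adv                    # misclassified: attack succeeded
        i = rng.integers(flat.size)       # pick a candidate pixel
        orig, best_val, best_conf = flat[i], flat[i], probs[true_label]
        for delta in (-eps, eps):
            flat[i] = np.clip(orig + delta, 0.0, 1.0)
            conf = predict(adv)[true_label]
            if conf < best_conf:          # keep the more damaging direction
                best_val, best_conf = flat[i], conf
        flat[i] = best_val
    return adv                            # query budget exhausted
```

Keeping only strictly confidence-reducing changes makes the perturbation small and the search monotone, at the cost of many model queries.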
Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline
Spiking neural networks (SNNs) are bio-plausible computing models with high energy efficiency. The temporal dynamics of neurons and synapses enable them to detect temporal patterns and generate sequences. While Backpropagation Through Time (BPTT) is traditionally used to train SNNs, it is not suitable for the online learning of embedded applications due to its high computation and memory cost as well as extended latency. Previous works have proposed online learning algorithms, but they often utilize highly simplified spiking neuron models without synaptic dynamics and reset feedback, resulting in subpar performance. In this work, we present Spatiotemporal Online Learning for Synaptic Adaptation (SOLSA), specifically designed for online learning of SNNs composed of Leaky Integrate-and-Fire (LIF) neurons with exponentially decayed synapses and soft reset. The algorithm not only learns the synaptic weights but also adapts the temporal filters associated with the synapses. Compared to the BPTT algorithm, SOLSA has a much lower memory requirement and achieves a more balanced temporal workload distribution. Moreover, SOLSA incorporates enhancement techniques such as scheduled weight updates, early-stop training, and an adaptive synapse filter, which speed up convergence and enhance learning performance. Compared to other non-BPTT-based SNN learning algorithms, SOLSA demonstrates an average learning accuracy improvement of 14.2%. Furthermore, compared to BPTT, SOLSA achieves a 5% higher average learning accuracy with a 72% reduction in memory cost.
Comment: 9 pages, 8 figures
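The ingredients named in this abstract (LIF neurons, exponentially decayed synapses, soft reset, forward-only learning) can be combined in a generic eligibility-trace sketch. All constants are illustrative assumptions, and the update rule below is a standard local-trace scheme, not SOLSA's exact rule:

```python
import numpy as np

def online_lif_step(v, i_syn, trace, x, w, err, lr=1e-3,
                    alpha=0.9, beta=0.8, v_th=1.0):
    """One forward-only learning step for a LIF layer with exponentially
    decayed synapses and soft reset. A forward-running input trace stands
    in for backpropagation through time."""
    i_syn = beta * i_syn + x              # exponentially decayed synaptic current
    v = alpha * v + i_syn @ w             # leaky membrane integration
    s = (v >= v_th).astype(v.dtype)       # emit spikes at threshold
    v = v - s * v_th                      # soft reset by subtraction
    trace = alpha * trace + i_syn         # forward-running eligibility trace
    w = w - lr * np.outer(trace, err)     # local, online weight update
    return v, i_syn, trace, s, w
```

Because all state is carried forward, the memory cost is constant in sequence length, which is the key advantage over BPTT that the abstract quantifies.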
Low-Power Synaptic Device Using a Positive-Feedback Field-Effect Transistor
Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University Graduate School, August 2020. Advisor: Byung-Gook Park.
Abstract (translated from Korean): Neuromorphic systems are being studied in many fields, and some have reached commercialization, because of their potential to solve the complex recognition problems that are a weakness of von Neumann computing architectures while consuming energy efficiently. A neuromorphic system consists of synapse-mimicking devices and neuron circuits, where the synapse-mimicking devices are responsible for signal transmission and memory. The synapses occupy the largest part of the whole neuromorphic system, so most of the system's power is consumed in the synapse portion, and a low-power implementation is therefore essential. For this reason, devices specialized for low-power operation, such as the tunneling field-effect transistor (TFET), the negative-capacitance field-effect transistor (NCFET), the ferroelectric field-effect transistor (FeFET), and the feedback field-effect transistor (FBFET), are being studied. Among these, the feedback field-effect transistor, which can reuse the current mature complementary metal-oxide-semiconductor (CMOS) process as-is, is highly advantageous for the mass production of neuromorphic systems, which must be fabricated together with neuron circuits. This dissertation proposes a synaptic device, based on the feedback field-effect transistor, that stores synaptic weights in a charge-trap layer by Fowler-Nordheim tunneling, the programming scheme used in NAND flash memory structures. The low-power characteristics and the operating scheme of the device were validated with technology computer-aided design (TCAD) simulation; devices were fabricated with the CMOS process of the Inter-university Semiconductor Research Center (ISRC) at Seoul National University, and the proposed scheme was confirmed and verified through electrical measurements.
The neuromorphic system has been widely used and commercialized in many fields in recent years due to its potential for complex problem solving and low energy consumption. The basic elements of this neuromorphic system are the synapse and the neuron circuit; synapse research focuses on emerging electronic devices such as resistive-change memory (RRAM), phase-change memory (PCRAM), magnetoresistive random-access memory (MRAM), and FET-based devices.
The synapse is responsible for the memory function of the neuromorphic system, that is, the quantization of the current sum with a specific weight value, and the neuron is responsible for integrating the signals that have passed through the synapses and transmitting information to the next synapse. Since the synapse elements make up the largest portion of the whole system, they consume most of its power, so a low-power implementation is essential for the synaptic device. To reduce power consumption, it is necessary to lower the off-current leakage and operate at low voltage. To overcome the limitations of MOSFETs in terms of ION/IOFF ratio, subthreshold swing, and power consumption, various devices such as the tunneling field-effect transistor (TFET), negative-capacitance field-effect transistor (NCFET), ferroelectric field-effect transistor (FeFET), and feedback field-effect transistor (FBFET) have been studied.
Another important factor in synaptic devices is cost. The deep learning technology behind AlphaGo runs on an expensive system; as the past coexistence of supercomputers and personal computers suggests, the development of low-cost chips that individuals can use is, in the end, inevitable. A CMOS-compatible process is therefore required, since the neuron circuit must be fabricated at the same time, which helps to ensure mass producibility. FET-based devices are CMOS-process compatible, which makes them suitable for a mass-production environment.
A positive FBFET (feedback field-effect transistor) device offers very low subthreshold current, steep subthreshold swing (SS), and a high ION/IOFF ratio at low operating voltage. We propose a synaptic device based on a positive FBFET with a storage layer.
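The weighted-sum operation such a synaptic array implements, and the program/erase modulation of its stored weights, can be sketched abstractly. The conductance limits and update step below are illustrative assumptions, not measured device values:

```python
import numpy as np

def synaptic_vmm(v_in, g):
    """Weighted sum in a synaptic array: input voltages drive the rows, device
    conductances act as weights, and each column current is the dot product
    I_j = sum_i V_i * G_ij (Ohm's and Kirchhoff's laws)."""
    return v_in @ g

def program_erase(g, dg, erase=False, g_min=1e-9, g_max=1e-6):
    """Hypothetical weight update: a program pulse adds stored charge (raising
    conductance), an erase pulse removes it, clipped to assumed device limits."""
    step = -dg if erase else dg
    return np.clip(g + step, g_min, g_max)
```

Reading the column currents of a crossbar thus performs an analog vector-matrix multiplication in a single step, which is what makes synaptic arrays attractive for low-power inference.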
From the simulation study, the operation method for the weight modulation of the synaptic device is established, and electrical measurements confirm the accumulated-charge change under each program and erase condition. The synaptic transistor presented in this dissertation can thus be one of the candidates for low-power neuromorphic systems.
1 Introduction
1.1 Limitation of von Neumann Architecture computing
1.2 Biological Synapse
1.3 Spiking Neural Network (SNN)
1.4 Requirements of synaptic device
1.5 Advantage of Feedback Field-effect transistor (FBFET)
1.6 Outline of the Dissertation
2 Positive Feedback FET with storage layer
2.1 Normal operation Principle of FBFET
2.2 Operation Mechanism by Drain Input Pulse
2.3 Weight Modulation Mechanism
2.4 TCAD Simulation Result for Weighted Sum
2.5 TCAD Simulation Result for Program and Erase
2.6 Array structure and Inhibition scheme
3 Fabrication and Measurement
3.1 Fabrication process of FBFET synapse
3.2 Measurement result
3.3 Hysteresis Reduction
3.4 Temperature Compensation method
4 Modeling and High level simulation
4.1 Compact modeling for SPICE
4.2 SPICE simulation for VMM
5 Conclusion
5.1 Review of Overall Work
5.2 Future work
Abstract (In Korean)
- …