192 research outputs found

    Beyond Weights: Deep learning in Spiking Neural Networks with pure synaptic-delay training


    The Yin-Yang dataset

    The Yin-Yang dataset was developed for research on biologically plausible error backpropagation and deep learning in spiking neural networks. It serves as an alternative to classic deep learning datasets, especially in early-stage prototyping scenarios for both network models and hardware platforms, for which it provides several advantages. First, it is smaller and therefore faster to learn, making it better suited for small-scale exploratory studies in both software simulations and hardware prototypes. Second, it exhibits a very clear gap between the accuracies achievable with shallow versus deep neural networks. Third, it is easily transferable between spatial and temporal input domains, making it interesting for different types of classification scenarios.
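    Since the dataset is defined by simple plane geometry, its class rule is easy to reproduce. Below is a minimal Python sketch of the sampling and class-assignment logic, following the construction of the dataset's public reference implementation as I understand it (a yin-yang figure of radius r_big with two dots of radius r_small); the function names and the four-dimensional (x, y, 1-x, 1-y) encoding are assumptions for illustration, not a definitive API.

```python
import numpy as np

def yin_yang_class(x, y, r_big=0.5, r_small=0.1):
    """Assign a point in [0, 2*r_big]^2 to one of three classes:
    0 = yang, 1 = yin, 2 = dot."""
    d_right = np.hypot(x - 1.5 * r_big, y - r_big)  # distance to right dot centre
    d_left = np.hypot(x - 0.5 * r_big, y - r_big)   # distance to left dot centre
    if d_right < r_small or d_left < r_small:       # inside one of the two dots
        return 2
    is_yin = (d_right <= r_small
              or r_small < d_left <= 0.5 * r_big
              or (y > r_big and d_right > 0.5 * r_big))
    return int(is_yin)

def sample_yin_yang(n, r_big=0.5, seed=0):
    """Rejection-sample n labelled points uniformly inside the big circle."""
    rng = np.random.default_rng(seed)
    xs, ys, labels = [], [], []
    while len(xs) < n:
        x, y = rng.uniform(0.0, 2 * r_big, size=2)
        if np.hypot(x - r_big, y - r_big) > r_big:  # outside the figure: reject
            continue
        xs.append(x); ys.append(y); labels.append(yin_yang_class(x, y))
    xs, ys = np.array(xs), np.array(ys)
    # 4D encoding (x, y, 1-x, 1-y); valid for r_big = 0.5, where coords lie in [0, 1]
    return np.stack([xs, ys, 1 - xs, 1 - ys], axis=1), np.array(labels)
```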

    Spiking neural networks trained with backpropagation for low power neuromorphic implementation of voice activity detection

    Recent advances in Voice Activity Detection (VAD) are driven by artificial and Recurrent Neural Networks (RNNs); however, using a VAD system in battery-operated devices requires further power efficiency. This can be achieved by neuromorphic hardware, which enables Spiking Neural Networks (SNNs) to perform inference at very low energy consumption. Spiking networks are characterized by their ability to process information efficiently, in a sparse cascade of binary events in time called spikes. However, a big performance gap separates artificial from spiking networks, mostly due to a lack of powerful SNN training algorithms. To overcome this problem we exploit an SNN model that can be recast into an RNN-like model and trained with known deep learning techniques. We describe an SNN training procedure that achieves low spiking activity, together with pruning algorithms that remove 85% of the network connections with no performance loss. The model achieves state-of-the-art performance at a fraction of the power consumption of other methods.
    Comment: 5 pages, 2 figures, 2 tables.
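    The key idea the abstract alludes to, recasting an SNN into an RNN-like model so that standard deep learning tooling applies, is commonly implemented by unrolling leaky integrate-and-fire (LIF) dynamics over time and replacing the spike's derivative with a surrogate. The PyTorch sketch below shows this general recipe under my own assumptions (fast-sigmoid surrogate, soft reset, illustrative constants); it is not necessarily the exact model of the paper.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth fast-sigmoid
    derivative in the backward pass (the surrogate gradient)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # surrogate slope

def lif_unrolled(x, w, beta=0.9, v_th=1.0):
    """Run a LIF layer over time like an RNN cell.
    x: (time, batch, n_in) input currents; w: (n_in, n_out) weights.
    Returns the output spike trains of shape (time, batch, n_out)."""
    v = torch.zeros(x.shape[1], w.shape[1])
    spikes = []
    for t in range(x.shape[0]):
        v = beta * v + x[t] @ w            # leaky integration of input current
        s = SurrogateSpike.apply(v - v_th)  # spike where membrane crosses threshold
        v = v - s * v_th                   # soft reset: subtract the threshold
        spikes.append(s)
    return torch.stack(spikes)
```

Because every operation above is differentiable (thanks to the surrogate), the unrolled network can be trained with ordinary backpropagation through time and standard optimizers.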

    Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks

    Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities of SNNs and DNNs with respect to adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., one requiring no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for the given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN with the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs with respect to adversarial examples. Our work opens new avenues of research into the robustness of SNNs, considering their similarities to the human brain's functionality.
    Comment: Accepted for publication at the 2020 International Joint Conference on Neural Networks (IJCNN).
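    As a rough illustration of the black-box setting, where the attacker only queries the model's output probabilities, a greedy perturbation loop can look like the sketch below. This is a generic heuristic with hypothetical names (`predict`, `budget`), not the authors' algorithm, which additionally constrains the imperceptibility and robustness of the attack images.

```python
import numpy as np

def greedy_blackbox_attack(x, y_true, predict, step=0.05, budget=200, seed=0):
    """Generic greedy black-box perturbation. `predict(x)` must return a
    vector of class probabilities; no gradients or internal structure of
    the attacked model are used."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    p_true = predict(x_adv)[y_true]
    for _ in range(budget):
        i = rng.integers(x_adv.size)            # pick a random input element
        for delta in (step, -step):
            trial = x_adv.copy()
            trial.flat[i] = np.clip(trial.flat[i] + delta, 0.0, 1.0)
            p = predict(trial)[y_true]
            if p < p_true:                      # keep the change only if it
                x_adv, p_true = trial, p        # lowers true-class confidence
                break
        if predict(x_adv).argmax() != y_true:
            return x_adv                        # misclassification achieved
    return x_adv
```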

    Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline

    Spiking neural networks (SNNs) are bio-plausible computing models with high energy efficiency. The temporal dynamics of neurons and synapses enable them to detect temporal patterns and generate sequences. While Backpropagation Through Time (BPTT) is traditionally used to train SNNs, it is not suitable for the online learning of embedded applications due to its high computation and memory cost as well as extended latency. Previous works have proposed online learning algorithms, but they often utilize highly simplified spiking neuron models without synaptic dynamics and reset feedback, resulting in subpar performance. In this work, we present Spatiotemporal Online Learning for Synaptic Adaptation (SOLSA), specifically designed for the online learning of SNNs composed of Leaky Integrate-and-Fire (LIF) neurons with exponentially decayed synapses and soft reset. The algorithm not only learns the synaptic weights but also adapts the temporal filters associated with the synapses. Compared to the BPTT algorithm, SOLSA has a much lower memory requirement and achieves a more balanced temporal workload distribution. Moreover, SOLSA incorporates enhancement techniques such as scheduled weight updates, early-stop training, and adaptive synapse filters, which speed up convergence and enhance learning performance. Compared to other non-BPTT-based SNN learning algorithms, SOLSA demonstrates an average learning accuracy improvement of 14.2%. Furthermore, compared to BPTT, SOLSA achieves a 5% higher average learning accuracy with a 72% reduction in memory cost.
    Comment: 9 pages, 8 figures.
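    For concreteness, the neuron model named in the abstract, LIF with exponentially decayed synapses and soft reset, can be written out directly. The NumPy sketch below shows only the forward dynamics, with per-synapse decay constants playing the role of the learnable temporal filters; all names and constants are illustrative assumptions, and SOLSA's actual online update rule is in the paper.

```python
import numpy as np

def lif_soft_reset_forward(s_in, w, alpha, beta=0.9, v_th=1.0):
    """Forward dynamics of LIF neurons with exponentially decayed synapses.
    s_in: (T, n_in) binary input spikes; w: (n_in, n_out) weights;
    alpha: (n_in,) per-synapse decay constants (the adaptable temporal filters).
    Returns output spikes of shape (T, n_out)."""
    T, n_in = s_in.shape
    n_out = w.shape[1]
    trace = np.zeros(n_in)        # exponentially filtered presynaptic activity
    v = np.zeros(n_out)           # membrane potentials
    s_out = np.zeros((T, n_out))
    for t in range(T):
        trace = alpha * trace + s_in[t]  # synaptic filter: one decay per input
        v = beta * v + trace @ w         # leaky membrane integration
        s_out[t] = (v >= v_th).astype(float)
        v -= s_out[t] * v_th             # soft reset: subtract the threshold
    return s_out
```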

    A Low-Power Synaptic Device Using a Positive-Feedback Field-Effect Transistor

    Thesis (Ph.D.) -- Graduate School of Seoul National University: College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: Byung-Gook Park.

    Neuromorphic systems have been studied in many fields for years, and some have reached commercialization, because they promise to solve the complex recognition problems that are a weakness of von Neumann computing systems, and to do so with high energy efficiency. A neuromorphic system consists of synapse-mimicking devices and neuron circuits, where the synaptic devices are responsible for signal transmission and memory. Synapses occupy the largest portion of the whole neuromorphic system; most of the system's power consumption therefore occurs in the synapse part, so a low-power implementation is essential. For this reason, devices specialized for low power, such as the tunneling field-effect transistor (TFET), the negative-capacitance field-effect transistor (NCFET), the ferroelectric field-effect transistor (FeFET), and the feedback field-effect transistor (FBFET), are being studied. Among these, the feedback field-effect transistor, which can be built with the current complementary metal-oxide-semiconductor (CMOS) process as-is, is highly advantageous for mass production in neuromorphic systems, where the neuron circuits must be fabricated at the same time. This dissertation proposes a synaptic device based on the feedback field-effect transistor that stores the synaptic weight in a charge-trap layer via Fowler-Nordheim tunneling, the scheme used in NAND flash memory. The low-power characteristics and operating method of the device were validated with technology computer-aided design (TCAD) simulations, and the proposed method was confirmed and verified through electrical measurements of devices fabricated with the CMOS process of the Inter-university Semiconductor Research Center (ISRC) at Seoul National University.

    The neuromorphic system has been widely used and commercialized in many fields in recent years due to its potential for complex problem solving and low energy consumption. The basic elements of this neuromorphic system are the synapse and the neuron circuit, where synapse research is focused on emerging electronic devices such as resistive random-access memory (RRAM), phase-change memory (PCRAM), magnetoresistive random-access memory (MRAM), and FET-based devices. The synapse is responsible for the memory function of the neuromorphic system, that is, the current-sum quantization with a specific weight value, and the neuron is responsible for integrating the signals that have passed through the synapses and transmitting the information to the next synapse. Since the synapse element makes up the largest portion of the whole system, it consumes most of the power of the entire system, so a low-power implementation is essential for the synaptic device. To reduce power consumption, it is necessary to lower the off-current leakage and to operate at low voltage. To overcome the limitations of MOSFETs in terms of ION/IOFF ratio, subthreshold swing, and power consumption, various devices such as the tunneling field-effect transistor (TFET), negative-capacitance field-effect transistor (NCFET), ferroelectric field-effect transistor (FeFET), and feedback field-effect transistor (FBFET) have been studied. Another important factor in synapse devices is cost. The deep learning technology behind AlphaGo is also an expensive system. As the past coexistence of supercomputers and personal computers suggests, the development of low-cost chips that individuals can use is, in the end, inevitable. A CMOS-compatible process is therefore required, since the neuron circuit must be fabricated at the same time, which helps to ensure mass producibility. FET-based devices are CMOS-process compatible, which makes them suitable for a mass-production environment. A positive-feedback FET (FBFET) device offers a very low subthreshold current, a steep subthreshold swing (SS), and a high ION/IOFF ratio at a low operating voltage. We propose a synaptic device based on a positive FBFET with a storage layer. In a simulation study, the operation method for the weight modulation of the synaptic device is established, and electrical measurements confirm the change in accumulated charge under program and erase conditions, respectively. The synaptic transistor presented in this dissertation can thus be one of the candidate devices for low-power neuromorphic systems.

    Contents:
    1 Introduction
    1.1 Limitation of von Neumann Architecture computing
    1.2 Biological Synapse
    1.3 Spiking Neural Network (SNN)
    1.4 Requirements of synaptic device
    1.5 Advantage of Feedback Field-effect transistor (FBFET)
    1.6 Outline of the Dissertation
    2 Positive Feedback FET with storage layer
    2.1 Normal operation Principle of FBFET
    2.2 Operation Mechanism by Drain Input Pulse
    2.3 Weight Modulation Mechanism
    2.4 TCAD Simulation Result for Weighted Sum
    2.5 TCAD Simulation Result for Program and Erase
    2.6 Array structure and Inhibition scheme
    3 Fabrication and Measurement
    3.1 Fabrication process of FBFET synapse
    3.2 Measurement result
    3.3 Hysteresis Reduction
    3.4 Temperature Compensation method
    4 Modeling and High level simulation
    4.1 Compact modeling for SPICE
    4.2 SPICE simulation for VMM
    5 Conclusion
    5.1 Review of Overall Work
    5.2 Future work
    Abstract (In Korean)
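    The system-level job such a synaptic device performs, a weighted current sum used for vector-matrix multiplication (VMM, cf. Section 4.2 of the outline above), can be illustrated abstractly. The Python sketch below models an array of synaptic devices as quantized conductances and sums column currents per Kirchhoff's current law; the array size, conductance range, and number of levels are illustrative assumptions, not measured values from the dissertation.

```python
import numpy as np

def crossbar_vmm(v_in, g, g_levels=16, g_max=1e-6):
    """Model a synaptic array: v_in (n_rows,) input voltages in volts,
    g (n_rows, n_cols) target conductances in siemens. Each device stores
    one of `g_levels` quantized conductance states up to g_max, so each
    column current is I_j = sum_i V_i * G_ij (Kirchhoff's current law)."""
    step = g_max / (g_levels - 1)
    g_q = np.round(np.clip(g, 0.0, g_max) / step) * step  # weight quantization
    return v_in @ g_q  # output currents per column, in amperes

# Illustrative use: 4 inputs, 3 outputs
rng = np.random.default_rng(0)
v = rng.uniform(0.0, 0.5, size=4)          # low read voltages for low power
g = rng.uniform(0.0, 1e-6, size=(4, 3))
print(crossbar_vmm(v, g))
```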
    • โ€ฆ
    corecore