7 research outputs found

    Impact of RTN on Pattern Recognition Accuracy of RRAM-based Synaptic Neural Network

    Resistive switching memory devices can be categorized as either filamentary or non-filamentary depending on the switching mechanism. Both types have been investigated as novel synaptic devices in hardware neural networks, but comparative studies between them are lacking, especially regarding random telegraph noise (RTN), which can induce large resistance fluctuations. In this work, we analyze the amplitude and occurrence rate of RTN in Ta2O5 filamentary and TiO2/a-Si (a-VMCO) non-filamentary RRAM devices and evaluate its impact on the pattern recognition accuracy of neural networks. The non-filamentary RRAM shows a tighter RTN amplitude distribution and a much lower RTN occurrence rate than its filamentary counterpart, which leads to a negligible RTN impact on recognition accuracy, making it a promising candidate for synaptic applications.
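    To make the evaluation procedure concrete, here is a toy Python sketch of how the impact of RTN on a stored weight matrix can be estimated: model RTN as a two-level conductance fluctuation with a given occurrence rate and amplitude, perturb the weights, and re-evaluate classification accuracy. The occurrence rates, amplitudes, and the random "pre-trained" readout below are placeholder assumptions for illustration, not the Ta2O5 or a-VMCO values reported in the paper.

```python
# Illustrative sketch only: RTN modelled as a two-level conductance fluctuation
# applied to a weight matrix, followed by re-evaluation of accuracy.
import numpy as np

rng = np.random.default_rng(0)

def apply_rtn(weights, occurrence_rate, amplitude_sigma):
    """Toggle a random subset of synapses by a relative RTN amplitude."""
    affected = rng.random(weights.shape) < occurrence_rate            # devices showing RTN
    delta = np.abs(rng.normal(0.0, amplitude_sigma, weights.shape))   # |dG/G| fluctuation
    sign = rng.choice([-1.0, 1.0], weights.shape)                     # capture vs. emission state
    return np.where(affected, weights * (1.0 + sign * delta), weights)

def accuracy(weights, x, labels):
    return np.mean(np.argmax(x @ weights, axis=1) == labels)

# Toy "pre-trained" readout on random data, used only to show the procedure.
x = rng.normal(size=(1000, 64))
labels = rng.integers(0, 10, size=1000)
w = rng.normal(size=(64, 10))

for rate, sigma, tag in [(0.01, 0.02, "non-filamentary-like"),
                         (0.30, 0.10, "filamentary-like")]:
    accs = [accuracy(apply_rtn(w, rate, sigma), x, labels) for _ in range(20)]
    print(f"{tag}: mean accuracy {np.mean(accs):.3f}, spread {np.std(accs):.3f}")
```

    A device population with a low occurrence rate and a tight amplitude distribution yields a correspondingly small accuracy spread, which is the qualitative behaviour the paper reports for the non-filamentary devices.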

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as drop-in replacements for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and their stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical spike-timing-dependent plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation in memristor-based hardware.
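    For readers unfamiliar with the baseline rule mentioned above, the minimal pair-based STDP sketch below shows the standard exponential two-factor update; the amplitudes and time constants are arbitrary assumptions. The multifactor plasticity rules discussed in the chapter add further factors (e.g. neuromodulatory signals or device-aware stochasticity) on top of this kind of update.

```python
# Minimal pair-based STDP: exponential weight change as a function of the
# pre/post spike time difference. Constants are illustrative assumptions.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # trace time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)  # post before pre -> depression

print(stdp_dw(t_pre=10.0, t_post=15.0))   # causal pair, ~ +0.0078
print(stdp_dw(t_pre=15.0, t_post=10.0))   # anti-causal pair, ~ -0.0093
```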

    Mitigating Asymmetric Nonlinear Weight Update Effects in Hardware Neural Network Based on Analog Resistive Synapse

    No full text

    Hardware-Based Spiking Neural Network Implementation Using an AND-Type Flash Memory Array

    Doctoral dissertation, Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, February 2022 (advisor: Jong-Ho Lee). Neuromorphic engineering aims to implement a brain-inspired computing architecture as an alternative paradigm to the von Neumann processor. In this work, hardware-based neural networks that enable on-chip training are designed using a thin-film transistor-type AND flash memory array architecture. The synaptic device constituting the array is characterized by a doped p-type body, a gate insulator stack composed of SiO2/Si3N4/Al2O3, and a partially curved poly-Si channel. The p-body reduces the circuit burden on the high-voltage drivers required for both the source and drain lines when changing the synaptic weights. The high-κ material included in the gate insulator stack helps lower the operating voltage of the device. As the device scales down, its structural characteristics have the potential to increase the efficiency of memory operations and the immunity to the voltage drop effect that occurs in the bit lines of the array. For the AND array architecture using the fabricated synaptic devices, a pulse scheme for selective memory operation is proposed and verified experimentally. Based on the measured characteristics of the fabricated synaptic devices and arrays, two types of hardware-based spiking neural networks (SNNs) are designed according to the learning purpose. First, we propose a hardware-based SNN for unsupervised learning with the spike-timing-dependent plasticity (STDP) learning rule. The designed network does not rely on pulses generated by external circuitry; instead, the necessary pulses are generated in each spiking neuron circuit. In this architecture, the STDP rule is implemented by an effective pulse scheme for poly-Si AND arrays. With the proposed pulse scheme and SNN, a recognition accuracy of 91.63% is obtained in MNIST handwritten digit pattern learning using 200 output neurons. Second, we propose a hardware-based SNN for supervised learning with the direct feedback alignment (DFA) learning rule. Because the DFA algorithm does not require the forward and backward paths to share the same synaptic weights, the AND array architecture can be used to design an efficient on-chip training neural network. Pulse schemes suitable for the proposed AND array architecture are also devised to implement the DFA algorithm in neural networks. In system-level simulations, a recognition accuracy of up to 97.01% is obtained on the MNIST pattern learning task with the proposed pulse scheme and computing architecture. In addition, we propose and verify an integration and fabrication method for the proposed synaptic array and complementary metal-oxide-semiconductor (CMOS) circuits. Here, the CMOS circuits include either an integrate-and-fire circuit or a circuit that can change the width or amplitude of the spike signal. The proposed method reduces the number of masks and process steps because the synaptic array and the CMOS circuits share part of the fabrication process. It is significant because it presents a methodology for efficient implementation of hardware-based neural networks as well as verification of the excellent compatibility of the proposed synaptic device with CMOS.
    Contents: 1. Introduction (neuromorphic computing; hardware-based spiking neural network; purpose of research; dissertation outline). 2. TFT-type AND flash memory array (device structure and fabrication; device characteristics; measurement results as a synaptic device; measurement results as a synaptic array). 3. Hardware-based SNN for unsupervised learning (SNN using spike-timing-dependent plasticity (STDP); pulse scheme for the STDP learning rule; MNIST pattern learning and classification). 4. Hardware-based SNN for supervised learning (SNN using direct feedback alignment (DFA); pulse scheme for the DFA learning rule; MNIST pattern learning and classification). 5. Hardware implementation of neural networks (integration of a synaptic array and CMOS circuits; measurement results of a synaptic array; measurement results of CMOS circuits). 6. Conclusion. Appendix A: Neuron circuits to implement a hardware-based neural network using the STDP learning algorithm and the pulse scheme not including the inhibition pulses. Appendix B: Neuron circuits to implement a hardware-based neural network using the STDP learning algorithm and the pulse scheme including the inhibition pulses. Bibliography. Abstract in Korean. List of Publications.
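    As a hedged illustration of the property the thesis exploits, the Python sketch below shows a direct feedback alignment (DFA) update for a tiny two-layer network: the output error is projected to the hidden layer through a fixed random matrix B rather than through the transpose of the forward weights, so the forward and backward paths do not need to store the same synaptic values. Layer sizes, the learning rate, and the dummy data are arbitrary assumptions, not the thesis configuration.

```python
# Sketch of one DFA training step; B is fixed and random, never trained.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 784, 100, 10

W1 = rng.normal(0, 0.05, (n_in, n_hid))   # forward weights, layer 1
W2 = rng.normal(0, 0.05, (n_hid, n_out))  # forward weights, layer 2
B  = rng.normal(0, 0.05, (n_out, n_hid))  # fixed random feedback matrix
lr = 0.01

def dfa_step(x, y_target):
    global W1, W2
    h = np.tanh(x @ W1)                   # forward pass, hidden layer
    y = h @ W2                            # forward pass, output layer
    e = y - y_target                      # output error
    dh = (e @ B) * (1.0 - h ** 2)         # DFA: error routed through B, not W2.T
    W2 -= lr * np.outer(h, e)
    W1 -= lr * np.outer(x, dh)

# One dummy training step on random data.
dfa_step(rng.normal(size=n_in), np.eye(n_out)[3])
```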

    Chalcogenide and metal-oxide memristive devices for advanced neuromorphic computing

    Energy-intensive artificial intelligence (AI) is prevailing and changing the world, which requires energy-efficient computing technology. However, traditional AI driven by von Neumann computing systems suffers from the penalties of high energy consumption and time delay due to frequent data shuttling. To tackle the issue, brain-inspired neuromorphic computing that performs data processing in memory has been developed, reducing energy consumption and processing time. In particular, some advanced neuromorphic systems perceive environmental variations and internalize sensory signals for localized in-sensor computing. This methodology can further improve data processing efficiency and enable multifunctional AI products. Memristive devices are among the promising candidates for neuromorphic systems due to their non-volatility, small size, fast speed, and low energy consumption. In this thesis, memristive devices based on chalcogenide and metal-oxide materials are fabricated for neuromorphic computing systems. Firstly, a versatile memristive device (Ag/CuInSe2/Mo) is demonstrated based on filamentary switching. Non-volatile and volatile features coexist, allowing the device to play the roles of non-volatile memory, selector, artificial neuron, and artificial synapse. The conductive filaments' lifetime is controlled to obtain both volatile and non-volatile behaviours. Secondly, sensing functions (temperature and humidity) are explored based on Ag conductive filaments. An intelligent matter (Ag/Cu(In,Ga)Se2/Mo) providing reconfigurable temperature and humidity sensations is developed for sensory neuromorphic systems. The device reversibly switches between two states with distinguishable semiconductive and metallic features, demonstrating different responses to temperature and humidity variations. Integrated devices can be employed for intelligent electronic skin and in-sensor computing. Thirdly, memristor-based light sensing is investigated. An optoelectronic synapse (ITO/ZnO/MoO3/Mo) enabling multi-spectrum sensitivity for machine vision systems is developed. For the first time, this optoelectronic synapse is shown to be practical for front-end retinomorphic image sensing, convolution processing, and back-end neuromorphic computing. This thesis will benefit the development of advanced neuromorphic systems, pushing AI technology forward.
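    The coexistence of volatile and non-volatile switching mentioned above can be pictured with a simple relaxation model. The sketch below is only an illustrative assumption (exponential decay of filament conductance with a tunable lifetime after the programming stimulus is removed), not the physical model used in the thesis.

```python
# Toy model: a short filament lifetime looks volatile (selector / neuron-like),
# a very long lifetime looks non-volatile (memory-like).
import numpy as np

def conductance(t, g_on=1e-4, g_off=1e-7, lifetime=1.0):
    """Filament conductance relaxing from ON toward OFF with a given lifetime (s)."""
    return g_off + (g_on - g_off) * np.exp(-t / lifetime)

t = np.linspace(0.0, 10.0, 6)             # seconds after programming
print(conductance(t, lifetime=0.5))       # volatile: decays within seconds
print(conductance(t, lifetime=1e6))       # non-volatile: essentially unchanged
```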