6 research outputs found

    Embracing the Unreliability of Memory Devices for Neuromorphic Computing

    The emergence of resistive non-volatile memories opens the way to highly energy-efficient computation near or in memory. However, this type of computation is not compatible with conventional error-correcting codes (ECC) and has to deal with device unreliability. Inspired by the architecture of animal brains, we present a manufactured differential hybrid CMOS/RRAM memory architecture suitable for neural network implementation that functions without formal ECC. We also show that using low-energy but error-prone programming conditions only slightly reduces network accuracy.
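    To make the differential idea concrete, the following is a minimal Monte-Carlo sketch, not the authors' implementation, comparing the bit error rate of single-device storage against a complementary two-device (2T2R-style) readout. The conductance levels G_HRS and G_LRS, the log-normal spread SIGMA, and the fixed read threshold are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: why comparing two complementary devices tolerates device
# variability better than reading one device against a fixed threshold.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                              # number of stored bits
G_HRS, G_LRS, SIGMA = 1.0, 10.0, 0.8     # arbitrary conductance levels (a.u.) and spread

def program(target, n):
    """Programmed conductance with log-normal device-to-device variability."""
    return target * rng.lognormal(mean=0.0, sigma=SIGMA, size=n)

bits = rng.integers(0, 2, N)

# Single-device storage: compare one conductance against a fixed threshold.
g = np.where(bits == 1, program(G_LRS, N), program(G_HRS, N))
single_errors = ((g > np.sqrt(G_HRS * G_LRS)) != bits).mean()

# Differential storage: program the pair (LRS, HRS) or (HRS, LRS) and sense
# which of the two devices conducts more, as a 2T2R sense amplifier would.
g_pos = np.where(bits == 1, program(G_LRS, N), program(G_HRS, N))
g_neg = np.where(bits == 1, program(G_HRS, N), program(G_LRS, N))
diff_errors = ((g_pos > g_neg) != bits).mean()

print(f"single-device bit error rate : {single_errors:.4%}")
print(f"differential bit error rate  : {diff_errors:.4%}")
```

    Under this toy model the differential comparison yields a noticeably lower bit error rate than a single cell read against a fixed reference, which is the intuition behind operating without formal ECC.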

    SiO2 Fin-Based AND-Type Flash Synapse Array for Hardware-Based Neural Networks

    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2022. Advisor: 최우영.

    Neuromorphic computing systems have emerged as a novel artificial-intelligence paradigm that overcomes the von Neumann bottleneck by mimicking the biological nervous system. Synaptic devices for hardware-based neural networks (HNNs) in neuromorphic computing systems require parallel computability, high scalability, low-power operation, and selective write operation. In this work, a SiO2 fin-based AND flash memory synaptic device for an HNN is proposed. The proposed device, which has a round-shaped channel structure with a 6 nm-wide thin oxide fin, improves program performance compared with a planar-channel flash synaptic device by locally enhancing the electric field. The AND flash cell shows a high on/off current ratio above 10^5, a low sub-pA off-current, and a wide dynamic range of synaptic weights above 10^3 with a low program voltage below 9 V. Selective write operation is performed using program- and erase-inhibition pulse schemes in the fabricated SiO2 fin-based AND array, and the weighted-sum operation is experimentally verified.

    In addition, a 3D AND flash synaptic array with a round-shaped poly-Si channel is designed and fabricated to improve scalability, and key fabrication steps are proposed to address misalignment issues. The proposed 3D AND array also performs selective write operation using the program- and erase-inhibition pulse schemes. A novel synaptic architecture using two AND flash memory cells per synapse is proposed for off-chip learning; this synapse structure performs parallel XNOR operation and bit-counting for binary neural networks (BNNs), as in the toy sketch following the contents listing below. The proposed BNN based on the AND flash array structure achieves a classification accuracy of 89.9% on the CIFAR-10 dataset, comparable to that of an ideal software-based BNN. Furthermore, a differential synaptic architecture using the AND flash array is proposed to improve robustness against on-current retention loss.

    Contents: 1. Introduction (1.1 Neuromorphic computing; 1.2 Synaptic devices; 1.3 Purpose of research; 1.4 Dissertation outline) -- 2. SiO2 fin-based AND flash synaptic array (2.1 Device structure; 2.2 Fabrication process; 2.3 Cell characteristics; 2.4 Array characteristics) -- 3. 3D AND flash synaptic array with rounded channel (3.1 Device structure; 3.2 Fabrication process, including 3.2.1 Cell process steps and 3.2.2 WL contact pad process steps; 3.3 Cell characteristics; 3.4 Array characteristics) -- 4. Off-chip learning based on AND flash synaptic array (4.1 Binary neural networks based on AND flash synaptic array, including 4.1.1 AND flash synaptic architecture and 4.1.2 Differential synaptic architecture; 4.2 Quantized neural networks based on AND flash synaptic array) -- 5. Conclusion -- Bibliography -- Abstract in Korean.
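    As a rough Python illustration of the parallel XNOR and bit-counting that the paired AND flash cells are described as performing for a BNN layer, the sketch below uses the conventional ±1 encoding in which XNOR reduces to elementwise multiplication and the bit-count to a column sum. The function name and array sizes are hypothetical; the hardware accumulates cell currents on a line, which is emulated here with plain integer arithmetic.

```python
# Minimal sketch (not the thesis code) of an XNOR / bit-count binary layer.
import numpy as np

def xnor_popcount_layer(x_bin, w_bin):
    """x_bin: (n_in,) vector of +/-1 inputs; w_bin: (n_in, n_out) matrix of +/-1 weights.
    Returns binarized outputs sign(sum_i XNOR(x_i, w_ij))."""
    # With +/-1 encoding, XNOR(a, b) is simply a * b; summing over the inputs
    # is the bit-count that a column of paired cells accumulates as a current.
    pre_activation = (x_bin[:, None] * w_bin).sum(axis=0)
    return np.where(pre_activation >= 0, 1, -1)

# Toy usage with random binary inputs and weights.
rng = np.random.default_rng(1)
x = rng.choice([-1, 1], size=16)
W = rng.choice([-1, 1], size=(16, 4))
print(xnor_popcount_layer(x, W))
```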

    Synaptic Array Architecture Based on NAND Flash Cell Strings

    Doctoral dissertation -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2021. Advisor: 이종호.

    Neuromorphic computing using synaptic devices has been proposed to efficiently process vector-matrix multiplication (VMM), a key operation in deep neural networks (DNNs). Until now, resistive RAM (RRAM) has mainly been used as the synaptic device for neuromorphic computing. However, a number of limitations still prevent large-scale synaptic arrays from being implemented with RRAM, owing to device nonidealities such as variation and limited endurance, as well as the difficulty of monolithically integrating RRAM with CMOS peripheral circuits. Because of these problems, SRAM cells, a mature silicon memory, have been proposed as synaptic devices. However, SRAM occupies a large area (~150 F^2 per bitcell), and on-chip SRAM capacity (a few MB) is insufficient to accommodate a large number of parameters. In this dissertation, synaptic architectures based on NAND flash cell strings are proposed for both off-chip and on-chip learning.

    For off-chip learning, a novel synaptic architecture based on NAND cell strings is proposed as a high-density synapse capable of the XNOR operation for binary neural networks (BNNs). By setting the threshold voltages of the NAND flash cells and the input voltages in a complementary fashion, the XNOR operation is successfully demonstrated. The large on/off current ratio (~7×10^5) of NAND flash cells enables high-density and highly reliable BNNs without error-correcting codes. A novel synaptic architecture based on NAND flash memory is also proposed for highly robust, high-density quantized neural networks (QNNs) with 4-bit weights. Quantization-aware training can minimize the degradation of inference accuracy compared with post-training quantization, and the proposed operation scheme can implement QNNs with higher inference accuracy than BNNs.

    On-chip learning can significantly reduce the time and energy consumed during training, compensate for the weight variation of synaptic devices, and adapt to changing environments in real time. On-chip learning that exploits the high-density advantage of the NAND flash memory structure is therefore of great significance. However, the conventional on-chip learning method used for RRAM arrays cannot be applied when NAND flash cells are used as synaptic devices, because of the cell-string structure of NAND flash memory. In this work, a novel synaptic array architecture enabling forward propagation (FP) and backward propagation (BP) in NAND flash memory is proposed for on-chip learning. In the proposed architecture, positive and negative synaptic weights are stored in separate arrays so that the weights can be transposed correctly. In addition, the source lines (SL) are separated, unlike in conventional NAND flash memory, to enable both FP and BP within the NAND flash memory. By applying the inputs and the error inputs to the bit lines (BL) and the string-select lines (SSL) of the NAND cell array, respectively, accurate vector-matrix multiplication is performed in both FP and BP while eliminating the effect of pass cells (a toy sketch of this decomposition follows the contents listing below). The proposed on-chip learning system is much more robust to weight variation than the off-chip learning system. Finally, the superiority of the proposed on-chip learning architecture is verified by circuit simulation of a neural network.

    Contents: Chapter 1 Introduction (1.1 Background) -- Chapter 2 Binary neural networks based on NAND flash memory (2.1 Synaptic architecture for BNN; 2.2 Measurement results; 2.3 Binary neuron circuit; 2.4 Simulation results; 2.5 Differential scheme, including 2.5.1 Differential synaptic architecture and 2.5.2 Simulation results) -- Chapter 3 Quantized neural networks based on NAND flash memory (3.1 Synaptic architecture for QNN; 3.2 Measurement results; 3.3 Simulation results) -- Chapter 4 On-chip learning based on NAND flash memory (4.1 Synaptic architecture for on-chip learning; 4.2 Measurement results; 4.3 Neuron circuits; 4.4 Simulation results) -- Chapter 5 Conclusion -- Bibliography -- Abstract in Korean.
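    To make the forward/backward scheme concrete, here is a minimal sketch, under stated assumptions rather than the dissertation's circuit, of the two vector-matrix multiplications performed with separate non-negative positive-weight and negative-weight arrays. The names G_pos and G_neg and the array sizes are hypothetical; the point is that the same two arrays yield x @ W in the forward pass and W @ delta (the transposed read) in the backward pass without copying weights.

```python
# Minimal sketch of FP/BP vector-matrix multiplication with a signed weight
# matrix decomposed into two non-negative "conductance" arrays.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 8, 3

W = rng.normal(size=(n_in, n_out))   # signed target weights
G_pos = np.clip(W, 0.0, None)        # positive-weight array (non-negative)
G_neg = np.clip(-W, 0.0, None)       # negative-weight array (non-negative)

def forward(x):
    # Forward propagation: the input vector drives one set of lines and the two
    # column-current sums are subtracted, giving x @ (G_pos - G_neg) = x @ W.
    return x @ G_pos - x @ G_neg

def backward(delta):
    # Backward propagation: the error vector drives the other set of lines, so
    # the same arrays are read along the transposed direction: W @ delta.
    return G_pos @ delta - G_neg @ delta

x = rng.normal(size=n_in)
delta = rng.normal(size=n_out)
assert np.allclose(forward(x), x @ W)
assert np.allclose(backward(delta), W @ delta)
```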

    In-Memory and Error-Immune Differential RRAM Implementation of Binarized Deep Neural Networks

    RRAM-based in-memory computing is an exciting road toward implementing highly energy-efficient neural networks. This vision is, however, challenged by RRAM variability, as an efficient implementation of in-memory computing does not allow error correction. In this work, we fabricated and tested a differential HfO2-based memory structure and its associated sense circuitry, which are ideal for in-memory computing. For the first time, we show that our approach achieves the same reliability benefits as error correction, but without any CMOS overhead. We also show, for the first time, that it can naturally implement binarized deep neural networks, a very recent development in artificial intelligence, with extreme energy efficiency, and that the system is fully satisfactory for image recognition applications. Finally, we show how the extra reliability provided by the differential memory allows the devices to be programmed in low-voltage conditions, where they feature a high endurance of billions of cycles.
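    As a back-of-the-envelope illustration of why a binarized network tolerates occasional mis-programmed differential pairs, the sketch below, an assumption-level toy with random weights rather than the paper's trained network, flips a fraction of ±1 weights and measures how often the top-scoring class of an XNOR popcount layer changes.

```python
# Minimal sketch: sensitivity of an XNOR popcount decision to weight flips
# that emulate mis-programmed differential pairs (illustrative model only).
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out, trials = 1024, 10, 200

def popcount_outputs(x, w):
    # +/-1 encoding: XNOR is elementwise product, bit-count is the column sum.
    return (x[:, None] * w).sum(axis=0)

mismatch = {}
for error_rate in (0.0, 0.01, 0.05):
    changed = 0
    for _ in range(trials):
        x = rng.choice([-1, 1], size=n_in)
        w = rng.choice([-1, 1], size=(n_in, n_out))
        flips = rng.random(w.shape) < error_rate     # mis-programmed weights
        w_err = np.where(flips, -w, w)
        changed += int(np.argmax(popcount_outputs(x, w)) !=
                       np.argmax(popcount_outputs(x, w_err)))
    mismatch[error_rate] = changed / trials

# The fraction of changed top-class decisions grows with the error rate but
# remains modest at low error rates in this toy setting.
print(mismatch)
```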